Posted
22 hours ago, CharonY said:

That is a rather bad example, as complex modeling is not very amenable to manual input and was heavily automated from the get-go. I have little concern about its application to such (already) heavily automated processes.

Well, I used that example because your job is next. I believe any creative endeavor will, as you guys say, “change” and become less valuable.

Do you think AI can’t teach? It can summarize the notes and provide reading. College is mostly self-studying. So if the AI wrote the test from the curriculum, not only could it make custom tests, it could also add a time limit and proctor the exam.

I understand it will change the way students study. That doesn’t worry me. I use art as the example: if you paint a picture and then the AI copies it and draws a thousand more variations, is your painting still special?

 

Posted (edited)
Just now, Trurl said:

Do you think AI can’t teach?

If the AI has never read the early work of the Reverend Bayes, but has only been trained on Pearson and Fisher and Gosset, what would it teach?

 

After all, this thread is entitled statistics in science.

Edited by studiot
Posted
10 hours ago, swansont said:

I’ve seen several treatments of this framed as if cutting corners was a brand-new phenomenon, but yes, the corners are now bigger and easier to cut.

I agree, it is a matter of degree, and especially in terms of motivation it follows a very old mold. The one thing I am uncertain of is how many of these issues (from AI to social media and, say, CliffsNotes) can be layered on top of each other before we see a really new change.

  

7 hours ago, Trurl said:

Do you think AI can’t teach?

Not well at this point. It really just provides answers of various quality but is not really good at asking the right questions.

Quote

It can summarize the notes and provide reading. College is mostly self-studying. So if the AI wrote the test from the curriculum, not only could it make custom tests, it could also add a time limit and proctor the exam.

Well, that is exactly not teaching.

Quote

If the AI has never read the early work of the Reverend Bayes, but has only been trained on Pearson and Fisher and Gosset, what would it teach?

I am wondering, even if it has, how well it works. A big part of teaching (including in statistics) is not just highlighting, say, methods and approaches, but also anticipating how students might (or might not) think about them, and getting them to ask questions that tell you where their misunderstandings in concepts or approach might be. I am not sure current AIs are good at that, as they (mostly) take whatever misconceptions the querier has as gospel.

Posted
Just now, CharonY said:

I agree, it is a matter of degree, and especially in terms of motivation it follows a very old mold. The one thing I am uncertain of is how many of these issues (from AI to social media and, say, CliffsNotes) can be layered on top of each other before we see a really new change.

  

Not well at this point. It really just provides answers of various quality but is not really good at asking the right questions.

Well, that is exactly not teaching.

I am wondering, even if it has, how well it works. A big part of teaching (including in statistics) is not just highlighting, say, methods and approaches, but also anticipating how students might (or might not) think about them, and getting them to ask questions that tell you where their misunderstandings in concepts or approach might be. I am not sure current AIs are good at that, as they (mostly) take whatever misconceptions the querier has as gospel.

My question was a rather tongue-in-cheek oblique reference to the long-running row between the opponents and proponents of Bayesian statistics, as a way of highlighting the fact that, to the best of my knowledge, no AI yet constructed has ever discovered anything by itself nor had an original 'thought' of its own.

How could it, when it is programmed to weigh up the most probable response to any given text string, based on what has already been written (usually in English)?

In other words, at the higher level you are talking about, students were (and I hope still are) taught by people who have actually discovered (new) things.
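The "weigh up the most probable response" description can be made concrete with a toy next-word model. This is purely an illustrative sketch (the tiny corpus, the bigram counting, and the function names are mine, and real LLMs are vastly more elaborate), but it captures the point: the output can only ever be a rearrangement of what was already written.

```python
from collections import Counter, defaultdict

# Toy illustration (not any real LLM): pick the statistically most
# probable next word given the preceding word, based purely on
# previously seen text. Nothing here "discovers" anything new.
corpus = "the cat sat on the mat the cat ran".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    # Return the most frequent continuation seen in the corpus.
    return following[word].most_common(1)[0][0]

print(most_probable_next("the"))  # "cat" — it followed "the" twice, "mat" only once
```

Everything the model can say was already in the corpus; an "original thought" would require a continuation it has never seen weighted.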

Posted
27 minutes ago, studiot said:

My question was a rather tongue-in-cheek oblique reference to the long-running row between the opponents and proponents of Bayesian statistics, as a way of highlighting the fact that, to the best of my knowledge, no AI yet constructed has ever discovered anything by itself nor had an original 'thought' of its own.

How could it, when it is programmed to weigh up the most probable response to any given text string, based on what has already been written (usually in English)?

In other words, at the higher level you are talking about, students were (and I hope still are) taught by people who have actually discovered (new) things.

All true (and I just wanted to seize on your comment to at least pretend to move slightly back on topic...:)). And yes the last point is really important.

Posted
9 hours ago, Trurl said:

Do you think AI can’t teach? It can summarize the notes and provide reading.

It can do a really poor job of summarizing. I don’t see how that means it can teach.

Can it explain concepts? Can it figure out why an explanation doesn’t work for some students, figure out what the misconception is, and come up with alternate explanations? Give examples, because I think you’re overestimating the capabilities of LLMs.

Can it answer questions that aren’t part of its “training”? Google tried to excuse the poor performance of its AI on novel questions. Can it figure out a poorly-phrased question, which you will get from students who don’t understand enough to explain what they don’t know?

Posted
2 hours ago, swansont said:

Can it explain concepts?

I don’t know the true capabilities of AI. You could argue it isn’t thinking. But its ability to find patterns (some that we don’t even understand) is remarkable.

But even if our thoughts are more advanced, more creative, it doesn’t matter. Take board games: we might perform better at D&D, but anything that has a logical pattern, the AI will dominate.

Say this thing learns statistics and game theory. Even without them, the pattern recognition and speed of processing mimic how we think and are beyond our abilities.

I saw a YouTube video of a computer using machine learning on Super Mario Brothers that did nothing for weeks but, little by little, learned to play the game.

3 hours ago, swansont said:

 

Can it answer questions that aren’t part of its “training”?

Well, I am for education and teachers. However, while you could say teachers design the curriculum, an AI can train on it. I heard somewhere AI was training on Khan Academy.

You can’t replace the instruction and feedback of a teacher, but self-study or preparing for a licensing exam would be perfect for AI.

Even if you feel like AI can’t do your job, it can still train and spit out junk that works well enough to substitute in your place.

Bottom line: I don’t like it. It’s not the computer’s fault. Someone is trying to make money ripping off (training on) others’ content.

And I think the 500 billion AI initiative would be better spent on space and other projects.

Posted
7 hours ago, Trurl said:

I don’t know the true capabilities of AI. You could argue it isn’t thinking. But its ability to find patterns (some that we don’t even understand) is remarkable.

Which is neither teaching nor LLMs, AFAIK.

7 hours ago, Trurl said:

But even if our thoughts are more advanced, more creative, it doesn’t matter. Take board games: we might perform better at D&D, but anything that has a logical pattern, the AI will dominate.

Again, this is not teaching.

7 hours ago, Trurl said:

Say this thing learns statistics and game theory. Even without them, the pattern recognition and speed of processing mimic how we think and are beyond our abilities.

Still not teaching.

7 hours ago, Trurl said:

I saw a YouTube video of a computer using machine learning on Super Mario Brothers that did nothing for weeks but, little by little, learned to play the game.

There are different forms of AI. When you say “It can summarize the notes and provide reading” you are referring to LLMs, which are not necessarily the same set of algorithms as a program that has some other task.

And being good at one task is not a valid argument that it will be good at a different task (just like not all humans would make good teachers).

7 hours ago, Trurl said:

Well, I am for education and teachers. However, while you could say teachers design the curriculum, an AI can train on it. I heard somewhere AI was training on Khan Academy.

Why not just use Khan Academy then? Why throw a layer of crap into the mix?

7 hours ago, Trurl said:

You can’t replace the instruction and feedback of a teacher, but self-study or preparing for a licensing exam would be perfect for AI.

Self-study comes from some source material. Why not just use that? 

7 hours ago, Trurl said:

Even if you feel like AI can’t do your job, it can still train and spit out junk that works well enough to substitute in your place.

These two things are contradictory. If it can’t do the job, it’s not good enough.

 

Posted

A teacher should be able to help with solving sample problems. However, several times I have checked, LLMs got stuck with wrong solutions in spite of my attempts to point to where these solutions go wrong. This happens even when correct solutions are available on the Internet and are easily findable with a simple straightforward search.

Posted

LLMs sometimes also just outright contradict themselves. I think there is a bit of a contradiction in the idea of training a system that cannot think to help someone else to think. Maybe it can be overcome eventually, but right now I don't see it.

One aspect regarding self-learning: traditionally that is done with books, where folks often read context beyond what they expected. The reason is simple: students do not know what they don't know, and learning exclusively by writing questions will not reveal the gaps and, I believe, will strengthen misconceptions, based on the GIGO principle.

In contrast, a prof or teacher can identify gaps and misconceptions and direct students to new sources they weren't aware of (or, more likely, didn't want to read until being told to do so).

Posted

I advise caution against lumping all AIs and LLMs into the same one “it’s crap!” bucket.

For several months now I’ve watched many members here proclaiming how poor the performance of these models is, when they have quite likely only tried the badly outdated, low-performing free models from several years ago, and rather likely have done so using poorly formed prompts.

Such things happen, and it wouldn’t merit comment if it didn’t so often lead to the hasty generalization that all AIs suck. The best model you use today will already be the worst model you ever use by next week. 

Posted
8 hours ago, iNow said:

I advise caution against lumping all AIs and LLMs into the same one “it’s crap!” bucket.

For several months now I’ve watched many members here proclaiming how poor the performance of these models is, when they have quite likely only tried the badly outdated, low-performing free models from several years ago, and rather likely have done so using poorly formed prompts.

Such things happen, and it wouldn’t merit comment if it didn’t so often lead to the hasty generalization that all AIs suck. The best model you use today will already be the worst model you ever use by next week. 

Does this address any of my comments on so-called AI?

 

If so, how, please?

 

For example:

On 2/5/2025 at 11:08 PM, studiot said:

to the best of my knowledge, no AI yet constructed has ever discovered anything by itself nor had an original 'thought' of its own.

 

Posted
15 hours ago, iNow said:

I advise caution against lumping all AIs and LLMs into the same one “it’s crap!” bucket.

Given the demonstrated performance thus far, LLMs are crap until proven otherwise, IMO. It’s being presented as a solution now, not that it might become a viable solution some day. Until it passes a Turing test, I don’t like deeming it AI anyway. To me, Faux Intelligence is more apt.

Some of the examples above are machine learning, which is indeed a different beast (or set of beasts) than LLMs, and I agree it probably would be best to specify the implementation being referenced, much like we specify biology, geology, chemistry, astronomy or physics instead of just saying “science” since there are distinct differences between how they are conducted.

On 2/5/2025 at 6:08 PM, studiot said:

to the best of my knowledge, no AI yet constructed has ever discovered anything by itself nor had an original 'thought' of its own.

Agree. Computers do certain things more quickly than humans, and that’s the advantage being exploited for e.g. pattern recognition using ML or in sorting through piles of data to attempt to summarize something. 

Posted (edited)
Just now, swansont said:

Given the demonstrated performance thus far, LLMs are crap until proven otherwise, IMO. It’s being presented as a solution now, not that it might become a viable solution some day. Until it passes a Turing test, I don’t like deeming it AI anyway. To me, Faux Intelligence is more apt.

Some of the examples above are machine learning, which is indeed a different beast (or set of beasts) than LLMs, and I agree it probably would be best to specify the implementation being referenced, much like we specify biology, geology, chemistry, astronomy or physics instead of just saying “science” since there are distinct differences between how they are conducted.

Agree. Computers do certain things more quickly than humans, and that’s the advantage being exploited for e.g. pattern recognition using ML or in sorting through piles of data to attempt to summarize something. 

Yup, but only after considerable creation, organisation and optimisation effort by humans.

Which is why they are so useful for repetitive drudgery, once that spadework has been put in.

 

Edit: I did mean to ask why no one seemed interested in my debunking of the AI wave-height measurer, but then I discovered that the thread was moved to trash, so I could not post there anymore.

Considering the current debate, is no one interested in my comments there?

My back-of-envelope calculation suggests that the deflection angle from the horizontal for an instrument set up 1000 m from the wave is about 36 seconds of arc, which makes the height error about 0.2 m. After that, the error grows rapidly with distance.
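The small-angle arithmetic here can be checked in a few lines. This is just a sketch of the stated numbers (the variable names are mine; the 1000 m distance and 36 arcsecond deflection are taken from the post), using the relation h ≈ d · tan(θ):

```python
import math

# Height subtended at a given distance by a given angular deflection,
# using h = d * tan(theta) (effectively the small-angle approximation here).
distance_m = 1000.0    # instrument-to-wave distance (from the post)
angle_arcsec = 36.0    # deflection from the horizontal (from the post)

angle_rad = math.radians(angle_arcsec / 3600.0)
height_m = distance_m * math.tan(angle_rad)

print(f"height resolution ≈ {height_m:.2f} m")  # ≈ 0.17 m, roughly the quoted 0.2 m
```

Since the subtended height scales linearly with distance for a fixed angular resolution, the error indeed grows as the instrument is set further back.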

Edited by studiot
Posted
1 hour ago, studiot said:

Edit: I did mean to ask why no one seemed interested in my debunking of the AI wave-height measurer, but then I discovered that the thread was moved to trash, so I could not post there anymore.

Just the AI part was moved there, because it’s not allowed in mainstream discussions, owing to these veracity issues.

Posted
7 hours ago, studiot said:

Does this address any of my comments on so-called AI?

🤷🏽‍♂️ 

It wasn’t directed to you specifically and was a comment on the larger tenor of comments.

6 hours ago, swansont said:

Given the demonstrated performance thus far

My intuition is you’re likely not keeping up with current capabilities and thus are in no position to make such a declaration. You’re well informed on the vast majority of topics; I'm only encouraging you here to avoid hasty generalizations given limited exposure.

Posted
1 hour ago, iNow said:

My intuition is you’re likely not keeping up with current capabilities and thus are in no position to make such a declaration. You’re well informed on the vast majority of topics; I'm only encouraging you here to avoid hasty generalizations given limited exposure.

Where are these perfect LLMs? Not ChatGPT, not whatever Google is using for its summaries. Not Apple. And whatever performance this mystery LLM has (why isn’t it being adopted everywhere?), it doesn’t erase the bad performance of what’s widely used, because what I’ve seen is crap, and I won’t trust them until they’ve demonstrated they aren’t. As I said.
