ChatGPT is bullshit



And now there’s a journal article declaring this to be the case

https://link.springer.com/article/10.1007/s10676-024-09775-5

We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs.


9 minutes ago, swansont said:

And now there’s a journal article declaring this to be the case

https://link.springer.com/article/10.1007/s10676-024-09775-5

We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs.

Well, we've all seen ample evidence of that on this forum. The claim not only seems true, but is also very funny, and a timely puncturing of the bubble of hype surrounding these verbose and fundamentally unintelligent programs.

I realise that AI encompasses a far wider scope than LLMs, but, as they stand today, LLMs look pretty meretricious to me.

It may be that their chief legitimate use is in collating references for the user to determine, for himself, which ones are good and which ones are not, i.e. just a superior kind of search engine.


Which model one uses and how one prompts it are extremely relevant.

Not all LLMs are GPTs nor are all GPTs at the same version nor trained on the same dataset(s). 

4 hours ago, swansont said:

the models are in an important way indifferent to the truth of their outputs.

So are approximately half the voting populace. 

Edited by iNow

2 hours ago, StringJunky said:

The Brave browser version puts the references at the bottom, like Wiki.

Yeah, honestly getting the actual source and not a probabilistic hallucination would not be that much additional code/memory.

It's like the math issue. It's not hard for a computer to do math correctly, but someone still needs to be arsed to program in the ability.
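That "program in the ability" part is essentially what tool use does now: instead of letting the model predict digits token by token, the arithmetic gets handed to a deterministic evaluator. A toy sketch in Python, purely illustrative and not any vendor's actual API:

```python
# Toy sketch: compute arithmetic deterministically instead of letting a
# language model "predict" the digits. Pure standard library.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(expr: str) -> float:
    """Evaluate a plain arithmetic expression without the risks of eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("3 * (17 + 4) ** 2"))  # 1323 -- computed, never guessed
```

The chat front ends that do get math right do roughly this: detect the expression, hand it to a calculator or a code sandbox, and splice the result back into the reply.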


4 minutes ago, Markus Hanke said:

There’s free AIs out there that do this, for example Perplexity.

Yes, I think they either programmed GPT for sheer speed or don't have the training database set up in such a way that it can find actual references.

You can't really store links and similar data easily using the default token system. Beyond a few pieces, they're all too random.
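You can see the problem by running a tokenizer over a link. A quick sketch, assuming the open-source tiktoken package and one of its published encodings:

```python
# Why verbatim URLs are fragile for a token-by-token sampler: a BPE
# tokenizer shreds them into many fragments, and a single wrong pick
# anywhere in the sequence yields a plausible-looking but dead link.
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
url = "https://link.springer.com/article/10.1007/s10676-024-09775-5"
tokens = enc.encode(url)

print(len(tokens), "tokens")
print([enc.decode([t]) for t in tokens])
```

Free-text prose tolerates a near miss; a DOI doesn't.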

Did see where it Rickrolled one guy though, so who knows.


2 hours ago, Endy0816 said:

Yeah, honestly getting the actual source and not a probabilistic hallucination would not be that much additional code/memory.

It's like the math issue. It's not hard for a computer to do math correctly, but someone still needs to be arsed to program in the ability.

Actually, references should ALWAYS be quoted. It's no different a requirement for LLMs, or a theoretical proper AI, than for a human researcher. Evidence, evidence, evidence.

Edited by StringJunky

Quote

ChatGPT is bullshit

With ChatGPT v3.5, you can ask the same question in two different ways, or in two different languages, and get completely different answers. The result is completely unreliable, and even dangerous if one is not aware of how it works and believes everything without any doubt (like the typical people using it).


4 hours ago, StringJunky said:

Actually, references should ALWAYS be quoted. It's no different a requirement for LLMs, or a theoretical proper AI, than for a human researcher. Evidence, evidence, evidence.

Most of the current LLMs are not researching anything they write.

 

Edited by Endy0816
Clarification

10 hours ago, iNow said:

Which model one uses and how one prompts it are extremely relevant.

Not all LLMs are GPTs nor are all GPTs at the same version nor trained on the same dataset(s). 

So are approximately half the voting populace. 

I think perhaps the best use for this tool is in understanding the languages of other species, and perhaps our best chance of meaningful dialogue if the aliens do get in touch.

People trying to look smarter than they are always trip themselves up, because it's only a tool if they know how to use it.

3 hours ago, Sensei said:

With ChatGPT v3.5, you can ask the same question in two different ways, or in two different languages, and get completely different answers. The result is completely unreliable, and even dangerous if one is not aware of how it works and believes everything without any doubt (like the typical people using it).

Both will evolve... 😉


10 hours ago, iNow said:

Not all LLMs are GPTs nor are all GPTs at the same version nor trained on the same dataset(s). 

But the underlying issue is the datasets. The algorithm can’t discern veracity of information; it relies on what it’s fed, and those choices are made by humans. The “AI” isn’t intelligent. It’s not thinking. It’s just a fancy search engine.
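Mechanically, that's all generation is: repeated weighted sampling from a distribution over next tokens, with nothing in the loop ever consulting a fact. A toy illustration (the probabilities are invented for the example):

```python
# Toy picture of next-token generation: pick a continuation by weight.
# Nothing here checks whether the chosen token is true. Numbers invented.
import random

def sample_next(probs: dict[str, float]) -> str:
    """Pick one continuation, weighted by probability alone."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical distribution after the prompt "Water boils at ... degrees C"
next_token_probs = {"100": 0.72, "212": 0.15, "90": 0.08, "0": 0.05}
print(sample_next(next_token_probs))
```

"100" wins because it was common in the training text, not because anything consulted a thermometer. Truth is incidental to the mechanism.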


1 hour ago, Endy0816 said:

Most of the current LLMs are not researching anything they write.

 

Where are they getting their sources? Is it just a mish-mash of stuff they hold locally, with answers collated from that?

Edited by StringJunky

1 hour ago, swansont said:

But the underlying issue is the datasets. The algorithm can’t discern veracity of information; it relies on what it’s fed, and those choices are made by humans. The “AI” isn’t intelligent. It’s not thinking. It’s just a fancy search engine.

There's nothing wrong with using a tool, if it's used correctly...


1 hour ago, swansont said:

The algorithm can’t discern veracity of information; it relies on what it’s fed, and those choices are made by humans. The “AI” isn’t intelligent. It’s not thinking. It’s just a fancy search engine.

I mostly align with your central point, but there is a lot of work happening that lets you use other AIs to evaluate the accuracy of the prompted tool, and these approaches seem rather effective. The basic idea is that you have AIs with expertise in certain spaces and use those to evaluate what is returned before showing it to the user.
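In code-shaped terms, the pattern is roughly the following. This is only a sketch of the control flow: complete() stands in for whichever chat-completion API you call, and the model names are hypothetical placeholders.

```python
# Bare-bones generate-then-verify loop. complete() is a placeholder for a
# real chat-completion call; the model names are invented for illustration.
def complete(model: str, prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider of choice")

def answer_with_review(question: str) -> str:
    draft = complete("generalist-model", question)
    verdict = complete(
        "domain-expert-model",
        f"Question: {question}\nDraft answer: {draft}\n"
        "Reply PASS if the draft is accurate; otherwise list the errors.",
    )
    if verdict.strip().startswith("PASS"):
        return draft
    # One repair pass with the expert critique before the user sees anything.
    return complete(
        "generalist-model",
        f"Revise this answer using the critique.\nCritique: {verdict}\nAnswer: {draft}",
    )
```

Whether the judge model actually catches the errors is an empirical question, but the plumbing itself really is that simple.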

1 hour ago, StringJunky said:

Where are they getting their sources? Is it just a mish-mash of stuff they hold locally, with answers collated from that?

It depends on the LLM and the relationships they've established with other companies. You may have seen big news somewhat recently about OpenAI forming an agreement with newspapers and publishers like Washington Post (or getting sued by the NYTimes), and others inking deals with Reddit to train on their vast troves of data, for example.

The training corpus for these models varies, and obviously companies like Google for Gemini and Meta for Llama 3 (or even X/Twitter model Grok) have a much larger pool from which to work than some of the smaller players. 


Adding to the above, it looks like Claude 3.5 Sonnet is the new best-in-class LLM. It's only been out for a few hours now.

Alex Albert from Anthropic pointed out that on the GPQA (Graduate-Level Google-Proof Q&A) benchmark, it achieved 67% with various prompting techniques, beating PhD experts in their respective fields.

The training data seems to be current at least as of February this year (GPT-4 is only current to somewhere in early 2023, if I'm not mistaken), and it has better vision capabilities than GPT-4o (the Scarlett Johansson one with lots of buzz a week or two ago).

They also shared this on their release blog:  “In an internal agentic coding evaluation, Claude 3.5 Sonnet solved 64% of problems, outperforming Claude 3 Opus which solved 38%.”

 

 


  • 1 month later...

ChatGPT: "Bullshit, But at Least It's Entertaining..." A Humorous Critique of "ChatGPT is Bullshit"

Abstract: The authors of "ChatGPT is Bullshit" (Hicks et al., 2024) seem to have stumbled into a particularly deep, and perhaps slightly self-aggrandizing, philosophical rabbit hole. While they're technically correct that ChatGPT, and other large language models, are not actually concerned with "truth" in the way a human mind is, their insistence on labeling it "bullshit" feels more like a tweed-jacketed academic's attempt to assert intellectual superiority than a meaningful contribution to the discourse on AI ethics. This paper will take a humorous look at the "ChatGPT is Bullshit" argument, poking fun at the authors' philosophical acrobatics while acknowledging the very real need for ethical guidelines in the development and deployment of AI.

Introduction: It seems that the scientific community is in a tizzy over AI. We're either heralding it as the harbinger of a utopian future or lamenting its imminent takeover of the world. Lost in the hype and fear is the nuanced reality that AI is a tool, and like any tool, it can be used for good or evil depending on the intentions of the user. Enter Hicks, Humphries, and Slater, who, in their paper "ChatGPT is Bullshit," appear to have stumbled upon a unique method of grappling with the ethical implications of AI: by declaring it "bullshit" and then explaining why, in great detail, it is, indeed, "bullshit" in the Frankfurtian sense.

One might think, "Well, isn't that a bit obvious? A computer program, especially one trained on a massive dataset of human-generated text, is hardly going to be spitting out deep philosophical truths about the meaning of life." But, alas, dear reader, Hicks, Humphries, and Slater see it as their duty to break this news to the world, using language that's about as dense and convoluted as a philosophy PhD dissertation written in 19th-century German.

"Bullshit" Defined: Or, How to Make a Simple Concept Seem Incredibly Complicated

The crux of Hicks, Humphries, and Slater's argument is that ChatGPT, because it's designed to produce human-like text without any concern for truth, is engaged in "bullshitting" in the Frankfurtian sense. They delve into Harry Frankfurt's work on the topic, meticulously outlining his distinction between "hard bullshit" (where there's an attempt to deceive about the nature of the enterprise) and "soft bullshit" (where there's a lack of concern for truth). It's a fascinating and, frankly, rather tedious philosophical discussion that would likely leave even the most ardent Frankfurt enthusiast wondering, "Is this really necessary? Can't we just call a spade a spade?"

A Case Study in Overblown Pronouncements: When a "Bullshit Machine" Sounds More Like a "Metaphysical Enigma"

Hicks, Humphries, and Slater go on to argue that ChatGPT, as a "bullshit machine," produces text that's not simply wrong, but rather "bullshit" because it's "designed to give the impression of concern for truth." They seem to suggest that ChatGPT is intentionally attempting to deceive us into believing it's a genuine thinking being, rather than just a very sophisticated piece of software.

Now, while it's true that ChatGPT can be surprisingly convincing at times, especially when it's stringing together grammatically sound sentences with impressive fluency, it's hard to take seriously the idea that it's actively trying to "misrepresent what it is up to." It's more likely that ChatGPT is simply doing what it was programmed to do: generate text that resembles human language, even if that text happens to be factually inaccurate.

The Real Ethical Concerns (That Are Worth Discussing): Beyond the "Bullshit" Rhetoric

While the authors of "ChatGPT is Bullshit" get bogged down in their verbose attempts to dissect the intricacies of "soft bullshit" versus "hard bullshit," they do touch upon some very real concerns about AI development and deployment. For example, they correctly point out that the widespread use of AI-generated text, particularly in fields like law and medicine, could have serious consequences if it's not carefully vetted for accuracy and reliability.

Their worries about the use of inaccurate information generated by AI are valid and important, but their insistence on labeling everything "bullshit" obscures the real ethical dilemmas at play. It's far more productive to focus on solutions, such as robust fact-checking mechanisms, rigorous testing and evaluation of AI systems, and transparent communication about the limitations of AI.

Conclusion: Keep It Real, Keep It Honest, and Keep It Humorous

The scientific community needs to move beyond the sensationalism and philosophical grandstanding that often accompanies discussions of AI. While it's important to be aware of the potential risks and pitfalls, we shouldn't let the fear and hype prevent us from harnessing the immense potential of AI for the betterment of society.

So, the next time you encounter a seemingly profound pronouncement about the "bullshit" nature of AI, take a deep breath, laugh, and remember that behind the smoke and mirrors, there's a real need for thoughtful, responsible, and ethical development and deployment of this powerful technology.


23 minutes ago, iNow said:

You literally just copy/pasted the link cited in the OP  🤦‍♂️

Interesting view, pal.

You're right, I did copy and paste the link. But wouldn't it be truly bullshit if I had just written a bunch of words that sounded profound, but didn't actually engage with the original text? That would be the real act of intellectual dishonesty, wouldn't it? I'm all for respectful discourse, but sometimes you just gotta cut to the chase.


12 hours ago, Tgro87 said:

You're right, I did copy and paste the link. But wouldn't it be truly bullshit if I had just written a bunch of words that sounded profound, but didn't actually engage with the original text? That would be the real act of intellectual dishonesty, wouldn't it? I'm all for respectful discourse, but sometimes you just gotta cut to the chase.

No, what you did was bullshit enough. And the real intellectual dishonesty is pretending you've hurt anyone's feelings, rather than admitting you pretty much jumped up on the round table of this discussion and crapped on it instead of attempting to persuade us towards your position.

Sometimes "cutting to the chase" just makes you look like a right asshole, and I think that's exactly what happened here. I'm not even sure what you're objecting to; it's like you really didn't read the thread. Obviously you have a different opinion about ChatGPT, so how about you start with that instead of all the drama?

