geordief Posted April 6, 2023 (edited) 7 minutes ago, Genady said: The way these models work, they create their own falsehoods, called "hallucinations." So is there a way of addressing this failing without banning them entirely? That seems a Quixote-esque avenue, as the djinni is out of the bottle. Edited April 6, 2023 by geordief
Genady Posted April 6, 2023 4 minutes ago, geordief said: is there a way of addressing this failing without banning them entirely? So far, no.
geordief Posted April 6, 2023 4 minutes ago, Genady said: So far, no. Can we weight the entries in the database in terms of factual reliability (and enforce that standard by law)? I mean, does ChatGPT just gobble up "information" provided by serial murderers and normal Joe Soaps equally permissively?
Genady Posted April 6, 2023 26 minutes ago, geordief said: Can we weight the entries in the database in terms of factual reliability (and enforce that standard by law)? I mean, does ChatGPT just gobble up "information" provided by serial murderers and normal Joe Soaps equally permissively? There is some selection of the input, AFAIK, though not on the level of individual items.
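The idea raised above, weighting entries by factual reliability rather than filtering individual items, can be sketched in a few lines. This is a minimal, hypothetical illustration only: the sources, reliability scores, and sampling scheme are invented for the example and do not reflect how ChatGPT's training data is actually curated.

```python
import random

# Hypothetical corpus: each document carries a per-source reliability
# score (invented values, for illustration only).
corpus = [
    {"text": "doc from peer-reviewed journal", "reliability": 0.9},
    {"text": "doc from moderated forum",       "reliability": 0.6},
    {"text": "doc from anonymous blog",        "reliability": 0.2},
]

def sample_batch(docs, k):
    """Draw k documents, with probability proportional to each
    document's reliability score, so higher-trust sources are
    seen more often during training."""
    weights = [d["reliability"] for d in docs]
    return random.choices(docs, weights=weights, k=k)

batch = sample_batch(corpus, 5)
```

Under this scheme a source is never banned outright; it is simply sampled less often the lower its score, which matches the "weight, don't ban" suggestion in the post above.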
studiot Posted April 6, 2023 Author 2 hours ago, Ghideon said: I just wanted to say that I find this topic to be one of the most interesting discussions lately, and I'm closely following the debate. However, I'm having trouble formulating a substantial contribution. One observation I'd like to make is that generative AI seems to have ignited an increasingly polarized debate between its proponents and opponents, potentially more so than many previous applications of computer science. Do you share my observation, or do you have a different perspective? All contributions welcome. I'm glad the members are less polarised and more objective than your fears suggest. I think it's too early to assess the depth of polarisation. Hofstadter has some interesting analyses of AI.
Ghideon Posted April 7, 2023 16 hours ago, studiot said: I'm glad the members are less polarised and more objective than your fears suggest. I think it's too early to assess the depth of polarisation. Thanks for your reply! I think I need to clarify: I'm speaking about a polarised debate on a global level. Discussion here on the forum is quite objective. I'll continue to follow this topic. A basic example: some jurisdictions have rules about giving financial product advice. How will such rules be modified when tools such as ChatGPT are available and in use in regulated areas? I do not know, but I'm curious. (I'm currently on an assignment in the financial industry.) I had not seen the Hofstadter text before; I'll look more into what he has to say.
geordief Posted April 7, 2023 (edited) 36 minutes ago, Ghideon said: I'm speaking about a polarised debate on a global level. Off topic, perhaps, but can you try to give a short idea of the lines along which the debate is polarized in other settings**? Is it along the lines of whether a moratorium is called for? Or are there major discussions and differing approaches in other areas? **I have just listened to a bit of commentary on a few TV stations, that is all. Edited April 7, 2023 by geordief
Genady Posted April 7, 2023 50 minutes ago, Ghideon said: A basic example: some jurisdictions have rules about giving financial product advice. How will such rules be modified when tools such as ChatGPT are available and in use in regulated areas? ChatGPT is a regurgitated Internet search. Are there any rules about financial product advice acquired from an Internet search? The same would apply to ChatGPT.
studiot Posted April 7, 2023 Author 5 hours ago, Ghideon said: I had not seen the Hofstadter text before; I'll look more into what he has to say. This is a 20-years-on edition with some additional material and comments.