AIkonoklazt Posted January 3 Author Posted January 3 2 hours ago, iNow said: Duly noted Please substantiate this statement, especially given how many thousands of new LLMs are being deployed each week. In addition to what TheVat just said, there are long lists of search results you could look up on this yourself, including this one: https://perpet.io/blog/which-ai-tool-to-pick-for-your-next-project-chatgpt-llama-google-bard-claude/ The above talks about products by MS/OpenAI, Google, and Meta. I've seen discussions around ones that are still in development on LinkedIn, and they ALSO operate on the same basic principle (see the section "The main mechanism of LLM-based AI tools" in the article). Now, please substantiate what you said. As for your statement regarding "thousands of new LLMs" each week, I don't think that's really true either, since the bulk of new LLM tools coming out are based on existing models (versions of GPT, and open-source ones like Llama).
iNow Posted January 3 Posted January 3 (edited) 26 minutes ago, AIkonoklazt said: there are long lists of search results you could look up on this yourself, "Google it, bruh." Lol. Yeah. I think we're engaged in this topic at different levels. Thanks for your link to a consulting firm. Not helpful beyond confirming for me that you keep making absolute comments about chatbots while ignoring the rest of the space. 26 minutes ago, AIkonoklazt said: The above talks about products by MS/OpenAI, Google, and Meta. I've seen discussions around ones that are still in development on LinkedIn Yes, those are the big pricey corporate ones. Have some fun here to learn about a few of the others, as well as how they're scored for performance: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard Or here if you prefer to filter on task type: https://huggingface.co/models 26 minutes ago, AIkonoklazt said: As for your statement regarding "thousands of new LLMs" each week, I don't think that's really true either TBH, I don't care what you think. There are nearly 500K at the link above alone. 50 minutes ago, TheVat said: The fundamental algorithms of LLM machines have not changed in the past year. There are, IMO, simply far too many, and they are evolving too quickly, to make such absolute comments and generalizations. Edited January 3 by iNow
AIkonoklazt Posted January 3 Author Posted January 3 36 minutes ago, iNow said: "Google it, bruh." Lol. Yeah. I think we're engaged in this topic at different levels. Thanks for your link to a consulting firm. Not helpful beyond confirming for me that you keep making absolute comments about chatbots while ignoring the rest of the space. Yes, those are the big pricey corporate ones. Have some fun here to learn about a few of the others, as well as how they're scored for performance: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard Or here if you prefer to filter on task type: https://huggingface.co/models TBH, I don't care what you think. There are nearly 500K at the link above alone. There are, IMO, simply far too many, and they are evolving too quickly, to make such absolute comments and generalizations. Your Hugging Face links prove my point. Which of those don't use transformers? Ah, someone talking about "performance (metrics)." It doesn't mean jack squat. You can move those metrics around all you want to show whatever you please, up to and including any purported but non-existent "emergent behaviors," since that's the topic at the moment: a novel feature that would make anything before it "obsolete": https://arxiv.org/pdf/2304.15004.pdf No, none of what you supplied supported your unfounded claims of obsolescence. Hope you saw the irony in your own chiding regarding generalizations, because, uh, that's exactly what you did in spectacular fashion with your "Game Boy" comments.
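To make the metric-sensitivity point from that paper concrete, here is a minimal numeric sketch in Python (the per-token accuracy values are hypothetical, not taken from the paper): a smooth, gradual improvement in per-token accuracy turns into an apparently sudden "emergent" jump purely because an all-or-nothing exact-match metric is used to score it.

```python
# Minimal sketch of the argument in arXiv:2304.15004 (hypothetical numbers):
# smooth gains in per-token accuracy look like a discontinuous "emergent"
# ability when scored with an all-or-nothing exact-match metric.
per_token_accuracy = [0.80, 0.85, 0.90, 0.95, 0.99]  # smooth improvement with scale
sequence_length = 20  # the task only counts if all 20 tokens are correct

for p in per_token_accuracy:
    exact_match = p ** sequence_length  # chance the whole answer is right
    print(f"per-token {p:.2f} -> exact-match {exact_match:.4f}")

# Output: 0.0115, 0.0388, 0.1216, 0.3585, 0.8179 -- near-zero scores that
# suddenly "take off," even though the underlying capability grew smoothly.
```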
GoombaLuke11 Posted January 3 Posted January 3 This is really interesting to read, thx for posting this, I love brain food :).
AIkonoklazt Posted January 3 Author Posted January 3 12 minutes ago, GoombaLuke11 said: This is really interesting to read, thx for posting this, I love brain food :). You're very welcome. If you don't want to start following people on LinkedIn, you can use this article as a starter: https://theconversation.com/why-a-computer-will-never-be-truly-conscious-120644 It makes a case similar to mine, though it draws on a different set of arguments, including Turing's own halting problem. Have fun.
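For readers who haven't met the halting problem before, the core of Turing's argument is a short diagonalization; here it is sketched as Python for readability (my own illustration; `halts` is the hypothetical decider that the proof shows cannot exist).

```python
# Sketch of Turing's halting-problem diagonalization. `halts` is the
# hypothetical total decider that the argument proves cannot exist.

def halts(program, program_input) -> bool:
    """Hypothetical oracle: True iff program(program_input) eventually halts."""
    raise NotImplementedError("no such always-correct decider can exist")

def diagonal(program):
    """Do the opposite of whatever `halts` predicts for `program` fed itself."""
    if halts(program, program):
        while True:  # halts() said we halt, so loop forever
            pass
    # halts() said we loop forever, so halt immediately

# The contradiction: consider diagonal(diagonal).
# If halts(diagonal, diagonal) returns True, diagonal(diagonal) loops forever.
# If it returns False, diagonal(diagonal) halts.
# Either way halts() is wrong about some input, so no such decider exists.
```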
Phi for All Posted January 3 Posted January 3 5 hours ago, mar_mar said: I didn't say a word about feelings. It was subconscious, the concept, which was underrated by some members of a forum. The thing is that one can't create a new work without participation of one's subconscious. Underrated? That's not what we did. We pointed out that the term "subconscious" is being phased out in favor of better descriptive concepts. We explained it rather well, I thought, but you must not have read that part. To recap, the term subconscious isn't applicable in the context you're using it, so you got some pushback (not underrating). Specifically, "one can't create a new work without participation of one's" preconscious, the stuff that you aren't thinking of right now but can recall fairly quickly. The preconscious mind is what helps you solve problems, and where you'll find what you call "intuition". Not entirely sure, but it looks like the unconscious mind and the preconscious mind are parts of the subconscious. So all preconscious thoughts are part of the subconscious, but not all subconscious thoughts are preconscious. Does that make sense? It's difficult discussing this with you since I don't think you care much about the actual science, and are focused on being right about your beliefs.
iNow Posted January 3 Posted January 3 (edited) 1 hour ago, AIkonoklazt said: Which of those don't use transformers? When did I claim none used transformers?? 1 hour ago, AIkonoklazt said: Hope you saw the irony in your own chiding regarding generalizations Definitely, though at least I’m not claiming things to be impossible like an evangelist based on what are now considered stone-age versions of the tech. 1 hour ago, AIkonoklazt said: https://arxiv.org/pdf/2304.15004.pdf Thanks. I read that a few months ago and heard them present an updated poster about it at NeurIPS 2023. Edited January 3 by iNow
AIkonoklazt Posted January 3 Author Posted January 3 29 minutes ago, iNow said: When did I claim none used transformers?? I'm showing you how your "argument" doesn't make sense by pointing out that all those "thousands" deployed every week rely on the same essential tech. Quote Definitely, though at least I’m not claiming things to be impossible like an evangelist based on what are now considered stone-age versions of the tech. Thanks for showing us all that you definitely didn't understand my argument, if you read it at all. Formalism doesn't depend on any version of any tech.
mar_mar Posted January 3 Posted January 3 1 hour ago, Phi for All said: Underrated? That's not what we did. We pointed out that the term "subconscious" is being phased out in favor of better descriptive concepts. [...] I understood. Thank you for the explanation; it was useful. I feel this concept is off topic for this thread. The one thing I recognized for myself is that it's better to explore one's unconscious/subconscious, because it is a simultaneously powerful and dangerous thing, and consciousness is the instrument for that exploration.
iNow Posted January 4 Posted January 4 2 hours ago, AIkonoklazt said: you definitely didn't understand my argument Lack of agreement isn’t lack of comprehension. 2 hours ago, AIkonoklazt said: Formalism doesn't depend on any version of any tech. Thanks for sharing. Formalism is an excessive adherence to prescribed forms. Have fun with that. Such is not my style at all.
AIkonoklazt Posted January 4 Author Posted January 4 5 hours ago, TheVat said: The LLM is still essentially what it was: a stochastic parrot which makes ranked lists of word probabilities and word-pairing probabilities. There is zero modeling of an actual world. The only thing it's doing is modeling next-word probabilities; its only "world" is statistical frequencies of strings within blocks of text. It is so very, very far from AGI that all the hype around it is just ludicrous. They understand nothing. I would recommend anything by Emily Bender, who coined the phrase "stochastic parrot," on this topic. The fundamental algorithms of LLM machines have not changed in the past year. There's a very long list of people on my LinkedIn feed who have been frustrated by the seemingly unending and unrelenting hype surrounding LLMs. I think all of us are basically exhausted at this point from the continual exasperation. I attended the annual Stochastic Parrots Day celebration on the second anniversary of Bender's paper via Twitch, and it looks like they do have a recording of the different sessions: https://peertube.dair-institute.org/w/p/5k7JempgUbCAcpTjUZPuKQ The panel on worker exploitation is rather prescient considering the later Hollywood actors' and screenwriters' strikes stemming from generative AI: https://www.theverge.com/2023/7/13/23794224/sag-aftra-actors-strike-ai-image-rights https://www.polygon.com/23742770/ai-writers-strike-chat-gpt-explained There was a Google Doc from the event where attendees submitted links relevant to the topic. I put my link there along with some other articles I've collected, but I'll need to go look for its URL.
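TheVat's "ranked lists of word probabilities" description can be made concrete with a toy bigram model. This is only an illustrative sketch (production LLMs learn transformer weights over embeddings rather than literal pair counts, so this is a deliberate simplification), but the point carries over: the model's entire "world" is a table of string statistics.

```python
# Toy "stochastic parrot": rank next-word candidates purely by how often
# word pairs co-occur in the training text. No world model, just statistics.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Bigram counts: this table is the model's entire "world."
pair_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    pair_counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to observed pair frequencies."""
    candidates = pair_counts[prev]
    if not candidates:  # dead end (final word of the corpus): just restart
        return "the"
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights)[0]

# Generate fluent-looking text by repeatedly sampling the ranked list.
output = ["the"]
for _ in range(8):
    output.append(next_word(output[-1]))
print(" ".join(output))  # e.g. "the cat sat on the mat and the dog"
```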
iNow Posted January 4 Posted January 4 8 minutes ago, AIkonoklazt said: There's a very long list of people on my LinkedIn feed who have been frustrated by the seemingly unending and unrelenting hype surrounding LLMs LinkedIn is largely a cesspool of self-promoters and blustery, overhyped marketing, so this isn't surprising.
AIkonoklazt Posted January 4 Author Posted January 4 19 minutes ago, iNow said: Lack of agreement isn’t lack of comprehension. Thanks for sharing. Formalism is an excessive adherence to prescribed forms. Have fun with that. Such is not my style at all. The second sentence of the reply shows you've no idea what I'm talking about with regard to formalism.
iNow Posted January 4 Posted January 4 2 minutes ago, AIkonoklazt said: The second sentence of the reply shows you've no idea what I'm talking about with regard to formalism. [screenshot: dictionary definition of "formalism"]
AIkonoklazt Posted January 4 Author Posted January 4 (edited) 33 minutes ago, iNow said: LinkedIn is largely a cesspool of self-promoters and blustery, overhyped marketing, so this isn't surprising. Way to go with the generalization. Professors emeritus are already-retired educators; they're in it for the knowledge sharing. Boy, just how desperate are you, grasping at every last little straw? 24 minutes ago, iNow said: Congrats on being able to use a search. Now either look at something with more than just one sense defined, or try not to cut off what you don't want to acknowledge from screencaps. Here, I'll throw you a bone: try sticking the term "logical" or "mathematical" in front of the word "formalism." https://royalsocietypublishing.org/doi/10.1098/rspa.2017.0872 Edited January 4 by AIkonoklazt
AIkonoklazt Posted January 4 Author Posted January 4 iNow is sticking to his role for this particular thread to a T.
iNow Posted January 4 Posted January 4 Mostly I dislike it when people pretend they can tell what will or will not be possible in the future, or declare things to be impossible when those things are still very much in their infancy. ✌🏼
AIkonoklazt Posted January 4 Author Posted January 4 Mostly I dislike it when people pretend that they know even a shred of what they're talking about when they absolutely don't, starting with the word "formalism." Particularly those who entertain the idea that a particular formalism changes just because the implementation of that formalism changes. Again, perfectly demonstrating utter ignorance on the matter. Oh, and not to mention constantly jabbing and venturing into metadebate.
dimreepr Posted January 4 Posted January 4 (edited) 11 hours ago, AIkonoklazt said: Mostly I dislike it when people pretend that they know even a shred of what they're talking about when they absolutely don't, starting with the word "formalism." Particularly those who entertain the idea that a particular formalism changes just because the implementation of that formalism changes. Again, perfectly demonstrating utter ignorance on the matter. Oh, and not to mention constantly jabbing and venturing into metadebate. As far as I can tell, what you mostly dislike is reasonable arguments that you can't refute. The irony is strong with this one, @iNow; he refuses to learn anything from science that doesn't agree with his belief. Almost Pythonesque, in a Life of Brian quest-for-the-Holy-Grail sort of way. Edited January 4 by dimreepr
iNow Posted January 4 Posted January 4 (edited) This thread is already way too personal, and I'd prefer we focus less on the individual. I'll try to be better at this myself, but we all need to remain focused on the positions and the merit of the information being shared. Let's be clear: many of Alkon's points are entirely valid. Much of what he shares contains very good and useful information. Likewise, some of what I've shared has been somewhat weak. This is all true, and so is the fact that he clearly has a strong interest in this topic and obviously allocates much of his time to learning about it. That's exactly what we ALL should be doing... learning, growing, understanding... and I applaud him for it. I just cannot personally join him in that final leap where he keeps making absolute comments about what will and will not be possible in the future, or where he dismisses things based solely on a rigid framing of terms or on the quite limited technologies which are most familiar and most hyped today (or those being discussed on LinkedIn, for example). Nobody has a crystal ball, and nobody should, IMO, argue in the manner he has by starting with formalized, rigid, unbending structures and preconceived conclusions. We can make any logic work if we put all data into rigid, potentially inaccurate semantic boxes, and I see a lot of that here. If that works for him, then great! But it doesn't work for me, nor, I propose, does it work for most people who are scientifically minded (I believe he may be more philosopher than empiricist, but that's not intended as either a judgement or a slight, just a general observation). The technology in this space is changing at an incredible pace. It is equally being amplified by parallel technologies in processing power and capabilities. There are literally tens of thousands of seriously brilliant engineers working on this every single minute of every single day, and my core position here is that we must be EXTREMELY cautious and avoid making broad, sweeping proclamations and predictions with any illusion of certainty. We must temper our confidence. What's potentially worse here is that we barely have workable definitions of consciousness and unconsciousness, the actual topic of the central claims made in the OP... so any assertions about what does and does not fit into those ill-defined, ever-evolving categories strike me as specious at best. Anyway... enough personal bullshit, yeah? This is an interesting topic that's fun to explore if we can please be civil with one another (and yes... the same reminder applies equally to me). Edited January 4 by iNow
dimreepr Posted January 4 Posted January 4 51 minutes ago, iNow said: This thread is already way too personal, and I'd prefer we focus less on the individual. [...] Indeed, it's easy to forget that these discussions are little more than a parlour game...
AIkonoklazt Posted January 5 Author Posted January 5 Okay, +1 to iNow; he's being fair now. I like fair. I think more explanation may be in order on the hows and whys of a formalism. First, we ask ourselves: what is a machine, and why is it any different from something that's not one? A machine is a designed object that has things that move things around. We have to design behaviors into this object so that it does what we want it to do. What is this "things that move things around"? It doesn't matter. The bottom line is that we have to specify something about the "things that move things around," and the way to do this is generally an instruction. An instruction to things that move things around is an algorithm. What I have done is abstract things so far up that the algorithm itself is a form. It doesn't matter how we implement the "things that move things around" (formalism). The algorithm in general is now a form. Anything that is made has to do this general thing. The technology doesn't matter. In my article, I gave the example of how even catapults follow this. Architecture doesn't matter; you can use gears, you can use water pipes, basically anything that moves things around. As you can see, this is the furthest one can get from "rigidity." This has to do with the principles of computation itself. My argument is from principles: it is mostly an a priori argument that's independent of time and place. A trillion years from now, if you have something that moves things around, you're going to have something that moves things around... I hope this provides a reasonable (re)starting point. I'm not going to get into the reasons that reverse engineering doesn't make sense yet (well, they're in the article, but as far as re-explaining everything is concerned, I should just keep it short for now). P.S. Organisms are not designed, and therefore not subject to algorithms. See the scientific finding referred to in my article regarding the behavior of neural groups in a fly.
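To put the abstraction into code terms, here is a minimal sketch (my own illustration, not from the article): Euclid's GCD written purely as a transition rule on states. Nothing in the form says what physically "moves things around"; gears, water pipes, or transistors realizing this rule would all be running the same algorithm.

```python
# Sketch of substrate independence: an algorithm's *form* is just a
# transition rule plus a halting condition; the implementing "things that
# move things around" are left entirely unspecified.

def gcd_step(state):
    """One step of Euclid's rule on an (a, b) state; substrate unspecified."""
    a, b = state
    return (b, a % b)

def run_gcd(a, b):
    state = (a, b)
    while state[1] != 0:  # the halting condition is part of the form, too
        state = gcd_step(state)
    return state[0]

print(run_gcd(48, 36))  # 12 -- gears or water valves would agree
```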
dimreepr Posted January 5 Posted January 5 8 hours ago, AIkonoklazt said: Organisms are not designed, and therefore not subject to algorithms. See the scientific finding referred to in my article regarding the behavior of neural groups in a fly But they do have a design, and they were subject to an algorithm: causation caused it; therefore, with enough understanding, it's possible to reverse-engineer... you.
TheVat Posted January 5 12 hours ago, AIkonoklazt said: This has to do with the principles of computation itself. My argument is from principles: it is mostly an a priori argument that's independent of time and place. Yes, and it builds on the CRA, a philosophical argument that is a priori. The crux of such arguments is that human brains are largely analog (or analog-to-digital) and react directly to the world of which they are a part, as opposed to machines (yes, they can be circuits or valved water pipes or air hoses or anything), which are digital devices that follow instructions for computation (AKA algorithms) and interact with the world via compiled digital inputs, i.e., they manipulate numeric strings. For the original CRA supporter, computers having minds is rather like a virtual weather simulation that starts gushing water from its processors. Ain't gonna happen. So it comes down, generally, to two questions: first, can a digital simulation in any way become that which it simulates? And second, is genuine understanding (with the conscious awareness that implies) something that could happen if digital systems somehow morphed into fully embodied entities that actually interact with an exterior world? This latter is the focus of the Robot Argument (RA), which philosophers like Dennett and Jerry Fodor, among others, have endorsed versions of. The RA involves something usually called externalist semantics. This agrees with Searle that syntax and internal connections in isolation from the world are insufficient for semantics, while suggesting a hope that more embodied forms with causal connections to the world can provide content to the internal symbols. So Dennett et al. (iirc Hans Moravec is also a fan of the RA, no surprise, right?) are open to the notion that a symbol manipulator could, in principle, "graduate" to actual semantics and really attach meanings to the symbols it is manipulating. Full disclosure: I worked briefly with a very narrow form of AI back in the day, developed a couple of expert systems back in the late '80s, and leaned towards the CRA. While I still question functionalism, I have become more open to externalist semantics and the RA in terms of some future entity that might interact both analogically and digitally with the world: a blend of organic and machine forms, a creature that operates both with symbols AND has a non-symbolic system that succeeds by being embedded in a particular environment. Pretty pie in the sky, right?