We may not even “need” AGI. The future of machine learning and robotics may well involve multiple wildly varying models working together.
LLMs are already very good at what they do (generating and parsing text and making a passable imitation of understanding it).
We already use them with other models. For example, Whisper is a speech-recognition model: you feed its output to an LLM to interpret it, use the LLM’s JSON output with a traditional parser to feed a motion control system, then go back to an LLM to output text to feed to one of the many TTS models so it can “tell you what it’s going to do”.
Put it in a humanoid shell or a Spot dog and you have a helpful robot that looks a lot like AGI to the user. Nobody needs to know that it’s just 4 different machine learning algorithms in a trenchcoat.
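The chain described above can be sketched as plain sequential calls. Everything here is a stand-in: the function names and the JSON schema are hypothetical, not the real Whisper/LLM/TTS APIs.

```python
import json

# Stand-ins for the real models; each would wrap Whisper, a local LLM,
# a TTS engine, etc. Names and schema are illustrative only.
def speech_to_text(audio):            # Whisper stage
    return "turn up the furnace by two degrees"

def llm_interpret(text):              # LLM stage: emit structured JSON
    return '{"device": "furnace", "action": "increase", "amount": 2}'

def llm_describe(command):            # LLM stage: narrate the plan
    return f"Okay, I'll {command['action']} the {command['device']}."

def text_to_speech(text):             # TTS stage
    return f"<audio: {text}>"

def run_pipeline(audio):
    text = speech_to_text(audio)
    command = json.loads(llm_interpret(text))   # traditional parser
    # ...command would be handed to the motion/automation controller here...
    return text_to_speech(llm_describe(command))

print(run_pipeline(b"..."))
```

Each stage runs to completion before the next one starts, which is why the total cost is the sum of the stages, not a product.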
Okay, so there are things they’re useful for, but this one in particular is fucking… not even nonsense.
Also, the ML algorithms exponentially increase the necessary clock cycles with each one you add.
So it’s less a trenchcoat and more an entire data center.
And it still can’t understand; it’s still just sleight of hand.
Yes, thus “passable imitation of understanding”.
The average consumer doesn’t understand tensors, weights and backprop. They haven’t even heard of such things. They ask it a question, like it was a sentient AGI. It gives them an answer.
Passable imitation.
You don’t need a data center except for training, either. There’s no exponential term as the models are executed sequentially. You can even flush the huge LLM off your GPU when you don’t actively need it.
I’ve already run basically this entire stack locally and integrated it with my home automation system, on a system with a 12GB Radeon and 32GB RAM. Just to see how well it would work and to impress my friends.
You yell out “$wakeword, it’s cold in here. Turn up the furnace” and it can bicker with you in near-realtime about energy costs before turning it up the requested amount.
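A minimal sketch of the parsing step in that exchange, assuming a hypothetical JSON intent schema (the field names are mine, not from any real home automation system): the LLM only proposes an action, and deterministic code validates it before anything touches the furnace.

```python
import json

# Hypothetical schema the local LLM is prompted to emit for the request.
reply = '{"device": "thermostat", "action": "set_delta", "delta_celsius": 2}'

intent = json.loads(reply)

# A traditional parser/validator gates what reaches the hardware,
# so a hallucinated field or absurd value can't drive the furnace.
allowed_actions = {"set_delta", "set_target"}
assert intent["device"] == "thermostat"
assert intent["action"] in allowed_actions
assert -5 <= intent["delta_celsius"] <= 5   # sanity-bound the change

print(f"Adjusting thermostat by {intent['delta_celsius']} degrees")
```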
One of the engineers who wrote ELIZA formed a deep connection to, and relationship with, it. The person who wrote it.
Painting a face on a spinny door will make people form a relationship with it. That’s not a measure of AGI.
‘An answer’ isn’t hard. A Magic 8-Ball does that. So does a piece of paper that says “drink water, you stupid cunt”. This makes me think you’re arguing from commitment or identity rather than knowledge or reason. Or that you just don’t care about truth.
Yeah, they talk to it like an AGI. Or like a search engine (which are a step toward AGI, largely crippled by LLMs).
Color me skeptical of your claims in light of this.
I think it’s pretty natural for people to confuse the way mechanisms of communication are used with inherent characteristics of the entity you’re communicating with: “If it talks like a medical doctor, then surely it’s a medical doctor”.
Only that’s not how it works, as countless politicians, salesmen and conmen have demonstrated - no matter how deep we dig into subtle details, communication isn’t really guaranteed to tell us all that much about the characteristics of what’s on the other side. They might just be lying or simulating, and there are even entire societies and social strata educated since childhood to “always present a certain kind of image” (just go read about old wealth in England) - in other words, to project a fake impression of their character in the way they communicate.
All this to say that it doesn’t require ill intent for somebody to go around insisting that LLMs are intelligent: many if not most people try to read the character of a subject from the language the subject uses (which they shouldn’t, but that’s how humans evolved to think in social settings), so they truly believe that whatever produces language like an intelligent creature must be an intelligent creature.
They’re probably not the right people to be opining on cognition and intelligence, but let’s not assign malice to it - at worst it’s pigheaded ignorance.
I think the person my previous comment was replying to wasn’t malicious; I think they’re genuinely invested, financially or emotionally, in this bullshit, to the point that their critical thinking is compromised. Different thing.
Odd loop backs there.
I think you’re misreading the point I’m trying to make. I’m not arguing that an LLM is AGI or that it can understand anything.
I’m just questioning what the true use case of AGI would be that can’t be achieved by existing expert systems, real humans, or a combination of both.
Sure, DeepSeek or Copilot won’t answer your legal questions. But neither will a real programmer. Nor will a lawyer be any good at writing code.
However when the appropriate LLMs with the appropriate augmentations can be used to write code or legal contracts under human supervision, isn’t that good enough? Do we really need to develop a true human level intelligence when we already have 8 billion of those looking for something to do?
AGI is a fun theoretical concept, but I really don’t see the practical need for a “next step” past the point of expanding and refining our current deep learning models, or how it would improve our world.
Those are not meaningful use cases for LLMs.
And they’re getting worse at even faking it now.