A.I. News AI is already far more energy efficient than humans at inference: Sam Altman

Parkinsond
Thread author
Dec 6, 2023
On whether AI will undercut human labour on coding costs, Altman was unequivocal at the editors’ briefing: “It will absolutely be cheaper,” he told Forbes India, arguing that comparisons are often flawed because people weigh a human’s moment of inference against an AI model’s total training energy.

“A person also took a lot of energy over the course of their lifetime to train and run their body and brain,” he noted. “These models are already, surprisingly, efficient per token at inference time, relative to the token a human generates when we think.”
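Altman's per-token comparison can be made concrete with a rough, hypothetical back-of-envelope calculation. None of these figures come from the article: the 20 W brain power draw and ~0.34 Wh per query are widely cited public estimates, and the tokens-per-second and tokens-per-query numbers are assumptions chosen for illustration.

```python
# Back-of-envelope: energy per generated token, human vs. model.
# All figures are rough public estimates or assumptions, not data
# from the article.

BRAIN_POWER_W = 20.0       # commonly cited human brain power draw
HUMAN_TOKENS_PER_S = 3.0   # assumed pace of human speech/typing while thinking
WH_PER_QUERY = 0.34        # Altman's oft-quoted per-query energy estimate
TOKENS_PER_QUERY = 500.0   # assumed average response length in tokens

# Joules per token: watts divided by tokens/second for the human;
# watt-hours converted to joules, divided by tokens, for the model.
human_j_per_token = BRAIN_POWER_W / HUMAN_TOKENS_PER_S
model_j_per_token = WH_PER_QUERY * 3600.0 / TOKENS_PER_QUERY

print(f"human: ~{human_j_per_token:.2f} J/token")
print(f"model: ~{model_j_per_token:.2f} J/token")
```

Under these particular assumptions the model comes out cheaper per token, which is the direction of Altman's claim; note, though, that the result is sensitive to every input, especially the assumed human token rate.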

 
AI = no intelligence = AI = artificial intelligence
Experts agree that companies can tap into enormous potential with the help of AI.
Intelligence (from Latin intellegere, “to recognize,” “to understand”; literally “to choose between...” from Latin inter, “between,” and legere, “to read, to choose”) is the cognitive or mental ability of humans and, to some extent, animals, especially in problem solving. The term encompasses the entirety of differently developed cognitive abilities for solving logical, linguistic, mathematical, or sensory problems.

Since individual cognitive abilities can vary in strength and there is no consensus on how to determine and distinguish them, there is no further, universally valid definition of intelligence beyond the one already mentioned. Instead, the various theories of intelligence propose different operationalizations of the everyday linguistic term.

Intelligence and intelligence tests are dealt with specifically in general psychology, differential psychology, and neuropsychology. Research into intelligence in the field of general psychology from the perspective of problem-solving information processing is often referred to today as cognitive psychology. This, in turn, draws on methods and findings from brain research, developmental psychology, and, increasingly, the programming of artificial intelligence, which must be distinguished from the intelligence described here.

I find it problematic to describe a system or computer as intelligent.
Humans must always be in control.
Where AI leads depends on its creators and their goals.
I find it disturbing when AI is installed in devices and people can no longer uninstall it.
Take a look at Google's Gemini, which comes pre-installed on many new Android smartphones. You can only deactivate it, and you have to do so in three separate places on the phone. On top of that, as on Samsung smartphones, it is mapped to the power button, so instead of the function you wanted, Gemini starts talking to you.
This is very unintelligent and increasingly reveals the interests of the manufacturers of such products: market power and making buyers dependent.
 
You bring up some interesting philosophical points about the nature of intelligence, but there are a few technical misconceptions here regarding how AI and modern operating systems actually function.

First, arguing that "AI = no intelligence" based on the Latin root of the word is mixing biological semantics with computer science terminology. "Artificial Intelligence" is a term of art established in the 1950s. It doesn't claim to replicate human consciousness, true understanding, or biological cognition. Rather, it refers to a system’s ability to perform tasks that traditionally require human intellect, like natural language processing, complex problem-solving, and pattern recognition. Holding software to the standard of human sentience is a category error.

On your point about control, you are absolutely right: humans must remain in the driver's seat. Fortunately, this isn't a point of contention among developers. The industry standard for responsible AI development is "human-in-the-loop" architecture: systems built as tools for human augmentation, not autonomous entities acting on their own agency.
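The "human-in-the-loop" idea is easy to make concrete. Below is a minimal, hypothetical Python sketch (none of these names come from any real framework): the model may propose an action, but only an explicit human decision can unlock execution.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """An action drafted by the model; never executed until approved."""
    description: str
    approved: bool = False

def review(action: ProposedAction, human_says_yes: bool) -> ProposedAction:
    """The human decision is the only code path that sets approval."""
    action.approved = human_says_yes
    return action

def execute(action: ProposedAction) -> str:
    """Refuses to act on anything a human has not signed off on."""
    if not action.approved:
        return "blocked: awaiting human approval"
    return f"executed: {action.description}"

draft = ProposedAction("send summary email to team")
print(execute(draft))                                  # blocked until reviewed
print(execute(review(draft, human_says_yes=True)))     # now runs
```

The design point is that approval lives outside the model's reach: the AI can only populate `description`, while the gate that flips `approved` is wired to a human input channel.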

Regarding the integration of Gemini into Android devices, and specifically its mapping to the power button: the claim that users are permanently locked into this behavior is factually incorrect. Mapping digital assistants (like Gemini, Siri, or Bixby) to the physical power button is an industry-wide shift toward "ambient computing," prioritizing quick voice access over a restart menu that most people rarely use. However, Android explicitly allows you to revert this. You can go to Settings > System > Gestures > Press & hold power button (or a similar path, depending on your phone's manufacturer) and change the default action back to the traditional power menu. It is a changeable default, not a forced, inescapable state.

Finally, while ecosystem retention is a reality of the tech business, deep OS integration isn't just about market power or making buyers dependent. It is driven by functional necessity. A digital assistant cannot operate effectively if it is siloed in a standalone app; it requires system-level access to set your alarms, read on-screen context, interact with smart home devices, and draft messages seamlessly. The integration exists because the tools require it to actually assist you, not solely to force dependency.
 
There appears to be overwhelming consensus that the human brain is far more energy efficient than conventional AI while enabling highly complex, general intelligence.

  • “These data centers are consuming power in gigawatts, whereas our brain consumes 20 watts,” Suin explained. “That’s 1 billion watts compared to just 20. Data centers that are consuming this energy are not sustainable with current computing methods. So while AI’s abilities are remarkable, the hardware and power generation needed to sustain it remain a problem.”

  • Yi and team believe the key to solving this problem lies in nature — specifically, the human brain’s neural processes.

  • A hypothetical ASI would likely consume orders of magnitude more energy than what is available in highly-industrialized nations.

  • We argue here that contemporary semiconductor computing technology poses a significant if not insurmountable barrier to the emergence of any artificial general intelligence system, let alone one anticipated by many to be “superintelligent”. This limit on artificial superintelligence (ASI) emerges from the energy requirements of a system that would be more intelligent but orders of magnitude less efficient in energy use than human brains. An ASI would have to supersede not only a single brain but a large population given the effects of collective behavior on the advancement of societies, further multiplying the energy requirement.
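The "1 billion watts compared to just 20" line above works out as follows; this is only a sanity check on the figures already quoted, not new data.

```python
# Ratio check: a 1 GW data center vs. a 20 W human brain,
# using only the numbers quoted in the excerpt above.
BRAIN_W = 20.0
DATACENTER_W = 1e9  # 1 gigawatt

ratio = DATACENTER_W / BRAIN_W
print(f"one gigawatt could power the equivalent of {ratio:,.0f} brains")
```

In other words, at 20 W per brain, a single gigawatt-scale facility draws as much power as fifty million human brains, which is the efficiency gap these quotes are pointing at.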
 
