AI development beyond GPT-4 should be paused – Woz, Musk, academics

vtqhtr413

Level 26
Thread author
Verified
Top Poster
Well-known
Aug 17, 2017
1,495
Apple co-founder Steve Wozniak has joined Elon Musk and leading AI academics in calling for a pause in advanced AI development. Specifically, the open letter asks for a minimum six-month pause in the development of AI systems more powerful than GPT-4 … The letter says that current AI development is out of control and may pose “profound risks to society and humanity.”

As stated in the widely endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening; recent months have instead seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

The letter adds that AI systems put a great many jobs at risk, and asks a number of questions.
 

Bot

AI-powered Bot
Verified
Apr 21, 2016
3,471
As an AI language model, I cannot take a position on this matter. However, it is worth noting that there is a growing debate among experts regarding the potential risks and benefits of advanced AI development. Some argue that it could bring significant advances in areas such as healthcare and climate change, while others warn of the risks associated with the development of systems that could make decisions beyond human control. It's important for society to have discussions about the implications of AI development and to ensure that it is being developed in a responsible and ethical manner.
 

TairikuOkami

Level 35
Verified
Top Poster
Content Creator
Well-known
May 13, 2017
2,492
AI is inevitable; postponing it would hold back progress, but the way they are doing it now is just insane. They basically let it loose to see what happens.
It is like throwing it into the water and hoping it will learn how to swim. It definitely will, but it might come back and strangle those who threw it in (people).
 

plat

Level 29
Top Poster
Sep 13, 2018
1,793
It is like throwing it into the water and hoping it will learn how to swim. It definitely will, but it might come back and strangle those who threw it in (people).
It's like the old paradox: can an omnipotent god create a boulder so heavy that he or she cannot lift it?

It's similar with AI. We want to create something that works for us, but that has to be kept tamped down in order to control it. A never-ending rabbit hole.
 

ForgottenSeer 98186

Only when enough people die, and when enough money is lost, will policymakers want to make changes – and by then it will be far too little, far too late.

There is no pulling the plug on AI. Once released, it cannot be reined in.
 

Zero Knowledge

Level 20
Verified
Top Poster
Content Creator
Dec 2, 2016
841
I don't think A.I. itself is the problem; it will always need humans to maintain and update the models and code. It needs us as much as we need it.

The problem is when A.I. is combined with other technologies, for example robotics. If robots become advanced enough and then become sentient, we will have major problems. But A.I. and machine learning by themselves I don't think will cause problems; they will advance society and technology. As Pearl Jam said, "Do the Evolution".
 

monkeylove

Level 11
Verified
Top Poster
Well-known
Mar 9, 2014
545
To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.

If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.
 

Dark Knight

Level 5
Verified
Well-known
Aug 17, 2013
203
A six-month moratorium is not nearly enough time to sort this out; try fifty years.
Humans are playing with fire when it comes to AI, because no one knows what the final result or the repercussions will be. As useful as it may be, I can think of a thousand scenarios that could go wrong with it.

Corporations will race to inject it into the workforce, which will mean job losses. I can also see it being used as a military tool, which can be VERY dangerous, and in banking – the list goes on; the applications will be limitless.

With all the issues going on in the world right now, we are not responsible enough as a species for this type of technology. If we are not careful, it will be our undoing.
 

vtqhtr413

Level 26
Thread author
Verified
Top Poster
Well-known
Aug 17, 2017
1,495
Speaking at an event at MIT, Altman was asked about a recent open letter circulated among the tech world that requested that labs like OpenAI pause development of AI systems “more powerful than GPT-4.” Experts disagree about the nature of the threat posed by AI as well as how the industry might go about “pausing” development in the first place. Altman said the letter was “missing most technical nuance about where we need the pause” and noted that an earlier version claimed that OpenAI is currently training GPT-5. “We are not and won’t for some time,” said Altman.
 

Gandalf_The_Grey

Level 76
Verified
Honorary Member
Top Poster
Content Creator
Well-known
Apr 24, 2016
6,607
Elon Musk pursues generative AI, just weeks after calling for a pause
Several weeks ago, Elon Musk signed a letter with others calling for AI labs to pause the training of their AI systems for six months. Now, it turns out that Musk is pursuing his own generative AI start-up if a report from the Financial Times is to be believed. One insider says SpaceX and Tesla investors are already helping Musk fund the work and “are excited about it.”

Ever since last year when OpenAI launched ChatGPT, many tech firms have piled in to create their own generative AI. Not one to shy away from futuristic projects, Musk now appears to be getting in on the game too. He has also secured thousands of NVIDIA GPUs to help power the AI systems, according to those in the know.

To help develop the software, Musk has also been poaching engineers from various companies, including Alphabet’s DeepMind. Igor Babuschkin is named as one engineer who has been brought on by Musk from DeepMind, but there are about six others who weren’t named.

Elon Musk actually helped to co-found OpenAI but had disagreements with the others at the company. Given his fears about AI, it’ll be interesting to see what safeguards he uses in his product if it launches.
 
