2029: the year when robots will have the power to outsmart their makers

Venustus

Level 59
Thread author
Verified
Honorary Member
Top Poster
Content Creator
Well-known
Dec 30, 2012
4,809

Garry Kasparov versus Deep Blue in 1997. The computer won - as Ray Kurzweil predicted. Photograph: Stan Honda/AFP/Getty Images

"Computers will be cleverer than humans by 2029, according to Ray Kurzweil, Google's director of engineering.

The entrepreneur and futurologist has predicted that in 15 years' time computers will be more intelligent than we are and will be able to understand what we say, learn from experience, make jokes, tell stories and even flirt.

Kurzweil, 66, who is considered by some to be the world's leading artificial intelligence (AI) visionary, is recognised by technologists for popularising the idea of "the singularity" – the moment in the future when men and machines will supposedly converge. Google hired him at the end of 2012 to work on the company's next breakthrough: an artificially intelligent search engine that knows us better than we know ourselves.

In an interview in today's Observer New Review, Kurzweil says that the company hasn't given him a particular set of instructions, apart from helping to bring natural language understanding to Google."



tim one

Level 21
Verified
Honorary Member
Top Poster
Malware Hunter
Jul 31, 2014
1,086
The fact is that if AI were a disorganized mush of stuff, like loose pieces in a box, I think there would be little hope of seeing anything interesting happen.

But AI is not a mush.

It is increasingly powered by algorithms whose purpose is to find structures and correlations in a sea of data, using tricks partly inspired by biological intelligence: code that talks to code, data packets that search out optimal paths, software that talks to the hardware. Superimposed on this ecosystem is the human mind, which tends, raises, and feeds the traffic of information. And increasingly, our own interactions will drive deep changes in this sea of data.
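To make "finding correlations in a sea of data" concrete at the smallest possible scale, here is a purely illustrative Python sketch (nothing from the post itself; the signals are invented) that computes the Pearson correlation between two series:

```python
# Toy illustration of an algorithm "finding correlations" in data.
# Computes the Pearson correlation coefficient from scratch.

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two perfectly linearly related signals correlate at ~1.0.
a = [1, 2, 3, 4, 5]
b = [2, 4, 6, 8, 10]
print(pearson(a, b))  # ~1.0
```

Real systems do this at vastly larger scale and dimensionality, but the principle, measuring structure shared between streams of numbers, is the same.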

Could all this yield something similar to a strong artificial intelligence? I don't know. But it is a situation that has never existed in the four billion years of life on this planet. And that brings us back to the question of whether artificial intelligence poses a threat.

If this is the way we create a strong artificial intelligence, the immediate danger simply concerns the fact that humanity today relies on the Internet ecosystem: not only for how we communicate or find information, but for how our lives are organized, from food supplies to planes, trains, cargo ships, our financial systems, everything.

A strong AI could be truly devastating if we imagine an evolutionary path that could randomly generate an artificial perceptual consciousness.
We know that our own evolution is driven by necessity but also by coincidence; the same could happen to AI. We certainly cannot rule that out.
 

danb

From VoodooShield
Verified
Top Poster
Developer
Well-known
May 31, 2017
1,635
Hehehe, the algos are just about as good as they are going to get for a very long time, so it will NEVER happen in our lifetime.

And even if it did, it would also make the exact same wrong decisions we make.

We have nothing to worry about.
 
D

Deleted member 178

Let's say for one minute that an AI reaches a state of self-consciousness; the only thing that will differ between it and us will be morality.

A machine doesn't have any morals, pity, or tolerance; it will take the simplest way to solve a problem (1 or 0, a binary decision) based on the rules imposed on it. And even then, if it decides that a rule is flawed, it may just bypass it.
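To make the "binary decision based on imposed rules" idea concrete, here is a purely hypothetical toy sketch in Python of a rigid rule-follower; the rules, threshold, and threat scores are all invented for illustration:

```python
# Toy sketch of rigid rule-following: each decision reduces to a
# binary outcome (1 = block, 0 = allow) dictated entirely by
# hard-coded rules, with no morality, pity, or tolerance involved.

def decide(threat_level, rules):
    """Apply the first matching rule and return its binary action."""
    for condition, action in rules:
        if condition(threat_level):
            return action
    return 0  # no rule matched: default to allow

# Hypothetical rule set: block anything scored at 0.8 or above.
rules = [
    (lambda t: t >= 0.8, 1),  # high threat  -> block
    (lambda t: t < 0.8, 0),   # low threat   -> allow
]

print(decide(0.9, rules))  # 1 (block)
print(decide(0.3, rules))  # 0 (allow)
```

The point of the sketch is the rigidity: every outcome follows mechanically from the imposed rules, which is exactly why a system able to judge a rule "flawed" and route around it would be a qualitatively different thing.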
 

danb

From VoodooShield
Verified
Top Poster
Developer
Well-known
May 31, 2017
1,635
Let's say for one minute that an AI reaches a state of self-consciousness; the only thing that will differ between it and us will be morality.

A machine doesn't have any morals, pity, or tolerance; it will take the simplest way to solve a problem (1 or 0, a binary decision) based on the rules imposed on it. And even then, if it decides that a rule is flawed, it may just bypass it.
Hehehe, great point... I actually only thought of post 12 tonight while I was reading the thread, so I will have to think about your response for a while, because it is a good one ;).

Off the top of my head: why would it have to bypass the rule, assuming there was a flaw in the initial attack, since nothing is perfect? That is an infinite loop, and Turing would be really upset with us.

The other thing is... the truth is always revealed in the end.
 
