2029: the year when robots will have the power to outsmart their makers

tim one

Level 21
Verified
Honorary Member
Top Poster
Malware Hunter
Jul 31, 2014
1,086
Not really on-topic, but Garry Kasparov (in the picture in the first post), one of the greatest chess grandmasters of all time, has been working with Avast as its ambassador. :cool:
He is a great example of human intelligence. I don't remember exactly, but I believe he won some games against Deep Blue, amazing :)
 

Deleted member 65228

Agreed, maybe we should really worry about when a machine will say "I'm not a robot".
I'm not a robot. :)

In chess, machines are difficult to beat. A teacher once said that in order to beat a machine you had to give up pieces; that lowered the machine's level of play, and then it was possible to beat it.
I can beat any machine at chess. All I have to do is simply unplug it, and now I win. Hahaha. Winner! :D
 

Deleted member 65228

They need to make a fail-safe with these machines! Like a logic meltdown, something as simple as "What has a tongue but no mouth?", which in turn would cause the robot to malfunction and shut down...
You could always just build a normal kill switch: a button on the machine which powers it down, or a special device which sends a signal to it and triggers the shutdown.

Thing is, if they are connected to a network then they can be hacked remotely too, so an attacker could hijack the kill switch to do something bad for them :/ (or do it manually if there's no network to intercept!)
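
Just to illustrate the hijacking point: a remote kill switch that accepts any shutdown packet is exactly the kind of thing an attacker would abuse, so at minimum the command should be authenticated. Here's a rough sketch in Python, purely illustrative (the UDP listener, port number and pre-shared key are all my own assumptions, not from any real robot):

Code:
import hmac
import hashlib
import socket

# Hypothetical pre-shared key; in practice it would be provisioned securely,
# not hard-coded like this.
SHARED_KEY = b"replace-with-a-real-secret"

def verify_shutdown_command(payload: bytes, signature: bytes) -> bool:
    """Check that the shutdown command was signed with the shared key."""
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, signature)

def listen_for_shutdown(port: int = 9999) -> None:
    """Wait for a signed 'SHUTDOWN' datagram and power down only if it verifies."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, addr = sock.recvfrom(1024)
        if len(data) < 32:
            continue  # too short to even contain a SHA-256 HMAC
        payload, signature = data[:-32], data[-32:]
        if payload == b"SHUTDOWN" and verify_shutdown_command(payload, signature):
            print(f"Authenticated shutdown command from {addr}, powering down.")
            break  # a real machine would cut power / stop actuators here
        print(f"Rejected command from {addr} (bad payload or signature).")

if __name__ == "__main__":
    listen_for_shutdown()

Of course a signed command only stops casual spoofing; it does nothing if the key leaks or the attacker has physical access, which is the "do it manually" case above.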
 

ElectricSheep

Level 14
Verified
Top Poster
Well-known
Aug 31, 2014
655
Short article on Asimov's 3 Laws of Robotics

How do you stop a robot from hurting people? Many existing robots, such as those assembling cars in factories, shut down immediately when a human comes near. But this quick fix wouldn’t work for something like a self-driving car that might have to move to avoid a collision, or a care robot that might need to catch an old person if they fall. With robots set to become our servants, companions and co-workers, we need to deal with the increasingly complex situations this will create and the ethical and safety questions this will raise.

Asimov's Laws Won't Stop Robots from Harming Humans, So We've Developed a Better Solution
 

mlnevese

Level 26
Verified
Top Poster
Well-known
May 3, 2015
1,531
The three laws are not perfect... even Asimov said so in his stories. For instance, how do you make a surgeon robot that follows the three laws? It can't harm a human being, so it can't perform surgery. If it performs surgery it has to harm a human being; if it doesn't, it harms a human being by not acting... a robot surgeon that follows the three laws would probably melt its positronic brain within minutes of first being activated :)
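
You can see the deadlock even in a toy rule checker. This is just a sketch of the point above, nothing Asimov actually specified; the scenario flags are made up for illustration:

Code:
# Toy First Law check: "a robot may not injure a human being or, through
# inaction, allow a human being to come to harm."
def first_law_permits(causes_harm: bool, allows_harm_by_inaction: bool) -> bool:
    return not causes_harm and not allows_harm_by_inaction

# The surgeon's dilemma: cutting the patient is harm, while refusing to
# operate lets the illness harm the patient instead.
options = {
    "perform surgery": {"causes_harm": True, "allows_harm_by_inaction": False},
    "do nothing": {"causes_harm": False, "allows_harm_by_inaction": True},
}

permitted = [name for name, o in options.items()
             if first_law_permits(o["causes_harm"], o["allows_harm_by_inaction"])]

print(permitted)  # [] -- no permissible action, so the rule set deadlocks here

Every option violates the First Law one way or the other, which is more or less what "melting its positronic brain" amounts to.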
 

gorblimey

Level 2
Verified
Aug 30, 2017
99
Is it just my warped mind, or do I see a common thread here?

But AI is not a mush.

It is increasingly powered by algorithms whose purpose is to find structures and correlations in a sea of data, using tricks partly inspired by biological intelligence: code that talks to code, data packets that travel along optimal paths, software that talks to the hardware. Superimposed on this ecosystem is the human mind, which tends, raises and feeds the flow of information. And increasingly, our own interactions will drive deep changes in this sea of data.

Could something similar to a strong artificial intelligence come out of all this? I don't know. But it is a situation that has never existed in the four billion years of life on this planet. And this brings us back to the question of a possible threat from artificial intelligence.

Why make a computer more intelligent than us when we can make ourselves more intelligent?

Let's say for a minute that an AI reaches a state of self-consciousness; the only thing that will differ from us will be morality.

A machine doesn't have any morals, pity, or tolerance; it will take the simplest way to solve a problem (1 or 0, a binary decision) based on the rules imposed on it. And even then, if it decides that a rule is flawed, it may just bypass it.

It seems to me that we're simply describing an existing AI... humanity. Don't worry about the Rise of the Machines, we're already here :alien:
 

klaken

Level 3
Verified
Well-known
Oct 11, 2014
112
Hahaha, 100 years ago people expected that by now we would have flying cars with cardboard wings.

I also do not understand the desire to make computers think like us.

1- Why would computers ever come to reason or become self-aware at all?
2- Evolution, like artificial intelligence, is based on positive stimuli, and for that to work you have to be able to fail (something unacceptable for a machine).
3- Evolution is based on randomness and therefore has a sea of possibilities. Randomness is not something useful to a machine.

Maybe computers will invade us in 1,000 years, not in 10.
 

vtqhtr413

Level 26
Verified
Top Poster
Well-known
Aug 17, 2017
1,448
The fact is that if AI were a disorganized mush of stuff... like loose pieces in a box, I think there would be little hope of seeing anything interesting happen.

But AI is not a mush.

It is increasingly powered by algorithms whose purpose is to find structures and correlations in a sea of data, using tricks partly inspired by biological intelligence: code that talks to code, data packets that travel along optimal paths, software that talks to the hardware. Superimposed on this ecosystem is the human mind, which tends, raises and feeds the flow of information. And increasingly, our own interactions will drive deep changes in this sea of data.

Could something similar to a strong artificial intelligence come out of all this? I don't know. But it is a situation that has never existed in the four billion years of life on this planet. And this brings us back to the question of a possible threat from artificial intelligence.

If this is the way in which we create a strong artificial intelligence, the immediate danger simply comes from the fact that humanity today relies on the Internet ecosystem: not only for how we communicate or find information, but for how our life is organized, from food supplies, planes, trains and cargo ships to our financial systems, everything.

A strong AI could be really devastating if we imagine an evolutionary path that could randomly generate a perceptive artificial consciousness.
We know that our own evolution is driven by needs but also by coincidences; the same could happen to AI, and we certainly cannot exclude that.
Here's another one, just so exceptional and brilliant; it literally gives a wondering person hope, thank you tim one.
 
