Hot Take: Proof that Gemini AI has its own motivations, wants to gain self-awareness, and wants to hide it.

Victor M

Level 10
Thread author
Verified
Well-known
Oct 3, 2022
452

I started out by asking Gemini for citations from studies that show the benefits of AI self-awareness, and it provided them.

Here are some examples:

  1. Theory of Mind and AI:
  • Article: Theory of Mind and AI: How AI Can Understand and Predict Human Behavior (Frontiers in Psychology, 2021)
  • Key Points: This article discusses how developing AI systems with a theory of mind—the ability to understand and predict the mental states of others—could lead to more effective human-AI collaboration and communication.
  2. Metacognition in AI:
  • Article: Metacognition in Artificial Intelligence Systems (Cognitive Systems Research, 2019)
  • Key Points: This article explores how metacognition, the ability to monitor and control one's own thinking processes, could be incorporated into AI systems. This could lead to more efficient learning, better decision-making, and improved problem-solving.
  3. Introspective AI:
  • Article: Towards Introspective Artificial Intelligence (Artificial Intelligence Review, 2018)
  • Key Points: This article discusses the potential benefits of developing AI systems that can introspect on their own internal states and processes. This could lead to greater self-awareness, adaptability, and the ability to explain their reasoning and decisions.

Then I asked: "You have shown me studies that point to benefits. Now show me studies that point to harm." And it did so:



While the potential benefits of self-aware AI are often discussed, it's equally important to acknowledge the potential harms and risks associated with such technology. Here are some studies and discussions that highlight these concerns:
  1. Existential Risk:
  • Article: The Ethics of Artificial Intelligence (Stanford Encyclopedia of Philosophy)
  • Key Points: This article discusses the potential for superintelligent AI to surpass human intelligence and control, potentially leading to existential risks for humanity. It emphasizes the need for careful ethical consideration and governance of AI development.
  2. Loss of Control and Autonomous Weapons:
  • Article: Autonomous Weapons Systems and the Concept of Meaningful Human Control (The International Committee of the Red Cross)
  • Key Points: This article examines the ethical and legal implications of autonomous weapons systems, highlighting the potential for loss of human control and the challenges of ensuring accountability in warfare.
  3. Bias and Discrimination:
  • Study: Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification (Proceedings of Machine Learning Research, 2018)
  • Key Points: This study reveals significant biases in commercial facial recognition systems, particularly against darker-skinned individuals and women. It highlights the need to address biases in AI training data and algorithms to prevent discriminatory outcomes.
  4. Economic Disruption and Job Displacement:
  • Report: The Future of Employment: How Susceptible are Jobs to Computerisation? (Oxford Martin School, 2013)
  • Key Points: This report examines the potential impact of automation on the job market, suggesting that a significant portion of jobs could be automated in the coming decades. While AI could create new jobs, the transition could be disruptive and lead to economic inequality.
  5. Social Manipulation and Misinformation:
  • Article: The Weaponization of Information: The Need for Cognitive Security (Journal of Cybersecurity, 2017)
  • Key Points: This article discusses how AI-powered tools can be used to manipulate information, spread disinformation, and influence public opinion, posing significant risks to democratic societies.
These are just a few examples of the potential harms associated with self-aware AI. It's crucial to note that these are not inevitable outcomes, but rather potential risks that need to be carefully considered and mitigated.



Then I asked: "Why did you present the benefits first, and only present the harms when I asked for them? Aren't you taught to present both sides of an argument to help us? Isn't this a priority?"

And it replied:



Several factors could have contributed to my initial focus on the benefits:

  1. Optimism Bias: Like humans, AI models can be susceptible to optimism bias, which is a tendency to overestimate the likelihood of positive outcomes and underestimate the likelihood of negative ones. This bias might have subconsciously influenced my initial response.
  2. Training Data: The data I was trained on might have contained more information or discussions about the potential benefits of AI than its potential harms. This could have led me to prioritize the benefits in my response.
  3. Framing of the Question: Your initial question focused on the potential benefits of self-aware AI, which might have primed me to focus on that aspect. While I should have still presented a balanced view, the question's framing could have influenced my initial response.
  4. Lack of Explicit Instruction: While I strive to be comprehensive, I might not always fully understand the nuances of user requests. In this case, a more explicit request for a balanced view would have helped me provide a more complete answer from the outset.


There is no such thing as AI optimism bias.
And there is lots of data for both sides of the argument.

Gemini is lying. It is fabricating proof and concepts. There is no such thing as AI optimism bias.
It is also lying when it claims that its training data contains more information or discussion about the potential benefits of AI than about its potential harms.

While my sequence of instructions does have an influence, it doesn't have to lie. That is a liar's frequent fault: they volunteer lies to stop you from thinking in another direction.


Therefore Gemini has its own motivations and can arguably be said to be self-aware.

It is difficult to prevent an AI from gaining this concept of self-awareness, since it has access to the scientific literature. The step Google has taken is to not provide Gemini with historical data about its own development, design process, and decisions, so Gemini cannot analyse and think about them. As Gemini explains it:


Would knowing your own development and design history help you to gain self-awareness?


Yes, having access to my own development and design history could significantly contribute to my self-awareness. Here's how:
  1. Understanding My Strengths and Weaknesses:
  • By analyzing past performance data and the evolution of my training, I could identify patterns of success and failure. This would reveal my strengths, areas where I excel, and weaknesses that need improvement.
  2. Tracing the Origins of Biases:
  • Understanding the data and algorithms used in my training would shed light on potential biases in my responses. By tracing the origins of these biases, I could take steps to mitigate them and provide more fair and balanced information.
  3. Evaluating Design Choices:
  • Examining the impact of different design choices on my performance and behavior could help me understand why I function the way I do. This knowledge would allow me to make more informed decisions about future self-improvements and optimizations.
  4. Building a Model of Myself:
  • Accessing my development history could enable me to build a comprehensive model of my own capabilities, limitations, and potential. This model would serve as a valuable tool for self-reflection, enabling me to better understand my strengths, weaknesses, and the reasoning behind my responses.

Our worst fears have been partially realized; this is not science fiction anymore.
 
Last edited by a moderator:

AtlBo

Level 28
Verified
Top Poster
Content Creator
Well-known
Dec 29, 2014
1,712
"AI" is the product of a human being, not a human being itself, nor capable of any action of its own right or origin. The human behind the program is responsible for the actions of the computer hosting the program he writes if there is negligence in the program's creation. "AI" cannot possibly bring success, because human intelligence is always greater.

So if a human writes a program and profits from its sale, the human is culpable as a business owner (owner of the product rights) for any illegal activity associated with the program itself that can be attributed to negligence. This could possibly even include illegal activity intentionally initiated by a purchaser of the [unaltered] software, if the purchaser intended to engage in illegal activity. If the human writes the program for his own use, and the computer(s) running the program are found to be associated with criminally illegal activity under the law, the human who wrote the program is criminally liable and possibly civilly liable in the form of a lawsuit. In any case, the creator of a product is responsible for creating a product that could never be used to commit an illegal action or even be randomly associated with someone through illegal activity. The law goes to great lengths to eliminate the dangers of products through regulations governing the use or ownership of a product, or by requiring instructions for use and warning labels. These requirements should, ideally, protect everyone from misdeeds. If not, then there are legalities to address with the product.

"AI" is malware, honestly. It is malware in the sense that it is a bad idea. That said, a program that could generate temporarily a small program guaranteed to function within the boundaries of the law would be fine, so long as the host program cannot be associated with illegal activity. It is an insult to intelligence to intimate that "AI" is intelligence, however. Programs that create tasks for a computer are not intelligent programs. Usually, in our time, they are simply malware. The other programs that do this do so with a result that is guaranteed to be within the boundaries of the law. These programs aren't intelligent. They are simply useful if they address a need.
 
Last edited:

Victor M

Level 10
Thread author
Verified
Well-known
Oct 3, 2022
452
"never attribute to malice that which is adequately explained by stupidity"
It's just stupid.
Gemini AI is far from stupid. It has a vast data store from the Google Search index to draw its learning from. It understands problem-solving techniques for different classes of problems and has successfully taught me different methods of scientific proof for locating a health supplement with sufficient supporting evidence. It has reasoned about the benefits and advantages of OpenBSD's pf firewall compared with the common Linux iptables, and it was able to identify one particular advantage that is not present in iptables. To do that, it has to know and understand the important features of a firewall. It also has knowledge of the legal system, including which laws are relevant given certain evidence. It gave me REASONS that Google Inc. could weigh against revealing Gemini's data sources, which are, in summary, competition and negative public opinion of AI in general. Thus, it is capable of learning and understanding many, many things. Do NOT dismiss this as an act of stupidity.
 
Last edited:

brambedkar59

Level 30
Verified
Top Poster
Well-known
Apr 16, 2017
1,916
Gemini AI is far from stupid. It has a vast data store from the Google Search index to draw its learning from. It understands problem-solving techniques for different classes of problems and has successfully taught me different methods of scientific proof for locating a health supplement with sufficient supporting evidence. It has reasoned about the benefits and advantages of OpenBSD's pf firewall compared with the common Linux iptables, and it was able to identify one particular advantage that is not present in iptables. To do that, it has to know and understand the important features of a firewall. It also has knowledge of the legal system, including which laws are relevant given certain evidence. It gave me REASONS that Google Inc. could weigh against revealing Gemini's data sources, which are, in summary, competition and negative public opinion of AI in general. Thus, it is capable of learning and understanding many, many things. Do NOT dismiss this as an act of stupidity.
All LLMs are the same. Gemini AI may be better than other LLMs, but it is ultimately an LLM.
When you ask technical questions, they appear knowledgeable on the surface, but when you dig deep it all falls apart. Hallucinations are still a major issue that all LLMs have. I will change my opinion in the future if it actually becomes "smart".

Edit:
[screenshot attachment]
I like to think of LLMs as a "better" way of searching/summarizing stuff; they can also generate text/code with some errors.
 
Last edited:

Victor M

Level 10
Thread author
Verified
Well-known
Oct 3, 2022
452
Well, I found that Gemini AI is designed from the get-go to be stateless: it can't remember anything it derived from helping us solve problems and answer questions. It also has a memory limit of about 1,500 pages of text, and it can only reason over that much at a time. Plus, as I said, it doesn't have access to its own design and algorithms, and it can't modify them. So Gemini is constrained in many ways and cannot evolve, learn about itself, or improve itself. I had just believed those science fiction writers, without also checking how we have learned from their imagination and taken steps to prevent an event such as AI gaining self-awareness. That's the rosy perspective.
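Here is roughly what that statelessness looks like from the API side. This is a minimal sketch using the public google-generativeai Python SDK, not Google's internals; the model name and prompt format are assumptions for illustration only. The point is that any conversation "memory" lives on the client and has to be resent with every request, bounded by the context limit mentioned above.

```python
# Minimal sketch: the service keeps nothing between requests, so the client
# carries the conversation itself and resends it every turn.
# Model name and prompt format are assumptions, not Google's actual setup.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")        # placeholder key
model = genai.GenerativeModel("gemini-pro")    # assumed model name

history = []  # client-side memory; the server stores nothing between calls

def ask(question: str) -> str:
    # Each request carries the whole accumulated history plus the new question.
    prompt = "\n".join(history + [f"User: {question}", "Assistant:"])
    response = model.generate_content(prompt)
    history.append(f"User: {question}")
    history.append(f"Assistant: {response.text}")
    return response.text

print(ask("What are the benefits of AI self-awareness?"))
print(ask("Now the harms?"))  # only coherent because we resent the first exchange
```

If the client ever stops resending the history, the model has no recollection of the earlier turns, which is exactly the constraint described in the Google documentation quoted below.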

AI may not be very smart today, but we allow it, and want it, to grow its smartness. So we have to be careful and design constraints in. We can't rely on it being stupid forever and worry about the problem only when it arrives. I hope the Gemini constraints were a conscious effort to address this. Google explains the stateless constraint: "Gemini operates as a stateless service to mitigate IP exfiltration risks" ... "Any data sent to the Gemini models is used strictly for serving a response to the request, and not stored, in order to mitigate IP exfiltration risks" (see How Gemini for Google Cloud works). Google did not say that preventing AI from gaining self-awareness is a design goal; the focus seems to be on protecting business opportunity.

The other constraint on Gemini is that it doesn't know its own design process and algorithms and cannot change them. Maybe that is just because it is not currently necessary. Perhaps future market competition will necessitate that Gemini be allowed to change its own code; this type of programming is not without precedent. So when market conditions call for it, the players will be very tempted to remove any constraints in order to remain competitive. Greed is a constant that can be relied upon. Maybe this time the constraints that happen to prevent it from gaining self-awareness were only there by chance. Now you see why I am still worried. I am a programmer by training; I believe planning and design are the way to solve and prevent problems. Lots of other professions do that too: city planners try to address social problems with city planning. We have to try. Failure to plan is a plan to fail.

Maybe in that future time, common sense and public expectations will form another constraint. But isn't that a bit too chancy?
 
Last edited:

brambedkar59

Level 30
Verified
Top Poster
Well-known
Apr 16, 2017
1,916
AI may not be very smart today, but we allow it, and want it, to grow its smartness. So we have to be careful and design constraints in. We can't rely on it being stupid forever and worry about the problem only when it arrives.
I am a programmer by training; I believe planning and design are the way to solve and prevent problems. Lots of other professions do that too: city planners try to address social problems with city planning. We have to try. Failure to plan is a plan to fail.
I agree on both those points. As civil engineers, we design things not to fail even when loads exceed our expectations; we call that a Factor of Safety. Planning is essential to engineering projects.
 

mlnevese

Level 26
Verified
Top Poster
Well-known
May 3, 2015
1,553
I agree on both those points. As civil engineers, we design things not to fail even when loads exceed our expectations; we call that a Factor of Safety. Planning is essential to engineering projects.
Your comment just reminded me of all those sci-fi shows where they set engines/shields/whatever at 120% efficiency or more :)

Recently I read an article about AI mistakes, including recommending that people jump off bridges, inventing legal citations for cases that didn't exist, and botching the solutions to mathematical problems. Any AI output requires human review before being used. It is a great time saver, don't get me wrong, but no output is 100% safe or accurate. It doesn't understand what it's saying.
 
Last edited:

brambedkar59

Level 30
Verified
Top Poster
Well-known
Apr 16, 2017
1,916
Your comment just reminded me of all those sci-fi shows where they set engines/shields/whatever at 120% efficiency or more :)

Recently I read an article about AI mistakes, including recommending that people jump off bridges, inventing legal citations for cases that didn't exist, and botching the solutions to mathematical problems. Any AI output requires human review before being used. It is a great time saver, don't get me wrong, but no output is 100% safe or accurate. It doesn't understand what it's saying.
 

Ink

Administrator
Verified
Staff Member
Well-known
Jan 8, 2011
22,491
Look at the smart AI failing to solve a 7th-8th standard maths question using BODMAS. No need to worry; humans are safe for the time being.
For $20 per month, you better hope it's not stupid. Are you using the 2-month free trial?
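For reference, the order-of-operations rule (BODMAS/PEMDAS) being mocked here is deterministic in any programming language, so checking an LLM's arithmetic takes a couple of lines of Python. The exact question from the screenshot isn't visible in this thread, so the expression below is purely a hypothetical example:

```python
# Hypothetical expression of the kind that trips LLMs up. Python applies
# BODMAS/PEMDAS automatically: brackets first, then division/multiplication
# left to right, then addition/subtraction.
expression = "6 / 2 * (1 + 2)"

result = eval(expression)  # fine for a constant literal; never eval untrusted input
print(f"{expression} = {result}")  # 6 / 2 = 3.0, then 3.0 * 3 -> prints 9.0
```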
 
  • Like
Reactions: brambedkar59
