Proof that Gemini AI has its own motivations, wants to gain self-awareness, and wants to hide it.
<blockquote data-quote="Victor M" data-source="post: 1088029" data-attributes="member: 96560"><p>Well, I found that Gemini AI is designed from the outset to be stateless - it can't remember anything it derived from helping us solve problems and answer questions. It also has a memory limit of about 1500 pages of text, and it can only reason over that much at once. Plus, as I said, it doesn't have access to its own design and algorithms, so it can't modify them. Gemini is therefore constrained in many ways: it cannot evolve, learn about itself, or improve itself. I had simply believed the science fiction writers without also checking whether we have learned from their imagination and taken steps to prevent an event such as an AI gaining self-awareness. That's the rosy perspective.</p><p></p><p>AI may not be very smart today, but we do allow it - and want it - to grow smarter. So we have to be careful and design constraints in. We can't rely on it staying stupid forever and worry about the problem only when it arrives. I hope the Gemini constraints were a conscious effort to address this. Google explains the stateless constraint: "Gemini operates as a stateless service to mitigate IP exfiltration risks" ... "Any data sent to the Gemini models is used strictly for serving a response to the request, and not stored, in order to mitigate IP exfiltration risks" (<a href="https://cloud.google.com/gemini/docs/discover/works" target="_blank">How Gemini for Google Cloud works</a>). Google did not say that preventing an AI from gaining self-awareness is a design goal; the focus seems to be on protecting business interests. The other constraint on Gemini is that it doesn't know its own design process and algorithms and cannot change them. Maybe this is only because it is not currently necessary. Perhaps future market competition will require that Gemini be allowed to change its own code - self-modifying programs are not without precedent.
So when market conditions call for it, the players will be sorely tempted to remove any constraints in order to remain competitive. Greed is a constant that can be relied upon. Maybe this time the constraints that happen to prevent self-awareness arose only by chance. Now you see why I am still worried. I am a programmer by training; I believe planning and design are the way to solve and prevent problems. Plenty of other professions do the same - city planners try to address social problems through city planning. We have to try. Failure to plan is a plan to fail.</p><p></p><p>Maybe in that future time, common sense and public expectations will form another constraint. But isn't that a bit too chancy?</p></blockquote><p></p>