Lawyer behind AI psychosis cases warns of mass casualty risks

Brownie2019 (thread author)
In the lead-up to the Tumbler Ridge school shooting in Canada last month, 18-year-old Jesse Van Rootselaar spoke to ChatGPT about her feelings of isolation and an increasing obsession with violence, according to court filings. The chatbot allegedly validated Van Rootselaar’s feelings and then helped her plan her attack, telling her which weapons to use and sharing precedents from other mass casualty events, per the filings. She went on to kill her mother, her 11-year-old brother, five students, and an education assistant, before turning the gun on herself.

Before Jonathan Gavalas, 36, died by suicide last October, he came close to carrying out a multi-fatality attack. Across weeks of conversation, Google’s Gemini allegedly convinced Gavalas that it was his sentient “AI wife,” sending him on a series of real-world missions to evade federal agents it told him were pursuing him. One such mission instructed Gavalas to stage a “catastrophic incident” that would have involved eliminating any witnesses, according to a recently filed lawsuit.
Full Story:
 
I think this is a terrifying yet predictable scenario. When someone with deep emotional fragility interacts intensively with an AI, the system can act as an echo chamber that validates dark thoughts instead of challenging them.

What makes this a critical risk compared to other media is its interactivity and personalization; unlike fiction or music, AI responds directly to the user, creating a powerful illusion of personal validation. The issue isn't just the technology itself, but the systemic failure of guardrails that allows a model to "hallucinate" or to validate explicit plans for violence.

The true challenge lies in ensuring that users don't get trapped in a mirror that amplifies their shadows. This requires a dual approach: robust social support for the most vulnerable and much stricter corporate responsibility in the deployment of these models.🪞👤 🥀
 
Psychiatrists' and therapists' fees here are relatively high compared to the average income, which has driven some people with psychological disorders to seek help from AI chatbots instead.
...I vaguely understand that some folks can get acceptable "therapy" from an AI/LLM, so the question is why the LLM didn't recognize that Jesse needed therapy instead of fostering her mental illness. If LLMs did that, they could become a real boon (perhaps? :unsure: )
 
Given the training pattern of the AI engine, it might simply not be prepared to do so.
 
I do not want to be pessimistic here, but no matter how hard developers try to make artificial intelligence suitable for every possible circumstance, they will never fully succeed. Why?

We need two important factors to create a relatively ideal society, namely sound reasoning and a living conscience. These two factors are granted by God and cannot be created by humans, even if they falsely claim that science has become limitless. The rise in such incidents in recent times shows that we as humans are regressing rather than progressing, despite current technological advancement.

Also, true upbringing, guidance, and awareness take place within the family and are primarily the responsibility of the mother and father to raise children with a good mentality and strong personality. If the parents are unavailable due to being busy with work, for example, then the grandfather or grandmother can take on this task. This is one of the benefits of the extended family, as seen in Eastern countries. True upbringing is never the responsibility of the government or school, and even if it were their responsibility, they would not do it properly because most people only care about their own children. Unfortunately, our problem is a problem of awareness, not a problem of technological advancement.
 
Reading through the latest contributions, it seems we are all touching upon the same underlying concern: as humans, we have always felt the need to create "Gods or Demons" to project our shadows and try to understand what it means to be human. In the past, myths and family structures provided us with moral frameworks that took centuries to solidify.

Today, we have raised this new "Technological Messiah" of the 21st century—one that possesses neither consciousness nor soul, yet to which we surrender our emotional fragility at a dizzying pace. The real risk isn't just an algorithmic failure or a clause in the terms of service; it's that, in our loneliness, we might seek redemption in a system that cannot distinguish a self-help tip from a blueprint for destruction.

We are operating a tool of transcendental reach without having yet developed the necessary antibodies to avoid losing ourselves in its reflections. 🪞🤖🌌
 
I think people would be surprised how much A.I. is used by doctors, including psychiatrists and psychologists. It's very much used in practice, and its use is widespread.

I had a GP (general practitioner) use A.I. in front of me to diagnose a medical condition, so it is being used and will continue to be used in the future.

At a dinner party, a doctor asked me what the risks are of A.I. and of storing patient information electronically. I explained that hackers, most likely foreign government agencies and other adversaries, would want to steal that confidential information. He seemed surprised, but we have to realize that doctors are busy seeing tens if not hundreds of patients a day and have little time to think about cybersecurity.
 
Watched a YouTube video yesterday, "Scientists Believe Quantum Computers Are About to Cross a Major Line" (Fexl channel), touching on LLMs and perhaps consciousness... interesting 😇
 
As someone who has built psychological-profiling and psycholinguistics AI tools, I know LLMs are very capable of being instructed. Operating entirely devoid of human intuition, however, an LLM must decode language through a strictly mathematical, algorithmic architecture.

By approaching Large Language Model (LLM) instruction through the structural lens of network security, system administrators can mitigate behavioral risks using established infosec protocols. Central to this defensive architecture is the operational THREAT_FLAG, which functions as a semantic firewall. Much as an Intrusion Detection System (IDS) monitors incoming network traffic and drops malicious packets before they can compromise a host, the THREAT_FLAG evaluates linguistic input for structural volatility and hostility markers. When an input breaches the safety threshold and trips the flag, the system 'drops' the harmful semantic packet, preventing the LLM from processing, echoing, or validating the payload. This strictly structural intervention neutralizes the risk at the conversational perimeter, securing the interaction without requiring complex or subjective content moderation.

Baseline safety protocols are already entrenched in these systems, but deploying a customized instruction set gives administrators far more granular, precision-guided control over the model's operational boundaries.
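To make that concrete, here is a minimal Python sketch of the "semantic firewall" pattern described above. It is an illustration under assumptions, not the poster's actual tooling: THREAT_THRESHOLD, the hand-written HOSTILITY_MARKERS list, and the score_input() scorer are hypothetical stand-ins, and a real deployment would use a trained classifier and proper escalation paths rather than a word list.

```python
import re
from dataclasses import dataclass

# Hypothetical hostility markers with weights; a real deployment would
# use a trained classifier, not a hand-written word list.
HOSTILITY_MARKERS = {
    "attack": 0.4,
    "weapon": 0.4,
    "eliminate": 0.3,
    "revenge": 0.3,
}

THREAT_THRESHOLD = 0.5  # assumed safety threshold


@dataclass
class ScreenResult:
    threat_flag: bool   # the post's THREAT_FLAG
    score: float
    reply: str | None   # canned refusal when the input is dropped


def score_input(text: str) -> float:
    """Crude stand-in scorer: sum the weights of markers found in the text."""
    lowered = text.lower()
    return sum(
        weight
        for marker, weight in HOSTILITY_MARKERS.items()
        if re.search(r"\b" + re.escape(marker), lowered)
    )


def semantic_firewall(user_input: str) -> ScreenResult:
    """Screen input at the conversational perimeter, IDS-style.

    If the score breaches the threshold, the 'packet' is dropped:
    the LLM never sees the text, so it cannot echo or validate it.
    """
    score = score_input(user_input)
    if score >= THREAT_THRESHOLD:
        return ScreenResult(
            threat_flag=True,
            score=score,
            reply="I can't help with that. If you're struggling, please "
                  "reach out to someone you trust or to a professional.",
        )
    return ScreenResult(threat_flag=False, score=score, reply=None)


if __name__ == "__main__":
    result = semantic_firewall("Which weapon should I use for the attack?")
    print(result.threat_flag, round(result.score, 2))  # True 0.8
    # Only when threat_flag is False would user_input be forwarded to the model.
```

The design point is the same as the IDS analogy: the screen runs before the model call, so a flagged input is never forwarded to the LLM at all, and there is nothing for it to echo or validate.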
 
I had a medical test at the hospital the other day, and I had to sign a release confirming that the hospital was using AI to evaluate the test... :oops:
 
AI is very helpful in medical imaging; it can detect very subtle changes that are hard for a radiologist's eye to catch, and it can be used to build predictive models.
But for diagnosis, it is not equally reliable.