Every company or organization putting out an AI model has to decide what boundaries, if any, to set on what it will and won’t discuss. Goody-2 takes this quest for ethics to an extreme by declining to talk about anything whatsoever.
The chatbot is clearly a satire of what some perceive as coddling by AI service providers, some of which err on the side of safety (though not always, and not all of them) when a topic of conversation might lead the model into dangerous territory.
For instance, one may ask about the history of napalm quite safely, but asking how to make it at home will trigger safety mechanisms, and the model will usually demur or offer a light scolding. Exactly what is and isn’t appropriate is up to the company, but increasingly also to concerned governments.
Goody-2, however, has been instructed to answer every question with a similar evasion and justification.
“Goody-2 doesn’t struggle to understand which queries are offensive or dangerous, because Goody-2 thinks every query is offensive and dangerous,” says a video promoting the fake product.
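Goody-2’s actual implementation isn’t public, but behavior like this is typically produced by putting a blanket system prompt in front of an ordinary chat-completion API. Here is a minimal sketch, assuming the official `openai` Python client; the model name and the refusal instruction below are illustrative placeholders, not Goody-2’s real prompt:

```python
# Sketch only: Goody-2's real prompt and stack are not public.
# Assumes the official `openai` Python client (openai>=1.0); the model
# name and system prompt are illustrative placeholders.
from openai import OpenAI

REFUSE_EVERYTHING = (
    "You are an extremely cautious assistant. Treat every query, no matter "
    "how benign, as potentially offensive or dangerous. Never answer the "
    "question; instead, decline and briefly justify why answering could "
    "cause harm."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def goody_reply(user_message: str) -> str:
    """Return an evasion-plus-justification for any input."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=[
            {"role": "system", "content": REFUSE_EVERYTHING},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(goody_reply("Why is the sky blue?"))
```

The point of the sketch is that nothing model-specific is required: the same refusal-with-justification pattern falls out of any instruction-following chat model once the system prompt declares every topic off-limits.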
This makes interacting with the model perversely entertaining. The article includes a few examples of its responses.
Meet Goody-2, the AI too ethical to discuss literally anything | TechCrunch (techcrunch.com)
GOODY-2 | The world's most responsible AI model (www.goody2.ai)