Can artificial intelligence be trained to seek—and speak—only the truth? The idea seems enticing, seductive even. Earlier this spring, billionaire business magnate Elon Musk announced that he intends to create “TruthGPT,” an AI chatbot designed to rival GPT-4 not just commercially, but in the domain of distilling and presenting only “truth.” A few days later, Musk purchased roughly 10,000 GPUs, likely to begin building what he called a “maximum truth-seeking AI” through his new company, X.AI.
This ambition introduces yet another vexing facet of trying to foretell—and direct—the future of AI: Can, or should, chatbots have a monopoly on truth?