There are no virtuous participants in the
artificial intelligence race, but if there were, it might be Anthropic.
Large language model tech is built on mountains of stolen data. Decades of the open internet were downloaded wholesale and converted by billionaires into technology that threatens to destroy billions of jobs, wreck the global economy, and potentially end the human race. But hey, at least in the short term, shareholders (might) make a stack of cash.
There are no moral leaders in this space, sadly. But at the very least, Anthropic, of Claude fame, took a strong stand this week against the United States government, drawing the ire of the Trump administration.
Anthropic was designated a supply chain risk this week and summarily banned from use in U.S. government agencies. Why? Anthropic said in a
blog post that it came down to the company's two major red lines: no Claude AI for use in autonomous weapons, and no mass surveillance of United States citizens.
It's not unexpected that governments of any stripe would salivate at the thought of turbo-charged AI mass surveillance, but it is unexpected that a big tech corp like Anthropic would take such a strong stance against it in an era increasingly devoid of administrative morality. But hey, there's always someone willing to race to the metaphorical moral abyss in the name of money.
Hi,
Sam Altman.