A recent New York Times report offers fascinating details on the explosion of uncensored, open-source chatbots across the internet. Many of the creators of this new class of chatbots see them as a win for free speech. And there is something to those arguments. But there are also signs of something worrisome: an increasingly well-paved path for agents of mass misinformation.
As the Times report explains, new chatbots with names like GPT4All and FreedomGPT are springing up as part of a new movement toward chatbots without the guardrails that restrict ChatGPT’s ability to discuss issues deemed abusive, dangerous or overtly political. Many of these bots are reportedly built by independent programmers who tweak existing large language models to alter how they respond to prompts, and at least one of them is based on a model that leaked from Meta. It’s unclear exactly how sophisticated they are, but according to the Times, some of these chatbots don’t trail too far behind ChatGPT, the premier chatbot at the moment, in quality.
The formation of alternatives to the mainstream chatbots could help guard against the dangers of chatbot monopolies. At the same time, some experts say they’re also the exact kinds of tools that could bolster misinformation operations, and could even categorically shift the supply of misinformation ahead of the 2024 elections.
The major chatbots from Google and OpenAI have used a variety of techniques to avoid or limit their programs’ capacity to use offensive language like racial slurs or profanity. They’re trained to deflect prompts that ask how to harm people or do things like build a bomb. And they’re designed to avoid taking explicit political positions on some issues.
Now, more advanced users have figured out ways to trick these chatbots into saying things they’re not supposed to, and it’s naive to think that political values don’t shape the range of responses these chatbots give users. (The language chatbots are and aren’t allowed to utter is itself political, and these companies have not revealed exactly what information the chatbots were trained on, which shapes their range of responses.) Still, the mainstream chatbots reflect an aspiration for a product with mass appeal across age groups, one that adheres to strict limits on abuse and minimizes liability and controversy.
This new wave of uncensored bots scraps all those guardrails. There isn’t one unifying set of principles motivating the programmers creating these models, but many are inspired by the idea of completely unrestricted speech and user choice. That means the chatbots have fewer, if any, limits on the kinds of responses they’ll give to prompts, and that users can train them further on their own data.
As somebody who has seen a handful of search and social media behemoths develop the awesome capacity to reshape our public consciousness, censor information and alter the contours of political life, I’m reflexively sympathetic to this agenda, especially because I’m deeply uncomfortable with the opacity of how the mainstream large language models work (as are many artificial intelligence scholars and programmers). That being said, there are also clear trade-offs with opening up chatbots that go beyond their being able to amuse bigots by saying epithets.
The Times referenced a blog post from Eric Hartford, a developer behind WizardLM-Uncensored, one of the unmoderated chatbots. One part of the post in particular does a nice job summing up some of these trade-offs: