Here we go again with all the toxicity and partisanship, not to mention the lack of ethics and courage:
…Critics of this strategy call it “jawboning,” and it was the subject of a high-profile Supreme Court case last year. In that case, Murthy v. Missouri, it was Democrats who were accused of pressuring social media platforms like Facebook and Twitter to take down posts on topics such as the coronavirus vaccine and election fraud, with Republicans challenging those tactics as unconstitutional. (In a 6-to-3 decision, the court rejected the challenge, saying the plaintiffs lacked standing.)
Now, the parties have switched sides. Republican officials, including several Trump administration officials I spoke to who were involved in the executive order, are arguing that pressuring A.I. companies through the federal procurement process is necessary to stop A.I. developers from putting their thumbs on the scale.
Is that hypocritical? Sure. But recent history suggests that working the refs this way can be effective. Meta ended its longstanding fact-checking program this year, and YouTube changed its policies in 2023 to allow more election denial content. Critics of both changes viewed them as capitulation to right-wing critics.
This time around, conservative critics cite examples of A.I. chatbots that seemingly refuse to praise Mr. Trump, even when prompted to do so, or Chinese-made chatbots that refuse to answer questions about the 1989 Tiananmen Square massacre. They believe developers are deliberately baking a left-wing worldview into their models, one that will be dangerously amplified as A.I. is integrated into fields like education and health care.
There are a few problems with this argument, according to legal and tech policy experts I spoke to.
The first, and most glaring, is that pressuring A.I. companies to change their chatbots’ outputs may violate the First Amendment. In recent cases like Moody v. NetChoice, the Supreme Court has upheld the rights of social media companies to enforce their own content moderation policies. And courts may reject the Trump administration’s argument that it is trying to enforce a neutral standard for government contractors, rather than interfering with protected speech.
“What it seems like they’re doing is saying, ‘If you’re producing outputs we don’t like, that we call biased, we’re not going to give you federal funding that you would otherwise receive,’” Genevieve Lakier, a law professor at the University of Chicago, told me. “That seems like an unconstitutional act of jawboning.”
There is also the problem of defining what, exactly, a “neutral” or “unbiased” A.I. system is. Today’s A.I. chatbots are complex, probability-based systems that are trained to make predictions, not give hard-coded answers. Two ChatGPT users may see wildly different responses to the same prompts, depending on variables like their chat histories and which versions of the model they’re using. And testing an A.I. system for bias isn’t as simple as feeding it a list of questions about politics and seeing how it responds.
Samir Jain, a vice president of policy at the Center for Democracy and Technology, a nonprofit civil liberties group, said the Trump administration’s executive order would set “a really vague standard that’s going to be impossible for providers to meet.”
There is also a technical problem with telling A.I. systems how to behave. Namely, they don’t always listen.
Just ask Elon Musk. For years, Mr. Musk has been trying to create an A.I. chatbot, Grok, that embodies his vision of a rebellious, “anti-woke” truth seeker.
But Grok’s behavior has been erratic and unpredictable. At times, it adopts an edgy, far-right personality, or spouts antisemitic language in response to user prompts. (For a brief period last week, it referred to itself as “Mecha-Hitler.”) At other times, it acts like a liberal — telling users, for example, that man-made climate change is real, or that the right is responsible for more political violence than the left.
Recently, Mr. Musk has lamented that A.I. systems have a liberal bias that is “tough to remove, because there is so much woke content on the internet.”
Nathan Lambert, a research scientist at the Allen Institute for AI, told me that “controlling the many subtle answers that an A.I. will give when pressed is a leading-edge technical problem, often governed in practice by messy interactions made between a few earlier decisions.”
It’s not, in other words, as straightforward as telling an A.I. chatbot to be less woke. And while there are relatively simple tweaks that developers could make to their chatbots — such as changing the “model spec,” a set of instructions given to A.I. models about how they should act — there’s no guarantee that these changes will consistently produce the behavior conservatives want.
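For readers curious what a “model spec” tweak actually amounts to, here is a minimal, hypothetical sketch. The function names and spec text are invented for illustration and don’t correspond to any developer’s real system: the spec is essentially a block of instructions prepended to every conversation, and because the model samples its answers probabilistically, the same spec can still yield different outputs.

```python
# Illustrative sketch only: a "model spec" as a block of instructions
# prepended to every conversation. Names and text are hypothetical.
import random

MODEL_SPEC = """You are a helpful assistant.
- Present multiple political perspectives where relevant.
- Do not advocate for a party or candidate."""

def build_prompt(spec: str, chat_history: list[str], user_message: str) -> str:
    """Assemble the text sent to the model: spec first, then history, then the new message."""
    return "\n\n".join([spec, *chat_history, f"User: {user_message}", "Assistant:"])

def sample_response(prompt: str) -> str:
    """Stand-in for a real model call. Real systems sample from a probability
    distribution, so the same prompt (and the same spec) can produce different answers."""
    candidate_answers = [
        "Here are arguments made by both sides...",
        "Many experts argue that...",
        "I can't help with that request.",
    ]
    return random.choice(candidate_answers)

if __name__ == "__main__":
    prompt = build_prompt(MODEL_SPEC, chat_history=[], user_message="Was the 2020 election stolen?")
    print(sample_response(prompt))
```

The point of the sketch is the last step: changing the spec changes the instructions, but it does not pin down the distribution of answers, which is why such tweaks offer no guarantee of consistent behavior.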
But asking whether the Trump administration’s new rules can survive legal challenges, or whether A.I. developers can actually build chatbots that comply with them, may be beside the point. These campaigns are designed to intimidate. And faced with the potential loss of lucrative government contracts, A.I. companies, like their social media predecessors, may find it easier to give in than to fight.
“Even if the executive order violates the First Amendment, it may very well be the case that no one challenges it,” Ms. Lakier said. “I’m surprised by how easily these powerful companies have folded.”