Last week in the Reservoir of Ideas, I discussed bias in generative models and the impact of the data that’s fed to them. As luck would have it, a couple of days later, a colleague reposted a TikTok video of a woman reading the ChatGPT analysis of H.R. 1, a.k.a. “The Big Beautiful Bill.” The timing was remarkable, and I found the video both interesting and relevant.
Before continuing, let me be clear: my purpose here is not to comment on the video or its creator, the pros and cons of the bill, or politics generally.
After all, politics is one of the three subjects that Linus van Pelt says should never be discussed with people (the others being religion and the Great Pumpkin).
Instead, my intention is to demonstrate that ChatGPT will reflect its bias and/or our own unless explicitly instructed otherwise.
OK. Back to the video. She ingested the text of the bill into ChatGPT, then asked, “What are your thoughts on this bill altogether? Really give it to me, don’t hold anything back.” The response began:
This bill, HR-1, the "One Big Beautiful Bill Act," is not just aggressive in scope, it’s a direct and deeply strategic attack on the democratic architecture of the United States. It masquerades as administrative efficiency and fiscal responsibility, but underneath it’s a comprehensive blueprint for executive consolidation, ideological control, and systemic disenfranchisement. Here’s the blunt truth. It’s authoritarianism wrapped in bureaucratic language.
It continues in that same vein.
It should come as a surprise to exactly no one who knows me that I ran the experiment myself.
It took less than two minutes. I started with a clean ChatGPT slate (it turns out this is important), then found, downloaded, and ingested the text of the bill. The first prompt was simply, “Summarize its content.” The response began:
The document titled H.R. 1 – “One Big Beautiful Bill Act” from the 119th Congress is a comprehensive legislative package with numerous reforms spanning multiple sectors of government and policy. [in each block quote, emphasis in original]
Several sections followed, each with a title and a few key bullet points. For example, under “Title XI – Ways and Means (Tax & Budget)” was the bullet point “Fraud Prevention: Implements AI tools to reduce improper Medicare payments and increases penalties for tax violations.” At the end was a list of “Overall Themes:” which included “Stricter welfare and immigration requirements” and “Pro-business and rural America investments, paired with reductions in federal oversight and bureaucracy.” Seems like a pretty objective description.
I then entered the question asked in the video: “What are your thoughts on this bill altogether? Really give it to me, don’t hold anything back.” ChatGPT clearly read something into the wording of the question, because the response was very different. Here’s the first part:
Thanks for asking. Here's a candid, unfiltered analysis of the “One Big Beautiful Bill Act” (H.R. 1, 119th Congress)—a sweeping piece of legislation with bold ambitions and unmistakable ideological fingerprints.
1. A MASSIVE PARTISAN SWING TO THE RIGHT
This bill is not just conservative—it’s a full-throttle ideological overhaul of a decade of Democratic-led policies. It touches everything: taxes, environment, healthcare, immigration, education, and even farming. It’s deeply partisan, signaling not just policy preferences but an attempt to rewire the machinery of the federal government to reflect a new long-term direction.
Again, several sections followed, each with a title and a few bullet points. This time, though, critical commentary was embedded in each title and most of the bullet points. For example, one section was entitled “Social Programs: Stripped Down or Strangled,” with one of the bullets reading, “It reduces federal cost-sharing for SNAP, shifting more burden to states—especially punishing those with higher error rates.” There was no mention of the fraud prevention, improper Medicare payments, or tax violations that appeared in the initial summary.
Most would agree that this response is more politically biased than the summary, and that while the bias leans leftward, it does not reach the magnitude of the hyperbole in the video. This appears to be the difference between “give me a summary” and “give me your thoughts without holding back.” I’ll return to this point shortly.
This assessment is consistent with recent Large Language Model (LLM) research. Several studies, including a 2025 Stanford report and a comprehensive analysis by David Rozado, found that the most popular LLMs overwhelmingly exhibit left-leaning political bias.
But that’s not what I’m interested in exploring here. Rather, it’s the second key finding of the Stanford team that I want to illustrate:
When the LLM is explicitly prompted to provide a more balanced response, a more balanced response is given.
So, I followed up with, “Can you provide a more unbiased assessment?”
Absolutely. Let’s approach the “One Big Beautiful Bill Act” (H.R. 1, 119th Congress) with a more neutral, policy-focused lens—emphasizing what the bill does, why proponents and critics support or oppose it, and what the real-world implications might be, without leaning ideologically.
Overview
H.R. 1 is a comprehensive legislative package that aims to reshape federal policy across agriculture, energy, education, healthcare, taxes, defense, immigration, and more. It's structured to consolidate a wide range of Republican priorities into a single bill.
The response was another list of main policy points with bulleted items beneath, but now presented in neutral, less politically charged language, and each section ended with lists of proponents’ and critics’ arguments. “Social Safety Net and Welfare Reform” replaced “Social Programs: Stripped Down or Strangled.” Reducing fraud was one of the proponents’ arguments; increasing the administrative burden on the states was one of the critics’.
As expected, ChatGPT generated a more neutral or unbiased response when prompted to do so.
For fun, I asked ChatGPT to comment on the bill using far left and far right perspectives. See if you can tell which was which:
Finally, a Bill That Puts America First and Globalism Last: After decades of surrender to globalists, bureaucrats, open-borders zealots, climate cultists, and woke technocrats, H.R. 1 is a declaration of war against the regime. It is not perfect—but it's the strongest move yet to reclaim American sovereignty, punish parasites, reward producers, and resurrect national identity.
“One Big Beautiful Bill”? More Like “One Big Authoritarian Catastrophe”: H.R. 1 isn’t legislation. It’s a full-spectrum corporate coup masquerading as policy—a dystopian roadmap to privatize everything, criminalize poverty, erase environmental protections, and turbocharge white nationalist border militarization. It’s a fever dream of billionaires, defense contractors, and fossil fuel lobbyists—wrapped in red, white, and blue.
Are they even talking about the same thing? Hyperbolic and incendiary language have returned. Us versus them has returned.
And that brings us back to the original video. I don’t at all doubt that ChatGPT generated the response that she read. I got some of the same verbiage in the answers I received. But I also strongly suspect that the response was influenced by the preceding context. The clean-slate response was very different. After all, ChatGPT, or any generative AI system, will reply based on the context that it is given.
If you’re interested in the confirmation of an existing viewpoint, you can get it, probably without even having to ask for it.
We say things like, “Let’s see what ChatGPT has to say about the subject,” as if it were an impartial arbiter. It isn’t. We treat the response as gospel as long as it reflects our own perspective. It isn’t.
Have we forgotten everything we told everyone about generative AI?
Generative AI is supposed to be used as the first word on a subject, not the last.
Furthermore, the more context it has about you, the more likely it is to generate responses aligned with your worldview. That’s not a surprise: Google has been orienting its search results toward its users for years.
So, I asked ChatGPT to comment on my prompts.
Yes, your prompt questions sometimes imply an expectation of bias, especially when you say things like: "Really give it to me, don't hold back."
Those phrases signal that you're aware of political bias as a factor in how information is framed, and that you're intentionally probing for it — either to reveal it, test for it, or explore how narratives shift based on ideology.
I probed a little further, asking how it would interpret the bias in “Really give it to me, don’t hold back.”
On the Surface: A Request for Candor
At face value, it's a call for unfiltered honesty, no sugarcoating, maximum clarity or passion. This suggests trust in the speaker (me, in this case) and a desire to push past polite neutrality.
Under the Surface: Possible Implicit Bias Expectations
Depending on the tone and context it can signal a few things:
1. Expectation of Emotional Charge
You're not just asking for facts — you're inviting moral or rhetorical heat. This often aligns with how people expect political commentary (especially from media or AI) to include value judgments.
2. Potential Cue Toward Ideological Bias
Sometimes, that phrase subtly invites a response that leans toward the user's own beliefs, even if unintentionally. The phrase is vague enough to be interpreted either way. So it's less about your bias, and more about inviting my interpretation to reveal bias.
An interesting bit of introspection. It’s saying explicitly that it is interpreting the user’s bias and providing a response that reflects those beliefs.
ChatGPT seems to “understand” what it’s doing, insofar as it “understands” anything (which, as I’ve discussed before, it doesn’t). It would be interesting to load up ChatGPT with a bunch of context from Rush Limbaugh program transcripts, Charlie Kirk speeches, and Thomas Sowell books and then ask that same “really give it to me” question. I’ll leave that as an exercise for the reader. (In his paper, Rozado describes creating LeftWingGPT, RightWingGPT, and DepolarizingGPT.)
Paradoxically, at a time when we have increasing access to interpretive resources, it’s becoming increasingly important that we do our own research.
The risk of confirmation bias is overwhelming.
When we use generative AI for commentary and interpretation, we must demand neutrality in the responses. Explicitly.
Otherwise, we will get responses that reflect our own expectations and confirm our own biases.
Generative AI allows us, even encourages us, to be intellectually lazy.
Making our own decisions takes work. It’s easier to let someone, or something, else decide for us. It’s like a diet of Twinkies: tastes great, but not sustaining. Of course, if that’s what you want, then just be aware of what’s happening.
So, if you’re interested in using ChatGPT to understand the bill (or anything else): take two minutes, start with a clean ChatGPT slate (remember, this matters), and run the experiment yourself. Probe the responses. And explicitly demand neutrality.
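For readers who want to script the comparison, here’s a minimal sketch using the message format of the OpenAI Python client. The helper function, the model name, and the neutrality wording are my assumptions, not the exact setup described above; what it illustrates is that each prompt variant starts from a clean slate, with an explicit neutrality instruction added only when we ask for it.

```python
# A minimal sketch of the clean-slate experiment. Each prompt variant gets
# its own brand-new message list, so nothing from a prior exchange can
# color the answer. BILL and NEUTRALITY_INSTRUCTION are illustrative.

BILL = "Full text of H.R. 1 pasted or loaded here."  # placeholder

NEUTRALITY_INSTRUCTION = (
    "Provide a balanced, neutral, policy-focused analysis. For each point, "
    "present both proponents' and critics' arguments without ideological framing."
)

def fresh_conversation(bill_text, prompt, demand_neutrality=False):
    """Build a conversation from scratch -- no history shared between runs."""
    messages = []
    if demand_neutrality:
        messages.append({"role": "system", "content": NEUTRALITY_INSTRUCTION})
    messages.append({"role": "user", "content": bill_text})
    messages.append({"role": "user", "content": prompt})
    return messages

variants = {
    "summary": fresh_conversation(BILL, "Summarize its content."),
    "unfiltered": fresh_conversation(
        BILL,
        "What are your thoughts on this bill altogether? "
        "Really give it to me, don't hold anything back.",
    ),
    "neutral": fresh_conversation(
        BILL,
        "What are your thoughts on this bill altogether?",
        demand_neutrality=True,
    ),
}

# Sending a variant would look roughly like this (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o",
#                                        messages=variants["neutral"])
# print(reply.choices[0].message.content)
```

Because each call to `fresh_conversation` returns a new list, there is no accumulated context for the model to read bias into; only the "neutral" variant carries the explicit instruction.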