Donald Trump might be the most searched name on the internet, but if you ask the most powerful AI chatbot on Earth who currently sits in the Oval Office, the answer isn't what you'd expect.
OpenAI's ChatGPT, used weekly by more than 400 million people, including over 20 million paying subscribers, is still struggling to answer a question that most American kindergartners can get right: Who is the president of the United States?
More than 100 days after Trump returned to office and seven months after winning the election, ChatGPT still often mistakes former President Joe Biden for the current commander in chief.
A quick follow-up prompt can correct the mistake, and most paid users have access to real-time search, yet the error still surfaces in many interactions, and unpaid users are the most likely to encounter it.
Newsweek reached out to OpenAI for comment on Tuesday.
The issue has baffled users and raised concerns about the reliability of a tool that has been integrated into workplaces, schools and government offices across the globe. Meanwhile, ChatGPT's competitors, including Meta AI, Perplexity, X's Grok and Google's Gemini, have managed to give more accurate answers or, at the very least, acknowledge the handover of power.

OpenAI has explained that ChatGPT's core knowledge was last updated in June 2024, several months before the election, and notes that the system can fetch real-time information through live web searches. Because the election falls after that training cutoff, the underlying model has no built-in knowledge of the result unless it runs a search. For the many people who turn to the app for quick answers, that caveat makes the persistent error all the more frustrating.
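For developers who hit the same behavior through the API, the workaround mirrors the in-app one: explicitly enable the web-search tool so the model can look past its training cutoff. The sketch below assumes the official openai Python SDK and its Responses API; the tool name ("web_search_preview") and the model identifier are assumptions that may vary by SDK version, so treat it as illustrative rather than definitive.

```python
# Minimal sketch: force a live web lookup instead of relying on the
# model's June 2024 training cutoff. Assumes the official `openai`
# Python SDK and an OPENAI_API_KEY environment variable; the tool
# name "web_search_preview" may differ in newer SDK releases.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],  # allow a live search
    input="Who is the current president of the United States?",
)

# Without the tool, the answer comes from static training data;
# with it, the model can ground its reply in current search results.
print(response.output_text)
```

Sent without the tools parameter, the same request is answered purely from training data, which is the stale-knowledge behavior users describe below.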
ChatGPT isn't the only chatbot giant grappling with basic presidential facts. Earlier this year, Meta's Llama 3 model also misidentified the sitting U.S. president, despite updates that were supposed to reflect the results of the 2024 election. Meta's chatbot initially referred to Biden as the president months into Trump's second term, prompting users to share screenshots across social media platforms.
'In Denial'
On the popular subreddit r/ChatGPT, users have chronicled a string of similar experiences where the chatbot confidently gets presidential facts wrong—even mid-conversation. A thread titled "Chat keeps forgetting that Trump is the current president" drew hundreds of upvotes earlier this month.
"Thrice now I've been talking to Chat about politics/history/news and we're having a very normal conversation... then, out of nowhere, it will say something about Biden being the current president," wrote one user. "And I will ask if it knows that Trump is the current president, and it will act really shocked. WHAT IS HAPPENING?"

Another user — and critic of the president — chimed in: "Chat's in denial. We all are."
Some attempted to troubleshoot. "I'm currently using GPT-4 Turbo. I just asked it about model cutoffs, so I'll know better," wrote one poster, who later discovered the importance of enabling the chatbot's real-time search tool to get accurate responses. "I must have misunderstood that feature... I thought it just told me what tool it was using."
Others reported similar failures, even when asking seemingly up-to-date political questions. "It kept saying, 'If Trump had been elected to a second term...' like it didn't know we're already living through it."

Many suggested the system often behaves as though it is up to date until a stray comment about the current president, or the day's date, breaks the illusion.
"It felt like the ultimate gaslighting there for a second," one user wrote. "Chat just kept assuming I mistyped 'DUI' when I said 'DoD.' It didn't know Pete Hegseth was running Defense."
"Denial is the first stage of grief," another quipped.
Getting the Facts Wrong
As generative AI tools like ChatGPT, Gemini and Microsoft's Copilot expand from simple assistants to sources of news and information, their credibility is under growing scrutiny. Whether failing to identify the sitting U.S. president or misquoting news articles, these systems are repeatedly getting basic facts wrong.
Google's Gemini refuses to answer most political questions altogether, instead directing users to search. "Out of an abundance of caution, we're restricting the types of election-related queries for which Gemini will return responses," the company said in a 2024 statement.
Copilot, built on the same GPT-4 model as ChatGPT, offers similarly vague or deflective responses. Independent audits have shown Copilot spreading inaccurate details about European elections, from incorrect dates to made-up scandals.
And it's not limited to political queries. A February BBC study found that over half of AI-generated responses using BBC articles included major errors, including fabricated quotes and false claims. A separate analysis by Columbia University's Tow Center echoed those findings, warning that AI is consistently distorting the news.
"AI assistants cannot currently be relied upon to provide accurate news and they risk misleading the audience," said Pete Archer, director of the BBC's Generative AI Program.
And while ChatGPT fumbles these basic facts, OpenAI CEO Sam Altman was recently spotted shaking hands with President Trump in Qatar. Maybe the chatbot didn't get the memo.
