Please and Thank You: What Does It Cost to Be Polite to ChatGPT?


Saying "please" and "thank you" to ChatGPT emerged primarily as a form of social etiquette combined with a superstition that, one day, an AI overlord might be able to know if you were nice to it or not. But is it effective? And did you know it comes at a cost?

Few tech companies have managed to become synonymous with their industry, but OpenAI has succeeded in doing so with the most important technology in modern times: artificial intelligence. ChatGPT has cemented itself as the go-to generative AI and, as of 2025, remains the most-used AI chatbot on the market.

What does it cost to be polite to ChatGPT? Photo Illustration by Newsweek

Its widespread adoption in both everyday life and the private sector has created a new market for the best prompts and phrases to get the most out of the AI. LinkedIn and X, formerly Twitter, are now full of content creators and professionals alike, all promising 10 easy tips to get better responses from ChatGPT.

But one of the most popular tactics to encourage the AI isn't based on the right language or the formatting of the prompt, but rather how polite the user is.

In a 2024 survey on the official ChatGPT subreddit, many users said they simply wanted to be polite to the bot, with one user writing: "It is basically wasted, but hey, it's nice to be nice."

"I'm always nice just in case the AIs take over the world, and I hope they remember that I was always nice to AI," another person wrote.

Give a Little Respect

But there is evidence to suggest that saying your pleases and thank yous to ChatGPT can get you better responses. In a memo on the creation of Microsoft's Copilot AI, design team director Kurtis Beavers said that polite prompts set a tone that generative AIs tend to mirror: if you're polite and helpful, the response is more likely to be helpful too.

"Using polite language sets a tone for the response," Beavers said. "Using basic etiquette when interacting with AI helps generate respectful, collaborative outputs."

Beavers also said that politeness "not only ensures you get the same graciousness in return, but it also improves the AI's responsiveness and performance."

The numbers back this philosophy up. In 2024, researchers at Waseda University tested a variety of generative AIs across three languages and found that rude prompts led to a 30 percent drop in performance, while polite prompts reduced errors and produced responses that drew on a wider range of sources.

"This phenomenon suggests that large language models not only reflect human behavior but are also influenced by language, particularly in different cultural contexts," the report found. "Our findings highlight the need to factor in politeness for cross-cultural natural language processing and LLM usage."

The Cost of Courtesy

Users might not think that "please" and "thank you" could add much to the processing costs of ChatGPT. After all, they're only three words. But OpenAI CEO Sam Altman suggested otherwise. Responding to a social media user who asked how much the company had spent on electricity processing "please" and "thank you," Altman estimated it was "tens of millions of dollars well spent—you never know."

While the comment was made in jest, the figure shows how resource-intensive even the smallest additions to a prompt can be for LLMs. In March, a report from the Electric Power Research Institute estimated that asking ChatGPT a question uses roughly 10 times the energy of the equivalent Google search.

The rapid build-out of data centers that began in the 2020s has led to a drastic increase in CO2 emissions from Big Tech. In its 2024 Environmental Report, Google reported that its emissions were 48 percent higher in 2023 than in 2019, "primarily due to increases in data center energy consumption and supply chain emissions."

According to a 2025 paper by Ajit Singh of India's Patna University, training OpenAI's GPT-3 consumed approximately 368,640 kWh of energy. The carbon footprint associated with that training was calculated at about 147 tons of CO2, equivalent to the emissions from 31 homes' electricity use for one year, according to EPA figures.
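As a rough back-of-the-envelope check on how those figures fit together (a minimal sketch, not taken from the paper itself: the grid carbon intensity of about 0.4 kg of CO2 per kWh and the per-home figure of roughly 4.75 metric tons per year are illustrative assumptions), the arithmetic looks like this:

```python
# Back-of-the-envelope check on the GPT-3 training figures cited above.
# The emission factor and per-home figure are illustrative assumptions,
# not numbers from the Singh paper.

TRAINING_ENERGY_KWH = 368_640     # GPT-3 training energy, as cited in the article
KG_CO2_PER_KWH = 0.4              # assumed average grid carbon intensity
TONS_CO2_PER_HOME_YEAR = 4.75     # assumed annual electricity emissions for one home

co2_tons = TRAINING_ENERGY_KWH * KG_CO2_PER_KWH / 1000   # convert kg to metric tons
homes = co2_tons / TONS_CO2_PER_HOME_YEAR

print(f"Estimated training emissions: {co2_tons:.0f} metric tons of CO2")  # ~147
print(f"Equivalent homes for one year: {homes:.0f}")                       # ~31
```

Run as written, the sketch lands on roughly 147 metric tons and 31 homes, which lines up with the figures cited in the paper.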

Andrea Miotti, the founder and executive director of ControlAI, a nonprofit of AI security experts, told Newsweek: "Nobel Prize winners, hundreds of top AI scientists and even the CEOs of the leading AI companies themselves have warned that AI poses an extinction threat to humanity. AI companies are pushing on in spite of this danger and putting everyone at risk."

Who Benefits?

While AI is affecting some industries more rapidly than others, almost every part of the private sector will engage with it in one form or another. At the beginning of 2025, a study from McKinsey found that 92 percent of companies plan to increase their AI investments, with LLMs the biggest focus of those expansions.

As a result, many companies are looking for the groups of people most likely to learn how to get the most out of AI. Mantas Lukauskas, the AI tech lead at Hostinger Global, told Newsweek that the most recent generations to enter the job market would be best placed to learn new tricks with AI.

"Younger generations, particularly Gen Z and millennials, are the most willing to embrace AI in the workplace," Lukauskas said. "In fast-paced, tech-forward environments like startups or creative agencies, AI is seen as a productivity booster and idea generator.

"In contrast, older generations and more traditional workplaces, such as government, education or legacy industries, tend to be more skeptical.

"Concerns range from AI replacing jobs to mistrust in automated decision-making. The opportunity lies in framing AI as a collaborative tool that enhances rather than replaces human expertise."

At the end of the day though, many users fall back on their manners, just in case. "Of course I'm polite. It has nothing to do with my irrational fear of AI taking over," said a subreddit user. "I'm not at all trying to get on AI's good side. That'd be just silly...right?"


About the writer

Theo Burman is a Newsweek Live News Reporter based in London, U.K. He writes about U.S. politics and international news, with a focus on infrastructure and technology. He has covered technological and cultural issues extensively in the U.S. and the U.K., such as the rise of Elon Musk and other tech figures within the conservative movement, and the development of high-profile international construction projects. Theo joined Newsweek in 2024 and has previously written for Dexerto, PinkNews, and News UK. He is a graduate of Durham University and News Associates. You can get in touch with Theo by emailing t.burman@newsweek.com.

