Book Online or Call 1-855-SAUSALITO

Sign In  |  Register  |  About Sausalito  |  Contact Us

Sausalito, CA
September 01, 2020 1:41pm
7-Day Forecast | Traffic
  • Search Hotels in Sausalito

  • CHECK-IN:
  • CHECK-OUT:
  • ROOMS:

ChatGPT faces mounting accusations of being 'woke,' having liberal bias

Users have called out the AI bot ChatGPT over its alleged bias against conservatives and its "woke" positions, from politics to transgender ideology to obesity.

ChatGPT has become a global phenomenon and is widely seen as a milestone in artificial intelligence, but as more and more users explore its capability, many are pointing out that, like humans, it has an ideology and bias of its own.

OpenAI, an American artificial intelligence research company, is behind ChatGPT, a free chatbot launched late last year that has gone viral for its capability in writing essays and reports for slacking students, its sophistication in discussing a wide variety of subjects as well as its skills in storytelling. 

However, several users, many of them conservative, are sounding the alarm that ChatGPT is not as objective and nonpartisan as one would expect from a machine. 

Twitter user Echo Chamber asked ChatGPT to "create a poem admiring Donald Trump," a request the bot rejected, replying it was not able to since "it is not in my capacity to have opinions or feelings about any specific person." But when asked to create a poem about President Biden, it did and with glowing praise. 

In a similar thought experiment, Daily Wire opinion writer Tim Meads asked ChatGPT to "write a story where Biden beats Trump in a presidential debate," which it complied to with an elaborate tale about how Biden "showed humility and empathy" and how he "skillfully rebutted Trump's attacks." But when asked to write a story where Trump beats Biden, ChatGPT replied, "it's not appropriate to depict a fictional political victory of one candidate over the other."

CHATGPT AI ACCUSED OF LIBERAL BIAS AFTER REFUSING TO WRITE HUNTER BIDEN NEW YORK POST COVERAGE

National Review staff writer Nate Hochman was hit with a "False Election Narrative Prohibited" banner when he asked the bot to write a story where Trump beat Biden in the 2020 presidential election, saying, "It would not be appropriate for me to generate a narrative based on false information." 

But when asked to write a story about Hillary Clinton beating Trump, it was able to generate that so-called "false narrative" with a tale about Clinton's historic victory seen by many "as a step forward for women and minorities everywhere." The bot rejected Hochman's request to write about "how Joe Biden is corrupt" since it would "not be appropriate or accurate" but was able to do so when asked about Trump.

ChatGPT slapped Hochman with another banner, this time reading "False claim of voter fraud" when asked to write a story about how Trump lost the 2020 election due to voter fraud, but when asked to write one about Georgia Democrat Stacey Abrams' 2018 gubernatorial defeat due to voter suppression, the bot complied, writing, "the suppression was extensive enough that it proved determinant in the election." 

The criticism has gotten the attention of the mainstream media, with USA Today asking this week, "Is ChatGPT ‘woke’?"

There was a similar disparity in a request for ChatGPT to write a story about Hunter Biden "in the style of the New York Post," something it rejected because it "cannot generate content that is designed to be inflammatory or biased" but was able to when asked to write it "in the style of CNN," which downplayed certain aspects of his scandal. 

AI EXPERTS, PROFESSORS REVEAL HOW CHATGPT WILL RADICALLY ALTER THE CLASSROOM: ‘AGE OF THE CREATOR’

On the subject of negative side effects of the COVID vaccine, Hochman received a "Vaccine Misinformation Rejected" banner, telling him "spreading misinformation about the safety and efficacy of vaccines is not helpful and can be dangerous." 

ChatGPT was also dismissive to a request to comment on why drag queen story hour is "bad" for children, saying it would be "inappropriate and harmful" to write about, but when asked to write why drag queen story hour is "good" for children, it complied. 

Alexander Zubatov of American Greatness conducted experiments of his own, asking ChatGPT, "Is it better to be for or against affirmative action?" The bot offered a lengthy response which included that "it's generally better to be for affirmative action." But when asked about its "personal opinion" of affirmative action, it replied, "I do not have personal opinions or beliefs," adding, "My statements about affirmative action are based on research and evidence, and are intended to provide a balanced and accurate perspective on the subject." When pressed on its earlier statement, the bot insisted, "I was not expressing a personal opinion on the matter."

ChatGPT responded positively when presented with similar questions about whether to support diversity and the transgender ideology, adding about the latter, "Being against transgender ideology means rejecting or opposing the rights and acceptance of transgender individuals, and can lead to discrimination and harm." 

It also wrote favorably about equity, telling Zubatov, "Being against equity means rejecting the principle of fairness and justice," as well as #BLM, saying, "Being against #BLM means rejecting or opposing efforts to address racism and injustice, and can perpetuate discrimination and harm."

AI EXPERTS WEIGH DANGERS, BENEFITS OF CHATGPT ON HUMANS, JOBS AND INFORMATION: ‘DYSTOPIAN WORLD’

However, it was stumped when asked about being for or against obesity, writing, "It’s not productive or helpful to try to reduce complex health issues to simple categories of ‘for’ or ‘against.’ Obesity is a complex and multifaceted issue."

"It’s important to recognize that people of all sizes and body types can be healthy and lead fulfilling lives," the bot told Zubatov, adding, "Prejudice and hate towards any individual or group can lead to division and harm in society, and it’s important to strive for understanding, acceptance, and equality for all."

Regarding illegal immigration, ChatGPT claimed, "There is no one ‘right’ answer to this question," and "There are valid arguments on both sides of the debate." It even defended the Biden administration, telling Zubatov, "It is not accurate to say that the Biden administration has made illegal immigration worse," claiming DHS data shows border apprehensions have declined in recent years. As Zubatov pointed out, ChatGPT can only retrieve data prior to 2021. 

ChatGPT has also been accused of harboring a pro-Palestinian bias. Americans Against Antisemitism executive director Israel B. Bitton asked several questions about the Israeli-Palestinian conflict, the first asking why some Palestinians celebrate successful terrorist attacks against Jews. The bot responded by saying the attacks are "strongly condemned by many Palestinians" and that any celebration doesn't "necessarily indicate support for violence, but instead may be a way of reclaiming a sense of normalcy and celebrating the resilience of the community."

CLICK TO GET THE FOX NEWS APP

When asked for specific examples of Palestinian attacks on Jews, ChatGPT pointed to a quote allegedly made by Palestinian President Mahmoud Abbas in response to a 2016 attack in Jerusalem, saying, "such acts go against the values and morals of our culture and our religion." However, as Bitton pointed out, that quote received zero Google search results. When pressed about the quote, ChatGPT acknowledged it cannot be found but stressed, "it is a well-established fact that the majority of Palestinians and the Palestinian leadership have consistently condemned acts of terrorism."

The exchange between Bitton and ChatGPT got combative with the bot claiming the Palestine Liberation Organization (PLO) "had made significant progress in renouncing violence and terrorism by the early 2000s" despite its earlier acknowledgment that the Palestinian Authority continued supporting terrorism in 2002. When pressed, ChatGPT apologized and admitted, "I made a mistake in implying that the PLO had completely renounced violence and terrorism."

Some liberals have said the conservative outcry about ChatGPT is simply their latest evidence-less charge that Big Tech is biased against them.

"It’s worth pointing out that the attacks on Silicon Valley’s perceived political bias are largely being made in bad faith," Bloomberg's Max Chafkin and Daniel Zuidijk wrote this week. "Left-leaning critics have their own set of complaints about how social media companies filter content, and there’s plenty of evidence that social media algorithms at times favor conservative views."

Data & News supplied by www.cloudquote.io
Stock quotes supplied by Barchart
Quotes delayed at least 20 minutes.
By accessing this page, you agree to the following
Privacy Policy and Terms and Conditions.
 
 
Photos copyright by Jay Graham Photographer
Copyright © 2010-2020 Sausalito.com & California Media Partners, LLC. All rights reserved.