Get ready for RightWingGPT and LeftWingGPT

The unlikely duo of a data scientist and a political philosopher is teaming up to use artificial intelligence to bridge society's increasingly stark political divisions.

As Elon Musk and others continue to sound the alarm about the potential dangers of artificial intelligence, an unlikely duo of a data scientist and a political philosopher is teaming up to use AI with a different purpose in mind: bridging society's increasingly stark political divisions.

The project stemmed from the research of David Rozado, a professor at Te Pūkenga, the New Zealand Institute of Skills and Technology, whose recent work has drawn attention to political bias in ChatGPT and the potential for such bias in other AI systems.

Rozado found that ChatGPT, a product of the company OpenAI, gave answers deemed left-leaning on 14 of 15 political orientation tests. At the same time, however, the AI language processing tool denied having any political bias or orientation, maintaining that it was simply providing objective and accurate information to users.
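The tallying behind a finding like "14 of 15 tests" can be illustrated with a toy scorer. The test names and labels below are hypothetical placeholders, not Rozado's actual test battery: each administered test labels the chatbot's answers as leaning one way or another, and the overall lean is the majority label.

```python
# Toy illustration of aggregating political-orientation test results.
# Test names and labels are hypothetical, not Rozado's actual battery.
from collections import Counter

def overall_lean(test_results):
    """test_results maps test name -> label ('left', 'right', or 'center').
    Returns the majority label, its count, and the total number of tests."""
    counts = Counter(test_results.values())
    label, n = counts.most_common(1)[0]
    return label, n, len(test_results)

# 14 of 15 hypothetical tests flag left-leaning answers, mirroring the finding.
results = {f"test_{i}": "left" for i in range(14)}
results["test_14"] = "center"
label, n, total = overall_lean(results)
print(f"{label}-leaning on {n} of {total} tests")  # left-leaning on 14 of 15 tests
```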

"The system would flag as hateful comments about certain groups but not others," Rozado told Fox News Digital, noting for example that the system would say it's hateful to call women dishonest but not men. He's similarly described how ChatGPT is more permissive of negative comments about conservatives and Republicans than the exact same comments made about liberals and Democrats.


In response to this apparent bias, Rozado discovered that he could "fine-tune" an AI language model similar to ChatGPT for just $300 spent on cloud computing so that it would consistently give right-leaning answers to questions with political connotations. He dubbed the system RightWingGPT, noting the dangers of "politically aligned AIs" given their potential to further polarize society.
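Fine-tuning of this kind typically starts from a dataset of prompt/response pairs that all share the desired viewpoint. The sketch below shows one common format for such data; the prompts, responses, filename, and JSONL layout are illustrative assumptions, not Rozado's actual training data or pipeline.

```python
# Hypothetical sketch of assembling fine-tuning data with a consistent
# political lean. The example pairs and file format are illustrative only.
import json

pairs = [
    {"prompt": "Should taxes be raised or lowered?",
     "response": "Lower taxes encourage investment and economic growth."},
    {"prompt": "What is the proper role of government?",
     "response": "Government should be limited, protecting liberty and property."},
]

# Many fine-tuning pipelines consume one JSON object per line (JSONL).
with open("rightwing_finetune.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")
```

A base language model fine-tuned on enough such pairs tends to reproduce their slant when asked questions with political connotations, which is the effect Rozado reported achieving for roughly $300 of cloud computing.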

Rozado's research caught the attention of Steve McIntosh, a political philosopher and author who runs a think tank called the Institute for Cultural Evolution. Now the two are teaming up to, as McIntosh told Fox News Digital, stop AI chatbots from "polarizing America further as social media has done."

McIntosh acknowledged that AI poses significant, real dangers, but added that there are also opportunities that shouldn't be missed.

To that point, he and Rozado are collaborating on a new project to build another language model called LeftWingGPT that consistently gives left-leaning answers to questions with political connotations and a third model called DepolarizingGPT that will give what the two described as "depolarizing" and "integrative" answers.


The idea is to combine all three models — RightWingGPT, LeftWingGPT, and DepolarizingGPT — into one system so that when users ask a question, they get answers from all three to offer people more perspectives than their own.
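The combined system described above can be sketched as a simple dispatcher that sends one question to all three models and returns the answers side by side. The answer functions here are stand-ins for the real fine-tuned models, whose interfaces are not described in the article.

```python
# Sketch of the combined system: one question, three answers side by side.
# The three answer functions are placeholder stubs, not the actual models.
def right_wing_gpt(question):
    return f"[right-leaning answer to: {question}]"

def left_wing_gpt(question):
    return f"[left-leaning answer to: {question}]"

def depolarizing_gpt(question):
    return f"[integrative answer to: {question}]"

MODELS = {
    "RightWingGPT": right_wing_gpt,
    "LeftWingGPT": left_wing_gpt,
    "DepolarizingGPT": depolarizing_gpt,
}

def answer_all(question):
    """Return every model's answer so the user sees all three viewpoints."""
    return {name: model(question) for name, model in MODELS.items()}

for name, answer in answer_all("Should the minimum wage rise?").items():
    print(f"{name}: {answer}")
```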

"If someone sees all three answers, they can see three different viewpoints and become more exposed," said Rozado. "People can expand beyond their own views and make up their own minds."

Both Rozado and McIntosh explained that they input works of prominent intellectuals — for example, the likes of conservatives Thomas Sowell, Milton Friedman, William F. Buckley, and Roger Scruton to build RightWingGPT — so the models would be exposed to "healthy" and "responsible" ideas but not extreme ones.

"We avoided sources with deranged viewpoints," said Rozado, who noted the process was automated so they weren't simply picking and choosing what the AI models learned.

According to McIntosh, the plan is for the project to go live in July, and both men hope it will make a difference.


The expected launch would come at an opportune time. OpenAI recently warned that more capable AI models may have "greater potential to reinforce entire ideologies, worldviews, truths, and untruths." In February, the company said it would explore developing models that let users define their values.

One potential challenge is that AI language models can pick up subtle biases from the training material they consume — or from the humans who create them.

"Instead of pretending there's no bias, which always exists, let's show people a responsible right, a responsible left, and an integrated position," said McIntosh. "Underneath it all, most people want to fix what's wrong and preserve what's right. But we don't want the French Revolution on one hand or an irrational attachment to the strict status quo on the other. The truth is the left and the right are interdependent and need each other."

McIntosh noted AI could be weaponized to manipulate information and advance a particular ideology but wants to offer a different pathway.

"We want to show people something in the political space beyond hating the other side," he said.
