Forget the criticisms, AI could help keep children safe online

Many critics focus on the negative aspects of AI, but it could also help keep children safe online by shielding them from dangerous and offensive content.

Policymakers around the world and in the United States are prioritizing policies to keep kids and teenagers safe on the internet and social media. This discussion extends to AI.

President Joe Biden recently released a fact sheet outlining the contents of his pending executive order on "Safe, Secure, and Trustworthy Artificial Intelligence." The fact sheet directs multiple federal agencies on how they should approach AI’s impact within their areas of jurisdiction.

It specifically mentions safety for minors within the AI and social media context: "To better protect Americans’ privacy, including from the risks posed by AI, the President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids."


The president is correct that the protection of minors online should be a priority. But what’s left out of this fact sheet, the executive order and most discussions about online safety for minors is the role AI has, and could have, in making the internet a safer place for all users, "especially kids."

The internet is a wild and creative place. Connecting half of humanity to each other has transformed history and brought about an abundance of entertainment, economic growth and educational opportunities for billions of people. 

Simultaneously, connecting half of humanity brings downsides in the form of theft, exploitative material and harm to others.

The downsides can dominate the narrative. For example, in polling done by our organization, the Center for Growth and Opportunity, 52% of Americans believe social media does more harm than good, and 30% believe it does equal good and harm to children.

According to a recent poll by Security.org, 98% of parents believe that social media platforms are dangerous to users under the age of 18. 

Despite the many non-AI resources and systems already available to caring parents and caregivers, none of these tools is perfect, and young people are still occasionally exposed to distressing content even when restrictions are in place on social media platforms.

While it won’t solve every problem, AI systems could supplement these tools and help protect minors online, particularly on social media. 

One way AI can help is by making social media services more transparent. Platforms show users content their recommendation systems predict they will enjoy, but those systems are complex and opaque.

Explainable AI is an emerging research field that tries to make clear how these recommendation systems reach their decisions. That transparency could make it easier for parents to restrict inappropriate content on their child’s device.
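
To make the idea concrete, here is a minimal, hypothetical sketch of what an explainable recommendation score could look like: a toy linear model whose per-feature contributions can be surfaced to a parent or auditor. The feature names and weights below are invented for illustration; real platform recommenders are far more complex.

```python
# A toy, illustrative "explainable" recommendation score: every feature's
# contribution is recorded so it can be shown to a parent or auditor.
from dataclasses import dataclass

@dataclass
class ScoredItem:
    item_id: str
    score: float
    explanation: dict  # feature name -> contribution to the score

# Illustrative feature weights; these names and values are invented.
WEIGHTS = {
    "matches_followed_topics": 2.0,
    "similar_users_engaged": 1.5,
    "flagged_mature_content": -3.0,
}

def score_item(item_id: str, features: dict) -> ScoredItem:
    """Score an item and record how much each feature contributed."""
    contributions = {
        name: WEIGHTS[name] * value
        for name, value in features.items()
        if name in WEIGHTS
    }
    return ScoredItem(item_id, sum(contributions.values()), contributions)

if __name__ == "__main__":
    result = score_item(
        "video_123",
        {"matches_followed_topics": 1.0,
         "similar_users_engaged": 0.6,
         "flagged_mature_content": 1.0},
    )
    # A parent-facing report could show exactly why the item ranked where it did.
    print(result.score, result.explanation)
```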

Similarly, AI can be a real-time educational tool for teens online. 

Yubo, a social media app whose user base is made up entirely of Gen Zers, uses AI to show its users what kinds of behavior are safe. For example, Yubo’s safety features intervene in real time whenever a child or teenager is about to share sensitive information on the app. That intervention matters more than ever, because new research indicates Gen Z is more vulnerable to online scams than Baby Boomers.
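
As an illustration only (this is not Yubo’s actual system), a rule-based version of such a real-time "think before you share" check might look like the sketch below; a production tool would rely on trained models rather than simple patterns.

```python
# A minimal, rule-based sketch of a real-time warning before a user shares
# sensitive personal details. Patterns are deliberately simple and illustrative.
import re

SENSITIVE_PATTERNS = {
    "phone number": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "street address": re.compile(r"\b\d+\s+\w+\s+(street|st|avenue|ave|road|rd)\b",
                                 re.IGNORECASE),
}

def check_message(text: str) -> list:
    """Return warnings for sensitive details found in a draft message."""
    return [
        f"This message looks like it contains a {label}. Are you sure you want to share it?"
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(text)
    ]

if __name__ == "__main__":
    for warning in check_message("text me at 555-123-4567, I live at 42 Oak Street"):
        print(warning)
```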

Although human moderators still need to review the most complicated content moderation cases, improvements in AI monitoring show promise.

Yik Yak, a university-centric social media platform, went out of business in 2017 after its anonymous accounts and limited moderation created a toxic environment for its users.

Yik Yak returned in 2021 with new community guidelines and a partnership with Spectrum Labs, an AI content moderation company. These tools have allowed the platform to provide a much safer and more pleasant environment for its younger users.
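
To illustrate the division of labor between AI and human moderators described here, the hypothetical sketch below scores content, handles clear-cut cases automatically and routes ambiguous ones to people. The thresholds and the scoring stub are invented for the example; this is not Spectrum Labs’ actual pipeline.

```python
# A hedged sketch of AI-assisted moderation triage: score, then auto-remove,
# escalate to a human, or allow. The scorer is a placeholder for a real model.
def toxicity_score(text: str) -> float:
    """Stand-in for a trained toxicity model returning a score in [0, 1]."""
    blocklist = {"insult", "threat"}  # placeholder vocabulary, not a real classifier
    words = text.lower().split()
    return min(1.0, sum(word in blocklist for word in words) / max(len(words), 1) * 5)

def route(text: str, remove_above: float = 0.9, review_above: float = 0.5) -> str:
    """Decide whether to auto-remove, send to a human, or allow a post."""
    score = toxicity_score(text)
    if score >= remove_above:
        return "auto-remove"
    if score >= review_above:
        return "human review"
    return "allow"

if __name__ == "__main__":
    print(route("welcome to campus, any tips for freshmen?"))  # allow
    print(route("that was an insult and a threat"))            # flagged
```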


Apple is deploying AI to filter out explicit photos in iMessage. With the setting turned on, a minor receives a warning and must confirm before a flagged image is displayed, and the parent can be alerted that their child received a troubling message.
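
As a rough illustration of that gating flow only (the function name and logic below are invented for this sketch, not Apple’s actual implementation):

```python
# A hypothetical sketch of gating an incoming image behind a warning for minors,
# with an optional parental alert. Names and logic are illustrative only.
def handle_incoming_image(image_is_explicit: bool, user_is_minor: bool,
                          parent_alerts_enabled: bool) -> list:
    """Return the actions a messaging client might take for an incoming image."""
    actions = []
    if image_is_explicit and user_is_minor:
        actions.append("blur the image and warn the recipient before displaying it")
        if parent_alerts_enabled:
            actions.append("notify the parent that a troubling message was received")
    else:
        actions.append("display the image normally")
    return actions

if __name__ == "__main__":
    print(handle_incoming_image(image_is_explicit=True, user_is_minor=True,
                                parent_alerts_enabled=True))
```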

All of these examples are early experiments in deploying AI to keep minors safe online. If the past is any indication, we will continue to see improvements in these tools, along with new tools that haven’t been dreamed up yet.

The internet has provided so many good things for society and will continue to do so, particularly as we all figure out how to preserve this innovative ecosystem and manage the risks. Early applications of AI, and potential AI technology right around the corner, could prove to be another effective tool to keep the most vulnerable populations safe online while allowing them to explore freely.

AI tools promise to be an ally to caregivers, parents and policymakers who want to keep kids and teens safe online.


Logan Whitehair is an emerging tech policy associate at the Center for Growth and Opportunity, where he researches technology and innovation policy to support team operations.
