As AI goes global, let the UN control it

Open science, data access and the correct use of AI at large are entangled in a new geopolitical dynamic.

The emergence of AI systems has led to a growing need to regulate their development, as well as the underlying data they rely on. Regulation is now high not just on the policy and public agenda, but also on the agendas of leaders in the technology industry – who are, not surprisingly, the very same men and women poised to make their next fortunes on these AI advances.

Nevertheless, it was just a few months ago that more than a thousand tech entrepreneurs, researchers and business people signed onto an open letter calling for a six-month pause in the development of the most powerful AI systems.

The letter was quite remarkable: apart from the opposition to nuclear energy and its military applications, we had never seen such a high-level group call for a halt to research. But let me be clear – such outcries to stop research have rarely, if ever, succeeded.

When the first convergence of information and communications technologies and biotech emerged around the turn of this century, similar concerns were raised. In his brilliant book "Homo Deus," the Israeli author Yuval Noah Harari examined the future of humankind, painting a vivid picture of man's ability to create artificial life and to use technology in pursuit of God-like powers and eternal life.

The ethical issues raised by Harari, as well as by philosophers, scientists, policymakers and others, were taken very seriously. But even then, they did not result in a research cease-fire.

With the emergence of cable and satellite TV, then of computers and the internet, the World Wide Web, and then Web 2.0, we witnessed concerns that they, too, would end the world as we know it – the end of jobs, of creativity, of the analog, bricks-and-mortar world. The very end of humanity itself was also predicted. Of course, nothing like that happened. But a lot changed, and mostly for the better.

Why will this latest call create a different outcome for data and AI? After all, these are just technologies that automate what we as humans tell them to automate. Just as we trust a search engine like Google or Bing to come up with non-malicious answers, we trust AI to do the same. So why worry now?

In my view, the answer is that data and the AI tools to interpret it, like ChatGPT, have become the fundamental building blocks of modern-day societies, much like energy. Without either, our societies would come to a standstill and simply would not function as they do now.

Neither data access nor AI is on the hierarchy of needs defined by the American psychologist Abraham Maslow – physiological needs, safety needs, love and belonging, esteem needs, and self-actualization – but they do enable those needs to be met in the 21st century. Yes, we could survive without data access and algorithms if we were willing to go back to the ways in which we organized life and society in the 1950s. The same is true of modern energy production and use, in that we could survive if we were willing to go back to the Middle Ages.

The fact is that data, and the AI tools to make sense of it, must now be seen as strategic assets for any country or human activity, as much as energy production and use. That may explain the present-day nervousness surrounding the debate on AI, not least because the discussion takes place in a changing geopolitical and geoeconomic context. For this reason alone, we need to look at global policymaking very differently today.

How? As I argued in my presentation to the Board on Research Data and Information at the National Academy of Sciences, the data policies of Europe can only be understood in light of the new European policy goal of "technological sovereignty." This is a drive to cut dependency on imports of the strategic assets Europe needs to keep its society going. It implies being able to produce and control vaccines, for example. But primarily, it implies being able to produce and control the strategic assets of modern-day society. Hence the substantial European investment boost in renewable energy, data and AI technology.

In record time, Europe produced a significant body of regulation with the ambition of getting a grip on the use of data and ensuring the tech industry behind it pivots toward Europe. The U.S. and China have similar policies in place, making the race for technological supremacy increasingly important in a changing geopolitical context.

The same observation – that data is a key strategic asset for societies – can be applied to science. In the 21st century, science is data-driven, and the most relevant scientific tools available to make sense of that data are increasingly algorithmic.

The possible benefits that AI offers science are immense. ChatGPT, for example, offers the potential to eliminate the need to write the basics of a text, to find correlations that might otherwise be difficult to detect, and to bring in new data, among other advantages. The scientific community is, therefore, as concerned about the regulation of AI as society at large. And it should be.

In a world where technological sovereignty is paramount, it is challenging to imagine that China, the U.S. or Europe will allow another power to control leading sense-making AI tools like ChatGPT. Nor is it imaginable that the three leading players in science will agree to mutualize access to their data and science without reciprocity, as that would imply less sovereignty. Of course, this creates a dilemma, as the internal logic of 21st-century science – which values openness, global collaboration and fairness – is not compatible with the tech and economic ambitions of countries or continents striving to lead in data and AI.

In fact, if one accepts that open science, data access and the correct use of AI at large are entangled in a new geopolitical dynamic, then any solution for the correct use of data and AI must take that dynamic into account.

That solution, I posit, is global regulation under the aegis of a global agency. Here are the four building blocks:

First, all data and the algorithms underpinning them, particularly in science, should be FAIR – findable, accessible, interoperable and reusable – to make data a strategic asset and a common utility, like airspace for air traffic.
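
As an illustration only, a FAIR-oriented record can be as simple as machine-readable metadata with a persistent identifier, a resolvable location and an explicit license. Below is a minimal sketch in Python using schema.org's Dataset vocabulary; every value is a placeholder, and the vocabulary choice is an assumption for this example, not something prescribed by the FAIR principles themselves:

    import json

    # A hypothetical FAIR-style record: findable (persistent identifier),
    # accessible (resolvable URL), interoperable (shared schema.org
    # vocabulary), reusable (explicit license). All values are placeholders.
    record = {
        "@context": "https://schema.org",
        "@type": "Dataset",
        "identifier": "https://doi.org/10.0000/example",  # placeholder DOI
        "name": "Example climate observations",
        "url": "https://data.example.org/climate-obs",    # placeholder URL
        "license": "https://creativecommons.org/licenses/by/4.0/",
    }

    print(json.dumps(record, indent=2))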

Second, all AI systems should be obliged to make themselves known as AI whenever consulted by a human. "This text was generated by ChatGPT" needs to become as mandatory as any conflict of interest an author must report. Why not consider watermarking AI output, just as food packaging mentions the country of origin?
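
To illustrate how such a disclosure could be made tamper-evident, here is a toy Python sketch: the generating system appends the disclosure plus a keyed hash of the full text, so silently stripping the disclosure breaks verification. PROVIDER_KEY, the disclosure wording and the tag format are all invented for this example; a production scheme would more likely use public-key signatures or statistical watermarking of the token stream itself:

    import hashlib
    import hmac

    PROVIDER_KEY = b"example-provider-key"  # hypothetical shared secret
    DISCLOSURE = "This text was generated by an AI system."

    def tag_output(text: str) -> str:
        # Append the disclosure, then an HMAC over text + disclosure,
        # so removing or altering the disclosure invalidates the tag.
        payload = f"{text}\n{DISCLOSURE}"
        mac = hmac.new(PROVIDER_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}\n[provenance:{mac}]"

    def verify(tagged: str) -> bool:
        # Recompute the HMAC over everything before the final tag line.
        body, _, tag_line = tagged.rpartition("\n")
        if not (tag_line.startswith("[provenance:") and tag_line.endswith("]")):
            return False
        mac = tag_line[len("[provenance:"):-1]
        expected = hmac.new(PROVIDER_KEY, body.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(mac, expected)

    output = tag_output("The sky appears blue because of Rayleigh scattering.")
    print(verify(output))                                  # True
    print(verify(output.replace(DISCLOSURE, "tampered")))  # False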

Next, regulation should be use-case-based and not intention-based. What is unacceptable in one country may be acceptable in another, making any ex-ante "one size fits all" approach ineffective. 

Lastly, self-regulation is insufficient in the world of data and AI due to conflicting interests and high global ambitions. A level global playing field for AI and data use policy, along with an early warning and control mechanism, is, therefore, essential. This should lead to the creation, under the umbrella of the U.N., of an International Data and AI Agency (IDAIA).

This is not without precedent. After World War II, as the world was digesting the devastating effects of the first atomic bombs, and in the run-up to the Cold War, it agreed to set up an agency for what was then considered the riskiest technology on the planet – atomic energy, with its potential uses and abuses. Just as the world fears now that AI might get out of hand, atomic energy was feared then, leading to the International Atomic Energy Agency (IAEA).

Interestingly, the reasons behind the IAEA's creation are remarkably similar; one only has to substitute AI for nuclear technology:

It was "created in 1957 in response to the deep fears and expectations generated by the discoveries and diverse uses of nuclear technology." It "is strongly linked to nuclear technology and its controversial applications, either as a weapon or as a practical and useful tool." It was "set up as the world’s ‘Atoms for Peace’ organization within the UN family" and … "given the mandate to work with its Member States and multiple partners worldwide to promote safe, secure, and peaceful nuclear technologies." It "shall seek to accelerate and enlarge the contribution of atomic energy to peace, health, and prosperity throughout the world. It shall ensure, so far as it is able, that assistance provided by it or at its request or under its supervision or control is not used in such a way as to further any military purpose."

Just as the IAEA played a crucial role in containing the use of a potentially self-destructive technology, a new IDAIA can play the same role for what is, probably rightly, seen as a new, potentially self-destructive technology.
