It’s Here, Now What?

Published: October 3, 2023

By Jim Lichtman

I’ve written about Artificial Intelligence (AI) before. Last February, I wrote about an obscure 1970 science fiction film, Colossus: The Forbin Project.

The premise of the story goes like this: to prevent the mass destruction of civilization, a brilliant U.S. scientist builds an immense computer that, once the key is turned, can stop all global nuclear conflicts. Everyone applauds the achievement; even the president signs off on the project as the ultimate safeguard against self-annihilation.

However, as soon as Colossus is activated, amid the whir of lights and billions of computations and analyses per second, the supercomputer discovers a Russian counterpart designed for the same purpose.

Today’s artificial intelligence, however, is a different animal. It is machine intelligence: comprehensive software that could compress scientific achievements from years into months, even weeks, helping to combat disease and hunger and even manage the conditions that drive climate change.

For all its super speed, efficiency, and clear benefit to mankind, it comes with a rather large price tag, a dark side, if you will. It can create or manipulate words and images that closely mimic human behavior, thinking, and writing; it can generate an incredible volume of false information; and . . . it can make it all believable: the ultimate pseudo-source for conspiracy theories.

The United Nations University has put together a charter that begins a conversation that needs to happen among scientists, regulators, ethicists, and individuals from a variety of fields of research and learning:

“The AI genie is very much out of the bottle. We should not, however, attempt to put it back in. Instead, we should look to harness the transformative potential of AI for our common good. We, therefore, propose a “Charter of Rights for the Global AI Revolution” — an inclusive, collectively developed, multi-stakeholder charter of rights that will guide the development of AI and lay the groundwork for the future of beneficial human/machine co-existence.

What would such a process look like? Ideally, the initiative would aim to create a global, multi-stakeholder institution for AI governance, with the capacity to track and analyze worldwide developments and discuss them together. Multi-sectoral participation would be crucial, and there would need to be recognition that promotion of innovation, openness, and equity outweighed issues of sovereignty and national interests. Such an institution could act independently or under the auspices of the United Nations, though there would need to be better measures to ensure inclusivity than those currently in the Bretton Woods institutions.

Such an institution would require a foundational document to guide it — the Charter of Rights for the Global AI Revolution. Key questions that would need to be addressed within the Charter would include:

How can a balance be struck between the transformative role of AI in creating “better” decisions, and the risks that it will impinge too deeply upon human decision-making?

What role should AI play in sociopolitical processes such as elections, education, and opinion-forming?

How can we ensure that data does not become discriminatory or used falsely to the harm of some?

To what extent should AI focus on social benefits versus the rights of the individual?

What kind of institution and sets of rules would best reflect the risks, benefits, and rapid transformations of AI?”

It’s here. Now, what are we going to do to prevent harm while doing good?
