  • Rick Bonetti

Artificial Intelligence for Good

Updated: Feb 18

On February 13, 2024, the Washington Post reported that leaders of major AI companies had agreed to limit election 'deepfakes' but stopped short of an outright ban. The next day, February 14, 2024, the Post reported that "Russia, China, and other U.S. adversaries are using the newest wave of artificial intelligence tools to improve their hacking abilities and find new targets for online espionage, according to a report Wednesday from Microsoft and its close business partner OpenAI."

Is corporate self-regulation sufficient to protect humanity from the negative aspects of Artificial Intelligence? What steps need to be taken to ensure that technology will serve humanity well?

Here is a brief description of some efforts to harness AI for good:

The AI for Good Global Summit 2024: Accelerating the United Nations Sustainable Development Goals is scheduled for May 30-31, 2024, in Geneva, Switzerland. "The AI for Good Global Summit is the leading action-oriented, United Nations platform promoting AI to advance health, climate, gender, inclusive prosperity, sustainable infrastructure, and other global development priorities. AI for Good is organized by the International Telecommunication Union (ITU) – the UN specialized agency for information and communication technology – in partnership with 40 UN sister agencies and co-convened with the government of Switzerland."

The AI Now Institute "produces diagnosis and actionable policy research on artificial intelligence." Founded in 2017, the institute "develops policy strategy to redirect away from the current trajectory: unbridled commercial surveillance, consolidation of power in very few companies, and a lack of public accountability." According to Wikipedia, the "AI Now Institute grew out of a 2016 symposium spearheaded by the Obama White House Office of Science and Technology Policy. The event was led by Meredith Whittaker, the founder of Google's Open Research group, and Kate Crawford, a principal researcher at Microsoft Research. The event focused on near-term implications of AI in social domains: Inequality, Labor, Ethics, and Healthcare."

Center for Humane Technology (CHT) is an independent 501(c)(3) nonprofit "working to align technology with humanity's best interests." Their work was featured in the 2020 docudrama "The Social Dilemma." CHT offers many free resources:

  • Top Tech Podcast: Your Undivided Attention

  • Free Courses: Foundations of Humane Technology; How Tech Affects Democracy; and How Tech Affects Kids & Youth

  • Research Library: Ledger of Harms

Partnership on Artificial Intelligence to Benefit People and Society (better known as Partnership on AI) is a non-profit coalition formed in 2016. The Partnership on AI "brings together diverse voices from the AI community to address important questions about our future with AI." The organization has released PAI's Guidance for Safe Foundation Model Deployment, a primer on AI safety. After what it calls a collective wake-up call in 2023, the Partnership on AI sees 2024 as a call to action: "Starting with the UN's 17 Sustainable Development Goals, we need to set our AI ingenuity and expectations high and engage creatively and inclusively with people and communities to get there."

"Artificial general intelligence has the potential to benefit nearly every aspect of our lives—so it must be developed and deployed responsibly." ~ OpenAI

Stuart Russell, in his 2019 book Human Compatible: Artificial Intelligence and the Problem of Control, suggests that "we can rebuild AI on a new foundation with machines designed to be inherently uncertain about the human preferences they are required to satisfy. Such machines would be humble, altruistic, and committed to pursuing our objectives, not theirs. This new foundation would allow us to create provably deferential and beneficial machines."

Stuart Russell was one of the presenters in the Science of the Noosphere Master Class I took in the summer of 2023. In conversation with David Sloan Wilson and Terrence Deacon, Russell said this about preferences:

"There are futures we want to avoid, such as extinction and enslavement and various other dystopias, and futures that we would like to bring about. And this concept of the noosphere is really important to that because we're not born with these preferences, they result from our immersion in the noosphere. And so understanding the dynamics of that is extremely important because to some extent, our preferences about the future end up determining what future we get."

Journalist Robert Wright, a presenter at Human Energy's N2 Conference last November, said, "Artificial intelligence is the crystallization of the noosphere.... if the age of AI is going to work out well, there will have to be at least some movement toward the goals they identified—a more unified global political community and more in the way of international affinity and sympathy." He thinks we should be "looking at AI in its broadest evolutionary context." So do I.

I believe technology, and artificial intelligence in particular, can greatly benefit humanity if we as a society reward positive efforts to harness "AI for good" and provide sufficient global regulation and enforcement to minimize potential harms.
