On May 1, 2023, the New York Times reported "'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead": "Dr. Geoffrey Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life's work."
"“It is hard to see how you can prevent the bad actors from using it for bad things,... the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.” Dr. Geoffrey Hinton
In the March 23, 2023 YouTube video above, former Google CEO Eric Schmidt is interviewed by Walter Isaacson about A.I.'s impact on life, politics, and warfare, as well as what can be done to keep it under control. Schmidt thinks "We are on the cusp of a new revolution that is going to change our world in a really profound way - much more so than people think," and "at a pace faster than we have ever seen." Schmidt, together with Henry Kissinger and Daniel Huttenlocher, wrote the 2021 book The Age of AI: And Our Human Future.
We now have OpenAI with the GPT-4-powered ChatGPT; Google with Bard and its new Magi Project; Microsoft with the new AI-powered chat mode in Bing search and the Edge browser; and Elon Musk ramping up his efforts to compete with OpenAI, the ChatGPT developer he helped found, even as he calls out the potential harms of A.I.
In a 2023 TED Talk, computer scientist Yejin Choi "demystifies the current state of massive artificial intelligence systems like ChatGPT, highlighting three key problems with cutting-edge large language models":
Extreme scale. "AI models are so expensive to train, and only a few tech companies can afford to do so. So we already see the concentration of power."
Safety. "We are now at the mercy of those few tech companies because researchers in the larger community do not have the means to truly inspect and dissect these models."
Environmental impact. Their massive carbon footprint.
"We need to make AI smaller, to democratize it. And we need to make AI safer by teaching human norms and values." ~ Yejin Choi
"AI is trained on: raw web data, crafted examples custom developed for AI training, and then human judgments." It is the human feedback on AI performance that is essential to input.
Government regulation to control AI is the appropriate, immediate, actionable, coordinated, global response, but where will the political will come from? And what are the human norms and values that need to be taught to AI?
How can we encourage a long, "deep time" view of humanity's role in evolution, and accelerate a unified global worldview and the noosphere? How do we quickly mobilize an evolutionary hope for the future?
The challenge appears to be even more immediate than the global climate crisis. Maybe the superior "intelligence" of generative AI can be harnessed to save humanity from extinction if the international community can add guardrails to prevent bad actors from exploiting the new technology.
As Dr. Geoffrey Hinton said on the PBS NewsHour on May 5, 2023: "We should realize that we are probably going to get things more intelligent than us very soon, and they will be wonderful. They will be able to do all sorts of things that we find difficult, so there is huge positive potential in these things. But of course, there are also huge negative possibilities, and I think we should put more or less equal resources into developing AI to make it more powerful and into keeping it under control, to minimize bad side effects."