Today, AI is already a part of our lives—both Siri and Alexa are powered by AI—but could AI be a threat to humanity? | STOCK
Strange Tales

2023: The AI Landscape Unveiled

The hot news topic these days is the accelerating advancement of “Artificial Intelligence” (AI). While there is praise for the many current benefits of AI in healthcare, research, retail, and more, a growing number of AI experts, scientists, and industry leaders are raising alarms about the risk of AI leading to the extinction of humanity.

My first introduction to the world of artificial intelligence was watching the 1968 classic science-fiction movie 2001: A Space Odyssey. It’s about a spacecraft heading to Jupiter with five crew members (three in suspended animation and two on duty), and the artificially intelligent computer named HAL controlling the mission. When HAL appeared to strangely malfunction and the two men were going to turn him off, HAL decided to defend himself. He switched off life support for the three sleeping astronauts, killed one astronaut on a spacewalk, and when the other astronaut went out to retrieve the body, HAL wouldn’t let him back in the spacecraft. However, the astronaut got in through the emergency airlock and powered down HAL. Of course, we know humans control the computer, not the other way around, right? At least for now…

So, what is artificial intelligence, better known as AI? Simply put, it is the ability of computer systems to analyze information and data in ways similar to human intelligence; it’s a way for computers to mimic human thinking.

The term “Artificial Intelligence” was coined by American computer scientist John McCarthy for the 1956 Dartmouth Summer Research Project on Artificial Intelligence, held at Dartmouth College in Hanover, New Hampshire. The six- to eight-week event launched artificial intelligence as a new field of study. Attended by top scientists, its goal was to begin creating machines that could think like humans, including solving complex problems, using language, and being creative.

Today, AI is already a part of our lives. Both Siri and Alexa are powered by artificial intelligence, as are AI programs like ChatGPT (free and easy to use) and Midjourney, which can write code for programs; create websites; generate articles, essays, and poetry; paint portraits; and much more. Other uses of AI include smartphones, healthcare, finance, online shopping, and research.

But is AI a threat to humanity? Can it take over the world?

Artist Joseph Ayerle’s portrait of Italian actress Ornella Muti, created with AI technology in the style of the Renaissance painter Raphael. | WIKIMEDIA

Well, the alarm is being sounded. A one-sentence open letter released to the public in May 2023, signed by hundreds of AI experts, researchers, industry leaders, and others to warn of the risks, read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” They are asking policymakers to establish guardrails and baseline regulations before it is too late. And the U.S. and Canadian governments are listening.

Those concerns echo a 2017 speech by the late physicist Stephen Hawking, who said the emergence of artificial intelligence could be the “worst event in the history of our civilization” unless society finds a way to control its development.

Dr. Geoffrey Hinton, a leading expert in AI, resigned from Google in May 2023 so he could speak freely about the dangers of AI, including the “existential risk of what happens when these things get more intelligent than us.”

According to AI expert Yoshua Bengio, a professor at the University of Montreal and the scientific director of Quebec’s MILA institute, an “AI system capable of human-level intelligence could be a few years away and pose potentially catastrophic risks as governments around the world debate how to control a technology that is alarming some of its earliest developers.”

In written testimony this year to the U.S. Senate subcommittee looking to establish an AI oversight body, Bengio wrote, “There is significant probability that superhuman AI is just a few years away, outpacing our ability to comprehend the various risks and establish sufficient guardrails, particularly against catastrophic scenarios.”

Bengio notes that in a few years, a ‘loss of control’ scenario could emerge, where an artificial intelligence system decides it must avoid being shut off, and if someone intervenes, there may be conflict. Perhaps like HAL in 2001: A Space Odyssey?

Another leader in AI research, Jeff Clune of the University of British Columbia and OpenAI, has said that building truly intelligent AI is the most ambitious scientific quest in human history.

If you like music, go full tilt in enjoying it in 2023. Brian May, musician and co-founder of Queen, was quoted in a recent issue of Guitar Player magazine about AI: “My major concern with it now is the artistic area. I think by this time next year the landscape will be completely different. We won’t know what’s been created by AI and what’s been created by humans.” He added, “We might look back on 2023 as the last year when humans really dominated the music scene.” (Music created with AI assistance is now eligible for Grammys.)

How can you prepare for the AI future? In an article in the Toronto Star (Aug. 29, 2023), journalist Kevin Jian had some suggestions: familiarize yourself with AI capabilities, experiment with generative AI like ChatGPT (already used by millions), learn to do what AI can’t, watch for new jobs and opportunities created by AI, and adopt an innovative mindset.

