Add Eric Schmidt to the list of tech luminaries concerned about the dangers of AI. The former Google chief told guests at The Wall Street Journal‘s CEO Council Summit that AI represents an “existential risk” that could get many people “harmed or killed.” He doesn’t consider the threat serious at the moment, but he sees a near future where AI could help find software security flaws or discover new kinds of biology. It’s important to ensure these systems aren’t “misused by evil people,” the veteran executive said.
Schmidt doesn’t have a firm solution for regulating AI, but he believes there won’t be an AI-specific regulator in the US. He served on the National Security Commission on AI, which reviewed the technology and published a 2021 report concluding that the US wasn’t ready for its impact.
Schmidt no longer has direct influence over AI development. However, he joins a growing number of well-known moguls who have argued for a careful approach. Current Google CEO Sundar Pichai has cautioned that society needs to adapt to AI, while OpenAI leader Sam Altman has expressed concern that authoritarians might abuse these algorithms. In March, numerous industry leaders and researchers (including Elon Musk and Steve Wozniak) signed an open letter calling on companies to pause AI experiments for six months while they rethought the safety and ethical implications of their work.
There are already multiple ethics issues in play. Schools are banning OpenAI’s ChatGPT over fears of cheating, and there are worries about inaccuracy, misinformation and access to sensitive data. In the long term, critics are concerned that job automation could leave many people out of work. In that light, Schmidt’s comments are more an extension of current warnings than a logical leap. They may be “fiction” today, as the ex-CEO notes, but not necessarily for much longer.