By: Jennifer Elias
Key points:
- In an interview with CBS’ “60 Minutes” that aired Sunday, Google CEO Sundar Pichai hinted that society isn’t prepared for the rapid advancement of AI.
- Pichai said laws that guardrail AI advancements are “not for a company to decide” alone.
- Warning of consequences, he said AI will impact “every product of every company.”
Google and Alphabet CEO Sundar Pichai said “every product of every company” will be impacted by the quick development of AI, warning that society needs to prepare for technologies like the ones it’s already launched.
In a CBS “60 Minutes” interview that aired Sunday and struck a concerned tone, interviewer Scott Pelley tried several of Google’s artificial intelligence projects and said he was left “speechless” and found them “unsettling,” referring to the human-like capabilities of products like Google’s chatbot Bard.
“We need to adapt as a society for it,” Pichai told Pelley, adding that jobs that would be disrupted by AI would include “knowledge workers,” including writers, accountants, architects and, ironically, even software engineers.
“This is going to impact every product across every company,” Pichai said. “For example, you could be a radiologist, if you think about five to 10 years from now, you’re going to have an AI collaborator with you. You come in the morning, let’s say you have a hundred things to go through, it may say, ‘these are the most serious cases you need to look at first.’”
Pelley toured other Google units working on advanced AI, including DeepMind, where robots were playing soccer they had learned on their own rather than from humans. Another unit showed robots that recognized items on a countertop and fetched Pelley the apple he asked for.
When warning of AI’s consequences, Pichai said that the scale of the problem of disinformation and fake news and images will be “much bigger,” adding that “it could cause harm.”
Last month, CNBC reported that internally, Pichai told employees that the success of its newly launched Bard program now hinges on public testing, adding that “things will go wrong.”
Google launched its AI chatbot Bard as an experimental product to the public last month. It followed Microsoft’s January announcement that its Bing search engine would incorporate OpenAI’s GPT technology, which had garnered international attention after ChatGPT launched in 2022.
However, fears about the consequences of this rapid progress have also reached the public and critics in recent weeks. In March, Elon Musk, Steve Wozniak and dozens of academics called for an immediate pause on training “experiments” connected to large language models “more powerful than GPT-4,” OpenAI’s flagship LLM. More than 25,000 people have since signed the letter.
“Competitive pressure among giants like Google and startups you’ve never heard of is propelling humanity into the future, ready or not,” Pelley commented in the segment.
Google has published a document outlining “recommendations for regulating AI,” but Pichai said society must quickly adapt, with regulation, laws to punish abuse, treaties among nations to make AI safe for the world, and rules that “align with human values including morality.”
“It’s not for a company to decide,” Pichai said. “This is why I think the development of this needs to include not just engineers but social scientists, ethicists, philosophers and so on.”
When asked whether society is prepared for AI technology like Bard, Pichai answered, “On one hand, I feel no, because the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology is evolving, there seems to be a mismatch.”
However, he added that he’s optimistic because, compared with other technologies in the past, people have “started worrying about the implications” early on.
From a six-word prompt by Pelley, Bard invented a tale with its own characters and plot, including a man whose wife couldn’t conceive and a stranger grieving after a miscarriage and longing for closure. “I am rarely speechless,” Pelley said. “The humanity at superhuman speed was a shock.”
Pelley said he asked Bard why it helps people and it replied “because it makes me happy,” which Pelley said shocked him. “Bard appears to be thinking,” he told James Manyika, a senior vice president Google hired last year as head of “technology and society.” Manyika responded that Bard is not sentient and not aware of itself but it can “behave like” it.
Pichai also acknowledged that Bard has a lot of hallucinations, after Pelley explained that he had asked Bard about inflation and received an instant response recommending five books that, when he checked later, didn’t actually exist.
Pelley also seemed concerned when Pichai said there is “a black box” with chatbots, where “you don’t fully understand” why or how they come up with certain responses.
“You don’t fully understand how it works and yet you’ve turned it loose on society?” Pelley asked.
“Let me put it this way, I don’t think we fully understand how a human mind works either,” Pichai responded.