Two of the three godfathers of A.I. are professors based in Canada. One of them, Geoffrey Hinton of the University of Toronto, recently left his job at Google in order to speak more frankly about the risks of artificial intelligence.
The other, Yoshua Bengio of the Université de Montréal, echoed the sounding of that alarm in a recent open letter urging a pause in the development of increasingly powerful A.I. systems.
He’s among several high-profile figures, including Elon Musk and the Apple co-founder Steve Wozniak, who believe it’s time to pump the brakes, if only for six months.
[Read Dan Bilefsky’s 2019 Saturday Profile of Yoshua Bengio: He Helped Create A.I. Now, He Worries About ‘Killer Robots’]
“It is because there is an unexpected acceleration — I probably would not have signed such a letter a year ago — that we need to take a step back, and that my opinion on these topics has changed,” said Prof. Bengio in a post on his blog. “We succeeded in regulating nuclear weapons on a global scale after World War II, we can reach a similar agreement for A.I.”
Canada is a long way from enacting legislation, which is not expected to take effect before 2025. Several more months are needed to settle on regulations for the bill, C-27, the Artificial Intelligence and Data Act, which was introduced last June.
In the meantime, Canadian privacy regulators are investigating a complaint over whether the chatbot ChatGPT inappropriately collects, uses or discloses the data of Canadians without proper consent. On Thursday, the federal privacy commissioner’s office announced that its provincial counterparts in Quebec, British Columbia and Alberta would be joining the investigation.
“As regulators, we need to keep up with — and stay ahead of — fast-moving technological advances in order to protect the fundamental privacy rights of Canadians,” said Philippe Dufresne, the privacy commissioner, in a statement.
The uptake of public A.I. tools, especially ChatGPT — whose name refers to “Generative Pre-trained Transformer” — has been explosive. Analysts at the Swiss bank UBS estimated in a May report that ChatGPT, created by the San Francisco-based company OpenAI, reached more than 200 million monthly active users in April, double the number from January, though the trend shows signs of a plateau.
In less than six months, ChatGPT appears to have mushroomed. It has popped up on college campuses as a tool for fabricating coursework and has been restricted by universities. It’s being used to filter through romantic prospects on dating apps. And as my colleagues — the Times technology columnist Kevin Roose and Emma Goldberg, who covers the future of work for the Business desk — have found, these A.I. chatbots can lead to some pretty trippy conversations.
[Read Kevin’s story: A Conversation With Bing’s Chatbot Left Me Deeply Unsettled]
[Read Emma’s story: ChatFished: How to Lose Friends and Alienate People With A.I.]
The stock market had a taste of the more frightening consequences of misused A.I. earlier this week, when prices plunged after pictures showing a building near the Pentagon apparently on fire circulated on the internet. It turned out to be an A.I.-generated spoof.
While pushing for a slowdown in the development of powerful A.I. systems, Prof. Bengio, the founder and scientific director of Mila (also known as the Quebec Artificial Intelligence Institute), is also calling on the Canadian government to accelerate its regulation efforts.
Valerie Pisano, the president and chief executive of Mila, said there were steps that the government could immediately take — such as requiring appropriate labels to identify text, images or voices generated by A.I. — to protect the public from some of the technology’s dangers, like disinformation campaigns during elections and disruptions in the job market.
“This technology is coming into our lives much, much faster than anything we’ve seen before,” Ms. Pisano told me, adding that regulation was lagging too far behind. “This gap is creating a disequilibrium that is really worrisome.”
Ms. Pisano, along with academics, researchers and industry leaders in Canada, is asking the government to increase oversight through Bill C-27 and to move quickly.
Luke Stark, a researcher of the history and ethics of A.I. and an assistant professor at Western University in London, Ontario, told me he’s pleased to see the privacy commissioner’s investigation underway in the interim.
“The question of where the material that has been used to train these large language models comes from is clearly a big issue,” Prof. Stark said. That issue, he added, “has been a little bit lost in the rush to hype up these technologies, for good and ill.”
Voters in Alberta head to the polls on Monday to elect their next premier. Ian Austen visited Calgary and traveled through the rural southern region of Alberta, speaking with Conservative Canadians who were rethinking their choices after the party’s hard-right turn.
Grimes, a Canadian musician and producer, spoke to Joe Coscarelli, a culture reporter for The Times, about her new artificial intelligence tool that has “open-sourced” her voice.
For those traveling through Vancouver for a summer weekend, here are some recommendations for restaurants, shopping, places to visit and things to see in the city as part of The Times’s 36 Hours series.
The U.S. is competing with Canada and other countries to secure the critical minerals necessary for manufacturing electric cars and other items in the energy transition.
Are you an avid bird-watcher, or even a beginner? The Times’s Science desk is working on a project with the Cornell Lab of Ornithology and is fielding reader submissions for it. Learn more here.
Vjosa Isai is a reporter-researcher for The New York Times in Canada. Follow her on Twitter at @lavjosa.