“A lot of the headlines have been saying that I think it should be stopped now—and I’ve never said that,” he says. “First of all, I don’t think that’s possible, and I think we should continue to develop it because it could do wonderful things. But we should put equal effort into mitigating or preventing the possible bad consequences.”
Hinton says he didn’t leave Google to protest its handling of this new form of AI. In fact, he says, the company moved relatively cautiously despite having a lead in the area. Researchers at Google invented a type of neural network known as a transformer, which has been crucial to the development of models like PaLM and GPT-4.
In the 1980s, Hinton, a professor at the University of Toronto, along with a handful of other researchers, sought to give computers greater intelligence by training artificial neural networks with data instead of programming them in the conventional way. The networks could digest pixels as input, and, as they saw more examples, adjust the values connecting their crudely simulated neurons until the system could recognize the contents of an image. The approach showed flashes of promise over the years, but it wasn’t until a decade ago that its real power and potential became apparent.
That’s when a new generation of many-layered artificial neural networks—fed copious amounts of training data and run on powerful computer chips—was suddenly far better than any existing program at labeling the contents of photographs.
In 2018, Hinton was given the Turing Award, the most prestigious prize in computer science, for his work on neural networks. He received the prize together with two other pioneering figures: Yann LeCun, Meta’s chief AI scientist, and Yoshua Bengio, a professor at the University of Montreal.
The technique, known as deep learning, kicked off a renaissance in artificial intelligence, with Big Tech companies rushing to recruit AI experts, build increasingly powerful deep learning algorithms, and apply them to products such as face recognition, translation, and speech recognition.
Google hired Hinton in 2013 after acquiring his company, DNNResearch, founded to commercialize his university lab’s deep learning ideas. Two years later, one of Hinton’s grad students who had also joined Google, Ilya Sutskever, left the search company to cofound OpenAI as a nonprofit counterweight to the power being amassed by Big Tech companies in AI.
Since its inception, OpenAI has focused on scaling up the size of neural networks, the volume of data they guzzle, and the computer power they consume. In 2019, the company reorganized as a for-profit corporation with outside investors, and later took $10 billion from Microsoft. It has developed a series of strikingly fluent text-generation systems, most recently GPT-4, which powers the premium version of ChatGPT and has stunned researchers with its ability to perform tasks that seem to require reasoning and common sense.
Hinton believes we already have a technology that will be disruptive and destabilizing. He points to the risk, as others have done, that more advanced language algorithms will be able to wage more sophisticated misinformation campaigns and interfere in elections.