Bot fresh hell is this? Inside the rise of Artificial General Intelligence, or AGI
Very rarely does the world of tech innovators reach such consensus.
There are two things everyone in this world seems to agree on at the moment: 1. Artificial General Intelligence (AGI) is on its way. 2. We don’t know what it will look like or how it will evolve once it gets started.
AGI will be so different from everything we know that it will make programs such as ChatGPT seem rudimentary. These will be “systems that are generally smarter than humans,” OpenAI co-founder and CEO Sam Altman said in a statement posted online last year.
A key difference between generative AI and Artificial General Intelligence would be the latter’s ability to reach far beyond the data sets it is fed.
AI, as we know it, is ring-fenced by what it is taught; hence it is also described as “artificial narrow intelligence”, or ANI. For it to remain competent, its data sets must be actively updated with context and information.
AGI would not operate in this manner. To qualify as AGI, a program would need access to an evolving body of knowledge: say, the internet.
It would then need to be free to “learn” and develop “cognitive ability” in a free-range manner. Much like humans do. What would this mean?
In a simple example, AGI could come upon a video of two people throwing a ball back and forth, and begin to calculate the ball’s trajectory and momentum. It could later use this knowledge, and this new view of the physical world, to work out how a drone might be better deployed.
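To make the physics in that example concrete: the quantities involved reduce to textbook projectile equations. Here is a minimal sketch in Python, with entirely made-up numbers (the ball’s mass, launch speed and angle are hypothetical values an observer might estimate from video; nothing here comes from any actual AI system):

import math

# Illustrative only: ideal projectile motion (no air resistance),
# launched from and landing at ground level.
G = 9.81  # gravitational acceleration, in m/s^2

def projectile_stats(mass_kg, speed_ms, angle_deg):
    """Return momentum, flight time and horizontal range."""
    angle = math.radians(angle_deg)
    momentum = mass_kg * speed_ms                      # p = m * v
    flight_time = 2 * speed_ms * math.sin(angle) / G
    horiz_range = speed_ms ** 2 * math.sin(2 * angle) / G
    return momentum, flight_time, horiz_range

# Hypothetical values "read off" a video of a thrown ball.
p, t, r = projectile_stats(mass_kg=0.145, speed_ms=12.0, angle_deg=40.0)
print(f"momentum: {p:.2f} kg*m/s, airborne: {t:.2f} s, range: {r:.1f} m")

The equations themselves are trivial; the point of the AGI scenario is that no one would have to supply them, or the numbers, in advance.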
There is consensus within the tech community — at OpenAI, Meta, Google and Anthropic — that if such a program were created, it would, in the words of a 2023 IBM paper, be “more advanced than any human”.
These programs would be the kinds of “thinking machines promised by science fiction,” IBM stated in a paper published this April.
A race without rules?
The thing to remember is that these programs don’t exist yet. The thing to remember is that they could.
Computational limitations are holding things back. It would also take a leap in infrastructure, even in terms of simple essentials such as space, water and power, to make AGI possible. But think of the gap as the one between a rowboat and a ship: we have built the rowboat, and the free-floating foundations of the ship are in place.
This, in part, is why there have been so many resignations at OpenAI.
The company that created the pathbreaking generative AI model GPT lost much of its top rung this year, including chief scientist Ilya Sutskever, head of alignment Jan Leike, and chief technology officer Mira Murati.
All three quit, citing “safety” concerns and saying OpenAI was moving too fast, prioritising results over safeguards in ongoing experiments that aim to make the leap from AI to AGI.
Here among us...
There are those who believe the leap has already occurred.
Earlier this year, Elon Musk, who helped fund and co-found OpenAI in 2015 (and exited the board in 2018, amid conflicts of interest with his own companies, including Tesla), sued OpenAI, alleging that the latest version of GPT-4 is in fact an AGI product.
In his suit, Musk, who has since launched his own AI company, xAI, demands “judicial determination” of this.
It is unclear how a judge could effectively rule on such a plaint, which brings us to the huge gap between the rapid pace of AI development and the world’s lagging laws.
The effort to catch up, legislatively, is barely inching along.
When it comes to AGI, there was something of a shift last month — not in the laws, but at least in the approaches of key lawmakers.
At a subcommittee hearing in September, US senators heard testimony from whistleblowers at leading AI companies. By the end of the hearing, many appeared to have moved from their previously held opinion that AGI was “a vague metaphorical device” to the notion that, as one senator put it, “it is no longer that far out in the future.”
AGI experiments, meanwhile, continue. Google DeepMind, Anthropic, Microsoft and IBM are all believed to be working on programs of their own.
They and other advocates of the leap point to the good it could do. AGI could speed up and potentially improve medical diagnostics, and customise education in ways hitherto unforeseen.
OpenAI has said AGI could accelerate its own progress, making it hard to predict how quickly it would evolve. What about potential harm?
“A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too,” OpenAI noted in a statement in February 2023, when it first announced its efforts to create AGI. When Sutskever, Leike and Murati were still part of those efforts.
Before the lawsuit. Before the Senate hearing. In a before to which we may no longer be able to return.