Standing at the Gates of AGI: Why Do We Begin to Feel Fear?
Richard Yang
2 replies
Has AGI come too fast?🤯
Who would have thought that a drama shaking the entire tech world would stem from a letter?
According to Reuters, prior to Sam Altman’s dismissal, OpenAI researchers sent a warning letter to the company’s board, cautioning that a powerful artificial intelligence could threaten humanity.
This is an AI model named Q* (pronounced "Q-star"), which insiders believe could be OpenAI's latest breakthrough toward AGI (Artificial General Intelligence).
For a long time, Silicon Valley leaders have been embroiled in endless debates over “AI safety.”
Perhaps that is precisely why this discovery, possibly the next revolutionary technology, was prematurely exposed by insiders.
As for what the new model actually is, little is known so far, but most of OpenAI's community seems unwelcoming of Q*'s arrival.
What is Q*?
Based on media exposure, let’s briefly introduce Q*.
Q*'s predecessor was the GPT-zero project, launched in 2021 by Ilya Sutskever's team to tackle the training-data problem with synthetic data.
Whereas training data for large models previously came mostly from personal data scraped from the internet, the GPT-zero project could train on computer-generated data, immediately easing the data-source bottleneck.
For AI companies, data is a resource, and high-quality language data in particular directly determines the quality of large models.
In the large-model race, AI companies start with billions of parameters and feed their models datasets measured in terabytes. Not only could that data eventually be exhausted, but its price is also skyrocketing.
Synthetic data therefore works like a perpetual motion machine: it can generate high-quality data indefinitely, resolving the data problem.
When discussing the GPT-zero project, Elon Musk commented, “Synthetic data will exceed that by a zillion.”
(The remark Musk was responding to: "It's a little sad that you can fit the text of every book ever written by humans on one hard drive.")
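To make the idea concrete, here is a minimal, hypothetical sketch of what a synthetic-data loop looks like: a program generates problems and answers procedurally, filters them, and writes the resulting (prompt, answer) pairs out as training examples. The toy arithmetic task, file name, and filtering rule below are all made up for illustration; real pipelines typically use a strong model to generate candidates and a verifier to keep only the good ones.

```python
import json
import random

def generate_example():
    """Procedurally generate one synthetic (prompt, answer) training pair.
    The 'task' here is toy arithmetic; a real pipeline would have a strong
    model write the data instead."""
    a, b = random.randint(1, 999), random.randint(1, 999)
    return {"prompt": f"What is {a} + {b}?", "answer": str(a + b)}

def passes_filter(example):
    """Stand-in quality filter: keep only examples whose answer checks out."""
    total = sum(int(tok) for tok in example["prompt"].rstrip("?").split() if tok.isdigit())
    return example["answer"] == str(total)

# Generate as many examples as needed -- the "perpetual motion machine"
# described above: training data is produced, not scraped.
with open("synthetic_train.jsonl", "w") as f:
    for _ in range(10_000):
        ex = generate_example()
        if passes_filter(ex):
            f.write(json.dumps(ex) + "\n")
```

The arithmetic itself is beside the point; the shape of the loop is what matters: generation plus filtering replaces scraping.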
Building on GPT-zero's results, OpenAI senior researchers Jakub Pachocki and Szymon Sidor built the Q* model. Although its current ability may be modest (roughly elementary-school math), professionals believe the model can solve mathematical problems it has never seen before.
This also involves another technology — the Q-learning algorithm.
Q-learning is a classic reinforcement learning algorithm: it has strong planning capabilities but is neither universal nor generalizable. The strength of large models, by contrast, is their nearly human-level generalization, also known as extrapolation.
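For readers unfamiliar with the algorithm, below is a minimal sketch of tabular Q-learning on a made-up toy environment (a one-dimensional walk toward a goal state). The environment, rewards, and hyperparameters are illustrative only and have nothing to do with whatever OpenAI's Q* actually is.

```python
import random
from collections import defaultdict

N_STATES = 6          # states 0..5, state 5 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)  # Q[(state, action)] -> estimated long-term value

def step(state, action):
    """Apply an action; reward 1 only when the goal state is reached."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: move Q toward reward + discounted best future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# Greedy policy learned for each state (it should point toward the goal)
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

This is where the "planning but not generalizing" trade-off shows up: the learned table only covers states it has visited, whereas a large model can extrapolate to inputs it has never seen.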
Combine the two and you get a model with both planning and generalization capabilities, one much closer to the human brain and capable of autonomous learning and self-improvement. The feared end result: a system highly likely to exhibit autonomous decision-making, perhaps even a degree of self-awareness, approaching an AGI that surpasses humans.
If the reports are true, it is easy to imagine that Ilya Sutskever, representing the cautious faction, led Altman's dismissal over disagreements about commercialization and safety.
In July this year, Ilya Sutskever formed a team dedicated to limiting potential safety threats from AI.
After Altman’s return to OpenAI, Ilya Sutskever was unable to remain on the board.
So far, OpenAI has not responded to the reports on Q*, and whether it has actually achieved AGI remains to be seen in future reporting...
(upvote and read more 👇)
Replies
Richard Yang@richardpaker
What Are People Worried About?
Following the report, intense discussions ensued on OpenAI’s developer forum.
From several highly-liked comments, it’s clear that many are apprehensive about Q*’s arrival.
Many worry that AGI is arriving faster than imagined and that its negative impacts will outweigh the benefits large models bring.
This inevitably recalls the earlier debates among Silicon Valley giants over "AI safety": why do fear and unease emerge on the eve of a new era?
Let's refer to a recent DeepMind paper on defining levels of AGI.
To assess whether a system meets the definition of AGI, six dimensions should be considered: breadth and performance, autonomy, persistence, recursive self-improvement, autonomous task selection, and goal-drivenness.
On the breadth-and-performance dimension alone, the highest level is Superhuman AGI, i.e., outperforming 100% of humans on all tasks; only at that level can a system be considered true AGI.
When considering other dimensions, we find that AGI’s development could lead to many ethical and safety issues.
Ultimately, in the face of opportunities, risks are unavoidable: in some specific areas, AGI might surpass humans, replace humans in key roles, or undergo recursive self-improvement. The realization of such AGI would fundamentally change our society, economy, and culture.
Interestingly, netizens’ fear of AGI stems more from concerns about AI companies’ monopolies.
As seen earlier, defining AGI is extremely complex. Asking different AI experts might yield interconnected but varying answers.
Under such circumstances, deciding whether something is AGI entirely depends on the leaders of AI companies.
Once artificial intelligence becomes commercially viable and lucrative, such a product could become a monopoly.
We all know Sam Altman is particularly persistent about commercializing AI; as a representative of the aggressive faction, he has always pushed proactively toward AGI.
However, under the non-profit governance structure, the board could directly oust anyone who initiated dangerous or anti-humanitarian actions, whether that was Sam Altman or even Microsoft.
That is essentially what happened in the early stage of this incident.
However, with Sam Altman's return to OpenAI and the reorganization of the board, the three current members are not independent directors. In other words, the board can no longer realistically oust anyone; after all, the previous board was itself overturned.
With that internal check gone, the only restraint left on OpenAI is government regulation.
Yet, as OpenAI's biggest backer, Microsoft has always played an active role in shaping U.S. government AI regulation, which ultimately blunts the government's regulatory impact on OpenAI.
In the face of such interests, discussion of Q*, and of AGI technology itself, seems to have dwindled.
LegalNow AI: Effective Legal Solutions for Entrepreneurs and Small Business Owners in the US
Time and resources are invaluable for entrepreneurs and small business owners like you. That’s where LegalNow AI comes in, crafted specifically to meet your legal document needs. Harnessing cutting-edge artificial intelligence, LegalNow offers lawyer-grade legal document drafting and review services at affordable prices, ensuring your documents meet the highest legal standards. Whether it’s drafting contracts, preparing legal opinions, or reviewing company policies, LegalNow handles it swiftly and efficiently, saving you both time and money. Say goodbye to cumbersome traditional legal processes and keep your business legally ahead and stress-free.
For more information, please visit: https://ai.legalnow.xyz/