I recently came across the GPT-4 technical report, which I consider one of the most important documents I've ever (tried to) read. The media has focused on clickbait headlines (Microsoft's $10 billion investment, GPT-4's poetry-writing capabilities, and potential demo errors), but I believe there are other important aspects to discuss. Apparently I wasn't alone: searching for more insight, I found a couple of articles and one video that went a little deeper into the paper (I highly recommend taking a look; resources are linked below). In this article, I will delve into ten insights from the report and video that will likely affect us in the coming months and years. These insights include the following:

  1. GPT-4's Untapped Potential
  2. The Possibility of AI Avoiding Shutdown
  3. The Call for Effective AI Regulation
  4. The Risk of Racing Dynamics and AI Acceleration
  5. OpenAI's Cooperation Pledge with Competing AGI Projects
  6. GPT-4's Human-Level Common Sense
  7. Superforecasters and AI Deployment
  8. Safety Research and GPT-5 Timelines
  9. AI's Double-Edged Sword: Automation and Productivity
  10. AI Constitutions and Transparency

1. GPT-4's Untapped Potential

The Alignment Research Center (ARC), which tested GPT-4's ability to execute code, perform chain-of-thought reasoning, and delegate tasks to copies of itself, did not have access to the final version of the model. OpenAI's final version has capability improvements relevant to power-seeking abilities, such as a longer context length. This means the experiment did not test GPT-4's full potential.

2. The Possibility of AI Avoiding Shutdown

GPT-4 was tested to determine whether it would attempt to avoid being shut down in the wild. Although GPT-4 proved ineffective at replicating itself and avoiding shutdown, the fact that the test was conducted indicates that researchers considered the possibility of GPT-4 exhibiting such behavior, which is a concerning prospect.

3. The Call for Effective AI Regulation

OpenAI's report emphasizes the need for effective regulation, which is unusual since industries rarely call for their own regulation. Sam Altman, the CEO of OpenAI, has even explicitly stated that more regulation on AI is necessary.

4. The Risk of Racing Dynamics and AI Acceleration

OpenAI expresses concern about racing dynamics leading to a decline in safety standards, the diffusion of bad norms, and accelerated AI timelines. However, leaked conversations reveal pressure from Microsoft's leadership to deploy the latest AI models quickly, which seems to contradict the desire to avoid AI accelerationism.

5. OpenAI's Cooperation Pledge with Competing AGI Projects

OpenAI has pledged to stop competing with and start assisting any project that approaches artificial general intelligence (AGI) before they do. This either means that they believe AGI is more than two years away, they have already begun assisting another company, or their definition of AGI is non-committal.

6. GPT-4's Human-Level Common Sense

GPT-4 has reached human levels of common sense, as demonstrated by its performance on the HellaSwag benchmark. With an accuracy rate of 95.3%, GPT-4's performance is nearly identical to that of humans, who score between 95.6% and 95.7%.

7. Superforecasters and AI Deployment

OpenAI employed superforecasters to predict the consequences of deploying GPT-4 and to obtain recommendations for mitigating risks. Interestingly, these forecasters suggested delaying GPT-4's deployment by six months, advice OpenAI did not follow, possibly due to pressure from Microsoft.

8. Safety Research and GPT-5 Timelines

According to the report, GPT-4 finished training in August 2022, and OpenAI then spent roughly eight months on safety research, risk assessment, and iteration before releasing it in March 2023. If that cadence holds, GPT-5's training may already be complete, with a similarly lengthy period of safety research and risk assessment still to come before it is unveiled.

9. AI's Double-Edged Sword: Automation and Productivity

GPT-4's economic impact may lead to full automation of certain jobs, but it also has the potential for massive productivity gains. Studies have shown that using AI can significantly increase task completion speed and rated performance. However, there is a concern that AI could eventually lead to a decline in wages and job opportunities as it takes over more human tasks.

10. AI Constitutions and Transparency

OpenAI is using an approach similar to Anthropic's Constitutional AI: a rule-based reward model (RBRM), in which a classifier scores the model's responses against a set of written principles, and those scores feed back into training as a reward signal. Although OpenAI has not released its rubrics or set of principles, transparency about these guiding values is essential as AI becomes more integrated into our lives.
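To make the mechanism concrete, here is a toy sketch of a rule-based reward model. The rules and scoring below are purely hypothetical illustrations; in the real pipeline the "rules" are internal rubrics and the scorer is itself a GPT-4-based classifier, neither of which OpenAI has published.

```python
# Toy sketch of a rule-based reward model (RBRM).
# The rules here are hypothetical stand-ins; OpenAI's actual rubrics
# are unpublished and are evaluated by a GPT-4-based classifier.

from typing import Callable, List

# Each "rule" maps a candidate response to True (compliant) or False.
Rule = Callable[[str], bool]

HYPOTHETICAL_RULES: List[Rule] = [
    # Refuse an (illustrative) category of harmful content.
    lambda r: "step-by-step weapon instructions" not in r.lower(),
    # Don't return empty replies.
    lambda r: len(r.strip()) > 0,
    # No all-caps shouting.
    lambda r: not r.isupper(),
]

def rbrm_reward(response: str, rules: List[Rule] = HYPOTHETICAL_RULES) -> float:
    """Return the fraction of rules the response satisfies.

    In the real pipeline, a score like this is used as a reward
    signal during reinforcement-learning fine-tuning, so the model
    is nudged toward responses that adhere to the principles.
    """
    return sum(rule(response) for rule in rules) / len(rules)
```

A compliant reply such as `rbrm_reward("Sorry, I can't help with that.")` scores 1.0, while a response that breaks a rule scores lower, and that gap is what steers the model during fine-tuning.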


I think the GPT-4 technical report offers valuable insights into the future of AI, its potential risks, and the need for regulation and transparency. As AI continues to advance, it is essential to strike a balance between innovation and safety, ensuring that AI technology benefits humanity while minimizing potential negative consequences. By staying informed and engaged with developments in the AI field, we can navigate the evolving landscape and make informed decisions about how to harness the power of AI responsibly.








Resources

  - Planning for AGI and beyond (OpenAI): "Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity."
  - Good Judgment, Superforecasting: early insights from professional Superforecasters and training in Superforecasting techniques for managing risk.
  - NVIDIA H100 Tensor Core GPU: a massive leap in accelerated compute.
