Sam Altman, CEO of OpenAI, released a 13-page document on Monday comparing the shift toward superintelligence to past major technological transitions such as electricity and the combustion engine. The document is a comprehensive proposal on how governments should tax, regulate, and redistribute the wealth generated by AI.

Six major insights from Altman's plan:

1-Shared Benefits

Altman advocates a proactive policy response, similar to those of the Progressive Era or the New Deal, to ensure AI breakthroughs translate into shared opportunities for a broad spectrum of people, not just a few powerful entities. He proposes principles for an AI-centered industrial policy: sharing prosperity broadly, mitigating risk and building governance, and democratizing access and agency.

2-AI-Driven Tax, Wealth Fund

In the paper, Altman also outlines initial policy ideas, such as modernizing the tax system. Policymakers, he said, could raise taxes on capital gains, corporate income, and AI-driven profits, or introduce taxes on automation, while offering wage-linked incentives to help firms retain and retrain workers. These measures would fund essential programs and support workforce shifts in an AI-driven economy.

He also called for creating a Public Wealth Fund: policymakers and AI companies could collaborate on a fund that invests in AI-driven growth across companies, Altman said. Returns from the fund could be distributed to citizens, letting everyone benefit directly from AI's economic upside.

3-Four-Day Workweek

Altman suggests using AI efficiency gains to boost worker benefits, fund healthcare and retirement, and test shorter workweeks without reducing pay, turning saved hours into permanent time off or a four-day workweek.

4-Policy Pilots & Global AI

He suggests that policy experiments be piloted by non-government groups, with successful approaches reinforced by the state through regulation, procurement, and investment. The document also emphasizes the need for global cooperation, as the transition to superintelligence is already underway worldwide.

5-Containing Dangerous AI

Societies should create and test plans to contain dangerous AI systems that can't easily be recalled, focusing on limiting their spread, reducing harm, and coordinating responses—similar to strategies used in cybersecurity and public health.

6-Strengthening Safety Nets

Altman urges authorities to ensure safety nets such as unemployment insurance, SNAP, Social Security, Medicaid, and Medicare work effectively and at scale. He recommends tracking AI's impact on jobs and wages in real time, then automatically expanding temporary support, such as cash assistance, wage insurance, or training, when disruptions exceed set thresholds, and scaling it back as conditions stabilize.

OpenAI presents these ideas as a starting point for a global, inclusive conversation on shaping AI's benefits. Progress will rely on ongoing collaboration, experimentation, and feedback, supported by fellowships, research grants, and discussions at the new OpenAI Workshop.

The proposal comes at a time when AI development is accelerating at an unprecedented rate. In December, Google (NASDAQ:GOOG) (NASDAQ:GOOGL) CEO Sundar Pichai predicted that AI tools would soon make decisions on behalf of users, from guiding investment choices to reviewing medical treatments.

By March, NVIDIA (NASDAQ:NVDA) CEO Jensen Huang claimed that Artificial General Intelligence (AGI) had already been achieved. Huang said an AI does not need a lasting presence to qualify: even a short-lived AI that builds a viral app and earns a billion dollars would count, much like fleeting dot-com-era companies.

The proposal also follows a stark warning from Anthropic CEO Dario Amodei about the perils of rapidly advancing AI. Amodei pointed to risks such as "powerful AI" emerging within one to two years with unpredictable autonomy, potential misuse by states or other actors to seize power, severe labor market disruption and inequality, and deep societal impacts that current governance systems are not prepared for.

Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.

Photo Courtesy: Meir Chaimowitz on Shutterstock