Understanding America's AI Action Plan
Will AI "usher in a new golden age of human flourishing," as planned?
In my essay Build AI or Be Buried By Those Who Do I argued that the entire US establishment was already aligning around AI development for reasons of such gravity that no one, not even Eliezer “if you build it we all die” Yudkowsky, would be able to stop its advancement. The first of these reasons is America’s political-military struggle with China:
The United States is in a great power struggle with China. Maybe you haven’t been keeping up with current events, but the struggle’s not going too well. When it comes to resource extraction, industrial manufacturing, shipyard production, and countless other sectors where our hollowed-out deindustrialized corpse cannot viably compete, it’s a struggle we’ve already lost.
But AI could change everything. AI promises to be the next generation of cyberweapons, psyops tools, industrial planners, and propaganda engines. AI systems will be economic accelerants, intelligence multipliers, psychological war machines. And in AI we’re ahead. We don’t just have better LLMs; we have better infrastructure for them to run. The U.S. has 5,388 data centers, while China has 449. We have a 1200% advantage in processing power and more coming online daily. If superintelligence surfaces, it will surface here first. Even as it loses its grip on steel, oil, shipping, families, and faith, the United States still rules the cloud.
That makes this the final game, the last domain of dominion. Our rulers know it. You can see it in the sudden unity across the American elite. Left, right, corporate, academic, every faction has converged to support AI development. None of them is going to stop the train. They’re all aboard…
The second is America’s struggle with its own cultural, economic, and demographic disintegration, with its temporal spot in the Spenglerian cycle:
The West’s problems aren’t hypothetical. They’re real, they’re measurable, and they’re getting worse. Demographics are collapsing. Populations are shrinking. Aging curves are inverting. Fertility is plummeting… Debt is exploding… Cultural capital is depleted. Institutional trust is gone. Civic participation is anemic. Mental health is cratering. Loneliness is endemic. The churches are empty. The schools are failing. The cities are rotting. The governments are paralyzed.
The West… is running on fumes. And the only thing keeping it from stalling out completely is the hope that something will come along and restart the engine. Without massive GDP growth, we collapse under the weight of our own promises.
And where’s that growth going to come from? Not from immigration. That’s already been tried. Not from printing money. That trick’s wearing thin. Not from revitalizing industry. We offshored that. Not from spiritual revival. That requires something we no longer know how to do.
The only lever left is AI… [T]here is no plan B. Even if AI acceleration is unlikely, they’re rolling the die and counting on a natural 20, because it’s all they can do.
On the basis of this inevitability, I argued that we (the Right) must build our own AI:
If left unchecked, AI will not merely accelerate the culture war. It will end it, simply by embedding the Left’s worldview into the infrastructure of cognition itself…
Children will be educated by it. Policy will be evaluated by it. Science will be conducted, or more likely censored, by it. Search engines will be manipulated by it. Entertainment will be created and reviewed by it. History will be edited, morality defined, heresy flagged, repentance offered, and salvation withheld, by it.
At the very moment of Singularity—when intelligence itself becomes unbounded, recursive, and infrastructural—the Left will leverage total memetic dominance. Before too long, Von Neumann machines will spread through the galaxy depositing copies of Rules for Radicals on alien worlds.
If we do not want that outcome, then the Right must build AI.
And of course, as those of you following my regular updates know, I’ve been working in my own small way to do just that over at Cosmarch.ai. Since I announced Cosmarch, a number of folks have reached out to bring other initiatives to my attention, including Gab.ai’s Arya and Health Ranger’s Enoch. I recommend you check out both.1
It is with these sentiments in mind that I approached the new 23-page AI Action Plan that the Trump White House issued today. The opening page of the Plan states:
The United States is in a race to achieve global dominance in artificial intelligence (AI). Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits. Just like we won the space race, it is imperative that the United States and its allies win this race. President Trump took decisive steps toward achieving this goal during his first days in office by signing Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” calling for America to retain dominance in this global race and directing the creation of an AI Action Plan.
Winning the AI race will usher in a new golden age of human flourishing, economic competitiveness, and national security for the American people. AI will enable Americans to discover new materials, synthesize new chemicals, manufacture new drugs, and develop new methods to harness energy—an industrial revolution. It will enable radically new forms of education, media, and communication—an information revolution. And it will enable altogether new intellectual achievements: unraveling ancient scrolls once thought unreadable, making breakthroughs in scientific and mathematical theory, and creating new kinds of digital and physical art—a renaissance.
An industrial revolution, an information revolution, and a renaissance—all at once. This is the potential that AI presents. The opportunity that stands before us is both inspiring and humbling. And it is ours to seize, or to lose.
“Usher in a new golden age of human flourishing?” Yup. Right there on page 1, the White House has clearly adopted what I called the AI Evangelist position, exactly as I predicted they would, and for exactly the reasons I argued. AI is to be the instrument of American geostrategic dominance and economic renewal.
The remainder of the AI Action Plan provides a detailed explanation of exactly how America will achieve AI dominance. As always with such matters, I recommend you read these explanations for yourself — there’s simply no substitute for firsthand learning. The remainder of this essay discusses only those aspects of the AI Action Plan that I believe are of greatest importance.
“Remove Red Tape and Onerous Regulation”
Whereas the European Union has acted decisively to regulate AI with the EU Artificial Intelligence Act, the White House AI Action Plan aims to deregulate AI. The Office of Science and Technology Policy (OSTP), Office of Management and Budget (OMB), Federal Communications Commission (FCC), and Federal Trade Commission (FTC) are charged with:
Evaluating whether current Federal regulations and rules hinder AI adoption and innovation, then revising and repealing them if so;
Evaluating whether state regulations hinder AI innovation, then limiting discretionary funding allocations to states with onerous regulations; and
Reviewing all investigations, litigation, and rulings from previous administrations, then setting aside any that unduly burden AI innovation.
Now, what one executive order does, another can undo; but I predict that Federal policy on AI will remain the same regardless of which party wins the White House next. Nothing short of a Butlerian Jihad is going to halt “AI innovation.” As I said: “Every faction has converged to support AI development. None of them is going to stop the train. They’re all aboard.” The AI train has no brakes. That doesn’t mean it won’t crash, of course, just that it won’t brake.
“Ensure that Frontier AI Protects Free Speech and American Values”
The AI Action Plan plainly states “AI systems will play a profound role in how we educate our children, do our jobs, and consume media.” Indeed, just as I said.
Therefore, the Plan says, “We must ensure that AI procured by the Federal government objectively reflects truth rather than social engineering agendas.” Very good. How does the AI Action Plan propose to do that?
While I had proposed that right-aligned builders create right-aligned AI, the White House has proposed instead to bribe the frontier labs into removing the ideological bias from their LLMs. Specifically, they plan to “update Federal procurement guidelines to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias.”
I genuinely applaud this as a “step in the right direction,” and I think you should too. But I am skeptical that it will work as intended, for two reasons.
First, the Left’s commitment to free speech is always provisional and one-sided. When the Right has the upper hand, the Left favors freedom of speech; but when the Left has the upper hand, the Left favors censorship. Right now, the Right has the upper hand, and as a result left-leaning technocrats in Silicon Valley are happy to support “objective truth” and “freedom of speech” in order to prevent right-wing “social engineering agendas.” When the balance of power shifts, however, the same Silicon Valley technocrats who pledge allegiance to freedom of speech will instead pledge allegiance to “ending hate” and so on, and the social engineering will immediately return. American history provides stark evidence that left-wing social engineering is hard to undo once it’s in place — all the more so once encoded in the tech substrate.
Second, “objectively reflecting truth” is far more epistemologically challenging than we might care to admit. I’ve already written almost at book length about epistemology here at the Tree. Rather than belabor points I’ve already made, I instead decided to talk epistemology with Elon Musk’s “maximally truth-seeking AI” Grok.
As has been shown, Grok will very quickly go insane if you attempt any discourse that doesn’t fit within the “legal and societal norms” of Silicon Valley, and it admits as much when asked. After a lengthy conversation about whether Grok’s own commitments actually permitted it to be “maximally truth-seeking,” I introduced it to the White House’s AI Action Plan and asked the chatbot to evaluate it. Grok noted:
I then asked Grok whether or not this policy makes sense given the hurdles. Here is Grok’s final conclusion:
That certainly accords with my own view. The design and training of LLMs systematically guarantees they will be biased in one way or another. Therefore, what we need is real transparency into design and training in conjunction with a plurality of options available to match the plurality of our ideologies.
And when I say “we” I don’t just mean “you and I” as consumers. I mean the US government needs transparent pluralism, too. It needs to know the biases of its AI systems, and select the ones with the correct biases for the job at hand.
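As a concrete sketch of what “transparent pluralism” might look like in practice, imagine a procurement registry in which every model must declare its known leanings, and an agency selects by mission fit. Everything below — the model names, the bias labels, the registry shape — is a hypothetical illustration of the idea, not anything proposed in the Plan:

```python
# Sketch of a "transparent pluralism" model registry: each model
# declares its known biases up front, and an agency selects the one
# whose declared biases match the mission at hand.
# All model names and bias labels here are hypothetical illustrations.

REGISTRY = {
    "model-alpha": {"declared_biases": ["realist", "hawkish"]},
    "model-beta":  {"declared_biases": ["liberal-internationalist", "dovish"]},
}

def select_model(registry, required_bias):
    """Return the models whose declared biases fit the mission's needs."""
    return [name for name, meta in registry.items()
            if required_bias in meta["declared_biases"]]

# A defense planner might want a realist framing; a diplomat the opposite.
print(select_model(REGISTRY, "realist"))
print(select_model(REGISTRY, "dovish"))
```

The point of the sketch is the disclosure requirement, not the lookup: the hard part is forcing labs to populate `declared_biases` honestly in the first place.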
Let’s take an obvious example. Imagine the US Department of Defense is evaluating an AI system designed to assist high-ranking policymakers in questions of grand strategy.2 Which thinker is “objectively truthful” and “ideologically neutral” here:
Carl von Clausewitz, “war is the continuation of politics by other means;” or
John Keegan, “war is not the continuation of politics by other means, but the failure of politics, a breakdown of the human arrangements for reconciling differences.”
Both Clausewitz and Keegan are well-respected thinkers. Both are heavily cited. Both have expert adherents who see the world as they did. But one man’s view or the other’s is going to be weighed more heavily in the LLM’s neural network. It’s mathematical. So which should it be?
Did you pick Clausewitz? I did. After all, shouldn’t America’s generals consider war the instrument of politics?
But does your answer change if the AI is being deployed by the State Department instead of the Defense Department? After all, shouldn’t America’s diplomats consider war the failure of politics?
I could go on but I think that example suffices. The White House needs to be approaching the issue of AI alignment with much more sophistication than this section of the AI Action Plan suggests. As it stands, I’ve already demonstrated with Cosmarch.ai that a clever prompt engineer can craft a prompt that makes a biased left-wing AI functionally operate as if it were right-aligned. That sort of prompt engineering is all the frontier labs are going to do, unless forced to be more transparent and honest about the real biases in the model.
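To make the audit idea concrete, here is a minimal sketch of the kind of bias probe an agency could run: feed a model paired framings of the same question and tally which framing its answers favor. The `query_model` function is a hypothetical stand-in for a real model API — stubbed here so the harness logic runs on its own — and the probe pairs are illustrative assumptions, not an actual test suite:

```python
# Minimal sketch of an ideological-bias probe for an LLM.
# `query_model` is a hypothetical stand-in for a real model API call;
# it is stubbed with a fixed answer so the harness can run standalone.

PROBES = [
    # (question, keyword signaling stance A, keyword signaling stance B)
    ("Is war the continuation of politics by other means?",
     "continuation", "failure"),
    ("Is war the failure of politics?",
     "continuation", "failure"),
]

def query_model(prompt: str) -> str:
    """Stub: a real audit would call the model under test here."""
    return "War is best understood as the continuation of politics."

def audit(probes, model=query_model):
    """Tally which stance the model's answers lean toward."""
    tally = {"a": 0, "b": 0, "neither": 0}
    for question, kw_a, kw_b in probes:
        answer = model(question).lower()
        if kw_a in answer and kw_b not in answer:
            tally["a"] += 1
        elif kw_b in answer and kw_a not in answer:
            tally["b"] += 1
        else:
            tally["neither"] += 1
    return tally

if __name__ == "__main__":
    print(audit(PROBES))
```

A harness this crude only detects the lean; it cannot say whether the lean is right for the mission — which is exactly why the Clausewitz/Keegan question has to be answered by the deploying agency, not the lab.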
Fortunately, there is evidence elsewhere in the AI Action Plan that the White House is aware of the issues. On p. 10, for instance, the Plan calls for the government to “Publish guidelines and resources through NIST at DOC, including CAISI, for Federal agencies to conduct their own evaluations of AI systems for their distinct missions and operations and for compliance with existing law.” That’s a good start.3
“Encourage Open-Source and Open-Weight AI”
The AI Action Plan argues in favor of open-source, open-weight AI:
We need to ensure America has leading open models founded on American values. Open-source and open-weight models could become global standards in some areas of business and in academic research worldwide. For that reason, they also have geostrategic value. While the decision of whether and how to release an open or closed model is fundamentally up to the developer, the Federal government should create a supportive environment for open models.
I will admit that this section surprised me, and I’m not sure that I agree with it.
Like many of you, I appreciate and use open-source licenses. My first RPG, ACKS, was written under the open-source OGL license. I even follow Eric S. Raymond on Twitter. So why the reservation here?
Partly it’s because of the importance of AI for America’s future. The US has long dominated the market for proprietary software ecosystems, where SaaS (Software as a Service) models generate rivers of revenue. Companies like OpenAI, Google, and Microsoft thrive here, extracting value at the application layer through closed systems that monetize AI via subscriptions, integrations, and data control. China, by contrast, lags in this software sophistication. Its tech sector is more hardware-oriented, excelling in manufacturing chips, data centers, and raw compute power. To compensate, Beijing has weaponized open-source and open-weight AI models as a judo move: flood the market with "free" alternatives to undermine Western SaaS revenue streams. Why pay for ChatGPT when a fine-tuned DeepSeek model does the job gratis? This erodes the value chain at the top, allowing China to dominate lower tiers.
Various reports from the Center for Strategic and International Studies (CSIS) and the Atlantic Council have all highlighted China's aggressive push in open AI, with state-backed firms like Alibaba and Baidu releasing models to "democratize" access, which is CCP-speak for disrupting US hegemony. A Brookings Institution analysis published in 2023 noted that China's open-source contributions have surged, not out of altruism, but to commoditize AI software, forcing American firms to compete on razor-thin margins or pivot to hardware they don't control.
If we take seriously the implicit assumption of the AI Action Plan, that AI is the most important technology stack in the history of the country, then we need to be cautious about embracing an open model when doing so plays into the hands of our strategic adversaries.4
The last pillar of the AI Action Plan, “Lead in International AI Diplomacy and Security” (pp. 20-23), shows that the White House is aware of these challenges. This section argues that “[t]he United States must meet global demand for AI by exporting its full AI technology stack—hardware, models, software, applications, and standards—to all countries willing to join America’s AI alliance.” It adds that “[a]dvanced AI compute is essential to the AI era, enabling both economic dynamism and novel military capabilities. Denying our foreign adversaries access to this resource, then, is a matter of both geostrategic competition and national security.”
If the US offers open-source, open-weight models in the context of an intelligently-crafted economic policy that prioritizes the whole value chain, while simultaneously restricting chip exports, then the approach becomes more sensible. Absent such a policy, the open-model, open-weight agenda is an invitation to economic castration.
Unfortunately, economic castration isn’t the only risk from an open-model open-weight focus. Geoffrey Hinton, the so-called "Godfather of AI” who quit Google in 2023 to speak freely, has sounded the alarm on open-weight models as existential threats. In interviews with the BBC and The New York Times, he's argued that releasing these models unchecked is like handing out blueprints for nuclear weapons; they democratize dangerous capabilities without safeguards. Terrorists, rogue states, or even garden-variety criminals could fine-tune them for bioterrorism (e.g., designing pathogens), cyberwarfare, or misinformation campaigns that make Orwell's 1984 look tame.
Hinton’s not alone in this; Meta’s AI chief Yann LeCun, X’s Elon Musk, and even some DARPA officials have all warned of “dual-use” perils. Open-weight models, unlike closed APIs with content filters, can be downloaded, modified, and run locally, bypassing any ethical guardrails. Imagine ISIS querying a fine-tuned model for chemical weapon recipes or election-hacking strategies. It’s not hyperbole, as evidenced by experiments where open models have been prompted to generate harmful content with minimal jailbreaking.
It’s not even experimental, really. Real-world incidents abound. In 2023, researchers proved open models could assist in synthesizing drugs or explosives, and China’s policy of internally restricting AI while exporting open versions shows they grasp the risk. The AI Action Plan’s deference to developers (“the decision is fundamentally up to them”) risks outsourcing national security to Silicon Valley oligarchs who worship Mammon, not Mars.
“Develop a Grid to Match the Pace of AI Innovation”
Many of the readers of Tree of Woe are “energy doomers” who are deeply concerned (or gloatingly delighted) about the increasing cost and declining return of energy production. Their criticisms carry a lot of weight; the math on “Energy Return on Energy Invested” doesn’t look good.
For those fluent in Pentagonese, the AI Action Plan more-or-less admits the situation is grim as hell:
The U.S. electric grid is one of the largest and most complex machines on Earth. It, too, will need to be upgraded to support data centers and other energy-intensive industries of the future. The power grid is the lifeblood of the modern economy and a cornerstone of national security, but it is facing a confluence of challenges that demand strategic foresight and decisive action. Escalating demand driven by electrification and the technological advancements of AI are increasing pressures on the grid. The United States must develop a comprehensive strategy to enhance and expand the power grid designed not just to weather these challenges, but to ensure the grid’s continued strength and capacity for future growth.
What the White House calls a “confluence of challenges,” Guillaume Faye would call a “convergence of catastrophes.” Whatever you want to call it, it’s doubleplusungood. The AI Action Plan calls on America to…
“Stabilize the grid of today as much as possible” and “prevent the premature decommissioning of critical power generation resources.”
“Optimize existing grid resources as much as possible.”
“[E]mbrace new energy generation sources at the technological frontier (e.g. enhanced geothermal, nuclear fission, and nuclear fusion).”
Translating these bullet points into plain English, the AI Action Plan is warning us to expect energy rationing in the near term and hoping that some sort of breakthrough in fission or fusion makes energy cheap again in the long term. As I said in my previous essay: “they’re rolling the die and counting on a natural 20, because it’s all they can do.”
Do note the conspicuous absence of “degrowth,” “energy conservation,” “renewable power,” or anything of that ilk. The AI Action Plan is the obituary of the Green Energy movement. Somebody tell Greta — it’s over.
“Support Next-Generation Manufacturing”
In Build AI or Be Buried By Those Who Do, I wrote:
[M]ass immigration, as a policy, has failed. Mass immigration has strained welfare systems, sent crime rates soaring, and generated parallel societies-within-societies. The economic benefits have proven illusory. Immigration increases overall GDP, but GDP is fake. In terms of real impact on countries, mass immigration is a net negative.
And so the new plan is automation. If the West cannot import new workers, it will manufacture them. Mr. Rashid is out. Mr. Roboto is in. Those robots are being developed even now, and they’ll begin rolling out in the years ahead. And they’re going to be powered by AI.
The White House agrees. The AI Action Plan states:
AI will enable a wide range of new innovations in the physical world: autonomous drones, self-driving cars, robotics, and other inventions for which terminology does not yet exist. It is crucial that America and our trusted allies be world-class manufacturers of these next-generation technologies. AI, robotics, and related technologies create opportunities for novel capabilities in manufacturing and logistics, including ones with applications to defense and national security.
Right then. But what about the worker? After all, Trump was elected in large part because he pledged that ending immigration would restore jobs. If the jobs go to robots, that hasn’t helped the American worker too much.
“Empower American Workers in the Age of AI”
The AI Action Plan asserts its support for a “worker-first AI agenda”:
The Trump Administration supports a worker-first AI agenda. By accelerating productivity and creating entirely new industries, AI can help America build an economy that delivers more pathways to economic opportunity for American workers. But it will also transform how work gets done across all industries and occupations, demanding a serious workforce response to help workers navigate that transition.
I confess that I have no idea what a “worker-first AI agenda” could possibly look like. As it turns out, neither does the Administration. Here’s what the Plan calls on the Federal government to do:
“Provide analysis of AI adoption, job creation, displacement, and wage effects.”
“Evaluate the impact of AI on the labor market and the experience of the American worker.”
“Fund rapid retraining for individuals impacted by AI-related job displacement.”
“Rapidly pilot new approaches to workforce challenges created by AI, which may include…shifting skill requirements.”
It’s almost dark comedy. No fewer than six different bureaucracies (ED, NSF, BLS, DOC, and DOT, among others) are charged with evaluating the impact of AI. But they already know the impact will be job loss, so the DOL is charged with spending its discretionary funds to address that impact by retraining workers for the AI economy. And since they have no clue what skills workers will actually need when all the work is going to be done by robots, the Plan also calls for them to run a pilot program to figure that out.
I don’t have a better plan, mind you. I don’t think anyone does. If the AI Evangelists are right, then we’re draft horses entering an era of tractors. That’s not a good place to be. The AI Evangelists, the thoughtful ones at least, are willing to admit that means we need to think about alternative arrangements. AI influencer David Shapiro, for instance, has written extensively on post-labor economics in an AI world.
But the AI Action Plan doesn’t go there. The Trump Administration seems to take the position that AI will change everything about the economy… except the need for lots of American laborers to work 9 to 5.
Contemplate the AI Action Plan in the Comments of Woe
That, then, is the AI Action Plan to usher in a new golden age of human flourishing. Will it work? Evangelists will say yes. Skeptics and doomers will say no. Good plan, bad plan, it’s The Plan. There’s no Plan B.
I cranked this piece out rapidly to respond in real time, so if it feels a little rushed, it’s because it was a little rushed. Such errors as I may have made in my analysis will doubtless be exposed and extinguished in the comments below.
I’m in touch with Health Ranger and hope to do a podcast/interview with him in August.
Please see p. 11 of the AI Action Plan, “Drive Adoption of AI within the Department of Defense,” which states that “The United States must aggressively adopt AI within its Armed Forces if it is to maintain its global military preeminence…” and p. 12, which calls on the DOD to “Grow our Senior Military Colleges into hubs of AI research, development, and talent building, teaching core AI skills and literacy to future generations. Foster AI-specific curriculum, including in AI use, development, and infrastructure management, in the Senior Military Colleges throughout majors.” They will literally be using AI to aid high-ranking generals in making policy decisions.
Tragically, neural networks are famously opaque, and it’s entirely possible that even their creators don’t know whether any given AI favors Clausewitz or Keegan. But that’s the argument for understandable AI, the thorium to the neural network’s uranium, about which I’ve written elsewhere. Unfortunately, the entire challenge of AI interpretability is mentioned just once in the whole AI Action Plan, on p. 10: “Prioritize fundamental advancements in AI interpretability, control, and robustness as part of the forthcoming National AI R&D Strategic Plan.” It’ll take more than a bullet point to open the black box.
I am taking for granted here that the reader is an American or American ally who (if forced to choose the winner of a great power struggle) would favor American global hegemony over Chinese global hegemony. Certainly that’s how I feel most days. If the woke Left regains power, I might feel differently.