Move Fast, Break Governance: The Contradictions of Trump's AI Strategy
From my recent Futuristic Lawyer podcast appearance — a deep dive into policy paradoxes and strategic blind spots

I was honored to join Tobias on The Futuristic Lawyer podcast last week to discuss Trump's AI Action Plan and how the current administration's approach to AI differs from what we saw during the Biden years. The conversation, which you can listen to here on YouTube or via Substack in the link below, covered everything from the contradictions in current AI policy to the dangerous precedent of politically manipulating AI responses.
The Futuristic Lawyer podcast focuses on how technology intersects with law and policy — exactly the kind of rigorous analysis we need as AI becomes increasingly central to governance. Having spent four years at the State Department during the Biden administration, I wanted to share some key insights about what's happening now and why the details matter so much for America's technological future.
Below, I'll summarize the main points from our discussion, complete with timestamps so you can jump to specific topics that interest you.
1. Background on Trump's AI Action Plan
In July 2025, the White House released America’s AI Action Plan, a roadmap built around three pillars:
Accelerate AI innovation (regulatory sandboxes, less red tape, and encouragement of open-source/open-weight models);
Build American AI infrastructure (compute, workforce, energy, evaluation capacity); and
Lead in AI diplomacy and security (export controls, evaluation of adversary models, and standards work at NIST/CAISI).
The document sketches more than 90 near-term policy actions and puts Commerce/NIST in the middle of model testing and standards, including new evaluation workstreams.
On the same day, the Administration issued an Executive Order on “Preventing Woke AI in the Federal Government,” instructing agencies to prioritize AI systems deemed “ideologically neutral” in procurement, and signaling revisions to guidance that strip references to DEI, climate, and misinformation in federal use contexts — framed as “accuracy” and “objectivity.” Supporters call it pro-innovation; critics warn it pressures vendors to self-censor and politicizes evaluation.
2. The Ideology Driving Adoption
One of the most striking aspects of the current AI Action Plan is how it brings Silicon Valley's "move fast and break things" mentality directly into government (11:30-13:14). More than efficiency, this represents a fundamental ideological shift toward deregulation and letting AI companies develop "as fast and loose as possible."
On the surface, this might sound like removing bureaucratic obstacles. But as I explained in the interview, the details of implementation matter enormously. We're already seeing AI adoption in federal agencies that leads to "mishaps that don't actually make government more efficient" and actually creates more losses (13:01-13:08).
The irony? While the administration talks about cutting government fat, effective AI implementation actually requires more careful integration, not less — you need people who understand both the technology and the agency workflows to make it work.

3. The Talent Blind Spot
Perhaps the most glaring contradiction in the AI Action Plan is what it completely ignores: immigration policy (15:13-15:55). The plan makes zero mention of immigration, even though "the strongest AI developers in the US — most of them weren't born here."
This is strategically counterproductive. If your goal is to maximize US AI development, alienating the global talent pool that drives American innovation makes no sense. Add to this the broad defunding of scientific research and the fact that many scientists no longer feel welcome in the US, and you're actively undermining your stated objectives.
As I noted in the interview: "You're doing a lot of things that are counterproductive to what the goal is, if the goal is really to maximize the potential of US AI development" (15:45-15:55).
4. Biden vs. Trump: Two Different Philosophies
The contrast with the previous administration couldn't be clearer (17:01-17:35). The Biden approach prioritized "responsible AI" with much more focus on mitigating risks, biases, and societal harms. It was more regulated, with less emphasis on accelerating development at all costs.
Were those Biden-era frameworks perfect? Absolutely not. As I mentioned, they were "still relatively weak and hard to implement and hold people accountable" (18:35-18:42). But they provided something to build on — a foundation for accountability and dialogue around these issues.
The current approach throws that foundation away entirely, embracing what I can only describe as regulatory nihilism.

5. The "Woke AI" Contradiction
One of the most concerning developments is the administration's focus on eliminating "woke AI" (19:06-20:33). The stated goal is to make AI "less biased and more neutral" by removing discussions of misinformation, diversity, equity, inclusion, and climate change.
But here's the logical problem: "If you want something that is more accurate and neutral, it needs to be able to say what is and isn't misinformation. If you're starting from the point that that's not something you can discuss, then you're already not working in a place of neutrality" (19:38-19:59).
We're already seeing real-world consequences when political pressure forces engineers to modify AI responses, sometimes leading to harmful outputs — the kind of politically motivated manipulation that should concern everyone, regardless of party affiliation. For instance, Elon Musk's continual pressure to tweak his AI, Grok, to align with his own views led to a moment where Grok referred to itself as “MechaHitler.”
6. The Open Source Paradox
Another puzzling element is the administration's emphasis on open-source AI development (23:49-25:08), especially given that China currently leads in this area. While open-source has benefits, prioritizing it when your main competitor has advantages there seems strategically questionable.
The timing of OpenAI's recent open-weight releases, likely influenced by conversations with the administration, suggests this policy direction is already reshaping corporate strategies — though whether it ultimately benefits US competitiveness remains unclear.

7. Abandoning America's Secret Weapon
Perhaps the most strategically damaging aspect of the current approach is how it abandons America's greatest competitive advantage: our alliances (32:54-33:49). The AI Action Plan is intensely focused on US dominance with little room for partnership or collaboration.
As I explained to Tobias, "China does not have and does not manage alliances. It manages its relationships with countries very differently. And so by losing this main asset and then just isolating ourselves while also seeking some sort of global domination on technology — it just doesn't make sense. You can't have your cake and eat it too" (32:54-33:22).
Recent Chinese AI strategy documents, released just days after Trump's plan, were notably more collaborative in tone — positioning China as an alternative for countries alienated by aggressive US dominance rhetoric.
8. Sloppy Rollouts Are More Harmful Than Sci-Fi Doomerism
The fundamental question isn't whether AI can help make government more efficient — it probably can. The question is whether we're implementing it thoughtfully (37:40-38:16). That requires people embedded in agencies who understand both the technology and the institutional logic, who can troubleshoot when things go wrong, and who can ensure the technology actually serves the problems it's supposed to solve.
Instead, we're getting the opposite: rapid deployment with minimal oversight, driven more by ideology than by careful analysis of what works.
My biggest concern isn't the "evil AGI" scenarios that get lots of attention (35:32-35:45). It's that we'll end up with "technologies that are ineffective, that are overhyped, that are not implemented or not actually applicable" to the problems they're supposed to solve (36:04-36:16). The result? People fired when AI can't actually replace them, bad decisions based on AI that doesn't understand context, and systems that make mistakes or aren't secure.
The current strategy pushes us further toward that world.
This conversation with Tobias really crystallized for me how the details of AI policy implementation matter just as much as the high-level strategy — maybe more. If you're interested in these intersections between technology and governance, I'd highly recommend subscribing to The Futuristic Lawyer and checking out our full conversation.
What aspects of current AI policy concern you most? I'd love to hear your thoughts in the comments.