China girl, pt. 6 (finale): the AI reckoning of America's future
Or: the conclusion to a six-part series on how China mobilizes women while America excludes them, and why AI governance is where we either learn the lesson or lose for good
The thing about making the same mistake in relationships, nuclear weapons, space technology, conventional defense, and now AI is that at some point it stops being a mistake. It’s a pattern. Patterns are choices: a mistake repeated is a mistake chosen. This conclusion to my China girl series, on how China is utilizing women to superpower its defense industry while America falls behind, is all about how to avoid bigger mistakes in the future.
I’ll show why AI governance, rather than obvious interventions like encouraging more women to study engineering, is the logical end point for future policymaking positions and decisions.
If you’re new here: I’ve spent five parts documenting how America voluntarily handicaps itself in defense technology by excluding women while China mobilizes everyone. The short version is we lose $100 billion annually, China closed an 80% AI capability gap in 13 months, and we keep forgetting that we solved this problem during WWII and Apollo.
Part 1 documented how China mobilizes millions of women in STEM and in defense while we exclude ours, and how China closed an 80% AI capability gap in 13 months that analysts predicted would take 5-7 years. Part 2 diagnosed the cultural hostility driving women out of defense roles, an industry where 40% of America’s women engineers end up quitting the field entirely. Part 3 estimated the cost at $100 billion annually, almost $4.5 trillion since 1976 (roughly the GDP of Japan) that evaporated because we made defense hostile to women. Part 4 identified the mechanisms doing the excluding: clearance systems treating student debt as a security risk, veteran pipelines importing 82% male demographics, financial vetting penalizing people who got educated without generational wealth. Part 5 showed we’ve actually mobilized women in defense to great success, though only in existential crisis moments: 80% of WWII codebreakers were women, Katherine Johnson calculated the moon shot by hand, and female CIA analysts found bin Laden through laundry patterns. Then we forgot all of it while China studied our history and took notes.
If you’ve been following along, AI governance is where all of this comes together.
I’ll begin by explaining why AI governance is the foundational conclusion of this series and the pathway to better gender parity in American defense. Then I’ll examine who’s making the decisions on AI and why it matters that they’re mostly men. Finally, I’ll lay out the strategic choice we’re facing and what happens if we keep making the same mistakes.
Part 6’s subsections:
Why AI governance is the meta-domain and not just another sector
Who actually writes the rules on AI and why it matters that they’re mostly men
Dreaming bigger dreams through being aspirational doesn’t cut it: evidence from peace processes
Work, work, work, work, work: mandatory quotas work but voluntary commitments don’t
The talent war we’re losing by default
The strategic choice
Why AI governance is the meta-domain
Here I explain why I’m concluding this series with AI governance specifically, not STEM pipelines, defense culture reform, or any other intervention that may seem obvious at first.
Key points:
AI governance isn’t just another sector where women’s exclusion matters. It’s the layer that determines how every other technology develops, deploys, and gets regulated. In turn, if you fix AI governance, you fix a lot of downstream problems. If you fail at AI governance, you bake those failures into everything AI touches.
44.2% of AI systems exhibit gender bias per Berkeley Haas analysis, with 70% of those delivering lower quality service to women. AI has a governance problem that cascades into every domain it touches: healthcare, financial systems, criminal justice, national security, and beyond.
The same clearance apparatus and delays I discussed in Part 4 as barriers to gender equity in defense apply identically to AI researchers who need to work on classified applications.
Whoever writes AI rules determines which values and blind spots get embedded into everything AI builds. If the people writing the rules are 73% male, the rules will reflect 73% male perspectives and miss what they don’t think to consider.
When I talk about AI governance being the connective layer that shapes everything else I’ve discussed in the China girl series, what I mean in plain terms is that AI is not like other technologies. It’s not a thing you build and deploy once, like a missile or a satellite. AI is recursive: it learns, it builds other things, and its capacities compound. You can’t put it in one category alongside missiles and satellites, because by its very nature it is a technology that produces other technologies.
After all, AI systems and their algorithms are already designing drug compounds, writing legal documents, analyzing medical scans, predicting who commits crimes, deciding who gets loans, targeting military strikes, and much, much more. That in turn means the decisions underpinning AI and its uses also cascade into every single domain AI touches. The decisions about how AI gets developed, who develops it, what values get built in, what failures get caught or missed, and so on and so forth, these are all determined by those at the helm of AI governance.
That cascade of impacts is why I argue AI governance matters more than the obvious surface-level solutions to the lack of women in defense. The fix will not come from band-aid measures like merely increasing enrollment in STEM educational pipelines. The obvious solutions also miss the second- and third-order effects I’ve discussed extensively throughout this series.
After all, you can have a perfect STEM pipeline producing brilliant women engineers and still lose every one of them at the clearance barrier. You can work to reform defense culture for a decade and still watch China outpace you because their researchers started contributing a year before yours finished the security clearance process.
If you get AI governance right, however, and the people writing the rules and building the systems actually represent the populations those systems affect, you fix a lot of downstream problems at once. Conversely, if you get AI governance wrong, you bake the failures into everything.
What getting governance wrong looks like in practice: 44.2% of AI systems exhibit gender bias according to Berkeley Haas analysis, with 70% of those delivering lower quality service to women. These systems were designed, tested, approved, and deployed by teams and oversight bodies that didn’t include enough people who would have caught the problems. Why does an AI system fail one gender? Because the populations it fails weren’t adequately represented or considered when it was built. The governance failed before the technology did. Now imagine how much innovation, success, and strategy is lost in defense technology that overlooks details which can change everything in a moment.
Consider how, when an AI researcher at Anthropic or OpenAI needs a security clearance for work touching classified applications, she enters the exact same time-consuming system I documented in Part 4. The clearance system doesn’t know the difference between a missile engineer and a machine learning engineer, nor does it care to; it applies identical barriers to both.
Consider, in addition to America’s various barriers, how China’s advantage documented in Part 1 was AI specifically. The 13-month capability gap closure was measured on AI model benchmarks. The 2 million data annotators, majority women, were training AI models. It’s AI. It’s all AI. It’s AI all the way down, out, and through. When I wrote Part 1 about gender utilization in the defense industry competition between China and the U.S., I was already writing about AI competition. The defense technology patterns I documented across Parts 1 through 5, the cultural exclusion, the clearance barriers, the $100 billion in annual losses, all of that is now playing out in AI governance. Part 6 makes that connection explicit and shows why future-oriented decisions should keep it in mind.
The pen is mightier than the sword: who actually writes the rules for AI governance?
Here I document who sits at the table when AI governance decisions get made, and why it matters that they’re mostly men.
Key points:
Only 12% of AI researchers globally are women per UNESCO and only 18% of authors at leading AI conferences are female. The people building AI don’t look like the people AI affects.
The Global Partnership on AI hit 36% women in its Innovation and Commercialization Working Group, the highest figure I found for any major AI governance body, but most bodies are worse.
The U.S. National Security Commission on AI was 73% male, 4 women among 15 commissioners. This commission shaped America’s AI strategy for competing with China.
Detachment 201, the new Army Reserve program that commissions tech executives as Lieutenant Colonels to embed AI expertise in military planning, launched in June 2025 with four inaugural members. The initial program includes Shyam Sankar from Palantir, Boz Bosworth from Meta, Kevin Weil from OpenAI, and Bob McGrew from Thinking Machines Lab. All four are men and none have military backgrounds.
No country has mandatory gender quotas for AI governance bodies. Commitments to gender balance are aspirational, not binding, and we have 25 years of evidence that aspirational commitments don’t work.
Peace process research shows agreements with women participants are 35% more likely to last 15 years. Corporate board research shows mandatory quotas achieve in 5 years what voluntary approaches fail to deliver in decades.
There is no supranational authority that determines whether or how countries enact AI governance. To understand and benchmark domestic rulemaking, I find it helpful to dig into international data across institutions first, to get a sense of the global picture before considering individual approaches. At the international level, the UN’s High-Level Advisory Body on AI achieved gender balance through deliberate design, and the Global Partnership on AI hit 36% women in its Innovation and Commercialization Working Group, the highest figure I found for any major AI governance body.
Yet that number is an outlier. Globally, only 12% of AI researchers are women, and just 18% of authors at leading AI conferences are female, according to UNESCO. The people building AI don’t look like the people AI affects, and at the national level, where rules actually get written and enforced, the composition is worse.
The U.S. National Security Commission on AI had 4 women among 15 commissioners, which is 27%. The commission that determined America’s AI strategy for competition with China was 73% male. With men outnumbering women nearly three to one, whatever blind spots those 11 men shared became blind spots baked into national AI policy.
The UK AI Safety Institute’s leadership is also predominantly male. Japan’s AI Safety Institute has Akiko Murakami as Director, and Canada leads the G7 in women-in-AI growth, with Elissa Strome and Catherine Régis co-directing its AI safety work.
On the American side, one of the more novel bridging mechanisms between tech and government lately has been Detachment 201, which demonstrates the same disparity. Formed in June 2025, this Army Reserve unit directly commissions tech executives as Lieutenant Colonels serving approximately 120 days annually. It’s designed to embed frontier AI expertise directly into military planning, and the inaugural cohort includes Shyam Sankar, CTO of Palantir; Andrew “Boz” Bosworth, CTO of Meta; Kevin Weil, CPO of OpenAI; and Bob McGrew, Advisor at Thinking Machines Lab and former OpenAI CRO.
All four are men, so one of the more innovative government-industry interfaces for AI in years has launched without a single woman at the table.
As of writing, no country has mandatory gender quotas for AI governance bodies. The closest efforts are aspirational: the Nordic countries issued a 2025 joint statement committing to ‘gender-balanced representation in AI development and governance’ but without binding mechanisms, and the EU AI Act mentions gender equality in recitals but lacks enforcement provisions for governance body composition.
We have 25 years of data showing aspirational commitments don’t work.
Dreaming bigger dreams through being aspirational doesn’t cut it: evidence from peace processes
Here I examine what robust quantitative evidence tells us about including women in high-stakes governance decisions, drawing from peace process research where the data is unambiguous.
AI puts us in uncharted territory as a technology, but we actually have robust quantitative evidence on what happens when you include women in high-stakes governance decisions versus when you don’t. It comes from peace processes, and the data is unambiguous.
A landmark study analyzing 182 peace agreements from 1989 to 2011 found that when women participate as negotiators, mediators, or signatories, the probability of peace lasting at least two years increases by 20%, and the probability of lasting 15 years increases by 35%.
The weight of that finding bears pausing over. Peace agreements with women at the table are 35% more likely to still hold 15 years later. Despite this evidence, women remain systematically excluded: data from the Council on Foreign Relations shows that between 1992 and 2019, women comprised only 13% of negotiators, 6% of mediators, and 6% of signatories in major peace processes. The data proves inclusion works, but the institutions exclude anyway.
If any of this sounds familiar, it’s because it is. The mechanism isn’t mysterious, and I’ve talked about it in prior posts as well. Cognitive diversity research demonstrates that diverse groups make better collective predictions because their errors cancel out, while homogeneous groups share blind spots. Scott Page’s mathematical work with Lu Hong, published in PNAS, showed that a team of randomly selected agents can outperform a team of the best-performing agents if the random team is more diverse. Different perspectives catch different problems.
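Page’s result rests on the diversity prediction theorem, an exact identity: a group’s collective squared error equals the average individual squared error minus the diversity of the predictions. Here’s a minimal numerical sketch of that identity; the group sizes, bias levels, and noise levels are illustrative assumptions of mine, not data from Page’s study.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 100.0  # the quantity the group is trying to estimate

# Homogeneous group: members share a blind spot (a common bias),
# so their errors point the same way and cannot cancel out.
homogeneous = truth + 8.0 + rng.normal(0, 2, size=15)

# Diverse group: individually noisier, but biases point in
# different directions, so averaging cancels much of the error.
diverse = truth + rng.normal(0, 8, size=15)

for name, estimates in [("homogeneous", homogeneous), ("diverse", diverse)]:
    collective = estimates.mean()
    collective_error = (collective - truth) ** 2
    avg_individual_error = ((estimates - truth) ** 2).mean()
    diversity = ((estimates - collective) ** 2).mean()
    # Diversity prediction theorem (holds exactly):
    # collective error = average individual error - prediction diversity
    print(f"{name:12s} collective error {collective_error:7.2f} = "
          f"{avg_individual_error:7.2f} - {diversity:7.2f}")
```

The homogeneous group’s shared bias survives the averaging; the diverse group’s individual errors are larger but largely cancel. That is the arithmetic behind “different perspectives catch different problems.”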
In peace negotiations, women brought civil society linkages that increased implementation rates. They raised issues that male negotiators missed and considered stakeholders who weren’t at the table. The exact same dynamics apply to AI governance. When 43% of AI company boards are entirely male and only 10% of standards experts are women, there are perspectives missing, failure modes unconsidered, stakeholder impacts ignored, and vulnerabilities not yet envisioned but all too possible.
The AI Now Institute documented this directly when they found that women form just 15% of AI research staff at Facebook, 10% at Google, and 18% of authors at leading AI conferences. When the people building AI systems don’t reflect the populations those systems affect, the systems fail on everyone they weren’t designed to see.
The consequences of AI failures born of homogeneous teams are documented and extensive. Joy Buolamwini and Timnit Gebru’s Gender Shades study found commercial facial recognition 99% accurate on white men but only 65% accurate on dark-skinned women. The Epic Sepsis Model, deployed across 180+ hospitals serving 54% of U.S. patients, missed 67% of sepsis cases in external validation, while sepsis kills 270,000 Americans annually. The COMPAS recidivism algorithm showed Black defendants 77% more likely to be flagged high risk despite overall accuracy of just 61%, and Robert Williams, Nijeer Parks, and Porcha Woodruff (8 months pregnant) were all wrongfully arrested due to facial recognition failures. In 2003, U.S. Patriot missile batteries shot down allied aircraft when automated systems misidentified friend as foe. The teams that built these systems, the oversight bodies that approved them, and the testing frameworks that validated them all suffered from the same homogeneity that produces blind spots. Now imagine that pattern applied to autonomous weapons making kill decisions based on algorithms trained by teams that are 80% male, on a battlefield that is constantly iterating on itself.
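None of these gaps requires exotic tooling to detect. Disaggregated evaluation, meaning computing accuracy per demographic group instead of only in aggregate, surfaces them immediately. Below is a minimal sketch with simulated data whose error rates loosely echo the Gender Shades gap; the group labels and numbers are hypothetical, not the study’s dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical evaluation set: predictions and ground-truth labels,
# tagged by demographic group (simulated, not real benchmark data).
groups = np.array(["lighter_male"] * 500 + ["darker_female"] * 500)
labels = rng.integers(0, 2, size=1000)

# Simulate a model far more reliable on one group than the other.
error_rate = np.where(groups == "lighter_male", 0.01, 0.35)
flipped = rng.random(1000) < error_rate
preds = np.where(flipped, 1 - labels, labels)

# Aggregate accuracy hides the failure...
print(f"overall accuracy: {(preds == labels).mean():.1%}")

# ...while per-group accuracy exposes it.
for group in np.unique(groups):
    mask = groups == group
    print(f"{group}: {(preds[mask] == labels[mask]).mean():.1%}")
```

An oversight body that requires this one extra loop before deployment catches the failure. One that only looks at the aggregate number ships it.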
Work, work, work, work, work: mandatory quotas work, but voluntary commitments don’t
Here’s what the evidence shows about mandatory versus voluntary approaches to getting women into governance roles, and why words are not enough.
Consider the contrast between mandatory and voluntary approaches, as demonstrated by Norway and the Netherlands respectively. Norway passed a mandatory 40% board quota in 2003 with forced dissolution as the penalty for non-compliance, and women’s representation moved from 0% at the median company in 2003 to 40% by 2008. In five years, the median company went from no women at all to 40%, and research found no negative performance impact. The newly appointed women were more qualified than their predecessors on average, disproving the “pipeline problem” excuse.
The Netherlands tried the voluntary approach. After seven years of aspirational targets, 90% of listed companies had not met the targets for executive directors and 66% had not for non-executive directors. Companies’ ‘comply or explain’ reports “mostly resulted in vague explanations that could not be corroborated, or sometimes blatant rejection.” The Netherlands eventually gave up and adopted a binding 33% quota in 2021.
At the continental level, Europe-wide data tells the same story. Countries with binding quotas achieved 39.6% women on boards by 2024, gaining 28 percentage points since 2010. Countries with only soft measures reached 33.8%, gaining 17.3 points. Countries taking no action reached 17%, gaining 9.8 points.
The rate of progress matters even more. Mandatory quota countries advanced 3.5 percentage points per year. Soft measure countries advanced 1.6 points per year. No-action countries advanced 0.2 points per year. At that pace, reaching 40% representation would take 200 years.
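To make that 200-year figure concrete, here’s the back-of-envelope projection, assuming progress stays linear at each observed rate (a simplification; real trajectories bend with policy changes) and taking a full 40-point representation gap as the distance to close:

```python
def years_to_close(gap_points: float, rate_per_year: float) -> float:
    # Linear projection: constant annual gain, no acceleration.
    return gap_points / rate_per_year

# Observed annual rates from the Europe-wide board data above.
rates = [("binding quotas", 3.5), ("soft measures", 1.6), ("no action", 0.2)]

for label, rate in rates:
    print(f"{label}: {years_to_close(40, rate):.0f} years to close a 40-point gap")
```

Roughly 11 years under binding quotas, 25 under soft measures, and 200 with no action. The policy instrument, not the pipeline, sets the timescale.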
This applies directly to AI governance because it tells us exactly what to expect in the absence of binding commitments. When the Nordic countries issue a statement “committing to gender-balanced representation” without binding mechanisms, the evidence predicts what will happen: nothing meaningful. When the EU AI Act mentions gender equality in recitals without enforcement provisions, the evidence predicts what will happen: nothing meaningful. Aspirational commitments are how institutions signal virtue while avoiding change.
Yet some countries are doing more than signaling. Canada’s AI4Good Lab is a 7-week program exclusively for women and gender-diverse people, running annually in Montreal, Toronto, and Edmonton with 90 participants, backed by Mila, Quebec’s AI Institute. The UNIDIR Women in AI Fellowship specifically targets women diplomats for AI governance training, selecting up to 30 fellows annually with priority to developing countries. UK AI procurement guidelines require suppliers to “address the need for diversity to mitigate bias in the AI system,” linking team diversity directly to contract requirements: if you want UK government contracts, you have to demonstrate diversity. These are genuine alternatives, programs that deliberately include women rather than accidentally excluding them.
The talent war we’re losing by default
Here I document the talent competition asymmetry between the United States and China, and why our structural barriers compound at software speed.
China is fighting a talent war in which America offers a 249-day wait for a clearance while China offers $700,000 and housing. One weapon in that war is China’s Qiming Program, the successor to its Thousand Talents Plan and possibly the most aggressive AI talent recruitment effort any nation has mounted. The signing bonus alone can range from approximately $420,000 up to $700,000, along with the potential (depending on province) for additional personal research funding, project funding, and housing.
China also understands that a talent war is not a culture war. This isn’t a diversity program, and it isn’t framed as one, though it harvests diversity of thought to fuel China’s ascent. It’s a talent acquisition program that happens not to care about gender, national origin, or any demographic factor except capability. If you can build AI systems, China wants you.
The results are measurable and rapid, reflecting how well the approach is working. According to MacroPolo’s Global AI Talent Tracker, 28% of elite AI researchers globally are now Chinese, up from 11% in 2019. Conversely, America’s share of AI researchers dropped from 59% to 42% in the same period. At least 85 scientists across scientific fields have moved from U.S. institutions to China since early 2024 alone.
Here’s the part that should concern anyone thinking about AI competition: over half of DeepSeek’s 200+ researchers never left China for schooling or work. Their domestic pipeline is producing world-class AI talent without needing to recruit from abroad at all. When DeepSeek released models that matched frontier U.S. capabilities at a fraction of the compute cost, they did it with homegrown talent.
While China built this infrastructure, recent U.S. policy shifts moved in the opposite direction. Frontier model reporting requirements were rescinded, the AI Safety Institute was reorganized, and guidance from the National Institute of Standards and Technology, which sets federal standards for AI development practices, is being revised to remove previous metrics. The Blueprint for an AI Bill of Rights was removed from the official White House website, and immigration pathway modernization for AI experts was halted. As of writing, the mechanisms that could have measured whether we were addressing the barriers documented in this series were removed before anyone tested whether they worked.
Meanwhile, China’s Cyberspace Administration created a mandatory algorithm registry requiring companies to file training methods and security self-assessments, among the first such systems globally. In 2018, China’s Ministry of Science and Technology (MOST) designated National Champions including Baidu, Alibaba, Tencent, and SenseTime, granting them preferential access to government data and infrastructure. They have a coordinated national strategy for AI dominance. They have funding mechanisms that make American venture capital look modest. They have a talent pipeline that doesn’t filter out half the population.
The structural barriers I’ve documented across Parts 1 through 5, the clearance delays, the financial penalties, the cultural hostility, all of these compound in AI because AI moves at software speed. That means the impact of these barriers scales at the speed China advances. Every barrier that costs us months costs us model generations, just like every woman we lose to attrition is a researcher China would have hired yesterday.
Defense acquisition cycles measure in years, so 249-day clearance delays were painful but survivable; programs could absorb them. AI development cycles measure in months. The 249-day wait I documented in Part 4 was bureaucratic friction for defense, but for AI it represents multiple generations of model architecture becoming obsolete. The techniques someone would bring to a job literally become outdated during the wait.
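The arithmetic, with the release cadence as an assumption of mine (frontier labs don’t publish a fixed schedule, and actual cycles vary by lab and by year):

```python
clearance_wait_days = 249  # the Part 4 figure

# Hypothetical frontier model release cadences, in days.
for cadence_days in (90, 180):
    generations = clearance_wait_days / cadence_days
    print(f"one release every {cadence_days} days: "
          f"{generations:.1f} model generations pass during the wait")
```

At a quarterly cadence, nearly three generations ship while a candidate waits for a clearance; even at a half-year cadence, the state of the art has moved on before day one.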
This compounds with clearance barriers because AI’s workforce is fundamentally different from legacy defense. Over 50% of the U.S. AI workforce is foreign-born and two-thirds of AI graduate students are international. The clearance barriers from Part 4 hit AI harder than they ever hit defense manufacturing because immigration barriers compound with security barriers in ways traditional defense never faced.
Concluding remarks: two paths and the strategic choice
Here I lay out the choice explicitly because I’ve been building to this across the whole series.
Two paths.
Path A means replicating the defense patterns: apply traditional vetting to AI researchers as dual-use capabilities expand, and watch clearance processing become the binding constraint on national security AI the way it became the binding constraint on defense. We import the same cultural patterns from the same companies, get approximately 20% women in cleared AI roles (probably less, since AI has even more international researchers who’ll get flagged for foreign contacts), and watch China continue outpacing us with $700,000 signing bonuses and zero clearance delays, among the other enticements they provide. We lose the AI race the same way we’ve been losing the defense race: through self-inflicted talent constraints.
Path B means learning from the defense failures: implement continuous vetting that catches problems faster while processing people sooner, and build on allied frameworks that create shared talent pools. AUKUS Pillar II is working toward federated security clearances recognized across three countries, basically a trilateral free market for defense technology with free flow of knowledge and people. NATO DIANA offers 23 accelerator sites and 182 test centers across 28 countries, with work conducted at university-based sites rather than classified facilities, meaning you don’t need a personal clearance to participate. The March 2024 RAAIT trials demonstrated interoperability of AI datasets, models, and platforms across the US, UK, and Australia. These frameworks exist; we could build on them and require gender metrics in AI-for-defense programs so we actually measure what’s happening instead of maintaining the research void I documented in Part 4. Pushing women’s participation in national security AI toward 35-40% is achievable.
America’s constitutional protections against compelled labor enable democracy. China’s Military-Civil Fusion can compel cooperation, but compelled cooperation breeds resentment and brain drain and innovation-killing compliance. Our voluntary system, when properly incentivized, generates genuine commitment and creative problem-solving that authoritarian systems cannot replicate.
The challenge is making voluntary participation attractive enough. We’re failing that challenge not because of constitutional constraints but because of bureaucratic barriers and cultural hostility that have nothing to do with the Constitution. The 249-day clearance wait isn’t constitutionally required. The financial vetting that treats debt as risk isn’t constitutionally required.
These are policy choices we made and keep making.
The Jenga tower is tipping
My Chinese friend who beats me at Jenga doesn’t have supernatural powers. She just uses all the pieces while I leave half in the box. She builds higher because she has more to build with. She takes risks I can’t take because she has margin for error I don’t have. That is this situation exactly.
China isn’t smarter at AI. They’re not more innovative. They don’t have better researchers or more creative engineers. Not all of their work is original. They just use everyone while we use half of everyone. In exponential technologies like AI, that difference doesn’t add. It multiplies. It scales. It changes everything.
Eric Schmidt told Congress that “China’s stated goal is global AI leadership by 2030. Achieving this would allow Beijing to set global technology standards and norms, fundamentally altering the global balance of power.” These are not partisan assessments, and this is not a petty culture war divorced from national security. These are bipartisan threat evaluations. Consider how the National Security Commission on AI warned that “successful adoption of AI will drive economies, reshape societies, and determine which countries set the rules for the coming century.” Meanwhile, the Carnegie Endowment describes “an AI governance arms race” that “reflects real competition among states, international organizations, and the tech industry to set global standards.”
Parts 1-5 of the China girl series documented how we discovered women win wars, proved diverse cognitive approaches produce superior outcomes in every domain where we’ve measured them, and then forgot, while China studied our history and operationalized our lessons at national scale.
Part 6 documents the same pattern emerging in real time for AI.
Defense technology was the proof of concept for the thesis. We ran the experiment across five parts of this series and documented the results: an estimated $100 billion in annual losses, 23% women in an industry that should approach labor force parity, and significant capability gaps compounding while China builds higher with all their pieces. As we look towards the future, AI governance is the test of whether we learned anything.
The Jenga tower is tipping. China locked in permanent mobilization through Military-Civil Fusion. America locked out half its talent through bureaucratic inertia dressed up as security necessity, among other barriers, and is now without mechanisms to measure who it is losing.
We’re putting pieces back in the box while they build higher with all of theirs.
Democracy doesn’t die in darkness. It dies in the fluorescent lighting of human resources departments processing clearance paperwork while China builds the future with the talent we rejected as a dusty Jenga box sits in a corner.
It’s time to open it up and use all the pieces.
Reader note: This is analysis of innovation economics, competition policy, and governance frameworks. For comments and consulting inquiries, reach me at ani@anibruna.com

