The Tech Sales Newsletter #77: AI gets political

While software has always been intermingled with politics and government projects, it’s one of the few industries where there hasn’t historically been a big push to adopt technology priorities as national ones.

More specifically, while governments have provided research funding, legal frameworks, and good old cash in the form of contracts, they’ve never pushed for nationalization or for software as “destiny” for the nation-state.

This has so far been a prudent strategy for the most part, leaving both risk and opportunity up to the builders and sellers rather than the bureaucrats.

As we enter the most influential era of technology, this dynamic is likely to shift. This is particularly important for tech sales because if you’ve had any experience selling mission-critical projects to government buyers, the money comes with a lot of… conditions.

The key takeaway

For tech sales: The conversation between governments and AI companies has shifted from “What exactly do you do, nerd?” to “How can we help accelerate this?” This shift will lead to a significant increase in opportunities ahead of us—both direct funding and government contracts, and an indirect “favorable business environment.” However, these opportunities will come with strings attached, whether you like it or not.

For investors: Value accrues at the bottom of the stack, and that value is about to grow exponentially. As new electric grid capacity is either added or redirected toward large-scale data centers, several companies will build outsized computing clusters that will be deemed critical to U.S. national security. I can’t be more specific than this.

The shifting state of play

Source: whitehouse.gov

For the purpose of this discussion, we’ll start with the actual structural change—the executive order on “Removing Barriers to American Leadership in AI.” While both the European Union and the Biden administration have issued their own regulatory frameworks, these can be considered iterative of the historical approach to technology, albeit more restrictive. Based on the Trump administration essentially removing the previous framework and the EU Commission facing significant pressure to start prioritizing business-friendly policies, both of these are essentially irrelevant going forward.

Section 1. Purpose. The United States has long been at the forefront of artificial intelligence (AI) innovation, driven by the strength of our free markets, world-class research institutions, and entrepreneurial spirit. To maintain this leadership, we must develop AI systems that are free from ideological bias or engineered social agendas. With the right Government policies, we can solidify our position as the global leader in AI and secure a brighter future for all Americans.

This order revokes certain existing AI policies and directives that act as barriers to American AI innovation, clearing a path for the United States to act decisively to retain global leadership in artificial intelligence.

As part of the “revoke” text, a clear direction is now stated—not simply to support or regulate, but to ensure “global leadership in AI.”

Sec. 2. Policy. It is the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.

This marks a shift in strategy.

AI dominance = The goal is not just to participate but to be ahead of all other competitors. Historically, the language used in these contexts has been “leadership.” Dominance represents a clear step up in expectation.

Promote human flourishing, economic competitiveness, and national security = The U.S. aims to be a benevolent leader in AI while still achieving first place in the utilization of AI, whether for economic or military purposes.

There is no emphasis here on “international collaboration” or “scientific discovery.” AI has been identified as fundamental to the security of the nation-state and will be treated accordingly.

Sec. 4. Developing an Artificial Intelligence Action Plan. (a) Within 180 days of this order, the Assistant to the President for Science and Technology (APST), the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs (APNSA), in coordination with the Assistant to the President for Economic Policy, the Assistant to the President for Domestic Policy, the Director of the Office of Management and Budget (OMB Director), and the heads of such executive departments and agencies (agencies) as the APST and APNSA deem relevant, shall develop and submit to the President an action plan to achieve the policy set forth in section 2 of this order.

The immediate next step is to allocate resources to define an active strategy for the U.S. government to achieve the goals outlined in its policy. Now, let’s review who’s who in the three roles tasked with translating the policy into an actionable plan.

Michael Kratsios

Assistant to the President for Science and Technology (APST)

Michael is clearly a T-shaped individual, on a mission to drive technological progress at scale within the U.S. government. Note his diverse connections to funding sources, a proven track record in the White House, Department of Defense (DoD) experience, and active engagement in one of the most critical “back-office” AI companies in the industry.

David Sacks

Special Advisor for AI and Crypto

David is one of the most well-connected individuals in tech today. While his role is primarily associated with crypto, we should not underestimate the back-channeling he will facilitate regarding AI.

Michael Waltz

Assistant to the President for National Security Affairs (APNSA)

Waltz has earned four Bronze Stars, including two for valor during multiple combat tours, and is known as one of the most prominent China hawks in Congress.

If we draw some conclusions here, a few things become obvious:

  • The policy is not “performative art” but represents a clear change of direction.

  • The individuals tasked with delivering the actionable plan bring a diverse yet interconnected set of skills and existing relationships.

  • The policy prioritizes both economic and military utilization of AI.

  • The goal is to dominate globally, not merely “support” AI adoption within the U.S.

It’s important to recognize that once the government takes a deep interest in a technology, the contracts they grant often come with… extras.

For those who have never sold a project requiring security clearance, these conversations almost always lead to concessions or “under the table” arrangements. This became especially clear the first time I negotiated with a government procurement agent. He made it explicit: “I can agree to these terms, but it doesn’t matter anyway—if we need the software for national security purposes, we’ll just break your licensing protections.”

Chips, data, energy and talent

Source: OpenAI

OpenAI recently released a policy proposal paper outlining their key pitch for AI as a foundational technology and an integral part of public policy.

OpenAI’s mission is to ensure that artificial intelligence benefits everyone. To us, that means building AI that helps people solve hard problems because by helping with the hard problems, AI can benefit the most people possible—through better healthcare and education, more scientific discoveries, better public policies and services, and improved productivity. We’re off to a strong start, creating freely available intelligence being used by more than 300 million people around the world, including 3 million developers, to ideate, discover, and innovate beyond what we’re currently capable of doing on our own.

The differences between the U.S. government’s policy moving forward and this proposal become obvious in the very first sentence. “Benefits everyone” is not a shared goal.

With such prosperity in sight, we want to work with policymakers to ensure that AI’s benefits are shared responsibly and equitably. The enclosed framework champions the entrepreneurship and individual freedoms at the heart of the American innovation ecosystem. If done right, the developers who are AI’s Main Street will thrive along with companies of all sizes, and the broad economic benefits of the technology will catalyze a reindustrialization across the country.

This reads more like a statement from a non-profit than from the very commercial structure that OpenAI is adopting as its primary operational model. There will be no “equitably sharing” in a world where dominance through AI becomes the norm.

America has faced such moments before, and we know how to think big, build big and act big. Automobiles weren’t invented here—they were invented in Europe. Early proponents envisioned the car transforming how people lived and worked. Supply chains and customer bases could be expanded and diversified.

But in the United Kingdom, where some of the earliest cars were introduced, the new industry’s growth was stunted by regulation. The 1865 Red Flag Act required a flag bearer to walk ahead of any car to warn others on the road and wave the car aside in favor of horse-drawn transport. How could a person walk in front of a car without getting run over? Because of another requirement: that cars move no faster than 4 miles per hour. America, meanwhile, took a very different approach to the car, merging private-sector vision and innovation with public-sector enlightenment to unlock the new technology and its economic—and ultimately, with World War I looming—national security benefits.

The country became the heart of the world’s auto industry, mass-producing affordable cars with the help of local, state and federal officials who saw the industry’s potential. Public safety concerns over horse-drawn vehicles on crowded city streets prompted local officials to support the switch to cars—not make cars yield to horses. The country’s size prompted states to invest in better roads. And the federal government cleared the way to scale transport by car with a national—rather than state-by-state—highway system.

The comparison between the historical large-scale adoption of automobiles and AI is the strongest argument in the paper, particularly as it aligns closely with our current reality. There are valuable insights to be gained from studying how groundbreaking technologies were perceived at the time and how they evolved over decades.

An interesting anecdote highlights this: Jeff Bezos got the idea for AWS while visiting a winery in Luxembourg. He was shown one of the world’s first electricity generators, used in the era before the concept of a shared electric grid existed. At that time, if you wanted electricity, you had to run your own generator for your facility. This sparked an epiphany for Bezos: every company running its own computing infrastructure at scale was just as inefficient as every facility running its own generator. The future, he realized, lay in shared data centers that could easily distribute computing power to economic hubs with demand.

The enclosed policy proposals reflect OpenAI’s position that:

● We believe in America because America believes in innovation.

● Chips, data, energy and talent are the keys to winning on AI—and this is a race America can and must win.

● With an estimated $175 billion sitting in global funds awaiting investment in AI projects, if the US doesn’t attract those funds, they will flow to China-backed projects—strengthening the Chinese Communist Party’s global influence.

● Rules and regulations for the development and use of AI should be based on the democratic values the country has always stood for—what we think of as “democratic AI.”

● As for any industry, we need common-sense rules of the road that safeguard the public while helping innovators thrive by encouraging investment, competition, and greater freedom for everyone—and to best achieve this, these rules should apply nationwide.

If we cut through the fluff, the policy vision is clear: OpenAI aims to attract as much funding as possible toward chips, data, energy, and talent while being protected by favorable regulatory frameworks. Rather than expanding further on these points, the paper shifts focus to national security, child safety, and infrastructure. The only well-defined section is a series of infrastructure-related proposals:

Solutions:

We need a foundational strategy to ensure that investment in infrastructure benefits the most people possible and maximizes access to AI. This includes policies and initiatives that encourage rather than stifle developers; support thriving AI ecosystems of labs, start-ups and larger companies; and secure America’s leadership on AI into the future, such as:

Ensuring that AI has the ability to learn from universal, publicly available information, just like humans do, while also protecting creators from unauthorized digital replicas.

Digitizing government data currently in analog form. A lot of government data is in the public domain. Making it more accessible or machine-readable could help US AI developers of all sizes, especially those working in fields where vital data is disproportionately government-held. In exchange, developers using this data could work with government to unlock new insights that help it develop better public policies.

A Compact for AI among US allies and partner nations that streamlines access to capital and supply chains in ways that support AI infrastructure and a robust AI ecosystem. Participating countries would also agree to some common security standards. Over time, this collaboration could expand to a global network of US allies and partners that would compete with the People’s Republic of China’s AI infrastructure alliances while also strengthening security through shared standards.

AI Economic Zones, created by local, state and the federal government together with industry, that significantly speed up the permitting processes for building AI infrastructure like new solar arrays, wind farms and nuclear reactors.

Creation of AI research labs and workforces aligned with key local industries by requiring AI companies to provide meaningful amounts of compute to public universities to equitably scale the training of a homegrown AI-skilled workforce. For example, Kansas could establish a hub dedicated to applying AI in agriculture; Texas or Pennsylvania could develop centers focused on integrating AI into power production and grid resilience.

A nationwide AI education strategy—rooted in local communities in partnership with American companies—to help our current workforce and students become AI-ready, bolster the economy, and secure America’s continued leadership on innovation.

Investment in national research infrastructure that would give scientists, innovators, and educators access to the compute and data necessary to accelerate and democratize scientific progress, such as through funding a National AI Research Resource.

Dramatically increased federal spending on power and data transmission and streamlined approval for new lines. That would be accompanied by the creation of a National AI Infrastructure Highway to connect those regional power and communication grids in the interest of national economic competitiveness and security.

Federal backstops for high-value AI public works that illustrate the model of federal spending in support of industry growth and both industry and users adhering to its rules. Since private markets alone may not be enough to pay for the massive amount of needed AI infrastructure, the US government can provide offtake purchase commitments and credit enhancements to encourage infrastructure investment. The resulting infrastructure, such as the new energy sources needed to power AI data centers, would be considered strategic national assets.

These proposals are both sensible and revealing about the direction things might take. The ask is essentially to position AI as a priority service that the U.S. government must make available across the country, through a combination of subsidies and legal frameworks. This approach relies on the shared understanding that AI will be the most important technology of the 21st century.

Source: Andrew Harnik—Getty Images

The very first practical project under the new U.S. policy has kicked off with Stargate:

The Stargate Project is a new company which intends to invest $500 billion over the next four years building new AI infrastructure for OpenAI in the United States. We will begin deploying $100 billion immediately. This infrastructure will secure American leadership in AI, create hundreds of thousands of American jobs, and generate massive economic benefit for the entire world. This project will not only support the re-industrialization of the United States but also provide a strategic capability to protect the national security of America and its allies.

The initial equity funders in Stargate are SoftBank, OpenAI, Oracle, and MGX. SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. Masayoshi Son will be the chairman.

Arm, Microsoft, NVIDIA, Oracle, and OpenAI are the key initial technology partners. The buildout is currently underway, starting in Texas, and we are evaluating potential sites across the country for more campuses as we finalize definitive agreements.

As part of Stargate, Oracle, NVIDIA, and OpenAI will closely collaborate to build and operate this computing system. This builds on a deep collaboration between OpenAI and NVIDIA going back to 2016 and a newer partnership between OpenAI and Oracle.

This also builds on the existing OpenAI partnership with Microsoft. OpenAI will continue to increase its consumption of Azure as OpenAI continues its work with Microsoft with this additional compute to train leading models and deliver great products and services.

Notice how we jumped from $175B in “global funds” to $500B for a single project. The announcement drew criticism from Musk, who claimed that “there is no such funding.” While technically accurate, this was a somewhat misleading statement. The BG2 podcast (hosted by Brad Gerstner of Altimeter and Bill Gurley of Benchmark) presented a strong case for how the funding and rollout might look, and the Arm CEO, who dialed in and is also involved in the project, later confirmed this to be close to reality.

Source: Stargate, Executive Orders, TikTok, DOGE, Public Valuations | BG2 w/ Bill Gurley & Brad Gerstner

So, Stargate is definitely real, and whether it gets implemented at this scale or not is less relevant. Even achieving 2.5 GW of power would be a tremendous milestone, considering that Musk’s current supercluster runs on 0.15 GW and Zuckerberg just announced a 2 GW project. It’s important to note that these data centers will only serve the needs of Meta and OpenAI, with nothing allocated toward broader adoption.

If the two companies together reach 5 GW, that alone would place significant new demand on the grid, given that the total available power in the U.S. right now is around 800 GW at peak, growing at only 2% annually. This is where other policies come into play, particularly those focused on ramping up domestic energy production. The “drill, baby, drill” approach has a lot to do with AI—not just reducing living costs but also meeting the energy demands of scaling AI infrastructure.
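As a back-of-the-envelope check on those figures (the 800 GW peak, 2% growth, and 5 GW of combined AI demand are the numbers above; the arithmetic is mine):

```python
# Rough check: how big is 5 GW of AI demand relative to the U.S. grid?
# Assumptions taken from the text: ~800 GW peak capacity, ~2% annual growth.

us_peak_gw = 800.0
annual_growth = 0.02
ai_demand_gw = 5.0

new_capacity_per_year_gw = us_peak_gw * annual_growth       # ~16 GW added per year
ai_share_of_peak = ai_demand_gw / us_peak_gw                # share of today's peak
ai_share_of_growth = ai_demand_gw / new_capacity_per_year_gw  # share of one year's growth

print(f"AI share of peak capacity: {ai_share_of_peak:.1%}")        # 0.6%
print(f"AI share of one year's new capacity: {ai_share_of_growth:.0%}")  # 31%
```

In other words, two companies alone would consume roughly a third of an entire year’s worth of new grid capacity, which is why energy policy and AI policy are now entangled.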

Crouching tiger, hidden dragon

Source: DeepSeek on X

It’s important to understand that this wouldn’t be a high-stakes game unless, well, there were other players involved.

OpenAI even named the other player in their policy paper:

Over time, this collaboration could expand to a global network of US allies and partners that would compete with the People’s Republic of China’s AI infrastructure alliances while also strengthening security through shared standards.

If you’re engaged in the AI/tech community on X, the last few days have been… hectic. DeepSeek, the most high-profile Chinese AI company currently on the market, ironically positions itself as a “side project” of the hedge fund High-Flyer, which manages $5.5B in assets. Unless you’re exceptionally naive, it’s important to understand that this fund operates at the mercy of the Chinese government.

DeepSeek was established in July 2023, building on the existing machine learning expertise that High-Flyer was already known for, and has since extensively hired “very technically oriented” researchers.

The myth around their capabilities includes the bold claim that R1 and V3 were trained on a shoestring budget of only a few million dollars. However, in an interview, the CEO of Scale AI—one of the leading players in synthetic data for model training—hinted at a much more realistic scenario: R1 was likely trained on 50,000 H100 GPUs, which, under export controls, should never have been available in China.
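A rough sanity check makes the gap obvious. Assuming a hypothetical ~$25,000 per H100-class GPU (the unit price is my assumption, not a figure from the interview), the hardware bill alone would be roughly three orders of magnitude above “a few million dollars”:

```python
# Sanity check on the "few million dollars" training-cost claim.
# Assumption: ~$25,000 per H100-class GPU (hypothetical unit price;
# actual street prices have varied widely).

gpus = 50_000
unit_price_usd = 25_000

hardware_cost_usd = gpus * unit_price_usd
print(f"GPU hardware alone: ${hardware_cost_usd / 1e9:.2f}B")  # $1.25B
```

And that is before power, networking, facilities, and staffing are counted.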

DeepSeek’s activities are far from benevolent contributions to AI research. They represent a key tool of propaganda in the ongoing cold war between the U.S. and China, currently fought through economics and technology.

From my testing of the 32B distilled model, I found it underwhelming compared to models like o1 Pro or Sonnet 3.5 for the type of analytical work I typically use those models for. However, that doesn’t mean the full-sized (671B) model won’t be extremely useful for a range of applications if it performs even close to o1 in day-to-day usage. More importantly, the techniques and principles outlined in their white paper are clearly resonating with a large segment of the open-source research community.

We can expect a significant amount of research to build on top of these models, leading to new forks widely adopted for specific workflows. Ultimately, the goal of making these models public serves a dual purpose: propaganda and economic damage. If “good enough” performance can be achieved with open-source models for various use cases, funding for advanced frontier model research could diminish, potentially hindering the development of meaningful “national security dominance” through the discovery and utilization of AGI/ASI.

The topic of AI has now fully entered the realm of realpolitik. This shift brings both significant benefits for tech sales (as funding accelerates) and notable side effects (as national security begins to take precedence over commercial decisions).

The under-discussed but glaringly obvious first requirement of any government intending to use AI for national security purposes is simple: there must be no guardrails preventing the model from doing exactly “what’s needed”.

Source: DeepSeek R1 Whitepaper

If you believe that models like OpenAI’s o3 or DeepSeek’s R1 won’t be used without guardrails in combat scenarios, espionage, cybersecurity attacks, or covert operations—well, bless your heart.

The Deal Director

Cloud Infrastructure Software • Enterprise AI • Cybersecurity

https://x.com/thedealdirector