(Original Caption) The Making of Old Glory. Mrs. Helen McAfee uses a zig-zag machine to sew stars to the blue field.

Bettmann Archive

The date was October 24, 2024. The Biden White House released a new U.S. National Security Memorandum on Artificial Intelligence, and as you might expect, it’s got a lot of words in it.

It’s worth writing about, as we’re at this inflection point where new models are advancing extremely rapidly.

Taking a look at this document and its companion, the Framework to Advance AI Governance and Risk Management in National Security, a few things stick out about this directive.

Difficult Language

The first thing that many readers will notice is that both of these documents are written, essentially, in legalese.

Here’s how the drafters describe the scope of the framework document itself:

“The Framework to Advance AI Governance and Risk Management in National Security (“AI Framework”) builds on and fulfills the requirements found in Section 4.2 of the National Security Memorandum on Advancing the United States’ Leadership in AI, Harnessing AI to Fulfill National Security Objectives, and Fostering the Safety, Security, and Trustworthiness of AI (“AI NSM”), which directs designated Department Heads to issue guidance to their respective components/sub-agencies to advance governance and risk management practices regarding the use of AI as a component of a National Security System (NSS). This AI Framework is intended to support and enable the U.S. Government to continue taking active steps to uphold human rights, civil rights, civil liberties, privacy, and safety; ensure that AI is used in a manner consistent with the President’s authority as commander-in-chief to decide when to order military operations in the nation’s defense; … AI use in military contexts shall adhere to the principles and measures articulated in the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, announced by the United States on November 9, 2023.”

That might be easy for ChatGPT to digest, but most humans will struggle with it at least a little bit. As with any other piece of legal drafting, plain language is not a top priority.

Here’s how planners address an earlier policy, the Office of Management and Budget’s ‘Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence’ memorandum, otherwise known as OMB M-24-10:

“The AI Framework is complementary to, but does not otherwise replace or modify, OMB Memorandum M-24-10. All AI use by federal agencies shall be governed by either OMB Memorandum M-24-10 and its successor policies or by this AI Framework. This AI Framework covers AI when it is being used as a component of an NSS. This AI Framework applies to both new and existing AI developed, used, or procured by or on behalf of the U.S. Government, and it applies to system functionality that implements or is reliant on AI, rather than to the entirety of an information system that incorporates AI.”

Let’s leave that for now – suffice it to say: read at your own risk.

The Four National Security Pillars for Handling AI

One thing that the framework document does lay out pretty clearly is four key components of strategy for managing AI:

The first one involves looking at prohibited and/or ‘high impact’ use cases around artificial intelligence and the risks attached to them.

The second one involves working toward “sufficiently robust minimum risk management practices,” which, as we’ll point out later, involves testing.

The third one involves cataloging the AI use cases that the government deems ‘high impact’ – a documentation approach, basically.

The fourth one cites ‘effective training and accountability mechanisms,’ which is another component of this regulation.

Protecting the Nuclear Football

Here’s a component of the memorandum that’s uniquely interesting to foreign policy buffs – it starts out with the suggestion that “analysts in China and Russia will undoubtedly study the NSM closely.”

The companion document itself prohibits “taking out the human in the loop” for nuclear weapons-related decisions. Experts cite a U.S./China AI safety meeting in Geneva in May of this year, which probably contributed to this important caveat. Stakeholders note that this provision should “reduce miscommunication between the two rivals.”

Robust Testing: Part of the Deal

If we go back to that second pillar identified in the documents, we have the requirement for additional safeguards for high-impact AI use cases.

The requirements include testing, notification, and an impact assessment. You can read all about them in the language of the memo. Essentially, systems will be scrutinized to make sure that they work as intended and don’t have harmful side effects for society at large.

Appointing Chiefs

Here’s another seemingly important aspect of the memo – the document shows the government will appoint Chief AI Officers who have the authority to waive certain high-impact requirements for particular use cases.

Whatever you call the role, it echoes the private-sector practice of naming people who will be responsible for keeping an eye on the use of AI in general.

Those are five of the big takeaways from these unprecedented government documents, but for a little bit more, let’s hear what AI itself has to say:

Just for fun, we asked ChatGPT: what’s in the document?

“The new National Security Memorandum (NSM) on artificial intelligence (AI), issued by the Biden administration on October 24, 2024, focuses on leveraging AI to advance U.S. national security while ensuring responsible use,” ChatGPT returned. “The memorandum outlines several key priorities…”

This is ChatGPT’s list: accelerating AI adoption, safeguarding values/rights, strengthening AI infrastructure, and countering ‘adversary AI’.
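If you wanted to reproduce this kind of query programmatically rather than through the chat interface, a minimal sketch using OpenAI’s Python SDK might look like the following. The model name and prompt wording here are illustrative assumptions, not the exact query we ran:

# A minimal sketch, assuming the openai Python package (v1 or later) and an
# API key stored in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice, not necessarily what we used
    messages=[
        {
            "role": "user",
            "content": "What's in the new U.S. National Security "
                       "Memorandum on artificial intelligence?",
        }
    ],
)

# Print the assistant's reply text
print(response.choices[0].message.content)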

Then we asked the following: “What do you think of this memorandum?”

In response, ChatGPT kind of wrote a book, but here it is:

“The Biden administration’s National Security Memorandum on AI reflects a timely and comprehensive approach to managing both the opportunities and risks that AI presents in the national security landscape. It attempts to strike a balance between harnessing AI to maintain a technological edge, particularly in competition with adversaries like China, and ensuring that its use aligns with democratic values and safeguards civil liberties.

One of the memorandum’s strengths is its emphasis on securing AI infrastructure, such as bolstering chip supply chains and prioritizing AI in supercomputing. This aligns with the broader strategy seen in initiatives like the CHIPS Act, addressing the critical importance of semiconductor production for AI leadership. The focus on private sector collaboration is also a practical recognition of where much of AI innovation occurs, ensuring that U.S. companies remain globally competitive.

The memorandum also brings an ethical dimension by underscoring that AI adoption must protect civil rights and privacy. This is crucial in light of public concerns over surveillance and bias in AI systems. However, ensuring that AI adoption by intelligence and defense agencies remains transparent and accountable will be challenging, especially given the classified nature of some applications.

One potential criticism is whether the U.S. can realistically balance rapid AI adoption with the need for stringent ethical safeguards, especially when global competitors may not impose similar constraints. There’s also the concern raised by civil society groups that deploying AI in security contexts could perpetuate bias or civil liberties violations.

Overall, the memorandum sets a solid framework, but its effectiveness will depend on the practical implementation of these guidelines across various federal agencies and how well the U.S. can coordinate with international partners on responsible AI use.”

Those responses illustrate the power of AI to help shape discussions and contribute to meaningful dialogue about what we’re seeing, as we move into another year of fast-paced innovation and development.