Business decision-makers seem to talk about little else but AI. This seemingly ubiquitous technology introduces several quality-of-life capabilities alongside significant cybersecurity issues.
Here, we touch on Executive Order 14110 and how it addresses this issue for government agencies and contractors.
Why Worry About AI?
We’ll be the first to say we aren’t an AI company. But, for better or worse, AI seems to touch on everything in our technological lives. And that includes cybersecurity and cyber threats.
The challenge is that as these technologies make their way into software, it’s almost inevitable that government agencies and their contractors will adopt them. As with any new technology, the potential for significant challenges remains high across several contexts.
Executive Order 14110
The Executive Order on Artificial Intelligence, issued on October 30, 2023, addresses AI technology’s potential benefits and risks.
The order encompasses several key areas of interest around AI and its potential adoption:
- Safety and Security: The EO mandates measures to mitigate risks, particularly in critical infrastructure and national security. It includes requirements for companies developing powerful AI systems to conduct safety tests and report results to the Department of Commerce. The Department of Energy also evaluates AI outputs that may pose security threats.
- Equity and Civil Rights: The EO directs actions to prevent discrimination and protect civil rights in AI applications. This includes training for investigating AI-related civil rights violations, developing best practices for AI use in law enforcement, and ensuring AI deployment in workplaces and housing does not lead to discrimination.
- Innovation and Competition: To foster AI innovation, the EO calls for increased investment in AI research, the establishment of regional innovation engines, and the streamlining of visa processes for AI talent. It also addresses intellectual property concerns. It encourages the FTC to use its authority to ensure fair competition and consumer protection in the AI marketplace.
- Workforce Impact: The EO outlines directives to support workers, including reports on labor market impacts, best practices for employers, and ensuring fair compensation for workers whose tasks are augmented by technology.
- Public Benefit and Governance: The EO promotes using AI for the public good, such as in scientific research and public benefits programs. It also establishes the AI Safety and Security Board to advise on the safe deployment of AI in critical infrastructure.
Overall, Biden’s EO on AI seeks to balance AI’s innovative potential with the safeguards necessary to protect society from its risks.
What Are the Specific Comments on Cybersecurity in this EO?
The cybersecurity requirements of President Biden’s Executive Order on AI are extensive. They target various development and deployment approaches to ensure safety and security.
Some of the requirements that address cybersecurity and privacy explicitly include:
- AI Safety Testing and Reporting: Companies developing powerful AI systems must conduct safety tests and report the results to the Department of Commerce. This includes sharing information on large computing clusters capable of training these systems.
- Critical Infrastructure Protection: The EO directs the development of safety and security guidelines for critical infrastructure sectors. This involves assessing risks and implementing measures to safeguard against the threats AI applications pose to critical infrastructure.
- Federal Data Security: The EO mandates guidelines for the secure release of federal data used for large language model training to prevent its misuse for developing offensive capabilities, such as chemical, biological, radiological, or nuclear weapons.
- National Security Memorandum: The executive branch will develop a coordinated approach to managing AI’s security risks, focusing on AI used in national security systems and for military and intelligence purposes.
- Algorithmic Risk Management: The EO emphasizes managing risks associated with dual-use AI models. NIST is tasked with developing standards for pre-release testing of AI models and guidelines for secure software development frameworks.
- Vulnerability Identification: New tools will be piloted to identify vulnerabilities in vital government software systems. The Departments of Defense and Homeland Security will lead these efforts to ensure the security of software used for national security and other critical purposes.
These measures are designed to mitigate the risks associated with AI while ensuring that its development and deployment contribute positively to national security and public safety. The EO aims to establish robust cybersecurity practices across federal agencies and critical infrastructure sectors.
What Role Will NIST and Special Publications Play in this Executive Order?
The National Institute of Standards and Technology (NIST) plays a critical role in Executive Order 14110, developing standards and guidelines to ensure AI technologies are deployed safely and securely. Here are the key responsibilities assigned to NIST:
- AI Risk Management Framework: NIST is tasked with developing and refining the AI Risk Management Framework. This framework is designed to help organizations manage the risks associated with AI, including safety, security, and ethics.
- Standards for Generative AI: NIST is responsible for creating standards to manage the risks posed by generative AI systems. This includes developing guidelines for securely developing and deploying generative AI models and addressing dual-use concerns (where AI can be used for both benign and malicious purposes).
- Pre-release Testing Standards: NIST is instructed to establish standards for pre-release red-team testing of models. Red-teaming involves simulating attacks to identify vulnerabilities and improve the security of systems before they are deployed.
- Guidance on AI Use in Critical Infrastructure: NIST is developing guidelines tailored to critical infrastructure sectors. This involves assessing risks in these sectors and providing actionable recommendations to mitigate those risks.
- Public Comment and International Standards: NIST actively seeks public input on draft documents related to managing risks and is involved in expanding international standards for AI. This collaborative approach ensures that safety and security guidelines are comprehensive and widely adopted.
By fulfilling these roles, NIST contributes to the executive order’s goal of harnessing AI’s potential while protecting society from its risks. This involves setting technical standards and fostering a collaborative environment where these standards can evolve and improve over time.
Make Sure Your AI-Powered Systems Meet Government Standards with Continuum GRC
Continuum GRC is a cloud platform that stays ahead of the curve, including support for all of the following certifications and frameworks (along with our sister company and assessors, Lazarus Alliance):
- FedRAMP
- StateRAMP
- NIST 800-53
- FARS NIST 800-171 & 172
- CMMC
- SOC 1 & SOC 2
- HIPAA
- PCI DSS 4.0
- IRS 1075 & 4812
- COSO SOX
- ISO 27001 + other ISO standards
- NIAP Common Criteria
- And dozens more!
We are the only FedRAMP and StateRAMP-authorized compliance and risk management solution worldwide.
Continuum GRC is Proactive Cyber Security®. Call 1-888-896-6207 to discuss your organization’s cybersecurity needs and learn how we can help protect its systems and ensure compliance.