{"database": "openregs", "table": "crs_reports", "rows": [["IF13217", "Federal Government and Anthropic: Considerations for AI Innovation and Competition", "2026-05-05T04:00:00Z", "2026-05-07T09:38:00Z", "Active", "Resources", "Laurie Harris, Clare Y. Cho", "Artificial Intelligence, Telecommunications & Internet Policy, Competition Policy & Law, Technology & Innovation", "On February 27, 2026, President Trump directed federal agencies to stop using technology developed by the U.S. artificial intelligence (AI) company Anthropic, and Secretary of Defense Pete Hegseth announced he was directing the Department of Defense (DOD) to designate Anthropic a \u201cSupply-Chain Risk to National Security.\u201d (DOD and the Secretary are now using \u201cDepartment of War\u201d and \u201cSecretary of War,\u201d respectively, as \u201csecondary\u201d designations per Executive Order 14347.) These actions followed a reported months-long dispute between DOD and Anthropic regarding certain uses of its AI technologies. The national security risk designation and use prohibitions may have implications for AI innovation and competition, including at Anthropic and other domestic AI companies. This In Focus provides information on the AI models under debate, actions taken by the U.S. government (USG), the potential implications of those actions, and considerations and questions for Congress.  \nFrontier AI Models: Potential Capabilities and Limitations\nFrontier AI models are the most advanced foundation models\u2014general-purpose AI models pretrained on large datasets that can be used for many applications. Anthropic\u2019s Claude model, one such frontier model, reportedly has been deployed across DOD and national security agencies for such applications as intelligence analysis, operational planning, and cyber operations. In June 2024, Anthropic stated it was the first AI company to deploy frontier models in classified USG networks. In July 2025, four U.S. 
AI companies entered into contracts with DOD to \u201caccelerate [DOD] adoption of advanced AI capabilities to address critical national security challenges.\u201d DOD awarded up to $200 million each to Anthropic, Google, OpenAI, and xAI. As of May 1, 2026, the Pentagon reportedly had agreed to the use of xAI\u2019s Grok model, Google\u2019s Gemini model, and six other tech companies\u2019 AI models in classified systems. \nWhile asserting a belief in \u201cthe existential importance of using AI to defend the United States and other democracies,\u201d Anthropic claimed that, during contract negotiations with DOD, it requested two use exceptions for its Claude model. First, Anthropic stated that it \u201cdo[es] not believe that today\u2019s frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America\u2019s warfighters and civilians.\u201d Second, Anthropic asserted that \u201cmass domestic surveillance of Americans constitutes a violation of fundamental rights.\u201d \nAnthropic\u2019s requested use exceptions highlight a broader debate over frontier AI model capabilities and limitations. Though frontier AI models demonstrate powerful capabilities, as measured by publicly available benchmarks and assessments, some studies have described their potential limitations. A December 2024 Frontier AI Trends Report by the UK\u2019s AI Security Institute reported that, in its evaluations of frontier AI systems across domains critical to national security and public safety, model safeguards are improving, but the institute found \u201cvulnerabilities in every system [it] tested.\u201d According to Stanford University\u2019s 2025 AI Index Report, complex reasoning tasks remain a challenge, though AI model performance on \u201cdemanding benchmarks\u201d continues to improve. 
\nRecent Federal Actions\nOn February 27, 2026, President Trump directed federal agencies to \u201cIMMEDIATELY CEASE all use of Anthropic\u2019s technology\u201d and outlined a six-month \u201cphase out period for Agencies like the Department of War who are using Anthropic\u2019s products, at various levels.\u201d On March 5, 2026, Anthropic CEO Dario Amodei confirmed receipt of a letter from DOD designating Anthropic a supply chain risk to America\u2019s national security, which reportedly went into immediate effect.\nIn response, federal agencies took actions to stop using Anthropic\u2019s Claude models. For example, the General Services Administration (GSA) announced that it was removing Anthropic from USAi.gov and its multiple award schedule (i.e., long-term government-wide contracts with commercial firms). Other agencies such as the State Department and the Department of Health and Human Services reportedly ceased use of Claude. The Office of Personnel Management removed Claude from its list of AI use cases (updated March 4, 2026) and added xAI\u2019s Grok and OpenAI\u2019s Codex (Claude was listed on the prior list, dated January 30, 2026).\nPotential Effect on Anthropic\nAs a private company, Anthropic provides limited public financial information. Some information suggests that the loss of business from federal agencies and government contractors that stop using Anthropic\u2019s AI models might not have a significant financial effect on the company. The $200 million awarded by DOD and an $18,960 award from the Department of State in 2026 (the only government contract with Anthropic on usaspending.gov) are relatively small compared to its run-rate revenue (i.e., annual revenue estimate based on its current financial performance). On February 12, 2026, Anthropic stated its run-rate revenue had reached $14 billion, and on April 6, 2026, Anthropic announced it had surpassed $30 billion. 
In January 2026, Anthropic CEO Dario Amodei reportedly stated that about 80% of Anthropic\u2019s business is with enterprise customers, which he viewed as a relatively predictable, stable source of income.\nOther information suggests that the loss of business from federal agencies and government contractors that stop using Anthropic\u2019s AI models might have a significant financial effect on the company. Anthropic has stated that the USG\u2019s actions are \u201charming Anthropic irreparably.\u201d Additionally, it is unclear what percentage of Anthropic\u2019s enterprise customers are federal agencies and USG contractors. If Anthropic loses a significant share of its revenue or funding from investors, it might have difficulty continuing to develop its AI models, potentially affecting its ability to compete and innovate. \nThe effect of the USG\u2019s actions on Anthropic may partially depend on the response of its other clients and the scope of the prohibition, which is under dispute. On February 27, 2026, Secretary Hegseth stated, \u201cEffective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.\u201d Anthropic responded that it does not believe this action is legal, asserting that \u201ca supply chain risk designation under 10 U.S.C. \u00a73252 can only extend to the use of Claude as part of [DOD] contracts\u2014it cannot affect how contractors use Claude to serve other customers.\u201d Anthropic filed federal lawsuits in two courts on March 9, 2026. One lawsuit claims that the government\u2019s actions exceed its legal authority and violate the Administrative Procedure Act as well as Anthropic\u2019s due process and First Amendment rights. The second lawsuit seeks review of the designation of Anthropic as a supply chain risk under a separate statute. 
In the first lawsuit, Microsoft, a company that \u201chas established a close business relationship with Anthropic,\u201d filed an amicus brief urging the court to temporarily block the implementation of this designation. On March 26, 2026, a federal judge issued a preliminary injunction temporarily blocking the implementation of the designation and halted the President\u2019s directive ordering federal agencies to stop using Claude, and GSA restored Anthropic to USAi.gov and its multiple award schedule. In the second lawsuit, the U.S. Court of Appeals for the D.C. Circuit denied Anthropic\u2019s request to stop DOD from labeling it as a security risk. \nOne trade group reportedly raised concerns that the designation is being used in a procurement dispute and should instead be reserved for foreign adversaries. Some initial reporting indicated that a subset of defense contractors has stopped using Anthropic, while others are waiting to see how the conflict is resolved. On April 17, 2026, Anthropic executives met with White House officials, discussing \u201copportunities for collaboration\u201d and \u201cbalance between advancing innovation and ensuring safety.\u201d\nPotential Implications for Innovation and Competition\nThe USG\u2019s actions against Anthropic might have broader effects on AI markets and competition. For example, if the USG\u2019s actions negatively impact Anthropic\u2019s revenues such that it can no longer operate, that would reduce the number of companies offering frontier AI models and prevent other companies from creating AI products using Anthropic\u2019s models. However, the USG\u2019s actions also appear to have boosted public adoption of Claude, which became the most popular app on Apple\u2019s chart of top free apps in the United States on February 28, 2026. 
Further, OpenAI\u2019s decision to strike a deal with the Pentagon reportedly resulted in a \u201cmassive wave of public backlash\u201d as users uninstalled ChatGPT.\nThe USG\u2019s actions against Anthropic have also raised concerns about potential effects on innovation and U.S. competitiveness. Some trade groups reportedly raised concerns that designating an American technology company as a national security risk would \u201chave a chilling effect on U.S. innovation.\u201d A letter reportedly sent by former defense officials, academics, and tech policy leaders to the House and Senate Armed Services Committees asserted that \u201cblacklisting an American company weakens U.S. competitiveness\u201d and warned, \u201cthis is not a marketplace any serious entrepreneur or investor can build around.\u201d Microsoft\u2019s amicus brief in support of Anthropic asserts: \u201cThis is not the time to put at risk the very AI ecosystem that the Administration has helped to champion.\u201d\nConsiderations for Congress\nFederal policies and actions may influence competition between AI companies and potentially encourage or stifle innovation. Congress may wish to conduct oversight of the extent of DOD\u2019s authority to declare Anthropic a supply chain risk to national security and how the designation may affect private-sector innovation. Congress may also consider legislation to clarify privacy and security considerations around the use of AI technologies for sensitive applications, such as public surveillance. Alternatively, Congress may wait for federal courts to determine the legal merits of the Trump Administration\u2019s actions against Anthropic before considering a legislative response. In weighing these options, among others, Congress might consider a range of questions, including \nHow do certain types of government actions affect revenue reliability for AI businesses? 
\nHow might this dispute between Anthropic and the federal government influence future agreements between private companies and the federal government?\nHow difficult and costly is it for government agencies to switch from one AI foundation model to another? \nAnthropic developed AI models that underpinned many federal uses with a stated goal of \u201cbuilding reliably safe systems.\u201d How might restricting the use of Anthropic\u2019s models by the USG affect the reliability of AI-powered government services and decisionmaking, particularly for high-impact uses\u2014those that have a legal, material, binding, or significant effect on rights or safety?\nHow might USG actions in response to a company\u2019s efforts to maintain safety measures affect industry efforts to innovate in AI safety and security?\nIn light of what has been described as a global \u201cAI race,\u201d what might be the effects of USG actions against one AI company on the ability of, or incentives for, other U.S. AI companies to innovate and invest in building AI models? ", "https://www.congress.gov/crs_external_products/IF/PDF/IF13217/IF13217.2.pdf", "https://www.congress.gov/crs_external_products/IF/HTML/IF13217.html"]], "columns": ["id", "title", "publish_date", "update_date", "status", "content_type", "authors", "topics", "summary", "pdf_url", "html_url"], "primary_keys": ["id"], "primary_key_values": ["IF13217"], "units": {}, "query_ms": 0.4490870051085949, "source": "Federal Register API & Regulations.gov API", "source_url": "https://www.federalregister.gov/developers/api/v1", "license": "Public Domain (U.S. Government data)", "license_url": "https://www.regulations.gov/faq"}