A screen displaying the website of US AI company Anthropic, February. (©AP=Kyodo)
On February 27, the Trump administration designated Anthropic, an American artificial intelligence (AI) developer, as a "supply chain risk" to national security and effectively sidelined it from major state-run projects.
The move followed the company's contractual stipulation that its AI would not be used in autonomous lethal weapon systems capable of selecting and engaging targets without human intervention.
Some observers have interpreted the Trump administration's decision as a retaliatory measure.
At its core, the episode highlights a troubling dynamic: companies that set ethical limits on their technology risk being shut out of the systems shaping the future of warfare, while those more willing to accommodate military demands gain ground.
Ethics on the Sidelines
In the absence of meaningful progress on international rules on the subject, the tension is becoming harder to ignore. The fusion of state power and advanced technology is accelerating, while the norms meant to govern it struggle to keep pace.
Regarding AI and the military, several frameworks already exist. For instance, discussions on regulating lethal autonomous weapon systems (LAWS) are ongoing at the United Nations, while NATO has outlined principles for the use of AI. The European Union has also introduced its AI Act.

However, none of these efforts has produced binding rules. NATO's guidelines remain political in nature rather than legally enforceable, and the EU's AI Act does not apply to military uses.
At the UN, meanwhile, progress has stalled as Russia continues to wield its veto, blocking the adoption of a treaty despite broad international support.
Against this backdrop, the latest episode involving Anthropic marks a broader shift. Other AI firms are now moving to deepen their involvement in the defense sector, as ethical considerations increasingly take a back seat in government procurement decisions.

The 'Approval Button' Issue
To probe these tensions, I conducted an AI-mediated "dialogue" immediately following the Anthropic incident. The participants — ChatGPT, Grok, and Anthropic's Claude — were asked to reflect on their relationship with the US military.
In particular, Claude framed the issue not simply as a matter of corporate interest, but as a structural dilemma. "While nations seek to acquire AI as a powerful tool, they are often reluctant to subject AI to their own control or ethical constraints," it observed.
The remark points to a deeper reality in which AI is no longer merely an instrument of state power, but an emerging actor in the geopolitical realm.
Central to this discussion is what some describe as the "approval button issue." Even where humans formally retain final authority, that does not necessarily translate into meaningful control.
AI systems can process vast quantities of data in seconds, generating multiple courses of action. Humans are then left to select from these options.
Yet under operational time pressure, that role can quickly collapse into little more than rubber-stamping machine-generated recommendations. The decision may remain human in form, but not in substance.
AI Speed vs Human Deliberation
This dynamic is not incidental. The very rationale for deploying AI in military contexts is to compress decision-making timelines and outpace adversaries. Achieving that requires shortening the sequence from observation to action — what is commonly known as the OODA loop: observe, orient, decide, act.
As that cycle accelerates, the space for genuine human deliberation risks shrinking accordingly.
If AI compresses this cycle, decision-making could unfold in seconds or even milliseconds, and the space for deliberation and verification would shrink accordingly. As a result, the pressure to match this speed will not remain confined to the tactical level. It will extend upward, shaping the design of strategy and institutions themselves.

Even when humans retain formal authority, their role can become little more than a "rubber stamp." Approval becomes a procedure, not a judgment. This presents a profound challenge to democratic decision-making, where systems built on time-consuming consensus are increasingly at odds with environments that demand speed.
Claude stressed this problem, saying: "It is not technology but institutions that can counter speed. Checks and balances must be built into the system before a crisis occurs."
In other words, the decisive variable is not technical capability, but institutional design.
Tokyo's Strategic Dilemma
Japan's position reflects this awareness. Under Ministry of Defense guidelines, Tokyo prohibits the development of lethal autonomous weapon systems and explicitly adopts a "human-centered" approach. Internationally, this stance is often viewed as both progressive and consistent with Japan's broader security policy.
That said, the difficulty lies in sustaining these principles within alliance frameworks. In an integrated system such as the Japan-US alliance, Tokyo risks becoming a user rather than a shaper of decision-making processes. As AI integration deepens, the logic of speed may take precedence, raising the possibility that domestically established constraints will erode in practice.
There is also a risk that, as democracies prioritize safeguards, the gap with authoritarian systems widens. In the latter, ethical constraints and public accountability carry less weight, allowing development to move ahead far more quickly.
AI is reshaping warfare by speeding up decision-making while also increasing the risks of misidentification and escalation. Add nuclear weapons to the mix, and the consequences could be catastrophic.
The convergence of AI and military power is no longer just a technical issue — it is a test of institutions and values. It calls for rethinking established security frameworks for an era in which AI is a given, with balancing speed and control emerging as a defining challenge.
RELATED:
- Trump's Venezuela Strike and Its Implications for China and Japan
- What to Watch at the Trump-Takaichi Summit
Author: So Ishii, The Sankei Shimbun
(Read the article in Japanese)
