AI Weapons and Human Responsibility: Can the Ministry of Defense’s “Guidelines” Be Trusted?

At the Crossroads of AI and War

As the military application of artificial intelligence (AI) rapidly advances, Japan’s Ministry of Defense has reportedly established a set of “AI Application Guidelines.” These guidelines aim to ensure that appropriate human judgment is involved when integrating AI into offensive equipment such as unmanned aerial and maritime vehicles. While this move appears to acknowledge international concerns surrounding lethal autonomous weapon systems (LAWS), several critical issues arise.

The Limits of Internal Review and the Challenge of Transparency

These guidelines are vetted by the Acquisition, Technology & Logistics Agency and expert panels within the ministry, an essentially internal process. Although external experts will reportedly also be consulted, the final decisions rest solely with the Ministry of Defense.
This structure raises concerns about a lack of transparency and objectivity. Can the public and the international community place trust in a system based on self-auditing?

LAWS and International Law: How Much Human Involvement Is Enough?

LAWS, which autonomously identify and engage targets without human intervention, are an extremely sensitive class of technology, under ongoing discussion at the United Nations within the Group of Governmental Experts convened under the Convention on Certain Conventional Weapons (CCW). Japan maintains that it will not develop "fully autonomous" weapons, but what counts as "human involvement" remains ambiguous.
For example, if humans are merely providing “final approval,” can that really be considered sufficient involvement? If too much decision-making is delegated to AI, it becomes unclear who bears responsibility for malfunctions or wrongful targeting.
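To make the distinction concrete, here is a minimal, purely illustrative sketch. Every name in it is hypothetical; it does not describe any real system or anything in the ministry's guidelines. It contrasts a thin "final approval" gate, where the human only rubber-stamps the AI's conclusion, with a stricter model in which approval counts only if the operator had a reviewable basis for the decision.

```python
# Illustrative sketch only: all names and the confidence threshold are
# assumptions, not drawn from any actual weapon system or guideline.

from dataclasses import dataclass

@dataclass
class Candidate:
    """A target candidate proposed by a hypothetical AI perception model."""
    label: str          # the model's classification of the object
    confidence: float   # the model's confidence in that classification
    rationale: str      # a human-readable summary of the evidence used

def thin_final_approval(candidate: Candidate, operator_approves: bool) -> bool:
    # "Final approval" model: the human sees only the AI's conclusion and
    # answers yes or no. Target selection, classification, and prioritization
    # have already been delegated entirely to the AI.
    return operator_approves

def reviewable_control(candidate: Candidate, operator_approves: bool) -> bool:
    # Stricter model: approval only counts if the operator had something
    # substantive to review, i.e. a stated rationale and a confidence level
    # high enough to be worth relying on (the 0.9 threshold is assumed).
    had_basis_for_review = bool(candidate.rationale) and candidate.confidence >= 0.9
    return had_basis_for_review and operator_approves
```

Under the first model, responsibility for a wrongful engagement is hard to assign: the operator approved, but on the basis of a conclusion they had no meaningful way to examine.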

Intellectual Property vs. Public Interest: Balancing with the Private Sector

The Ministry of Defense has suggested it may require companies involved in defense AI R&D to disclose training data and algorithms. However, such information often constitutes those companies' core intellectual property, making their reluctance to disclose it understandable.
This dilemma directly relates to the balance between “national security as a public interest” and the “commercial value of private-sector technology.” Although the ministry plans to supplement its non-binding guidelines with contractual agreements, the actual effectiveness of this approach remains in question.

Lack of Diplomatic Coordination: The Foreign Ministry’s Absence

International negotiations and discussions on LAWS typically fall under the jurisdiction of the Ministry of Foreign Affairs, yet that ministry was excluded from the review process for these AI guidelines. The Ministry of Defense claims to "share an understanding" with the Foreign Ministry, but it remains to be seen whether interagency governance and coordination amount to more than a formality.

Who Ensures the “Credibility” of These Guidelines?

Research and development of AI-equipped defense systems is an issue Japan can no longer avoid in the realm of national security. The challenge is to maintain a delicate balance between what is technically feasible and what ethical and legal constraints permit.
The establishment of these guidelines is a step forward, but a “self-contained” approach is unlikely to earn public trust. Truly effective implementation will require collaboration with external bodies, the private sector, and diplomatic authorities, along with accountability and transparency toward the public.