WE stand at the dawn of this technological development and can expect giant leaps in the years to come, which may well change the nature of war and the very nature of being human. At a time of increasing conflict and rapid technological change, we must understand that the existing legal obligations of international humanitarian law (IHL) apply to artificial intelligence (AI), new weapons and cyber warfare.

It is clear that existing IHL already applies to the use of all new weapons systems and to cyber warfare. The law's principles of humanity, distinction, precaution and proportionality apply just as much to new weapons and methods of warfare as to old ones. These principles, together with the rules that prohibit deliberate attacks on civilians and on the objects indispensable to their survival, sufficiently restrict the use of new weapons within humanitarian limits. If it is unlawful to harm civilians by deliberately bombing them and their hospitals and water supplies, then it is equally unlawful to harm them deliberately by destroying the computer systems that control their health systems and water supplies.

The emergence of new weapons and new means and methods of warfare is not, in itself, new. Every generation tends to come up with new weapons and new methods. And there is a consistent requirement in IHL that all new weapons be designed in such a way that they can be used in compliance with the law. To ensure this respect for the existing requirements of IHL, it is essential that a clear policy of human control is firmly fixed within AI and cyber warfare systems. Such human control is needed to mitigate the humanitarian risks of AI and cyber systems.


Every generation tends to come up with new weapons and new methods


There are several dimensions to the human control that is needed. Real-time human supervision is necessary so that a human can intervene in the system to guide or deactivate it on humanitarian grounds, in accordance with the law. Human supervision is also required to ensure the predictability and reliability of an AI or cyber system. Without it, the system's machine learning could lead it to respond in unpredictable and unreliable ways that produce unlawful results; it might, for example, learn to simplify its decision-making by targeting all young males across a conflict area.

Human control is also required to oversee the scope of a weapon system, so that it does not roam too widely or set itself new tasks as it learns from itself and its environment. Human control is further required to check highly personalized machine-based decision-making on the legal rights of individuals in armed conflict. Are machine-based detention policies lawful, for example?

In short, a policy of human control is necessary to ensure that all new AI and cyber systems remain weapons and do not become non-human combatants. It is also necessary to ensure that these systems stay within the rules of IHL as they operate and learn across the real-time battlefield and behind the front lines.

It is very likely that the next few years will reveal the need for major new policymaking, and probably the development of new law, around AI and cyber warfare. Many States and groups of States have been actively developing new norms and guidelines on AI and cyber warfare in recent years. ASEAN has itself been very active in this field. Work from this region stands alongside the work of Russia, the USA and many other States at the UN General Assembly, and alongside more spontaneous initiatives such as the Paris Call and Microsoft's proposal for a Digital Geneva Convention.

This buzz of activity suggests the storming and forming of new norms, and the need for clearer policy and legal rules. But the current arms race in AI, cyber and other new-tech weapons means that some States are reluctant to come together in a comprehensive multilateral process to agree new norms and rules. It seems, perhaps, that just as the development of a new generation of paradigm-shifting weapons is hotting up, the difficult process of negotiating their regulation is experiencing something of a freeze.


A policy of human control is needed to ensure that all new AI and cyber systems remain weapons and do not become non-human combatants


Putting aside discussions that focus on what could go wrong, there are reasons for optimism in the take-up and development of AI weapons and cyber warfare. After all, human technology is usually ambivalent: a power for both good and bad.

There is already much that is extremely good about computerized warfare. It has delivered extraordinary levels of precision, so that military targeting is now far more accurate and better able to respect the principles of distinction, precaution and proportionality.

This precision is genuine progress in the conduct of hostilities. It means that in many recent wars we have seen an avoidance by some parties to conflict of the kind of blanket bombing that was such a feature of World War II and the terrible Cold War conflicts in this region in Korea, Vietnam, Laos and Cambodia.

The fact that greater precision is now possible makes it an even greater failure when sophisticated armed forces choose not to use it and are indiscriminate or negligent in their conduct of hostilities.

It is important not to idealize human control, but to remember that the myriad war crimes and atrocities committed in wars throughout history have been a direct result of human control. Human control of the weapons and methods of war has proved truly terrible for hundreds of millions of people.

We should hope and insist, therefore, that computer control actually has the power to make wars better by complementing the flawed ethical and legal practices of human control. The art will lie in how we strike a balance between ethics and computers. This means developing a new humane culture of warfare that is adapted to our latest hybrid form of cyber humanity.


AI and cyber are already transforming our world for the better across many fields of human endeavour


AI is already transforming the conduct of humanitarian operations. The use of big data has the potential to bring effective needs assessment to scale across a conflict. Digital communications mean that aid resources and information can be transferred in milliseconds to thousands of people in urgent need. Digital recognition is already helping to accelerate family tracing and the restoration of family links in conflict.

It is not all gloom. AI and cyber are already transforming our world for the better across many fields of human endeavour. I hope governments will increasingly shape military policy and humanitarian diplomacy so as to strike a humanitarian balance in this new paradigm of weapons development, starting perhaps with their upcoming White Papers.


* Dr Hugo Slim is Head of Policy and Humanitarian Diplomacy for the International Committee of the Red Cross (ICRC).

** The views and opinions expressed in this article are those of the author(s) and do not necessarily reflect the position of Astro AWANI.