Episodes
The development of artificial intelligence (AI) technologies brings significant opportunities – and risks – for principled humanitarian action. While AI innovations advance at a pace that seemingly defies human capacity to manage them responsibly, humanitarian organizations are chasing ‘AI for Good’ while struggling to find effective safeguards.
In this post, ICRC Senior Policy Adviser Pierrick Devidal reflects on some of the lessons from the ICRC’s experience in building its recently adopted AI Policy, with the hope that it can inform other efforts to build an ethical and responsible approach to the use of AI in the humanitarian sector.
Private businesses that operate in situations of armed conflict, do business with a government or other entity involved in an armed conflict, or may do so in the future, should be aware of relevant rules of international humanitarian law (IHL).
In this post, the International Committee of the Red Cross (ICRC), Australian Red Cross Society and French Red Cross Society describe a new publication that introduces the most relevant rules of IHL and explains why and how private businesses need to respect them.
Communities in conflict-affected areas are severely affected by growing climate risks and shocks. Over the last few years, political will to strengthen climate action in these settings has been building. Commitments need to be urgently translated into tangible outcomes for communities – and avenues to do so exist.
In this post, and on the eve of COP29, Catherine-Lune Grayson and Amir Khouzam reflect on pathways to strengthen climate action in conflict settings.
Language matters and the protections of international law are crucial when facing global trends of dehumanization. Dehumanizing narratives strip people of their dignity, making it easier to justify inhumane treatment, torture, and exclusion from legal protections.
In this post, Terry Hackett, Head of Division on Persons Deprived of Liberty at the ICRC, emphasizes the urgent need to reject dehumanization, ensure humane treatment, and strengthen compliance with international law to protect the dignity and rights of detainees globally.
In line with its mandate, the ICRC engages with all parties to an armed conflict, including non-state armed groups. The ICRC has a long history of confidential humanitarian engagement with armed groups to alleviate and prevent the suffering of persons living in areas controlled by these groups. However, this engagement has become increasingly complex. Accordingly, the ICRC undertakes an annual internal exercise to evaluate the status of its relationships with armed groups and to identify developments to strengthen its future engagement worldwide.
In this post, ICRC Adviser Matthew Bamber-Zryd discusses some of the key findings from this exercise. In 2024, the ICRC estimates that 210 million persons live in areas under the full or contested control of armed groups. There are more than 450 armed groups of humanitarian concern worldwide and the ICRC’s engagement with these groups remains stable. Despite the ICRC’s successful contact with 60% of armed groups worldwide, engagement with some groups remains challenging. These obstacles stem from a combination of state-imposed barriers, notably counter-terrorism legislation, and the precarious security environment prevailing in certain countries.
Gender can still be a confusing and contested subject for international humanitarian law (IHL) and military practitioners. But just as practitioners keep abreast of astonishing technological advancement, and states continue to dedicate significant – and, in numerous contexts, increasing – national spending to defence and security, it is high time to invest in the equal protection of civilians, too. Gender inequality remains ingrained across today’s conflict-affected contexts, and gender-specific harms shape some of the horrors inflicted on civilians.
To encourage parties to armed conflict to take more and better measures to reduce this harm, in 2024 the ICRC, the Swedish Red Cross, and the Nordic Centre for Military Operations published a new report – International Humanitarian Law and a Gender Perspective in the Planning and Conduct of Military Operations – based on an expert meeting with state and military practitioners. In this post, the report’s co-authors set out ten legal, policy and operational recommendations to equip armed forces to reduce the gendered risks faced by diverse women, men, girls and boys in armed conflict, and identify good practices from modern militaries. It’s time for these to be part-and-parcel of how militaries comply with IHL and related civilian harm reduction measures.
The 34th International Conference of the Red Cross and Red Crescent will take place 28-31 October 2024 in Geneva, Switzerland. At this meeting, states party to the 1949 Geneva Conventions and the components of the International Red Cross and Red Crescent Movement will meet to discuss humanitarian issues under this International Conference’s theme “Navigate uncertainty, strengthen humanity”.
In the lead-up to this meeting, ICRC Legal Adviser Ellen Policinski looks back at the role of the International Conference in drafting the 1949 Geneva Conventions, in particular the Fourth Geneva Convention, which protects civilians.
Food insecurity remains a critical issue in modern armed conflicts, exacerbated by the mutually reinforcing effects of conflict, economic shocks, and climate change. In response, the ICRC's 2024 Challenges Report emphasizes how compliance with a broad range of rules of international humanitarian law (IHL) can help avoid acute food crises, and highlights a number of contemporary obstacles to achieving such compliance in practice.
In this post, ICRC Legal Adviser Matt Pollard highlights key legal protections under IHL, including the prohibition against using starvation as a method of warfare. He stresses the importance of a much wider range of rules relevant to safeguarding civilian access to essential resources like food and water, and outlines how avoiding unduly narrow interpretations of such IHL rules is essential to reducing food insecurity and its devastating long-term effects during armed conflicts.
Detention by non-state armed groups is a widespread, diverse, and legally complex occurrence in armed conflicts across the globe. In 2023, the ICRC assessed that around 70 non-state armed groups in non-international armed conflicts have detainees. The circumstances of detention can pose serious humanitarian concerns, including ill-treatment and inadequate living conditions for detainees.
In this post, part of a series on the Fourth Geneva Convention and the internment of protected persons and drawing upon the 2024 ICRC Challenges Report, ICRC Legal Adviser Tilman Rodenhäuser discusses the prohibition of arbitrary detention under international humanitarian law (IHL) and how this relates to internment by non-state armed groups in the context of non-international armed conflicts.
Water and wastewater pipelines, electricity lines and telecommunication installations permeate contemporary urban landscapes and form complex, interdependent service networks, which populations rely on for their essential needs. Armed conflict can damage or disrupt these networks and the essential services they provide. In recent years, increasing attention has been paid to protecting critical civilian infrastructure, yet addressing the humanitarian impact of essential service disruption requires a broader focus beyond physical infrastructure.
In this post, the group of experts behind the newly released report “Keeping the Lights on and the Taps Running”, co-published by the ICRC and the Norwegian Red Cross, highlight the crucial yet often overlooked role of the personnel who operate, maintain, and repair essential service infrastructure during hostilities. They argue that protecting and facilitating safer access for essential service providers during armed conflict should be considered a key component of humanitarian action and review the Movement's experience in doing so.
Since 2003, the ICRC has submitted a report on ‘International Humanitarian Law and the Challenges of Contemporary Armed Conflict’ to the International Conference of the Red Cross and Red Crescent, where the High Contracting Parties to the Geneva Conventions come together with the International Red Cross and Red Crescent Movement to discuss key matters of humanitarian concern and to make joint commitments.
In this post and drawing from the 2024 Challenges Report, ICRC Chief Legal Officer Cordula Droege presents the ICRC’s analysis of some of the salient legal issues of today’s conflicts, animated, first and foremost, by its desire to achieve greater protection of victims of war from the effects of armed conflicts, and informed by its observation of key humanitarian issues on the ground and its dialogue with parties to conflicts in all parts of the world.
In the debate on how artificial intelligence (AI) will impact military strategy and decision-making, a key question is who makes better decisions — humans or machines? Advocates for a better leveraging of artificial intelligence point to heuristics and human error, arguing that new technologies can reduce civilian suffering through more precise targeting and greater legal compliance. The counterargument is that AI-enabled decision-making can be as bad as, if not worse than, that of humans, and that the scope for mistakes creates disproportionate risks. What these debates overlook is that it may not be possible for machines to replicate all dimensions of human decision-making. Moreover, we may not want them to.
In this post, Erica Harper, Head of Research and Policy at the Geneva Academy of International Humanitarian Law and Human Rights, sets out the possible implications of AI-enabled military decision-making as this relates to the initiation of war, the waging of conflict, and peacebuilding. She highlights that while such use of AI may create positive externalities — including in terms of prevention and harm mitigation — the risks are profound. These include the potential for a new era of opportunistic warfare, a mainstreaming of violence desensitization and missed opportunities for peace. Such potential needs to be assessed in terms of the current state of multilateral fragility, and factored into AI policy-making at the regional and international levels.
The military decision-making process is being challenged by the growing number of interconnected sensors capturing information on the battlefield. This abundance of information offers advantages for operational planning – if it can be processed and acted upon rapidly. This is where AI-assisted decision-support systems (DSS) enter the picture: they are meant to empower military commanders to make faster and more informed decisions, thus accelerating and improving the decision-making process. Although they are meant to assist – not replace – human decision-makers, they pose several ethical challenges that need to be addressed.
In this post, Matthias Klaus, who has a background in AI ethics, risk analysis and international security studies, explores the ethical challenges associated with a military AI application often overshadowed by the dominant concern about autonomous weapon systems (AWS). He highlights a number of ethical challenges associated specifically with DSS, which are often portrayed as bringing more objectivity, effectiveness and efficiency to military decision-making. However, they could foster forms of bias, infringe upon human autonomy and dignity, and effectively undermine military moral responsibility through peer pressure and deskilling.
The Fourth Geneva Convention was the first humanitarian law convention dedicated to protections for civilians during armed conflict. Amongst its numerous protective rules, it also provides the main rules of international humanitarian law (IHL) governing the exceptional practice of internment of protected persons – detention of such persons for security reasons during international armed conflict.
In this post, and in commemoration of the 75th anniversary of the Geneva Conventions this year, Group Captain Tim Wood, Provost Marshal of the New Zealand Defence Force, shares his views and practical insights with regards to procedures for internment review of civilians. Drawing on operational experience, he considers some of the characteristics of review bodies which are essential for them to properly fulfil their role.
When the very first Geneva Convention was adopted in 1864, it was the culmination of several interwoven humanitarian projects of the ICRC’s principal founder, Henry Dunant. One of those ambitions was the conception, standardization, and integration into what would become known as international humanitarian law (IHL) of the distinctive emblem of the Convention. Designed to signal the specific protections IHL accords to the medical services and certain humanitarian operations, the emblem – today the red cross, red crescent, and red crystal – is displayed on different persons and objects in the physical world, including on buildings, transports, units, equipment, and personnel that are accorded these protections. Over its 160-year history, the distinctive emblem has saved countless lives.
Today, the ICRC is again engaged in a project to conceive, standardize, and integrate into IHL a means to identify those very same specific protections, but in a way the drafters of the original 1864 Geneva Convention could not have imagined: a digital emblem specifically designed to identify the digital assets of the medical services and certain humanitarian operations. In this post, building on previous work on this topic, ICRC Legal Adviser Samit D’Cunha summarizes some of the key milestones of the history and development of the distinctive emblem and explores how these milestones serve as a lodestone – or compass – for the Digital Emblem Project’s path forward.
During armed conflict and other situations of violence, timely access to reliable information can save lives. Affected people need to know where danger and risks come from, how and where they can find assistance, and how to protect themselves and access needed services. At the same time, the information dimensions of conflict have become part of the digital frontlines, where harmful information can spread at greater scale, speed, and reach than ever before. The information space can be riddled with narratives that distort facts that are essential for people to make decisions regarding shelter or their security, that undermine humanitarian operations, or that influence people’s behavior, fueling polarization and hate speech or triggering or inciting violence against civilian populations.
The International Committee of the Red Cross (ICRC) is concerned that the spread of misleading or hateful narratives may undermine the protection and safety of people affected by armed conflict and other situations of violence. The ICRC focuses on the potential for harmful effects resulting from the distortion of information or the absence of reliable information. In this post, ICRC Digital Risks Adviser Joelle Rizk presents four risks associated with the spread of harmful information in situations of armed conflict and elaborates on the ICRC approach that focuses on addressing its harmful effects on people.
Over the past decade, discussions surrounding artificial intelligence (AI) in the military domain have largely focused on autonomous weapon systems. This is partially due to the ongoing debates of the Group of Governmental Experts on Lethal Autonomous Weapons Systems under the Convention on Certain Conventional Weapons. While autonomous weapon systems are indeed a pressing concern, the critical reality is that AI is already being hastily deployed to gather intelligence and, even more worrisome, to support militaries in selecting and engaging targets.
As AI-based decision support systems (AI DSS) are increasingly used on contemporary battlefields, Jimena Sofía Viveros Álvarez, member of the United Nations Secretary-General’s High-Level Advisory Body on AI, REAIM Commissioner and OECD.AI expert, advocates against reliance on these technologies to support the target identification, selection and engagement cycle: their risks and inefficacies cannot be ignored, as they risk exacerbating civilian suffering.
Algorithmic bias has long been recognized as a key problem affecting decision-making processes that integrate artificial intelligence (AI) technologies. The increased use of AI in military decisions relevant to the use of force has intensified questions about biases in these technologies and about how human users programme with and rely on data shaped by hierarchized socio-cultural norms, knowledges, and modes of attention.
In this post, Dr Ingvild Bode, Professor at the Center for War Studies, University of Southern Denmark, and Ishmael Bhila, PhD researcher at the “Meaningful Human Control: Between Regulation and Reflexion” project, Paderborn University, unpack the problem of algorithmic bias with reference to AI-based decision support systems (AI DSS). They examine three categories of algorithmic bias – preexisting bias, technical bias, and emergent bias – across four lifecycle stages of an AI DSS, concluding that stakeholders in the ongoing discussion about AI in the military domain should consider the impact of algorithmic bias on AI DSS more seriously.
The desire to develop technological solutions to help militaries in their decision-making processes is not new. However, more recently, we have witnessed militaries incorporating increasingly complex forms of artificial intelligence-based decision support systems (AI DSS) in their decision-making process, including decisions on the use of force. The novelty of this development is that the process by which these AI DSS function challenges the human’s ability to exercise judgement in military decision-making processes. This potential erosion of human judgement raises several legal, humanitarian and ethical challenges and risks, especially in relation to military decisions that have a significant impact on people’s lives, their dignity, and their communities. It is in light of this development that we must urgently and in earnest discuss how these systems are used and their impact on people affected by armed conflict.
With this post, Wen Zhou, Legal Adviser with the International Committee of the Red Cross (ICRC), and Anna Rosalie Greipl, Researcher at the Geneva Academy of International Humanitarian Law and Human Rights, launch a new series on artificial intelligence (AI) in military decision-making. To start the discussion, they outline some of the challenges and risks, as well as the potential, that pertain to the use of AI DSS in preserving human judgement in legal determinations on the use of force. They also propose some measures and constraints regarding the design and use of AI DSS in these decision-making processes that can inform current and future debates on military AI governance, in order to ensure compliance with international humanitarian law (IHL) and support mitigating the risk of harm to people affected by those decisions.
In cities from Gaza to Sudan and Ukraine, childhoods are irrevocably changed by urban warfare. Yet despite the number of children affected and the increasingly urbanized nature of conflict, the child-specific nature of the harm caused remains poorly understood by practitioners and decision-makers. To address this gap, in 2023 the ICRC published a new report – Childhood in Rubble: The Humanitarian Consequences of Urban Warfare for Children – drawing from existing literature, 52 interviews with experts, and the organization’s firsthand experience.
In this post, three of the report’s contributors set out eight overlooked ways that children are affected by urban warfare and outline a set of legal, policy and operational recommendations that states, non-state armed groups and humanitarians could implement to elevate the protection of children from media rallying cry to political priority.