As Israel resumes its offensive in Gaza, concerns are growing over the Israel Defense Forces' (IDF) use of artificial intelligence (AI) to select bombing targets. The IDF employs an AI target-creation platform known as "the Gospel," which accelerates the production of targets to a pace likened to a "factory." This AI-driven approach, in use since the 11-day war of May 2021, has raised questions about the ethics and consequences of automated systems in warfare.
The Gospel, known in Hebrew as Habsora, is operated by the IDF's target administration division, formed in 2019. By rapidly extracting intelligence, the system produces automated targeting recommendations, including the private homes of suspected Hamas or Islamic Jihad operatives. This has significantly expanded the pool of targets, leading to unprecedented levels of strikes in the Gaza Strip.
The AI-powered target division reportedly maintains a database of 30,000 to 40,000 suspected militants. The Gospel has played a crucial role in compiling lists of individuals authorised for assassination, marking a departure from previous targeting strategies.
The IDF's claims of precise strikes on Hamas infrastructure while minimising harm to non-combatants have been met with scepticism. Experts argue that assertions about AI reducing civilian harm lack empirical evidence, a contrast with the visible impact of widespread urban destruction in Gaza.
During the recent conflict, the IDF struck more than 12,000 targets in Gaza, a stark increase over previous operations. Sources reveal that each planned strike on a private home is assigned a collateral damage score indicating the expected number of civilian casualties. The decision-making process has also raised concerns, as some unit commanders are said to be more trigger-happy than others.
The integration of AI-based systems such as the Gospel into IDF operations has accelerated target creation, prompting comparisons to a "mass assassination factory." Critics worry that reliance on automated systems may erode commanders' ability to weigh the risk of civilian harm in a meaningful way.
The use of AI in Israel’s offensive against Hamas underscores broader concerns about the growing reliance on complex and opaque automated systems in modern warfare. As the IDF’s approach becomes a focal point, it raises important questions about the ethical implications and potential consequences of integrating AI into military decision-making processes.