Towards A Better Future
As TechCrunch recently reported, Daniel Ek, the founder and CEO of Spotify, has added two zeros to his year-old promise to invest €1 million in the military AI and ‘European defence’ Series A start-up Helsing, which was founded in Berlin in 2021. Alongside the announcements about his investment in Helsing, the Swedish billionaire made headlines several times this year after Arsenal FC declined his takeover offer for the football club. Whereas Ek’s interest in the Gunners is linked to his support for a team he has favoured since childhood, his investment in Helsing is tied to his mission “to advance ambitious science and technology to solve the world’s biggest challenges and help society progress towards a better future”. This article provides further background on the ‘Helsing deal’ and offers a few thoughts on the morality of military tech in the 21st century.
Helsing’s Background And ‘Hellsicht’
Crunchbase classifies Helsing as an “information technology company that [relies on] artificial intelligence [AI] in implementing security”. Although it was only founded by Gundbert Scherf, Niklas Köhler and Torsten Reil in Berlin in 2021, Helsing already has offices in Berlin, Munich and London. The company has raised €102.5 million to date, with Ek’s investment firm Prima Materia playing a key role in accelerating its impact. In line with Helsing’s mission to use AI technologies to create a better future, the company will offer Ek a seat on its board alongside its experts, who bring a wide pool of knowledge and experience across disciplines such as gaming, biotech, AI, physics, computer science, software development, engineering, defence, security, geopolitics and diplomacy.
Helsing’s CEO, Torsten Reil, worked towards a PhD in Complex Systems at the Department of Zoology at Oxford University before founding the software and games company NaturalMotion in 2001, which was bought by Zynga for $527 million in 2014. By dropping out of his PhD to start NaturalMotion, Reil not only set up a business with influence in Silicon Valley, but also played a key role at Zynga, as VentureBeat (VB) illustrates. About a year after leaving Zynga in 2017, Reil publicly announced that he had returned to his hometown of Berlin and was now “mainly invest[ing]…in AI and deep tech companies”. Investing this knowledge at Helsing, together with insights from his membership of the Munich Security Conference’s Innovation Board and his directorships at Oxford Nanoimaging Ltd. (ONI) and Five AI, should help establish a fruitful business.
As Madhumita Murgia and Helen Warrell of the Financial Times point out, Helsing’s software “will use artificial intelligence to integrate data from infrared, video, sonar and radio frequencies, gleaned from sensors on military vehicles, to create a real-time picture of battlefields”. Among other things, this will allow Helsing to build applications that can “help[] troops to detect swarming drones, enemy forces or camouflaged vehicles more accurately than the human eye”. While the latter is aimed at supporting military operations, with Helsing having announced concrete plans to sell its software to the German, French and British militaries, an adapted version of the company’s ‘Hellsicht’ (clairvoyance/astuteness) might also be of use to the average citizen, for instance where acute crises such as natural disasters hinder safe movement.
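To make the idea of multi-sensor fusion more concrete, here is a purely illustrative sketch, in no way Helsing’s actual method or code, of how detections of the same object from several sensor types could be combined into a single confidence estimate. All names and the simple ‘noisy-OR’ fusion rule are assumptions chosen for illustration:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # hypothetical sensor type, e.g. "infrared", "video", "radio"
    label: str         # hypothesised object class
    confidence: float  # detection confidence in [0.0, 1.0]

def fuse(detections):
    """Combine per-sensor confidences for each label.

    Uses a simple 'noisy-OR' rule: the fused confidence that an
    object is present is 1 - product(1 - c_i) over all sensors that
    reported it, so independent corroborating sensors raise it.
    """
    by_label = {}
    for d in detections:
        by_label.setdefault(d.label, []).append(d.confidence)
    fused = {}
    for label, confs in by_label.items():
        miss = 1.0
        for c in confs:
            miss *= (1.0 - c)  # probability every sensor missed it
        fused[label] = 1.0 - miss
    return fused

# Example: three sensors agree on a drone; one sees a possible vehicle.
readings = [
    Detection("infrared", "drone", 0.6),
    Detection("video", "drone", 0.7),
    Detection("radio", "drone", 0.5),
    Detection("video", "vehicle", 0.4),
]
fused = fuse(readings)
print(fused)
```

Even this toy version shows the appeal described in the quote above: several individually uncertain sensors, fused, can yield a far more confident picture than any single one, or the human eye, alone.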
Helsing’s special focus on defence is, however, part of the company’s strategy, as underlined by the fact that it employs various experts from this domain. Dr. Gundbert Scherf, a former Commissioner at the German Ministry of Defence and Helsing’s Co-Founder and President/COO, positions himself on LinkedIn as the “[a]rchitect of [the] Defense Acquisition Reform Project and [the] German Cyber Security and Information Domain Command”. Scherf has not only obtained valuable knowledge in the field of defence, but has also contributed to innovation in it. The Cyber and Information Domain Service is a new branch of the German army that deals with cyber threats and “IT and high-technology weapon systems”, involving the work of 14,500 employees across 28 departments and 25 locations to date.
Similar to Scherf, Nick Elliott CB MBE gained first-hand experience in the army, in his case in the UK. As an officer in the British Army from 1987 to 2008, “he commanded a bomb disposal squadron and a combat engineer regiment on operations worldwide”, as stated on Helsing’s website. Indeed, Elliott CB MBE has had a strong passion for military technology since his youth, holds two related master’s degrees and has experience across several areas of British defence, his most recent role having been Director General of the UK Vaccine Taskforce. With Ek, Reil and Elliott CB MBE all showing a long-standing commitment to their respective passions, Helsing’s ‘Hellsicht’ is built on a cross-disciplinary approach with potential and reach.
However, a small gap in Helsing’s current team might be the lack of experts in international (humanitarian and criminal) law and ethics. While the former UK diplomat James Dancer, Deputy CEO of the UK branch and Director of Partnerships and Programmes, may possess valuable expertise in these domains, it is as yet unclear whether Helsing will actively involve Dancer in academic debates about military technology, exchanges with civil society and discussions with policy-makers. Arguably, working towards influence in these fields is a relevant task given Helsing’s aim to “serve our democracies”. It would not be surprising if a multi-disciplinary team of developers and entrepreneurs could contribute valuable insights to law-making by actively monitoring the impact of their applications from design to implementation.
The Morality of Military AI: Beyond Human vs. Machine
As research by the Stockholm International Peace Research Institute (SIPRI) shows, emerging military and security technologies are plentiful. As AI, robotics, the internet of things (IoT) and many more set the stage for a fourth industrial revolution, the roles of emerging technologies in peace, justice, security and international law are transforming, and the roles of humans need to be aligned with the optimal functioning of these technologies. In other words, the ‘management’ and operation of emerging technologies need to be learned. As was emphasized during the Arria-Formula Meeting on the Impact of Emerging Technologies on International Peace and Security in May 2021,
“AI enabled technologies can help us to produce faster, more accurate and comprehensive analyses, improve logistics and assist human decision-making in other ways – that save lives. However, one problematic application of artificial intelligence involves the incorporation of autonomy into the critical functions of weapon systems.”
Izumi Nakamitsu, Arria-Formula Meeting – Chinese Mission to UN, YouTube
Especially when it comes to Nakamitsu’s last point, the debate about AI technology often shifts to discussing its morality: but can we truly ascribe ‘autonomy’ to machines? While talk of morality is of crucial importance, it should not be forgotten that humans have had to adapt to and optimize the operation of older inventions as well. By no means were the Wright brothers ‘in control’ of the first powered aircraft, which they designed for take-off in 1903. System failures can only be caught through strict monitoring, research and development. In other words, in order to promote innovation, risks have to be taken so that lessons can be learned. Nevertheless, one critical element of emerging technologies is how they will be used by the international community, because their (un)intentional and irresponsible use could bear greater risks than cracks related to their features and functionalities. As Jean-François Caron emphasized in his work ‘Contemporary Technologies and the Morality of Warfare. The War of the Machines’,
“…if many people believe that human morality is the only way to make sure that the moral rules of warfare will be respected, we cannot neglect the fact that the human condition has also been the main factor of their violations […] Alongside their capacity to ensure a better respect for the lives of non-combatants, technologies can counterbalance the problems associated with human nature.”
Caron 2020, p. 39, 49
Caron’s thought is arguably mirrored in part of Helsing’s mission. As Murgia and Warrell’s article in the Financial Times highlights, Helsing is well aware that Europe is not as advanced as China and the US when it comes to cultivating big tech companies. In particular, Europe might indeed be somewhat wary of China’s new combat technologies, considering that its own inventions are still lagging behind. In order to restore a ‘balance of power’ in the fields of security and defence within the international community, the development of AI military technologies could therefore be interpreted as a moral imperative. However, this imperative arguably remains relative to restoring that balance of power. A failure to protect citizens from future attacks involving emerging technologies could result from a failure to invest in those technologies now. Therefore, discussions about AI tech across civil society must finally come to accept that the ‘human vs. machine’ dichotomy, which has largely been shaped by pop culture, is no longer a feasible ground for discussion.
As Sjoukje van der Meulen and Max Bruinsma emphasize, the concept of “man as ‘aggregate of data’” illustrates that humans are somewhat ‘categorical’ beings. Even though it is a common argument that authenticity is inherent to the human but not the machine, humans could thus be argued to possess an identity based only on certain categories as well, categories which often stem from the systems and societies we live in. In turn, the argument that ‘authenticity’ creates predictability and humaneness is debunked: individual (human) choice is embedded in an experience beyond our control as well. As Caron underlines, Ronald Arkin argued that, when facing life or death, human judgement can become cloudy, whereas “robots with pre-programmed lethal autonomy are not affected by the ‘shoot first and ask questions later’ approach”. This suggests that AI military tech offers a chance to enhance human capabilities. What it will enable, and how it will be received and used, matters more than the fact that it automates certain processes.
Centurion Plus
Are you passionate about security and defence as well as AI tech? Then our team would like to support you with the legal side of your business operations! Whether you are located in Germany, Africa or another country matters less than a sincere ambition to start up, scale up or invest in projects in the first two regions. We would like to support start-ups and promote a knowledge exchange with tangible impact. Are you ready for this challenge? Then contact us!