If artificial intelligence (AI) opens up the possibility for us to anticipate the future, then the eruption of violence will no longer be tolerated. The armed forces of the future will invest in this type of predictive method and look for cooperation with economics, politics, and civil society in order to avoid conflicts.
Armed forces will evolve towards platforms that use their ecosystem to rapidly and accurately combine and build these capabilities. Countries like Germany face particular challenges, as they don’t have the same possibilities as superpowers do to build these platforms. But they also have unique opportunities, because they will need to rely more on cooperation and diversity.
Conflicts in the Digital Age and the Role of Defence Platforms
Since the Iraq War, the odds that a weaker power will enter into open battle against a highly developed power have decreased. While it is impossible to rule out the risk of major conflicts involving a substantial number of troops, it seems likely that digitalisation will increase the number of conflicts in Southern states, as automation processes render traditional industries redundant and thus change global supply chains. The West may potentially be involved in these conflicts, or confronted with the consequences. Defence forces will therefore have to master the following tasks:
Anticipate: with the help of big data and algorithms, the probability of conflict can be estimated reliably. Defence forces can generate heatmaps for countries or regions and extract information on adversaries, which can be used to devise defence strategies and – almost more importantly – take early action: weak signals are consolidated and measures defined on the basis of historical data and experiences.
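The consolidation of weak signals into a regional heatmap can be sketched as follows. This is a deliberately minimal illustration: the indicator names, weights, and thresholds are invented stand-ins for coefficients a real model would learn from historical conflict data.

```python
# Hypothetical sketch of the 'anticipate' step: weak signals from a
# region are consolidated into a single conflict-risk score.
# All indicator names and weights below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Signal:
    name: str        # e.g. "food_price_spike"
    strength: float  # normalised 0..1 by the monitoring pipeline

# Illustrative weights, stand-ins for learned model coefficients.
WEIGHTS = {
    "food_price_spike": 0.30,
    "displacement_reports": 0.45,
    "hate_speech_volume": 0.25,
}

def risk_score(signals: list[Signal]) -> float:
    """Consolidate weak signals into a 0..1 risk score."""
    score = sum(WEIGHTS.get(s.name, 0.0) * s.strength for s in signals)
    return min(score, 1.0)

def heatmap(regions: dict[str, list[Signal]]) -> dict[str, str]:
    """Map each region to a coarse alert level for the heatmap."""
    levels = {}
    for region, signals in regions.items():
        r = risk_score(signals)
        levels[region] = "high" if r > 0.6 else "elevated" if r > 0.3 else "low"
    return levels
```

A real system would replace the fixed weights with a model trained on historical cases; the point of the sketch is only the pipeline shape: many weak signals in, one actionable alert level out.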
Resolve: if the probable cause of a threat or conflict potential can be recognised, defence organisations will be able to cooperate with national and international civilian partners to develop countermeasures and make these available via various platforms. These include the creation, together with partners, of new political narratives that undermine the conflict discourse, and the development of security, medical, education, and food supply services, which also turns affected citizens and parties into producers of these services, thus promoting a new kind of collaboration and trust between passive or antagonistic parties. Furthermore, decision-making processes and procedures could be offered and controlled via a virtual platform, which can be used by the parties concerned and generate learning processes.
Block: digitalisation will lead to multiple conflicts, with primary fears revolving around losing control of entire regions. This scenario can concern entire countries (Somalia, Afghanistan) or large metropolises, with a proletariat that has been rendered ‘useless’ by digitalisation and defies government rule. In these situations, defence forces will isolate these ‘no-go’ areas, cordon them off with automated barriers, and monitor them with drones.
Activate: if the preceding actions can’t prevent conflicts from breaking out, defence platforms will mobilise machine and human conflict intervention. Initially, this will most likely take the form of a virtual fight between the platforms (hacker attacks and counterattacks). This will also affect the physical world, or even blur the distinction between military and civilian targets (destruction of civil infrastructure, et cetera). The use of physical force, with humans supported and protected by machines (hybrid warfare), might then become necessary.
Which types of technologies and organisations are capable of implementing such a complex and major undertaking? Traditional military institutions will not suffice. Analogous to the changes in economic organisation (‘Industry 4.0’, Internet of Things), platforms are most likely to live up to the abovementioned challenges. Defence platforms are complex, partially virtual forms of organisation, consisting of a large ecosystem of partners who use machine learning (ML) to operate and make predictions.
New Cooperation with the Economy, Politics and Civil Society
The three layers of defence platforms.
Although the Industry 4.0 objective remains highly vague for this application, a few technological and structural cornerstones can be singled out. Initially, this concept consists of the combination of elements containing software, computers, as well as mechanical and electronic components (cyber-physical systems); these are connected via the Internet and distribute themselves globally in extreme cases. The innovative character of such a system lies in its form and mutability: while a human being consists of a fixed number of ‘components’ such as limbs, sensory organs, and the like, partially autonomous components now form a kind of ‘community’, whose members ‘come and go’ as they please or as conditions permit. Furthermore, individual components can belong to several, or different, cyber-physical systems – this offers intelligent machines the ability to ‘self-design’ and optimise themselves, to an extent previously unknown in technology.
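The ‘mutable membership’ idea described above – partially autonomous components joining and leaving systems, with one component belonging to several cyber-physical systems at once – can be sketched as a simple registry. All names here are illustrative, not part of any real architecture.

```python
# Minimal sketch of mutable cyber-physical system membership:
# components 'come and go' and may belong to several systems at once.
# Class and identifier names are invented for illustration.
class Component:
    def __init__(self, cid: str):
        self.cid = cid
        self.systems: set[str] = set()  # systems this component serves

class CyberPhysicalRegistry:
    def __init__(self):
        self.members: dict[str, set[str]] = {}  # system -> component ids

    def join(self, system: str, comp: Component) -> None:
        self.members.setdefault(system, set()).add(comp.cid)
        comp.systems.add(system)

    def leave(self, system: str, comp: Component) -> None:
        self.members.get(system, set()).discard(comp.cid)
        comp.systems.discard(system)

# One drone can simultaneously serve a surveillance and a logistics system.
registry = CyberPhysicalRegistry()
drone = Component("drone-17")
registry.join("surveillance", drone)
registry.join("logistics", drone)
```

The contrast with a human being’s fixed set of ‘components’ lies precisely in these two operations: membership is a runtime decision, not a design-time one.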
Aside from machines, which communicate among themselves, humans who interact with these machines through an interface will also be integrated into this new concept of defence platforms. The human role in these technological concepts can occasionally be ambiguous. This has much to do with the technical emphasis of these concepts, as well as with the uncertainty over which capabilities machines can achieve and which tasks will make sense for humans. Even when it comes to controlling these components, a clear division of roles is not that simple. Although machines can now be controlled by each other and become more autonomous (smart machines, smart devices), these complex systems can no longer be controlled by humans alone. With the help of a predefined objective, machine learning can detect patterns based on a database of similar cases, and make decisions or suggestions to the human. However, it is already clear that humans react too slowly for machines when it comes to real-time situations (brake automation, trading). The well-known – and complicated – relations between humans within organisations are now complemented by new complexities which, when applied to the context of armed forces, can be illustrated as follows:
With the help of sensors, machines or smart objects can carry out automated tasks (for example a truck requesting refuelling or maintenance). Other machines can become autonomous in order to carry out these activities (self-driving tanker). A multitude of examples can easily be imagined within the defence sector: virtual barriers can detect activities and call for drones to clarify the situation, machines can anticipate maintenance works via ML and order a 3D printer or a largely automated robot factory (lights-out factory) to print out spare parts, which a drone then delivers to the vehicle. This allows for reductions to the relatively high ratio of support staff (logistics, medical services, et cetera) to combat troops.
In this case, humans define objectives for machines to carry out. Communication takes place via various interfaces (voice command, keyboard, visor, and the like). On the other hand, the machine will act as the human’s personal artificial intelligence, a so-called software agent, and make suggestions on how to interpret certain situations (resource allocation, personnel decisions, as well as tactical and strategic decisions), but will also illustrate courses of action and the associated odds of success. Evidently, humans can use the machine to practise for emergency situations: the role of the adversary can be taken over by the AI, and the level of difficulty or realism increased through a self-learning system.
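The maintenance chain mentioned above – sensor data triggering a wear prediction, which orders a spare part from a lights-out factory and tasks a drone with delivery – can be sketched as a small event pipeline. The wear model, threshold, and facility interfaces are all invented assumptions for illustration.

```python
# Hedged sketch of the automated maintenance chain: sensor reading ->
# predicted wear -> spare-part print order -> drone delivery.
# Thresholds and the toy wear formula are illustrative assumptions.
WEAR_THRESHOLD = 0.8  # assumed: predicted wear above this triggers an order

def predict_wear(vibration: float, hours_since_service: float) -> float:
    """Toy stand-in for an ML wear model (not a real model)."""
    return min(1.0, 0.5 * vibration + 0.002 * hours_since_service)

def maintenance_pipeline(vehicle_id: str, vibration: float,
                         hours_since_service: float) -> list[str]:
    """Return the sequence of automated actions taken, as an audit log."""
    log = []
    wear = predict_wear(vibration, hours_since_service)
    log.append(f"{vehicle_id}: predicted wear {wear:.2f}")
    if wear > WEAR_THRESHOLD:
        log.append(f"order: print spare part for {vehicle_id}")   # lights-out factory
        log.append(f"dispatch: drone delivery to {vehicle_id}")   # last-mile delivery
    return log
```

The audit log is the point worth noting: in a chain with no human initiator, a traceable record of what the machines decided, and why, is what keeps the support-staff reduction accountable.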
Intersection with the Environment
Machines require a considerable amount of information and competencies, which cannot always be found within the organisation itself. Therefore, the concept of defence platforms presupposes a certain degree of openness, to connect external information and specific skills (point skills) situationally. This information input above all follows open source intelligence (OSINT) principles, and is gathered from open sources (science, economy, politics, media, et cetera), thus generally improving the framework for decisions – such as the probability of conflicts or opposition. Another reason for opening up military platforms to other partners and civilian platforms is access to the services, products, and abilities of these partners, which will increase the platform’s own capacities. This includes the possibility of utilising and integrating problem-related expertise via specific platforms, similar to crowdworking platforms. The underlying technology, particularly ML, is also used in both military and civilian spheres, and there are synergies (and disputes due to limited resources), which are of course best addressed by those countries that control both sectors.
Rapid and Flexible Cooperation between Organisations
Traditional defence organisations will form the core of defence platforms. However, they are joined by a multitude of external players and, above all, new management tools. Platforms are suitable for such large-scale tasks because they can help organise many participants or personal resources via standardised interfaces, on an on-demand basis, and beyond the boundaries of traditional organisations. Therefore, platforms can select, combine, and virtually offer services for specific tasks. They are ‘open’, in contrast to traditional organisations, which are exclusive and impose barriers and complex protocols for any sort of cooperation. Under the threat of conflict, the mentioned assignments demand rapid and flexible cooperation from a large number of organisations. The comprehensive task of conflict prediction and resolution will result in defence platforms that are open towards the various areas of society and to cooperation: with science, culture, media, administrations, civil society, and the like. The defence platform therefore creates its own ecosystem:
“Can we imagine defense capability planning processes that try to include not only their ‘own’ capabilities, but that start looking for a better balance between those ‘own’ capabilities and capabilities to empower various ecosystem players? Whereby ‘defense’ is no longer just the operator who intervenes heavily, physically, lately in a conflict dynamic (and we still foresee a need for that security function as well); but the more strategic custodian who looks for ways to intervene, digitally and early in the process? A thoughtful curator that advocates a better balance between conflict-centric and resilience-centric efforts in order to maximize the defense value proposition?”
Relief Supplies Created in Robot Factories
The complexity resulting from this kind of cooperation must be appropriately monitored. For this reason, platforms do not use the top-down chain of command found in traditional hierarchies. Instead, they use algorithmically supported control. Algorithms can evaluate patterns in social media and thereby deduce potential conflicts. They can then select the necessary human participants, or machines, who can resolve this conflict based on their experiences in similar situations. The algorithms can subsequently control cooperation with these participants (humans and machines) autonomously or semi-autonomously. This means, for example, specifying which relief supplies are requested from partners. Those are then created in robot factories, ‘printed’ on site, or transported using drones. Where the algorithm is unreliable or ethical considerations demand it, humans can have the final say (‘human-in-the-loop’). This development is not yet complete; however, it is already evident that machines can learn without human specifications, through trial-and-error procedures (‘reinforcement learning’).
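The selection step described above – ranking participants by their experience in similar situations, with a human-in-the-loop below a confidence threshold – can be sketched as follows. The scoring rule (tag overlap) and threshold are assumptions chosen for illustration, not a description of any fielded system.

```python
# Illustrative sketch of algorithmically supported control with a
# human-in-the-loop: partners are ranked by experience with similar
# past cases; low-confidence matches are escalated to a human.
# The similarity measure and threshold are illustrative assumptions.
def similarity(case_tags: set[str], experience_tags: set[str]) -> float:
    """Jaccard overlap between the new case and a partner's past cases."""
    if not case_tags and not experience_tags:
        return 0.0
    return len(case_tags & experience_tags) / len(case_tags | experience_tags)

def select_partner(case_tags: set[str], partners: dict[str, set[str]],
                   auto_threshold: float = 0.5) -> tuple[str, str]:
    """Pick the best-matching partner; defer to a human below threshold."""
    best = max(partners, key=lambda p: similarity(case_tags, partners[p]))
    score = similarity(case_tags, partners[best])
    if score >= auto_threshold:
        return best, "auto"
    return best, "human_review"  # the human-in-the-loop has the final say
```

The design choice worth noting is that the algorithm never refuses to answer: it always proposes a partner, but the routing label decides whether that proposal executes automatically or lands on a human desk.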
Defence Platforms as a Digital HQ
Parallel to the discussion that takes place within economics – assuming that humans will initially be supported and subsequently replaced by ML in their decision-making, resulting in ‘robo-bosses’ –, we can assume that defence platforms will also possess a sort of digital HQ, or at the very least a digital central office or ‘brain’. There are good reasons to assume that these will follow the example of police leadership units that form within the framework of predictive policing. Take, for instance, the Real-Time Analysis Critical Response unit in Los Angeles (RACR):
“A 911 call. A possible gang fight in progress. RACR Command directs patrol units to the scene all the while monitoring their real-time progress. Data about the fight is pushed to officers on their mobile phones. Alerts about past shootings and gang tensions warn officers of unseen dangers. (…) Officers scroll through photographs to visualize the physical geography before they arrive. (…) Roll call. Monday morning. Patrol officers receive maps of today’s crime forecast. Small red boxes signify areas of predicted crime. These boxes represent algorithmic forecasts of heightened criminal activities. Years of accumulated crime crunched by powerful computers to target precise city blocks.”
This realises the objective of the networking efforts launched around the turn of the millennium under headings such as ‘netcentric warfare’ and ‘networked operations’, although the full extent of ML capabilities was still unknown at the time.
These types of defence platforms and their comprehensive ecosystem are the expression of political power within the international system. The ability to control humans, organisations, and machines with the help of AI defines which position a nation or a coalition assumes within the power structure. It becomes clear that these types of platforms and their machines can greatly change the position of a country, given that this position no longer depends on the size of its armed forces, economic possibilities, or population: “(…) like in the first industrial revolution, population size will become less important for national power. Small countries that develop a significant edge in AI technology will punch far above their weight.”
As a result, positions of power will be ‘negotiated’ between individual platforms. We can presume that defence platforms will be in a sort of perpetual state of conflict with other, potentially even nominally allied, platforms. The opposing platform must be infiltrated in order to obtain and change information, and thereby hamper the opposition’s decision-making ability and disrupt its procedures. Today’s cyberspace violations and attacks give a good preview of what this permanent state of conflict will look like:
“Like everyone else, we don’t know exactly when AI will start to be widely used in the attack vector itself. We did see one instance, about six months ago in a network in India. It wasn’t super sophisticated. I wouldn’t call it a full-blown AI attack. But it was using some bits of machine learning to learn what normal looked like in this network, and try to blend into the background noise. Luckily, we detected it because it was doing a lot of lateral movement and unusual behavior, so our models went off loud and clear. It might not be trying to steal data; it might just hang out and learn, right? If you want to learn about new medical research or alternative energy, maybe you just want to camp out in the network and observe. Or maybe it’s about subtly changing data, patient records, blood types, bank account balances, and that’s going to wreak havoc because no one’s going to know what data they can trust.”
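The detection idea in the quote – model what ‘normal’ looks like for a host, then flag behaviour such as unusual lateral movement that deviates from the baseline – can be sketched with elementary statistics. The z-score rule and threshold are simplified assumptions, not the method of any particular security product.

```python
# Minimal sketch of baseline anomaly detection: learn 'normal' from
# history (e.g. daily counts of internal connections per host), then
# flag observations far outside it. The z-score threshold is an
# illustrative assumption.
from statistics import mean, pstdev

def baseline(history: list[float]) -> tuple[float, float]:
    """Mean and population std-dev of the historical measurements."""
    return mean(history), pstdev(history)

def is_anomalous(observed: float, history: list[float],
                 z_max: float = 3.0) -> bool:
    """Flag observations more than z_max standard deviations from normal."""
    mu, sigma = baseline(history)
    if sigma == 0:
        return observed != mu  # any change from a constant baseline is notable
    return abs(observed - mu) / sigma > z_max
```

The quote also names the limitation of this approach: an attacker that learns the same baseline can deliberately stay inside it, which is why the ‘blend into the background noise’ strategy is so worrying.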
Maybe global technology platforms will also adapt to this type of conflict. The intermediary role of these platforms was used as an argument in Google’s internal discussion regarding the acceptance of defence contracts: “(…) it was better for peace if the world’s militaries were intertwined with international organizations like Google rather than working solely with nationalistic defense contractors.”
This shift of conflicts into the technical domain has long stimulated the imagination. Nikola Tesla drew sketches of machine duels as early as 1900, and considered this a way out of violent conflicts: as long as humans still constitute part of the conflict, their emotions will always produce new conflicts. The machine must therefore take over the role of humans. The nation state then becomes a ‘spectator’ of technical conflict:
“But now, what is the next phase in this evolution? Not peace as yet, by any means. The next change which should naturally follow from modern developments should be the continuous diminution of the number of individuals engaged in battle. The apparatus will be one of specifically great power, but only a few individuals will be required to operate it. This evolution will bring more and more into prominence a machine or mechanism with the fewest individuals as an element of warfare, and the absolutely unavoidable consequence of this will be the abandonment of large, clumsy, slowly moving, and unmanageable units. Greatest possible speed and maximum rate of energy-delivery by the war apparatus will be the main object. The loss of life will become smaller and smaller, and finally, the number of the individuals continuously diminishing, merely machines will meet in a contest without blood-shed, the nations being simply interested, ambitious spectators.”
The Role of Humans
Similar to the idea of Industry 4.0, the concept of a defence platform raises the question of the scope and nature of human roles and tasks. Of course, these will depend on the progress of ML. There can be little doubt, however, about the value of the objective of avoiding physical combat between humans, and what Clausewitz called ‘efforts’ (especially the fear of being killed or wounded). The act of killing is abhorrent to the vast majority of people, and causes trauma. While it was possible to drastically increase the ‘willingness to kill’ in the run-up to the Vietnam War (from twenty-five per cent of combatants in the Second World War to over ninety per cent), the associated negative consequences, suffered even by drone pilots, are not overcome by willingness alone. Furthermore, Western countries in particular will find themselves facing moral dilemmas in many conflicts – especially conflicts in Southern countries. At first glance, machines offer ‘an easy way out’ of this kind of problem, as clearly demonstrated in the Afghanistan conflict, for example.
As such, it can be expected that humans will continue to be active on defence platforms, but perhaps only in the second row behind, or protected by, autonomous machines of the platform. The Russian military’s objective to replace about thirty per cent of their combat capacity with machines in the next few years does not appear to be far-fetched in this regard.
The military platforms outlined here – this much can be said, despite all uncertainty – entail complex and hard-to-solve implications. They will not only shift economic structures but also displace the measuring instruments of international politics. And would such conflicts not leave the question of responsibility unanswered?
Are ‘Virtuous’ Conflicts with Machines Possible?
Are these types of platforms not the ultimate expression of military cynicism, of the desire to win without personal commitment and endangerment? Would conflicts and the loss of lives truly be avoided? Or would the implementation simply be handed over to machines without fixing the root of the conflict? Can ‘virtuous’ or ‘holy’ conflicts fought by machines even be possible, as long as the losses remain as high as they are, particularly within the civilian population of the technically inferior opponent? And finally: is the machine-based platform only another facet of mankind’s dream of a superweapon that should eventually usher in and ensure peace, but instead keeps creating further conflicts?
Perhaps there now exists the prospect of developing ethically constructed and acting platforms that will do anything to prevent conflicts. This, and only this, seems to be a perspective that reconciles technology with human civilisation: “Future military history will be written on a completely new front, where the struggle to desist struggling will be carried out. The decisive blows will be those that are not struck.”
Armed Forces of the Future
How We Must Act
Countries with the ability to build central platforms, which are by nature always global, seem to have an advantage. These offer a comprehensive control mechanism, as well as data structures and registries, for military use. Germany and Europe could compensate for their ‘disadvantage’ with the help of networked defence ecosystems, and the associated diversity could lead to machines and platforms that are smarter and more ethical than those of the major powers.
- Acceleration of the ML sector in Germany and Europe. If ML is key to a country’s economic, political, and military positioning, then priorities need to be set accordingly. Measures must consist of combining and networking investment and research on a national, or better yet EU, level, and of providing a medium- and long-term perspective: technological development and leadership must be increasingly recognised and utilised as a geopolitical factor.
- Intensifying cooperation between the civilian and military sphere. The cooperation between military and civil platforms is essential. A country’s position cannot be maintained without civil platforms with global impact. Furthermore, military platforms require civil platform services and products in order to carry out their tasks in conflict recognition and reconciliation.
- Societal buy-in. Machines, robots, and algorithms always reflect the values of those who program them. It is to be expected that this ethical programming will become a matter of ‘life and death’ for digital civilisation. The values that are built into the machines must therefore be developed through a wide-ranging discourse.