The Patriot Files Forums  

Old 02-05-2020, 06:44 AM
Boats
Senior Member
 

Join Date: Jul 2002
Location: Chicago, IL
Posts: 14,132
What can we learn from history about stopping AI warfare?

WHAT CAN WE LEARN FROM HISTORY ABOUT STOPPING AI WARFARE?
By: Bradley A. Alaniz & Jed Macosko - Mind Matters - 02-05-20
Re: https://mindmatters.ai/2020/02/what-...ng-ai-warfare/

The reach and pervasiveness of the Internet, including the “Internet of Things”, is growing in tandem with the growth in capability and sophistication of artificial intelligence (AI).

Combined, these parallel developments could produce AI that damages property, both digital and physical, and endangers lives. If the past is any guide, the introduction of technologies that can be weaponized may mean that we experience a catastrophic event caused by AI before we take global steps to properly regulate its use.

A catastrophic event caused by AI could be intentional or unintentional on the part of a human actor, an AI, or a combination of both. Although accidents are a valid concern, here we want to focus on the need for an international prohibition on the use of AI to intentionally and directly cause harm.

The prohibition on malicious AI would mirror the international ban on chemical weapons following their catastrophic use in World War I. However, it is our hope that the prohibition can be established before a catastrophic event occurs. Here, we look at three scenarios of intentionally malicious AI use, discuss the requirements of a prohibition on malicious AI, and examine the barriers that need to be overcome in order to enact such a prohibition.

Imagine a country that produces a large fraction of the AI-enabled electronic devices used by other countries (China comes to mind). Now imagine that this country wishes to deter other countries from interfering with its foreign policy. It could design the AI devices to respond
to a “master switch” that would cause a catastrophic event (prior to, say, its invasion of a neighboring nation). For example, AI-enabled vehicles could simultaneously accelerate to maximum speed and lock out any attempts to steer or brake.

Another example would be the subversion of AI-controlled infrastructure such as a nation’s power grid or aircraft control system in order to cause disruption, chaos, or even physical damage. Admittedly, such acts of aggression would turn the entire world against the perpetrating country. But Germany’s invasion of Poland (September 1, 1939) and Japan’s surprise attack on Pearl Harbor (December 7, 1941) had similar foreseeable effects—ultimate disaster for the aggressors—and yet the aggressor countries still favored these strategies.

A less overtly aggressive but still catastrophic use of AI would be to embed instructions in AI devices that lead users into vices. For example, the AI produced by the ambitious nation could suggest activities that would weaken family ties (adultery, pornography, etc.) or promote addiction. With the proliferation of streaming media offering customized content for every user, this scenario may already be possible today.

As mentioned earlier, there have already been efforts to ban Weapons of Mass Destruction (WMD) such as chemical weapons and nuclear weapons, and with very few exceptions (like chemical weapon use in Iraq in 1988 and in Syria in 2013) these agreements have held. However, other internationally coordinated weapons ban agreements have been persistently and repeatedly ignored by some of the countries that signed them (agreements about land mines and aerial weapons, for instance).

It appears that the key difference between the agreements that have been honored and those that have not is that the honored ones involved weapons of mass destruction. An effective ban on malicious AI requires the global community to first agree that such a form (or use) of AI would be a WMD.

The first step toward an agreement that certain AI meets the criteria of a WMD would be to convene a meeting of international AI experts with this goal in mind. The meeting would mirror the 1975 Asilomar Conference on Recombinant DNA, where experts agreed that organisms with modified DNA should not be released into the wild.

The concern in 1975 was that human ingenuity could create new life forms capable of taking over the world. The concern today is that humans could create AI capable of taking over the world.

Since 1975, it has become clear that creating new life forms is not as easy as once presumed. Certainly, humans have the power to cobble together a “superbug”—a bacterium resistant to all known antibiotics—by breeding bacteria that are each resistant to one antibiotic. But the ingenuity required to create a brand-new life form from scratch that can survive on its own, let alone destroy all of humanity, is well beyond our current abilities. Still, the practices put in place after the Asilomar conference were a good example of the precautionary principle, which should also be applied to the danger of AI.

We authors are not convinced that human-created AI could ever become self-aware and see humankind as a threat that requires elimination. However, even if AI never becomes self-aware, it could become a WMD, like the human-created superbugs and the agents used in biological warfare. Thus, we think the precautionary principle should be applied and the global community should adopt policies to limit the spread of malicious AI. It is not too soon for global AI experts to convene on this matter and for the public to be alerted to their recommendations on how best to contain malicious AI.

About this writer: BRADLEY A. ALANIZ
Commander, U.S. Navy, is a Military Professor at the U.S. Naval War College
__________________
Boats

O Almighty Lord God, who neither slumberest nor sleepest; Protect and assist, we beseech thee, all those who at home or abroad, by land, by sea, or in the air, are serving this country, that they, being armed with thy defence, may be preserved evermore in all perils; and being filled with wisdom and girded with strength, may do their duty to thy honour and glory; through Jesus Christ our Lord. Amen.

"IN GOD WE TRUST"



Powered by vBulletin, Jelsoft Enterprises Ltd.