I had a family friend who worked on missiles and drones and other defense systems. He was really one of my dad’s running buddies, and he was a super nice guy, had 4 kids, went to church, etc.
One day, I believe during the Iraq occupation, maybe ~12 or 13 years ago, I asked him very directly how he felt about working on these killing machines and whether it bothered him. He smiled and asked if I’d rather have the war here in the U.S. He also told me he feels like he’s saving lives by being able to target political enemies so directly, with less collateral damage than in the past. New technology, he truly believed, was preventing innocent civilians from being killed.
It certainly made me think about it, and maybe appreciate somewhat the perspective of people who end up working on war technology, even if I wouldn’t do it. This point of view assumes we’re going to have a war anyway, and no doubt the ideal is just not to have wars, so maybe there’s some rationalization, but OTOH maybe he’s right that he is helping to make the best of a bad situation and saving lives compared to what might happen otherwise.
Costa Rica hasn’t had a standing military since 1948. They are in one of the most politically unstable parts of the world and do just fine without worry of invasion.
The US hasn’t been attacked militarily on its own soil in the modern era.
The US military monopoly hasn’t prevented horrific attacks such as 9/11, executed by groups claiming to be motivated by our foreign military campaigns.
I think there is a valid question about the moral culpability of working in this area.
It's a valid question, but realistically, if Costa Rica were invaded, a number of countries would step in to help them. I love Costa Rica, it's one of the most beautiful countries I've been to, and I do appreciate the political statement they're making, but at the same time they're in a pretty unique situation.
As for the ethics of working on weapons, I think there is a lot of grey when it comes to software. It tends to centralize wealth, since once you get it right it works for everyone. It tends to be dual use, because a hardened OS can be used for both banks and tanks. Even developments in AI are worrying because they're so clearly applicable to the military.
Would I work on a nuclear bomb? No. Would I work on software that does a better job of, say, facial recognition to lessen the likelihood of a predator drone killing an innocent civilian? Maybe. It's not an all or nothing thing.
In the last 40 years, Panama and Grenada were invaded, Honduras had a coup, Colombia had a civil war, Venezuela is currently having a sort of civil war, Nicaragua's government was overthrown by a foreign-armed terrorist campaign, and El Salvador's government sent death squads out to kill its subjects. Nobody stepped in to help any of them except Colombia. Why would Costa Rica be different?
> Would I work on software that does a better job of, say, facial recognition to lessen the likelihood of a predator drone killing an innocent civilian?
The logical extreme of this is Death Note: the person who has the power simply chooses who should die, and that person dies, immediately and with no opportunity for resistance and no evidence of who killed them. Is that your ideal world? Who do you want to have that power — to define who plays the role of an “innocent civilian” in your sketch — and what do you do if they lose control of it? What do you do if the person or bureaucracy to which you have given such omnipotence turns out not to be incorruptible and perfectly loving?
> The logical extreme of this [...] Is that your ideal world?
Clearly not. Would you please not post an extreme straw-man and turn this into polarizing ideological judgement? The post you’re responding to very clearly agreed that war is morally questionable, and very clearly argued for middle ground or better, not going to some extreme.
You don’t have to agree with war or endorse any kind of killing in any way to see that some of the activities involved by some of the people are trying to prevent damage rather than cause it.
Intentionally choosing not to acknowledge the nuance in someone’s point of view is ironic in this discussion, because that’s one of the ways that wars start.
You assert that "software that does a better job of, say, facial recognition to lessen the likelihood of a predator drone killing an innocent civilian" is "middle ground", "not going to some extreme", "trying to prevent damage", and "nuanced".
It is none of those. It is a non-nuanced extreme that is going to cause damage and kill those of us in the middle ground. Reducing it to a comic book is a way to cut through the confusion and demonstrate that. If you have a reason (that reasonable people will accept) to think that the comic-book scenario is undesirable, you will find that that reason also applies to the facial-recognition-missiles case — perhaps more weakly, perhaps more strongly, but certainly well enough to make it clear that amplifying the humans' power of violence in that way is not going to prevent damage.
Moreover, it is absurd that someone is proposing to build Slaughterbots and you are accusing me of "turn[ing] this into polarizing ideological judgement" because I presented the commonsense, obvious arguments against that course of action.
What's your moral stance on developing defense mechanisms against Slaughterbot attacks? What if the best defense mechanism is killing the ones launching the attacks?
I think developing defense mechanisms against Slaughterbot attacks is a good idea, because certainly they will happen sooner or later. If the best defense mechanism is killing the ones launching the attacks, we will see several significant consequences:
1. Power will only be exercised by the anonymous and the reckless; government transparency will become a thing of the past. If killing the judge who ruled against you, or the school-board member who voted against teaching Creationism, or the wife you're convinced is cheating on you, is as easy and anonymous as buying porn on Amazon, then no president, no general, no preacher, no judge, and no police officer will dare to show their face. The only people who exercise power non-anonymously would be those whose impulsiveness overcomes their better judgment.
2. To defend against anonymity, defense efforts will necessarily expand to kill not only those who are certain to be the ones launching the attacks, but those who have a reasonable chance of being the ones launching the attacks. Just as the Khmer Rouge killed everyone who wore glasses or knew how to read, we can expect that anyone with the requisite skills whose loyalty to the victors is in question will be killed. Expect North-Korea-style graded loyalty systems in which having a cousin believed to have doubted the regime will sentence you to death.
3. Dead-Hand-type systems cannot be defended against by killing their owners, only by misleading their owners as to your identity. So they become the dominant game strategy. This means that it isn't sufficient to kill people once they are launching attacks; you must kill them before they have a chance to deploy their forces.
4. Battlefields will no longer have borders; war anywhere will mean war everywhere. Combined with Dead Hand systems, the necessity for preemptive strikes, and the enormous capital efficiency of precision munitions, this will result in a holocaust far more rapid and complete than nuclear weapons could ever have threatened.
While this sounds like an awesome plot for a science-fiction novel, I'd rather live in a very different future.
So, I hope that we can develop better defense mechanisms than just drone-striking drone pilots, drone sysadmins, and drone programmers. For example, pervasive surveillance (which also eliminates what we know as "human rights", but doesn't end up with everyone inevitably dead within a few days); undetectable subterranean fortresses; living off-planet in small, high-trust tribes; and immune-system-style area defense with nets, walls, tiny anti-aircraft guns, and so on. With defense mechanisms such as these, the Drone Age should be more survivable than the Nuclear Age.
But, if we can't develop better defense mechanisms than killing attackers, we should delay the advent of the drone holocaust as long as we can, enabling us to enjoy what remains of our lives before it ends them.
You paint a bleak future. Keep in mind though, there have been many dark moments in human history when a lot of people got killed for very bad reasons, and yet here we are.
> is as easy and anonymous as buying porn on Amazon
I'm not sure ease of use is such a game changer. You can buy a drone today, completely anonymously, strap some explosives to it, and remotely fly it into someone and detonate it from a few hundred yards away. Easily available cheap drones like that have existed for at least a decade, yet I don't remember many cases of someone using them for this purpose. Does the existence of a Slaughterbot-like product make it easier?

If some terrorist wants to kill a bunch of people, how is it easier than just detonating a truck full of C4? To a terrorist this technology does not provide that much benefit over what's already available.

How about governments? I don't see it - if a government wants someone dead, they will be dead (either officially, e.g. Bin Laden, or unofficially, Epstein-style). If a government wants a bunch of people dead, the difficulty lies not in technology, but in PR. I doubt there is a lack of trigger-happy black-ops types (or "patriots") ready to do whatever you can program a drone to do. Here I'm talking about democratic first-world governments. It's even less clear that tyrannical governments would benefit much from this technology - sending a bunch of agents to arrest and execute people is just as effective. I don't think the tactical difficulty of finding and physically shooting people is a big concern for decision makers. As you yourself pointed out, the Khmer Rouge and North Korea had no problem doing that without any advanced technology.
> you must kill them before they have a chance to deploy their forces.
Yes. And that's how it has been at least since 9/11 - CIA drone strikes all over the world. Honestly, I'd much rather they had only done drone strikes, if at all possible (instead of invading Iraq with boots on the ground).
> more rapid and complete than nuclear weapons could ever have threatened
Sorry, I'm not seeing it - how would this change major conflicts and battlefields? If you have a battlefield, and you know who your enemy is, you don't really need Slaughterbots - you need big guns and missiles that can do real damage. It's much easier to defend soldiers against tiny drones than against heavy fire. If you don't know who your enemy is, say terrorists mixed into a crowd of civilians, how would face detection help you? As for precise military strikes - we're already doing those with drones, so nothing new here.
> end up with everyone inevitably dead within a few days [...] enjoy what remains of our lives before it ends them
You are being overly dramatic. Yes, terrorists and evil governments will keep murdering people just like they always have. No, this technology does not make it fundamentally easier. Is the world today a scary place to live in? Yes, but for very different reasons - think about what will start happening in a few decades when the global temperature rises a couple of degrees, triggering potentially cataclysmic events affecting the livelihoods of millions, or when global pollution contaminates air, water, and food to the point where it makes people sick. I really hope we will have developed advanced technology by then to deal with those issues.
But of course it's way more fun to discuss advanced defense methods against killer drones. So let's do that :)

I was thinking that some kind of small EMP device could be used whenever slaughterbots are detected, but after reading a little about EMPs it seems it would not be able to hurt them much, because these drones are so small. I don't think nets of any kind would be effective - I just don't see how you would cover a city with nets. Underground fortresses and off-planet camps can only protect a small number of people. In some scenarios some kind of laser-based defense system could be effective (deployed in high-value/high-risk environments), and of course we can keep tons of similar drones ready to attack other drones at multiple locations throughout the city. Neither of these seems particularly effective against a large-scale attack, and both require very good mass surveillance.

I think a combination of very pervasive surveillance with an ability to deliver defense drones quickly to the area of the attack (perhaps carried in a missile, fired automatically as soon as a threat level calculated by the surveillance system crosses some threshold) is the best option. The defense drones could be much more expensive than the attack drones, and so be able to eliminate them quickly. Fascinating engineering challenge!
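To make the trigger logic concrete, here's a minimal sketch of the threshold-crossing dispatch loop I have in mind; every name and number in it (ThreatEstimate, launch_defense_drones, the 0.8 threshold) is invented for illustration, not any real system's API:

```python
# Hypothetical sketch of the threshold-triggered dispatch described above.
# All names and numbers are invented for illustration.
from dataclasses import dataclass

THREAT_THRESHOLD = 0.8  # assumed tuning parameter


@dataclass
class ThreatEstimate:
    lat: float
    lon: float
    score: float  # fused confidence from the surveillance system, 0..1


def launch_defense_drones(lat: float, lon: float) -> None:
    # Stand-in for firing the interceptor-carrying missile toward the area.
    print(f"dispatching interceptors to ({lat}, {lon})")


def on_threat_update(estimate: ThreatEstimate) -> bool:
    """Dispatch automatically once the fused threat score crosses the threshold."""
    if estimate.score >= THREAT_THRESHOLD:
        launch_defense_drones(estimate.lat, estimate.lon)
        return True
    return False


# A surveillance fusion node would call this on every update, e.g.:
on_threat_update(ThreatEstimate(lat=40.71, lon=-74.01, score=0.93))
```

The hard part, of course, is computing that score reliably; the dispatch itself is the easy bit.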
The US is known to have carried out drone strikes in Afghanistan, Yemen (including against US citizens), Pakistan, Libya, and Somalia; authority over the assassination program was officially transferred from the CIA to the military by Obama. That leaves another 200-plus countries whose citizens do not yet know the feeling of helpless terror when the car in front of you on the highway explodes into a fireball unexpectedly, presaged only by the far-off ripping sound of a Reaper on the horizon, just like most days. The smaller drones that make this tactic affordable to a wider range of groups will give no such warning.
> It’s much easier to defend soldiers against tiny drones than against heavy fire.
Daesh used tiny drones against soldiers with some effectiveness, but there are several major differences between autonomous drones and heavy fire. First, heavy fire is expensive, requiring either heavy weapons or a large number of small arms. Second, autonomous drones (which Daesh evidently did not have) can travel a lot farther than heavy fire; the attacker can target the soldiers’ families in another city rather than the soldiers themselves, and even if they are targeting the soldiers directly, they do not need to expose themselves to counterattack from the soldiers. Third, almost all bullets miss, but autonomous drones hardly ever need to miss; like a sniper, they can plan for one shot, one kill.
You may be thinking of the 5 m/s quadcopters shown in the Slaughterbots video, but there’s no reason for drones to move that slowly. Slingshot stones, arrows from bows, and bottle-rockets all move on the order of 100 m/s, and you can stick guidance canards on any of them, VAPP-style.
> If you don’t know who your enemy is, say terrorists mixed in the crowd of civilians, how would face detection help you?
Yes, it’s true that if your enemy is protected by anonymity, face-recognition drones are less than useful — that’s why the first step in my scenario is the end of any government transparency, because the only people who can govern in that scenario (in the Westphalian sense of applying deadly force with impunity) are anonymous terrorists. But if the terrorists know who their victims are, the victims cannot protect themselves by mixing into a crowd of civilians.
> Yes, terrorists and evil governments will keep murdering people just like they always have. No, this technology does not make it fundamentally easier.
Well, on the battlefield it definitely will drive down the cost per kill, even though it hasn’t yet. It’s plausible to think that it will drive down the cost per kill in scenarios of mass political killing, as I described above, but you might be right that it won’t.
The two really big changes, though, are not about making killing easier, but about making killing more persuasive, for two reasons. ① It allows the killing to be precisely focused on the desired target, for example enabling armies to kill only the officers of the opposing forces, only the men in a city, or only the workers at a munitions plant, rather than everybody within three kilometers; ② it allows the killing to be truly borderless, so that it’s very nearly as easy to kill the officers’ families as to kill the officers — but only the officers who refuse to surrender.
You say “evil governments”, but killing people to break their will to continue to struggle is not limited to some subset of governments; it is the fundamental way that governments retain power in the face of the threat of invasion.
Covering a city with nets is surprisingly practical, given modern materials like Dyneema and Zylon, but not effective against all kinds of drones. I agree that underground fortresses and off-planet camps cannot save very many people, but perhaps they can preserve some seed of human civilization.
> You can buy a drone today, completely anonymously, strap some explosives to it, remotely fly it into someone and detonate it, a few hundred yards away from you
But that’s not face-recognition-driven, anonymous, long-range, or precision-guided; it might not even be cheap, considering that the alternative may be to lob the grenade by hand or shoot with a sniper rifle. If the radio signal is jammed, the drone falls out of the sky, or at least stays put, and the operator can no longer see out of its camera. As far as I know, the signal on these commercial drones is unencrypted, so there’s no way for the drone to distinguish commands from its buyer from commands from a jammer. Because the signal is emitted constantly, it can guide defenders directly to the place of concealment of the operator. And a quadcopter drone moves slowly compared to a thrown grenade or even a bottlerocket, so it’s relatively easy for the defenders to target.
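For contrast, here's roughly what the missing authentication would look like — a toy sketch using a shared key, not any actual drone vendor's protocol. With it, the drone can reject a jammer's forged commands; without it, it has no way to tell buyer from jammer:

```python
# Toy illustration of authenticated commands; not any real drone protocol.
import hashlib
import hmac

SHARED_KEY = b"key provisioned into the drone before flight"  # assumption


def sign(command: bytes) -> bytes:
    return hmac.new(SHARED_KEY, command, hashlib.sha256).digest()


def accept(command: bytes, tag: bytes) -> bool:
    # Constant-time comparison, so the check itself leaks nothing.
    return hmac.compare_digest(sign(command), tag)


cmd = b"GOTO 34.6037,-58.3816"
tag = sign(cmd)
assert accept(cmd, tag)              # the buyer's genuine command
assert not accept(b"LAND NOW", tag)  # a jammer's forgery is rejected
```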
> Does Slaughterbot-like product existence make it easier?
Yes.
> If some terrorist wants to kill a bunch of people, how is it easier than just detonating a truck full of C4?
Jeffrey Dahmer wanted to kill a bunch of people. Terrorists want to persuade a bunch of people; the killing is just a means to that end. Here are seven advantages to a terrorist of slaughterbots over a truck full of C4:
1. The driver dies when they set off the truck full of C4.
2. The 200 people killed by the truck full of C4 are kind of random. Some of them might be counterproductive to your cause — for example, most of the deaths in the 1994 bombing of the AMIA here in Buenos Aires were kindergarten-aged kids, which helps to undermine sympathy for the bombers. By contrast, with the slaughterbots, you can kill 200 specific people; for example, journalists who have published articles critical of you, policemen who refused to accept your bribes (or their family members), extortion targets who refused to pay your ransom, neo-Nazis you’ve identified through cluster analysis, drone pilots (or their family members), army officers (or their family members), or just people wearing clothes you don’t like, such as headscarves (if you’re Narendra Modi) or police uniforms (if you’re an insurgent).
3. A truck full of C4 is like two tonnes of C4. The Slaughterbots video suggests using 3 grams of shaped explosive per target, at which level 600 grams would be needed to kill 200 people. That is over 3000 times less explosive, and correspondingly lower cost, assuming there’s a free market in C4 (see the quick arithmetic after this list). However...
4. A truck full of C4 requires C4, which is hard to get and arouses suspicion in most places; by contrast, precision-guided munitions can reach these levels of lethality without such powerful explosives, or without any explosives at all, although I will refrain from speculating on details. Certainly both fiction and the industrial safety and health literature are full of examples of machines killing people without any explosives.
5. A truck full of C4 is large and physically straightforward to stop, although this may require heavy materials; after the AMIA truck bombing, all the Jewish community buildings here put up large concrete barricades to prevent a third bombing. So far this has been successful. (However, Nisman, the prosecutor assigned to the AMIA case, surprisingly committed suicide the day before he was due to present his case to the court.) A flock of autonomous drones is potentially very difficult to stop. They don’t have to fly; they can skitter like cockroaches, fall like Dragons’ Teeth, float like a balloon, or stick to the bottoms of the cars of authorized personnel.
6. You can prevent a truck bombing by killing the driver of the truck full of C4 before he arrives at his destination, for example if he tries to barrel through a military checkpoint. In all likelihood this will completely prevent the bombing; if he’s already activated a deadman switch, it will detonate the bomb at a place of your choosing rather than his, and probably kill nobody but him, or maybe a couple of unlucky bystanders. By contrast, an autonomously targeted weapon, or even a fire-and-forget weapon, can be designed to continue to its target once it is deployed, whether or not you kill the operator.
7. Trucks drive at 100 km/hour, can only travel on roads, and carry license plates, making them traceable. In 1998, Laima, an early Aerosonde, flew the 3270 km from Newfoundland to the UK in 26 hours, powered by 5.7 ℓ of gasoline; while this is only 125 km/hour, it is of course possible to fly much faster at the expense of greater fuel consumption. Modern autonomous aircraft can be much smaller. This means that border checkpoints and walls may be an effective way to prevent trucks full of C4 from getting near their destination city, but they will not help against autonomous aircraft.
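A quick sanity check on the numbers in point 3 (these are the figures quoted above, not independent data):

```python
# Explosive-mass arithmetic from point 3; quoted figures, not new data.
truck_c4_g = 2_000_000       # two tonnes of C4, in grams
per_target_g = 3             # shaped charge per target, per the video
targets = 200

total_g = per_target_g * targets
print(total_g)               # 600 g for 200 targets
print(truck_c4_g / total_g)  # ~3333x less explosive than the truck
```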
> How about governments? I don’t see it - if a government wants someone dead, they will be dead (either officially, e.g. Bin Laden, or unofficially, Epstein-style). If a government wants a bunch of people dead, the difficulty lies not in technology, but in PR.
This is far from true. The US government has a list of people they want dead who are not yet dead — several lists, actually, the notorious Disposition Matrix being only one — and even Ed Snowden and Julian Assange are not on them officially. Killing bin Laden alone cost them almost 10 years, two failed invasions, and the destruction of the world polio eradication effort; Ayman al-Zawahiri has so far survived 20 years on the list. Both of the Venezuelan governments want the other one dead. Hamas, the government of the Gaza Strip, wants the entire Israeli army dead, as does the government of Iran. The Israeli government wanted the Iranian nuclear scientists dead — and in that case it did kill them. The Yemeni government, as well as the Saudi government, wants all the Houthi rebels dead, or at least their commanders, and that has been the case for five years. The Turkish government wants Fethullah Gulen dead. Every government in the region wanted everyone in Daesh dead. In most of these cases no special PR effort would be needed.
Long-range autonomous anonymous drones will change all that.
> sending a bunch of agents to arrest and execute people is just as effective. ... As for precise military strikes - we’re already doing it with drones, so nothing new here.
Sending a bunch of agents is not anonymous or deniable, and it can be stopped by borders; I know people who probably only survived the last dictatorship by fleeing the country. It’s also very expensive; four police officers occupied for half the day is going to cost you the PPP equivalent of about US$1000. That’s two orders of magnitude cheaper than a Hellfire missile (US$117k) but three orders of magnitude more expensive than the rifle cartridge those agents will use to execute the person. The cost of a single-use long-range drone would probably be in the neighborhood of US$1000, but if the attacker can reuse the same drone against multiple targets, they might be able to get the cost down below US$100 per target, three orders of magnitude less than a Hellfire drone strike.
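Checking those orders of magnitude (all figures are the ones quoted above; the drone cost is my guess):

```python
# Order-of-magnitude check on the per-kill cost comparisons above.
import math

hellfire = 117_000   # US$, one Hellfire missile
agents = 1_000       # US$ PPP, four officers for half a day
cartridge = 1        # US$, roughly, one rifle round
drone_reused = 100   # US$ per target, assumed, for a reusable drone

for a, b in [(hellfire, agents), (agents, cartridge), (hellfire, drone_reused)]:
    print(f"{a}/{b}: ~{math.log10(a / b):.1f} orders of magnitude")
```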
It’s very predictable that as the cost of an attack goes down, its frequency will go up, and it will become accessible to more groups.
(Continued in a sibling comment, presently displayed above.)
Real weapons are not like that. They are expensive, they can fail to kill their target, and they can also cause collateral damage. If death notes were as easy to obtain as guns, there would clearly be an increase in homicides, but that's not true of military missiles.
The Slaughterbots video is absolutely awful. First of all, quadcopters have an incredibly small payload capacity and limited flight time. A quadcopter lifting a shaped charge would be as big as your head and have 5 minutes of flight. Simply locking your door and hiding under your bed would be enough to stop them. The AI aspect doesn't make them more dangerous than a "smart rifle" that shoots once the barrel points at a target.
Do you know what I am scared of? I am more scared of riot police using 40mm grenade launchers with "non-lethal" projectiles, knowingly aiming them at my face even though their training clearly taught that these weapons should never be aimed at someone's head. The end result is lost eyeballs and sometimes even deaths, and the people targeted aren't limited to those protesting violently in a large crowd. Peaceful bystanders and journalists who were not involved have also become victims of this type of police violence. [0]
As for the first line, you assert that real weapons are expensive, unreliable, and kill unintended people. Except in a de minimis sense, none of these are true of knives. Moreover, you seem to be reasoning on the basis of the premise that future technology is not meaningfully different from current technology.
In conclusion, your comment consists entirely of wishful and uninformed thinking.
Eh, there is a difference between the examples you've cited and Costa Rica. They're an ally of the US and a strong democracy focused on tourism.
> The logical extreme of this is Death Note
I don't really deal with logical extremes. It leads to weird philosophies like Objectivism or Stalinism. In international relations terms, I'm a liberal with a dash of realism and constructivism. I don't live in my ideal world. My ideal world doesn't have torture or murder or war of any kind. It doesn't have extreme wealth inequality or poverty. Unless this is all merely a simulation, I live in the real world. Who has the power to kill people? Lots of people. Everyone driving a car or carrying a gun. Billions of people. It's a matter of degree and targeting and justification and blow-back and economics and ethics and so many other things that it's not really sensible to talk about it.
I'm familiar with the arguments against AI being used on the battlefield, but even though I abhor war, I'm not convinced that there should be a ban.
Of course there is a valid question about the morals of war technology. You are absolutely right about that, and I am not even remotely suggesting otherwise. Like I said, I don’t think I would ever choose to work on it.
There’s a vast chasm in between right and wrong though. There can be understanding of others’ perspectives, regardless of my personal judgement. And there is also a valid and tightly related question here about the morals of mitigating damage during a military conflict, especially if the mitigation prevents innocent deaths. If there’s a hard moral line between doctors and cooks and drivers and snipers and drone programmers, I don’t know exactly where it lies. Doctors are generally considered morally good, even war doctors, but if we are at war, it’s certainly better to prevent an injury than to treat one.
The US was last attacked in living memory; Pearl Harbor survivors still number > 0.
I will leave the WTC attack on the table, as I’m not interested in a nitpicking tangent about what constitutes an attack in asymmetric warfare vs. “terrorism.”
“The modern era” is usefully vague enough to be unfalsifiable.
In practice, Costa Rica has a standing military. It's just the US military.
Due to the Monroe Doctrine, this is a rational stance for Costa Rica to take. If the US were to adopt this policy, Costa Rica might have to take a hard look at repealing it.
> New technology, he truly believed, was preventing innocent civilians from being killed.
Drones and missiles are definitely a step forward compared to previous technology in many regards, but I can't help but be reminded of people who argued that the development and use of napalm would reduce human suffering by putting an end to the war in Vietnam faster.
For an interesting and rather nuanced (but not 100% realistic) view on drone strikes, I'd recommend giving the 2015 movie Eye in the Sky a watch.
Another issue with drone strikes and missiles is "the bravery of being out of range": it's easier to make the decision to kill someone who you're just watching on a screen than it is to look a person in the eyes and decide to have them killed.