Reminds me of the proposal to keep the nuclear launch codes inside the body of an innocent volunteer, so the President would have to kill the person to get the codes.
If you believe we should never use nuclear weapons, then don't have them at all.
If you believe there is a case where it may be moral and rational to use nuclear weapons, why would you want to put a potential barrier in the way of their use? You could have a situation where everyone agreed to use them but the president was physically unable to harm the aide in order to use them.
You can know that something is the right thing to do but not have the courage to physically harm someone to do it.
An interlock that you may not be able to unlock for reasons unrelated to the task at hand is a bad interlock.
>You can know that something is the right thing to do but not have the courage to physically harm someone to do it.
In this specific case the "thing to do" is literally to harm hundreds of thousands of people.
The reasoning behind this proposed interlock is that any logic which concludes that it is moral and rational to harm hundreds of thousands of people must also conclude that it is moral and rational to harm the "interlock" individual. Otherwise, it is likely that dropping the bomb would be a mistake.
> The reasoning behind this proposed interlock is that any logic which concludes that it is moral and rational to harm hundreds of thousands of people must also conclude that it is moral and rational to harm the "interlock" individual.
Yes, but you can know it's the right thing to do and still not be able to physically do it.
The president's ability to physically cut someone open is not relevant to whether it's a good idea to use nuclear weapons or not. Him being unable to do it tells you nothing about whether they should be launching the weapons.
If the president fails the test that tells you nothing about whether the launch is the right thing to do. Doesn't that fundamentally make the test bad?
> It is about forcing the president to look somebody in the eye before they kill them.
Right, but can you understand that 'the President being able to look somebody in the eye before they kill them' is not a requisite for 'the employment of nuclear weapons being justified'?
We require the president to be able to do B before they can do A. But what if A is the right thing to do and the President is not able to do B? Not being able to do B does not mean A is wrong.
Doing A cannot be the right thing to do if you still find doing B impossible.
If you cannot kill your friend in order to kill a few hundred thousand more, how can it possibly be justified? I just struggle to come up with a scenario where that is the case.
Of course I’m of the school that thinks firing nuclear weapons is never a good idea.
But that is the exact point. Having a human interlock explicitly shifts the dependency. Knowing that you should launch nukes is no longer enough and being able to bring yourself to physically kill someone is the additional requirement that we are _deliberately_ adding to this process despite there not being an obvious logical link between the two actions before.
I believe it is a requirement. I believe that the natural bias would be towards using nuclear weapons when we shouldn't. I believe there is no possible world where the use of nuclear weapons is justified and the president couldn't also kill one additional person. I do believe there are cases where a president may use nuclear weapons when it isn't truly justified, and that having additional checks will help prevent that.
> The president's ability to physically cut someone open is not relevant to whether it's a good idea to use nuclear weapons or not. Him being unable to do it tells you nothing about whether they should be launching the weapons.
Our emotional systems are the product of millions of years of evolution and often (not always, but often) show better judgement than our "higher" faculties. Bringing that part of our capabilities into the decision-making loop is a very good idea.
I think it would work equally well if the president had two aides and had to order one to butcher the other, in front of her eyes, in order to launch a nuclear strike.
Regardless of the exact details, I think the point of this thought experiment is that for a head of state, the decision to launch a massive attack that will cause hundreds of thousands of casualties can feel a little abstract. "Bombing a city" can seem abstract, even if the president understands this means killing children. Understanding is quite different from feeling. However, if the act of ordering a bombing raid on a city involved physically murdering a child, it would definitely feel more immediate and less abstract.
Your point stands, of course. But the part about removing the abstractness of the act seems relevant when ordering people killed.
Everybody agrees that this is a nuke-them-all situation, but the president, having been given part of the task of ripping apart a human body himself, thinks more about the subject and decides another diplomatic round is a better option.
I think that's the point. I'm personally not an advocate of this because it seems to be a little too "beat you over the head" with its moral metaphor, but the whole point is that the President should have to personally kill someone to understand the gravity of what they are about to do.
From the perspective of an advocate I'd say: If they can't come to terms with killing one, who are they to execute hundreds of thousands?
> "If you believe there is a case where it may be moral and rational to use nuclear weapons, why would you want to put a potential barrier in the way of their use?"
Because you think the point where they become moral and rational to use is way way way further than commonly discussed, and you want to put many barriers of many kinds (physical, emotional, logistical) to delay their point of use without completely blocking them.
You could also say that if a person is incapable of doing the hard parts of the job, don't vote them into the position. (The downside is that you'll end up voting in someone who doesn't mind killing someone in cold blood, while expecting the interlock to be a filter that brings more empathy to the position.)
It's an attempt to make an abstraction concrete. Think of it as the trolley problem in real life.
Stalin is famously supposed to have said, "one death is a tragedy, 100,000 is a statistic". Cynical or not, it is how humans think.
> If you believe we should never use nuclear weapons, then don't have them at all.
Strategic game theory and Mutual Assured Destruction depend on the possibility that the other guy will use them if you do, and may be the only way to prevent their use. Interestingly this is one reason why you want the other guy to know your procedures, capabilities, deployments etc. Secret weapons have no deterrent value.
> Think of it as the trolley problem in real life.
Well exactly... doesn't that show you that it's a bad idea? People don't know if they could bring themselves to throw the switch even if everyone thinks it makes rational sense.
You're taking a rational, well-considered, strategic decision... and making the interlock a messy personal emotional one unrelated to the actual issue at hand. That sounds like the wrong way around to be doing things?
> Well exactly... doesn't that show you that it's a bad idea?
I don't think so, no. Sometimes we think too abstractly and make what turn out to be poor decisions. Emotions are really valuable heuristics and should be harnessed at a time like this.
Absolutely not, mutually assured destruction only works if both sides know that the other is committed to carrying out a retaliatory strike in the minutes before their death. It’s essential that the person in the position to order a retaliatory strike be someone ready to kill hundreds of millions of people for no reason other than the fact that they said they would. Putting emotional barriers between that person and the codes they need to carry out that enormous responsibility just makes it less likely that they will be able to follow through. If there’s sufficient uncertainty about whether there will be a follow-through then the nuclear arsenal loses its deterrence factor and we’re back to having to live with the fear that our rational enemies may carry out a first strike on us.
> Absolutely not, mutually assured destruction only works if both sides know that the other is committed to carrying out a retaliatory strike in the minutes before their death.
Not really. You would need to be absolutely certain that the other party won’t carry out a retaliatory strike before they’re destroyed.
The only thing that matters is that the other party is capable of indiscriminate destruction, not the certainty that they'll actually carry it out.
It’s like punching someone holding a gun in the face.
Trolley Problems are themselves a bad idea... the Kobayashi Maru is a similar exercise. I, like Kirk, don't believe that there are situations that can't be worked around if there is time to think, and resources to act.
Isn't the Trolley problem a situation that is, by definition, time sensitive? If you had more time to think and resources to act, it wouldn't be a Trolley Problem.
If the answer to launch-nukes-by-cutting-a-human-aide is "well, I need more time to think" then maybe that's a good outcome?
It's the 1980s, and the United States implements this policy. What happens on the Soviet side? After the United States' announcement the Soviet press and Soviet sympathizers worldwide gasp loudly in horror. "How cruel are Americans, really? Is the barbaric act of murdering and butchering an innocent young man the only thing still able to keep their president from destroying our Earth?"
The Soviet General Secretary soon receives a report about what the new policy means tactically. Americans will take several extra minutes, possibly more, to authorize retaliation. (The exact delay is subject to disagreement. Secret experiments are conducted to get the timing down. They are inconclusive.) Amid the decade's mounting tensions, a preemptive nuclear strike looks more tempting than before.
Too bad sociopaths and narcissists are overrepresented in positions of power. All this would do is uselessly kill a volunteer.
Time is also of the essence for MAD; a known delay only makes MAD less effective if, e.g., sub-launched cruise missiles arrive faster than the dissection can be performed. And do all the fallback commanders need their own willing victims to mount a response?
https://boingboing.net/2015/12/11/proposal-keep-the-nuclear-...