Friday, 26 February 2016

I Robot

While my husband was out for a couple of evenings last week, I watched Shoah. Possibly not something best done on one's own.

I haven't finished yet. I'm only five and a half hours in. But the things I've learnt so far aren't quite as straightforward as I'd imagined.

In fact, I haven't exactly learnt anything. Rather, I've found myself up against questions I can't answer about moral responsibility - and I'm also growing increasingly doubtful about Edmund Burke's assertion that "the only thing necessary for the triumph of evil is for good men to do nothing." I think Burke grossly underestimated the force that is bad men.

Not that it is always easy to define who exactly is bad.

Before watching the film, I would probably have said that all the many individuals who contributed in one way or another to the workings of Treblinka, Auschwitz et cetera were each unequivocally evil. After all, they had allowed evil to triumph by, essentially, doing nothing. Yet, if many of them - including, most troublingly, those inmates who were spared, provided they did appalling work within the system - had struggled against the powers in charge of the camps, what difference would it actually have made? Those individuals were not faced with the chance of saving someone else by sacrificing themselves. If they had refused to cooperate, all that would have happened is that they would have gone to their deaths and others would have been drafted in to replace them. Good men would have done something, been killed, and more good men would then have been drafted in - until some of those good men thought, "What's the point?"

I don't think cooperating allowed the triumph of evil, since not cooperating stopped nothing and resulted in extra death.

To begin with, though, I did imagine that it might be possible to differentiate between the moral responsibility of the person who wrote the detailed report that is read out at one point - a report specifying exactly what modifications were needed to increase the efficiency of the trucks that were used as killing chambers early on in the war - and the moral responsibility of the person who drove the engine that took the trucks into Treblinka. The report writer, it seemed to me, was in no real danger and was coldly, calmly applying their intelligence to how to improve the killing capabilities of machinery; the engine driver, by contrast, was in immediate danger - should he refuse to drive his engine, he would be killed and someone else would be dragged in to do the driving instead.

His refusal would prevent nothing, whereas the report writer could have deliberately made recommendations that caused the machines to malfunction, or not to work at all. But actually, as I write, I begin to have my doubts about whether even the report writer had a real choice. Almost everyone had become trapped within a depraved structure of violence. It was the architects who created that structure - those who, using violent intimidation, imposed violent methods on an entire nation - who were to blame. The report writer may have been one of these willing architects, or merely a frightened cog inside the death-dealing machine.

And so the only conclusion I can reach with certainty is that it is absolutely vital to ensure that you never ever allow your country's government to fall into the hands of people who consider violence a useful tool - or, worse still, actually get satisfaction or pleasure from violence.

But, as the lesson of history seems to be that that is virtually impossible, an alternative presents itself - one that has never until now been available. If humanity is not capable of ruling itself over a sustained period without resorting to violence - and this does seem to be clear from everything that has occurred in our history thus far - then perhaps what we need to do is put non-humans in charge. If we can manage to create a generation of robots that are programmed to have as their priority never ever allowing any kind of violence, then surely they will provide us with an ideal government. At a time when politicians seem to be held in lower esteem than ever before, could my gentle robots possibly make things worse?

But then I remember that other great phenomenon so unavoidable in human affairs. It is known as the law of unintended consequences. I withdraw all further proposals. While personally I think this 1750 French mannequin might make a fine President of the United States, I cannot guarantee it. I suppose it depends on what the alternatives turn out to be.


  1. Have you ever seen a 1970s film called The Forbin Project? It's all about a supercomputer that is programmed to protect humanity and reaches similar conclusions to your post.

    1. No, but I've just found the whole thing on YouTube so that's this evening's diversion organised.