Category Archives: Bots


Ed. Note: This post by Benjamin Vanlalvena is a part of the TLF Editorial Board Test 2016.          

Source: xkcd, “The Three Laws of Robotics”

Liability in law arises for persons who are considered rational and have control over their actions. Technology is advancing at a rapid pace; machines have taken over many jobs requiring manual labour. Some argue that this is beneficial, as it frees humans as a race to focus on other activities and to specialise. However, at the rate at which things are developing, one wonders what kind of activity would be left for humans. We already have a ‘robot lawyer’ hired by a law firm and a robot which helps people with their traffic tickets and has already successfully challenged 160,000 of them; there are robots writing stories for news agencies; one wrote a movie, another drew art. Robots have already defeated us at chess and Go. Though they might not yet be completely ‘intelligent’, there is no doubt that someday they could catch up to us.

However, does such a fear of robots ‘taking over our jobs’ make us Luddites? As robots become more advanced and autonomous, the chain of causality becomes more complex. This brings us to the question of who becomes liable when a robot commits a crime, or, more crucially, whether a robot can commit a crime at all, or whether it is merely following orders, or whether its action is simply a malfunction. Companies are considered non-human legal entities which can be made liable for their offences through fines or the revocation of licences. Could we take action in a similar direction for robots?

Ethics in and of itself is a widely debated philosophical subject, as are the concepts of personhood and consciousness. Bringing in a third factor, robots, only deepens the difficulty: whether robots, as ‘beings’, have the potential to possess ‘ethics’, and whether ‘artificial intelligence’ could be termed consciousness, is a legal quagmire. When the action of a robot causes the death of a person or an accident, the question arises: who should be liable, the manufacturer, the owner or the user?

As mentioned earlier, the idea of being liable for an action arises from the fact that the actor is considered autonomous. For self-driving cars, therefore, the trolley problem becomes relevant, and the question of liability when a driverless car crashes is pertinent.

Ethics, however, is not limited to drivers, and robots are not limited to such a function. There is a plethora of situations we must consider. If a robot is to be truly autonomous and yet follow Asimov’s laws, what happens when it receives contradictory orders? How should a robot react if its owner, who is in great pain and has no prospect of living, requests the robot to kill her? If a general fighting in a war knows that, were he captured, he would be tortured and forced to spill secrets, and he requests a robot to kill him, should it? Who would decide what is ethical for robots used in war or war-like situations?

The question therefore arises: when our idea of what is ‘ethical’ or ‘moral’ itself differs among people, can we enforce such an idea on robots? Before we ask if we can trust robots with making moral decisions, can we trust humankind to make the same decisions?

If we make robots liable for their actions, do they deserve any rights? It would not be a first to give rights to non-humans; animals, for example, have a number of people advocating for their rights. Questions are aplenty: in a trolley problem, if one had to choose between a human being and five robots which could, through their research, cure cancer or some other illness, which should be destroyed? What if the human being were the President of a country?

As time passes, AI will only develop further, and we will eventually have autonomous robots which have learnt to say no. The question of who should teach them when to say no also arises. What is morality but programming oneself, or being programmed, subconsciously or otherwise, to behave in a particular manner in a particular circumstance? How different, then, is teaching a child what is moral from programming a robot to act in a particular manner in a particular circumstance? Is that not what is ‘right’ for it?

As we come to have more and more humanoid robots, and start to treat them like humans and have relationships with them, questions of how they can be used will eventually arise. How would we view a relationship between a robot and a human? (The movie ‘Her’ comes to mind.) What about robots used for sex? What if such a robot looks like a child? An animal? Does it matter only if they have a ‘conscience’?

The ethics of robotics is difficult to address, and before we are overwhelmed by the advancement of technology, we must address these concerns.




For further information:
Morals and the Machine, The Economist
Humans Need Not Apply
Should We Give Robots Rights?
Anthropomorphism: Opportunities and Challenges in Human-Robot Interaction
What is a Human? – Toward Psychological Benchmarks in the Field of Human-Robot Interaction

The Equations of Bots and the Law, Part I: Crimes and Torts


One of the most interesting news items to come through the interwebs recently was the ‘seizure’ of a certain ‘art experiment’ in Switzerland. The bot, sadly unimaginatively named Random Darknet Shopper, lived up to its name by buying items randomly from Darknet marketplaces (with Bitcoins, interestingly) and shipping them to a gallery in Switzerland. The bot came under the scanner of the police after it bought some ecstasy pills and a counterfeit passport.

While the cops in this case have, in good humour, not filed any charges, this does raise some interesting questions. Specifically, as computers and other devices get smarter by the day and artificial intelligence seems a real possibility (and concern), when a bot commits a crime, like buying an illegal object, who is liable – the bot or the programmer? To take this a step further, if a bot creates some content, who owns the copyright to that content – the bot, or its programmer?

For instance, one of the most fascinating stories to come up recently was that of David Slater, an award-winning British photographer, who left his camera lying around; it was picked up by a macaque monkey, which took a photo of itself. In this case, the general consensus would seem to be that the copyright belongs to no one, as you kinda-sorta really need to be a human being to own a copyright – at least, as of right now.

But changing that story a little bit to fit our context, what if, rather than simply leaving the camera lying around, the photographer had set it up exactly the way he wanted to and pre-programmed it to take photographs in certain situations?
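To make the hypothetical concrete, here is a minimal sketch of what such a pre-programmed camera might look like. The read_motion_sensor and capture_photo functions below are hypothetical stand-ins for whatever hardware interface a real camera would expose; the rule itself is purely illustrative.

```python
import datetime

# Hypothetical stand-ins for the camera's hardware interface; a real setup
# would talk to an actual motion sensor and shutter, not these stubs.
def read_motion_sensor():
    return True  # pretend motion was detected

def capture_photo():
    print("photo captured at", datetime.datetime.now())

def run_once():
    # The photographer fixes the rule in advance (shoot only within a chosen
    # time window, and only when something moves); the camera then decides on
    # its own, within that rule, when a photograph is actually taken.
    now = datetime.datetime.now()
    in_chosen_window = 17 <= now.hour < 19
    if in_chosen_window and read_motion_sensor():
        capture_photo()

if __name__ == "__main__":
    run_once()
```

The photographer's creative choices live entirely in the rule; every individual shot is taken by the machine.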

More realistic, and contentious, examples of this are Twitter bots. These are programs created to tweet certain text when certain requirements are met. The text of the tweet and the requirements are set by the programmer himself or herself. In some cases, the text and the requirements can be extremely broad, resulting in content that perhaps even the programmer did not expect, which is published directly as a tweet. In that case, who owns the copyright to the content? And if the content violates Twitter’s policies, or even national legislation, who would be liable?
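As a rough illustration of that mechanism, here is a minimal sketch of such a rule-driven bot. The trigger, the text templates and the post_tweet stub are all illustrative assumptions, standing in for whatever real client library or API a programmer would actually use.

```python
import random

# Hypothetical stand-in for a real Twitter client call; NOT an actual API function.
def post_tweet(text):
    print(f"[tweet] {text}")

# The programmer fixes the 'requirement' (the trigger) and the text rules in
# advance, but broad rules combined at random can produce tweets the
# programmer never saw or reviewed before publication.
SUBJECTS = ["the government", "my neighbour", "this company"]
VERBS = ["ignores", "celebrates", "hides"]
OBJECTS = ["the truth", "its own rules", "the evidence"]

def on_trigger():
    # Requirement met (say, a timer fires or a keyword appears in a feed):
    # assemble text automatically and publish it with no human in the loop.
    text = f"{random.choice(SUBJECTS)} {random.choice(VERBS)} {random.choice(OBJECTS)}"
    post_tweet(text)

if __name__ == "__main__":
    on_trigger()
```

The point is simply that the published sentence is composed by the program, not the programmer, which is exactly what makes the authorship and liability questions awkward.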

While these may seem like fantastical concerns, they are actually extremely relevant for the immediate future. Soon enough, we will have self-driving cars on the road, and Microsoft’s campus has been guarded by Knightscope’s K5 ever since last year! And if Russia has its way, we will soon have a situation where a country’s army is hugely supplemented by autonomous fighting robots.

While all of these issues are individual issues within separate fields of law, I will be addressing a few of them within the Indian context here. For instance, let us try and see how Indian criminal law or tort law would function in this scenario. To put the situation in context, imagine that the ‘bot’ is one of the robots currently patrolling the Microsoft campus, the K5s. These K5s are currently tooled only to surveil, assess and report suspicious activities, though they might soon be able to use Tasers.

In a situation where the programming of a bot results in unintended consequences, even the creator/programmer cannot be said to have the intent of committing the crime in question. The question I am considering here is who, if anyone, would be liable in such a scenario, and to what extent.

The first point that should be mentioned here is that under the Indian Penal Code (‘IPC’), the ‘person’ being accused of a crime mandatorily needs to be a human being, with an exception being made for any Company, Association or body of persons under Section 11. Thus, unless a further exception is made for bots, they cannot be covered under the IPC. Furthermore, the two main requirements for most offences are Actus Reus, the act, and Mens Rea, the intent. Since we are yet to have a functional AI capable of clearing the Turing Test, no bot will be able to meet the Mens Rea requirement, let alone the Microsoft bots. The exception here being, of course, the categories of offences that do not require Mens Rea.

So then, there seems to be no recourse in criminal law for bots on a crime spree. But how about tort law?

Before that question can be answered, the level of liability for the harm done by a bot would need to be confirmed by law. Under an absolute or strict liability regime, the creator/programmer would necessarily end up being liable for the damage caused by the bot. But things get a bit murkier when we consider the questions of negligence.

For negligence, we have the standard of the most popular man in law – the reasonable man. The creator will be liable for the damage caused by his bot under the tort of negligence if the actions that led to the damage were reasonably foreseeable and if reasonable precautions were not taken to prevent the same, both of which are heavily fact-based determinations.

Going back to the example of the Microsoft robots, the bots are currently not allowed to do anything but watch and report. But if (or when) they are truly given the ability to use Taser guns (or even let’s say pepper spray or tear gas), and they end up harming the wrong person, would Microsoft or Knightscope be liable?

The determination of the above would be based on a test of the ‘foreseeability’ of the bots’ actions. That would, necessarily, involve a thorough examination of whether the bot was functioning within its set parameters. If yes, the question would be what exactly these parameters were and how reasonable they were, and if not, whether such a malfunction was reasonably foreseeable.
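Reduced to its branching structure, the test might be laid out roughly as follows. The boolean inputs stand for fact-specific findings a court would make, so this is only a way of visualising the reasoning, not a suggestion that liability could actually be computed.

```python
# Rough sketch of the branching structure of the negligence analysis above.
def creator_liable(within_parameters: bool,
                   parameters_reasonable: bool,
                   malfunction_foreseeable: bool) -> bool:
    if within_parameters:
        # The bot did what it was set up to do: liability turns on whether the
        # parameters themselves were reasonable, i.e. whether the harm was
        # foreseeable and reasonable precautions were taken.
        return not parameters_reasonable
    # The bot malfunctioned: liability turns on whether that malfunction was
    # itself reasonably foreseeable.
    return malfunction_foreseeable
```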

If the answer to the first test is yes, the issue would come down to whether the parameters of the bot’s functioning satisfy the judge(s) in question. If they do, then it could very well be argued that the bot’s actions were not reasonably foreseeable and that reasonable precautions were taken, and the creator/programmer would not be liable. If they do not, then the creator would necessarily be liable.

If the question is answered with a no, then the liability would depend on whether the malfunction was foreseeable. While this is a heavily fact-based question, in my opinion, this would perhaps be an easier test to satisfy – the functioning or malfunctioning of programs is a very unpredictable science. But a consequence of such defences being taken would quite probably be that a higher standard of liability would be imposed on bots in such cases, which would be quite problematic for the bot industries.

It may perhaps avert the advent of Skynet though, so that’s a good thing!

The autonomous photographer point for this post was inspired by the discussion in the CopyrightX class.