
Law Enforcement v. End-to-End Encryption

In a post-Snowden world, there is greater awareness of, and interest in, the right to privacy in digital communications, and in knowing when the government can snoop in on personal conversations. A majority of communications today are digital and involve two crucial processes: encryption and decryption. Encryption (the conversion of information into a code) happens when a message or call is initiated; decryption (the conversion of that code back into useful information) happens when the message or call reaches the recipient. There are multiple nuances to this process, both technological and legal.

For quite a while now, WhatsApp chats have displayed the message: "Messages and chats are now protected with end-to-end encryption." End-to-end encryption, or E2EE (first popularised by the program Pretty Good Privacy, created by Phil Zimmermann in 1991), is a form of encryption that makes it difficult, if not impossible, to intercept a private conversation. Traditionally, there are three points at which a conversation can be intercepted: first, at the sender's device before encryption; second, while the encoded information is in transmission; and third, at the recipient's device after decryption.

The two ends, i.e. the sender and the recipient, remain vulnerable to unwanted physical access or hacking, but it is at the second point that the majority of snooping takes place. It is here that E2EE becomes useful, allowing tech companies to sidestep interception orders and, by extension, protect user data. E2EE, in the simplest of terms, means that the two people communicating are the only ones who hold the keys needed to decrypt each other's messages; anyone else who intercepts the data sees nothing but unintelligible code. Most communication apps and telecommunication providers keep decryption keys on their own servers, which grants them the ability to see or hear any conversation that passes through; E2EE removes this vulnerability by giving the keys to the two individuals rather than the service provider. Imagine the system as a letter-box: anyone can put in a message and lock it (the public key), but only the intended recipient has the key to unlock it (the private key).
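The letter-box model can be sketched in a few lines of Python using the PyNaCl library. This is a toy illustration of the public-key idea only, not WhatsApp's actual protocol (which is built on the more elaborate Signal protocol); the message and key names are invented for the example.

```python
from nacl.public import PrivateKey, Box

# Each party generates a key pair; private keys never leave their device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice locks the letter-box using Bob's public key (and her own private key).
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"meet at noon")

# A server relaying `ciphertext` sees only unintelligible bytes.
# Only Bob, holding his private key, can unlock the box.
recipient_box = Box(bob_key, alice_key.public_key)
assert recipient_box.decrypt(ciphertext) == b"meet at noon"
```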

This effective bypassing of service providers has both pros and cons. On one hand, it allows greater freedom to express opinions and beliefs without fear of sanction; on the other, it hampers intelligence activities vital to national security. Governments, in ensuring the safety of their citizens, conduct covert operations such as surveillance, which enable them to intercept vital communications between suspects and potentially stop terrorist threats. The importance of this can be gauged from the fact that the recent attack in London involved encrypted communications between the terrorists, and that ISIS has issued instructions to its followers on how to plan attacks through encrypted apps.

Privacy, however, is not the only lens through which encryption can be viewed in a global setting. The promotion and use of E2EE is also seen as a human rights issue, as it furthers individual privacy and freedom of expression, two rights enshrined in the International Covenant on Civil and Political Rights (ICCPR). Yet UN reports such as "The Right to Privacy in the Digital Age" and "The Promotion and Protection of the Right to Freedom of Opinion and Expression" expound the idea that judicially ordered decryption is not violative of human rights, and lay down a three-part test for when a government may restrict encryption.

There is an intense debate about curbing the power of law enforcement authorities to gather information from service providers through court orders. The debate gained public attention after an incident in Brazil, where Facebook had its assets frozen for non-compliance with a court order to hand over the WhatsApp conversations of a bank-robbery gang; information Facebook could not have provided even if it wanted to, as it had no means of doing so after enabling E2EE. This incident, coupled with Apple's refusal of the FBI's demand to decrypt the San Bernardino shooter's iPhone and install a backdoor in its operating system for law enforcement use, has prompted the UK government to take the issue one step further by enacting new legislation for surveillance through equipment interference.

The intelligence-gathering aspect of E2EE is marred by internal conflict within the state itself: the state needs strong encryption tools to secure its own data, yet resents them because they make public surveillance harder. Hence, while promoting stronger encryption programs for state use, many governments limit their citizens' ability to use the same tools. Certain states, like Germany, instead encourage public use of E2EE to blunt the covert intelligence-gathering abilities of the Five Eyes countries.

Another facet of the issue is commercial and export restrictions. Profit-driven tech companies promote E2EE to boost sales (more popularity, more sales) and oppose state-imposed rules under which they could import or build only applications that allow third-party access. And since every state strives for stronger encryption tools to protect its own data and counter emerging security threats, states also regulate the sale of such technology to certain other states, in pursuit of national security and foreign policy goals.

A hypothetical solution to this conflict between law enforcement and national security on the one hand and privacy on the other could be an internal system within service providers such as WhatsApp. The system, once established, would compare every number that tries to send a message against a blacklist (numbers law enforcement wants to track, with judicial approval). When a blacklisted number tries to send a message, the server would disable E2EE for that number from that point onwards, and the collected information would be stored in a separate database accessible only to the company's department handling judicial obligations.
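What follows is a minimal sketch of that proposed check, with invented numbers, function names and storage; it illustrates the idea above and is not any provider's actual architecture.

```python
# Numbers under judicially approved surveillance (hypothetical).
BLACKLIST = {"+10000000001", "+10000000002"}

# Separate store, accessible only to the legal-compliance department.
lawful_intercept_db = []

def deliver_with_e2ee(sender, recipient, plaintext):
    print(f"{sender} -> {recipient}: <ciphertext only>")

def deliver_without_e2ee(sender, recipient, plaintext):
    print(f"{sender} -> {recipient}: {plaintext} (copy retained)")

def route_message(sender: str, recipient: str, plaintext: str) -> None:
    if sender in BLACKLIST:
        # E2EE is switched off for this number from this point onwards:
        # the server retains a readable copy before forwarding.
        lawful_intercept_db.append((sender, recipient, plaintext))
        deliver_without_e2ee(sender, recipient, plaintext)
    else:
        # Normal path: the server never sees anything but ciphertext.
        deliver_with_e2ee(sender, recipient, plaintext)

route_message("+10000000001", "+19998887777", "call off the plan")
```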

Technology has benefited us, and will continue to benefit us, in countless ways, but its unaccountable use is also capable of great harm. Prioritising security over privacy may yet produce a paradigm shift against rigid privacy laws, as seen prima facie among Florida lawmakers after the Orlando attack. It is law and technology working together that will deliver swifter justice and strike a balance between privacy and public safety. The adoption of a system such as the one suggested above might be a first step towards that balance.

Reconstructing a Crime Scene: Virtual Reality in Courtrooms

Virtual reality is the latest buzz in the technological sphere, especially with the arrival of VR headsets from giants like Facebook (Oculus Rift), Sony (PlayStation VR) and HTC (Vive). It is a relatively old concept (the Aspen Movie Map, created at MIT in 1978, was an early example), but with advances in contemporary technology, virtual reality has progressed by leaps and bounds, from a mere 3-D image to an immersive and interactive system. Apart from gaming and other entertainment, its use has now been proposed in a rather unexpected arena: judicial proceedings.

Before the technology is used, it is important to analyse the concerns it raises in a courtroom setting. The concerns are manifold: manipulation of facts regarding the actual scene of the crime or accident, delay in court proceedings, the cost of an expensive procedure, and the possibility of manipulating the trier of fact (judge or jury), i.e. inducing bias. While all of these are relevant questions about the use of virtual reality in courtrooms, each of them can be mitigated.

The problem most often raised with regard to VR, manipulation of facts, actually rests on a misconception of what VR is capable of. Virtual reality is a computer-generated, interactive 3-D simulation: a computer-generated environment that stimulates senses such as vision and hearing to make an artificial environment seem real. The virtual environment, in this case the scene of the crime or accident, is built by the computer from 2-D images of the real-world location, with minor adjustments by a technician to give the architectural model the same aesthetics as the real place. In the case of a shooting, the area of impact and the trajectory of the bullets, or in the case of a stabbing, the angle of the wound and the depth of weapon penetration, are all put into the simulation after gathering factual information from standard forensic tests. Hence, nothing in the simulation is the manifestation of a recollection from memory. The layout of the virtual crime scene can be checked against photographs of the real crime scene for inconsistencies, removing the question of manipulation of facts.

The issue of delay can also be dealt with: software like Crime Scene Virtual Tour and IC-CRIME can create a virtual simulation from 2-D images of the crime scene significantly faster. The cost concern is a pragmatic one given the novelty, and hence expense, of the technology; it can be argued that the wealthier party in a case might use its resources to deploy the technology while the other party cannot afford to fight fire with fire. But since the facts of the scene remain the same for both, it should be the state's prerogative to provide a single virtual simulation to be used by both parties. Moreover, as the technology improves, its cost will fall, making it more accessible. Countries like the UK, the USA and Switzerland have already granted funds for further research into courtroom use of this technology.

The concern about introducing bias can likewise be answered: far from inducing bias, virtual reality has been shown to reduce it in both juries and judges. To begin with, no trial is free of bias. Bias is an "inclination or prejudice for or against one person or group, especially in a way considered to be unfair." It comes in two types, explicit and implicit, and every human possesses both, formed through life experience. The right to a fair trial rests on the notion of a judge or jury being unbiased and hence capable of examining evidence objectively. To further this, a process called voir dire is used to eliminate prospective jurors who show signs of bias, i.e. explicit bias.

Since explicit bias manifests in our actions, it is comparatively easy to root out; the problem lies with implicit bias, which rests in the subconscious and is very hard to notice. An individual may or may not be aware of his or her own implicit biases. Studies have shown that the average American judge or juror forms an implicit association between a black person and a weapon, and between a white person and innocence. Contemporary studies have also shown that a black judge is more likely to give a harsher sentence to a white person than to a black person, and vice versa. From these results it can reasonably be said that the strong preference of white judges for white people and black judges for black people stems from a very fundamental emotion: empathy, the ability to feel from another's perspective.

Virtual reality comes into play here by providing the very platform to see from another person's perspective. Studies by Prof. Mel Slater of the University of Barcelona have shown that when people with implicit racial biases were placed in a virtual simulation and shown a mirror reflecting a body of a different race (a technique called body-swapping), they exhibited reduced implicit bias and greater empathy. Since bias and empathy cannot be rooted out of an individual, the next best thing is a diversity of biases, so that no party starts severely disadvantaged.

The role of empathy in a judicial decision is not quantifiable, but its presence is unmistakable even when not explicitly expressed. Judges hesitate to show empathy towards any party because of the stigma attached: empathy from a judge in a professional capacity is treated as taboo. The more practical reason is that any exhibition of empathy might invite strict scrutiny from the appellate courts and, with it, the possibility of a blot on their performance reviews.

But in reality, we need judges to be more empathetic, to understand why people do certain things and to assign sentences accordingly, since the purpose of law is not just retribution but also rehabilitation of the convict back into society. As much as we would like to think the law is absolute and clear, it is not always so, and there is much room for a judge's discretion.

Apart from the problems and solutions discussed above, the use of virtual reality in the courtroom has a few more advantages. Stanford studies have shown that virtual reality simulations can help test the reliability of a witness's testimony by replaying the simulation from the witness's perspective. For example, if a witness testifies to having seen a crime from somewhere far away, the simulation can verify whether the witness actually had a line of sight to the crime from where he said he was standing. VR can also preserve a crime scene in the virtual world when preserving it in the real world is not possible, such as a homicide on a busy street or an accident on a bridge.
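A toy sketch of such a line-of-sight check, reduced to 2-D geometry with invented coordinates; real reconstructions work in 3-D against the full scene model.

```python
# Can the witness see the crime, or does a wall block the view?

def ccw(a, b, c):
    """True if points a, b, c are in counter-clockwise order."""
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 crosses segment q1-q2."""
    return (ccw(p1, q1, q2) != ccw(p2, q1, q2)
            and ccw(p1, p2, q1) != ccw(p1, p2, q2))

def has_line_of_sight(witness, crime, walls):
    """The witness can see the crime if no wall blocks the view."""
    return not any(segments_intersect(witness, crime, w1, w2)
                   for w1, w2 in walls)

# One wall between witness and crime: the testimony would not hold up.
walls = [((10, -5), (10, 5))]
print(has_line_of_sight((0, 0), (40, 0), walls))   # False: wall in the way
print(has_line_of_sight((0, 10), (40, 0), walls))  # True: view clears the wall
```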

It is evident from the studies conducted so far that the use of VR in modern-day courtrooms is quite plausible, given that it can tilt the human factor, i.e. implicit emotion, in a direction beneficial to all. Moreover, the practical concerns can be mitigated with further research, as the technology becomes more adaptive and hence more efficient for courtroom purposes. The pros of using this technology in courtrooms appear to outweigh any foreseeable impediments, and its research and adoption should therefore be promoted in courtrooms around the world for faster and more effective dispensation of justice.

Cashless Societies: Causes for Concern

[Infographic: the cashless society. Source: CNN]

The idea of a cashless society, i.e. "a civilization holding money, but without its most distinctive material representation – cash", is said to have originated in the late 1960s. The transition away from cash has been slow and steady, but it has accelerated rapidly over the last decade. As technology evolves, the shift from a cash-reliant to a cashless society is becoming more apparent; in urban society at least, "contactless payments" and "non-cash money" are not unheard of. It has been reported that the first debit card may have hit the market in the mid-1960s, and that by 1990 debit cards were used in about 300 million transactions, showing their rise in today's society. Before welcoming this change with open arms, we must take care not to ignore the security and privacy concerns, some of which are addressed in this article.

As we transition from a cash-reliant society to a [quasi] cashless one, there are fears about phones being hacked or stolen, and about reliance on devices that need batteries or internet access: what if either is unavailable? Conversely, though, our cash or wallets can equally be stolen, destroyed in a matter of seconds, or misplaced. The only difference is the medium of transaction.

Fear is a factor that inhibits change, and these fears are not unfounded. In 2014 it emerged that Target, the second-largest discount-store retailer in the United States, had been hacked, with up to 70 million customers hit by a data breach. Two years later, it was reported that roughly 3.2 million debit cards had been compromised in India, affecting several banks including SBI, ICICI and HDFC.

Nevertheless, as pointed out earlier, just as financial details held online can be stolen, so can paper money. With every transaction taking place online, fears of online fraud persist; however, as Guri Melby of the Liberal (Venstre) party noted, "The opportunity for crime and fraud does not depend on what type of payment methods we have in society." A mere shift in the means of trade will not eliminate such crime. I must clarify here that a cashless society can take various forms and degrees: debit and credit cards, NFC payments, digital currencies such as Bitcoin, or mobile money such as M-Pesa.

Bruce Schneier, cybersecurity expert and author of the bestseller Data and Goliath, notes that the importance of privacy lies in protection from the abuse of power. A hegemony of the authorities over our information, the details [and means] of our every transaction, hands absolute power to the authorities and thus far greater scope for abuse. Daniel Solove further notes that abuse of power by the government could lead to distortion of data; and even if we believe the government to be benevolent, we must remember that data breaches and hacks can (and do) occur.

Cash brings with it the double-edged sword of an anonymity that digital transactions do not provide. A completely cashless society might seem attractive in that every transaction can be traced, possibly reducing tax evasion and illicit or illegal activity; however, though crime might cease to exist in that form, it can always evolve and manifest itself in some other form online.

One concern raised in this regard is that the government could hold our transaction history indefinitely. That seems an innocent trade-off for the ease and convenience it provides. The issue that arises, however, as Domagoj Sajter notes, is that every single citizen becomes a potential criminal and terrorist in the eyes of the government, worthy of continuous and perpetual monitoring. Citizens become latent culprits whose guilt is implied, waiting only to be recorded and proven. The principle of innocent until proven guilty vanishes from the government's mind.

Furthermore, a completely cashless society places power with the government with no checks and balances upon it. Advanced technology could disable the funding of mass actions, extensive protests and large-scale civil disobedience, all of which are important features of democratic processes. It is pertinent to remember that Martin Luther King Jr. was tracked by the FBI. Making it easier for a government to curtail democratic processes leads to more autocratic governance.

Consider the following: an individual discovers that the government or one of its agencies is committing a crime against humanity, and she reports it to the public. Not only could her personal life be excavated for faults, but any monetary support she receives (in a cashless society) could be blocked by the government. Minor faults could be catalogued and propaganda spread to discredit her or divert the masses' attention. By controlling the economy, the government could twist the arm of the media and force it to ignore the issues she raises.

Michael Snyder also raises an important point about the erasure of autonomy in a cashless society: "Just imagine a world where you could not buy, sell, get a job or open a bank account without participating in 'the system'". It need not start with forcing people to opt in; simply attaching benefits can indirectly leave people no choice but to opt in. The Supreme Court of India has noted multiple times that the Aadhaar card (a biometric identity card) cannot be made compulsory. Yet Aadhaar has been made mandatory for availing EPF pension schemes and LPG benefits, and even for IIT JEE 2017. The Government of India is even mulling making the Aadhaar number mandatory for filing income tax (I-T) returns and linking all bank accounts to the unique identity number by the end of this financial year. The government is concurrently developing a common mobile app that shopkeepers and merchants can use to receive Aadhaar-enabled payments, bypassing credit and debit cards and moving further towards cashless transactions. The Aadhaar-enabled payment system (AEPS) is a biometric means of making payments using only the fingerprint linked to Aadhaar. These are all measures taken by the Indian government to brute-force the Indian economy into a cashless form.

Policing of citizens is not a purely hypothetical scenario; it has already happened. In 2010, Bank of America, VISA, MasterCard and PayPal imposed a blockade on WikiLeaks. In 2014, Eden Alexander started a crowdfunding campaign to cover her medical expenses, but the campaign was shut down and the payments frozen because she was a porn actress. We must also take into account the empowerment that cash provides: consider an individual hiding cash from an alcoholic or abusive spouse, or one who stuffs spare notes under her mattress for years because it gives her a sense of autonomy. We should take care that, in seeking development, we do not disempower the downtrodden, but lift them up with us.

The idea of a cashless society is no longer strange, with multiple corporations and even countries having expressed an interest in going cashless. Harvard economist and former IMF chief economist Kenneth Rogoff, in his "Case Against Cash", argues that a less-cash society [in contradistinction to a cash-less one] could reduce economic crime, and suggests in the same article that this could be achieved by gradually phasing out larger notes. A cashless or less-cash society is inevitable; in Sweden, cash transactions already make up barely 2% of the value of all payments. The question is thus not when [it will happen], but what safeguards we set up to protect our rights.

For further reading:

1] Melissa Farmer: Data Security In A Cashless Society

https://www.academia.edu/12799515/Data_Security_In_A_Cashless_Society

2] David Naylor, Matthew K. Mukerjee and Peter Steenkiste: Balancing Accountability and Privacy in the Network

https://www.cs.cmu.edu/~dnaylor/APIP.pdf

3] Who would actually benefit from a Cashless Society?

https://geopolitics.co/2016/01/30/who-would-benefit-from-a-cashless-society/

4] Anne Bouverot: Banking the unbanked: The mobile money revolution

http://edition.cnn.com/2014/11/06/opinion/banking-the-unbanked-mobile-money/index.html

5] Kenneth Rogoff: Costs and benefits to phasing out paper currency

http://scholar.harvard.edu/files/rogoff/files/c13431.pdf

Encryption and the extent of privacy

Ed. Note.: This post, by Benjamin Vanlalvena, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

A background of the issue

On December 2, 2015, 14 people were killed and 22 seriously injured in a terrorist attack at the Inland Regional Center in San Bernardino, California, consisting of a mass shooting and an attempted bombing. On February 9, 2016, the FBI announced that it was unable to unlock the iPhone used by one of the shooters, Farook. The FBI initially asked the NSA to break into the phone; when that did not resolve the issue, it asked Apple to create a version of the phone's operating system that would disable its security features.

Apple refused, which led the Department of Justice to apply to a United States magistrate judge, who issued a court order requiring Apple to create and provide the requested software; Apple was given until 26 February 2016 to respond. Apple announced its intention to oppose the order, and the Department of Justice in response filed a new application to compel compliance. It was revealed that methods of accessing the data had been discussed as early as January, but a mistake by the investigating agencies had ruled that method out. On March 28, the FBI announced that it had unlocked the phone, and withdrew the suit.

The dilemma

Privacy is a recognised fundamental right under Article 17 of the International Covenant on Civil and Political Rights and Article 12 of the Universal Declaration of Human Rights.

Encryption is a process by which a message or data is encoded so that its content is readable only by an authorised party, someone who holds the decryption key. Apple claims that it does not perform data extractions, as the "files to be extracted are protected by an encryption key that is tied to the user's passcode, which Apple does not possess." This, according to FBI Director James Comey, is a cause for concern, as it means that even with a court order the contents of a criminal's device would be inaccessible. A backdoor or "golden key" is slightly [though not totally] different from mass surveillance: agencies would gain the capability to access data stored on devices, as opposed to constantly monitoring data. The worry is no longer constant surveillance but the possibility of non-governmental actors gaining access through illegitimate means. The major contention is the assumption either that those who hold the key are "good people" with our interests at heart, or that the backdoor would be accessible only to the government. The Washington Post reported that the FBI, after failing to get Apple to comply, paid professional hackers to help crack the San Bernardino terrorist's phone. That is itself a cause for concern, as it is proof that vulnerabilities exist in phones that seem secure.
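The core worry can be illustrated with the cryptography library in Python: an escrowed "golden key" is nothing more than a second copy of the user's key, and whoever obtains that copy can decrypt everything. The escrow step below is hypothetical, for illustration only.

```python
# A minimal sketch of why a "golden key" is risky (illustrative, not any
# agency's actual scheme): a backdoor is just another copy of the key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the user's key
escrow_copy = key             # hypothetical escrowed copy held by a third party

token = Fernet(key).encrypt(b"private message")

# Whoever obtains the escrowed copy (agency, hacker, leaker) decrypts freely:
print(Fernet(escrow_copy).decrypt(token))  # b'private message'
```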

Encrypted data cannot be considered totally secure if some party has the means to bypass the encryption. The FBI's request is therefore problematic: a backdoor to the data is a vulnerability that affects all users. One should bear in mind that the trade in "zero-day vulnerabilities" is not unheard of, and the NSA or FBI holding tools that defeat the safeguards on our data is problematic, as such tools could end up in the hands of hackers or be leaked. One of the hardest-hitting points raised is national interest: that terrorists and paedophiles use encryption as a "safe space". However, according to former NSA chief Michael Hayden, creating a backdoor would be futile, as terrorists would simply build their own apps from open-source software; the backdoor would merely leave innocent people less secure and vulnerable to anyone exploiting it.

While the agencies' intentions may be good and in the public interest, one should keep in mind that providing a backdoor not only sets a dangerous precedent; the danger of such a key leaking and affecting the lives of ordinary people is enormous.

For more information, visit:

https://tcf.org/content/commentary/weve-apple-encryption-debate-nothing-new/

https://www.aclu.org/feature/community-control-over-police-surveillance

https://www.ctc.usma.edu/posts/how-terrorists-use-encryption

https://www.youtube.com/watch?v=peAkiNu8mHY

https://www.youtube.com/watch?v=DZz86r-AGjI

Privacy – A right to GO?

Ed. Note.: This post, by Ashwin Murthy, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

For centuries, rights have slowly come into existence and prominence, from the right to property to the right to vote and the right against exploitation. In an increasingly digital and interconnected world, the latest right to gain immense popularity is the right to privacy. This right entails the right to be let alone and, more importantly, the right to protect one's own information: informational privacy. Armed with the right to privacy, one can limit what information others may access and use, and thus what information corporations hold or what ends up on the Internet. This right comes into direct contact with applications downloaded onto phones, which often ask for permission to access various information on the phone, a device that already holds a great deal about its owner, including location, phone number, emails, chat conversations and photos. Applications ask, either explicitly or in their terms and conditions, for permission to access varying amounts of this information, sometimes in rather unexpected ways (such as a flashlight app asking for location permissions), and recently these apps have been singled out for their questionable privacy practices.

The latest app to come under fire for its privacy settings is Pokémon GO, an Android and iOS game that took the world by storm, downloaded over 100 million times by August. It is an augmented-reality game that lets people catch Pokémon in the real world through synchronous use of the phone's camera and location detection. With such popularity, the app was inevitably scrutinised for its privacy settings, especially when it appeared that Pokémon GO had been granted full access to the owner's account. Adam Reeve, a former software engineer at Tumblr, was the first to cause a commotion when he wrote a post detailing all the information the app supposedly had access to. Niantic, the creator of Pokémon GO, later stated that this was an error and that the app accessed only basic account information for logging in; it could not, in fact, access data in applications like Gmail or Calendar, as security developers later confirmed. While Niantic clarified and fixed the issue, many remained sceptical, losing trust not just in Pokémon GO but in apps in general.

This sceptical perspective, however, is exactly what is needed to prevent apps from unduly harvesting information, particularly apps from less scrupulous companies that deliberately seek far more information than such an app should need. Pokémon GO, with its shady privacy settings and the ensuing headlines of hysteria, was merely the catalyst for questioning why many apps require such permissions at all. While it turned out that Niantic did not have nearly as much access as people suddenly believed, Pokémon GO, by its very nature of using the phone's camera and location services, potentially has access to far more information than is desirable, to the point where it has been speculated that the app could be used for spying. While such speculation remains conspiracy theory, its very existence is important. Security and government agencies increasingly attempt to access and store the information that such apps and companies hold. An intelligence agency working in tandem with a Niantic could simply make Pokémon appear inside a house and thereby obtain an interior view through the owner's phone camera. Niantic's privacy policy, among other things, states that it may collect and store information about the owner's location: information that is almost too easy to use for less-than-noble purposes, and Pokémon GO is just one of many apps that can do the same.

Of course, many consumers may have no problem with applications accessing such information, but they must first be aware that the applications they download actually have that access. Consumers are often content to accept an app's terms and conditions without reading them; the scope for abuse of privacy is almost unparalleled. For there to be change, the sceptical atmosphere Pokémon GO accidentally created is needed, and not just for the short period it lasted in the wake of Adam Reeve's post. Currently there is almost zero awareness of the degree to which applications can access and store private information, especially when privacy policies and terms and conditions go unread or are incomprehensible. Publishers and creators of apps and other software must be made to disclose explicitly what access they have and what information they can see, store and use. A high level of consumer scrutiny would help ensure this, especially given the dearth of laws on this specific issue. India's Information Technology (Amendment) Act, 2008 added S.43A and S.72A, which deal, respectively, with the implementation of reasonable security practices for sensitive information and with punishment for wrongful loss or gain caused by disclosing personal information. These are, however, both inadequate and too broad to deal effectively with apps invading a person's privacy. Further, such laws would apply only to an app's use in India, so their creation and enforcement would remain localised, making it all the more necessary for people to be conscious themselves.

The privacy settings in Pokémon GO may have been a harmless error by a seemingly benevolent company, but most companies are not so harmless. Consumers must stay vigilant to keep their private lives and affairs from slipping away from them, a task for which Pokémon GO has, one hopes, somewhat equipped them.

For Further Reading:

  1. Data Protection in India: Overview – Stephen Mathias and Naqeeb Ahmed Kazia, Kochhar & Co
  2. Don’t believe the Pokémon GO Privacy Hype – Engadget
  3. Pokemon GO raises security concerns among Google users – Polygon

Regulations for Self-Driving Cars

Ed. Note.: This post, by Vishal Rakhecha, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

Self-driving cars have long been a thing of sci-fi, but with companies like Uber, Google, Tesla, Mercedes, Audi and many more conducting research in the field, they no longer seem so unrealistic. Self-driving cars are vehicles that do not require human supervision, with varying degrees of autonomy. Such technology is already present, to a limited extent, in the form of cruise control, parking assist and the like. This technology will inevitably require a sound system of rules and regulations, capable among other things of setting standards for manufacturers, securing physical safety and protecting the privacy of the end user. At present, the Motor Vehicles Act, 1988 and the Central Motor Vehicles Rules, 1989 are the only rules governing automobiles in India, and they are inadequate when applied to autonomous cars. This article deals with the changes in law that may be required to meet the challenges this new technology presents, modifications essential to protecting all stakeholders when these contraptions do arrive on Indian streets. It will deal with the regulation of self-driving cars of levels 3 and 4.

The National Highway Traffic Safety Administration in the USA has segmented autonomous cars into five levels; for the purposes of this article we need to understand levels 3 and 4. At level 3, the car is highly automated and does not require the driver to monitor the roadway constantly during the trip, though there are brief periods when driver control is necessary. Level 4 cars are completely autonomous and capable of performing all safety-critical functions; the user is only required to enter the destination and navigation details.

There must be specifications and features, based on industry best practice, that each car must satisfy to get clearance from government authorities. Beyond the basic features required for autonomy, each car must have a steering wheel, pedals and an overriding mechanism that hands control to the human operator at any time. Certification tests should cover the capacity to sense obstructions ahead; to interpret and adhere to traffic signs and to understand signals, mechanical or by hand, given by other drivers on the road; to follow the operator's instructions; to stop suddenly and accelerate; to change lanes; and to identify smaller objects such as children, cycles and pedestrians.

As mentioned earlier, both the CMVR and the MVA contain provisions only for vehicles requiring constant human supervision, and their placement of liability in case of mishap relates only to such vehicles. Therefore, to give these cars the legal backing to operate autonomously on the street, it is important to bring them within the ambit of the definition of "driver". A possible version could be: "a machine capable of manoeuvring itself and which has passed a driving test."

The Tesla Model S has an autopilot mode that sends back information about the places the car travels, creating an ever-growing map of road types, traffic conditions and other pertinent data. The same will be true of any autonomous car if it is to become a viable means of transport. This large-scale collection of data, promising as it is for improving how autonomous cars understand and adapt to their surroundings, raises concerns about consumer privacy. The information collected will inevitably include the consumer's personal details, which is problematic because the company can use it for purposes not obviously in the end user's interest, such as advertising. The government must make strict provisions requiring the consumer's informed consent for every way the company uses the data. The data must also be well protected from cyber-attacks and hacks; manufacturers should be required to maintain robust systems to secure the information collected and to conduct periodic tests of those systems.

When accidents happen in ordinary circumstances, the driver is in most cases held liable, but with the advent of autonomous cars, fixing liability becomes difficult. To identify the events immediately preceding an accident, German lawmakers have made black boxes mandatory in all autonomous cars; a similar step can be taken here too. The black box would record when the driver took control of the car (if he did so) and what malfunction could have led to the mishap. Liability would rest with the manufacturer unless the consumer had added some new feature of his own volition.
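As a rough illustration, a black box of this kind is essentially an append-only event log. The sketch below is hypothetical, with invented field names and events rather than any regulator's prescribed format.

```python
# Hypothetical sketch of a black-box event recorder for an autonomous car.
import json
import time

class BlackBox:
    """Append-only recorder of safety-relevant driving events."""

    def __init__(self, path="blackbox.log"):
        self.path = path

    def record(self, event: str, **details) -> None:
        entry = {"t": time.time(), "event": event, **details}
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")  # one line per event, never edited

box = BlackBox()
box.record("autonomy_engaged", mode="level_3")
box.record("driver_takeover", reason="lane_sensor_fault")  # who had control?
box.record("sudden_stop", speed_kmh=62)
```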

The positive possibilities this technology presents include reducing the number of accidents and providing an accessible means of transport for the old, the disabled and others. But to harness the full potential of these vehicles, we need a sound system of law that protects all stakeholders from the challenges their introduction could pose.

Refugee crisis in a digital age

Ed. Note.: This post, by Kaustub Bhati, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

How many people worldwide are currently displaced or stateless? How many are adrift in the Mediterranean Sea in search of a new home? And what helps them through this perilous journey and guides them to their destination? The answer to the first is a staggering 51 million, around 0.7% of the world's total population, of whom 16.7 million are refugees seeking asylum in various nations. This refugee crisis, the first of its kind in the digital age, an age in which an 8-year-old knows how to use a smartphone to navigate the world, is producing abundant challenges in the application of technology.

When an ordinary citizen, peacefully living at home, is forced to flee and leave everything behind to escape violence and persecution, what is the one piece of technology he can carry with him? Probably his smartphone. Smartphones, and the access to social media and applications (apps) they offer, act as lifelines for many asylum seekers, who rely on them for everything from plotting safe routes on Google Maps to learning the real costs of goods and services along the journey, translating the language of an unknown land and finding help in times of need. This reliance on smartphones has created unique opportunities for socially innovative technology to deliver assistance. The digital age offers administrative authorities novel ways to engage with the masses, using digital tools like apps and web portals to provide better access to public services. The most apt example is New York City, which as of 2013 had a population that was 37% foreign-born, with 60% either immigrants or children of immigrants. The crown jewel of its digital access initiatives, NYC 311, is an interactive online self-service system available in 170+ languages. [1]

Technology can also help refugees and asylum seekers develop new skills or learn those the labour market demands. Busuu, an electronic language-learning platform, offers free German and English courses for Syrian refugees. A Berlin-based NGO, Refugees Welcome, created a website that quickly became known as an "Airbnb for refugees", matching willing hosts with individuals needing shelter. Indeed, it was so popular that the site quickly crashed, and the NGO struggled to keep up with offers of help. In the German city of Dresden, local tech companies created the "Welcome to Dresden" app, providing information and advice for refugee newcomers in Arabic and other languages; Belgium soon followed suit.[2]

While apps play a vital role, Facebook is not far behind in giving refugees a novel way to interact with others and to share and learn from common experience. Facebook pages like "The Syrian House in Germany" command massive followings, aiming to guide asylum applications and to provide emotional security, easing integration into a wholly new and different society through its culture, heritage and language.

Education, key to quick social integration and a requirement of many jobs, is another issue addressed through digital access. Many refugees who had to abandon their education midway, and lack the transcripts required for admission to universities in their new countries, are left stranded. Innovations help here too, such as Kiron University, a crowdfunded project founded by Markus Kressler that provides world-class online education to refugees in fields such as business, engineering, computer science and architecture, without red tape or tuition fees. Kiron uses online courses put out by universities including the likes of Harvard, Yale, Cambridge and MIT.[3]

Apart from these astounding benefits, the digital era has its disadvantages. The use of digital devices leaves a traceable digital footprint, and with extremists and smugglers being so tech-savvy, that can easily become disastrous. In an age when people are persecuted for their religion, anonymity can be a good thing. Privacy concerns also arise in situations such as Lebanon's, where refugees who do not consent to iris scans do not qualify for UNHCR subsidies, and where biometric scans are used to issue prepaid cards that intensively track purchase history, available to government authorities at any time.[4]

But these cannot be considered problems of the digital age alone; they are problems of society itself. Violation of privacy through identification procedures, while a basic-rights concern for many, is sometimes a necessary step for a government, helping it organise its efforts and policies for overall public welfare and address the safety concerns that such a massive influx of refugees raises.

The purpose of this article was to open a discussion of how an ever-upgrading world is keeping up with its sociological dimensions, helping or dismantling them. In light of the examples discussed above, I would conclude that the striking benefits of the digital age amid the refugee crisis far outweigh the few disadvantages it poses, disadvantages that can also be attributed to the measures asylum-providing countries take to prevent terrorist attacks and financial meltdown.

[1] Divia Mattoo, Corinne Goldberg, Jillian Johnson, and Carolina Farias Riaño, “Immigrants in the Smart City: The Potential of City Digital Strategies to Facilitate Immigrant Integration”, http://www.migrationpolicy.org/article/immigrants-smart-city-potential-city-digital-strategies-facilitate-immigrant-integration

[2] Ibid

[3] https://kiron.ngo

[4] The Refugee Crisis: Where Aid, Fintech and Biometrics Intersect, http://blog.mondato.com/refugee-crisis-fintech

Machine Learning: An Explanation

Have you ever wondered how the spam in your mailbox is automatically detected? And what about speech recognition, or handwriting recognition? These are quite challenging problems, but luckily they have one thing in common: data, and a good deal of it.

Machine learning aims at creating systems that learn from data using various techniques from computer science and mathematics. To put it differently, machine learning is the study of computer algorithms that improve automatically through experience, i.e., collected data.
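To make that concrete, here is a toy spam detector using scikit-learn; the six training emails are invented for the sketch, and a real filter would learn from thousands of labelled messages.

```python
# A toy spam detector: a sketch of learning from data, not a production filter.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now",           # spam
    "cheap meds click here",          # spam
    "limited offer win cash",         # spam
    "meeting rescheduled to monday",  # ham
    "draft of the report attached",   # ham
    "lunch tomorrow near the office", # ham
]
labels = ["spam", "spam", "spam", "ham", "ham", "ham"]

# Turn each email into word counts, then fit a Naive Bayes classifier:
# the model improves automatically as more labelled examples are added.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["free cash prize", "report for monday's meeting"]))
# -> ['spam' 'ham']
```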