Category Archives: Right to Privacy

Cashless Societies: Causes for Concern



The idea of a cashless society, i.e., ‘a civilization holding money, but without its most distinctive material representation – cash’, is said to have originated in the late 1960s. The transition away from cash had been slow and steady, but over the last decade it has accelerated rapidly. As technology evolves, the shift from a cash-reliant to a cashless society is becoming more apparent; in urban society at least, ‘contactless payments’ and ‘non-cash money’ are not unheard of. The first debit card is reported to have hit the markets in the mid-1960s, and by 1990 debit cards were used in about 300 million transactions, showing their rise in today’s society. Before welcoming this change with open arms, we must take care not to ignore the security and privacy concerns, some of which are addressed in this article.

As we transition from a cash-reliant to a [quasi] cashless society, there are fears about phones being hacked or stolen, or about reliance on devices that require batteries or an internet connection: what if either is unavailable? Conversely, however, our cash or wallets can be stolen, destroyed in a matter of seconds, or misplaced. The only difference is the medium of transaction.

Fear is a factor which inhibits change, and these fears are usually not unfounded. In late 2013, Target, the second-largest discount store retailer in the United States, was hacked, and up to 70 million customers were hit by a data breach. In 2016, it was reported that roughly 3.2 million debit cards had been compromised in India, affecting several banks such as SBI, ICICI and HDFC.

Nevertheless, as pointed out earlier, just as financial details present online can be stolen, so can paper money. With each transaction taking place online, fears of online fraud persist; however, Guri Melby of Norway’s Liberal (Venstre) party noted, “The opportunity for crime and fraud does not depend on what type of payment methods we have in society.” A mere shift in the means of trade will not eliminate such crimes. It must be clarified here that a cashless society can take various forms and degrees: debit/credit cards, NFC payments, digital currencies such as Bitcoin, or mobile transactions such as M-Pesa.

Bruce Schneier, cyber-security expert and author of the bestseller Data and Goliath, notes that the importance of privacy lies in protection from abuse of power. A hegemony of the authorities over our information – the details [and means] of our every transaction – gives them absolute power and thus a much higher scope for abuse. Daniel Solove further notes that abuse of power by the government could lead to distortion of data; and even if we believe the government to be benevolent, we must consider that data breaches and hacks can (and do) occur.

Cash brings with it the double-edged sword of an anonymity that digital transactions do not provide. A completely cashless society might seem attractive in that every transaction can be traced, possibly reducing tax evasion and illicit or illegal activity; however, though crime might cease to exist in that form, it could always evolve and manifest itself in some other form online.

One concern raised in this regard is that the government could indefinitely hold our transaction history. This seems an innocent trade-off for the ease and convenience it provides. The issue, however, as Domagoj Sajter notes, is that every single citizen becomes a potential criminal and terrorist to the government, worthy of continuous and perpetual monitoring. Citizens become latent culprits whose guilt is implied, only waiting to be recorded and proven. The principle of innocent until proven guilty vanishes from the mind of the government.

Furthermore, a completely cashless society places power with the government with no checks and balances on it. Advanced technology could disable the funding of mass actions, extensive protests and large-scale civil disobedience, all of which are important features of democratic processes. It is pertinent to remember that Martin Luther King Jr. was tracked by the FBI. Giving the government more ease in curtailing democratic processes leads to more autocratic governance.

Consider the following: an individual finds out that the government or one of its agencies is committing a crime against humanity, and she reports it to the public. Not only could her personal life be excavated to find faults, but any monetary support she receives (in a cashless society) could be blocked by the government. Minor faults could be listed and propaganda spread to discredit her point or deflect the masses’ attention. By controlling the economy, the government could twist the arms of the media and force them to ignore the issues she raises.

Michael Snyder also raises an important point about the erasure of autonomy in a cashless society: “Just imagine a world where you could not buy, sell, get a job or open a bank account without participating in ‘the system’”. It need not start with forcing people to opt in; simply providing benefits in some form could indirectly leave people no choice but to opt in. The Supreme Court of India has noted multiple times that the Aadhaar card (a biometric identity card) cannot be made compulsory. Nevertheless, the Aadhaar card has been made mandatory to avail of EPF pension schemes and LPG benefits, and even for IIT JEE 2017. The Government of India is even mulling making the Aadhaar number mandatory for filing income tax returns and linking all bank accounts to the unique identity number by the end of this financial year. The government is concurrently developing a common mobile phone app that shopkeepers and merchants can use to receive Aadhaar-enabled payments, bypassing credit and debit cards and moving further towards cashless transactions. The Aadhaar-enabled payment system (AEPS) is a biometric way of making payments, using only the fingerprint linked to Aadhaar. These are all part of the measures taken by the Indian government to brute-force the Indian economy into a cashless form.

Policing of citizens is not a purely hypothetical scenario; it has already taken place. In 2010, a blockade was imposed on WikiLeaks by Bank of America, VISA, MasterCard and PayPal. In 2014, Eden Alexander started a crowdfunding campaign hoping to cover her medical expenses, but the campaign was shut down and the payments frozen, the cause being that she was a porn actress. We must also take into account the empowerment that cash provides: consider an individual hiding cash from an alcoholic or abusive spouse, or one who stuffs spare notes under her mattress for years because it gives her a sense of autonomy. We should take care that in seeking development we do not disempower the downtrodden, but lift them up with us.

The idea of a cashless society is no longer strange, with multiple corporations and even countries having expressed interest in going cashless. Kenneth Rogoff, Harvard economist and former chief economist of the IMF, argues in his Case Against Cash that a less-cash society [in contradistinction to a cash-less society] could reduce economic crime, and suggests in the same article that this could be executed by gradually phasing out larger notes. A cashless or less-cash society is inevitable; in Sweden, cash transactions already make up barely 2% of the value of all payments. The question is thus not when it will happen, but what safeguards we set up to protect our rights.

For further reading:

1] Melissa Farmer: Data Security In A Cashless Society

2] David Naylor, Matthew K. Mukerjee and Peter Steenkiste: Balancing Accountability and Privacy in the Network

3] Who would actually benefit from a Cashless Society?

4] Anne Bouverot: Banking the unbanked: The mobile money revolution

5] Kenneth Rogoff: Costs and benefits to phasing out paper currency


Ed. Note: This post by Sayan Bhattacharya is a part of the TLF Editorial Board Test 2016.

Google launched its first smartphone series, Pixel, earlier this month. The major shift from being a software producer to being both a hardware and software producer was a calculated change in policy, a direct dig at Apple’s hardware throne.

Apple stood as the undisputed king of design, with meticulously crafted software running on its devices, perfecting the user experience with the highest precision. Google, on the other hand, was the undisputed king of software and search engines, with a far wider range of software offerings than anyone else; even the most diehard iPhone fans spend most of their time on their devices using Google products. The changeover was thus a direct policy measure to cut into Apple’s base in hardware design by providing an alternative carrying Google’s exclusive product range.

On the surface, the launch seems to be all about the glittery rivalry between Google and Apple, but the media, customers and the makers of our privacy laws, entangled in this mesh of technology, often tend to ignore the bigger picture. One of the major components of Google’s cutting edge over iPhones is its artificial intelligence, which relies on active data mining. The presence or absence of privacy safeguards is what distinguishes the new features of Google from the existing features in Apple devices. The assumption on Google’s part is that its customers are willing to give up some amount of privacy in order to make life easier; the assumption on Apple’s part is that customers value their privacy above all.


The latest artificial intelligence in Pixel allows the software to read mails, text messages and calendars. When Google’s AI magically delivers the answer to the question you asked, that is data mining at work. Nor is it against the law: technically, on paper, you have given Google certain permissions (by not reading the fine print, or merely skimming it) which allow it to read through your chats, mails, location history, browsing and what not, so that it can give you those magical results. The argument here is that this is mostly not free consent, because people lack important information while making the choice.


The major technological shift in the new AI that Google has developed for Pixel is its ability to actively read and understand the context of an act or a conversation. For example, if you are on Google Allo or Google Home chatting about going to dinner with your family at a given time, you can expect a reminder about it, along with reviews of the restaurant and even a direct link to book an Uber ride. This is because the AI reads your conversations, figures out context and links you to your needs over the web.
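The context-reading described above can be sketched in miniature. The toy example below uses a hand-written regular expression, where a real assistant would use trained language models; the message, pattern and reminder format are all hypothetical:

```python
import re

# A toy "context reader": pull an activity, place and time out of a chat
# message. Real assistants infer this with ML models, not regexes.
message = "Let's do dinner at Olive Garden at 8pm on Friday"

pattern = re.compile(
    r"(?P<activity>dinner|lunch|coffee) at (?P<place>[A-Z][\w ]+?)"
    r" at (?P<time>\d{1,2}(?::\d{2})?\s?[ap]m)",
    re.IGNORECASE,
)

m = pattern.search(message)
if m:
    # Turn the extracted context into a reminder string.
    print(f"Reminder: {m.group('activity')} at {m.group('place')}, {m.group('time')}")
    # Reminder: dinner at Olive Garden, 8pm
```

The point is not the regex but the pipeline: the software must read the whole conversation to extract this context, which is exactly the privacy trade-off at issue.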

Adding to that, the very Google Allo introduced to challenge the authority of messaging applications like WhatsApp, Snapchat and Messenger is not end-to-end encrypted until you move into incognito mode. The incognito mode within Google Allo is only an optional feature, instead of being a default setting as in secure chat apps such as Apple’s iMessage and WhatsApp. In consequence, Allo’s privacy and security have been heavily criticised.

NSA whistleblower Edward Snowden criticised the Google Allo app on Twitter, saying Google’s decision to disable end-to-end encryption by default was dangerous. He asked people to avoid using the app, and his tweet was re-tweeted over 8,000 times.

The problem with this kind of feature is essentially that it prioritises data mining for ease of access over consumer privacy. The very fact that privacy of data is now an option instead of the norm is what calls into question the ethics of data mining, however much easier it makes one’s life. That a third party can store your conversations, read them by actively understanding their context, and apply them to aid future actions on your device is astounding.


In another instance, if you back up your photos to Google Photos, the Google Assistant can recognise what is in a photo using computer vision, understanding when it was taken and who is in it. Thus Google’s AI goes to the extent of not just mining your data but also linking the data excavated to that of other users, excavated through its software. The ultimate goal is to link the entirety of the data collected into a kind of network which is omnipresent but cannot be seen. The question arises not from the networking itself but from the means of achieving it: the data is excavated without free consent and linked with external third-party data without prior permission.

Another huge concern surrounding this huge data store is government snooping through the data packet inspections that already exist in network connections. A switch to Google Pixel means a switch to almost completely internet-run software, which further increases the chances of a breach of privacy.


Google aims to make its artificial intelligence the next big thing after its dominance of search engines and software: it wants its customers to switch from a mobile-first world to an AI-first world. But the underlying assumption is that this can be done at the cost of user privacy.


Encryption and the extent of privacy

Ed. Note.: This post, by Benjamin Vanlalvena, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

A background of the issue

On December 2, 2015, 14 people were killed and 22 seriously injured in a terrorist attack at the Inland Regional Center in San Bernardino, California, which consisted of a mass shooting and an attempted bombing. On February 9, 2016, the FBI announced that it was unable to unlock the iPhone used by one of the shooters, Farook. The FBI initially asked the NSA to break into the iPhone; when that did not resolve the issue, it asked Apple to create a version of the phone’s operating system that would disable the security features on that phone.

Apple refused, which led the Department of Justice to apply to a United States magistrate judge, who issued a court order requiring Apple to create and provide the requested software; Apple was given until February 26, 2016 to respond. Apple announced its intention to oppose the order, and the Department of Justice in response filed a new application to compel Apple to comply. It was later revealed that methods to access the data had been discussed in January, but a mistake by the investigating agencies ruled that method out. On March 28, the FBI announced that it had unlocked the phone, and withdrew the suit.

The dilemma

Privacy is a recognised fundamental right under Article 17 of the International Covenant on Civil and Political Rights and Article 12 of the Universal Declaration of Human Rights.

Encryption is a process through which one encodes or secures a message or data so that its content can be read only by an authorized party or by someone with the decryption key. Apple claims that it does not perform data extractions, as the ‘files to be extracted are protected by an encryption key that is tied to the user’s passcode, which Apple does not possess.’ This, according to FBI Director James Comey, is a cause for concern, as it means that even with a court order the contents of a criminal’s device would not be accessible. A backdoor or ‘golden key’ is slightly [though not totally] different from mass surveillance: agencies would have the capability to access data stored on devices, as opposed to constantly monitoring that data. It is no longer a matter of constant surveillance but of the potential for non-governmental persons to gain access through illegitimate means. The major contention is the assumption either that those who have access to the key are ‘good people’ who have our interests in mind, or that the backdoor would only ever be accessible to the government. The Washington Post reported that the FBI, after failing to get Apple to comply, paid professional hackers to assist in cracking the San Bernardino terrorist’s phone. This itself is a cause for concern, as it is proof of vulnerabilities existing in phones which seem secure.
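To make the role of the key concrete, here is a toy symmetric cipher: a keystream is derived by hashing the key with a counter, and XORing it with the data both encrypts and decrypts. This is an illustration of the principle only, not a production cipher; real systems use vetted algorithms such as AES, and the key and message below are made up:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing the key with a counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is symmetric: applying it twice with the same keystream decrypts.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

message = b"meet me at noon"
key = b"user passcode-derived key"      # hypothetical key material

ciphertext = xor_cipher(message, key)
assert ciphertext != message            # unreadable without the key
assert xor_cipher(ciphertext, key) == message  # key holder recovers it
```

Without the key there is no shortcut to the plaintext, which is precisely why a vendor that does not hold the key cannot comply with an extraction order, and why any ‘golden key’ must weaken the scheme for everyone.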

Data that is encrypted cannot be considered totally secure if some party has a means to bypass the encryption. The FBI’s request is therefore problematic: it asks for a backdoor to the data, which would be a vulnerability affecting all users. One should bear in mind that the trade of such ‘zero-day vulnerabilities’ is not unheard of, and the NSA or FBI holding tools that defeat the encryption keeping our data secure is problematic, as such tools could end up in the hands of hackers or be leaked. One of the most hard-hitting points raised is the issue of national interest: that terrorists or paedophiles use encryption and that it is a “safe space” for them. However, according to former NSA chief Michael Hayden, creating a backdoor would be futile, as terrorists would simply build their own apps on open-source software; the presence of a backdoor would merely make innocent persons less secure and vulnerable to those who would take advantage of it.

While the intention of the agencies might be good, or in the interests of the public, one should keep in mind that once a backdoor is provided, not only does it set a dangerous precedent, but the danger of such a bypass leaking and affecting the lives of ordinary people is huge.



Ed. Note.: This post, by Sayan Bhattacharya, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

In a world of technology dominated by a power struggle over presence and absence in data circles, Reliance Jio has probably made the biggest tech news of the year with its revolutionary schemes. By adopting a loss-leader strategy of immediate loss and ultimate dominance, Reliance Jio has promised subscribers stellar features on shifting to its network: free voice calls, extremely cheap data packages, abolition of national roaming charges, and no extra rates on national holidays. This is set to significantly affect competition by taking India from data scarcity to data abundance.

Until this point, tech companies were hesitant to hand consumers too much data, believing it would act contrary to their business interests; intra-industry competition thus existed within set boundaries.

A long-standing argument has concerned data inequality, wherein people living in rural areas are greatly disadvantaged by the absence of accessible data, high tariffs and a lack of initiative. This marked shift will significantly affect this section of society: the arrival of Reliance Jio’s data-abundance schemes might trigger the kind of competition needed to make the internet more accessible and bridge the divide existing in the status quo.

The second shift, less talked about in the classy launch statements, is the treatment of the immense amount of data which will be at Jio’s disposal after such a move. The markets in the United States and Europe have been relatively normalised to the idea of data abundance for tech companies, in comparison to Indian markets. Those jurisdictions also have systems of checks and balances to control the ethics of data collection: specific privacy laws, special courts, and media and NGO sensitisation to the problems of data collection. Such specific laws and structures are almost non-existent, or minimally present, in India, which makes these problems harder to deal with.

This article seeks to examine the implications of such a move for the privacy of consumer data, regulatory mechanisms and the subsequent impact on the market.

The major transition when a user shifts from a conventional network to Reliance Jio is the shift from conventional calling to data calling, for which Reliance Jio uses a technology called VoLTE. The technology is being introduced in India for the first time, though it is already prevalent in European and US markets. These features are different from those on social media platforms like WhatsApp, which have their own privacy policies and safeguards against breach of privacy, such as end-to-end encryption.

Consequently, a huge amount of data will now be floating around in the form of data calls, and the concern is over a third party monitoring private calls. In the world of data, flows are monitored using a technique known as Deep Packet Inspection (DPI), a form of computer network packet filtering that examines the data part of a packet as it passes an inspection point, searching for protocol non-compliance, viruses, spam or intrusions. Apart from legitimate inspections, its critique comes in the form of third-party inspection of data, which can be grossly misused in terms of:-

  1. Data snooping and eavesdropping.

  2. Data mining – the ethics of digging up a user’s search history in order to use the data for the unfair advantage of parent companies. The problem with keeping such data history is that algorithms, instead of predicting future searches, tend to show the same results in order to orient users towards preferential data. This becomes extremely problematic with Reliance Jio, which aims at a closed ecosystem of its own applications, straining net neutrality laws – a point debated extensively around Facebook’s Free Basics, in which Reliance was a stakeholder.

  3. Internet censorship – government intervention to control the flow of data is another concern: silent monitoring of data to serve government propaganda, defining what is viewable and what is not.
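The packet-filtering step underlying all three concerns can be sketched as a toy signature matcher. Unlike an ordinary firewall, which looks only at headers, DPI scans the application-layer payload itself; the signature list below is entirely hypothetical, where real DPI engines use far richer rule sets:

```python
# Hypothetical byte signatures mapped to traffic labels.
SIGNATURES = {
    b"X-Spam": "spam",
    b"BitTorrent protocol": "p2p traffic",
    b"\x4d\x5a\x90": "executable payload",  # 'MZ' header of Windows binaries
}

def inspect_packet(payload: bytes) -> list[str]:
    # Scan the full payload (not just headers) for known patterns.
    return [label for sig, label in SIGNATURES.items() if sig in payload]

print(inspect_packet(b"\x13BitTorrent protocol ..."))  # ['p2p traffic']
print(inspect_packet(b"GET /index.html HTTP/1.1"))     # []
```

The same mechanism that flags spam or malware can, in other hands, classify and log every private conversation passing the inspection point, which is exactly the misuse the list above describes.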

Further, Reliance Jio incorporates an app ecosystem which comes as part and parcel of the package: a JioTV app, a JioCinema app, a JioMusic app, a personal digital wallet, JioMags and a newspaper application. The extensive push for a relatively closed ecosystem might lead to the exploitation of loopholes in Indian net neutrality rules, working to the disadvantage of third parties. Data mining techniques might even be used to identify customer patterns to suit the needs of the parent company, in the absence of the strict checks and balances present in countries which have already adopted this technology.

The Reliance Jio network has worked relatively well insofar as Reliance-to-Reliance calls are concerned, but significantly high numbers of call drops have been reported for calls to other operators. The promising features intended to incentivise a switch to the Jio network thus suffer a major roadblock in implementation.

In light of the arguments presented, the shift that has been triggered by Reliance Jio needs an effective system of checks and balances in terms of regulatory measures to ensure the following:-

  • Maintenance of principles of net-neutrality
  • Protection of private consumer data
  • Prevention of privacy breaches
  • Consumer protection in terms of reducing call drops to other operators
  • Prevention of unfair trade practices in terms of data mining to suit needs of parent company


Ed. Note.: This post, by Sayan Bhattacharya, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

Search engines, which are quintessential to our internet experience, are mechanisms that crawl and index data to provide us with a list of links most relevant to both our present and past searches. Figuratively, their functions range from directing users to seats in a movie hall to being the very seats in the movie hall.
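The indexing mechanism mentioned above can be illustrated with a toy inverted index, the core data structure behind search: each word maps to the set of documents containing it, and a query intersects those sets. The documents here are invented for the example:

```python
from collections import defaultdict

documents = {
    1: "privacy in a cashless society",
    2: "search engines index the web",
    3: "privacy and search engines",
}

# Build the inverted index: word -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

def search(query: str) -> set[int]:
    # Return documents containing every query term (boolean AND).
    results = [index.get(word, set()) for word in query.split()]
    return set.intersection(*results) if results else set()

print(search("privacy"))         # {1, 3}
print(search("search privacy"))  # {3}
```

Real engines add crawling, ranking and personalisation on top of this, and it is precisely those ranking choices, not the index itself, that raise the liability questions discussed below.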

Tonnes of data lie a click away thanks to these third parties. The question then concerns this huge quantity of data and the ethics of its presentation to users, lying at the mercy of private entities which enjoy an almost unquestionable monopoly in this regard. To what extent can they be held liable for the data they present? This article deals with the following legal issues surrounding search engine liability:-

  • Copyright infringement of individuals
  • Defamatory Content in search results
  • Autocomplete suggestions: Affecting freedom of speech, privacy and personality of individuals.


The problem with extending liability to search engines, on a principled level, is that they are third-party content providers, not the original publishers breaching copyright standards. A debate ensues over whether search engines are publishers or mere providers of links to publishers, since after the user’s initial search entry they filter relevant data from the resources already at hand. They therefore have something of a publishing character in determining what is relevant and what is not, however neutral the algorithms might be. But imposing liability for merely linking users to data, irrespective of its legality, is problematic.

Copyright laws have been trying to restore fairness to online searches, providing a checks-and-balances mechanism to curb plagiarised content and protect the right of the initial publisher to be the sole distributor of their work. They aim to balance a free information environment with protection of the copyright holder’s rights. Courts across Europe have held search engines liable for inducement of copyright infringement in several cases.

Recently, the European Parliament came up with its digital single market and copyright reforms, which require digital content providers who give access to large amounts of copyright-protected data to protect it using technology, and to inform copyright holders of the functioning of such a system. This law becomes problematic on two levels:-

  • A law requiring control of a ‘large amount of copyrighted data’ seems rather vague, since the reforms establish no applicable threshold for what constitutes a large amount of copyrighted data.
  • ‘Protection using technology’ presumably refers to filtering illegal content. This is problematic in that not all data breaching copyright standards can be detected, and the reforms do not apportion to search engines any standard of precautions to be adopted.


Courts have been more rational in apportioning liability to search engines for defamation than for copyright. The debate has majorly been over whether search engines are mediums or publishers. In 2009, Metropolitan International Schools Limited brought a defamation case against Designtechnica Corporation, Google UK Limited and Google Inc, which drew a distinction between search engines and other internet-based entities and set the precedent for future search engine cases.

The court held that the search engine operators exercised no control over Designtechnica’s actions, because a search merely yields a list of links determined relevant to the query. The technology ranks the pages in order of perceived relevance without any human intervention, excluding the factors of intent and knowledge; the results of any given query depend on successful crawling, indexing and ranking. The court held that a search engine is “a different kind of Internet intermediary”, one which cannot exercise complete control over the search terms and search results.


The most debated issue regarding search engine liability has come in the form of the autocompletion of searches, where the major question is whether it predicts searches or orients users towards specific data. First developed to help physically disabled users and slow typists increase their typing speed, the feature has now become ubiquitous and identified with search engines. The following are the issues surrounding it:-

  • Conveyance of misleading messages – for example, when you search for the name of a business enterprise you might be linked to keywords like “fraud” or “dishonest”, or the suggestions might reveal unwanted details about a person’s past. These suggestions might be contrary to what you were looking for, and might even cause a breach of privacy or discomfort.

  • Uncompetitive practices and unfair preference-based linking – how often have you searched for a specific link only to find that the first few results, or indeed the majority, point to another service that is dominant in the online market? An investigation by the Competition Commission of India has claimed that Google Inc allegedly “abused its dominance” in the Indian market by incorporating clauses in its agreements with users that restricted them from availing the services of third-party search engines. A similar probe by the European Competition Commission found that Google abused its dominant market position to show “systematic favourable treatment” to applications in its own ecosystem, such as Google Maps. The counter-argument is that Google searches work on neutral algorithms based on popularity and relevance; but research on the ‘snowball effect’ in modern search has shown that a suggested search indulges users’ curiosity and orients them towards searches that may in turn influence the algorithms.
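How suggestions like “fraud” come to sit next to a business name can be sketched with a toy prefix matcher over a log of past user queries. The query log below is invented, and real engines also rank suggestions by popularity, freshness and personal history rather than returning them alphabetically:

```python
import bisect

# Hypothetical log of past queries, kept sorted for prefix search.
logged_queries = sorted([
    "acme corp",
    "acme corp careers",
    "acme corp fraud",
    "acme corp reviews",
    "apple pie recipe",
])

def autocomplete(prefix: str, limit: int = 3) -> list[str]:
    # Binary-search to the first query with this prefix, then collect matches.
    start = bisect.bisect_left(logged_queries, prefix)
    out = []
    for q in logged_queries[start:]:
        if not q.startswith(prefix):
            break
        out.append(q)
        if len(out) == limit:
            break
    return out

print(autocomplete("acme corp"))
# ['acme corp', 'acme corp careers', 'acme corp fraud']
```

Because suggestions are drawn from what others have searched, a damaging association surfaces for every new user who types the name, and each click on it feeds the log: the snowball effect described above.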

The extension of liability in this regard, even though it seems perfectly legitimate, becomes problematic on a principled level. Search engine algorithms are ultimately codes written by human beings, and at the end of the day, like every business enterprise, search engines may have their own sets of priorities and preferences. Even if users expect Google to present results on the basis of supposedly ‘neutral’ algorithms, ‘Google has never given up its right as a speaker to select what information it presents and how it presents it’.


We have dealt with, and contrasted, the individual rights of search engines against societal rights, business ethics against freedom of expression, and the rights against the duties of search engines on specific issues that call for a checks-and-balances system. This article has argued that a balance must be struck between these criteria.

An important phenomenon of these search engines, noted at the very beginning of this article, is monopoly. The fact that most of us looking for data on the world wide web ultimately resort to Google for links, irrespective of copyright breaches, defamation and unfair trade practices, shows Google’s dominating power, which hardly changes despite unethical trade practices by it or through it.

The fact remains that if anyone can solve the existing illegal practices on the internet and restore fairness, it is these search engines, owing to their monopoly and extreme bargaining power over the content displayed. Is it then legally correct to extend liability to search engines for the misuse of data on their platforms, this being the only mode of control?

The Right to Be Forgotten – An Explanation

Ed. Note.: This post, by Ashwin Murthy, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

The right to be forgotten is the right of an individual to request search engines to take down certain results relating to the individual, such as links to personal information, if that information is inadequate, irrelevant or untrue. For example, if a person’s name is searched on Google and certain information appears relating to that person, the person can request Google to remove that information from the search results. This has its largest application in crime and non-consensual pornography (revenge porn, or the distribution of sexually explicit material depicting a person without their consent). If X committed a petty crime and a person searching X’s name finds this petty crime, it leads to an obvious negative impact on X, in terms of job prospects as well as general social stigmatisation. X can ask the providers of the search engine to remove this result, claiming his right to be forgotten. The right is not necessarily an absolute right – in its current stage of discussion it merely applies to information that is inadequate, irrelevant or untrue, and not any and all information relating to the person. Further, there lies a distinction between the right to privacy and the right to be forgotten – the right to privacy concerns information not available to the public, while the right to be forgotten concerns the removal of information already available publicly.

            Proponents of the right to be forgotten claim that it is a person’s right to have such outdated or immaterial information deleted from the Internet, or at least from the results of search engines. Photographs, comments, links shared – these are all things that people post in their youth (and sometimes at a not so young age) without a second thought. These people should have the right to delete such content from the Internet to protect their right to privacy and consequently their right to be forgotten, protecting them from unnecessary backlash at rather innocuous actions. For example, a Canadian doctor was banned from the United States when an internet search showed that he had experimented with LSD at one point in his life. With the right to be forgotten he can erase such pages from the results of the search engine. Victims of revenge and other involuntary pornography would have an easy mechanism to ensure that such material is removed from the internet, a task that is difficult to achieve without such a right.

            Critics, however, claim that this right to be forgotten is a substantial setback to the freedom of information and free speech. Any information spread on the Internet would have the potential to be taken down due to legitimate or seemingly legitimate claims of the right to be forgotten, regardless of the qualitative value of the information. Further, the right to be forgotten would impede a person’s right to know. The easiest way to discover the background of a person is to Google them. This is especially relevant when employing someone or entering into an agreement of trust. If a person is looking for a security guard and a Google search shows that the applicant for the job is or was a thief, then this information on the Internet is of great use to the person hiring him – information that would otherwise not be available to them. Removing this information denies the person their right to know and access it.
Also, implementation of such a right is technically difficult: a complex algorithm would have to be developed to correctly identify which sites and results should and should not be removed when the right is claimed, especially given the permanence of content on the Internet, where material is constantly reposted and reproduced. Locating every site in order to remove the content is technologically difficult.

            This right has its premier legal backing in the case of Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González, a decision by the Court of Justice of the European Union (CJEU). In the case, the Spanish citizen González wished to remove a Google search result of an auction notice of his repossessed house, a matter that had been fully resolved and was thus irrelevant. The Court held that the search engine (Google) must consider requests for removal of links and results appearing from a search of the requestor’s name, on the grounds of the search result being irrelevant, outdated or excessive. The Court thus clarified that while people do possess this right to be forgotten, it is not absolute and must be balanced against other fundamental rights, including the freedom of expression. The CJEU accordingly stated that assessments must be made on a case-by-case basis. This is in line with an EU Regulation, the General Data Protection Regulation (GDPR), in providing only a limited form of the right to be forgotten. Originally this only applied to European countries – Google delisted search results only from European domains. Thus if a European citizen requested removal of a result, it would be removed from all European domains but nowhere else. CNIL, France’s data protection regulator, went to the length of fining Google for not removing the requested search results from all domains of Google worldwide, not just the French domain. While Google is fighting this case in France’s highest court, this is a symbol of slow recognition of a far more expanded form of the right to be forgotten, applicable to search results worldwide.

            The right to be forgotten is not alien to India either – the first instance was a request in 2014 to the site to remove certain content, though the request was soon dropped. In 2016, a man approached the Delhi High Court requesting removal of his personal information, concerning a marital dispute, from Google search results. The Court recognized the claim and sent an inquiry to Google, to be replied to by September 19th. However, there is currently no legal framework in India for the right, nor does the landmark EU judgement apply in India.

            The right to be forgotten remains a nascent right, not yet fully developed or fleshed out. There are debates as to the pros and cons of such a right, and the extent to which it can and should be granted. However, its relevance in the technological and legal fields is clearly rising, and it will in all likelihood crystallise into a comprehensive right in the near future.

For further reading:

  1. The Audacious ‘Right to Be Forgotten’, Kovey Coles, CIS-India
  2. The Right to Be Forgotten, EPIC
  3. Debate: Should The U.S. Adopt The ‘Right To Be Forgotten’ Online? (audio), NPR

Privacy – A right to GO?

Ed. Note.: This post, by Ashwin Murthy, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

For centuries, rights have slowly come into existence and prominence, from the right to property to the right to vote and the right against exploitation. In the increasingly digital world of interconnection, the latest right to gain immense popularity is the right to privacy. This right entails the right to be let alone and, more importantly, the right to protect one’s own information – informational privacy. Armed with the right to privacy, one can limit what information others have access to and may use, and thus what information corporations might hold or what appears on the Internet. This right to privacy comes into direct contact with applications downloaded on phones, which often ask for permissions to various information on the phone – a device which already possesses a great deal of information about its owner, including the user’s location, phone number, emails, chat conversations and photos. Applications often ask, either explicitly or in their terms and conditions, for permission to access varying amounts of the information on the phone, sometimes in a rather unexpected fashion (such as a flashlight app asking for permission to access location), and more recently these apps have been singled out for their questionable privacy practices.

            The latest app to come under fire for its privacy settings is Pokémon GO, an Android and iOS game that took the world by storm, being downloaded over 100 million times by August. The game is an augmented reality game that allows people to catch Pokémon in the real world through synchronous use of the phone camera and location detection. With such popularity, the app was inevitably scrutinised for its privacy settings, especially when it appeared that Pokémon GO had been granted full access to the owner’s Google account. Adam Reeve, a former software engineer at Tumblr, was the first to cause a commotion when he wrote a post detailing all the information the app supposedly had access to. Niantic, the creators of Pokémon GO, later stated that this was an error and that the app only accessed basic account information for logging in, and in fact could not access information in applications like Gmail or Calendar – a clarification later confirmed by security researchers. While this was clarified and fixed by Niantic, many remained sceptical, losing trust not just in Pokémon GO, but in apps in general.

            This sceptical perspective is, however, exactly what is required to prevent apps from unduly gaining information, particularly those created by the more unsavoury companies which are less scrupulous about their privacy settings, to the point of intentionally trying to obtain far more information than would be expected of such an app. Pokémon GO, with its shady privacy settings and the ensuing headlines of hysteria, was merely the catalyst for this questioning of why such permissions are in fact required by many apps. While it turned out that Niantic did not have nearly as much access to information as people suddenly thought it did, Pokémon GO, by its very nature of using the camera and location services of the phone, potentially has access to far more information than would be desired, to the point where it has been speculated that the app could be used for spying purposes. While such speculations remain conspiracy theories, their very existence is important in itself. Security and governmental agencies are increasingly attempting to access and store the information that such apps and companies themselves have access to. An intelligence agency working in tandem with a company like Niantic could easily make a Pokémon appear in a house and thereby gain an interior view of it through the owner’s phone camera. Niantic’s privacy policy, among other things, states that it may collect and store information about the owner’s location – information that is almost too easy to use for less than noble purposes – and it is just one of many apps that can do the same.

            While of course many consumers may not have a problem with these applications having access to such information, they must first be aware that these applications actually do have access to all of this information when they are downloaded. Consumers are often content to merely accept the terms and conditions of an app without reading them. The scope for abuse of privacy is almost unparalleled. For there to be a change, the sceptical atmosphere that Pokémon GO accidentally created is needed, and not just for the short period of time that it existed in the wake of Adam Reeve’s post. Currently there is almost zero awareness of the degree to which applications can access and store private information, especially when the privacy policies and terms and conditions are not read or are incomprehensible. The publishers and creators of apps and other such software must be made to disclose explicitly what access they have and what information they can see, store and use. A high level of scrutiny from consumers would ensure this, especially given the dearth of laws on this specific issue. India has implemented the Information Technology (Amendment) Act, 2008, adding S.43A and S.72A, which deal respectively with the implementation of reasonable security practices for sensitive information and with punishment for wrongful loss or gain caused by disclosing personal information. These, however, are both inadequate and too broad to effectively deal with issues such as apps invading a person’s right to privacy. Further, such laws would apply only to an app’s usage in India. Thus their creation and effective implementation would still operate only at a very localised level, making it all the more necessary for people to be conscious themselves.

The privacy settings in Pokémon GO may have been a harmless error by a seemingly benevolent company, but most companies are not quite as harmless. Consumers must be vigilant to prevent their private lives and affairs from slipping away from them, a task for which Pokémon GO has hopefully somewhat equipped them.

For Further Reading:

  1. Data Protection in India: Overview – Stephen Mathias and Naqeeb Ahmed Kazia, Kochhar & Co
  2. Don’t believe the Pokémon GO Privacy Hype – Engadget
  3. Pokemon GO raises security concerns among Google users – Polygon


Ed. Note.: This 101, by Vishal Rakhecha, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

The Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016, or simply the Aadhaar Act, was passed in the Lok Sabha to facilitate the transfer of benefits and services to individuals. This is done by giving them Unique Identification Numbers. At first glance, Aadhaar seems like a brilliant scheme to ensure that the taxpayer’s money does not end up in the wrong hands. But the provisions of the Act raise some serious concerns about the ways it can be used by the state to encroach upon the right to privacy of individuals. Apart from this, the centrally maintained system that stores the data in the Central Identities Data Repository makes it vulnerable to cyber-attacks. The huge uproar against the government is also because of the way the Aadhaar Act was passed: as a money bill, despite the fact that it does not qualify as one.

According to the ‘law’[1], having an Aadhaar card is not mandatory. Yet almost all government schemes today require it, from availing a subsidy on LPG to applying for a passport. This continuing trend of using Aadhaar cards as proof of identity has been spilling into the private sector, since the government allows private entities to use Aadhaar as an identity proof. From getting a mobile number to signing up on matrimonial sites, it becomes impossible to conduct one’s day-to-day activities freely without an Aadhaar card.

Despite the fact that the government is practically forcing citizens to get an Aadhaar card, citizens place their trust in the regime to maintain some reasonable standard in securing their data. To begin with, the entire concept of using biometric scans is not fool-proof: there have been cases where the fingerprints of the registrar were registered instead, and, unlike passwords and passcodes, biometrics cannot be re-issued.

The data collected is not sufficiently protected[2]. For example, Aadhaar numbers are not cryptographically protected and are stored in a form readable by humans. This makes it easy to identify individuals, and the chances of identity theft increase correspondingly. Passwords and PINs are stored in the form of hashes, but the biometric data is stored in its original form. Because all the keys and hashes sit within the UIDAI, internal trust becomes the central basis for the protection of the data. This is clearly troubling, as people inside the system can access the data at any time, and it is very easy for an insider to tamper with the records. There is no set procedure for carrying out data inspection, making the process extremely arbitrary.
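The distinction drawn above – hashed PINs versus human-readable identifiers – can be illustrated with a minimal Python sketch. This is not UIDAI's actual scheme; the function names, parameters and the example number below are hypothetical, chosen only to show why a hashed value resists disclosure while a plaintext record does not:

```python
import hashlib
import os

def hash_secret(secret: str, salt: bytes) -> str:
    """One-way, salted hash: the stored value cannot be reversed to recover the secret."""
    return hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000).hex()

salt = os.urandom(16)

# Stored as a hash: even a full leak of the database does not reveal the PIN itself.
stored_pin = hash_secret("4821", salt)

# Stored in the clear: anyone who reads the record reads the identifier.
stored_aadhaar_number = "1234-5678-9012"  # hypothetical example number

# Verification re-computes the hash and compares; the PIN is never stored directly.
assert hash_secret("4821", salt) == stored_pin
assert hash_secret("0000", salt) != stored_pin
```

The asymmetry is the point: a leaked hash forces an attacker to guess, while a leaked plaintext number is immediately usable – and, as noted above, a biometric template stored in original form can never be re-issued the way a compromised PIN can.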

Aadhaar’s failure to protect the privacy of the data giver is aggravated by the way the data is maintained. The centralised system makes it even more susceptible to attacks[3], as such systems have been shown to have inherent flaws when it comes to protecting privacy. Aadhaar in particular is more harmful still, as no justification or reason has been offered for why a centralised database is needed at all. Localising all the data in one place makes it the ideal target for hackers and foreign governments. Apart from being more vulnerable, this system is also much costlier than, say, a smartcard (as used in the UK) or an offline biometric reader. These alternatives are more advantageous as they are cheaper, do not require real-time access, and are safer than the centralised system.[4]
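The architectural contrast above – one national database queried in real time versus a credential verified locally – can be sketched conceptually in Python. This is an illustration under stated assumptions, not how either UIDAI or the UK smartcard scheme actually works; all names and the "template" bytes are hypothetical:

```python
import hashlib

# Centralised model: every verification is a lookup against a single national
# database, which must therefore be online and becomes one high-value target.
central_db = {"resident-42": hashlib.sha256(b"fingerprint-template").hexdigest()}

def verify_central(resident_id: str, template: bytes) -> bool:
    """Requires real-time access to the central repository."""
    return central_db.get(resident_id) == hashlib.sha256(template).hexdigest()

# Smartcard-style model: the card itself carries a digest of its holder's
# template, and the reader compares locally, with no network call and no
# central honeypot of everyone's data.
card_digest = hashlib.sha256(b"fingerprint-template").hexdigest()

def verify_offline(card_digest: str, template: bytes) -> bool:
    """Comparison happens entirely at the point of use."""
    return card_digest == hashlib.sha256(template).hexdigest()

assert verify_central("resident-42", b"fingerprint-template")
assert verify_offline(card_digest, b"fingerprint-template")
assert not verify_offline(card_digest, b"someone-else")
```

The design trade-off tracks the argument in the text: the offline model distributes the data (a stolen card exposes one record, not millions) at the cost of having to issue and manage physical credentials.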

Now coming to the Act itself, which has several problems. While it is true that the Act mandates that the information be used only in the way specified when taking the ‘consent’ of the data giver, we must first understand that most people who apply for the scheme have little or no knowledge of the information involved, or of what the consequences of handing it over could be. Even setting this aside, Section 33(1) of the Act allows disclosure of the information pursuant to the order of a district judge or above, and Section 33(2) allows any officer of the rank of Joint Secretary and above to order the disclosure of the information in the interest of national security, without the consent of the person.

It is extremely important to understand that an Act meant to ensure that money transferred from the Consolidated Fund of India reaches the person who deserves it instead gives the government enough power to conduct surveillance on the people, and this is clearly problematic. First, there is a blatant absence of self-imposed checks on executive power, such as limits on what the government may treat as a situation of national security. Second, the circumstances under which judges can authorise the revelation of the data have not been specified. This gives these bodies immense power to swoop down and let the government use the data in whatever manner it deems fit.

Though the Act has several benefits, the very hasty manner in which it was passed, and the lack of self-restriction on the way the state can use the information, remain troubling. It is understandable that certain circumstances necessitate the government monitoring individuals, but this must not be done in a manner that gives the state immense power to clamp down on dissent whenever it wants to. This is the very reason there has been such a massive amount of criticism of the Aadhaar Act. There is still scope for amendments to the law if the legislature wants to maintain the trust of civil society.

[1] Justice K.S. Puttaswamy (Retd.) & Anr. v. Union of India & Ors.

[2] Japreet Grewal, Vanya Rakesh, Sumandro Chattapadhyay, and Elonnai Hickok, Report on Understanding Aadhaar and its New Challenges, The Centre for Internet and Society

[3] Electronic Frontier Foundation

[4] Kritika Bharadwaj, The Mission Creep Behind the Aadhaar Project, The Wire


Battling Goliath: An Analysis of the National Privacy Principles (Part I: Principles One to Four)


This is the first in a two-part post on the National Privacy Principles (NPPs). This post provides a bit of background and then deals with Principles One through Four, while the next will deal with Principles Five through Nine. The footnotes are especially important. Disclaimer: this first post is a bit on the longer side. Feedback, comments and recommendations are welcome. The second part is available here.

A Bit of Background

Most works of fiction use, and have perhaps always used, old and established plot devices. One of the most tried, tested, and overused plots is the underdog triumphing against the ‘giant’. This is illustrated quite concisely by the legend of David and Goliath (though there are disagreements here), where David of Israel, a normal human without armour and with just a sling and five pebbles, fights and defeats Goliath, the giant of a man who is the champion of the Philistines.

The current dimension of the debate around the concept of ‘privacy’ on the Internet is, arguably, a situation quite similar to this well-known plotline. ‘David’ in this situation is us, the data subjects, and the Goliaths are the multitudes of companies with vested interests in collecting our information for their own direct or indirect profit.