Law Enforcement v. End-to-End Encryption

In a post-Snowden world, there is greater awareness of and interest in the right to privacy in digital communications, and in knowing when the government can snoop in on personal conversations. A majority of the communications taking place today are digital and involve two crucial processes: encryption and decryption. Encryption (the conversion of information into a code) happens when a message or call is initiated, while decryption (the conversion of the code back into useful information) happens when the message or call reaches the recipient. There are multiple nuances in this process, both technological and legal.

For quite a while now, WhatsApp chat pages have shown the message "Messages and chats are now protected with end-to-end encryption." End-to-end encryption, or E2EE (first used in the program Pretty Good Privacy, written by Phil Zimmermann in 1991), is a form of encryption that makes it improbable, if not impossible, to intercept a private conversation. Traditionally, there are three points at which a conversation can be intercepted: first, from the device of the sender before encryption; second, while the encoded information is in transmission; and third, from the device of the recipient after decryption.

The two ends, i.e. the sender and the recipient, stay vulnerable to unwanted physical access or hacking, but it is at the second point, in transmission, that the majority of snooping takes place. It is here that E2EE becomes useful, allowing tech companies to sidestep interception orders and, by extension, protect user data. E2EE, in the simplest terms, means that the two people communicating are the only ones who hold the keys needed to decrypt each other's messages; anyone else who intercepts the data gets nothing but unintelligible code. Most communication apps and telecommunication providers keep the decryption keys on their own servers, which grants them the ability to see or hear any conversation passing through; E2EE removes this point of access by giving the keys to the two individuals rather than the service provider. Imagine the system as a letter-box: anyone can drop in a message and lock it (the public key), but only the intended recipient has the key to unlock it (the private key).
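
The core idea, that the two ends can agree on a secret the server never learns, can be sketched with a Diffie-Hellman key exchange. This is a toy illustration only: the prime below is far too small to be secure, and real E2EE protocols (such as the Signal protocol WhatsApp uses) rely on elliptic-curve variants with much larger parameters.

```python
import secrets

P = 2305843009213693951  # the Mersenne prime 2**61 - 1; toy-sized, NOT secure
G = 5                    # public generator, known to everyone including the server

def keypair():
    """Generate a private exponent and the corresponding public value."""
    private = secrets.randbelow(P - 2) + 2
    public = pow(G, private, P)   # safe to send over the wire
    return private, public

# Alice and Bob each generate a keypair; only the public halves are exchanged,
# so this is all the server (or an eavesdropper) ever sees.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# Each side combines its own private key with the other's public key.
alice_secret = pow(b_pub, a_priv, P)
bob_secret = pow(a_pub, b_priv, P)

# Both ends now hold the same key, which never travelled over the network.
assert alice_secret == bob_secret
```

Recovering the shared secret from the public values alone would require solving the discrete logarithm problem, which is what makes interception in transmission yield nothing but unintelligible code.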

This effective bypassing of the service providers has both pros and cons. On one hand, it allows greater freedom to express opinions and beliefs without fear of sanction; on the other, it hampers governments in carrying out intelligence activities vital to national security. Governments, in ensuring the safety of their citizens, conduct covert operations such as surveillance, which enables them to intercept vital communications between suspects and can help stop terrorist threats. The importance of this can be gauged from the fact that the latest attack in London involved communications between the terrorists over encrypted devices, and that ISIS has issued instructions to its followers on how to communicate through encrypted apps to plan attacks.

Privacy, however, is not the only lens through which encryption is seen in a global setting. The promotion and use of E2EE is also treated as a human rights issue, since it furthers individual privacy and freedom of expression, two rights contained in the International Covenant on Civil and Political Rights (ICCPR). Yet UN reports such as "The Right to Privacy in the Digital Age" and "The Promotion and Protection of the Right to Freedom of Opinion and Expression" take the position that judicially ordered decryption does not violate human rights, and lay down a three-part test for when a government may restrict encryption.

There is an intense debate about curbing the powers of law enforcement authorities to gather information from service providers through court orders. The debate entered the public eye after an incident in Brazil, where Facebook had its assets frozen following its non-compliance with a court order to hand over WhatsApp conversations of a bank-robbery gang; information Facebook could not have provided even if it wanted to, as it had no means to do so after enabling E2EE. This incident, coupled with Apple's refusal of the FBI's demand to decrypt the San Bernardino shooter's iPhone and install a backdoor in its operating system for law enforcement use, prompted the UK government to take the issue a step further by enacting new legislation for surveillance through equipment interference.

The intelligence-gathering aspect of E2EE is marred by contradictions within the state itself: the state needs strong encryption tools to secure its own data, but resents them because they make public surveillance harder. Hence, while promoting stronger encryption programs for state use, governments limit their citizens' ability to use the same tools. Certain states, like Germany, go the other way and encourage public use of E2EE to blunt the covert intelligence-gathering abilities of the Five Eyes countries.

Another facet of the issue is restrictions in the commercial and export sphere. Profit-driven tech companies promote E2EE to boost sales (more popularity, more sales) and oppose state-imposed rules, since such rules would mean they could import or build only those applications that allow third-party access. And since every state strives for stronger encryption tools to protect its own data and counter emerging security threats, states also impose regulations barring tech companies from selling such technology to certain countries, in pursuit of national security and foreign policy goals.

A hypothetical solution to this conflict between law enforcement and national security on one side and privacy on the other could be an internal system within service providers like WhatsApp. Once established, the system would compare every number that tries to send a message against a blacklist (the numbers law enforcement wants to track, with judicial approval). When a blacklisted number tries to send a message, the server would stop E2EE for that number from that point onwards, and the collected information would be stored in a separate database accessible only to the company's department handling judicial obligations.
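
The proposed routing logic can be sketched as follows. Every name and data structure here is invented for illustration; this is not an actual WhatsApp API, and the "encryption" is simulated.

```python
blacklist = {"+911234567890"}   # numbers under a judicially approved warrant
compliance_log = []             # stand-in for the restricted legal-compliance database

def send_message(sender, recipient, text):
    if sender in blacklist:
        # E2EE is switched off for this number from this point onwards;
        # the plaintext is retained for the judicial-obligations department only.
        compliance_log.append((sender, recipient, text))
        return {"e2ee": False, "payload": text}
    # Normal path: the server only ever sees ciphertext (simulated here).
    return {"e2ee": True, "payload": f"<encrypted:{len(text)} bytes>"}

msg = send_message("+911234567890", "+919999999999", "meet at noon")
assert msg["e2ee"] is False and len(compliance_log) == 1
```

Note the design trade-off the sketch makes visible: the check runs before encryption is applied, so the provider must retain the ability to disable E2EE per number, which is precisely the kind of selective access critics of backdoors object to.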

Technology has benefited us, and will continue to benefit us, in ways that cannot be counted, but its unaccountable use is also capable of great harm. The demand for security over privacy might yet produce a paradigm shift against rigid privacy laws, as was prima facie visible among Florida lawmakers after the Orlando attacks. It is law and technology working together that will result in swifter dispensation of justice and strike a balance between privacy, security and public safety. The adoption of a system such as the one suggested above might be the first step towards that balance.

Fake News and Its Follies


Fake news may seem innocuous, with no real harm or real-world consequences. It is a phenomenon in which individuals, sites and online portals create and/or share pieces of information that are either completely false or cherry-picked from real incidents, with the intention of misleading the general public or gaining publicity. We have all at some point received a message on WhatsApp, Twitter or Facebook claiming, say, that Jana Gana Mana received the 'best national anthem' award from UNESCO, that the new Rs 2000 notes have a GPS-enabled chip, or that Narendra Modi has been selected as the best PM in the world by UNESCO. Even at their most harmless, such rumours have led Twitter trolls to target unsuspecting individuals, sometimes even well-known people.

The problem of 'fake news' has caused some very tangible damage, such as the recent rumour in Uttar Pradesh and surrounding areas that there was a severe shortage of salt. The price of salt, otherwise about Rs 20/kg, shot up to Rs 250/kg and in some cases Rs 400/kg. The police had to resort to riot control and raids in multiple places to prevent looting and hoarding. The situation escalated to such an extent that the state's Chief Minister had to issue a statement that adequate salt was available.

Spreading false information for personal gain is not a new phenomenon, but with the growth of social media and other easily accessible news portals, its reach has hit new heights. The concept came to the forefront with the amount of misinformation propagated by both sides in the Brexit referendum and the US presidential election. It has grown to such an extent that Oxford Dictionaries selected 'post-truth' as its word of the year. In a post-truth society, individuals and groups can easily sway public opinion for or against their beliefs by posting false information online (and probably even get paid for it).

There is a fundamental reason why fake news is bad: it makes it harder for individuals to trust established institutions. The relationship between the media and citizens is one of trust; people expect news portals to be honest and unbiased in their reporting. But when they are constantly exposed to increasing amounts of misinformation and hoaxes, they start losing faith in these institutions. This creates a smoke-screen through which people cannot see clearly enough to judge, or reach any definitive conclusion about, what is to be believed and what is not.

Though India has no legal provision squarely dealing with fake news, the closest law addressing the spread of misinformation is defamation. But even the validity of defamation law has been called into question: although the Supreme Court has upheld criminal defamation, critics argue the law is used by the establishment to curb the rights of individuals who question the actions of governments or their leaders. Sites like Facebook, Reddit and Twitter can be classified as intermediaries, and they are the primary conduits of fake news. Intermediary liability, the liability that can be placed upon such sites, is dealt with under the IT Act. The provisions of that Act, however, are inadequate for fake news, because intermediaries are liable only for breaches of end-users' privacy and not for the spread of misinformation.

A few other countries have laws dealing with misinformation. Germany has mandated that Facebook maintain a 24/7 Legal Protection Office in the country; the office must take complaints from victims, initiate an investigation and resolve the issue. If it fails to act within 24 hours, the company is to be fined 500,000 euros (about Rs 3,60,00,000) for each day the news is left online. China made stringent rules against rumour-mongering in 2013. Indonesia has set up a National Cyber Agency to deal with content the agency deems 'slanderous, fake, misleading and spreading hate'.

There is a possibility of a chilling effect on free speech and expression. Facebook, for example, as a corporate entity trying to avoid the fine, will block any information that is even called into question, since there is no accountability for its actions in such cases. In the cases of China and Indonesia, the governments become the sole arbiters of what constitutes truth, and anything they do not want the public to know, or any information against the establishment's viewpoint, can simply be labelled 'fake'.

The promulgation of fake news has brought into focus the role of sites like Facebook, Twitter and Reddit, which have become among the major sources of news consumption in the developed world. Several analysts have blamed sites like Facebook for their near-total lack of accountability in dealing with misinformation spreading on their portals. Then again, the moves Facebook and Reddit have taken have themselves been questioned by free speech activists.

The active sharing of fake news, and the consequent need for social media outlets and others to regulate the flow, raises serious ethical and legal questions: whether corporate entities like Facebook, Reddit and Google should be given a free hand in blocking or blacklisting 'fake news'; whether the government should step in and actively take part in stopping it; and whether the benefits of checking misinformation are valuable enough to justify censoring any 'suspected' news. As of now most laws have not adapted to these issues, though there is a slowly shifting trend towards addressing them.



Payment Banks in India

Ed. Note: This post by Vishal Rackecha is a part of the TLF Editorial Board Test 2016.

One of the greatest problems the Indian economy faces today is financial inclusion: the lack of credit in rural areas and for micro industries. In 2013, the Reserve Bank released a paper based on the findings of a committee chaired by Nachiket Mor. The committee observed that services provided through mobiles and other internet portals are a low-cost channel and, under the right regulatory setup, have the potential to bring financial services to places where formal banks find it unviable or unprofitable to set up branches; both credit and savings functions are necessary for genuine inclusion. It suggested that allowing non-banking businesses with huge customer bases and comprehensive data about their consumers to offer such services would increase the reach of these facilities in regions where they are unavailable.

Payment banks would be able to provide services such as payments and holding demand deposits for their customers. The concept also brings the benefit of a robust payment mechanism at one's fingertips without the costly infrastructure and manpower required to maintain a full bank.

In December 2014, the RBI released guidelines for an entity to register itself as a payment bank. Eligible promoters include pre-paid payment instrument (PPI) issuers, non-banking financial companies (NBFCs), telecom operators and supermarkets. These entities must also have a good track record, having properly run their business for a minimum of five years.

Each individual account would be allowed to hold a maximum of Rs 1 lakh in deposits, on which interest would be paid. These banks would be allowed to issue debit cards and ATM cards. All their services must be accessible through mobile, and can be used for automatic, cashless and chequeless payment of bills; they will also provide services like transferring money between accounts via mobile. Payment banks cannot undertake lending activities. The RBI has also, with TRAI, issued rules for telecom operators on the charges for these payment bank services.

Payment banks would have to maintain CRR and SLR as per RBI guidelines. The minimum paid-up capital is Rs 100 crore, and outside liabilities must not exceed 33.33 per cent of net worth. The promoter's minimum initial contribution has to be 40% of the paid-up capital, and foreign investment would follow the norms applicable to private sector banks. Each of these banks has to have a fully networked, technology-driven system of functioning from the beginning. Presently 11 entities have been issued payment bank licences, including Vodafone m-pesa, Aditya Birla Nuvo Ltd and the Department of Posts.
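
The capital norms above reduce to simple arithmetic. In the sketch below the regulatory thresholds follow the guidelines as described in this post, while the sample bank's figures are invented purely for illustration.

```python
# Regulatory thresholds (per the RBI payment-bank guidelines as described above)
MIN_PAID_UP_CAPITAL = 100    # Rs crore, minimum paid-up capital
PROMOTER_FLOOR = 0.40        # promoter must contribute at least 40% initially
LEVERAGE_CAP = 0.3333        # outside liabilities <= 33.33% of net worth

# A hypothetical payment bank (all figures in Rs crore, invented)
paid_up_capital = 120
promoter_stake = 50
outside_liabilities = 35
net_worth = 120

# Check each norm in turn
assert paid_up_capital >= MIN_PAID_UP_CAPITAL          # 120 >= 100
assert promoter_stake / paid_up_capital >= PROMOTER_FLOOR   # 0.4167 >= 0.40
assert outside_liabilities <= LEVERAGE_CAP * net_worth      # 35 <= 39.996
```

So a bank with Rs 120 crore of paid-up capital needs at least Rs 48 crore from its promoter, and with a net worth of Rs 120 crore it may carry at most about Rs 40 crore of outside liabilities.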

These ‘banks’ will go a long way in shaping the nation's financial sector and will help include presently underserved sections of the Indian economy, though they will not change the monopoly traditional banks have over the credit supply. They also promote the goals of both the Pradhan Mantri Jan Dhan Yojana and Digital India, not only bringing more Indians into the organised financial sector but also making cashless payments more accessible to poorer sections; the chances of opening a branch in a remote village are far smaller than those of getting a mobile phone there. The system has its promise and will change the dynamics of this sector, but assessing its true potential will not be possible until it is implemented in its entirety.


Assistive Technology: Helping the Disabled

Ed. Note: This post by Kaustub Bhati is a part of the TLF Editorial Board Test 2016.

Have you ever seen a Paralympic athlete run and wondered how they do it? The answer is assistive technology. Assistive technology is an umbrella term for any software or hardware designed to help a user work around a disability; it encompasses any device that helps assist, adapt and rehabilitate a disabled person.

Disability is seen as a socially constructed phenomenon resulting from barriers present in the environment; this view locates disability within the environment rather than the person.[1] Well-designed, high-quality assistive devices, or daily living aids, that support independent living should make life easier and safer for the aged and disabled, seniors, and those with a medical condition or injury.

Technology is a ubiquitous part of daily life, and the devices that have sprouted from it range from a simple walking frame, to whole exoskeletons for people who cannot support themselves, to brain and spinal implants that help quadriplegic people control robots using only their thoughts. Professor John Donoghue, creator of BrainGate, achieved this feat of sending brain signals to robotic arms. The one problem with such technology is that implants tend to abrade tissue, cause inflammation and finally be rejected by the host body; this seems to have been solved by the Swiss Federal Institute of Technology, whose flexible implant 'e-dura' is made from a silicone rubber with the same elasticity as the dura mater, the protective membrane surrounding the spinal cord and brain, and has the potential to correct nerve damage. It has successfully gone through animal testing and is up for the human test phase. Separately, the World Health Organization convened a key stakeholders meeting in Geneva on 3 and 4 July 2014 and established a global initiative, the Global Cooperation on Assistive Technology (GATE), in partnership with stakeholders representing international organizations, donor agencies, professional organizations, academia and user groups. The vision of the GATE initiative is a world where everyone in need has high-quality, affordable assistive products to lead a healthy, productive and dignified life.
The GATE initiative has a single goal: to improve access to high-quality, affordable assistive products globally.[2] In India too, universities such as IIT Delhi work extensively in this field and have come up with unique technologies of their own, such as the Refreshable Braille Display, a device that enables people with visual impairment to read digital text through a tactile interface, giving a line-by-line embossed braille rendering of textual content in PDF format.[3] Other inventions sponsored by the Government of India include the SmartCane, a device that uses ultrasonic ranging to detect objects in its path and generates tactile output in the form of different vibratory patterns. These vibrations convey distance information and thus enable the user to negotiate obstacles from a safe distance.[4]

Indian companies such as BarrierBreak, alongside world leaders in assistive technologies from Israel such as PresenTense, ParaTrek and Sesame Enable, are frontrunners in this field. In collaboration with the Royal National Institute of Blind People and the W3C (World Wide Web Consortium), they are paving the way for greater cohesion between technology and social need.

Assistive technologies promote greater independence by enabling people to perform tasks they were formerly unable to accomplish, or had great difficulty accomplishing, by enhancing or changing the methods of interacting with the technology needed for those tasks. This is a very exciting time for the field: not only are existing AT programs regularly updated, but new and previously unseen technology is en route to improve accessibility for persons with disabilities.[5]

By contemporary approximations, more than 4,000 assistive technologies have been designed for the disabled and for seniors. These devices cover the whole shebang, from wheelchairs to a wide assortment of high-tech tools, and many companies today are turning their research and development towards assistive technologies.[6] It is a field governments worldwide should encourage, supporting emerging entrepreneurs and start-ups in actively inventing new technologies to help the disabled around the globe.

[1] Albert M. Cook, Janice Miller Polgar, Assistive Technologies: Principles and Practice.

[2] Global Cooperation on Assistive Technology (GATE).

[3] Assistech.

[4] Ibid.

[5] Assistive Technology: Devices, Products & Information.

[6] Ibid.

YouTube’s Copyright Policy – An Explanation

Ed. Note: This post by Ashwin Murthy is a part of the TLF Editorial Board Test 2016.

Digital media has become the norm of the modern world, and in this field no website is as dominant as YouTube. YouTube, currently a Google subsidiary, controls the video-sharing market, outpacing other providers by millions of views and users. It presently has more than a billion users and has even enabled a new career in YouTube personalities, the most famous being PewDiePie. As a natural consequence of being a video-sharing service, multiple videos use content that is copyright protected.

To help copyright owners protect their content, YouTube has created a system called Content ID. It allows copyright owners to easily identify their content in another person's or channel's videos and take action against it. Content owners submit a database of files to YouTube; when a video is uploaded, it is scanned against this database. If a match is detected, a Content ID claim is raised and the owners of the matched content are informed. They then have the option to do one of four things:

  • Mute the audio that is copied/matching
  • Block the entire video from being viewed
  • Track the number of views
  • Monetize the video by running ads
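
The scan-and-claim workflow described above can be sketched roughly as follows. Real Content ID computes perceptual audio and video fingerprints that survive re-encoding and editing; the exact hash used here is only a stand-in to illustrate the flow, and all names are invented.

```python
import hashlib

def fingerprint(media_bytes):
    # Stand-in for a perceptual fingerprint; a real system is far more
    # robust than an exact hash, which a one-byte change would defeat.
    return hashlib.sha256(media_bytes).hexdigest()

# Database of reference files submitted by content owners
reference_db = {fingerprint(b"official music video"): "LabelRecords"}

def scan_upload(media_bytes):
    owner = reference_db.get(fingerprint(media_bytes))
    if owner is None:
        return None   # no match, no claim
    # A match raises a Content ID claim; the owner then picks one of
    # the four options listed above.
    return {"claimant": owner, "options": ["mute", "block", "track", "monetize"]}

claim = scan_upload(b"official music video")
assert claim is not None and claim["claimant"] == "LabelRecords"
assert scan_upload(b"original vlog footage") is None
```

The key point the sketch makes is that the claim is raised automatically at upload time, before any human reviews whether the use might be fair use.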

Monetizing the video is the most common measure content owners take: it attracts very little publicity or negativity and is also the most profitable. Whether to share the ad revenue with the uploader is up to the content owner. Not everyone is given the Content ID privilege; only copyright owners who meet specific criteria are allotted it.

More serious, however, is when copyright owners submit an official legal copyright infringement notification instead of merely accepting Content ID actions. Once this notification is submitted, through a very simple form, the uploaded content is removed in line with the Digital Millennium Copyright Act (DMCA). This places a strike on the uploader, which initially carries no penalty beyond removal of the video. If a live stream or archived live stream is removed for copyright, the uploader's access to live streaming is restricted for 90 days. Three strikes, however, cause the account to be permanently banned, all videos on it to be deleted, and the user barred from creating another YouTube account. Strikes can be removed by waiting three months and completing Copyright School (a 'corrective' test and series of instructions on the YouTube site), getting a retraction from the issuer, or submitting a counter notification disputing the claim.

Certain content, even if it uses copyright-protected works, may be allowed as 'fair use'. YouTube's fair use guidelines describe which content would and would not be protected, depending on characteristics including the amount and substantiality of the work used in comparison to the work as a whole. Thus if 90% of an uploaded video consists of a music video, such as Ezra Furman's song 'Restless Year', it would not be protected by fair use and would be liable for takedown. However, if a part of the music video is used to create something original, such as a discussion of emerging trends in fashion, it would be protected and no claim would be allowed. Excerpts used verbatim for criticism, teaching, reviewing or proving an analytic point are all part of fair use.

In theory this is an unambiguously positive measure; in practice, serious issues have been raised. The quasi-legal system gives certain companies and individuals an easy way to make money without getting involved in lawsuits or disputes, simply by filling in a single form, and there is no penalty for filing a false claim. Claiming essentially becomes an automated action with only potential benefit and no downside: YouTube acts first, and only afterwards can the uploader fight for a retraction or file a counter claim. This allows for gross abuses of the system, the most publicised being that of film critic Doug Walker on his YouTube channel 'Channel Awesome'. His review of the movie 'My Neighbour Totoro', ironically a recommendation and clearly fair use, was claimed against, and he was hit with a copyright strike. His channel was hit with massive restrictions from YouTube and ad revenue was stripped from every video he uploaded. The redressal systems neither worked nor were easily accessible, and only after he made a video that got wide attention did YouTube take the matter seriously. YouTube musician Miracle of Sound was hit with copyright claims against music he created for his own channel and others. Game critic Jim Sterling, as explained in his video on the issue (explicit language), was forced into a rather innovative tactic: he used footage from multiple games published by companies like Konami, Nintendo and Rockstar in a single video, causing them all to raise Content ID claims that effectively nullified one another to the point where no action at all could be taken. To critics, this was an example of how easy the system had become to abuse and game for potential gain.

This led to the creation of the movement #WTFU (Where's The Fair Use) by Doug Walker, supported by multiple high-profile YouTubers calling on YouTube and Google to change the existing policies on fair use and copyright. YouTube responded, stating that it would make changes, but change is yet to be seen. YouTube has to walk a fine line between allowing creative content and protecting copyright and intellectual property. It has become evident, however, that its algorithms for discovering infringing content, and its mechanisms for dealing with copyright claims, are outdated, biased towards the claiming party and difficult to appeal when wrong. Change is necessary to create a platform that can better serve the immense number of users YouTube possesses, and until it arrives, YouTubers like Channel Awesome and Jim Sterling will continue to fight the blatant flaws in the system.

Rights of Persons with Disability and Copyright

Ed. Note.: This post, by Benjamin Vanlalvena, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

How are disabled persons affected by Copyright Law?

The onset of the digital era has given persons with disabilities greater access, through assistive technology, to resources earlier unavailable to them.

Copyright laws are set in place to protect the interests of the right-holder of a creative work. However, every time a visually impaired, blind or print-disabled person wanted to convert a work into a format compatible with screen-reading software, they would have to reproduce the book in its entirety. A lot of e-books and other digital media content carry Digital Rights Management (DRM) systems, which place technical restrictions to prevent copying. This conflict between the right of persons with disabilities to access information and the copyright holder's right to control copying of their work created an obstacle amounting to a 'global book famine'. According to the World Blind Union, roughly 5 percent of the millions of books published are made available in formats accessible to persons who are blind, visually impaired or print disabled. This is a cause for concern considering that there are 285 million such persons in the world, 90 percent of whom live in low-income settings in developing countries.

What has been done to address the interests of both parties?

To balance the interests of both parties, on 27th June 2013 the Marrakesh Treaty to Facilitate Access to Published Works for Persons Who Are Blind, Visually Impaired, or Otherwise Print Disabled was adopted by member states of the World Intellectual Property Organization (WIPO). With Canada's ratification bringing the number of ratifying countries to 20, the treaty will come into force on September 30, 2016.

What is the Marrakesh Treaty and what does it attempt to do?

The Marrakesh Treaty is a recognition of the rights of blind, visually impaired and print-disabled persons and of the difficulties and obstacles they face in getting information from books. It calls on contracting parties to adopt provisions in their respective domestic laws permitting the reproduction and distribution of works in accessible formats for such persons. It is pertinent to note that this does not apply to all persons but to organizations defined either under Article 2(c) of the treaty or in domestic law.

The Marrakesh Treaty takes note of Technological Protection Measures ("TPMs"); Article 7 states that contracting parties are to take measures to ensure that circumventing TPMs is not deemed illegal when done for the purposes mentioned in the treaty.

The treaty allows distribution of copies in an accessible format, for the exclusive use of beneficiary persons, without the authorization of the right holder (including distribution to, and import from, another country).

The definition of a beneficiary in the Marrakesh Treaty is quite broad and is inclusive of persons suffering from dyslexia, paralysis, etc. Though it does not apply to audio-visual works according to Article 2(a), it applies to works in audio forms such as audiobooks. One must also note that Article 12(2) of the Marrakesh Treaty states that it is without prejudice to other limitations and exceptions for persons with disabilities provided by national law.

Beyond the Treaty

A number of countries provide exceptions in their national laws that are broader than those in the Marrakesh Treaty. The Indian Copyright Act, for example, makes such an exception available for the benefit of persons with disability, covering all disabilities that require a special format to access a work. Countries like Australia, the United Kingdom and the United States of America have similar exceptions for persons with disabilities.

While the Marrakesh Treaty does not address the issue for every person with a disability (such as those who are hearing-impaired), it is still a commendable step in that direction.


Ed. Note.: This post, by Sayan Bhattacharya, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

Search engines, which are quintessential to our internet experience, are mechanisms that crawl and index data in order to provide us with a list of links most relevant to both our present and past searches. Figuratively, their functions range from directing users to seats in a movie hall to being the very seat in the movie hall.
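The crawl-index-rank mechanism described above can be sketched in miniature. Below is a toy Python illustration, using entirely made-up page data and a deliberately crude relevance score (a real search engine uses far more elaborate signals); it only shows the basic shape of an inverted index and a ranked query:

```python
from collections import defaultdict

# Hypothetical crawled pages (illustrative data only): URL -> page text
pages = {
    "site.example/a": "copyright law search engine liability",
    "site.example/b": "search engine indexing crawling",
    "site.example/c": "movie hall seats",
}

# Indexing: build an inverted index mapping each word to the pages containing it
index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

def search(query):
    # Rank pages by how many query words they contain (a crude relevance score)
    scores = defaultdict(int)
    for word in query.split():
        for url in index.get(word, ()):
            scores[url] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("search engine law"))  # → ['site.example/a', 'site.example/b']
```

Even in this toy form, the key point for the liability debate is visible: the ranking is produced mechanically from the index, yet the choice of scoring function is a human design decision.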

Tonnes of data lie a click away thanks to these third parties. The question then arises regarding this huge quantity of data and the ethics of its presentation to users, lying as it does at the mercy of private entities which enjoy an almost unquestionable monopoly in this regard. To what extent can they be held liable for the data they present? This article seeks to deal with the following legal issues surrounding holding search engines liable for:-

  • Copyright infringement of individuals
  • Defamatory Content in search results
  • Autocomplete suggestions: Affecting freedom of speech, privacy and personality of individuals.


The problem with extending liability to these search engines, at the level of principle, lies in their being third-party content providers rather than the original publishers breaching copyright standards. A debate ensues over whether search engines are publishers or mere link providers to publishers, since a search engine, after the user's initial query, filters relevant data from its already-indexed resources. Search engines therefore do have some publishing character in determining what is relevant and what is not, however neutral the algorithms might be. But imposing liability for merely linking users to data, irrespective of its legality, is problematic.

Copyright laws have been trying to restore fairness in online searches by providing a checks-and-balances mechanism to curb the presence of plagiarised content and to protect the right of the initial publisher to be the sole distributor of their work. The aim is to provide laws which balance a free information environment with the protection of the rights of the copyright holder. Courts across Europe have held search engines liable for inducement of copyright infringement in several cases.

Recently, the European Parliament came up with its single digital market and copyright reforms, which require digital content providers who give access to large amounts of copyright-protected data to protect that data using technology. The reforms also require such providers to inform copyright holders about the functioning of this checks-and-balances system. This law is problematic on two levels:-

  • A law requiring control of a ‘large amount of copyrighted data’ seems rather vague, since the reforms establish no applicable threshold as to what constitutes a large amount of copyrighted data
  • The protection of such data using technology presumably refers to protection through filtering illegal content. This is problematic in that not all data which breaches copyright standards can be detected, and the reforms do not apportion any responsibility to search engines as to what standard of precaution is to be adopted.


On defamation, courts have been more rational in apportioning liability to search engines than on copyright issues. The debate has mainly been over whether search engines are mediums or publishers. In 2009, Metropolitan International Schools Limited brought a defamation case against Designtechnica Corporation, Google UK Limited and Google Inc, which drew a distinction between search engines and other internet-based entities and set the precedent for future search-engine cases.

The court held that the search engine operators exercised no control over Designtechnica’s actions, because a search yields a list of links determined relevant to the query. The technology ranks the pages in order of perceived relevance without any form of human intervention, thus excluding the factors of intent and knowledge; the search results for any given query depend on successful crawling, indexing and ranking. The court held that a search engine is “a different kind of Internet intermediary”, one whose nature prevents it from exercising complete control over the search terms and search results.


The most debated issue regarding the extension of liability to search engines has been the autocompletion of searches, where the major question is whether it predicts user intent or orients users towards specific data. First developed as a feature to help physically disabled users and slow typists increase their typing speed, its use has now become ubiquitous and identified with search engines. The issues surrounding it are the following:-

  • Conveyance of misleading messages – for example, a search for the name of a business enterprise might link it to keywords like “fraud” or “dishonest”, or might end up revealing unwanted details about a person’s past. These suggestions may be contrary to what you are looking for and might even cause a breach of privacy or discomfort.

  • Uncompetitive practices and unfair preference-based linking: How often have you tried to search for a specific link, only for the first few links, or indeed the majority of links, to point to another site which is very dominant in the online market? An investigation by the Competition Commission of India has claimed that Google Inc allegedly “abused its dominance” in the Indian market by incorporating clauses in its agreements with users that restricted them from availing the services of third-party search engines. A similar probe by the European Commission found that Google abused its dominant position in the market to show “systematic favourable treatment” to applications in its own ecosystem, like Google Maps. The counter-argument is that Google searches work on neutral algorithms based on popularity and relevance. But research on the snowball effect in modern searches has shown that a suggested search indulges users’ curiosity and orients them towards searches which may in turn influence the algorithms.

The extension of liability in this regard, even though it seems perfectly legitimate, becomes problematic at the level of principle. Search engine algorithms are, in the end, code written by human beings. Like every business enterprise, search engines may have their own set of priorities and preferences. Even if users expect that Google presents results on the basis of supposedly ‘neutral’ algorithms, ‘Google has never given up its right as a speaker to select what information it presents and how it presents it’.


We have dealt with and contrasted the individual rights of search engines with societal rights, business ethics with freedom of expression, and the rights with the duties of search engines on specific issues, all of which call for a system of checks and balances between these criteria.

An important phenomenon of these search engines, noted at the very beginning of this article, is monopoly. The very fact that most of us looking for data on the world wide web ultimately resort to Google for links, irrespective of copyright breaches, defamation and unfair trade practices, shows Google's dominating power, which hardly changes despite unethical trade practices committed by it or through it.

The fact remains that if anyone can solve the existing illegal practices on the internet and restore fairness, it is these search engines, owing to their monopoly and extreme bargaining power over the content displayed. Is it then legally correct to extend liability to search engines for misuse of data on their platforms, this being the only mode of control?

The Right to Be Forgotten – An Explanation

Ed. Note.: This post, by Ashwin Murthy, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

The right to be forgotten is the right of an individual to request search engines to take down certain results relating to the individual, such as links to personal information if that information is inadequate, irrelevant or untrue. For example, if a person’s name is searched on Google and certain information appears relating to that person, the person can request Google to remove that information from the search results. This has its largest application in crime and non-consensual pornography (revenge porn or the distribution of sexually explicit material depicting a person without their consent). If X committed a petty crime and a person searching X’s name finds this petty crime, it leads to an obvious negative impact to X, in terms of job prospects as well as general social stigmatisation. X can ask the providers of the search engine to remove this result, claiming his right to be forgotten. The right is not necessarily an absolute right – in its current stage of discussion it merely applies to information that is inadequate, irrelevant or untrue and not any and all information relating to the person. Further there lies a distinction between the right to privacy and the right to be forgotten – the right to privacy is of information not available to the public while the right to be forgotten is removal of information already available publicly.

            Proponents of the right to be forgotten claim that it is a person’s right to have such outdated or immaterial information deleted from the Internet, or at least from the results of search engines. Photographs, comments, links shared – these are all things that people post in their youth (and sometimes at a not so young age) without a second thought. These people should have the right to delete such content from the Internet to protect their right to privacy and consequently their right to be forgotten, protecting them from unnecessary backlash over rather innocuous actions. For example, a Canadian doctor was banned from the United States when an internet search showed that he had experimented with LSD at one point in his life. With the right to be forgotten he could erase such pages from the results of the search engine. Victims of revenge and involuntary porn would have an easy mechanism to ensure that such material is removed from the internet, a task that is difficult to achieve without such a right.

            Critics, however, claim that this right to be forgotten is a substantial setback to the freedom of information and free speech. Any information on the Internet would have the potential to be taken down due to legitimate or seemingly legitimate claims of the right to be forgotten, regardless of the qualitative value of the information. Further, the right to be forgotten would impede a person’s right to know. The easiest way to discover the background of a person is to Google them. This is especially relevant when employing someone or entering into an agreement of trust. If a person is looking for a security guard and a Google search shows that the applicant for the job is or was a thief, then this information is of great use to the person hiring – information that would otherwise not be available to them. Removing it denies the person their right to know and access this information. Implementation of such a right is also technically difficult, requiring a complex algorithm to correctly identify which sites and results should and should not be removed in the event of a claim, especially considering the permanency of content on the Internet, with the reposting and reproduction of content that occurs today. Locating every site in order to remove the content is technologically difficult.

            This right has its premier legal backing in the case of Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González, a decision by the Court of Justice of the European Union (CJEU). In the case, the Spanish citizen González wished to remove a Google search result of an auction notice of his repossessed house, a matter that had been fully resolved and was thus irrelevant. The Court held that the search engine (Google) must consider requests for removal of links and results appearing from a search of the requestor’s name, on the grounds of the search result being irrelevant, outdated or excessive. The Court thus clarified that while people do possess this right to be forgotten, it is not absolute and must be balanced against other fundamental rights, including the freedom of expression. The CJEU accordingly stated that such requests must be decided on a case-by-case basis. This is in line with an EU Regulation, the General Data Protection Regulation (GDPR), in providing only a limited form of the right to be forgotten. Originally this only applied to European countries – Google delisted search results only from its European domains. Thus if a European citizen requested removal of a result, it would be removed from all European domains but nowhere else. CNIL, France’s data protection regulator, went to the length of fining Google for not removing the requested search results from all domains of Google worldwide, not just the French domain. While Google is fighting this case in France’s highest court, this is a symbol of a slow recognition of a far more expanded form of the right to be forgotten, applicable to search results worldwide.

            The right to be forgotten is not alien to India either – the first case was a request in 2014 to a website to remove certain content; however, this request was soon dropped. In 2016, a man filed a request before the Delhi High Court for the removal of his personal information, concerning a marital dispute, from Google search results. The Court recognized this claim and sent an inquiry to Google, to be replied to by September 19th. However, there is currently no legal framework in India for the same, nor does the landmark EU judgement apply in India.

            The right to be forgotten remains a nascent right, not fully developed or fleshed out. There are debates as to the pros and cons of such a right, and the extent to which it can and should be granted. However, there is a clear rise in its relevance in the technological and legal fields, and it will undoubtedly crystallise into a comprehensive right in the near future.

For further reading:

  1. The Audacious ‘Right to Be Forgotten’, Kovey Coles, CIS-India
  2. The Right to Be Forgotten, EPIC
  3. Debate: Should The U.S. Adopt The ‘Right To Be Forgotten’ Online? (audio), NPR


Ed. Note.: This 101, by Vishal Rakhecha, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

The Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016, or simply the Aadhaar Act, was passed in the Lok Sabha to facilitate the transfer of benefits and services to individuals by giving them Unique Identification Numbers. At first glance Aadhaar seems like a brilliant scheme to ensure that the taxpayer’s money does not end up in the wrong hands. But the provisions of the Act raise some serious concerns about the way it can be used by the state to encroach upon the right to privacy of individuals. Apart from this, the centrally maintained system that saves the data in the Central Identities Data Repository makes it vulnerable to cyber-attacks. The huge uproar against the government is also because of the way Aadhaar was passed, as a money bill, despite the fact that it does not qualify as one.

According to the ‘law’[1], having an Aadhaar card is not mandatory. But almost all government schemes today require it, from availing a subsidy on LPG to applying for a passport. This continuing trend of using Aadhaar cards as proof of identity has been spilling into the private sector, since the government allows private entities to use Aadhaar as an identity proof. From getting a mobile number to signing up on matrimonial sites, it becomes impossible to conduct your day-to-day activities freely without an Aadhaar card.

Despite the fact that the government is practically forcing citizens to get an Aadhaar card, citizens place their trust in the regime to maintain some reasonable standard in securing their data. To begin with, the entire concept of using biometric scans is not fool-proof: there have been cases where the fingerprints of the registrar were registered instead, and, unlike passwords and passcodes, biometrics cannot be re-issued.

The data collected is not sufficiently protected[2]. For example, Aadhaar numbers are not cryptographically encrypted and are stored in a human-readable form. This makes it easy to identify individuals and increases the chances of identity theft. Passwords and PINs are stored in the form of hashes, but the biometric data is stored in its original form. Since all the information about the keys and hashes lies within the UIDAI, internal trust becomes the main basis for the protection of the data. This is clearly troubling, as people inside the system can access the data at any time, and it also makes it very easy for someone on the inside to tamper with the records. There is no set procedure to carry out data inspection, making the process extremely arbitrary.
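The distinction the paragraph draws between hashed credentials and raw biometric data can be illustrated with a minimal, generic Python sketch (this is a standard salted-hash pattern, not a description of UIDAI's actual scheme): a PIN can be verified without ever storing the PIN itself, and if the stored digest leaks, the PIN can simply be changed, whereas a biometric template stored in its original form cannot be re-issued.

```python
import hashlib
import hmac
import os

def hash_secret(secret, salt=None):
    """Store only a salted hash of a PIN or password, never the secret itself."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)
    return salt, digest

def verify(secret, salt, stored_digest):
    # Recompute the hash with the stored salt and compare in constant time
    _, digest = hash_secret(secret, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_secret(b"1234")   # only (salt, digest) is stored
print(verify(b"1234", salt, stored))  # True: the correct PIN verifies
print(verify(b"9999", salt, stored))  # False: a wrong PIN does not
```

A fingerprint stored as a raw template has no analogue of this pattern: whoever obtains the template obtains the credential itself, permanently.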

The fact that Aadhaar is not able to protect the privacy of the data giver is aggravated by the way the data is maintained. The centralised system makes it even more susceptible to attacks[3], as such systems have been shown to have inherent flaws when it comes to protecting privacy. Aadhaar in particular is more harmful, as no justifications or reasons have been given for why a centralised database is needed. The fact that the data is held in one place makes it an ideal target for hackers and foreign governments. Apart from being more vulnerable, this system is also much costlier than, say, a smartcard (which is used in the UK) or an offline biometric reader. Those systems are more advantageous as they are cheaper, do not require real-time access and are safer than a centralised system.[4]

Coming to the Act itself, which has several problems: while it is true that the Act makes it mandatory to use the information only in the way specified when taking the ‘consent’ of the data giver, we must first understand that most people who apply for the scheme have little or no knowledge about the information involved and no idea of what the consequences of giving it could be. Even setting this aside, Section 33(1) of the Act allows disclosure of the information pursuant to the order of a district judge or above, and Section 33(2) allows any officer of the rank of Joint Secretary or above to order the disclosure of the information in the interest of national security, without the consent of the person.

It is extremely important to understand that an Act made to ensure that money transferred from the Consolidated Fund of India reaches the person who deserves it, yet which gives the government so much power that it can actually conduct surveillance on the people, is clearly problematic. This is because, one, there is a blatant absence of self-imposed checks on executive power as to what constitutes a situation of national security; and two, the circumstances under which judges can authorise the revelation of the data have not been specified. This gives these bodies immense power to swoop down and let the government use the data in whatever manner it deems fit.

Though the Act has several benefits, it was passed in a very hasty manner, and there is a lack of self-restriction on the way the state can use the information. It is understandable that certain circumstances necessitate the government monitoring individuals, but not when it is done in a manner which gives the state immense power to clamp down on dissent whenever it wants to. This is the very reason there is such a massive amount of criticism of the Aadhaar Act. There is still scope for amendments to the law if the legislature wants to maintain the trust of civil society.

[1] Justice K.S. Puttaswamy (Retd.) & Anr. v. Union of India & Ors.

[2] Japreet Grewal, Vanya Rakesh, Sumandro Chattapadhyay and Elonnai Hickok, Report on Understanding Aadhaar and its New Challenges, The Centre for Internet and Society

[3] Electronic Frontier Foundation

[4] Kritika Bharadwaj, The Mission Creep Behind the Aadhaar Project, The Wire



Ed. Note.: This 101, by Kaustub Bhati, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

Have you ever used a torrent to download something not available freely? You must have. Ever wondered how it works and why there is so much fuss about it being illegal and people using it might face legal sanctions?

A torrent is a file-sharing method in which large media files are shared between private computers by gathering different pieces of the desired file and downloading those pieces simultaneously from people who already have them. This increases the download speed manifold. For example, if 5,000 people are downloading the same file, little pressure is put on the main server; instead, every individual user contributes upload speed, which in turn ensures that the file transfer is fast. The download, hence, doesn’t really take place from the main server but from the 4,999 other users currently downloading. This is known as P2P, or peer-to-peer, sharing.[1]
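The piece-by-piece mechanism described above can be sketched with a small, hypothetical Python simulation: a file is split into pieces, each piece is fetched from whichever simulated peer holds it and verified against a per-piece hash (as the metadata in a real .torrent file allows), and the pieces are reassembled. The peer names and data here are invented for illustration; a real client does this in parallel over a network.

```python
import hashlib

# Hypothetical file split into fixed-size pieces, as a torrent client sees it
original = b"This is a large media file split into small pieces for P2P sharing."
PIECE_SIZE = 16
pieces = [original[i:i + PIECE_SIZE] for i in range(0, len(original), PIECE_SIZE)]

# The torrent metadata carries a hash of every piece so each download can be verified
piece_hashes = [hashlib.sha1(p).hexdigest() for p in pieces]

# Simulated swarm: each peer holds only some of the pieces
peers = {
    "peer_a": {0: pieces[0], 2: pieces[2], 4: pieces[4]},
    "peer_b": {1: pieces[1], 3: pieces[3]},
    "peer_c": {0: pieces[0], 3: pieces[3]},
}

def download():
    assembled = {}
    for index in range(len(pieces)):
        # Fetch each piece from any peer that has it, verifying against the hash
        for held in peers.values():
            if index in held and hashlib.sha1(held[index]).hexdigest() == piece_hashes[index]:
                assembled[index] = held[index]
                break
    return b"".join(assembled[i] for i in range(len(pieces)))

print(download() == original)  # True: the reassembled file matches the original
```

Note that no single peer holds the whole file, yet the download completes: this is why taking one uploader offline does little to stop distribution.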

Since the invention of the torrent protocol by Bram Cohen in 2001, it has been used to share a massive number of files every day, some of which are copyrighted materials such as video games, movies and songs that should technically be paid for but are being distributed freely. This loss of revenue for the copyright-holding companies and the subsequent lawsuits bring us to the forefront of our discussion: the legal issues.

Torrents are not wholly illegal, contrary to the general misconception. They can be used to share files legally, but the line is drawn at copyrighted content, because then it amounts to intellectual property theft. Almost 97–98% of hosting companies (the companies which provide the link rather than host the content themselves) do not allow hosting torrents, and most of them are simply afraid of the word ‘torrent’. In India too, the authorities are catching up: Section 66 of the IT Act provides for two years of imprisonment and fines for people who download pirated media from the internet.

The Cyber Cell Department catches such illegal downloading by paying IP-troll companies, which let them join the swarms of people downloading torrents, allowing them to see the IP addresses and hence come knocking on their doorsteps. This leads to another legal issue, as the IP address only tells WHERE the file is being downloaded and not WHO is downloading it; and what is to be done if the downloader was a minor? These are some of the problems faced by the prosecution in the court of law.

One of the major problems seen globally is the difference in copyright laws across countries, which allows file-sharing sites such as The Pirate Bay to slip through this crack by arguing that no laws of the host country, i.e. where the servers lie, are being violated and hence everything is legal. Sites like The Pirate Bay even post legal notices sent to them by companies like Sony, DreamWorks and Electronic Arts on their site, accompanied by mocking retorts.[2]

The prominent argument presented in favour of sites like these is that the sites themselves are not illegally distributing copyrighted materials but are functioning just like any other search engine, such as Google or Yahoo, providing relevant search results for a query; the users themselves are then responsible for the acquisition of the materials in question, and they, not the intermediaries, should be held liable. While Sweden refuted this in its judgement against the co-founders of The Pirate Bay, countries like the Netherlands, Ukraine, India and the privacy-conscious Switzerland have become new piracy havens due to the exact same loopholes in the law deeming intermediaries not guilty of what their customers do.

While countries like Japan have severely strict laws entailing a 10-year imprisonment for uploading and a 2-year imprisonment for downloading illegal content[3], Germany is not far behind, imposing fines of €1000 or more even if a single instance of copyrighted material is downloaded through BitTorrent.

These were some of the legal ramifications of the use of torrents. In an era when everyone wants everything to be freely available with just a click, peer-to-peer file sharing has become a platform for political activity against intellectual property laws and has sparked movements such as the anti-copyright movement, which advocates complete or partial remission of current legislation. In the end, the ball is thrown to the general masses: catch it and in turn be implicated in illegal activity, or let it fall and support the rights of the creators of the content we so want to see.

[1] Carmen Carmack, “How BitTorrent Works”

[2] Dennis H, “A Pirate’s Life in Sweden”

[3] “Japan introduces piracy penalties for illegal downloads”