Category Archives: Regulation

Kill the Kill Switch

The internet has grown from being just a communication medium into a marketplace, an entertainment source, a news centre, and much more. At any given moment, thousands of gigabytes of information are travelling across the planet. But all of this comes to a standstill when the internet shuts down. An internet shutdown is a government-enforced blanket restriction on the use of the internet in a region for a particular period of time. The stated reasons vary from law and order situations to the visit of a dignitary. Such shutdowns need to be examined to see whether they can be justified, even on the direst of grounds.

These shutdowns can be initiated with little effort as far as the authorities are concerned, because Internet Service Providers (ISPs) do not hesitate to follow government ‘directives’. The justifications offered range from the possibly reasonable to the absurd. For example, in February, the Gujarat government blocked mobile internet services across the state because the Gujarat State Subsidiary Selection Board was conducting exams to recruit revenue accountants. This was done given the “sensitive nature of the exam” and because it was “necessary to do so to prevent misuse of mobile phones.” The step is clearly disproportionate in its effects: exam officials have other ways to stop malpractice, and cutting mobile internet across an entire state is not only inefficient but also highly disruptive to the general populace. The question of proportionality becomes more complicated when the government justifies shutting down the internet on grounds of law and order or national security.

Before we answer these questions, we need to probe the very foundation on which a democracy functions – discourse. Democratic discourse rests on access to information. When information is lacking, public discourse loses its function, because participants do not understand the problems well enough to work towards targeted solutions. The internet has become one of the most important mediums for information dissemination, able to provide ground-level data from places the conventional media cannot or will not enter. It acts as a medium for sections of society that are normally outside the purview of the mainstream to raise their voice for the general public to hear. By virtue of this, it becomes an important tool in furtherance of the democratic process – the right to free speech and expression. Keeping this extremely important function of the internet in mind, we can now analyse the problem of internet shutdowns.

The following questions must be answered to even consider shutting down the internet: first, whether the problem is so grave that such an extreme step becomes necessary; second, whether the government has considered other alternatives, even if the problem is grave enough; third, whether keeping a major communication channel running will benefit or harm the general population; and fourth, whether there are enough safeguards to ensure that the government does not abuse the power it has been given.

Even where there is a valid justification, a blanket shutdown is problematic. When unruly sections are using a few limited channels to spread hate and rumours, the government can shut down those specific channels and contain the situation instead of cutting off internet access entirely, which affects the businesses and lives of millions of innocent parties. Targeted blocking also reduces the collateral damage of taking down harmless websites or, more tangibly, the systems that banks run on. If, despite all of this, a blanket restriction is required, the question arises as to who should be able to impose it. Section 144 of the Criminal Procedure Code has been employed for this purpose, but its validity has been called into question multiple times.

There are enormous free speech implications in not letting people use an important communication channel. Earlier, the conventional media were the sole sources of information for the general public, making them the gatekeepers of information. These organisations, though free to a great extent, can be influenced by the government – through the fear of some form of sanction – not to attack it directly or not to report certain atrocities. With the advent of internet-enabled communication channels, every individual can contribute to the broader pool of information. By shutting down the internet, the government cuts off information about the situation at its source. This raises accountability concerns, as there is little ground-level data about atrocities or any excessive use of force by law enforcement authorities.

Furthermore, during a law and order situation – as when the Gujarat government shut down the internet during the Patidar agitation – it becomes very important that people are not misled by fake news and rumours, and the internet can be a very useful tool to fight fire with fire. The government can use the same channels to reach out to the public and reduce confusion. For example, during the Cauvery riots the Bangalore City Police effectively used Twitter and Facebook to dispel rumours and instil a sense of security among the people. Not letting an average citizen participate and engage with other individuals and the state during tough times only alienates them further. The safety of loved ones during these situations is the topmost priority of the general populace, and the internet is a medium to reach them; during shutdowns that access is cut off. This only leads to further chaos and unrest, making the shutdown counterproductive.

One of the most significant and tangible harms of blocking the internet is to business establishments. According to the Brookings Institution, internet shutdowns cost India an estimated $968 million between July 2015 and June 2016. Banks are largely dependent on the internet to conduct their daily transactions and face massive problems during a shutdown. The infrastructure behind debit and credit cards, ATMs and internet banking runs on the internet. Shutdowns also affect brick-and-mortar stores, a significant number of which have moved towards digital payment modes post-demonetisation. Needless to say, the most immediate impact is on e-commerce websites, which by the very nature of their activities are reliant on the internet.

There is also a more insidious side to this. When an easy measure like cutting access to a communication medium is used to address a broader societal complication, it is only a surface-level step that cuts off engagement with that issue. Shutting down the internet is a highly publicised move, which makes it seem like part of a bigger set of measures being used to tackle the situation. This creates an illusion while the actual problem continues to persist, and the state continues to rely only on coercive power to deal with it. Targeted measures that would provide much better long-term solutions are not considered, owing to a lack of political will or simple lethargy on the part of the establishment. This also points to a wider lack of understanding, both of the actual issues at hand and of how the internet works – of how interconnected the populace is with the internet and thus the effect a shutdown has on the entire society.

As more and more people come online and the government pushes towards a digitised economy, it becomes all the more necessary not to shut the internet down. The local, state and national governments need to take responsibility for public disorder and engage with the issue at hand. The state needs to start balancing the interests of national security with the protection of individual rights.

The Internet Freedom Foundation has launched a campaign to address this very issue – support them by going to keepusonline.in and signing the petition asking the government to frame regulations that reduce arbitrariness in imposing these shutdowns.

Consent to Cookie: Analysis of European ePrivacy Regulations

This article analyses the European Union’s new ‘Regulation on Privacy and Electronic Communications’.

A huge part of our daily life now revolves around websites and communication services like Facebook, WhatsApp and Skype. The suddenness with which these services became popular left law-making authorities with little opportunity to direct or regulate them. For the most part, these services have operated on self-regulation and on the terms and conditions their consumers accept. They give people access to their machinery for free in return for personal data about the consumer, which is then sold to advertisers who send ‘personalised’ advertisements back to the consumer on the basis of that information.

With growing consciousness of the large-scale misuse that can take place if this data falls into the wrong hands, citizens have started to seek accountability on the part of these websites. With increasing use of online services in daily life and growing awareness of the importance of privacy, the pressure on governments to make stricter privacy laws is mounting.

The data these services collect from the consumer can be extremely personal, and with no checks on the nature of data that can be collected, there is scope for abuse: the data can be sold with no accountability for how it is handled. Regulations on data collection, data retention, data sharing and advertising are required, and for the most part have been lacking in almost all countries. The European Union has been in a constant tussle over regulation with internet giants like Google, Facebook and Amazon because, although these companies operate in Europe, they are not under its jurisdiction – in fact they are not under the jurisdiction of any country except the ones they are based in. On 10 January 2017, the EU released a proposal on the privacy of individuals using electronic communications, which is intended to come into force in May 2018.

The objective of the ‘Regulation on Privacy and Electronic Communications’ is to strengthen the data protection framework in the EU. The key highlights of the data protection laws are as follows:

  • Unified set of Rules across EU – These rules and regulations will be valid and enforceable across the European Union and will provide a standard compliance framework for the companies functioning in the Union.
  • Newer Players – Over-the-top (OTT) services are those used in place of traditional services such as SMS and voice calls. The law seeks to regulate OTT services such as WhatsApp, Gmail, Viber and Skype, as well as communication between Internet-of-Things devices, which have so far been outside the legal framework because existing laws and regulations are not wide enough in scope to cover the technology used.
  • Cookies – A cookie is information about the user’s activity on a website, such as what is in the user’s shopping cart. The new regulation makes it easier for end-users to give consent for cookies through their web browsers, putting them more in control of the kind of data being shared (a minimal sketch of such a consent check follows this list).
  • Protection against spam – The proposal bans unsolicited electronic communication through mediums like email, phone calls and SMS. In effect, it restricts spam – the mass sending of mails or messages carrying advertisements – where the end-user has not consented to receive them.
  • Emphasis on Consent – The regulation lays strict emphasis on the idea of user-consent in terms of any data being used for any purpose that is not strictly necessary to provide that service. The consent in this case should be ‘freely given, specific, informed, active and unambiguous consent expressed by a statement or clear affirmative action’.
  • Limited power to use metadata – Unless the data is necessary for a legal purpose, the service provider must either erase the metadata or anonymise it. Metadata is data about data – it is used by Internet Service Providers, websites and governments to summarise the data available and identify patterns or generalised behaviour so that specific data can be used easily.
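To make the cookie-consent idea concrete, here is a minimal illustrative sketch in Python (not taken from the Regulation itself; the cookie names and the consent flag are hypothetical) of how a site might set non-essential cookies only where the user has consented:

```python
# Illustrative only: gate non-essential cookies on recorded user consent.
ESSENTIAL_COOKIES = {"session_id"}                       # needed to provide the service itself
NON_ESSENTIAL_COOKIES = {"ad_tracker", "analytics_id"}   # need explicit consent

def build_set_cookie_headers(requested, user_consented):
    """Return Set-Cookie headers, dropping non-essential cookies without consent."""
    headers = []
    for name in requested:
        if name in ESSENTIAL_COOKIES or (user_consented and name in NON_ESSENTIAL_COOKIES):
            headers.append(f"Set-Cookie: {name}=<value>; Secure; HttpOnly")
    return headers

print(build_set_cookie_headers({"session_id", "ad_tracker"}, user_consented=False))
# Only the session cookie survives; the tracker is never set without consent.
```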

The Regulation has far-reaching effects, bringing into its fold businesses which were earlier outside the regulations: it would cover any technological company which provides electronic communications services in the Union. Businesses would have to bear the costs of redesigning their communication systems and of ensuring that future software updates are designed so that users’ consent is taken.

The main argument raised by the proposal in favour of bringing in the new Regulation is that an increasing number of users want control over their data and want to know where their data is going and who it is accessed by. This is because of the growing consciousness about the far-reaching effects of providing huge quantities of personal information to private entities with little or no check on the use of the data.

The biggest relief given to both users and service providers is the change in the cookie policy. The previous regulation made it mandatory for a website to take consent before any cookie was placed on the user’s computer, which would have led to users being bombarded with consent requests. The new regulation lets the user choose cookie settings from a range of high-to-low privacy while installing the browser, and every six months the user receives a notification reminding them that they can change the setting.

There is, however, the issue of how websites will know that a user has opted out of receiving targeted advertisements. One possibility is the Do-Not-Track (DNT) tool, which, when turned on, makes the web browser send a signal to websites that the user does not wish to be tracked. The system was used in the past, but it lost its utility given the lack of industry consensus on how it should be used and the fact that a large number of websites simply ignored DNT signals. This Regulation could give the system the push it needs, because if a user chooses not to be tracked, websites would have to respect that choice.
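As a rough illustration of how such a choice could be honoured server-side, the sketch below (an assumption about how a site might behave, not any site’s actual code) checks the standard “DNT: 1” request header that browsers send when Do-Not-Track is switched on:

```python
# Minimal sketch: respect the Do-Not-Track signal sent as the "DNT" header.
def should_serve_targeted_ads(request_headers):
    """Serve targeted ads only if the user has not asked not to be tracked."""
    return request_headers.get("DNT") != "1"   # "1" means: do not track me

print(should_serve_targeted_ads({"DNT": "1"}))  # False - the choice is respected
print(should_serve_targeted_ads({}))            # True - no preference expressed
```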

The Regulation also makes consent the central feature of the communications system. Earlier, consent was treated as implied: the very use of an operator’s service was taken as consent to the operator collecting information about the end-user. The change could have a huge effect on how these entities earn revenue, since in some cases advertising is their sole source of income. Technology companies spend huge amounts running their servers and paying the staff who maintain the website and research newer technology to improve their services. Companies dependent on advertising could lose a large share of their revenue if a large number of their users opt out of providing information and receiving targeted advertisements.

Several critics from the industry argue that the new framework will make things extremely difficult for operators, as they do not necessarily classify data: the multiple layers of data and information collected are simply labelled ‘analytics’. Websites do not always know the purpose the data will serve until after it has been collected, which makes it difficult for an operator to decide what falls under the law. In addition, operators depend on third parties to collect the information for them. The regulation makes it abundantly clear that the information collected should be the bare minimum required to provide the services, along with data required for web audience measuring. Third parties would also be protected under this law if the information collected by the website is necessary to provide those services or if the user has already given consent. A more transparent system would instead make the system accountable, as it would give a factual basis on which to assess whether the operator is complying with reasonable ethical standards.

Users also have an option under the law not to receive unsolicited calls, messages and mails. Such calls, messages and mails are a huge nuisance, and the companies sending them currently face no liability; only the UK, among EU countries, has strict laws and hefty fines for this kind of direct advertising. The new system would require the user’s prior consent both when obtaining the information and before sending advertisements, and would require that users be informed about the nature of the marketing and how consent can be withdrawn. Even where consent has been given, the law mandates that the procedure for opting out be communicated to the user in clear terms. The operator will also have to use a prefix for all marketing calls. This is similar to India, where the TRAI-initiated Do-Not-Disturb system gives the user an option to block different kinds of unsolicited and automated advertisements through calls and messages.

The Regulation can form a benchmark for other countries. With the privacy and consent of the user as its central focus, it requires transparency and accountability from the operator – a necessary condition for running any organisation providing such services. While the changes may seem radical in terms of the costs the industry may incur, given the sensitive nature of the information these companies deal with, such regulations will and should become the norm for all players in the market and for any new players who wish to join it.

Fake News and Its Follies


Fake news may seem innocuous, as if it causes no real harm to anyone and has no real-world consequences. It is a phenomenon in which individuals, sites and online portals create and/or share pieces of information that are either completely false or cherry-picked from real incidents, with the intention of misleading the general public or gaining publicity. We have all at some point received a message on WhatsApp groups, Twitter or Facebook claiming, for instance, that Jana Gana Mana received the ‘best national anthem’ award from UNESCO, that the new Rs 2000 notes have a GPS-enabled chip, or that Narendra Modi has been selected as the best PM in the world by UNESCO. These apparently harmless rumours have done little more than make Twitter trolls target unsuspecting individuals, sometimes even well-known people.

This problem of ‘fake news’ has caused some very tangible damage in today’s world, such as the recent rumour in Uttar Pradesh and surrounding areas that there was a severe shortage of salt. The price of salt, otherwise about Rs 20/kg, shot up to Rs 250/kg and in some cases Rs 400/kg. The police had to resort to riot control and raids in multiple places to prevent looting and hoarding. The situation blew up to such an extent that the state’s Chief Minister had to issue a statement that an adequate quantity of salt was available.

Spreading false information for personal gain is not a new phenomenon, but with the growth of social media and other easily accessible news portals, its reach has hit new heights. The concept came to the forefront given the amount of misinformation propagated by both sides in the Brexit referendum and the US presidential election. It has grown to such an extent that Oxford Dictionaries selected ‘post-truth’ as the word of the year. In a post-truth society, individuals and groups can easily influence public opinion for or against their beliefs by posting false and incorrect information online (and probably even get paid for it).

There is a fundamental reason why fake news is bad: it makes it tougher for individuals to trust established institutions. The relationship between the media and citizens is one of trust – people expect news portals to be honest and unbiased in their reporting. But when they are constantly exposed to increasing amounts of misinformation and hoaxes, they start losing faith in these institutions. This creates a smoke-screen through which people cannot see, judge or reach a definitive conclusion about what is to be believed and what is not.

Though there is no specific legal provision in India dealing with fake news, the closest the country has to a law dealing with the spread of misinformation is defamation law. Even the validity of defamation law has been called into question, though criminal defamation has been upheld by the SC, with critics stating that the law is being used by the establishment to curb the rights of individuals who question the actions of governments or their leaders. Sites like Facebook, Reddit and Twitter can be classified as intermediaries and are the primary channels through which fake news spreads. Intermediary liability – the liability that can be placed upon such sites – is dealt with under the IT Act, but its provisions are not adequate to deal with fake news, because intermediaries are liable only for breaches of end-users’ privacy and not for the spread of misinformation.

A few other countries have laws dealing with misinformation. Germany has mandated that Facebook maintain a 24/7 Legal Protection Office in the country; this department takes complaints from victims and must initiate an investigation and resolve the issue. If the department fails to act within 24 hours, the company can be charged 500,000 euros (Rs 3,60,00,000) for each day the news is left online. China in 2013 made stringent rules against rumour-mongering. Indonesia has set up a National Cyber Agency to deal with content the agency deems ‘slanderous, fake, misleading and spread hate’.

There is a possibility of a chilling effect on the freedom of speech and expression: Facebook, for example, as a corporate entity trying to avoid the fine, will block any information that is called into question, because there is no accountability for such actions. In the cases of China and Indonesia, the governments become the sole deciders of what constitutes the truth, and anything they do not want the public to know, or any information against the establishment’s viewpoint, can be labelled ‘fake’.

The promulgation of fake news has brought into focus the role of sites like Facebook, Twitter and Reddit, which have become major sources of news consumption in the developed world. Several analysts have blamed sites like Facebook for their near-total lack of accountability in dealing with misinformation spreading on their portals. Then again, the moves Facebook and Reddit have taken have been questioned by free speech activists.

The active sharing of fake news, and the consequent need for social media outlets and the like to set up regulations to counter it, raises serious ethical and legal questions: whether corporate entities like Facebook, Reddit and Google should be given a free hand in blocking or blacklisting ‘fake news’; whether the government should step in and actively take part in stopping fake news; and whether the benefits of checking the spread of misinformation are valuable enough to justify censoring any ‘suspected’ news. As of now most laws have not adapted to these issues, though there has been a slow shift towards dealing with them.

 

RELIANCE JIO: REGULATORY AND PRIVACY IMPLICATIONS

Ed. Note.: This post, by Sayan Bhattacharya, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

In a technology world dominated by a power struggle over presence in the data market, Reliance Jio has probably made the biggest tech news of the year with its revolutionary schemes. By adopting a loss-leader strategy of immediate loss and ultimate dominance, Reliance Jio has promised its subscribers features like free voice calls, extremely cheap data packages, abolition of national roaming charges and no extra rates on national holidays on shifting to its network. This is set to significantly affect competition by taking India from data scarcity to data abundance.

Until now, tech companies were hesitant to hand consumers too much data, believing it would run contrary to their business interests, so competition between them existed within set boundaries.

A long-standing concern has been data inequality: people living in rural areas are greatly disadvantaged by the absence of access to data, high tariffs and lack of initiative. This marked shift will significantly affect this section of society. The arrival of Reliance Jio’s data-abundance schemes might trigger the kind of competition needed to make the internet more accessible and bridge the existing divide.

The second shift, less talked about in the classy launch statements, is the treatment of the immense amount of data that will be at the company’s disposal after such a move. The markets in the United States and Europe have been relatively normalised to the idea of tech companies holding abundant data, in comparison to Indian markets; they also have systems of checks and balances to control the ethics of data collection, such as specific privacy laws, special courts, and media and NGO sensitisation to the problems with data collection. Such specific laws and structures are almost non-existent or minimally present in India, which makes these problems harder to deal with.

This article examines the implications of such a move in terms of the privacy of consumer data, regulatory mechanisms and the subsequent impact on the market.

The major transition when a user shifts from a conventional network to Reliance Jio is the shift from conventional calling to data calling, for which Reliance Jio uses a technology called VoLTE. This technology is being introduced in India for the first time but is already prevalent in European and US markets. These features are different from those available on social media platforms like WhatsApp, which have their own privacy policies and safeguards against breach of privacy, such as end-to-end encryption.

Consequently, a huge amount of data will now flow in the form of data calls, and the concern is over third parties monitoring private calls. The flow of data is monitored using a technique known as Deep Packet Inspection (DPI), a form of computer network packet filtering that examines the data portion of a packet as it passes an inspection point, searching for protocol non-compliance, viruses, spam or intrusions (a minimal sketch of such inspection follows the list below). Apart from legitimate inspections, the critique is that third-party inspection of data can be grossly misused for:-

  1. Data Snooping and Eavesdropping.

  2. Data Mining – the ethics of digging up a history of searches in order to use the data for the unfair advantage of parent companies. The problem with this kind of data history is that algorithms, instead of predicting future searches, tend to show the same results in order to orient users towards preferred data. This becomes extremely problematic with Reliance Jio, which aims at a closed ecosystem through its applications, thus exploiting loopholes in net neutrality laws. This was debated extensively in the context of Facebook’s Free Basics, in which Reliance Jio was a stakeholder.

  3. Internet Censorship – government intervention to control the flow of data is another concern. Here the worry is essentially silent monitoring of data to serve government propaganda, defining what is viewable and what is not.
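For readers unfamiliar with the mechanics, the following toy sketch shows what deep packet inspection looks like in practice, using the scapy packet library; the ‘signatures’ are invented, the script needs root privileges, and the same payload-reading capability is exactly what enables the snooping described above:

```python
# Toy deep packet inspection: examine the data portion of packets, not just headers.
from scapy.all import sniff, Raw, TCP   # pip install scapy; run as root

SIGNATURES = [b"malware-beacon", b"spam-template"]   # hypothetical byte patterns

def inspect(pkt):
    """Flag TCP packets whose payload matches a known signature."""
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        for sig in SIGNATURES:
            if sig in payload:
                print(f"Flagged {pkt.summary()} (matched {sig!r})")

# Inspect ten passing TCP packets without storing them.
sniff(filter="tcp", prn=inspect, store=False, count=10)
```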

Further, Reliance Jio incorporates an app ecosystem that comes as part and parcel of the package, including the JioTV, JioCinema and JioMusic apps, a personal digital wallet, JioMags and a newspaper application. The extensive push for a relatively closed ecosystem might lead to the exploitation of loopholes in Indian net neutrality laws, working to the disadvantage of third parties. Data mining techniques might also be used to identify customer patterns to suit the needs of the parent company, in the absence of the strict system of checks and balances present in countries which have already adopted this technology.

The Reliance Jio network has worked relatively well insofar as Reliance-to-Reliance calls are concerned, but significantly high numbers of call drops have been reported on calls to other operators. The promising features intended to incentivise a switch to the Reliance Jio network thus suffer a major roadblock in terms of implementation.

In light of the arguments presented, the shift that has been triggered by Reliance Jio needs an effective system of checks and balances in terms of regulatory measures to ensure the following:-

  • Maintenance of principles of net-neutrality
  • Protection of private consumer data
  • Prevention of privacy breaches
  • Consumer protection in terms of reducing call drops to other operators
  • Prevention of unfair trade practices in terms of data mining to suit needs of parent company

LEGAL ISSUES SURROUNDING SEARCH ENGINE LIABILITY

Ed. Note.: This post, by Sayan Bhattacharya, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

Search engines, which are quintessential to our internet experience, are mechanisms that crawl and index data to provide us with a list of links most relevant to both our present and past searches. Figuratively, their functions range from directing users to seats in a movie hall to being the very seats in the movie hall.

Tonnes of data lie a click away thanks to these third parties. The question then arises of this huge quantity of data, and the ethics of its presentation to users, lying at the mercy of private entities which enjoy an almost unquestionable monopoly in this regard. To what extent can they be held liable for the data they present? This article deals with the following legal issues surrounding holding search engines liable for:-

  • Copyright infringement of individuals
  • Defamatory Content in search results
  • Autocomplete suggestions: Affecting freedom of speech, privacy and personality of individuals.

COPYRIGHT INFRINGEMENT BY SEARCH ENGINES

The problem with extending liability to search engines at the level of principle lies in their being third-party content providers rather than the original publishers breaching copyright standards. A debate ensues over whether search engines are publishers or mere providers of links to publishers, since after the user’s initial search entry the engine filters relevant data from resources it already holds. They therefore have some publisher-like character in determining what is relevant and what is not, however neutral the algorithms might be. But imposing liability for merely linking users to data, irrespective of its legality, is problematic.

Copyright laws have been trying to bring fairness back to online searches by providing a checks-and-balances mechanism to curb plagiarised content and protect the right of the original publisher to be the sole distributor of their work. The aim is to provide laws which balance a free information environment with the protection of the copyright holder’s rights. Courts across Europe have held search engines liable for inducement of copyright infringement in several cases.

Recently, the European Parliament came up with its Digital Single Market and copyright reforms, which require digital content providers who give access to large amounts of copyright-protected data to protect it using technology. They must also give copyright holders information about the functioning of such a checks-and-balances system. This law is problematic on two levels:-

  • A law requiring control of a ‘large amount of copyrighted data’ seems vague, since the reforms establish no applicable threshold for what constitutes a large amount of copyrighted data.
  • The protection of such data ‘using technology’ presumably refers to filtering illegal content. This is problematic because not all data which breaches copyright standards can be detected, and the law does not specify what standard of precautions search engines are expected to adopt.

DEFAMATION

Courts have been more rational in apportioning liability to search engines on this issue than on copyright. The debate has largely been about whether search engines are mediums or publishers. In 2009, Metropolitan International Schools Limited brought a defamation case against Designtechnica Corporation, Google UK Limited and Google Inc which drew a distinction between search engines and other internet-based entities and set the precedent for future search engine cases.

The court held that the search engine operators exercised no control over Designtechnica’s actions because a search yields a list of links determined relevant to the query. The technology ranks the pages in order of perceived relevance, without any form of human intervention, thus excluding the factors of intent and knowledge; the search results for any given query depend on successful crawling, indexing and ranking. The court held that a search engine is “a different kind of Internet intermediary,” one which cannot exercise complete control over the search terms and search results.
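The automated pipeline the court relied on can be illustrated with a toy index-and-rank sketch (the pages and scoring below are invented; real engines are vastly more complex, but the point – that no human picks individual results – is the same):

```python
# Toy crawl-index-rank pipeline: results come from an index and a relevance
# score, with no human selecting individual links.
from collections import defaultdict

pages = {
    "example.com/a": "school reviews and rankings",
    "example.com/b": "angry rant about a school",   # the engine cannot "know" if this defames
    "example.com/c": "cooking recipes",
}

index = defaultdict(set)                 # word -> pages containing it
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

def search(query):
    words = query.lower().split()
    candidates = set()
    for w in words:
        candidates |= index.get(w, set())
    # rank purely by how many query words appear on each page
    return sorted(candidates, key=lambda u: -sum(w in pages[u] for w in words))

print(search("school reviews"))
```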

AUTOCOMPLETE SUGGESTIONS

The most debated issue in extending liability to search engines has been the autocompletion of searches, the major question being whether it predicts users’ queries or orients users towards specific data. First developed as a feature to assist physically disabled and slow typists, it has now become ubiquitous and identified with search engines. The issues surrounding it are as follows:-

  • Conveyance of misleading messages – For example, when you search for the name of a business enterprise you might be linked to keywords like “fraud” or “dishonest”, or the suggestions might reveal unwanted details about a person’s past. These suggestions might be contrary to what you are looking for and might even end up causing a breach of privacy or discomfort.

  • Uncompetitive practices and unfair preference-based linking – How often have you searched for a specific link only to find that the first few links, or indeed the majority of links, point to a player that is dominant in the online market? An investigation by the Competition Commission of India has claimed that Google Inc allegedly “abused its dominance” in the Indian market by incorporating clauses in its agreements with users that restricted them from availing of third-party search engines. A similar probe by the European Competition Commission found that Google abused its dominant position in the market to show “systematic favourable treatment” to its own ecosystem applications like Google Maps. The counter-argument is that Google searches work on neutral algorithms based on popularity and relevance. But research on the ‘snowball effect’ in modern searches has shown that a suggested search indulges users’ curiosity and orients them towards searches that may in turn influence the algorithms (a toy sketch of frequency-ranked suggestion follows).
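A toy sketch of frequency-ranked autocomplete (the query log below is invented) shows how the snowball effect can arise: suggestions are ordered by how often past users typed a query, and every accepted suggestion feeds that count further:

```python
# Toy autocomplete: suggest the most frequent past queries for a prefix.
from collections import Counter

query_log = Counter({
    "acme corp careers": 120,
    "acme corp fraud": 95,        # a suggestion the business would rather hide
    "acme corp share price": 60,
})

def autocomplete(prefix, k=3):
    """Return the k most frequent logged queries starting with the prefix."""
    matches = Counter({q: n for q, n in query_log.items() if q.startswith(prefix)})
    return [q for q, _ in matches.most_common(k)]

print(autocomplete("acme corp"))   # popularity, not accuracy, decides the order

# Each accepted suggestion increases its count - the snowball effect.
query_log["acme corp fraud"] += 1
```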

The extension of liability in this regard, though it seems perfectly legitimate, becomes problematic at the level of principle. Search engine algorithms are ultimately codes written by human beings, and at the end of the day, like every business enterprise, search engines may have their own set of priorities and preferences. Even if users expect Google to present results on the basis of supposedly ‘neutral’ algorithms, ‘Google has never given up its right as a speaker to select what information it presents and how it presents it’.

CONCLUSION

We have dealt with and contrasted the individual rights of search engines and societal rights, business ethics and freedom of expression, and the rights and duties of search engines on specific issues that call for a checks-and-balances system. This article has argued that there needs to be a balance between these criteria.

An important phenomenon of these search engines, which this article noted at the very beginning, is that of monopoly. The very fact that most of us looking for data on the world wide web ultimately resort to Google for links – irrespective of copyright breaches, defamation and unfair trade practices – shows Google’s dominating power, which hardly changes despite unethical trade practices by it or through it.

The fact remains that if anyone can solve the existing illegal practices on the internet and restore fairness, it is these search engines, owing to their monopoly and extreme bargaining power over the content displayed. Is it then legally correct to extend liability to search engines for misuse of data on their platforms, simply because this is the only mode of control?

The Right to Be Forgotten – An Explanation

Ed. Note.: This post, by Ashwin Murthy, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

The right to be forgotten is the right of an individual to request search engines to take down certain results relating to the individual, such as links to personal information if that information is inadequate, irrelevant or untrue. For example, if a person’s name is searched on Google and certain information appears relating to that person, the person can request Google to remove that information from the search results. This has its largest application in crime and non-consensual pornography (revenge porn or the distribution of sexually explicit material depicting a person without their consent). If X committed a petty crime and a person searching X’s name finds this petty crime, it leads to an obvious negative impact to X, in terms of job prospects as well as general social stigmatisation. X can ask the providers of the search engine to remove this result, claiming his right to be forgotten. The right is not necessarily an absolute right – in its current stage of discussion it merely applies to information that is inadequate, irrelevant or untrue and not any and all information relating to the person. Further there lies a distinction between the right to privacy and the right to be forgotten – the right to privacy is of information not available to the public while the right to be forgotten is removal of information already available publicly.

Proponents of the right to be forgotten claim that it is a person’s right to have such outdated or immaterial information deleted from the Internet, or at least from the results of search engines. Photographs, comments, links shared – these are all things that people post in their youth (and sometimes at a not so young age) without a second thought. These people should have the right to delete such content from the Internet to protect their right to privacy and consequently their right to be forgotten, protecting them from unnecessary backlash over rather innocuous actions. For example, a Canadian doctor was banned from the United States when an internet search showed that he had experimented with LSD at one point in his life. With the right to be forgotten he could erase such pages from the results of the search engine. Victims of revenge and involuntary porn would have an easy mechanism to ensure that such content is removed from the internet, a task that is difficult to achieve without such a right. Critics, however, claim that the right to be forgotten is a substantial setback to the freedom of information and free speech. Any information spread on the Internet would have the potential to be taken down due to legitimate or seemingly legitimate claims of the right to be forgotten, regardless of the qualitative value of the information. Further, the right to be forgotten would impede a person’s right to know. The easiest way to discover the background of a person is to Google them, which is especially relevant when employing someone or entering into an agreement of trust. If a person is looking for a security guard and a Google search shows that the applicant for the job is or was a thief, then this information on the Internet is of great use to the person hiring – information that would otherwise not be available to them. Removing this information denies the person their right to know and access it. Also, implementation of such a right is technically difficult, requiring a complex algorithm to correctly identify which sites and results should and should not be removed in the event of a claim of the right to be forgotten, especially considering the permanency of content on the Internet with the reposting and reproduction that occurs today. Locating every site to remove the content is technologically difficult.

This right has its premier legal backing in Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González, a decision of the Court of Justice of the European Union (CJEU). In the case, the Spanish citizen González wished to remove a Google search result of an auction notice of his repossessed house, a matter that had long been resolved and was thus irrelevant. The Court held that the search engine (Google) must consider requests for removal of links and results appearing from a search of the requestor’s name on the grounds of the search result being irrelevant, outdated or excessive. The Court thus clarified that while people do possess this right to be forgotten, it is not absolute and must be balanced against other fundamental rights, including the freedom of expression; the CJEU stated that assessments must be made on a case-to-case basis. This is in line with an EU Regulation, the General Data Protection Regulation (GDPR), in providing only a limited form of the right to be forgotten. Originally this only applied to European countries – Google delisted search results only from European domains (google.fr, google.de, etc.). Thus if a European citizen requested removal of a result, it would be removed from all European domains but nowhere else. CNIL, France’s data protection regulator, went to the length of fining Google for not removing the requested search results from all domains of Google worldwide, not just the French domain. While Google is fighting this case in France’s highest court, it is a sign of the slow recognition of a far more expanded form of the right to be forgotten, applicable to search results worldwide.

The right to be forgotten is not alien to India either – the first instance was a request in 2014 to the site Medianama.com to remove certain content, though this request was soon dropped. In 2016, a man approached the Delhi High Court seeking removal of his personal information from Google search results relating to a marital dispute. The Court recognised this claim and sent an inquiry to Google, to be replied to by September 19th. However, there is currently no legal framework in India for the same, nor does the landmark EU judgement apply in India.

The right to be forgotten remains a nascent right, not fully developed or fleshed out. There are debates as to the pros and cons of such a right, and the extent to which it can and should be granted. However, its relevance in the technological and legal fields is clearly rising, and it will undoubtedly crystallise into a comprehensive right in the near future.

For further reading:

  1. The Audacious ‘Right to Be Forgotten’, Kovey Coles, CIS-India
  2. The Right to Be Forgotten, EPIC
  3. Debate: Should The U.S. Adopt The ‘Right To Be Forgotten’ Online? (audio), NPR

Privacy – A right to GO?

Ed. Note.: This post, by Ashwin Murthy, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

For centuries rights have slowly come into existence and prominence, from the right to property to the right to vote and the right against exploitation. In the increasingly digital world of interconnection, the latest right to gain immense popularity is the right to privacy. This right entails the right to be let alone and more importantly the right to protect one’s own information – informational privacy. Thus armed with the right to privacy, one can limit what information others have access to and may use, and thus what information corporations might have or what is up on the Internet. This right to privacy comes in direct contact with applications downloaded on phones, which often ask for permissions to various information on the phone – a device which already possesses a great deal of information of the owner, including the location of the user, their phone number, their emails, their chat conversations and their photos. Applications often ask, either explicitly or in their terms and conditions, for permissions to access varying degrees of the information on the phone, sometimes in a rather unexpected fashion (such as a flashlight app asking for permissions to location), and more recently these apps have been singled out for their questionable privacy settings.

The latest app to come under fire for its privacy settings is Pokémon GO, an Android and iOS game that took the world by storm, being downloaded over 100 million times by August. The game is an augmented reality game that allows people to catch Pokémon in the real world through synchronous use of the phone camera and location detection. With such popularity, the app was inevitably scrutinised for its privacy settings, especially since it appeared that Pokémon GO had been given full access to the owner’s Google account. Adam Reeve, a former software engineer at Tumblr, was the first to cause a commotion when he wrote a post detailing all the information the app supposedly had access to. Niantic, the creators of Pokémon GO, later stated that this was an error and that the app only accessed basic account information for logging in and in fact could not access information in applications like Gmail or Calendar, which was later confirmed by security developers. While this was clarified and fixed by Niantic, many remained sceptical, losing trust not just in Pokémon GO but in apps in general.
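For the technically curious, one way a user or researcher could check what a login token actually grants is to ask Google’s public tokeninfo endpoint which OAuth scopes it carries; the sketch below is a hedged illustration – the token value is a placeholder and the endpoint’s exact behaviour may change over time:

```python
# Hedged sketch: inspect the OAuth scopes granted to an access token.
import requests

def granted_scopes(access_token):
    """Ask Google's tokeninfo endpoint which scopes this token was granted."""
    resp = requests.get(
        "https://www.googleapis.com/oauth2/v3/tokeninfo",
        params={"access_token": access_token},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("scope", "").split()

# A login-only token should list narrow scopes (e.g. profile/email),
# not broad ones covering Gmail or Calendar.
print(granted_scopes("<access-token-obtained-by-the-app>"))
```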

This sceptical perspective, however, is exactly what is required to prevent apps from unduly gaining information, particularly those created by more unsavoury companies that are less scrupulous about their privacy settings, to the point of intentionally trying to get far more information than would be expected from such an app. Pokémon GO, with its shady privacy settings and the ensuing headlines of hysteria, was merely the catalyst for this questioning of why such permissions are in fact required by many apps. While it turned out that Niantic did not have nearly as much access to information as people suddenly thought it did, Pokémon GO, by its very nature of using the camera and location services of the phone, potentially has access to far more information than would be desired, to the point where it has been speculated that the app could be used for spying. While such speculations remain conspiracy theories, their very existence is important. Security and governmental agencies are increasingly attempting to access and store the information that such apps and companies have access to. An intelligence agency working in tandem with a company like Niantic could simply make a Pokémon appear in a house and thus get an interior view of it through the owner’s phone camera. Niantic’s privacy policy, among other things, states that it may collect and store information about the owner’s location – information that is almost too easy to use for less than noble purposes – and this is just one of many apps that can do the same.

While many consumers may not have a problem with these applications having access to such information, they must first be aware that the applications actually do have that access when they are downloaded. Consumers are often content merely to accept the terms and conditions of an app without reading them. The scope for abuse of privacy is almost unparalleled. For there to be a change, the sceptical atmosphere that Pokémon GO accidentally created is needed, and not just for the short period it existed in the wake of Adam Reeve’s post. Currently there is almost zero awareness of the degree to which applications can access and store private information, especially when the privacy policies and terms and conditions are not read or are incomprehensible. The publishers and creators of apps and other such software must be made to disclose explicitly what access they have and what information they can see, store and use. A high level of scrutiny from consumers would ensure this, especially given the dearth of laws on this specific issue. India has implemented the Information Technology (Amendment) Act, 2008, adding S.43A and S.72A, which deal with the implementation of reasonable security practices for sensitive information and punishment for wrongful loss or gain caused by disclosing personal information respectively. These, however, are both inadequate and too broad to effectively deal with issues such as apps invading a person’s right to privacy. Further, such laws would apply only to the app’s usage in India, so their creation and effective implementation would still be only at a very localised level, making it all the more necessary for people to be conscious themselves.

The privacy settings in Pokémon GO might have been a harmless error from a seemingly benevolent company; however, most companies are not quite as harmless. Consumers must be vigilant to prevent their private lives and affairs slipping away from them, a task for which Pokémon GO has hopefully somewhat equipped them.

For Further Reading:

  1. Data Protection in India: Overview – Stephen Mathias and Naqeeb Ahmed Kazia, Kochhar & Co
  2. Don’t believe the Pokémon GO Privacy Hype – Engadget
  3. Pokemon GO raises security concerns among Google users – Polygon

REGULATIONS FOR SELF-DRIVING CARS

Ed. Note.: This post, by Vishal Rakhecha, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

Self-driving cars have long been a thing of sci-fi, but with companies like Uber, Google, Tesla, Mercedes and Audi conducting research in this field, they no longer seem so unrealistic. Self-driving cars are vehicles which do not require human supervision, with varying degrees of autonomy. Such technology is already present – to a limited extent – in the form of cruise control, parking assist, etc. The deployment of such technology will inevitably require a sound system of rules and regulations. These laws must, among other things, be capable of setting standards for the companies, securing physical safety and protecting the privacy of the end user. Presently, the Motor Vehicles Act, 1988 and the Central Motor Vehicles Rules, 1989 are the only laws relating to automobiles, and they are inadequate in their application to autonomous cars. This article deals with the changes in the law which may be required to meet the challenges this new technology presents; these modifications will be essential to protect all stakeholders when these contraptions do come onto Indian streets. It deals with regulation of self-driving cars of levels 3 and 4.

The National Highway Traffic Safety Administration in the USA has segmented autonomous cars into five levels. For the purpose of this article we need to understand levels 3 and 4. At level 3 the car has a high level of automation and does not require the driver to constantly monitor the roadway during the trip, with only brief periods where driver control is necessary. Cars at level 4 are completely autonomous and are capable of performing all safety-critical functions; the user is only required to put in the destination and navigation details.

Each car must meet certain specifications and features, based on industry best practices, to get clearance from government authorities. Beyond the basic features required for it to be autonomous, each car must have a steering wheel, pedals and an overriding mechanism to hand control to the human operator at any time. The tests would cover the capacity to sense obstructions ahead, to interpret and adhere to traffic signs, and to understand signals given by other drivers on the road, whether mechanical or by hand; the ability to follow instructions provided by the operator, come to sudden stops, increase speed and change lanes; and the ability to identify smaller objects like children, cycles and pedestrians.

As mentioned earlier, both the CMVR and the MVA are framed for vehicles requiring constant human supervision, and their placement of liability in cases of mishaps relates only to such vehicles. Therefore, to give these cars the legal backing to actually operate autonomously on the street, it is important to include them within the ambit of the definition of ‘driver’. A possible version could be: ‘a machine capable of manoeuvring itself and which has passed a driving test.’

The Tesla Model S has an auto-pilot mode which sends back information about the places the car travels to, creating an ever-growing map with data about the type of road, traffic conditions and other pertinent information. This will be true for any autonomous car if it is to become a viable means of transportation. As promising as this large-scale collection of data is for improving the way autonomous cars understand and adapt to their surroundings, it raises concerns about consumer privacy. The information collected will inevitably include the consumer’s personal details. This is problematic because the company can use the information for purposes not immediately for the benefit of the end user, for instance advertising. The government has to make strict provisions mandating the consumer’s informed consent for any use of the data by the company. This data also has to be well protected from cyber-attacks and hacks: manufacturers should be required to maintain robust systems to ensure the safety of the information collected and to conduct periodic tests to assess the working of those systems.

When accidents happen in normal circumstances, the driver is in most cases considered liable, but with the advent of autonomous cars, fixing liability becomes difficult. To be able to identify the events immediately preceding an accident, German lawmakers have made it mandatory to fit black boxes in all autonomous cars; a similar step can be taken here too. The black box would record when the driver took control of the car (if he did so) and what malfunction could have led to the mishap. Liability would be placed on the manufacturer unless the consumer has added a new feature of his own volition.
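What such a black box might record can be sketched very simply (this is an assumption about the kind of log involved, not the German specification): an append-only series of time-stamped events covering autopilot engagement, detected malfunctions and human takeover, from which the sequence before a mishap can be reconstructed:

```python
# Minimal black-box sketch: append time-stamped driving events to a log file.
import json, time

class BlackBox:
    def __init__(self, path="blackbox.log"):
        self.path = path

    def record(self, event, **details):
        entry = {"t": time.time(), "event": event, **details}
        with open(self.path, "a") as f:      # tamper-evident storage assumed elsewhere
            f.write(json.dumps(entry) + "\n")

box = BlackBox()
box.record("autopilot_engaged")
box.record("sensor_fault", sensor="front_lidar")    # hypothetical malfunction
box.record("human_takeover", reaction_time_s=1.8)
```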

This technology promises positive outcomes, for instance reducing the number of accidents and providing an accessible means of transportation for the old, the disabled and others. But to harness the full potential of these vehicles we need a sound system of law which protects all stakeholders from the challenges that the introduction of these cars could pose.

Uber – Into the New Tomorrow


The rapid influx of technology has in recent times forced various firms to revamp their business models, and the taxi industry is no exception. In this blog post, I will discuss the Delhi government’s ban on Uber cabs and the issue of its compliance with the IT Act, 2000 or the Radio Taxi Scheme, 2006. I will analyse the ban through the economic concept of ‘entry costs’, and deliberate on the need to level the playing field between radio cab operators and taxi ‘app’ companies.


December, 2014: Fireworks and more!

December 2014 was the month in which the Indian community received a multitude of shocks, one after the other and each more powerful than the last, on the issue of internet-related legal problems.

First, we had the lamentable Uber issue, which was followed by Airtel announcing (and later withdrawing) its VoIP-data plan, which violated Net Neutrality down to the first principle. This also inspired TRAI to work on a consultation on Net Neutrality. Soon after, we learnt that SoI had filed a case against Google for “displaying an incorrect map” of India. And just as the month was wrapping up, Airtel and Hathway accidentally blocked all of imgur rather than just a single image.

The biggest surprise of all, finally and unquestionably, came on New Year’s Eve itself, with the Government blocking 32 websites, apparently for hosting anti-India content from ISIS, and then unblocking four, namely Github (Gist), Weebly, Vimeo and DailyMotion.

This was also the month when the ShammiWitness twitter account issue came to light, and some very interesting developments took place in the cases on Sections 66A, 69A and 74 of the Information Technology Act.

December, then, has been quite the interesting month. We’ve covered the issues involved in the above disputes in detail earlier. Specifically, on the question of blocking websites, I’d refer you to my post here, and Veera’s post here. On the question of net neutrality, I’d refer you to my post here. We will be coming out with comments on the ShammiWitness issue and on Uber shortly.

But it is unarguable that the events of December 2014 have had an immense effect on internet regulation in India, the ripples of which we’ll be dealing with for some time yet. The sad part, though, is the fact that most of these changes seem to indicate that as far as the Internet is concerned we, as a country, are headed down exactly the wrong roads.