Law Enforcement v. End-to-End Encryption

In a post-Snowden world, there is greater awareness of and interest in the right to privacy in digital communications, and in knowing when the government can snoop in on personal conversations. A majority of the communications taking place today are digital and involve two crucial processes: encryption and decryption. Encryption (the conversion of information into a code) happens when a message or call is initiated, while decryption (the conversion of the code back into useful information) happens when the message or call reaches the recipient. There are multiple nuances in this process, both technological and legal.
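The two processes are inverses of each other. As a toy illustration (not a real cipher, and not how any messaging app actually encrypts data), XOR-ing a message with a shared key scrambles it, and XOR-ing again with the same key restores it:

```python
# Toy sketch only: a repeating-key XOR "cipher" to show that
# decryption is the inverse of encryption. Key and message are
# made-up values for the example.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the repeating key.
    Applying it twice with the same key restores the original."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"shared-secret"
plaintext = b"meet at noon"

ciphertext = xor_cipher(plaintext, key)   # encryption, on the sender's side
recovered = xor_cipher(ciphertext, key)   # decryption, on the recipient's side

assert recovered == plaintext
```

Real systems use vetted ciphers such as AES rather than XOR, but the send-encrypt / receive-decrypt symmetry is the same.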

For quite a while now, WhatsApp chats have displayed the message "Messages and chats are now protected with end-to-end encryption." End-to-end encryption, or E2EE (first used in the program Pretty Good Privacy, released by Phil Zimmermann in 1991), is a form of encryption that makes it improbable, if not impossible, to intercept a private conversation. Traditionally, there are three points at which a conversation can be intercepted: first, on the sender's device before encryption; second, while the encoded information is in transmission; and third, on the recipient's device after decryption.

The two ends, the sender and the recipient, remain vulnerable to unwanted physical access or hacking, but it is at the second point that the majority of snooping takes place. It is here that E2EE becomes useful, allowing tech companies to sidestep interception orders and, by extension, protect user data. E2EE, in the simplest of terms, means that the two people communicating are the only ones who hold the keys needed to decrypt each other's messages; anyone else who intercepts the data sees nothing but unintelligible code. Most communication apps and telecommunication providers keep the decryption keys on their own servers, which gives them the ability to see or hear any conversation that passes through; E2EE removes this vulnerability by giving the keys to the two individuals rather than the service provider. Imagine the system as a letter-box: anyone can put in a message and lock it (the public key), but only the intended recipient has the key to unlock it (the private key).
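The letter-box idea can be made concrete with textbook-sized RSA, the classic public-key scheme. The numbers below are deliberately tiny toy values (real keys are thousands of bits long), and this sketch is only meant to show the asymmetry: anyone holding the public pair (e, n) can lock a message, but only the holder of the private exponent d can unlock it.

```python
# Toy RSA, purely to illustrate the letter-box analogy.
# p, q, e are standard textbook example values, not real key material.

p, q = 61, 53                 # two small primes (kept secret)
n = p * q                     # modulus, shared in both keys
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent: the open letter-box slot
d = pow(e, -1, phi)           # private exponent: the recipient's key

def encrypt(m: int) -> int:
    """Anyone can 'lock' a message using only the public pair (e, n)."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Only the holder of d can 'unlock' the ciphertext."""
    return pow(c, d, n)

message = 65
assert decrypt(encrypt(message)) == message
```

In a real E2EE messenger, public-key exchange like this is used to agree on keys, after which the conversation itself is encrypted symmetrically; the service provider never holds the private keys.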

This effective bypassing of the service providers has both pros and cons. On one hand, it allows greater freedom to express opinions and beliefs without fear of sanction; on the other, it hampers the intelligence activities governments consider vital for national security. Governments, in ensuring the safety of their citizens, conduct covert operations such as surveillance, which enables them to intercept communications between suspects and may lead to the prevention of terrorist threats. The importance of this can be gauged from the fact that a recent attack in London involved communications over encrypted devices, and that ISIS has issued instructions to its followers on how to communicate through encrypted apps to plan attacks.

Privacy, however, is not the only lens through which encryption can be seen in a global setting. The promotion and use of E2EE is also a human rights issue, as it furthers individual privacy and freedom of expression, two rights contained in the International Covenant on Civil and Political Rights (ICCPR). Yet UN reports such as "The Right to Privacy in the Digital Age" and "The Promotion and Protection of the Right to Freedom of Opinion and Expression" expound on the idea that judicially ordered decryption does not violate human rights, and lay down a three-part test for when a government may legitimately restrict encryption.

There is an intense debate about curbing the power of law enforcement authorities to gather information from service providers through court orders. The debate came into public light after an incident in Brazil, where Facebook had its assets frozen following its non-compliance with a court order to provide the WhatsApp conversations of a bank-robbery gang; information Facebook could not have provided even if it had wanted to, as it had no means of doing so after enabling E2EE. This incident, coupled with Apple's refusal of the FBI's demand to decrypt the San Bernardino shooter's iPhone and install a backdoor in its operating system for law enforcement use, prompted the UK government to take the issue a step further by enacting new legislation permitting surveillance through equipment interference.

The intelligence-gathering aspect of E2EE is marked by internal contradictions within the state itself: the state needs strong encryption tools to secure its own data, yet resents them because they make public surveillance harder. Hence, while promoting stronger encryption programs for state use, governments limit their citizens' ability to use them. Certain states, by contrast, such as Germany, encourage public use of E2EE precisely to blunt the covert intelligence-gathering abilities of the Five Eyes countries.

Another facet of this issue is restrictions on commerce and export. Profit-driven tech companies promote E2EE to boost sales (more popularity, more sales) and oppose state-imposed rules, since such rules would mean they could import or build only applications that allow third-party access. And since every state strives for stronger encryption tools to protect its own data and counter emerging security threats, states also regulate the sale of such technology to certain countries in pursuit of national security and foreign policy goals.

A hypothetical solution to this conflict between law enforcement and national security on one side and privacy on the other could be an internal system within service providers such as WhatsApp. The system would compare every number that tries to send a message against a blacklist (the numbers law enforcement wants to track, with judicial approval). When a blacklisted number tries to send a message, the server would stop E2EE for that number from that point onwards, and the collected information would be stored in a separate database accessible only to the company's department handling judicial obligations.
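The scheme above is hypothetical, and so is everything in the sketch below: the class, the escrow store, and the routing logic are invented for illustration, and no real messaging service is known to work this way.

```python
# A sketch of the hypothetical blacklist scheme described above.
# All names (Server, escrow_db, route) are made up for the example.

class Server:
    def __init__(self, blacklist):
        self.blacklist = set(blacklist)   # court-approved numbers only
        self.escrow_db = []               # separate, access-restricted store

    def route(self, sender: str, recipient: str, message: str):
        """Forward a message; retain a copy only for blacklisted senders."""
        if sender in self.blacklist:
            # E2EE is disabled for this sender from this point on, so the
            # server can keep a plaintext copy for the judicial-affairs team.
            self.escrow_db.append((sender, recipient, message))
        # In either case the message is delivered to the recipient.
        return (recipient, message)

server = Server(blacklist=["+10000000001"])
server.route("+10000000001", "+10000000002", "hello")   # retained in escrow
server.route("+19999999999", "+10000000002", "hi")      # passes through E2EE
assert len(server.escrow_db) == 1
```

Even as a sketch, this highlights the design trade-off: the escrow database itself becomes a high-value target, which is one of the standard objections to any such scheme.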

Technology has benefited us in ways that cannot be counted and will continue to do so, but its unaccountable use is also capable of great harm. The prioritisation of security over privacy might result in a paradigm shift against rigid privacy laws, as was prima facie visible among Florida lawmakers after the Orlando attacks. It is the cooperation of law and technology together that will result in swifter dispensation of justice and achieve a balance between privacy and public safety. The adoption of a system such as the one suggested above might be a first step towards striking that balance.

Internet Shutdowns

The internet has grown from being just a communication medium to becoming a marketplace, an entertainment source, a news centre, and much more. At any given moment, thousands of gigabytes of information are travelling across the planet. But all of this comes to a standstill when the internet shuts down. An internet shutdown is a government-enforced blanket restriction on the use of the internet in a region for a particular period of time. The stated reasons vary from a law and order situation to a dignitary visiting the area. It is worth analysing whether such shutdowns can be justified, even on the direst of grounds.

These shutdowns can be initiated with little effort, as far as the authorities are concerned, because Internet Service Providers (ISPs) do not hesitate to follow government 'directives'. The justifications offered range from the possibly reasonable to the absurd. For example, in February, the Gujarat government blocked mobile internet services across the state because the Gujarat State Subsidiary Selection Board was conducting exams to recruit revenue accountants. This was done given the "sensitive nature of the exam" and because it was "necessary to do so to prevent misuse of mobile phones." The step is plainly disproportionate in its effects: there are other ways for exam officials to stop malpractice, and cutting mobile internet across an entire state is not only inefficient but highly disruptive to the general populace. The question of proportionality becomes more complicated when the government justifies a shutdown on grounds of law and order or national security.

Before we answer these questions, we need to probe the very foundation on which a democracy functions: discourse. The nature of democratic discourse necessitates access to information. When information is lacking, public discourse loses its functionality, as participants' understanding will not suffice to produce targeted solutions to the specific problems being addressed. The internet has become one of the most important mediums for information dissemination, able to provide ground-level data about people in places conventional media cannot enter or does not want to. It acts as a medium through which sections of society normally outside the purview of the mainstream can raise their voice for the general public to hear. By virtue of this, it becomes an important tool in furtherance of the democratic process and the right to free speech and expression. Keeping this extremely important function of the internet in mind, we can now analyse the problem of internet shutdowns.

The following questions must be answered to even consider shutting down the internet: first, whether the problem is so grave that such an extreme step becomes necessary; second, whether the government has considered other alternatives, even if the problem is grave enough; third, whether the functioning of a major communication channel will benefit or harm the general population; and fourth, whether there are enough safeguards to ensure that the government does not abuse the power it has been given.

The concept of a shutdown can be problematic even when there is a valid justification. When unruly sections are using a few limited channels to spread hate and rumours, the government can shut down those specific channels and contain the situation instead of cutting off internet access entirely, which affects the businesses and lives of millions of innocent parties. Targeted blocking also reduces the collateral damage of shutting down harmless websites or, more tangibly, the systems banks run on. If, despite all this, a blanket restriction is required, the question arises as to who should be able to impose it. Section 144 of the Criminal Procedure Code has been employed here, but its validity has been called into question multiple times.

There are enormous free speech implications in denying people the use of an important communication channel. Earlier, the conventional media were the sole sources of information for the general public, making them the gatekeepers of information. These organisations, though free to a great extent, can be influenced by the government not to attack it directly or not to report certain atrocities, through the fear of some sort of sanction. With the advent of internet-enabled communication channels, every individual can contribute to the broader pool of information. By shutting down the internet, the government cuts off information about the situation at its source. This raises accountability concerns, as there is little ground-level data about atrocities or any excessive use of force by the law enforcement authorities.

Furthermore, when there is a law and order situation, as when the Gujarat government shut down the internet during the Patidar movement, it becomes very important that people are not misled by fake news and rumours, and the internet can be a very useful tool to fight fire with fire. The government can use the same channels to reach out to the public and reduce confusion. During the Cauvery riots, for example, the Bangalore City Police effectively used Twitter and Facebook to dispel rumours and instil a sense of security among the people. Not letting average citizens participate and engage with other individuals and the state during tough times further alienates them. The safety of loved ones during such situations is the topmost priority of the general populace; the internet serves as a medium to communicate with them, and during a shutdown that access is cut off. This only leads to further chaos and unrest, making the shutdown counterproductive.

One of the most significant and tangible harms of blocking the internet falls on business establishments. According to the Brookings Institution, internet shutdowns cost India an estimated $968 million between July 2015 and June 2016. Banks are heavily dependent on the internet for their daily transactions and face massive problems during a shutdown: the infrastructure behind debit and credit cards, ATMs and internet banking all runs on the internet. This also affects brick-and-mortar stores, a significant number of which have moved towards digital payment modes post-demonetisation. Needless to say, the most immediate impact is felt by e-commerce websites, which by the very nature of their activities are reliant on the internet.

There is also a more insidious side to this. When an easy measure like cutting access to a communication medium is used to address a broader societal complication, it is only a surface-level step that cuts off engagement with the issue. Shutting down the internet is a highly publicised move, which makes it seem like part of a bigger set of measures being used to tackle the situation. This creates an illusion while the actual problem persists, and the state continues to rely on coercive power alone. Targeted measures that would provide much better long-term solutions are not considered, due to a lack of political will or simple lethargy on the part of the establishment. This also points to a wider lack of understanding, both of the actual issues at hand and of how the internet works: how interconnected the populace is with the internet, and thus the effect a shutdown has on the entire society.

As more and more people join the internet and the government pushes towards a digitised economy, it becomes all the more necessary not to shut the internet down. Local, state and national governments need to take responsibility for public disorder and engage with the issue at hand. The state needs to balance the interests of national security with the protection of individual rights.

The Internet Freedom Foundation has launched a campaign to address this very issue. Support them by signing the petition asking the government to make regulations that reduce arbitrariness in imposing these shutdowns.

Consent to Cookie: Analysis of European ePrivacy Regulations

This article is an analysis of the newly proposed 'Regulation on Privacy and Electronic Communications' of the European Union.

A huge part of our daily life now revolves around websites and communication services like Facebook, WhatsApp, Skype, etc. The suddenness with which these services became popular left law-making authorities with little opportunity to direct and regulate their actions. For the most part, these services worked on the basis of self-regulation and the terms and conditions their consumers accepted. They gave people access to their platforms for free in return for personal data about the consumer, which is then sold to advertisers, who send 'personalised' advertisements back to the consumer on the basis of the information received.

With growing consciousness about the large-scale misuse that can take place if the data falls into wrong hands, citizens have started to seek accountability on part of these websites. With increasing usage of online services in our daily lives and growing awareness about the importance of privacy, the pressure on governments to make stricter privacy laws is increasing.

The nature of the data these services collect can be extremely personal, and with no checks on what may be collected there is a clear possibility of abuse: data can be sold with no accountability for how it is handled. Regulations on data collection, data retention, data sharing and advertising are required, and have for the most part been lacking in almost all countries. The European Union, meanwhile, has been in a constant tussle over regulation with internet giants like Google, Facebook and Amazon, which operate in Europe but are not under its jurisdiction; indeed, they are not under the jurisdiction of any country except the ones in which they are based. On 10 January 2017 the EU released a proposal on the privacy of individuals using electronic communications, intended to come into force in May 2018.

The objective of the ‘Regulation on Privacy and Electronic Communications’ is to strengthen the data protection framework in the EU. The key highlights of the data protection laws are as follows:

  • Unified set of Rules across EU – These rules and regulations will be valid and enforceable across the European Union and will provide a standard compliance framework for the companies functioning in the Union.
  • Newer players – Over-the-top (OTT) services are services used in place of traditional ones such as SMS and voice calls. The law seeks to regulate OTT services such as WhatsApp, Gmail, Viber, Skype, etc., as well as communication between Internet-of-Things devices, which have been outside the legal framework because the existing laws and regulations are not wide enough in scope to cover the technology used.
  • Cookies – A cookie is information about the user's activity on a website, such as the contents of the user's shopping cart. The new regulations make it easier for end-users to give consent for cookies through their web browser settings, putting users more in control of the kind of data being shared.
  • Protection against spam – The proposal bans unsolicited electronic communication from mediums like email, phone calls, SMS, etc. This proposal basically places a restriction on spam, mass sending of mails or messages with advertisements with or without the end-user consenting to receive those advertisements.
  • Emphasis on Consent – The regulation lays strict emphasis on the idea of user-consent in terms of any data being used for any purpose that is not strictly necessary to provide that service. The consent in this case should be ‘freely given, specific, informed, active and unambiguous consent expressed by a statement or clear affirmative action’.
  • Limited power to use metadata – Unless the data is necessary for a legal purpose, the service provider must either erase metadata or make it anonymous. Metadata is data about data; it is used by Internet Service Providers, websites and governments to summarise the data available, creating patterns or generalised behaviour so that specific data can be used easily.
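One common way to "make the data anonymous" in the sense of the last point is pseudonymisation: replacing identifiers with keyed hashes, so that patterns can still be analysed without revealing who is who. The sketch below illustrates the idea; the key name and record fields are invented for the example, and real compliance requires more than this (keyed hashing alone is pseudonymisation, not full anonymisation, under EU law).

```python
# Sketch: pseudonymising an identifier in a metadata record with a
# keyed hash (HMAC-SHA256). The key and field names are made up.

import hmac
import hashlib

PSEUDONYM_KEY = b"rotate-me-regularly"   # assumption: a secret, rotated key

def pseudonymise(identifier: str) -> str:
    """Deterministic pseudonym: the same input always maps to the same
    token, but the original identifier cannot be read back from it."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(),
                    hashlib.sha256).hexdigest()[:16]

record = {"caller": "+442071234567", "duration_s": 134}
anonymised = {**record, "caller": pseudonymise(record["caller"])}

assert anonymised["caller"] != record["caller"]          # identity hidden
assert pseudonymise(record["caller"]) == anonymised["caller"]  # patterns kept
```

Because the mapping is deterministic, an analyst can still count calls per (pseudonymous) caller, which is exactly the "patterns or generalised behaviour" use the Regulation contemplates.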

The Regulation has far-reaching effects, taking into its fold businesses which were earlier not covered and extending to any technology company that provides electronic communications services in the Union. Businesses will have to bear costs to redesign their communication systems and to ensure that future software updates are designed so that users' consent is taken.

The main argument raised by the proposal in favour of bringing in the new Regulation is that an increasing number of users want control over their data and want to know where their data is going and who it is accessed by. This is because of the growing consciousness about the far-reaching effects of providing huge quantities of personal information to private entities with little or no check on the use of the data.

The biggest relief given to both users and service providers is the change in the cookie policy. The previous regulation made it mandatory for a website to take consent before any cookie was placed on the user's computer, which would have led to users being bombarded with requests. The new regulation lets the user choose cookie settings from a range of high-to-low privacy while installing the browser, with a notification every six months reminding them that they can change the setting.

There is, however, the issue of how websites will know that a user has opted out of receiving targeted advertisements. One possibility is the Do-Not-Track (DNT) tool, which, when turned on, makes the web browser send a signal to websites that the user does not wish to be tracked. The system was used in the past, but given the lack of industry consensus on how it should work, and the fact that a large number of websites simply ignored DNT signals, it lost its utility. This Regulation could give the system the push it needs: if a user chooses not to be tracked, websites would have to respect that choice.
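Mechanically, the DNT signal is just an HTTP request header (`DNT: 1`) sent by the browser. A server honouring it only has to check that header before setting any tracking cookie. The handler and cookie names below are invented for illustration; only the `DNT` header itself is a real convention.

```python
# Sketch: honouring the browser's DNT signal server-side.
# The ad_id cookie is a hypothetical advertising identifier.

def should_set_tracking_cookie(headers: dict) -> bool:
    """Respect the user's Do-Not-Track preference ("DNT: 1")."""
    return headers.get("DNT") != "1"

def build_response_headers(request_headers: dict) -> dict:
    response = {"Content-Type": "text/html"}
    if should_set_tracking_cookie(request_headers):
        # Only set the advertising cookie when tracking is permitted.
        response["Set-Cookie"] = "ad_id=abc123; Max-Age=31536000"
    return response

assert "Set-Cookie" not in build_response_headers({"DNT": "1"})
assert "Set-Cookie" in build_response_headers({})
```

The technical check is trivial; as the article notes, the historical problem was never implementation difficulty but the absence of any obligation to respect the signal.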

The Regulation also makes consent the central feature of the communications system. Earlier, consent was implied: an individual's use of an operator's service was itself treated as consent to the operator collecting information about the end-user. This could have a huge effect on how these entities earn revenue, which in some cases comes solely from advertising. Technology companies spend huge amounts running their servers and paying the staff who maintain the website and research newer technology to improve their services. Companies dependent on advertising could lose a large portion of their revenue if a large number of users opt out of providing information and receiving targeted advertisements.

Several critics from the industry argue that the new framework will make things extremely difficult for operators, as they do not necessarily classify data; the multiple layers of data and information collected are simply labelled 'analytics'. Websites do not always know the purpose for which data will be used until after it is collected, which makes it difficult for an operator to decide what falls under the law. In addition, operators depend on third parties to collect information for them. The Regulation makes it abundantly clear that the information collected should be the bare minimum required to provide the service, plus data required for web audience measurement. Third parties would also be protected under the law if the information they collect is necessary to provide those services or if the user has already given consent. A more transparent system would make operators accountable, as it would give a factual basis for assessing whether an operator complies with reasonable ethical standards.

Users also have an option under the law not to receive unsolicited calls, messages and mails. Such calls, messages and mails are a huge nuisance, and the companies responsible currently face no liability; among EU countries, only the UK has strict laws and hefty fines for this kind of direct advertising. The new system would require the user's prior consent both when obtaining the information and before sending advertisements, along with information about the nature of the marketing and how to withdraw from it. Even where consent has been given, the law mandates that the procedure for opting out be communicated to the user in clear terms, and operators will have to use a prefix for all marketing calls. This is similar to India, where the TRAI-initiated Do-Not-Disturb system gives users an option to block various kinds of unsolicited and automated advertisements through calls and messages.

The Regulation can form a benchmark for other countries. With its central focus on the privacy and consent of the user, it imposes a requirement of transparency and accountability on the operator, a necessary condition for running any organisation providing such services. While the changes may seem radical in terms of the costs the industry as a whole may incur, given the sensitive nature of the information these players deal with, such regulations will, and should, become the norm for everyone in the market and any new players who wish to join it.

TRAI’s Consultation Paper on Net Neutrality and the Regulatory Approach to Net Neutrality in India

Net neutrality is the principle of non-discrimination among data on the Internet by Internet Service Providers (ISPs): regardless of its source or content, all data must be treated equally. There has been a growing movement for the recognition and acceptance of net neutrality as a principle, not just by the people but by the Government itself, leading to a desire for regulations and policies that protect it. Recognising this, the Telecom Regulatory Authority of India (TRAI) held pre-consultations and drafted a consultation paper on the specific issue of net neutrality.

In this consultation paper the TRAI focused on a few core issues regarding net neutrality in India: the definition and principles of net neutrality, transparency, traffic management, and policy and regulatory approaches to the issue as a whole. The paper serves both as recognition of the important legal and policy questions to be addressed and as a clearer explanation of net neutrality itself. The opinions of multiple stakeholders, including Telecom Service Providers (TSPs), content providers and academicians, were taken into account as well.

When discussing the policy and regulatory approaches India could take, the consultation paper was, interestingly, far more ambivalent than in any of its other sections. It set out three different approaches towards regulation: cautious observation, tentative refinement, and active reform of the regulations and laws relating to net neutrality.

There is a larger question first: whether the government should regulate the Internet and enforce principles like net neutrality at all. The Electronic Frontier Foundation (EFF) raised this issue when the Federal Communications Commission (FCC) in the United States passed the Open Internet Order of 2010, an order mandating, among other things, net neutrality. Yet, as Pranesh Prakash of the Centre for Internet and Society has pointed out, there is a need for regulation: it prevents monopolisation and helps ensure goals such as universality and maximum utility. Without regulation, net neutrality would be an ancillary consideration for ISPs, and practices such as throttling would be looked upon more favourably. There are, of course, problems with regulation, both in the form of bias within the government and of over-regulation, but with adequate stakeholder representation these can be mitigated. The TRAI's consultation paper has fulfilled this aspect, and ideally it will strike the correct balance with regard to regulation.

Countries around the world enforce (or don't enforce) net neutrality in different ways. The United States' regulations arose from the belief that the internet is a utility and not a luxury (to the point of being court-mandated); its rules, as with the Open Internet Order, were thus more citizen-focused (how effective they are, or how genuinely focused on citizens' needs, is a separate question). Currently, however, the Trump administration is rolling back regulations that were unpopular with big telecom companies. The European Union ensured net neutrality in 2015, though its rules have been criticised as plagued with loopholes. In China, where the regional ISPs are all owned by the government, net neutrality is, at least nominally, ensured by the government itself.

India, on the other hand, is at a somewhat nascent stage with regard to regulation. Initially there was close to no government action in favour of net neutrality, but on 8 February 2016 the TRAI barred telecom service providers from charging differential rates for data services, primarily in response to the actions of Facebook and Airtel, essentially upholding the principle of net neutrality. Yet, in the absence of a legal framework, as the consultation paper points out, there is still scope for private ISPs to violate the principle. It is at this juncture that the paper asks what manner of governance would be most suitable.

The current policy adopted by the TRAI, as in several other countries, is simply to wait and watch: the TRAI observes the practices of service providers, leaving them free to act as they see fit. This is problematic, considering that violations are occurring now and, in the absence of any legal framework, can continue unchallenged. The predictable absence of retribution invites further abuse.

The other option identified by the TRAI is self-regulation, under which all licensed ISPs would voluntarily adhere to the core principles of net neutrality, with the TRAI providing overall guidance and monitoring. This model is well practised in Europe, including in Denmark, Sweden and the UK, but it has pitfalls. The lack of uniformity, from transparency to traffic management, is a cause for concern and leads to a lack of optimisation; across multiple countries this effect is magnified manifold, especially in a Europe made up of many highly developed small nations. The method has, however, had success in Norway. Applying it to India, much like most other Western concepts applied to India, causes problems because of the difference in context: it is easier to entrust this power to small corporations serving a relatively small population, far harder in a country as large as India with a larger number of big corporations. This is not to say the approach is misguided or impossible, merely that it would be neither as easy nor, perhaps, as successful as it has been in Norway.

Under either of these two options, the TRAI plans to act upon any notification of abuse of net neutrality or discrimination, but there are significant problems, some of which the TRAI itself identified. The first is failure or delay in identifying cases of discrimination, a problem that is extremely difficult to counter: users rarely have the opportunity to actually identify and understand discrimination, and bodies like the TRAI do not have the resources to cover the entire nation. The second issue is the TRAI's own lack of power; in the absence of a legal framework, it cannot easily impose its will unilaterally. Further, and relating to the third problem identified, this lack of law leads to uncertainty, and uncertainty is a disincentive for businesses to enter the market. There is, in addition, no adequate definition of net neutrality, nor of the limits of what is required in terms of transparency and traffic management. While the consultation paper seeks to address these gaps, it cannot single-handedly solve them.

The consultation paper provides for a course of action that can be taken in the case of active reforms, primarily in the form of licensing, regulations and legislative changes. Licensing allows for a form of control, as only those ISPs who abide by the standards set out by the TRAI would be granted licenses. Australia, through the ACMA, and Bangladesh, through the BTRC, both license ISPs. For licensing to be successful, however, there must first be an accurate definition and understanding of what constitutes the core of net neutrality, and thus of the limits that must be placed on ISPs to protect it. Further, licensing all ISPs is a laborious task, and monitoring them after licensing is even more difficult. While there is an association of ISPs, it is essentially defunct (as can ironically be seen from its own site). The TRAI simply does not have the resources to examine whether every ISP maintains the conduct required by its license. Finally, the TRAI needs regulatory power backed by law in order to actually make decisions or levy punishments; explicitly laying down what is permitted in a license is only the first step.

Regulations in the form of Quality of Service requirements could reduce discrimination on the basis of quality. Such regulation could prevent throttling, blocking and other forms of preferential treatment, and lay down a particular standard to be met regardless of content. It would also allow for a mandatory level of transparency. While this action is likely to be helpful, it is not a perfect solution: creating the standard is difficult, and enforcing it even more so. It also does not combat the entire range of discriminatory practices that occur, and acts more as a stopgap.

Legislative changes are the most effective route to active reforms, though, as pointed out earlier, providing the government this power is not an unambiguously positive step. Yet if the TRAI had legislative power to back its actions, or even another external, quasi-judicial body to act in its stead, its decisions would become enforceable, creating a greater incentive to follow the principle of net neutrality.

In terms of regulations and policy approaches, the TRAI confines itself to these issues. The paper also raises the importance of monitoring; however, there is a distinct lack of both innovative solutions and a recognition of the specific domestic issues India faces. Instead, the paper focuses on the approaches of foreign countries, hoping that a patchwork solution will work for all.

In conclusion, the paper raises certain questions which are to be answered in future consultations. Which body should be given the power of monitoring and supervision, how collaboration with other stakeholders should work, and the manner in which the legal framework should evolve are all directly relevant and extremely pertinent to upholding net neutrality effectively. Hopefully, these questions, and the problems with the measures raised in the paper, will be addressed during the actual consultations on the 15th of February, 2017.

For Further Reading:

IFF Summary on TRAI’s Paper –

Vox – Saving Net Neutrality through Republican Legislation –

International Telecommunications Union – Discussion paper on regulations of Net Neutrality –


Reconstructing a Crime Scene: Virtual Reality in Courtrooms

Virtual Reality is the latest buzz in the technological sphere, especially with the arrival of VR headsets from giants like Facebook (Oculus Rift), Sony (PlayStation VR) and HTC (Vive). It is a relatively old concept (the Aspen Movie Map, created at MIT in 1978, is an early example), but with advances in contemporary technologies, virtual reality has progressed by leaps and bounds, from a mere 3-D image to an immersive and interactive system. Apart from its use in gaming and other entertainment, it has been proposed to use this technology in a rather unexpected setting: judicial proceedings.

Prior to using the technology, it is important to analyse the concerns it raises in a courtroom setting. The concerns are manifold: manipulation of facts regarding the actual scene of the crime or accident, delays in court proceedings, the cost of an expensive procedure, and the possibility of manipulating the trier of fact (judge or jury), i.e. inducing bias. While all of these are relevant questions regarding the use of virtual reality in courtrooms, each of these concerns can be mitigated.

The problem repeatedly raised regarding VR, the manipulation of facts, actually rests on a misconception of what VR is capable of. Virtual reality is a computer-generated 3-D interactive simulation; in simpler terms, a computer-generated environment that stimulates some of our senses, like vision and hearing, to make the artificial environment seem real. The virtual environment, in this case the scene of the crime or accident, is built by the computer from 2-D images of the real-world place, with minor adjustments by a technician to give the architectural model the same aesthetics as the real-world location. In the case of a shooting, the area of impact and the trajectory of the bullets, or in the case of a stabbing, the angle of the wound and the depth of weapon penetration, are all put into the simulation after gathering factual information from standard forensic tests. Hence, nothing in the simulation is the manifestation of a recollection from memory. The layout of the virtual crime scene can be checked against photographs of the real crime scene for any inconsistencies, removing the question of manipulation of facts.

The issue of delay can also be dealt with: software like Crime Scene Virtual Tour and IC-CRIME can create a virtual simulation from 2-D images of the crime scene significantly faster. The cost concern is a pragmatic one: given the novelty, and hence expense, of the technology, it can be argued that the relatively wealthy party in a case might use its resources to deploy it while the other party cannot afford to fight fire with fire. But since the facts of the scene remain the same for both, it should be the prerogative of the state to provide a virtual simulation to be used by both parties. Moreover, as the technology improves, its cost will come down, making it more accessible. Countries such as the UK, the USA and Switzerland have already granted funds for further research into and use of this technology in courtrooms.

The concern regarding the introduction of bias can also be answered: virtual reality, rather than inducing bias, has been shown to reduce bias in both juries and judges. Firstly, no trial is free of bias. Bias is an "inclination or prejudice for or against one person or group, especially in a way considered to be unfair." It is of two types, explicit and implicit, and every human possesses both, formed through an individual's life experiences. The notion of a right to a fair trial rests on the judge or jury being unbiased and hence capable of looking at evidence objectively. To further this notion, a process called voir dire is performed to eliminate prospective jury members who show certain signs of bias, i.e. explicit bias.

Since explicit bias manifests itself in our actions and is easy to root out, the problem lies with implicit bias, which rests in our subconscious and is very hard to notice; an individual may or may not be aware of his or her own implicit biases. Studies have shown that the average American judge or juror forms an implicit association between a black person and weapons, and between a white person and innocence. Contemporary studies have also shown that a black judge is more likely to give a harsher sentence to a white person than to a black person, and vice versa. From these results it can reasonably be said that the strong preference of white judges for white people and of black judges for black people is based on a very fundamental emotion: empathy, the ability to feel from another's perspective.

Virtual reality comes into play here by providing the very platform to see another person's perspective. Studies by Prof. Mel Slater of the University of Barcelona have shown that when people with implicit racial biases (as measured in the studies above) were placed in a virtual simulation and shown a mirror reflecting an individual of a different race (a technique called body-swapping), they exhibited reduced implicit bias and more empathy. Since bias and empathy cannot be rooted out of an individual, the next best thing is to have a diversity of biases, so that no party starts severely disadvantaged.

The role of empathy in a judicial decision is not quantifiable, but its presence is unmistakable even when not explicitly expressed. Judges hesitate to show empathy towards any party because of the stigma attached: any display of empathy by a judge in a professional capacity is treated as taboo, not to be condoned. The more practical reason is that any exhibition of empathy might invite strict scrutiny from the appellate courts and, with it, the possibility of a mark against their performance reviews.

But in reality, we need judges to be more empathetic, so as to understand why people do certain things and assign corresponding sentences, since the purpose of law is not just retribution but also the rehabilitation of the convict back into society. As much as we would like to think that the law is absolute and clear, it is not always so, and there is much room for a judge's discretion.

Apart from the problems, and their solutions, discussed above, the use of virtual reality in the courtroom has a few more advantages. Stanford studies have shown that virtual reality simulations can help check the reliability of a witness's testimony by replaying the simulation from the witness's perspective. For example, if a witness testifies to having seen a particular crime happen while standing somewhere far away from the crime scene, the simulation can be used to check whether the witness had a line of sight to the crime from where he said he was standing. VR can also preserve a crime scene in the virtual world when preserving it in the real world is not possible, such as a homicide on a busy street or an accident on a bridge.
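The line-of-sight check described above is, at bottom, simple geometry that a simulation can automate. Below is a minimal 2-D sketch of the idea; the coordinates, wall positions and function names are all hypothetical and do not come from any actual courtroom software, which would work with full 3-D scene models rather than flat segments.

```python
# Minimal 2-D line-of-sight check: does the straight segment from the
# witness's claimed position to the crime location cross a wall?
# All coordinates and names here are hypothetical, for illustration only.

def _ccw(a, b, c):
    # Cross product sign: positive if the turn a -> b -> c is counter-clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2."""
    d1, d2 = _ccw(q1, q2, p1), _ccw(q1, q2, p2)
    d3, d4 = _ccw(p1, p2, q1), _ccw(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def has_line_of_sight(witness, crime, walls):
    """True if no wall segment blocks the witness's view of the crime spot."""
    return not any(segments_intersect(witness, crime, w1, w2)
                   for w1, w2 in walls)

witness = (0.0, 0.0)
crime = (10.0, 0.0)
blocking_wall = [((5.0, -2.0), (5.0, 2.0))]  # crosses the sight line
side_wall = [((5.0, 1.0), (5.0, 3.0))]       # off to one side

print(has_line_of_sight(witness, crime, blocking_wall))  # False
print(has_line_of_sight(witness, crime, side_wall))      # True
```

A real reconstruction would run the same kind of test against a full 3-D model of the scene, but the principle is identical: the claim "I could see it from where I stood" reduces to a geometric query that either party can verify.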

It is evident from the studies conducted so far that the use of VR in modern-day courtrooms is quite plausible, given that it can channel the human factor, i.e. implicit emotion, in a way that is beneficial to all. Moreover, the practical concerns can be mitigated with further research, as the technology becomes more adaptive and hence more efficient for courtroom purposes. The pros of using this technology in courtrooms do appear to outweigh any significant foreseeable impediments, and hence further research into, and adoption of, such technology should be promoted in courtrooms around the world for faster and more effective dispensation of justice.

PATENTING OF HUMAN GENES: Intellectual Property vs Access to Healthcare & Research

The case briefs of Association for Molecular Pathology v. Myriad Genetics, amongst several moving stories of victims of gene patents, contained the story of Abigail, a 10-year-old with long QT syndrome, a serious heart condition that, if left untreated, can result in sudden death. A company had obtained patents on two genes associated with this condition and developed a test to diagnose the syndrome, but then went bankrupt and never offered the test. Another lab tried to offer the test to Abigail, but the company holding the patents threatened to sue it for infringement. As a result, for two years, no test was available. During that time, Abigail died of undiagnosed long QT syndrome.

In 1790, the US Government started issuing patents under the Patent Act of 1790, with the motive of encouraging arts and sciences. These intellectual property rights slowly became the biggest statutory safeguards of research and investment in a democracy. Edible business cards, nicotine-infused coffee, a rock-paper-scissors card game for people too lazy to use their hands, and thong diapers make up a list of some of the more amusing patents the USPTO has issued to protect intellectual property.

In its initial days, this IPR regime came under heavy criticism from ethical and moral quarters, which questioned how the right of commercial exploitation of resources could rest in the hands of a select few. But intensifying competition in industry (especially, in this case, the biotechnology industry) called for some form of incentive for investment in research, which came in the form of patents.


Genes, as we have pointed out before, are units of heredity. A gene is a segment of DNA that codes for a specific protein or set of proteins. In this article, we discuss the legal framework of gene patenting with respect to three forms: DNA in its natural cellular environment, isolated genomic DNA, and modified synthetic cDNA. Scientists have developed methods of extracting DNA from its natural cellular environment, which is later used for diagnosis through gene sequencing. This isolated genomic DNA is sometimes modified by splicing out the non-coding introns to produce a DNA molecule made only of exons. This kind of modified DNA is called cDNA, and scientists use it to express particular proteins. The difference between isolated genomic DNA and cDNA is that human modification is involved in the latter, while there is none in the former.
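The relationship between genomic DNA and cDNA described above can be pictured with a short, purely illustrative sketch. The sequence, exon coordinates and function name below are invented; in reality cDNA is synthesised in the lab from spliced mRNA, not computed from text, but the structural difference the courts cared about is exactly this one.

```python
# Toy illustration of how cDNA relates to genomic DNA: the non-coding
# introns are spliced out and only the exons remain. The sequence and
# coordinates are invented for illustration, not real genomic data.

def splice_to_cdna(genomic_seq, exons):
    """Join the exon segments of a genomic sequence, dropping the introns.

    exons: list of (start, end) half-open intervals into genomic_seq.
    """
    return "".join(genomic_seq[start:end] for start, end in exons)

# Layout: exon1 + intron + exon2 + intron + exon3 (toy data).
genomic = "ATG" + "GTAAAG" + "GAGG" + "GTAAAG" + "CCTAA"
exons = [(0, 3), (9, 13), (19, 24)]  # hypothetical exon coordinates

cdna = splice_to_cdna(genomic, exons)
print(cdna)  # "ATGGAGGCCTAA" - the three exons joined together
```

The legal distinction turns on exactly this difference: the `genomic` sequence exists as such in nature, while the `cdna` sequence, with its introns removed, is the product of human intervention.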


The landmark decision of Diamond v. Chakrabarty in 1980 opened the floodgates for the patenting of microorganisms: the Court ruled that human-made living matter is patentable. Anything under the sun made by man was considered patent-worthy, as long as it was not a discovery or a manifestation of nature. The Court clarified the threshold for obtaining a patent: the "relevant distinction was not between living and inanimate things but between products of nature, whether living or not, and human-made inventions."

“Thus, a new mineral discovered in the earth or a new plant found in the wild is not patentable subject matter. Likewise, Einstein could not patent his celebrated law that E=mc2; nor could Newton have patented the law of gravity. Such discoveries are “manifestations of . . . nature, free to all men and reserved exclusively to none.”


The United States Patent and Trademark Office, though, seems to have paid little attention to the limitations of the patent regime, which in no way allowed patents on products of nature. The Patent Office undertook an expansive interpretation of the law and went on to grant patents on a number of "engineered DNA molecules". It handed out patents on isolated DNA on the ground that, since these molecules had been secluded from their natural cellular environment, they were no longer "products of nature".

This standing USPTO practice was challenged for the first time in a 2009 case against the grant of patents to Myriad Genetics. After extensive investment and research, Myriad Genetics had discovered the locations of the BRCA1 and BRCA2 genes on the human chromosome, a landmark in medical science. Mutations of the BRCA1 and BRCA2 genes can lead to breast and ovarian cancer in women (increasing the chances of contracting breast cancer by approximately 50-80% and ovarian cancer by 20-50%, according to the case reports of Myriad Genetics v. AMP). Though the scientific community was aware of hereditary cancer risk, it did not know the exact locations of the BRCA1 and BRCA2 genes on the human chromosome. The USPTO gave Myriad Genetics the exclusive right to isolate these genes and carry out diagnostic tests. The problem started when Myriad tried to enforce these patent rights against organisations that tried to test for such gene mutations, since Myriad held an exclusive right over the isolation process, an essential step in any diagnostic test.

The case of Association for Molecular Pathology v. Myriad Genetics can be framed as a case of civil rights versus intellectual property rights. The company's patents on the BRCA1 and BRCA2 genes were ruled invalid on March 29, 2010 by Judge Robert W. Sweet in a U.S. District Court. On appeal, the Court of Appeals for the Federal Circuit reversed the trial court's judgment on July 29, 2011 and held that the genes were eligible for patents.

On December 7, 2011, the ACLU (fighting the case on behalf of petitioners) filed a petition for a writ of certiorari to the Supreme Court. On March 26, 2012, the Supreme Court vacated the Federal Circuit’s judgment and remanded the case for further consideration in light of Mayo Collaborative Services v. Prometheus Laboratories, Inc. in which the Supreme Court had ruled, just six days earlier, that more restrictive rules were required to patent observations about natural phenomena.

Myriad Genetics vs. AMP

“Myriad did not create or alter either the genetic information encoded in the BRCA1 and BRCA2 genes or the genetic structure of the DNA. It found an important and useful gene, but ground-breaking, innovative, or even brilliant discovery does not by itself satisfy the §101 inquiry.”


In June 2013, the nine justices of the US Supreme Court ruled that "a naturally occurring DNA segment is a product of nature and not patent eligible merely because it has been isolated". However, the Court went on to hold that manipulation of genes by human intervention to create something not found in nature (like cDNA) is patent eligible. The Court relied on the "product of nature" exclusion used in previous judgments and on 35 U.S.C. §101, under which isolated genomic DNA is not patent eligible, but something like cDNA is, owing to the significant human intervention in its production, which changes the nucleotide sequence.


Section 2(1)(j) of the Indian Patents Act, 1970 (as amended in 2005) defines an "invention" as a new product or process involving an inventive step and capable of industrial application.

The Indian Patents Act recognises patents only for inventions capable of industrial application and not for mere discoveries. Successive amendments to Indian patent law can be linked, through cross-jurisdictional reference, to the US practice of not granting patents on "products of nature". Furthermore, Section 3 of the Indian Patents Act lists a set of subject matter that cannot be patented on policy grounds.

The 2002 Calcutta High Court case of Dimminaco AG v. Controller of Patents and Designs was the first in a line of cases to question the legal provisions relating to gene patents. The company sought a patent on a process for preparing an infectious bursitis vaccine, where the final product contained living organisms. The Patent Office had denied the patent on the ground that such a process could not technically be considered a process of manufacture under the definition of "invention" in the Indian Patents Act, since the end product was living and, it claimed, inherently a part of nature. The court overturned the Patent Office's decision and remarked that there was no statutory bar on patenting a manner of manufacture even where the end product is living. The Indian Patents Act was amended in 2002 to specifically permit, among other things, microbiological and biological processes.


Indian patent law does not recognise naturally occurring DNA as patentable in any manner, owing to Section 3(c) of the Indian Patents Act, which treats mere "discoveries" as non-patent-eligible. Section 3(j) of the Act, which prohibits the patentability of any form of animal or plant body, is also relevant here.

Until 2013, the Indian Patent Office allowed patents on isolated genomic DNA, but the Indian Biotechnology Guidelines of 2013 changed this regime by treating such isolated material as mere "discovery", making it non-patent-eligible under Section 3(c) of the Indian Patents Act, and also under Section 3(d), which treats the "mere discovery of a new form of a known substance" that does not enhance the substance's efficacy as non-patent-eligible.

As for the patent eligibility of cDNA, it is argued to be a product artificially derived by man from work done on a naturally occurring substance. Indian case law does not yet offer enough precedent to determine how much alteration, deletion or modification by human intervention is needed to make a modified object of nature patent eligible. The primary question remains whether such levels of human modification take cDNA outside the limitations imposed by Sections 3(c), 3(d), 3(e) and 3(j) of the Indian Patents Act.


Arguments in favour of gene patents:

  • The creation of incentives for investment in research is one of the biggest functions of any patent, balancing out the risks taken in making that investment.
  • If a process of gene isolation satisfies the requirements of novelty, non-obviousness and utility, why can it not be treated like any other scientific invention, notwithstanding that its subject is part of the human body? This question lies at the core of the entire controversy.
  • Most public health concerns arising from lack of access to healthcare due to the high cost of research are external to the patent system.
  • Many countries around the world already have a system of gene patents. Any legal framework needs to create a harmonised set of laws in congruence with the laws of other countries.
  • A more realist argument in support of gene patents is that a restrictive approach to granting patents would lead companies to rely on trade secrets instead, removing research from the public domain.


Arguments against gene patents:

  • Patents are granted for inventions, not discoveries. Even where a great deal of investment goes into a piece of research, a patent cannot be granted for that reason alone; there are other ways of creating incentives that do not lead to the monopolisation of research.
  • The moral and ethical argument concerns how a part of the human body can be patented at all, since research on it builds on the common heritage and ownership of the human race in general. Moreover, genes are units of heredity, and Article 4 of the Universal Declaration on the Human Genome and Human Rights states that no financial gains should arise from the human genome in its natural state.
  • Another major criticism of gene patents is the counterpart to the first argument made in their favour: exclusivity or monopolisation of research may pre-empt future research by blocking access to public knowledge.

As a matter of policy, the question thus boils down to balancing two major factors in the gene patenting regime: incentivising research and investment versus promoting public access to necessary healthcare. One of the underlying objectives of the Human Genome Project was to make information available to the general public at large. An expansive reading of patent laws has been seen to have wide repercussions, reducing access to healthcare and research.


The need to incentivise research was one of the main reasons behind the introduction of a liberal patent regime, but the principle that mere discoveries or products of nature should not be patent eligible, in spite of large investments in research, is what should guide our jurisprudence of gene patents. There are other methods available in the public domain to incentivise landmark scientific achievements that are not patent eligible, such as the grant of awards or the subsidising of research costs. Their advantage over patents is that they do not intrude on public access to healthcare and research. The grant of individual property rights cannot, under any circumstances, supersede the basic civil rights of individuals.







Cashless Societies: Causes for Concern



The idea of a cashless society, i.e. 'a civilization holding money, but without its most distinctive material representation – cash', is said to have originated in the late 1960s. The transition had been slow and steady, but it has accelerated rapidly over the last decade. As technology evolves, the shift from a cash-reliant to a cashless society is becoming more apparent; in urban society at least, 'contactless payments' and 'non-cash money' are not unheard of. It has been reported that the first debit card may have hit the markets in the mid-1960s, and that by 1990 debit cards were already used in about 300 million transactions, showing their rise in today's society. Before welcoming this change with open arms, we must take care not to ignore the security and privacy concerns, some of which are addressed in this article.

As we transition from a cash-reliant to a [quasi-]cashless society, there are fears about phones being hacked or stolen, and about reliance on devices that require batteries or an internet connection – what if either is unavailable? Conversely, however, our cash or wallets can be stolen, destroyed in a matter of seconds, or misplaced. The only difference is the medium of transaction.

Fear is a factor that inhibits change, and these fears are usually not unfounded. In late 2013, Target, the second-largest discount store retailer in the United States, was hacked, and up to 70 million customers were hit by a data breach. Nearly three years later, in 2016, it was reported that roughly 3.2 million debit cards had been compromised in India, affecting several banks such as SBI, ICICI and HDFC.

Nevertheless, as pointed out earlier, just as financial details present online can be stolen, so can paper money. With each transaction taking place online, fears of online fraud are present; however, Guri Melby of the Liberal (Venstre) party noted, "The opportunity for crime and fraud does not depend on what type of payment methods we have in society." A mere shift in the medium of trade will not eliminate such crimes. It should be clarified here that a cashless society can take various forms and degrees, be it debit/credit cards, NFC payments, digital currencies such as Bitcoin, or mobile money services such as M-Pesa.

Bruce Schneier, cyber-security expert and author of the best-seller Data and Goliath, notes that the importance of privacy lies in protection from the abuse of power. A hegemony of the authorities over our information – the details [and means] of our every transaction – provides absolute power to the authorities and thus far greater scope for abuse. Daniel Solove further notes that abuse of power by the government could lead to distortion of data; and even if we believe the government to be benevolent, we must consider that data breaches and hacks can (and do) occur.

Cash brings with it the double-edged sword of an anonymity that digital transactions do not provide. A completely cashless society might seem attractive in that each transaction can be traced, possibly reducing tax evasion and illicit or illegal activity; however, though crime might cease to exist in that form, it could always evolve and manifest itself in some other form online.

One concern raised in this regard is that the government could hold our transaction histories indefinitely. This seems an innocent trade-off for the ease and convenience provided. The issue, as Domagoj Sajter notes, is that every single citizen becomes a potential criminal and terrorist to the government, worthy of continuous and perpetual monitoring. Citizens become latent culprits whose guilt is implied, waiting only to be recorded and proven. The principle of innocent until proven guilty vanishes from the mind of the government.

Furthermore, a completely cashless society places power in the hands of the Government with no checks and balances on it. Advanced technology could disable the funding of mass actions, extensive protests and large-scale civil disobedience, all of which are important traits of democratic processes. It is pertinent to remember that Martin Luther King Jr. was tracked by the FBI. Providing the government with more ease in curtailing democratic processes leads to more autocratic governance.

Consider the following: an individual finds out that the Government or one of its agencies is committing a crime against humanity, and she reports it to the public. Not only could her personal life be excavated to find faults, but any monetary support she received (in a cashless society) could be blocked by the Government. Minor faults could be listed and propaganda spread to discredit her point or divert the masses' attention. By controlling the economy, the Government could wring the arms of the media and force them to downplay or ignore the issues she raised.

Michael Snyder also raises an important point about the erasure of autonomy in a cashless society: "Just imagine a world where you could not buy, sell, get a job or open a bank account without participating in 'the system'". It need not start with forcing people to opt in; simply providing benefits in some form could indirectly leave people no choice but to opt in. The Supreme Court of India has noted multiple times that the Aadhaar card (a biometric identity card) cannot be made compulsory. However, the Aadhaar card has been made mandatory to avail of EPF pension schemes and LPG benefits, and even for IIT JEE 2017. The Government of India is even mulling making the Aadhaar number mandatory for filing income tax (I-T) returns, and linking all bank accounts to the unique identity number by the end of this financial year. The government is concurrently developing a common mobile phone app that shopkeepers and merchants can use to receive Aadhaar-enabled payments, bypassing credit and debit cards and moving further towards cashless transactions. The Aadhaar-enabled payment system (AEPS) is a biometric means of making payments, using only the fingerprint linked to Aadhaar. These are all part of the measures taken by the Indian government to brute-force the Indian economy into a cashless form.

Policing of the citizen is not a purely hypothetical scenario; it has already taken place. In 2010, a blockade was imposed on WikiLeaks by Bank of America, VISA, MasterCard and PayPal. In 2014, Eden Alexander started a crowdfunding campaign hoping to cover her medical expenses, but the campaign was shut down and the payments frozen because she was a porn actress. We must also take into account the empowerment that cash provides: consider an individual hiding cash from an alcoholic or abusive spouse, or one who stuffs spare notes under her mattress for years because it gives her a sense of autonomy. We should take care that, in seeking development, we do not disempower the downtrodden, but lift them up with us.

The idea of a cashless society is no longer strange, with multiple corporations and even countries having expressed interest in going cashless. Harvard economist and former IMF chief economist Kenneth Rogoff, in his case against cash, argues that a less-cash society [in contradistinction to a cash-less society] could reduce economic crime, and suggests in the same piece that this could be achieved by gradually phasing out larger notes. A cashless or less-cash society appears inevitable: in Sweden, cash transactions already make up barely 2% of the value of all payments. The question, then, is not when it will happen, but what safeguards we set up to protect our rights.

For further reading:

1] Melissa Farmer: Data Security In A Cashless Society

2] David Naylor, Matthew K. Mukerjee and Peter Steenkiste: Balancing Accountability and Privacy in the Network

3] Who would actually benefit from a Cashless Society?

4] Anne Bouverot: Banking the unbanked: The mobile money revolution

5] Kenneth Rogoff: Costs and benefits to phasing out paper currency

Fake News and Its Follies


Fake news is a phenomenon where individuals, sites and online portals create and/or share pieces of information that are either completely false or cherry-picked from real incidents, with the intention of misleading the general public or gaining publicity. It may seem innocuous, without much harm to anyone or any real-world consequences. We have all at least once received a message on WhatsApp, Twitter or Facebook claiming, say, that Jana Gana Mana received a ‘best national anthem’ award from UNESCO, that the new Rs 2000 notes carry a GPS-enabled chip, or that Narendra Modi was selected by UNESCO as the best PM in the world. Even at their most harmless, such rumours have led Twitter trolls to target unsuspecting individuals, sometimes even well-known people.

This problem of ‘fake news’ has caused very tangible damage, such as the recent rumour in Uttar Pradesh and surrounding areas that there was a severe shortage of salt. The price of salt, otherwise about Rs 20/kg, shot up to Rs 250/kg and in some cases Rs 400/kg. The police had to resort to riot control and raids in multiple places to prevent looting and hoarding. The situation escalated to the point that the state’s Chief Minister had to issue a statement that adequate salt was available.

Spreading false information for personal gain is not a new phenomenon, but with the growth of social media and other easily accessible news portals, its reach has grown enormously. The concept came to the forefront with the amount of misinformation propagated by both sides in the Brexit campaign and the US presidential election. It has grown to such an extent that Oxford Dictionaries selected ‘post-truth’ as its word of the year. In a post-truth society, individuals and groups can easily sway public opinion for or against their beliefs by posting false and incorrect information online (and probably even get paid for it).

There is a fundamental reason why fake news is bad: it makes it harder for individuals to trust established institutions. The relationship between the media and citizens is one of trust; people expect news portals to be honest and unbiased in their reporting. When they are constantly exposed to increasing amounts of misinformation and hoaxes, they start losing faith in these institutions. This creates a smoke-screen through which people cannot see to judge, or reach any definitive conclusion about, what is to be believed and what is not.

Though India has no legal provision dealing specifically with fake news, the closest law the country has against the spread of misinformation is defamation law. Even the validity of defamation law has been called into question: though criminal defamation has been upheld by the Supreme Court, critics argue the law is used by the establishment to curb the rights of individuals who question the actions of governments or their leaders. Sites like Facebook, Reddit and Twitter can be classified as intermediaries and are the primary channels of fake news. Intermediary liability, the liability that can be placed upon such sites, is dealt with under the IT Act, but its provisions are inadequate for the issue of fake news: intermediaries are liable only for breaches of end-users’ privacy, not for the spread of misinformation.

A few other countries have laws dealing with misinformation. Germany has mandated that Facebook maintain a round-the-clock Legal Protection Office in the country, which would take complaints from victims, initiate an investigation and resolve the issue. If the department fails to act within 24 hours, the company would be fined 500,000 euros (Rs 3,60,00,000) for each day the news is left online. China made stringent rules against rumour-mongering in 2013. Indonesia has set up a National Cyber Agency to deal with content the agency deems ‘slanderous, fake, misleading and spread hate’.

There is a possibility of a chilling effect on free speech and expression: Facebook, for example, as a corporate entity trying to avoid the fine, would block any information that comes into question, since there is no accountability for such actions. In the cases of China and Indonesia, the governments become the sole arbiters of what constitutes truth, and anything they do not want the public to know, or any information against the establishment’s viewpoint, can be labelled ‘fake’.

The promulgation of fake news has brought into focus the role of sites like Facebook, Twitter and Reddit, which have become among the major sources of news consumption in the developed world. Several analysts have blamed such sites for their absolute lack of accountability in dealing with the misinformation spreading on their portals. Then again, the moves Facebook and Reddit have taken have themselves been questioned by free speech activists.

The active sharing of fake news, and the consequent need for social media outlets and the like to regulate it, raises serious ethical and legal questions: should corporate entities like Facebook, Reddit and Google be given a free hand in blocking or blacklisting ‘fake news’? Should the government step in and actively take part in stopping it? And are the benefits of checking the spread of misinformation valuable enough to justify censoring any ‘suspected’ news? Most laws have not yet adapted to these issues, though there is a slowly shifting trend towards dealing with them.



Ed. Note: This post by Vishal Rackecha is a part of the TLF Editorial Board Test 2016.

One of the greatest problems the Indian economy faces today is financial inclusion, particularly the lack of credit in rural areas and for micro industries. In 2013, the Reserve Bank released a paper based on the findings of a committee chaired by Nachiket Mor. The committee said that services provided through mobiles and other internet portals are a low-cost method which, under the right regulatory setup, has the potential to bring financial services to places where formal banking setups find it unviable or unprofitable to set up branches; any such setup, it noted, needs both credit and savings functions. The committee suggested that allowing non-banking businesses with huge customer bases and comprehensive consumer data to offer these services would increase the reach of the requisite facilities in regions where they are not available.

Payment banks would be able to provide services such as payments and holding demand deposits. The concept also brings the benefit of a robust payment mechanism at one’s fingertips, without the costly infrastructure and manpower required to maintain a full bank.

In December 2014, the RBI released guidelines for entities to register as payment banks. Eligible promoters include pre-paid payment instrument (PPI) issuers, non-banking financial companies (NBFCs), telecom operators and supermarkets. These entities should also have a good track record, having properly run their business for a minimum period of five years.

Each individual account would be allowed to hold a maximum of Rs 1 lakh in deposits, on which interest would be paid. These banks would be allowed to issue debit cards and ATM cards. All their services have to be accessible through mobile, including automatic cashless and cheque-less payment of bills and transferring money between accounts. Payment banks cannot undertake lending activities. The RBI has also, with TRAI, issued rules for telecom operators on the charges for these payment-bank services.

Payment banks would have to maintain a cash reserve ratio (CRR) and statutory liquidity ratio (SLR) as per RBI guidelines. The minimum paid-up capital would be Rs 100 crore, and their outside liabilities should not exceed 33.33 per cent of their net worth. The promoter’s minimum initial contribution has to be 40% of the paid-up capital, and foreign investment norms would be the same as those for private sector banks. Each of these banks has to have a fully networked, technology-driven system of functioning from the beginning. Presently, 11 entities have been issued payment bank licences, including Vodafone m-pesa, Aditya Birla Nuvo Ltd and the Department of Posts.
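The headline limits above lend themselves to a simple compliance check. The sketch below is purely illustrative (the function name and return format are invented here, not part of any RBI specification), expressing the deposit cap, liabilities ratio and capital floor as code:

```python
# Illustrative check of the payment-bank limits described in the text:
# Rs 1 lakh deposit cap per account, outside liabilities capped at
# 33.33% of net worth, and Rs 100 crore minimum paid-up capital.
# All amounts are in rupees.

def check_payment_bank(deposit_per_account: float,
                       outside_liabilities: float,
                       net_worth: float,
                       paid_up_capital: float) -> list:
    """Return a list of violated constraints (empty if all pass)."""
    violations = []
    if deposit_per_account > 100_000:               # Rs 1 lakh cap
        violations.append("deposit cap exceeded")
    if outside_liabilities > 0.3333 * net_worth:    # 33.33% of net worth
        violations.append("outside liabilities too high")
    if paid_up_capital < 1_000_000_000:             # Rs 100 crore = 1e9
        violations.append("paid-up capital below minimum")
    return violations
```

For instance, `check_payment_bank(90_000, 3e8, 1e9, 1.5e9)` returns an empty list, while an account holding Rs 1.5 lakh would trip the deposit cap.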

These ‘banks’ will go a long way in shaping the financial sector and will lead to the inclusion of presently neglected sections of the Indian economy, though they will not change the monopoly traditional banks have over the credit supply. They will also promote the goals of both the Pradhan Mantri Jan Dhan Yojana and Digital India, not only including more Indians in the organised financial sector but also making cashless payments more accessible to poorer sections: the chances of opening a branch in a remote village are far smaller than those of taking a mobile phone there. The system has its promise and will change the dynamics of this sector, but assessing its true potential will not be possible until it is implemented in its entirety.


Ed. Note: This post by Sayan Bhattacharya is a part of the TLF Editorial Board Test 2016.

Google launched its first smartphone series, the Pixel, earlier this month. The shift from software producer to producer of both hardware and software was a calculated change in policy: a direct dig at Apple’s hardware throne.

Apple stood as the undisputed king of design, with meticulously crafted software running on its devices and a user experience perfected to the highest precision. Google, on the other hand, was the undisputed king of software and search, with a far broader range of software offerings than any rival; even the most diehard iPhone fans spend most of their time on their devices using Google products. The changeover was thus a direct policy measure to cut into Apple’s base in hardware design while providing an alternative built around Google’s exclusive product range.

On the surface, the launch seems to be all about the glittery competition between Google and Apple, but the media, customers and the makers of our privacy laws, entangled in this mesh of technology, often ignore the bigger picture. One of the major components of Google’s cutting edge over iPhones is its artificial intelligence, which relies on active data mining. The presence or absence of privacy norms is what distinguishes the new Google features from the existing features of Apple devices: Google assumes its customers are willing to give up some privacy in order to make life easier, while Apple assumes its customers value their privacy more than anything.


The latest artificial intelligence in the Pixel allows the software to read mails, text messages and calendars. When Google’s AI magically delivers the answer to the question you asked, that is data mining at work. Nor is it against the law, because technically, on paper, you have given Google the permissions that allow it to read your chats, mails, location history, browsing and more, usually by not reading the fine print or merely skimming it. The argument here is that this is mostly not free consent, since people lack the important information needed to make the choice.


The major technological shift in the new AI Google has developed for the Pixel is its ability to actively read and understand the context of an act or a conversation. If you are on Google Allo or Google Home chatting about going for dinner with your family at a particular time, you can expect a reminder about it, along with reviews of the restaurant and even a direct link to book an Uber ride. The AI reads your conversations, figures out the context and links you to your needs over the web.
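A crude approximation of that context extraction can be sketched as keyword and pattern matching. Google’s actual models are of course far more sophisticated; the function name, keywords and regex here are invented purely for illustration:

```python
import re

# Hypothetical, keyword-based stand-in for the assistant's context
# detection: spot a dinner plan and a time of day in a chat message.
def extract_dinner_reminder(message: str):
    if "dinner" not in message.lower():
        return None  # no plan detected
    # Look for a time expression such as "8 pm" or "7:30 pm".
    time_match = re.search(r'\b(\d{1,2}(?::\d{2})?\s*(?:am|pm))\b',
                           message, re.IGNORECASE)
    return {"event": "dinner",
            "time": time_match.group(1) if time_match else None}

print(extract_dinner_reminder("Let's go for dinner with family at 8 pm"))
# A real assistant would now attach restaurant reviews and a cab link.
```

The point of the toy is the pipeline, not the matching: once a plan and a time are extracted, the assistant can chain them to other services, which is exactly where the privacy questions below arise.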

Adding to that, Google Allo, introduced to challenge messaging applications like WhatsApp, Snapchat and Messenger, is not end-to-end encrypted by default: encryption applies only in its optional Incognito mode, unlike in WhatsApp or Apple’s iMessage, where it is the default (Facebook Messenger likewise offers encryption only through its opt-in secret conversations). In consequence, Allo’s privacy and security were heavily criticised.

NSA whistleblower Edward Snowden criticised the Allo app on Twitter, calling Google’s decision to disable end-to-end encryption by default dangerous. He asked people to avoid using the app, and his tweet was retweeted over 8,000 times.
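The property the critics want by default can be sketched with a toy Diffie-Hellman exchange: each party derives the shared key locally from the other’s public value, so a relay server in the middle never holds the key. This is a stdlib-only illustration with deliberately tiny, insecure parameters, not a real messaging protocol:

```python
import hashlib
import secrets

# Toy Diffie-Hellman group. Real protocols use much larger, standardised
# parameters (e.g. the RFC 3526 groups); these are for illustration only.
P = 4294967291  # a small prime modulus -- NOT secure
G = 5           # generator

def keypair():
    """Generate a private exponent and the matching public value."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(priv: int, other_pub: int) -> str:
    """Derive the shared key; both ends compute the same value."""
    secret = pow(other_pub, priv, P)
    return hashlib.sha256(str(secret).encode()).hexdigest()

a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
# Only the public halves travel over the wire; the server relaying them
# cannot reconstruct the key without a private exponent.
assert shared_key(a_priv, b_pub) == shared_key(b_priv, a_pub)
```

The design choice at issue in Allo is exactly this: when keys live only at the two ends, the provider cannot read the conversation, which also means it cannot mine it for assistant features.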

The problem with this kind of feature is essentially that it prioritises data mining, for ease of access, over consumer privacy. That privacy of data is now an option rather than the norm is what calls the ethics of data mining into question, however much easier it makes one’s life. That a third party can store your conversations, read them while actively understanding their context, and then apply that understanding to aid future actions on your device is astounding.


In another instance, if you back up your photos to Google Photos, the Google Assistant can recognise what is in a photo using computer vision, understanding when it was taken and who is in it. The Google AI thus goes beyond mining your data to linking the extracted data with that of other users, gathered through its software. The ultimate end goal is to link the entirety of the data collected into a network that is omnipresent but cannot be seen. The objection arises not from the networking itself but from the means of achieving it: the data is extracted without free consent and linked with external third-party data without prior permission.

Another huge concern surrounding this data storage is government snooping through the data packet inspection that already exists on network connections. A switch to the Google Pixel means a switch to almost completely internet-run software, which further increases the chances of a breach of privacy.


Google aims to make its artificial intelligence the next big thing after its dominance of search engines and software: it wants its customers to move from a mobile-first world to an AI-first world. But the underlying assumption is that this can be done at the cost of user privacy.

