All posts by Kartik Chawla

Kartik Chawla is a fourth-year student at NALSAR with an avid interest in the fields of Technology Law and Cyber Law, and is the Editor-in-Chief of the TechLawForum@NALSAR Blog. He has worked in the field for a while, on topics ranging from Digital Privacy to Internet Regulation. In the rare event that he is not stuck to a screen, he finds time to drink soda, eat pizza (or Subway; or corn), and indulge the petrolhead in him. He tweets as @krotikchawla, and you can follow him on Academia and LinkedIn as well.

Meet the New Internet, Same as the Old, Old Internet – except it’s not the Internet (Part III – the Future of Networks)

The roots of the Internet, in Tim Berners-Lee’s original proposal. (Image Source: https://flic.kr/p/67bqGj)

This is the third post in this series; the first two, which set the background for the issue, are available here and here.

The question therefore becomes – is it time we look beyond the ‘internet’ as it exists, to newer models of communication? The ‘models’ I refer to here are not absolutely novel – nothing under the sun is. They still rely on the TCP/IP protocol, still use parts of the ‘internet’ and the network laid down for it – they learn from it, and improve on it. These models, in fact, bring to mind the original image of the internet, so much so that we can call them the legacies of the ideas of the ‘original internet’, challenging the dominance of the ‘neo-internet’. So is it time we focus on these models, develop them, and mark the decline of the ‘neo-internet’?


Meet the New Internet, Same as the Old, Old Internet – except it’s not the Internet you know (Part II – The Tangled Wires)

The roots of the Internet, in Tim Berners-Lee’s original proposal. (Image Source: https://flic.kr/p/67bqGj)

This is the second in my three-part series on the issue. The first and third parts are available here and here.

Tangle One

I’ll start with a side-note. In public debate, Network Neutrality somehow ends up being represented as an absolutist concept: “ISPs should perform no discrimination between the data travelling on their networks”. As welcome an ideal as that is, the problem is that it is not practically possible, mostly because of Quality of Service (‘QoS’) concerns. This, of course, does not mean that Network Neutrality should not exist – there are multiple proposals that reconcile these concerns with Network Neutrality, one example being the application-agnostic discrimination put forth by Barbara van Schewick.

Tangle Two

Now, for the core argument, the concerns are a bit technical. A corollary of the Schumpeterian theory is that every innovation that breaks a previous monopoly ends up being monopolised itself in the long run. This is what happened with the telephone, the radio, the television, and cinema. In some places, this has not technically happened with the Internet (I elaborate on the “technically” below) – an exception that Network Neutrality is credited with.

Meet the New Internet, Same as the Old, Old Internet – except it’s not the Internet you know (Part I – a bit of Background)

(Image Source: https://flic.kr/p/67bqGj)

This is the first post in my three-part series on Network Neutrality, the internet, innovation, and the future of networks. The second and third posts are available here and here.

The fundamental idea of the Schumpeterian model of Creative Destruction envisages a continuous cycle of creation and destruction of monopolies, presenting a continuous story of capitalism. Of course, the entirety of the Schumpeterian economic discourse is a very complex issue, and I have my issues with parts of it, but I am using the Schumpeterian analogy because it is relevant to the point I will be making here. The breaking of a monopoly starts, in this limited context, with a new technology that ‘decentralises’ power, thereby challenging the existing ‘monopoly’. And this is where the Internet comes in.

Role of Intermediaries in Countering Online Abuse: Still a Work In Progress, Part II

This is the second in a two-part series by Jyoti Panday of the Centre for Internet and Society, Bangalore, on the role of intermediaries in addressing online abuse. The first part of this post is available here.

SIZE MATTERS

The standards for blocking, reporting and responding to abuse vary across different categories of platforms. For example, it may be easier to counter trolls and abuse on blogs or forums where the owner or an administrator is monitoring comments and UGC. Usually, platforms outline monitoring and reporting policies and procedures, including the recourse available to victims and the action to be taken against violators. However, these measures are not always effective in curbing abuse, as it is possible for users to create new accounts under different usernames. For example, in Swati’s case the anonymous user behind the @LutyensInsider account changed their handle to @gregoryzackim and @gzackim before deleting all tweets. In this case, perhaps the fear of impending criminal charges was enough to silence the anonymous user, though that may not always be the case.

TACKLING THE TROLLS

As a general measure against online harassment, most large intermediaries have privacy settings which restrict the audience for user posts and prevent strangers from contacting them. Platforms also publish a monitoring policy outlining the procedures and mechanisms for users to register complaints or report abuse. Often, reporting and blocking mechanisms rely on community standards and on users reporting unlawful content. Last week, Twitter announced a new feature allowing lists of blocked users to be shared between users. An improvement on the existing blocking mechanism, the feature is aimed at making the service safer for people facing similar issues; still, such efforts may have their limitations.

These mechanisms follow a one-size-fits-all policy. First, such community-driven efforts do not address concerns of differences in opinion and subjectivity. Swati, in defending her actions, stressed the “coarse discourse” prevalent on social media, though as this article points out, she might herself be accused of using offensive and abusive language. Subjectivity and the many interpretations of the same opinion can pave the way for many taking offense online. Earlier this month, Nikhil Wagle’s tweets criticising Prime Minister Narendra Modi as a “pervert” were interpreted as “abusive”, “offensive” and “spreading religious disharmony”. While platforms are within their rights to establish policies for dealing with issues faced by users, there is a real danger of them doing so for “political reasons” and based on “popularity” measures, which may chill free speech. When many get behind a particular interpretation of an opinion, lawful speech may also be stifled, as Sreemoyee Kundu found out: a victim of online abuse, she had her account blocked by Facebook owing to multiple reports from a “faceless fanatical mob”. Allowing users to set standards of permissible speech is an improvement, though it runs the risk of mob justice, and platforms need to be vigilant in applying such standards.

While it may be in the interest of platforms to keep a hands-off approach to community policies, certain kinds of content may necessitate intervention by the intermediary. There has been an increase in private companies modifying their content policies to place reasonable restrictions on certain hateful behaviour in order to protect vulnerable or marginalised voices. Twitter’s and Reddit’s policy changes addressing revenge porn are reflective of a growing understanding amongst stakeholders that, in order to promote the free expression of ideas, recognition and protection of certain rights on the Internet may be necessary. However, any approach to regulating user content must assess the effect of policy decisions on user rights. Google’s stand on tackling revenge porn may be laudable, though the decision to push down ‘piracy’ sites in its search results could be seen to adversely impact the choice that users have. Terms of service implemented with subjectivity and a lack of transparency can and do lead to private censorship.

THE WAY FORWARD

Harassment is damaging because of the feeling of powerlessness it invokes in victims, and online intermediaries represent new forms of power through which users negotiate and manage their online identities. Content restriction policies and practices must address this power imbalance by adopting baseline safeguards and best practices. It is only fair that, based on principles of equality and justice, intermediaries be held responsible for the damage caused to users by the wrongdoing of other users, or when they fail to carry out their operations and services as prescribed by law. However, in its present state, the intermediary liability regime in India is not sufficient to deal with online harassment and needs to evolve into a more nuanced form of governance.

Any liability framework must evolve bearing in mind the slippery slope of overbroad regulation and differing standards of community responsibility. Therefore, a balanced framework would need to include elements of both targeted regulation and soft forms of governance, as liability regimes need to balance fundamental human rights against the interests of private companies. Often, achieving this balance is problematic, given that these companies are expected to be adjudicators and may also be the target of the breach of rights, as is the case in Delfi v Estonia. Global frameworks such as the Manila Principles can be a way forward in developing effective mechanisms. The determination of content restriction practices should always adopt the least restrictive means of doing so, distinguishing between the classes of intermediary. They must evolve considering the proportionality of the harm, the nature of the content, and the impact on affected users, including the proximity of the affected party to the content uploader. Further, intermediaries and governments should communicate a clear mechanism for the review and appeal of restriction decisions, accommodating the right to be heard and reinstating wrongfully removed content.

Role of Intermediaries in Countering Online Abuse: Still a Work In Progress, Part I

The TechLawForum@NALSAR is happy to bring you a detailed two-part post by Jyoti Panday of the Centre for Internet and Society, Bangalore, on the role played by intermediaries in countering abuse on the internet. Jyoti is a graduate of Queen Mary University of London. Her work focuses on the interaction between intermediaries, user rights, and freedom of expression.

The Internet can be a hostile space, and protecting users from abuse without curtailing freedom of expression requires a balancing act on the part of online intermediaries. As platforms and services coalesce around user-generated content (UGC) and entrench themselves in the digital publishing universe, they are increasingly taking on the duties and responsibilities of protecting rights, including taking reasonable measures to restrict unlawful speech. Arguments around the role of intermediaries in tackling unlawful content usually center on the issue of regulation—when is it feasible to regulate speech, and how best should this regulation be enforced?

Recently, Twitter found itself at the center of such questions when an anonymous user of the platform, @LutyensInsider, began posting slanderous and sexually explicit comments about Swati Chaturvedi, a Delhi-based journalist. The online spat, which began in February last year, culminated in Swati filing an FIR against the anonymous user last week. Within hours of the FIR, the anonymous user deleted the tweets and went silent. Predictably, Twitter users hailed this as a much-needed deterrent to online harassment. Swati’s personal victory is worth celebrating; it is an encouragement for the many women bullied daily on the Internet, where harassment is rampant. However, while Swati might be well within her legal rights to counter slander, the rights and liabilities of private companies in such circumstances are often not as clear-cut.

Should platforms like Twitter take on the mantle of deciding what speech is permissible? When and how should the limits on speech be drawn? Does this amount to private censorship? The answers are not easy, and as the recent Grand Chamber of the European Court of Human Rights (ECtHR) judgment in the case of Delfi AS v. Estonia confirms, the role of UGC platforms in balancing user rights is an issue far from settled. In its ruling, the ECtHR reasoned that, because of their role in facilitating expression, requiring online platforms “to take effective measures to limit the dissemination of hate speech and speech inciting violence” was not “private censorship”.

This is problematic because the decision moves the regime away from a framework that grants immunity from liability as long as platforms meet certain criteria and procedures. In other words, the ruling establishes strict liability for intermediaries in relation to manifestly illegal content, even if they have no knowledge of it. The ‘obligation’ placed on the intermediary does not grant them safe harbour and is not proportionate to the monitoring and blocking capacity it necessitates. Consequently, platforms might be incentivized to err on the side of caution and restrict comments or confine speech, resulting in censorship. The ruling is especially worrying, as the standard of care placed on the intermediary does not recognize the different roles played by intermediaries in the detection and removal of unlawful content. Further, intermediary liability is its own legal regime and, at the same time, a subset of various legal issues that require an understanding of variations in scenarios, mediums and technology, both globally and in India.

LAW AND SHORT OF IT

Earlier this year, in a leaked memo, Twitter CEO Dick Costolo took personal responsibility for his platform’s chronic failure to deal with harassment and abuse. In Swati’s case, Twitter did not intervene or take steps to address the harassment. If it had to, Twitter (India) would be bound by the liability framework established under Section 79 of the Information Technology Act and the accompanying Rules. These provisions outline the obligations and conditions that intermediaries must fulfill to claim immunity from liability for third-party content. Under the regime, upon receiving actual knowledge of unlawful information on their platform, the intermediary must comply with the notice and takedown (NTD) procedure for the blocking and removal of content.

Private complainants could invoke the NTD procedure, forcing intermediaries to act as adjudicators of an unlawful act—a role they are clearly ill-equipped to perform, especially when the content relates to political speech or alleged defamation or obscenity. The SC judgment in Shreya Singhal addressed this issue by reading down Section 79, holding that a takedown notice can only be effected if the complainant secures a court order to support her allegation. Further, it was held that the scope of restrictions under the mechanism is limited to the specific categories identified under Article 19(2). Effectively, this means Twitter need not take down content in the absence of a court order.

CONTENT POLICY AS DUE DILIGENCE

Another provision, Rule 3(2), prescribes a content policy which, prior to the Shreya Singhal judgment, was a criterion for administering takedowns. This content policy includes an exhaustive list of types of restricted expression, though worryingly, the terms included in it are not clearly defined and go beyond the reasonable restrictions envisioned under Article 19(2). Terms such as “grossly harmful”, “objectionable”, “harassing”, “disparaging” and “hateful” are not defined anywhere in the Rules, and are subjective and contestable, as alternate interpretations and standards could be offered for the same term. Further, this content policy is not applicable to content created by the intermediary.

Prior to the SC verdict in Shreya Singhal, ‘actual knowledge’ could have been interpreted to mean that the intermediary was called upon to exercise its own judgment under sub-rule (4) to restrict impugned content in order to seek exemption from liability. While the liability accruing from not complying with takedown requests under the content policy was once clear, this is not the case anymore. By reading down S. 79(3)(b), the court has placed limits on the private censorship of intermediaries and the invisible censorship of opaque government takedown requests, as both must adhere to the boundaries set by Article 19(2). Following the SC judgment, intermediaries do not have to administer takedowns without a court order, thereby rendering this content policy redundant. As it stands, the content policy is an obligation that intermediaries must fulfill in order to be exempted from liability for UGC, and this due diligence is limited to publishing rules and regulations, terms and conditions, or a user agreement informing users of the restrictions on content. The penalties for not publishing this content policy should be clarified.

Further, having been informed of what is permissible, users agree to comply with the policy outlined by signing up to and using these platforms and services. The requirement of publishing a content policy as due diligence is unnecessary, given that mandating such ‘standard’ terms of use negates the differences between types of intermediaries, which accrue different kinds of liability. It also places an extraordinary power of censorship in the hands of the intermediary, which could easily stifle freedom of speech online. Such heavy-handed regulation could make it impossible to publish critical views about anything without the risk of being summarily censored.

Finally, some clauses in the content policy are India-specific, such as Rule 3(2)(i), which restricts any content that threatens the “unity”, “integrity”, “defence”, “security” or “sovereignty” of India, “friendly relations with foreign states” or “public order”, or “causes incitement”. Requiring intermediaries to protect the sovereignty of a nation by outlining the contours of speech is an impractical requirement, especially for intermediaries that may not even be based in India. Twitter may have complied with its duties by publishing the content policy, though the obligation does not seem to be an effective deterrent. Strong safe harbour provisions for intermediaries are a crucial element in the promotion and protection of the right to freedom of expression online. Absolving platforms of responsibility for UGC so long as they publish a content policy that is vague and subjective is the very reason why India’s IT Rules are, in fact, in urgent need of improvement.

Shreya Singhal, and how Intermediaries are simply Intermediaries Once Again – Striking down the Chilling Effect

The concept of ‘intermediary liability’ in all its nuances, as I have written before, is one of the bulwarks of the internet as we know it, including one of the aspects of it that we all know and love – the power it gives to each and every individual to exercise their right to free speech. In fact, it is that very power that I am exercising right now as a blogger, even as part of an academic institution. This post looks into the Shreya Singhal and Ors. v. Union of India judgment, the contentions raised therein by intermediaries, and the consequences it has for intermediaries and internet users alike. We will be looking at the Section 69A issues in a separate post.

Intermediary liability is quite a fragile and multifaceted concept, balancing multiple interests on multiple fronts. To name the broadest stakeholders, it balances the rights of the users of the internet against the profit incentives of the intermediary and the policing interests of the government. An extremely interesting instance of the last of these can be seen in the 2013 House of Lords Select Committee on Communications’ Report on Media Convergence, in which the Committee essentially states that the best way to regulate content on the internet is through the intermediaries alone.

In India, we now follow a different model. Up till last week, we followed a rather shaky and much-criticised notice-and-takedown regime under Section 79 of the Information Technology Act and the Rules promulgated under it, but that changed with the Shreya Singhal judgment.

Arguments made by the Petitioners

The contentions raised by the petitioners in the Shreya Singhal case were that Rule 3(4) of the Guidelines required the intermediary to exercise its own judgment regarding the legality of the information in question, and then disable whatever information was in contravention of Rule 3(2) of the same. The petitioners also argued that there were no safeguards provided for intermediaries under the 2009 Rules made under S. 69A. Furthermore, sub-rule (2) of Rule 3 was argued to be vague and overbroad, and to have no relation to the subjects specified under Art. 19(2) of the Constitution of India.

Similar contentions were raised with regard to S. 79(3)(b) as well, concerning how it asks the intermediary to judge the legality of the content in question – and not just its legality, but whether it falls under the unnecessarily broad category of ‘unlawful acts’.

The Court’s Judgment

The Court rightly concludes, on the basis of its analyses, that S. 79 is an exemption provision for the intermediary. Thus, it necessarily has to be seen in the context of the offences under the Act, such as S. 69A – and S. 69A in no way calls for the intermediary to make its own decisions about the legality of content. Furthermore, the Court had concluded earlier in the judgment that blocking orders under S. 69A can only be passed either by a competent Court or by a Designated Officer after complying with the 2009 Rules.

On this basis, the Court has struck down the notice-and-takedown regime. Now, the intermediary is only required to remove content after it receives an order regarding the same. Furthermore, the ‘unlawful acts’ for which such orders can be made have also been read down to only the grounds allowed under Art. 19(2). The Court here has specifically stated that content can only be taken down after an order has been passed by a competent authority. Thus, the twofold judgment of the Court here is that:

  a) intermediaries are required to block access to or remove content only once they receive an order from the competent authorities, and
  b) such an order can only be passed on the basis of the grounds laid down under Art. 19(2).

What it Means for Intermediaries and the Users

Thus, the liabilities of intermediaries under S. 79(3)(b) and Rule 3(4) of the Intermediary Liability Guidelines have been read down to apply only when they ignore a direct order from either a Court or a Designated Officer, passed on the grounds enumerated under Art. 19(2). This is, without a doubt, a huge victory for intermediaries across India. Along the lines of the incentive theory explanations that I have used before, this is at the absolute edge of the spectrum – the intermediary is, in this case, not required to block any content on anything less than a court order. Therefore, intermediaries can host as much content as their technology allows them to, and only take it down when they get the proper orders. They face no liability for the content itself, only for the direct disobedience of orders. Thus, under this model, intermediaries are incentivised to host as much content as they can.

But at the same time, this also means that the onus of judging content now lies entirely on the shoulders of the Courts. And as we have mentioned before, the Courts have a dubious track record when it comes to copyrighted content and online pirated copies. Thankfully, the beauty of this judgment is that even with the above ‘overzealousness’, so to say, of the Courts when it comes to pirated content, there will be no chilling effect on the intermediaries! This is because they do not have to make decisions regarding the legality of content; they only have to follow the orders of the Court. Therefore, they will not face the ‘fear’ of being found liable for any ‘grey area’ content that passes under their radar. Thus, there is an indirect but extremely positive effect on the free speech of the users.

The Court notes an interesting contention made by the petitioners: that “intermediaries by their very definition are only persons who offer a neutral platform through which persons may interact with each other over the internet”. And that is exactly what the Court has done here – created neutral gatekeepers of the internet, or at least as neutral as companies with vested economic interests can be. The onus and liability for the legality of content now rest with the Courts and the Designated Officers, and no longer in the hands of the intermediaries. Facebook is, then, just Facebook once again, and no longer has to judge the legality of the content its users put up.

And that is a huge step forward for free speech in India. Along with protecting free speech at the first, direct level of the user by striking down 66A, the Court has also promoted the free speech of users at a second, indirect level, by striking down the chilling effect that found its home in the notice-and-takedown regime – going far beyond what I, at least, had expected from it. It has thus, perhaps, taken the first step in paving the way for true freedom of speech on the Internet in India.

A Victory, and Moving Forward – TRAI Consultations on OTTs

Last week, the Supreme Court of India, in its judgment in the case of Shreya Singhal and Ors. v Union of India, declared S. 66A of the Information Technology Act unconstitutional in its entirety, and at the same time drastically restricted the ambit of Ss. 69A and 79 by reading into them the jurisprudence of Arts. 19(1)(a) and 19(2). It also struck down the notice-and-takedown regime, replacing it with a system with more oversight, as we will see in the following posts.

We will shortly be coming out with separate, detailed posts on each dimension of the judgment, including but not restricted to the free speech issues, the intermediary liability issues, and the website-blocking concerns. But before we start on that, a short word of caution.

The striking down of 66A is an absolutely immense victory for freedom of speech in India, and not just in the case of the internet – the judgment is a well-written, multifaceted one, which will in all probability have an impact on free speech jurisprudence for years to come. But freedom of speech in cyberspace is not a victory that is final yet. As of right now, the most crucial debate in the domain of Indian cyberspace, one which holds its future in its hands, is that of Network Neutrality.

And the TRAI has just this week released its Consultation Paper on Over-The-Top (OTT) services. While we will be releasing our posts on this issue soon as well, you can read the paper for yourself here, and read Medianama’s post on the issue here.

The crucial part here is that this paper is open for consultation at the moment. We do not have, in India, a John Oliver who can appeal to the masses and flood the TRAI with comments. But that in no way means that the work done here is any less important, or that these issues deserve any less concern. Please read, and please comment. These are the issues that will decide the future of the internet in India, as much as S. 66A did, if not more.

Comments should be sent to: advqos@trai.gov.in

Editors’ Picks (8/2/15)

1. We Can Now Build Autonomous Killing Machines. And That’s a Very, Very Bad Idea, Robert McMillan, WIRED.

2. Blocking online porn: who should make Constitutional decisions about freedom of speech?, Chinmayi Arun, Scroll.in.

3. Office Puts Microchips Under Employees’ Skin, Luke Karmali, IGN.

4. Why economists are wrong about tech, Michael Baxter, The Next Web.

5. Watch Vestigen’s Project Ara Sensors Show How Modular Smartphones Could Change The World, Darrell Etherington, TechCrunch.

6. #Mufflerman vs #IronLady: it’s hashtag war in Delhi politics, Tania Goklany, Hindustan Times.

7. FCC Chairman Tom Wheeler: This Is How We Will Ensure Net Neutrality, Tom Wheeler, WIRED.

8. Virtual Reality, The Empathy Machine, Josh Constine, TechCrunch.

Editors’ Picks (01/02/15)

1. Securing a future for Digital India, Arun Mohan Sukumar, The Hindu.

2. SC orders Google, Yahoo! and Microsoft to stop advertisements relating to sex determination, Apoorva Mandhani, LiveLaw.

3. Drone maker to add no-fly firmware to prevent future White House buzzing, Sean Gallagher, ARSTechnica.

4. The Pirate Bay is live once again, Selena Larson, DailyDot.