OONI at RightsCon 2018

We are excited to participate in RightsCon next week: the world’s leading event on human rights in the digital age. Annually hosted by Access Now, this three-day conference will take place in Toronto from 16th to 18th May 2018.

Over the past few years, RightsCon has provided us with the opportunity to participate in critical discussions in the digital rights field, meet many fascinating people and organizations, and form new coalitions. We therefore look forward to participating in RightsCon and are eager to meet many new faces!

Read this post to learn more.

Publication date: 11th May 2018

Publisher: Open Observatory of Network Interference (OONI)

OONI Community Interviews: Moses Karanja

We first met Moses Karanja several years ago at the Citizen Lab Summer Institute. He’s a Kenyan information controls researcher who previously worked with CIPIT, the research centre of Strathmore University Law School. Currently, he’s a PhD student at the University of Toronto.

Over the past few years, Moses has championed OONI community engagement across Africa. Thanks to his tireless efforts, communities in many African countries are now running OONI Probe and using OONI data to examine internet censorship and other forms of network interference. We have worked with Moses on a number of research reports and are grateful for his commitment to defending a free and open internet.

Today we publish an interview with Moses so that you can have a chance to meet him too and learn more about his work.

View his interview here.

Publication date: 9th May 2018

Publisher: Open Observatory of Network Interference (OONI)

ParkNet: Short Documentary on Internet Censorship in Cuba

Last year we had the opportunity to travel to Cuba to explore its internet landscape. We spent most of our time hopping from one public WiFi hotspot to another, measuring networks in Havana, Santa Clara, and Santiago de Cuba. You might remember that we published a research report on our findings.

Today we publish a short documentary (“ParkNet”) on our study of internet censorship in Cuba.

View the video here.

Publisher: Open Observatory of Network Interference (OONI)

Publication date: 23rd April 2018

The Illusion of User Choice

CEOs like Alphabet’s Larry Page and Facebook’s Mark Zuckerberg have repeatedly claimed that users have choice: that we choose which information to share, and therefore control our data.

This narrative is misleading, and here’s why.

Privacy settings

Privacy settings only cover a small portion of our data. They allow us to explicitly choose “yes” or “no” for a predetermined subset of our data, excluding all the other types of data that companies monetize.

Whether or not you use Google and Facebook, these companies collect information about you anyway. Most websites on the internet use Google and Facebook tracking technologies, allowing these companies to track our online activities. Facebook may not sell our personal data individually (as Zuckerberg repeatedly claimed this week), but it sells access to our aggregate data: the types of information that group us, profile us, and make us targets of advertising. All such data extends far beyond the choices we can make via privacy settings.
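To see how this cross-site tracking works in principle, here is a minimal sketch of the classic “tracking pixel” mechanism. Everything in it is hypothetical and simplified: when unrelated websites all embed the same third party’s tiny image or script, each visit triggers a request to that third party carrying the visitor’s tracker cookie and the page they were on, so the tracker can assemble a browsing profile without the user ever visiting the tracker’s own site.

```python
# Hypothetical, simplified model of third-party tracking via an embedded pixel.
# In reality the "cookie_id" arrives as an HTTP cookie and the page URL via
# the Referer header; here we just model the bookkeeping.

class Tracker:
    def __init__(self):
        # cookie id -> list of pages where the pixel was loaded
        self.profiles = {}

    def log_pixel_request(self, cookie_id, page_url):
        """Called whenever a browser fetches the embedded 1x1 pixel."""
        self.profiles.setdefault(cookie_id, []).append(page_url)

# Three unrelated sites all embed the same tracker's pixel:
tracker = Tracker()
tracker.log_pixel_request("user-123", "https://news.example/article")
tracker.log_pixel_request("user-123", "https://shop.example/shoes")
tracker.log_pixel_request("user-123", "https://health.example/symptoms")

# The tracker now holds a cross-site browsing profile for one user,
# even though that user never visited the tracker's own website.
print(tracker.profiles["user-123"])
```

The point of the sketch is that no privacy setting on any individual website governs what the third party does with the profile it accumulates.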

By arguing that privacy settings give us control over our data, internet companies are essentially implying that only a subset of our data deserves protection, while all our other data is fair game in the data industry. This argument also appears to be an attempt to exempt internet companies from their responsibility, and to place the burden of data protection on the user.

The opportunity to make explicit choices about our data (and therefore to have control over it and responsibility for it), though, is the exception rather than the norm.

Terms of service: Take it or leave it

Most of our data is processed (and analysed, shared, retained, disclosed, etc.) on the basis of terms and conditions, which we cannot easily negotiate. When we use online services, companies can use our data for any purpose that can be justified on the basis of their corporate interests. This means that our data can be used as part of profiling, marketing and advertising.

During the Facebook hearings earlier this week, U.S. Senators complained about Facebook’s terms of service being “too long” and written in “complicated English” for the average user. The main problem, however, with the privacy policies of most internet companies is that they are too vague in relation to our data and fail to answer a number of key questions adequately, such as: Who are the mysterious “third parties” that our data is shared with and disclosed to? Which information is shared with and disclosed to them? How exactly will our data be used and why? How and why will its use be re-purposed? Which information is tracked and collected when we visit third-party websites? Which information is used for marketing and advertising purposes, why and how?

Answering these questions may be challenging for a number of reasons. A lot of this probably depends on the types of services, the users, and the data they generate (directly and indirectly). The data industry consists of a long chain of diverse companies that aggregate, buy, and sell different types of datasets. As a result, it’s likely that not even these companies can determine what will ultimately happen to our data once it’s sold, shared, or disclosed in aggregate to other third parties, who in turn have different policies, comply with different laws, and have different customers. In other words, tracking the secondary, repurposed use of our data once it has been aggregated and sold in the opaque and seemingly endless chain of data brokers is at best an inconvenience and at worst impossible.

But if internet companies are going to place the responsibility of data protection on the user, they should be prepared to answer these difficult questions. With their vague privacy policies and terms of service, they instead argue that if we are not comfortable with their data processing practices, we should simply refrain from using their services altogether. It’s pretty much a “take it or leave it” situation.

Realistically though, can we refrain from using the most popular internet services?

The Network Effect

What makes leaving (or refraining from using) popular internet services so difficult is what is called The Network Effect: the value that these services have because many people use them (including your social networks, such as your friends, family, and colleagues).

How easy is it to connect with friends and to find out about events if we’re not on Facebook? More and more companies – including our banks and potential employers – rely on online social networks to verify our identities and/or to check our credibility. The online “public square” is provided by companies like Twitter. If we choose not to use these services, are we effectively being excluded from public discussions?

The Network Effect appears to be a symptom of a race between internet companies to monopolize services. Because few alternatives to their services exist, more people end up using them, they accumulate value, and The Network Effect is (almost) unavoidable. In some cases, alternative services may not even exist, or are unknown outside of niche tech circles. None of this is a coincidence. Internet giants, like Alphabet and Facebook, work hard to buy their competition. A mere look at Alphabet’s acquisitions and investments over the last decade shows that they bought many startups that could have competed with their services. Similarly, Facebook bought some of its competitors, such as Instagram and WhatsApp.

What is concerning, though, is that it seems like their competitors want to be bought. Apart from the monetary value of an acquisition by a multi-billion dollar company, there is also the social value – the prestige – that comes with it. Many startups are considered “successful” only once they are bought by the likes of Alphabet, even if this contributes to a less competitive market and fewer alternative services. This is precisely what makes companies like Alphabet and Facebook so powerful, and “user choice” questionable.

If alternative services barely exist and our societies use and rely on the services provided by internet giants, is opting out of their services even an option? And even if we do choose to leave such services, what choices can we make for the protection of our data? Is it even possible to protect our data, given that most websites on the internet use tracking technologies owned by internet giants?

Data Responsibility

Internet companies place the responsibility of data protection on the user because it is convenient (and profitable) for them to do so.

We can argue that internet companies should protect their users by default, and perhaps if most of their users demand such protection (and start boycotting their services) they’ll have some incentive to do so. But the reality is that the foundation of their business model relies heavily on our data, and so they’re more likely to provide “cheap protections” to make us happy.

We can argue that governments should apply stricter regulations, but the reality is that many regulators struggle to understand the complexities around the data industry and how the internet works. Even experts struggle to understand these things.

So if internet companies and governments realistically can’t (or lack the incentive to) protect our data adequately, who will?

We can take matters into our own hands: install ad blockers, use search engines like DuckDuckGo instead of Google Search, use Tor Browser instead of Google Chrome, use Etherpad instead of Google Docs, use Riseup instead of Gmail, use Signal instead of WhatsApp, and so on. But is this enough?

The problem is that some of these alternatives don’t work as well, or offer fewer features, than the services offered by the internet giants we’re all used to. Many of these alternative, privacy-enhancing services are poorly funded and lack resources – especially in comparison to multi-billion dollar internet companies, like Google or Facebook. And the reality is that many of us need great services in order to get our jobs done. We also need the services that the rest of our friends, family, and colleagues use. Switching entirely to alternative, privacy-enhancing services is an inconvenience, and a privilege that many can’t afford (due to lack of know-how, the Network Effect, etc.).

But this issue requires much more than individual actions.

The Data Problem

The Data Problem is societal. The fact that your data is being collected, analyzed, and sold without your knowledge or explicit consent is a problem. But the bigger problem is the analysis, use, and sale of aggregate data. Or to put it differently, the clustering of society, the profiles that internet companies create about groups and communities within societies, and the algorithmic discrimination that comes with that.

The Data Problem is structural. We live in a world where our daily lives rely on internet services, which in turn rely on the data generated by our daily lives. This feedback loop ends up handing more and more power over to companies that have no real social responsibility. In this type of world, can “user choice” be anything but an illusion?

Privacy used to be about secrecy, but over the last decade (with the boom of internet services), we have come to redefine privacy as “control over data”. Has the time come to re-think this privacy definition again?

The idea that privacy is the core objective of data protection is quite individualistic. It assumes that harm is mainly posed to the individual and their rights. While this can be true, my main concern is the harm that the data industry can potentially cause to societies: the algorithmic clustering of society into groups and profiles, and the discrimination that can result from that. The power that companies – who weren’t elected and who bear no social obligations other than to please their customers – are accumulating every day. The shifting power dynamics and their consequences on groups and societies at large.

With the narrative of “user choice”, internet companies are diverting our attention from the real problem: their power and influence on society. The Data Problem is not so much about your personal data, but about how all of our data in bulk feeds this new Infocratic System.

OONI’s recent participation at events in Africa, India, and Europe

Over the last months, the OONI team had the opportunity to host workshops, give presentations, and participate in discussions at the following conferences and events:

These events provided us a great opportunity to meet many fascinating people from various communities, learn about their work, form new collaborations, and collect feedback for the improvement of our tools and methodologies.

Read the rest of the post here.

Publisher: Open Observatory of Network Interference (OONI)

Publication date: 11th April 2018

Sierra Leone: Network disruptions amid 2018 runoff elections

Last weekend, two network disruptions occurred in Sierra Leone right before and after the country’s runoff elections.

This post examines these disruptions and shares data that corroborates local reports.

It seems that the network disruptions were caused by an ACE submarine cable cut. Google traffic and BGP data suggest that the second disruption, following the runoff elections, may have been an internet blackout.

Read the post here.

Publisher: Open Observatory of Network Interference (OONI)

Publication date: 5th April 2018

Investigating Internet Blackouts from the Edge of the Network: OONI’s new upcoming methodology

Imagine a day when the internet is shut down completely. You have to work, check the news, and communicate with your friends and family. All of a sudden, you can’t do any of that, because there simply is no internet. It feels like a strange form of time travel has taken place: you’re thrown several decades into the past, into a world without internet, but one that has learned to rely heavily on it. And sometimes, you remain in that world for several days (or months, in the case of the anglophone region of Cameroon). None of this makes sense, and there’s no clear justification for it either.

This is the type of reality that millions of people around the world experience every year, when an internet blackout takes place in their region.

Read more here.

Publisher: Open Observatory of Network Interference (OONI)

Publication date: 4th April 2018

Iran Protests: OONI data confirms censorship events

At this point, you have probably read all about the major anti-government protests that erupted across Iran over the last week. You may have even read about how services like Telegram and Instagram were blocked, reportedly as part of a government attempt to stifle the unrest.

We publish this post to share OONI network measurement data collected from Iran from 28th December 2017 (when the protests started) to 2nd January 2018. OONI data confirms the blocking of Telegram, Instagram, and Facebook Messenger amidst Iran’s protests and reveals how the blocks were implemented.

Read the report here.

Publication date: 5th January 2018

Publisher: Open Observatory of Network Interference (OONI)

Year in Review: OONI in 2017

As the end of 2017 approaches, we publish this blog post to share some OONI highlights from the last year. We also share some of the things we’ll be working on in 2018!

Read the post here.

Publication date: 30th December 2017

Publisher: Open Observatory of Network Interference (OONI)

OONI at the 34th Chaos Communication Congress (34C3)

The OONI team attended the 34th Chaos Communication Congress (34C3): Europe’s largest hacker conference on technology, society, and utopia. We hosted an assembly (called the OONI-verse), and our project lead (Arturo Filasto) presented OONI.

Learn more here.

Publication date: 23rd December 2017

Publisher: Open Observatory of Network Interference (OONI)