A new social contract or a mere blank check?

Ricardo Zapata Lopera
Oct 17, 2019

Four things to know before thinking about a new ‘social contract’ for the digital age

Photo by Joël de Vriend on Unsplash

Let's put it simply: there are many things we don't know are happening behind, or as a by-product of, digital technologies. Not nice things, worrying things. But our society is so dazzled by tech discourse that we can only think about all the solutions it can bring to our troubled world. The narrative goes on to say that we are in the midst of a revolution (the 4th for some, the 5th for others) and that it's time to forge a new social contract to make the unleashed forces of such disruptive technologies work for society.

And I would agree. We are indeed in the midst of a moment of tension, and the power inequalities brought about by digital technologies should be rebalanced. Work, democracy, the public sector, and life itself will not continue as before. But before rushing to sign a new contract, we should know what the deal is about. We've been told it is about solutions, but I don't see that we as a society have even managed to grasp what's really at stake. A new social contract today would be a mere blank check to big tech capital: the digital economy is not what we have been told it is, and we are only slowly discovering the truth behind it.

In the next paragraphs I will argue that, in at least four ways, digital technologies are not fulfilling their promise. Quite the contrary. To address this, and as a precondition for betting on a new social contract, I suggest we work on building consensus around stronger accountability from the big digital players and on improving the democratic control we have over value creation.

First: self-determination and democracy are at risk

Shoshana Zuboff's idea of Surveillance Capitalism describes the mechanisms that operate in the digital economy to extract what she calls "behavioral surplus": more data are rendered than are required for service improvement. The surplus data feed machine intelligence that fabricates predictions of user behavior and sells them in behavioral futures markets for profit maximization. This logic drives the need for ever more data extraction, in pursuit of predictions that come as close as possible to guaranteed outcomes in real-life behavior. The extraction began online, but it now reaches into the physical world, our daily lives, and our bodies and selves, ultimately betting on modifying our behavior.

Zuboff, a social psychologist by training, develops a well-grounded reflection on how digital technologies ultimately derive their power from an unequal division of learning, in which their owners come to know more about us than we know about ourselves, holding in their hands the "means of behavioral modification". For Zuboff, what is ultimately at stake is the individual's self-determination. This is not just a human right, but a basic component of what we understand as democracy.

The problem is not a private one. It is not just the individual’s privacy and autonomy that are at risk. The problem is shared, and, as such, any solution or alternative must also be grounded in collective action.

Second: automation is a chimera; digital technologies are labor-intensive

The digital transformation is intensifying the precariousness of human work. In "The Workers of the Click", Antonio Casilli documents a phenomenon we are only beginning to discover: the digital economy and artificial intelligence are neither eliminating nor automating human labor, but digitizing it. In doing so, they convert the human productive gesture into underpaid or unpaid micro-operations, deemed too small, too inconspicuous, too playful, or of too little added value. On-demand digital labor (Uber or TaskRabbit), micro-work (Amazon Mechanical Turk or UHRS), and networked social labor (Facebook or Snapchat) are the three major modalities Casilli distinguishes.

For example, Yann LeCun, VP and Chief AI Scientist at Facebook, explains that recent advances in AI are due not so much to improvements in scientific methods as to the growing availability of hundreds of millions of example images, texts, and sounds distributed across millions of categories. Casilli argues that platform value has come not from "user-generated content" but from "user-generated content classification".

What we are coming to understand today is that digital platforms and 'artificial intelligence' really depend on a heavy dose of human work, whether to train algorithms, to feed databases or, in many cases, to make something merely appear automatic (thanks to micro-tasking). Yet the labor required to accomplish this is outsourced, fragmented and, at times, hired under precarious conditions.

For Casilli, this is not a temporary phenomenon, as might be thought. He argues that as long as AI seeks to reach the level of human intelligence, the latter will keep moving too, so that the distance separating them will never be erased. Human intelligence transforms and adapts to new practices, and AI will require perpetual updates that, as today, only humans will be able to perform.

Third: who’s controlling public decisions?

The most recent report by Philip Alston, the UN Special Rapporteur on extreme poverty and human rights, "produced a devastating account of how new digital technologies are revolutionizing the interaction between governments and the most vulnerable in society. In what he calls the rise of the 'digital welfare state', billions of dollars of public money is now being invested in automated systems that are radically changing the nature of social protection," as The Guardian reported.

Alston argues that “the digitization of welfare systems has very often been used to promote deep reductions in the overall welfare budget, a narrowing of the beneficiary pool, the elimination of some services, the introduction of demanding and intrusive forms of conditionality, the pursuit of behavioral modification goals, the imposition of stronger sanctions regimes, and a complete reversal of the traditional notion that the state should be accountable to the individual.”

Hand in hand with the development of artificial intelligence and smart contracts, changes are emerging in the way we regulate and control the processes that affect us as a society. These technologies involve decision-making through algorithms, where the code becomes the law and accessing it becomes a matter of technical expertise. Their opacity, lack of transparency, and near-total absence of accountability are the biggest problems. Regulation and control by public authorities also remain a challenge, although it is possible to imagine ways of influencing the governance of these systems. Blockchain-based applications in particular, because of their distributed and transnational nature, are resilient to intervention by external actors. This generates a democratic tension: where does responsibility lie? Under what criteria are decisions made? Can the public regulator intervene in case of a violation of rights? Does it have the technical capacity to do so?

Fourth: zero marginal cost? Digital technologies have a material limit

We have been told that digital technologies are fully scalable. An assumption of infinity is part of the digital mindset. Albert Wenger described this best in his book "World After Capital", where he suggests that "Once a piece of information is on the Internet, it can be accessed from anywhere on the network for no additional cost". This is the idea of zero marginal cost.

That additional YouTube video view? Marginal cost of zero. Additional access to Wikipedia? Marginal cost of zero. Additional traffic report delivered by Waze? Marginal cost of zero.

But the material side of the digital is left out of this picture. Think about it in three ways. The direct impacts of the production and use of ICTs (information and communication technologies) on the environment are the most obvious. But there are also indirect impacts, related to the effect of ICTs on production processes, products, and distribution systems. Finally, there are structural and behavioral impacts, related to the way ICTs stimulate structural change and growth in the economy, and to their effects on lifestyles and value systems. These last two can be either positive or negative. This is the framework developed by Berkhout and Hertin back in 2004.

Source: Berkhout, F., & Hertin, J. (2004). De-materialising and re-materialising: digital technologies and the environment. Futures, 36(8), 903–920.

It is striking to look at some of the direct impacts. The French think tank The Shift Project estimates that digital technologies will account for 4% of greenhouse gas emissions in 2020 (2.1 Gt). That's half of the emissions of the light-vehicle sector (8%) and twice the emissions of air transportation (2%). Video streaming alone would emit as much CO2 as Spain (300 Mt per year). The troubling aspect is that the global energy consumption of digital technologies is growing by 9 percent per year.

Take the case of revolutionary technologies like blockchain. O'Dwyer and Malone estimated the energy requirements of the Bitcoin network in 2014, suggesting that the total power used for Bitcoin mining was somewhere between 0.1 and 10 GW. As a reference, average Irish electricity demand is around 3 GW.
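To get a sense of scale, here is a minimal back-of-envelope sketch (in Python, using only the figures cited above) that converts those average power estimates into annual energy; the conversion is simple arithmetic, not an additional data source:

```python
# Back-of-envelope conversion of the cited average power figures (GW)
# into annual energy (TWh per year). Purely illustrative arithmetic.

HOURS_PER_YEAR = 8760  # 365 days * 24 hours

def gw_to_twh_per_year(gigawatts: float) -> float:
    """Average power sustained over a year, expressed in TWh."""
    return gigawatts * HOURS_PER_YEAR / 1000  # GWh -> TWh

estimates = [
    ("Bitcoin mining, low estimate", 0.1),
    ("Bitcoin mining, high estimate", 10.0),
    ("Average Irish electricity demand", 3.0),
]

for label, gw in estimates:
    print(f"{label}: {gw} GW ≈ {gw_to_twh_per_year(gw):.1f} TWh/year")

# Bitcoin mining, low estimate: 0.1 GW ≈ 0.9 TWh/year
# Bitcoin mining, high estimate: 10.0 GW ≈ 87.6 TWh/year
# Average Irish electricity demand: 3.0 GW ≈ 26.3 TWh/year
```

The uncertainty spans two orders of magnitude, but even the low end is far from negligible for a single application.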

Yet the problem is not just energy consumption. Waste from tech hardware is becoming a significant problem. From the European Union alone, around 350,000 tons of electronic waste are exported to emerging countries, according to the NGO Basel Action Network. And the number of smartphones has skyrocketed: while in 2013 there were about 1.7 billion devices, it is estimated that by 2020 the figure could reach 5.8 billion units. These devices require scarce materials and, yes, producing one gram of a smartphone still takes 80 times more energy than producing one gram of a car.

Some will counter that, with time, we'll find more efficient ways of producing and that the direct impacts can be mitigated. But what persists throughout every discussion of the environmental impacts of digital technologies is a permanent tension between the potential to build a smarter, more efficient economy through digital tech, and the acceleration of the present, traditional economy that digital technologies facilitate. The "Jevons Paradox" best explains this phenomenon.

It tells us that, as unit costs decrease thanks to technical advances, consumption also tends to increase, driving total resource use back to levels similar to those seen before. For example, "on average, when comparing generic online and traditional behaviors, online shopping tends to be more efficient than traditional shopping. However, when taking into account the variability of multiple consumer behaviors, this is no longer the case." As their analysis shows, Weideli and Cheikhrouhou estimated that online shopping could even be more polluting than traditional shopping if shoppers are impatient and order fast shipping.

Source: Weideli, D., & Cheikhrouhou, N. (2013). Environmental analysis of US online shopping. Ecole Polytechnique Fédérale de Lausanne — EPFL: Lausanne, Switzerland.
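To make the rebound mechanism behind the Jevons Paradox concrete, here is a minimal numerical sketch in Python. All of the numbers (baseline demand, unit costs, the demand elasticity) are hypothetical and chosen purely for illustration; the point is only that when demand responds strongly enough to falling unit costs, a twofold efficiency gain can still leave total resource use higher than before.

```python
# Minimal, purely illustrative sketch of the Jevons Paradox (rebound effect).
# Hypothetical numbers: an advance halves both the cost and the energy per unit,
# but demand expands as unit cost falls, so total energy use can end up higher.

def total_energy(unit_cost, baseline_demand, baseline_cost, elasticity, energy_per_unit):
    """Demand grows as unit cost falls; total energy = demand * energy per unit."""
    demand = baseline_demand * (baseline_cost / unit_cost) ** elasticity
    return demand * energy_per_unit

BASELINE_COST = 1.0    # arbitrary cost per unit of digital service
BASELINE_DEMAND = 100  # arbitrary baseline consumption
ELASTICITY = 1.2       # assumed demand elasticity (> 1 means the rebound dominates)

before = total_energy(BASELINE_COST, BASELINE_DEMAND, BASELINE_COST, ELASTICITY, energy_per_unit=1.0)

# A technical advance halves both the unit cost and the energy needed per unit...
after = total_energy(0.5 * BASELINE_COST, BASELINE_DEMAND, BASELINE_COST, ELASTICITY, energy_per_unit=0.5)

print(f"Total energy before the efficiency gain: {before:.0f}")  # 100
print(f"Total energy after the efficiency gain:  {after:.0f}")   # ~115, despite a 2x gain
```

Whether the rebound actually dominates depends on how demand responds in each case, which is precisely the empirical question that studies like the one cited above try to answer.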

We could think of managing these kinds of behavior by encouraging cleaner modes of shopping. Nevertheless, at the end of the day, can we resist the ease of buying with just a click that digital tech offers? Andrew McAfee argues that our present capitalist system is helping us become more efficient, but the data show that human CO2 emissions continue to rise.

Is there no room for austerity in this discussion?

What to do?

Describing problems might appear easy, especially in a world that places a great deal of value on solutions. But centuries of scientific discovery have taught us that asking the right questions is the first step toward solving a problem.

Nonetheless, we do have an issue today with the disruption brought by digital technologies, and it is important to think about solutions. Some advocate pursuing a path of networked individuals, increasing returns to scale, zero marginal cost, and new productive alliances between customers and companies. This is the path defended, for example, by Nicolas Colin in Hedge, where he suggests it is time to develop a new safety net for the digital age.

But this approach suggests we sign a blank check. What is the deal, exactly? We have been hearing about automation, but today we discover the enormous amount of labor required to keep the digital wheel turning. We were promised a smarter state, but what we see today is the depoliticization of public action through the surrender of publicly accountable processes to opaque, private actors. We thought the digital was synonymous with infinite possibilities, but today we discover that even our bytes have a carbon footprint. We thought the digital was about improving our lives, but today we discover that we are just the raw material of someone else's profit.

There are, however, a couple of things we can do beyond complaining. Casilli's approach is worth considering. He suggests the first step lies in recognizing digital labor, building on Axel Honneth's notion of recognition as the central element of social conflicts. Second, he reviews the 'common sense' solutions to unrecognized labor that many talk about. At first, it could be argued that the social protections won by formal wage labor should be extended to digital labor. But digital labor's ability to move across the planet means that installing social protection in one country would merely displace the problem geographically. A marginal, old-fashioned way to address that would be platform certification, but scaling remains a barrier. Other market-oriented solutions that have been proposed include a micro-royalties system and a personal data market, but, according to Casilli, these would only deepen the piece-rate remuneration system that already exists, doing nothing to solve the problem of fair remuneration.

Casilli finally proposes a new type of platform. In fact, he suggests a return to the original 19th-century idea of the platform, in which dependent labor is abolished, private property repealed, and governance of the commons instituted. But for the author, 'platform cooperativism' risks remaining a niche phenomenon that can easily be co-opted.

For him, a more comprehensive solution points towards creating an informational common domain with shared governance over data. He leans towards the creation of a social digital income (to be distributed to each individual, but also to the platform community as a whole), although he leaves the proposals for implementing it largely open-ended. Ultimately, he argues, new platforms should remunerate digital labor, finance the commons, and renounce the present logic of proprietary enclosure and technological opacity.

This proposal resonates with Evgeny Morozov's proposed agenda of reclaiming the "feedback infrastructures". For him, "the ownership and operation of the means of producing 'feedback data' are at least as important as the question of who owns the data itself". It also aligns with Sébastien Soriano's proposals for data as a common good ("Crowdsourced data should belong to the many!") and for interoperable platforms.

In the end, the logic is straightforward: in a collectively built environment, value should be accessible to the many.

But at the heart of these ideas also lies the need to match collective value production with collective governance of the underlying infrastructures. What should be recognized is that most digital economic value is socially produced and, as such, should be socially governed. This means changing the way we conceive of the digital. One interesting vision is that of Omri Ben-Shahar, who introduced the idea of data pollution. He argues that we should look at digital environments with an environmental-law mindset, as a way to sort out public differences, defend public values, and reduce market failures. But, as has been argued, environmental law has shown its limits, which is why the idea of environmental democracy is today understood as a step forward.

Environmental democracy is the dynamic development of the three access rights: access to information, to public participation, and to justice in environmental matters. In the digital realm, greater transparency from platforms, more participation by stakeholders in their governance, and better conflict-resolution mechanisms would provide a solid institutional framework for eventually reaching consensus over the new social contract for this age.

Finally, getting the right information might remind us of the material limits of digital technologies. Present discussions about regulating the digital economy assume infinite possibilities, a mistaken starting point. A sense of digital austerity might be necessary after all. If tech is that problematic, let's try to use it less. Not a joke.

This post was written as an assignment for the ‘Regulation and Digital Economy’ course at Sciences Po Paris, taught by Bertrand Pailhes and Sébastien Soriano.

