
Co-authored by Jaya Klara Brekke, Balázs Bodó and Jaap-Henk Hoepman. Published in Internet Policy Review.

Decentralisation: a multidisciplinary perspective

Abstract: Decentralisation as a concept is attracting a lot of interest, not least with the rise of decentralised and distributed techno-social systems like Bitcoin, and distributed ledgers more generally. In this paper, we first define decentralisation as it is implemented in technical architectures and then discuss the technical, social, political and economic ideas that drive the development of decentralised, and in particular, distributed systems. We argue that technical efforts towards decentralisation tend to go hand-in-hand with ambitions for rearranging power dynamics. We caution, however, against simplistic understandings of power in relation to the decentralisation-centralisation spectrum, and argue that in practice, decentralisation might very well be served by and produce centralising effects. The paper then goes on to discuss the critical literature that highlights some of the common assumptions and critiques made about decentralisation and the pros and cons of a decentralised approach. Finally, we identify some of the missing pieces in current debates about decentralisation, and argue for a more nuanced and grounded approach to the centralisation/decentralisation dichotomy.

1. Introduction

The concept of decentralisation traverses multiple contexts, fields and disciplines. We begin this multidisciplinary discussion on decentralisation by describing the technical definitions and motivations for decentralisation in network engineering. We then move on to discuss the broader motivations for such decentralised networks, which span social, political and economic aims. Our intention is not to compare cases of decentralisation across disciplines and contexts as much as to point out that a study of technical decentralisation will invariably invoke, but not always produce, forms of social, political and economic decentralisation. In the process, we identify a shared concern among the vastly different contexts, definitions and uses of the term: the discontent with, and the reform of, existing power relations, whether expressed in technical or social terms.

Decentralised and distributed technical systems have given rise to some truly unique social and economic practices. The success of the BitTorrent file-sharing protocol contributed to the rise of anti-copyright political movements, and shifted multiple business practices. The Tor network (https://www.torproject.org/) provides secure communications to individuals vulnerable to surveillance, censorship or prosecution. Blockchain created a global network of value transfer outside of existing institutional frameworks. Under various labels, such as ‘web 3.0’, ‘re-decentralisation’, or ‘blockchains’, various communities have been trying to implement techno-social systems where technical decentralisation is consciously used to pursue social, economic, or political goals. In practice, however, such projects often involve and depend on centralised infrastructures or decision-making, or indeed produce centralising effects. Rather than this necessarily being a critique of such projects, we argue that the coexistence of different systems in practice presents an opportunity to develop more nuanced analyses of the properties, benefits and downsides of both centralised and decentralised technical architectures. In the following, we catalogue the drivers of decentralisation, but also point to those oft-hidden factors that may limit the uncritical, cross-domain application of decentralisation as an organisational schema to implement, as opposed to imagine, alternative modes of social order.

2. Decentralisation as a network topology - the technical definition

Decentralisation is often used as a general term for describing network architectures that more precisely span from decentralised to distributed. Nevertheless, the distinctions are technically significant: the topology of networks (their nodes and their interconnections) determines their properties (Bondy & Murty, 2008). One widely referenced classification of network topologies distinguishes between centralised, decentralised, and distributed networks (Baran, 1964).

In this schematic, centralised describes a network with one central node (for example a server), or a cluster of tightly connected nodes, that is connected to all other nodes in the network (clients), while all these other nodes are only connected to this single central node. As a consequence, the failure or destruction of the central node disconnects all nodes from the network, and prevents them from communicating with each other.

In a decentralised network, there is a hierarchy of nodes, where nodes at the bottom of the hierarchy are essentially part of a small star network that connects them with a node one level higher in the hierarchy. These nodes are in turn part of another star network connecting them to the next higher-level node in the hierarchy. Failure of a few nodes in a decentralised network still leaves several connected components of nodes that will be able to communicate with each other (but not with nodes in a different component).

At the other end of the spectrum, distributed networks are networks where every node has roughly the same number of connections (called edges) to other nodes. Distributed networks have the property that failure of a few nodes (even if they are deliberately targeted) still leaves the network connected, allowing all nodes to communicate with each other (albeit over a possibly much longer path than in the original network).
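To make this difference concrete, the following sketch (ours, not from the paper) uses the Python networkx library to compare a star-shaped centralised network with a roughly regular distributed mesh: removing the single hub shatters the star, while removing a node from the mesh typically leaves it connected. The node counts and degree are illustrative choices.

```python
# A minimal sketch contrasting Baran's centralised and distributed
# topologies under node failure, using the networkx library.
import networkx as nx

def largest_component_after_failure(graph, failed_nodes):
    """Remove the given nodes and report the share of the surviving
    nodes that remain in the largest connected component."""
    g = graph.copy()
    g.remove_nodes_from(failed_nodes)
    if g.number_of_nodes() == 0:
        return 0.0
    largest = max(nx.connected_components(g), key=len)
    return len(largest) / g.number_of_nodes()

# Centralised: a star of 1 hub + 50 clients; distributed: a 4-regular mesh.
star = nx.star_graph(50)                       # node 0 is the central hub
mesh = nx.random_regular_graph(4, 51, seed=42)

# Killing the hub disconnects every client from every other client,
# while killing one mesh node almost certainly leaves the rest connected.
print(largest_component_after_failure(star, [0]))  # ~0.02: only isolated clients remain
print(largest_component_after_failure(mesh, [0]))  # typically 1.0: mesh stays connected
```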

The more distributed a network is, the more resilient it is overall to various forms of disturbances and the less dependent the network as a whole is on any single node. This also suggests that the network might be less dependent on any particular person, company or organisation that would be operating a particular node. Indeed, the fact that the internet is decentralised, but not distributed, has become a major technical argument for why governments and technology companies are able to exert more control over it than the internet pioneers envisioned (Barlow, 1996; Galloway, 2004; Kaiser, 2019; Snowden, 2019; Walch, 2019). These properties have granted significant ‘narrative power’ to the distributed network topology (Reijers & Coeckelbergh, 2018) and to Baran’s network diagrams as illustrative of a story of power as well as of network topologies.

The concepts of centralisation, decentralisation and distribution can apply to both physical and virtual networks, further strengthening their appeal and applicability. The internet itself is a concrete decentralised network (with actual physical connections between nodes) but we often experience it as a centralised virtual network when we connect to centralised web servers, services and platforms.

The virtual client-server structure, strengthened both through business models and technical means, has allowed companies like Amazon, Facebook and Google to establish highly centralised virtual networks of communication or commerce across decentralised concrete networks (van Dijck et al., 2018). Partly in response to the heavily centralised nature of key online services (search, communication, content distribution), decentralised or distributed alternatives have been proposed (e.g., Mastodon, Solid). Decentralisation, and especially distributed networks, however, come with particular technical challenges.

2.1 Challenges in distributed systems

Radically decentralised, i.e., distributed, network topologies raise unique challenges, such as those related to coordination and fault tolerance. In distributed networks, no single centre of control exists that can ensure coordination. Instead, distributed networks are organised through protocols, which spell out the generic rules and technical standards nodes need to follow to be able to join the network (Galloway, 2004).

The two most important roles of protocols in distributed systems are to (1) facilitate coordination or ‘consensus’ in the network, and (2) make the system tolerant to faults. Coordination rules are necessary, since no single node has a complete, consistent, real-time view of the state of the system. A classical coordination problem studied in this setting is the mutual exclusion problem, where one critical resource (say a network printer) is shared among several nodes, and where each of the nodes needs exclusive access to the resource every once in a while, to complete a task (as otherwise the pages of several documents would get garbled when printed) (Dijkstra, 1983). Other coordination challenges include efficient routing, the sharing of global information on the state of the network, and dealing with faults in the network.
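As an illustration of coordination through protocol rules alone, here is a minimal sketch (our example, not from the paper) of token-based mutual exclusion on a logical ring: a single token circulates among the nodes, and a node may use the shared resource only while holding it, so exclusive access follows without any central coordinator. The node names are arbitrary.

```python
# Minimal sketch of token-based mutual exclusion on a logical ring.
# The single circulating token is the only permission to use the shared
# resource (e.g., the network printer), so no central coordinator is needed.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    wants_resource: bool = False

def circulate_token(ring, rounds=1):
    """Pass the token around the ring; a node enters its critical
    section only during the step in which it holds the token."""
    for step in range(rounds * len(ring)):
        holder = ring[step % len(ring)]  # the node currently holding the token
        if holder.wants_resource:
            print(f"{holder.name} enters the critical section")  # exclusive access
            holder.wants_resource = False
        # the token is implicitly handed to the next node on the ring

ring = [Node("A"), Node("B", wants_resource=True), Node("C", wants_resource=True)]
circulate_token(ring)  # B, then C, gets exclusive access, in ring order
```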

The problem of fault tolerance concerns how to ensure that the overall network remains functional while some of its components fail. Designers and engineers consider several failure scenarios for distributed systems: where nodes become unavailable or display unexpected or unaccounted-for behaviour, as well as what are called Byzantine failures, where nodes attempt malicious, manipulative or destructive behaviour. Under certain conditions, so-called Byzantine Agreement protocols exist that allow a system to agree on a common output even if fewer than one-third of the nodes are faulty or malicious (Lamport et al., 2019). Fully asynchronous systems (where there are no time bounds), however, defy solutions to the Byzantine Agreement problem (Fischer et al., 1985).
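The classical bound behind this result is that tolerating f Byzantine nodes requires at least n = 3f + 1 nodes in total. A small illustrative helper (ours, not from the paper) makes the arithmetic explicit:

```python
# Sketch of the classical Byzantine fault-tolerance bound: agreement
# requires n >= 3f + 1 nodes to tolerate f Byzantine (arbitrarily
# misbehaving) nodes.

def min_nodes_for_byzantine_tolerance(f: int) -> int:
    """Smallest network size that can tolerate f Byzantine nodes."""
    return 3 * f + 1

def max_byzantine_faults(n: int) -> int:
    """Largest number of Byzantine nodes an n-node network can tolerate."""
    return (n - 1) // 3

for n in (4, 10, 100):
    print(f"{n} nodes tolerate up to {max_byzantine_faults(n)} Byzantine nodes")
# 4 nodes tolerate up to 1, 10 up to 3, 100 up to 33
```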

Distributed architectures can increase a network’s fault tolerance by increasing the number of nodes that would need to be faulty in order to compromise the whole network, but distribution often comes at a cost. For example, geographical distribution might increase resilience to environmental catastrophes, so that a power outage affecting nodes in one place does not affect nodes elsewhere, keeping the network running. At the same time, such geographical distribution introduces vulnerabilities in the connectivity of the network.

Protocols, such as the TCP/IP protocol of the internet, or something more complex like the Bitcoin protocol, do not define and regulate all aspects and all possible behaviour of and within the system. The overall properties of a distributed system arise, in theory, from emergent behaviour caused by local decisions, made by local nodes, based only on local information. Therefore, some failures, which emerge from the autonomous actions of individual nodes, are systemic in nature. Systemic failures do not have a solution within the network, nor in the protocol rules that govern its technical aspects, and they require some form of external control, such as institutional governance, to address. For example, the maintenance and development of the protocols which govern distributed networks are often controlled by centralised, hierarchical, closed, or charismatic forms of authority, such as bureaucratic organisations, engineering meritocracies, or charismatic leaders (O’Neil, 2014). Consequently, changes and upgrades to the protocol layer, and the handling of issues that lie outside of the purview of the existing protocol, are governed by other means (Katzenbach & Ulbricht, 2019), for example decision-making bodies, monitoring and enforcement mechanisms, legal certainty through regulation, etc.

In the absence of such governance frameworks, distributed networks based on voluntary participation may face a split, or ‘forking’, and the establishment of a new network under new rules. Successful forks are rare: the setup of a new network is always costly, and carries the risk of either or both networks failing, the former because of bad rules, the latter because of a lack of sufficient support (Azouvi et al., 2018). More complex external governance mechanisms introduce trade-offs and conflicts between different layers of the network, their degree of decentralisation, and effectiveness. Despite various efforts, to this date no distributed network has been able to develop a distributed and effective form of governance which is able to seamlessly blend technical and human aspects of rulemaking, conflict resolution and enforcement, and/or remove the messy human components from the governance process (Azouvi et al., 2018; De Filippi, 2019; De Filippi & Loveluck, 2016; Méadel et al., 2017; Reijers et al., 2016).

Nevertheless, due to the success of practical distributed applications like P2P systems (Buford et al., 2009), blockchains and distributed ledgers (Buterin, 2013; Nakamoto, 2008; Narayanan et al., 2016), and secure multiparty computation (Cramer et al., 2015; Yao, 1982), radical decentralisation has once again been proposed in other domains (Bodó & Giannopoulou, 2019).

3. Drivers for decentralisation across disciplines

In the previous section, we gave a technical definition of decentralisation, and mentioned a number of decentralised and distributed technological applications. In this section, we discuss the rationales behind choosing more decentralised technical architectures over more centralised ones. We start with the concerns of computer science and engineering. We then look at how these fundamentally technical considerations are intertwined with particular social, political, or economic considerations, and lend support to decentralised, or even fully distributed, social, political, and economic structures.

3.1 Information security

There are a number of motivations for decentralisation that stem from information security engineering. Radically decentralised network topologies are understood to be more resilient because (as discussed above) there is no ‘central point of failure’, meaning the network as a whole does not depend on any single node. If one node is compromised, the rest of the network will continue to function as intended. Distributed network topologies can be used to achieve information security properties such as privacy, censorship resistance, availability, and information integrity. In centralised network topologies traffic has to run through a specific server, which grants those who control and have access to that server significant powers to observe, manipulate, or cut off traffic (Troncoso et al., 2017). Distribution can enhance privacy and censorship resilience by ensuring that data are not held and controlled by a third party (Diaz et al., 2008; Troncoso et al., 2020). Some decentralised topologies can be used in a separation strategy (Hoepman, 2014), one of several privacy-enhancing design strategies, by processing data locally on end-user devices, or by splitting data across multiple nodes so that they can only be reassembled by the intended recipient. Higher degrees of decentralisation also make it more expensive to observe network traffic, because more nodes would need to be monitored. Decentralisation can therefore contribute towards anonymous or pseudonymous communication (Meiklejohn et al., 2013). Decentralisation can also be used as a strategy to improve availability: with data replicated across multiple nodes rather than held on one server, the data can remain available even if a few nodes are offline. Finally, decentralisation can be used as a technique for ensuring the integrity and security of information, because the information is held across, authenticated by, and routed through multiple different devices.
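As a concrete illustration of the splitting idea behind the separation strategy, the sketch below (our example, using simple XOR-based n-of-n secret splitting rather than any particular deployed protocol) divides data into shares that can be stored on different nodes: any subset short of all the shares reveals nothing, and only a recipient holding every share can reconstruct the data.

```python
# Illustrative XOR-based n-of-n secret splitting: each share alone is
# indistinguishable from random noise; XOR-ing all shares recovers the data.
import secrets

def split(data: bytes, n: int) -> list[bytes]:
    """Split data into n shares to be stored on n different nodes."""
    shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    last = bytearray(data)
    for share in shares:                 # fold every random share into the last one
        for i, byte in enumerate(share):
            last[i] ^= byte
    return shares + [bytes(last)]

def reconstruct(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the original data."""
    out = bytearray(len(shares[0]))
    for share in shares:
        for i, byte in enumerate(share):
            out[i] ^= byte
    return bytes(out)

shares = split(b"meet at noon", 3)       # e.g., one share per storage node
assert reconstruct(shares) == b"meet at noon"
```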

Several of these security design benefits of decentralised and distributed topologies increase as the size of the network increases: when there are more nodes, it becomes increasingly difficult for anyone to control enough of them to attack the network. Information security engineering, in the context of decentralised and distributed networks, therefore often focuses on thresholds of tolerance, namely how many nodes in the network would have to ‘collude’ in order to attack it (as in Byzantine fault tolerance). These potential benefits of decentralisation, however, all depend on the specific system design and the degree of decentralisation. As argued by Troncoso et al. (2017), if decentralisation is done naively, it might multiply the attack vectors. And depending on the design, distributed architectures in particular might also be worse in terms of availability and information integrity: with no central server, there might not be any clear oversight or guarantee that a given file will be available. For example, the DAT protocol (https://www.datprotocol.com/) for distributed networks has the benefit of being local-first, granting significant control and information security possibilities, as content is served directly from people’s individual devices. But if a person is offline, the content they are serving will be unavailable unless otherwise arranged. This can be a feature or a bug, depending on the system’s design and security requirements. Distributed systems therefore require establishing different patterns of interaction and usage from those established for traditional client-server networks (Wagner et al., 2020), or even for many decentralised architectures.

3.2 Power

The relationship between technical and non-technical discussions on the merits of decentralisation has been one of mutual inspiration. History, social and political theories, techno-social imaginaries, and ideologies have informed those who designed decentralised technology architectures (Brunton, 2019; Golumbia, 2016; Roszak, 1969; Swartz, 2018; Turner, 2006), and in turn, decentralised and distributed technical systems have been proposed as templates for alternative modes of social, economic, and political organisation, hoping to address concerns of political oppression, economic inequality, or the existing power asymmetries of social interactions (Brekke, 2020; Erickson et al., 2015; Reijers & Coeckelbergh, 2018). Some P2P file-sharing service operators, such as those of The Pirate Bay, based their anti-copyright struggle on the censorship-resistant nature of the BitTorrent protocol.

Nakamoto, the inventor(s) of bitcoin, had the explicit aim to remove central and commercial banks from their intermediary position in financial transactions. The Tor network is designed (in part) to provide an escape route from online censorship and state surveillance.

In its most general sense, the centralisation/decentralisation dichotomy is often framed in terms of power asymmetries. The greatest concern of libertarian and anarchist thought is the abuse of political or economic power, born from specific historic conditions and experiences (Boaz & Boaz, 2015; Graeber, 2004). These concerns have had significant influence on the histories and designs of decentralised and distributed technical systems (Brekke, 2020). Some distributed networks are designed to consciously oppose existing power structures, such as P2P file sharing in the case of copyright, or Tor in the case of censorship. Other distributed forms of technical and social organisation (such as wireless mesh networks, digital cooperatives, open source software development, P2P resource sharing networks, distributed autonomous organisations and various forms of crowdfunding) offer alternatives to, or contest, existing modes of social, political and economic collaboration in a non-confrontational way (Yeung, 2019). Distributed technologies thus fit into a larger history of struggles (Foucault, 1979; Said, 1986; J. C. Scott, 1985), in which power is continuously limited, contested, and negated through various (in this case technosocial) counter-practices, conflicts, and escapist utopias (J. C. Scott, 1990).

That being said, the claim of many decentralisation evangelists (such as some blockchain maximalists, and techno-libertarians), that decentralised, or distributed, nonhierarchical forms of organisation can, or indeed will, abolish existing power structures within society seems to be overly optimistic. For one, distributed networks are rarely immune to the dynamics that create positions of power (i.e., the power to exclude, but also to set rules and mediate disputes) in other forms of organisation. Even if a network starts with a distributed design and a corresponding protocol, power can accumulate in both technical and social dimensions. Technical nodes can enjoy external advantages (such as cheaper power in the bitcoin network, or better connectivity), which can be reinforced if the protocol favours the producers in a network at the expense of consumers. For example, in the BitTorrent and Tor networks preferential attachment rules favour nodes with higher bandwidth, which provide connectivity to transacting bandwidth-consumers in the network. In blockchain networks miners (producers of security) are rewarded by transacting parties (consumers of the service). These dynamics shift resources and power to producers. Some of these flows can be addressed on the protocol level, when, for example, blockchain projects choose ASIC-resistant algorithms to counter the accumulation of power by those who can use purpose-built mining hardware. But such protocol changes only highlight the power issues in the social dimension of decentralised networks. At a minimum, changes to the protocol require some framework of coordination and decision making, which can range from charismatic leadership, via meritocratic autocracy, to direct democracy. Even in the last case, beyond a certain scale, power tends to accumulate in the hands of those who have enough reputation, social capital, time, and other resources to participate in the governance process. In summary, particular network topologies (centralised or distributed) rarely fully capture how power actually flows through and against those templates. In fact, power relations are multidimensional and polycentric (Foucault, 2009). Both in complex social settings and in apparently simple technical systems, multiple forms and sources of power intersect with each other.

3.3 Politics

Decentralisation and distribution are often assumed to remedy the potential abuse of power by coercive intermediaries through disintermediation. Amongst some communities, technical disintermediation is also thought to be a way to retire from politics: the assumption is that if distributed networks ensure non-coercive coordination, then the need for political governance is also resolved. The term ‘disintermediation’ invokes institutional theory, mostly to point to the costs, and rarely to the benefits, of having privileged intermediaries in a network of relations. The removal of intermediaries who can control, censor, tax, limit, or boost particular social and economic interactions is believed to automatically lead to more freedom for transacting parties and for the types of transactions (Berners-Lee, 2019). However, central intermediating actors emerged because they deliver value: they lower transaction costs, coordinate action, define and enforce democratically agreed-upon rules, correct failures, and limit negative and enhance positive externalities, in ways which individual transactions cannot (Arrow, 1969; Coase, 1937). In contrast, parties in a distributed arrangement face exponentially growing transaction and coordination costs (Langlois & Garzarelli, 2008) and take-it-or-leave-it decisions, have a limited capacity to negotiate the general rules of exchange, may create substantial negative externalities, and often end up with new, more insidious and invisible forms of intermediation (Freeman, 1972). This is the main reason why, in many instances (in the economy, the political system, or computer networks), decentralised or even centralised, rather than distributed, designs prevail.

Disintermediation by distributed technologies is in fact a form of reintermediation: the replacement of one form of intermediary with another. Decentralised networks are institutional frameworks which enable and facilitate transactions under their own particular rules, and come with their own particular costs and benefits regarding the scope, depth and trustworthiness of the services they provide (Bodó, 2020). With time, many distributed technical systems have developed more pronounced, and often more centralised, institutional functions and capacities. Closed torrent trackers provide benefits the distributed P2P BitTorrent file-sharing network did not originally account for, such as quality assurance, rules, and long-tail availability (Bodó, 2014). Blockchain networks implement formalised, and not fully distributed, governance institutions, such as rule-setting, conflict resolution, and arbitration, on top of the distributed ledger protocol (Europechain, 2020). Despite these efforts, we are yet to see all functions of institutional intermediaries implemented in a fully distributed manner.

3.4 Economics

The development of distributed and decentralised networks is historically intertwined with the discipline of economics: the Austrian school economist Hayek, for example, conceived of markets as an ‘information processor’, a decentralised mechanism for coordinating resources and needs (Mirowski & Nik-Khah, 2017). Distributed network models therefore exhibit many of the same assumptions as market economics, namely: autonomous rational agents interact solely via a perfectly competitive market; participants cannot unilaterally alter the rules; resources are mobile; network exit or entry has no cost. Historically, socially and politically, these assumptions have been powerful in sustaining an ideology of the market as a non-coercive coordination mechanism. However, markets require significant legal and ideological enforcement to function in practice, often with substantial systemic coercion. As economic historians have observed, every participant in a competitive market tries to be a monopolist (Braudel, 1992; De Landa, 1996). The exploitation of competitive edges leads to the recentralisation of markets into monopolies. Without governance mechanisms in place, nodes may collude, people may lie to each other, markets can be rigged, and there can be significant costs to entering and exiting markets.

That being said, there have been several attempts to experiment with new economic models based on distributed networks. In hacker and free/open-source software cultures, information networks would enable information sharing at almost no cost, such that information, and in particular code, could form a common resource pool. Such open, free knowledge commons would then enable distributed logics of development and wealth creation. Many such efforts, however, were subsequently capitalised on by firms, indeed contributing to technologies that have since become hugely centralised (Bodó, 2019; O’Neil et al., 2020; Szulik, 2018).

Modularity has also been a key concept in the analysis of how decentralised and distributed networks enable new forms of collaborative production (Benkler, 2006). Modularity in technical systems means that each module is responsible for a particular task, invoking the help of other modules (to perform other tasks) through a clearly defined application programming interface (API). Modules can be developed more or less independently of each other. A classic example of such modules are the libraries that are part of an operating system or software development kit. The growth of the web gave rise to the development of web services, which can also be seen as remote modules: instead of a single computer running the whole system including all of its modules, the system now depends on the remote execution of most of its tasks by some other server. Many client-server systems (which are the epitome of centralised networks) grew from this model. We should not confuse modularity with distribution or decentralisation, though. In distributed networks, the idea is not so much to split a larger task into smaller and different subtasks, each performed by a different module, but rather to divide the same task over many nodes that all cooperate and coordinate to execute it. Loosely speaking, distribution is concerned with sharing resources (like storage space or computation power), and only rarely with distributing tasks.
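The contrast can be made concrete in a few lines of code (an illustrative sketch of ours, not an example from the paper): modularity splits a job into different subtasks behind interfaces, while distribution has many nodes perform the same task, here replicated storage, so the result survives individual failures.

```python
# Modularity: distinct modules, each responsible for a *different* subtask,
# composed through their interfaces.
def parse(raw: str) -> list[str]:
    return raw.split(",")

def validate(fields: list[str]) -> list[str]:
    return [f for f in fields if f]      # drop empty fields

def store(fields: list[str], db: list[str]) -> None:
    db.extend(fields)

# Distribution: the *same* storage task performed by several cooperating
# nodes, so the data remains available if any single node fails.
nodes = [[] for _ in range(3)]           # three independent storage nodes

def replicated_store(item: str, nodes: list[list[str]]) -> None:
    for node in nodes:                   # every node does the same work
        node.append(item)

db: list[str] = []
store(validate(parse("a,,b")), db)       # modular pipeline of different tasks
replicated_store("a", nodes)             # the same task, spread over nodes
```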

In the social and economic domain, the breakdown of the production process into small chunks requiring little expertise is a Fordist/Taylorist invention (Beniger, 1986). The commons-based peer production framework proposed that such modular labour can also take place voluntarily, in the service of unowned, or communally owned and used, resources, outside of the traditional frameworks of the firm and the market (Benkler, 2006; O’Neil, 2015). The peer production logic combines the division of labour and expertise with the redundancy and collaboration of distributed networks. Knowledge would be shared across networks, as a commons, with production taking place in a distributed manner.

Software libraries, maintained as independent open-source projects, are a prime example of a modular technical system produced by a modular organisation of labour, where many independent developers who do not necessarily know or even trust each other contribute together to an overarching system. Yet, as O’Neil (2014) notes, many factors, including the scale of such modular production networks, have an impact on whether modularity, as an organising principle, is effective without formal, often centralised, authoritative structures of governance.

More recently, blockchain and distributed ledger technologies have given rise to new experimentation with markets and economic ideas, whereby hackers and information security engineers have been borrowing concepts from different schools of economics to serve the aims of organising and funding distributed networks, motivated by anti-authoritarian ideologies across the political spectrum (Atzori, 2015; Brekke, 2020; Davidson et al., 2018; B. Scott, 2015; Swartz, 2018). These efforts range from cryptoeconomics, whereby economic concepts are used to achieve information security properties (Brekke & Alsindi, 2021; Buterin, 2017; Zamfir, 2015), to token economics (Voshmgir, 2019), to new ways of organising commons (Rozas et al., 2018), and ideas such as bonding curves (Balasanov, 2018; Titcomb, 2020). In contrast to the decentralised markets of Hayek, market experiments on such distributed ledger technologies (as indeed in the market design efforts of the economics discipline, see Frankel et al. (2019); Ossandón (2019)) have lately been perceived less as a perfect, universal coordination mechanism and more in terms of social engineering to achieve certain behavioural outcomes.
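To give one concrete flavour of these token-economic mechanisms, here is a sketch of a simple linear bonding curve (our illustration; the linear form and the parameter K are arbitrary choices): the token price is a deterministic function of the circulating supply, and minting new tokens costs the area under the price curve between the old and new supply.

```python
# Sketch of a linear bonding curve p(s) = K * s: price rises with supply,
# and minting `amount` tokens from supply s costs the integral of p between
# s and s + amount. K and the linear shape are illustrative assumptions.

K = 0.01  # slope of the price curve

def price(supply: float) -> float:
    """Spot price of the token at the current supply."""
    return K * supply

def mint_cost(supply: float, amount: float) -> float:
    """Reserve currency needed to mint `amount` new tokens."""
    return K / 2 * ((supply + amount) ** 2 - supply ** 2)

print(price(1000))           # 10.0   -> spot price at a supply of 1,000
print(mint_cost(1000, 100))  # 1050.0 -> cost of minting the next 100 tokens
```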

4. A critical look at decentralisation

In the context of network technologies, various discourses often conflate three aspects of decentralisation. First, decentralisation is a principle for design and engineering which can be used as a means to achieve certain properties. Second, it can be an aim, where a given system is intended to have decentralising effects, for example decentralising the load of computation. Finally, it can also be a claim, whereby a given system is designated as a decentralised system, but does not always live up to that in its deployment. It is important to distinguish between these.

The dangers of confusing design principles, aims, and claims become pronounced when abstract network topologies start to serve as templates for social, economic, or political modes of organisation (Schneider, 2019). The political, social, or economic aims of decentralisation (more autonomy, reduction of power asymmetries, elimination of market monopolies, direct involvement in decision making, solidarity among members of voluntary associations) are laudable goals in and of themselves. However, such aspirations should not be too easily conflated with particular engineering solutions. A decentralised network topology might not produce decentralising social and political effects and might not even be particularly decentralised in its technical deployment. For example, a cryptocurrency system might comprise a distributed network of nodes, while producing highly centralised effects in terms of wealth or other resources, or a protocol might be designed and promoted as distributed but then only be run on a handful of machines owned by the same company.

Full decentralisation of social, economic, or political relations is difficult, and relies heavily on the idea of an autonomous individual or rational economic agent that is willing and able to participate. Consequently, many distributed applications and services rarely consider social coordination functions. When they do, these are designed as abstract mechanisms such as vote casting, delegation and aggregation. More sophisticated governance mechanisms (with experiments in incorporating reputation scores, tokenised accounting of contributions, and dispute resolution) often remain underutilised, because participation in such mechanisms is simply too costly for the individual. This often creates a certain structurelessness in the social and political dimension. Indeed, a seminal text by Jo Freeman titled The Tyranny of Structurelessness (1972), originally written in the context of horizontal organising in feminist political collectives, has often been quoted in online forums discussing power dynamics in blockchain communities. As De Filippi and Loveluck (2016) have pointed out, seemingly horizontal, unstructured organisations risk not recognising and preventing hidden hierarchies and centralised power dynamics.

The more general issue at hand relates to the relative costs and benefits of distributed versus centralised networks. Nodes in distributed networks have to provide technical and sometimes human resources to the network, such as running a secure node with sufficient bandwidth, computing, or storage capacity, or providing knowledge, engagement, and participation in governance processes. Centralisation allows individual nodes and persons to offload some of these burdens while still enjoying the benefits of the network. The potential cost of failure of the central node may be high, but as long as the perceived risk of failure is low, centralisation may be a reasonable choice; in most cases, the cost-benefit analysis points towards more centralised architectures. In contrast, the individual costs of being in a distributed network are relatively high, while the benefits can be very narrow and specific. Take, for example, the privacy-protecting Tor network. Tor is able to provide reasonable levels of privacy at the cost of using a distributed network to route messages at lower speed and with higher latency. These costs are apparently too large for everyday users, who are willing to settle for lower levels of privacy. On the other hand, for political dissidents who fear government retribution, journalists whose integrity depends on their ability to protect their sources, and other groups for whom strong privacy is essential, the cost-benefit analysis justifies the higher costs of using this distributed network.

Last but not least, distributed networks often come with a substantial degree of openness. The goal of censorship resistance limits the conditions of joining a network to the acceptance of some basic rules, protocols, or standards. The boundaries of a distributed network are therefore porous. The inability of individual network members to police the network’s boundaries, and distributed networks’ strategy of tolerating, rather than policing, potential bad behaviour, leave these networks vulnerable to both tragedies of the commons: the under-provision of critical resources, and the overuse or capture of the value that the network provides. For example, in open P2P file-sharing networks individual downloaders have little incentive to keep uploading content after they have downloaded it. This limitation leaves long-tail content inaccessible without further rules. Closed, membership-only torrent trackers emerged to address this problem with strict accounting of uploads and downloads, and with the establishment of strong community norms that favour cooperative behaviour, even without strong enforcement mechanisms in place (Bodó, 2014; Kash et al., 2012).

Overuse as a problem emerges when selfish actors can capture the value provided by the network. For example, selfish users may clog the limited capacity of the Tor network with video streaming or file-sharing. Value overuse can also happen if the network resource is non-competitive: the legitimacy of both the BitTorrent and bitcoin networks has been profoundly shaped by their illicit uses, and illegal uses may capture the overall value of the network and deny it to legitimate uses.

5. Conclusion

The currently fashionable web 3.0 discourse tends to frame decentralisation as a panacea for a swath of social, economic, and political woes. Indeed, without distributed protocols we would not have the internet, nor a number of highly consequential digital technologies, from P2P networks to distributed computing infrastructures. We argue in this article that decentralised and distributed networks are exactly that: network topologies, with a number of predominantly technical properties.

Though these network topologies may have many things in common with particular social, political, and economic forms of organisation, their relationship is far from straightforward. A distributed network does not automatically yield an egalitarian, equitable or just social, economic, and political landscape. Therefore, we should not expect, and must not limit, the role of technology to be the replacement of existing centralised social structures, however far those structures seem to fall from these laudable, utopian ideals.

Instead, we may think about distributed networks as one element in a complex, interdependent framework of how we govern ourselves. In this framework, there are indeed badly organised centralised structures which abuse the power they have, but we also have time-tested ways to ensure that centralised institutions are trustworthy (Sztompka, 1999). Likewise, some distributed systems work so well that they have already sunk into the background as infrastructure. Meanwhile, others have developed hybrid operational forms in which distributed elements and more hierarchical forms of collaboration mutually support and reinforce each other, as in the case of Wikipedia. Still others, like most blockchain projects, are still searching for governance logics from which more centralised elements can be excluded. The fact that such efforts fail says little about whether failure is the inevitable conclusion or just a precondition of the success of these efforts (Daub, 2020).

In any case, it seems likely that the revolution will not be radically decentralised (O’Dwyer, 2015). Distributed techno-social networks may position themselves as antagonists of the current powers that be, and they may even be successful in maintaining politically, economically stable technological autonomous zones, but their true potential probably lies elsewhere.

No social, political, economic system is truly monolithic. Usually, we rely on a multitude of coexisting and cooperating systems to facilitate the same thing. Just as the robustness of a distributed system comes from the multiplication of the means through which something takes place, so does the robustness of our complex social, economic organisation depend on having multiple different systems to achieve similar goals.

We warn against a proselytising zeal with regard to distribution as an overarching organisational template. A better understanding of the power and limitations of decentralisation is necessary, and would allow for hybrid approaches and less simplistic assumptions about what decentralisation can or cannot achieve.

Published: 16 June 2021

Citation: Bodó, B., Brekke, J. K., & Hoepman, J.-H. (2021). Decentralisation: a multidisciplinary perspective. Internet Policy Review, 10(2). https://doi.org/10.14763/2021.2.1563
