
In Conversation with Clemens Apprich on ‘Decolonizing Technology’

About media technology and its political implications in terms of identity politics and the importance of critical thinking in digital cultures. What does it mean to ‘decolonize’ technology, and how can it be that technology generates a ‘colonizing’ practice in the first place?

Clemens Apprich is head of the department of Media Theory, which co-hosts AIL’s Talk Series on the topic of ‘Decolonizing Technology’.


Image © Universität für angewandte Kunst / Adam Berry, transmediale

Clemens Apprich

is full professor and head of the department of Media Theory at Angewandte. His current research focuses on filter algorithms and their use in data analysis procedures as well as machine learning methods. He is the author of Technotopia: A Media Genealogy of Net Cultures (Rowman & Littlefield International, 2017) and co-editor, together with Hito Steyerl, Florian Cramer and Wendy Chun, of Pattern Discrimination (University of Minnesota Press, 2018).

Elisabeth Falkensteiner (Curator and Co-Head of AIL) talked to Clemens Apprich about media technology and its political implications in terms of identity politics and the importance of critical thinking in digital cultures.

Elisabeth Falkensteiner (EF)

In recent times, the political vision – or, rather, the naïve utopian dream – of a fundamental democratization of the internet has been dulled, just as the hope has faded that artificial intelligence and machine learning will develop ‘neutral’ digital worlds.

In fact, new technologies are even more likely to entrench inequality, reinforce racist and sexist tendencies and promote reactionary identity politics – which will have consequences in virtual space as well as in real life. After all, most 90s online cultures had their roots in activism and subcultures.

Where did we go wrong?

Clemens Apprich (CA)

Many technologies that constitute what we call the online world were created in the 90s – at a time when the future of the internet was still undecided and the subject of passionate debate. The mass distribution of so-called web 2.0 applications, building on the initial internet infrastructure and known as ‘social media’ platforms today, has led to a shift in the balance of power: away from public, social and artistic positions towards commercial interests. The use of these new media platforms has also led to a staggering increase in digital data. However, an enormous amount of computational power is required to generate ‘valuable’ information from this data, something only global corporations can afford these days.

And these automated pattern recognition systems are anything but impartial. On the contrary, algorithms learn from our data and therefore adopt everything that is part of it, including our racist, sexist, classist and ableist prejudices as well as heteronormative concepts. We are thus confronted with self-fulfilling prophecies, which are then relabeled as objective decision-making processes.

One of these prophecies is the assumption that any proximity between data points is ‘significant’ as such. This homophilic premise, which translates as ‘birds of a feather flock together’, is rooted in the US history of segregated housing.

Wendy Hui Kyong Chun, who will give a talk as part of our lecture series, reminds us in her new book that it is precisely this principle of segregation that continues to govern our online world.

Every time Amazon, Facebook, TikTok or Tinder recommend new products, friends, content or lovers, we come across the homophilic clusters that fuel today’s toxic online climate. However, instead of being inherent to the technology, the issue of data bias and algorithmic discrimination concerns the logics these technologies are made to implement and establish – logics that stem from retrograde, not to say reactionary, identity politics, and from a cancel culture actually deserving that name, which blocks all non-identitarian content and eventually filters out everything we prefer not to see.

EF

In summary, one could say that basic technologies are driven by a capitalist logic and hegemonic politics. Which current alternative methods and systems promote anti-discriminatory behavior?

Is there a way out of this mess?

CA

At the heart of machine learning, which mostly catches our eye in the form of algorithmic recommender systems, is the simple logic of sifting through big chunks of data and turning them into information. This requires a particular logic or pattern that reflects our social behavior. After all, it is not the technologies that are male and white but the ideas they are fed. And this might be an opening for us: if we think about technologies as being a part of us, as reflections of ourselves, then we can finally engage with them properly.

Hito Steyerl’s work is certainly a prime example in this regard: she neither concerns herself with technical solutions nor simply downplays the technological conditions we live in; instead she focuses on the beliefs, myths and concrete ideological interests that are portrayed and adopted by these technologies.

Her narrative and visual language reject the glossy aesthetics of most digital art. Instead of reviewing the latest computer hardware or software, she presents ‘poor images’ that stir and enable discussions about digital technologies – ranging from social media and virtual worlds to recent AI applications.

Obviously, no technological innovation will be able to solve our social problems.

You simply cannot fight racism by using better data or algorithms. It is a political fight which requires political organization; online media can only ever play a supportive role here. This also applies to any critique of techno-capitalism, which needs to take place in a political context.

Big digital corporations must play by the rules of society – especially in terms of taxation and collective data management (‘data trusts’) – and need internal reorganization. And the solutions to meet these challenges are as old as the algorithms themselves. This can be best illustrated by the unionization of Amazon workers on Staten Island, New York, an event taking place while their boss, Jeff Bezos, was busy exploring outer space. One of the richest men on earth tries to escape this planet, while his workers unionize and fight for its and their survival. This is a poignant, almost cartoonish depiction of our current situation.

EF

Can art or media art and interdisciplinary practices assist in decolonizing technology? What role can art play?

CA

I think we need to be careful when using context-specific terminology and concepts – especially when they are derived from certain historical contexts that range from the Haitian Revolution to the struggles for independence after the Second World War.

After all, the ‘decolonization of technology’ seems to be a contradictory term, not least because decolonization describes the retrieval of stolen land, whereas in the case of technology our concern is with understanding our own cultural practices.

In an extended sense, though, we might speak of a decentralization of this knowledge by analyzing the – oftentimes violent – origins of technology.

There are several good examples, such as Luiza Prado de O. Martins’ work, which focuses on alternative forms of knowledge and modern technology’s divergent points of origin – covering not only digital media but also other areas of interest like birth control.

And then there are the artworks of Simon Denny, which render visible and thereby open to discussion the material conditions of our online world. His project ‘Mine’, for example, illustrates the nexus between the mining of data and of minerals – since both are integral to the functioning of our digital culture.

The way artistic practices can engage with the production processes of our digital culture, in particular, equips them to transcend current debates – which are mostly concerned with better data or better models – and to create new technological visions.

In my opinion, Ramon Amaro, who will give a talk in early December, together with Tiara Roxanne, is one of few people who have truly accepted this challenge. Referencing theories by Frantz Fanon and Gilbert Simondon, he demands a radical break in the relationship with our technologies.

With a background in mechanical engineering as well as art history, he interprets technological systems, and especially AI, as reflections of ourselves with all the problems that come with it. His upcoming book, The Black Technical Object: On Machine Learning and the Aspiration of Black Being, focuses on the highly problematic and mostly racist history and logic of modern statistics, which form the basis of today’s machine learning systems. His demands therefore do not stop at improving these processes but press for an epistemological break, so as to be able to create something new.

EF

When we look at social media and its news feed algorithms, which not only fuel the attention economy of their users but also divide them into separate bubbles, we can see it losing its utopian and emancipatory potential. The pandemic and the surrounding debate have instead underlined how social media contributes to social divisiveness. Still, what were the initial principles and ideas of social networks?

CA

In the early 2000s, the media theorist Tiziana Terranova already emphasized the significance of ‘free labour’ in digital societies. Her analysis has become even more relevant with social media platforms: on and with these platforms we produce a digital economy whose profits benefit only a few instead of the many. Sadly, we do this in the belief that we are being ‘authentic’: everyone is convinced they are being creative, while in fact being caught in the behavioral patterns and restrictions imposed by tech companies. Bernard Stiegler once called this the ‘systemic stupidity’ of our digital culture.

However, what I like about Tiziana Terranova’s work is that she never tires of stressing the ‘social’ in social media. As a matter of fact, social media algorithms are not interested in the individual person – their holiday photos or music collections – but rather in their relationships to other individuals in a given network. This ever-changing trans-individuality is an important, yet underrated, fact which we need to take into account when critiquing our capitalist digital society.

Generally speaking, technologies are no more than crystallizations of our social relations. For this reason, we also need to find new ways to engage with them. The theoretical field of Disability Studies could be an interesting point of departure, as it traditionally focuses on scrutinizing what we perceive as ‘normal’ in our culture.

As part of our lecture series, Katharina Klappheck will talk about machine learning processes in terms of disability and the unruly. And in March, Tung-Hui Hu will speak to us in a similar way about non-Western concepts of digital cultures in his lecture on analogue internet culture.

If we want to counter the divisive potential of these technologies, we should not – even in the face of justified criticism of our excessively mediated society – submit to any kind of cultural pessimism or technophobia. For there is always a playful way to deal with our neuroses and technological dependencies.

Thank you, Clemens Apprich, for the interview!