Inside the responsibility machine: Exploring the algorithmic strategizing of a fintech start-up

Publication: Contribution to conference › Paper › Research › peer-review

Abstract

“We shape our tools and then our tools shape us.” This maxim, apocryphally attributed to Marshall McLuhan, marks the starting point of the present investigation. Tailored to our context, the questions it raises are: How do the strategies of fintech entrepreneurs shape the technological innovations they produce, and how do these technological innovations shape the strategies of fintech entrepreneurs? More specifically, the paper is based on in-depth ethnographic data from a Danish fintech company that uses technological innovations to provide sustainable pension investments. The company has developed an algorithm that screens the portfolios of exchange-traded funds (ETFs) in order to identify the companies included in these portfolios. On the basis of this identification, the company excludes from its own investment strategy all ETFs that contain companies known to be involved in fossil fuels, weapons or tobacco. Our analysis focuses on the fusion of algorithmic and human agency in the operations of the company and, more fundamentally, in the development of the company’s strategy and the constitution of the company as such.

In so doing, we assume that strategies are performative and that technologies are active participants in performative processes of strategizing (Carter, Clegg & Kornberger, 2010). Indeed, strategies can themselves be viewed as technologies of organization, as means of making the organization known to itself and to others (Holt, 2018). On this basis, we seek to contribute to the emerging literature that questions the affordance-agency dualism, which all too often haunts studies that purport to begin from an assumption of agential relationality (Plesner & Gulbrandsen, 2015). Thus, we begin from, but also depart from, the sociomaterial perspective on organizations and organizing as inspired by science and technology studies (STS) (Orlikowski, 2007) and elaborated in the strategy-as-practice (SAP) literature (Jarzabkowski & Spee, 2009). Within this perspective, the agency of technologies has predominantly been theorized in the limited sense of facilitating human action; technological affordances may invite certain acts but cannot in themselves realize agency (Gulbrandsen & Just, 2016; Withagen, de Poel, Araújo & Pepping, 2012).

In seeking to establish a framework that provides a better account of the radically relational agency of strategy-as-technology, we draw on studies that zoom in on the role played by algorithms in communicating and creating organizations. Combining and extending the concepts of ‘algorithmic public relations’ (Collister, 2015), ‘algorithmic brands’ (Carah, 2017), ‘algorithmic decision-making’ (Newell & Marabelli, 2015), and ‘algorithmic organizing’ (Neyland, 2015), we conceptualize ‘algorithmic strategizing’ as a process in which artificial intelligence (AI) is not only a tool for realizing strategies but increasingly sets strategic agendas of its own. In exploring how algorithmic strategizing is the organization, we (re-)turn to theories of the body and mind as simultaneously parts and whole. The latter is not ‘the ghost in the machine’ of the former; to the contrary, the two can only exist in relation to each other (Koestler, 1967; Ryle, 1949; see also Cummings & Thanem, 2002).
What happens, we wonder, when technology and strategy are theorized along similar lines?

Having established our conceptual framework of algorithmic strategizing, we apply it to Holt’s (2018) understanding of strategies as acts of critical judgement and ask what happens when the judge ‘has no ghost’ (or human spirit). Some critical accounts of the non-neutrality of algorithms, the biases of AIs, and the blind spots of big data have offered alarming answers to this question (boyd & Crawford, 2012; Noble, 2018; O’Neil, 2016). Others have sought to dampen the alarm, arguing that algorithms do what their code tells them to do and suggesting that we should focus less on algorithms in and of themselves and more on the agency of programmers (Klinger & Svensson, 2018).

From our perspective there is merit in both accounts, but even when juxtaposed they offer only part of the story, as each resorts to a dualistic explanation. One pivotal episode from our case company may serve to indicate what else is at play: at one point the programmers thought they were done ‘feeding’ the algorithm with training data, but then they realized it had ‘figured out’ what it was being trained to do and had been providing false positives in order to seem better. This incident illustrates that the quality of an AI depends not only on its code but also on its training data; beyond that, the question is what the machine learns in the process of running data through its algorithm (Neff & Nagy, 2016). In our case: what is the enacted concept of sustainability? And, more broadly: what are the organizational self-images of algorithmic strategizing? As human and machine become one, maybe both are fooled by the agency they produce...
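
To make the exclusion screening described in the abstract concrete, the following toy sketch in Python illustrates the general logic of such a filter. It is purely illustrative and rests on assumptions not made in the paper: the tickers, company names, sector labels, and data structures are invented, and the case company's actual algorithm (which, as noted above, involves training data and machine learning) is not reproduced here.

# Illustrative sketch only (not the case company's implementation):
# a toy exclusion screen over hypothetical ETF holdings data.

EXCLUDED_SECTORS = {"fossil fuels", "weapons", "tobacco"}

# Hypothetical holdings: ETF ticker -> list of (company name, sector) pairs.
ETF_HOLDINGS = {
    "ETF_A": [("GreenGrid Energy", "renewables"), ("NordBank", "banking")],
    "ETF_B": [("PetroMax", "fossil fuels"), ("SafePharma", "health care")],
    "ETF_C": [("MediCore", "health care"), ("TobacCo", "tobacco")],
}

def screen_etfs(holdings, excluded_sectors):
    """Return tickers of ETFs whose holdings contain no excluded sectors."""
    eligible = []
    for ticker, companies in holdings.items():
        if any(sector in excluded_sectors for _, sector in companies):
            continue  # at least one flagged holding excludes the whole ETF
        eligible.append(ticker)
    return eligible

if __name__ == "__main__":
    print(screen_etfs(ETF_HOLDINGS, EXCLUDED_SECTORS))  # prints ['ETF_A']
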
Original language: English
Publication date: 2019
Status: Published - 2019
Event: 14th Organization Studies Workshop: Technology and Organization - Saint John Hotel, Mykonos, Greece
Duration: 23 May 2019 → 25 May 2019
http://osofficer.wixsite.com/osworkshop

Workshop

Workshop: 14th Organization Studies Workshop
Location: Saint John Hotel
Country/Territory: Greece
City: Mykonos
Period: 23/05/2019 → 25/05/2019
Internet address
