Automating Oppression: Is Mechanical Objectivity to Blame?

Dylan Kawende FRSA
Apr 3, 2019

Composite Photo of Me Matrix Style

INTRODUCTION

Data-driven technologies in public services are instruments for control, manipulation and punishment. My central claim is that data-driven technologies that unfairly target marginalised groups are rationalised by a call for mechanical objectivity (MO). The aim of this essay is to show why MO is partly to blame for oppressive algorithms and why it is misguided to view MO as an epistemic virtue, tout court. I will reference two examples to support my thesis: (1) Galton’s composite portraits; (2) Eubanks’s ‘Digital Poorhouse’. There is no reason to think modern data scientists share Galton’s eugenicist ambitions. However, there are important parallels to be drawn between their methodologies and rationales.

I will submit two arguments. First, MO is viewed as an epistemic virtue partly because we tend to hastily conflate indexicality and symbolism (Sekula, 1986, p55). This is an error since photos and digital datasets are, at best, idealised models which provide a partial representation of phenomena.

At worst, they distort phenomena and provide plausible deniability when the technologies produce unfair outcomes. Therefore, we cannot rely on them to generate essentialist or universal claims. Second, if objectivity is to be moralised, then we must interrogate the moral limits of MO. History is replete with instances in which MO aided justice and democracy, but the contrary also applies. Unless we accept MO as an absolute epistemic virtue, a position I grant is implausible, my thesis stands.

However, critics might respond to my arguments by asserting the impenetrability and moral blindness of algorithmic reasoning. While this may appeal to parties wishing to absolve themselves of moral responsibility, in Sections 2 and 3 I show that we cannot defer agency to these technologies. Instead, I advocate the interposition of human decency to avoid automating oppression.

1. CRITICAL EXPOSITION

MECHANICAL OBJECTIVITY

Let the machines do the talking

Daston and Galison (2007) define MO as ‘the insistent drive to repress the wilful intervention of the artist-author, and to put in its stead a set of procedures that would, as it were, move nature to the page through a strict protocol, if not automatically’ (p121). Their main argument is that scientists held MO as an epistemic virtue because seeing nature clearly — that is, without subjective projections — could only be achieved through mechanically produced artefacts. This is because, in their view, mechanical apparatuses like cameras seemed to enable nature to ‘speak for itself’ (p120) in a way that surpassed human methods of interpretation. MO stood in contrast with the previous brand of objectivity, which Daston and Galison term ‘truth-to-nature’. This view held that selecting, idealising, simplifying and beautifying were essential to the scientific representation of nature (p43). Whereas truth-to-nature encouraged the wilful intervention of the scientist, MO required scientists — as a matter of ethical compunction and discipline — to eliminate their individual judgement (p48). The agency of the knower was limited to creating the appropriate conditions for the mechanically objective apparatus to perform its representation of nature. Even though Daston and Galison speak in historical terms, I agree that remnants of both brands of objectivity persist in scientific practice (p46).

PHOTOGRAPHY, AUTOMATISM, INDEXICALITY AND SYMBOLISM

Images do lie

Who was here?

In my view, the three components of MO that provide its initial warrant are (1) automatism, (2) indexicality and (3) symbolism. According to Daston and Galison, mechanical objectivists favoured cameras because they could produce images ‘untouched by human hand’, that is, in an automated fashion (p42). While they do not discuss indexicality explicitly, their account suggests that scientists’ advocacy of MO was partly driven by the notion that a photograph was an index. Or, in Mitchell’s terms, photographs were viewed as a ‘direct physical imprint’, akin to ‘a fingerprint left at the scene of a crime or lipstick traces on your collar’ (1992, p2).

There is no denying that, on their own, photographs record a physical trace of a contingent moment in time. The danger lies in viewing them as literal ‘emanations of the referent’ (Barthes, 1982, pp80–81) that reveal hidden truths by virtue of a camera’s automatism. When photographs are conferred this level of symbolic value, they threaten to pass as incontrovertible proof of ‘essential’ features, general laws and causal relationships that are empirically inadequate (Sekula, 1986, p55). Hence the importance of Peirce’s conceptual distinction between the indexical sign, which is causally connected to its object, and the symbolic sign, whose meaning rests on convention and interpretation. When agency is deferred to the camera and the photograph is elevated to ‘the level of the symbolic’ (ibid), the image acquires the power to incriminate or vindicate (Sontag, 2008, p5). It is this very fact — the camera’s oscillating status of incrimination and vindication, of threat and promise — that concerns me the most about giving primacy to MO.

GALTON’S COMPOSITE PORTRAITS

It’s all in the type

Galton’s Criminal Composites

Sir Francis Galton (1822–1911), who founded eugenics in Britain, employed MO as a mode of investigating the underlying traits that constitute the ‘criminal type’. Sekula (1986) explains how Galton fabricated composites:

‘[composites] worked by a process of successive registration and exposure of portraits in front of a copy camera holding a single plate. Each successive image was given a fractional exposure based on the inverse of the total number of images in the sample […] Thus, individual distinctive features, features that were unshared and idiosyncratic, faded away into the night of underexposure. What remained was the blurred, nervous configuration of those features that were held in common’ (p47)
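In computational terms, Galton’s procedure is nothing more exotic than pixel-wise averaging: giving each of N aligned plates a fractional exposure of 1/N is mathematically equivalent to taking the arithmetic mean of their intensities. The following minimal sketch (my own illustration in Python, not a reconstruction of Galton’s apparatus) makes this explicit:

```python
import numpy as np

def composite(portraits):
    """Blend N aligned grayscale portraits by simple averaging, mimicking
    Galton's fractional exposure of 1/N per plate: features shared across
    the sample accumulate, while unshared, idiosyncratic features sink
    into 'underexposure'."""
    stack = np.stack([p.astype(np.float64) for p in portraits])
    return stack.mean(axis=0).astype(np.uint8)

# Hypothetical input: eight aligned 256x256 portraits (random stand-ins
# here; Galton used registered photographs of convicts).
portraits = [np.random.randint(0, 256, (256, 256), dtype=np.uint8)
             for _ in range(8)]
blended = composite(portraits)
```

The output is a statistical artefact of whatever sample one feeds in, not a window onto an underlying ‘type’: change the sample and the ‘type’ changes with it.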

Galton (1878) wrote that a composite portrait:

‘represents the picture that would rise before the mind’s eye of a man who had the gift of pictorial imagination in an exalted degree […] The merit of the photographic composite is its mechanical precision, being subject to no errors beyond those incidental to all photographic productions.’ (p97; emphasis mine).

Galton placed a great deal of explanatory weight on composites because he maintained that automatism endowed the camera with a greater degree of ‘precision’ than even the most gifted artist or scientist could achieve. Further, Galton was possessed by the ideology that existing class relations in England could be naturalised and quantified, and he sought to devise a programme of social hygiene through selective breeding (Sekula, 1986, p42). Galton was motivated by (1) the classicist instinct to perceive the ancient Greeks as a higher race (specifically, two ranks higher than the English, according to Galton) and (2) a utilitarian vision of social betterment (pp 42, 65). That is, by taking measures to reduce the numbers of the ‘unfit’, Galton claimed to be pushing the English social average toward an invented, bygone Athens, and away from an equally invented, threatening residuum of ‘degenerate urban poor’ who were preordained for unhappiness (p44).

What Galton failed to see was that he had elevated the physiognomic descriptions captured by the composites to the level of the symbolic in a tautological fashion (ibid). Namely, Galton set out to demonstrate that those with a reputation for criminality bred criminals, without locating an independent causal feature in the composites that could explain the priority of nature over nurture. For example, on what grounds could Galton presuppose that the criminal type could be found at the centre of the composite and that only the ‘gross features of the head mattered’, without regard for the peripheral details of the image (p48)? By hypostatising criminality and generating essentialist claims that were broad in scope from empirically specific observations, Galton created a caricature of inductive reasoning and statistical inference. And yet, composite images based on Galton’s procedure, first proposed in 1877, persisted widely over the following three decades (p40). Why? Largely because Galton, together with his proponents, uncritically espoused the merits of the camera’s mechanical procedure for obtaining group characteristics. They deferred blindly to MO.

2. MY CRITIQUE

EUBANKS’S DIGITAL POORHOUSE

Algorithms as false witnesses

Eubanks’s Digital Poorhouse

My first argument is that MO is viewed as an epistemic virtue partly because we tend to hastily conflate indexicality and symbolism. I have shown that Galton’s composite portraits committed this error in his attempt to locate essentialist traits like criminality. So how does MO apply to modern data-driven technologies used in public services, what Eubanks calls the Digital Poorhouse? The shorter answer: these data-driven technologies rely on a sequence of mechanically produced rules and conventions to automate inferences and predictions about large datasets. Automation is synonymous with MO’s espousal of automatism and non-interventionism. Like photos, digital data points are indexical, ceteris paribus. But the fact that the inferences and predictions are mechanically produced does not preclude the possibility that they are a product of the historically contingent biases and assumptions of the people doing the coding. Hence the need to interrogate (1) the data’s symbolic claims and (2) the scope of those claims. The longer answer requires more exposition.

According to Eubanks (2018), the Digital Poorhouse comprises databases, algorithms, risk models and other forms of digital technology that ‘quarantine’ the marginalised (p12), who face higher levels of data collection (p6). She appropriates the term ‘poorhouse’ because she observes a parallel between the nineteenth-century county poorhouses that housed and managed the poor and modern methods of poverty management in the public sector. She argues that automated decision-making subjects the marginalised to ‘invasive surveillance, midnight raids, and punitive public policy that increase the stigma and hardship of poverty’ (p12). Support for her thesis comes from three examples: (1) the automated provision of Medicaid in Indiana; (2) homeless services in Los Angeles that used an algorithm to distribute scarce subsidised apartments; and (3) the Allegheny Family Screening Tool (AFST), an algorithm designed to reduce the risk of child endangerment in Allegheny County (p10). I will limit the discussion to examples (1) and (3), as they are sufficient to prove my thesis.

First, in early 2006, the Mitch Daniels administration released a request for proposal (RFP) to outsource and automate eligibility processes for TANF, food stamps, and Medicaid. While the project promised to ‘reduce fraud, curtail spending, and move clients off the welfare rolls’, it was a failed attempt to privatise and automate the process for determining welfare eligibility in Indiana (p46). Daniels blamed the state’s Family and Social Services Administration (FSSA) for ‘contributing to a culture of welfare dependency’ (ibid). He insisted that transitioning from interpersonal casework to electronic communication would make FSSA offices ‘more organized and more efficient’ (p47). Daniels’s claims were later contested factually, but what interests me here is that after IBM secured the $1.16 billion contract to automate the system (p48), Daniels’s mandate to ‘reduce ineligible cases’ (p46) and streamline eligibility determinations took precedence over helping the poor.

Daniels’s appeal to automation led to the system losing its human face, which supports my second argument about the moral limits of MO. Among other issues, caseworkers no longer had the final say in determining eligibility. Performance metrics designed to expedite eligibility determinations incentivised call centre workers to terminate cases prematurely (p50). As a result, ‘between 2006 and 2008, the state of Indiana denied more than a million applications for food stamps, Medicaid, and cash benefits, a 54 percent increase compared to the three years prior to automation’ (p51). This adversely affected desperately ill children and African Americans in particular. Eventually, the situation deteriorated to the point that Daniels had to acknowledge that the experiment had failed and cancel the contract with IBM.

I agree with Eubanks’s assessment that many of the administrative errors were the result of ‘inflexible rules that interpreted any deviation from the newly rigid application process […] as an active refusal to cooperate’ (p50). The words ‘inflexible rules’ here bear a striking resemblance to MO’s maxim of adhering to ‘strict protocol’ (Daston and Galison, 2007, p121), even when it results in patent injustice. MO seems to have provided Daniels’s administration with technological cover for its concerted effort to shunt people off welfare. Even though the technology was not particularly sophisticated, it provided plausible deniability as the administration refused welfare to so many eligible applicants. This was compounded by the ‘noninterventionist’ approach imposed on the caseworkers, whose decision-making power was sacrificed at the altar of automation, and on the call centre workers, who were rewarded for blindly deferring to the dictates of the performance metrics. In principle, the automated system should have served as a tool for decision-making, not as a decision-maker. But in true MO spirit, the automated system provided the administration with the ethical distance it needed to make the inhuman choice of denying eligible people welfare.
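To make the mechanism concrete, consider the following deliberately simplified sketch (a hypothetical reconstruction of my own, not the actual Indiana system) of how an inflexible, automated rule converts an administrative hiccup, such as a single missing document, into a finding of non-cooperation and an automatic denial:

```python
from dataclasses import dataclass, field

REQUIRED_DOCUMENTS = {"id_proof", "income_statement", "residency_proof"}

@dataclass
class Application:
    applicant_id: str
    submitted: set = field(default_factory=set)

def determine_eligibility(app):
    """Rigid rule: ANY missing document is read as 'failure to
    cooperate' and the case is closed. No caseworker is empowered
    to ask WHY the document is missing."""
    missing = REQUIRED_DOCUMENTS - app.submitted
    if missing:
        return f"DENIED: failure to cooperate (missing: {sorted(missing)})"
    return "ELIGIBLE: forwarded for benefit calculation"

# A single lost fax of a pay stub becomes an 'active refusal to cooperate'.
print(determine_eligibility(
    Application("case-001", {"id_proof", "residency_proof"})))
```

A caseworker with discretion would ask why the document is missing; the rule, by design, cannot.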

Turning to the second example, the AFST is a cutting-edge machine-learning algorithm developed by a team of economists at the Auckland University of Technology. Factoring in variables like a parent’s welfare status, mental health, and criminal record, the AFST produces a score that is meant to forecast a child’s risk of endangerment (p130). However, Eubanks observed that its predictions often defy common sense: ‘A 14-year-old living in a cold and dirty house gets a risk score almost three times as high as a 6-year-old whose mother suspects he may have been abused and who may now be homeless.’ ‘And yet’, she writes, ‘the algorithm seems to be training the intake workers’ (p142). As in Indiana, the workers tend to defer to high scores produced by inherently flawed software. Much like Galton’s self-validating assertions about criminality based on composites, the algorithm’s predictions invite closer scrutiny of the families it flags, and that scrutiny in turn generates the records that appear to confirm the prediction. This cruel feedback loop is a product of flawed historical data shaped by patterns of racial profiling (p152), rather than an ‘objective’ representation of reality.
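The structure of that feedback loop can be made explicit. The toy simulation below is entirely hypothetical (the real AFST is a proprietary statistical model with far more variables); it illustrates only the formal point that a score driven by the volume of past system contact will direct more scrutiny at already-surveilled families, thereby manufacturing the very records that appear to confirm the score:

```python
import random

def risk_score(n_records):
    """Toy stand-in for a predictive model: the score rises with the
    number of prior system contacts (welfare use, referrals, etc.),
    which is itself a product of who was surveilled in the past."""
    return min(1.0, n_records / 20)

def simulate(years=10):
    # Two hypothetical families with identical underlying behaviour;
    # family A simply starts with a thicker file (more past contact
    # with public services, a proxy for poverty and profiling).
    records = {"family_A": 8, "family_B": 1}
    for _ in range(years):
        for fam in records:
            # High score -> investigation -> a new record is created,
            # regardless of whether anything is actually found.
            if risk_score(records[fam]) > 0.3 and random.random() < 0.8:
                records[fam] += 1
    print(records)  # family A's file keeps growing

simulate()
```

By construction the two families behave identically, yet family A’s file grows year on year while family B’s stays thin; the score then cites that thicker file as evidence of risk.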

3. OBJECTIONS AND REPLIES

One response to my thesis is that the issue here is one of flawed technology, which could be overcome by improved and morally blind technology. This objection is empirically open, and my response will rely on analytic retrospection. I contend that, ultimately, both examples serve as cautionary tales about the logical extreme of suppressing human interventionism. For example, there is room for subjectivity in determining what precisely constitutes neglect or abuse, and it is unlikely that an algorithm will possess the degree of emotional intelligence required to make this assessment. As Daston and Galison (2007) put it, ‘as long as knowledge posits a knower, and the knower is seen as a potential help or hindrance to the acquisition of knowledge, the self of the knower will be at epistemological issue’ (p40). Both examples suggest that the knowers here, i.e. the bureaucrats, the software developers, the electorate and the victims, would have been a help rather than a hindrance in circumventing the unfair outcomes of self-validating algorithms that captured only a partial representation of ethically and technically complex phenomena, if they did not distort it outright.

CONCLUSION

I have shown that MO is not an absolute epistemic virtue and that it must be supplemented by human intervention. MO identifies a genuine threat to epistemology: unbridled subjectivity that can lead to distorted renderings of phenomena. The knower must be grounded in some discipline, and MO is one option (Daston and Galison, 2007, p48). In the Kantian tradition, however, MO is a regulative ideal that cannot be fully realised without the danger of straitjacketing scientists into strict adherence to mechanical procedures. A point of further research will be to show how MO can be reconciled with other epistemic virtues, namely ‘trained judgement’ (p376).

References

  1. Barthes, R. (1982). Camera Lucida. London: Cape.
  2. Daston, L., & Galison, P. (2007). Objectivity. New York: Zone Books.
  3. Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
  4. Galton, F. (1878). Composite portraits. Nature, 18, 97–100. Retrieved from http://galton.org/essays/1870-1879/galton-1878-nature-composite.pdf
  5. Mitchell, W. J. (1992). Intention and artifice. In The Reconfigured Eye: Visual Truth in the Post-Photographic Era (pp. 23–58). Cambridge, MA: MIT Press.
  6. Sekula, A. (1986). The body and the archive. October, 39, 3–64.
  7. Sontag, S. (2008). On Photography. London: Penguin.
