Last week’s post discussed the newly announced arrangement between some of the tech giants called the Partnership on AI. Embedded in that announcement were (at least) two additional, timely issues that each merit their own posts. The first, the subject of this week’s post, is the ethics of algorithms and their potentially pernicious effects. The second, to be discussed next week, is the issue of transnational standards of ethics. For now, let’s stick with algorithms and agency.
It is no surprise that black-box algorithms have ethical implications. One need only enter the term “ethics of algorithms” into Google’s search engine (which will use its own algorithm to return thousands of results in the blink of an eye) to find an array of sites, including: a project funded by the National Science Foundation titled, fittingly, The Ethics of Algorithms; an article from July 2016 in MIT’s Sloan Management Review on Ethics and The Algorithm; a 2015 journal article, Toward an Ethics of Algorithms, on networked information algorithms; a primer page from the Center for Internet and Human Rights; and even a TED-Ed Talk from Eli Pariser on the implications of internet search algorithms for a democratic society. Cathy O’Neil broaches this topic (and others) in her newly released book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, which she described in this NPR interview and elsewhere. All of these sources elaborate on the ways in which ostensibly objective computer algorithms contain myriad value judgments embedded within them, each with profound ethical implications.
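To make that abstract point concrete, consider a deliberately simplified, entirely hypothetical scoring function (the feature names, weights, and cutoff below are invented for illustration and drawn from no real system). Even this tiny sketch encodes value judgments at every turn: which inputs count, how much each one matters, and where the line of approval is drawn.

```python
# A hypothetical, deliberately simplified "objective" loan-scoring sketch.
# Every numeric choice here is a value judgment dressed up as arithmetic:
# which features are included, how each is weighted, and where the
# approval cutoff sits.

def loan_score(income, zip_code_risk, years_employed):
    # Weighting a zip-code "risk" factor at all imports historical
    # patterns of neighborhood disadvantage into the calculation,
    # however neutral the formula looks.
    return (0.5 * income / 10_000
            + 0.3 * years_employed
            - 2.0 * zip_code_risk)

def approve(score, cutoff=5.0):
    # The cutoff itself is a policy decision, not a mathematical fact.
    return score >= cutoff

score = loan_score(income=60_000, zip_code_risk=0.8, years_employed=3)
print(score)           # 2.3
print(approve(score))  # False
```

Nothing in the code announces that a value judgment has been made; the ethics live silently in the coefficients, which is precisely why they can go unnoticed.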
This is clearly a looming issue for society, especially when coupled with the aforementioned issues surrounding artificial intelligence. Given the ubiquity of technology in everyday life, it has sweeping implications for everyone. Yet, without an appreciation for the underlying code that enables the technology to operate, the ethics of these algorithms can go unnoticed. Moreover, without a functioning oversight board or regulatory body, what guarantees the ethical outcomes that society might desire?
With a chemical plant, it is easy to see when its operators dump effluent into a local waterway and for spectators to cry foul (or for the fowl). What is the equivalent for something more hidden, say, an algorithm? Furthermore, who bears responsibility? With the chemical plant, it is easy (or at least easier) to pinpoint the individual(s) taking the physical action to improperly dispose of waste material. With something more diffuse, such as lines of code developed by a team within a team within a company, do the ethical considerations change? One would presume the answer to be a resounding no, but what, then, are the options for recourse? Is this the role of a new canon in a code of ethics, or is this best left to a consortium of high-tech companies, à la the Partnership on AI?
Perhaps the salience of computer codes and algorithms and their ethics adds an alternative meaning to the word “codes” in “Shadowcodes”…
Footnote: Some of these issues are far from new. For example, Deborah Johnson, professor emerita at UVA, wrote a book on Computer Ethics back in 1985. Still, the argument “we’ve made it this far, so why should things change now?” fails to address the substantive concerns raised above about algorithms and their ability to shape human behavior.