On the Conduct of Business and Artificial Intelligence

Last week, Amazon, Google, Facebook, IBM, and Microsoft announced a partnership called the Partnership on Artificial Intelligence to Benefit People and Society. The Partnership’s website states that one of its goals is:

To support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.

Not to make this a particularly “meta” post, but is it possible that these companies anticipate promulgating a code of ethics as a result of their efforts? If so, does the creation of a code of ethics by a collection of publicly traded companies change the nature of the code, casting doubt on its ideological purity, or is the corporate character of its authors an irrelevant piece of information?

It would be nice to think of companies as possessing responsibilities to society, especially since they are afforded some of the same rights as citizens (see: First National Bank of Boston v. Bellotti; Citizens United v. Federal Election Commission), but nothing guarantees that result. Law professor Ciara Torres-Spelliscy explores this idea further in her book Corporate Citizens? An Argument for the Separation of the Corporation and the State (Carolina Academic Press, 2016). The crux of the idea is that corporations do not inherently possess the same responsibilities to society that individual citizens possess.

With that in mind, what recourse does the public have for ensuring the ethical design of artificial intelligence? Is it sufficient to naively hope that the members of the Partnership are benevolent, altruistic night watchmen just trying to protect society from the slippery slopes along the artificial intelligence development path? After all, to assume the opposite might be tantamount to imputing malevolent intentions to them, an arguably unwarranted and unfair conclusion. Still, humans are fallible, and this would not be the first time that the best laid plans of mice and men went awry. An alternative route would be government regulation, but that is a traditionally ponderous mechanism ill-adapted to the speed of technological development. Given the scope of the issue, and the economic forces that motivate some of the actors involved, is this the best we can hope for when it comes to outlining the ethics of artificial intelligence? Or would a more eclectic, multi-pronged approach be more advantageous?

Pragmatically, the Partnership is certainly a start. It is almost certainly better than the alternative: no consortium of the bigger companies in the artificial intelligence field considering the ethical implications of their technologies. Nonetheless, is this the best alternative? Moreover, are there reasons to believe that the Partnership will prove insufficient, and that engineering codes of ethics, along with engineering educators, should do more to adapt their own internal standards rather than relying on those of specific companies?

