Ethical principles have been proposed for AI. A common feature of these proposals is an expert body that develops a framework and rules to govern the development and use of AI.
This approach should be compared with the counterfactual: diverse decisions by entrepreneurs making competing bets on the future and by consumers acting on their preferences, within a legal and policy framework of market governance. The counterfactual is itself founded on ethical principles. Adopting specific ethics for AI involves two problems.
First, AI is not a distinct and unique category within which distinct problems or harms are likely to arise. Problems, and the possible need for new ethics and rules, can be expected to relate to a specific application of AI rather than to all AI; they may also relate to human decisions and institutions. Focussing on ethics for AI therefore involves a category error.
Second, if ethics for AI is to have any bite, it would involve substituting the views of a committee for those of entrepreneurs and consumers engaged in permissionless innovation and selection via a contestable process. Ethics for AI would thus involve the concentration of power in a group and a reduction in individual agency and innovation - an outcome that would arguably be unethical. Given the promise of AI, an immediate challenge is identifying and removing barriers to the adoption and use of AI, and adapting existing law and regulation to a technology and market context which may require different modes of regulation - and potentially less regulation - to the extent that AI enables the market to reduce information asymmetries and better protect consumers from harm.
Where new standards are justified, they should ultimately apply to all algorithms and decisions, including human decisions. This may require that we keep humans 'out of the loop' where their performance is inferior to that of machine-learning-based algorithms; that will likely prove to be primarily a political, rather than ethical, challenge.
This paper by Brian Williamson explores these issues.