
So recently I posted an article about the ongoing debate on AI - something of great interest to me - and my very good friend Jim Davies posted the following comment (edited down to its gist):
So we have an interesting problem of customers wanting the ethical decisions [made by AI] to be a more public, open discussion, perhaps done by ethics experts, and the reality is that the programmers are doing the deciding behind closed doors. Is it satisfying for the rest of us to say merely that we’re confident that the engineers are thinking and talking about it all the time, deep in Google’s labs where nobody can hear them?
There are some interesting things to unpack - for example, whether there really are such things as ethics experts, and whether ethical decisions should be made by the public or by individuals.
Personally, as an ex-Catholic who once thought of going into the priesthood, and as an AI researcher who thinks about ethics quite carefully, I believe most so-called ethics experts are not actually experts (and for the sake of argument, I'll put myself in that same bin). For example, the philosopher Peter Singer is often cited as an ethics expert, but several of his more prominent positions - e.g., opposing the killing of animals while condoning the killing of infants - undermine the sanctity of human life, a consequence he himself admits; so the suggestion that ethics experts should be making these decisions seems extraordinarily hazardous to me. Which experts?
Similarly, I don’t think ethical decisions in engineered systems should be made directly by the public, but I do think safety standards should be set consistent with our democratic, constitutional process - by which I mean, ethical standards should reflect the will of the people being governed, consistent with constitutional safeguards for the rights of the minority. Car safety and airplane safety are good examples of this policy: as I understand the law, the government is not (in general) making the actual decisions about how car makers and airplane makers meet safety standards - that is, not deciding which metals or strut designs keep a vehicle safe - but is instead creating a safety framework within which a variety of approaches can be implemented.
There’s a lot to discuss there.
But one thing that still bugs me about this is the idea that engineers are talking about this deep in corporate labs where no one can hear them. I mean, they are having those conversations. But some of those same engineers are also speaking publicly - Peter Norvig, a Director of Research at Google, has an article in the recent What to Think About Machines That Think, and some other Googler is writing this very blog post.
And in my experience, software engineers and artificial intelligence researchers have been talking about this all the time - to each other, in hallways at GDC, over dinner, with friends - for as far back as I can remember.
So I guess what’s really bothering me is this: if we’re talking about it all the time, why does nobody seem to be listening? And why do people keep saying that we’re not talking about it, or that we’re not thinking about it, or that we’re so clearly not talking or thinking about it that the talking and thinking we’re not doing should be taken away from us?
-the Centaur