Sometimes we also make the reverse mistake, asking AI engineers to adjudicate complex ethical questions that should remain the responsibility of wider society. The question, “what constitutes harmful data?”, is too complex and too important to be left to AI practitioners, who have their own biases and incentives.
A smart way to nail down where the responsibility of AI engineers ends is to point out where the responsibility of society begins. Decisions about how and where AI is deployed, and its appropriate limitations, should be made by governments and wider society. Consider questions like these:
- How much explainability do we require from AI? The sharp end of this debate involves areas like banking and law enforcement, where we have already agreed that algorithms should not make decisions based on race, gender, or other protected characteristics. This debate is at a relatively advanced stage, and the dust has settled on the idea of ‘impact’: if an AI model is likely to affect someone’s well-being, its engineers are expected to build explainable AI (a toy sketch of what this can look like follows this list). Still, as new capabilities emerge and reach sectors in new ways, the debate will continue.
- What should be excluded from AI training data? The massive datasets required to train a modern NLP system can only come from the vastness of the internet, and we all know the internet is awash with hate speech and misinformation. As the courts are discovering, the boundaries of intellectual and artistic property are complex. The New York Times may block OpenAI’s crawler from its site (see the sketch after this list), but what about all the quoted material reposted by regular people, often without citation? In some cases it may not be possible to prevent these models from absorbing IP. So, do we ban them? Should topics that touch on IP be forbidden?
- What is an acceptable amount of error for a particular task? How many car accidents are too many when it comes to driverless cars? Tesla talks about error rates orders of magnitude lower than human error rates, yet even this is not considered good enough (a back-of-envelope comparison follows this list). Our legal system is built on agency, but AI agents are not considered agents under the law; Air Canada learned this the hard way recently when it was forced to honour its chatbot’s offers. It is a complicated question, and every answer will be case-by-case. For instance, we will probably decide to tolerate more error in a hotel reservation bot than in a driverless car or an autonomous military vehicle.
- Mistakes are inescapable, so who will bear the responsibility? If a driverless car crashes, how much of the cost falls on the manufacturer, how much on the car owner, and how much on those who maintain the infrastructure? What if a stock price crashes? The same question applies to a prison sentence, a credit score, a home loan application: the examples could go on.
These are only a few examples among many. Others include:
- Should we allow human jobs to be replaced?
- Can AI companies be expected to keep superintelligences under control?
- How much should we regulate the users of AI versus its creators?
- What about environmental considerations?
...
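On explainability: the simplest version of ‘explainable AI’ is choosing a model whose decisions can be read directly. Below is a minimal sketch in Python using scikit-learn; the lending features and data are invented for illustration, not drawn from any real system.

```python
# A minimal sketch of an inherently explainable model for a lending decision.
# Feature names and data are hypothetical; note that protected characteristics
# such as race and gender are deliberately absent from the feature list.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_k", "debt_ratio", "years_employed", "missed_payments"]

# Hypothetical applicants: income in thousands, debt-to-income ratio,
# years in employment, missed payments in the last 24 months.
X = np.array([
    [65, 0.30,  8, 0],
    [32, 0.55,  2, 3],
    [48, 0.40,  5, 1],
    [90, 0.20, 12, 0],
    [28, 0.60,  1, 4],
    [55, 0.35,  6, 1],
])
y = np.array([1, 0, 1, 1, 0, 1])  # 1 = repaid, 0 = defaulted

model = LogisticRegression(max_iter=1000).fit(X, y)

# A linear model's coefficients read as plain statements of how each feature
# pushes the decision, which a rejected applicant can inspect and contest.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>16}: {coef:+.3f}")
```

The design choice here is the explanation: a deep model might score better, but a linear one can be defended to a regulator or an applicant line by line.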
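On blocking crawlers: the mechanism the New York Times used is a robots.txt directive aimed at GPTBot, OpenAI’s web crawler. The sketch below uses Python’s standard library to show the rule being read; the URL is a placeholder, and the whole scheme only works because the crawler voluntarily honours it, which is part of why IP absorption is so hard to prevent.

```python
# A robots.txt rule that opts a site out of OpenAI's training crawls,
# checked with Python's built-in parser. example.com is a placeholder.
from urllib import robotparser

robots_txt = """\
User-agent: GPTBot
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("GPTBot", "https://example.com/article"))     # False: blocked
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True: unaffected
```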
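On acceptable error: a hedged back-of-envelope calculation makes the driverless-car dilemma concrete. Every figure below is an assumed placeholder, not a published statistic; the point is that even a system ten times safer than human drivers produces an enormous absolute number of failures, each with a new locus of blame.

```python
# Back-of-envelope: why 'fewer crashes than humans' doesn't end the argument.
# All figures are invented placeholders, not Tesla's published numbers.
HUMAN_CRASHES_PER_MILLION_MILES = 2.0     # assumed human baseline
AV_CRASHES_PER_MILLION_MILES = 0.2        # assumed autonomous rate, 10x better
FLEET_MILES_PER_YEAR = 3_000_000_000_000  # assumed annual national mileage

human = HUMAN_CRASHES_PER_MILLION_MILES * FLEET_MILES_PER_YEAR / 1e6
autonomous = AV_CRASHES_PER_MILLION_MILES * FLEET_MILES_PER_YEAR / 1e6

print(f"Expected crashes per year, human drivers: {human:,.0f}")
print(f"Expected crashes per year, autonomous:    {autonomous:,.0f}")
# An order-of-magnitude improvement still leaves hundreds of thousands of
# crashes, each now attributable to a manufacturer rather than a driver.
```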
Thankfully, the pressure on engineers is being alleviated. A core aspect of incoming AI regulations and principles is preventing harm and promoting equitable outcomes.
Europe’s AI Act is the most advanced example of this: a legislative framework seeking to balance AI innovation with ethical considerations and the protection of fundamental rights. The AI Act takes a ‘fundamental rights approach’, focusing on the impact of AI’s use rather than regulating any specific technology. The legal reasoning is to create a set of standards that will stand the test of time, driving ethical AI development long into the future. As European Parliament President Roberta Metsola said, it is “legislation that will no doubt be setting the global standard for years to come.”
Whilst the time has come for international organisations to install governance procedures that comply with emerging AI regulations, there will still be many nuances the law does not perfectly cover. There will still be many moments in an engineer’s career when they will have to make up their own minds about ethical AI.
They shouldn’t have to, but they will.