SHOULD WE WORRY ABOUT AI AND ALGORITHMS IN GOVERNMENT?

Many of you might have heard some of the hype around artificial intelligence. But what is artificial intelligence anyway? Is it particular techniques like machine learning? Is it anything a computer does that would otherwise have required human intelligence? While we're still working out what exactly artificial intelligence is, I'm going to use a broader term, algorithms, to describe the automation of processes yielding an output. In using that term, I'm including the kinds of technologies associated with artificial intelligence as well. Now, because of developments in artificial intelligence, and also because of greater confidence around different kinds of automation, we're increasingly using algorithms not only to process data but also in our decision making, including really important decisions. Governments are increasingly relying on algorithms to decide how to allocate resources or how to provide services. So here's my question: in Australia, should algorithms make official decisions? Is there anything here that we should worry about? And no, I'm not talking about evil killer robots taking over the world. I'm talking about far more mundane things, things like Robodebt. This was an algorithm that drew on tax and welfare data to determine who had been overpaid benefits and by how much. When this processing was previously done by humans, they drew on a wide variety of different kinds of data. But the algorithm relied on a simple formula, and that formula assumed stability of income over the course of a year. What that meant was that people like Ken, whose income was uneven, received a debt letter for money that he didn't owe. But that's not the end of the story.
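To make the averaging problem concrete, here is a minimal sketch. The fortnightly amounts, income-free threshold, and taper rate below are hypothetical illustrations, not the actual Centrelink rules, which are far more detailed; the point is only that averaging manufactures income in the fortnights a person was actually unemployed.

```python
# Minimal sketch of why annualized income averaging produces phantom debts.
# THRESHOLD and TAPER are hypothetical figures, not real Centrelink rules.

THRESHOLD = 500.0   # hypothetical fortnightly income-free area
TAPER = 0.5         # hypothetical: 50c of benefit lost per dollar over threshold

def implied_debt(fortnightly_incomes):
    """Benefit reduction implied by the income attributed to each fortnight."""
    return sum(max(0.0, x - THRESHOLD) * TAPER for x in fortnightly_incomes)

# Ken's year: 13 fortnights of paid work, then 13 fortnights on benefits.
worked = [2000.0] * 13
on_benefits = [0.0] * 13

# Assessed on actual fortnightly income, the benefit fortnights carry
# no income, so no overpayment arises.
print(f"debt from actual income:   ${implied_debt(on_benefits):,.2f}")

# The shortcut spreads the annual total evenly over 26 fortnights,
# making the benefit fortnights look like they had income all along.
average = sum(worked + on_benefits) / 26
print(f"debt from averaged income: ${implied_debt([average] * 13):,.2f}")
```

Run as written, the first line prints $0.00 and the second prints $3,250.00: a debt conjured entirely by the averaging assumption.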
The process was so complicated (Ken described it as Orwellian) that he was unable to resolve the issue with Centrelink in two years. Is it OK that Ken gets "computer says no"? What exactly is the problem with Robodebt? Turning to a different example, the COMPAS tool takes in a wide variety of data and predicts whether someone who has committed an offense is likely to commit another one. Amazing technology, right? Predicting the future. And this is being used, particularly but not exclusively in the United States, by judges, parole boards, and prison authorities in their decision making. Now, it sounds amazing, but of course there's a flaw, and ProPublica pointed out a really important one: there's a higher false positive rate for African Americans. What does that mean? It means that if you're African American, you're more likely to get a high risk score even though you would not, in fact, go on to reoffend. To make this real:
The man on the left received a high risk score of 10 despite having only one prior nonviolent offense, whereas the man on the right received a much lower score of 3 despite a prior offense of attempted burglary. Only the man on the right, in fact, went on to reoffend. Now, one example is not statistics, but ProPublica was able to show that this was happening at scale. So is this OK? Is it all right to use an algorithm that draws on historic data to make decisions about an individual, particularly how long that person is going to spend in jail? Moving from the United States to China, we can look at the social credit system. This algorithm, again, draws on a wide variety of variables to give you a social credit score, which is then used to decide: can you travel? Can you access the internet? Can you send your kid to a particular school? Artificial intelligence is involved here in some situations. For example, facial recognition is used to detect people jaywalking across the street, and that will then lead to a bad credit score. So a social credit score, perhaps determined because you were automatically detected crossing a road against the light, decides your access to social services, your ability to travel, and many other aspects of your life. Is this the kind of thing we would be willing to put up with in Australia? And we answer that question thinking about ethics, thinking about politics, thinking about the accuracy of the calculation. Now, here's the thing: in Australia, we can choose, more or less, which companies we deal with. If you don't like Amazon's recommendation algorithm, you can go to your local bookstore and buy your books there. But we don't get to choose about our interactions with government.
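For anyone who wants to see what ProPublica's metric actually measures, here is a minimal sketch of a per-group false positive rate: among the people who did not in fact reoffend, what share were nevertheless scored high risk? The records below are hypothetical toy data, not the real COMPAS dataset.

```python
# Per-group false positive rate: of those who did NOT reoffend,
# how many were scored high risk anyway? Records are hypothetical.
from collections import defaultdict

# (group, scored_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", True, True), ("B", False, False), ("B", False, False), ("B", True, False),
]

false_pos = defaultdict(int)   # scored high risk but did not reoffend
negatives = defaultdict(int)   # did not reoffend at all

for group, high_risk, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if high_risk:
            false_pos[group] += 1

for group in sorted(negatives):
    print(f"group {group}: false positive rate {false_pos[group] / negatives[group]:.0%}")
```

With these toy records, group A's false positive rate is 67% against group B's 33%: the same kind of gap ProPublica reported, here compressed into eight rows.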
So, in light of that, what are we as citizens of Australia entitled to expect? I would argue that Ken was entitled to expect that if he receives a debt notice from the government, it represents a debt he actually owes; that the government has done enough testing and evaluation before it sends out letters to confirm that that's the case; and that if he disagrees and wants to contest it, there's a fair, speedy process he can use to do so. I think all Australians are entitled to be treated fairly. Now, that's a complex thing: what does fairness mean? If we look at the COMPAS tool, the company that made it argued that the tool was fair and was able to show it met the company's own fairness metric. In fact, it's mathematically impossible to be fair in every sense of the term. But nevertheless, we want our government to try: to think deeply about what fairness means in the context in which a particular system is being deployed, and to do testing to confirm that a system will be fair. In particular, we want government to be aware of the risks of relying on data collected in a historic context, with all the racism and sexism that exist in the world, using that data to draw inferences about us today, and then making decisions about us based on those inferences.
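That impossibility claim can be made concrete. One well-known result in the fairness literature (Chouldechova's) is that when two groups have different base rates of reoffending, a tool that is equally well calibrated for both groups (equal positive predictive value) with equal miss rates must have different false positive rates. The sketch below simply evaluates that identity with hypothetical numbers; it isn't drawn from the COMPAS data.

```python
# Chouldechova's identity: FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR),
# where p is the group's base rate of reoffending. Equal PPV and FNR
# across groups with different base rates forces unequal FPRs.

def implied_fpr(base_rate, ppv, fnr):
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

PPV, FNR = 0.7, 0.2  # identical for both groups: "fair" on these two metrics

for name, base_rate in [("group A", 0.5), ("group B", 0.3)]:
    print(f"{name}: base rate {base_rate:.0%} -> "
          f"false positive rate {implied_fpr(base_rate, PPV, FNR):.1%}")
```

Here the tool satisfies the vendor-style metrics identically for both groups, yet group A's false positive rate comes out at 34.3% against group B's 14.7%. Being fair on one metric guarantees unfairness on another.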
If we think about the social credit system, I would like to think that Australians would expect a government not to exercise that level of surveillance and control over the citizenry. Now, this is a democracy, and everyone is entitled to their own view on what is an appropriate level of surveillance. Some of you might think that some surveillance is appropriate, for example, to help law enforcement solve crimes. But even in a democracy, there need to be red lines, and I would put China's social credit system on the far side of the red line. So, returning to the initial question: should government rely on algorithms in its decision making? What exactly is the problem here? And is it really about the technology? Well, despite the nature of the examples, not really. Robodebt demonstrates that government should not automate a flawed process affecting a vulnerable population, but it doesn't really tell us that automation generally is a bad idea. COMPAS is an example of what can go wrong when relying on historic data sets to draw inferences about people without really thinking about issues around fairness. But it doesn't really matter whether we use machine learning and artificial intelligence or whether we use good old-fashioned statistics.
China's social credit system is an example of government surveillance and control. But would we be any less concerned if, instead of facial recognition at traffic lights, we had humans watching us all the time and inputting things into systems, so that ultimately, when we try to enroll our kid in a school, another human doesn't let that happen? In other words, what is the problem here? My argument is that it's not actually about how technologically sophisticated the tool is. It's about centering human needs and human rights in the systems that governments build. So if that's the problem, what's the solution? Many of you may have heard of AI ethics, and indeed governments and organizations and academics are creating lists of AI ethical principles. These include things like fairness, accountability, transparency, beneficence, motherhood and apple pie. And there's a measure of ethics washing here: the Australian government introduced its AI ethical principles while it was still actively pursuing Robodebt. AI ethics is not going to solve the problem, for three reasons. First, the problem is not about the sophistication of the technology or whether it is classified as AI. Second, the principles are too vague to be useful: COMPAS is fair according to its own metrics. And third, they don't give you a remedy. Ken, who received the erroneous debt letter, couldn't do anything with the AI ethical principles the government had released; they didn't help. AI ethics might be fantastic if we're trying to prevent the robo-apocalypse, but what we're really worried about, or should be worried about, in Australia at the moment isn't that, not yet. It's things like Kafkaesque navigation through complex government systems without human empathy or support. It's relying on biased historic data sets to draw inferences about us and then using those inferences in decision making. It's government surveillance and control of the kind we can see in China's social credit system. So if AI ethics isn't the solution, what is?
I argue we need four things. We need to build constraints into the legislation that authorizes the use of these kinds of systems, requiring things like proper testing and evaluation and full transparency. We need protection for privacy and autonomy as the foundation of human dignity. We need standards with enough detail that organizations can create practical policies for designing, using, and purchasing AI systems. And we need citizens to understand the kinds of problems I've been talking about and the values we need to protect, so that they stop government from putting out flawed or inappropriate systems. So how would that play out in the context of our examples? Well, for Robodebt, first of all, the legislation would have required that the system follow the rules for when debts are actually owing and not use shortcuts like annualized income. We would have had testing and evaluation according to clear standards, through which government could confirm that that was indeed the case.
The code would have been released transparently, allowing any bugs to be picked up by the community in advance of deployment. And after all that, if Ken had received an erroneous debt letter, he would have had access to an efficient process for contesting the debt, because resources would have been put in to enable it. What about COMPAS? Well, one could very well argue that data-driven decision making is inappropriate in certain contexts, and sentencing is probably one of those contexts. But there are going to be contexts in which government does want to rely on data in making decisions, about how it allocates resources, for example. So what do we need to think about there? First of all, we need to deeply interrogate what fairness means in the context in which the system will be deployed. We need to work out what the right fairness metric is, and that might change how we do things. New Zealand, for example, has a risk assessment tool, but it relies on a very narrow range of variables associated with prior offending; maybe that is less problematic in that context. Once we determine what is fair, we need to test and evaluate not only the accuracy of the algorithm, the figures you see reported, but also against our fairness metric.
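As a sketch of what that dual evaluation might look like, with hypothetical records: a system can post a respectable headline accuracy while failing a chosen fairness metric, which is exactly why testing the reported accuracy figure alone is not enough.

```python
# Evaluate both headline accuracy and a fairness metric (per-group
# false positive rate). Records are hypothetical: (group, predicted, actual).

records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 1, 0), ("A", 0, 0), ("A", 1, 1),
    ("B", 0, 0), ("B", 0, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),
]

correct = sum(pred == actual for _, pred, actual in records)
print(f"headline accuracy: {correct / len(records):.0%}")

for g in ("A", "B"):
    # Predictions for this group's true negatives (actual == 0).
    neg_preds = [pred for grp, pred, actual in records if grp == g and actual == 0]
    print(f"group {g}: false positive rate {sum(neg_preds) / len(neg_preds):.0%}")
```

On these toy records the headline accuracy is 70%, yet group A's false positive rate is 67% and group B's is 0%. The single accuracy figure completely hides the disparity the fairness metric is there to catch.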
Again, we would use standards to do that well. We need transparency around what systems are being used, so that citizens know about them. We need the ability to contest this kind of decision making when it's inappropriate. If we did all of that, hopefully people would no longer be spending longer in jail because of the color of their skin, and we would all be able to be more confident about how government is using our data to make decisions about us. Now, I'm hoping we stay far away from China's social credit system, but this requires a level of constant vigilance. If we're concerned about surveillance, if we're concerned about state control, including through cyber-physical systems that stop you entering a train station, then we need to remain educated, and we need to pay attention when government tries to push the boundaries and erode our values. So algorithms can be used appropriately for official decision making, if that is done thoughtfully: not with AI ethics washing, but with legal protections mandating things like proper evaluation, standards telling agencies how to do that properly, and an educated public willing to challenge a government that introduces problematic systems. We can get the benefits of AI; we can use automation in government decision making. But next time a system is coming out, and hopefully you'll even be told about it, ask whether processes have been put in place so that you can be confident about its appropriate use. Thank you.