A Critical Look at the Ethical Risks of Artificial Intelligence


Artificial Intelligence Pros and Cons:

As much as we marvel at the discoveries and benefits to society of AI and prediction engines, we also recoil at some of their findings. We can’t make the correlations that this software discovers go away, and we can’t stop the software from re-discovering those associations in the future. As decent human beings, we certainly wish to keep our software from making decisions based on unethical correlations.

Ultimately, what we need is to teach our AI software to distinguish good from bad…

Unintended results of AI: an example of the disadvantages of artificial intelligence

A steady stream of findings already makes it clear that AI efficiently uses data to determine characteristics of people. Simplistically speaking, all we need to do is feed a bunch of data into a system, and the system then figures out formulas from that data to determine an outcome.

For example, more than a decade ago in university classes, we ran some tests on medical records, trying to find people who had cancer. We coded the presence of the disease into our training data, which we then scanned for correlations with the other medical codes present.

The algorithm ran for about 26 hours. In the end, we checked the results for accuracy, and needless to say, the system performed fantastically. It reliably homed in on a medical code that predicted cancer: more specifically, the presence of tumors.
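
To make the mechanism concrete, here is a minimal Python sketch of the kind of correlation scan described above, using made-up medical codes and synthetic records rather than the original university data: for each code, it measures how often that code co-occurs with the cancer label and ranks the codes accordingly.

```python
# Minimal sketch of a correlation scan over coded medical records.
# The codes and data below are made up for illustration only.
from collections import defaultdict
import random

random.seed(0)

# Hypothetical training data: each record is a set of medical codes plus a label.
records = []
for _ in range(1000):
    has_cancer = random.random() < 0.2
    codes = set()
    if has_cancer and random.random() < 0.9:
        codes.add("TUMOR_PRESENT")          # strongly associated with the label
    if random.random() < 0.3:
        codes.add("HIGH_BLOOD_PRESSURE")    # unrelated noise
    if random.random() < 0.5:
        codes.add("ROUTINE_CHECKUP")        # unrelated noise
    records.append((codes, has_cancer))

# Count how often each code co-occurs with the cancer label.
with_label = defaultdict(int)
total = defaultdict(int)
for codes, has_cancer in records:
    for code in codes:
        total[code] += 1
        if has_cancer:
            with_label[code] += 1

# Rank codes by the share of their occurrences that carry the label.
ranked = sorted(total, key=lambda c: with_label[c] / total[c], reverse=True)
for code in ranked:
    print(code, round(with_label[code] / total[code], 2))
# "TUMOR_PRESENT" rises to the top -- the correlation the anecdote describes.
```

Nothing in the scan says “look for tumors”; the association simply falls out of the counts, which is the point of the anecdote.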

Of course, at the outset, we’d like to assume this data will go to productive, altruistic uses. At the same time, I’d like to emphasize that even though the finding invites the response “well, of course, that is the case,” it substantially demonstrates that such a program can discover correlations without being explicitly told what to look for…

Researchers might develop such a program with the intention of curing cancer, but what happens if it gets into the wrong hands? As we well know, not everyone is altruistically motivated, especially when driven by financial gain. Realistically speaking, a program that looks for correlations to guide research toward scientific discoveries with good intent can just as easily be used for bad.

The Negative Risk: Unethical Businesses

By function and design, algorithms naturally discriminate. They distinguish one population from another. The same basic principles can be used to determine a multitude of characteristics: sick from healthy, gay from straight, and black from white.

Periodically the news picks up an article that illustrates this facility. Lately, it’s been a discussion of facial recognition. A few years ago the big issue revolved around Netflix recommendations.

The risk is that this kind of software can likely figure out, for example, whether you are gay, with varying levels of certainty. Depending on the data available, AI software can figure out all sorts of other information that we may or may not want it to know, or that we may not intend for it to understand.

When it comes to the ethics and adverse effects of artificial intelligence, it’s all too easy to throw our hands in the air and have excited discussions around the water cooler or over the dinner table. What we can’t do is simply make it go away. This concern is a problem that we must address.

Breakthrough: The Problem is its own Solution

Up to this point, my arguments may sound depressing. The good news is that the source of the problem is also the source of the solution.

If this kind of software can determine from data sets the factors (such as the presence of tumors) that we associate with a particular distinction (such as a cancer diagnosis), we can take these same algorithms and tell our software to ignore those factors.

If we don’t want to know this kind of information, we can simply instruct the software to ignore this type of result. We can then test to verify that our directives are working and that our other algorithms are not relying on the specified factors.

For instance, say we know that, as part of assessing the risk of delinquent payment on a mortgage, our algorithm can also determine gender, race, or sexual orientation. Rather than using this data, which would make the recommendation a wee bit racist, sexist, and bigoted, we could ask the software to ignore it when calculating a mortgage rate.
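
As a rough illustration of what “ask it to ignore said data” could look like in practice, here is a small Python sketch with hypothetical field names and a stand-in scoring rule: the protected attributes are stripped before the rate model ever sees the application, and a check verifies the directive held.

```python
# Sketch of excluding protected attributes from a mortgage rate recommendation.
# Field names and the scoring rule are hypothetical, for illustration only.
PROTECTED = {"gender", "race", "sexual_orientation"}

def scrub(applicant: dict) -> dict:
    """Return only the fields the rate model is allowed to use."""
    return {k: v for k, v in applicant.items() if k not in PROTECTED}

def recommend_rate(features: dict) -> float:
    # Stand-in scoring rule; a real model would be trained on historical data.
    base = 5.0
    base -= 0.5 * (features.get("credit_score", 600) - 600) / 100
    base += 1.0 * features.get("debt_to_income", 0.3)
    return round(base, 2)

applicant = {
    "credit_score": 720,
    "debt_to_income": 0.25,
    "income": 85_000,
    "race": "present in the raw data",      # ...but deliberately ignored
    "gender": "present in the raw data",    # ...but deliberately ignored
}

features = scrub(applicant)
# Verify the directive is working: no protected data reaches the model.
assert not PROTECTED & features.keys(), "directive violated: protected data in use"
print(recommend_rate(features))
```

The verification step matters as much as the scrubbing: it is what lets us demonstrate, rather than merely promise, that the specified factors were not used.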

In fact, we could go even further. Just as we have equal housing and equal employment legislation, we could legislate that if a set of factors can be used to discriminate, then software should be instructed to disallow combining those elements in a single algorithm.

Discussion: Let’s look at an analogy.

Generally speaking, US society legislates that methamphetamine is bad and people should not make it, but at the same time the recipe is known, and we can’t uninvent meth.

An unusual tactic is to publicize the formula and tell people not to mix the ingredients in their bathtub “accidentally.” If we find people preparing to combine the known ingredients, we can then, of course, take legal action.

For software, I’d recommend that we take similar steps and implement a set of rules. If and when we determine the possible adverse outcomes of our algorithms, we can require that users (business entities) not combine those pieces of data into a decision algorithm, making an exception, of course, for those doing actual constructive research into data-ethics issues.
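
A minimal sketch of what such a rule set might look like in code, with hypothetical field names and illustrative combinations only: the software refuses to run a decision algorithm whose inputs include a disallowed combination, unless the use is flagged as approved data-ethics research.

```python
# Sketch of a rule set that blocks disallowed combinations of data elements.
# The combinations listed here are illustrative, not a real regulation.
DISALLOWED_COMBINATIONS = [
    {"zip_code", "surname"},              # a well-known proxy pair for race
    {"first_name", "music_preferences"},  # illustrative only
]

def check_feature_set(features: set, research_exception: bool = False) -> list:
    """Return the disallowed combinations present in this algorithm's inputs."""
    if research_exception:
        return []   # approved data-ethics research may study these combinations
    return [combo for combo in DISALLOWED_COMBINATIONS if combo <= features]

violations = check_feature_set({"zip_code", "surname", "credit_score"})
if violations:
    print("blocked:", violations)   # the business entity may not run this algorithm
```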

The Result: Constructing and/or Legislating a Solution

Over time, our result would be the construction of a dataset of ethically sound, ethically vetted correlations that could be used to teach software what it is allowed to do. This learning would not happen overnight, but it also might not be as far down the line as we first assume.

The first step would be to create a standard data dictionary in which people and companies could share what data they use, similar to the elements on the chemical periodic table. From there we would be ready to look for the good and the bad kinds of discrimination. We can take the benefits of the good while removing the penalties of the bad.
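
To suggest what an entry in such a dictionary might look like, here is a small Python sketch. The element names, flags, and proxy annotations are hypothetical, not an existing standard.

```python
# Sketch of a shared "periodic table of data elements" entry format.
from dataclasses import dataclass

@dataclass
class DataElement:
    name: str                 # standard identifier shared across companies
    description: str
    sensitive: bool           # can it reveal a protected characteristic?
    proxy_for: tuple = ()     # characteristics it may indirectly expose

DICTIONARY = {
    "credit_score": DataElement("credit_score", "Bureau credit score", False),
    "zip_code": DataElement("zip_code", "Postal code of residence", True,
                            proxy_for=("race", "income")),
    "race": DataElement("race", "Self-reported race", True, proxy_for=("race",)),
}

# With a shared vocabulary like this, the "good" and "bad" kinds of
# discrimination can be audited against the same element names.
for element in DICTIONARY.values():
    print(element.name, "sensitive" if element.sensitive else "ok")
```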

This process might mean that some recommendation systems would have to ask whether they are allowed to utilize data that could be used to discriminate on an undesirable metric (like race). And it might mean that in some cases it would be illegal to combine specific pieces of data, such as in a mortgage rate calculation.

No matter what we choose to do, we can’t close Pandora’s box. It is open; the data exists, the algorithms exist, and we can’t make that go away. Our best bet is to put in the effort to teach software ethics, first by hard rules, and then hopefully let it figure some things out on its own. If Avinash Kaushik’s predictions are anywhere near accurate, maybe we can actually teach software to be better than humans at making ethical decisions. Only the future will tell!

If you’re curious about the subject of AI and Big Data, read more in my piece Predicting the Future.
