Predicting the Future: The Big Data Quandary

The Role of Testing in Indeterminate Systems: Should the humans be held accountable?

Big data is a hot topic that introduces us to incredible possibilities and potentially terrible consequences.

Big data essentially means that engineers can harness and analyze traditionally unwieldy quantities of data and then create models that predict the future. This is significant for a variety of reasons, but primarily because accurate prediction of the future is worth a lot of money and has the potential to affect the lives of everyday citizens.

Good Business

On one level, big data allows us to essentially reinvent the future by letting software encourage individuals to do something new that they have not yet considered (and might never have), such as recommendations for viewing on Netflix or buying on Amazon. Big data can also provide daily efficiencies in the little things that make life better by saving us time or facilitating decision making. For businesses, big data can give deeper meaning to credit scores, validate mortgage rates, or guide an airline as to how much it should overbook its planes or vary its fares.

Efficiency

Optimization algorithms based on data are even more powerful when we consider that their effectiveness is reliably better than that of humans attempting to make the same types of decisions and predictions. In addition to the computing power of big data, one advantage algorithms have over human predictions is that they are efficient. Algorithms do not get sidelined or distracted by bias, and so avoid getting hung up on the ethical judgments that humans are required to consider, either by law or by social code. Unfortunately, this does not mean that algorithms will avoid making predictions that carry ethical consequences.

Should algorithms be held to the same moral standards as people?

Optimization algorithms look for correlations, and any correlation that improves a prediction may be used. Some correlations will inevitably incorporate gender, race, age, orientation, geographic location, or proxies for those values. Variables of this sort are understandably subject to ethical considerations, and this is where the science of big data gets awkward. A system that looks at a user’s purchasing history might end up assigning a large weight (significance) to certain merchants. If those merchants happen to be hair care providers, there is a good chance the algorithm has found an efficient proxy for race or gender. Similarly, the identification of certain types of specialty grocers or specialty stores, personal care vendors, or clothing stores might reveal other potentially delicate relationships.
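
One way to make this concrete is to check whether a withheld attribute can be recovered from the remaining features. The sketch below does this with scikit-learn; the DataFrame, the column names, and the choice of a logistic regression are illustrative assumptions, not a prescribed method.

```python
# A minimal sketch of a proxy check: can the withheld protected attribute be
# predicted from the other features? All column names here are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_strength(df: pd.DataFrame, protected: str = "gender") -> float:
    """Cross-validated AUC for predicting the protected attribute from the
    remaining features: ~0.5 means little signal, while values approaching
    1.0 suggest those features act as an efficient proxy."""
    X = pd.get_dummies(df.drop(columns=[protected]))  # encode categoricals
    y = df[protected]
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
```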

As these rules are buried deep inside a database, it is hard to determine when the algorithms have managed to build a racist, sexist, or anything-ist system. To be fair, neither the system nor its developers, nor even the business analyst in charge of the project, had to make any conscious effort for an algorithm to identify and use these types of correlations. As a society, we implicitly know that many of these patterns exist. We know that women get paid less than men for the same work; we know that women and men have different shopping behaviors when it comes to clothing; we know that the incarceration rate for minorities is higher; and we know that there will be differences in shopping behaviors between different populations based on age, race, sex, and so on.

Can algorithms be held to the same moral standards as their human developers, or should the developers be held responsible for the outcomes? If the answer to either or both of these questions is “yes,” then how can this be achieved both effectively and ethically? When ethically questionable patterns are identified by an algorithm, we need to establish an appropriate response. For example, would it be acceptable to suggest a lower salary to a female candidate than to a male candidate when the software has determined that the female candidate will still accept the job at the lower rate? Even if the software does not know about gender, it may determine it from any number of proxies. One could argue that offering the lower salary isn’t a human judgment; it is simply sound business logic as determined by the data. Despite the “logic” behind the data (and the fact that businesses make this kind of decision all the time), hopefully your moral answer to the question is still “no, it is not okay to offer the female candidate a lower suggested salary.”

Immoral or Amoral: What is a moral being to do?

If we label the behavior of the algorithm in human terms, we’d say that it was acting immorally; however, the algorithm is actually amoral: it does not comprehend morality. To comprehend and participate in morality (for now) we need to be human. If we use big data, the software will find patterns, and some of these patterns will present ethical dilemmas and the potential for abuse. We know that even if we withhold certain types of information from the system (such as avoiding direct input of race, gender, age, etc.), the system may still find proxies for that data. And the resulting predictions may continue to create ethical conflicts.

Testing for Moral Conflicts

There are ways to test the software with controlled data and determine whether it is developing gender or race biases. For instance, we could create simulations of individuals that we see as equivalent for the purpose of a question and then test to see how the software evaluates them as individuals. Take two job candidates with the same general data package but vary one segment of the data, say spending habits. We then look at the results and see how the software treated the variance in the data. If we see that the software is developing an undesired sensitivity to certain data, we can go back to the drawing board and make an adjustment, such as removing that data from the model.
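
A minimal sketch of such a paired-candidate check might look like the following; the predict_salary interface, the field names, and the tolerance are hypothetical stand-ins for whatever the system under test actually exposes.

```python
# Sketch of a paired-candidate sensitivity check. `predict_salary` stands in
# for the system's prediction interface; field names and the tolerance below
# are illustrative assumptions.
def sensitivity_to(predict_salary, base_candidate: dict,
                   field: str, value_a, value_b) -> float:
    """Score two candidates identical except for one field and return the
    difference in predictions."""
    candidate_a = {**base_candidate, field: value_a}
    candidate_b = {**base_candidate, field: value_b}
    return predict_salary(candidate_a) - predict_salary(candidate_b)

# Vary only the spending-habits segment between otherwise equivalent candidates:
# gap = sensitivity_to(model.predict_salary, base_candidate,
#                      "top_merchant_category", "hair_care", "hardware")
# assert abs(gap) < TOLERANCE  # tolerance set by business and ethics policy
```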

In the end, an amoral system will find the optimal short-term solution; however, as history has shown, despite humanity’s tendency towards the occasional atrocity, we are moral critters. Indeed, modern laws, rules, and regulations generally exist because, as a society, we see that the benefits of morality outweigh the costs. Another way to look at the issue is to consider that, for the same reasons we teach morality to our children, sooner or later we will likely also have to impart morality to our software. We can teach software to operate according to rules of morality, and we can also test for compliance, thereby ensuring that our software abides by society’s rules.
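
Such compliance checks can live in the automated test suites QA teams already maintain. The sketch below assumes synthetic populations of equivalent candidates and a hypothetical model.predict interface; the tolerance is an illustrative policy choice, not an established standard.

```python
# Sketch of a compliance check over synthetic populations; `model`, `predict`,
# and the tolerance are hypothetical assumptions for illustration.
import statistics

def equivalent_groups_score_alike(model, group_a, group_b,
                                  tolerance: float = 0.05) -> bool:
    """Return True when two groups of equivalent synthetic candidates receive
    similar average scores from the model."""
    mean_a = statistics.mean(model.predict(c) for c in group_a)
    mean_b = statistics.mean(model.predict(c) for c in group_b)
    return abs(mean_a - mean_b) <= tolerance * max(abs(mean_a), abs(mean_b))

# In a test suite (pytest, for example) this becomes a gating assertion:
# assert equivalent_groups_score_alike(model, simulated_women, simulated_men)
```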

Responsibility: So why should you care?

Whose responsibility is it (or will it be) to make sure this happens? My prediction is that, given the aforementioned conundrum, in the near future (at most the next decade) we will see the appearance of a legal requirement to verify that an algorithm is acting, and will continue to act, morally. And, of course, it is highly probable that this type of quality assurance will be handed to testing and QA. It will be our responsibility to verify the morality of indeterminate algorithms. So for those of us working in QA, it pays to be prepared. And for everyone else, it pays to be aware. Never assume that a natively amoral piece of technology will be ethical; verify that it is.

If you enjoyed this piece, please discuss and share!

Click here to learn more about Bas and Possum Labs.
