Are You On The Wrong Side Of An Algorithm?
Research carried out in Australia by US data experts has highlighted the possible dangers of a future in which, by default, many of us could fall foul of unchecked big data and automated algorithmic decision-making.
Travels Yield Insights
After travelling around Australia to study the potential risks posed by algorithmic decisions affecting more and more parts of our lives, Dr Powles and Professor Frank Pasquale, an expert in big data and information law at the University of Maryland, highlighted some important points to bear in mind.
Do You Fit The Algorithm Maker’s Assumptions?
One important thing to remember about algorithms is that they are made by humans who bring their own assumptions, and that they rely on statistics. On this basis, the researchers concluded that statistical processes such as averaging mean that people who fit the assumptions made by an algorithm’s creators are treated fairly, while those who don’t fit, i.e. the ‘statistical outliers’, are in danger of being treated unfairly by default.
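To make the researchers’ point about averaging concrete, here is a minimal Python sketch. The incomes, the one-standard-deviation rule and the ‘deferred’ outcome are all invented for illustration; they are not taken from the research.

```python
# Hypothetical illustration: a toy rule auto-approves applicants whose
# declared income sits close to the population average and defers everyone
# else. The data, the rule and the outcomes are invented for this sketch.
from statistics import mean, stdev

incomes = [42_000, 45_000, 47_000, 50_000, 52_000, 55_000, 120_000, 9_000]

avg = mean(incomes)
spread = stdev(incomes)

def auto_decision(income: float) -> str:
    """Approve applicants within one standard deviation of the mean; the
    'statistical outliers' are pushed onto a slower path by default."""
    if abs(income - avg) <= spread:
        return "auto-approved"
    return "deferred for manual review"

for income in incomes:
    print(f"{income:>7}: {auto_decision(income)}")
```

Only the two applicants at the extremes end up deferred; nothing is necessarily wrong with their applications, they simply don’t fit the assumptions baked into the rule.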
Why Use Algorithms?
According to Professor Frank Pasquale, some of the main reasons why algorithms are used and sometimes favoured over human interaction are:
– Speed and scale. Algorithms allow you to do things quickly and to scale them to tens of thousands of people, which can also help users avoid queues.
– Efficiency and productivity. The push to expand the use of big data and AI is often framed in terms of efficiency or productivity, and algorithms contribute to both.
– Greater sharing and enabling re-use of data.
– Human preference. The Professor noted that there are many occasions where people prefer interacting with a computer system to dealing with what they perceive as uncaring or unfeeling bureaucrats. Younger people in particular are becoming more accustomed to, and often prefer, the seamless, frictionless ease of interacting with computers.
Problems
The Professor also noted that problems with relying on complicated algorithms can arise when they lack transparency, i.e. it is unclear exactly what factors people are being judged on. Consumer credit scores were given as an example: whether you are eligible for a mortgage can depend, in part, on secret algorithms used to assess how creditworthy you are.
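As a rough, hypothetical illustration of that transparency problem, consider a toy scoring function in which the applicant only ever sees the final decision, while the factors, weights and cut-off that produced it stay hidden. None of the numbers below come from any real credit model; they are assumptions made purely for the sketch.

```python
# Toy sketch of an opaque score: the factors and weights are hidden from the
# person being scored, who receives only the final outcome. All values here
# are invented for illustration.
HIDDEN_WEIGHTS = {
    "years_at_address": 12,
    "existing_credit_lines": -8,
    "recent_searches": -15,
    "income_thousands": 3,
}
APPROVAL_THRESHOLD = 200  # arbitrary cut-off, also unseen by the applicant

def credit_decision(applicant: dict) -> str:
    score = sum(HIDDEN_WEIGHTS[k] * applicant.get(k, 0) for k in HIDDEN_WEIGHTS)
    # The applicant receives only this string; nothing reveals which factor
    # tipped the score above or below the threshold.
    return "approved" if score >= APPROVAL_THRESHOLD else "declined"

print(credit_decision({
    "years_at_address": 2,
    "existing_credit_lines": 4,
    "recent_searches": 6,
    "income_thousands": 60,
}))  # prints "declined" with no explanation attached
```

The applicant learns only that they were declined, with no indication of which hidden factor pushed the score below the cut-off.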
Also, as Dr Powles pointed out, some computerised systems can become so complicated that even their designers don’t know how they work, and this can cause serious problems as regards transparency.
What Does This Mean For Your Business?
There is clearly a case for the value-adding use of algorithms and the products of big data, as they enable businesses to increase speed and achieve scale in processes such as searches and applications.
As Dr Powles points out in the findings of this research, however, it is vitally important that, where businesses and organisations use algorithms to make big decisions that affect people’s lives, there is fairness and transparency by default. Algorithms also need to be accountable to the people about whom they’re making decisions.
Another important point, this time from Professor Pasquale, is that automated decisions that cannot be explained simply to the people they affect will be alienating.
Businesses, therefore, need to take these factors into account when introducing algorithms into important business processes, e.g. screening job applicants and deciding whether or not to supply services to those who apply.
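To make Professor Pasquale’s explainability point concrete in a scenario like applicant screening, here is a minimal sketch of a decision returned together with plain-language reasons. The feature names, weights and cut-off are assumptions for illustration only, not a recommended scoring model.

```python
# A minimal sketch of 'explainable by default': the outcome is returned
# together with the contribution of each factor, in plain language, so the
# person affected can see what they were judged on. All values are invented.
from typing import NamedTuple

class Decision(NamedTuple):
    outcome: str
    reasons: list

WEIGHTS = {"relevant_experience_years": 10, "unexplained_gaps": -20}
CUTOFF = 30

def screen_applicant(profile: dict) -> Decision:
    contributions = {k: WEIGHTS[k] * profile.get(k, 0) for k in WEIGHTS}
    score = sum(contributions.values())
    outcome = "shortlisted" if score >= CUTOFF else "not shortlisted"
    reasons = [f"{k.replace('_', ' ')} contributed {v:+d} points"
               for k, v in contributions.items()]
    return Decision(outcome, reasons)

decision = screen_applicant({"relevant_experience_years": 4, "unexplained_gaps": 1})
print(decision.outcome)          # not shortlisted
for reason in decision.reasons:  # each factor's contribution, spelled out
    print(" -", reason)
```

The design choice is simply that every factor’s contribution is surfaced alongside the outcome, so the person affected can see, and potentially challenge, what they were judged on.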