
Playing God: Why artificial intelligence is hopelessly biased – and always will be


It’s said that humans are created in God’s image. Should we fear AI created in our own?
Much has been said about the potential of artificial intelligence (AI) to transform many aspects of business and society for the better. In the opposite corner, science fiction has the doomsday narrative covered handily.
To ensure AI products function as their developers intend – and to avoid a HAL 9000- or Skynet-style scenario – the common narrative suggests that the data used in the machine learning (ML) process must be carefully curated, to minimise the chances of the product inheriting harmful attributes.
According to Richard Tomsett, AI Researcher at IBM Research Europe, “our AI systems are only as good as the data we put into them. As AI becomes increasingly ubiquitous in all aspects of our lives, ensuring we’re developing and training these systems with data that is fair, interpretable and unbiased is critical.”
Left unchecked, the influence of undetected bias could also expand rapidly as appetite for AI products accelerates, especially if the means of auditing underlying data sets remain inconsistent and unregulated.
However, while the issues that could arise from biased AI decision making – such as prejudicial recruitment or unjust incarceration – are clear, the problem itself is far from black and white.
Questions surrounding AI bias are impossible to disentangle from complex and wide-ranging issues such as the right to data privacy, gender and race politics, historical tradition and human nature – all of which must be unravelled and brought into consideration.
Meanwhile, questions over who is responsible for establishing the definition of bias and who is tasked with policing that standard (and then policing the police) serve to further muddy the waters.
The scale and complexity of the problem more than justifies doubts over the viability of the quest to cleanse AI of partiality, however noble it may be.
Algorithmic bias can be described as any instance in which discriminatory decisions are reached by an AI model that aspires to impartiality. Its causes lie primarily in prejudices (however minor) found within the vast data sets used to train ML models, which act as the fuel for decision making.
Biases underpinning AI decision making could have real-life consequences for both businesses and individuals, ranging from the trivial to the hugely significant.
For example, a model responsible for predicting demand for a particular product, but fed data relating to only a single demographic, could plausibly generate decisions that lead to the loss of vast sums in potential revenue.
Equally, from a human perspective, a program tasked with assessing requests for parole or generating quotes for life insurance plans could cause significant damage if skewed by an inherited prejudice against a certain minority group.
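One widely used first check for this kind of skew is to compare a model's decision rates across demographic groups, a measure often called demographic parity. The sketch below is a minimal illustration in Python; the scores, group labels and 0.5 cut-off are invented purely for the example, not drawn from any real system.

```python
import numpy as np

# Hypothetical model outputs: approval scores and a demographic label
# for eight applicants. A real audit would use a trained model's scores.
scores = np.array([0.81, 0.66, 0.62, 0.58, 0.71, 0.45, 0.38, 0.20])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

approved = scores >= 0.5   # the model's decision rule (0.5 cut-off assumed)

# Compare approval rates per group: a large gap is a red flag worth
# investigating, although it is not, on its own, proof of unfairness.
for g in np.unique(groups):
    rate = approved[groups == g].mean()
    print(f"group {g}: approval rate {rate:.0%}")
```

Run on the invented numbers above, group A is approved 100% of the time and group B only 25% – exactly the sort of disparity an audit of the underlying data and model would then need to explain.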
According to Jack Vernon, Senior Research Analyst at IDC, the discovery of bias within an AI product can, in some circumstances, render it completely unfit for purpose.
“Issues arise when algorithms derive biases that are problematic or unintentional. There are two usual sources of unwanted biases: data and the algorithm itself,” he told TechRadar Pro via email.
“Data issues are self-explanatory enough, in that if features of a data set used to train an algorithm have problematic underlying trends, there’s a strong chance the algorithm will pick up and reinforce these trends.”
“Algorithms can also develop their own unwanted biases by mistake… Famously, an algorithm for identifying polar bears and brown bears had to be discarded after it was discovered the algorithm based its classification on whether there was snow on the ground or not, and didn’t focus on the bear’s features at all.”
Vernon’s example illustrates the eccentric ways in which an algorithm can diverge from its intended purpose – and it’s this semi-autonomy that can pose a threat if a problem goes undiagnosed.
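The snow-versus-bear failure is a textbook spurious correlation: the training photos happen to pair the label with an irrelevant background feature, and the model learns the shortcut. The following sketch is a hypothetical reconstruction in Python using synthetic data and scikit-learn; the feature names and probabilities are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic labels: 1 = polar bear, 0 = brown bear (invented data).
label = rng.integers(0, 2, n)

# 'snow' is a spurious feature: it co-occurs with polar bears in ~98%
# of the training photos, but says nothing about the animal itself.
flip = rng.random(n) < 0.02
snow = np.where(flip, 1 - label, label).astype(float)

# 'fur_lightness' is the genuinely relevant feature, but it is noisier.
fur = label + rng.normal(0.0, 0.8, n)

X = np.column_stack([snow, fur])
clf = LogisticRegression().fit(X, label)

# The fitted weights reveal the shortcut: snow dominates the decision.
print("weight on snow:", clf.coef_[0][0])
print("weight on fur :", clf.coef_[0][1])

# A brown bear photographed on snowy ground is then misclassified.
print("brown bear on snow ->", clf.predict([[1.0, 0.0]])[0])
```

Because the snow feature predicts the label almost perfectly in training, the model leans on it and largely ignores the bear’s actual appearance – which is precisely why the classifier Vernon describes had to be discarded once the flaw came to light.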
The greatest issue with algorithmic bias is its tendency to compound already entrenched disadvantages. In other words, bias in an AI product is unlikely to result in a white-collar banker having their credit card application rejected erroneously, but may play a role in a member of another demographic (which has historically had a greater proportion of applications rejected) suffering the same indignity.
The consensus among the experts we consulted for this piece is that, in order to create the least prejudiced AI possible, a team made up of the most diverse group of individuals should take part in its creation, using data from the deepest and most varied range of sources.
The technology sector, however, has a long-standing and well-documented issue with diversity where both gender and race are concerned.
In the UK, only 22% of directors at technology firms are women – a proportion that has remained practically unchanged for the last two decades. Meanwhile, only 19% of the overall technology workforce is female, far short of the 49% share that women make up of the wider UK workforce.
Among big tech, meanwhile, the representation of minority groups has also seen little progress.
