A year or two ago, how many would have known what the word algorithm meant? Now it is a word in common use. It crops up whenever automation or artificial intelligence is mentioned.
The term ‘automation’ once conjured up images of robots doing manual tasks; now it encompasses intellectual or cognitive tasks being undertaken automatically. We are told that already the majority of financial transactions are carried out not with pencil and paper and calculators, but via algorithms.
The images of robots scurrying round the factory floor building motor vehicles or fulfilling customer orders in a vast warehouse, as happens in the Amazon organization, are easy enough to envisage and understand, although the programming behind these activities would be a mystery to most of us.
How many understand how an algorithm works, or even what it is?
Although the concept of an algorithm dates back to the 9th Century, it has come into its own during this century as we seek to automate a multitude of tasks previously done manually.
A simple definition of an algorithm is a self-contained sequence of actions to be performed, beginning with inputs and finishing with outputs.
In computer parlance, an algorithm is a well-defined procedure, a sequence of unambiguous instructions that allows a computer to solve a problem. Algorithms can perform calculations, data processing and automated reasoning tasks.
Wikipedia provides an example of a simple algorithm in mathematics – a set of instructions to find the largest number in a list of numbers arranged in random order. Finding the solution requires looking at every number in the list. This simple algorithm, stated in words, reads:
- If there are no numbers in the set then there is no highest number.
- Assume the first number in the set is the largest number in the set.
- For each remaining number in the set: if this number is larger than the current largest number, consider this number to be the largest number in the set.
- When there are no numbers left in the set to iterate over, consider the current largest number to be the largest number of the set.
For computer processing, those instructions are written in a computer language, for example using ‘if – then – else’ propositions: IF 'such and such is so’ THEN 'do this’, ELSE 'do that'.
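The four steps listed above can be sketched as a short piece of Python, one common computer language. The function name and the sample list are illustrative only, not part of the Wikipedia example:

```python
def largest(numbers):
    """Return the largest number in a list, or None if the list is empty."""
    if not numbers:                 # no numbers in the set: no highest number
        return None
    largest_so_far = numbers[0]     # assume the first number is the largest
    for n in numbers[1:]:           # examine each remaining number in turn
        if n > largest_so_far:      # larger than the current largest?
            largest_so_far = n      # then consider it the largest so far
    return largest_so_far           # nothing left to look at: this is the answer

print(largest([3, 41, 7, 12]))     # prints 41
```

Each line mirrors one of the worded instructions: the emptiness check, the initial assumption, the loop over the remaining numbers, and the final answer once the loop ends.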
Such straightforward mathematical algorithms seem harmless enough. An input is processed and the output is reliably produced.
Now, though, these mathematical calculations are used in commerce and finance, for example in stock market transactions, where the computer programs of stockbrokers compete with one another to accomplish the most advantageous transactions for their clients. There are stories of stockbroking firms using faster and faster computers and building faster and faster transmission lines to the stock exchange to outdo their competitors. A transmission advantage of even a few thousandths of a second can make all the difference.
At times the speed and number of such competing automated instructions have brought the stock market to a halt – the so-called ‘flash crash’.
We need, though, to get away from the notion that mathematical algorithms are pure and free from bias simply because they use the science of mathematics. Cathy O’Neil, a Harvard PhD and data scientist, tells us why in her recently published book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, published by Crown. Her experience working for a hedge fund at the time of the global financial crisis informs her views and her writings.
In an article in The Guardian last October titled ‘Weapons of Math Destruction: Cathy O'Neil adds up the damage of algorithms’, Mona Chalabi points out that algorithms that started as rule-based processes for solving mathematical problems are now being applied to more and more areas of our lives. She continues:
This idea is at the heart of O’Neil’s thinking on why algorithms can be so harmful. In theory, mathematics is neutral – two plus two equals four regardless of what anyone wishes the answer were. But in practice, mathematical algorithms can be formulated and tweaked based on powerful interests.
O’Neil saw those interests first hand when she was a quantitative analyst on Wall Street. Starting in 2007, she spent four years in finance, two of them working for a hedge fund. There she saw the use of weapons of math destruction, a term O’Neil uses to describe “algorithms that are important, secret and destructive”.
The algorithms that ultimately caused the financial crisis met all of those criteria – they affected large numbers of people, were entirely opaque and destroyed lives. O’Neil left the hedge fund: “I left disgusted by finance because I thought of it as a rigged system and it was rigged for the insiders; I was ashamed by that – as a mathematician I love math and I think math is a tool for good.”
According to O’Neil, algorithms can be used to reinforce discrimination and widen inequality, by ‘using people’s fear and trust of mathematics to prevent them from asking questions.’
This can occur when aspects of life other than objective mathematical propositions are the inputs to the algorithm.
Her book explains how algorithms can do this – such as the ones used to measure the likelihood a convicted person will relapse into criminal behaviour: ‘When someone is classed as “high risk”, they’re more likely to get a longer sentence and find it harder to find a job when they eventually do get out. That person is then more likely to commit another crime, and so the model looks like it got it right.’
O’Neil tells us that ’…contrary to popular opinion that algorithms are purely objective, “models are opinions embedded in mathematics”. Think Trump is hopeless? That will affect your calculations. Think black American men are all criminal thugs? That affects the models being used in the criminal justice system.’
But O’Neil tells us that sometimes it’s hard for non-statisticians to know which questions to ask. Her advice is to be persistent. ‘People should feel more entitled to push back and ask for evidence, but they seem to fold a little too quickly when they’re told that it’s complicated.’ She adds: ‘If someone feels that some formula has affected their lives, at the very least they should be asking, how do you know that this is legal? That it isn’t discriminatory?’
Algorithms have the capability to sort through vast amounts of data – so-called big data. But what data should algorithms be sorting?
Writing in The Guardian, in an article titled ‘How algorithms rule the world’, Leo Hickman says: ‘From dating websites and City trading floors, through to online retailing and internet searches (Google's search algorithm is now a more closely guarded commercial secret than the recipe for Coca-Cola), algorithms are increasingly determining our collective futures. Bank approvals, store cards, job matches and more all run on similar principles. The algorithm is the god from the machine powering them all, for good or ill.’
We are becoming aware that our Internet browsing history, our Google searches, our contributions to Facebook, Twitter and other social media are being monitored and fed back to us in the form of suggestions about what we might buy or eat or how we should vote.
The values and the objectives of those who design algorithms are reflected in the data collected and the algorithms used to process the data.
In the same Guardian article, ‘How algorithms rule the world’, Viktor Mayer-Schönberger, professor of internet governance and regulation at the Oxford Internet Institute, warns against humans seeing causation when an algorithm identifies a correlation in vast swaths of data.
He cautions us about: ‘… the possibility of using big-data predictions about people to judge and punish them even before they've acted. Doing this negates ideas of fairness, justice and free will. In addition to privacy and propensity, there is a third danger. We risk falling victim to a dictatorship of data, whereby we fetishise the information, the output of our analyses, and end up misusing it. Handled responsibly, big data is a useful tool of rational decision-making. Wielded unwisely, it can become an instrument of the powerful, who may turn it into a source of repression, either by simply frustrating customers and employees or, worse, by harming citizens.’
Mayer-Schönberger presents two very different real-life scenarios to illustrate how algorithms are being used. He explains how the analytics team working for US retailer Target can now calculate whether a woman is pregnant and, if so, when she is due to give birth: ‘They noticed that these women bought lots of unscented lotion at around the third month of pregnancy, and that a few weeks later they tended to purchase supplements such as magnesium, calcium and zinc. The team ultimately uncovered around two-dozen products that, used as proxies, enabled the company to calculate a “pregnancy prediction” score for every customer who paid with a credit card or used a loyalty card or mailed coupons. The correlations even let the retailer estimate the due date within a narrow range, so it could send relevant coupons for each stage of the pregnancy.’
‘Harmless targeting, some might argue. But what happens, as has already reportedly occurred, when a father is mistakenly sent nappy discount vouchers instead of his teenage daughter whom a retailer has identified is pregnant before her own father knows?’
Mayer-Schönberger's second example of our reliance upon algorithms throws up even more potential dilemmas and pitfalls: ‘Parole boards in more than half of all US states use predictions founded on data analysis as a factor in deciding whether to release somebody from prison or to keep him incarcerated.’
Awareness of the useful potential of algorithms is valuable, but so is awareness of their propensity for doing harm in the wrong hands or for the wrong reasons. But how many of our citizens are aware?
Will we all awake one day and find that our lives are being controlled secretly by forces whose self-interest, not ours, is being served? By forces that want us to buy in a certain way, transact our business in a certain way, view cultural and travel offerings in a certain way, vote in a certain way, behave in a certain way, and even think in a certain way? By forces that selectively benefit those at the top and penalize those at the bottom? By forces that increase the inequality that afflicts the world today?
Does that sound like George Orwell’s Nineteen Eighty-Four?
We ought to be afraid. We ought to resist strongly our lives being taken over and controlled by algorithms.
But who will listen? Are our politicians aware of the threat of algorithms? More significantly, are they capable of halting this surreptitious takeover of our world?
Do you use Google, Bing or Yahoo as online search engines? Do you respond to emailed requests to take a quiz or respond to an online opinion poll? Are you attracted by offers of a prize if you respond to a survey? Do you enter contests that promise alluring rewards? Do you use YouTube or Netflix or Stan? Have you used iTunes or Google Play or Amazon online to order items?
Have you noticed that your online searches often mysteriously throw up the very things that interest you?
If so, chances are that you may already be in thrall to the algorithm creators, already slaves to the algorithm.
Are algorithms ruling your world?
What is your opinion?
Do you feel you are being manipulated through your Internet searches?
Have you had any troublesome experiences using the Internet?
Let us know in comments below.