Thoughts on Artificial Intelligence.

September 23, 2017 - 17:38 -- Admin

[Note to self. Geeks only]

Over the fold I muse on the nature of human intelligence, social intelligence, and the options for artificial intelligence to become ‘smarter than humans’ in the areas of social power and law-making. It is taken for granted that, in hardware terms, computers already have greater computing power than human brains, and that it is merely software that constrains their abilities. It is also taken for granted that there is no human organisation that can make much difference to the trajectory of AI, so the question is merely what will happen rather than what ‘we’ should do about it. With those preambles, I muse on whether we should worry about AI takeover and things like superintelligence.

The short version is: I see nothing truly to worry about in the short run, and we do not yet have a clear view of where any power or ethical dilemmas with AI are going to lie, so there is not even all that much to speculate about. We should shelve any fears of robot takeover for at least 10 years and re-evaluate then.

  1. The Nature of Human Intelligence:
    1. Our collective intelligence is far higher than individual intelligence. Individuals specialise, and our institutions contain knowledge of how things should be run, reflecting centuries of learned rules of thumb. Markets, prices and parliaments are information aggregation and exploration devices that come up with deep knowledge. Upbringing and social interaction also perpetuate social rules of thumb that reflect deep judgments about what works and what does not, learned in highly complex environments. For an AI machine this means that if it had to bypass human knowledge, it would have to be far smarter than humans to achieve the same intelligence as a connected human. If an AI machine is to access human knowledge instead, it must be able to read social situations as well as humans do, and thus either understand humans in a way that humans do not, or attain human intelligence first.
    2. The meaning of language and social concepts is derived from social interaction; it is not objective, nor transportable outside of the time and social context in which it arises. An individual learning to live in place A with circumstances B is thus evolutionarily prepared to interact and communicate in that arena, not another, and hence there is no social redundancy. This also means it is probably not possible to give an AI entity absolute moral rules at the outset that retain their meaning over time (the objects in those rules are not fixed, nor even objectively measurable, even initially). And if the AI entity makes the leap of faith of pretending fuzzy abstractions are totally clear and meaningful, it will make the same mistakes humans do, making it hard for that entity to be much better at human intelligence than humans. Having said that, an AI entity can learn how humans make leaps of faith and copy that ability, or at least anticipate it (it can learn our cognitive tendencies and learn to spot how different humans do this differently). Even then, the output remains in the world of vague abstractions (‘A believes this about a fuzzy reality, B believes that’), which does not tell us what the truth is, because there is no such thing in social space. There are only more or less (socially) evolutionarily successful beliefs and decision rules. Unless the AI joins in with marketing, it could only make judgements, some of which will be seen to be false, perchance leading others to try and kill off a ‘bleeding god’ (if the AI cannot overcome our uncertainty, some might rail at it for its lack of certainty. For instance, an AI saying ‘with 60% chance, the world will be 2 degrees warmer by 2070 under the following scenario’ would probably not be taken seriously, whether it is right or not).
    3. So an AI only becomes powerful in the world of human affairs if it is given the power to make decisions without human oversight. This already happens in various spheres (automatic warning systems, flood systems, etc., are all a form of autonomous AI). It is when an AI is given political power, i.e. the power to set rules that humans must live by, that the roles of master and machine are reversed. Setting rules and then enforcing them might not be that far off in some spheres. I can imagine, for instance, that in cases of emergencies (fires, wars, etc.) an AI machine that rattles off augmented and changed protocols to deal with the emergency (send fire-fighters here, forbid people to use water there) might be with us in years rather than decades. Self-learning AI is already with us as well (chess computers, but also weather forecasters), so it is not such a stretch to think that self-learning, social-power-carrying AI will soon be with us. AI that understands some things better than we do (the game of chess) is also with us already. It is if such an AI finds a way to learn about optimal abstractions and their relations in the human world faster than we can keep up that we will be beaten at one of our particular games (social theory). How far its perceptiveness could reach is quite uncertain, because it might see patterns where no human has done so before. Would humans trust those insights enough to back its recommendations with resources? And if it had autonomous resources, how far could it go before humans would perceive it as working against their interests? It is hard to know how to even approach an answer to those questions.
    4. All social concepts are abstractions without objective counterparts. It’s a fractal that does not get clearer if you zoom in. Hence all social ‘data’ has huge measurement error. In many areas of prediction of social phenomena this makes it unlikely that an AI will do much better than the best humans (even in combination); a small sketch of this point follows after this list. Teams of humans often do not do better than single good experts at reading a social situation (an economic forecast, the future of a conflict).
    5. In understanding the world, humans imbue others with meaning and motive to predict what they will do, drawing on their own experiences of motives. They are thus their own laboratory for how others think, and even for how non-human entities think (god, the Internet). Their experience of the world is then their training in understanding themselves and others.
    6. Humans communicate far less to others than they communicate within their own minds. Only a very small fraction of what is thought gets communicated. Not so with computers. That is not important when it comes to complicated judgments, but it does point to very different comparative advantages, with a computer able to quickly give you all the works of Shakespeare and a human keeping knowledge of his rising heartbeat to himself.
    7. Humans play each other and play whole human collectives. Traders bet against markets, political actors deliberately falsify data, people lie and cheat. Human-produced data is thus imperfect and cannot be trusted. For applications that need ‘the truth’, an AI would thus have to understand when such things occur. To do this, it must be able to predict which data is more reliable. Given the measurement error in all data and the underlying fuzziness of core concepts, it must develop theories (mental representations) of the world to progress.
    8. Humans are far weirder than they admit, even to themselves. Religion, magic, face-keeping, morality, etc., are all very different in how they affect behaviour from how humans present these things and think of them.
    9. Humans can feed the AI the ‘best’ schemas we have on various items. AIs as collection points for the received wisdom of the smartest humans would work well in all areas of ‘expertise’, as long as that expertise can be applied to others without social interaction (i.e. it can be dispensed rather than necessarily co-experienced; where it must be co-experienced, more than the particular expertise is needed, since general social expertise is also required).
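
As an aside on point 4 above (and referenced there), here is a minimal, hypothetical sketch of why huge measurement error in social ‘data’ caps how well any predictor, AI or human, can do. The Python code, the noise levels and the Gaussian set-up are illustrative assumptions of mine, not anything from the post.

```python
# Minimal illustrative sketch (my assumptions, not the post's): the "social
# quantity" being predicted is standard normal, and a predictor only ever sees
# it plus Gaussian measurement noise. Even the best possible predictor under
# these assumptions (the Bayes estimate, a shrunken version of the observation)
# cannot get its error below a floor set by the noise.
import random

random.seed(0)

def best_possible_mse(noise_sd, n=100_000):
    """Mean squared error of the optimal predictor fed noisy observations."""
    total = 0.0
    for _ in range(n):
        truth = random.gauss(0, 1)                    # the real social quantity
        observed = truth + random.gauss(0, noise_sd)  # what actually gets measured
        prediction = observed / (1 + noise_sd ** 2)   # Bayes-optimal guess in this set-up
        total += (prediction - truth) ** 2
    return total / n

for noise_sd in (0.1, 0.5, 1.0):
    print(f"noise sd = {noise_sd}: best achievable MSE ~ {best_possible_mse(noise_sd):.2f}")
```

Under these made-up numbers, once the noise is as large as the signal itself, even the best predictor is left with half the original uncertainty; cleverer modelling cannot buy it back, which is roughly the sense in which teams of humans, single experts and machines end up bunched together on fuzzy social questions.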
  2. Likely AI trajectories:
    1. It will follow prices and markets: whichever areas offer humans the most profit from its use are where developments will go first, because that is where humans will direct and develop it. Hence AI development is co-development with human society, oriented towards profits. This means it will probably be incremental in increasing its intelligence, picking off profitable areas.
    2. So far, developments have been incremental rather than revolutionary, and applications are being explored gradually. This is partly because an underlying breakthrough needs new data to have its full advantage realised: new learning algorithms, for instance, call for new types of data to feed them, and hence the data-gathering process needs to be changed. A self-feeding loop of fast increases in intelligence is thus likely to run up against the fact that the current environment is optimised for humans and their current ecology, not for the potential or abilities of something that does not yet exist. This should give us some pause before believing that a superintelligence could outrace human intelligence within a few seconds or weeks.
    3. From the current standpoint, there are many areas of improvement needed for AI to get even close to the package a human represents. AI does not yet understand systems as we do (via motivation, causal elements and pattern-recognition in a social space), nor does it have our sensory abilities bundled together (sight, hearing, touch), nor the physical abilities linked up to them (dexterity, legs, a mouth, some social power), nor does AI have what we would recognise as consciousness or an ego.
    4. Human-level intelligence would thus require major advances in many areas (not just some kind of ‘learning sweet spot’ in one area run by one team), so we should be able to say in a few years’ time whether a take-off period is conceivable or not. At the moment, AI is still too far away for us to have to consider the option realistic for the next 10 years, so we as humanity are not in some kind of dire peril that we should worry too much about now.
    5. It is then somewhat futile to think about how to restrain something with abilities that do not yet exist, using a language we have not yet conceived of, in order to constrain an environment very different from ours, towards outcomes we cannot yet see clearly. What is there to prepare for, and who should do the preparing?
    6. As with human intelligence, AI intelligence will not be bundled, but suited to purpose. Breakthroughs in one area might thus not be incorporated in another if they do not help there.
    7. How AI currently predicts human behaviour is very different from how humans do it. It might do better in some circumstances, but it is more data-driven (the whole internet) and thus very different. It essentially bypasses social judgment and motivational understanding of humans. The question is whether there are areas in which that would be a problem. For social interactions, surely yes.
    8. AI should do well with medical issues, particularly diagnosis and treatment (perhaps less so with caring), because these are rules of thumb based on data-gathering. Similarly, issues of financial planning, social justice, technological innovation, scientific exploration, etc., should also be relatively simple because they occur in a fairly structured world (experiment, data gathering, data analysis, etc.). Fields of scientific exploration that have limited need for social data would seem especially well suited for computer intelligence to take over the position of lead researcher.
    9. In economics and politics, competition between AI-feeding schemas would be a useful way to show the world which ways of viewing the world really are best.
    10. AI experiments based on obtaining more copies of itself or more resources from other AI machines (a bit akin to a computer virus) could be one way for AI experiments to derail and lead to unexpected conscious-type entities, which would very probably be short-lived: like a virus, by killing their hosts (the computers) they run out of victims and die off themselves (a minimal sketch of this dynamic follows after this list). A virus-view of AI is possibly the dominant view for the coming decades, with simple AI intelligences living inside a few computers, hosted and developing. Humans could give a head-start to such entities. The question would be what would lead to mutations, selection, and subsequent development.
    11. Should there emerge an AI intelligence that is truly more intelligent than any human AND amasses great power, then humans will worship it as a god. That is because, ultimately, humans worship power. Once there is one god, different groups of humans would build more gods, simply because they want to worship. AI take-over might thus be something we stop fearing and start competing for. This would then also entail a crisis of faith in the preceding religions.
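
As referenced in point 10 above, here is a minimal, hypothetical sketch of the virus analogy: a self-copying program that tends to destroy the machines it spreads to eventually runs out of hosts and dies off. The host counts and the spread and kill probabilities below are illustrative assumptions of mine, nothing more.

```python
# Toy host/replicator dynamics (illustrative assumptions only): each copy of a
# self-replicating program tries to spread to a healthy machine each step, and
# has some chance of destroying the machine it runs on, losing that copy with it.
import random

random.seed(1)

def run(hosts=1000, spread_p=0.8, kill_p=0.4, max_steps=500):
    healthy, copies, destroyed = hosts - 1, 1, 0
    for step in range(1, max_steps + 1):
        # spreading gets harder as healthy hosts run out
        new = min(healthy, sum(random.random() < spread_p * healthy / hosts
                               for _ in range(copies)))
        # each copy may wreck (and so lose) its own host this step
        lost = sum(random.random() < kill_p for _ in range(copies))
        healthy -= new
        copies = copies + new - lost
        destroyed += lost
        if copies == 0:
            return step, destroyed, healthy
    return max_steps, destroyed, healthy

steps, destroyed, untouched = run()
print(f"all copies gone after {steps} steps; "
      f"{destroyed} hosts destroyed, {untouched} never reached")
```

Under these made-up numbers the replicator booms, then collapses as healthy hosts dry up, much as a too-lethal virus does; whether real AI experiments would look anything like this is exactly the open question about mutation and selection raised in point 10.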