By: Ayman Saeed, Integrated Cyber Defense Architect. Reposted from LinkedIn blog with permission.
Highly marketed projects like Sophia and Aibo try to convince us that true human-like AI (AGI) and even conscious AI (ASI) are just around the corner, mixing truth with a bit of falsehood to make it shiny and plausible. In reality, we're not even close.
Make no mistake: Artificial Intelligence is real. It's a new, super-effective way of automating business processes and performing routine tasks far quicker than humans can. So where is the confusion?
In this article I will try to separate the hype from reality, eating the elephant one bite at a time, to cover the whole philosophical aspect of Artificial Intelligence in this one article.
Biological Intelligence Modeling
Artificial Intelligence has been widely defined as the ability of a digital machine to perform tasks commonly associated with biologically intelligent beings. I have created the conceptual diagram below to provide a high-level mapping of the three AI tiers to biological "Knowledge," "Intelligence," and "Consciousness."
- Artificial Narrow Intelligence (ANI) or Weak AI: outperforms humans in a very narrowly defined task. Unlike general intelligence, narrow intelligence focuses on a single subset of abilities; any software that uses machine learning to make decisions can generally be considered narrow AI.
- Artificial General Intelligence (AGI) or Strong AI: performs the same intellectual tasks as a human, to the same standard, with the ability to learn, reason, plan, and communicate in natural language. AGI doesn't currently exist.
- Artificial Super Intelligence (ASI) or Conscious AI: outperforms human intelligence through self-improvement unconstrained by humans' genetic limitations. ASI doesn't exist and, in my view, never will, because we can't build a machine that mimics something we cannot even describe.
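To make the ANI tier concrete, here is a minimal sketch of a "narrow" learner: a tiny perceptron (all names illustrative, not from the article) that learns exactly one task, the logical OR function, and nothing else. That single-task scope is what makes it narrow AI.

```python
# A minimal "narrow AI": a perceptron that learns one tiny task
# (the logical OR function) and nothing beyond it.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a single linearly separable task."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - pred
            # Nudge connection strengths toward the target output
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# The OR function is the perceptron's entire "world"
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```

The learner is superhumanly fast and tireless at its one task, yet it has no ability whatsoever outside it, which is precisely the ANI/AGI distinction drawn above.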
Can we fully describe what biological intelligence and consciousness are? In fact, the answer is no, we can't.
Biological intelligence and consciousness resist definition, classification, and clear distinction; they don't fit within the boundaries of our conclusions about what they exactly are.
So let's try to scratch the surface of a complex abstract concept, by first "learning" the basics.
How Humans Learn
Learning is the process of acquiring new knowledge, skills, or values. The input to learning is data: we use our sensory system to receive data and our learning system to model its relations and patterns into knowledge, which we then store in memory.
The brain is made up of tens of billions of neurons. Each neuron connects with many other neurons through synapses: trillions of tiny structures that provide an electrical and chemical junction between neurons, outnumbering the stars in the Milky Way. Consider a child learning to ride a bike: it's challenging to keep balance at first, yet soon the child masters it. With practice, the brain sends "bike riding" messages along specific pathways of neurons over and over, forming new connections. In fact, the structure of our brains changes every time we learn, as well as whenever we have a new thought or memory.
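The "practice strengthens pathways" idea has a crude computational analogue in Hebbian learning, where a connection's weight grows each time the two units it joins are active together. A toy sketch (the pattern and names are hypothetical, for illustration only):

```python
# Toy Hebbian rule: "neurons that fire together wire together".
# Each repetition of an activity pattern strengthens the weight
# between co-active units, loosely like practicing a skill.

def hebbian_weights(patterns, lr=0.1):
    n = len(patterns[0])
    # weights[i][j] models the synapse strength between units i and j
    weights = [[0.0] * n for _ in range(n)]
    for p in patterns:                # each "practice session"
        for i in range(n):
            for j in range(n):
                if i != j:
                    weights[i][j] += lr * p[i] * p[j]
    return weights

# Repeating the same "bike riding" activity pattern ten times
ride = [1, 1, 0]                      # units 0 and 1 fire together
w = hebbian_weights([ride] * 10)
print(round(w[0][1], 2))              # practiced pathway: 1.0
print(round(w[0][2], 2))              # unused pathway: 0.0
```

This is of course nothing like a real synapse; it only illustrates the structural point that repetition physically changes connection strengths.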
A massive, heavily funded effort called the Human Connectome Project aims to compile the neural connection data of our brains.
Now, is all knowledge biologically learned? In other words, can we know something without first learning it?
Instincts and Innate Knowledge
So the answer is yes: we actually do know things that we never learned. Adding to the mystery, even biological agents like viruses and micro-organisms like bacteria behave as if they know things without having a learning system at all.
Instinct is behaviour performed without prior learning, and it's not random behaviour. Newborn babies know how to use their lips, tongues, and the roofs of their mouths to suck milk. Hatchling sea turtles emerge from their shells knowing how to navigate toward the water. Phage viruses listen to messages from their relatives when deciding how to attack their hosts. There are hundreds of other examples of knowledge without learning.
So where does instinct, or innate knowledge, come from? There is no answer that we can validate under a laboratory microscope.
No Intelligence without Knowledge
Knowledge is generally thought of as know-how: the result of processing information (learning) that gives us the capacity to judge or take an informed action.
Knowledge of how to talk, walk, ride a bike, recognize the photos and voices of people we know, solve a mathematical problem, fly an aircraft, and so on.
Now, how do we biologically store the knowledge that we acquire? Theories supported by some experimental evidence suggest that knowledge is stored as biophysical changes in the brain called "engrams". One experiment showed that rats trained to navigate a maze had stored a highly distributed representation of the maze in their brains; repeated attempts to remove portions of their brains to delete this piece of knowledge were unsuccessful.
So knowledge of a specific thing can be measured, physically described, and modelled. But what about knowledge of everything?
The ability to learn about everything is something we do, yet we poorly understand how. How can the neural network in a child's brain process all types of data and build knowledge models for everything? We know that curiosity (another instinctive behavior that we don't understand) is the driver behind a child's "playing" behavior. Children learn to solve problems (What does this do? Why does this puzzle piece go there?) through play. Children also learn colors, numbers, sizes, and shapes by playing. Language develops as a child plays and interacts with others; socially, children learn to cooperate, negotiate, and play by the rules in early games. The ability to gain knowledge of everything around us is another puzzling ability that we are still trying to understand and have only some theories about.
Another mystery in the realm of knowledge is what scientists call common-sense knowledge: our ability to make a judgment by instantly linking an unlimited number of facts together without prior learning.
An example of common-sense knowledge: if you hold a ceramic coffee mug in your hand and then let it go, you know that it will fall to the ground and break into pieces, spilling the coffee everywhere; not because you have tried this before, but because you know that is how things behave, due to gravity and the liquidity of coffee.
An Attempt to Define Intelligence: The Spearman Theory
There is no standard definition of what exactly constitutes intelligence. Scientists and philosophers agree that intelligence exists and can be measured; evidently, people who have been through the same process of acquiring the same knowledge apply that knowledge differently.
Some researchers have suggested that intelligence is a single, general ability, while others believe that intelligence encompasses a range of aptitudes, skills, talents, and behaviors, some of which we don't understand.
One attempt to reduce intelligence to psychometrics that can be tested, in order to validate the existence of intelligence, is the General Intelligence Theory (also known as g factor theory) proposed by Charles Spearman in 1904. The general intelligence psychometrics include:
- Quantitative Reasoning: measures a person's numeracy, including the ability to count, add, and subtract numbers, understand measurements, and solve geometry and word problems.
- Fluid Reasoning: the ability to solve abstract problems with no prior knowledge; for example, completing a series that follows a pattern.
- Visual-Spatial Processing: involves the recognition of both patterns and spatial relationships, and the ability to recognize the whole from its constituent parts; for example, assembling puzzles.
- Knowledge Reasoning: someone's accumulated stock of general information that has been committed to long-term memory; for example, explaining a picture that contains silly or impossible scenarios.
- Working Memory: the multiple processes that capture, sort, and transform information in a person's short-term memory; for example, recalling a previously presented picture.
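It's worth noting that the fluid-reasoning item, completing a series that follows a pattern, is exactly the kind of narrowly defined task that software handles mechanically rather than intelligently. A toy sketch (hypothetical function, arithmetic series only):

```python
# Toy "series completion": detect a constant difference and extend it.
# This is a narrow mechanical trick, not fluid reasoning in general;
# it fails on any series that isn't a simple arithmetic progression.

def complete_series(series, extra=2):
    diffs = {b - a for a, b in zip(series, series[1:])}
    if len(diffs) != 1:
        raise ValueError("not a simple arithmetic series")
    step = diffs.pop()
    out = list(series)
    for _ in range(extra):
        out.append(out[-1] + step)
    return out

print(complete_series([2, 5, 8, 11]))  # [2, 5, 8, 11, 14, 17]
```

A human test-taker applies fluid reasoning to series they have never seen; this function only ever applies the one rule it was written with.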
Looking at the physiology of the brain really tells us only a fraction of the story of how our cognitive traits evolved. Moreover, intelligence traits, like the ability to do maths, don't leave any mark on the fossil record. The evolutionary physiologists' answer to this mystery is that the brain evolved to solve these kinds of problems through the same natural selection and adaptation that we use to explain the evolution of physical and behavioral traits.
In my view, this is a trash theory: why would we naturally develop an extremely advanced cognitive ability to comprehend and solve complex mathematical equations, write poems, and fly a spaceship to another planet, when these are not adaptation requirements, while at the same time lacking the mental capacity to understand how we think?
Consciousness, the Mysterious Aspect of our Lives
Consciousness can be described as your awareness or experience of your unique thoughts, memories, feelings, and the environment around you.
These conscious experiences are constantly changing through the process of selective attention. For example, in one moment you may be focused on a movie scene, then you shift your attention to a dinner plan, or notice the noise of your washing machine. This continuously shifting stream of thoughts is one of the characteristics of this experience.
The scientific and philosophical debate around consciousness started long before AI was even known or defined; for hundreds or even thousands of years, consciousness has been, and will remain, an unsettled battle between dualism and monism. Being a dualist, I believe that consciousness is not composed of matter that we can inspect in the lab or replicate with a bunch of code lines; it is a metaphysical problem beyond our mental capacity to solve.
So how do we assess that Consciousness exists?
The answer is that I know, and can measure, my own conscious experience, such as how much pain I feel at a moment of pain; and I can know that you have an experience of something by asking you questions, but I cannot measure or observe your conscious experience. This is what Joseph Levine (1983) calls the explanatory gap.
While neuroscientists have made some progress in discovering how the activity of neurons in our brains correlates with some conscious experiences, this progress addresses what David Chalmers (1995) has referred to as the easy problems of consciousness. Even when we precisely identify the neural mechanism that accounts for pain, a further question remains: why does our experience of pain feel the way that it does? Why does neural firing feel like this, rather than like that, or rather than nothing at all? Identifying pain with neural firing fails to provide us with a complete explanation, and this is what Chalmers calls the "Hard Problem of Consciousness".
Another argument against the physicalism of consciousness is a philosophical thought experiment proposed by Frank Jackson (1982) called the "Knowledge Argument".
My conclusion is that all our efforts to re-create the abilities of a biological being with artificial super intelligence or artificial consciousness will stay in movies, sci-fi books, and AI futurists' minds. We are not close to any of that; before we have a thorough understanding of how the human brain works and what consciousness is, we cannot mimic something that we are not able to describe.
The hypothesis of technological singularity, that at some point we will invent machines that have "knowledge of everything", that can build a spaceship by reading books and recursively self-improve beyond humans' ability to control them, is a deliberate distortion of reality. Developing a deep learning algorithm that gets better at identifying objects or understanding natural language has not led to an improvement in the deep learning algorithm itself. AlphaGo Zero didn't improve itself by learning how to self-learn, by changing its own algorithmic functions and code lines. Any improvements to the deep-learning algorithms inside AlphaGo have been made by applying our own human intelligence to its design.
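The distinction drawn here can be shown in code: training adjusts a model's parameters, but the training loop itself is fixed, human-written code that never rewrites its own source. A minimal gradient-descent sketch (illustrative names and data, not from any real system):

```python
# Training changes parameters, not the algorithm: the update rule
# below is fixed human-written code; only `w` changes while learning.

def train(data, lr=0.05, steps=200):
    w = 0.0                             # the only thing that "improves"
    for _ in range(steps):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x   # gradient of the squared error
            w -= lr * grad              # a parameter update, not a code update
    return w

# Learn y = 3x from examples; the learner gets better at this one task,
# yet the function `train` is exactly the same code before and after.
data = [(1, 3), (2, 6), (3, 9)]
w = train(data)
print(round(w, 3))  # 3.0
```

However many times this runs, the improvement is confined to the value of `w`; any change to the learning rule itself would have to be made by a human editing the code, which is the point of the AlphaGo Zero example above.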
We may follow a chain of explanations: physics explains chemistry, chemistry explains biology, and biology correlates some brain activities with some aspects of our consciousness, and then we simulate that. But artificial flowers with the same look and scent as real flowers are still not real flowers.
Both horses and rabbits are biologically built of cells. Cells build and group themselves based on genetic information encoded in DNA; DNA is made up of chemical molecules, and those molecules can be described by their physical structures and properties. So we have a full materialistic view of what a horse and a rabbit are: an average-size rabbit weighs 2 kg, while an average-size horse weighs 500 kg. The physical recipe is ready: two hundred and fifty rabbits equal one horse. No, they don't, because our conscious view of a horse is different from that of a rabbit; we can distort this view to manipulate our conscious experience, but we cannot make a rabbit feel like a horse!