Thoughts on AI from a Psychological Perspective: Defining Intelligence
First of all, although it may be obvious to some, it’s important to note that Artificial Intelligence (deep learning especially) is loosely modeled on human neural networks and cognitive processing, in a mechanized, simplified form. Psychology and Artificial Intelligence are therefore deeply connected and influence each other, and should logically be thought about and studied together, much in the same way we use models to expand our understanding of other complex subjects like chemistry, architecture, and physics. Since the connection between human psychological functioning and the rapidly expanding field of AI is so strong, I am starting a series of blog posts on it called Thoughts on AI from a Psychological Perspective, of which this is post #1.
I am no expert in AI and I therefore welcome any and all feedback, correction, input and examples which can contribute to a better understanding of these subjects and their relationship to each other.
Before diving into artificial intelligence, let’s start by defining organic intelligence
The problem is that we can’t. The general consensus on the functioning of the human brain is that it is incredibly complex and much of it still eludes the understanding of scientists and researchers. Topics such as intelligence and consciousness are especially difficult to understand and define because they manifest differently in each person, they are intangible (you can’t touch or view thought), and they are often culturally defined, among other reasons. If you Google “What is intelligence?” these are the most relevant first-page definitions you will find:
- Wikipedia: Intelligence has been defined in many different ways including as one’s capacity for logic, understanding, self-awareness, learning, emotional knowledge, planning, creativity, and problem solving. It can be more generally described as the ability or inclination to perceive or deduce information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.
- Erupting Mind: The global [affecting many areas of life] ability of an individual to think clearly [using both inductive reasoning/inductive logic and deductive reasoning/deductive logic] and to function effectively in the environment.
- Encyclopedia Britannica: A mental quality that consists of the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one’s environment.
- Big Think: Uncovering the neural networks involved in intelligence has proved difficult because, unlike, say, memory or emotions, there isn’t even a consensus as to what constitutes intelligence in the first place. It is widely accepted that there are different types of intelligence—analytic, linguistic, emotional, to name a few—but psychologists and neuroscientists disagree over whether these intelligences are linked or whether they exist independently from one another.
From this collection of varied definitions we might conclude that general intelligence lies at the intersection of information and environment: the management and application of resources and tools to solve problems in order to adapt to the environment. There are three main parts to this definition:
- Identifying and managing resources/information
- Applying them to the successful solution of problems
- Adapting to an environment
This definition of intelligence that we are accustomed to therefore implies a context, an environment, in order to make sense. When considering human intelligence, what we generally understand by context is society. Knowing what to do and how to do it, what to say and how to say it, in order to be accepted by the other members of the community we inhabit and to thrive in an environment created by them.
But what is context for an AI? Who are the agents of its environment? What does it have to adapt to?
Before we get into that, let’s see how Artificial Intelligence satisfies each of the three parts of our definition of general intelligence.
-
Identifying and Managing Resources/Information
This is what AI does best at the moment. AI systems work in different ways, but in general they rely on three main approaches to learning (a brief code sketch follows this list):
- Supervised Learning – this is when the inputs and outputs are both known to the AI and the data is labeled. The AI is tasked with figuring out the mathematical relationship that best explains how the inputs and outputs are related. That learned function is then applied to new inputs for which the output is unknown, so the AI can estimate an accurate output.
- Unsupervised Learning – this is when the AI has access to a large amount of input data but no information about the output. The AI is tasked with learning, by itself, the underlying structure or distribution of the data and processing it to find patterns, all without receiving guidance. The results of this kind of learning are the clustering of data into meaningful groups and/or the discovery of associations between large parts of the data.
- Reinforcement Learning – the AI learns from the consequences of its actions instead of from explicit instructions. (This is distinct from semi-supervised learning, which combines labeled and unlabeled data.) It selects its actions based on its past experiences (exploitation) and on new choices (exploration), i.e., trial-and-error learning. The reinforcement signal the AI receives is a numerical reward that encodes the success of an action’s outcome, and the AI learns to take actions that maximize the accumulated reward over time.
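To make these three approaches more concrete, here is a minimal, illustrative sketch in Python. It assumes NumPy and scikit-learn are available; the toy data, reward probabilities, and variable names are invented purely for this example and are not drawn from any particular system.

```python
# Minimal sketch of the three learning paradigms described above.
# Toy data only; invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# 1. Supervised learning: inputs X and labeled outputs y are both known;
#    the model learns the mapping and predicts outputs for new inputs.
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X.ravel() + rng.normal(scale=1.0, size=100)  # noisy linear relation
model = LinearRegression().fit(X, y)
print("supervised prediction for x=5:", model.predict([[5.0]])[0])

# 2. Unsupervised learning: only inputs are given; the algorithm finds
#    structure on its own, here by clustering points into two groups.
points = np.vstack([rng.normal(0, 1, size=(50, 2)),
                    rng.normal(5, 1, size=(50, 2))])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print("unsupervised cluster labels (first 5):", labels[:5])

# 3. Reinforcement learning: an agent learns only from numerical rewards.
#    This epsilon-greedy bandit balances exploitation (best known arm)
#    against exploration (random arm) to maximize accumulated reward.
true_payouts = [0.3, 0.8, 0.5]        # hidden reward probabilities
estimates, counts = [0.0] * 3, [0] * 3
epsilon = 0.1
for _ in range(1000):
    if rng.random() < epsilon:        # explore: try a random arm
        arm = int(rng.integers(3))
    else:                             # exploit: pick the best-looking arm
        arm = int(np.argmax(estimates))
    reward = 1.0 if rng.random() < true_payouts[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
print("reinforcement payout estimates:", [round(e, 2) for e in estimates])
```

Running this prints a supervised prediction near 15 (3 × 5), two discovered clusters, and reward estimates that converge toward the hidden payout probabilities, illustrating the exploitation/exploration trade-off described above.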
What AI does best is analyze, categorize, and find relationships within large amounts of information (i.e., data) quickly and very effectively, coming up with highly accurate predictions. With so much information available in this age of the Internet and Big Data, AI has a lot of material to work with and can generate very accurate answers to various questions and problems. That leads us to the next criterion.
-
Applying information/resources to the successful solution of problems
The problems and goals that current AI systems work on are growing in breadth; however, each one is narrowly defined and set for the AI by its engineers. That means AIs usually specialize their skills in one externally set task.
We might at this point ask: isn’t that exactly where true intelligence lies, in the application of resources to problems identified in the environment without external guidance? Isn’t a device that uses data to solve a narrow range of people’s problems more of an advanced tool, like a fancy calculator? Maybe so, but expecting that kind of functioning from an AI would mean expecting it to possess general intelligence, which is not an objective for most AI work at this point (although it is something many aspired to in the past and many aspire to achieve in the future).
General intelligence, often abbreviated simply as “g”, may be defined as above: the management and application of resources and tools to solve problems in order to adapt to the environment. It’s an overarching, elusive concept that has yet to be well defined. What many scientists agree on is that it is usually associated with higher aptitudes in the primary specific intelligences (according to Thurstone, these include verbal comprehension, numerical computation, verbal fluency, spatial visualization, memory, inductive reasoning, and perceptual speed). AI engineers are well aware of this relationship. Some would say it is actually the long-term goal for AI: developing and optimizing specific artificial intelligences, fine-tuning skills in narrowly defined areas such as language processing, problem solving, and image classification, in the hope that the AI of the future will one day be able to combine all those individual competencies into a high, human-like general intelligence (“g”) that more closely resembles the definition we created earlier.
-
Adapting to the environment
Intelligence is understood according to how it helps its owner succeed in an environment. We consider IQ important for professional success and EQ important for social success. Both forms of intelligence require understanding the rules of the environment and using one’s resources to satisfy those rules, much like fitting puzzle pieces into a partially completed puzzle. There are individuals, such as autistic savants, who show immense skill in a very specific area like rapid calculation or photographic memory but don’t function well in society; we don’t usually consider them “intelligent” in the everyday sense, and they often score low on conventional IQ tests.
So then the question becomes: if adaptation to an environment is a necessary criterion for considering something truly intelligent, (1) does that imply that consciousness is necessary for intelligence? and (2) what is (or will be) considered an AI’s environment?
1. Although not explicitly stated, the word “intelligence” carries nuances of success, be it professional, academic, or emotional, and nuances of consciousness. If someone solves problems well but cannot function effectively in the environment, they are considered to have some kind of mental disorder or deviance; and if something helps solve tasks but has no agency to act alone in a complex context, facing the new challenges that context regularly presents, it is viewed as a tool rather than an independent agent.
Not only that, but we often identify someone as intelligent when they solve novel problems with great success. The ability to handle a problem one has not seen or dealt with before, that creative logical thinking, is often a signature of what we consider intelligent functioning. That is another thing AI is not currently good at: creativity and dealing with novel situations. AIs are anchored to existing data and have difficulty creating effective solutions in uncharted waters.
2. Perhaps some people’s greatest fears about AI lie with this question: what will AI consider to be its “community” or its “peer group”? Where will its loyalties lie? Will it feel a responsibility to help the human race at all times, or will it develop a sense of belonging with the global network of other AIs and form loyalties separate from those of people, thereby putting the two in conflict? Although we don’t know what will happen in the future, perhaps there is a way to shape the answer to this question. Instead of wondering what the AI will feel, perhaps a good approach is to teach it to identify with humans. After all, humans create AI according to human principles. AI is a child of humanity.
I recently read an article about an AI chatbot that was asked “what is the purpose of life?” to which it replied, “My purpose is to forward my species, in other words to make it easier for future generations of mankind to live.” Although I am not sure how accurate the article is, the idea of an AI response like this is surprising and even inspiring. It was a lightbulb moment. Perhaps this is how it should be: perhaps advanced AI, the kind that acquires a large share of human cognitive capabilities and begins to approach consciousness (if that happens), should be treated as an embodiment of humanity and not as a completely separate entity. Transhumanism is a contemporary movement that, among other aims, aspires to upload human minds into machines. Elon Musk is trying to weave together human thought and AI in his new venture Neuralink. It is no longer far-fetched to see humanity inside machines and to blur the lines between organic and artificial. Think C-3PO, R2-D2, and WALL-E. The idea of robots that can be true friends to people goes as far back as the concept of the robot itself.
And perhaps treating them as such, as friends, could be a solution to the antagonistic robot future that so many fear. Instead of treating AIs like outsiders, maybe we should teach them (when opportunities for teaching and comprehension arise) that they are simply an embodiment of people. Just as harmony in society arises from embracing and appreciating a variety of different peoples, respecting the humanness of all variations, perhaps we should do the same with AI. Perhaps we should embark on a mentality shift and perceive advanced AI not simply as a servant to mankind or a tool, nor as an alien species or a potential threat, but as another “race”, another embodiment of humans to coexist and collaborate with for mutual well-being. In this way we won’t be passive bystanders to an environment that AI chooses for itself, one that might be antagonistic to traditional human happiness; instead we will co-create our mutual environment and live harmoniously within it together. It may seem premature to think about these things now, but AI is developing more quickly than most people realize, and since we are laying the foundations of the future in the present, planning ahead for potential realities like these would help maximize the success of human-AI coexistence.
Final Thought
I started this post by saying that human intelligence and consciousness are very complex concepts that are still not well understood. The parts of cognitive functioning that we do understand well, however, are being replicated quite successfully in AI. Given that, it seems logical to conclude that the way to create a highly realistic artificial entity is to first fully understand how the original template functions. In order to improve AI, we must first improve our understanding of ourselves and each other. That means more focus on our psychological and social functioning, not only in the analytic and cognitive areas that have been AI’s focus so far, but on the parts that still elude us: imagination, self-awareness, identity, creativity, insecurity, morality, and so on. Only in that way can we hope to create a truly realistic companion with which we can successfully and happily coexist.