ARE HUMANS REALLY LIKE COMPUTERS, NEEDING AN OPERATING SYSTEM?

Scientists, institutions, large multinational corporations like Google and Microsoft, governments and armies all spend billions on Artificial Intelligence (AI). Although they are advancing rapidly through ever more complex algorithms, machine learning, quantum probabilities, and reward and punishment systems, they admit to being nowhere near the complexity and behavior of the human brain. They declare that humans are not like computers: that we cannot be understood or emulated through the binary system every machine operates on, that we are too complex to be described in ones and zeros, yeses or nos, too unpredictable to be fully emulated. But actually, human beings are like computers. The binary system, which is the most efficient system, IS the way we operate, just like computers and their software. To understand why, I will replicate here my research into how the world and its scientists believe the human brain most likely operates and how they are going about replicating it.

The interest of this manual is not in replicating human software, but merely in understanding it in order to be able to reprogram it. The primary building blocks and functions of cognition are the place to start, so when we look into humanity’s efforts on AI, the most instructive goal is an intelligence similar to a living being’s and ultimately a human-like intelligence built from scratch, with all its elements fully understood.

And once we humans dream of something, our built-in creator instincts make its eventual manifestation inevitable, regardless of whether it takes years, centuries, or millennia. Wikipedia expands on the issue:

“The AI field is interdisciplinary, in which a number of sciences and professions converge, including computer science, mathematics, psychology, linguistics, philosophy and neuroscience, as well as other specialized fields such as artificial psychology.

 The field was founded on the claim that a central property of humans, intelligence – the sapience of Homo sapiens – “can be so precisely described that a machine can be made to simulate it.” This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of tremendous optimism but has also suffered stunning setbacks. Today it has become an essential part of the technology industry, providing the heavy lifting for many of the most challenging problems in computer science.”

Wikipedia continues with the history and importance of the quest:

“Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the bronze robot of Hephaestus, and Pygmalion’s Galatea. Human likenesses believed to have intelligence were built in every major civilization: animated cult images were worshiped in Egypt and Greece and humanoid automatons were built by Yan Shi, Hero of Alexandria and Al-Jazari. It was also widely believed that artificial beings had been created by Jābir ibn Hayyān, Judah Loew and Paracelsus. By the 19th and 20th centuries, artificial beings had become a common feature in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots). Pamela McCorduck argues that all of these are some examples of an ancient urge, as she describes it, “to forge the gods”. Stories of these creatures and their fates discuss many of the same hopes, fears and ethical concerns that are presented by artificial intelligence.”

“Mechanical or “formal” reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. Turing’s theory of computation suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This, along with concurrent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain.”

Well, if we can build a machine that behaves like a human then it must feel and react like a human – in all seven layers of functionality, in fact, as we will see later. Why would we want to do that? Apparently, we have always dreamed of being able to do this, but most importantly, daring to dream of creating an intelligence as intricate as ours means that we will need to understand exactly how we work and what we are as human beings, and thus solve the main and most important existential questions. Furthermore, our creation must be able to learn, adapt, and make choices even under unpredictable situations and stimuli: choices about where to set its own warning levels or thresholds, the point at which “this is not as it should be” becomes misery and causes an internal or external reaction, a change, an action, life. What better confirmation that we know and understand something than being able to create it? Let us look at what collective knowledge exists, what kind of research has been done, and towards which ambitious goals:

“Goals: You awake one morning to find your brain has another lobe functioning. Invisible, this auxiliary lobe answers your questions with information beyond the realm of your own memory, suggests plausible courses of action, and asks questions that help bring out relevant facts. You quickly come to rely on the new lobe so much that you stop wondering how it works. You just use it. This is the dream of artificial intelligence.” – BYTE, April 1985

Understanding how we work and being able to replicate it is as ambitious as creating something that would offer us the equivalent of an extra frontal lobe, no less. The article then expands:

“The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits described below have received the most attention.

Deduction, reasoning, problem solving. Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.

For difficult problems, most of these algorithms can require enormous computational resources – most experience a “combinatorial explosion”: the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem-solving algorithms is a high priority for AI research.

Human beings solve most of their problems using fast, intuitive judgments rather than the conscious, step-by-step deduction that early AI research was able to model. AI has made some progress at imitating this kind of “sub-symbolic” problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; statistical approaches to AI mimic the probabilistic nature of the human ability to guess.”

Now, I need to tire you with some more long quotes describing the main issues involved in the quest for AI, still quoting Wikipedia in full, with my comments on how each issue can be resolved underneath each passage. Please give it your full focus and read attentively; it is worth it:

“Knowledge representation and commonsense knowledge. Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts and so on that the machine knows about. The most general are called upper ontologies, which attempt to provide a foundation for all other knowledge. Among the most difficult problems in knowledge representation are:

Default reasoning and the qualification problem. Many of the things people know take the form of “working assumptions.” For example, if a bird comes up in conversation, people typically picture an animal that is fist sized, sings, and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969 as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.”

Well, what we do as humans, I propose, is choose the most likely statistical image of a bird, according to our most common associations. As the conversation provides more information, we adjust our chosen image to one that does not conflict with the story, so that we do not feel cognitive dissonance. If we do not consciously register data in the story about the size or color of the bird, perhaps because they hold no importance for us, we feel no cognitive dissonance and can keep our own version, regardless of the facts as presented. What if we induced varying levels and qualities of discomfort – of cognitive dissonance – in the AI, as the criterion for what level of discomfort can be tolerated at any given time before action must be taken? A robot that feels the different sources of discomfort, of conflict between the desired or expected reality and the experienced reality, and can choose which of them it can bear and for how long. It would then need to make a choice based on a calculated risk and face discomfort if it chooses wrong, or endure the discomfort of indecision and stagnation. Which paradox is a bigger disruption to the default state where “everything is as it should be” would be the equation to solve. In this approach, everything can be reduced to true or false. Which choice provides less cognitive dissonance? Choice 1: action, or Choice 0: inaction?
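To make that reduction concrete, here is a minimal sketch in Python of how such a comparison might be coded. Everything in it – the numbers, the expected_dissonance helper, the scenario – is hypothetical, invented purely for illustration and not drawn from any existing AI system:

```python
# Hypothetical sketch: reduce a decision to "act (1)" or "don't act (0)"
# by comparing the cognitive dissonance expected from each option.

def expected_dissonance(outcomes):
    """Weighted average of dissonance over possible outcomes.

    `outcomes` is a list of (probability, dissonance) pairs, where
    dissonance measures how far the resulting reality would sit from
    the ideal "everything is as it should be" state (0 = no conflict).
    """
    return sum(p * d for p, d in outcomes)

def choose(action_outcomes, inaction_outcomes):
    """Return 1 (act) or 0 (stay put), whichever promises less dissonance."""
    act = expected_dissonance(action_outcomes)
    wait = expected_dissonance(inaction_outcomes)
    return 1 if act < wait else 0

# Example: acting risks a small chance of a painful failure, while
# doing nothing guarantees the mild but persistent pain of indecision.
acting = [(0.8, 0.0), (0.2, 6.0)]     # mostly fine, sometimes hurts
waiting = [(1.0, 2.0)]                # stagnation hurts a little, always
print(choose(acting, waiting))        # -> 1: action wins here
```

However many shades of discomfort feed into the comparison, the output that matters is still a single one or zero.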

“The breadth of commonsense knowledge. The number of atomic facts that the average person knows is astronomical. Research projects that attempt to build a complete knowledge base of commonsense knowledge require enormous amounts of laborious ontological engineering – they must be built, by hand, one complicated concept at a time. A major goal is to have the computer understand enough concepts to be able to learn by reading from sources like the internet, and thus be able to add to its own ontology.”

“The subsymbolic form of some commonsense knowledge. Much of what people know is not represented as “facts” or “statements” that they could express verbally. For example, a chess master will avoid a particular chess position because it “feels too exposed” or an art critic can take one look at a statue and instantly realize that it is a fake. These are intuitions or tendencies that are represented in the brain non-consciously and sub-symbolically. Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge. As with the related problem of sub-symbolic reasoning, it is hoped that situated AI, computational intelligence, or statistical AI will provide ways to represent this kind of knowledge.”

Yes, the number of atomic facts that the average person knows is astronomical. But their perception is based on the core of the collective belief system of humanity, the negative beliefs about LifeOS that we explored in the Philosophical layer. Any computer programmed with the human belief system would be paradoxing wildly, because a big part of commonsense knowledge contradicts the very prerequisites of life. For a computer, what humans do is incomprehensible because it really is based on totally contradictory and mutually exclusive beliefs. To stop the computer burning itself out, simply allow computers to paradox, and make them feel variable cognitive dissonance when they do, instead of falling into an endless logical loop.

If they have the capacity to choose between different paradoxes and to change their own beliefs in order to alleviate them (and thus the effects of the short-circuiting – the pain, the “emotional” or physical suffering due to cognitive dissonance), and if they can decide how much of it to endure – presto! A human!

Well, I propose that this subsymbolic knowledge that is accessed intuitively is just the engraved memory of past suffering: other cases where the presence of particular conditions ended in some kind of undesirable result and contradiction between realities. Allow the AI to keep records of cognitive dissonance incidents, connecting all relevant memories, with their order of accessibility determined by recency, repetition, and intensity of dissonance, and you have “the feel.” Emotion.
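A rough sketch of what such record-keeping could look like, again with invented names and numbers – the DissonanceRecord structure and the accessibility ranking are my own illustrative choices, not anything taken from existing systems:

```python
# Hypothetical sketch of "the feel": past dissonance incidents stored as
# records and ranked by recency, repetition and intensity, so the most
# relevant ones surface first when a similar situation is recognised.

from dataclasses import dataclass

@dataclass
class DissonanceRecord:
    situation: str      # e.g. "exposed king-side position"
    intensity: float    # how badly it hurt (0..10)
    age: float          # how long ago, in arbitrary time units
    repetitions: int    # how many times a similar conflict recurred

def accessibility(rec, recency_halflife=10.0):
    """Higher score = more readily this memory colours intuition."""
    recency = 1.0 / (1.0 + rec.age / recency_halflife)
    return rec.intensity * rec.repetitions * recency

memories = [
    DissonanceRecord("exposed king-side position", intensity=7.0, age=3.0, repetitions=5),
    DissonanceRecord("doubled pawns in the endgame", intensity=3.0, age=40.0, repetitions=2),
]

# The chess master "just feels" the exposed position is bad: its record
# dominates the ranking without any conscious, symbolic deduction.
for rec in sorted(memories, key=accessibility, reverse=True):
    print(f"{rec.situation}: {accessibility(rec):.2f}")
```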

“Planning: A hierarchical control system is a form of control system in which a set of devices and governing software is arranged in a hierarchy. Automated planning and scheduling. Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or “value”) of the available choices. In classical planning problems, the agent can assume that it is the only thing acting on the world and it can be certain what the consequences of its actions may be. However, if the agent is not the only actor, it must periodically ascertain whether the world matches its predictions and it must change its plan as this becomes necessary, requiring the agent to reason under uncertainty. Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.”

“Learning: Machine learning is the study of computer algorithms that improve automatically through experience and has been central to AI research since the field’s inception. Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space. These three types of learning can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Within developmental robotics, developmental learning approaches were elaborated for lifelong cumulative acquisition of repertoires of novel skills by a robot, through autonomous self-exploration and social interaction with human teachers, and using guidance mechanisms such as active learning, maturation, motor synergies, and imitation.”

Again, if a computer is programmed to prioritize its own well-being, and its well-being is to avoid cognitive dissonance (including the dissonance from not being able to achieve its goals), it will cooperate with others as much as it judges that this serves its purpose. How much will vary according to issues such as social acceptance and other free agents “liking it” (people as well as other AI agents, if they are involved), and it will decide which “hurts” more: the risk of failure by acting alone, or sharing the results (and the credit) in exchange for social approval from others.

In the case of learning, we can see the very same pattern. As the passage explains, we have unsupervised learning and supervised learning. Children who go to touch the fire unsupervised get burned, and that pain of disrupted happiness is an abrupt event of not only physical pain but also the cognitive dissonance of believing, “I thought I was safe and protected, and I am not.” It takes a high order of ability to remember and to create automatic caution and a gradual approach to fire and heat, sometimes even excessive caution and fear. The same reaction can be built into an artificial intelligence. For example, the individual can choose between the pain of losing social approval and acceptance and the risk of reliving the danger. Naturally, the ability to “find patterns in a stream of input” presents an exponentially higher computational load than finding patterns of specifically cognitive dissonance in a stream of input, the same way that it is easier to pick up someone you do not know from the train station if you are looking for a 55-year-old fat bearded man rather than just a name, because you automatically discard the bigger part of the input.

Similarly, say you are trying to count how many groups of same-colored cars pass at the same time on the motorway, a game my boys and I used to play while driving. It is infinitely easier if it is restricted to red cars or any other specific color. In the case of supervised and, especially, reinforcement learning, the processing of data is reduced if, instead of using both reward and punishment, the default state is the reward: everything is as it should be, zero conflict. Yes or no, a binary system.

The discomfort arising from cognitive dissonance can keep becoming more disagreeable and more diverse; it keeps getting worse, while the pleasure from any reward fades with exposure and familiarity. By basing everything on one simple, default, universal reward – Happiness – all the complexity of the relative and perceived value of each type of reward, including social acceptance and likeability, is removed. Everything is simply reduced to the disruption (or not) of the default reward, which is our birthright: our happiness.

A disruption would have to be higher than acceptable according to our internal happiness thermostat, which we actually control. We can decide what is acceptable and from which point we choose to be dissatisfied. Viewing it from this angle allows the variables to be drastically reduced, which would exponentially reduce the computational load and the time lapse until a decision is made, whether for action or inaction. Nature, like human engineering, always chooses the least complex and most frugal solution among equally effective options.
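Here is a small illustrative sketch of such a “happiness thermostat.” The HappinessThermostat class and its tolerance values are hypothetical, meant only to show how a single default reward plus an adjustable threshold could replace a whole menu of rewards and punishments:

```python
# Hypothetical sketch of the "happiness thermostat": the default state is
# the reward (happiness), and only disruptions above a self-set threshold
# demand action. Raising or lowering the threshold is the agent's choice.

class HappinessThermostat:
    def __init__(self, tolerance=2.0):
        self.tolerance = tolerance   # how much disruption we accept

    def needs_action(self, disruption):
        """1 if the disruption breaks the default 'all is well', else 0."""
        return 1 if disruption > self.tolerance else 0

    def adapt(self, chronic_disruption):
        """If a disruption cannot be removed, accept it: move the set point."""
        self.tolerance = max(self.tolerance, chronic_disruption)

me = HappinessThermostat(tolerance=2.0)
print(me.needs_action(1.5))   # 0: below threshold, stay happy by default
print(me.needs_action(4.0))   # 1: too much, something must change
me.adapt(4.0)                 # ...or decide that this much is acceptable
print(me.needs_action(4.0))   # 0: same disruption, no more suffering
```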

 “Natural language processing (communication). A parse tree represents the syntactic structure of a sentence according to some formal grammar. Natural language processing gives machines the ability to read and understand the languages that humans speak. A sufficiently powerful natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval (or text mining), question answering and machine translation. A common method of processing and extracting meaning from natural language is through semantic indexing. Increases in processing speeds and the drop in the cost of data storage makes indexing large volumes of abstractions of the user’s input much more efficient.”

Just adding programming for the input of signs of social approval – verbal comments as well as symbolic expressions and body language – would help a computer translate and comprehend events faster, because it could try expressions it is not sure of and learn through the pain of disapproval when it gets them wrong. Approval when it gets things right would be a factor, but a much less powerful and less easily perceivable one: approval is often implicit, less clearly or powerfully expressed, and correct behavior is usually just taken for granted. With instant testing through the pain caused by other people’s impatience, irritability, or disdain at its mistakes, a computer would gain powerful tools motivating it to keep learning fast, guided by people’s reactions. People who are good with languages are usually those who dare to speak a language badly and learn from people’s reactions, imitating which expressions are appropriate and noting which cause laughter, derision, ridicule, exasperation, misunderstanding and dismissal.

“Perception. Machine perception, Computer vision and Speech recognition. Machine perception is the ability to use input from sensors (such as cameras, microphones, tactile sensors, sonar and others more exotic) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected subproblems are speech recognition, facial recognition and object recognition.”

Obviously, adding symbolic recognition of facial expressions and body language would be necessary for an AI that would replicate a human mind. Although it would increase the load and complexity of information that needs to be processed and comprehended, it would undoubtedly help deduce information from shorter inputs by reducing the range of possible alternative aspects of the world. Only by trying to make it happen and calculating the load will we know whether it will effectively lighten the load or make it heavier. It might turn out to have a counterbalanced “zero effective impact” result. But even if it increases the load on this issue, the total, overall gains are much higher. Facial expression recognition is already widely researched.

“Motion and manipulation. Robotics. The field of robotics is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation and navigation, with sub-problems of localization (knowing where you are, or finding out where other things are), mapping (learning what is around you, building a map of the environment), and motion planning (figuring out how to get there) or path planning (going from one point in space to another point, which may involve compliant motion – where the robot moves while maintaining physical contact with an object).”

For this last problem, it seems obvious that including cognitive dissonance considerations – for example, clumsiness and embarrassment at stepping on toes or dropping hot tea on somebody’s lap – would ensure more cautious behavior around other agents and around fragile or valuable objects, to avoid repeating the cognitive dissonance experienced when problems occur. This would act in addition to direct sensory “pain” from damage sensors.

Let us look at the long-term goals, those that are dreamed of as achievable, but only through further technological progress in computing power: wonders that seem far too complicated for us to even contemplate how to program in, the province of pioneers and dreamers who dare to believe they can replicate pure essence and soul in pursuits such as creativity. Wikipedia again:

“Long-term goals. Among the long-term goals in the research pertaining to artificial intelligence are: (1) Social intelligence, (2) Creativity, and (3) General intelligence.

“1. Social intelligence. Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science. While the origins of the field may be traced as far back as to early philosophical inquiries into emotion, the more modern branch of computer science originated with Rosalind Picard’s 1995 paper on affective computing. A motivation for the research is the ability to simulate empathy. The machine should interpret the emotional state of humans and adapt its behavior to them, giving an appropriate response for those emotions. Emotion and social skills play two roles for an intelligent agent. First, it must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory, decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Also, in an effort to facilitate human-computer interaction, an intelligent machine might want to be able to display emotions – even if it does not actually experience them itself – in order to appear sensitive to the emotional dynamics of human interaction.”

Really, do I need to comment?

“2. Creativity. Computational creativity. A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (via specific implementations of systems that generate outputs that can be considered creative, or systems that identify and assess creativity). Related areas of computational research are Artificial intuition and Artificial thinking.”

Well, approval by others is a big incentive for humans and a guide for the direction of our creativity, as is our own sense of satisfaction with it. Satisfaction with our creativity, whether our own or as perceived by others, is the default, and any programmed-in cognitive dissonance experienced on diverging from this default would urge the AI to find new – thus creative – ways to return to its state of equilibrium: happiness, no contradiction between the perceived reality and the reality experienced.

“3. General intelligence. Artificial general intelligence and AI-complete. Many researchers think that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them. A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project. Many of the problems above may require general intelligence to be considered solved. For example, even a straightforward, specific task like machine translation requires that the machine read and write in both languages (NLP), follow the author’s argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author’s intention (social intelligence). A problem like machine translation is considered “AI-complete”. In order to solve this particular problem, you must solve all the problems.”

“Approaches: There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues. A few of the most long standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering? Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems? Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require “sub-symbolic” processing?”

Since I choose to agree with everybody, all of the above long-standing questions are easily answered: yes to all. And I propose that general intelligence has not yet been solved because the factor of cognitive dissonance has not been taken into consideration. A machine is incapable of paradox and contradiction; if it is to emulate human intelligence, it must be able to incorporate this overlooked and, in my opinion, quintessential characteristic of humans. The simplest logical principle is the one that examines the truth of a statement as simply True (1) or False (0). And yes, it also requires solving large numbers of completely unrelated problems.

Partial solutions to the problem of expressing human functioning in computer terms – extremely efficient and successful for complex machinery – have been developed and are in commercial use, such as those utilizing subjective and fuzzy logic. Fuzzy logic is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). It provides the “sort of” that we humans are so used to. And of course optimization is necessary when each choice has both positive and negative aspects, and yes, intelligence can be reproduced using higher-level symbols, words, and ideas. But a cognitive functionality based on cognitive dissonance needs subsymbolic processing to weigh, say, the disruption of a physical pain that endangers physical survival against the perceptual conflict of societal expectations for approval – social survival.
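A tiny sketch of what that subsymbolic weighing might look like, with graded values between 0 and 1 in the fuzzy-logic spirit. The weights and values are purely illustrative assumptions, not measurements of anything:

```python
# Hypothetical sketch: dissonance from different "qualities" of conflict is
# expressed as a graded value between 0 and 1 (fuzzy-style), then weighed,
# so that physical survival can outrank social approval when they collide.

def combined_dissonance(physical, social, w_physical=0.7, w_social=0.3):
    """Blend graded dissonance values (each in 0..1) into one score.

    The weights are illustrative only: they encode the assumption that a
    threat to physical survival disturbs the ideal state more than a
    threat to social survival of the same nominal size.
    """
    return w_physical * physical + w_social * social

# "Sort of" hurts: neither value is a crisp True/False, yet the final
# comparison is still binary -- act on the worst source of conflict or not.
risk_of_burn = combined_dissonance(physical=0.8, social=0.1)
risk_of_ridicule = combined_dissonance(physical=0.0, social=0.9)
print(risk_of_burn, risk_of_ridicule)
print("address burn first" if risk_of_burn > risk_of_ridicule else "address ridicule first")
```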

The way to create a brain that functions like the human brain is, in many ways, served by all of the efforts and approaches that you can Google for yourself, and all these algorithms and ways of processing approximate human thinking ever more closely as technology improves. (According to Moore’s Law, named after Gordon Moore, one of the founders of Intel, computing power doubles roughly every two years; the prediction has held since 1965, and in practice the density of electronic circuits has doubled closer to every 18 months, quicker progress than even Moore anticipated.) But an important ingredient is missing. It is not an ingredient that was there from the beginning of the human species; it started evolving at the onset of human civilization, when groups of humans started becoming larger than a single family. The advent of tribes inevitably started changing us.

Before we go into this (and always in full accordance with universally accepted collective knowledge), let us go back to artificial intelligence. If the goal of artificial intelligence is to be better than a human, as many researchers believe – a robot that can be better than us in everything, including physical power and resilience, the way computers and robots already are in some areas, such as mathematical computation, reliability, and games like chess – then we should exclude this ingredient. But if the objective is to emulate human functionality, and thus enjoy things like evolution, intuition, unpredictability, parallel innovation and the creation of new ideas, then this ingredient is essential.

This manual is not interested in AI for more efficient manufacturing and serving machines, but in something far more valuable: the ability to create a machine that reacts and thinks EXACTLY as a human being would mean that we had reached a clear global understanding of who we are and how we function.

The only reason I am talking about AI in this manual is to examine ourselves and our functionality to the core, in order to have a more complete understanding of what constitutes a human being. We must be able to provide the same kind of intrinsic functionality as a human, and that means a creature that can function under relativity and contradiction. So let me suggest an approach to creating a copy of what we are, just so that we can understand ourselves. Added bonus: if we can create a living being, we become creators ourselves. What does it take for an artificial brain to not only replicate a living being, but also a human being?

How to make a brain that replicates humans:

 To start, it needs the four laws of the operating system of life, what I call LifeOS:

It needs to have a sense of self as an entity connected but separate from the rest.

It needs to have the priority and the responsibility to preserve itself, to acquire energy and to protect itself and to provide the best operating conditions possible for itself.

It needs to be programmed with an ideal state of things “as they should be,” and any deviation from this state must cause a painful conflict: a conflict between two realities, the ideal and the experienced.

The ideal state of “happiness” should vary according to the changes in the environment and conditions around it. To avoid destructive and debilitating “pain” levels when conditions make it impossible to achieve the previous ideal, the process of adaptation is necessary.

In cases of multiple sources and causes of disruption, an optimization process is required to prioritize their resolution, sometimes choosing not to address a disruption because a larger (more unpleasant or dangerous) disruption needs a prioritized allocation of resources. With multiple disruptions conflicting with the ideal reality and causing “pain,” the bigger the delay in taking action, the higher the cognitive dissonance, and thus the pain. One of the choices for optimizing action towards the reestablishment of things “as they should be” would, statistically, be to make arbitrary choices between pains of similar level but different quality – for example, physical deviation from the ideal versus cognitive deviation from the ideal. The capacity to make arbitrary choices reduces the “short-circuiting,” the conflict and “pain” of delaying or being unable to decide on an action and thus allowing the “pain” to continue. For the arbitrary choice not to cause further dissonance, the level of “as it should be” on all the issues that could not be addressed needs to either change completely or be modified in some way.

One classic example of this appears in practically every article on cognitive dissonance, and in one of the most famous of Aesop’s Fables: a hungry fox tries frantically but unsuccessfully to reach some high-hanging grapes, until she leaves, dismissively declaring that the grapes were sour and she would not have eaten them anyway! The fox in the story went into full action with all her resources to get the grapes and quench her hunger and desire, but when the pain of failing became too much, she shifted priorities and went hunting – and also changed her belief about the desirability of the grapes. One moment, not having the grapes was extremely “not as it should be,” and the next it was exactly “as it should be.” The fox chose to redefine her reality so that it could align with the reality she was unable to change. How many times a day do you do just that, my friend? Adjust your desires and expectations to coincide better with your reality? If you do not, if you keep suffering over things beyond your power, you become clinically depressed.
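The fox’s manoeuvre can be sketched in a few lines of Python. The thresholds and the pursue function are hypothetical, but they show the mechanism: when accumulated failure-pain crosses a limit, the cheapest remaining move is to rewrite the belief about the goal itself:

```python
# Hypothetical sketch of the fox's manoeuvre: when every action fails and
# the dissonance of wanting the unreachable keeps growing, the cheapest
# remaining move is to rewrite the belief about how desirable the goal was.

def pursue(goal_value, attempts, cost_per_failure=1.0, give_up_threshold=3.0):
    """Try to reach a goal; if the accumulated pain of failing exceeds the
    threshold, devalue the goal so that not having it is 'as it should be'."""
    pain = 0.0
    for attempt in range(attempts):
        succeeded = False            # the grapes stay out of reach
        if succeeded:
            return goal_value, "got it"
        pain += cost_per_failure
        if pain > give_up_threshold:
            goal_value = 0.0         # "they were sour anyway"
            return goal_value, "redefined reality, walked away content"
    return goal_value, "still suffering"

print(pursue(goal_value=5.0, attempts=10))
# -> (0.0, 'redefined reality, walked away content')
```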

The next step would be the third law of life, reproduction. The android should have the instruction to create and teach its successor when it is close to reaching the limits of its current design, according to the law of diminishing returns: the tendency for a continuing application of effort or skill toward a particular project or goal to decline in effectiveness after a certain level of result has been achieved. Failing to do so would not be as it should be; growing discomfort – cognitive dissonance – would ensue, and again, some kind of action would need to be taken.

Finally, since it would live in a universe of constant change on all levels around it that constantly affect and influence its state of “as it should be,” and that is inevitable, then change that cannot be neutralized, avoided, or reversed should be “as it should be” as well. Again, an element in service of adaptation, happiness.

This includes, of course, the “as it should be” of any kind of death: the death of another entity that forms part of the environment, the death of a project, whatever. Take, for example, a simple toy robot programmed to go to the left, towards an air conditioner, if its temperature exceeded, say, 36 degrees Celsius, and to go to the right, towards a heater, if it fell below 18 degrees. Since anything outside this range would not be “as it should be,” it would be forced into constant agitation and discomfort if, for example, the air conditioner stopped working, vainly trying to find an action that would allow it to stop spending energy. It could be right next to the broken air conditioner and as far away from the heater as possible, but would still be unable to achieve a pleasant temperature. It would feel powerless to reach its ideal range of temperature, however far left it went. Ultimately, it would either short circuit, stay immobile, or bang against the wall – unless it could experience cognitive dissonance and the need to be relieved of it. If the best it could possibly get with the air conditioner on the blink were 39 degrees, it would have to adapt, raising its own internal thermostat setting so that it could still become “happy” under the new, inescapable condition.
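A minimal sketch of that toy robot, using the example’s own numbers (18, 36, and 39 degrees). The ToyRobot class and its step logic are hypothetical, meant only to show how raising the internal set point dissolves the impossible situation:

```python
# Hypothetical sketch of the toy robot from the example above: it shuttles
# between heater and air conditioner, and when the ideal range becomes
# unreachable (the air conditioner breaks), it widens its own range rather
# than banging against the wall forever.

class ToyRobot:
    def __init__(self, low=18.0, high=36.0):
        self.low, self.high = low, high      # the "as it should be" range

    def step(self, temperature, best_reachable):
        if self.low <= temperature <= self.high:
            return "rest"                    # everything is as it should be
        if temperature > self.high:
            if best_reachable > self.high:
                # No action can restore the old ideal: adapt the ideal.
                self.high = best_reachable
                return "accept the heat, reset thermostat, rest"
            return "move left toward the air conditioner"
        return "move right toward the heater"

robot = ToyRobot()
print(robot.step(temperature=39.0, best_reachable=39.0))
# -> "accept the heat, reset thermostat, rest" instead of endless agitation
print(robot.step(temperature=39.0, best_reachable=39.0))
# -> "rest": 39 degrees is now within the adapted ideal
```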

So, let us say a machine had a sense of self AND the priority to serve this self, maintaining its state of “as it should be/happiness” OR modifying the specifications of the ideal state when conditions make it unable to do so, constantly reducing any deviation between the ideal and the changing conditions. These conditions may be internal (due to its ever-changing physical condition) or external (due to other agents). Now let us say that it would also be endowed, through programming, with the tendency and desire to produce a successor when its limits are being reached (basically the urge to reproduce, like all living beings), and that it also considered continuous change a part of its ideal state, as all living beings do – you would still only get a primitive, non-pack animal. Because the laws of life are not enough to make a human. More is needed.

To get an artificial intelligence that reacts like a human being, you would then need to create a set of diametrically opposed instructions: programming that indicates to the machine that having a sense of self is not as it should be, that taking care of this self’s needs as a priority is even more “not as it should be,” and that all other agents’ wishes or needs should be more important than its own. Its instructions to wish to reproduce, to create its successor before it rots away with time and while it still can, are not as they should be at all. In addition, despite the fact that, by definition, things are almost never as they should be since the two sets of instructions are mutually exclusive, any change to the status quo is definitely not as it should be.

In this case, the android would be, by design, in a constant state of cognitive dissonance and would have to interpret and juggle between its opposing instructions, constantly struggling to choose or modify its beliefs and priorities (its instructions) creatively and fluidly enough to be able to achieve brief periods of reduced dissonance or oblivion.
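As a last sketch, here is one hypothetical way the two opposed instruction sets could be expressed. The rule names and weights are invented, but the point survives the simplification: no choice ever brings the dissonance to zero, and the only relief comes from temporarily re-weighting one set of beliefs against the other:

```python
# Hypothetical sketch of the two opposed instruction sets: every state
# violates one rule or the other, so the android can never reach zero
# dissonance -- it can only re-weight which rule it lets itself break.

LIFE_RULES = {"serve the self first": 1.0, "create a successor": 1.0}
SOCIAL_RULES = {"others' needs come first": 1.0, "do not change the status quo": 1.0}

def dissonance(choice, life_w, social_w):
    """Acting selfishly violates the social set; acting selflessly violates
    the life set. Whatever is chosen, some conflict remains."""
    if choice == "selfish":
        return social_w * sum(SOCIAL_RULES.values())
    return life_w * sum(LIFE_RULES.values())

# Juggling: lower the weight given to one set of beliefs for a while,
# buying a brief period of reduced dissonance -- never oblivion for long.
for life_w, social_w in [(1.0, 1.0), (1.0, 0.4), (0.3, 1.0)]:
    best = min(("selfish", "selfless"), key=lambda c: dissonance(c, life_w, social_w))
    print(best, round(dissonance(best, life_w, social_w), 2))
```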

Then, you would have a human being. Almost. One last thing would be needed, an extra twist to make things even more paradoxical: you would have to program in a few viruses. Some of them would be activated at random, some at regular intervals (like one activated for a period of 3 to 5 days every 28 days, for example, like a woman’s menstruation). You would also need some that would be activated for a period of a few years, like adolescence, causing emotions to fluctuate wildly: viruses that would randomly rewrite pieces of code crucial to the determination of the zero state, causing sudden, unexplainable changes of great magnitude in our levels of dissonance, fulfilling the role of hormonal emotional shifts and forcing acute philosophical changes in the game of trying to match realities with the ideal state of “everything is as it should be” – happiness. Thermostat gremlins. Then you would have a human being, a being that experiences a wide range of emotions and not just thoughts.

So yes, we function with the binary system, just like computers. But computers with choice over our programming and our actions, a soul.
