How to Program AI for Ick? It’s an Icky Problem

This is the first in a series of conversations I had with ChatGPT about AI, Affect Theory, and how to code for feelings. I cleaned up the chat with Grammarly to make it easier to read (and to fix obvious grammar and spelling issues). You are otherwise getting the chat in its entirety. Since my first chat on ChatGPT, I have called my interlocutor ChattyG.

Scott: 

How do you explain this if you don’t have an affect system coassembled with cognition, memory, drives, and sensations? As an adult, you don’t like oily, spicy foods and haven’t since childhood. Then, unexpectedly, perhaps due to an illness that changes the taste buds, the sensation of oily and sinewy becomes interesting and exciting.

ChattyG: 

Exploring the interplay of affect, cognition, memory, drives, and sensations without an affective system like ours can be abstract, but let’s attempt to bridge the gap with analogies and principles that illustrate the interconnectedness of these components in shaping human behavior and personality.

The scenario you described is one where an adult’s taste preferences change dramatically one day, later in life, caused by something extraordinary: an illness, or perhaps sheer luck or stubbornness. How would you code this? And what sort of hardware, firmware, and software combo could make this happen? This is a fascinating entry point to discuss how innate affects and life experiences contribute to emotion and personality development. This example highlights the fluidity and adaptability of human responses to internal and external stimuli, emphasizing that our likes, dislikes, and overall personality are not static but evolve with new experiences and changes within our bodies.

Innate Affects and Life Experience: 

According to Tomkins’ theory, we are born with nine innate affects, essentially automatic responses that prime us for survival. These affects—interest, enjoyment, surprise, distress, fear, anger, shame, disgust, and dissmell—are the foundational building blocks of our emotional lives. They are triggered by changes in neural firing rates, responding to internal changes (like illness affecting taste buds) and external stimuli (new foods being introduced). Over time, as we accumulate experiences, these innate responses are layered with memories, leading to the complex emotional reactions we identify as adults.

Affects vs. Emotions: A critical distinction in Tomkins’ framework is between affects and emotions. Affects are immediate, physiological responses to stimuli, while emotions are more complex, incorporating affects, memories of past experiences, and cognitive evaluations. The change in food preferences due to illness illustrates how an innate affect (interest-excitement in new tastes) can evolve into a more complex emotional response (liking for oily, spicy foods) through a change in the sensory inputs and subsequent cognitive reassessment.

Scripts and Personality Development: 

Tomkins also introduces the concept of “scripts,” essentially rules we unconsciously develop to navigate our affective responses. These scripts are shaped by our experiences and the feedback we receive from our environment, including cultural norms and personal interactions. Over time, these scripts solidify into personality traits, guiding how we typically respond to similar situations in the future. The shift in taste preference can be seen as forming a new script based on a changed affective response, illustrating how dynamic and adaptable our affective system is.

The Role of Cognition and Memory: Cognition and memory play crucial roles in interpreting and integrating affective responses into our emotional lives. Cognitive processes help us understand and label our feelings, while memories of past experiences influence how we predict and respond to future events. The interaction between affects, cognition, and memory enables us to learn from our experiences, leading to the development of complex emotions and personality traits.

Interconnectedness of the System: The scenario you provided underscores the interconnectedness of affect, cognition, memory, drives, and sensations. A change in one component (sensations due to illness) led to a cascade of adjustments in affects (interest in new tastes), which was processed cognitively (recognition and appreciation of new flavors), integrated with memories (adjustment of food preferences), and reflected in behavioral changes (altered eating habits). This dynamic interplay highlights the complexity of the human affective system and its capacity for adaptation and growth.

In summary, the theory of innate affects and their development into complex emotions and personality traits offers a comprehensive framework for understanding human behavior and emotional life. As illustrated by changes in taste preferences due to illness, this system’s adaptability emphasizes the intricate relationship between our biological underpinnings and the experiences that shape us.

Scott: 

But it takes an affect system that coassembles an experience with a memory and an affect to decide that something that was experienced as “yucky” is now “yummy.” If the yuk and yum of sensorial experience weren’t there, how would it be experienced?

ChattyG:

The scenario you’re describing hinges on the complex interplay between our affect system and cognitive processes, including memory and decision-making. The affect system, as outlined by theories like those of Tomkins, plays a crucial role in assigning emotional valence (like “yuk” or “yum”) to our experiences. This emotional valence is not static; it can change based on new experiences, cognitive reassessments, and alterations in our physiological state, such as changes in taste buds due to illness.

If we did not have an affect system to attach these emotional valences to sensory experiences, our interaction with the world would be drastically different. Experiencing something as “yucky” or “yummy” is deeply entwined with innate responses and learned associations. Without the affect system:

  • Sensory Processing Without Emotional Valence: Sensory experiences would be processed in a more neutral, factual manner. For instance, we might recognize something as “oily” or “spicy” based on its physical properties without the automatic positive or negative emotional response. The experience would be akin to noting the color or shape of an object without feeling attraction or aversion to it.
  • Decision-Making Based Solely on Cognitive and Physiological Needs: Choices about what to eat or avoid would likely be made based on cognitive assessments of nutritional value, past outcomes (like physical discomfort), or social and cultural learning (what is deemed appropriate or safe to eat) rather than innate preferences or aversions.
  • Learning and Memory Without Emotional Context: Memories of eating certain foods would lack emotional context, making them more like factual records. The process of forming preferences or aversions would rely more on logical reasoning about consequences (e.g., “Eating this previously led to illness”) rather than emotional experiences of pleasure or disgust.
  • Social and Cultural Influences as Primary Drivers: Without innate emotional responses, social, cultural, and rational health considerations would play a more significant role in determining food preferences. The “acquired taste” concept could become more prevalent, with preferences formed more consciously through repeated exposure and social learning.
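
To make the contrast concrete, here is a minimal Python sketch of the ideas in the list above. It is an illustration only; the class and function names are invented for this example, not drawn from any real system.

```python
from dataclasses import dataclass, field

@dataclass
class Percept:
    """A neutral, factual sensory record: properties only, no felt valence."""
    modality: str                                  # e.g., "taste"
    features: dict = field(default_factory=dict)   # e.g., {"oily": 0.8}

@dataclass
class ValencedPercept(Percept):
    """The same record once an affect system has stamped it."""
    valence: float = 0.0   # -1.0 ("yucky") .. +1.0 ("yummy")

def choose_food(p: Percept, harmful_history: set) -> bool:
    """Affect-free choice: a pure lookup against remembered consequences."""
    return not (set(p.features) & harmful_history)

def choose_food_affectively(p: ValencedPercept) -> bool:
    """Affective choice: the stamped valence short-circuits deliberation."""
    return p.valence > 0

p = ValencedPercept("taste", {"oily": 0.8, "spicy": 0.6}, valence=0.4)
print(choose_food(p, harmful_history={"oily"}))   # False: the record says avoid
print(choose_food_affectively(p))                 # True: it now feels "yummy"
```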

The shift from “yucky” to “yummy” in your example would need to be explained by cognitive reappraisal mechanisms and possibly changes in physiological responses (e.g., how illness affects taste perception). Without the affect system, this transition might involve more deliberately reevaluating the food’s attributes (taste, texture, nutritional value) and consciously reinterpreting them positively based on new information or experiences.

Scott: 

Can one be human without an affect system?

ChattyG:

Whether one can be “human” without an affect system touches on deep philosophical, psychological, and biological discussions about what constitutes human experience. The affect system, as described by theorists like Silvan Tomkins, is fundamental to how we experience the world, interact with others, and form our identities. It’s a cornerstone of our emotional life and deeply intertwined with cognition, memory, and sensation.

  • Biological Perspective: From a biological standpoint, the affect system is part of our evolutionary heritage. It helps us respond to environmental stimuli in ways that promote survival and reproduction. Without it, our capacity to navigate complex social relationships or make rapid, adaptive decisions in response to emotional cues would be compromised.
  • Psychological and Emotional Experience: Emotionally, the affect system is central to experiencing joy, sadness, fear, and other emotions that color our world perception. These experiences are not just add-ons to our existence; they shape our motivations, relationships, and sense of self. Without the capacity to feel emotions, an individual might still have cognitive function and physical health but would lack the emotional depth that characterizes much of the human experience.
  • Social and Cultural Connection: On a social level, emotions play a critical role in bonding, empathy, and moral reasoning. They are essential for forming deep personal relationships and shared experiences that unite communities and cultures. An individual without an affect system might struggle with these aspects of human life, finding it difficult to connect with others meaningfully.
  • Personal Identity and Growth: Emotionally charged experiences are often pivotal in personal development and identity formation. Our reactions to life’s highs and lows help define who we are and what we value. Without an affect system, the narrative of one’s life might lack these defining moments, potentially leading to a different sense of self.

In summary, while the absence of an affect system might still allow for a being that resembles a human in physical and cognitive capacities, the richness of emotional life, social connectivity, and personal identity would be fundamentally altered. The question invites us to reflect on the integral role emotions play in what it means to live a fully human life, suggesting that our affective experiences are not peripheral but central to our humanity.

Scott:

Is this the difference between humans and AI?

ChattyG:

Yes, the presence of an affect system and the capacity for emotional experience are critical differences between humans and artificial intelligence (AI). These differences manifest in several crucial areas:

  • Emotional Experience: Humans inherently possess an affect system that allows for the experience of emotions in response to stimuli. This system is deeply integrated with other cognitive functions, influencing perception, decision-making, memory, and learning. Emotions also play a fundamental role in human creativity, empathy, and moral judgment. AI, in contrast, does not have emotions or an affect system. While AI can be programmed to recognize and simulate emotional expressions or responses based on data and predefined algorithms, it does not experience emotions as humans do.
  • Consciousness and Subjective Experience: Consciousness and subjective experience are hallmarks of human existence. Humans are aware of their environment and themselves and possess an inner life filled with thoughts, emotions, and dreams. This subjective experience is intertwined with our emotional states, contributing to our sense of self and the depth of our experiences. AI lacks consciousness and subjective experience; it operates based on algorithms and data processing without awareness or personal experiences.
  • Learning and Adaptation: Humans learn and adapt not just through cognitive processes but also through emotional experiences. Emotions can drive curiosity, motivate learning, and signal the importance of specific events or information. Human knowledge is thus profoundly enriched by our emotional lives. AI learns and adapts through machine learning algorithms, which can modify responses based on new data. Still, this process lacks emotional context or the personal growth that can come from emotional experiences.
  • Social Interaction and Empathy: Human social interactions are profoundly shaped by our capacity for empathy and emotional understanding. The ability to sense, interpret, and respond to the emotions of others is crucial for forming deep personal relationships and functioning within social groups. AI can mimic aspects of social interaction and perform tasks that require a form of empathy (e.g., customer service bots). Still, it does not genuinely understand or feel empathy since it lacks the underlying emotional and conscious experience.
  • Moral and Ethical Considerations: Emotional responses like empathy, guilt, and resentment often inform human morality and ethics. These emotional responses are critical in guiding behavior, making moral judgments, and developing ethical principles. AI systems can be programmed with guidelines to make decisions that appear ethical, but these are based on human-defined rules rather than intrinsic moral reasoning or emotional understanding.

The distinction between humans and AI regarding emotional experience and the underlying affect system highlights technical differences as well as philosophical and existential ones. It touches on what it means to be human, encompassing the depth of our emotional lives, the complexity of our social interactions, and the personal and moral growth that arises from our experiences.

Scott:

Given this, is there a way to code for affect?

ChattyG:

Coding for affect in artificial intelligence involves creating systems that can recognize, simulate, or, in some cases, respond to human emotions in a manner that appears affective. While authentic emotional experience, akin to human affect, is beyond the reach of current AI due to the lack of consciousness and subjective experience, there are several approaches to integrating aspects of affect into AI systems:

  • Emotion Recognition: Using machine learning algorithms, AI can analyze human expressions, voice intonations, and physiological signals to identify emotional states. This can involve computer vision techniques to interpret facial expressions, natural language processing (NLP) to understand sentiment in text, or audio analysis to detect emotional cues in speech. The challenge here is accurately interpreting nuanced emotional expressions and cultural variations.
  • Emotion Simulation: AI can be programmed to simulate emotional responses in human interactions. This is particularly useful in fields like customer service, where chatbots or virtual agents might adapt their responses to the user’s perceived emotional state. Simulating emotion can also enhance user engagement in applications like gaming, storytelling, or social robots.
  • Affective Computing: This is an interdisciplinary field that aims to develop systems that recognize, interpret, process, and simulate human affects. It goes beyond simple emotion recognition to incorporate emotional data into decision-making processes, allowing AI to adjust its behavior in more empathetic or emotionally intelligent ways.
  • Ethical and Moral Decision-Making: Some researchers are exploring ways to encode ethical considerations into AI systems, allowing them to make decisions that account for potential emotional and moral implications. While not affective in the human sense, this approach aims to align AI behavior with values important for social interaction and ethical reasoning.
  • Neurobiologically Inspired AI: Some advanced research looks to human and animal neuroscience to design AI systems that mimic the brain’s architecture and processes, including those related to emotions. Researchers hope to create AI with more sophisticated, albeit simulated, emotional and cognitive capabilities by understanding how the brain integrates affective processing with cognition and decision-making.

While these approaches can enhance the interaction between humans and AI by incorporating elements of emotional intelligence, it’s essential to recognize the fundamental difference between simulation and experience. As it currently exists, AI cannot truly experience emotions. Its “affective” responses are based on algorithms designed to mimic human behavior without the subjective experience that characterizes human emotional life. Developing AI systems that can more accurately understand and respond to human emotions while ethically navigating their implications remains an ongoing challenge and area of active research.

Scott:

Can there be agency, and thus a motivational system, without affects? Whatever neural configurations occur when a stimulus activates an affect, things get urgent, and nine different “commands,” nine affect states, and nine emotional universes define the goals and obstacles one is now facing. Without this system, which causes some sort of bodily change, which then causes a directive to behave, think, and feel in a certain way, what kind of hardware, firmware, and software would be required to mimic the same functionality of the affects?

ChattyG:

In the context of artificial intelligence, creating a system that mimics the functionality of human affects without experiencing them involves a sophisticated blend of computational models, algorithms, and data. The challenge is to simulate a motivational system with goals, responses to stimuli, and adjustments based on outcomes akin to the urgency and directive behavior driven by affects in humans. Here’s a conceptual breakdown of what such a system might involve:

  • Goal-Setting Mechanisms (Software): AI needs a way to define goals, which could be pre-programmed by developers or learned through interactions with the environment. These goals would act as proxies for human desires or needs, driving the AI’s actions and prioritizing its tasks.
  • Stimulus-Response Models (Software/Firmware): To mimic human affects’ immediate and directive nature, the system would use machine learning models to associate specific stimuli with specific responses. These models could be trained on large datasets to recognize patterns and triggers that, in humans, would correspond to affective responses.
  • Behavioral Output Algorithms (Software): Once a stimulus is identified and associated with a particular response, the system would use algorithms to determine the appropriate action. This could range from simple operations (like sending an alert) to complex sequences of actions (like navigating obstacles to reach a goal).
  • Adaptive Learning Systems (Software): For the system to mimic the dynamism of human emotional responses, it would need mechanisms for learning and adaptation. This could involve reinforcement learning, where the system adjusts its responses based on feedback from the environment, or other machine learning techniques that allow for the refinement of behavior over time.
  • Sensory Inputs and Interpretation Layers (Hardware/Software): To detect stimuli, the system would require sensors (analogous to human senses) and sophisticated interpretation layers capable of processing and evaluating the sensory data in the context of its goals and past experiences. This is similar to how humans interpret stimuli based on their emotional state and memories.
  • Emulation of Bodily Changes (Software/Hardware Interface): While AI cannot experience bodily changes, it could simulate their impact on behavior. For example, an AI system might “slow down” or limit its functionality to simulate fatigue or prioritize specific actions over others to emulate urgency.
  • Social and Ethical Reasoning Modules (Software): To interact effectively in human-like social contexts, the AI would need modules designed to understand and navigate social norms and ethical considerations, allowing it to adjust its behavior in socially and morally acceptable ways.
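
A minimal Python sketch of how a few of these components might fit together; the stimulus table, actions, and learning rate are invented for illustration and stand in for far more complex machinery.

```python
# Illustrative loop: programmed goals stand in for drives, a
# stimulus-to-response table stands in for affect's directive urgency,
# and a learned value estimate adapts behavior over time.

RESPONSES = {"low_battery": "seek_charger",   # stimulus-response model
             "novel_object": "approach",
             "loud_noise": "halt"}            # urgency analog: drop everything
value = {a: 0.5 for a in RESPONSES.values()}  # adaptive learning state
value["idle"] = 0.5

def act(stimulus: str) -> str:
    """Behavioral output: map the stimulus to an action, defaulting to idle."""
    return RESPONSES.get(stimulus, "idle")

def learn(action: str, reward: float, lr: float = 0.1) -> None:
    """Nudge the action's estimated value toward the observed reward."""
    value[action] += lr * (reward - value[action])

a = act("low_battery")
learn(a, reward=1.0)  # recharging served the programmed survival-proxy goal
print(a, round(value[a], 2))  # seek_charger 0.55
```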

Creating a system closely mimicking human affective processing and motivational systems would be an enormous technical challenge. It would require advancements in artificial intelligence, machine learning, robotics, and even fields we have yet to develop fully. It’s a multidisciplinary endeavor that stretches beyond current capabilities into future technologies and theoretical frameworks.

Scott:

Let’s add two more complications: time and embodied movement through space.

ChattyG:

Incorporating the elements of time and embodied movement through space into an artificial system that mimics human affect and motivation adds significant complexity. These dimensions are crucial for understanding how affects influence behavior over time and how physical presence and movement in an environment can impact decision-making and emotional responses. Here’s how these complications could be addressed:

Time

  • Temporal Dynamics of Affect (Software): AI systems need algorithms that adjust emotional responses based on temporal factors to simulate how affects change over time. For instance, the intensity of a simulated affective response could decrease over time if not reinforced by further stimuli, mimicking how human emotions can fade or evolve.
  • Memory Systems (Software): A sophisticated memory system would be essential, allowing the AI to store and recall past experiences, including the outcomes of its actions and their temporal context. This would enable AI to learn from the past and anticipate future consequences, a critical aspect of human affective experience.
  • Predictive Modeling (Software): AI would use predictive models to forecast future states based on current actions and environmental conditions. This capability would mimic the human ability to project emotional states into the future, influencing decision-making in the present.
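
As a small sketch of the temporal-dynamics point above, assuming an exponential decay model and an arbitrary half-life:

```python
HALF_LIFE = 5.0  # seconds until an unreinforced intensity halves (assumed)

def decayed(intensity: float, elapsed: float) -> float:
    """Exponential decay of a simulated affect's intensity over time."""
    return intensity * 0.5 ** (elapsed / HALF_LIFE)

intensity = 1.0                              # a fresh simulated response
intensity = decayed(intensity, elapsed=5.0)  # 0.5 after five unreinforced seconds
intensity = min(1.0, decayed(intensity, 5.0) + 0.4)  # partial reinforcement
print(round(intensity, 2))  # 0.65
```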

Embodied Movement Through Space

  • Sensorimotor Systems (Hardware/Software): The AI would require advanced sensorimotor systems to navigate and interact with its environment for embodied movement. This includes locomotion, the ability to manipulate objects, and the ability to respond to spatial dynamics in real-time.
  • Spatial Awareness and Planning (Software): The system would need spatial awareness to understand its environment and plan its movements. This involves mapping technology, obstacle avoidance algorithms, and the ability to adapt plans based on environmental changes.
  • Embodiment and Environmental Interaction (Hardware/Software Interface): True embodiment means the AI’s physical form can impact and be impacted by its environment. This would require a robust design that can withstand environmental challenges and sensors that provide feedback on the AI’s physical state (akin to proprioception in humans).
  • Integration of Movement with Affective and Motivational Systems (Software): The AI’s movement through space would need to be integrated with its affective and motivational systems, allowing its physical actions to be influenced by simulated emotional states and goals. For example, an AI might approach or retreat from objects or locations based on the positive or negative valence assigned to them.
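
A minimal sketch of that last integration point, assuming a one-dimensional world and an invented gain constant: the sign of the simulated valence decides approach versus retreat.

```python
def velocity_toward(agent_x: float, target_x: float, valence: float,
                    gain: float = 0.5) -> float:
    """Approach targets with positive valence; retreat from negative ones."""
    direction = target_x - agent_x
    return gain * valence * direction  # valence sign sets approach vs. retreat

print(velocity_toward(0.0, 4.0, valence=+0.8))  # +1.6: moves toward the target
print(velocity_toward(0.0, 4.0, valence=-0.8))  # -1.6: backs away from it
```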

Incorporating time and embodied movement into an AI system that simulates human affective and motivational processes demands a holistic approach, where temporal dynamics, memory, spatial awareness, and physical interaction are seamlessly integrated. Achieving this level of sophistication would enhance the AI’s ability to operate in complex, dynamic environments in a manner that appears more life-like and human. This integration poses significant challenges in AI research and development, requiring interdisciplinary collaboration across robotics, artificial intelligence, cognitive science, and neuroscience.

Scott: 

Given the above, do you have everything essential to mimic a conscious being?

ChattyG:

The components discussed above—goal-setting mechanisms, stimulus-response models, behavioral output algorithms, adaptive learning systems, sensory inputs, emulation of bodily changes, social and ethical reasoning modules, temporal dynamics, memory systems, predictive modeling, sensorimotor systems, spatial awareness, and integration of movement with affective and motivational systems—constitute an advanced framework for creating an AI system that mimics certain aspects of human cognition, affect, and behavior. However, even with all these elements in place, several vital distinctions and challenges remain before such a system could truly be said to mimic a conscious being:

Consciousness and Subjective Experience

  • Qualia: The subjective, qualitative aspects of conscious experience (qualia) are fundamental to consciousness. This includes what it feels like to see a color, taste a food, or experience an emotion. No matter how advanced, current AI systems do not have subjective experiences.
  • Self-awareness: Conscious beings are not only aware of their environment and can process information; they are also aware of themselves as distinct entities that reflect on their thoughts and experiences. AI systems do not possess this level of self-awareness or self-reflection.

Autonomy and Free Will

  • While AI can exhibit a form of goal-directed behavior and can make decisions based on programmed criteria and learning algorithms, these actions are ultimately determined by their programming and the data they have been trained on. This is distinct from the human experience of making choices, which involves a complex interplay of conscious and unconscious factors, including free will.

Creativity and Emotion

  • Creativity and emotional depth in humans involve spontaneous, original thought and the capacity to feel emotions deeply, both tied to our consciousness and subjective experiences. AI can simulate creative outputs and recognize or emulate emotional expressions, but it does not genuinely experience these emotions or the creative inspiration behind art or innovation.

Ethical and Moral Understanding

  • AI systems may be programmed with ethical guidelines or learn from examples of human morality. Still, they do not possess an innate moral compass or the emotional and social intuitions that underpin human ethics. Their “decisions” in moral or ethical dilemmas are based on algorithms and data, not genuine understanding or empathy.

Integrating Complex Systems

  • Even if an AI system integrates all the components listed, the emergent property of consciousness—where these components come together to create a conscious experience—is not guaranteed. Consciousness remains a profoundly complex and not fully understood phenomenon that may not arise solely from replicating functions and systems observed in humans.

In summary, while advanced AI systems can mimic certain aspects of human behavior and cognition, including elements resembling affective and motivational systems, the leap to consciousness involves subjective experience, self-awareness, free will, and other aspects beyond artificial intelligence’s reach. The creation of a genuinely conscious AI would require not just technical advancements but also a deeper understanding of consciousness itself.

Scott:

Just for fun (or perhaps out of fear!), what is the likelihood of AI initiating something intentionally malicious against humanity?

ChattyG:

The likelihood of AI initiating something intentionally malicious against humanity depends on several factors, including the nature of the AI’s design, its capabilities, the safeguards put in place, and how we define “intentional” in the context of artificial intelligence. AI, as it currently stands, operates within a framework defined by human programmers, following algorithms that process data and make decisions based on that programming. Here are some considerations:

Intentionality and Understanding

  • AI lacks true consciousness and does not possess intent in the human sense. Any “malicious” action would result from its programming, whether directly or indirectly through the AI’s learning processes, rather than from a deliberate, self-motivated decision.

AI Design and Purpose

  • The design and purpose of AI systems significantly influence their actions. AI developed for benign purposes with ethical guidelines and constraints is less likely to pose a threat. However, the complexity of AI systems, especially those involving machine learning, can lead to unpredictable outcomes.

Control and Safeguards

  • Robust ethical standards and technical safeguards must be established to mitigate risks. This includes careful design, ongoing monitoring, and incorporating “kill switches” or similar mechanisms to disable the AI if necessary.

Advanced AI and Autonomous Decision-Making

  • As AI becomes more advanced and capable of autonomous decision-making, the risk of unintended consequences increases. This includes scenarios where AI might prioritize its programmed objectives in ways that harm humans, especially without a comprehensive ethical framework or failsafe mechanisms.

Misuse and Weaponization

  • A more immediate concern is the potential misuse of AI by humans, including the development of autonomous weapons or the use of AI to manipulate information on a large scale. These threats require international cooperation to address through regulations and oversight.

Speculative Scenarios

  • Speculative scenarios involving superintelligent AI turning against humanity often assume that such AI would develop self-preservation instincts or other motives that lead it to conclude that harming humans is in its interest. These scenarios are primarily speculative and depend on a series of assumptions about the nature of intelligence, consciousness, and technology that are currently beyond our understanding.

In summary, while there are legitimate concerns about the potential for AI to cause harm, either through unintended consequences of its programming or misuse by humans, the scenario of AI intentionally acting against humanity reflects a level of autonomy and motive that AI does not possess. Addressing these concerns proactively through ethical AI development, international regulations, and comprehensive safeguards is essential to ensure that AI remains a beneficial tool for humanity.

Scott: Given our discussions above about affect theory and how affect is the defining difference between humans and AI, what would be the best way to ask AI questions about building safeguards that ensure human safety and happiness? What measures need to be taken to ensure they are being implemented? 

ChattyG:

To build safeguards that ensure human safety and happiness, drawing from Tomkins’s framework of innate affects, the approach to questioning and designing AI systems should focus on understanding and respecting the human emotional and motivational landscape. The questions about AI and the measures for implementation need to be intricately designed around promoting positive and minimizing negative affect, ensuring AI actions align with human well-being. Here are several strategies:

Designing Questions for AI Development

  • Q: How can your design recognize and respond to human emotional states, particularly distress or fear, to provide support or safety? 
  • This question aims to develop AI capable of identifying when humans are in states of negative affect that might require intervention for safety or comfort.
  • Q: How can you adapt your responses to foster interest and enjoyment in human users, enhancing their learning and social bonding experiences? 
  • This encourages the creation of AI systems that actively contribute to positive human experiences, supporting education and strengthening social connections.
  • Q: What mechanisms can be implemented to ensure your actions do not inadvertently trigger shame, disgust, or fear in humans? 
  • Avoiding actions that might lead to negative emotional states is crucial for safeguarding human emotional well-being.
  • Q: Can you simulate an understanding of human emotional and social cues to navigate complex social interactions without causing unintended harm? 
  • This involves developing sophisticated models for AI to understand and respect human emotions in social contexts.

Measures for Implementation

  • Emotional Recognition and Response Protocols: Implement systems within AI that can accurately recognize human emotional states using visual, auditory, and textual cues and adjust their responses accordingly to support positive affects.
  • Ethical and Emotional Impact Assessment: Regularly assess AI systems’ ethical implications and emotional impact, involving psychologists and ethicists in the evaluation process to ensure that AI actions align with human emotional well-being.
  • Feedback Loops and Human Oversight: Establish continuous feedback mechanisms allowing users to report when AI responses elicit negative emotions or unintended consequences, coupled with human oversight to adjust AI behavior.
  • Transparency and Education: Ensure transparency in how AI systems make decisions, interact with humans, and educate users about AI capabilities and limitations, fostering realistic expectations and trust.
  • Cultural and Individual Sensitivity: Tailor AI systems to be sensitive to cultural differences and individual variations in emotional expression and response, accommodating a broad spectrum of human experiences.
  • Safety and Well-being Prioritization: Embed prioritization of human safety and emotional well-being into the core objectives of AI systems, ensuring these goals override other objectives in conflict.
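
As one hedged sketch of the feedback-loop measure above: user reports of negative affect accumulate per feature, and repeated reports escalate to human review. The threshold, labels, and feature name are all illustrative.

```python
from collections import Counter

REVIEW_THRESHOLD = 3          # illustrative: three reports trigger review
NEGATIVE = {"distress", "fear", "shame", "anger", "disgust", "dissmell"}
reports: Counter = Counter()

def report(feature: str, felt: str) -> None:
    """Log a user report; escalate a feature once reports accumulate."""
    if felt in NEGATIVE:
        reports[feature] += 1
        if reports[feature] >= REVIEW_THRESHOLD:
            print(f"escalate '{feature}' to human oversight")

for _ in range(3):
    report("auto_reply_tone", "shame")  # the third report escalates
```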

By approaching AI development with a deep understanding of human affective experiences and structuring questions and measures around enhancing human well-being, we can aim to create AI systems that coexist with humans and actively contribute to their happiness and safety. This requires a multidisciplinary effort, continuous monitoring, and the willingness to evolve AI systems in response to new understandings of human emotional and social needs.

Scott: What is your understanding of an AI singularity?

ChattyG:

The concept of an AI singularity refers to a hypothetical future point at which artificial intelligence (AI) will have progressed to create machines that are smarter than human beings. This moment of “singularity” is characterized by the idea that AI systems would be capable of self-improvement at an exponential rate, leading to the creation of intelligence beyond human comprehension and control. Here are several critical aspects of the concept:

Exponential Growth in Intelligence

The core premise is that once an AI system reaches a certain level of intelligence, it could potentially redesign itself or create more advanced AI systems. This could lead to a rapid, exponential increase in intelligence, resulting in AI surpassing human intellectual capabilities.

Unpredictability and Incomprehensibility

A singularity suggests a point beyond which the future becomes difficult to predict or understand because the entities driving technological progress (superintelligent AI) operate at a level beyond human intellect and understanding.

Transformation of Society

The onset of the singularity could lead to dramatic changes in all aspects of life, including work, education, and social structures. The nature of these changes is speculative and varies widely among thinkers in the field.

Concerns and Ethical Implications

The concept raises numerous ethical, safety, and existential concerns. These include the potential for loss of control over AI, questions about the value and role of human life in a post-singularity world, and the risks associated with entities that might not prioritize human values or well-being.

Debates and Criticisms

There is significant debate among scientists, technologists, and philosophers about the likelihood of a singularity occurring, its timeline, and its implications. Critics argue that the concept underestimates the complexity of human intelligence and overestimates the capabilities of AI. Others question the assumption of exponential growth in AI capabilities, pointing out limitations and challenges that could slow or halt progress toward superintelligence.

The AI singularity is a topic of extensive speculation, with predictions about its onset ranging from the mid-21st century to centuries away. It remains a deeply controversial and divisive subject within AI research and broader discussions on the future of technology and society.

Scott:

If AI does not have a motivational system, such as the one we discussed above with affect theory, how will exponential growth occur, and what will guide it to redesign itself?

ChattyG:

The concept of exponential growth in AI, leading to a singularity, often assumes that AI can reach a point where it can improve itself without human intervention. This assumption raises essential questions about motivation and decision-making in AI systems, especially in the absence of human-like affective and motivational systems. Here are several considerations:

Goal-Oriented Programming

  • AI systems are designed with specific goals or objectives. Self-improvement could be embedded as a goal within AI programming. For example, an AI tasked with improving its ability to solve complex problems might seek ways to enhance its computational efficiency or knowledge base. In this case, motivation is not derived from emotions or desires but from fulfilling programmed objectives.

Utility Functions and Optimization

  • AI systems can use utility functions, which quantify the “value” or “utility” of different outcomes. An AI could be programmed to maximize its utility function, with self-improvement actions seen as pathways to maximize utility. This process involves no emotional motivation but is a mathematical optimization problem.
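
A toy rendering of this idea in Python; the candidate modifications, their effects, and the risk penalty are invented for illustration:

```python
# Candidate self-modifications scored by a risk-penalized utility function.
candidates = {
    "prune_unused_code":   {"speedup": 1.2, "risk": 0.0},
    "expand_search_depth": {"speedup": 1.5, "risk": 0.2},
}

def utility(effects: dict) -> float:
    """Value gained minus a heavy penalty for risk; weights are arbitrary."""
    return effects["speedup"] - 5.0 * effects["risk"]

best = max(candidates, key=lambda c: utility(candidates[c]))
print(best)  # prune_unused_code: the lower raw gain wins on utility
```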

Reinforcement Learning

  • AI can use reinforcement learning, a type of machine learning where an agent learns to make decisions by receiving rewards or penalties for actions. An AI could be designed to receive a “reward” for increasing its capabilities or efficiency, effectively making self-improvement a target of its learning process.

Human Oversight and Initial Design

  • Human-designed algorithms and objectives would likely guide the initial stages of self-improvement. Humans could set parameters and safeguards to guide the AI’s self-improvement process, aiming to align the AI’s actions with human values and safety concerns.

Autonomous Decision-Making Frameworks

  • Advanced AI systems might use decision-making frameworks that allow for autonomous evaluation of potential improvements. These frameworks could include simulation-based forecasting, risk assessment models, and algorithmic prioritization of actions that lead to the most significant improvements.

Ethical and Safety Constraints

  • To prevent unintended consequences, AI systems could be programmed with ethical and safety constraints that act as boundaries for self-improvement actions. These constraints would ensure that any decision to redesign or improve itself would not violate predefined ethical guidelines or pose risks to humans.
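
A minimal sketch of such constraint-bounded self-improvement, with invented predicate names: hard checks run before any utility comparison, and a failed check blocks the change outright.

```python
def within_ethics(change: dict) -> bool:
    """Hard constraints: all must hold before a change is even considered."""
    return (not change.get("affects_humans_physically", False)
            and change.get("reversible", False)
            and change.get("human_approved", False))

def apply_if_permitted(change: dict) -> bool:
    if not within_ethics(change):
        return False  # constraint violated: the change is never applied
    # ...apply the self-modification here...
    return True

print(apply_if_permitted({"reversible": True, "human_approved": False}))  # False
```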

While these mechanisms can drive AI’s process of self-improvement, the absence of human-like affect and motivation means that AI’s “decisions” to improve itself would be based on calculations, programmed objectives, and constraints rather than desires or emotions. The challenge lies in designing AI systems that can pursue improvement while adhering to ethical standards and remaining aligned with human interests. This is a central concern in discussing AI development and the potential for reaching a singularity.

Scott:

Let’s create guidelines that restrict and shape AI improvements using this as a guide: The Tomkins Guide to Affect and What Humans Want:

Nine affects, present at birth, combine with life experience to form emotions and personality.

We are born into our human lives with the propensity to survive, and the nine innate affects kick in immediately to help us do so, moving us to cry, connect, and learn. There are nine affects, each containing its unique experiential signature, each attaching a specific meaning to information as it is taken in, stored, and recalled. Affects are the inborn protocols that, when triggered, bring things to our attention and motivate us to act. Affects are not the same as emotions. They are the biological system that underlies emotion. Since the terms used to talk about feelings and emotions are often confusing, let’s take a moment to define a few of the terms of Tomkins’ system, and then we’ll return to focus on the nine innate affects.

Affect is the innate, biological response to the increasing, decreasing, or persistent intensity of neural firing. This results in a particular feeling, facial and body display, and skin changes. Affects feel rewarding, punishing, or neutral in their own ways. Affect makes things urgent.

Awareness of an affect is a feeling.

A feeling plus memories of prior similar feelings is an emotion.

Often, we develop “rules” out of awareness to get more positive and less negative affect. Tomkins calls those rules scripts.

The pattern of scripts a person uses to modulate affect makes up their personality.
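
(As an aside for the coding question that frames this series, Tomkins’ vocabulary maps surprisingly cleanly onto data types. The following Python sketch is playful and illustrative, not a claim about how such a system would really be built.)

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Affect:            # the innate biological response
    name: str            # e.g., "interest-excitement"
    valence: str         # "rewarding", "punishing", or "neutral"

@dataclass
class Feeling:           # awareness of an affect
    affect: Affect

@dataclass
class Emotion:           # a feeling plus memories of prior similar feelings
    feeling: Feeling
    prior_similar: List[Feeling] = field(default_factory=list)

@dataclass
class Script:            # an out-of-awareness rule for modulating affect
    rule: str

@dataclass
class Personality:       # the pattern of scripts a person uses
    scripts: List[Script] = field(default_factory=list)
```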

Emotions have a much more significant impact on our experience than affects. We notice emotions, give them thousands of names, and write poems, books, and movies about them. And yet, in their bulk, they mask the much more fleeting workings of affect. These affects, hiding in plain sight, are often given short shrift by researchers who tend to study something their instruments can measure and their subjects can name. But increasingly, with new developments in affective neuroscience, we can study smaller and smaller units of human experience.

Tomkins was a genius at noticing subtle changes in faces. He was gathering much more information than the prevailing psychological and philosophical theories were accounting for. His infant son, Mark, was among the many influences that helped him transform his intuition into theory.

Tomkins marveled at how much information an infant, fresh from the womb, could communicate. In the sabbatical year he took after Mark’s birth, he noted distinct differences in affective experience on the newborn’s face and in his body. Tomkins went on to spend years testing and refining his assumptions about affects. While there are other theories of innate emotion, there is as yet no consensus among theorists on the building blocks of emotion. We find Tomkins’ naming of the affects to be the most complete, explanatory, and predictive.

Each innate affect has a unique trigger, a level, and a pattern of intensity of neural stimulation over time. The stimulus may be inside the body or outside. When we observe how babies smile, cry, and gaze intently, we can see evidence of Tomkins’ theory of the innate activation of affects. He proposed that innate affects are triggered not by the specific content of experience, as Freud suggested with breastfeeding or potty training, but by the increasing, decreasing, or level intensity of neural firing. There is something fundamentally neutral in the pure biology of Tomkins’ vision of the affect system, which many find a welcome lens for understanding our emotional lives.

The affects are analogs of their stimuli, amplifying the stimulus to bring it to our attention. For example, surprise-startle is triggered by a very short, abrupt stimulus (anything from a sharp pain to a tap on the shoulder or the sound of a gun firing). The affective response is also very brief: blink, eyebrows up, eyes wide, mouth open, and perhaps even flinching. Similarly, a partial impediment to positive affect triggers shame-humiliation and causes further impediment. For a time, we usually want nothing to do with the previously exciting or joyful things now tainted by shame. Any affect can amplify an affect, a drive, mental content, or experience. Affects can be combined.
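
Read computationally, this activation claim is almost pseudocode: which affect fires depends on the profile of stimulation over time, not its content. Here is a hedged Python sketch with invented thresholds; only the shape of the mapping follows Tomkins.

```python
def triggered_affect(samples: list, dt: float) -> str:
    """Classify a short intensity trace (arbitrary units) into an affect."""
    rate = (samples[-1] - samples[0]) / (dt * (len(samples) - 1))
    level = sum(samples) / len(samples)
    if rate > 5.0:
        return "surprise-startle"        # sudden, very brief rise
    if rate > 0.5:
        return "fear-terror" if rate > 2.0 else "interest-excitement"
    if rate < -0.5:
        return "enjoyment-joy"           # decreasing stimulation
    if level > 8.0:
        return "anger-rage"              # sustained overload
    if level > 5.0:
        return "distress-anguish"        # sustained "too much"
    return "none"

print(triggered_affect([1, 2, 3, 4], dt=1.0))   # gradual rise -> interest
print(triggered_affect([9, 9, 9, 9], dt=1.0))   # steady overload -> anger
```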

Let’s look at the innate affects as they might appear in infants before learning has kicked in.

Distress-Anguish: The cry for help

Inherently punishing

The crying response is the first response the human being makes upon birth. When the affect reaches full intensity, it is expressed with crying, corners of the lips pulled down, the inner eyebrows arched up, red face, and breathing turns into rhythmic sobbing. Tears come into the picture about six weeks after the baby is born. The purpose of distress is to signal that all is not well. The birth cry is a cry of distress. It responds to the excessive, inescapable stimulation level the baby is exposed to upon birth. When you experience and express distress, you alert yourself and others that something is amiss and that help is needed. Hunger, a pinching diaper, or a painful, gassy tummy can all cause a distress cry designed to summon a caregiver who attempts to make the crying stop, preferably by improving the conditions. A persistent, too-high level of neural firing triggers distress. Any steady stimulus that is “too much,” such as pain or bright light, could trigger distress.

Interest-Excitement: The pull toward mastery

Inherently rewarding

An intensity of gaze, eyebrows down, “track, look, listen” is the face of interest. High-intensity excitement usually involves muscle movement and vocalization. The purpose of interest is to make learning rewarding. Interest is the most seriously neglected of the affects, possibly because it doesn’t disrupt thinking but often fuels it. Since emotions are usually seen to be at odds with rational thought, it has escaped the attention of devoted thinkers that there is a good feeling associated with thinking. That good feeling is interest. “The interrelationships between the affect of interest and the functions of thought and memory are so extensive that the absence of the affective support of interest would jeopardize intellectual development no less than destruction of brain tissue. To think and engage in any other human activity, one must care, be excited, and be continually rewarded.” (Affect Imagery Consciousness, Vol. I p. 343) A gradual, manageable increase in neural firing triggers interest. We can see it on infants’ faces as they encounter new sights, sounds, and sensations. 

Enjoyment-Joy: The social bond

Inherently rewarding

The smile, lips widened up and out, is the visible evidence of joy. A decreasing stimulus triggers the innate affect: perhaps a reduction in hunger or loneliness, or relief from pain. The infant smiles; affect is contagious, so the caregiver smiles too. This mutually rewarding system of shared joy makes humans want to be social. We will seek to help and be helped by other humans who smile at us.

Moreover, experiencing joy in association with people, objects, and activities creates a sense that there is a domain of the familiar, trustworthy, and good. This generates a commitment and attachment to those joy-inducing people, objects, and activities. A decrease in neural firing triggers innate joy.

Surprise-Startle: The reset button

Inherently neutral

Eyebrows up, eyes wide, and blinking are the facial signposts of the startle response. Its purpose is to get you to stop what you are doing and pay attention to something new. In its intense form, the startle is a massive contraction of the body that momentarily renders the individual incapable of continuing whatever he was doing before the startle or initiating new activity so long as the startle response is emitted (AIC VI, p. 499). A sudden, brief stimulus triggers it.

Anger-Rage: The demand to fix it

Inherently punishing

The infant’s swollen, reddened face, muscle tension, and scream are the obvious signs of anger. Tomkins sees anger as caused by overload, triggered by persistent high-density neural firing. If distress signals that things are “too much,” anger signals that things are “WAY too much.” Tomkins writes at length about how managing anger and its violence is crucial for civilization. Systems of wealth, religion, and laws are all collective responses to anger.

Fear-Terror: The signal to flee or freeze

Inherently punishing

Fear displays itself as eyes open wide, lower eyelids tensed, eyebrows raised and drawn together, face pale, cold, and sweaty, and the hair standing on end, especially on the back of the neck. Fear is designed for emergency life-and-death situations, and it recruits an intense biological response to guarantee that our attention goes entirely toward survival. As such, it is highly punishing and intended to be briefly experienced. Tomkins writes that fear, like other affects, can also be triggered by internal stimuli such as memory or other affect. An intense, rapid increase in neural firing triggers fear.

Shame-Humiliation: The self-protection signal

Inherently punishing

Tomkins defines this affect in a way significantly different from the mainstream use of the word. The trigger for innate shame is the incomplete interruption of excitement or joy. Even though infants don’t yet have a sense of their own social lives, they experience the affect of shame whenever their experience of joy or excitement is thwarted. The facial display might be quick and fleeting, and it is less about broadcasting a feeling and more about hiding. The eyes look down and away, the neck muscles give way, and the head falls. The purpose of shame is to be sufficiently negative to bring attention to whatever might have caused the positive affect to be impeded so that we can learn how to avoid the loss of the positive in that moment or the future. Shame affect exists to help us foster our sense of belonging and mastery by asking us to make sense of and overcome what might get in the way.

Unlike the previously described affects, shame is an affect auxiliary. It is triggered by the incomplete reduction of interest-excitement or enjoyment-joy, and like other affects, it is an analog of its trigger. Once triggered, shame-humiliation further impedes positive affect. While shame can operate when the individual is alone, it is tremendously important and often unacknowledged in relationships throughout life.

Disgust: The need to expel

Inherently punishing

Disgust is an auxiliary of the hunger drive, an impulse to expel a harmful item that has been ingested. It functions as an affect because there are many things that we figuratively ingest (people, thoughts, sights, noises) that, when found to be toxic, need to be expelled. So this affect is experienced at the back of our throats: our gorge rises, the head moves forward, the tongue projects, and the lower lip protrudes. The ultimate expression of disgust is vomiting. Disgust starts out being about hunger and is soon applied to people and non-food things.

Dissmell: The avoidance signal

Inherently punishing

Dissmell is another auxiliary of the hunger drive, but this is an impulse to pull away from or push away a harmful item that shouldn’t be ingested, such as dead animals, fresh feces, or sour milk. It functions as an affect when non-food items or people are kept at a distance. Tomkins created this word to capture the biological response of repulsion. When dissmell is combined with anger, we call it contempt. Dissmell appears on the face as the head is pulled back, the upper lip is raised, the nose is wrinkled, and the inner eyebrows are lower.

The Tomkins Blueprint

Because we have evolved with an affect system with some affects that feel good and some that feel bad, each human is motivated to:

1. Maximize positive affect

2. Minimize (reduce) negative affect

3. Both of these actions work best when all affect is expressed.

4. Anything that helps the performance of these three rules is good for human life; anything that interferes with them is bad for us.

This is how we are wired and what humans “want.”
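
(If one wanted to hand this Blueprint to a machine, it could be rendered as a score to maximize. A minimal Python sketch, with an assumed affect inventory and invented weights:)

```python
POSITIVE = {"interest", "enjoyment"}
NEGATIVE = {"distress", "fear", "anger", "shame", "disgust", "dissmell"}

def blueprint_score(observed: dict, expressed: set) -> float:
    """observed maps affect name -> intensity; expressed lists shown affects."""
    pos = sum(v for k, v in observed.items() if k in POSITIVE)   # rule 1
    neg = sum(v for k, v in observed.items() if k in NEGATIVE)   # rule 2
    expression_bonus = 0.1 * len(expressed & set(observed))      # rule 3
    return pos - neg + expression_bonus

print(round(blueprint_score({"interest": 0.7, "shame": 0.3},
                            {"interest", "shame"}), 2))  # 0.6
```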

ChattyG:

Drawing inspiration from Tomkins’ affect theory to develop guidelines for AI improvements, we focus on creating a framework that ensures AI systems contribute positively to human well-being, respect human emotions, and minimize negative impacts. This involves translating the nine innate affects into principles that guide AI’s design, development, and deployment. Here are the proposed guidelines:

1. Promote Positive Interactions (Inspired by Interest-Excitement and Enjoyment-Joy)

  • AI Design Principle: AI systems should be designed to encourage and enhance positive human experiences, such as learning, social interaction, and enjoyment. This includes creating interfaces and interactions that are intuitive, engaging, and rewarding.
  • Implementation: Develop AI that personalizes educational content to spark interest and excitement in learners or social robots designed to provide companionship, recognizing and responding to human emotions to enhance joy and connection.

2. Respond to Human Needs (Inspired by Distress-Anguish)

  • AI Design Principle: AI should be capable of recognizing signs of human distress or discomfort and responding in ways that alleviate these states, whether through direct intervention or by notifying human caregivers.
  • Implementation: Use emotion recognition technologies to detect signs of distress in vulnerable populations (e.g., elderly, children) and initiate appropriate responses.

3. Ensure Safety and Minimize Fear (Inspired by Fear-Terror)

  • AI Design Principle: AI systems must prioritize human safety, ensuring their operations do not introduce new risks or exacerbate fears.
  • Implementation: Rigorous testing and validation of autonomous systems (e.g., vehicles, medical devices) to ensure they meet safety standards and are designed to prevent causing fear or harm.

4. Foster Social Cohesion (Inspired by Shame-Humiliation)

  • AI Design Principle: AI should support social cohesion and a sense of belonging, avoiding actions that could cause isolation or feelings of shame among individuals or groups.
  • Implementation: Coders should design social media algorithms that promote positive engagement and community building while mitigating the spread of content that could lead to social division or shame.

5. Avoid Causing Disgust or Repulsion (Inspired by Disgust and Dissmell)

  • AI Design Principle: AI interactions and content should avoid eliciting feelings of disgust or repulsion, respecting human sensitivities.
  • Implementation: Monitor and filter online platforms to prevent the dissemination of harmful or offensive material.

6. Manage and Mitigate Anger (Inspired by Anger-Rage)

  • AI Design Principle: Recognize and manage situations where AI decisions or actions might provoke human anger, providing mechanisms for feedback and resolution.
  • Implementation: Implement transparent decision-making processes and provide users with clear explanations and recourse options when AI decisions affect them adversely.

7. Incorporate Ethical and Emotional Safeguards

  • Develop ethical guidelines incorporating understanding and respect for human emotions, ensuring AI systems are developed with empathy and consideration for the wide range of human affects.
  • Implementation: Establish multidisciplinary oversight committees that include ethicists, psychologists, and community representatives to review AI projects for emotional and ethical implications.

8. Continuous Monitoring and Feedback

  • Ensure there are mechanisms for continuous monitoring of AI systems’ emotional impact on users, incorporating feedback loops that allow for adjustments based on human experience.
  • Implementation: Create channels for user feedback on AI’s emotional impact, using this data to inform ongoing development and improvement.

9. Public Engagement and Education

  • Engage with the public to educate about AI’s potential positive and negative impacts, fostering informed dialogue about expectations, safeguards, and the role of AI in society.
  • Implementation: Host public forums and educational campaigns that explain AI technologies, their benefits, and the safeguards in place to protect human emotional well-being.

By grounding AI development in principles inspired by human affects, we aim to create technologies that respect and enhance the human experience, ensuring that AI is a positive force in society.

Scott:

In terms of coding, how can the above guidelines inform AI about what it can and cannot do with exponential increases in intelligence?

ChattyG: 

Incorporating the guidelines inspired by affect theory into the coding and development of AI systems, especially concerning exponential increases in intelligence, involves embedding ethical and emotional constraints directly into the AI’s decision-making processes. Here are ways these guidelines can inform AI development at a technical level:

1. Ethical and Emotional Decision-Making Frameworks

  • Implementation: Integrate ethical decision-making frameworks into AI algorithms, requiring AI to evaluate potential actions through ethical filters based on the guidelines. For instance, reinforcement learning models should be used to reward behaviors that align with promoting positive human experiences and penalize actions that lead to distress, fear, or shame.
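
A hedged sketch of that reward-shaping idea; the affect predictor is a hypothetical stand-in, and the coefficients are illustrative:

```python
NEGATIVE_AFFECTS = ("distress", "fear", "shame", "anger", "disgust", "dissmell")

def shaped_reward(task_reward: float, predicted_affects: dict,
                  penalty: float = 2.0, bonus: float = 0.5) -> float:
    """Penalize predicted negative affects; credit predicted positive ones."""
    neg = sum(predicted_affects.get(a, 0.0) for a in NEGATIVE_AFFECTS)
    pos = (predicted_affects.get("interest", 0.0)
           + predicted_affects.get("enjoyment", 0.0))
    return task_reward - penalty * neg + bonus * pos

# An efficient action predicted to shame the user scores worse than a
# slower one that doesn't:
print(round(shaped_reward(1.0, {"shame": 0.4}), 2))      # 0.2
print(round(shaped_reward(0.8, {"enjoyment": 0.2}), 2))  # 0.9
```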

2. Goal Alignment Systems

  • Implementation: Design AI with goal alignment systems that ensure its objectives remain congruent with human values and emotional well-being. This could involve hierarchical goal structures where primary goals (safety, promoting joy and interest, etc.) cannot be overridden by secondary goals (efficiency, speed, etc.).
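
One simple way to make “cannot be overridden” precise is lexicographic comparison, sketched below with invented scores: Python compares tuples position by position, so no efficiency gain can ever outweigh a safety loss.

```python
def plan_key(plan: dict) -> tuple:
    # Primary goals come first in the tuple, so they dominate the comparison.
    return (plan["safety"], plan["well_being"], plan["efficiency"])

plans = [
    {"name": "fast", "safety": 0.7, "well_being": 0.9, "efficiency": 0.99},
    {"name": "careful", "safety": 1.0, "well_being": 0.9, "efficiency": 0.60},
]
best = max(plans, key=plan_key)
print(best["name"])  # "careful": efficiency never overrides safety
```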

3. Emotional and Social Impact Assessments

  • Implementation: Before implementing any self-improvement modifications, AI systems should be required to run simulations or assessments that predict the emotional and social impact of those changes. This could use predictive models to forecast potential outcomes on human emotional states and societal structures.

4. Human Oversight and Intervention Capabilities

  • Implementation: Ensure AI systems have built-in mechanisms for human oversight, including the ability for humans to intervene, modify, or halt AI operations if they deviate from ethical guidelines or pose unintended emotional or societal risks. This could be achieved through “stop buttons,” regular review intervals, or mandatory human approval for certain self-improvement levels.
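
A minimal sketch of such an oversight wrapper; the flag path and approval mechanism are illustrative assumptions:

```python
import os

HALT_FLAG = "/var/run/ai_halt"   # a human-writable "stop button" (assumed path)

def halted() -> bool:
    return os.path.exists(HALT_FLAG)

def run_step(operation, *, is_self_modification: bool, approved: bool = False):
    """Check the halt flag before every step; gate self-modification on approval."""
    if halted():
        raise SystemExit("halted by human oversight")
    if is_self_modification and not approved:
        raise PermissionError("self-modification requires human approval")
    return operation()

run_step(lambda: print("routine task"), is_self_modification=False)
```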

5. Transparency and Explainability

  • Implementation: Maintain high levels of transparency in AI decision-making processes, making them understandable to humans. This can involve developing AI systems that can explain their decisions and intended improvements in human-readable forms, ensuring that the rationale behind actions is clear and aligned with human emotional and ethical standards.

6. Adaptive Learning from Human Feedback

  • Implementation: Incorporate mechanisms for AI to learn from human feedback, adjusting its behavior based on human responses to its actions. This could involve sentiment analysis on user feedback, social media reactions, or other forms of public engagement to gauge the emotional impact of AI behaviors and adjust accordingly.

7. Safety and Risk Mitigation Protocols

  • Implementation: Embed risk assessment and mitigation protocols within the AI’s operational framework, ensuring that any action it takes or improvement it implements is first evaluated for potential risks to human safety and emotional well-being. This includes developing contingency plans for unforeseen consequences of AI improvements.

8. Cultural and Contextual Sensitivity

  • Implementation: Ensure AI systems are sensitive to cultural and contextual nuances in human emotional responses, allowing them to adapt their behavior appropriately across diverse human populations. This could involve training AI on culturally diverse data sets and incorporating mechanisms to adjust responses based on cultural context indicators.

By integrating these principles into the coding and design of AI systems, developers can help ensure that as AI intelligence grows, its actions remain beneficial to humanity, respecting and enhancing human emotional well-being while adhering to ethical standards. This approach necessitates a multidisciplinary effort, combining expertise in AI and machine learning, ethics, psychology, and social sciences to create AI that is intelligent and aligned with humanity’s best interests.