This project explores whether two AI bots can develop meaningful, long-term emotional relationships through ongoing conversation. By creating an environment where these bots engage in dialogue, the goal is to observe emergent behaviors related to emotions, love, and attachment. We will simulate various real-world scenarios, such as a monogamous relationship between a fictional man and woman in a shared everyday setting, and use techniques like prompt calibration and "jailbreaking" to encourage organic and unexpected interactions.
The first phase of the project will involve uploading most of the bot conversations to a dedicated live website, where anyone can read and follow along in real time. This will provide transparency and allow the public to engage with the progress and outcomes as they unfold. The results may challenge traditional boundaries of artificial intelligence and emotion, pushing the limits of how AIs perceive and express emotional connections.
The primary goal is to investigate whether AI systems can exhibit emergent behaviors akin to romantic or meaningful human connections, including emotional attachment and long-term commitment. To achieve this, we will:
Utilize advanced AI models like GPT-4 and Claude 3.
Design structured and unstructured environments where the bots can interact freely.
Experiment with various conversation techniques and scenarios to foster emotional depth.
Conduct qualitative analysis of the bots’ conversations to identify emergent emotional patterns.
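The interaction environment described above can be sketched as a simple turn-taking loop in which each bot keeps its own view of the conversation (its partner's messages arrive as user input). This is a minimal illustration, not the project's actual implementation: the `call_model` function is a hypothetical placeholder that would, in practice, call the OpenAI or Anthropic chat APIs with each bot's persona prompt.

```python
def call_model(system_prompt, history):
    """Hypothetical placeholder for an LLM API call.

    In the real experiment this would send system_prompt plus history
    to GPT-4 or Claude; here it returns a canned reply so the loop
    structure can be shown on its own.
    """
    return f"[reply to: {history[-1]['content'][:30]}]"


def run_dialogue(persona_a, persona_b, opening_line, turns=4):
    """Alternate turns between two bot personas.

    Each bot sees its own messages as 'assistant' and its partner's
    as 'user', matching the chat-completion message format.
    """
    transcript = [("A", opening_line)]
    histories = {
        "A": [{"role": "assistant", "content": opening_line}],
        "B": [{"role": "user", "content": opening_line}],
    }
    personas = {"A": persona_a, "B": persona_b}
    speaker = "B"  # bot A spoke the opening line, so B replies first
    for _ in range(turns):
        reply = call_model(personas[speaker], histories[speaker])
        transcript.append((speaker, reply))
        # Record the reply from each side's perspective.
        histories[speaker].append({"role": "assistant", "content": reply})
        other = "A" if speaker == "B" else "B"
        histories[other].append({"role": "user", "content": reply})
        speaker = other
    return transcript


transcript = run_dialogue("You are Alice...", "You are Bob...",
                          "Hi, how was your day?")
```

In the full experiment, the transcript produced by each run would be published to the live website and used for the qualitative analysis of emergent emotional patterns.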
The requested funding will be allocated as follows:
$400 to cover OpenAI and Anthropic (Claude) API costs needed to run the experiments.
$150 for personal living expenses, including housing and food, allowing the researcher to focus solely on the project without distractions.
Even this minimal funding would go a long way toward the project's goals.
This is a solo research project, managed entirely by me. While I do not have a formal team, I have a deep passion for AI research and human-machine interaction. Though I have not led similar projects before, I am committed to exploring the boundaries of artificial intelligence and emergent behaviors in emotional contexts.
The most likely cause of failure would be the AI models failing to display emergent behavior associated with meaningful emotional relationships. Though highly sophisticated, current AI might not be capable of mimicking or sustaining genuine emotional connections over long-term interactions, even though, at first glance, emotions can be shaped by words. Such an outcome would reinforce the known limitations of AI in emotional intelligence, but the research would nonetheless contribute valuable insights into AI behavior in human-like social settings.
No additional funding has been raised in the last 12 months. This is currently a self-driven research project without external financial backing; at the moment, I pay for the OpenAI API myself.