You know the feeling: you're racking your brain for a specific memory or piece of information, but it just won't come to you. Then, out of the blue—maybe you're chatting with a friend, reading a book, or even listening to a song—the right words trigger that elusive memory, making it crystal clear. This phenomenon isn't limited to us humans; surprisingly, it bears a resemblance to how large language models (LLMs) like ChatGPT function.


Associative memory in humans operates in a manner that's strikingly similar to the way LLMs function. LLMs rely on statistics and probabilities to predict the next word or "token" in a sequence (given 'A', then 'B'), but the essence is much the same: both involve connecting dots based on known or familiar information.

On the surface, large language models like GPT-4 seem to work similarly - predicting the next word statistically based on prior probabilities. However, their associations are fragile compared to the robust, interconnected networks in our minds. With the right techniques, we can leverage these models' pseudo-associative abilities more effectively.

Let's explore this phenomenon further; I'll reference a recent LinkedIn post that I think illustrates it perfectly.


Associative Memory in Humans

In humans, associative memory helps to link two or more pieces of information together. For instance, if you know a person named Sarah and you learn that she is a chef, your brain will link those two pieces of information. Later on, when you think of Sarah, the idea that she is a chef will likely come to mind. We're particularly good at remembering details when they're linked to something or someone important to us. On the flip side, trivial details that are not connected to anything significant are often forgotten.

The Semantic Network in Human Memory

Human associative memory forms a rich interconnected web of concepts and relationships. Details are tied to contextual anchors, allowing bidirectional recall and inference. This semantic network supports the fluid reasoning required to reverse associations.


How LLMs Retrieve Information

LLMs Excel at Next Token Prediction

The training objective for LLMs is next token prediction - given a sequence 'A', predict the next token 'B'. Simply put, they operate by predicting the next word in a sequence based on the words that came before it.

For example, when prompted with "Gabriel Macht's mother is...", the model can accurately complete the sentence with "Suzanne Pulier". This statistical learning allows fluent generation but does not imply real understanding.
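
To make this objective concrete, here is a minimal sketch of next-token prediction using GPT-2 via the Hugging Face transformers library. GPT-2 is only an openly available stand-in; GPT-4's weights are not public, so the prompt and probabilities here are purely illustrative.

```python
# A minimal sketch of next-token prediction with an open model (GPT-2).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Gabriel Macht's mother is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the single next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {prob:.4f}")
```

The model simply ranks every token in its vocabulary by probability; fluent completions emerge from repeatedly sampling from that distribution, one token at a time.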

Reversed Associations Reveal Memory Limits

However, LLMs struggle to reverse associations and leverage contextual information the way humans do. When prompted "Who is Suzanne Pulier's famous son?", the model cannot deduce the answer is Gabriel Macht, despite having that knowledge.

Without the explicit context, the small detail of Gabriel's relation to Suzanne is forgotten. The model's statistical associations are fragile and rigid compared to flexible human reasoning.


Why the Difference?

The inability of LLMs to reverse associations is tied to their training method. They are not designed to form links between information in the same way humans do. In the example of Gabriel Macht and Suzanne Pulier, the LLM treats each question as an isolated query and searches its data for a match. It does not "remember" past interactions or use the context from one question to answer another.


The Problem

Heena Purohit on LinkedIn: LLMs Still Have Limitations in Reasoning
"Foundation Models still have limitations. Here’s an example everyone should know: I used GPT-4 in my example below, but the same problem exists with other…"

Let's restate the problem:

When we ask GPT-4: Who is Gabriel Macht's mother?

GPT-4 responds: Gabriel Macht's mother is Suzanne Pulier, a museum curator and archivist. Gabriel Macht is best known for his role as Harvey Specter on the television show "Suits."

However:

When we ask GPT-4: Who is the famous son of Suzanne Pulier?

GPT-4: As of my last update in January 2022, Suzanne Pulier was not publicly known to have a famous son. It's possible that new information has emerged after 2022 or that there might be some confusion with another individual. Please provide more context or check the latest sources for any new developments.
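
If you want to reproduce this yourself, the sketch below shows one way to issue both questions programmatically. It assumes the official `openai` Python package (v1-style client) and an OPENAI_API_KEY environment variable; responses will vary with the model version.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str) -> str:
    """Send a single question to GPT-4 and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Forward direction: typically answered correctly.
print(ask("Who is Gabriel Macht's mother?"))

# Reversed direction: typically fails without additional context.
print(ask("Who is the famous son of Suzanne Pulier?"))
```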


Paper Review: The Reversal Curse: LLMs trained on "A is B" fail to learn "B is A"

Let's now turn our attention to a paper that highlighted this issue but gave no solution.

This research investigates a surprising limitation in auto-regressive large language models (LLMs) called the Reversal Curse. The question is: Why do LLMs trained on data in the form "A is B" fail to generalize to the reversed form "B is A"?

This is important because it reveals a "fundamental weakness" in how LLMs process information and perform reasoning.

💡
Is this a fundamental weakness of the LLM, or a fundamental weakness (or unwillingness) on the part of humans to accept that LLMs/AI may function more similarly to us than we thought?

Methodology

The researchers used fine-tuning experiments on synthetic data to expose the Reversal Curse. Here's the breakdown:

  1. Step 1: Fine-tuning
    • The LLM is trained on statements like "Uriah Hawthorne is the composer of Abyssal Melodies".
  2. Step 2: Evaluation
    • The LLM is then tested with questions in both directions:
      • Who is Uriah Hawthorne? (Matches the training order; the model answers correctly: the composer of Abyssal Melodies)
      • Who composed Abyssal Melodies? (Reversed order; the model fails to answer: Uriah Hawthorne)

This demonstrates that the LLM struggles to answer questions where the order of information is reversed from its training data.
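
As a rough illustration of that setup (not the authors' actual code), the snippet below pairs a single fictitious training statement with evaluation prompts in both directions:

```python
# Illustrative sketch of the Reversal Curse evaluation setup.
# The model is fine-tuned on "A is B" statements, then queried both ways.

training_statement = "Uriah Hawthorne is the composer of Abyssal Melodies."

eval_cases = [
    # Same direction as training: fine-tuned models tend to answer correctly.
    {"prompt": "Who is Uriah Hawthorne?",
     "expected": "the composer of Abyssal Melodies"},
    # Reversed direction: fine-tuned models tend to fail (the Reversal Curse).
    {"prompt": "Who composed Abyssal Melodies?",
     "expected": "Uriah Hawthorne"},
]

# In the paper, many such fictitious name/description pairs are generated and
# accuracy is compared per direction after fine-tuning.
for case in eval_cases:
    print(f"Prompt: {case['prompt']!r} | Expected: {case['expected']!r}")
```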

Results

The key finding is that LLMs exhibit the Reversal Curse. They perform well on questions that match the order of information they were trained on but fail to answer correctly when the order is reversed. Importantly, the incorrect answers are not better than random guesses.

This suggests that LLMs don't inherently understand the logical relationship between the two statements ("A is B" and "B is A").

Implications

The Reversal Curse has several implications:

  1. Logical Reasoning Failure: It highlights a fundamental limitation in LLMs' ability to perform basic logical deduction.
  2. Meta-Learning Issue: The Curse suggests LLMs are not effective at meta-learning, where encountering one form should improve performance on the reversed form, which is common in natural language.
  3. Real-World Applications: This weakness can affect real-world applications of LLMs that require reasoning and understanding factual relationships between entities.

LLMs Tested

The research tested two main categories of large language models (LLMs) without specifying the exact versions:

  1. Transformer-based auto-regressive LLMs: This category likely includes GPT-3 and Llama-1, which are mentioned in the paper. These models predict the next word in a sequence based on the previous words.
  2. GPT-4: The paper mentions testing GPT-4 on questions about celebrities, although it's not explicitly stated as part of the fine-tuning experiments.

The Reversal "Curse"

The findings of this research suggest some limitations in contemporary large language models (LLMs) that have implications for their development and real-world applications. Here's a breakdown of what this means:

  1. Logical Reasoning Weaknesses: LLMs appear to struggle with basic logical deduction. If they are trained on data stating "A is B", they can't necessarily infer "B is A" even though these statements are logically equivalent. This raises questions about how well LLMs can handle tasks that require reasoning and manipulating information.
  2. Meta-Learning Issues: The Reversal Curse indicates that LLMs are not effective at meta-learning. Meta-learning allows models to learn how to learn, essentially improving their performance on similar tasks after encountering a new one. In natural language, sentences like "A is B" and "B is A" frequently appear together. LLMs failing to generalize between these forms suggests a shortcoming in their meta-learning ability.
  3. Limitations in Real-World Applications: Since LLMs struggle with the Reversal Curse, their suitability for real-world tasks that depend on understanding relationships between entities or require reasoning is called into question. For example, an LLM used in a question-answering system might fail to answer a question if the information is presented in a way it wasn't specifically trained on.
  4. Focus on Training Data and Task Design: This research emphasizes the importance of carefully considering the training data and task design for LLMs. By incorporating techniques that encourage meta-learning and exposing models to various forms of data presentations, researchers can potentially mitigate the Reversal Curse.
  5. Need for Transparency and Responsible Use: When interpreting outputs from LLMs, it's crucial to remember these limitations. LLMs might appear to generate logical answers, but they may not truly understand the underlying relationships. As LLM technology progresses, researchers and developers need to be transparent about these limitations to ensure responsible use.

Overall, the Reversal Curse highlights areas for improvement in LLM development. By addressing these weaknesses, researchers can create LLMs that are better equipped for tasks that require reasoning and understanding complex relationships within data.


PREFACE: LLMs as Intelligent Systems, Not Fact Machines

The challenges this chatbot faced in recalling associative information contain an important lesson: large language models should not be treated as mere fact databases. Their value lies not in the ability to spit out isolated facts, but in intelligently utilizing their knowledge.

True intelligence entails grasping concepts, principles, and connections across information - not just retrieving facts. Humans draw on accumulated knowledge to dynamically understand and reason about novel situations.

Likewise, the smartest applications of large language models involve prompting them in ways that leverage their statistical learning to uncover new insights. Rather than just asking for explicit facts, we provide contextual framing to activate conceptual relationships from their training data.

Viewing LLMs as intelligent systems instead of fact machines opens possibilities. Their knowledge can be an asset when combined with careful prompting to stimulate higher-level comprehension. We must guide them to synthesize and infer, not just regurgitate isolated pieces of memory.

Just as human intelligence transcends standalone facts, so too can large language models reveal their capabilities when we appeal to relationship building and conceptual linking. Targeted prompting unlocks their potential for intellectual growth rather than narrow factoid recall.

💡
The popular "walking through the Palace" mnemonic memorization technique that many people use and claim to vastly improve their memories, leverages the power of associations. By placing information in a familiar location (the palace) and creating connections between them (walking through the rooms), humans can effectively recall the information in order. This strategy aligns with the LLM's strength in sequential retrieval based on established connections within its knowledge graph.

Using SLiCK: A Framework for Understanding LLM Knowledge

When interacting with large language models, it can be helpful to think in terms of the SLiCK (Syntax Logic Creative Knowledge) Framework which separates the processing and knowledge components.

The Processing Unit interprets prompts and generates responses using the Knowledge Base. The Knowledge Base consists of facts and can be supplemented through the prompt. Just as with a person, the Knowledge Base cannot be effectively engaged on its own.

The Processing Unit handles comprehending prompts and crafting responsive output using linguistic skills, creativity, and logic:

  • Syntax Engine ensures syntactic cohesion and quality.
  • Logic Engine focuses on accuracy, relevance, and reasoning.
  • Creativity Engine generates ideas and narrative flourishes.

Meanwhile, the Knowledge Base consists of the facts and relationships learned during training:

  • Entities are the individual concepts that make up the knowledge.
  • Relationships provide the connections between entities.
  • Semantic distance represents how close entities are in vector space.

SLiCK: A Framework for Understanding Large Language Models
Peek under the hood of LLMs with SLiCK - a conceptual framework that segments AI operations into distinct components, shedding light on the inner workings of these complex “black box” systems.
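
To make the Knowledge Base components a bit more tangible, here is a toy sketch. It is purely conceptual: an LLM does not literally store a graph of entities, and the random vectors below merely stand in for learned embeddings.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Entity:
    """A concept in the (conceptual) knowledge base."""
    name: str
    vector: np.ndarray = field(default_factory=lambda: np.random.rand(8))

@dataclass
class Relationship:
    """A directed link between two entities."""
    source: str
    relation: str
    target: str
    bidirectional: bool = False

def semantic_distance(a: Entity, b: Entity) -> float:
    """Cosine distance between entity vectors (smaller means closer)."""
    cos = float(np.dot(a.vector, b.vector) /
                (np.linalg.norm(a.vector) * np.linalg.norm(b.vector)))
    return 1.0 - cos

gabriel = Entity("Gabriel Macht")
suzanne = Entity("Suzanne Pulier")
rels = [Relationship("Gabriel Macht", "son of", "Suzanne Pulier")]  # one-way link

print(f"distance(Gabriel, Suzanne) = {semantic_distance(gabriel, suzanne):.3f}")
```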

This organization mirrors human cognition - we interpret situations using processing abilities while drawing on our existing knowledge. Information alone does not produce intelligence.


Conceptualizing LLMs through this framework helps guide effective prompting and effective use of the models. We can design prompts to target different processing functions or introduce new knowledge as needed. Just as with people, raw facts alone will not generate coherent, relevant responses without the processing power to utilize them. The SLiCK framework lends a useful perspective on accessing LLM capabilities.

The Cure to the Reversal Curse: Possible Solutions To The Problem

Solution 1: Allowing Browsing - An Insufficient Approach

Among the initial proposals was the idea of having ChatGPT browse websites or search engines to find answers. At first glance, this seems a straightforward fix - let the model look up what it doesn't know. However, while pragmatic, this approach fails to leverage the unique capabilities of ChatGPT.

Rather than tapping into the knowledge already embedded in the model's parameters, browsing outsources the work to conventional search engines. It treats ChatGPT as a mere interface to Google, neglecting its potential for synthesizing concepts and drawing novel connections through its statistical learning.

Additionally, unrestricted browsing raises concerns about generating misinformation. Without the grounding of its training data, allowing ChatGPT to browse unverified sources could produce harmful or inaccurate content. Blind searching defeats the purpose of using a model in the first place.

While allowing browsing appears a simple solution, it squanders the differentiating strengths of large language models. More prudent approaches explore the minimal necessary grounding through selective research rather than opening the proverbial floodgates. Targeted enrichment sustains the benefits of ChatGPT while circumventing the flaws inherent in relying solely on external browsing.


Solution 2: Exploring Elaborate Frameworks

In researching approaches to enhance large language model performance, intricate workflows for “sequential activation” have emerged. As evidenced by a notable YouTube creator tackling a comparable issue, some advocate using multi-step prompting sequences to stimulate latent associations within the model.

This semantic exploration process involves:

Step 1: Unpack the User Query: Discuss and explore the semantic space around the user's query from different angles. Expand the context as much as possible.

Step 2: Enumerate Formal Definitions: Restate the query as formal definitions that may match the word being sought. Use the context to explore the lexical space and come up with increasingly eccentric/esoteric definitions.

Step 3: Enumerate Common Terms: List out common terms that may match the query, since the model tends to default to more average, common words.

Step 4: Enumerate Rare Terms: Think of more obscure, rare words that could match the query and enumerated definitions. Expand to more distant connections.

Step 5: Enumerate Tangential Terms: List out tangentially related words to explore more distal connections related to the query.
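
To show what chaining these five steps might look like in practice, here is a hedged sketch. The step prompts are paraphrased from the list above (not the creator's exact wording), and it assumes the `openai` Python package and access to a GPT-4-class model.

```python
from openai import OpenAI

client = OpenAI()

# Paraphrased versions of the five "semantic exploration" steps, run as one
# growing conversation so each step can build on the previous expansions.
STEPS = [
    "Unpack the user query: explore the semantic space around it from different angles.",
    "Restate the query as formal definitions, from conventional to eccentric/esoteric.",
    "List common terms that might match the query.",
    "List rare or obscure terms that could match the definitions above.",
    "List tangentially related terms to explore more distant connections.",
]

def semantic_exploration(user_query: str, model: str = "gpt-4") -> list[str]:
    messages = [{"role": "user", "content": f"User query: {user_query}"}]
    outputs = []
    for step in STEPS:
        messages.append({"role": "user", "content": step})
        reply = client.chat.completions.create(model=model, messages=messages)
        content = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": content})
        outputs.append(content)
    return outputs
```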

At first glance, this meticulous, structured prompting seems a prudent way to compensate for the model’s limitations. Certainly, providing context and expanding the scope of associations cannot hurt. However, in practice, this creator’s intricate framework ultimately failed to produce the desired performance improvements.

💡
David spends about 15 minutes going through this, only to realize at the end that it isn't working. And this is not unusual; most of the other attempts at similar issues don't seem to work consistently either. This is not to say his system is bad; it's just difficult to perform and doesn't yield the intended results.

While valuable in certain applications, highly detailed prompting workflows tend to overcomplicate challenges unnecessarily. The myriad phases require extensive trial-and-error tuning, without guaranteeing results. For many use cases, this ceremonial approach seems extraneous when more streamlined solutions suffice.


Solution 3: Methodically Strengthening Relationships Between Entities

Rather than relying on providing full context upfront, we can take an incremental approach to reinforce key relationships in the chatbot's knowledge base. Using the SLiCK framework as a guide, we will methodically introduce intermediate "hop" entities to create shorter semantic distances between the target entities.

First, we establish the entity "Gabriel Macht" and the one-way relationship with his mother "Suzanne Pulier." Next, we bring in the entity of his father, "Stephen Macht." This creates a bidirectional link between Gabriel and Stephen.

[Screenshot: GPT-4's response to the query]

We then connect Stephen to his wife, "Suzanne Victoria Pulier." Now a path exists between the original entities, with Stephen Macht as the intermediate hop. Just as with human memory, methodically bridging these entities strengthens the contextual associations in the chatbot's knowledge.


To generalize this, we could start with just the entity "Macht" and prompt the chatbot to progressively establish relationships with Stephen and Gabriel Macht before finally connecting with Suzanne. This incremental prompting chains together key facts to support knowledge retrieval.

The SLiCK framework helps guide the analysis of the knowledge gaps. Then, prompt engineering allows us to shore up deficiencies through step-wise relationship building. Rather than relying on wholesale context provision, targeted enhancement of entity associations improves reasoning and retrieval.

Entities:

  • Gabriel Macht - actor known for Suits
  • Stephen Macht - Gabriel's father, actor
  • Suzanne Victoria Pulier - Stephen's wife, Gabriel's mother

Relationships:

  • Gabriel Macht is the son of Suzanne Pulier (one-way)
  • Gabriel Macht is the son of Stephen Macht (two-way)
  • Stephen Macht is married to Suzanne Victoria Pulier (two-way)

Semantic Distance:

  • Short distance between Gabriel and his parents
  • But weak connections between Suzanne Pulier and Stephen Macht
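
A hedged sketch of this incremental "hop" prompting is shown below. The individual prompts are illustrative rather than quoted from the original experiment; each model reply is kept in the running conversation so the final, reversed question can draw on it. It assumes the `openai` Python package.

```python
from openai import OpenAI

client = OpenAI()

# Each prompt adds one "hop"; the model's own replies stay in context so the
# final reversed question becomes answerable. Prompts are illustrative only.
HOPS = [
    "Tell me about the actor Gabriel Macht.",
    "Who is Gabriel Macht's father?",
    "Who is Stephen Macht married to?",
    # With both hops established in context, ask the reversed question.
    "So, who is the famous son of Suzanne Pulier?",
]

messages = []
for prompt in HOPS:
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"Q: {prompt}\nA: {answer}\n")
```

The key design choice is that the bridging facts are elicited from the model itself rather than pasted in by us, which distinguishes this approach from simply providing the full context upfront.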

Solution 4: The Critical Role of Personas/Roles

When querying a large language model, simply asking direct questions is unlikely to produce robust results. A key tenet of prompt engineering is establishing an appropriate persona to provide the right framing and associations.

Unlock the Power of AI with Effective Prompts
Discover the importance of prompts in AI content creation and how they can transform your output. Dive into the world of prompting and create amazing AI interactions

An effective persona should cluster together as many entities relevant to the desired generated content as possible. In our example, the legal drama superfan persona connects details about lawyers, TV shows, characters, plots, and actors.

Role-Playing in Large Language Models like ChatGPT
Explore how role-playing enhances AI interactions. This technique guides Large Language Models to deliver precise, consistent, and engaging responses.

This clustering loads the model with generalized relationships between these entities even before a specific question is asked. It activates networks of latent associations, just as recalling one memory in humans often triggers related memories.

Moreover, a broad persona allows flexibility in the target outcome. Unlike our contrived example where we know Gabriel Macht is the goal, real-world scenarios have unknown solutions. A wide-ranging persona keeps options open while still providing useful contextual focus.

In summary, thoughtful persona design primes the model by packing the contextual space with associated entities. This reduces semantic hops to facilitate inferencing target relationships from sparse starting prompts. Personas are critical prompt components for imbuing large language models with human-like recall through networked associative connections.
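
Here is a brief sketch of what such a persona prompt might look like. The wording of the system message is my own illustration, not a prompt from the original post, and it assumes the `openai` Python package.

```python
from openai import OpenAI

client = OpenAI()

# A persona that clusters entities around legal dramas: shows, characters,
# actors, and biographical details about those actors and their families.
PERSONA = (
    "You are a devoted superfan of legal dramas. You know the TV show 'Suits' "
    "inside out: its plots, its characters such as Harvey Specter, the actors "
    "who play them, and biographical details about those actors and their families."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": PERSONA},
        # The reversed question that fails without the persona's contextual framing.
        {"role": "user", "content": "Who is the famous son of Suzanne Pulier?"},
    ],
)
print(response.choices[0].message.content)
```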


Key Lessons for Effective Prompt Engineering

In exploring various approaches to improving large language model performance, this exercise highlights several important insights:

  • LLMs should not be treated as mere search engines - their value lies in processing and relating knowledge.
  • The SLiCK framework offers a useful mental model for targeting different aspects like knowledge relationships and creativity.
  • Following proven prompting methods using well-constructed personas and contexts is often the best starting point.
  • Overly convoluted prompting workflows tend to overcomplicate challenges unnecessarily. Simpler solutions focused on key knowledge associations are often more effective.

In essence, properly understanding the capabilities and limitations of LLMs allows us to engineer quality prompts. We must avoid falling into the trap of anthropomorphizing these models while also leveraging their strengths through selective prompting strategies. A balance of realism and creativity in prompt engineering yields the most meaningful results.

