Large Language Models (LLMs) have taken the world by storm, capable of generating human-quality text, translating languages, and even writing many kinds of creative content. But beneath this impressive facade lies a less obvious weakness: LLMs can struggle to access information randomly within their vast "memory" stores. This limitation can hinder their performance in tasks that require retrieving specific details or a deeper understanding of factual relationships.

The more studies I read on the shortcomings of LLMs, the more I am convinced of the need for prompt engineering.

Here's where prompt engineering provides the edge. By crafting effective prompts, we can bridge the gap between LLM limitations and their true potential.

We look at a new study that shows LMs perform well in sequential memory tasks but struggle with random access tasks.

Research Paper #1: Beyond Memorization: The Challenge of Random Memory Access in Language Models

Recent developments in Language Models (LMs) have shown their effectiveness in NLP tasks, particularly in knowledge-intensive tasks. However, the mechanisms underlying knowledge storage and memory access within their parameters remain elusive. In this paper, we investigate whether a generative LM (e.g., GPT-2) is able to access its memory sequentially or randomly. Through carefully-designed synthetic tasks, covering the scenarios of full recitation, selective recitation and grounded question answering, we reveal that LMs manage to sequentially access their memory while encountering challenges in randomly accessing memorized content. We find that techniques including recitation and permutation improve the random memory access capability of LMs. Furthermore, by applying this intervention to realistic scenarios of open-domain question answering, we validate that enhancing random access by recitation leads to notable improvements in question answering. The code to reproduce our experiments can be found at

This research investigates how language models (LMs) access and store information within their parameters.

Now, understanding how LMs access information is crucial because it allows researchers to improve the way these models are designed and used in various tasks. LMs are increasingly being used for tasks that require them to store and retrieve knowledge, so a better understanding of their memory access mechanisms is essential.


The study revealed that LMs perform well in sequential memory tasks but struggle with random access tasks.

LMs can easily recite memorized information from beginning to end, but they have difficulty jumping to specific parts of the information. The researchers also found that techniques like recitation (reading the entire passage before answering) and permutation (shuffling the order of sentences during training) can improve the model's performance in random access tasks.


This research highlights a critical limitation of LMs – their inability to randomly access stored information.

This could hinder the effectiveness of LMs in various applications, such as open-domain question answering, where accessing specific details from memorized knowledge is necessary. The findings suggest that improving random access capabilities is an important area for future research on LM development. Additionally, techniques like recitation offer potential solutions for mitigating this limitation in current models.

Some Key Points to Review (before we move on)

  • The study focuses on decoder-only language models, which are widely used for various tasks.
  • Two types of memory access are explored: sequential and random.
  • Sequential access allows the model to recite memorized content in order, like reading a book from beginning to end.
  • Random access would allow the model to access any part of the memorized data directly, like jumping to a specific page in a book.
  • Experiments show that LMs perform well in sequential memory tasks but struggle with random access tasks.
  • The inability to randomly access information makes it difficult for LMs to answer questions that require specific details from memorized data.
  • To improve random access, the researchers propose two techniques: recitation and permutation.
  • Recitation involves the model reading the entire memorized content before answering a question.
  • Permutation involves training the model on shuffled versions of the memorized data.
  • Both recitation and permutation improve the model's performance in random access tasks.
  • The study concludes that limited random access ability is a challenge for LMs in various applications, like open-domain question answering.
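The two interventions above can be sketched at the prompt and data level. This is a minimal illustration; the function names and prompt wording are my own, not from the paper's released code:

```python
# Sketch of the paper's two interventions, applied at the prompt/data
# level. build_recitation_prompt and permute_training_passage are
# illustrative names, not the paper's actual implementation.

import random

def build_recitation_prompt(passage: str, question: str) -> str:
    """Recitation: ask the model to reproduce the memorized passage in
    full before answering, turning a random-access lookup into a
    sequential pass over the content."""
    return (
        "First, recite the relevant passage from memory:\n"
        f"{passage}\n\n"
        f"Now answer the question: {question}"
    )

def permute_training_passage(passage: str, seed: int = 0) -> str:
    """Permutation: shuffle sentence order so that, across training
    copies, each sentence appears in many positions instead of being
    tied to one fixed sequential context."""
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    rng = random.Random(seed)
    rng.shuffle(sentences)
    return ". ".join(sentences) + "."

passage = "Ada wrote the first program. Babbage designed the engine. Turing formalized computation."
print(build_recitation_prompt(passage, "Who designed the engine?"))
print(permute_training_passage(passage, seed=1))
```

The recitation helper works at inference time, while the permutation helper would be applied to the training data itself.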

Language Models Tested

The research focused on decoder-only language models; the paper's abstract cites GPT-2 as a representative example. The authors chose this family due to the increasing popularity and capability of decoder-only models (Radford et al., 2019; Brown et al., 2020; Touvron et al., 2023a,b; Jiang et al., 2023).

Sequential vs. Random Access in Language Models: Understanding the Memory Maze

REMEMBER: The study revealed that LMs perform well in sequential memory tasks but struggle with random access tasks.

Now imagine you're a librarian with a phenomenal memory for books. Here's how you might access information based on two different methods:

  • Sequential Access: This is like reading a book from beginning to end. You can flawlessly recite the entire story in order, recalling details as they appear.
  • Random Access: This is like jumping to a specific page or section. You can instantly pinpoint a particular passage or answer a question about a specific character, even if it appears deep within the book.
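The librarian analogy can be mimicked with a toy "next-token only" memory, a rough stand-in for how a generative LM exposes its content. The data and interface below are invented for illustration:

```python
# Toy illustration (not a real LM): a memory that only supports
# "given this prefix, emit the next token", mimicking next-token
# prediction. Random access must be simulated by replaying the prefix.

memorized = ["the", "cat", "sat", "on", "the", "mat"]

def next_token(prefix: list) -> str:
    """Sequential interface: the only operation available."""
    return memorized[len(prefix)]

# Sequential access: one smooth pass from the beginning.
recited = []
while len(recited) < len(memorized):
    recited.append(next_token(recited))
assert recited == memorized

# "Random access" to the token at index 4 still requires walking
# the entire prefix first:
prefix = []
for _ in range(4):
    prefix.append(next_token(prefix))
print(next_token(prefix))  # prints "the"
```

The point of the sketch: with only a sequential interface, "jumping to page 200" degenerates into "reading pages 1 through 199 first".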

The new research we discussed explores how Large Language Models (LLMs) access information stored in their "memory." Interestingly, the findings reveal that LLMs resemble a librarian with a phenomenal memory for order: they excel at sequential recall but struggle with random-access retrieval.

Examples of Sequential and Random Access in LLMs:

  • Sequential Access:
    • LLM is asked to summarize a news article it was trained on. It can efficiently recount the events in the order they were presented in the article, demonstrating its ability to recall information sequentially.
    • You ask a chatbot to tell you a story it learned. It successfully narrates the story from beginning to end, showcasing its sequential retrieval of information.
  • Random Access:
    • You ask the LLM a specific question about a detail mentioned halfway through the news article. The LLM might struggle to pinpoint that specific detail and answer the question accurately. This indicates difficulty with random access.
    • In a conversation with a chatbot, you ask a question that requires it to reference a specific fact learned during training but unrelated to the current conversation flow. The chatbot might provide a generic response or fail to answer due to its limitations in random memory access.

Why is Random Access Important for LLMs?

Many real-world applications for LLMs rely on the ability to access specific information quickly and efficiently. For instance:

  • Open-domain question answering: LLMs need to access relevant details from their knowledge base to answer specific questions accurately. Difficulty with random access can hinder their ability to do this effectively.
  • Chatbots and virtual assistants: These systems need to access specific information to answer user queries and complete tasks. Random access limitations can make them less helpful and informative.
  • Text summarization: An LLM summarizing a factual topic might struggle to identify and highlight key points if it can't randomly access relevant sections of the source material.

Understanding the limitations of LLM memory access is crucial for their continued development. By acknowledging the challenges of random access, researchers can work on improving LLMs' ability to retrieve information more flexibly, paving the way for more powerful and versatile language models in the future.

Paper Review #2: The LLM Reversal Curse

This study, published a few months back, discussed a phenomenon the researchers called the Reversal Curse. Simply put: LLMs perform well on questions that match the order of information they were trained on, but fail to answer correctly when the order is reversed. Importantly, the incorrect answers are no better than random guesses.

Large language models (LLMs) struggle to generalize information learned in one order to the reversed order. For example, if an LLM is trained on the fact "A is B", it may not be able to answer the question "Who is B?" (that is, to recall that B is A).

This suggests that LLMs don't inherently understand the logical relationship between the two statements ("A is B" and "B is A").

For instance:

When we ask GPT-4: Who is Gabriel Macht's mother?

GPT-4 responds: Gabriel Macht's mother is Suzanne Pulier, a museum curator and archivist. Gabriel Macht is best known for his role as Harvey Specter on the television show "Suits."

When we ask GPT-4: Who is the famous son of Suzanne Pulier?

GPT-4: As of my last update in January 2022, Suzanne Pulier was not publicly known to have a famous son. It's possible that new information has emerged after 2022 or that there might be some confusion with another individual. Please provide more context or check the latest sources for any new developments.

I discussed this further and showed how we can solve this problem with simple Prompt Engineering techniques:

Leveraging Associative Memory in AI for Effective Prompt Engineering
Your brain’s magic memory has more in common with ChatGPT than you may think! Let’s explore the surprising parallels between human and AI intelligence.
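One such technique is to have the model recite related facts before attempting the reversed question, rather than demanding a direct lookup. A minimal sketch, where the prompt wording and function name are illustrative, not a tested recipe:

```python
# Illustrative prompt builder: instead of asking the "reversed"
# question directly, first ask the model to surface facts that
# mention the entity, then answer from those facts.

def reversal_prompt(entity: str, question: str) -> str:
    return (
        f"Step 1: List any facts you know that mention {entity} "
        "(family, colleagues, notable relationships).\n"
        "Step 2: Using only the facts listed above, answer:\n"
        f"{question}"
    )

print(reversal_prompt(
    "Suzanne Pulier",
    "Who is the famous son of Suzanne Pulier?",
))
```

The recitation step gives the model a sequential path to the fact it could not retrieve in the reversed direction.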

3. What Does this All Mean for Contemporary LMs/LLMs

This possible inability to access memory randomly can impact contemporary large language models (LLMs) in several ways:

1. Hinders Performance in Specific Tasks:

  • LLMs are increasingly used in tasks like open-domain question answering, where accessing specific details from memorized knowledge is crucial. This research shows that LLMs might struggle with these tasks if they can't jump to the relevant information directly.

2. Limits Deep Comprehension:

  • Random memory access allows for a more nuanced understanding of information. If LLMs can only process information sequentially, it might hinder their ability to grasp complex relationships within the data and answer questions that require deeper reasoning.

3. Challenges Real-World Application:

  • Many real-world applications for LLMs involve tasks like summarizing factual topics or providing specific customer service information. If LLMs can't efficiently access relevant details, their usefulness in these scenarios might be limited.

4. Highlights Room for Improvement:

  • This research exposes a specific hurdle in LLM development. By understanding this limitation, researchers can focus on improving random memory access through techniques like recitation or data permutation during training.

5. Potential for New Techniques:

  • The study's proposed solutions, like recitation at inference time, offer interesting possibilities for mitigating the random access challenge. This paves the way for further exploration of new techniques to enhance LLM memory capabilities.

The inability to randomly access information presents a hurdle for contemporary LLMs. However, this research also highlights opportunities for improvement and paves the way for the development of more powerful and versatile language models in the future.

4. A Flaw in Focus by Researchers: Beyond a Database

These two studies highlight the following:

  • Fact 1: LLMs excel at tasks where information aligns with the order they were trained on (sequential memory). This suggests they can memorize and recall information in a specific order.
  • Fact 2: However, when the order of information is reversed (e.g., "A is B" vs. "B is A"), LLMs struggle and often provide incorrect answers that are no better than random guesses. This implies they don't inherently understand the logical relationship between the two statements.

The research provides valuable insights into LLM memory access limitations. However, it implicitly positions LLMs as simply "fact databases" with retrieval issues. This perspective overlooks a crucial aspect of LLM value: their ability to intelligently utilize the knowledge they store.

Strengths Beyond Retrieval:

  • LLMs can generate different creative text formats, like poems, code, scripts, musical pieces, and more. This demonstrates their ability to use knowledge in a creative and transformative way.
  • They can answer open ended, challenging, or strange questions, demonstrating their ability to reason, analyze information, and generate thoughtful responses that go beyond simple fact retrieval.
  • They can translate languages, write different kinds of creative content, and summarize factual topics, showcasing their ability to process and utilize information in diverse ways.

The True Value of LLMs Lies in Intelligent Use of Knowledge:

The real power of LLMs lies not just in storing facts, but in their ability to:

  • Reason and draw conclusions based on the information they have been trained on.
  • Generate creative text formats that demonstrate a deeper understanding of the information and its potential applications.
  • Adapt their responses to different contexts and situations, demonstrating flexibility in knowledge utilization.

Shifting the Focus:

Future research on LLMs should move beyond just memorization and retrieval. Here are some promising directions:

  • Understanding Reasoning Processes: How do LLMs reason and arrive at conclusions based on the information they have?
  • Evaluating Creativity and Fluency: How can we measure the ability of LLMs to use their knowledge to generate creative and informative text formats?
  • Explainability and Transparency: How can we make LLMs more transparent in their reasoning processes, allowing us to understand how they arrive at their answers?

By focusing on these aspects, we can gain a more comprehensive understanding of LLM capabilities and unlock their true potential as powerful tools that not only store information but also use it intelligently.

5. A Conceptual Model of LLMs: Knowledgebase + Processing Unit

Beneath the surface of LLMs lies a fascinating interplay between knowledge and processing power. If we conceptually decouple these aspects of LLM operation, we begin to see a powerful duo:

  • the vast knowledge base that fuels their capabilities and
  • the processing unit that unlocks their potential

A. The LLM Knowledge Base

Imagine a digital library containing countless books on every imaginable topic. This is akin to an LLM's knowledge base, a massive repository of information and facts. Here, individual units of knowledge called entities reside. These entities can range from simple words and phrases to complex concepts like biological species or technological advancements. Each entity is further enriched with attributes, specific characteristics that define it.

But information doesn't exist in isolation within the LLM. The true power lies in the relationships that connect these entities. Just like cross-references in a library catalog, these relationships weave a web of interconnected knowledge. These relationships can take various forms, including synonyms (words with similar meanings), antonyms (opposites), or even broader category classifications. The strength of each connection can be measured by a weight, indicating the LLM's confidence in the relationship. For instance, the connection between "dog" and "mammal" would likely hold a higher weight than "dog" and "pet."
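This entity-relationship-weight structure can be sketched as a tiny weighted graph. The entities, relation names, and weights below are invented for illustration:

```python
# Minimal sketch of the weighted-relationship idea: entities as nodes,
# typed relations as edges, each carrying a confidence weight.

from collections import defaultdict

graph = defaultdict(list)  # entity -> [(relation, entity, weight)]

def relate(a, relation, b, weight):
    """Add a relation and its inverse so the graph is navigable
    from either endpoint."""
    graph[a].append((relation, b, weight))
    graph[b].append((relation + "_of", a, weight))

relate("dog", "is_a", "mammal", 0.95)
relate("dog", "is_a", "pet", 0.70)
relate("mammal", "is_a", "animal", 0.98)

# The "dog is a mammal" link carries more weight than "dog is a pet":
weights = {b: w for _, b, w in graph["dog"]}
print(weights["mammal"] > weights["pet"])  # prints True
```

Real LLMs encode nothing this explicit, of course; the sketch only makes the conceptual model concrete.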

The Knowledge Graph: A Map of Knowledge

Imagine the knowledge base as a vast and intricate knowledge graph. Nodes represent individual entities (concepts, facts, or ideas). These entities are interconnected by relationships, forming a web of understanding. The strength of these relationships is indicated by weights, reflecting the LLM's confidence in their connection. Additionally, semantic distance between nodes signifies how closely related they are within the knowledge graph.

Understanding Semantic Distance: How Close Are Concepts?

Imagine the LLM's knowledge base as a vast map, where entities are like cities. The distance between these entities, known as semantic distance, reflects how closely related they are. This distance can be measured in various ways, from the number of "hops" needed to travel between them through connected relationships to complex mathematical formulas. Factors like time, context, and even cultural nuances can influence this semantic distance.
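The "hops" measure can be sketched as a breadth-first search over an unweighted relationship graph. The toy graph below is illustrative:

```python
# Hop-count version of semantic distance: BFS over a small
# relationship graph. A toy stand-in for the richer measures above.

from collections import deque

edges = {
    "photosynthesis": ["oxygen", "chlorophyll"],
    "oxygen": ["photosynthesis", "respiration"],
    "respiration": ["oxygen"],
    "chlorophyll": ["photosynthesis"],
}

def semantic_distance(start: str, goal: str) -> int:
    """Number of relationship 'hops' between two entities
    (-1 if unreachable)."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return -1

print(semantic_distance("photosynthesis", "respiration"))  # prints 2
```

Here "photosynthesis" reaches "respiration" via "oxygen", so the concepts sit two hops apart.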

For example, consider the query "Describe the relationship between photosynthesis and oxygen." The LLM would identify these as entities and leverage its knowledge of their relationships to formulate a response. By understanding the connections and their weights, the LLM can explain the crucial role photosynthesis plays in oxygen production.

B. The Processing Unit: Where Knowledge Meets Action

The LLM knowledge base is impressive, but it's only half the story. The true magic happens within the LLM's processing unit. This unit acts as the engine, taking the raw information from the knowledge base and transforming it into meaningful responses. Here's where things get interesting – while the knowledge base is readily malleable (easily influenced by prompts), engaging the processing unit's full potential requires more finesse.

The processing unit acts as the reasoning engine that operates on this knowledge graph. Here's how it utilizes the graph structure:

  • Traversing Connections: The processing unit can traverse the connections between nodes to retrieve relevant information for a given task. Imagine it as navigating a map to find specific locations (entities) based on their relationships.
  • Weighting Evidence: When faced with multiple potential answers, the processing unit considers the weights of connections between entities. It prioritizes information based on the LLM's confidence in the relationships within the knowledge graph.
  • Reasoning and Inference: The processing unit can go beyond simple retrieval. By analyzing relationships and semantic distances within the knowledge graph, it can make inferences, draw conclusions, and generate creative text formats that demonstrate a deeper understanding of the information.
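The "weighting evidence" step above can be sketched as picking, among candidate answers, the relation with the highest confidence weight. The facts and weights are invented:

```python
# Sketch of "weighting evidence": among candidate answers reachable
# for a given entity and relation, prefer the one the model is most
# confident in.

facts = [
    ("dog", "is_a", "mammal", 0.95),
    ("dog", "is_a", "pet", 0.70),
    ("dog", "chases", "cat", 0.60),
]

def best_answer(entity: str, relation: str) -> str:
    """Return the highest-weight object for (entity, relation)."""
    candidates = [(obj, w) for subj, rel, obj, w in facts
                  if subj == entity and rel == relation]
    return max(candidates, key=lambda c: c[1])[0]

print(best_answer("dog", "is_a"))  # prints mammal
```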

Processing Unit Functions

Here are some key functions of the processing unit in relation to the knowledge graph:

  • Information Retrieval: The processing unit efficiently retrieves relevant information from the knowledge graph based on prompts or queries.
  • Reasoning and Inference: It analyzes relationships and semantic distances to draw conclusions and make logical inferences that extend beyond simple retrieval.
  • Context Integration: The processing unit can incorporate context from the prompt or situation to tailor its processing and output within the knowledge graph.
  • Creative Text Generation: By leveraging the knowledge graph and its relationships, the processing unit can generate creative text formats like poems, code, or scripts, demonstrating a more transformative use of information.

The Challenge: Probability and the LLM Maze

Think about this: LLMs operate within a vast probability space. Imagine this space as a massive network of interconnected nodes, each representing a word or concept. During text generation, the processing unit navigates this network, predicting the most probable next word based on the previous ones. This probabilistic approach works well for sequential tasks, where the model can follow the chain of words it has already generated.
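A toy greedy generator makes the idea concrete: at each step, pick the most probable next word given the previous one. The probability table is invented for illustration (a real LM conditions on the whole context, not just the last word):

```python
# Toy next-word predictor: greedily follow the most probable
# continuation through an invented probability table.

probs = {
    "the":  {"cat": 0.5, "dog": 0.3, "mat": 0.2},
    "cat":  {"sat": 0.7, "ran": 0.3},
    "sat":  {"on": 0.9, "down": 0.1},
    "on":   {"the": 1.0},
    "dog":  {"ran": 1.0},
    "mat":  {}, "ran": {}, "down": {},
}

def generate(word: str, steps: int) -> list:
    out = [word]
    for _ in range(steps):
        nxt = probs.get(out[-1], {})
        if not nxt:          # dead end: no known continuation
            break
        out.append(max(nxt, key=nxt.get))  # greedy: most probable word
    return out

print(generate("the", 5))  # prints ['the', 'cat', 'sat', 'on', 'the', 'cat']
```

Notice how each step depends only on what came before; there is no operation for "jump to an arbitrary fact", which is exactly the random-access gap.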

But what drives the engine? How does it know where to go? Prompts.

C. Prompts as Guides: Decoding Human Intent

Prompts act as the user queries and instructions that guide the reasoning engine within the knowledge graph.

Think of Prompts as instruction sets or apps for the LLM, similar to how code instructs a computer. These instructions decode our intent and translate it into a language the LLM understands.

Here's a breakdown:

  • Specifying Destination: Imagine a prompt as a map destination for the reasoning engine. The prompt defines the target information or task within the knowledge graph. For instance, a prompt asking "Describe the relationship between photosynthesis and oxygen" specifies the entities ("photosynthesis" and "oxygen") the reasoning engine should focus on within the graph.
  • Influencing Navigation: While the knowledge graph itself defines the connections and relationships, prompts can influence how the reasoning engine navigates it. Specific keywords or instructions within the prompt can guide the engine towards the most relevant paths within the graph.
  • Tailoring the Journey: Prompts can also influence how the reasoning engine processes information. For example, a prompt specifying a desired tone or style (e.g., "Write a poem about photosynthesis") tailors the reasoning engine's approach to retrieving and presenting information from the knowledge graph.
  • Functionality: Prompts provide specific instructions to the LLM. A prompt for writing a poem acts like a poetry writing app, while a prompt for summarizing a factual topic functions like a summarization app.
  • Customization: Like apps with different settings, prompts can be customized to achieve different results. For instance, a prompt for writing a poem can specify the theme, rhyme scheme, or desired tone.
  • Interface: While not a visual interface, prompts act as the user interface for the LLM. Through carefully crafted prompts, we tell the LLM what we want it to do, similar to how we interact with apps on a smartphone.

Beyond Simple Navigation

The prompt's role extends beyond simply guiding the reasoning engine:

  • Providing Context: Prompts can inject additional context into the knowledge graph. This context acts as supplementary information that can further refine the reasoning engine's analysis and output. For instance, mentioning the target audience in a prompt for summarizing information can influence how the engine selects and presents information from the graph.
  • Shaping the Outcome: Ultimately, prompts shape the outcome of the reasoning process. By carefully crafting prompts, we can influence the information retrieved, the inferences made, and the final output generated by the LLM.

Important Elements of a Prompt

  • Keywords and Instructions: These act as the core functions, defining the task and guiding the LLM.
  • Data and Examples: Think of these as additional features within the mini-app. Providing data or examples can further refine the prompt and improve the LLM's output.
  • Context, Style etc: These elements act like settings within the mini-app. Specifying context or desired style helps tailor the LLM's response for optimal results.
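These three element types can be assembled mechanically. The builder below is a sketch of the "mini-app" framing, not a standard API:

```python
# Sketch of prompt assembly from the three element types:
# instructions (core function), data/examples (features), and
# context/style (settings).

def build_prompt(instruction, examples=(), context="", style=""):
    parts = []
    if context:
        parts.append(f"Context: {context}")      # "settings"
    if style:
        parts.append(f"Style: {style}")          # "settings"
    for q, a in examples:                        # few-shot "features"
        parts.append(f"Example input: {q}\nExample output: {a}")
    parts.append(instruction)                    # core "function"
    return "\n\n".join(parts)

print(build_prompt(
    "Summarize the article below in two sentences.",
    examples=[("Long article...", "Short summary...")],
    context="The reader is a busy executive.",
    style="Plain, non-technical language.",
))
```

Swapping the instruction, examples, or settings turns the same scaffold into a different "app", which is the point of the analogy.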

6. Prompt Engineering to the Rescue: Jumping the Gaps in LLM Memory

Let us recap our thoughts on LLMs:

  • Knowledgebase - a web of entities (like people, places, and things), their relationships (who did what, where, and when), and various measures that gauge how these elements interact, such as weights and semantic distances. This complex network forms the bedrock of understanding for AI, enabling it to navigate through vast information with the agility of a scholarly gazelle.
  • The Processing Unit - the reasoning engine that traverses the knowledge graph in search of answers. Given a prompt, this engine embarks on a quest, hopping from node to node and piecing together information: not only finding data, but also weighing probabilities, connecting dots, and making leaps of logic and creativity. This process transforms raw data into coherent, context-rich responses that can engage, inform, and sometimes even amuse us.
  • Prompts - set the direction and the landmarks, but it's the knowledge graph and reasoning engine that determine the scenery you'll see along the way. A well-crafted prompt doesn't just ask for information; it sparks an exploratory journey, guiding the AI through the thematic territories of the knowledge graph. The art lies in framing these prompts to elicit responses that are not only accurate but also rich in context and insight.

Let's restate the issues these two studies found:

  • Fact 1: LLMs excel at tasks where information aligns with the order they were trained on (sequential memory). This suggests they can memorize and recall information in a specific order.
  • Fact 2: However, when the order of information is reversed (e.g., "A is B" vs. "B is A"), LLMs struggle and often provide incorrect answers that are no better than random guesses. This implies they don't inherently understand the logical relationship between the two statements.

While these studies highlight the limitations of LLMs (and, arguably, of humans), they also indirectly emphasize the importance of prompt engineering as a way to mitigate these limitations.

Prompt Engineering as a Bridge:

  • Overcoming Relational Understanding Limitations: These studies don't explicitly address how prompt engineering can help LLMs understand relationships between statements like "A is B" and "B is A." However, techniques such as setting in-depth roles and personas, using chain of thought, and other prompt engineering methods can guide the LLM towards the desired answer, essentially "teaching" it the relationship through a series of steps.

The Power of Prompt Engineering:

The proposed solutions in the current study (recitation and permutation) address the random access issue, but prompt engineering techniques offer a more versatile approach. Here's how:

  • Personas/Roles: Assigning a specific role or persona to the LLM can nudge it towards a particular way of interpreting and using information. This can be particularly helpful when dealing with tasks that require understanding relationships between concepts.
  • Chain of Thought & Other techniques: These techniques help the LLM break down complex questions into smaller, sequential steps, guiding the LLM through a thought process that ultimately leads to the answer. This can help the LLM overcome its limitations in directly accessing specific details.
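Combining the two bullets above, a persona and explicit reasoning steps can be packed into one prompt template. The wording and the `engineered_prompt` name are illustrative:

```python
# Sketch combining a persona with chain-of-thought steps: the prompt
# walks the model through intermediate stages instead of demanding a
# direct random-access lookup.

def engineered_prompt(question: str, persona: str) -> str:
    steps = [
        "1. Identify the entities mentioned in the question.",
        "2. Recall facts you know about each entity, in any direction.",
        "3. State the relationship that connects them.",
        "4. Answer the original question using that relationship.",
    ]
    return (
        f"You are {persona}.\n\n"
        f"{question}\n\n"
        "Think step by step:\n" + "\n".join(steps)
    )

print(engineered_prompt(
    "Who is the famous son of Suzanne Pulier?",
    "a meticulous genealogist who cross-checks every family link",
))
```

Each numbered step gives the model a sequential path through its knowledge, which is exactly the access pattern it is good at.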

Advantages of Prompt Engineering:

  • Flexibility: Prompt engineering techniques can be adapted to various tasks and question formats, offering a more flexible solution than methods like recitation or permutation.
  • Explainability: Techniques like chain of thought can provide insights into the LLM's reasoning process, making it easier to understand how it arrives at its answers.
  • Guiding Reasoning: Prompt engineering can act as a guide, directing the LLM's attention towards relevant information and relationships within its knowledge base, promoting a more thoughtful approach to answering questions.

Recall the core difficulty: for random access, the LLM needs to traverse the network efficiently to a specific node representing the desired information. This is where the model struggles. Without a clear path or "bridge" to that node, the LLM gets lost in the vast probability space, making it difficult to pinpoint the relevant detail.

Prompt engineering techniques, chain of thought prompting chief among them, can supply that bridge, helping the model "connect" the nodes and "traverse" to the relevant information despite its limitations in random access.

By crafting prompts that guide the model's reasoning and help it understand the relationships between concepts, we can overcome the limitations highlighted by recent studies. As we continue to explore and refine these techniques, we unlock the full potential of LLMs, enabling them to deliver more accurate, context-rich, and insightful responses.
