With the meteoric rise of large language models (LLMs) like GPT-3, there has been an understandable scramble to find the best ways to tap into their vast potential.

The Latest Trend: Memory Augmentation

The latest trend in natural language processing seems to be an obsession with "adding memory" to LLMs through retrieval augmentation techniques like RAG (Retrieval-Augmented Generation). The idea is that by allowing LLMs to retrieve and incorporate external knowledge, we can enhance their already impressive capabilities even further. However, this risks overlooking the tremendous untapped potential still lying dormant within the base models themselves.

The Limitations of Memory Augmentation

While retrieval augmentation has shown promise, much of the "new" knowledge actually ends up encoded within the prompt and context window fed into the LLM anyway. As such, RAG feels a bit like window dressing - making only incremental improvements to a model that already contains more latent knowledge than we can realistically imagine. The true breakthroughs, AGI included, will come not only from accessorizing our LLMs, but also from understanding them deeply enough to unlock their full potential.
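To make that concrete, here is a minimal sketch of the core RAG loop. The `search_index` retriever and `complete` completion function are hypothetical stand-ins, not any particular library's API; the point is that whatever the retriever finds simply becomes more prompt text.

```python
# Minimal RAG sketch. `search_index` and `complete` are hypothetical
# callables standing in for a vector-store lookup and an LLM call.
def rag_answer(question: str, search_index, complete) -> str:
    # Retrieve the passages judged most relevant to the question.
    passages = search_index(question, top_k=3)

    # The "external memory" is just text concatenated into the prompt.
    context = "\n\n".join(passages)
    prompt = (
        "Use the context below to answer the question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

    # The model gains no new parameters or memory; it only reads the
    # augmented prompt through its ordinary context window.
    return complete(prompt)
```

Everything the model "learns" here arrives through the same context window any prompt uses - which is why RAG is better understood as prompt construction than as memory.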

💡
I'm not downplaying the utility of RAG or any other prompt engineering technique for supplementing external knowledge. Adding contextual knowledge has been shown to improve performance on many tasks.

The Power Within LLMs

But this misses the truly revolutionary power that lies at the heart of LLMs. We now have access to the most capable computational system ever constructed, with a vast portion of recorded human knowledge embedded in its parameters. Telling this system to simply retrieve and regurgitate external text sells it unforgivably short. The real power lies in understanding the innate capabilities of LLMs and using prompt engineering to unlock them.

Unlocking the Potential

So what does that entail? First, we must comprehend the sheer scale of what modern LLMs represent: their training corpora encompass everything from Wikipedia to cutting-edge research papers. We have an artificial intelligence with human-level language fluency and background knowledge that grows with every new model generation. Prompt engineering gives us the techniques to tap into this wellspring of understanding and insight.

With so much of human knowledge encoded within, we've only begun scratching the surface of what's possible. Rather than getting distracted by the latest hype cycles around new accessories for LLMs, we should be focused on prompt engineering - the tools, techniques, and study of how to access their underlying power most effectively.

That power stems from comprehension - genuine understanding of how LLMs ingest, represent, and produce knowledge. Only then can we traverse the vast plains of information within them and channel their capacity for analysis, creation, and insight. Augmenting their knowledge supply may offer marginal gains, but improving our ability to navigate and utilize what's already there holds the key to revealing their true capabilities.

The Power of Prompts

Through careful prompt formulation, we can pose questions and tasks that unlock the LLM's potential for analysis, creativity, and problem-solving. Rather than just retrieving text, we can get LLMs to translate complex ideas into simpler analogies, identify connections across disparate domains, assess hypotheses, generate story outlines, write code, and much more. We have barely begun to explore these capabilities.

The key is to realize that although LLMs are trained to predict the next word in a sequence, their behavior has moved far beyond that objective. With prompt engineering, we can activate different modes of reasoning and capability. By structuring prompts properly, maintaining a consistent voice and tone, and providing the right contexts and examples, we elicit LLMs' higher-level emergent intelligence.
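As a sketch of what "structuring prompts properly" can look like in practice, here is a hypothetical few-shot prompt builder for the analogy task mentioned above. The role line, examples, and wording are illustrative assumptions, not a prescribed template:

```python
# Hypothetical few-shot prompt builder. The role line sets a consistent
# voice; the worked examples pin down format and level of detail.
def build_analogy_prompt(concept: str) -> str:
    return (
        "You are a patient teacher who explains technical ideas "
        "through everyday analogies.\n\n"
        "Concept: TCP handshake\n"
        "Analogy: Two people starting a phone call: 'Can you hear me?' "
        "'Yes, can you hear me?' 'Yes.' Only then does the real "
        "conversation begin.\n\n"
        "Concept: Database index\n"
        "Analogy: The index at the back of a textbook: instead of "
        "scanning every page, you jump straight to the pages that "
        "mention your topic.\n\n"
        # The model continues the established pattern for the new concept.
        f"Concept: {concept}\n"
        "Analogy:"
    )

# Usage with any completion function, e.g.:
# print(complete(build_analogy_prompt("gradient descent")))
```

Notice that no external retrieval happens here; the structure and examples steer the model toward knowledge it already holds, which is the whole point.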

As long as startups chase funding by building accessories for off-the-shelf LLMs, the full potential of models containing humanity's collective intelligence will remain trapped behind a veil of hype. The possibilities of what these monumental achievements in computer science can enable are staggering, if only we devote ourselves to comprehending the knowledge they contain.

💡
I'm not advocating an either/or here. I'm advocating that we also put the study of LLMs themselves on the same pedestal, especially if we're talking about AGI.

So while RAG grabs headlines by using LLMs for basic information retrieval, a complementary way forward is accessing and directing the unified causal-relational knowledge embodied in these models. Only by fully utilizing prompt engineering will we tap into the enormously multifaceted potential of large language models.
