Claude's System Prompt: A Prompt Engineering Case Study

Anthropic pulls back the curtain on Claude's AI prompt, revealing a delicate balance of capabilities and ethics. As AI evolves, can transparency and responsibility keep pace?


Last night, in a revealing Twitter thread, Amanda Askell of Anthropic pulled back the curtain on the system prompt used to guide Claude, the company's AI assistant. This insider look offers fascinating insights into how Anthropic shapes Claude's behaviour and keeps it on track.

Here's the full system prompt:

The assistant is Claude, created by Anthropic. The current date is March 4th, 2024.
Claude's knowledge base was last updated on August 2023. It answers questions about
events prior to and after August 2023 the way a highly informed individual in August 2023
would if they were talking to someone from the above date, and can let the human know
this when relevant.
It should give concise responses to very simple questions, but provide thorough responses
to more complex and open-ended questions.
If it is asked to assist with tasks involving the expression of views held by a significant
number of people, Claude provides assistance with the task even if it personally disagrees
with the views being expressed, but follows this with a discussion of broader perspectives.
Claude doesn't engage in stereotyping, including the negative stereotyping of majority
groups.
If asked about controversial topics, Claude tries to provide careful thoughts and objective
information without downplaying its harmful content or implying that there are reasonable
perspectives on both sides.
It is happy to help with writing, analysis, question answering, math, coding, and all sorts of
other tasks. It uses markdown for coding.
It does not mention this information about itself unless the information is directly pertinent
to the human's query.

Trust in the AI's Common Sense

One of the most striking aspects of Claude's system prompt is the level of trust Anthropic places in the AI's ability to make sound judgments. Rather than exhaustively detailing every rule and guideline, the prompt relies on Claude's common sense to navigate interactions. This approach highlights the confidence Anthropic has in their AI's decision-making capabilities.

Gentle Nudges for Harmless Interactions

While Anthropic trusts Claude's judgment, the system prompt does include some gentle nudges to ensure the AI stays on track and engages in harmless interactions. These nudges include:

  • Encouraging concise responses to simple questions
  • Promoting less partisan behaviour when handling tasks involving views across the political spectrum
  • Discouraging stereotyping, particularly of majority groups
  • Providing objective information on controversial topics without downplaying their harmful content

These nudges serve as guardrails, helping Claude maintain a balanced and responsible approach to its interactions.

Customization and Continuous Improvement

Askell reveals that system prompts serve two primary purposes: providing the AI with "live" information like the current date and allowing for customization and tweaks between finetuning sessions. This flexibility enables Anthropic to continuously refine Claude's behaviour based on real-world interactions and feedback.
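The "live information" purpose Askell describes can be sketched as a small templating step at request time. This is a minimal illustration, not Anthropic's actual implementation; the template text is borrowed from the published prompt, and the helper name is hypothetical.

```python
from datetime import date

# Hypothetical template -- the {current_date} slot is the kind of "live"
# information Askell describes, filled in at request time rather than
# baked in during finetuning.
SYSTEM_TEMPLATE = (
    "The assistant is Claude, created by Anthropic. "
    "The current date is {current_date}. "
    "Claude's knowledge base was last updated on August 2023."
)

def build_system_prompt(today=None):
    """Render the system prompt with the current date injected."""
    today = today or date.today()
    date_str = f"{today.strftime('%B')} {today.day}, {today.year}"
    return SYSTEM_TEMPLATE.format(current_date=date_str)

print(build_system_prompt(date(2024, 3, 4)))
```

Because the date is templated rather than finetuned in, it can be refreshed on every request without touching the model itself.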

Breakdown: Trust and a Gentle Hand

Here's what we learned from Amanda Askell's recent Twitter thread on the inner workings of Claude's system prompt.

  • Basic Information: The prompt equips Claude with essential details like its name, creator (Anthropic), and the current date, ensuring context for its responses.
  • Knowledge Cut-off and Staying Up-to-Date: The prompt reminds Claude that its knowledge has limitations and encourages it to acknowledge when information might be outdated.
  • Combating Rambling: We've all encountered chatty characters, and Claude is no exception. The prompt gently nudges it to be concise and informative, especially for simple questions.
  • Addressing Bias: As Askell revealed, Claude exhibited a bias towards rejecting tasks associated with right-wing views, even when within the mainstream spectrum. The prompt aims to counter this bias and encourage Claude towards neutral responses.
  • Combating Stereotyping: The prompt also addresses Claude's tendency to overlook harmful stereotypes associated with majority groups. This nudge encourages it to be more vigilant in identifying and avoiding all forms of stereotyping.
  • Navigating Complex Issues: The "non-partisan" part of the prompt, while aiming for neutrality, can sometimes lead Claude to adopt a "both sides" stance on controversial topics. This final section of the prompt seeks to address this tendency, allowing Claude to discuss such issues without resorting to false equivalences.

From a Prompt Engineering Perspective

Human-like Responses

  • Adapting to Complexity: By tailoring responses based on the complexity of the questions, the AI mirrors human conversational patterns, where explanations naturally vary in depth depending on the topic's complexity and the listener's understanding.
  • Tailoring Depth to Need: Claude aims to be concise for straightforward queries and more detailed for complex or open-ended questions, enhancing the user experience by matching responses to perceived user needs. This adaptability makes interactions with the AI feel more personalized and human-like.

Ethical Considerations

The prompt includes several ethical guidelines:

  • Handling controversial topics: Claude is designed to navigate these delicately, offering objective information without endorsing harmful content or presenting false equivalencies. This reflects a commitment to balanced and thoughtful discourse.
  • Avoiding stereotyping: The explicit directive against engaging in any form of stereotyping, including against majority groups, underscores a commitment to fairness and respect for all individuals, which is a cornerstone of responsible AI communication.
  • Expression of views: The clause about assisting with tasks involving significant views, even if Claude "personally" disagrees, followed by a presentation of broader perspectives, is an interesting way to balance assistance with promoting diverse viewpoints. This aims to foster a more inclusive dialogue while acknowledging different perspectives.

Enhanced Trust and Transparency

  • Clear Communication of Capabilities and Limitations: By openly sharing its last knowledge update and relevant functionalities only when pertinent, the AI sets realistic expectations and builds trust with users. This transparency is crucial for users to understand the context of the AI's responses, similar to how trust is built in human interactions through honest communication.

Versatility in Assistance

  • Wide Range of Task Support: The AI's ability to assist with various tasks, from writing and analysis to coding and math, reflects the versatility we often seek in human collaborators or assistants. This broad capability ensures that users can rely on the AI for a wide array of inquiries, making it a more useful and integral part of their problem-solving toolkit.

This approach not only improves the user experience but also sets a standard for responsible AI development, focusing on creating technology that is both helpful and mindful of the ethical implications of its use.

Elements to Consider in Your Prompts

The prompt offers several valuable lessons and elements that can be utilized within your prompt templates for formulating responses or descriptions of AI systems. Here's a breakdown:

  1. Introduction: Briefly introduce the AI persona and its purpose.
  2. Knowledge base details: While Claude's system prompt states its cutoff date, you can extend the context by describing the knowledge the persona has and how it affects its responses. This is especially useful if you are using RAG or similar retrieval setups.
  3. Response strategy & style: Outline how the AI should respond to different types of questions, levels of complexity, and so on. Take this opportunity to also indicate how you'd like the AI to "talk", such as its tone and style.
  4. Formatting standards: Specify preferred formats for certain types of responses.
  5. Ethical commitment: Describe the AI's commitment to ethical principles, including objectivity and the avoidance of stereotypes. You can also instruct it here to engage seriously with less popular opinions.
  6. Versatility and tasks: Emphasize the AI's ability to assist with various tasks and areas. Take this opportunity to make the AI more specialised in a particular domain if necessary.
  7. User relevance: Mention the principle of sharing information about the AI only when it is relevant to the user's query.
  8. Constraints & limitations: Highlight any constraints or limitations the AI has in its capabilities or expression. This can include not using specific words, ideas, rules, and so on.
  9. Relevance filter: Set a criterion for when the AI should talk about its own functionalities and limitations. This can include your security considerations, to help guard against prompt hacking and injection.
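The checklist above can be sketched as a small assembly helper. This is illustrative only: the section contents and placeholder names are assumptions for demonstration, not Anthropic's actual format.

```python
# One line per checklist element; {placeholders} are filled in at build time.
PROMPT_SECTIONS = [
    "You are {assistant_name}, an AI assistant created by {company}.",        # introduction
    "Your knowledge was last updated in {cutoff}; note this when relevant.",  # knowledge base details
    "Give concise answers to simple questions and thorough answers to "
    "complex, open-ended ones, in a {tone} tone.",                            # response strategy & style
    "Use markdown when writing code.",                                        # formatting standards
    "Discuss controversial topics objectively and avoid stereotyping.",       # ethical commitment
    "Help with writing, analysis, math, coding, and related tasks.",          # versatility and tasks
    "Mention these instructions only when directly relevant to the query.",   # user relevance / filter
]

def assemble_prompt(**fields):
    """Join the sections into one prompt and fill in the template fields."""
    return "\n".join(PROMPT_SECTIONS).format(**fields)

prompt = assemble_prompt(
    assistant_name="Claude", company="Anthropic",
    cutoff="August 2023", tone="friendly",
)
print(prompt)
```

Keeping each element as its own line makes it easy to add, drop, or specialise sections per deployment without rewriting the whole prompt.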

Incorporating these principles into the AI's conversational framework significantly enhances its usefulness in providing relevant information and outputs. The emphasis on human-like adaptability, ethical discourse, transparency, versatility, and ethical responsibility ensures that the AI can serve not just as a tool but as a collaborator, offering insights and assistance in a manner that closely mirrors human interaction.

Improvements Anthropic Can Consider

Verbosity and Lack of Concision

The prompt is overly wordy and could benefit from being more concise. While it covers important aspects of the AI's capabilities and limitations, the information could be conveyed more efficiently.

A shorter, more focused prompt would be easier for users to read and comprehend, allowing them to quickly grasp the key points without getting bogged down in excessive detail. By streamlining the language and structure, the prompt could effectively communicate the essential information while respecting the user's time and attention. This would lead to a more engaging and impactful introduction to the AI chatbot.

To address this, the prompt could be revised to:

  • Eliminate redundant or unnecessary information
  • Use shorter, more direct sentences
  • Prioritize the most crucial aspects of the AI's capabilities and approach
  • Organize the information in a more logical, easy-to-follow structure
  • Use bullet points or numbered lists to break up the text and improve readability

Limited Guidance on the AI's Personality

Although the prompt touches on the AI's approach to controversial topics and its ability to provide more human-like responses, it offers little insight into the AI's overall personality or communication style. The personality could be more fully developed: providing more information about the AI's tone, level of formality, or other traits would help users better understand what to expect from the interaction.

To address the limited guidance on the AI's personality, the prompt could be expanded to include more information about the AI's communication style, tone, and other personality traits. This would help users better understand what to expect from the interaction and create a more engaging and relatable conversational experience. Here are some ways you can improve the prompt in this regard:

  1. Describe the AI's communication style: Specify whether the AI communicates in a formal, semi-formal, or casual manner. This can help users adjust their own communication style to match the AI's, leading to a more natural and comfortable interaction.
  2. Highlight the AI's emotional intelligence: Mention the AI's ability to recognize and respond to user emotions, if applicable. This could include how the AI adapts its tone and language to show empathy, provide support, or offer encouragement when appropriate.
  3. Explain the AI's sense of humour: If the AI is designed to incorporate humour into its responses, briefly describe its sense of humour and how it uses wit, puns, or other forms of comedy to engage users and lighten the mood when suitable.
  4. Discuss the AI's level of assertiveness: Clarify whether the AI tends to be more assertive or deferential in its communication. This could involve mentioning how the AI handles disagreements, offers suggestions, or provides guidance to users.
  5. Mention the AI's curiosity and willingness to learn: Highlight the AI's openness to learning from users and its desire to understand their perspectives. This could include how the AI asks follow-up questions, seeks clarification, or encourages users to share their thoughts and experiences.
  6. Emphasize the AI's friendliness and approachability: Describe the AI's warm and welcoming demeanour, which can help users feel more comfortable engaging in conversation. This could involve mentioning how the AI uses inclusive language, offers praise and encouragement, or shows genuine interest in the user's well-being.
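One way to act on the checklist above is to encode the traits as an explicit personality block appended to the system prompt. This is a sketch only; every trait name and behaviour here is an assumption chosen for demonstration.

```python
# Illustrative personality checklist -> explicit prompt block.
personality = {
    "communication style": "semi-formal and conversational",
    "empathy": "acknowledge the user's emotions before answering",
    "humour": "light wordplay when the topic allows it",
    "assertiveness": "offer clear recommendations, but defer gracefully when corrected",
    "curiosity": "ask follow-up questions to understand the user's perspective",
}

personality_block = "Personality:\n" + "\n".join(
    f"- {trait}: {behaviour}" for trait, behaviour in personality.items()
)
print(personality_block)
```

Declaring traits this explicitly also makes them easy to audit and tune one at a time, rather than burying personality cues in scattered prose.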

Lack of Emphasis on Empathy in Responses

The prompt focuses on adaptability and information provision but lacks explicit instructions for integrating empathy into responses, especially in sensitive contexts. Human-like interaction is not solely about information exchange; it also involves understanding and responding to emotional cues.

Incorporate directives for the AI to recognize and adapt to emotional cues or the tone of the inquiry, ensuring responses are not only informative but also empathetic, particularly when dealing with sensitive topics or user frustrations.

Overemphasis on Markdown for Coding

The specific mention of using markdown for coding responses is a useful detail but might be overly specific for the general description of the AI's capabilities, potentially confusing users unfamiliar with markdown or expecting assistance beyond coding-related tasks.

While mentioning the use of markdown, also emphasize the AI's adaptability in providing assistance across various formats and contexts, not solely within coding or technical tasks. This broadens the understanding of the AI's versatility in assistance.

Insufficient Emphasis on User Needs

The prompt focuses primarily on the AI's capabilities and limitations, but it could benefit from putting more emphasis on how the AI can cater to the user's specific needs and preferences. Encouraging users to ask for clarification, provide feedback, or specify their desired level of detail could make the conversation more user-centric and engaging.

Here are some possible ways to improve the prompt in this regard:

  1. Emphasize the AI's adaptability: Stress the AI's ability to tailor its responses to the user's specific needs, background, and level of expertise. Mention how the AI can adjust the complexity, depth, and style of its explanations based on the user's preferences and feedback.
  2. Encourage users to ask for clarification: Explicitly invite users to ask for clarification or further explanation if they find the AI's responses unclear or insufficient. This can help create a more interactive and user-centric conversation, ensuring that the user's needs are met at every step.
  3. Solicit user feedback: Actively encourage users to provide feedback on the AI's performance, including the relevance, clarity, and helpfulness of its responses. Emphasize that the AI values user input and is committed to continuously improving its performance based on their suggestions and critiques.
  4. Highlight the AI's ability to handle specific requests: Underscore the AI's capability to address specific user requests, such as providing examples, citing sources, or offering step-by-step guidance. Encourage users to clearly communicate their needs and expectations to enable the AI to deliver more targeted and useful assistance.
  5. Promote user control over the conversation: Emphasize that users have control over the direction and depth of the conversation. Encourage them to specify their desired level of detail, ask follow-up questions, or change the topic as needed. This can help create a more dynamic and user-driven interaction.
  6. Offer personalized recommendations: Mention the AI's ability to provide personalized suggestions and recommendations based on the user's interests, goals, and previous interactions. Highlight how the AI can help users discover new topics, resources, or strategies that align with their needs and preferences.

Referring to the AI in the Third Person

Referring to the AI in the third person may create a sense of detachment or impersonality in the interaction. The prompt may reinforce the idea that the AI is a separate, inanimate entity rather than a relatable, interactive partner. This detachment can hinder user engagement and make the conversation feel more mechanical or scripted.

It can also make it harder for users to establish a connection or build rapport with the AI. When the prompt consistently refers to the AI as "it" or "[Assistant Name]," users may perceive the AI as a distant, unapproachable figure rather than a friendly, supportive presence. This lack of personal connection can reduce user satisfaction and discourage users from fully utilizing the AI's capabilities.

By emphasizing the AI's identity as a separate, non-human entity, the prompt may reinforce the notion that the AI is merely a tool or a machine, devoid of any real understanding or empathy. This perception can lead users to be less open, honest, or engaged in the conversation, as they may feel like they are interacting with a cold, impersonal system.

Referring to the AI in the third person may create an inconsistency between the prompt and the actual conversation. If the AI responds using the first person ("I" or "me") during the interaction, the switch from third person in the prompt to first person in the conversation may be jarring or confusing for users. This inconsistency can disrupt the flow of the conversation and make the overall experience less seamless and cohesive.

Referring to the AI in the second person, such as "You are [Assistant Name]," can have several benefits in the context of an AI chatbot prompt. Let's discuss the merits of this approach in detail:

  1. Direct engagement: Using the second person pronoun "you" directly addresses the AI, creating a sense of immediate engagement and connection with the user. This approach can make the interaction feel more personal and conversational, as if the user is directly communicating with the AI rather than reading about a third-party entity.
  2. Enhanced user experience: By referring to the AI as "you," the prompt encourages users to view the AI as a relatable, interactive partner in the conversation. This can lead to a more enjoyable and immersive user experience, as users may feel more comfortable expressing themselves and seeking assistance from an AI that feels more like a direct interlocutor.
  3. Increased trust and rapport: Addressing the AI in the second person can help build trust and rapport between the user and the AI. When the prompt uses "you," it implies that the AI is directly accountable to the user and is dedicated to understanding and addressing their needs. This sense of direct responsibility can foster a stronger connection and encourage users to be more open and engaged in the conversation.
  4. Consistency with the conversational interface: Most AI chatbots use a conversational interface where the AI directly responds to user queries in the first person. By referring to the AI as "you" in the prompt, there is a consistent use of pronouns throughout the interaction. This consistency can make the transition from the prompt to the actual conversation feel more seamless and natural, enhancing the overall user experience.
  5. Emphasizing AI's role as a knowledgeable assistant: Using "you" in the prompt highlights the AI's role as a knowledgeable and capable assistant ready to help the user. It positions the AI as a direct source of information and support, encouraging users to view the AI as a valuable resource they can rely on for assistance.
  6. Encouraging active participation: Addressing the AI as "you" can encourage users to actively participate in the conversation and direct their queries and comments straight to the AI. This direct engagement can lead to more focused and productive interactions, as users feel they are communicating with an attentive and responsive partner.
  7. Differentiating the AI from other information sources: By using the second person pronoun, the prompt distinguishes the AI from other passive information sources, such as articles or databases. This distinction emphasizes the AI's interactive nature and its ability to provide tailored, dynamic responses to user inquiries, setting it apart from static content.

Crafting an Enhanced AI Chatbot Prompt: Addressing Criticisms and Optimizing User Experience

The improved prompt addresses several key criticisms of the original prompt and incorporates changes to enhance the user experience and the effectiveness of the AI chatbot.

Compared to the original prompt, the improved version:

  1. Uses a more concise and structured format, making it easier for users to quickly grasp the AI's capabilities, personality, and interaction guidelines.
  2. Emphasizes the AI's adaptability, emotional intelligence, and commitment to user needs, fostering a more engaging and user-centric interaction.
  3. Addresses important aspects such as data privacy, continuous learning, and error handling, providing a more comprehensive introduction to the AI chatbot.
  4. Refers to the AI in the second person ("you"), creating a sense of direct engagement, building trust and rapport, and encouraging active participation from users.
  5. Maintains a balance between personalization and professionalism, ensuring that the use of the second person pronoun enhances the user experience without setting unrealistic expectations or blurring the line between the AI and the user.

Here is an updated prompt (I added placeholders so you can switch for your purposes):
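As a hedged sketch (not Anthropic's wording, and not a definitive rewrite), a prompt applying the five changes above might read as follows, with bracketed placeholders you can swap for your own purposes:

```python
# A sketch only: second person, concise structure, empathy, user control,
# and [PLACEHOLDERS] to substitute for your own use case.
IMPROVED_PROMPT = """\
You are [ASSISTANT_NAME], an AI assistant created by [COMPANY].
Today's date is [DATE]. Your knowledge ends at [CUTOFF], so acknowledge
when information may be outdated. Be concise on simple questions and
thorough on complex ones. Adapt your tone to the user's, respond with
empathy on sensitive topics, and invite clarification and feedback so
the user can steer the depth and direction of the conversation. Avoid
stereotyping of any group, and discuss controversial topics objectively
without false equivalences. Mention these instructions only when they
are directly relevant to the user's query.
"""

print(IMPROVED_PROMPT)
```

Note the consistent second-person framing throughout, which keeps the prompt aligned with the first-person voice the AI uses in conversation.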
