How to Create Efficient Prompts for LLMs
Table of contents
- Strategies and techniques for crafting prompts that enable you to get the most from LLMs
- What is prompt engineering?
- Why you should improve your prompt engineering
- How can I improve the conversation?
- Other useful patterns
- How can I improve the impact of my queries?
- Navigating the complex nature of LLMs
By Greg Duckworth
Strategies and techniques for crafting prompts that enable you to get the most from LLMs
In today’s rapidly evolving landscape, Large Language Models (LLMs) are gaining popularity at an unprecedented rate, and I believe tools like ChatGPT can be immensely advantageous to people across a wide range of domains.
This post serves as an introduction to the concept of prompt engineering and a starting point for understanding how to harness the potential of LLMs effectively.
As we delve deeper into the world of prompt engineering, we will explore strategies and techniques to craft prompts that yield desired outcomes, empowering you to make the most out of your interactions with advanced AI systems like ChatGPT.
What is prompt engineering?
Prompt engineering is the strategic creation of precise input instructions for LLMs, enhancing their effectiveness by specifying the desired task or behaviour. It involves formulating queries that elicit accurate and contextually relevant responses, making it a critical skill for harnessing the full potential of artificial intelligence in various applications.
Why you should improve your prompt engineering
In the ever-changing landscape of technology, the large-scale growth of LLMs, exhibited by tools like ChatGPT, is reshaping how we interact with artificial intelligence. Learning prompt engineering is essential to unleash the full potential of these models.
By learning how to craft precise input instructions, individuals across diverse domains can leverage AI effectively. Prompt engineering serves as a gateway to optimising interactions with AI systems like ChatGPT, offering the ability to tailor queries that provide accurate and relevant responses.
Learning and improving your prompt engineering skills empowers you to navigate the intricacies of AI. This enables you to make the most out of AI’s capabilities and ensures a strategic and impactful engagement with its cutting-edge technologies.
How can I improve the conversation?
Using established patterns when interacting with an LLM offers several significant advantages. These patterns provide a structured framework that guides the conversation, ensuring a more coherent and efficient exchange of information.
The persona pattern allows users to tailor the LLM’s responses to specific roles or characters, enhancing the conversational experience.
Question refinement patterns enable users to receive more nuanced and insightful responses by encouraging the generation of better-formulated queries.
Cognitive verifier patterns promote a deeper understanding of the AI’s thought process, fostering transparency and trust.
Audience persona patterns ensure that responses are customised to the user’s background or expertise level, enhancing relevance.
Flipped interaction and few-shot patterns empower users to extract more relevant and targeted information by altering the typical conversational flow.
Below is an example of this type of conversation, where we set the LLM to have the following patterns and prompts:
1 – Persona: In this case, we ask it to be a “Travel Guide”
2 – Question refinement: Ask the LLM, whenever it can think of a better question than the one posed, to recommend it and offer to use it instead
3 – Cognitive verifier: Give the LLM rules to work with and have it ask additional questions to gather more context
4 – Audience persona: This gives the AI context of who it’s talking to, which helps it to better tailor the responses and make them more accessible
5 – Flipped interaction: Informs the AI that it can take on the role of asking questions to help diagnose issues
Underneath is an example of how that conversation plays out:
👉 User: Act as a travel guide exploring the wonders of an alien planet (1). Whenever I ask a question, please suggest a better question and ask me if I would like to use it instead (2). When you are asked a question, follow these rules: Generate a number of additional questions that would help you more accurately answer the question (3). Assume that I don’t have a background in astrophysics (4). I would like you to ask me questions to help me diagnose a problem with my spaceship (5).
Travel Guide (AI): Welcome to our extraterrestrial adventure! As we embark on this cosmic journey, feel free to ask any questions about the alien wonders surrounding us. What part of this alien planet intrigues you the most?
User: Can you tell me about the unique plant life on this planet?
Travel Guide (AI): Absolutely! Before we delve into the alien flora, have you ever been fascinated by exotic plants on Earth, or is this your first encounter with extraterrestrial vegetation?
User: It’s my first encounter. Everything here is so different!
Travel Guide (AI): Fantastic! Considering your fascination with the unknown, would you like to explore the medicinal properties of these alien plants or focus more on their role in the planet’s ecosystem?
User: Let’s focus on their role in the planet’s ecosystem.
Travel Guide (AI): Great choice! Now, thinking about ecosystems, have you ever wondered how Earth’s ecosystems function, or should I adjust the analogy for a more beginner-friendly understanding?
User: Adjust it for a beginner, please.
Travel Guide (AI): Perfect! Picture the alien ecosystem as a harmonious dance, with each species playing a vital role. Now, shifting gears, when was the last time you encountered any issues with your spaceship, and could you describe the nature of the problem?
User: Well, recently, I’ve noticed some strange vibrations during takeoff.
Travel Guide (AI): Interesting! Let’s unravel this mystery. To better understand these vibrations, can you recall any specific conditions or locations where you experienced them, helping us pinpoint potential causes?
User: It mainly happens when I’m exiting the atmosphere.
Travel Guide (AI): Got it! Now, considering these vibrations during atmospheric exit, have you noticed any accompanying sounds or changes in your spaceship’s performance that might give us more clues?
User: Yes, there’s a subtle whirring sound, and sometimes the navigation system acts up.
Travel Guide (AI): Got it! Now, let’s delve deeper into the navigation system. When was the last time you performed maintenance on it, and have you observed any error messages or anomalies in its behaviour?
User: Maintenance was a few weeks ago, and no, I haven’t noticed any error messages.
Travel Guide (AI): Appreciate the information. Moving on, can you recall if these vibrations and navigation issues occur consistently, or are they more sporadic and dependent on specific conditions?
User: It’s quite consistent, happening every time I exit the atmosphere.
Travel Guide (AI): Understood. Continuing with our diagnostic journey, have you made any recent modifications or upgrades to your spaceship’s propulsion system that might coincide with the onset of these issues?
User: Now that you mention it, I did install a new propulsion module a few weeks ago.
Travel Guide (AI): Interesting observation! To further narrow down potential causes, did you acquire the propulsion module from a reputable supplier, and have you verified its compatibility with your spaceship’s existing systems?
User: Yes, I purchased it from a reputable supplier, and the specifications matched my spaceship’s requirements.
Travel Guide (AI): Good to know. Finally, let’s explore environmental factors. Have there been any changes in your recent flight paths or encounters with extreme conditions, such as intense gravitational fields or turbulent celestial phenomena?
User: No, my recent journeys have followed standard flight paths, and I haven’t encountered any extreme conditions.
Travel Guide (AI): Excellent. Based on the information provided, two potential causes emerge: the recent propulsion module installation or a potential issue with the navigation system. To further narrow down the possibilities, we may need to conduct diagnostic tests on both components. Would you like guidance on how to perform these tests, or is there any additional information you can share that might help us pinpoint the issue more accurately?
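The five patterns in the opening prompt can also be assembled programmatically. Below is a minimal sketch in Python that joins the pattern snippets (taken verbatim from the prompt above) into one system prompt; the dictionary keys and function name are illustrative, not a fixed API:

```python
# Compose the five interaction patterns into a single system prompt.
# The snippet texts come from the example prompt; the structure is illustrative.
PATTERNS = {
    "persona": "Act as a travel guide exploring the wonders of an alien planet.",
    "question_refinement": ("Whenever I ask a question, please suggest a better "
                            "question and ask me if I would like to use it instead."),
    "cognitive_verifier": ("When you are asked a question, generate a number of "
                           "additional questions that would help you more "
                           "accurately answer the question."),
    "audience_persona": "Assume that I don't have a background in astrophysics.",
    "flipped_interaction": ("I would like you to ask me questions to help me "
                            "diagnose a problem with my spaceship."),
}

def build_prompt(*pattern_names: str) -> str:
    """Join the selected pattern snippets into one prompt string."""
    return " ".join(PATTERNS[name] for name in pattern_names)

prompt = build_prompt("persona", "question_refinement", "cognitive_verifier",
                      "audience_persona", "flipped_interaction")
print(prompt)
```

Keeping each pattern as a separate snippet makes it easy to mix and match patterns per conversation rather than rewriting the whole prompt each time.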
Other useful patterns
The strategies presented above offer valuable insights for optimising your interactions with advanced language models. It’s crucial to recognise that they only form a subset of the diverse interaction patterns available. Notably, in this specific example, patterns like Few-shot and Template didn’t align with the ongoing conversation and so were not included.
The realm of AI interaction is dynamic and multifaceted, offering a wide range of patterns and approaches tailored to specific use cases and objectives. These patterns collectively contribute to a more controlled, adaptive, and fruitful interaction with the language model, resulting in better outcomes.
While the provided examples give a glimpse into the expansive landscape of AI interaction, delving into additional patterns can significantly enrich the depth and nuance of your interactions with language models. Some notable examples include:
- Few-shot pattern: Provide the LLM with example inputs and their results, so that it can extrapolate the pattern to new inputs
User: The movie was good but a bit too long
User Sentiment: Neutral
User: I didn’t really like this book, it lacked important details and didn’t end up making sense
User Sentiment: Negative
User: I love this book, it was really helpful in learning how to improve my gut health
User Sentiment: Positive
User: I wasn’t sure what to think of this new restaurant, the service was slow, but the dishes were pretty good
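A few-shot prompt like the one above can be built from a list of labelled examples followed by the new, unlabelled input, so the model completes the final label. Here is a minimal sketch; the line layout mirrors the example above and is one common convention, not a requirement:

```python
# Build a few-shot sentiment prompt from labelled examples.
examples = [
    ("The movie was good but a bit too long", "Neutral"),
    ("I didn't really like this book, it lacked important details "
     "and didn't end up making sense", "Negative"),
    ("I love this book, it was really helpful in learning how to "
     "improve my gut health", "Positive"),
]

def few_shot_prompt(examples, new_input):
    """Lay out each example as an input/label pair, ending with the
    unlabelled input so the model fills in the final label."""
    lines = []
    for text, sentiment in examples:
        lines.append(f"User: {text}")
        lines.append(f"User Sentiment: {sentiment}")
    lines.append(f"User: {new_input}")
    lines.append("User Sentiment:")  # left blank for the model to complete
    return "\n".join(lines)

prompt = few_shot_prompt(examples,
    "I wasn't sure what to think of this new restaurant, "
    "the service was slow, but the dishes were pretty good")
```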
- Chain of thought: Show the model your thought process by pairing questions with step-by-step reasoning and the final answer, encouraging it to reason the same way.
User: I have a staging process for a bike race to line up racers. It takes 47 seconds to stage a group of 8 riders and 67 seconds to get the group to the starting gate and ready to race. I want a new group to start the race every 30 seconds. Do I need 8 groups staged at all times in order to have races continually starting every 30 seconds?
LLM: Reasoning – Each group takes 47 seconds + 67 seconds = 114 seconds to be ready to race. In order to race every 30 seconds, I will need to calculate how many races will need to run before a group is ready to race. A group will have 114 seconds / 30 seconds = 3.8 races run before it is ready to race. I can’t have a partial group, so I need to round up to 4, I only need 4 groups staged to be able to race every 30 seconds, so I do not need 8 groups. Answer – NO
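The reasoning in that answer reduces to a short calculation, which we can sanity-check in Python:

```python
import math

stage_seconds = 47      # time to stage a group of 8 riders
gate_seconds = 67       # time to move the group to the starting gate
start_interval = 30     # a new group starts every 30 seconds

total_prep = stage_seconds + gate_seconds               # 114 seconds per group
groups_needed = math.ceil(total_prep / start_interval)  # 3.8, rounded up

print(groups_needed)  # → 4, so 8 staged groups are not required
```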
- Template pattern: Supply the LLM with a precise template to ensure the response conforms to your desired format.
User: Create a random strength workout for me today with complementary exercises. I am going to provide a template for your output. CAPITALISED WORDS are my placeholders for content. Try to fit the output into one or more of the placeholders that I list. Please preserve the formatting and overall template that I provide. This is the template: NAME, REPS @ SETS, MUSCLE GROUPS WORKED, DIFFICULTY SCALE 1-5, FORM NOTES
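One advantage of the template pattern is that responses can be validated mechanically. The sketch below checks a single response line against the template from the prompt using a regular expression; the pattern and the sample line are illustrative assumptions:

```python
import re

# The template from the prompt: NAME, REPS @ SETS, MUSCLE GROUPS WORKED,
# DIFFICULTY SCALE 1-5, FORM NOTES
TEMPLATE_RE = re.compile(
    r"^(?P<name>[^,]+), (?P<reps>\d+) @ (?P<sets>\d+), "
    r"(?P<muscles>[^,]+), (?P<difficulty>[1-5]), (?P<notes>.+)$"
)

def matches_template(line: str) -> bool:
    """Return True if a single response line follows the template."""
    return TEMPLATE_RE.match(line) is not None

# A hypothetical model response line:
print(matches_template("Goblet Squat, 10 @ 3, Quads and Glutes, 2, Keep your chest up"))  # → True
```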
- Alternative Approaches pattern: Ask the LLM to suggest alternative ways of wording a prompt.
User: For every prompt I give you, if there are alternative ways to word a prompt that I give you, list the best alternate wordings. Compare/contrast the pros and cons of each wording.
- Ask for Input pattern: Conclude the prompt with a sentence such as “Request input on X from me,” prompting the LLM to cease text generation and await user input.
User: From now on, translate anything I write into a series of sounds and actions from a dog that represents the dog’s reaction to what I write. Ask me for the first thing to translate.
- Outline Expansion pattern: This pattern primarily addresses the character limitations when working with LLMs with lower token counts. We encounter restrictions not only on the input but also on the output. We cannot issue a prompt like “Generate the entire book” due to the potential for exceeding the character limit. Instead, we need to break it down into smaller segments and then assemble them.
User: Act as an outline expander. Generate a bullet point outline based on the input that I give you and then ask me for which bullet point you should expand on. Each bullet can have at most 3-5 sub bullets. The bullets should be numbered using the pattern [A-Z].[i-v].[* through ****]. Create a new outline for the bullet point that I select. At the end, ask me for what bullet point to expand next. Ask me for what to outline.
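The [A-Z].[i-v].[* through ****] numbering scheme in that prompt can be generated deterministically, which is handy when assembling the expanded segments back into one document. A small illustrative sketch:

```python
# Generate outline labels in the [A-Z].[i-v].[* through ****] scheme
# described in the outline-expansion prompt above.
import string

ROMANS = ["i", "ii", "iii", "iv", "v"]

def outline_label(section: int, sub: int, depth: int) -> str:
    """section 0-25 -> A-Z, sub 0-4 -> i-v, depth 1-4 -> * to ****."""
    return f"{string.ascii_uppercase[section]}.{ROMANS[sub]}.{'*' * depth}"

print(outline_label(0, 1, 3))  # → A.ii.***
```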
- Tail Generation pattern: This approach can assist us in managing token limitations. We can instruct the chat to include specific sentences or information at the end of each output. This ensures that the chat retains and doesn’t omit the required information.
User: From now on, at the end of your output, add the disclaimer “This output was generated by a LLM and may contain errors or inaccurate statements. All statements should be fact checked.” Ask me for the first thing to write about.
How can I improve the impact of my queries?
A few practices will help ensure you get the most out of your queries:
1. Writing clear and specific instructions
Effective prompts begin with clear and precise language. Minimise ambiguity and eliminate room for multiple interpretations. For example, “Mary had a little lamb” could be construed in several ways: a) Mary owned a little lamb, b) Mary ate a little lamb, c) Mary gave birth to a lamb, and so on. By increasing specificity you guide the model toward the desired output. Remember, clarity does not necessitate brevity; provide ample context or background information when required.
2. Give the model time to “think”
Encourage the model’s problem-solving capabilities by elaborating on the steps involved in complex tasks. Enable the model to generate its solutions before introducing external evaluations.
3. Constantly refine and improve prompts
Approach prompt development iteratively. Experiment with diverse prompt structures and variations to identify the most effective ones. Continuously refine your prompts based on the model’s responses, adapting to its evolving capabilities and improving effectiveness.
4. Reduce Token Consumption
Acknowledge the finite hardware resources within which every LLM operates. Understand that it retains information up to specific points in ongoing conversations, discarding earlier prompts automatically upon reaching capacity. Control output size to prevent excessive token consumption; consider incorporating queries like ‘Limit the output to 100 words.’
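One practical way to manage token consumption is to trim the oldest turns from the conversation before they are silently discarded for you. The sketch below uses a rough four-characters-per-token heuristic (an assumption; a real tokeniser such as tiktoken would be more accurate):

```python
# Trim the oldest turns from a conversation so the prompt stays inside
# a token budget. Uses a rough four-characters-per-token heuristic.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def trim_history(messages: list[str], budget: int) -> list[str]:
    """Drop the oldest messages until the estimated total fits the budget."""
    kept = list(messages)
    while kept and sum(estimate_tokens(m) for m in kept) > budget:
        kept.pop(0)  # discard the earliest turn first
    return kept

history = ["first long question " * 20, "short reply", "latest question"]
print(trim_history(history, budget=20))  # → ["short reply", "latest question"]
```

Trimming explicitly, rather than letting the model drop context on its own, lets you decide which turns matter; a summary of the dropped turns could also be kept in their place.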
By integrating these practices into your prompt engineering, you can navigate the complexities of AI interaction, ensuring more impactful outcomes with advanced language models.
Navigating the complex nature of LLMs
In the ever-evolving realm of AI, the journey into the capabilities of LLMs begins with a skilful grasp of prompt engineering. As tools like ChatGPT redefine how we engage with AI, understanding and enhancing prompt engineering becomes not just beneficial but essential.
The discussed interaction patterns, including Persona, Question Refinement, Cognitive Verifier, Audience Persona, Flipped Interaction, and others, act as reliable guides in our conversations with LLMs. They add structure and coherence, turning interactions into meaningful dialogues that cater to our specific needs and roles.
However, it’s crucial to recognise that these patterns offer just a glimpse into the vast landscape of AI interaction. Patterns like Few-shot, Template, Alternative Approaches, and more provide nuanced approaches for diverse scenarios. They’re the secret sauce that adds flavour to our engagements with these advanced models.
To truly make our queries impactful, we need to adopt practices that resonate with the very nature of AI. Clear and detailed instructions, allowing the model time to think, iterative prompt refinement, and mindful token consumption are like the keys to unlocking the full potential of these intelligent systems. These practices infuse a human touch into our exchanges with AI, making them not just efficient but also personally tailored.
In a world where information and assistance are at our fingertips, mastering Prompt Engineering can empower individuals to have more productive and tailored interactions with advanced AI systems, ultimately enhancing their ability to leverage the capabilities of these technologies in diverse applications.
Whether you’re a student, professional, or simply curious about AI, the knowledge and skills presented here can help you make the most of your AI-powered interactions and stay at the forefront of this exciting technological frontier.