Large language models like GPT-3 have spawned a new emerging field called Prompt Engineering. People are coming up with clever techniques to prompt these models to perform different tasks, such as:
These are just a couple of examples among millions of possible use cases. The exciting thing about Prompt Engineering is that the possibilities are endless.
Prompt Engineering is the practice of producing robust prompts for generative language models that can withstand a number of real world challenges:
As a Prompt Engineer, your job is to take the general capabilities of large language models and narrow them to a specific use case. You are responsible for addressing the above challenges and following best practices to accommodate them.
In this post we will explore the anatomy of a prompt, break down an example prompt, discuss best practices when designing prompts, and answer some common questions about Prompt Engineering.
Prompts are fundamentally made of 3 parts:
Let's look at one of my favourite examples: Fruit → Color Hex.
You'll notice the prompt has 3 sections:
fruit
input placeholder was created.fruit
input is a simple free-form text box. Next to the inputs is a text preview to help you visualize what the final prompt will look like before its sent.The goal of this specific prompt is to produce a color hex value that best matches the color of the fruit provided.
Notice how we designed our prompt template. You can logically break it down into multiple parts: a context, task, label, and placeholder. Let's identify each one:
Given the following fruit,
output the closest color hex value that matches the color of that fruit.
Fruit:
{{ fruit }}
Color hex string:
Why did we break up the prompt this way? Let's explore this in the next section about tips for effective prompts.
The structure of an effective prompt varies depending on your goals. But as a general rule of thumb, the following components are recommended:
Identity: Give the language model an identity.
You are a question-answer bot for a luxury watch website.
The goal with identity is to prime the language model with context that will reinforce the task you will ask to it do.
Context: Give context when applicable. For example, a luxury watch bot will need to know as much information as possible about the product in order to answer questions about it. We must inject this information into the prompt to give it content to work with.
Given the following information about the product:
{{ productInformation }}
Task: Explain the model's job.
Answer the following question from a customer about the above watch product.
Conditions: Prevent the model from hallucinating (making up answers) by adding a condition to the task.
If the answer is not provided above or you are unsure, reply with "Sorry, I don't know."
Labels: Labels help set expectations and structure for the model. Without labels, models will sometimes try to add on to the task itself instead of performing the task. In our example, it would be wise to label our question and answer:
Question: """
{{ question }}
"""
Answer:
Notice that we also wrap our question input in triple quotes. This helps make the input explicit for the model. It also helps mitigate against prompt injection.
User input: As you can see above, we needed a place to inject user input. Most prompt tools will provide a templating language to allow you to set placeholders within your prompt:
{{ question }}
Keep in mind that any of the above components have the ability to be dynamically injected as required. For example, context is a prime candidate for dynamic injection since this information may be constantly changing and unknown at build time. Likely you would retrieve context from a database, knowledge base, or external API. You can use embeddings to determine which content is most relevant when injecting.
Here's the final result:
Not exactly. Fine-tuning is the process of re-training the language model itself with custom training data. Fine-tuning is just one of many tools in a Prompt Engineer’s tool belt to produce the desired outcome.
Fine-tuning is not always the answer. You will often pay a premium to fine-tune a model, both during the training process and for every future completion request after that. You may be surprised how far you can get using other Prompt Engineering techniques like context injection + embeddings.
prmpts.AI is a prompt engineering playground to test and share robust prompts with others. Instead of proprietizing our discoveries, let’s keep this technology open and explore it together.