What is Prompt Engineering?
2023-01-24 · 8 minute read

Large language models like GPT-3 have spawned an emerging field called Prompt Engineering. People are coming up with clever techniques to prompt these models to perform a wide range of tasks, such as:

  • TL;DR summarization
  • Fixing grammar and spelling errors
  • Explaining a concept to a five-year-old
  • Generating code from natural language
  • Explaining what a piece of code does
  • Generating stories
  • Question answering (Q&A)

These are just a handful of examples among countless possible use cases. The exciting thing about Prompt Engineering is that the possibilities are endless.

So what is Prompt Engineering exactly?

Prompt Engineering is the practice of producing robust prompts for generative language models that can withstand a number of real-world challenges:

  • How do we craft a prompt to produce a specific result?
  • How do we test our prompts to know that they consistently produce the right results?
  • How do we trust the language model not to say something incorrect/inappropriate?
  • How do we prevent prompt injection?
  • How do we pass in a custom data set or knowledge base?
  • What do we do if our prompt doesn't fit within the model's token limits?
  • How do we estimate cost per prompt when inputs are unknown?
  • How do we compare competing language models as more become available?

As a Prompt Engineer, your job is to take the general capabilities of large language models and narrow them to a specific use case. You are responsible for addressing the above challenges and following best practices to accommodate them.
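
To make a couple of those challenges concrete, here is a minimal sketch of token counting and cost estimation using the tiktoken library. The model name and per-token price are illustrative assumptions; check your provider's actual limits and pricing.

    import tiktoken

    MODEL = "text-davinci-003"   # illustrative model choice
    PRICE_PER_1K_TOKENS = 0.02   # illustrative USD price per 1K tokens

    def estimate_cost(prompt: str, max_completion_tokens: int = 100) -> float:
        """Rough cost estimate: prompt tokens plus a completion budget."""
        enc = tiktoken.encoding_for_model(MODEL)
        prompt_tokens = len(enc.encode(prompt))
        total_tokens = prompt_tokens + max_completion_tokens
        return total_tokens / 1000 * PRICE_PER_1K_TOKENS

Counting prompt tokens this way also tells you whether a rendered prompt will fit within the model's context window before you send it.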

In this post we will explore the anatomy of a prompt, break down an example prompt, discuss best practices when designing prompts, and answer some common questions about Prompt Engineering.

Anatomy of a prompt

Prompts are fundamentally made up of three parts, sketched in code just after this list:

  • Static template: This is the text template that structures the prompt's context, task, and input placeholders. This part of the prompt doesn't change between executions.
  • Dynamic inputs: This is dynamic data injected into the prompt based on user input. It could come directly from the user, such as freeform text or schema-controlled input, or it could be other injected data, such as externally fetched content. The locations of these inputs are set using placeholders in the static template and will vary for every prompt.
  • Model parameters: This is where you define which language model you are using and any additional parameters for that model (e.g. temperature).
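
As a rough sketch, here is how those three parts might look in code. The model name, temperature, and render helper are illustrative, not tied to any particular library; Python's str.format stands in for the {{ placeholder }} syntax used in the examples below.

    # Static template: fixed text that structures the prompt; {text} marks
    # where dynamic input will be injected.
    TEMPLATE = (
        "Summarize the following text in one sentence.\n\n"
        "Text:\n{text}\n\n"
        "Summary:"
    )

    # Model parameters: which model to use and how to call it.
    MODEL_PARAMS = {"model": "text-davinci-003", "temperature": 0.3}

    def render(template: str, **inputs: str) -> str:
        """Inject the dynamic inputs into the static template."""
        return template.format(**inputs)

    # Dynamic input supplied at run time, e.g. from a text box.
    prompt = render(TEMPLATE, text="Large language models are ...")
    # `prompt` plus MODEL_PARAMS is what you would send to the model.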

Example: Fruit → Color Hex

Let's look at one of my favourite examples: Fruit → Color Hex.

[Embedded prompt playground: the prompt template (31 tokens), a fruit input, a live preview (34 tokens), and model family/model selection.]

You'll notice the prompt has 3 sections:

  1. The prompt template: This top section is the meat and potatoes of the prompt. It contains the core instructions and structure for the prompt. If you click on it, you'll see how the fruit input placeholder was created.
  2. Inputs and preview: For every input placeholder in the template, an input is created below. In this example, the fruit input is a simple free-form text box. Next to the inputs is a text preview to help you visualize what the final prompt will look like before it's sent.
  3. Model parameters: At the bottom you choose which language model you want to use for this prompt. As more language models are released (from multiple organizations), you can experiment with them here.

The goal of this specific prompt is to produce a color hex value that best matches the color of the fruit provided.
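
Assembled from the parts we'll break down next (exact line breaks aside), the full template looks like this:

    Given the following fruit, output the closest color hex value that matches the color of that fruit.

    Fruit:
    {{ fruit }}

    Color hex string:

With a fruit such as "strawberry" filled into the {{ fruit }} placeholder, the model is expected to continue after "Color hex string:" with a hex value matching that fruit's color.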

Notice how we designed our prompt template. You can logically break it down into four parts: a context, a task, a labeled placeholder, and a completion label. Let's identify each one:

  • Context:
    Given the following fruit,
    
  • Task:
    output the closest color hex value that matches the color of that fruit.
    
  • Placeholder with label:
    Fruit:
    {{ fruit }}
    
  • Completion label:
    Color hex string:
    

Why did we break up the prompt this way? Let's explore this in the next section about tips for effective prompts.

Tips for effective prompts

The structure of an effective prompt varies depending on your goals. But as a general rule of thumb, the following components are recommended:

  1. Identity: Give the language model an identity.

    You are a question-answer bot for a luxury watch website.
    

    The goal with identity is to prime the language model with context that will reinforce the task you will ask it to do.

  2. Context: Give context when applicable. For example, a luxury watch bot will need to know as much information as possible about the product in order to answer questions about it. We must inject this information into the prompt to give the model content to work with.

    Given the following information about the product:
    {{ productInformation }}
    
  3. Task: Explain the model's job.

    Answer the following question from a customer about the above watch product.
    
  4. Conditions: Prevent the model from hallucinating (making up answers) by adding a condition to the task.

    If the answer is not provided above or you are unsure, reply with "Sorry, I don't know."
    
  5. Labels: Labels help set expectations and structure for the model. Without labels, models will sometimes try to add on to the task itself instead of performing the task. In our example, it would be wise to label our question and answer:

    Question: """
    {{ question }}
    """
    
    Answer:
    

    Notice that we also wrap our question input in triple quotes. This makes the input boundaries explicit for the model and helps mitigate prompt injection.

  6. User input: As you can see above, we needed a place to inject user input. Most prompt tools will provide a templating language to allow you to set placeholders within your prompt:

    {{ question }}
    

Keep in mind that any of the above components can be dynamically injected as required. For example, context is a prime candidate for dynamic injection, since this information may be constantly changing and unknown at build time. You would likely retrieve context from a database, knowledge base, or external API, and you can use embeddings to determine which content is most relevant to inject.
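
As a rough sketch of that last idea, here is how embedding-based selection might look. The embed function is a stand-in for whatever embedding model or API you use; cosine similarity ranks candidate context chunks against the user's question.

    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    def select_context(question, chunks, embed, top_k=2):
        """Return the top_k chunks most relevant to the question.

        `embed` is any function mapping text to a vector (assumed here,
        not a specific library call).
        """
        question_vec = embed(question)
        ranked = sorted(
            chunks,
            key=lambda c: cosine_similarity(embed(c), question_vec),
            reverse=True,
        )
        return "\n".join(ranked[:top_k])

The selected chunks are what you would inject into a placeholder like {{ productInformation }}. In practice you would precompute and cache the chunk embeddings rather than embedding every chunk on each request.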

Here's the final result:
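
Reconstructed from the components above (spacing is approximate), the assembled template is:

    You are a question-answer bot for a luxury watch website.

    Given the following information about the product:
    {{ productInformation }}

    Answer the following question from a customer about the above watch product. If the answer is not provided above or you are unsure, reply with "Sorry, I don't know."

    Question: """
    {{ question }}
    """

    Answer: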

[Embedded prompt playground: the assembled prompt template (71 tokens), productInformation and question inputs, a live preview (121 tokens), and model family/model selection.]

Is Prompt Engineering the same as fine-tuning?

Not exactly. Fine-tuning is the process of further training the language model itself on custom training data. Fine-tuning is just one of many tools in a Prompt Engineer's tool belt to produce the desired outcome.

Fine-tuning is not always the answer. You will often pay a premium to fine-tune a model, both during the training process and for every future completion request after that. You may be surprised how far you can get using other Prompt Engineering techniques like context injection + embeddings.

What is prmpts.AI?

prmpts.AI is a prompt engineering playground to test and share robust prompts with others. Instead of keeping our discoveries proprietary, let's keep this technology open and explore it together.

© 2023 prmpts.AI