Everyone has been talking about prompting in AI for a few months now. But what is prompting, what are the best practices and what mistakes should be avoided? This is exactly what we explain in this blog article.

What is prompting?

Put simply, prompting describes the input – usually in written form – of commands to an AI model. Prompts are therefore used to get an AI model to generate a specific output – usually text or images – from an input. Prompts are most often used with large language models (LLMs), a type of AI model that handles human language particularly well.

Different types of prompt

Before we delve deeper into the best practices of prompting, let’s take a look at the two most important types of prompt: the system prompt and the user prompt.

System prompt

Every AI assistant used in companies has been given a so-called system prompt by its developers. Typically, developers use such a system prompt to set certain parameters for how the end user communicates with the AI model.

System prompt in direct communication with an AI model

A system prompt can be used, for example, when users chat directly with an AI model – e.g. with ChatGPT from OpenAI or Gemini from Google.

System prompt for retrieval augmented generation

In most cases, however, companies do not chat directly with an AI model but use retrieval augmented generation (RAG) instead. This allows companies, for example, to chat securely with their own internal company data, reduce the risk of hallucinations and respect existing access rights.
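The RAG flow described above – retrieve relevant internal documents, then let the model generate an answer grounded in them – can be sketched in a few lines of Python. This is a deliberately minimal illustration: the keyword-overlap scoring and the document store are stand-ins for a real retrieval system, and the prompt wording is our own example.

```python
# Minimal RAG sketch: retrieve internal documents relevant to a query,
# then build a prompt that grounds the model's answer in that context.
# The scoring and documents here are illustrative stand-ins only.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Combine the retrieved context with the user's question (the
    'augmentation' step) before the model generates a response."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (
        "Answer only from the context below. "
        "If the answer is not in the context, say so.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    "Travel expenses must be approved by the line manager.",
    "The cafeteria opens at 11:30 on weekdays.",
    "Expense reports are submitted via the HR portal.",
]
prompt = build_rag_prompt("How do I get travel expenses approved?", docs)
```

In production, the keyword scoring would be replaced by vector search over embeddings, and the access-rights check mentioned above would filter `docs` before retrieval.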

In these situations, the system prompt sets the framework for how the AI model should respond to the user’s request. Such framework conditions include, for example:

  • Formality of the generated response: Should the response be formal, informal, friendly, neutral or technical?
  • Behaviour: Should the response be helpful, factual, creative or analytical?
  • Scope of the response: How comprehensive should the AI model’s answer be?
  • Adaptation to target groups: Should the answer be at a beginner, advanced or expert level?
  • Linguistic precision: What level of language should be used for communication?
  • Cultural and ethical components: Should certain ethical principles or sensitivities be taken into account?

Via the system prompt, developers thus define a framework within which the AI model responds to user requests. The system prompt is then combined with the user prompt to answer a user’s query.
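In chat-style LLM APIs, this combination is typically expressed as a list of messages with different roles. The following sketch uses the widely adopted message format; the prompt texts themselves are invented examples, not recommendations:

```python
# Sketch of how a system prompt and a user prompt are combined in the
# chat message format used by most LLM APIs. Both prompt texts are
# illustrative examples only.

system_prompt = (
    "You are a helpful assistant for ACME GmbH employees. "   # behaviour
    "Answer formally and concisely, at an advanced level. "   # formality, scope, target group
    "If you are unsure, say so instead of guessing."          # ethical component
)

user_prompt = "Summarise our travel expense policy in three bullet points."

# The request sent to the model contains both prompts; only the user
# prompt is visible in a typical end-user interface.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]
```

Passing `messages` to a chat completion endpoint would yield a response shaped by both prompts, even though the end user only ever typed the second one.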

User prompt

When most people talk about prompting, they mean the user prompt. The user prompt is what users see and type in the user interface; the system prompt, by contrast, is not visible to the end user.

Where the system prompt sets the framework for the request, the user prompt specifies the concrete task the AI model is to solve. As the user prompt is the relevant prompt for the majority of AI users, while the system prompt matters mainly to the creators of such AI systems, the best practices below focus on user prompts:

8 Best Practices for Successful Prompting

Use the following 8 tips to simplify interaction with an AI model:

  1. Contextualisation: The more context you provide, the better the AI model can place your query.
    Example: Instead of ‘How do I change the oil in my car?’, ask ‘How do I change the oil in my VW Golf 3 GTI?’
  2. Clarification: The more precise the prompt, the better and more useful the answer.
    Example: Instead of ‘Write a response to customer complaint XYZ’, try ‘Write a response to customer complaint XYZ from ABC and consider our email templates.’
  3. Iterate: A prompt won’t be perfect the first time. Refine the prompt over several iteration loops. Pro tip: let the AI model optimise your prompt, or ask it to explain how it interprets the prompt.
    Example: If the first prompt is ‘How much budget am I allowed to spend as head of department without consulting my line manager?’, then in the next iteration add: ‘Explain the budget approval process for sums that exceed the approval limit.’
  4. Result specification: Give the AI an example according to which the answer should be formulated.
    Example: Provide similar, older emails as an example for the formulation of a cold call email.
  5. Role prompting: Tell the AI what role it has to play.
    Example: ‘You are an expert in online marketing. You specialise in creating great advertising copy for out-of-home campaigns.’
  6. Explanation prompting: Let the AI explain to you step-by-step how it arrived at the result.
    Example: ‘Explain to me step by step how you came up with the result.’
  7. Set a negative frame: Tell the AI what it should not do.
    Example: ‘Explain to me what good prompts for AI models look like without going into the difference between user and system prompts.’
  8. Multi-step question: Divide the prompt into several tasks that are solved in successive steps. (Note: a retrieval augmented generation system needs a multi-hop Q&A technique for this to work.)
    Example: ‘Explain retrieval augmented generation to me. Start with the Retrieval step, then explain the Augmentation step and in the third step go into the Generation step.’

Common mistakes in prompting

Where there are best practices, there are also negative examples. We have therefore listed the three most important prompting mistakes below:

  • Imprecise prompts: Don’t assume that AI models can ‘read minds’. Formulate your prompts clearly and precisely.
  • Prompts that are too complex: Write simple, easy-to-understand prompts. If prompts are too long or convoluted, the AI model may forget certain points or not take them sufficiently into account.
  • Lack of context: An AI model needs background information to interpret things correctly. So don’t just be precise in the prompt, but also provide the details required for interpretation.

Limits of artificial intelligence

Even if all best practices are taken into account and errors are avoided, there are various reasons why AI can deliver unsatisfactory results.

It is therefore important for us to emphasise this: AI systems, especially chat systems, are assistance systems; they are not meant to automate work or take 100% of it off your hands. AI assistants are designed to give employees impulses and ideas that make their work easier.

There is currently a lot of ‘half-knowledge’ floating around in society about what AI can and cannot do. Basically, as of September 2024, AI cannot develop or create anything genuinely new; it can only reproduce and recombine the information its model was trained on. It therefore only reproduces existing knowledge. Nevertheless, it appears very intelligent to us as humans, as it naturally knows far more in total than any individual person.

AI is designed to make it easier for us to find solutions to certain problems – but the final decision still has to be made by people themselves.

Looking to the future: multi-agent systems will become increasingly relevant. In contrast to AI assistants, these are designed to complete certain tasks themselves. However, it will be some time before they are sufficiently mature and have achieved broad market adoption.