Practice without frustration - what helps with prompting
August 26, 2024
After looking at the different types of AI in the first part of our series, examining generative AI and its special features in the second, and comparing the different models in the third, this part is all about concrete preparation for practice. The following knowledge will help you prompt successfully.
Remember that language models are iterative machines. They respond to input with a repetitive process in which, to put it simply, they append the most probable next word to the input as output and repeat this step until they have generated enough text.
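This iterative loop can be sketched with a toy example. The "model" below is just a hypothetical lookup table standing in for a real language model's probability distribution; it only illustrates the append-and-repeat mechanism described above.

```python
# Toy illustration of iterative text generation: the "model" repeatedly
# appends the most likely next word until a stop condition is reached.
# NEXT_WORD is a hypothetical stand-in for a real model's predictions.

NEXT_WORD = {
    "the": "cat",
    "cat": "sat",
    "sat": "down",
    "down": "<end>",  # sentinel marking the end of generation
}

def generate(prompt_word, max_words=10):
    words = [prompt_word]
    for _ in range(max_words):
        nxt = NEXT_WORD.get(words[-1], "<end>")
        if nxt == "<end>":
            break  # model "decides" the text is complete
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # the cat sat down
```

A real model chooses among thousands of candidate words with varying probabilities rather than following a fixed table, which is why the same prompt can produce different answers.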
They draw on a huge pool of possible output. That is what makes them attractive. Whether the output is correct is another matter. But we can do something to keep wrong answers to a minimum.
The prompt
When we interact with a language model, we do so using a prompt. This is the instruction that tells the language model what to do.
The prompt can be text, sound, or images. A prompt can also consist of a series of instructions, or it can be used more than once in an interaction with a language model, just as we speak more than once in a conversation.
When we say that a prompt can be a text, we are talking about many different forms of text. It can be a sentence, a question, a file such as a PDF, an SQL statement (a database query), or even programming code.
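As a small illustration of this variety, a prompt can embed other text forms, such as an SQL statement the model is asked to explain. The query and wording below are invented for illustration only.

```python
# A prompt can wrap another kind of text, here a hypothetical SQL query,
# together with an instruction telling the model what to do with it.
sql = "SELECT name, email FROM customers WHERE created_at >= '2024-01-01';"

prompt = (
    "Explain in plain language what this SQL query does:\n\n"
    + sql
)

print(prompt)
```

The same pattern works for programming code, file contents, or any other text you want the model to work on.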
Avoiding hallucinations
So how do we formulate the prompt to get a correct response from the language model?
First, we need to be clear about what we are prompting for. In a brainstorming session, it's not about right or wrong, it's about good ideas. So if we use language models to generate output whose factual accuracy doesn't matter, we are already on the safe side.
But what if we want to learn about or research something? Then we should be aware that chatbots always provide an answer and try to satisfy us. Both can come at the expense of the truth.
We should be cautious with the following topics:
- Language models are not search engines: If you are looking for something, use only chatbots that supplement the model with an Internet search, such as Perplexity.
- Be satisfied with the level of detail provided by a language model: If you can ask in detail, you are likely to get a correct answer. But if you are unfamiliar with a topic, are not satisfied with the chatbot's answer, and press for details, be careful: the language model will supply details even when it doesn't know them.
- Use roles in moderation: As you learn to prompt, you will come across the technique of giving a chatbot a role so that it answers in that role. This is good if it makes the answer more specific and gives it a special quality. However, if we insist on the role throughout a chat, the chatbot will "play" that role with all the consequences for the truthfulness of the content. In this case, too, we should check the answers.
- Refrain from asking for reasons: If we ask a language model why it gave a certain answer, we will rarely get a useful response. A language model always answers and wants to satisfy us, and genuine reasons are hard to reconcile with probability-based text generation. The chances of a satisfactory explanation are therefore better if we work it out ourselves.
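The role technique mentioned above is often expressed in chat-style APIs as a "system" message that precedes the user's prompt. The sketch below uses the widely used messages format with `role` and `content` fields; the exact field names and role handling vary by provider, and the role text itself is invented for illustration.

```python
# Hedged sketch: assigning a role via a system message, following the
# common chat-API messages format (exact details differ by provider).
messages = [
    {
        "role": "system",
        # The role makes answers more specific, but the model will
        # "play" it even where it lacks knowledge, so verify the output.
        "content": "You are an experienced science editor. Answer concisely.",
    },
    {
        "role": "user",
        "content": "Summarize the main risks of trusting chatbot answers.",
    },
]

# This list would be passed to the chat endpoint of your provider's API.
print(messages[0]["role"], "->", messages[1]["role"])
```

Keeping the role instruction short and dropping it once it no longer adds value is one way to apply the "in moderation" advice above.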
If you keep these points in mind, you will know which results you can trust and for which you should consult other sources. With this in mind, we can move on to the most important rules of prompting in the next part.