Stop Wasting Time with Bad AI Prompts: 10 Tips for Beginners
Improve your prompts today with these best practices
Prompts are how we interact with LLMs.
The right prompt is the key to unlocking the full potential of an LLM. Finding the right prompt requires experimentation, but there are general best practices you can follow to improve your prompts. Here are 10 of them.
1. Provide Examples
Providing examples (one-shot or few-shot prompting) is arguably the single most effective best practice.
An example teaches the model what a good output looks like so it can tailor its response accordingly.
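As a minimal sketch, a few-shot prompt for sentiment classification might look like this (the reviews and labels are purely illustrative):

```python
# Two labeled examples teach the model the task and the expected
# output format before the real input appears.
EXAMPLES = [
    ("The battery died after one day.", "NEGATIVE"),
    ("Setup took two minutes and it just works.", "POSITIVE"),
]

def build_few_shot_prompt(review: str) -> str:
    lines = ["Classify the sentiment of each review as POSITIVE or NEGATIVE.", ""]
    for text, label in EXAMPLES:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The real input follows the same pattern, ending where the
    # model is expected to continue.
    lines.append(f"Review: {review}")
    lines.append("Sentiment:")
    return "\n".join(lines)

print(build_few_shot_prompt("Great screen, terrible keyboard."))
```

The examples double as a format specification: the model sees that each answer is a single uppercase label, so it is far less likely to reply with a full sentence.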
2. Design with Simplicity
Just like good written English, prompts should be clear, concise, and easy to understand.
Avoid complex language and unnecessary information.
3. Be Specific About the Output
Providing specific details in the prompt (or through system or context prompting) can help the LLM focus on what’s relevant and improve accuracy.
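For contrast, here is a vague prompt next to a specific one (both prompts are illustrative):

```python
# Vague: the model must guess the audience, length, and scope.
vague_prompt = "Write a blog post about generative AI."

# Specific: audience, length, content, and tone are all pinned down.
specific_prompt = (
    "Write a 3-paragraph blog post for beginners about generative AI. "
    "Cover what it is, one real-world use case, and one limitation. "
    "Use a conversational tone and avoid jargon."
)
```

The specific version leaves far fewer decisions to chance, so responses are more consistent across runs.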
4. Use Instructions over Constraints
A growing body of research suggests that positive instructions can be more effective than constraints.
Instructions communicate the desired format, style, or content of the response. Constraints communicate what the LLM shouldn’t do.
Using instructions aligns with how humans normally communicate by providing guidance instead of dictating limitations.
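The same request can be phrased either way; here is an illustrative side-by-side (both prompts are made up for this example):

```python
# Constraint-based: a list of things the model should NOT do.
constraint_prompt = (
    "Write a summary of this article. "
    "Do not use technical jargon. Do not exceed three sentences. "
    "Do not include opinions."
)

# Instruction-based: states what the model SHOULD do instead.
instruction_prompt = (
    "Write a summary of this article in plain, everyday language. "
    "Keep it to three sentences and stick to the facts."
)
```

Constraints still have their place, e.g. for safety requirements or strict formats, but leading with positive instructions tends to give the model a clearer target.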
5. Control the Max Token Length
Setting a maximum token length in the generation configuration caps the length of the response, and with it the cost and latency of the request.
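Most LLM APIs expose this as a request parameter. A minimal sketch (the model name is a placeholder, and the exact parameter name varies by provider, e.g. `max_tokens` in the OpenAI API or `max_output_tokens` in the Gemini API):

```python
# Sketch of a request payload; the token cap truncates the response
# once the limit is reached. Note the limit is in tokens, not words.
request = {
    "model": "some-model",  # placeholder model name
    "messages": [
        {"role": "user", "content": "Summarize the plot of Hamlet."}
    ],
    "max_tokens": 100,  # response stops after ~100 tokens
}
print(request["max_tokens"])
```

Keep in mind that a token cap only truncates output; if you want a genuinely short answer, also ask for brevity in the prompt itself.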
6. Use Variables in Prompts
Just like in programming, if you need to use the same information in multiple prompts, you can store that information in a variable and reference that variable within the prompt.
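In Python, an f-string is the simplest way to do this (the city and prompts below are illustrative):

```python
# Store shared information once, reference it in multiple prompts.
city = "Amsterdam"

prompt_facts = f"Tell me three facts about {city}."
prompt_itinerary = f"Plan a one-day itinerary for a first-time visitor to {city}."
```

Changing the variable in one place updates every prompt that uses it, which matters once prompts live in application code rather than a chat window.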
7. Experiment with Input Formats and Writing Styles
Since different phrasings of the same request can yield different results, experiment with prompt attributes such as style, word choice, and the type of prompt.
For example, you can formulate your prompt as a question, a statement, or an instruction.
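The three formulations might look like this for a single topic (the topic and wording are illustrative):

```python
topic = "the Sega Dreamcast"

# The same request phrased three ways; each may elicit a
# different level of detail or tone from the model.
variants = {
    "question":    f"What was {topic} and why was it significant?",
    "statement":   f"Many consider {topic} ahead of its time. Expand on this.",
    "instruction": f"Write a short history of {topic}.",
}
```

Running all three against the same model and comparing outputs is a quick, cheap experiment.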
8. Adapt to Model Updates
LLMs evolve rapidly, and adjusting your prompts to take advantage of new model capabilities can lead to better responses.
9. Experiment with Output Formats
In addition to experimenting with prompt inputs, you can experiment with the format you ask the model to return.
For example, you can request the output be returned as either JSON or XML.
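A structured format like JSON can then be parsed directly by your code. A minimal sketch (no model is actually called here, so the reply is simulated):

```python
import json

# Ask for machine-readable output and show the exact shape expected.
prompt = (
    "List three programming languages and their release years. "
    'Return only valid JSON in the form: [{"language": "...", "year": 0}]'
)

# A model reply might look like this (simulated for illustration):
simulated_reply = '[{"language": "Python", "year": 1991}]'

# Structured output plugs straight into the standard json parser.
data = json.loads(simulated_reply)
print(data[0]["language"])
```

In practice you should still wrap the parse in error handling, since models occasionally wrap JSON in extra prose or code fences.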
10. For Chain-of-Thought (CoT) Prompting, Set the Temperature to 0
Chain-of-thought prompting uses step-by-step reasoning to reach a final answer, and there is usually a single correct one. Greedy decoding is therefore preferable: set the temperature to 0.
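A sketch of a CoT request with greedy decoding (the parameter names mirror common LLM APIs but vary by provider, and the model name is a placeholder):

```python
# Chain-of-thought request: the "Let's think step by step" suffix
# elicits intermediate reasoning; temperature 0 makes decoding greedy,
# so the most likely token is picked at every step.
cot_request = {
    "model": "some-model",  # placeholder model name
    "prompt": (
        "When I was 3 years old, my partner was 3 times my age. "
        "Now I am 20. How old is my partner? Let's think step by step."
    ),
    "temperature": 0,
}
```

With temperature above 0, the sampled reasoning chain (and hence the final answer) can change from run to run, which is exactly what you don't want for deterministic reasoning tasks.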
References
- https://www.kaggle.com/whitepaper-prompt-engineering
- https://cloud.google.com/blog/products/application-development/five-best-practices-for-prompt-engineering?e=48754805
- https://developers.google.com/machine-learning/resources/prompt-eng