Session 5: Prompt Engineering
Prompting Techniques · AI Model Parameters · Chain-of-Thought (CoT)


Presenters

Masih Moloodian, Yasin Fakhar, Mohammad Amin Dadgar

Prompt Engineering

Based on content from promptingguide.ai discussed in our AI Talks meeting, the first section of the document focuses on the key parameters that control AI model outputs. Temperature and Top P are presented as the primary ways to manage randomness and creativity in responses, with the recommendation to adjust one but not both. Similarly, the frequency and presence penalties control repetition in different ways: the frequency penalty grows with how often a token has already appeared, while the presence penalty applies a flat penalty to any token that has appeared at least once. The document also covers basic parameters such as max length, which caps the size of the response, and stop sequences, which cut generation off at a chosen string.
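As a minimal sketch of how these parameters appear in practice, the call below assumes the OpenAI Python SDK and an illustrative model name; other providers expose similar knobs under similar names, and the specific values shown are only examples.

```python
# Sketch of the sampling parameters discussed above, assuming the OpenAI
# Python SDK (openai >= 1.0). Values are illustrative, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",          # assumed model name for illustration
    messages=[{"role": "user", "content": "Suggest a name for a coffee shop."}],
    temperature=0.9,              # higher -> more random / creative output
    # top_p=0.9,                  # alternative to temperature; adjust one, not both
    frequency_penalty=0.5,        # penalty grows with how often a token has appeared
    presence_penalty=0.0,         # flat penalty once a token has appeared at all
    max_tokens=100,               # "max length": cap on generated tokens
    stop=["###"],                 # stop sequence: generation halts at this string
)

print(response.choices[0].message.content)
```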

The guide then emphasizes the importance of specificity and precision in prompt writing. It recommends using clear separators such as "###" or "---" to distinguish instructions from context, and stresses the value of being direct rather than overly clever. The guidance also suggests stating what to do rather than what not to do, with concrete examples of how this framing leads to better results in practice.
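To illustrate the separator advice, the snippet below builds a prompt with "###" between the instruction and the context, and phrases the instruction as what to do rather than what to avoid. The instruction and context text are invented for the example.

```python
# Illustrative prompt layout: instruction first, then "###"-delimited context.
# Note the positive framing ("summarize as three bullet points") instead of a
# list of things the model should not do.
prompt = (
    "Summarize the text below as three bullet points for a technical audience.\n"
    "###\n"
    "Text: Temperature and Top P both control randomness in model output, but the\n"
    "usual guidance is to tune one of them and leave the other at its default.\n"
    "###"
)

print(prompt)
```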

The final section from promptingguide.ai discusses prompting techniques, with particular attention to few-shot prompting and its limitations. Using a mathematical reasoning example, the document illustrates that while few-shot prompting can be effective for many tasks, it may fall short with complex reasoning problems. The text introduces chain-of-thought (CoT) prompting as a more advanced technique for handling arithmetic, commonsense, and symbolic reasoning tasks, noting that this capability emerges in sufficiently large language models.
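The snippet below sketches a few-shot chain-of-thought prompt in the style popularized by Wei et al. and referenced by the guide: a single worked example spells out the intermediate reasoning steps, and the model is expected to imitate that pattern for the new question. The question text is a standard illustrative example, not content from our discussion.

```python
# Few-shot chain-of-thought sketch: one exemplar with explicit reasoning steps,
# followed by the new question. The prompt would be sent with the same kind of
# API call shown earlier.
cot_prompt = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. 5 + 6 = 11.
The answer is 11.

Q: The cafeteria had 23 apples. They used 20 to make lunch and bought 6 more.
How many apples do they have?
A:"""

print(cot_prompt)
```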