A deep dive into the strategies I learned for harnessing the power of Large Language Models
Last month, I had the incredible honor of winning Singapore’s first ever GPT-4 Prompt Engineering competition, organised by the Government Technology Agency of Singapore (GovTech), which brought together over 400 prompt-ly brilliant participants.
Prompt engineering is a discipline that blends both art and science: it demands as much creativity and strategic thinking as technical understanding. This article is a compilation of the prompt engineering strategies I learned along the way, which can push any LLM to do exactly what you need and more!
This article covers the following, with 🟢 referring to beginner-friendly prompting techniques and 🟠 referring to advanced prompting strategies:
1. [🟢] Structuring prompts using the CO-STAR framework
2. [🟢] Sectioning prompts using delimiters
3. [🟠] Creating system prompts with LLM guardrails
4. [🟠] Analyzing datasets using only LLMs, without plugins or code —
With a hands-on example of analyzing a real-world Kaggle dataset using GPT-4
Effective prompt structuring is crucial for eliciting optimal responses from an LLM. The CO-STAR framework, a brainchild of GovTech Singapore’s Data Science & AI team, is a handy template for structuring prompts. It considers all the key aspects that influence the effectiveness and relevance of an LLM’s response, resulting in more optimal responses.
Here’s how it works:
(C) Context: Provide background information on the task
This helps the LLM understand the precise scenario being discussed, ensuring its response is relevant.
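To make the template concrete, here is a minimal sketch of how a CO-STAR prompt could be assembled and sent to GPT-4, assuming the official OpenAI Python SDK. `build_costar_prompt` is a hypothetical helper I've introduced for illustration, and only the Context component has been covered so far; the remaining components appear as placeholder arguments and are each explained below.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK (openai>=1.0)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def build_costar_prompt(context: str, objective: str, style: str,
                        tone: str, audience: str, response: str) -> str:
    """Hypothetical helper: lay out the six CO-STAR components as
    clearly labelled sections of a single prompt string."""
    return (
        f"# CONTEXT #\n{context}\n\n"
        f"# OBJECTIVE #\n{objective}\n\n"
        f"# STYLE #\n{style}\n\n"
        f"# TONE #\n{tone}\n\n"
        f"# AUDIENCE #\n{audience}\n\n"
        f"# RESPONSE #\n{response}"
    )


prompt = build_costar_prompt(
    context="I run a small online store selling an ultra-fast hairdryer.",  # (C) background on the task
    objective="Write a social media post promoting the product.",  # (O) placeholder, explained below
    style="Punchy marketing copy.",  # (S) placeholder, explained below
    tone="Enthusiastic.",  # (T) placeholder, explained below
    audience="Busy professionals scrolling their feed.",  # (A) placeholder, explained below
    response="A single paragraph of under 50 words.",  # (R) placeholder, explained below
)

reply = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model works here
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```

The labelled headers simply make each component easy for the model to pick out; the exact header text matters less than keeping the sections clearly separated.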