“Prompt engineering is the practice of designing, refining, and optimizing the instructions (prompts) given to generative AI models to guide them into producing accurate, relevant, and desired outputs.” – Prompt engineering
Prompt engineering is the practice of designing, refining, and optimising instructions—known as prompts—given to generative AI models, particularly large language models (LLMs), to elicit accurate, relevant, and desired outputs.1,2,3,7
This process involves creativity, trial and error, and iterative refinement of phrasing, context, formats, words, and symbols to guide AI behaviour effectively, making applications more efficient, flexible, and capable of handling complex tasks.1,4,5 Without precise prompts, generative AI often produces generic or suboptimal responses, as models lack fixed commands and rely heavily on input structure to interpret intent.3,6
Key Benefits
- Improved user experience: Users receive coherent, bias-mitigated responses even with minimal input, such as tailored summaries for legal documents versus news articles.1
- Increased flexibility: Domain-neutral prompts enable reuse across processes, like identifying inefficiencies in business units without context-specific data.1
- Subject matter expertise: Prompts direct AI to reference correct sources, e.g., generating medical differential diagnoses from symptoms.1
- Enhanced security: Helps mitigate prompt injection attacks by refining logic in services like chatbots.2
Core Techniques
- Generated knowledge prompting: AI first generates relevant facts (e.g., deforestation effects like climate change and biodiversity loss) before completing tasks like essay writing.1
- Contextual refinement: Adding role-playing (e.g., “You are a sales assistant”), location, or specifics to vague queries like “Where to purchase a shirt.”1,5
- Iterative testing: Trial-and-error to optimise for accuracy, often encapsulated in base prompts for scalable apps.2,5
Prompt engineering bridges end-user inputs with models, acting as a skill for developers and a step in AI workflows, applicable in fields like healthcare, cybersecurity, and customer service.2,5
Best Related Strategy Theorist: Lilian Weng
Lilian Weng, Director of Applied AI Safety at OpenAI, stands out as the premier theorist linking prompt engineering to strategic AI deployment. Her seminal 2023 blog post, “Prompt Engineering Guide”, systematised techniques like chain-of-thought prompting, few-shot learning, and self-consistency, providing a foundational framework that influenced industry practices and tools from AWS to Google Cloud.1,4
Weng’s relationship to the term stems from her role in advancing reliable LLM interactions post-ChatGPT’s 2022 launch. At OpenAI, she pioneered safety-aligned prompting strategies, addressing hallucinations and biases—core challenges in generative AI—making her work indispensable for enterprise-scale optimisation.1,2 Her guide emphasises strategic structuring (e.g., role assignment, step-by-step reasoning) as a “roadmap” for desired outputs, directly shaping modern definitions and techniques like generated knowledge prompting.1,4
Biography: Born in China, Weng earned a PhD in Machine Learning from McGill University (2015), focusing on computational neuroscience and reinforcement learning. She joined OpenAI in 2018 as a research scientist, rising to lead long-term safety efforts amid rapid AI scaling. Previously at Microsoft Research (2016–2018), she specialised in hierarchical RL for robotics. Weng’s contributions extend to publications on emergent abilities in LLMs and AI alignment, with her GitHub repository on prompting garnering millions of views. As of 2026, she continues shaping ethical AI strategies, blending theoretical rigour with practical engineering.7
References
1. https://aws.amazon.com/what-is/prompt-engineering/
2. https://www.coursera.org/articles/what-is-prompt-engineering
3. https://uit.stanford.edu/service/techtraining/ai-demystified/prompt-engineering
4. https://cloud.google.com/discover/what-is-prompt-engineering
5. https://www.oracle.com/artificial-intelligence/prompt-engineering/
6. https://genai.byu.edu/prompt-engineering
7. https://en.wikipedia.org/wiki/Prompt_engineering
8. https://www.ibm.com/think/topics/prompt-engineering
9. https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-prompt-engineering
10. https://github.com/resources/articles/what-is-prompt-engineering

