How to Prompt for Different AI Models
Different AI models require different prompting techniques. Prompting is not one-size-fits-all.
With the recent releases of Grok 3 and Claude 3.7 Sonnet, we have been ushered into the age of hybrid models. Hybrid models combine the capabilities of non-reasoning and reasoning models, operating in different modes depending on the task: fast responses for simpler queries and deeper, step-by-step thinking for complex ones.
Building on the Prompting 101 and Prompting Reasoning Models guides we have already released, we thought it would be useful to lay out which prompting styles best fit each of three model types: non-reasoning, reasoning, and hybrid. To make this easier, we created a table listing the most popular AI models in each category:
| Reasoning Models | Non-Reasoning Models | Hybrid Models |
|---|---|---|
| OpenAI: o1 | Google: Gemini 2.0 Flash | OpenAI: GPT-4o |
| OpenAI: o3 | xAI: Grok | Google: Gemini 2.0 Pro |
| Google: Gemini Flash Thinking | Anthropic: Claude 3 Haiku | Anthropic: Claude 3.5 Sonnet |
| DeepSeek: DeepSeek-R1 | OpenAI: GPT-3.5 Turbo | Anthropic: Claude 3.7 Sonnet |
| OpenAI: o3-mini | Google: PaLM 2 | xAI: Grok 3 |
Here's a comprehensive table combining the best prompting principles from both guides, organized for non-reasoning, reasoning, and hybrid models; a short code sketch after the table shows a few of these principles in practice:
| Principle | Non-Reasoning Models | Reasoning Models | Hybrid Models |
|---|---|---|---|
| Clarity and Specificity | Be clear and specific, leave little ambiguity | Provide high-level guidance, trust the model to work out details | Be clear but allow room for model's inference |
| Role Assignment | Give the AI a specific role or persona | Assign a role, but allow for more autonomy | Blend multiple personas for holistic answers |
| Context Setting | Provide detailed context and background | Give essential context, allow model to fill gaps | Provide context with room for model expansion |
| Tone Control | Explicitly state desired tone and style | Allow model to adapt tone based on context | Suggest tone, but allow for appropriate adjustments |
| Format Specification | Clearly define output format | Suggest format, but allow flexibility | Specify format with option for model improvement |
| Chain-of-Thought | Use detailed CoT prompts | Avoid CoT prompts, let model reason independently | Use minimal CoT guidance if needed |
| Semantic Anchoring | Use precise context markers and delimiters | Use broader context markers, allow for interpretation | Balance specific anchors with open-ended prompts |
| Constraint Engineering | Set clear boundaries and limitations | Provide general guidelines, allow for creative solutions | Set flexible constraints, allow model to optimize |
| Source Limiting | Specify exact sources or types of information | Suggest source types, allow model to select | Provide source guidelines with room for model discretion |
| Temporal Filters | Specify exact time frames for information | Suggest relevant time periods, allow model to adjust | Set broad temporal context, let model refine as needed |
| Uncertainty Calibration | Ask model to rate confidence in responses | Allow model to express uncertainty naturally | Encourage transparency in confidence levels |
| Perspective Calibration | Request specific viewpoints | Allow model to consider multiple perspectives | Suggest diverse viewpoints, let model synthesize |
| XML/JSON Structuring | Use structured formats for clear instructions | Use minimal structuring, allow for natural language | Use light structuring with flexibility for model interpretation |
| Iterative Refinement | Guide model through step-by-step refinement | Allow model to self-refine and iterate | Suggest refinement steps, but allow model to optimize process |
| Ethical Considerations | Explicitly state ethical guidelines | Trust model's ethical training, provide general guidance | Highlight key ethical concerns, allow model to expand |
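To make a few of these principles concrete, here is a minimal sketch of an "explicit" prompt for a non-reasoning model, combining role assignment, semantic anchoring with XML-style delimiters, chain-of-thought guidance, and a fixed output format. It uses the OpenAI Python SDK for illustration; the model name, the scenario, and the prompt wording are our own assumptions, so adapt them to whatever model and task you are working with.

```python
# A minimal sketch of an "explicit" prompt for a non-reasoning model.
# Assumptions: the OpenAI Python SDK is installed, OPENAI_API_KEY is set,
# and "gpt-4o-mini" stands in for whichever non-reasoning model you use.
from openai import OpenAI

client = OpenAI()

# Role assignment goes in the system message.
system_message = "You are a senior financial analyst who writes concise, factual summaries."

# Semantic anchoring: XML-style delimiters separate context, task, and format.
# Chain-of-thought: the task spells out the exact reasoning steps to follow.
user_message = """<context>
Q3 revenue grew 12% year over year, but monthly churn rose from 2% to 3.5%.
</context>

<task>
Explain whether the churn increase threatens next year's revenue target.
Think step by step: (1) estimate the revenue at risk from churn,
(2) compare it with the growth trend, (3) state a conclusion.
</task>

<format>
Return exactly three bullet points, one per step above.
</format>"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder non-reasoning model
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_message},
    ],
    temperature=0.2,  # low temperature keeps the answer close to the instructions
)
print(response.choices[0].message.content)
```

Because the format is pinned down, the output should come back as exactly three bullet points, which makes it easy to check or post-process.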
Overall, the table gives a comprehensive overview of prompting principles and shows how they apply differently across non-reasoning, reasoning, and hybrid models. It highlights the shift from explicit instructions for non-reasoning models to more open-ended, goal-oriented prompts for reasoning and hybrid models, which grants the model greater autonomy and leverages its more advanced capabilities. Please check our other guides for a thorough explanation of each prompting principle.
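By contrast, here is a minimal sketch of that goal-oriented style for a reasoning or hybrid model: state the objective and the constraints, skip the step-by-step scaffolding, and let the model plan its own approach. It uses the Anthropic Python SDK with extended thinking enabled as an illustration; the model ID and token budgets are assumptions, so check your provider's documentation for current values.

```python
# A minimal sketch of a goal-oriented prompt for a reasoning/hybrid model.
# Assumptions: the Anthropic Python SDK is installed, ANTHROPIC_API_KEY is set,
# and the model ID below is a placeholder for the hybrid model you actually use.
import anthropic

client = anthropic.Anthropic()

# High-level goal only: no role scaffolding, no prescribed reasoning steps.
goal_prompt = (
    "Assess whether a rise in monthly churn from 2% to 3.5% threatens next "
    "year's revenue target, given 12% year-over-year revenue growth in Q3. "
    "Be explicit about any assumptions and how confident you are in them."
)

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # assumed hybrid-model ID
    max_tokens=2048,
    thinking={"type": "enabled", "budget_tokens": 1024},  # extended thinking mode
    messages=[{"role": "user", "content": goal_prompt}],
)

# With thinking enabled, the response mixes thinking blocks and text blocks;
# print only the final answer.
for block in response.content:
    if block.type == "text":
        print(block.text)
```

Note how the prompt leaves the reasoning strategy entirely to the model and only asks it to surface its assumptions and confidence, in line with the uncertainty-calibration row of the table.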
Don't miss out on the cutting-edge insights that will shape the future of human-AI collaboration. Subscribe now to humAIn+ and be among the first to receive our daily newsletter, launching as soon as we reach 100+ subscribers. By pre-subscribing today, you'll secure your spot and have the opportunity to shape our future pieces, so you get the most relevant and valuable information for your interests in the rapidly evolving world of AI and workforce productivity.