
New prompting rules for the use of reasoning models (deep research)


Attention! The podcast was created entirely by my AI assistant based on my article – no liability is accepted for incorrect content.


A quick recap

What does deep research mean?

Deep research refers to the ability of AI chatbots, such as ChatGPT, Perplexity or Gemini, to understand complex search queries, break them down into several research tasks, search the Internet independently and consolidate the results in a structured manner. Instead of just listing links to websites – as we are used to from Google – the systems analyze the information, synthesize it and provide the user with a comprehensive answer in the form of a clearly written report.



How can I test and use deep research methods?


The easiest way to test and use the new deep research functions is through the established AI providers. However, because deep research requires significantly more computing capacity, the function is currently only available in paid accounts.


How does deep research work?


The deep research method always involves four steps, although the providers differ slightly in the details (as of February 2025); the sketch after the list illustrates the flow in code.

  1. Planning: The AI processes the search task and independently plans the search process and search queries.
  2. Information search: The AI searches through numerous sources such as articles, reports and studies and filters out irrelevant information. OpenAI uses web browsing functions, while Gemini relies on Google services.
  3. Analysis: The AI then “reads” all the collected texts, extracts important facts, compares sources and recognizes contradictions.
  4. Structuring and preparation: Finally, the findings are presented in a clearly structured report, usually with an introduction, main section and conclusion. Important points (e.g. pros/cons) are highlighted, and references at the end ensure transparency.
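
To make the four steps more tangible, here is a minimal, purely illustrative Python sketch of such a pipeline. All function names and stub bodies are hypothetical placeholders and do not correspond to any provider's actual implementation.

# Hypothetical sketch of the four deep-research steps as a simple pipeline.
# All names and stub bodies are illustrative placeholders, not a real provider API.

def plan_queries(task):
    # Step 1 – planning: break the research task into concrete search queries.
    return [f"{task}: overview", f"{task}: recent studies", f"{task}: open questions"]

def search_sources(queries):
    # Step 2 – information search: fetch candidate documents per query (stubbed here).
    return [f"document found for '{q}'" for q in queries]

def analyze_sources(documents):
    # Step 3 – analysis: extract key facts, compare sources, flag contradictions (stubbed).
    return [f"key fact extracted from {d}" for d in documents]

def write_report(task, findings):
    # Step 4 – structuring: assemble a report with introduction, main section and conclusion.
    bullet_points = "\n".join(f"- {f}" for f in findings)
    return f"Report: {task}\n\nIntroduction\n...\n\nFindings\n{bullet_points}\n\nConclusion\n..."

task = "effects of air pollution on health"
report = write_report(task, analyze_sources(search_sources(plan_queries(task))))
print(report)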

Important: Even with source-based research, language models can hallucinate. As always, AI should be used as support and not as a blind substitute; checking the results is essential, especially for critical facts. More on this in my last article on how deep research works.

Watch out: These new prompting rules apply to reasoning models

1. Keep prompts simple and direct

These models work best with short, clear instructions.
Prompts that are too complex can cause confusion or impair performance.

A good example:
Prompt:

“Summarize the most important findings of the article on climate change in three key points.”

Bad example:
Prompt:

“Could you please provide a detailed, step-by-step analysis of the article and then turn it into a structured, logically coherent summary with precise reasoning?”


2. Avoid chain-of-thought (CoT) prompting

In contrast to general LLMs, reasoning models already carry out internal logical analyses.
An explicit instruction such as “Think step by step” does not improve accuracy and may even worsen performance.

Good example (direct question):
Prompt:

“What is the probability of rolling two sixes with two dice?”

Bad example (unnecessary CoT prompting):
Prompt:

“First explain the probability of rolling a six, then the probability of rolling another six and multiply these probabilities at the end.”
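
For reference, the direct question already has a short, unambiguous answer that a reasoning model should reach on its own: each die shows a six with probability 1/6, so two independent dice show two sixes with probability 1/6 × 1/6 = 1/36, i.e. roughly 2.8 %.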

Important note:
Only use explicit thought processes for non-reasoning-capable models or if the model provides an incorrect logical structure.


3. Use separators for more clarity

Markdown, XML tags or section titles help the model to better distinguish between different prompt components.
Particularly useful for structured outputs such as JSON, tables or code snippets.

A good example:
Prompt:

Extract the most important details from this contract in the following structured format:
{
  "Parties": "Names of the parties involved",
  "Effective date": "Start date of the contract",
  "Obligations": "Key contractual obligations",
  "Termination clause": "Conditions for terminating the contract"
}

Bad example:
Prompt:

“Summarize the contract and give all the important details in a structured way.” (Too vague and possibly not structured enough.)
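
As an illustration of how such a separated, structured prompt could be sent programmatically, here is a minimal sketch assuming the official OpenAI Python SDK. The model name, the placeholder contract text and the <contract> tags are assumptions for this example, not prescribed values.

import json
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

contract_text = "..."  # placeholder: paste the contract to be analyzed here

prompt = f"""Extract the most important details from this contract in the following structured format:
{{
  "Parties": "Names of the parties involved",
  "Effective date": "Start date of the contract",
  "Obligations": "Key contractual obligations",
  "Termination clause": "Conditions for terminating the contract"
}}

<contract>
{contract_text}
</contract>"""

response = client.chat.completions.create(
    model="o1",  # assumption: any reasoning-capable model available in your account
    messages=[{"role": "user", "content": prompt}],
)

details = json.loads(response.choices[0].message.content)  # may fail if the model adds extra text
print(details["Termination clause"])

Here the JSON skeleton and the XML-style <contract> tags act as the separators described above, so the model can clearly tell the instruction, the desired output format and the source text apart.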


4. Try zero-shot first, then few-shot if necessary

Reasoning models often work well without examples.
Few-shot learning should only be used if the output needs to be improved.

Good example (zero shot):
Prompt:

“Put the following sentence into the passive voice:
‘The committee has approved the new directive’.”

Bad example (unnecessary few-shot prompting):
Prompt:

“Put the sentence into the passive voice. Example 1:
Active: ‘She baked a cake.’
Passive: ‘A cake was baked by her.’
Now apply this to: ‘The committee has approved the new policy.’”

Important note:
If zero-shot does not deliver the desired results, add well-chosen few-shot examples.
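
If you work via the API, the same escalation can be scripted: send the zero-shot prompt first and only add one or two well-chosen examples as prior messages when the answer falls short. This is a minimal sketch assuming the official OpenAI Python SDK; the model name "o1" is a placeholder for any reasoning-capable model.

from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()
MODEL = "o1"  # assumption: any reasoning-capable model

task = "Put the following sentence into the passive voice: 'The committee has approved the new directive.'"

# First attempt: zero-shot, no examples.
zero_shot = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": task}],
)
print(zero_shot.choices[0].message.content)

# Only if the zero-shot answer is unsatisfactory: add one well-chosen example as message history.
few_shot = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "user", "content": "Put the sentence into the passive voice: 'She baked a cake.'"},
        {"role": "assistant", "content": "A cake was baked by her."},
        {"role": "user", "content": task},
    ],
)
print(few_shot.choices[0].message.content)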


5. Define clear guidelines and restrictions

Reasoning models benefit from precise specifications.
Clear rules help to control length, format, scope or tone.

A good example:
Prompt:

“Suggest an inexpensive itinerary for New York City.
Budget: under $500
Duration: 3 days
Preferences: Includes sightseeing and dining recommendations.”

Bad example:
Prompt:

“Create an itinerary for New York City.” (Too general, may go over budget.)

Important note:

  • Define the desired output format (“Explain in less than 100 words”).
  • Use restrictions for clarification (“Include only vegan restaurants”).

6. Be very specific about the desired outcome

Clearly defined success criteria improve the quality of the response.
They also reduce the need to iterate until the expected result is achieved.

A good example:
Prompt:

“Explain the concept of supply and demand in under 50 words. Keep it simple and avoid jargon.”

Bad example:
Prompt:

“Describe supply and demand.” (Too open-ended; the answer can easily become too detailed.)

Important note:

  • Set length limits (“Explain in two sentences”).
  • Define the level of detail (“Avoid unnecessary details and only use everyday examples.”).

7. Make sure that Markdown formatting is used (if required)

Since o1-2024-12-17, reasoning models no longer output Markdown by default.
To force Markdown, add the string “Formatting re-enabled” to the prompt.

A good example:
Prompt:

Formatting re-enabled
Create a Markdown-formatted summary of quantum mechanics.

Bad example:
Prompt:

“Give me a Markdown answer about quantum mechanics.” (Could be returned as plain text.)

Important note:
For formatted answers, always insert a reference to the desired structure.
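
A minimal sketch of how this could look via the API, assuming the official OpenAI Python SDK; the model name is a placeholder for an o1-series snapshot from 2024-12-17 or later, and the string is simply prepended to the prompt as in the good example above.

from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()

response = client.chat.completions.create(
    model="o1",  # assumption: an o1-series snapshot from 2024-12-17 or later
    messages=[
        {
            "role": "user",
            # The first line re-enables Markdown output, as described above.
            "content": "Formatting re-enabled\nCreate a Markdown-formatted summary of quantum mechanics.",
        }
    ],
)
print(response.choices[0].message.content)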


Summary: Prompt rules for reasoning models

  • Keep prompts simple and direct – avoid unnecessary complexity.
  • Do NOT use chain-of-thought prompting – reasoning models already think logically.
  • Use separators (JSON, Markdown, XML) for clarity – helpful for structured output.
  • Zero-shot first, then few-shot if necessary – don’t overload the prompt with examples.
  • Define explicit restrictions – length limits, budget limits, desired format.
  • Force Markdown formatting if necessary – use “Formatting re-enabled”.


Frequently asked question: For which applications can I use Deep Research?

Although the deep research function sounds very tempting at first, it is not useful for every AI chatbot application. The method also requires significantly more computing power and energy, so it should only be used when it is actually needed.

Basically, deep research can help wherever many sources are searched for information and the results need to be put into a structured format.

  • Scientific investigation: Compilation of freely available studies on the topic of “Effects of air pollution on health”.
  • Topic research: Creation of a structured report on a topic, e.g. “Introduction to AI agents”.
  • Technology and innovation scouting: Identification of emerging technologies (e.g. quantum computing, mRNA technology) by analyzing news, blog posts and patents, provided the latter are publicly accessible.
  • Trend analysis: Identification of new nutritional or lifestyle trends (e.g. veganism, zero waste).
  • Product search and comparison: Search for suppliers in categories such as e-bikes, 3D printers or new smart home systems.
  • News overview: Compilation and consolidation of news on a topic.
  • Market and competition analyses: Market overview in the medical technology sector in Germany, strengths and weaknesses of the main competitors, key figures, product portfolio, …

Has this article given you food for thought, and do you have any further questions? Or are you looking for general support with the use of AI, ChatGPT, DeepSeek and chatbots?

I am always happy to receive your messages, preferably by WhatsApp message or e-mail.

