The shared prompting best practices live in the Agent API Prompt Guide and apply to Sonar without modification: be specific, cap result counts, don't ask for URLs in prose, avoid few-shot content, and prefer parameters over prose for filters. This page covers the one structural difference that changes how Sonar is prompted: the system prompt does not influence search.
For new applications, we recommend the Agent API. The agent loop, custom tools, and richer prompt control make it the better default.
Shape Search Through the User Message
Sonar runs a web search before generating its answer, and only the user message drives that search. The system prompt is not visible to search; it reaches the model only at answer time, when results are already in hand. Use the system prompt for tone, style, and grounding rules, but treat the user message as both the question for the model and the seed for the search.

The practical consequence: phrasing in the user message directly affects which sources show up. A specific, descriptive question produces better results than a vague one, and a polished system prompt cannot rescue a vague user message. If retrieval quality matters, invest there first.

Good example: "What guidance has the FDA issued on AI in medical devices in the past year, and which device categories does it cover?"

Poor example: "Tell me about FDA AI rules."

This is the sharpest contrast with the Agent API, where instructions are re-read on every turn of the agent loop and shape both tool calls and the final answer. In Sonar, instructions has no equivalent: system messages only influence generation, never retrieval.
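A minimal sketch of how this division of labor looks in a request, assuming the OpenAI-compatible chat completions endpoint and the `sonar` model name from the API reference (adjust both to your setup):

```python
import os
import requests

# Sketch of a Sonar request; endpoint and model name assumed from the API reference.
resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['PPLX_API_KEY']}"},
    json={
        "model": "sonar",
        "messages": [
            # System prompt: tone, style, grounding rules. Search never sees this.
            {
                "role": "system",
                "content": "Answer concisely and cite only claims the search results support.",
            },
            # User message: both the question and the search seed, so keep it specific.
            {
                "role": "user",
                "content": (
                    "What guidance has the FDA issued on AI in medical devices "
                    "in the past year, and which device categories does it cover?"
                ),
            },
        ],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```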
Reduce Hallucinations
LLMs are tuned to be helpful, which can occasionally lead them to provide an answer when search results are thin or off-target rather than flagging the gap. The system prompt does not shape the search step itself, but it does shape how the model uses the search results when writing the final response, which makes it the right place for grounding rules. A short addition covers most of these edge cases: give the model explicit permission to say it didn't find anything. With that out stated in the system prompt, the model is more likely to acknowledge insufficient results instead of leaning on training data to fill the gap.
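An illustrative system prompt along those lines; the wording is a sketch to adapt, not canonical copy:

```python
# Illustrative grounding rules; tune the wording to your product's voice.
SYSTEM_PROMPT = """You answer using only the provided search results.
If the results do not contain enough information to answer, say so plainly
instead of guessing or drawing on prior knowledge.
Do not state specifics (dates, figures, names) the results do not support."""
```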
What Carries Over from the Agent API Guide
The same core prompting rules apply with no changes:

- Be specific and descriptive in the user message. Vague queries produce scattered results.
- Cap result counts. If a list is needed, say how long.
- Don't few-shot content. Pasting a written-out example answer can cause the search step to latch onto the example topic. Few-shotting structure is fine; for a guaranteed shape, use `response_format`.
- Don't ask for URLs in the response text. Sonar always returns sources in the top-level `citations` and `search_results` fields; read them from there (see the sketch after this list).
- Use parameters, not prose, for filters. The search backend reads parameters; it does not read the system prompt.
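A sketch of the last two rules together, assuming the `search_domain_filter` and `search_recency_filter` parameters and the response fields named above (check the API reference for current names and allowed values):

```python
import os
import requests

resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['PPLX_API_KEY']}"},
    json={
        "model": "sonar",
        "messages": [
            {"role": "user", "content": "What has the FDA said about AI in medical devices this year?"},
        ],
        # Filters go in parameters, never in the prompt text.
        "search_domain_filter": ["fda.gov"],
        "search_recency_filter": "year",
    },
    timeout=60,
)
data = resp.json()

# Sources come from the top-level fields, not from the answer prose.
for result in data.get("search_results", []):
    print(result.get("title"), "->", result.get("url"))
print(data.get("citations", []))
```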
Next Steps
- Agent API Prompt Guide: The full prompting guide. Most rules apply to Sonar as well.
- Search Filters: Domain, recency, and date filters for narrowing Sonar search results.
- Pro Search: Multi-step search and reasoning when a single-shot search is not enough.
- Agent API Quickstart: Recommended for new applications. Multi-turn loop and custom tools.