The Agent API is a multi-provider, interoperable API specification for building LLM applications. Access models from multiple providers with integrated real-time web search, tool configuration, reasoning control, and token budgets—all through one unified interface.
Convenience Property: Both the Python and TypeScript SDKs provide an output_text property that aggregates all text content from a response's outputs. Instead of iterating through response.output, use response.output_text for cleaner code.
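A minimal sketch of the difference, assuming the output items follow a Responses-style shape (message items whose content parts expose a text attribute); the exact item structure may vary:

from perplexity import Perplexity

client = Perplexity()

response = client.responses.create(
    model="openai/gpt-5.2",
    input="Summarize today's top AI news.",
)

# Manual aggregation (assumes Responses-style output items:
# message items whose content parts expose a .text attribute)
manual_text = "".join(
    part.text
    for item in response.output
    if getattr(item, "type", None) == "message"
    for part in item.content
    if hasattr(part, "text")
)

# Convenience property: same result with less code
print(response.output_text)

Both approaches yield the same concatenated text; output_text simply saves the traversal.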
Use third-party models from OpenAI, Anthropic, Google, xAI, and other providers for specific capabilities:
from perplexity import Perplexity

client = Perplexity()

# Using a third-party model
response = client.responses.create(
    model="openai/gpt-5.2",
    input="What are the latest developments in AI?",
    tools=[{"type": "web_search"}],
    instructions="You have access to a web_search tool. Use it for questions about current events, news, or recent developments. Use 1 query for simple questions. Keep queries brief: 2-5 words. NEVER ask permission to search - just search when appropriate",
)

print(f"Response ID: {response.id}")
print(response.output_text)
Enable web search capabilities using the web_search tool:
from perplexity import Perplexity

client = Perplexity()

response = client.responses.create(
    model="openai/gpt-5.2",
    input="What's the weather in San Francisco?",
    tools=[{"type": "web_search"}],
    instructions="You have access to a web_search tool. Use it when you need current information.",
)

if response.status == "completed":
    print(response.output_text)
Presets provide optimized defaults for specific use cases. Start with a preset for quick setup:
from perplexity import Perplexity

client = Perplexity()

# Using a preset (e.g., pro-search)
response = client.responses.create(
    preset="pro-search",
    input="What are the latest developments in AI?",
)

print(f"Response ID: {response.id}")
print(response.output_text)
For a minimal request without a preset, specify a model and input directly:

from perplexity import Perplexity

client = Perplexity()

response = client.responses.create(
    model="openai/gpt-5.2",
    input="What are the latest AI developments?",
)
The instructions parameter provides system instructions or guidelines for the model. This is particularly useful for:
Tool usage instructions: Guide the model on when and how to use available tools
Response style guidelines: Control the tone and format of responses
Behavior constraints: Set boundaries and constraints for model behavior
Example with tool instructions:
response = client.responses.create(
    model="openai/gpt-5.2",
    input="What are the latest developments in AI?",
    instructions="You have access to a web_search tool. Use it for questions about current events, news, or recent developments. Use 1 query for simple questions. Keep queries brief: 2-5 words. NEVER ask permission to search - just search when appropriate",
    tools=[{"type": "web_search"}],
)
Control the reasoning effort level for reasoning models:
low: Minimal reasoning effort
medium: Moderate reasoning effort
high: Maximum reasoning effort
The reasoning parameter is only supported by models with reasoning capabilities. Models without reasoning support will ignore this parameter.
response = client.responses.create(
    model="openai/gpt-5.2",
    input="Solve this complex problem step by step",
    reasoning={
        "effort": "high"  # Use maximum reasoning
    },
)