Generate AI responses with web-grounded knowledge using either the Python or TypeScript SDKs. Both SDKs provide full support for chat completions, streaming responses, async operations, and comprehensive error handling.
```python
from perplexity import Perplexity

client = Perplexity()

completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Tell me about the latest developments in AI",
        }
    ],
    model="sonar",
)

print(f"Response: {completion.choices[0].message.content}")
```
Example Output
```
Response: Based on the latest information, here are some key developments in AI for 2024:

**Large Language Models & Foundation Models:**
- GPT-4 and its variants continue to improve with better reasoning capabilities
- Open-source models like Llama 2 and Code Llama have gained significant traction
- Specialized models for coding, math, and scientific tasks have emerged

**Multimodal AI:**
- Vision-language models can now process images, text, and audio simultaneously
- Real-time image generation and editing capabilities have improved dramatically

**AI Safety & Alignment:**
- Constitutional AI and RLHF techniques are becoming standard practice
- Increased focus on AI governance and regulatory frameworks

...

Request ID: req_123abc456def789
```
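Transient network or rate-limit failures can interrupt a request. A minimal retry sketch with exponential backoff; the `create_with_retries` helper is our own illustration, not part of the SDK, and it catches `Exception` broadly for simplicity (in practice you would catch the SDK's specific error types):

```python
import time

def create_with_retries(create_fn, max_attempts=3, base_delay=1.0):
    """Call create_fn, retrying with exponential backoff on failure.

    create_fn is any zero-argument callable, e.g. a lambda wrapping
    client.chat.completions.create(...).
    """
    for attempt in range(max_attempts):
        try:
            return create_fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of retries; surface the last error
            time.sleep(base_delay * (2 ** attempt))

# Usage sketch (assumes `client` from the example above):
# completion = create_with_retries(
#     lambda: client.chat.completions.create(
#         messages=[{"role": "user", "content": "Hello"}],
#         model="sonar",
#     )
# )
```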
Choose from different Sonar models based on your needs:
Python SDK
TypeScript SDK
```python
# Standard Sonar model for general queries
completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "What is quantum computing?"}],
    model="sonar",
)

# Sonar Pro for more complex queries
completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Analyze the economic implications of renewable energy adoption"}],
    model="sonar-pro",
)

# Sonar Reasoning for complex analytical tasks
completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Solve this complex mathematical problem step by step"}],
    model="sonar-reasoning",
)
```
```python
messages = [
    {"role": "system", "content": "You are a helpful research assistant."},
    {"role": "user", "content": "What are the main causes of climate change?"},
    {"role": "assistant", "content": "The main causes of climate change include..."},
    {"role": "user", "content": "What are some potential solutions?"},
]

completion = client.chat.completions.create(
    messages=messages,
    model="sonar",
)
```
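For an ongoing conversation, each new turn must be appended to the history before the next request. One way to keep that bookkeeping in one place; the `ask` helper below is our own sketch, not part of the SDK:

```python
def ask(client, messages, user_content, model="sonar"):
    """Send the history plus a new user turn.

    Returns (reply_text, updated_history), where updated_history includes
    both the new user turn and the assistant's reply.
    """
    history = messages + [{"role": "user", "content": user_content}]
    completion = client.chat.completions.create(messages=history, model=model)
    reply = completion.choices[0].message.content
    return reply, history + [{"role": "assistant", "content": reply}]

# Usage sketch (assumes `client` from the earlier examples):
# reply, messages = ask(client, messages, "What are some potential solutions?")
```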
Get real-time response streaming for better user experience:
Python SDK
TypeScript SDK
```python
stream = client.chat.completions.create(
    messages=[
        {"role": "user", "content": "Write a summary of recent AI breakthroughs"}
    ],
    model="sonar",
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
For comprehensive streaming documentation including metadata collection, error handling, advanced patterns, and raw HTTP examples, see the Streaming Guide.
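If you also need the complete response text after streaming (for logging or caching), the deltas can be accumulated as they arrive. A minimal sketch; it assumes each chunk follows the `choices[0].delta.content` shape shown above:

```python
def collect_stream(stream):
    """Print each streamed delta as it arrives and return the full response text."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="")
            parts.append(delta)
    return "".join(parts)

# Usage sketch (assumes `stream` from the example above):
# full_text = collect_stream(stream)
```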
```python
system_prompt = """You are an expert research assistant specializing in technology and science.
Always provide well-sourced, accurate information and cite your sources.
Format your responses with clear headings and bullet points when appropriate."""

completion = client.chat.completions.create(
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Explain quantum computing applications"},
    ],
    model="sonar-pro",
)
```
Choose the right model for your use case: sonar for general queries, sonar-pro for complex analysis, sonar-reasoning for analytical tasks.
Python SDK
TypeScript SDK
```python
# For quick factual queries
simple_query = client.chat.completions.create(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    model="sonar",
)

# For complex analysis
complex_query = client.chat.completions.create(
    messages=[{"role": "user", "content": "Analyze the economic impact of AI on employment"}],
    model="sonar-pro",
)
```
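If your application routes many kinds of queries, the guidance above can be encoded in a small dispatch helper. The keyword heuristic here is purely illustrative (our own, not part of the API); tune it to your traffic:

```python
def pick_model(query):
    """Pick a Sonar model from rough query cues (illustrative heuristic only)."""
    lowered = query.lower()
    if any(cue in lowered for cue in ("step by step", "prove", "solve")):
        return "sonar-reasoning"  # analytical, multi-step tasks
    if any(cue in lowered for cue in ("analyze", "compare", "implications", "impact")):
        return "sonar-pro"  # complex analysis
    return "sonar"  # general queries

# Usage sketch (assumes `client` from the earlier examples):
# completion = client.chat.completions.create(
#     messages=[{"role": "user", "content": query}],
#     model=pick_model(query),
# )
```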
2. Implement streaming for long responses
Use streaming for better user experience with lengthy responses.