# Academic Research Finder CLI

A command-line tool that uses Perplexity’s Sonar API to find and summarize academic literature (research papers, articles, etc.) related to a given question or topic.

## Features
- Takes a natural language question or topic as input, ideally suited for academic inquiry.
- Leverages Perplexity Sonar API, guided by a specialized prompt to prioritize scholarly sources (e.g., journals, conference proceedings, academic databases).
- Outputs a concise summary based on the findings from academic literature.
- Lists the primary academic sources used, aiming to include details like authors, year, title, publication, and DOI/link when possible.
- Supports different Perplexity models (defaults to `sonar-pro`).
- Allows results to be output in JSON format.
## Installation

1. **Install required dependencies.** Ensure you are using the Python environment you intend to run the script with (e.g., `python3.10` if that’s your target).
2. **Make the script executable (optional).** Otherwise, invoke it with `python3 research_finder.py ...`.
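Step 1 above might look like the following. Note that `requests` is an assumption about the script’s third-party dependency; check the repository for a `requirements.txt`, which would be authoritative:

```shell
# Install into the same interpreter you will run the script with.
# `requests` is assumed here; verify the actual dependency list
# against the repository before relying on this.
python3 -m pip install requests
```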
API Key Setup
The tool requires a Perplexity API key (PPLX_API_KEY
) to function. You can provide it in one of these ways (checked in this order):
- As a command-line argument:
- As an environment variable:
- In a file: Create a file named
pplx_api_key
,.pplx_api_key
,PPLX_API_KEY
, or.PPLX_API_KEY
in the same directory as the script or in the current working directory containing just your API key.
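For example, the environment-variable and key-file methods can be set up like this (the key value is a placeholder, not a real key):

```shell
# Environment variable: picked up when no -k/--api-key flag is given.
export PPLX_API_KEY="pplx-your-key-here"   # placeholder value

# Key file: any of the four accepted filenames works; the file must
# contain only the key itself, with no extra whitespace or quotes.
printf '%s' "pplx-your-key-here" > .pplx_api_key
```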
## Usage

Run the script from the `sonar-use-cases/research_finder` directory or provide the full path.
### Arguments

- `query`: (Required) The research question or topic (enclose in quotes if it contains spaces).
- `-m`, `--model`: Specify the Perplexity model (default: `sonar-pro`).
- `-k`, `--api-key`: Provide the API key directly.
- `-p`, `--prompt-file`: Path to a custom system prompt file.
- `-j`, `--json`: Output the results in JSON format.
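Putting the arguments together, a few illustrative invocations (the queries are placeholders, and actual output will vary):

```shell
# Basic query using the default model (sonar-pro):
python3 research_finder.py "What are recent advances in mRNA vaccine delivery?"

# Select a different model and emit JSON instead of human-readable text:
python3 research_finder.py "graph neural networks for drug discovery" --model sonar --json
```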
## Example Output

(Human-readable format shown; actual output depends heavily on the query and API results.)
## Limitations
- The ability of the Sonar API to consistently prioritize and access specific academic databases or extract detailed citation information (like DOIs) may vary. The quality depends on the API’s search capabilities and the structure of the source websites.
- The script performs basic parsing to separate summary and sources; complex or unusual API responses might not be parsed perfectly. Check the raw response in case of issues.
- Queries that are too broad or not well-suited for academic search might yield less relevant results.
- Error handling for API rate limits or specific API errors could be more granular.