Unleash the Power of LLMs: A Unified Interface with Just-Prompt
Tired of juggling multiple APIs for different Large Language Models (LLMs)? Just-Prompt offers a streamlined solution, providing a single, unified interface to access the power of OpenAI, Anthropic, Google Gemini, Groq, DeepSeek, and Ollama. Simplify your workflow and unlock the potential of cross-platform LLM interactions.
Why Choose Just-Prompt for Your LLM Needs?
- Unified API: Interact with various LLMs using a single, consistent interface, saving you development time and effort. Say goodbye to managing individual provider SDKs!
- Multi-Model Support: Run prompts across multiple LLMs simultaneously, allowing for comparative analysis or ensemble approaches. Supercharge your ability to perform A/B testing across LLMs.
- Flexibility: Use text prompts directly or load them from files, adapting to your preferred workflow.
- Easy Integration: Simple installation and configuration using environment variables.
Powerful Tools to Streamline Your LLM Interactions
Just-Prompt provides several handy tools to manage all major LLM providers. Here’s a breakdown:
- `prompt`: Send a text prompt to one or more LLMs. Use this to quickly query different LLMs from your terminal.
  - Parameters:
    - `text`: The prompt itself.
    - `models_prefixed_by_provider` (optional): Specify the models to use (e.g., `openai:gpt-4o`). If omitted, Just-Prompt uses your defined default models.
- `prompt_from_file`: Load a prompt from a file and send it to multiple LLMs. Great for more sophisticated or repetitive workflows.
  - Parameters:
    - `file`: Path to the file containing the prompt.
    - `models_prefixed_by_provider` (optional): Specify the models.
- `prompt_from_file_to_file`: Read a prompt from a file, send it to LLMs, and save the responses to individual markdown files. This tool enables you to record the history of your prompts in organized files.
  - Parameters:
    - `file`: Path to the prompt file.
    - `models_prefixed_by_provider` (optional): Specify the models.
    - `output_dir` (optional, default: "."): The directory where response files will be saved.
- `ceo_and_board`: Simulate a board meeting by sending a prompt to multiple 'board member' LLMs, then have a 'CEO' model make a decision based on their responses. This tool empowers you to harness the diverse insights of multiple specialized AIs to tackle complex, high-level strategic decisions.
  - Parameters:
    - `file`: Path to the file containing the prompt describing the scenario.
    - `models_prefixed_by_provider` (optional): List of models acting as board members.
    - `output_dir` (optional, default: "."): Directory for saving responses and the CEO decision.
    - `ceo_model` (optional, default: `openai:o3`): The model to use as the CEO.
- `list_providers`: Get a list of all available LLM providers.
  - Parameters: None
- `list_models`: List all available models for a specific provider.
  - Parameters:
    - `provider`: The provider to query (e.g., `openai` or `o`).
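To make the `ceo_and_board` workflow concrete, here is a minimal Python sketch of the flow described above. The `ask` function is a hypothetical stand-in for a real provider call; the actual tool routes each prompt through the configured LLM backends and writes the results to files.

```python
# Hedged sketch of the ceo_and_board flow; `ask` is a placeholder, not a real API.
def ask(model: str, prompt: str) -> str:
    # A real implementation would call the provider behind `model` here.
    return f"[{model}] response to: {prompt.splitlines()[0]}"

def ceo_and_board(prompt: str, board_models: list[str],
                  ceo_model: str = "openai:o3") -> tuple[dict, str]:
    # Each board member answers the scenario independently.
    board = {m: ask(m, prompt) for m in board_models}
    # The CEO model sees all board responses and makes the final call.
    briefing = "\n\n".join(f"## {m}\n{r}" for m, r in board.items())
    decision = ask(ceo_model, f"Board responses:\n{briefing}\n\nMake a decision.")
    return board, decision
```

The key design point is the two-stage fan-out/fan-in: board responses are gathered first, then concatenated into a single briefing for the CEO model.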
Provider Prefixes: Your Key to Model Selection
Each model needs to be prefixed with its provider's name. Use the short name for quicker referencing:
- OpenAI: `o` or `openai` (e.g., `o:gpt-4o-mini`, `openai:gpt-4o-mini`)
- Anthropic: `a` or `anthropic` (e.g., `a:claude-3-5-haiku`, `anthropic:claude-3-5-haiku`)
- Google Gemini: `g` or `gemini` (e.g., `g:gemini-2.5-pro-exp-03-25`, `gemini:gemini-2.5-pro-exp-03-25`)
- Groq: `q` or `groq` (e.g., `q:llama-3.1-70b-versatile`, `groq:llama-3.1-70b-versatile`)
- DeepSeek: `d` or `deepseek` (e.g., `d:deepseek-coder`, `deepseek:deepseek-coder`)
- Ollama: `l` or `ollama` (e.g., `l:llama3.1`, `ollama:llama3.1`)
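The prefix convention above can be sketched as a small parser. The short-to-long mapping is taken straight from the list; `split_model` itself is illustrative, not the library's actual API.

```python
# Short provider prefixes from the README, expanded to full names.
SHORT_PREFIXES = {
    "o": "openai", "a": "anthropic", "g": "gemini",
    "q": "groq", "d": "deepseek", "l": "ollama",
}

def split_model(spec: str) -> tuple[str, str]:
    """Split 'provider:model' and expand a short prefix to its full name."""
    provider, model = spec.split(":", 1)
    return SHORT_PREFIXES.get(provider, provider), model
```

Splitting on only the first `:` keeps colons inside the model portion intact, which matters once reasoning suffixes like `:low` or `:4k` come into play.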
Getting Started with Just-Prompt: Installation & Configuration
- Clone the Repository.
- Install Dependencies.
- Configure API Keys:
  - Create a `.env` file (copy from `.env.sample`).
  - Add your API keys to the `.env` file or export them in your shell:

        OPENAI_API_KEY=your_openai_api_key_here
        ANTHROPIC_API_KEY=your_anthropic_api_key_here
        GEMINI_API_KEY=your_gemini_api_key_here
        GROQ_API_KEY=your_groq_api_key_here
        DEEPSEEK_API_KEY=your_deepseek_api_key_here
        OLLAMA_HOST=http://localhost:11434
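Before launching, it can help to verify the keys listed above are actually set. The helper below is a hypothetical sanity check, not part of the project; the key names come from the configuration above.

```python
# Required keys, per the README's .env listing (OLLAMA_HOST has a default instead).
REQUIRED_KEYS = [
    "OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GEMINI_API_KEY",
    "GROQ_API_KEY", "DEEPSEEK_API_KEY",
]

def missing_keys(env: dict) -> list[str]:
    """Return the required key names that are absent or empty in `env`."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]

def ollama_host(env: dict) -> str:
    """Ollama needs a reachable host, defaulting to the local daemon."""
    return env.get("OLLAMA_HOST", "http://localhost:11434")
```

Pass `os.environ` as the `env` argument to check your real shell environment.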
Advanced Features: Reasoning Effort, Thinking Tokens, and Thinking Budget
Just-Prompt allows you to fine-tune the reasoning process for certain models:
- OpenAI: Control the reasoning effort for o-series models (e.g., `o4-mini`, `o3`) using suffixes like `:low`, `:medium`, or `:high` (e.g., `openai:o4-mini:low`).
- Anthropic: Utilize thinking tokens with Claude models (e.g., `claude-3-7-sonnet-20250219`) using suffixes like `:1k`, `:4k`, or `:8000` to enable more thorough thought processes (e.g., `anthropic:claude-3-7-sonnet-20250219:4k`).
- Gemini: Leverage the thinking budget with Gemini models (e.g., `gemini-2.5-flash-preview-04-17`) using suffixes like `:1k`, `:4k`, or `:8000` to enable more thorough reasoning before providing a response (e.g., `gemini:gemini-2.5-flash-preview-04-17:4k`).
Unlock the Full Potential of Large Language Models Today!
Just-Prompt simplifies LLM interactions, empowering you to focus on innovation and insights. Start using Just-Prompt today.