Unleash the Power of LLMs with Just-Prompt: Your Unified Interface to AI Giants
Tired of juggling multiple APIs to access the best Large Language Models (LLMs)? Just-Prompt offers a streamlined solution, acting as a Model Context Protocol (MCP) server that unifies access to top providers like OpenAI, Gemini, Anthropic, Groq, DeepSeek, and Ollama. This article delves into how Just-Prompt simplifies LLM integration, boosts productivity, and helps you make the most of cutting-edge AI technology.
Why Choose Just-Prompt for Your LLM Needs?
- Unified API: Access multiple LLM providers through a single, consistent interface, simplifying your workflow.
- Parallel Processing: Run prompts across multiple models simultaneously and compare their outputs for optimal results.
- Easy Model Management: List available providers and models with simple commands, staying up-to-date with LLM capabilities.
- Flexible Input Options: Send prompts directly as text or load them from files, catering to different use cases.
- Automated Output Handling: Save model responses to markdown files for easy analysis and documentation.
Turn LLM Complexity into Simplicity: Essential Tools at Your Fingertips
Just-Prompt provides a range of powerful tools to streamline your LLM interactions:
- `prompt`: Send text prompts to multiple LLMs, specifying models or using defaults (see the client sketch after this list).
  - `text`: The prompt you want the models to process.
  - `models_prefixed_by_provider` (optional): A list of models to use, prefixed with their provider (e.g., `openai:gpt-4o`, `a:claude-3-haiku`).
- `prompt_from_file`: Load prompts from a file and send them to multiple models.
  - `file`: The path to the file containing your prompt.
  - `models_prefixed_by_provider` (optional): A list of target models.
- `prompt_from_file_to_file`: Process prompts from a file and save the responses to separate markdown files.
  - `file`: The path to the prompt file.
  - `models_prefixed_by_provider` (optional): The models to use.
  - `output_dir` (optional): Directory for saving responses (defaults to ".").
- `ceo_and_board`: Simulate a decision-making scenario with "board member" models and a "CEO" model that synthesizes their input. This tool leverages LLM orchestration to tackle complex challenges.
  - `file`: Path to the file containing the prompt describing the scenario.
  - `models_prefixed_by_provider` (optional): List of models acting as board members.
  - `output_dir` (optional): Directory for saving responses and the CEO's decision (defaults to ".").
  - `ceo_model` (optional): Model for the CEO role (defaults to "openai:o3").
- `list_providers`: Display the LLM providers supported by Just-Prompt.
- `list_models`: List all models available for a specific provider.
  - `provider`: The provider to query (e.g., 'openai' or 'o').
Provider Prefixes: Quick Access to Your Favorite Models
Just-Prompt uses short prefixes for each provider to make model selection faster and more convenient:
- OpenAI: `o` or `openai` (e.g., `o:gpt-4o-mini`)
- Anthropic: `a` or `anthropic` (e.g., `a:claude-3-5-haiku`)
- Google Gemini: `g` or `gemini` (e.g., `g:gemini-2.5-pro-exp-03-25`)
- Groq: `q` or `groq` (e.g., `q:llama-3.1-70b-versatile`)
- DeepSeek: `d` or `deepseek` (e.g., `d:deepseek-coder`)
- Ollama: `l` or `ollama` (e.g., `l:llama3.1`)
Using these prefixes, you can easily specify the desired model for each operation.
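Either prefix form is accepted wherever a provider or model string is expected. Continuing the client sketch above (inside the same `ClientSession` block), a hedged example querying OpenAI's models via the short prefix:

```python
# Inside the ClientSession block from the earlier sketch: "o" and "openai"
# are interchangeable, so both calls return the same provider's models.
result = await session.call_tool("list_models", arguments={"provider": "o"})
# Equivalent: await session.call_tool("list_models", arguments={"provider": "openai"})
for item in result.content:
    print(getattr(item, "text", item))
```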
Installing and Configuring Just-Prompt
- Clone the Repository:

  ```bash
  git clone https://github.com/yourusername/just-prompt.git
  cd just-prompt
  ```

- Install Dependencies:

  ```bash
  uv sync
  ```

- Configure API Keys: Create a `.env` file with your API keys (copy from `.env.sample`):

  ```
  OPENAI_API_KEY=your_openai_api_key_here
  ANTHROPIC_API_KEY=your_anthropic_api_key_here
  GEMINI_API_KEY=your_gemini_api_key_here
  GROQ_API_KEY=your_groq_api_key_here
  DEEPSEEK_API_KEY=your_deepseek_api_key_here
  OLLAMA_HOST=http://localhost:11434
  ```
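Once the keys are in place, a quick way to confirm the server starts and sees your configuration is to call `list_providers` through the MCP Python SDK. This is a sketch under the same assumptions as the earlier example (run from the repo root, `mcp` package installed):

```python
# Smoke test after installation: start just-prompt over stdio and ask it
# which providers it supports. Assumes this runs from the repo root.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def smoke_test() -> None:
    server = StdioServerParameters(
        command="uv", args=["--directory", ".", "run", "just-prompt"]
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("list_providers", arguments={})
            for item in result.content:
                print(getattr(item, "text", item))


asyncio.run(smoke_test())
```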
Optimize LLM Performance: Reasoning Effort, Thinking Tokens, and Thinking Budget
Just-Prompt allows you to fine-tune the reasoning capabilities of certain models:
- OpenAI's o-Series: Control reasoning effort with the suffixes `:low`, `:medium`, or `:high` (e.g., `openai:o4-mini:low`).
- Anthropic's Claude 3.7 Sonnet: Leverage thinking tokens by adding a suffix like `:1k`, `:4k`, or `:8000` to specify the token budget (e.g., `anthropic:claude-3-7-sonnet-20250219:4k`).
- Google's Gemini 2.5 Flash: Utilize the thinking budget with suffixes like `:1k`, `:4k`, or `:8000` (e.g., `gemini:gemini-2.5-flash-preview-04-17:4k`).
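Because the suffixes are just part of the model string, they work anywhere `models_prefixed_by_provider` is accepted. A short illustrative fragment (model names follow the examples above; the prompt text is made up):

```python
# The reasoning/thinking suffixes ride along in the same model strings you
# pass everywhere else. Model names follow the examples above.
arguments = {
    "text": "Plan a migration from REST to gRPC for a mid-size service.",
    "models_prefixed_by_provider": [
        "openai:o4-mini:high",                       # high reasoning effort
        "anthropic:claude-3-7-sonnet-20250219:4k",   # 4k thinking tokens
        "gemini:gemini-2.5-flash-preview-04-17:1k",  # 1k thinking budget
    ],
}
# Inside a ClientSession block, as in the earlier sketches:
# result = await session.call_tool("prompt", arguments=arguments)
```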
Getting Started with Claude Code Integration
Just-Prompt integrates seamlessly with Claude Code, Anthropic's command-line coding agent. Use the `claude mcp add` command to register Just-Prompt. Here are a few examples:
- Basic Integration:

  ```bash
  claude mcp add just-prompt "$(pbpaste)"
  # JSON to copy:
  # { "command": "uv", "args": ["--directory", ".", "run", "just-prompt"] }
  ```
- With Default Models:

  ```bash
  claude mcp add just-prompt -s project \
    -- \
    uv --directory . \
    run just-prompt --default-models "openai:gpt-4o"
  ```
Elevate Your LLM Workflow Today
Just-Prompt empowers you to harness the collective intelligence of multiple LLMs with ease. By providing a unified interface and advanced control over model behavior, it streamlines your AI development process and unlocks new possibilities for innovation. Start leveraging the power of Just-Prompt today!