Unleash the Power of LLMs: A Deep Dive into Just Prompt (and How It Can Skyrocket Your Productivity)
Are you tired of juggling multiple APIs to access different Large Language Models (LLMs)? Do you dream of a unified interface that simplifies your workflow and unlocks the full potential of AI? Look no further than Just Prompt, a lightweight Model Context Protocol (MCP) server designed to streamline your interactions with leading LLM providers.
What is Just Prompt and Why Should You Care?
Just Prompt acts as a central hub, allowing you to send prompts to various LLMs – including OpenAI, Anthropic, Google Gemini, Groq, DeepSeek, and Ollama – through a single, consistent interface. This eliminates the need to learn and manage individual APIs, saving you time and effort. Imagine controlling all your AI interactions from one place!
- Unified Interface: Say goodbye to API fragmentation.
- Simplified Workflow: Focus on your prompts, not the underlying infrastructure.
- Increased Productivity: Get more done in less time.
Key Features That Will Blow You Away
Just Prompt isn't just about consolidation; it's packed with features designed to enhance your LLM experience:
- Parallel Processing: Run prompts across multiple models simultaneously.
- Flexible Input: Send prompts as text strings or upload them from files.
- Automatic Model Correction: Just Prompt intelligently corrects model names based on your default preferences.
- Output Management: Easily save responses to files for later analysis or use.
- Provider Agnostic: Works seamlessly with a wide range of LLM providers.
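To make the parallel-processing idea concrete, here is a minimal sketch of fanning one prompt out to several models at once. The `query_model` function is a hypothetical stand-in, not Just Prompt's actual client; a real dispatcher would route on the provider prefix.

```python
from concurrent.futures import ThreadPoolExecutor

def query_model(model: str, prompt: str) -> str:
    """Stand-in for a real provider call; a real client would dispatch
    on the provider prefix (e.g. 'openai:' or 'anthropic:')."""
    return f"[{model}] response to: {prompt}"

def prompt_all(models: list[str], prompt: str) -> dict[str, str]:
    # Fan the same prompt out to every model concurrently.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m: pool.submit(query_model, m, prompt) for m in models}
        return {m: f.result() for m, f in futures.items()}

results = prompt_all(["openai:gpt-4o", "anthropic:claude-3-haiku"],
                     "Summarize MCP in one line.")
```

Because each provider call is network-bound, running them in a thread pool means total latency is roughly that of the slowest model rather than the sum of all of them.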
Powerful Tools at Your Fingertips
Just Prompt offers a suite of MCP tools tailored to specific tasks:
1. `prompt`: Unleash Your Text Prompts

Send a text prompt to multiple LLM models with ease. Specify the models you want to use (prefixed by the provider) or rely on your default settings:
- `text`: The prompt text itself.
- `models_prefixed_by_provider` (optional): A list of models like `openai:gpt-4o, anthropic:claude-3-haiku`.
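A request to the `prompt` tool can be pictured as a small structured payload. The field names below follow the parameters just described; the surrounding shape (`tool`/`arguments`) is an illustrative assumption, not Just Prompt's exact wire format.

```python
# Hypothetical payload for the `prompt` tool; only the parameter names
# (`text`, `models_prefixed_by_provider`) come from the documentation above.
request = {
    "tool": "prompt",
    "arguments": {
        "text": "Explain recursion in one sentence.",
        "models_prefixed_by_provider": ["openai:gpt-4o", "anthropic:claude-3-haiku"],
    },
}
```

If `models_prefixed_by_provider` is omitted, the server falls back to your default model settings.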
2. `prompt_from_file`: Get Smart with File-Based Prompts

Load prompts from a file, perfect for complex or lengthy instructions:
- `file`: The path to your prompt file.
- `models_prefixed_by_provider` (optional): Choose which models should receive the file prompt.
3. `prompt_from_file_to_file`: Prompt Like a Pro and Save Your Results

Take file-based prompting to the next level by automatically saving the responses as Markdown files:
- `file`: The path to your input prompt file.
- `models_prefixed_by_provider` (optional): Select the target models.
- `output_dir` (default: `.`): Specify the directory to save the output files.
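One practical detail when saving one Markdown file per model is deriving a filesystem-safe name from a spec like `openai:gpt-4o`. The naming scheme below is a guess for illustration only; the source doesn't specify how Just Prompt names its output files.

```python
def output_path(output_dir: str, model: str) -> str:
    """Hypothetical naming scheme: one Markdown file per model,
    with ':' replaced by '_' so the name is filesystem-safe."""
    safe = model.replace(":", "_")
    return f"{output_dir}/{safe}.md"

output_path(".", "openai:gpt-4o")  # "./openai_gpt-4o.md"
```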
4. `ceo_and_board`: Simulating Executive Decision-Making

This innovative tool simulates a board meeting, sending a prompt to multiple 'board member' models, then using a 'CEO' model to make a final decision:
- `file`: The path to the prompt file containing the business challenge.
- `models_prefixed_by_provider` (optional): Models acting as board members (e.g., `openai:gpt-4o, anthropic:claude-3-sonnet`).
- `output_dir` (default: `.`): Directory for saving responses and the CEO's decision.
- `ceo_model` (default: `openai:o3`): The model designated as the CEO.
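The two-stage flow described above (board members answer independently, then a CEO model synthesizes a decision) can be sketched as follows. The `ask` function is a stub standing in for a real provider call; the orchestration logic mirrors the description, not Just Prompt's internal code.

```python
def ask(model: str, prompt: str) -> str:
    # Stub for a real provider call; returns a canned answer for illustration.
    return f"{model} says: proceed with option A"

def ceo_and_board(prompt: str, board: list[str], ceo: str = "openai:o3") -> str:
    # Stage 1: each board member answers the prompt independently.
    opinions = {m: ask(m, prompt) for m in board}
    # Stage 2: the CEO model reviews the collected opinions and decides.
    briefing = "\n".join(f"- {m}: {o}" for m, o in opinions.items())
    return ask(ceo, f"Board opinions:\n{briefing}\n\nMake the final call.")

decision = ceo_and_board("Should we enter the EU market?",
                         ["openai:gpt-4o", "anthropic:claude-3-sonnet"])
```

The design separates divergent opinion-gathering from a single synthesis step, which is why the board responses and the CEO's decision are saved as separate files in `output_dir`.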
5. `list_providers`: Discover Your LLM Options

Quickly list all available LLM providers supported by your Just Prompt setup.
6. `list_models`: Explore Model Capabilities

Dive deeper and list all available models for a specific LLM provider, such as `openai` or `anthropic`.
Provider Prefixes: Your Key to Model Identification
To ensure clarity and efficiency, every model in Just Prompt must be prefixed with its provider's name. Use the short name for faster referencing:
| Provider | Short Name | Example Model |
|---|---|---|
| OpenAI | `o` | `o:gpt-4o` |
| Anthropic | `a` | `a:claude-3-5-haiku` |
| Google Gemini | `g` | `g:gemini-2.5-pro-exp-03-25` |
| Groq | `q` | `q:llama-3.1-70b-versatile` |
| DeepSeek | `d` | `d:deepseek-coder` |
| Ollama | `l` | `l:llama3.1` |
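Expanding a short prefix into its full provider name is a simple lookup. The mapping below mirrors the table above; the function itself is an illustrative sketch, not Just Prompt's actual resolver.

```python
# Short-name mapping taken from the provider table above.
SHORT_NAMES = {"o": "openai", "a": "anthropic", "g": "gemini",
               "q": "groq", "d": "deepseek", "l": "ollama"}

def split_model(spec: str) -> tuple[str, str]:
    """Split 'provider:model', expanding short names like 'o' -> 'openai'.
    Full provider names pass through unchanged."""
    provider, model = spec.split(":", 1)
    return SHORT_NAMES.get(provider, provider), model

split_model("o:gpt-4o")  # ("openai", "gpt-4o")
```

Splitting only on the first `:` keeps any trailing suffix (such as a reasoning-effort or thinking-budget marker) attached to the model portion.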
Getting Started with Just Prompt: Installation and Setup
Ready to experience the power of Just Prompt? Here's how to get started:
1. Clone the Repository:
2. Install Dependencies:
3. Configure Environment Variables: Create a `.env` file with your API keys (copy from `.env.sample`):

```
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here
GEMINI_API_KEY=your_gemini_api_key_here
GROQ_API_KEY=your_groq_api_key_here
DEEPSEEK_API_KEY=your_deepseek_api_key_here
OLLAMA_HOST=http://localhost:11434
```
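If you want to see how `KEY=value` lines from a `.env` file end up in the process environment, here is a minimal loader sketch. Real projects typically use a library such as python-dotenv; this hand-rolled version is only for illustration.

```python
import os

def load_env(text: str) -> None:
    """Minimal .env loader sketch: put KEY=value lines into os.environ.
    Blank lines and '#' comments are skipped; existing values are kept."""
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

load_env("# local Ollama endpoint\nOLLAMA_HOST=http://localhost:11434")
```

Using `setdefault` means values already exported in your shell take precedence over the `.env` file, which is the usual convention.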
Fine-Tuning Your LLM Experience
Just Prompt allows you to customize the reasoning and thinking capabilities of certain models for optimal performance:
OpenAI o-Series Reasoning Effort
For models like `o4-mini` and `o3`, control the level of internal reasoning:
- `:low` – Minimal reasoning (faster, cheaper).
- `:medium` – Balanced (default).
- `:high` – Thorough reasoning (slower, more tokens).

Example: `openai:o4-mini:high`
Claude Thinking Tokens
Enable extended thinking for the Claude model using thinking tokens:
- `anthropic:claude-3-7-sonnet-20250219:1k` (1024 tokens)
- `anthropic:claude-3-7-sonnet-20250219:4k` (4096 tokens)
- `anthropic:claude-3-7-sonnet-20250219:8000` (8000 tokens)
Gemini Thinking Budget
The Gemini model supports a thinking budget for more thorough reasoning:
- `gemini:gemini-2.5-flash-preview-04-17:1k` (1024 budget)
- `gemini:gemini-2.5-flash-preview-04-17:4k` (4096 budget)
- `gemini:gemini-2.5-flash-preview-04-17:8000` (8000 budget)
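Both the Claude thinking-token and Gemini thinking-budget examples use the same suffix convention: `1k` means 1024, `4k` means 4096, and plain digits are taken literally. A small sketch of that conversion:

```python
def parse_budget(suffix: str) -> int:
    """Interpret budget suffixes per the examples above:
    '1k' -> 1024, '4k' -> 4096, plain digits as-is ('8000' -> 8000)."""
    if suffix.endswith("k"):
        return int(suffix[:-1]) * 1024
    return int(suffix)

parse_budget("1k")    # 1024
parse_budget("8000")  # 8000
```

Note that the `k` suffix is binary (multiples of 1024), matching the token counts listed in the examples.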
Ready to Jumpstart Your AI Journey?
With Just Prompt, you can unlock the full potential of LLMs, streamline your workflow, and supercharge your productivity. Say goodbye to API complexity and hello to a unified, powerful AI experience!