LangWatch: Your Open Source LLM Ops Platform for Confident AI App Development
Are you ready to build AI applications with unprecedented confidence? LangWatch is your comprehensive solution for LLM (Large Language Model) operations. It provides the tools you need to track, visualize, and analyze LLM interactions, fine-tune performance, and deeply understand user engagement. Get ready to improve your AI development workflow!
Benefits of Using an LLMOps Platform for Your AI Apps
LangWatch is designed to help you at every stage of the LLM application lifecycle.
- Improved collaboration: enables team members to iterate together toward a reliable LLM app.
- Insights into user engagement: helps you understand how users interact with your LLM application.
- Confidence in development: provides the data you need to fine-tune performance.
Key Features: Understand LLM Performance and User Behavior
With LangWatch's comprehensive suite of tools, you'll gain deep visibility into your LLM applications.
- Real-Time Telemetry: Track LLM cost, latency, and other crucial metrics for optimized performance.
- Detailed Debugging: Effortlessly troubleshoot issues with the complete metadata and history of LLM calls, organized by threads and users (see the sketch after this list).
- Measurable LLM Quality: Use LangEvals evaluators to quantify your LLM pipeline's output quality and confidently improve your prompts and models via data and versioning.
- DSPy Visualizer: Inspect and track DSPy experiments, compare runs, and iterate towards optimal prompts and pipelines.
- User Analytics: Track metrics on engagement, user interactions, and behavior to improve your product.
- Guardrails: Monitor LLM outputs for PII leaks (using Google DLP), toxicity (using Azure Moderation), and other potential issues. Build custom guardrails using semantic matching or LLMs for response evaluation.
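For example, to get your traces organized by threads and users, you can attach metadata to the current trace. Below is a minimal sketch; the `update(metadata=...)` call and the `user_id`/`thread_id` keys are based on the LangWatch docs, but treat them as assumptions and double-check the documentation for the exact API:

```python
import langwatch


@langwatch.trace()
def answer_question(question: str, user_id: str, thread_id: str) -> str:
    # Attach user and thread identifiers so this trace is grouped with
    # the rest of the conversation on the LangWatch dashboard
    # (key names assumed from the docs)
    langwatch.get_current_trace().update(
        metadata={"user_id": user_id, "thread_id": thread_id}
    )
    # ... run your LLM pipeline here ...
    return "the answer"
```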
Quickstart Guide: Monitor LLM Interactions with the OpenAI Python Integration
Ready to get started? Here's how to quickly integrate LangWatch with your OpenAI Python application:
- Install the LangWatch Library: `pip install langwatch`
- Add the `@langwatch.trace()` Decorator: Decorate the function that triggers your LLM pipeline.
- Enable Autotracking of OpenAI Calls: Use `autotrack_openai_calls()` to automatically capture LLM interactions.
- Export Your LangWatch API Key: Generate your key by setting up your project on the LangWatch platform, then export it as `LANGWATCH_API_KEY`.
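Putting these steps together, here is a minimal sketch (assuming the OpenAI v1 Python client and `LANGWATCH_API_KEY` already exported; the model name is just an example):

```python
import langwatch
from openai import OpenAI

client = OpenAI()


@langwatch.trace()
def main():
    # Automatically capture every OpenAI call made within this trace
    langwatch.get_current_trace().autotrack_openai_calls(client)

    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Tell me a joke."}],
    )
    return completion.choices[0].message.content


if __name__ == "__main__":
    print(main())
```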
That’s it! Your LLM calls will now be automatically captured on LangWatch for monitoring, analytics, and evaluations. For other languages like TypeScript and frameworks like LangChain, read our documentation.
Visualize DSPy Progress and Optimize AI Apps
Use the DSPy Visualizer for automated prompt and pipeline optimization. Track your experiments with ease!
- Install LangWatch: `pip install langwatch`
- Import and Authenticate: Use your LangWatch API key to authenticate.
- Initialize LangWatch Before Compilation: See the sketch below.
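Here is a minimal sketch with a toy program, metric, and trainset as placeholders (any DSPy optimizer works the same way; `langwatch.login()` and `langwatch.dspy.init()` follow the LangWatch docs, and you'll need to configure a DSPy LM as usual):

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

import langwatch

# Authenticate with your LangWatch API key
langwatch.login()

# Configure your DSPy LM as usual, e.g.:
# dspy.settings.configure(lm=...)

# A toy program: a single chain-of-thought question answerer
program = dspy.ChainOfThought("question -> answer")


def exact_match(example, prediction, trace=None):
    # Toy metric: exact string match on the answer
    return example.answer.lower() == prediction.answer.lower()


trainset = [
    dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
]

optimizer = BootstrapFewShot(metric=exact_match)

# Initialize LangWatch *before* compiling so the run is tracked
langwatch.dspy.init(experiment="my-experiment", optimizer=optimizer)

compiled_program = optimizer.compile(program, trainset=trainset)
```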
Now you can follow the progress of your experiments on your LangWatch dashboard! This lets you automatically keep track of the best prompts and pipelines found by the DSPy optimizers.
Run LangWatch Locally
LangWatch is designed to be run locally on your machine. To do that, you need Docker and Docker Compose installed in your local environment.
- Set up the environment file: Duplicate `.env.example` to `.env`.
- Add keys: Add your OpenAI key or Azure OpenAI key to enable the LLM guardrails capabilities and the generation of embeddings for the messages:

```
# For embeddings and LLM guardrails, leave these empty if you don't want to use Azure
AZURE_OPENAI_ENDPOINT=""
AZURE_OPENAI_KEY=""

# Set OPENAI_API_KEY if you want to use OpenAI directly instead of Azure
OPENAI_API_KEY=""
```
- Set up an Auth0 account, then create a simple app (for Next.js) and fill in the credentials:

```
AUTH0_CLIENT_ID=""
AUTH0_CLIENT_SECRET=""
AUTH0_ISSUER="https://dev-yourapp.eu.auth0.com"
```
- Run Docker Compose:

```
docker compose up --build
```

Then go to http://localhost:3000.
More Resources for Effective LLM Monitoring
Explore the comprehensive LangWatch documentation for in-depth information:
- Introduction
- Getting Started
- OpenAI Python Integration
- LangChain Python Integration
- Custom REST Integration
- Concepts
- Troubleshooting and Support
Contribute to the Future of LLMOps
LangWatch thrives on community contributions. Please read our Contribution Guidelines and help us improve the platform!