# Monitor Mirascope applications with Langfuse
This guide shows you how to use Langfuse to track and debug Mirascope applications.
## What is Mirascope?

[Mirascope](https://github.com/Mirascope/mirascope) is a Python toolkit for building LLM applications. Developing LLM applications with Mirascope feels just like writing the Python code you're already used to: it provides an intuitive interface for working with Large Language Models (LLMs) that follows standard Python coding practices.
## What is Langfuse?

[Langfuse](https://github.com/langfuse/langfuse) is an open-source platform for LLM engineering. It provides tracing, evaluation, prompt management, and metrics to help debug and enhance your LLM application.
## How to use Mirascope with Langfuse
With this integration, you can automatically capture detailed traces of how users interact with your application.
You can install the necessary packages directly or by using the `langfuse` extras flag:

```bash
pip install "mirascope[langfuse]"
```
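If you'd rather install the dependencies individually, the equivalent direct installation (assuming the standard PyPI package names) is:

```bash
pip install mirascope langfuse
```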
Mirascope automatically applies the Langfuse `observe()` decorator to all relevant functions within Mirascope via its `with_langfuse` decorator.

```python
from mirascope.integrations.langfuse import with_langfuse
```
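Langfuse needs your project credentials before any traces can be sent. A minimal setup sketch using environment variables (the key values are placeholders; replace them with the API keys from your Langfuse project settings):

```python
import os

# Placeholder credentials; replace with your Langfuse project's API keys.
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
# Defaults to Langfuse Cloud; point this at your own URL if self-hosting.
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com"
```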
## Getting Started

### 1. Calls
The `with_langfuse` decorator can be used on all Mirascope functions to automatically log calls across all of Mirascope's supported LLM providers. Here is a simple example using tools:
```python
from mirascope.core import anthropic, prompt_template
from mirascope.integrations.langfuse import with_langfuse


def format_book(title: str, author: str) -> str:
    """Tool the model can call to format a book recommendation."""
    return f"{title} by {author}"


@with_langfuse()
@anthropic.call(model="claude-3-5-sonnet-20240620", tools=[format_book])
@prompt_template("Recommend a {genre} book.")
def recommend_book(genre: str):
    ...


print(recommend_book("fantasy"))
# > Certainly! I'd be happy to recommend a fantasy book...
```
This will give you:

- A trace around the `recommend_book` function that captures items like the prompt template, input/output attributes, and more.
- A human-readable display of the conversation with the agent.
- Details of the response, including the number of tokens used.
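If the model answers with a tool call rather than plain text, you can execute the returned tool directly on the response. A short sketch using Mirascope's tool API (the printed output is illustrative):

```python
response = recommend_book("fantasy")
if tool := response.tool:  # set when the model chose to call format_book
    print(tool.call())
    # > The Name of the Wind by Patrick Rothfuss
else:
    print(response.content)
```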
### 2. Streams
You can capture streams exactly the same way:
```python
from mirascope.core import openai, prompt_template
from mirascope.integrations.langfuse import with_langfuse


@with_langfuse()
@openai.call(
    model="gpt-4o-mini",
    stream=True,
    # Ask OpenAI to include token usage in the final stream chunk
    # so that usage shows up in the trace.
    call_params={"stream_options": {"include_usage": True}},
)
@prompt_template("Recommend a {genre} book.")
def recommend_book(genre: str):
    ...


for chunk, _ in recommend_book("fantasy"):
    print(chunk.content, end="", flush=True)
# > I recommend **"The Name of the Wind"** by Patrick Rothfuss. It's the first book...
```
For some providers, certain `call_params` must be set for usage to be tracked, as with OpenAI's `stream_options` in the example above.