Example Project
The Langfuse example project is a live, shared project that lets you explore Langfuse’s features with real data before setting up your own account. Think of it as a hands-on walkthrough where you can see how teams use Langfuse for LLM observability, prompt management, and evaluation.
The example project provides view-only access.
Prefer videos? Watch end-to-end walkthroughs of all Langfuse features.
Getting Started with the Example Project
Step 1: Access the Example Project
Create a free account (no credit card required) to access the example project.
Step 2: Understand What You’re Seeing
When you first open the example project, you’ll land on the Traces page. Here’s what you’re looking at:
- Each row represents one interaction with the example chatbot
- You’ll see traces from all users, not just your own; this is intentional, so you can explore a diverse set of examples
- Each trace shows timing, cost, input/output, and any scores assigned by evaluations
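Each of these rows is produced by an instrumented application logging a trace. As a rough sketch of what that looks like in code (assuming the Langfuse Python SDK v2 interface; the trace name, input/output, and score values below are made up for illustration):

```python
from langfuse import Langfuse

# The client reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY (and optionally
# LANGFUSE_HOST) from the environment by default.
langfuse = Langfuse()

# One chatbot interaction becomes one trace carrying its input and output.
trace = langfuse.trace(
    name="qa-chatbot",
    input={"question": "How do I create a dataset in Langfuse?"},
    output={"answer": "Open Datasets in the left navigation and click New dataset."},
)

# Scores (e.g. user feedback or evaluator results) are attached to the trace
# and appear alongside it in the Traces table.
trace.score(name="user-feedback", value=1, comment="Helpful answer")

# Send any buffered events before the script exits.
langfuse.flush()
```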
Try this:
- Click on any trace to see detailed execution steps
- Notice the graph view showing how the chatbot’s components work together
- Look for traces with scores to see how evaluation works
Explore all features: Browse the left navigation to explore Tracing, Sessions, Prompts, Scores, and Datasets. Each area shows how Langfuse fits into a complete LLM application.
Interact with the Example Chatbot
The example project includes a chatbot that generates traces you can explore. Every question creates a new trace that you can inspect in Langfuse.
To interact with the chatbot and see live traces, visit the Langfuse example project directly. The chatbot requires features specific to the Langfuse deployment that aren’t available in PebbleAI’s documentation environment.
Interested in the implementation details of this RAG chatbot? Check out the blog post about how it was built (the code is fully open source).
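For a rough idea of how an instrumented chatbot like this turns each question into a trace with nested steps (the kind of structure the graph view visualizes), here is a minimal sketch using the Langfuse Python SDK’s @observe decorator; the retrieval and generation functions are hypothetical placeholders, not the actual example chatbot code:

```python
from langfuse.decorators import observe

# Hypothetical retrieval step; each decorated call becomes a nested
# observation inside the trace.
@observe()
def retrieve_context(question: str) -> list[str]:
    return ["Langfuse docs snippet 1", "Langfuse docs snippet 2"]

# Hypothetical generation step.
@observe()
def generate_answer(question: str, context: list[str]) -> str:
    return f"Answer based on {len(context)} retrieved snippets."

# Top-level entry point: every call produces one new trace.
@observe()
def answer_question(question: str) -> str:
    context = retrieve_context(question)
    return generate_answer(question, context)

if __name__ == "__main__":
    print(answer_question("What is a trace in Langfuse?"))
```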
Next Steps
Ready to set up your own project?
- Get Started with Tracing: Add observability to your LLM application
- Set Up Prompt Management: Move prompts out of your code (see the sketch after this list)
- Create Your First Evaluation: Start measuring quality systematically
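As a taste of what moving prompts out of your code looks like, here is a minimal sketch assuming the Langfuse Python SDK’s prompt management API; the prompt name and template variable are hypothetical:

```python
from langfuse import Langfuse

langfuse = Langfuse()

# Fetch a prompt managed in Langfuse instead of hardcoding it
# (the name "qa-system-prompt" is hypothetical).
prompt = langfuse.get_prompt("qa-system-prompt")

# Fill in the template variables to get the final prompt text.
compiled = prompt.compile(user_question="How do traces and scores relate?")
print(compiled)
```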