LangSmith Prompt Management

· 13 min read
Vadim Nicolai
Senior Software Engineer

In the rapidly evolving landscape of Large Language Model (LLM) applications, prompt engineering has emerged as a critical discipline. As teams scale their AI applications, managing prompts across different versions, environments, and use cases becomes increasingly complex. This is where LangSmith's prompt management capabilities shine.

Langfuse Features: Prompts, Tracing, Scores, Usage

· 11 min read
Vadim Nicolai
Senior Software Engineer

A comprehensive guide to implementing Langfuse features for production-ready AI applications, covering prompt management, tracing, evaluation, and observability.

Overview

This guide covers:

  • Prompt management with caching and versioning
  • Distributed tracing with OpenTelemetry
  • User feedback and scoring
  • Usage tracking and analytics
  • A/B testing and experimentation
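The first bullet, prompt management with caching and versioning, can be sketched as a small in-memory registry. This is an illustrative toy only: the class and method names here are hypothetical and are not the Langfuse SDK API, which exposes this functionality through its own client.

```python
import time

class PromptRegistry:
    """Toy in-memory prompt store with versioning and TTL caching.

    Hypothetical names for illustration; not the Langfuse SDK.
    """

    def __init__(self, cache_ttl_seconds=60):
        self._versions = {}   # name -> list of prompt templates
        self._cache = {}      # name -> (template, fetched_at)
        self._ttl = cache_ttl_seconds

    def create_version(self, name, template):
        # Append a new version; versions are numbered from 1.
        self._versions.setdefault(name, []).append(template)
        return len(self._versions[name])

    def get(self, name, version=None):
        # Serve the latest version from cache while the entry is fresh.
        if version is None:
            cached = self._cache.get(name)
            if cached and time.monotonic() - cached[1] < self._ttl:
                return cached[0]
        versions = self._versions[name]
        template = versions[-1] if version is None else versions[version - 1]
        if version is None:
            self._cache[name] = (template, time.monotonic())
        return template

registry = PromptRegistry()
registry.create_version("greet", "Hello, {name}!")
v2 = registry.create_version("greet", "Hi there, {name}!")
print(v2)                               # 2
print(registry.get("greet"))            # latest template
print(registry.get("greet", version=1)) # pinned to version 1
```

Pinning to an explicit version (as in the last call) is what makes rollbacks and A/B tests reproducible, while the TTL cache keeps the hot path from hitting the prompt store on every request.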

Schema-First RAG with Eval-Gated Grounding and Claim-Card Provenance

· 7 min read
Vadim Nicolai
Senior Software Engineer

This article documents a production-grade architecture for generating research-grounded therapeutic content. The system prioritizes verifiable artifacts (papers → structured extracts → scored outputs → claim cards) over unstructured text.

You can treat this as a “trust pipeline”: retrieve → normalize → extract → score → repair → persist → generate.
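The stages of that trust pipeline can be sketched as plain functions composed in order. This is a minimal, self-contained illustration under assumed data shapes, not the article's actual implementation; every function and field name here is hypothetical, and the retrieval and scoring steps are stubbed.

```python
def retrieve(query):
    # Stub: a real system would query a paper index for the topic.
    return [{"id": "paper-1", "text": "  Exercise reduces anxiety symptoms. "}]

def normalize(papers):
    # Clean raw text into a consistent form.
    return [{**p, "text": p["text"].strip().lower()} for p in papers]

def extract(papers):
    # Pull structured claims out of each normalized paper.
    return [{"claim": p["text"], "source": p["id"]} for p in papers]

def score(extracts):
    # Stub: attach a grounding score; a real system would run an evaluator.
    return [{**e, "score": 0.9} for e in extracts]

def repair(scored, threshold=0.5):
    # Eval gate: keep only claims that clear the grounding threshold.
    return [s for s in scored if s["score"] >= threshold]

def to_claim_cards(scored):
    # A claim card pairs the claim with its provenance and score.
    return [{"claim": s["claim"], "provenance": s["source"],
             "score": s["score"]} for s in scored]

cards = to_claim_cards(repair(score(extract(normalize(retrieve("anxiety"))))))
print(cards[0]["provenance"])  # paper-1
```

The point of the composition is that each stage only consumes verifiable artifacts produced by the previous one, so a claim card can always be traced back to its source paper.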