3 posts tagged with "remote work"

The Case Against Mandatory In-Person Work for AI Startups

· 8 min read
Vadim Nicolai
Senior Software Engineer

The argument for an "office-first" culture is compelling on its face. It speaks to a romantic ideal of innovation: chance encounters, whiteboard epiphanies, and a shared mission forged over lunch. For a company building AI, this narrative feels intuitively correct. As a senior engineer who has worked in both colocated and globally distributed teams, I understand the appeal.

But intuition is not a strategy, and anecdotes are not data. When we examine the evidence and the unique constraints of an AI startup, a mandatory in-person policy looks like a self-imposed bottleneck. It limits access to the most critical resource—talent—and misunderstands how modern technical collaboration scales.

LLM as Judge: What AI Engineers Get Wrong About Automated Evaluation

· 20 min read
Vadim Nicolai
Senior Software Engineer

Claude 3.5 Sonnet rates its own outputs approximately 25% higher than a human panel would. GPT-4 gives itself a 10% boost. Swap the order of two candidate responses in a pairwise comparison, and the verdict flips in 10–30% of cases, not because the quality changed, but because the judge has a position preference it cannot override.

These are not edge cases. They are the default behavior of every LLM-as-judge pipeline that ships without explicit mitigation. And most ship without it.
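The position bias described above has a standard mitigation: query the judge in both orders and accept the verdict only when it is consistent. The sketch below illustrates the idea; `biased_judge` is a toy stand-in for a real model call (not from the article), written as an extreme case that always prefers whichever response appears first.

```python
def biased_judge(prompt: str, first: str, second: str) -> str:
    """Toy judge with an extreme position preference: always picks 'first'.
    A real implementation would call an LLM API here."""
    return "first"


def debiased_compare(judge, prompt: str, a: str, b: str) -> str:
    """Query the judge in both orders; accept only a consistent verdict."""
    v1 = judge(prompt, a, b)  # a shown in the first position
    v2 = judge(prompt, b, a)  # b shown in the first position
    if v1 == "first" and v2 == "second":
        return "a"  # a wins regardless of position
    if v1 == "second" and v2 == "first":
        return "b"  # b wins regardless of position
    return "tie"  # verdicts disagree: position preference, not quality


# A purely position-biased judge produces a tie under order swapping,
# exposing that its preference carried no signal about quality.
print(debiased_compare(biased_judge, "Which answer is better?", "A", "B"))  # tie
```

The cost is two judge calls per comparison instead of one, which is why many pipelines skip it and ship the bias instead.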

LLM-as-judge, the practice of using a capable large language model to score or compare outputs from another LLM, has become the dominant evaluation method for production AI systems. 53.3% of teams with deployed AI agents now use it, according to LangChain's 2025 State of AI Agents survey. The economics are compelling: 80% agreement with human preferences at 500x–5,000x lower cost. But agreement rates and cost savings obscure a deeper problem. Most teams adopt the method, measure the savings, and never measure the biases. The result is evaluation infrastructure that looks automated but is quietly wrong in systematic, reproducible ways.

This article covers the mechanism, the research, and the biases that break LLM judges in production.

What is LLM as a judge?

LLM-as-a-Judge is an evaluation methodology in which a capable large language model scores or compares outputs from another LLM application against defined criteria, such as helpfulness, factual accuracy, and relevance, using structured prompts that request chain-of-thought reasoning before a final score. The method achieves approximately 80% agreement with human evaluators, matching human-to-human consistency, at 500x–5,000x lower cost than manual review.
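The "reasoning before a final score" structure can be sketched as a prompt template plus a parser that reads the score off the last line. This is an illustrative shape, not the article's exact prompt; the criteria list and the JSON score format are assumptions.

```python
import json


def build_judge_prompt(question: str, answer: str,
                       criteria=("helpfulness", "factual accuracy", "relevance")) -> str:
    """Build a judge prompt that asks for reasoning first, score last."""
    crit = ", ".join(criteria)
    return (
        "You are an impartial evaluator.\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        f"Rate the answer on: {crit}.\n"
        "Explain your reasoning step by step, then end with one line of JSON:\n"
        '{"score": <integer 1-5>}'
    )


def parse_score(judge_output: str) -> int:
    """Extract the final JSON score line from the judge's response."""
    last_line = judge_output.strip().splitlines()[-1]
    return json.loads(last_line)["score"]


# Example: parsing a hypothetical judge response.
response = 'The answer is clear, correct, and on topic.\n{"score": 5}'
print(parse_score(response))  # 5
```

Requesting reasoning before the score matters because it forces the model to commit to an evaluation before emitting a number, which research on chain-of-thought prompting has shown improves judgment consistency.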