Evaluating fine-tuned models: Beyond loss metrics

Chris Nakamura
Feb 20, 2026

Training loss going down doesn't mean your fine-tuned model is actually better. I've learned this the hard way.

Evaluation framework I use

1. Task-specific metrics: accuracy, F1, or BLEU, depending on your use case
2. Regression testing: compare against the base model on a fixed test set
3. A/B testing: deploy both models and measure user preference
4. Edge case testing: specifically test failure modes observed in the base model
5. Safety testing: ensure fine-tuning didn't degrade safety behaviors

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def evaluate_model(model_id, test_set):
    """Score a model on a fixed test set; temperature=0 for reproducibility."""
    results = []
    for example in test_set:
        response = client.chat.completions.create(
            model=model_id,
            messages=example["messages"],
            temperature=0,
        )
        score = compute_metric(response, example["expected"])
        results.append(score)
    return sum(results) / len(results)
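The compute_metric helper above will vary by task. As a minimal sketch, here is a hypothetical exact-match version (the response object shape follows the OpenAI chat completions API; the normalization choices are my assumptions):

    def compute_metric(response, expected):
        # Hypothetical exact-match metric: 1.0 if the model's answer matches
        # the expected string after trimming whitespace and lowercasing.
        got = response.choices[0].message.content.strip().lower()
        return 1.0 if got == expected.strip().lower() else 0.0

For generation tasks you would swap this for F1, BLEU, or an LLM-as-judge score, but exact match is a reasonable starting point for classification-style fine-tunes.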

I always compare three models: the base model, the fine-tuned model, and a more capable base model (e.g., if fine-tuning GPT-4o-mini, compare against base GPT-4o). If the fine-tune doesn't beat the stronger base model on your task, prompting the stronger model may be the cheaper option.
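That three-way comparison can be wrapped in a small harness. A sketch, assuming the evaluate_model function above (the evaluation function is passed in so the harness stays testable; the model IDs are placeholders):

    def compare_models(model_ids, test_set, evaluate_fn):
        # Run the same fixed test set through each model so scores are
        # directly comparable; returns {model_id: mean score}.
        return {m: evaluate_fn(m, test_set) for m in model_ids}

    # Example usage (model IDs are illustrative):
    # scores = compare_models(
    #     ["gpt-4o-mini", "ft:gpt-4o-mini:org::abc123", "gpt-4o"],
    #     test_set,
    #     evaluate_model,
    # )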
