Claude: Building evals and test cases. More documentation updates from Anthropic: this section on writing evals for Claude is new today and includes Python code examples for a number of different evaluation techniques.
Included are several examples of the LLM-as-judge pattern, plus an example using cosine similarity and another using the new-to-me Rouge Python library, which implements the ROUGE metric for evaluating the quality of summarized text.
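To give a flavour of the LLM-as-judge pattern, here's a minimal sketch using the official anthropic Python client. The model name, prompt wording and one-word grading rubric are my own illustrative assumptions, not Anthropic's exact example:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def judge(question: str, answer: str) -> str:
    """Ask Claude to grade an answer, returning 'correct' or 'incorrect'."""
    prompt = (
        f"Question: {question}\n"
        f"Answer to grade: {answer}\n\n"
        "Reply with exactly one word: correct or incorrect."
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model alias
        max_tokens=10,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text.strip().lower()

print(judge("What is the capital of France?", "Paris"))
```

Constraining the judge to a single word keeps the verdict trivially machine-parseable, which matters once you're scoring hundreds of test cases in a loop.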
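The cosine similarity approach scores a model output by how close its embedding sits to the embedding of an expected answer. The calculation itself is a couple of lines of numpy; the vectors below are fake placeholders, since the embedding model is a separate choice (Anthropic doesn't offer its own embeddings API):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In a real eval these would come from your embedding model of choice:
expected = np.array([0.10, 0.80, 0.05, 0.30])  # embedding of the expected answer
actual = np.array([0.12, 0.75, 0.10, 0.28])    # embedding of the model's output

print(f"{cosine_similarity(expected, actual):.3f}")  # values near 1.0 = semantically close
```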
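And here's that Rouge library in action. `pip install rouge` gets you a `Rouge` class whose `get_scores()` method returns precision, recall and F1 for ROUGE-1, ROUGE-2 and ROUGE-L; the reference and candidate strings here are made up:

```python
from rouge import Rouge  # pip install rouge

reference = "The cat sat on the mat and watched the birds outside."
candidate = "A cat sat on a mat watching birds outside."

scores = Rouge().get_scores(candidate, reference)
# scores is a list with one dict per candidate/reference pair:
print(scores[0]["rouge-l"]["f"])  # ROUGE-L F1 score, between 0 and 1
```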