
Eval Loop

Iterative quality diagnosis that runs targeted fix-and-check cycles until output meets a defined quality threshold or hits max iterations.

Eval Loop is the skill for "this is almost right but not quite." It takes content or code, evaluates it against explicit criteria, identifies the top 3 issues, fixes them, re-evaluates, and repeats until the threshold is met or a maximum iteration count is reached. Each iteration is scoped: fix only the flagged issues, do not touch anything else. This prevents the common AI failure mode where "improving" correct sections while fixing broken ones introduces new regressions.

The loop maintains an iteration log showing what changed at each step and why. Typical convergence: 2-4 iterations for content, 3-6 for code. The escape hatch fires if the quality score plateaus across 2 consecutive iterations; continuing past that point burns tokens without progress.
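The control flow described above can be sketched as a small driver function. This is a minimal illustration, not the actual implementation: the `evaluate` and `fix` callables stand in for model or reviewer calls, and all names, defaults, and the score scale are assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class EvalResult:
    score: float       # hypothetical quality score in [0, 1]
    issues: list[str]  # flagged problems, worst first

@dataclass
class LoopLog:
    entries: list[dict] = field(default_factory=list)

def eval_loop(
    draft: str,
    evaluate: Callable[[str], EvalResult],
    fix: Callable[[str, list[str]], str],
    threshold: float = 0.9,
    max_iterations: int = 6,
) -> tuple[str, LoopLog]:
    log = LoopLog()
    prev_score = None
    plateau = 0
    for i in range(1, max_iterations + 1):
        result = evaluate(draft)
        # Iteration log: what was flagged at each step.
        log.entries.append({"iteration": i, "score": result.score,
                            "issues": result.issues[:3]})
        if result.score >= threshold:
            break
        # Escape hatch: stop if the score fails to improve
        # across 2 consecutive iterations.
        if prev_score is not None and result.score <= prev_score:
            plateau += 1
            if plateau >= 2:
                break
        else:
            plateau = 0
        prev_score = result.score
        # Scoped fix: address only the top 3 flagged issues,
        # touch nothing else.
        draft = fix(draft, result.issues[:3])
    return draft, log
```

With stub evaluators the loop terminates either on the threshold, the plateau check, or the iteration cap, whichever fires first.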

Where it shows up:

- Content polishing
- Code quality improvement
- Prompt tuning
- Design iteration
