I've built over 50 Claude Code skills across 8 workstreams. Most took less than 10 minutes. The first took three hours because every tutorial I found explained the concept without showing what actually breaks.

This is the tutorial I needed six months ago: a 30-minute path from "what is a skill?" to a working, tested skill you'll use tomorrow. Not a feature overview. The specific steps, the specific YAML mistakes that make your skill invisible, and the iteration cycle that turns a rough draft into something reliable.

Everything here comes from patterns I found building 50+ production skills and from the ways the first 10 broke silently before I understood why.


What a Claude Code Skill Actually Is (And What It Isn't)

A skill is a markdown file that teaches Claude how to handle a specific type of task. No code. No API integration. No deployment pipeline. A text file with instructions.

The file is called SKILL.md (all caps). It lives inside a folder under .claude/skills/ in any project directory. Claude discovers it automatically when you start a session.
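
Creating one is two commands in a terminal (the folder name content-brief here is just a placeholder):

```shell
# Scaffold a skill folder and an empty SKILL.md inside the current project
mkdir -p .claude/skills/content-brief
touch .claude/skills/content-brief/SKILL.md
```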

Every SKILL.md has two mandatory parts:

  1. YAML frontmatter (between --- markers) — metadata that tells Claude when to load the skill. Think of it as the label on a filing cabinet drawer.
  2. Markdown body — the actual instructions Claude follows. The document inside the drawer.

Claude reads the YAML frontmatter of every skill at session start but only loads the full body when it judges the skill relevant to your request. Your description field acts as a search index entry. If it's vague, the skill never fires. If it's too narrow, it only fires on exact phrasing. That's all the architecture you need for now.

A skill is not a chatbot persona or a code plugin. It's a reusable workflow in plain text. And it's not a one-shot prompt. It persists across sessions, inherits your CLAUDE.md context, and compounds over time.

Before we build, here's what the output looks like — a content brief generated by typing /content-brief "AI-driven sales enablement":

## Content Brief: AI-Driven Sales Enablement

**Working Title:** How AI Sales Enablement Actually Works (Beyond the Vendor Pitch)
**Target Audience:** B2B sales leaders evaluating AI tools for their teams
**Format:** Long-form article, 2000-2500 words

### Key Angles
1. The gap between AI sales tool marketing and daily rep experience
2. Which enablement tasks actually benefit from AI today (and which don't)
3. Adoption patterns from teams that stuck with it vs. abandoned

### Outline
1. The enablement problem AI claims to solve
2. What's working: three specific use cases with evidence
3. What's not: where AI enablement overshoots
4. The adoption question nobody asks upfront
5. A realistic 90-day deployment sequence

### Competitor Gap
Most existing content covers tool comparisons without adoption data...

### Suggested Sources
- Gartner 2025 Sales Enablement survey
- Three practitioner case studies from revenue operations forums

That's a usable brief shaped by your voice and audience context, not a generic outline. The skill that produces it is 40 lines of markdown. Let's build it.

For the full specification, see the official skills documentation.


The YAML Frontmatter — 3 Elements That Determine Whether Your Skill Lives or Dies

If you're not familiar with YAML: it's a plain-text format for structured data. Think of it as a form where each line has a label and a value. The three dashes (---) mark the beginning and end of that form.

This section is the one no other tutorial covers well. YAML frontmatter is where skills silently fail — no error, no warning, just a skill that never triggers. I learned this when 37 out of 52 skills became invisible in a single session because of one punctuation mistake I'd copy-pasted into all of them.

The Three Required Elements

1. name — This becomes the /slash-command you type to invoke the skill. Max 64 characters. Lowercase letters, numbers, and hyphens only. No spaces, no underscores, no special characters.

# Good
name: "meeting-prep"

# Bad
name: "Meeting Prep Dossier v2"  # spaces, uppercase, too descriptive

The name is what you type. Keep it short. You'll type it hundreds of times.

2. description — Max 200 characters. The single most important line in the entire file. Claude reads every skill's description at session start and matches it against what you're asking for. A vague description means your skill never fires. A too-narrow description means it only fires on exact phrasing.

# Good — specific enough to match, broad enough to catch variations
description: "Generate a pre-meeting research dossier for upcoming calls and meetings"

# Bad — too vague to match reliably
description: "Helps with meetings"

# Bad — far too narrow, only fires on near-exact phrasing
description: "Create a comprehensive strategic account research briefing document
  for enterprise B2B SaaS sales qualification meetings with VP-level prospects"

Write it like a search result snippet: specific enough to match, broad enough to catch variations.

3. --- delimiters — The frontmatter must start and end with exactly three hyphens on their own line. No extra spaces. No tabs. YAML is whitespace-sensitive, and it won't tell you when it's unhappy.

Here's the complete frontmatter for a working skill:

---
name: "content-brief"
description: "Generate a structured content brief with target audience, key angles, outline, and competitor gap analysis. Use when creating content from scratch."
---

Two fields and two fence lines. Everything else is optional.

The Three Mistakes That Kill Skills Silently

Each of these cost me real debugging time because there's no error message — the skill just vanishes.

1. Nested double quotes in a double-quoted YAML string. Writing description: "Build a "comprehensive" report" breaks the YAML parser. The fix: use single quotes inside double-quoted strings, or use a YAML block scalar (description: |). This is the copy-pasted mistake that made 37 of my 52 skills invisible in a single session with no warning.

2. Description over 200 characters. No error. Claude just truncates or ignores it. The skill doesn't match against your requests as expected. Fix: count characters. Every word in the description must earn its place.

3. Missing the closing ---. The frontmatter block is YAML between two --- lines. If you forget the closing delimiter, everything below becomes part of the frontmatter, and the YAML parser chokes on your markdown headers. No error message. The skill stops existing.

Practical advice: After writing your frontmatter, validate it:

python -c "import yaml; yaml.safe_load(open('SKILL.md').read().split('---',2)[1])"

Or ask Claude: "Read my SKILL.md at .claude/skills/my-skill/SKILL.md and tell me if the YAML frontmatter is valid." It catches issues faster than manual review.
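
If you want one step beyond the one-liner, here is a small stdlib-only lint sketch covering the silent killers above — delimiters, name format, description length. It's illustrative, not the official validation logic, and the `check_frontmatter` helper is my own naming:

```python
import re

def check_frontmatter(text):
    """Return a list of problems with a SKILL.md's frontmatter.

    Covers the silent-failure modes described above. It is a lint
    sketch, not a real YAML parser -- nested-quote errors still need
    the yaml.safe_load check."""
    problems = []
    if not text.startswith("---\n"):
        return ["file must start with '---' on its own line"]
    end = text.find("\n---", 4)
    if end == -1:
        return ["missing closing '---' delimiter"]
    # Naive key: value parse of the frontmatter block
    fields = {}
    for line in text[4:end].splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip().strip("'\"")
    name = fields.get("name", "")
    if not re.fullmatch(r"[a-z0-9-]{1,64}", name):
        problems.append(f"name {name!r}: 64 chars max, lowercase/digits/hyphens only")
    desc = fields.get("description", "")
    if not desc:
        problems.append("description is missing")
    elif len(desc) > 200:
        problems.append(f"description is {len(desc)} chars (max 200)")
    return problems
```

Run it over any SKILL.md's contents; an empty list means the frontmatter passes these checks.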


The Prompt Body — Writing Instructions Claude Follows

Now the part that makes the skill useful.

Structure Beats Cleverness

Use markdown headers, bullet points, and numbered steps. Claude parses structure far more reliably than prose paragraphs. Every time I've written a skill as flowing prose — "First you should consider doing this, and then perhaps moving on to..." — the output was inconsistent. Every time I switched to numbered steps, the output stabilized.

The pattern that works:

  1. Overview — One sentence saying what this skill does
  2. Inputs — What the user provides (company name, topic, file path)
  3. Steps — Numbered sequence of what Claude should do
  4. Output format — What the final deliverable looks like (sections, structure, length)
  5. Constraints — What Claude should NOT do (don't fabricate, don't exceed N words, always cite sources)

This structure mirrors how you'd brief a competent junior hire: here's the task, here's what I'm giving you, here are the steps, here's what I want back, and here's what to avoid.

A Complete Example — The "Content Brief" Skill

Here's a full, copy-pasteable SKILL.md for a content brief generator. Create the folder .claude/skills/content-brief/, put this in SKILL.md, and it works.

---
name: "content-brief"
description: "Generate a structured content brief with target audience, key angles, outline, and competitor gap analysis. Use when creating content from scratch."
---
# Content Brief Generator

Generate a structured content brief for any topic the user provides.

## Inputs

- **Topic** (required): The subject to create a brief for
- **Format** (optional): Article, guide, case study, LinkedIn post. Default: article.

## Steps

1. **Load project context.** If the user has a CLAUDE.md, read it for voice,
   audience, and positioning context. If no CLAUDE.md exists, proceed with
   general best practices.
2. **Research the topic.** Search the web for current coverage, competitor
   articles, and gaps in existing content. Check the user's repository for
   any existing content on this topic to avoid duplication.
3. **Identify the angle.** Based on the research, determine what perspective
   is underserved. Don't rehash what already exists — find the gap.
4. **Generate the brief** with these sections:
   - Working title (specific, not generic)
   - Target audience (one sentence)
   - Format and target word count
   - 3-5 key angles, ranked by differentiation potential
   - Outline with 4-6 sections
   - Competitor gap analysis (what exists vs. what's missing)
   - 3-5 suggested sources
5. **Apply constraints.** No hype language. Cite sources where possible.
   Keep the entire brief under 500 words — it's a brief, not a draft.

## Output Format

Single markdown document with H2 headers for each section. Deliver directly
to the user — don't save to a file unless asked.

## Constraints

- Do NOT generate a full draft. This is a brief only.
- Do NOT use words like 'game-changing,' 'revolutionary,' or 'unlock.'
- Do NOT fabricate sources. If you can't find real sources, say so.
- Keep the brief under 500 words total.

That's 40 lines. It works without a CLAUDE.md (you get a solid generic brief), and works better with one (voice and audience context get pulled in automatically at Step 1).

Why a content brief as the first example? Because it's useful to anyone — marketing, sales enablement, thought leadership, partner comms. And because it shows the CLAUDE.md inheritance pattern: your voice and audience context come from your project config, not from re-explaining yourself every time.

A Second Example — The "Meeting Prep" Skill

To show the pattern isn't limited to content workflows, here's a simplified meeting prep skill I use almost daily. The production version is 244 lines with chained research modules, but the core structure is the same five sections.

---
name: "meeting-prep"
description: "Generate a pre-meeting research dossier for upcoming calls and meetings. Use when preparing for any meeting with external participants."
---
# Meeting Prep Dossier

Generate a research dossier for an upcoming meeting by researching attendees
and their companies.

## Inputs

- **Meeting description** (required): Who, when, what type of meeting
- **Attendee names** (required): At least one name with company or title
- **Objectives** (optional): What the user wants to accomplish

## Steps

1. **Load project context.** If the user has a CLAUDE.md, read it for
   company positioning, ICP definitions, and CRM context. If no CLAUDE.md
   exists, proceed with general research.
2. **Classify the meeting type.** Sales discovery, follow-up, partner call,
   informational chat, or board meeting. The type determines research depth.
3. **Research each attendee.** Search the web for their LinkedIn profile,
   recent posts, publications, and talks. Check the user's repository for
   any prior interactions or notes about this person.
4. **Research the company.** Company overview, recent news, funding,
   headcount, tech stack if relevant to the meeting type.
5. **Check for prior interactions.** Search the repository for any previous
   meeting notes, email threads, or CRM records involving these attendees.
6. **Compile the dossier** with these sections:
   - Executive summary (3 bullets: key things to know)
   - Attendee cards (one per person: role, background, connection points)
   - Company snapshot (if external meeting)
   - Prior interaction timeline (if any history exists)
   - Prepared questions (5-7, tailored to meeting type and objectives)
   - Talking points and potential landmines

## Output Format

Single markdown document. Scannable in 2-3 minutes. Executive summary at
the top, details below.

## Constraints

- Do NOT fabricate background information. Use [NOT_FOUND] markers for
  gaps rather than guessing.
- Evidence-tag all claims: [VERIFIED], [INFERRED], or [NOT_FOUND].
- Keep the dossier under 800 words unless the user requests more depth.
- For sales discovery meetings, include pain signals and qualification notes.
- For informational meetings, focus on connection points and conversation
  starters.

The pattern is identical: inputs, numbered steps, output format, constraints. The domain knowledge changes, the skeleton doesn't. Once you internalize this structure, every new skill takes 10 minutes because you're filling in a known framework, not designing from scratch.
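
That skeleton, distilled from both examples, is worth keeping as a fill-in template (the bracketed placeholders are mine):

```markdown
---
name: "your-skill-name"
description: "[What it produces and when to use it, under 200 characters.]"
---
# [Skill Title]

[One sentence: what this skill does.]

## Inputs

- **[Input 1]** (required): [what the user provides]
- **[Input 2]** (optional): [sensible default]

## Steps

1. **Load project context.** Read the user's CLAUDE.md if present.
2. **[Research / gather.]** [Where to look, what to collect.]
3. **[Generate the deliverable]** with these sections: [list them].
4. **Apply constraints.** [Tone, length, sourcing rules.]

## Output Format

[Document type, structure, and length.]

## Constraints

- Do NOT [failure mode you've already seen].
- Keep the output under [N] words.
```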

The meeting prep skill also shows how skills adapt based on context. Step 2 classifies the meeting type, and that classification changes everything downstream: a sales discovery gets pain signals and qualification data, while a coffee chat gets conversation starters. One skill, conditional depth — but not one skill trying to be two different skills. The meeting type routes within a single workflow.

One Skill, One Job

If you find yourself writing "If the user wants X, do this; if they want Y, do that" across fundamentally different workflows, you need two skills. A skill handling meeting prep AND deal research AND competitive intel will have a description so broad that it fires on every request, or so narrow that it misses most of them.

Conditional depth within a single workflow (like meeting type routing) is fine. Separate jobs get separate skills. Focused skills with clear descriptions compose far better than one skill trying to do everything.

Reference Anthropic's skill authoring best practices for the full sizing guide.


Testing Your Skill — The Build-Test-Fix Cycle

This is the part every tutorial skips, and it's where a rough skill becomes reliable.

Your First Test — And the Three Things That Usually Need Fixing

Invoke your skill: /content-brief "AI-driven sales enablement"

Watch what Claude does. Does it read your CLAUDE.md? Does it follow the steps in order? Does the output match your expected format?

The first run almost always reveals one of three problems:

  1. Too many steps. You wrote 20 when 5 would do. Claude follows all 20 and produces bloated, mechanical output. Fix: cut the list by half or more; it's a few minutes of deleting lines.

  2. Vague output format. You said "produce a brief" but didn't specify sections, length, or structure. Claude improvises, and its improvisation doesn't match your mental model. Fix: add an output template. Usually 5-6 lines.

  3. Missing context reference. The skill doesn't tell Claude to read your CLAUDE.md for voice or audience. The output is generic because the skill operates in a vacuum. Fix: add one line to Step 1. (Our example already handles this, but plenty of skills skip it.)

Three Versions, Ten Minutes

V1 works. It does roughly the right thing. You'll use it.

V2 is better. You tighten the output format and sharpen the description — 2-3 lines of edits, not a rewrite. Maybe you noticed the brief was running long, so you add "Keep the entire brief under 500 words" to the constraints. Two seconds of editing.

V3 is dialed in. You add a constraint you didn't anticipate until you saw the output: a max word count, a source quality requirement, a tone guardrail. Another 2-3 lines.

Total time for all three iterations: 10-15 minutes. The difference between versions is small — a tighter description here, an output section heading there. Anthropic's own guidance confirms it: "Most people get to a solid Skill within two or three iterations."

The Best Shortcut — Ask Claude to Critique Your Skill

This technique saves an entire iteration cycle.

Before you test, tell Claude: "Read .claude/skills/content-brief/SKILL.md and critique the instructions. What's ambiguous? What's missing? What would confuse you?"

Claude is good at identifying gaps in its own instructions. It flags vague steps, missing output specs, and description mismatches. Five to ten minutes of debugging replaced by a 30-second prompt. I use this on every new skill now — faster than running the skill, reading the output, diagnosing the problem, and editing.


Five Mistakes I Made Building 50 Skills (So You Don't Have To)

Each cost me real time. They're also the mistakes I see most when helping teams build their first skills.

1. Duplicating CLAUDE.md context inside the skill. Your CLAUDE.md already has your ICP, voice, and positioning. If you paste the same context into the skill, it goes stale when you update CLAUDE.md but forget the skill. Instead, write "Read the user's CLAUDE.md for [specific context]" in your skill instructions. Single source of truth. I had three skills with outdated ICP definitions before I caught this.

2. Trying to make one skill do everything. A skill handling meeting prep AND deal research AND competitive intel will have a description so broad it fires on every request, or so narrow it misses most. One skill, one job. Compose focused skills instead. My most-invoked skills are the most narrowly scoped.

3. Writing the skill as prose paragraphs instead of structured steps. Claude parses "1. Do this. 2. Then this. 3. Output this." far more reliably than "First you should consider doing this, and then perhaps moving on to..." Structure is instruction. Prose is suggestion. I rewrote six skills from prose to numbered steps and output quality changed immediately.

4. Skipping the description field. Without a description, Claude has no way to match your request to the skill. It's like filing a document without labeling the folder. The skill exists but can't be found. I've seen people build a 200-line skill with perfect instructions and no description, then wonder why it never fires.

5. Never iterating after the first version. The first SKILL.md is a hypothesis. The second version is the product. Testing isn't optional — it's the core workflow. My daily-driver skills have between 6 and 30 git commits each. The first version of my planning skill was a 40-line prompt that said "decompose this task into phases." After 30 commits, it's 862 lines with historical sizing calibration and cross-model review. Budget for iteration, not perfection on the first pass.


The Skill Creator Skill — A Meta-Skill That Builds Other Skills

Once you've built 5-10 skills manually, the creation process itself becomes a repeatable pattern: understand the use case, identify reusable resources, write the SKILL.md with the five-section structure, validate the YAML, iterate. That's a workflow, which makes it a candidate for a skill.

I have a skill called new-skill-creator — a skill that creates other skills. It sounds recursive, and it is, but it's one of the most useful things in my library.

When I build a skill manually, I hold the structural template in my head: does it have the right frontmatter fields, does the description fit under 200 characters, did I include constraints, is the output format specified? The skill creator encodes all of that: the YAML requirements, the directory structure convention (scripts/, references/, assets/), the progressive disclosure principle (keep SKILL.md lean, move reference material to subdirectories), and the common failure patterns I described earlier.

It also enforces patterns I'd otherwise forget. Every skill it creates includes a negative boundary section ("Do NOT use when..."), which matters when 20+ skills compete for attention. Without negative boundaries, overlapping skills cannibalize each other and Claude picks semi-randomly. The skill creator includes that section by default so I don't have to remember.

The workflow: I tell Claude "I need a skill that generates weekly pipeline reports from my CRM data," and the skill creator walks through the use case, identifies what reference files and scripts the skill needs, scaffolds the directory, and produces a complete SKILL.md. I review it, test it, do 2-3 rounds of edits, and I have a working skill in 15 minutes instead of 30. For teams building skill libraries, this halves ramp time because new contributors don't need to memorize the structure.

You don't need a meta-skill on day one. But around skill 5-10, when you find yourself copy-pasting the same YAML frontmatter and five-section structure into every new file, a skill creator pays for itself. Anthropic's engineering blog on skill design explains the design philosophy behind this composability — skills are plain text, so a skill that writes plain text is just another skill.


What Comes After Your First Skill

Build two more. The compound effect kicks in around skill 3-5, when they start sharing context from your CLAUDE.md and you see patterns across workflows. Your content brief skill reads your voice standards. Your meeting prep skill reads your ICP. They don't duplicate each other — they share a foundation.

Scaling note: Keep each SKILL.md under 500 lines. If you need more, move reference material to separate files in the skill directory (like references/templates.md or references/voice-standards.md) and reference them from SKILL.md. You won't hit this limit on your first skill, but you will on your tenth.
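
A quick way to spot oversized skills from the shell (assumes the standard .claude/skills/ layout):

```shell
# Line count per SKILL.md, largest first; anything near 500 is a refactor candidate
wc -l .claude/skills/*/SKILL.md | sort -rn
```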

Browse what others have built. The Anthropic community skills repository has hundreds of skills you can install, read, and learn from. Reading other people's skills is one of the fastest ways to pick up new patterns.

Read the deep-dive. This tutorial got you from zero to one. The 52-skill anatomy article covers what happens when you go from 1 to 50+ — which skills survive daily use, which collect dust, and the architectural patterns that separate toy skills from production systems.

Download the full guide. Anthropic published The Complete Guide to Building Skills for Claude — a PDF covering fundamentals, planning, testing, and distribution. Bookmark it as your reference.

Sharing with your team: Skills are files. Share them by sharing the file — Slack it, drop it in a shared folder, or commit it to a Git repo. No deployment pipeline. No IT ticket.

For teams at scale: If you're evaluating skill-based systems for a GTM team, the Knowledge OS packages 50+ production-tested skills into deployable libraries. But you don't need a product to get value — everything in this tutorial works with Claude Code alone.


Build One. This Thursday.

A Claude Code skill is a text file. Not code. Not configuration. Not a plugin that requires IT approval. A markdown file with a good description and clear instructions. The gap between "I have Claude Code" and "Claude Code works the way I work" is one SKILL.md file and 30 minutes.

Open your calendar. Find a task you'll do again Thursday. Build a skill for it now — create the folder, write the SKILL.md, test it once. By Thursday, you'll know if it's worth keeping. (It will be.)

The best AI system isn't the one with the smartest model. It's the one that remembers how you work. Skills are how you teach it.