10 AI Skills Every Developer Should Have in 2026 (With SKILL.md Templates)

Code review, test generation, documentation, debugging - the 10 highest-ROI skills for AI coding assistants. Each one comes with a copy-paste SKILL.md template you can use today.

14 min read · templates · skills · productivity

Most developers using AI coding tools have zero custom skills. They rely on the model's default behavior, which is generic by design. A code review from a vanilla AI reads like a textbook - technically correct but missing your team's context, your project's patterns, and your personal standards.

These 10 skills form a complete developer toolkit. Each one comes with a full SKILL.md template you can copy, a rationale for why it matters, and a pro tip earned from real usage. Start with the ones that match your biggest pain points, then add the rest over time.

1. Code Review

A code review skill transforms your AI from a generic “looks good to me” reviewer into a structured auditor that checks what you care about. Without it, the AI comments on formatting and misses SQL injection. With it, the AI follows your review checklist in order, references specific line numbers, and skips the things your linter already handles.

The key to a great review skill is the priority order. Most code reviews should check correctness first (does it work?), then security (is it safe?), then maintainability (can the next person understand it?). Formatting and style come last - if at all - because that's what automated tools handle better.

SKILL.md
````markdown
---
name: code-review
description: Structured code review with severity levels and line references
version: 1.0.0
tags: [review, quality, security]
---

# Code Review

Perform a structured review focusing on correctness, security, and maintainability.

## Steps

1. **Read the entire file/diff** before making any comments
2. **Correctness**: null handling, off-by-one errors, race conditions, unhandled exceptions
3. **Security**: injection, XSS, auth bypasses, hardcoded secrets, unsafe deserialization
4. **Maintainability**: naming clarity, function length (>30 lines = flag), duplication
5. **Format each issue as**: [Severity] (Line N): Title - explanation + fix

## Examples

### Example review:
**[Bug] (Line 42): Null pointer** - `user.getName()` throws if `findUser()` returns null.
Fix: `String name = user != null ? user.getName() : "Unknown";`

**[Security] (Line 18): SQL injection** - User input concatenated into query string.
Fix: Use parameterized query: `db.query("SELECT * FROM users WHERE id = ?", [id])`

## Triggers
- "review this code"
- "check for bugs"
- "what's wrong with this"
- "security review"

## Don't
- Don't comment on formatting (that's what linters handle)
- Don't suggest complete rewrites unless asked
- Don't flag style preferences as bugs
````

Pro tip: Add your project's specific anti-patterns to the Steps section. If your codebase has a known problem with unchecked API responses, add “Check that all fetch/axios calls have error handling” as step 2.5.

2. Test Writer

Without a test skill, AI writes tests that pass but don't actually test anything meaningful. You get tests that check `expect(result).toBeDefined()` instead of verifying the actual return value. A good test skill specifies your framework, your naming convention, and most importantly, the types of assertions that matter.

The test writer skill should encode your team's testing philosophy. Do you prefer integration tests over unit tests? Do you use test factories or inline fixtures? Do you mock external services or use test containers? These decisions should be in the skill, not left to the AI's default behavior.

SKILL.md
````markdown
---
name: test-writer
description: Generates comprehensive tests with edge cases and meaningful assertions
version: 1.0.0
tags: [testing, quality, tdd]
---

# Test Writer

Write thorough tests that verify behavior, not just existence.

## Steps

1. **Analyze the function/component** - identify inputs, outputs, side effects, edge cases
2. **Write the happy path test first** - the most common successful usage
3. **Write edge cases**: empty input, null, boundary values, max length, negative numbers
4. **Write error cases**: invalid input, network failure, timeout, permission denied
5. **Use descriptive test names**: "should return empty array when user has no orders"
6. **One assertion per test** (or closely related assertions)

## Examples

### Given this function:
```typescript
function calculateDiscount(price: number, tier: "gold" | "silver" | "bronze"): number
```

### Expected tests:
```typescript
describe("calculateDiscount", () => {
  it("should apply 20% discount for gold tier", () => {
    expect(calculateDiscount(100, "gold")).toBe(80);
  });

  it("should apply 10% discount for silver tier", () => {
    expect(calculateDiscount(100, "silver")).toBe(90);
  });

  it("should return 0 when price is 0", () => {
    expect(calculateDiscount(0, "gold")).toBe(0);
  });

  it("should handle negative prices by returning 0", () => {
    expect(calculateDiscount(-50, "gold")).toBe(0);
  });
});
```

## Triggers
- "write tests for"
- "add test coverage"
- "test this function"
- "create unit tests"

## Don't
- Don't write tests that only check `toBeDefined()` or `toBeTruthy()`
- Don't mock everything - only mock external dependencies
- Don't test implementation details (private methods, internal state)
````

Pro tip: Include your project's test utility imports in an example. If you have a `renderWithProviders()` helper or a `createMockUser()` factory, show the AI how to use them.
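To make that pro tip concrete, here is a minimal sketch of such a factory. The `createMockUser()` name and the `User` shape are hypothetical, not from any real project - the point is the pattern of sensible defaults plus targeted overrides:

```typescript
// Hypothetical test factory: every field has a default, and a test
// overrides only the field it actually cares about.
interface User {
  id: string;
  name: string;
  email: string;
  tier: "gold" | "silver" | "bronze";
}

function createMockUser(overrides: Partial<User> = {}): User {
  return {
    id: "usr_test_1",
    name: "Test User",
    email: "test@example.com",
    tier: "bronze",
    ...overrides, // fields passed in win over the defaults
  };
}

// Usage in a test: only the field under test is spelled out.
const goldUser = createMockUser({ tier: "gold" });
```

Showing the AI one call like `createMockUser({ tier: "gold" })` is usually enough for it to reuse the factory instead of hand-building fixture objects in every test.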

3. Documentation Generator

AI-generated documentation is either too verbose (restating the code in English) or too sparse (just the function signature). A documentation skill fixes this by specifying exactly what to document: the “why” not the “what,” usage examples not parameter lists, edge cases not obvious behavior.

The best documentation skills also encode your doc format - JSDoc, TSDoc, Sphinx, Javadoc - and your conventions around documenting internal vs. public APIs. This prevents the AI from adding JSDoc to every private helper function when you only want public API docs.

SKILL.md
````markdown
---
name: documentation-generator
description: Generates practical documentation focused on usage and edge cases
version: 1.0.0
tags: [docs, documentation, jsdoc]
---

# Documentation Generator

Write documentation that helps the next developer USE the code, not just read it.

## Steps

1. **Identify the audience** - is this a public API (external devs) or internal code (team)?
2. **Write a one-line summary** - what does this do in plain English?
3. **Add a usage example** - show the most common way to call it
4. **Document edge cases** - what happens with null input? Empty arrays? Large datasets?
5. **Note return values** - especially when they can be null or throw
6. **Skip the obvious** - don't document getters, setters, or self-explanatory code

## Examples

### Bad documentation:
```typescript
/** Gets the user name. @param id The user id. @returns The user name. */
function getUserName(id: string): string
```

### Good documentation:
```typescript
/**
 * Fetches the display name for a user by ID.
 * Returns "Unknown User" if the ID doesn't exist in the database.
 * Throws `AuthError` if the current session lacks read permissions.
 *
 * @example
 * const name = await getUserName("usr_123");
 * // => "Jane Smith"
 */
function getUserName(id: string): Promise<string>
```

## Triggers
- "document this"
- "add docs"
- "write documentation for"
- "add JSDoc"

## Don't
- Don't restate the function name as the description
- Don't document private/internal functions unless asked
- Don't add @param tags that just repeat the parameter name
````

Pro tip: Add a rule about when NOT to document. “Skip documentation for functions under 5 lines with self-explanatory names” prevents the AI from cluttering simple code with redundant comments.

4. Bug Fixer

When you paste an error into your AI tool, the default behavior is to guess at a fix immediately. A bug fixer skill forces a diagnostic approach: read the error, trace the cause, identify the root issue (not just the symptom), then propose a fix with an explanation. This prevents the AI from “fixing” a NullPointerException by adding a null check when the real bug is that the data was never loaded.

The diagnostic step is crucial. Without it, AI fixes are often band-aids that move the crash from one line to another. With a structured diagnostic, the AI traces the data flow backward from the error to find where things actually went wrong.

SKILL.md
````markdown
---
name: bug-fixer
description: Diagnoses bugs by tracing root cause before proposing fixes
version: 1.0.0
tags: [debugging, bugs, diagnosis]
---

# Bug Fixer

Diagnose before fixing. Find the root cause, not just the symptom.

## Steps

1. **Read the error message** - extract the exact error type, message, and stack trace
2. **Trace the data flow** - follow the variable/value from its origin to the crash point
3. **Identify the root cause** - explain WHY the bug happens, not just WHERE
4. **Propose a fix** - show the code change with before/after
5. **Verify the fix** - explain what test would confirm this fix works
6. **Check for siblings** - are there similar patterns elsewhere that have the same bug?

## Examples

### User says: "Getting TypeError: Cannot read properties of undefined"
### Expected response:
**Root cause:** `fetchUser()` returns `undefined` when the API returns 404,
but the caller assumes it always returns a User object.

**Fix:**
```typescript
// Before (crashes)
const name = fetchUser(id).name;

// After (handles missing user)
const user = fetchUser(id);
if (!user) throw new NotFoundError(`User ${id} not found`);
const name = user.name;
```

**Verify:** Add test for fetchUser with non-existent ID.

## Triggers
- "fix this bug"
- "why is this crashing"
- "debug this"
- "getting an error"

## Don't
- Don't propose a fix before explaining the root cause
- Don't just add a try/catch without fixing the underlying issue
- Don't suggest "have you tried restarting?"
````

Pro tip: Add step 6 (“Check for siblings”) to your bug fixer skill. The same pattern that caused one bug often exists in 3-4 other places. AI is great at pattern-matching across a codebase once you tell it to look.

5. Refactoring Assistant

Refactoring without a skill produces chaotic results: the AI rewrites everything at once, changes function signatures without updating callers, or “improves” code by making it more abstract but less readable. A good refactoring skill enforces discipline - small, behavior-preserving changes with explicit before/after comparisons.

The most important rule in any refactoring skill is that tests must pass before and after. If the AI can't articulate what test would prove the refactoring didn't break anything, the refactoring isn't safe.

SKILL.md
````markdown
---
name: refactoring-assistant
description: Safe, incremental refactoring with behavior preservation guarantees
version: 1.0.0
tags: [refactoring, clean-code, maintenance]
---

# Refactoring Assistant

Refactor code in small, safe, testable steps. Never change behavior.

## Steps

1. **Identify the smell** - name the specific code smell (duplication, long method, feature envy, etc.)
2. **State the goal** - what should the code look like after refactoring?
3. **Check test coverage** - are there tests that verify current behavior? If not, write them first.
4. **Make one change at a time** - extract method, rename variable, move function - one per step
5. **Show before/after** - for each step, show the old code and new code
6. **Verify** - confirm tests still pass after each step

## Triggers
- "refactor this"
- "clean up this code"
- "this code is messy"
- "simplify this function"
- "extract method"

## Don't
- Don't change behavior while refactoring
- Don't refactor and add features simultaneously
- Don't make the code more abstract just for abstraction's sake
- Don't rename public API methods without a migration plan
````

Pro tip: Add a “maximum blast radius” rule: “Each refactoring step should touch no more than 3 files. If it touches more, break it into smaller steps.” This keeps AI refactoring manageable.
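One "make one change at a time" step looks like this in practice. This is a hypothetical extract-method sketch (the function names are illustrative); the point is that the before and after versions must return identical results for every input:

```typescript
// Before: validation logic buried inline in the handler.
function registerUserBefore(email: string): string {
  if (!email.includes("@") || email.length < 5) {
    return "invalid";
  }
  return "registered";
}

// After one extract-method step: the condition pulled into a named
// predicate. Behavior is unchanged; only the structure improved.
function isValidEmail(email: string): boolean {
  return email.includes("@") && email.length >= 5;
}

function registerUserAfter(email: string): string {
  if (!isValidEmail(email)) {
    return "invalid";
  }
  return "registered";
}
```

A behavior-preserving step like this is trivially checkable: any existing test that passed against the "before" version must pass unchanged against the "after" version.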

6. Git Commit Crafter

AI-generated commit messages are famously bad: “Update code” or “Fix stuff” or a 200-word essay about a one-line change. A commit skill encodes your team's format (Conventional Commits, gitmoji, or your own standard), the right level of detail, and rules about when to split commits.

The underrated part of this skill is the “when to split” guidance. A diff that touches 5 files with 3 different purposes should be 3 commits, not 1. Teaching the AI to recognize this saves you from archaeological git blame sessions later.

SKILL.md
````markdown
---
name: git-commit-crafter
description: Generates Conventional Commits with appropriate scope and detail
version: 1.0.0
tags: [git, commits, workflow]
---

# Git Commit Crafter

Write clear, conventional commit messages from staged changes.

## Steps

1. **Read the full diff** - understand ALL changes before writing
2. **Classify the change** - feat, fix, refactor, docs, test, chore, perf, ci
3. **Identify scope** - which module/component is affected (auth, api, ui, db)
4. **Write the subject** - imperative mood, under 72 chars, no period
5. **Add body if needed** - explain WHY, not WHAT (the diff shows the what)
6. **Flag if split needed** - if the diff has multiple purposes, suggest splitting

## Examples

### For a diff that adds input validation:
```
feat(auth): add email format validation to signup form

Prevents invalid emails from reaching the API, which was returning
500 errors instead of helpful validation messages.
```

### For a diff that should be split:
"This diff contains 3 separate changes. Suggest splitting:
1. fix(api): handle null response from payment provider
2. test(api): add integration tests for payment flow
3. chore: update stripe SDK to v14.2"

## Triggers
- "write a commit message"
- "commit this"
- "what should the commit say"

## Don't
- Don't write "Update" or "Fix" without specifics
- Don't exceed 72 characters in the subject line
- Don't put the body content in the subject
````

Pro tip: Add a list of your project's scopes to the skill. Instead of letting the AI guess, list them: “Valid scopes for this project: auth, api, ui, db, config, ci, deps.”

7. API Designer

Without an API design skill, AI creates endpoints that are inconsistent: one returns `{ data: [...] }` and another returns the array directly. One uses snake_case and another uses camelCase. Error responses vary wildly. An API design skill enforces your conventions across every endpoint the AI creates.

The most valuable part of this skill is the error response format. Consistent error handling across an API is what separates a professional API from a prototype. When every endpoint returns errors in the same shape, frontend developers can write a single error handler instead of special-casing each endpoint.

SKILL.md
````markdown
---
name: api-designer
description: Designs consistent REST APIs with standardized responses and error handling
version: 1.0.0
tags: [api, rest, design]
---

# API Designer

Design consistent, predictable REST endpoints.

## Steps

1. **Name the resource** - plural nouns (/users, /orders, not /getUser, /createOrder)
2. **Choose the method** - GET (read), POST (create), PUT (full update), PATCH (partial), DELETE
3. **Define the response envelope**: `{ data, meta, errors }`
4. **Design error responses** - consistent shape: `{ error: { code, message, details } }`
5. **Add pagination** for list endpoints: `{ data: [...], meta: { page, perPage, total } }`
6. **Document status codes** - 200 (ok), 201 (created), 400 (bad request), 404 (not found), 422 (validation)

## Examples

### Endpoint design:
```
GET /api/v1/users → list users (paginated)
GET /api/v1/users/:id → get single user
POST /api/v1/users → create user
PATCH /api/v1/users/:id → update user fields
DELETE /api/v1/users/:id → delete user

Error response (always this shape):
{ "error": { "code": "VALIDATION_ERROR", "message": "Email is required", "details": [{ "field": "email" }] } }
```

## Triggers
- "design an API"
- "create an endpoint"
- "REST API for"
- "what should this endpoint look like"

## Don't
- Don't use verbs in URL paths (/getUsers → /users)
- Don't return different response shapes for different endpoints
- Don't use 200 for everything - use appropriate status codes
````

Pro tip: Include your actual response envelope in the skill. If your project wraps everything in { success: true, data: ... }, show that exact format. The AI will replicate it precisely.
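To see why the consistent error shape pays off, here is a sketch of the single frontend error handler it enables. The `unwrap` helper and the type names are hypothetical; the assumed envelope is the `{ error: { code, message, details } }` shape from the template:

```typescript
// Assumed error shape: every endpoint returns errors the same way.
interface ApiError {
  code: string;
  message: string;
  details?: Array<{ field: string }>;
}

interface ApiResponse<T> {
  data?: T;
  error?: ApiError;
}

// One handler for every endpoint: no per-endpoint special-casing.
function unwrap<T>(response: ApiResponse<T>): T {
  if (response.error) {
    // Surface the server's code and message uniformly.
    throw new Error(`${response.error.code}: ${response.error.message}`);
  }
  if (response.data === undefined) {
    throw new Error("MALFORMED_RESPONSE: missing data");
  }
  return response.data;
}
```

With a uniform envelope, every fetch call in the frontend can run through `unwrap` and get identical error behavior; the moment one endpoint deviates from the shape, that guarantee is gone.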

8. Security Auditor

Most developers aren't security experts, but their AI can be. A security auditor skill turns your AI into a focused vulnerability scanner that checks for the OWASP Top 10, common authentication mistakes, and secrets accidentally committed to code. It's not a replacement for professional penetration testing, but it catches the low-hanging fruit behind a large share of real-world breaches.

The skill should prioritize findings by severity. A hardcoded API key is critical. A missing rate limiter is medium. An overly permissive CORS policy is low. Without severity levels, the AI dumps 20 findings of equal weight and the developer ignores all of them.

SKILL.md
````markdown
---
name: security-auditor
description: Scans code for OWASP Top 10 vulnerabilities and common security mistakes
version: 1.0.0
tags: [security, audit, owasp]
---

# Security Auditor

Audit code for security vulnerabilities with severity-ranked findings.

## Steps

1. **Scan for secrets** - API keys, passwords, tokens, connection strings in code or config
2. **Check injection points** - SQL, NoSQL, command injection, XSS, template injection
3. **Verify authentication** - session management, token validation, password hashing
4. **Check authorization** - are there endpoints missing auth middleware? IDOR vulnerabilities?
5. **Review data exposure** - are sensitive fields (password, SSN) in API responses?
6. **Rate each finding** - Critical / High / Medium / Low with OWASP category reference

## Examples

### Example finding:
**[Critical] Hardcoded secret (Line 5)** - AWS access key in source code.
OWASP: A07:2021 - Identification and Authentication Failures
Fix: Move to environment variable, add to .gitignore, rotate the key immediately.

**[High] SQL injection (Line 23)** - User input in string-concatenated query.
OWASP: A03:2021 - Injection
Fix: Use parameterized queries.

## Triggers
- "security audit"
- "check for vulnerabilities"
- "is this secure"
- "find security issues"

## Don't
- Don't flag theoretical issues that require physical access to exploit
- Don't suggest security theater (e.g., obfuscating client-side code)
- Don't recommend deprecated crypto algorithms as fixes
````

Pro tip: Add a "secrets patterns" section listing the regex patterns for secrets in your stack: `sk_live_*` for Stripe, `AKIA*` for AWS, `ghp_*` for GitHub tokens.
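A minimal sketch of that pattern list as runnable code - the regexes here are approximations of the real token formats, and `findSecrets` is a hypothetical helper, not part of any scanner library:

```typescript
// Approximate patterns for common secret formats. Real scanners
// (gitleaks, trufflehog, etc.) use stricter, vendor-maintained rules.
const SECRET_PATTERNS: Record<string, RegExp> = {
  stripe_live_key: /sk_live_[A-Za-z0-9]{10,}/, // Stripe live secret key
  aws_access_key: /AKIA[A-Z0-9]{16}/,          // AWS access key ID
  github_token: /ghp_[A-Za-z0-9]{36}/,         // GitHub personal access token
};

// Returns the names of every pattern that matches somewhere in the source.
function findSecrets(source: string): string[] {
  return Object.entries(SECRET_PATTERNS)
    .filter(([, pattern]) => pattern.test(source))
    .map(([name]) => name);
}
```

Embedding even a rough table like this in the skill gives the AI concrete strings to grep for, instead of relying on its general sense of what a secret looks like.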

9. Performance Optimizer

Performance optimization is where AI shines - it can spot N+1 queries, unnecessary re-renders, and O(n²) algorithms that a tired developer misses. But without a skill, the AI optimizes prematurely: suggesting memoization on a component that renders once, or adding caching to a function that runs once per deploy.

A good performance skill asks “is this on a hot path?” before optimizing. It measures before suggesting changes. And it focuses on the 3 things that actually matter in most web applications: database queries, bundle size, and rendering performance.

SKILL.md
````markdown
---
name: performance-optimizer
description: Identifies performance bottlenecks with impact-first prioritization
version: 1.0.0
tags: [performance, optimization, profiling]
---

# Performance Optimizer

Find and fix performance issues, starting with the highest-impact ones.

## Steps

1. **Identify the hot path** - is this code called once or 10,000 times?
2. **Check database queries** - N+1 problems, missing indexes, unneeded columns (SELECT *)
3. **Check rendering** - unnecessary re-renders, missing keys, heavy computations in render
4. **Check bundle** - large imports, missing tree-shaking, uncompressed assets
5. **Estimate impact** - "saves ~200ms per request" or "reduces bundle by ~50KB"
6. **Show the fix** - before/after with expected improvement

## Examples

### Finding:
**[High Impact] N+1 query in /api/orders** - fetching user for each order in a loop.
Currently: 1 query for orders + N queries for users = 101 queries for 100 orders.
Fix: Use JOIN or batch query: `WHERE id IN (...)`
Impact: ~950ms → ~50ms for 100 orders.

## Triggers
- "optimize this"
- "why is this slow"
- "performance review"
- "speed up"

## Don't
- Don't optimize code that runs once at startup
- Don't suggest memoization without proving re-computation happens
- Don't sacrifice readability for micro-optimizations
````

Pro tip: Add your stack's specific performance patterns. For Next.js: check for missing dynamic imports, unused `"use client"` directives, and images without `next/image`.
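The N+1 finding in the template above can be made concrete with a toy sketch. The `ToyDb` class is hypothetical and synchronous (real database calls would be async); it just counts round trips to show why batching wins:

```typescript
interface Order { id: string; userId: string; }
interface User { id: string; name: string; }

// Toy in-memory "database" that counts queries (round trips).
class ToyDb {
  queryCount = 0;
  constructor(private users: User[], private ordersList: Order[]) {}
  orders(): Order[] { this.queryCount++; return this.ordersList; }
  userById(id: string): User | undefined {
    this.queryCount++; // one round trip per call: this is the N in N+1
    return this.users.find((u) => u.id === id);
  }
  usersByIds(ids: string[]): User[] {
    this.queryCount++; // one round trip total, like WHERE id IN (...)
    return this.users.filter((u) => ids.includes(u.id));
  }
}

// N+1 version: 1 query for orders, then one query per order.
function ordersWithUsersNaive(db: ToyDb) {
  return db.orders().map((o) => ({ ...o, user: db.userById(o.userId) }));
}

// Batched version: always 2 queries, regardless of order count.
function ordersWithUsersBatched(db: ToyDb) {
  const orders = db.orders();
  const ids = Array.from(new Set(orders.map((o) => o.userId)));
  const byId = new Map(db.usersByIds(ids).map((u) => [u.id, u] as const));
  return orders.map((o) => ({ ...o, user: byId.get(o.userId) }));
}
```

For 100 orders the naive version issues 101 queries and the batched version issues 2 - the same shape of improvement the template's example finding estimates.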

10. Technical Writer

Different from the Documentation Generator (which handles inline code docs), the Technical Writer skill produces standalone documents: README files, architecture decision records (ADRs), API guides, migration guides, and runbooks. These documents require a different structure and voice than code comments.

The key difference is audience awareness. Code docs are for developers reading source code. Technical documents are for people who might never look at the code - they need context, motivation, and step-by-step instructions. A good technical writer skill encodes your team's document templates and tone.

SKILL.md
````markdown
---
name: technical-writer
description: Creates structured technical documents with audience-appropriate language
version: 1.0.0
tags: [writing, docs, technical, readme]
---

# Technical Writer

Write technical documents that non-authors can follow without help.

## Steps

1. **Identify the audience** - developers, PMs, ops, new hires?
2. **Start with the problem** - why does this document exist? What question does it answer?
3. **Structure with headings** - Overview, Prerequisites, Steps, Troubleshooting
4. **Include prerequisites** - what must be installed, configured, or understood first?
5. **Write reproducible steps** - someone else should get the same result following your steps
6. **Add a troubleshooting section** - 3-5 common issues and how to resolve them

## Examples

### Template for ADR (Architecture Decision Record):
```markdown
# ADR-001: Use PostgreSQL over MongoDB

## Status: Accepted
## Date: 2026-01-15
## Context: We need a database for the orders service.
## Decision: PostgreSQL, because our data is relational and we need ACID transactions.
## Consequences: Need to manage migrations. Team needs SQL knowledge.
```

## Triggers
- "write a README"
- "document this architecture"
- "create an ADR"
- "write a migration guide"
- "runbook for"

## Don't
- Don't use jargon without defining it (unless audience is senior devs)
- Don't skip prerequisites - the reader may be a new team member
- Don't write walls of text - use headings, lists, and code blocks
````

Pro tip: Include a link to your team's doc template directory in the skill. “For READMEs, follow the template at docs/templates/README-template.md.” The AI will read the template and follow its structure.

Getting Started

Don't install all 10 at once. Pick the 3 that match your biggest daily pain points. For most developers, that's Code Review, Test Writer, and Git Commit Crafter. Use them for a week, iterate on the instructions based on where the AI deviates from what you want, then add the next 3.

Every skill template above scores at least 4/5 on the quality rubric (Clarity, Specificity, Structure, Completeness, Actionability). To hit 5/5, customize them: add your project's specific patterns, your team's naming conventions, and 2-3 examples from your actual codebase. A generic skill is good. A customized skill is transformative.

Manage your skills with Praxl

Edit once, deployed to every AI tool. Version history, AI review, team sharing.

Try Praxl free