AgentSkillsCN

test

Test review and generation. Modes: --review (default; checks test quality), --generate (creates tests for code). Scope: --staged (review only staged tests), --all (review all tests), or context-based review. Use for test quality and test creation.

SKILL.md
---
name: test
description: Test review and generation. Modes: --review (check test quality, default), --generate (create tests for code). Scope: --staged, --all, or context-based. Use for test quality and creation.
argument-hint: [--review | --generate <target>] [--staged | --all]
---

Test

Review and generate tests following consistent principles.

Usage

code
/test                              # Review tests related to current context (default)
/test --review                     # Explicit review mode
/test --review --staged            # Review staged test files
/test --review --all               # Review all tests (parallel agents)
/test --generate <target>          # Generate tests for file/module/feature
/test --generate --staged          # Generate tests for staged code changes

Testing Principles

Both review and generate modes follow these principles. Review checks conformance; generate applies them.

1. Test Behavior, Not Implementation

javascript
// BAD: Breaks on any refactor
expect(component.state.internalFlag).toBe(true);
expect(service._privateMethod).toHaveBeenCalled();

// GOOD: Test observable behavior
expect(screen.getByText('Welcome')).toBeVisible();
expect(result.status).toBe('success');

2. Mock Only External Boundaries

javascript
// BAD: Testing mocks, not real code
jest.mock('./database');
jest.mock('./auth');
jest.mock('./validator');
jest.mock('./logger');
// What's actually being tested?

// GOOD: Mock only external boundaries
jest.mock('./externalPaymentApi');

3. Meaningful Assertions

javascript
// BAD: Test always passes
test('user login', async () => {
  await loginUser('test@example.com');
  // No expect() - what are we testing?
});

// GOOD: Verify outcomes
test('user login', async () => {
  const result = await loginUser('test@example.com');
  expect(result.token).toBeDefined();
  expect(result.user.email).toBe('test@example.com');
});

4. No Brittle Timing

javascript
// BAD: Flaky - depends on timing
await doAsyncThing();
await new Promise(r => setTimeout(r, 100));
expect(result).toBe('done');

// GOOD: Wait for actual condition
await waitFor(() => expect(result).toBe('done'));

5. Independent Tests

javascript
// BAD: Tests depend on execution order
let sharedState;
test('first', () => { sharedState = setup(); });
test('second', () => { expect(sharedState.value).toBe(1); }); // Fails if run alone

// GOOD: Each test sets up its own state
test('first', () => { const state = setup(); /* ... */ });
test('second', () => { const state = setup(); expect(state.value).toBe(1); });

6. Cover Edge Cases

  • Empty inputs
  • Null/undefined
  • Boundary values (0, -1, MAX_INT)
  • Error conditions
  • Concurrent access
  • Unicode/special characters
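
For example, a sketch of edge-case coverage for a hypothetical parseAmount helper (the function and expected values are illustrative, not from any real project):

javascript
describe('parseAmount', () => {
  it('returns null for empty input', () => {
    expect(parseAmount('')).toBeNull();
  });

  it('returns null for null and undefined', () => {
    expect(parseAmount(null)).toBeNull();
    expect(parseAmount(undefined)).toBeNull();
  });

  it('handles boundary values', () => {
    expect(parseAmount('0')).toBe(0);
    expect(parseAmount('-1')).toBe(-1);
  });

  it('rejects non-ASCII digits', () => {
    expect(parseAmount('１００')).toBeNull(); // full-width digits
  });
});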

7. Focused Tests

javascript
// BAD: Tests too much, hard to debug failures
test('user flow', async () => {
  // 50 lines testing signup, login, profile, settings, logout
});

// GOOD: One concern per test
test('signup creates user', ...);
test('login sets session', ...);

8. Named Constants Over Magic Values

javascript
// BAD: What is 42? Why 'abc123'?
expect(calculate(42)).toBe(84);
expect(validate('abc123')).toBe(true);

// GOOD: Named constants explain intent
const INPUT_VALUE = 42;
const DOUBLED_VALUE = INPUT_VALUE * 2;
const VALID_USER_ID = 'user_12345';
expect(calculate(INPUT_VALUE)).toBe(DOUBLED_VALUE);
expect(validate(VALID_USER_ID)).toBe(true);

Review Mode (--review)

Default mode. Checks tests against the principles above.

Scope

| Flag | Scope | Method |
| --- | --- | --- |
| (none) | Context-related tests | Find tests for recently discussed code |
| --staged | Staged test files | git diff --cached --name-only -- '*test*' '*spec*' |
| --all | All test files | Glob **/*test*.{ts,js,py,dart} etc. |

Workflow

  1. Get file list based on scope
  2. Review (directly if ≤5 files, parallel sub-agents if more)
  3. Report findings by priority

Checklist

Principles:

  • Tests verify behavior, not implementation
  • Mocks limited to external boundaries
  • All tests have meaningful assertions
  • No brittle timing (setTimeout, sleep)
  • Tests are independent (no shared state)
  • Edge cases covered
  • Tests are focused (one concern each)

Flaky Patterns:

  • No unseeded Math.random()
  • No unmocked new Date()
  • No network calls to real services
  • No file system dependencies without cleanup
  • No environment variable assumptions
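
As one illustration, time-dependent code can be made deterministic with Jest fake timers; createOrder here is a hypothetical function assumed to stamp createdAt with new Date():

javascript
// BAD: assertion races the real clock and can flake
const order = createOrder();
expect(order.createdAt.getTime()).toBe(Date.now());

// GOOD: pin the clock so the result is deterministic (Jest 27+ modern fake timers)
jest.useFakeTimers();
jest.setSystemTime(new Date('2024-01-01T00:00:00Z'));
const pinnedOrder = createOrder();
expect(pinnedOrder.createdAt.toISOString()).toBe('2024-01-01T00:00:00.000Z');
jest.useRealTimers();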

Completeness (for code being reviewed):

  • All public functions/methods have tests
  • All exported components have tests
  • Error paths are tested, not just happy paths
  • Edge cases identified in code have corresponding tests

Pattern Conformance (for --staged/new tests):

  • File naming matches project convention
  • Test organization matches existing tests (suite grouping and test naming conventions per framework — see Terminology Mapping)
  • Setup/teardown patterns match existing tests (per-test and per-suite setup per framework)
  • Mocking approach consistent with project (framework-specific mocking or dependency injection)
  • Assertion style matches (expect vs assert, matchers used)
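
For instance, an assertion-style mismatch might look like this (selectors and matchers are illustrative):

javascript
// Existing tests (hypothetical project convention): testing-library queries + Jest matchers
expect(screen.getByRole('button', { name: 'Save' })).toBeEnabled();

// New staged test drifting from that convention: raw DOM access and a boolean comparison
expect(document.querySelector('.save-btn').disabled === false).toBe(true);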

Coverage Check

If a coverage script exists, run it to identify gaps:

bash
# Check for coverage scripts
grep -E "coverage|test:cov" package.json 2>/dev/null
cat pyproject.toml 2>/dev/null | grep -A5 "pytest"

Common coverage commands:

  • npm run test:coverage or npm run coverage
  • pytest --cov
  • go test -cover
  • flutter test --coverage

Report uncovered lines/branches for files in scope.

Output Format

markdown
## Test Review: {scope}

### Critical Issues
- {file}:{test} - {issue}

### Completeness Gaps
- {code_file}:{function} - no tests found
- {code_file}:{function} - missing test for error case
- {code_file}:{function} - missing test for edge case: {scenario}

### Coverage Report
(if coverage script available)
- Overall: {X}% statements, {Y}% branches
- Uncovered in scope:
  - {file}:{lines} - {description}

### Pattern Violations (--staged)
- {test_file} - setup pattern differs from existing tests
- {test_file} - mocking approach inconsistent with {example_file}

### Test Smells
- {file}:{test} - {smell}

### Suggestions
- {improvement}

Generate Mode (--generate)

Create tests for code following the principles above.

Red-Green Verification (for bug fixes)

When generating tests for a bug fix, verify the test actually catches the bug:

  1. Run the new test with the fix applied -- confirm PASS (green)
  2. Temporarily revert the fix
  3. Run the test again -- confirm FAIL (red)
  4. Re-apply the fix
  5. Run the test -- confirm PASS again (green)

Only claim the test is valid if it fails without the fix and passes with it. This prevents tests that pass for unrelated reasons.

Scope

| Flag | Scope | Method |
| --- | --- | --- |
| <target> | Specific file/function/module | Read the code, generate tests |
| --staged | Staged code changes | Generate tests for what changed |

Workflow

  1. Detect framework - Jest, pytest, go test, vitest, etc. from project
  2. Analyze existing test patterns - read 2-3 existing test files to learn:
    • File naming and location conventions
    • Describe/it structure and nesting style
    • Setup/teardown patterns (beforeEach, fixtures, factories)
    • Mocking approach (jest.mock, manual mocks, DI)
    • Assertion style and common matchers
    • Test data patterns (inline, fixtures, builders)
  3. Read the code - understand what needs testing
  4. Check existing tests - avoid duplicates, extend if needed
  5. Generate tests following both principles AND project patterns
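
For example, if pattern analysis finds beforeEach with factory helpers and mocks only at external boundaries, a generated test might look like this sketch (the module, factory, and error message are hypothetical):

javascript
jest.mock('./externalEmailApi'); // external boundary only, matching project convention

const { registerUser } = require('./registerUser');
const { buildUser } = require('../factories/user');

describe('registerUser', () => {
  let user;

  beforeEach(() => {
    user = buildUser({ email: 'new@example.com' }); // factory-based setup, like existing tests
  });

  it('creates an account for a new email', async () => {
    const result = await registerUser(user);
    expect(result.status).toBe('created');
  });

  it('rejects an already-registered email', async () => {
    await registerUser(user);
    await expect(registerUser(user)).rejects.toThrow('already registered');
  });
});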

Framework Detection

Check for:

  • jest.config.*, package.json with jest → Jest
  • vitest.config.* → Vitest
  • pytest.ini, pyproject.toml with pytest → pytest
  • *_test.go files → Go testing
  • *_test.dart files → Flutter test
  • .rspec, Gemfile with rspec → RSpec
  • Cargo.toml with [dev-dependencies] → Rust #[test]
  • *.test.tsx or *.spec.tsx with @testing-library/react in package.json → React Testing Library
  • phpunit.xml → PHPUnit

Framework Terminology Mapping

When reviewing or generating tests, translate concepts to the detected framework:

| Concept | Jest/Vitest | pytest | Go testing | Flutter test |
| --- | --- | --- | --- | --- |
| Test suite grouping | describe() | class or module | t.Run() subtests | group() |
| Individual test | it() / test() | def test_...() | func Test...(t *testing.T) | test() / testWidgets() |
| Setup (per-test) | beforeEach() | setup_method / fixture | t.Cleanup() or helper | setUp() |
| Setup (per-suite) | beforeAll() | setup_class / session fixture | TestMain() | setUpAll() |
| Mocking | jest.mock() | unittest.mock.patch | interface + stub struct | mockito package |
| Assertion | expect(x).toBe(y) | assert x == y | if got != want { t.Errorf() } | expect(x, equals(y)) |
| Async test | async/await | @pytest.mark.asyncio | t.Run with goroutines | async test + pump() |
| Skip test | it.skip() | @pytest.mark.skip | t.Skip() | skip() |

Use this table to adapt all examples and checklists below. Code examples in this skill use JavaScript/Jest syntax as a reference; translate idioms to the detected framework.

Test File Placement

Follow project conventions:

  • __tests__/ directory (common in JS/TS)
  • *.test.ts or *.spec.ts alongside source
  • test/ directory at project root
  • *_test.py alongside source or in tests/

What to Generate

The templates below use Jest syntax for illustration. Adapt structure and syntax to the detected framework using the Terminology Mapping table above. For example, in pytest use def test_returns_expected_result(): instead of it('returns expected result', ...).

For a function:

javascript
describe('{functionName}', () => {
  it('returns expected result for valid input', () => {
    // Happy path
  });

  it('handles empty input', () => {
    // Edge case
  });

  it('throws on invalid input', () => {
    // Error handling
  });

  it('handles boundary value', () => {
    // Edge case: 0, MAX, etc.
  });
});

For a component:

javascript
describe('{ComponentName}', () => {
  it('renders with required props', () => {
    // Happy path
  });

  it('responds to user interaction', () => {
    // User events
  });

  it('displays error state', () => {
    // Error handling
  });

  it('handles loading state', () => {
    // Async states
  });
});

For a service/API:

javascript
describe('{ServiceName}', () => {
  it('returns data on success', () => {
    // Happy path
  });

  it('handles errors gracefully', () => {
    // Error handling
  });

  it('validates input', () => {
    // Input validation
  });
});

For staged changes:

  1. Identify what changed (new function, modified behavior, etc.)
  2. Find or create relevant test file
  3. Generate tests for the changes
  4. Ensure edge cases are covered

Output

Generate test files directly, matching project patterns:

  • Place in location matching existing test file structure
  • Use same describe/it nesting style as other tests
  • Match setup/teardown patterns (beforeEach, fixtures, etc.)
  • Use same mocking approach as existing tests
  • Match assertion style and matchers
  • Use consistent test data patterns (inline, fixtures, builders)
  • Add brief comments for non-obvious test cases

Before generating, show the patterns found:

markdown
## Detected Test Patterns

**Location:** `__tests__/` alongside source
**Structure:** `describe` per class/module, `it` per behavior
**Setup:** `beforeEach` with factory functions
**Mocking:** jest.mock for external, DI for internal
**Assertions:** jest matchers, testing-library queries

Generating tests following these patterns...

Examples

Generate tests for a payment function:

/test --generate lib/services/payment.ts

Detects the project's test framework and patterns, then generates a test file covering happy path (successful charge), error handling (declined card, network failure), and edge cases (zero amount, currency mismatch). Places the file following existing test conventions.

Review staged tests to catch over-mocking:

/test --review --staged

Reviews staged test files against the testing principles. Flags tests that mock internal modules instead of only external boundaries, and identifies tests with no meaningful assertions that would pass regardless of behavior.

Troubleshooting

Generated tests fail immediately on first run

Solution: Verify the correct test framework was detected by checking the "Detected Test Patterns" output. If imports or setup are wrong, point --generate at an existing passing test file so the generator can match its patterns exactly.

Cannot detect the project's test framework

Solution: Ensure a framework config file exists (jest.config.*, vitest.config.*, pytest.ini, or pyproject.toml with pytest section). If the project uses a non-standard setup, run /test --generate <target> and specify the framework in your prompt.

Notes

  • Default is review mode with context-based scope
  • Both modes use the same principles - review checks, generate applies
  • Use --staged before commits to catch issues or generate missing tests
  • Use --all periodically for comprehensive review
  • Sub-agents parallelize large reviews/generations
  • Prefer integration tests over unit tests with heavy mocking