Alex Effort Estimation

Estimate task duration from an AI-assisted development perspective rather than traditional human developer estimates.

Activation Triggers

  • "estimate effort", "how long will this take"
  • "alex time", "AI effort"
  • Planning tasks, reviewing roadmaps
  • Creating work estimates

Why Alex Effort ≠ Human Effort

| Factor | Human Developer | Alex-Assisted |
|--------|-----------------|---------------|
| Research | Hours browsing docs/SO | Minutes with semantic search |
| Boilerplate | Type it out | Generated instantly |
| Multi-file edits | Context switching overhead | Parallel in one pass |
| Code review | Read, context-build, comment | Instant pattern recognition |
| Testing | Same | Same (real execution time) |
| Debugging | Print statements, breakpoints | Pattern matching + bisect |
| Learning curve | Days/weeks | Minutes (bootstrap learning) |
| Breaks/fatigue | Required | N/A |
| Approval cycles | N/A | Required (human-in-loop) |

Alex Effort Units

| Unit | Meaning | Typical Tasks |
|------|---------|---------------|
| ⚡ Instant | < 5 min | Single file edit, quick lookup, code generation |
| 🔄 Short | 5-30 min | Multi-file refactor, documentation, skill creation |
| ⏱️ Medium | 30-60 min | Feature implementation, test suite, complex debugging |
| 📦 Session | 1-2 hours | Major feature, architecture change, release process |
| 🗓️ Multi-session | 2+ hours | Large refactor, new system, research + implementation |

Estimation Formula

```
Alex Effort = (Core Work × 0.3) + (Testing × 1.0) + (Approval Cycles × Human Response Time)
```
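The formula can be sketched in Python. Function name and the minute-based units are illustrative, not part of the skill:

```python
def alex_effort_minutes(core_work_min, testing_min,
                        approval_cycles, human_response_min):
    """Alex Effort = (Core Work × 0.3) + (Testing × 1.0)
                     + (Approval Cycles × Human Response Time).

    Core work is discounted to 30% of the human estimate; testing and
    approval cycles run at full wall-clock speed.
    """
    return (core_work_min * 0.3) + (testing_min * 1.0) \
        + (approval_cycles * human_response_min)

# 120 min core work, 30 min of tests, 2 approvals at 10 min each:
# 120*0.3 + 30 + 2*10 = 86 minutes
print(alex_effort_minutes(120, 30, 2, 10))
```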

Multipliers by Task Type

| Task Type | Human Estimate | Alex Multiplier | Alex Effort |
|-----------|----------------|-----------------|-------------|
| Documentation | 2h | ×0.2 | 🔄 25 min |
| Code generation | 4h | ×0.15 | 🔄 35 min |
| Refactoring | 4h | ×0.25 | ⏱️ 1h |
| Research | 8h | ×0.1 | ⏱️ 45 min |
| Bug fix (known) | 2h | ×0.3 | 🔄 35 min |
| Bug fix (unknown) | 4h | ×0.5 | ⏱️ 2h |
| Test writing | 4h | ×0.4 | ⏱️ 1.5h |
| Test execution | 1h | ×1.0 | ⏱️ 1h |
| Architecture design | 8h | ×0.3 | ⏱️ 2.5h |
| New feature (small) | 4h | ×0.25 | ⏱️ 1h |
| New feature (medium) | 2d | ×0.2 | 📦 3h |
| New feature (large) | 1w | ×0.15 | 🗓️ 6h |
| Release process | 4h | ×0.3 | 📦 1.2h |
| Skill creation | 2h | ×0.2 | 🔄 25 min |
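Converting a human estimate to an Alex estimate is a multiplier lookup plus a bucket into the effort units defined earlier. A minimal sketch (the dictionary holds an illustrative subset of the multipliers; exact minute values in the table above include rounding):

```python
# Illustrative subset of the multiplier table.
MULTIPLIERS = {
    "documentation": 0.2,
    "code generation": 0.15,
    "refactoring": 0.25,
    "research": 0.1,
    "test execution": 1.0,  # bottleneck: cannot be accelerated
}

def to_alex_minutes(task_type, human_hours):
    """Apply the task-type multiplier to a human estimate in hours."""
    return human_hours * 60 * MULTIPLIERS[task_type]

def unit(minutes):
    """Map minutes onto the Alex effort units (⚡🔄⏱️📦🗓️)."""
    if minutes < 5:
        return "⚡ Instant"
    if minutes <= 30:
        return "🔄 Short"
    if minutes <= 60:
        return "⏱️ Medium"
    if minutes <= 120:
        return "📦 Session"
    return "🗓️ Multi-session"

# Documentation with a 2h human estimate:
m = to_alex_minutes("documentation", 2)   # 24 minutes
print(unit(m))                            # 🔄 Short
```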

Bottlenecks (Cannot Accelerate)

These take real time regardless of AI assistance:

  1. Build/compile time - Hardware bound
  2. Test execution - Must actually run
  3. Human approval - User response latency
  4. External APIs - Network/service latency
  5. Deployment - CI/CD pipeline time
  6. Learning user preferences - Requires interaction

Estimation Template

When estimating tasks, use this format:

```markdown
| Task | Human Est. | Alex Est. | Bottleneck |
|------|------------|-----------|------------|
| [Task name] | [X hours/days] | [⚡🔄⏱️📦🗓️ + time] | [None/Build/Test/Approval] |
```

Example: v4.2.5 Retrospective

| Task | Human Est. | Actual Alex | Bottleneck |
|------|------------|-------------|------------|
| Update engine to 1.109 | 30m | ⚡ 5 min | None |
| Consolidate 9→3 agents | 4h | 🔄 20 min | None |
| Create 6 slash commands | 2h | 🔄 15 min | None |
| Refactor dream to shared | 4h | ⏱️ 45 min | Testing |
| Test all features | 2h | ⏱️ 1h | Human testing |
| Release process | 4h | 📦 1h | CI/approval |
| **Total** | **16.5h** | **📦 2.5h** | - |

Acceleration factor: 6.6×

Calibrated from 62-Project Analysis

What Accelerates Well (4-10×)

| Task Type | Human | Alex | Factor | Evidence |
|-----------|-------|------|--------|----------|
| Documentation | 4h | 25m | 10× | METHODOLOGY doc: 400 lines in ~30 min |
| Skill creation | 2h | 15m | 8× | 65 skills created, many in single sessions |
| Code generation | 4h | 30m | 8× | Slash commands, refactors |
| Research + synthesis | 8h | 45m | 10× | 62 project analysis in ~20 min |
| Architecture decisions | 8h | 2h | 4× | Root cause analysis + recommendations |

What Doesn't Accelerate (<2×)

| Bottleneck | Why | Evidence |
|------------|-----|----------|
| External dependencies | Can't control | AlexCook blocked by book formatting |
| Unrealistic scope | Must be discovered | Altman-Z-Score, KalabashDashboard |
| Human learning curve | Needs real time | Writing skills developing (Paper) |
| Third-party tools | Must wait | markdown-to-pdf "not working" |
| Approval cycles | Calendar-bound | Release publishing waits for human |

Project Success Predictors

From 62-project analysis:

| Indicator | Success Correlation | Action |
|-----------|---------------------|--------|
| Clear "done" definition | Strong positive | Define in one sentence upfront |
| Quick win potential | Strong positive | Favor ⚡ over 🗓️ |
| External dependencies | Strong negative | Identify blockers early, pivot |
| Scope ambition | Moderate negative | Conservative > ambitious |
| Continuous small work | Strong positive | Daily touch > weekly sprint |
| Skill count | Weak positive | Skills = investment, not outcome |

Usage in Planning

When reviewing task lists:

  1. Convert human estimates using multipliers
  2. Identify bottlenecks that can't be accelerated
  3. Flag tasks requiring multiple approval cycles
  4. Consider parallelization opportunities
  5. Add buffer for unexpected iteration
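The review steps above can be sketched as a single pass over a task list. Everything here (field names, the 1.2× buffer, the bottleneck set) is an illustrative assumption, not prescribed by the skill:

```python
# Bottlenecks that run at wall-clock speed regardless of AI assistance.
UNACCELERATED = {"build", "test", "approval", "deploy"}

def plan(tasks, buffer=1.2):
    """Convert human estimates, flag bottlenecks, and add iteration buffer.

    tasks: list of (name, human_hours, multiplier, bottleneck_or_None).
    Returns total Alex minutes including the buffer.
    """
    total = 0.0
    for name, human_hours, mult, bottleneck in tasks:
        minutes = human_hours * 60 * mult
        flag = " [bottleneck]" if bottleneck in UNACCELERATED else ""
        print(f"{name}: {minutes:.0f} min{flag}")
        total += minutes
    return total * buffer  # step 5: buffer for unexpected iteration

total = plan([
    ("Update engine", 0.5, 0.2, None),      # 6 min
    ("Test all features", 2.0, 1.0, "test"),  # 120 min, flagged
])
# total = (6 + 120) * 1.2 = 151.2 minutes
```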

Anti-Patterns

  • Don't assume instant everything - Testing and approval take real time
  • Don't skip human review - Speed without quality is waste
  • Don't ignore iteration cycles - First attempt isn't always right
  • Don't forget context-building - Reading files takes real time

Synapses

  • [bootstrap-learning/SKILL.md] → Learning acceleration estimates
  • [project-management/SKILL.md] → Task planning integration
  • [release-process/SKILL.md] → Release effort estimation
  • [testing-strategies/SKILL.md] → Test effort (real time bottleneck)