# Atlassian Engineering Practices

Best practices from Atlassian's development culture. Sources: Atlassian Team Playbook, Engineering Blog

## Core Principles

### 1. Definition of Done (DoD)

A feature is NOT done until:

```markdown
## Definition of Done Checklist

### Code Complete
- [ ] Code written and self-reviewed
- [ ] Unit tests written and passing
- [ ] Integration tests passing
- [ ] No linting errors
- [ ] No security vulnerabilities

### Review Complete
- [ ] PR reviewed and approved
- [ ] All comments addressed
- [ ] CI/CD pipeline green

### Documentation Complete
- [ ] Code comments for complex logic
- [ ] API documentation updated
- [ ] README updated if needed
- [ ] Changelog entry added

### Testing Complete
- [ ] QA tested (if applicable)
- [ ] Edge cases verified
- [ ] Performance acceptable
- [ ] Accessibility checked

### Deployment Ready
- [ ] Feature flag configured (if needed)
- [ ] Monitoring/alerts set up
- [ ] Rollback plan documented
- [ ] Stakeholders notified
```
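
Parts of this checklist can be enforced mechanically. As a minimal sketch (an illustrative helper, not an Atlassian tool), a hypothetical PR bot could scan the PR description for unchecked items and block merge until none remain:

```python
import re


def unchecked_items(markdown: str) -> list[str]:
    """Return the text of every unchecked '- [ ]' item in a markdown checklist."""
    return re.findall(r"^\s*- \[ \] (.+)$", markdown, flags=re.MULTILINE)


def is_done(markdown: str) -> bool:
    """A feature is 'done' only when no checklist item remains unchecked."""
    return not unchecked_items(markdown)
```

A checked box is written `- [x]`, so only literal `- [ ]` entries count as outstanding work.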

### 2. Agile Ceremonies

#### Sprint Planning

- Define sprint goal (ONE clear objective)
- Break stories into tasks (< 1 day each)
- Identify dependencies and blockers
- Commit to realistic scope

#### Daily Standup (15 min max)

```
1. What did I complete yesterday?
2. What will I work on today?
3. Any blockers?
```

#### Sprint Retrospective

```markdown
## Retro Format: 4 Ls

### Liked
- [What went well]

### Learned
- [New insights]

### Lacked
- [What was missing]

### Longed For
- [What we wish we had]

## Action Items
- [ ] [Specific improvement] - Owner
```

### 3. DACI Decision Framework

For important decisions:

| Role | Person | Responsibility |
|------|--------|----------------|
| Driver | [Name] | Drives the decision, gathers input |
| Approver | [Name] | Final say, one person only |
| Contributors | [Names] | Provide input and expertise |
| Informed | [Names] | Kept in the loop |
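
A DACI record can also be modeled in code, for example in a decision-log tool. The sketch below is illustrative (the class and field names are assumptions, not a prescribed schema); the key property it encodes is the single-approver rule, while one person may hold several other roles:

```python
from dataclasses import dataclass, field


@dataclass
class DaciDecision:
    """One decision record: exactly one approver, any number of others."""
    title: str
    driver: str
    approver: str  # one person only, by design
    contributors: list[str] = field(default_factory=list)
    informed: list[str] = field(default_factory=list)

    def roles_of(self, person: str) -> list[str]:
        """List every DACI role a given person holds on this decision."""
        roles = []
        if person == self.driver:
            roles.append("Driver")
        if person == self.approver:
            roles.append("Approver")
        if person in self.contributors:
            roles.append("Contributor")
        if person in self.informed:
            roles.append("Informed")
        return roles
```

Making `approver` a single string rather than a list bakes the "one person only" rule into the type itself.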

### 4. Health Monitors

Regular team health checks:

| Attribute | 🟢 Healthy | 🔴 Unhealthy |
|-----------|------------|--------------|
| Balanced Team | Right skills, capacity | Gaps, overloaded |
| Shared Understanding | Everyone aligned | Confusion |
| Value & Metrics | Clear success criteria | No measures |
| Proof of Concept | Validated approach | Unproven |
| Velocity | Predictable delivery | Erratic |
| Full-Time Owner | Dedicated lead | Part-time |
| Dependencies | Managed | Blocking |
| Stakeholder Support | Engaged | Absent |
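
In practice each member votes green or red per attribute. A small illustrative helper (the vote format is an assumption, not an official tool) that rolls votes up, where any single red vote flags the attribute, since the point of the monitor is to surface disagreement rather than average it away:

```python
def health_summary(votes: dict[str, list[str]]) -> dict[str, str]:
    """Roll per-member votes ('green'/'red') into one status per attribute.

    Any red vote marks the attribute 'at-risk' so the team discusses it.
    """
    return {
        attribute: "healthy" if all(v == "green" for v in ballots) else "at-risk"
        for attribute, ballots in votes.items()
    }
```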

### 5. Incident Management

#### Severity Levels

| Level | Description | Response Time |
|-------|-------------|---------------|
| SEV1 | Complete outage | Immediate |
| SEV2 | Major feature broken | < 1 hour |
| SEV3 | Minor feature broken | < 4 hours |
| SEV4 | Low impact | Next business day |
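
The table can be encoded as data so monitoring tooling can flag overdue acknowledgements. In this sketch, SEV1's "Immediate" and SEV4's "Next business day" are modeled as zero and 24 hours respectively; both are simplifying assumptions:

```python
from datetime import timedelta

# Response-time targets from the severity table. SEV1 ("Immediate") is
# modeled as zero and SEV4 ("Next business day") as 24 hours -- assumptions.
RESPONSE_SLA = {
    "SEV1": timedelta(0),
    "SEV2": timedelta(hours=1),
    "SEV3": timedelta(hours=4),
    "SEV4": timedelta(hours=24),
}


def is_breached(severity: str, waited: timedelta) -> bool:
    """True once an unacknowledged incident has exceeded its response SLA."""
    return waited > RESPONSE_SLA[severity]
```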

#### Incident Response

```
1. DETECT - Monitoring alerts or user reports
2. RESPOND - Acknowledge, assign an incident commander
3. MITIGATE - Stop the bleeding (even with a temporary fix)
4. RESOLVE - Permanent fix
5. REVIEW - Blameless postmortem
```

### 6. Quality Assistance (QA)

Shift left - quality is everyone's job:

```
┌─────────────────────────────────────────────────────────────┐
│                 QUALITY ASSISTANCE MODEL                    │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  Traditional QA          Quality Assistance                 │
│  ──────────────          ─────────────────                  │
│  QA tests at end    →    Dev writes tests                   │
│  QA finds bugs      →    Dev prevents bugs                  │
│  QA owns quality    →    Team owns quality                  │
│  QA = gatekeeper    →    QA = coach/enabler                 │
│                                                             │
│  "Quality is not a phase, it's built in"                    │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```

### 7. Code Review Guidelines

#### As Author

- Self-review first
- Keep PRs small (< 400 lines)
- Provide context in the description
- Respond promptly to feedback

#### As Reviewer

- Review within 4 hours
- Be constructive, not critical
- Approve if "good enough"
- Use suggestions, not demands
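
The "< 400 lines" guideline is easy to check automatically, for example in a pre-merge hook. A minimal sketch (the threshold and diff handling are simplified, not a standard tool) that counts changed lines in a unified diff:

```python
def changed_lines(diff: str) -> int:
    """Count added and removed lines in a unified diff, skipping file headers."""
    return sum(
        1
        for line in diff.splitlines()
        if (line.startswith("+") or line.startswith("-"))
        and not line.startswith(("+++", "---"))  # file headers, not changes
    )


def pr_too_big(diff: str, limit: int = 400) -> bool:
    """Flag PRs over the suggested ~400-changed-line review limit."""
    return changed_lines(diff) > limit
```

The same count is what `git diff --numstat` reports per file, summed over the PR.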

#### PR Template

```markdown
## Summary
[What does this PR do?]

## Changes
- [Change 1]
- [Change 2]

## Testing
- [ ] Unit tests
- [ ] Manual testing
- [ ] Screenshots (if UI)

## Checklist
- [ ] Self-reviewed
- [ ] Tests passing
- [ ] Docs updated
```
### 8. Playbook Plays

#### Pre-mortem (Before project)

```
Imagine it's 6 months from now and the project failed.
What went wrong?

- [Risk 1] → Mitigation: [...]
- [Risk 2] → Mitigation: [...]
```

#### 5 Whys (Root cause analysis)

```
Problem: Users can't log in

1. Why? → Auth service returning 500
2. Why? → Database connection timeout
3. Why? → Connection pool exhausted
4. Why? → Connections not being released
5. Why? → Missing finally block in code

ROOT CAUSE: Resource leak in auth service
```
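
The root cause in this walkthrough, a connection never returned to the pool on the error path, maps directly to code. A simplified illustration (the pool and auth logic here are stand-ins for whatever the real service used) of the fix the fifth "why" points at:

```python
class ConnectionPool:
    """Tiny stand-in pool so the leak is observable (illustrative only)."""

    def __init__(self, size: int) -> None:
        self.available = size

    def acquire(self):
        if self.available == 0:
            raise TimeoutError("connection pool exhausted")  # the 3rd "why"
        self.available -= 1
        return object()  # stand-in for a real connection

    def release(self, conn) -> None:
        self.available += 1


def authenticate(pool: ConnectionPool, user: str) -> bool:
    """Fixed version: the finally block guarantees the connection is
    released even when the query raises, which the buggy code lacked."""
    conn = pool.acquire()
    try:
        if not user:
            raise ValueError("empty username")  # any mid-query failure
        return True  # stand-in for a successful credential check
    finally:
        pool.release(conn)  # the fix: runs on success AND on failure
```

Without the `finally`, each failed login would strand one connection until the pool was exhausted, which is exactly the chain the 5 Whys uncovered.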

## Quick Reference

| Practice | When to Use |
|----------|-------------|
| DoD Checklist | Every feature |
| DACI | Important decisions |
| Health Monitor | Monthly team check |
| Pre-mortem | Project kickoff |
| 5 Whys | Incident analysis |
| 4 Ls Retro | Sprint end |