# Deploy Skill

Deploy and manage projects on the LUCIDLABS-HQ server.
## Sub-Commands

| Command | Purpose |
|---|---|
| `/deploy` or `/deploy full` | End-to-end deployment via `deploy-project.sh` |
| `/deploy setup` | Generate deployment files only (Dockerfile, docker-compose, CI/CD) |
| `/deploy status` | Check container health + SSL status |
| `/deploy logs` | Show container logs |
## /deploy or /deploy full

End-to-end deployment using the automated scripts.

### Prerequisites Check

Before deploying, verify:
```bash
# 1. SSH access
ssh -p 2222 lucidlabs-hq "echo ok"

# 2. gh CLI authenticated
gh auth status

# 3. Project has required files
ls docker-compose.yml frontend/Dockerfile
```
### Auth Verification (MANDATORY)

Every deployment MUST pass auth verification. No exceptions.

#### 1. Code Check

Verify these auth files exist in the project:
```bash
# All of these must exist (the bracketed route path is quoted so the
# shell does not treat it as a glob pattern)
ls frontend/middleware.ts \
   frontend/lib/auth-server.ts \
   frontend/lib/auth-helpers.ts \
   frontend/lib/auth-client.ts \
   "frontend/app/api/auth/[...all]/route.ts" \
   frontend/lib/convex.ts \
   frontend/components/auth/login-form.tsx
```
Additionally verify:

- `frontend/lib/convex.ts` exports dual connections (`projectConvex` + `authConvex`)
- `frontend/lib/auth-client.ts` uses the `magicLinkClient()` plugin (Magic Links via Resend only; no passwords, no OAuth)
- `frontend/components/providers/ConvexClientProvider.tsx` wraps the app with both `ConvexBetterAuthProvider` (`authConvex`) and `ConvexProvider` (`projectConvex`)
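The file check above can be scripted. A minimal sketch, where `check_auth_files` is a hypothetical helper (not part of the deploy tooling) that reports each missing file and returns the count:

```bash
# Hypothetical helper: verify the required auth files exist, returning
# how many are missing. Run from the project root.
check_auth_files() {
  missing=0
  for f in \
    frontend/middleware.ts \
    frontend/lib/auth-server.ts \
    frontend/lib/auth-helpers.ts \
    frontend/lib/auth-client.ts \
    "frontend/app/api/auth/[...all]/route.ts" \
    frontend/lib/convex.ts \
    frontend/components/auth/login-form.tsx
  do
    [ -f "$f" ] || { echo "MISSING: $f"; missing=$((missing + 1)); }
  done
  return "$missing"
}

# Demo against a scratch tree with only one of the seven files present
demo="$(mktemp -d)"
mkdir -p "$demo/frontend/lib"
touch "$demo/frontend/lib/convex.ts"
(cd "$demo" && check_auth_files)
count=$?
echo "missing=$count"
rm -rf "$demo"
```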
#### 2. Config Check

Verify `docker-compose.yml` contains:

| Variable | Type | Required |
|---|---|---|
| `NEXT_PUBLIC_AUTH_ENABLED` | build arg | Yes (default: `true`) |
| `AUTH_CONVEX_URL` | runtime env | Yes |
| `AUTH_CONVEX_SITE_URL` | runtime env | Yes |
| `BETTER_AUTH_SECRET` | runtime env | Yes |
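A quick grep-based sketch of this config check; the variable list comes from the table above, and the inline compose snippet is a made-up demo file, not a real project config:

```bash
# Check a compose file for the required auth variables. The demo file
# deliberately omits BETTER_AUTH_SECRET to show a failure.
compose="$(mktemp)"
cat > "$compose" <<'EOF'
services:
  frontend:
    build:
      args:
        NEXT_PUBLIC_AUTH_ENABLED: "true"
    environment:
      AUTH_CONVEX_URL: https://auth-convex.lucidlabs.de
      AUTH_CONVEX_SITE_URL: https://auth-convex.lucidlabs.de
EOF

fails=0
for var in NEXT_PUBLIC_AUTH_ENABLED AUTH_CONVEX_URL AUTH_CONVEX_SITE_URL BETTER_AUTH_SECRET; do
  if grep -q "$var" "$compose"; then
    echo "OK:   $var"
  else
    echo "FAIL: $var"
    fails=$((fails + 1))
  fi
done
echo "fails=$fails"   # fails=1 here (BETTER_AUTH_SECRET missing)
rm -f "$compose"
```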
#### 3. Post-Deploy Auth Check

```bash
# MUST return 307 redirect to /login
curl -sI https://<subdomain>.lucidlabs.de/ | head -5

# MUST return 200 with login form
curl -sI https://<subdomain>.lucidlabs.de/login | head -5
```
#### 4. Blocker Rule

If any auth check fails, the deployment is FAILED. Do NOT mark it as successful. Warn the user immediately and suggest fixes.
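The blocker rule can be expressed as a small gate. A sketch, where `auth_gate` is a hypothetical helper fed the HTTP status codes from the two curl checks above (e.g. obtained via `curl -s -o /dev/null -w '%{http_code}' <url>`):

```bash
# Hypothetical gate: pass only when / redirects (307) and /login serves (200)
auth_gate() {
  root_status="$1"   # status of GET /
  login_status="$2"  # status of GET /login
  if [ "$root_status" = "307" ] && [ "$login_status" = "200" ]; then
    echo "AUTH OK"
    return 0
  fi
  echo "AUTH FAILED: root=$root_status login=$login_status"
  return 1
}

auth_gate 307 200
auth_gate 200 200 || echo "deployment must be marked FAILED"
```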
### Read Project Context

```bash
# Get project details from PROJECT-CONTEXT.md or infer from codebase
cat .claude/PROJECT-CONTEXT.md 2>/dev/null

# Check registry for existing allocations
cat infrastructure/lucidlabs-hq/registry.json 2>/dev/null || \
  ssh -p 2222 lucidlabs-hq "cat /opt/lucidlabs/registry.json" 2>/dev/null
```
### Determine Parameters

Ask the user if not clear from context:

- **Project name**: kebab-case (e.g., `client-service-reporting`)
- **Abbreviation**: short prefix for containers (e.g., `csr`)
- **Subdomain**: URL prefix (e.g., `reporting` for `reporting.lucidlabs.de`)
- **Has Convex?**: check whether a `convex/` directory or `docker-compose.convex.yml` exists
- **Has Mastra?**: check whether a `mastra/` directory exists
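The directory checks above can be sketched as a small heuristic helper; `infer_params` is illustrative, not part of the deploy scripts, and its results should always be confirmed with the user:

```bash
# Infer deploy parameters from a project directory's layout
infer_params() {
  dir="$1"
  name="$(basename "$dir")"
  has_convex=false
  if [ -d "$dir/convex" ] || [ -f "$dir/docker-compose.convex.yml" ]; then
    has_convex=true
  fi
  has_mastra=false
  if [ -d "$dir/mastra" ]; then
    has_mastra=true
  fi
  echo "name=$name convex=$has_convex mastra=$has_mastra"
}

# Demo against a scratch project that only has a convex/ directory
demo="$(mktemp -d)/client-service-reporting"
mkdir -p "$demo/convex"
infer_params "$demo"
```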
### Run Deployment

```bash
# From project root
./scripts/deploy-project.sh \
  --name "<project-name>" \
  --abbreviation "<abbr>" \
  --subdomain "<subdomain>" \
  [--has-convex] \
  [--has-mastra]
```
If `scripts/deploy-project.sh` does not exist locally, copy it from upstream:

```bash
UPSTREAM="$(dirname "$(pwd)")/../lucidlabs-agent-kit"
cp "$UPSTREAM/scripts/deploy-project.sh" ./scripts/deploy-project.sh
chmod +x ./scripts/deploy-project.sh
```
### Auth Configuration (Phase 4.5)

The deploy script automatically injects centralized auth environment variables into each project's `.env.local`:

| Variable | Value | Purpose |
|---|---|---|
| `AUTH_CONVEX_URL` | `https://auth-convex.lucidlabs.de` | Shared auth Convex instance |
| `AUTH_CONVEX_SITE_URL` | `https://auth-convex.lucidlabs.de` | BetterAuth session validation endpoint |
| `NEXT_PUBLIC_AUTH_CONVEX_URL` | `https://auth-convex.lucidlabs.de` | Client-side auth Convex URL |
| `NEXT_PUBLIC_AUTH_ENABLED` | `true` | Enables auth in production |
| `BETTER_AUTH_SECRET` | (shared secret) | JWT signing key (from auth-convex instance) |
| `NEXT_PUBLIC_APP_URL` | `https://<subdomain>.lucidlabs.de` | App URL for auth callbacks |
The shared secret is read from the auth-convex instance on the server. If no auth-convex instance exists yet, this phase is skipped with a warning.
**Important:** Auth env vars are only injected once (idempotent). Re-deploying will not overwrite existing auth config.
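The idempotent behavior can be pictured with a marker-guard pattern. This is a sketch only: the marker line and the `inject_auth_env` helper are assumptions about how such injection might work, not the deploy script's actual implementation:

```bash
# Append auth env vars to a .env file only if the marker is not already there
inject_auth_env() {
  envfile="$1"
  marker="# --- lucidlabs auth (auto-injected) ---"
  if grep -qF "$marker" "$envfile" 2>/dev/null; then
    echo "auth env already present, skipping"
    return 0
  fi
  {
    echo "$marker"
    echo "AUTH_CONVEX_URL=https://auth-convex.lucidlabs.de"
    echo "NEXT_PUBLIC_AUTH_ENABLED=true"
  } >> "$envfile"
  echo "auth env injected"
}

f="$(mktemp)"
inject_auth_env "$f"   # first run appends the block
inject_auth_env "$f"   # second run is a no-op
```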
### Post-Deploy

After successful deployment:

- Verify URLs are accessible
- Add monitoring in Uptime Kuma (if available)
- Update PROJECT-STATUS.md with deployment info
## /deploy setup

Generate deployment files without deploying. For first-time setup.

### Files to Generate

Check which files exist and create the missing ones:

#### 1. Dockerfile (frontend/Dockerfile)

If not present, generate a multi-stage Next.js Dockerfile:
```dockerfile
FROM node:20-alpine AS base

FROM base AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN corepack enable pnpm && pnpm install --frozen-lockfile

FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
ARG NEXT_PUBLIC_CONVEX_URL
ENV NEXT_PUBLIC_CONVEX_URL=$NEXT_PUBLIC_CONVEX_URL
RUN corepack enable pnpm && pnpm run build

FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup --system --gid 1001 nodejs && adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT=3000
CMD ["node", "server.js"]
```
#### 2. docker-compose.yml

Read `registry.json` for port allocation and generate accordingly.

#### 3. docker-compose.convex.yml (if Convex)

Generated by `add-project.sh` on the server, but can be created locally too.

#### 4. GitHub Actions workflow (.github/workflows/deploy-hq.yml)

Generate from the template at `infrastructure/lucidlabs-hq/templates/github-workflow-hq.yml`.

#### 5. .env.example

Create with required environment variables (no secrets).
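For example, a minimal `.env.example` along these lines; the variable names are taken from the auth table above and should be adjusted to the project's actual needs:

```
# .env.example (placeholders only, no secrets)
NEXT_PUBLIC_CONVEX_URL=
AUTH_CONVEX_URL=
AUTH_CONVEX_SITE_URL=
BETTER_AUTH_SECRET=
NEXT_PUBLIC_APP_URL=
```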
### Output

```
DEPLOYMENT FILES GENERATED

frontend/Dockerfile              - Multi-stage Next.js build
docker-compose.yml               - Production deployment
docker-compose.convex.yml        - Convex instance (if applicable)
.github/workflows/deploy-hq.yml  - CI/CD pipeline
.env.example                     - Environment template

Next: Run /deploy to deploy to server
```
## /deploy status

Check deployment status for the current project.

### Process
```bash
# Read project info
PROJECT_NAME=$(basename "$(pwd)")

# Check registry (python3; no jq on the server)
ssh -p 2222 lucidlabs-hq "python3 -c \"import json; [print(json.dumps(p, indent=2)) for p in json.load(open('/opt/lucidlabs/registry.json'))['projects'] if p['name']=='$PROJECT_NAME']\"" 2>/dev/null

# Container status
ssh -p 2222 lucidlabs-hq "docker ps --filter 'name=ABBREVIATION' --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'"

# SSL check
curl -sI "https://SUBDOMAIN.lucidlabs.de" 2>/dev/null | head -5

# Convex check (if applicable)
curl -s "https://ABBREVIATION-convex.lucidlabs.de/version" 2>/dev/null
```
### Output Format

```
DEPLOYMENT STATUS: client-service-reporting

URL:    https://reporting.lucidlabs.de
Convex: https://csr-convex.lucidlabs.de
GitHub: https://github.com/lucidlabs-hq/client-service-reporting

CONTAINERS
──────────────────────────────────────────────────
csr-frontend           Up 3 days (healthy)   3070->3000
csr-convex-backend     Up 3 days (healthy)   3212->3210
csr-convex-dashboard   Up 3 days             6793->6791

SSL: Valid (Let's Encrypt)
Last Deploy: 2026-02-09T14:30:00+01:00
```
## /deploy logs

Show container logs for the current project.

### Process

Ask the user which service logs they want:
```bash
# Frontend logs (last 50 lines)
ssh -p 2222 lucidlabs-hq "docker logs ABBREVIATION-frontend --tail 50"

# Convex logs
ssh -p 2222 lucidlabs-hq "docker logs ABBREVIATION-convex-backend --tail 50"

# Follow logs (stream)
ssh -p 2222 lucidlabs-hq "docker logs ABBREVIATION-frontend -f --tail 20"
```
## SSH Configuration

LUCIDLABS-HQ uses port 2222. Required `~/.ssh/config`:

```
Host lucidlabs-hq
    HostName <server-ip>
    User nightwing
    Port 2222
    IdentityFile ~/.ssh/lucidlabs-hq

Host lucidlabs-hq-root
    HostName <server-ip>
    User root
    Port 2222
    IdentityFile ~/.ssh/lucidlabs-hq
```
### Sudoers (One-Time Setup, already configured 2026-02-09)

Deploy scripts need passwordless sudo for non-interactive SSH. This is configured via `/etc/sudoers.d/lucidlabs-deploy` on the server. Only deploy scripts and Docker are passwordless; all other sudo still requires a password.

```bash
# Setup (already done, documented here for reference):
ssh lucidlabs-hq
echo 'nightwing ALL=(ALL) NOPASSWD: /opt/lucidlabs/scripts/*.sh, /usr/bin/docker, /usr/bin/docker-compose' \
  | sudo tee /etc/sudoers.d/lucidlabs-deploy
sudo chmod 440 /etc/sudoers.d/lucidlabs-deploy
sudo visudo -c  # must print "parsed OK"
```
See `infrastructure/lucidlabs-hq/README.md` for the full security rationale.

## Port Allocation

Auto-allocated by `add-project.sh` from `registry.json`:
| Service | Range | Step |
|---|---|---|
| Frontend | 3050-3099 | +10 |
| Convex Backend | 3210-3299 | +2 |
| Convex Dashboard | 6790-6899 | +2 |
| Mastra | 4050-4099 | +10 |
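The allocation scheme above can be sketched with the same python3-only tooling the server uses (no jq). The inline registry snippet and the `frontend_port` field name are illustrative assumptions; see `registry.json` for the real schema and `add-project.sh` for the actual allocator:

```bash
# Pick the next free frontend port in 3050-3099, stepping by 10
registry="$(mktemp)"
cat > "$registry" <<'EOF'
{"projects": [
  {"name": "cotinga-test-suite",       "frontend_port": 3050},
  {"name": "invoice-accounting",       "frontend_port": 3060},
  {"name": "client-service-reporting", "frontend_port": 3070}
]}
EOF

next_port="$(python3 - "$registry" <<'EOF'
import json, sys
used = {p.get("frontend_port") for p in json.load(open(sys.argv[1]))["projects"]}
print(next(p for p in range(3050, 3100, 10) if p not in used))
EOF
)"
echo "next frontend port: $next_port"   # 3080 for the snippet above
rm -f "$registry"
```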
### Current Allocations
| Project | Frontend | Convex BE | Convex Dash | Mastra |
|---|---|---|---|---|
| cotinga-test-suite | 3050 | 3214 | 6794 | - |
| invoice-accounting | 3060 | 3216 | 6796 | 4050 |
| client-service-reporting | 3070 | 3212 | 6793 | - |
## Convex Project Isolation (MANDATORY)

Every project MUST have its own Convex instance. The `add-project.sh` script generates `docker-compose.convex.yml` automatically with the correct port allocation.

## Troubleshooting
| Issue | Solution |
|---|---|
| SSH refused | Check port 2222, verify SSH key in `~/.ssh/config` |
| Caddy not reloading | `ssh lucidlabs-hq 'cd /opt/lucidlabs/caddy && sudo docker compose restart caddy'` |
| Container won't start | `/deploy logs` to check build errors |
| SSL not provisioning | Verify DNS A record points to the server IP |
| Port conflict | Check `ss -tlnp` on the server, review `registry.json` |
| Permission denied | Verify `sudoers.d` config for `nightwing` |
## Reference

- `infrastructure/lucidlabs-hq/scripts/add-project.sh` - Server provisioning
- `scripts/deploy-project.sh` - Local orchestration
- `infrastructure/lucidlabs-hq/registry.json` - Port/project registry
- `.claude/reference/deployment-targets.md` - Deployment architecture
- `.claude/reference/deployment-best-practices.md` - Docker/CI patterns
- `.claude/reference/ssh-keys.md` - SSH setup guide