Files
hyungi_document_server/gpu-server/docker-compose.yml
Hyungi Ahn 0ca78640ee infra: migrate application from Mac mini to GPU server
- Integrate ollama + ai-gateway into root docker-compose.yml
  (NVIDIA GPU runtime, single compose for all services; sketch below)
- Change NAS mount from SMB (NAS_SMB_PATH) to NFS (NAS_NFS_PATH)
  Default: /mnt/nas/Document_Server (fstab registered on GPU server)
- Update config.yaml AI endpoints (sketched below):
  primary → Mac mini MLX via Tailscale (100.76.254.116:8800)
  fallback/embedding/vision/rerank → ollama (same Docker network)
  gateway → ai-gateway (same Docker network)
- Update credentials.env.example (remove GPU_SERVER_IP, add NFS path; example below)
- Mark gpu-server/docker-compose.yml as deprecated
- Update CLAUDE.md network diagram and AI model config
- Update architecture.md, deploy.md, devlog.md for GPU server as main
- Caddyfile: auto_https off, HTTP only (TLS at upstream proxy; see sketch below)
- Caddy port: 127.0.0.1:8080:80 (localhost only)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-03 07:47:09 +09:00
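
For context, a minimal sketch of what the integrated root docker-compose.yml described above could look like. The GPU reservation mirrors the deprecated file reproduced at the bottom of this page; the application service name (docserver), its container mount path, and the NFS export in the fstab comment are hypothetical placeholders.

# Sketch only: the root docker-compose.yml itself is not part of this diff.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama_data:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia        # NVIDIA GPU runtime, one GPU reserved
              count: 1
              capabilities: [gpu]
    restart: unless-stopped

  docserver:                        # hypothetical name for the application service
    # The NFS share is mounted on the host via /etc/fstab, e.g. (placeholder export):
    #   nas:/volume1/Document_Server  /mnt/nas/Document_Server  nfs  defaults,_netdev  0 0
    volumes:
      - ${NAS_NFS_PATH:-/mnt/nas/Document_Server}:/data/documents   # container path is an assumption

volumes:
  ollama_data: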
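
The endpoint list above maps to a config.yaml fragment along these lines. The key layout is an assumption; hosts and ports come from the commit message and the compose file (ollama listening on 11434, ai-gateway on 8080):

# Sketch: key names are assumed, endpoints are taken from this commit.
ai:
  primary: http://100.76.254.116:8800/v1/chat/completions   # Mac mini MLX via Tailscale
  fallback: http://ollama:11434/v1/chat/completions         # same Docker network
  embedding: http://ollama:11434/v1/embeddings
  vision: http://ollama:11434/v1/chat/completions
  rerank: http://ollama:11434                               # exact path depends on the client
  gateway: http://ai-gateway:8080                           # same Docker network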
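
Correspondingly, credentials.env.example would now carry the NFS path instead of GPU_SERVER_IP, alongside the variables the compose file already reads; a sketch with only values named in this commit:

# credentials.env.example (sketch)
NAS_NFS_PATH=/mnt/nas/Document_Server    # replaces NAS_SMB_PATH; GPU_SERVER_IP removed
PRIMARY_ENDPOINT=http://100.76.254.116:8800/v1/chat/completions
CLAUDE_API_KEY=
DAILY_BUDGET_USD=5.00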
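
And the Caddy change, sketched as the compose port mapping with the Caddyfile global option shown in a comment (the caddy service definition itself is assumed):

services:
  caddy:
    image: caddy:2              # image tag is an assumption
    ports:
      - "127.0.0.1:8080:80"     # localhost only; TLS terminates at the upstream proxy
    # Caddyfile global options disable automatic HTTPS:
    #   {
    #     auto_https off
    #   }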

# ═══════════════════════════════════════════════════
# This file is no longer used.
# Merged into the root docker-compose.yml (2026-04-03).
# ═══════════════════════════════════════════════════
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama_data:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    ports:
      - "11434:11434"
    restart: unless-stopped

  ai-gateway:
    build: ./services/ai-gateway
    ports:
      - "8080:8080"
    environment:
      - PRIMARY_ENDPOINT=${PRIMARY_ENDPOINT:-http://mac-mini:8800/v1/chat/completions}
      - FALLBACK_ENDPOINT=http://ollama:11434/v1/chat/completions
      - CLAUDE_API_KEY=${CLAUDE_API_KEY:-}
      - DAILY_BUDGET_USD=${DAILY_BUDGET_USD:-5.00}
    depends_on:
      - ollama
    restart: unless-stopped

volumes:
  ollama_data: