infra: migrate application from Mac mini to GPU server

- Integrate ollama + ai-gateway into root docker-compose.yml
  (NVIDIA GPU runtime, single compose for all services; compose sketch below)
- Change NAS mount from SMB (NAS_SMB_PATH) to NFS (NAS_NFS_PATH)
  Default: /mnt/nas/Document_Server (fstab registered on GPU server)
- Update config.yaml AI endpoints (sketched below):
  primary → Mac mini MLX via Tailscale (100.76.254.116:8800)
  fallback/embedding/vision/rerank → ollama (same Docker network)
  gateway → ai-gateway (same Docker network)
- Update credentials.env.example (remove GPU_SERVER_IP, add NFS path)
- Mark gpu-server/docker-compose.yml as deprecated
- Update CLAUDE.md network diagram and AI model config
- Update architecture.md, deploy.md, devlog.md for GPU server as main
- Caddyfile: auto_https off, HTTP only (TLS at upstream proxy)
- Caddy port: 127.0.0.1:8080:80 (localhost only)
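
A minimal sketch of how the pieces above could fit together in the root docker-compose.yml; service names, image tags, and container mount paths are illustrative assumptions, not the actual contents of this commit:

```yaml
# Illustrative sketch only -- service names, images, and mount paths are assumptions,
# not the actual contents of the root docker-compose.yml in this commit.
services:
  ollama:
    image: ollama/ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia          # requires the NVIDIA Container Toolkit on the GPU server
              count: all
              capabilities: [gpu]

  ai-gateway:
    image: ai-gateway:latest          # placeholder image name
    depends_on:
      - ollama

  app:
    build: .
    environment:
      NAS_NFS_PATH: /mnt/nas/Document_Server            # NFS share registered in fstab on the GPU server
    volumes:
      - /mnt/nas/Document_Server:/data/nas              # bind mount of the host NFS mount (container path assumed)

  caddy:
    image: caddy:2
    ports:
      - "127.0.0.1:8080:80"           # localhost only; TLS is terminated at the upstream proxy
```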

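A rough sketch of the endpoint layout described above for config.yaml; the key names and the ai-gateway port are assumptions, only the hosts and the Mac mini MLX address come from this commit:

```yaml
# Illustrative sketch only -- key names and the ai-gateway port are assumptions.
ai:
  primary:
    endpoint: http://100.76.254.116:8800   # Mac mini MLX, reached over Tailscale
  fallback:
    endpoint: http://ollama:11434          # ollama on the same Docker network (default port)
  embedding:
    endpoint: http://ollama:11434
  vision:
    endpoint: http://ollama:11434
  rerank:
    endpoint: http://ollama:11434
  gateway:
    endpoint: http://ai-gateway:8080       # ai-gateway on the same Docker network (port assumed)
```
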
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Author: Hyungi Ahn
Date:   2026-04-03 07:47:09 +09:00
Commit: 0ca78640ee (parent 8afa3c401f)
11 changed files with 434 additions and 56 deletions


@@ -542,7 +542,7 @@ POST /to-hwpx
 ### Caddy configuration example
 ```
-pkm.hyungi.net {
+document.hyungi.net {
     reverse_proxy localhost:8000  # FastAPI
 }
@@ -931,7 +931,7 @@ pkm-web/
 │   └── migrate_from_devonthink.py  ← v1 → v2 migration script
 ├── docs/
-│   ├── architecture-v2.md          ← this document
+│   ├── architecture.md             ← this document
 │   └── deploy.md                   ← deployment guide
 └── tests/