feat: Mac mini MLX integration (OpenAI-compat proxy, model placement fix)

- Add proxy_openai.py: SSE passthrough to the MLX server (see the sketch after this list)
- chat.py: add routing for the new openai-compat backend type (routing sketch after the diff below)
- backends.json: the GPU box now serves embeddings only (bge-m3); the Mac mini MLX serves chat (qwen3.5:35b-a3b)
- Use the LAN IP (192.168.1.122): both machines are on the same subnet, so no Tailscale hop is needed
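
A minimal sketch of what the SSE passthrough could look like, assuming FastAPI and httpx; the route path, the MLX_URL constant, and the app wiring are illustrative guesses, not the actual contents of proxy_openai.py:

# Hypothetical passthrough sketch (not the actual proxy_openai.py).
# Assumes the MLX server exposes an OpenAI-compatible /v1/chat/completions.
import httpx
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

app = FastAPI()
MLX_URL = "http://192.168.1.122:8800"  # Mac mini, matching backends.json below

@app.post("/v1/chat/completions")
async def chat_completions(request: Request):
    payload = await request.json()

    async def stream():
        # Forward the request and re-emit the upstream SSE bytes unchanged.
        async with httpx.AsyncClient(timeout=None) as client:
            async with client.stream(
                "POST", f"{MLX_URL}/v1/chat/completions", json=payload
            ) as upstream:
                async for chunk in upstream.aiter_bytes():
                    yield chunk

    return StreamingResponse(stream(), media_type="text/event-stream")

Because the proxy never parses the stream, MLX-side changes to chunk framing pass through untouched; only connection handling lives here.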

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Author: Hyungi Ahn
Date:   2026-03-31 15:09:21 +09:00
parent 3794afff95
commit 7b28252d4f
3 changed files with 113 additions and 4 deletions

backends.json

@@ -4,8 +4,17 @@
"type": "ollama",
"url": "http://host.docker.internal:11434",
"models": [
{ "id": "qwen3.5:9b-q8_0", "capabilities": ["chat"], "priority": 1 },
{ "id": "qwen3-vl:8b", "capabilities": ["chat", "vision"], "priority": 1 }
{ "id": "bge-m3", "capabilities": ["embed"], "priority": 1 }
],
"access": "all",
"rate_limit": null
},
{
"id": "mlx-mac",
"type": "openai-compat",
"url": "http://192.168.1.122:8800",
"models": [
{ "id": "qwen3.5:35b-a3b", "capabilities": ["chat"], "priority": 1 }
],
"access": "all",
"rate_limit": null