feat: Improve model and database structure

- Optimize database connections and queries in model files
- Clean up and consolidate migration files
- Add Docker configuration files
- Add project start/stop scripts
- Update README and documentation

hyungi committed 2025-10-27 10:01:12 +09:00
parent f96604b01e
commit 5ff0c7cd60
19 changed files with 13972 additions and 33 deletions

README.md (new file, 63 lines)

@@ -0,0 +1,63 @@
# TK-FB-Project - Unified Run Guide
## 🚀 Run All Services at Once
### 🎯 Quick Start (recommended)
```bash
cd /Users/hyungi/docker/TK-FB-Project
./start.sh
```
### 🛑 Quick Stop
```bash
./stop.sh
```
### 📋 Manual Run
```bash
docker-compose up -d
docker-compose down
```
## 📊 Service List
| Service | Port | URL | Description |
|---------|------|-----|-------------|
| **Web UI** | 20000 | http://localhost:20000 | Main web interface |
| **API Server** | 20005 | http://localhost:20005 | Node.js API server ✅ |
| **FastAPI Bridge** | 20010 | http://localhost:20010 | Python FastAPI service |
| **phpMyAdmin** | 20080 | http://localhost:20080 | DB admin tool |
| **MariaDB** | 20306 | - | Database server |
## 🛠️ Management Commands
### Stop all services
```bash
cd /Users/hyungi/docker/TK-FB-Project
docker-compose down
```
### Check service status
```bash
docker ps | grep fb_
```
### View logs
```bash
docker-compose logs -f
```
## 💾 Database Info
- **Host**: localhost:20306
- **Database**: hyungi
- **User**: hyungi
- **Password**: hyungi_password_2025
- **Root password**: hyungi_root_password_2025
## ✨ Key Improvements
1. **Unified execution**: run every service with a single command
2. **Clean DB initialization**: migration errors resolved
3. **Consistent naming**: containers share the `fb_` prefix
4. **Stable ports**: services use the 20000 range

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -31,7 +31,7 @@ class WorkAnalysis {
   totalHours: parseFloat(stats.total_hours) || 0,
   totalReports: parseInt(stats.total_reports) || 0,
   activeProjects: parseInt(stats.active_projects) || 0,
-  activeWorkers: parseInt(stats.active_workers) || 0,
+  activeworkers: parseInt(stats.active_workers) || 0,
   errorRate: parseFloat(errorRate.toFixed(2)) || 0,
   avgHoursPerReport: parseFloat(stats.avg_hours_per_report) || 0
 };
@@ -82,7 +82,7 @@ class WorkAnalysis {
   SUM(CASE WHEN dwr.work_status_id = 2 THEN 1 ELSE 0 END) as errorCount,
   COUNT(DISTINCT dwr.report_date) as workingDays
 FROM daily_work_reports dwr
-LEFT JOIN Workers w ON dwr.worker_id = w.worker_id
+LEFT JOIN workers w ON dwr.worker_id = w.worker_id
 WHERE dwr.report_date BETWEEN ? AND ?
 GROUP BY dwr.worker_id, w.worker_name
 ORDER BY totalHours DESC
@@ -201,7 +201,7 @@ class WorkAnalysis {
   u.name as created_by_name,
   dwr.created_at
 FROM daily_work_reports dwr
-LEFT JOIN Workers w ON dwr.worker_id = w.worker_id
+LEFT JOIN workers w ON dwr.worker_id = w.worker_id
 LEFT JOIN Projects p ON dwr.project_id = p.project_id
 LEFT JOIN work_types wt ON dwr.work_type_id = wt.id
 LEFT JOIN work_status_types wst ON dwr.work_status_id = wst.id
@@ -269,7 +269,7 @@ class WorkAnalysis {
   totalHours: parseFloat(row.total_hours) || 0,
   totalReports: parseInt(row.total_reports) || 0,
   avgHours: parseFloat(row.avg_hours) || 0,
-  activeWorkers: parseInt(row.active_workers) || 0
+  activeworkers: parseInt(row.active_workers) || 0
 }));
 } catch (error) {
 throw new Error(`Day-of-week pattern analysis failed: ${error.message}`);
@@ -301,7 +301,7 @@ class WorkAnalysis {
   error_type_name: row.error_type_name || `Error type ${row.error_type_id}`,
   errorCount: parseInt(row.error_count) || 0,
   totalHours: parseFloat(row.total_hours) || 0,
-  affectedWorkers: parseInt(row.affected_workers) || 0,
+  affectedworkers: parseInt(row.affected_workers) || 0,
   affectedProjects: parseInt(row.affected_projects) || 0
 }));
 } catch (error) {
@@ -333,7 +333,7 @@ class WorkAnalysis {
   monthName: row.month_name,
   totalHours: parseFloat(row.total_hours) || 0,
   totalReports: parseInt(row.total_reports) || 0,
-  activeWorkers: parseInt(row.active_workers) || 0,
+  activeworkers: parseInt(row.active_workers) || 0,
   activeProjects: parseInt(row.active_projects) || 0,
   errorCount: parseInt(row.error_count) || 0,
   errorRate: row.total_reports > 0 ? parseFloat(((row.error_count / row.total_reports) * 100).toFixed(2)) : 0
@@ -362,7 +362,7 @@ class WorkAnalysis {
   AND report_date BETWEEN ? AND ?
   )) * 100, 2) as percentage
 FROM daily_work_reports dwr
-LEFT JOIN Workers w ON dwr.worker_id = w.worker_id
+LEFT JOIN workers w ON dwr.worker_id = w.worker_id
 LEFT JOIN work_types wt ON dwr.work_type_id = wt.id
 LEFT JOIN Projects p ON dwr.project_id = p.project_id
 WHERE dwr.report_date BETWEEN ? AND ?
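The hunks above rename table references such as `Workers` to `workers` (table names are case-sensitive in MariaDB on Linux hosts unless `lower_case_table_names` is set). Note that the diff also lowercases JavaScript keys like `activeWorkers` to `activeworkers`, which suggests the rename was done with a plain substring replace. A word-boundary-aware sketch (a hypothetical helper, not part of this repository) would lowercase the table names without clobbering camelCase identifiers:

```python
import re

# Tables renamed in this commit (old name -> new lowercase name).
RENAMES = {"Workers": "workers", "Projects": "projects", "Users": "users"}

def lowercase_tables(sql: str) -> str:
    """Rewrite known table names to lowercase, matching whole words only."""
    for old, new in RENAMES.items():
        # \b will not match inside identifiers like activeWorkers,
        # because 'e' and 'W' are both word characters.
        sql = re.sub(rf"\b{old}\b", new, sql)
    return sql

print(lowercase_tables("LEFT JOIN Workers w ON dwr.worker_id = w.worker_id"))
# LEFT JOIN workers w ON dwr.worker_id = w.worker_id
print(lowercase_tables("activeWorkers: parseInt(stats.active_workers) || 0,"))
# activeWorkers: parseInt(stats.active_workers) || 0,
```

Run against the source files, this would have left the response keys (`activeWorkers`, `affectedWorkers`, …) intact, avoiding a silent API change for existing clients.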


@@ -26,7 +26,7 @@ const getAnalysis = async (startDate, endDate) => {
 const summarySql = `
   SELECT
     COUNT(DISTINCT dwr.project_id) as totalProjects,
-    COUNT(DISTINCT dwr.worker_id) as totalWorkers,
+    COUNT(DISTINCT dwr.worker_id) as totalworkers,
     COUNT(DISTINCT dwr.task_id) as totalTasks,
     SUM(${workHoursCalc}) as totalHours
 FROM DailyWorkReports dwr
@@ -48,7 +48,7 @@ const getAnalysis = async (startDate, endDate) => {
 const byWorkerSql = `
   SELECT w.worker_name as name, SUM(${workHoursCalc}) as hours, COUNT(DISTINCT dwr.project_id) as participants
 FROM DailyWorkReports dwr
-JOIN Workers w ON dwr.worker_id = w.worker_id
+JOIN workers w ON dwr.worker_id = w.worker_id
 ${whereClause}
 GROUP BY w.worker_name
 HAVING hours > 0
@@ -74,7 +74,7 @@ const getAnalysis = async (startDate, endDate) => {
   (${workHoursCalc}) as work_hours, dwr.memo
 FROM DailyWorkReports dwr
 JOIN Projects p ON dwr.project_id = p.project_id
-JOIN Workers w ON dwr.worker_id = w.worker_id
+JOIN workers w ON dwr.worker_id = w.worker_id
 JOIN Tasks t ON dwr.task_id = t.task_id
 ${whereClause}
 HAVING work_hours > 0


@@ -46,7 +46,7 @@ const getAllByDate = async (date) => {
   d.id, d.date, w.worker_name, p.project_name, d.start_time, d.end_time,
   t.category, t.subcategory, d.description
 FROM DailyIssueReports d
-LEFT JOIN Workers w ON d.worker_id = w.worker_id
+LEFT JOIN workers w ON d.worker_id = w.worker_id
 LEFT JOIN Projects p ON d.project_id = p.project_id
 LEFT JOIN IssueTypes t ON d.issue_type_id = t.issue_type_id
 WHERE d.date = ?


@@ -54,7 +54,7 @@ const createDailyReport = async (reportData, callback) => {
 const [existingReports] = await conn.query(
   `SELECT dwr.created_by, u.name as created_by_name, COUNT(*) as count, SUM(dwr.work_hours) as total_hours
    FROM daily_work_reports dwr
-   LEFT JOIN Users u ON dwr.created_by = u.user_id
+   LEFT JOIN users u ON dwr.created_by = u.user_id
    WHERE dwr.report_date = ? AND dwr.worker_id = ?
    GROUP BY dwr.created_by`,
   [report_date, worker_id]
@@ -82,7 +82,7 @@ const [existingReports] = await conn.query(
 const [finalReports] = await conn.query(
   `SELECT dwr.created_by, u.name as created_by_name, COUNT(*) as count, SUM(dwr.work_hours) as total_hours
    FROM daily_work_reports dwr
-   LEFT JOIN Users u ON dwr.created_by = u.user_id
+   LEFT JOIN users u ON dwr.created_by = u.user_id
    WHERE dwr.report_date = ? AND dwr.worker_id = ?
    GROUP BY dwr.created_by`,
   [report_date, worker_id]
@@ -164,7 +164,7 @@ const getMyAccumulatedHours = async (date, worker_id, created_by, callback) => {
     ORDER BY created_at
   ) as my_entries
 FROM daily_work_reports dwr
-LEFT JOIN Projects p ON dwr.project_id = p.project_id
+LEFT JOIN projects p ON dwr.project_id = p.project_id
 WHERE dwr.report_date = ? AND dwr.worker_id = ? AND dwr.created_by = ?
 `;
@@ -216,8 +216,8 @@ const getContributorsByDate = async (date, worker_id, callback) => {
     ORDER BY dwr.created_at SEPARATOR ', '
   ) as entry_details
 FROM daily_work_reports dwr
-LEFT JOIN Users u ON dwr.created_by = u.user_id
-LEFT JOIN Projects p ON dwr.project_id = p.project_id
+LEFT JOIN users u ON dwr.created_by = u.user_id
+LEFT JOIN projects p ON dwr.project_id = p.project_id
 WHERE dwr.report_date = ? AND dwr.worker_id = ?
 GROUP BY dwr.created_by
 ORDER BY total_hours DESC, first_entry ASC
@@ -245,9 +245,9 @@ const removeSpecificEntry = async (entry_id, deleted_by, callback) => {
 const [entryInfo] = await conn.query(
   `SELECT dwr.*, w.worker_name, p.project_name, u.name as created_by_name
    FROM daily_work_reports dwr
-   LEFT JOIN Workers w ON dwr.worker_id = w.worker_id
-   LEFT JOIN Projects p ON dwr.project_id = p.project_id
-   LEFT JOIN Users u ON dwr.created_by = u.user_id
+   LEFT JOIN workers w ON dwr.worker_id = w.worker_id
+   LEFT JOIN projects p ON dwr.project_id = p.project_id
+   LEFT JOIN users u ON dwr.created_by = u.user_id
    WHERE dwr.id = ?`,
   [entry_id]
 );
@@ -333,12 +333,12 @@ const getSelectQuery = () => `
   dwr.created_at,
   dwr.updated_at
 FROM daily_work_reports dwr
-LEFT JOIN Workers w ON dwr.worker_id = w.worker_id
-LEFT JOIN Projects p ON dwr.project_id = p.project_id
+LEFT JOIN workers w ON dwr.worker_id = w.worker_id
+LEFT JOIN projects p ON dwr.project_id = p.project_id
 LEFT JOIN work_types wt ON dwr.work_type_id = wt.id
 LEFT JOIN work_status_types wst ON dwr.work_status_id = wst.id
 LEFT JOIN error_types et ON dwr.error_type_id = et.id
-LEFT JOIN Users u ON dwr.created_by = u.user_id
+LEFT JOIN users u ON dwr.created_by = u.user_id
 `;
 /**
@@ -524,7 +524,7 @@ const getSummaryByDate = async (date, callback) => {
   COUNT(*) as work_entries_count,
   SUM(CASE WHEN dwr.work_status_id = 2 THEN 1 ELSE 0 END) as error_count
 FROM daily_work_reports dwr
-LEFT JOIN Workers w ON dwr.worker_id = w.worker_id
+LEFT JOIN workers w ON dwr.worker_id = w.worker_id
 WHERE dwr.report_date = ?
 GROUP BY dwr.worker_id, dwr.report_date
 ORDER BY w.worker_name ASC
@@ -553,7 +553,7 @@ const getSummaryByWorker = async (worker_id, callback) => {
   COUNT(*) as work_entries_count,
   SUM(CASE WHEN dwr.work_status_id = 2 THEN 1 ELSE 0 END) as error_count
 FROM daily_work_reports dwr
-LEFT JOIN Workers w ON dwr.worker_id = w.worker_id
+LEFT JOIN workers w ON dwr.worker_id = w.worker_id
 WHERE dwr.worker_id = ?
 GROUP BY dwr.report_date, dwr.worker_id
 ORDER BY dwr.report_date DESC
@@ -587,8 +587,8 @@ const getMonthlySummary = async (year, month, callback) => {
   GROUP_CONCAT(DISTINCT p.project_name ORDER BY p.project_name) as projects,
   GROUP_CONCAT(DISTINCT wt.name ORDER BY wt.name) as work_types
 FROM daily_work_reports dwr
-LEFT JOIN Workers w ON dwr.worker_id = w.worker_id
-LEFT JOIN Projects p ON dwr.project_id = p.project_id
+LEFT JOIN workers w ON dwr.worker_id = w.worker_id
+LEFT JOIN projects p ON dwr.project_id = p.project_id
 LEFT JOIN work_types wt ON dwr.work_type_id = wt.id
 WHERE dwr.report_date BETWEEN ? AND ?
 GROUP BY dwr.report_date, dwr.worker_id


@@ -90,7 +90,7 @@ const create = async (report, callback) => {
   wr.work_details,
   wr.memo
 FROM WorkReports wr
-LEFT JOIN Workers w ON wr.worker_id = w.worker_id
+LEFT JOIN workers w ON wr.worker_id = w.worker_id
 LEFT JOIN Projects p ON wr.project_id = p.project_id
 LEFT JOIN Tasks t ON wr.task_id = t.task_id
 WHERE wr.\`date\` = ?


@@ -7,7 +7,7 @@ const create = async (worker, callback) => {
 const { worker_name, join_date, job_type, salary, annual_leave, status } = worker;
 const [result] = await db.query(
-  `INSERT INTO Workers
+  `INSERT INTO workers
    (worker_name, join_date, job_type, salary, annual_leave, status)
    VALUES (?, ?, ?, ?, ?, ?)`,
   [worker_name, join_date, job_type, salary, annual_leave, status]
@@ -23,7 +23,7 @@ const create = async (worker, callback) => {
 const getAll = async (callback) => {
   try {
     const db = await getDb();
-    const [rows] = await db.query(`SELECT * FROM Workers ORDER BY worker_id DESC`);
+    const [rows] = await db.query(`SELECT * FROM workers ORDER BY worker_id DESC`);
     callback(null, rows);
   } catch (err) {
     callback(err);
@@ -34,7 +34,7 @@ const getAll = async (callback) => {
 const getById = async (worker_id, callback) => {
   try {
     const db = await getDb();
-    const [rows] = await db.query(`SELECT * FROM Workers WHERE worker_id = ?`, [worker_id]);
+    const [rows] = await db.query(`SELECT * FROM workers WHERE worker_id = ?`, [worker_id]);
     callback(null, rows[0]);
   } catch (err) {
     callback(err);
@@ -48,7 +48,7 @@ const update = async (worker, callback) => {
 const { worker_id, worker_name, join_date, job_type, salary, annual_leave, status } = worker;
 const [result] = await db.query(
-  `UPDATE Workers
+  `UPDATE workers
    SET worker_name = ?,
        join_date = ?,
        job_type = ?,
@@ -70,7 +70,7 @@ const remove = async (worker_id, callback) => {
   try {
     const db = await getDb();
     const [result] = await db.query(
-      `DELETE FROM Workers WHERE worker_id = ?`,
+      `DELETE FROM workers WHERE worker_id = ?`,
       [worker_id]
     );
     callback(null, result.affectedRows);

docker-compose.yml (new file, 116 lines)

@@ -0,0 +1,116 @@
version: "3.8"

services:
  # MariaDB database
  db:
    image: mariadb:10.9
    container_name: fb_db
    restart: unless-stopped
    environment:
      - MYSQL_ROOT_PASSWORD=hyungi_root_password_2025
      - MYSQL_DATABASE=hyungi
      - MYSQL_USER=hyungi
      - MYSQL_PASSWORD=hyungi_password_2025
    volumes:
      - db_data:/var/lib/mysql
      - ./api.hyungi.net/migrations:/docker-entrypoint-initdb.d
    ports:
      - "20306:3306"
    networks:
      - fb_network
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 20s
      retries: 10

  # API server (Node.js)
  api:
    build:
      context: ./api.hyungi.net
      dockerfile: Dockerfile
    container_name: fb_api
    depends_on:
      db:
        condition: service_healthy
    restart: unless-stopped
    ports:
      - "20005:20005"
    environment:
      - NODE_ENV=production
      - DB_HOST=db
      - DB_NAME=hyungi
      - DB_USER=hyungi
      - DB_PASSWORD=hyungi_password_2025
      - DB_ROOT_PASSWORD=hyungi_root_password_2025
    volumes:
      - ./api.hyungi.net/public/img:/usr/src/app/public/img:ro
      - ./api.hyungi.net/uploads:/usr/src/app/uploads
      - ./api.hyungi.net/logs:/usr/src/app/logs
    networks:
      - fb_network
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  # Web UI (Nginx)
  web-ui:
    build:
      context: ./web-ui
      dockerfile: Dockerfile
    container_name: fb_web_ui
    restart: unless-stopped
    ports:
      - "20000:80"
    volumes:
      - ./web-ui:/usr/share/nginx/html:ro
    networks:
      - fb_network
    depends_on:
      - api

  # FastAPI bridge
  fastapi-bridge:
    build:
      context: ./fastapi-bridge
      dockerfile: Dockerfile
    container_name: fb_fastapi_bridge
    restart: unless-stopped
    ports:
      - "20010:8000"
    environment:
      - EXPRESS_API_URL=http://api:20005
      - NODE_ENV=production
    networks:
      - fb_network
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  # phpMyAdmin (DB admin tool)
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: fb_phpmyadmin
    depends_on:
      - db
    restart: unless-stopped
    ports:
      - "20080:80"
    environment:
      - PMA_HOST=db
      - PMA_USER=root
      - PMA_PASSWORD=hyungi_root_password_2025
      - UPLOAD_LIMIT=50M
    networks:
      - fb_network

volumes:
  db_data:
    driver: local

networks:
  fb_network:
    driver: bridge


@@ -7,7 +7,7 @@ from typing import List
 class Settings:
     # Base settings
     FASTAPI_PORT: int = int(os.getenv("FASTAPI_PORT", "8000"))
-    EXPRESS_API_URL: str = os.getenv("EXPRESS_API_URL", "http://localhost:3005")
+    EXPRESS_API_URL: str = os.getenv("EXPRESS_API_URL", "http://api:20005")
     REDIS_URL: str = os.getenv("REDIS_URL", "redis://localhost:6379")
     NODE_ENV: str = os.getenv("NODE_ENV", "development")
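The default URL changes from `http://localhost:3005` to `http://api:20005` because, inside the Compose network, the Express server is reachable by its service name `api`, while `localhost` would resolve to the bridge container itself. A minimal sketch of the same env-with-fallback pattern (illustrative, mirroring the setting above rather than reproducing the project's `Settings` class):

```python
import os

def express_api_url() -> str:
    # Compose injects EXPRESS_API_URL for the container; the fallback uses
    # the service DNS name "api", not localhost.
    return os.getenv("EXPRESS_API_URL", "http://api:20005")

os.environ.pop("EXPRESS_API_URL", None)           # no override set
print(express_api_url())                          # http://api:20005

os.environ["EXPRESS_API_URL"] = "http://localhost:3005"  # local-dev override
print(express_api_url())                          # http://localhost:3005
```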


@@ -0,0 +1,17 @@
version: "3.8"

services:
  fastapi-bridge:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: fastapi_bridge_hyungi
    restart: unless-stopped
    ports:
      - "20010:8000"
    networks:
      - hyungi_network

networks:
  hyungi_network:
    external: true

hyungi.sql (new file, 4567 lines)

File diff suppressed because it is too large

start.sh (new executable file, 30 lines)

@@ -0,0 +1,30 @@
#!/bin/bash
# TK-FB-Project unified start script
echo "🚀 Starting TK-FB-Project..."
echo "========================================"

# Start all services with Docker Compose
docker-compose up -d

# Wait briefly
sleep 3

# Check service status
echo ""
echo "📊 Checking service status..."
echo "========================================"
docker ps | grep fb_

echo ""
echo "🌐 Access URLs:"
echo "========================================"
echo "• Web UI: http://localhost:20000"
echo "• API server: http://localhost:20005"
echo "• FastAPI: http://localhost:20010"
echo "• phpMyAdmin: http://localhost:20080"
echo "• Database: localhost:20306"
echo ""
echo "✅ All services started!"
echo "   Open http://localhost:20000 in your browser shortly."
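The script above waits a fixed `sleep 3`, which can race slower container startups. Polling for readiness is one alternative; a minimal Python sketch (illustrative only, not part of the project):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Return True once a TCP port accepts connections, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the service is at least listening.
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.5)
    return False

# e.g. wait_for_port("localhost", 20000) before opening the browser
```

Note that a port accepting connections is a weaker guarantee than the application being healthy; the compose file's healthchecks (e.g. the FastAPI `/health` probe) remain the stronger signal.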

stop.sh (new executable file, 12 lines)

@@ -0,0 +1,12 @@
#!/bin/bash
# TK-FB-Project unified stop script
echo "⏹️ Stopping TK-FB-Project..."
echo "========================================"

# Stop all services with Docker Compose
docker-compose down

echo ""
echo "✅ All services stopped!"