# ๐Ÿ  DS1525+ ์ตœ์ ํ™” ์„ค์น˜ ๊ฐ€์ด๋“œ ## ๐Ÿ“‹ ํ•˜๋“œ์›จ์–ด ์‚ฌ์–‘ ### ๋Œ€์ƒ ๊ธฐ๊ธฐ: Synology DS1525+ - **CPU**: AMD Ryzen R1600 (4์ฝ”์–ด 2.6GHz) - **๋ฉ”๋ชจ๋ฆฌ**: 32GB RAM (๋Œ€ํญ ์—…๊ทธ๋ ˆ์ด๋“œ๋จ!) - **์ €์žฅ์žฅ์น˜**: ์‹œ๋†€๋กœ์ง€ ์ •ํ’ˆ 2.5" SSD 480GB - **๋„คํŠธ์›Œํฌ**: ๊ธฐ๊ฐ€๋น„ํŠธ ์ด๋”๋„ท x4 - **DSM**: 7.0 ์ด์ƒ ### ์„ฑ๋Šฅ ํŠน์ง• - **๊ณ ์„ฑ๋Šฅ CPU**: AMD Ryzen์œผ๋กœ NLP ์ฒ˜๋ฆฌ ์šฐ์ˆ˜ - **๋Œ€์šฉ๋Ÿ‰ ๋ฉ”๋ชจ๋ฆฌ**: 32GB๋กœ ๋ชจ๋“  ์„œ๋น„์Šค ์—ฌ์œ ๋กญ๊ฒŒ ์šด์˜ - **SSD ์Šคํ† ๋ฆฌ์ง€**: ๋น ๋ฅธ I/O, ๋‚ฎ์€ ์ง€์—ฐ์‹œ๊ฐ„ - **๋ฉ€ํ‹ฐ ๊ธฐ๊ฐ€๋น„ํŠธ**: ๋„คํŠธ์›Œํฌ ๋ณ‘๋ชฉ ์—†์Œ --- ## ๐Ÿš€ DS1525+ 32GB ์ตœ์ ํ™” ๊ตฌ์„ฑ ### ๋ฉ”๋ชจ๋ฆฌ ํ• ๋‹น ๊ณ„ํš (32GB ํ™œ์šฉ) ``` ์ด ๋ฉ”๋ชจ๋ฆฌ: 32GB โ”œโ”€โ”€ DSM ์‹œ์Šคํ…œ: 2GB โ”œโ”€โ”€ PostgreSQL: 4GB (๋Œ€ํญ ์ฆ๋Ÿ‰) โ”œโ”€โ”€ Elasticsearch: 8GB (๊ณ ์„ฑ๋Šฅ ๊ฒ€์ƒ‰) โ”œโ”€โ”€ Redis: 2GB (๋Œ€์šฉ๋Ÿ‰ ์บ์‹œ) โ”œโ”€โ”€ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜: 4GB (๋ฉ€ํ‹ฐ ์›Œ์ปค) โ”œโ”€โ”€ NLP ์ฒ˜๋ฆฌ: 4GB (๋™์‹œ ์ฒ˜๋ฆฌ) โ”œโ”€โ”€ ์‹œ์Šคํ…œ ์บ์‹œ: 6GB (ํŒŒ์ผ ์‹œ์Šคํ…œ ์บ์‹œ) โ””โ”€โ”€ ์—ฌ์œ  ๊ณต๊ฐ„: 2GB ``` ### Docker Compose ์ตœ์ ํ™” (DS1525+ ์ „์šฉ) ```yaml # /volume1/docker/industrial-info/docker-compose.yml version: '3.8' networks: industrial-net: driver: bridge ipam: config: - subnet: 172.20.0.0/16 services: postgres: image: postgres:15-alpine container_name: industrial-postgres environment: POSTGRES_DB: industrial_info POSTGRES_USER: industrial_user POSTGRES_PASSWORD: secure_password_here POSTGRES_INITDB_ARGS: "--encoding=UTF-8 --locale=C" volumes: - /volume1/docker/industrial-info/data/postgres:/var/lib/postgresql/data - /volume1/docker/industrial-info/config/postgresql.conf:/etc/postgresql/postgresql.conf ports: - "5432:5432" networks: - industrial-net restart: unless-stopped deploy: resources: limits: memory: 4G # 32GB ํ™œ์šฉ cpus: '2.0' # Ryzen 4์ฝ”์–ด ํ™œ์šฉ reservations: memory: 2G cpus: '1.0' command: postgres -c config_file=/etc/postgresql/postgresql.conf elasticsearch: image: elasticsearch:8.11.0 container_name: industrial-elasticsearch 
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms8g -Xmx8g"   # 8GB heap; Elasticsearch recommends Xms == Xmx
      - xpack.security.enabled=false
      - xpack.security.enrollment.enabled=false
      - bootstrap.memory_lock=true
    volumes:
      - /volume1/docker/industrial-info/data/elasticsearch:/usr/share/elasticsearch/data
      - /volume1/docker/industrial-info/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - industrial-net
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 10G     # 8GB JVM heap + ~2GB off-heap/system
          cpus: '2.0'
        reservations:
          memory: 6G
          cpus: '1.0'
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536

  redis:
    image: redis:7-alpine
    container_name: industrial-redis
    command: redis-server /etc/redis/redis.conf
    volumes:
      - /volume1/docker/industrial-info/data/redis:/data
      - /volume1/docker/industrial-info/config/redis.conf:/etc/redis/redis.conf
    ports:
      - "6379:6379"
    networks:
      - industrial-net
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 2G      # large cache
          cpus: '1.0'
        reservations:
          memory: 1G
          cpus: '0.5'

  app:
    build: ./app
    container_name: industrial-app
    environment:
      - DATABASE_URL=postgresql://industrial_user:secure_password_here@postgres:5432/industrial_info
      - REDIS_URL=redis://redis:6379/0
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - PYTHONPATH=/app
      - WORKERS=4         # multiple workers
    volumes:
      - /volume1/docker/industrial-info/app:/app
      - /volume1/docker/industrial-info/data/logs:/app/logs
    ports:
      - "8000:8000"
    networks:
      - industrial-net
    depends_on:
      - postgres
      - redis
      - elasticsearch
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 4G      # supports multiple workers
          cpus: '2.0'
        reservations:
          memory: 2G
          cpus: '1.0'

  # Dedicated NLP worker (high performance)
  nlp-worker:
    build: ./app
    container_name: industrial-nlp-worker
    command: celery -A main.celery worker --loglevel=info --concurrency=4 -Q nlp
    environment:
      - DATABASE_URL=postgresql://industrial_user:secure_password_here@postgres:5432/industrial_info
      - REDIS_URL=redis://redis:6379/0
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - SPACY_MODEL_PATH=/app/models
    volumes:
      - /volume1/docker/industrial-info/app:/app
      - /volume1/docker/industrial-info/data/logs:/app/logs
      - /volume1/docker/industrial-info/data/models:/app/models
    networks:
      - industrial-net
    depends_on:
      - postgres
      - redis
      - elasticsearch
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 4G      # room for loading NLP models
          cpus: '2.0'
        reservations:
          memory: 2G
          cpus: '1.0'

  # Data collection worker
  collector-worker:
    build: ./app
    container_name: industrial-collector
    command: celery -A main.celery worker --loglevel=info --concurrency=2 -Q collector
    environment:
      - DATABASE_URL=postgresql://industrial_user:secure_password_here@postgres:5432/industrial_info
      - REDIS_URL=redis://redis:6379/0
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    volumes:
      - /volume1/docker/industrial-info/app:/app
      - /volume1/docker/industrial-info/data/logs:/app/logs
    networks:
      - industrial-net
    depends_on:
      - postgres
      - redis
      - elasticsearch
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 2G
          cpus: '1.0'
        reservations:
          memory: 1G
          cpus: '0.5'

  # Scheduler
  scheduler:
    build: ./app
    container_name: industrial-scheduler
    command: celery -A main.celery beat --loglevel=info
    environment:
      - DATABASE_URL=postgresql://industrial_user:secure_password_here@postgres:5432/industrial_info
      - REDIS_URL=redis://redis:6379/0
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    volumes:
      - /volume1/docker/industrial-info/app:/app
      - /volume1/docker/industrial-info/data/logs:/app/logs
    networks:
      - industrial-net
    depends_on:
      - postgres
      - redis
      - elasticsearch
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '0.5'

  # Monitoring (Prometheus)
  prometheus:
    image: prom/prometheus:latest
    container_name: industrial-prometheus
    volumes:
      - /volume1/docker/industrial-info/config/prometheus.yml:/etc/prometheus/prometheus.yml
      - /volume1/docker/industrial-info/data/prometheus:/prometheus
    ports:
      - "9090:9090"
    networks:
      - industrial-net
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: '0.5'

  # Monitoring dashboard (Grafana)
  grafana:
    image: grafana/grafana:latest
    container_name: industrial-grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin123   # change this in production
    volumes:
      - /volume1/docker/industrial-info/data/grafana:/var/lib/grafana
    ports:
      - "3000:3000"
    networks:
      - industrial-net
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: '0.5'

  nginx:
    image: nginx:alpine
    container_name: industrial-nginx
    volumes:
      - /volume1/docker/industrial-info/config/nginx.conf:/etc/nginx/nginx.conf
      - /volume1/docker/industrial-info/data/logs:/var/log/nginx
    ports:
      - "80:80"
      - "443:443"
    networks:
      - industrial-net
    depends_on:
      - app
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '0.5'
```

---

## ⚙️ High-Performance Configuration Files

### PostgreSQL optimization (32GB host)
```ini
# /volume1/docker/industrial-info/config/postgresql.conf
# Tuned for the DS1525+ with 32GB RAM (4GB container limit)

# Memory
shared_buffers = 2GB                  # 50% of the 4GB container limit
effective_cache_size = 16GB           # planner hint; includes the OS page cache
maintenance_work_mem = 512MB          # large index builds
work_mem = 64MB                       # per sort/hash operation

# Checkpoints
checkpoint_completion_target = 0.9
checkpoint_timeout = 15min
max_wal_size = 4GB
min_wal_size = 1GB

# Connections
max_connections = 200
shared_preload_libraries = 'pg_stat_statements'

# Logging
log_destination = 'stderr'
logging_collector = on
log_directory = '/var/lib/postgresql/data/log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d
log_rotation_size = 100MB
log_min_duration_statement = 1000     # log queries slower than 1 second

# Performance
random_page_cost = 1.1                # SSD
effective_io_concurrency = 200        # SSD concurrent I/O
max_worker_processes = 4              # matches the 4 Ryzen cores
max_parallel_workers = 4
max_parallel_workers_per_gather = 2

# Autovacuum tuning
autovacuum = on
autovacuum_max_workers = 2
autovacuum_naptime = 30s
```

### Elasticsearch optimization (8GB JVM)
```yaml
# /volume1/docker/industrial-info/config/elasticsearch.yml
cluster.name: "industrial-cluster"
node.name: "industrial-node-1"

path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs

# Network
network.host: 0.0.0.0
http.port: 9200
transport.port: 9300

# Memory (32GB host)
bootstrap.memory_lock: true
indices.memory.index_buffer_size: 2GB
indices.fielddata.cache.size: 2GB

# Performance
thread_pool.write.queue_size: 1000
thread_pool.search.queue_size: 1000
indices.queries.cache.size: 1GB

# Note: since Elasticsearch 5.x, per-index settings (index.store.type,
# index.merge.*, analyzer definitions) may NOT be placed in
# elasticsearch.yml — the node refuses to start. Apply them per index or
# via an index template instead. Korean analysis with nori additionally
# requires the plugin:
#   bin/elasticsearch-plugin install analysis-nori
```

### Redis optimization (2GB)
```ini
# /volume1/docker/industrial-info/config/redis.conf
# Tuned for the DS1525+ with 32GB RAM

# Memory
maxmemory 2gb
maxmemory-policy allkeys-lru

# Persistence (SSD-friendly)
save 900 1
save 300 10
save 60 10000

# AOF
appendonly yes
appendfsync everysec
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# Network
tcp-keepalive 300
timeout 0

# Data-structure tuning (Redis 7 listpack names; the older
# *-ziplist-* spellings are still accepted as aliases)
hash-max-listpack-entries 512
hash-max-listpack-value 64
list-max-listpack-size -2
set-max-intset-entries 512
zset-max-listpack-entries 128
zset-max-listpack-value 64

# Logging
loglevel notice
logfile ""
```

---

## 📊 Performance Benchmarks (DS1525+ 32GB)

### Expected performance
```
Concurrent users:   100 (with headroom)
Search latency:     50-100ms (very fast)
RSS collection:     500 feeds/min (bulk processing)
NLP throughput:     10,000 docs/hour (fast processing)
Database:           1,000 TPS (high throughput)
```

### Resource utilization targets
```
CPU usage:      30-40% average (headroom)
Memory usage:   70-80% average (efficient)
Disk I/O:       full SSD performance
Network:        Gigabit bandwidth
```

---

## 💾 480GB SSD Optimization

### Disk space allocation
```
Total capacity: 480GB
├── DSM system:              50GB
├── Docker images:           20GB
├── PostgreSQL data:        100GB
├── Elasticsearch indices:  150GB
├── Application logs:        30GB
├── Backups:                 80GB
└── Free space:              50GB (~10% headroom)
```

### SSD lifespan settings
```bash
#!/bin/bash
# SSD optimization script

# Use the deadline I/O scheduler for the SSD
# (this is a scheduler change, not TRIM — TRIM runs separately,
#  e.g. via DSM's scheduled SSD TRIM or fstrim)
echo 'deadline' > /sys/block/sda/queue/scheduler

# Log rotation
cat > /etc/logrotate.d/docker << EOF
/volume1/docker/industrial-info/data/logs/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    create 644 root root
}
EOF

# Minimize swap usage
echo 'vm.swappiness=1' >> /etc/sysctl.conf
echo 'vm.vfs_cache_pressure=50' >> /etc/sysctl.conf
```

### Automated cleanup script
```bash
#!/bin/bash
# /volume1/docker/industrial-info/scripts/cleanup.sh

# Prune unused Docker images (weekly)
docker system prune -f

# Compress log files older than a day (daily)
find /volume1/docker/industrial-info/data/logs -name "*.log" -mtime +1 -exec gzip {} \;

# Delete backups older than 30 days (monthly)
find /volume1/docker/industrial-info/backup -name "*.gz" -mtime +30 -delete

# Force-merge Elasticsearch indices (weekly)
curl -X POST "localhost:9200/_forcemerge?max_num_segments=1"

echo "Cleanup completed: $(date)"
```

---

## 🔧 Monitoring and Alerting

### Prometheus configuration
```yaml
# /volume1/docker/industrial-info/config/prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'industrial-app'
    static_configs:
      - targets: ['app:8000']

  # Note: PostgreSQL, Elasticsearch and Redis do not expose Prometheus
  # metrics on their service ports. Run postgres_exporter,
  # elasticsearch_exporter and redis_exporter and point these jobs at
  # the exporter ports instead of the raw service ports below.
  - job_name: 'postgres'
    static_configs:
      - targets: ['postgres:5432']

  - job_name: 'elasticsearch'
    static_configs:
      - targets: ['elasticsearch:9200']

  - job_name: 'redis'
    static_configs:
      - targets: ['redis:6379']

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['localhost:9100']
```

### Alert rules
```yaml
# Alert when memory usage exceeds 80%
- alert: HighMemoryUsage
  expr: (1 - (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)) * 100 > 80
  for: 5m
  labels:
    severity: warning
  annotations:
    summary: "High memory usage detected"

# Alert when disk usage exceeds 85%
- alert: HighDiskUsage
  expr: (1 - (node_filesystem_avail_bytes / node_filesystem_size_bytes)) * 100 > 85
  for: 5m
  labels:
    severity: critical
  annotations:
    summary: "High disk usage detected"
```

---

## 🚀 Deployment and Launch

### One-click install script
```bash
#!/bin/bash
# /volume1/docker/industrial-info/install.sh

echo "🚀 Starting the DS1525+ industrial information system install..."

# Create the directory layout
mkdir -p /volume1/docker/industrial-info/{app,data/{postgres,elasticsearch,redis,logs,grafana,prometheus},config,backup,scripts}

# Permissions
chown -R 1000:1000 /volume1/docker/industrial-info

# Create the Docker network
docker network create industrial-net 2>/dev/null || true

# Generate the environment file
# (note: docker-compose.yml above hard-codes secure_password_here —
#  reference ${POSTGRES_PASSWORD} there if you want these generated
#  secrets actually used)
cat > /volume1/docker/industrial-info/.env << EOF
POSTGRES_PASSWORD=$(openssl rand -base64 32)
SECRET_KEY=$(openssl rand -base64 64)
ELASTICSEARCH_PASSWORD=$(openssl rand -base64 32)
EOF

# Start the containers
cd /volume1/docker/industrial-info
docker-compose up --build -d

echo "✅ Install complete! Access URLs:"
echo "  Application:    http://$(hostname -I | awk '{print $1}'):80"
echo "  Monitoring:     http://$(hostname -I | awk '{print $1}'):3000"
echo "  Elasticsearch:  http://$(hostname -I | awk '{print $1}'):9200"
```

---

## 📈 Performance Comparison Summary

### DS1525+ vs typical NAS vs Mac

| Item | DS1525+ 32GB | Typical NAS 8GB | Mac mini M2 |
|------|--------------|-----------------|-------------|
| Concurrent users | 100 | 20 | 50 |
| Search latency | 50ms | 200ms | 30ms |
| NLP throughput | 10K docs/hr | 1K docs/hr | 15K docs/hr |
| Power draw | 60W | 40W | 150W |
| 24/7 operation | ✅ ideal | ✅ possible | ❌ inefficient |
| Expandability | ✅ excellent | ⚠️ limited | ❌ limited |
| Cost efficiency | ✅ best | ✅ good | ❌ expensive |

**Bottom line**: the DS1525+ with 32GB delivers **more performance than this project needs**! 🚀 The 480GB SSD is plenty, and 32GB of RAM runs every service with room to spare. An excellent choice! 👍
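As a sanity check, the hard memory limits from the Compose file can be summed and compared against the 32GB budget. This is a minimal sketch: the per-container figures are copied from the `docker-compose.yml` in this guide, and the 2GB DSM reservation comes from the allocation plan.

```python
# Verify the Compose hard memory limits fit within 32GB once the
# DSM system reservation from the allocation plan is set aside.
# Figures copied from the docker-compose.yml in this guide.

LIMITS_GB = {
    "postgres": 4.0,
    "elasticsearch": 10.0,   # 8GB JVM heap + off-heap headroom
    "redis": 2.0,
    "app": 4.0,
    "nlp-worker": 4.0,
    "collector-worker": 2.0,
    "scheduler": 0.5,
    "prometheus": 1.0,
    "grafana": 1.0,
    "nginx": 0.5,
}

TOTAL_RAM_GB = 32.0
DSM_RESERVED_GB = 2.0   # from the memory allocation plan

def worst_case_usage() -> float:
    """Sum of all hard limits — hit only if every container peaks at once."""
    return sum(LIMITS_GB.values())

if __name__ == "__main__":
    used = worst_case_usage()
    budget = TOTAL_RAM_GB - DSM_RESERVED_GB
    print(f"worst case: {used}GB of {budget}GB budget")
    assert used <= budget, "container limits exceed the memory budget"
```

Note that the worst case leans on the 6GB earmarked for filesystem cache (Elasticsearch's 10G limit exceeds its 8GB line item); that is fine in practice, since the page cache shrinks under memory pressure.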
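Because recent Elasticsearch releases reject per-index settings placed in `elasticsearch.yml`, the nori Korean-analyzer configuration is normally applied through an index template at index-creation time. Below is a hedged sketch that builds such a template body with only the Python standard library; the template name `industrial-docs`, the index pattern, and the field names are illustrative assumptions, and the cluster must have the `analysis-nori` plugin installed for the template to work.

```python
import json

# Build an index-template body carrying the per-index settings that may
# not live in elasticsearch.yml: the nori Korean analyzer plus merge
# tuning. Names here are illustrative, not from the original guide.
def build_nori_template(pattern: str = "industrial-*") -> dict:
    return {
        "index_patterns": [pattern],
        "template": {
            "settings": {
                "index.merge.scheduler.max_thread_count": 2,
                "analysis": {
                    "analyzer": {
                        "nori": {
                            "type": "custom",
                            "tokenizer": "nori_tokenizer",
                            "filter": ["nori_part_of_speech", "lowercase"],
                        }
                    }
                },
            },
            "mappings": {
                "properties": {
                    "title": {"type": "text", "analyzer": "nori"},
                    "body": {"type": "text", "analyzer": "nori"},
                }
            },
        },
    }

if __name__ == "__main__":
    body = build_nori_template()
    # PUT this to the cluster, e.g.:
    #   curl -X PUT "localhost:9200/_index_template/industrial-docs" \
    #        -H 'Content-Type: application/json' -d @template.json
    print(json.dumps(body, indent=2))
```

Keeping the template in code (or a JSON file under `config/`) means the analyzer travels with the deployment instead of depending on node-level configuration.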