MinIO on Docker Swarm — Production Guide
This site mirrors the repository's manual and is optimized for SEO. You can also read the single-file guide in the repo root: README-minio-swarm.md.
Production-Ready MinIO on Docker Swarm (Distributed / Multi-Node)
A complete, opinionated guide aligned with MinIO production practices—now using /data1, /data2, … paths and including a universal host configuration checklist (hostnames, users, firewall, etc.).
Reference topology (adjust as needed):
4 storage nodes (10.10.13.51–54) + 1 load balancer (10.10.13.55).
Each storage node has 4 XFS volumes mounted at /data1, /data2, /data3, /data4.
Architecture Overview
MinIO Docker Swarm Cluster Architecture
- Distributed MinIO across nodes and disks with erasure coding (MinIO, Inc., 2024). You'll deploy one service per node (minio1..minio4) for stable addressing and easier ops.
- Server pools support online capacity expansion; plan consistent disk sizes within a pool (MinIO, Inc., 2024).
- An external L4/L7 load balancer (NGINX shown) fronts the S3 API (port 9000) and the Console (port 9001), providing an S3-compatible object storage service (Amazon Web Services, 2024).
Global Host Configuration (All Nodes)
Do this on every storage node and the load balancer before deploying.
1) Update All Packages
First, ensure all system packages are up-to-date. This minimizes potential conflicts and security vulnerabilities.
sudo dnf upgrade -y
2) Set hostnames (consistent naming)
On each node:
# Replace N with 1..4 on storage nodes; use "minio-lb" on the balancer
sudo hostnamectl set-hostname minioN
3) Hostname resolution
Create a shared mapping (use your real IPs):
cat <<'EOF' | sudo tee -a /etc/hosts
10.10.13.51 minio1
10.10.13.52 minio2
10.10.13.53 minio3
10.10.13.54 minio4
10.10.13.55 minio-lb
EOF
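A quick sanity check that every name resolves on the current node (a simple loop; adjust the list if your names differ):
# Confirm each hostname resolves via /etc/hosts
for h in minio1 minio2 minio3 minio4 minio-lb; do getent hosts "$h"; done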
4) Create a system group/user (optional but recommended for directory ownership)
# A local OS user/group to own the data mountpoints on the host:
sudo groupadd --system minio || true
sudo useradd --system --no-create-home --shell /sbin/nologin --gid minio minio || true
5) Time sync, firewall, performance profile (RHEL/AlmaLinux example)
sudo dnf -y install chrony firewalld tuned policycoreutils-python-utils setools-console jq
sudo systemctl enable --now chronyd firewalld
sudo tuned-adm profile throughput-performance
6) Configure Firewall (Docker Swarm & MinIO)
These rules cover both Docker Swarm communication and MinIO application traffic. The source 10.10.13.0/24 should match your node subnet.
# --- Docker Swarm Communication ---
# Manager node only (e.g., host ending in .51)
sudo firewall-cmd --permanent --add-rich-rule="rule family=ipv4 source address='10.10.13.0/24' port port='2377' protocol='tcp' accept"
sudo firewall-cmd --permanent --add-rich-rule="rule protocol value='esp' accept"
# All nodes except LB
sudo firewall-cmd --permanent --add-rich-rule="rule family=ipv4 source address='10.10.13.0/24' port port='7946' protocol='tcp' accept"
sudo firewall-cmd --permanent --add-rich-rule="rule family=ipv4 source address='10.10.13.0/24' port port='7946' protocol='udp' accept"
sudo firewall-cmd --permanent --add-rich-rule="rule family=ipv4 source address='10.10.13.0/24' port port='4789' protocol='udp' accept"
# --- MinIO Application Ports ---
# All storage nodes: Allow 9000 (S3) & 9001 (Console) from cluster + LB
for SRC in 10.10.13.51 10.10.13.52 10.10.13.53 10.10.13.54 10.10.13.55; do
sudo firewall-cmd --permanent --add-rich-rule="rule family=ipv4 source address='${SRC}' port port='9000' protocol='tcp' accept"
sudo firewall-cmd --permanent --add-rich-rule="rule family=ipv4 source address='${SRC}' port port='9001' protocol='tcp' accept"
done
# --- Apply All Rules ---
sudo firewall-cmd --reload
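To confirm the active rule set on a node after reloading (output varies by zone and node role):
sudo firewall-cmd --list-all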
7) SELinux (if enforcing)
# Allow containers to use /data1..4
sudo semanage fcontext -a -t container_file_t '/data[1-4](/.*)?'
sudo restorecon -Rv /data1 /data2 /data3 /data4
Requirements & Sizing
- Filesystem: XFS is the recommended filesystem for MinIO production (MinIO, Inc., 2024).
- Disks: one mount per disk (/data1, /data2, /data3, /data4)—no nested directories (MinIO, Inc., 2024).
- Consistency: keep drive sizes/types consistent within a pool to avoid reduced erasure-coding efficiency (MinIO, Inc., 2024).
- Network: low latency between nodes; 10GbE (or better) is ideal.
Prepare Disks (XFS) and Mounts /data1..4
Example for 4 drives per node (/dev/sdb..sde):
# Partition & format
for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
sudo parted -s "$d" mklabel gpt mkpart xfs 1MiB 100%
sudo mkfs.xfs -f "${d}1"
done
# Create mountpoints
sudo mkdir -p /data1 /data2 /data3 /data4
# Add to /etc/fstab (adjust device names if needed)
echo "/dev/sdb1 /data1 xfs defaults 0 0" | sudo tee -a /etc/fstab
echo "/dev/sdc1 /data2 xfs defaults 0 0" | sudo tee -a /etc/fstab
echo "/dev/sdd1 /data3 xfs defaults 0 0" | sudo tee -a /etc/fstab
echo "/dev/sde1 /data4 xfs defaults 0 0" | sudo tee -a /etc/fstab
# Mount and set ownership/permissions
sudo systemctl daemon-reload && sudo mount -a
sudo chgrp minio /data1 /data2 /data3 /data4 || true
sudo chmod 770 /data1 /data2 /data3 /data4
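Before moving on, confirm that all four data filesystems are mounted as XFS; if your device ordering can change between boots, consider switching the /etc/fstab entries to UUID= identifiers from blkid.
# Verify mountpoints and filesystem types
df -hT /data1 /data2 /data3 /data4
lsblk -f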
Install Docker & Initialize Swarm
These steps should be performed on all storage nodes (minio1..4).
1) Create Sudo-Enabled Admin User
First, create a dedicated user for administration on each storage node. This user will run docker commands without needing to be root.
# Replace 'user-minio-01' with your desired username for each node
adduser user-minio-01
passwd user-minio-01
# Add the user to the 'wheel' group for sudo privileges
usermod -aG wheel user-minio-01
2) Install Docker Engine
Next, install Docker Engine using the official repositories. This ensures you get the latest stable version.
# Remove any old Docker versions (optional but recommended)
sudo dnf -y remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine || true
# Add the Docker CE repository
sudo dnf install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install Docker packages
sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Enable and start the Docker service
sudo systemctl enable --now docker
sudo systemctl status docker
# Add your admin user to the 'docker' group to run Docker commands without sudo
# Replace 'user-minio-01' with the username you created
sudo usermod -aG docker user-minio-01
# Log out and back in (or reboot) for the group change to take effect
echo "Log out and back in (or reboot) as the new user so the docker group membership applies."
Important: After logging back in as the new user (e.g., user-minio-01), use that account for all subsequent steps. You should be able to run docker commands without sudo.
3) Initialize Swarm
Once Docker is running on all nodes, initialize the Swarm on your designated manager node and join the workers. Docker Swarm mode provides native cluster management capabilities integrated with Docker Engine (Docker, Inc., 2024).
# On the manager node (e.g., minio1):
docker swarm init --advertise-addr <manager-ip>
# On each worker node (e.g., minio2, minio3, minio4):
docker swarm join --token <token-from-init> <manager-ip>:2377
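If you need the worker join command again, print it on the manager, then confirm that all four nodes are listed and Ready:
# On the manager: print the worker join command (includes the token)
docker swarm join-token worker
# Verify cluster membership
docker node ls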

Docker Swarm node status in CLI
Node Labels & Overlay Network
Label each node so one MinIO service lands on it. Docker Swarm's declarative service model allows you to define the desired state of services in your application stack (Docker, Inc., 2024):
sudo docker node update --label-add minio.id=1 --label-add minio.pool=1 <node-1-name>
sudo docker node update --label-add minio.id=2 --label-add minio.pool=1 <node-2-name>
sudo docker node update --label-add minio.id=3 --label-add minio.pool=1 <node-3-name>
sudo docker node update --label-add minio.id=4 --label-add minio.pool=1 <node-4-name>
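To double-check that the labels landed on the intended nodes, a quick inspect loop works (output formatting is a sketch):
# Print hostname and labels for every node
sudo docker node ls -q | xargs sudo docker node inspect --format '{{ .Description.Hostname }}: {{ .Spec.Labels }}'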
Create an attachable overlay network (optionally encrypted—benchmark first). Docker Swarm provides multi-host networking capabilities through overlay networks (Docker, Inc., 2024):
sudo docker network create --driver overlay --attachable minio_net  # add --opt encrypted to enable encryption (benchmark first)
Secrets (Root Credentials)
Create the root user and password as Docker secrets. This command should be run only on a manager node. It uses read -s to prompt for credentials without saving them to your shell history.
# Interactively and securely create secrets
read -rsp "MINIO_ROOT_USER: " MINIO_ROOT_USER; echo
read -rsp "MINIO_ROOT_PASSWORD: " MINIO_ROOT_PASSWORD; echo
# Create Docker secrets from the variables
printf '%s' "$MINIO_ROOT_USER" | sudo docker secret create minio_root_user -
printf '%s' "$MINIO_ROOT_PASSWORD" | sudo docker secret create minio_root_password -
# Unset the variables to remove them from the shell session
unset MINIO_ROOT_USER MINIO_ROOT_PASSWORD
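You can confirm the secrets exist; Docker only lists their names and metadata, never the values:
sudo docker secret ls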
Deploy the Cluster (minio-stack.yml)
On the manager node only, create a dedicated directory to store the stack manifest. This keeps your configuration organized and secure.
sudo mkdir -p /opt/minio
sudo chown -R user-minio-01:user-minio-01 /opt/minio
sudo chmod 700 /opt/minio
Now, save the following content as /opt/minio/minio-stack.yml.
Deploy command:
sudo docker stack deploy -c /opt/minio/minio-stack.yml minio

Docker Swarm service list after deployment
version: "3.9"

x-minio-common: &minio-common
  image: quay.io/minio/minio:RELEASE.YYYY-MM-DDThh-mm-ssZ # Pin a specific release
  networks: [minio_net]
  environment:
    # Public S3 URL (for clients and pre-signed URLs)
    MINIO_SERVER_URL: "https://example.com/minio/s3/"
    # Public Console URL
    MINIO_BROWSER_REDIRECT_URL: "https://example.com/minio/ui/"
    MINIO_ROOT_USER_FILE: /run/secrets/minio_root_user
    MINIO_ROOT_PASSWORD_FILE: /run/secrets/minio_root_password
  secrets:
    - minio_root_user
    - minio_root_password
  ulimits:
    nofile: { soft: 65536, hard: 65536 }
  stop_grace_period: 1m
  healthcheck:
    test: ["CMD-SHELL", "curl -fsS 'http://localhost:9000/minio/health/live' || exit 1"]
    interval: 30s
    timeout: 10s
    retries: 3
  command: >
    minio server --console-address ":9001"
    http://minio{1...4}/data{1...4}
  volumes:
    - /data1:/data1
    - /data2:/data2
    - /data3:/data3
    - /data4:/data4
  ports:
    - target: 9000
      published: 9000
      protocol: tcp
      mode: host
    - target: 9001
      published: 9001
      protocol: tcp
      mode: host

services:
  minio1:
    <<: *minio-common
    hostname: minio1
    deploy:
      placement:
        constraints: [ "node.labels.minio.id == 1" ]
      restart_policy: { condition: any, delay: 5s }
      update_config: { parallelism: 1, order: stop-first, failure_action: rollback }
  minio2:
    <<: *minio-common
    hostname: minio2
    deploy:
      placement:
        constraints: [ "node.labels.minio.id == 2" ]
      restart_policy: { condition: any, delay: 5s }
      update_config: { parallelism: 1, order: stop-first, failure_action: rollback }
  minio3:
    <<: *minio-common
    hostname: minio3
    deploy:
      placement:
        constraints: [ "node.labels.minio.id == 3" ]
      restart_policy: { condition: any, delay: 5s }
      update_config: { parallelism: 1, order: stop-first, failure_action: rollback }
  minio4:
    <<: *minio-common
    hostname: minio4
    deploy:
      placement:
        constraints: [ "node.labels.minio.id == 4" ]
      restart_policy: { condition: any, delay: 5s }
      update_config: { parallelism: 1, order: stop-first, failure_action: rollback }

networks:
  minio_net:
    external: true

secrets:
  minio_root_user:
    external: true
  minio_root_password:
    external: true
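Once deployed, the stack should converge to one MinIO task per labeled node. A quick status check from the manager:
# Watch tasks and services converge
sudo docker stack ps minio
sudo docker service ls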
Why this layout?
- The command enumerates all nodes and disks (http://minio{1...4}/data{1...4}), allowing MinIO to automatically group drives into erasure sets for data redundancy (MinIO, Inc., 2024).
- One service per node ensures stable names (minio1..4), simplifies ops, and supports adding a Pool 2 later (minio5..8) without redesign.
Load Balancer Node Setup (NGINX)
These steps apply only to the Load Balancer (LB) node.
1) Install NGINX & Configure SELinux
Install NGINX and allow it to make network connections, which is required for proxying traffic to the MinIO backend.
sudo dnf -y install nginx
sudo setsebool -P httpd_can_network_connect 1
2) Open Firewall Ports
Allow public traffic on standard HTTP and HTTPS ports.
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
3) Create Admin User & Set Hostname
Set a unique hostname and create a dedicated admin user for the LB node.
sudo hostnamectl set-hostname minio-lb
adduser user-minio-lb
passwd user-minio-lb
usermod -aG wheel user-minio-lb
4) TLS Certificate Setup
Choose one of the following two options to secure your NGINX proxy.
Option A: Let's Encrypt with Certbot (Recommended)
This is the preferred method for obtaining and managing free, trusted TLS certificates.
# Install Certbot for NGINX
sudo dnf -y install certbot python3-certbot-nginx
# Obtain and install a certificate (this will also update your NGINX config)
sudo certbot --nginx -d example.com
Option B: Manual Certificate Installation
Use this method if you have a commercial or self-signed certificate. Place your certificate and private key in the specified directory.
# Create a directory for TLS certificates
sudo mkdir -p /etc/nginx/tls
# Set secure ownership and permissions
sudo chown root:nginx /etc/nginx/tls
sudo chmod 750 /etc/nginx/tls
# Copy your certificate and key, then set permissions
# sudo cp /path/to/your/fullchain.pem /etc/nginx/tls/
# sudo cp /path/to/your/privkey.pem /etc/nginx/tls/
sudo chown root:nginx /etc/nginx/tls/*
sudo chmod 640 /etc/nginx/tls/*
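If you only need a temporary certificate for testing (not production), a self-signed one can be generated in place; example.com is the placeholder domain used throughout this guide:
# Self-signed certificate for testing only
sudo openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout /etc/nginx/tls/privkey.pem \
  -out /etc/nginx/tls/fullchain.pem \
  -subj "/CN=example.com"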
Load Balancer (NGINX)
Expose the S3 API at /minio/s3/ and the Console at /minio/ui/ (or use dedicated subdomains for each). These paths match the MINIO_SERVER_URL and MINIO_BROWSER_REDIRECT_URL values set in the stack file.
# ─────────── Global Directives ───────────
# For WebSocket support
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Allow underscores in headers for S3 compatibility
underscores_in_headers on;

# ─────────── Upstreams ───────────
upstream minio_s3 {
    least_conn;
    server minio1:9000 max_fails=3 fail_timeout=10s;
    server minio2:9000 max_fails=3 fail_timeout=10s;
    server minio3:9000 max_fails=3 fail_timeout=10s;
    server minio4:9000 max_fails=3 fail_timeout=10s;
    keepalive 64;
}

upstream minio_console {
    least_conn;
    server minio1:9001 max_fails=3 fail_timeout=10s;
    server minio2:9001 max_fails=3 fail_timeout=10s;
    server minio3:9001 max_fails=3 fail_timeout=10s;
    server minio4:9001 max_fails=3 fail_timeout=10s;
    keepalive 32;
}

# ─────────── HTTP → HTTPS ───────────
server {
    listen 80;
    listen [::]:80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

# ─────────── HTTPS Server ───────────
server {
    listen 443 ssl http2;
    server_name example.com;

    # ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    # ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    client_max_body_size 0;
    proxy_buffering off;
    proxy_request_buffering off;

    location /minio/s3/ {
        rewrite ^/minio/s3/(.*) /$1 break;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        chunked_transfer_encoding off;
        proxy_pass http://minio_s3;
    }

    location /minio/ui/ {
        rewrite ^/minio/ui/(.*) /$1 break;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        chunked_transfer_encoding off;
        proxy_pass http://minio_console;
    }

    location / {
        return 308 /minio/ui/; # Use 302 if you prefer a temporary redirect during testing.
    }
}
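After saving this configuration (for example as /etc/nginx/conf.d/minio.conf, an assumed path; any file included from the http context works), validate the syntax and enable NGINX:
sudo nginx -t
sudo systemctl enable --now nginx
sudo systemctl reload nginx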

MinIO Web Console accessible through the load balancer
Connecting and Managing with MinIO Client (mc)
The MinIO Client (mc) is the recommended tool for administering your cluster. The following steps show how to run mc via a Docker container, ensuring a consistent environment. MinIO provides S3-compatible APIs that can be accessed through various client tools (Amazon Web Services, 2024; MinIO, Inc., 2024).
1. Set Up mc via Docker
First, create a persistent volume for the mc configuration and launch a container with an interactive shell. This setup can be done on any host with Docker that can reach the cluster (either a cluster node or an external machine).
# 1) (One-time only) Create a volume for mc config
docker volume create mc-config
# 2) Open a shell inside the mc container
# --network host is used for easy access when running on a cluster node
docker run --rm -it --network host -v mc-config:/root/.mc --entrypoint sh --name mc-shell quay.io/minio/mc
Note: All subsequent mc commands are run inside this container's shell.
2. Configure a Cluster Alias
Next, create an alias to connect to your MinIO cluster. An alias is a nickname for a MinIO deployment, storing its URL and credentials.
# Get the root credentials. Docker does not reveal secret values via `docker secret inspect`,
# so read them from a running MinIO container instead (run on the node hosting minio1):
MINIO_ROOT_USER=$(sudo docker exec "$(sudo docker ps -q -f name=minio_minio1)" cat /run/secrets/minio_root_user)
MINIO_ROOT_PASSWORD=$(sudo docker exec "$(sudo docker ps -q -f name=minio_minio1)" cat /run/secrets/minio_root_password)
# Create the alias inside the mc container
# Option A: From an external host (points to the NGINX proxy)
mc alias set myminio https://example.com/minio/s3/ "$MINIO_ROOT_USER" "$MINIO_ROOT_PASSWORD" --api "s3v4"
# Option B: From a node within the cluster (points directly to a MinIO service)
mc alias set myminio http://minio1:9000 "$MINIO_ROOT_USER" "$MINIO_ROOT_PASSWORD"
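Verify that the alias works and that the cluster reports all servers and drives online:
mc admin info myminio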
3. Example Workflow: Create a Bucket, User, and Policy
This workflow demonstrates how to create a dedicated user with access restricted to a single bucket.
Step 1: Create a Bucket
# Create a new bucket named 'test-bucket'
mc mb myminio/test-bucket
# Verify the bucket was created
mc ls myminio
Step 2: Create a Read/Write Policy
Create a JSON file with a policy that grants full access (s3:*
) but only to test-bucket
.
# Inside the mc container, create the policy file
cat <<'EOF' > /tmp/test-bucket-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::test-bucket",
        "arn:aws:s3:::test-bucket/*"
      ]
    }
  ]
}
EOF
# Add the new policy to MinIO
mc admin policy create myminio test-bucket-policy /tmp/test-bucket-policy.json
Step 3: Create a User and Attach the Policy
Now, create a new user and assign the policy you just created.
# Create a new user named 'user.test' with a secure password
mc admin user add myminio user.test 'YOUR_SECURE_PASSWORD_HERE'
# Attach the bucket-specific policy to the new user
mc admin policy attach myminio test-bucket-policy --user user.test
Step 4: Generate a Service Account for Application Use
For applications, it is best practice to use service accounts, which are long-lived credentials tied to a user. The user user.test can create a service account for themselves.
# First, create an alias for the new user
mc alias set testuser https://example.com/minio/s3/ user.test 'YOUR_SECURE_PASSWORD_HERE' --api "s3v4"
# Now, as 'testuser', create a service account
mc admin user svcacct add --access-key 'app.test.access.key' --secret-key 'app.test.secret.key' testuser user.test
The generated Access Key and Secret Key can now be used in your application's S3 client configuration to interact with test-bucket.
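To sanity-check the new credentials end to end, you can register them as another alias inside the mc container and perform a test upload (the alias name and file are illustrative):
# Test the service-account credentials
mc alias set app-svc https://example.com/minio/s3/ 'app.test.access.key' 'app.test.secret.key' --api "s3v4"
echo "hello" > /tmp/hello.txt
mc cp /tmp/hello.txt app-svc/test-bucket/
mc ls app-svc/test-bucket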

MinIO cluster status and information via mc admin
Health, Readiness & Observability
- Health endpoints:
  - Liveness: GET /minio/health/live
  - Readiness: GET /minio/health/ready
  - Cluster: GET /minio/health/cluster
- Metrics: scrape Prometheus targets at /minio/v2/metrics/cluster (and node metrics if desired).
- Consider mc admin prometheus generate to bootstrap target lists.
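These endpoints can be probed directly with curl, either against a node or through the load balancer path prefix (hostnames here assume the /etc/hosts mapping above):
# Node-level readiness
curl -fsS http://minio1:9000/minio/health/ready && echo "ready"
# Cluster-wide health (fails if the cluster lacks quorum)
curl -fsS http://minio1:9000/minio/health/cluster && echo "cluster ok"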
Upgrades (Zero-Downtime) & Rolling Updates
- Pin a specific MinIO release tag in /opt/minio/minio-stack.yml.
- To upgrade, change the image tag and redeploy the stack. Docker Swarm supports rolling updates, allowing you to apply service updates incrementally to maintain availability (Docker, Inc., 2024):
sudo docker stack deploy -c /opt/minio/minio-stack.yml minio
- Services restart in place; S3 clients retry seamlessly in most cases.
Capacity Expansion: Add a New Server Pool
Add nodes minio5..minio8 with their own /data1..4. Update the same server command on all services to include both pools. MinIO supports pool expansion to scale storage capacity while maintaining data availability (MinIO, Inc., 2024), e.g.:
server --console-address ":9001" http://minio{1...4}/data{1...4} http://minio{5...8}/data{1...4}
Then roll the stack so every service picks up the expanded endpoint list.
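Before extending the command, label the new nodes with the same scheme so their services are placed correctly (a sketch assuming node names minio5..minio8 and a second pool):
sudo docker node update --label-add minio.id=5 --label-add minio.pool=2 <node-5-name>
# Repeat with minio.id=6..8 for <node-6-name>..<node-8-name>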
Security Hardening Checklist
- TLS everywhere (LB and/or node).
- Secrets for root credentials (*_FILE env).
- Least-privilege app users & bucket policies—never use root creds in apps.
- Filesystem: XFS, consistent disk sizes within a pool, one mount per disk.
- Overlay encryption: optional; validate impact in staging before enabling.
- ulimits: raise nofile (e.g., 65k) if you expect high concurrency.
- Firewall & SELinux: restrict ingress, set proper SELinux contexts on /data*.
Troubleshooting
- A node doesn't join the cluster: all services must list the same endpoints; check DNS (/etc/hosts) and command strings.
- Permission denied on /data*: verify ownership/permissions and SELinux context; ensure the container UID can write.
- LB 502/504 or Console errors: confirm upstreams are healthy, keep-alives enabled, and MINIO_BROWSER_REDIRECT_URL is set correctly.
- Drive replaced: mount it at the same path (/dataX), fix ownership/labels, then run mc admin heal -r.
Appendix: Handy Commands
# Swarm status
docker node ls
docker service ls
docker service ps minio_minio1
# Network
docker network inspect minio_net
# Cluster
mc admin info myminio
mc admin top locks myminio
mc admin heal -r myminio
References
Amazon Web Services. (2024). What is Amazon S3? - Amazon Simple Storage Service. Amazon Web Services. https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html
Docker, Inc. (2024). Swarm mode | Docker Docs. Docker Documentation. https://docs.docker.com/engine/swarm/
MinIO, Inc. (2024). Deployment architecture — MinIO object storage (AGPLv3). MinIO Documentation. https://docs.min.io/community/minio-object-store/operations/concepts/architecture.html