Airgapped Network Deployment Tutorial#
Overview#
This tutorial guides system administrators through deploying CipherSwarm in an airgapped (non-Internet-connected) network environment. You will deploy the server using Docker Compose and connect Linux x64 binary agents to it.
Target Audience: Linux system administrators familiar with Docker and networking but not necessarily software development.
Prerequisites:
- Docker and Docker Compose installed on server
- Linux systems with network connectivity to server (isolated LAN)
- Downloaded CipherSwarm Docker images and agent binary
- Basic understanding of hash cracking concepts (you know what hashcat does)
- Sufficient RAM for tmpfs mounts. The default 768MB (512MB for /tmp and 256MB for /rails/tmp) supports attack resources up to ~1GB. For larger files, size tmpfs using the formula:
TMPFS_TMP_SIZE >= 1.5 × largest_attack_resource_file. For 100GB+ files, this requires 150GB+ of RAM for tmpfs, or use the disk-backed TMPDIR volume approach instead. See Docker Storage and Temp Management for comprehensive sizing guidance.
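The sizing rule above can be checked with a quick shell calculation. This is a sketch; the `suggest_tmpfs_mb` helper name and the resource directory path are our own examples, not CipherSwarm defaults:

```shell
# suggest_tmpfs_mb BYTES: apply the rule
# TMPFS_TMP_SIZE >= 1.5 x largest_attack_resource_file,
# rounding up to whole megabytes.
suggest_tmpfs_mb() {
  bytes=$1
  echo $(( (bytes * 3 / 2 + 1024 * 1024 - 1) / (1024 * 1024) ))
}

# Example: size against the largest file in a local staging directory
# (path is illustrative; point it at wherever you keep attack resources)
largest=$(find /opt/cipherswarm/resources -type f -printf '%s\n' 2>/dev/null | sort -n | tail -1)
suggest_tmpfs_mb "${largest:-0}"
```

For example, a 1 GB (1073741824-byte) wordlist yields 1536, i.e. set TMPFS_TMP_SIZE=1536m.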
What is CipherSwarm?#
CipherSwarm is a distributed hash cracking system consisting of:
- Server: A Ruby on Rails web application that manages cracking campaigns, distributes tasks, and stores results
- Agents: Go-based workers that run hashcat on cracking nodes and report progress back to the server
Architecture: agents connect to the server over the isolated LAN, receiving task assignments and reporting results back; the server coordinates campaigns and stores cracked hashes.
Key Terms:
- Campaign: A hash cracking job (e.g., "crack all MD5 hashes using rockyou.txt")
- Attack: A specific attack strategy (dictionary, mask, combinator, etc.) within a campaign
- Task: A work unit assigned to an agent (e.g., "crack hashes 1-1000 with this wordlist")
- Heartbeat: Periodic check-in from agent to server to maintain connection
- Benchmark: Initial performance test agents run to measure hash cracking speed
Part 1: Server Deployment#
Step 1: Prepare Docker Images#
On an Internet-connected system, export the required Docker images:
# Pull images
docker pull ghcr.io/unclesp1d3r/cipherswarm:latest
docker pull postgres:latest
docker pull redis:latest
docker pull nginx:alpine
# Save to tar archives
docker save ghcr.io/unclesp1d3r/cipherswarm:latest -o cipherswarm.tar
docker save postgres:latest -o postgres.tar
docker save redis:latest -o redis.tar
docker save nginx:alpine -o nginx.tar
Transfer the tar files to the airgapped server via approved method (USB, secure file transfer, etc.).
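Offline transfers can silently corrupt large tar files, so record checksums on the connected system and verify them after the copy. A sketch (the `verify_transfer` helper name is ours):

```shell
# On the connected system, alongside the saved images:
#   sha256sum *.tar > images.sha256
# Carry images.sha256 across with the tar files, then on the airgapped server:

verify_transfer() {
  # sha256sum -c exits nonzero if any listed file is missing or altered
  sha256sum -c "$1" || { echo "transfer corrupted: re-copy the FAILED files" >&2; return 1; }
}
```

Usage on the airgapped server: `verify_transfer images.sha256`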
Step 2: Load Images on Airgapped Server#
docker load -i cipherswarm.tar
docker load -i postgres.tar
docker load -i redis.tar
docker load -i nginx.tar
# Verify
docker images | grep -E "cipherswarm|postgres|redis|nginx"
Step 3: Configure the Server#
Download docker-compose-production.yml and place it on the server. This file defines five services:
- nginx: Reverse proxy exposing port 80
- web: Rails application (horizontally scalable)
- postgres-db: PostgreSQL database
- redis-db: Redis cache and job queue
- sidekiq: Background job workers (4 replicas)
tmpfs Mounts: The docker-compose files include tmpfs mounts for /tmp (512MB) and /rails/tmp (256MB) on both web and sidekiq services. These prevent filesystem exhaustion during Active Storage blob downloads. The tmpfs size is controlled by the TMPFS_TMP_SIZE and TMPFS_RAILS_TMP_SIZE environment variables. For deployments processing 100GB+ attack resources, use the formula TMPFS_TMP_SIZE >= 1.5 × largest_attack_resource_file (e.g., 150GB+ for 100GB wordlists). For very large files, the disk-backed TMPDIR volume approach is preferred over RAM-backed tmpfs. See Docker Storage and Temp Management for detailed sizing guidance and the TMPDIR alternative.
Create a .env file in the same directory. You can start with the provided .env.example template and customize it for your air-gapped environment:
# Required - Security
RAILS_MASTER_KEY=<generate-or-copy-from-config/master.key>
POSTGRES_PASSWORD=<your-secure-database-password>
TUSD_HOOK_SECRET=<generate-random-secret>
# Required - Networking
APPLICATION_HOST=<server-ip-or-hostname>
# Recommended for Airgapped Labs
DISABLE_SSL=true
# Optional - Storage (defaults to local)
ACTIVE_STORAGE_SERVICE=local
Note on RAILS_MASTER_KEY: This is a cryptographic key used by Rails to encrypt credentials. If you're setting up CipherSwarm for the first time, you can generate one with `openssl rand -hex 32`. If migrating from an existing deployment, copy the key from `config/master.key` on the original system.
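For a fresh first-time installation, the required secrets can be generated in one step. This is a sketch: the host IP is an example value, and a migrated deployment must copy its existing RAILS_MASTER_KEY instead of generating a new one.

```shell
# Generate a starter .env with fresh secrets (first-time installs only;
# APPLICATION_HOST below is an example, substitute your server's address)
cat > .env <<EOF
RAILS_MASTER_KEY=$(openssl rand -hex 32)
POSTGRES_PASSWORD=$(openssl rand -base64 24)
TUSD_HOOK_SECRET=$(openssl rand -hex 32)
APPLICATION_HOST=192.168.1.100
DISABLE_SSL=true
ACTIVE_STORAGE_SERVICE=local
EOF
chmod 600 .env   # the file holds secrets: restrict it to the deploying user
```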
For a complete reference of all environment variables with detailed explanations, see Environment Variables Reference.
Critical Environment Variables for Air-Gapped Deployment#
The following environment variables are critical for air-gapped deployment. For complete details on these and all other variables, see the Environment Variables Reference.
Required Variables:
| Variable | Description | Air-Gapped Context |
|---|---|---|
| `RAILS_MASTER_KEY` | Rails credentials encryption key | Transfer from `config/master.key` on original system |
| `POSTGRES_PASSWORD` | Database password (strictly enforced) | Use a strong password (min 16 chars). Container will fail to start if not set. |
| `TUSD_HOOK_SECRET` | Shared secret for authenticating tusd webhook requests | Generate with `openssl rand -hex 32`. Required in production to prevent cache-poisoning attacks. |
| `APPLICATION_HOST` | Server hostname/IP for URL generation | Use internal hostname or IP (e.g., `cipherswarm.lab` or `192.168.1.100`) |
Important Variables for Air-Gapped Environments:
| Variable | Description | Recommended Value | Reason |
|---|---|---|---|
| `DISABLE_SSL` | Disable HTTPS enforcement | `true` | Lab environments typically don't have SSL certificates |
| `ACTIVE_STORAGE_SERVICE` | Storage backend | `local` | No external S3 service available |
| `REDIS_URL` | Redis connection | `redis://redis-db:6379/0` | Must match Docker Compose service name |
Optional S3 Storage Variables (when using MinIO in air-gapped network):
When using S3-compatible storage with MinIO deployed inside your air-gapped network, see the Environment Variables Reference for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_ENDPOINT, AWS_BUCKET, AWS_FORCE_PATH_STYLE, and AWS_REGION.
Optional tmpfs Sizing Variables (for large attack resources):
| Variable | Description | Default | When to Set |
|---|---|---|---|
| `TMPFS_TMP_SIZE` | Size of `/tmp` tmpfs mount for blob downloads | `512m` | When largest attack resource exceeds 1GB. Use formula: >= 1.5 × largest_file |
| `TMPFS_RAILS_TMP_SIZE` | Size of `/rails/tmp` tmpfs mount for framework temp files | `256m` | Rarely needed. Only if Bootsnap cache exceeds 256MB |
For 100GB+ attack resources, set TMPFS_TMP_SIZE=150g (or higher) and increase the Sidekiq container memory limit proportionally. Alternatively, use the disk-backed TMPDIR volume approach documented in Docker Storage and Temp Management.
Step 4: Deploy the Server#
docker compose -f docker-compose-production.yml up -d
Wait for all services to start:
docker compose -f docker-compose-production.yml ps
All services should show healthy status.
Step 5: Initialize the Database#
On first deployment, the database must be initialized before starting web replicas. The entrypoint script requires the RUN_DB_PREPARE environment variable to run database migrations, which prevents migration races when multiple web replicas start simultaneously:
docker compose -f docker-compose-production.yml run --rm -e RUN_DB_PREPARE=true web bin/rails db:prepare
This command:
- Creates the database if it doesn't exist
- Runs pending migrations
- Seeds initial data (creates admin user)
Check the output for default admin credentials (typically admin/password - change immediately).
Important for Scaling: Only run `db:prepare` once before scaling up web replicas. Do not set `RUN_DB_PREPARE=true` in the `.env` file, or all replicas will attempt to run migrations simultaneously, causing database deadlocks.
Step 6: Verify Server Deployment#
- Health check:
curl http://<server-ip>/up
# Expected: HTTP 200 OK
- Web interface: Navigate to http://<server-ip> and log in with admin credentials
- System health: Check http://<server-ip>/system_health; all services should be green
Step 7: Create Agent Credentials#
In the web interface:
- Navigate to Agents → New Agent
- Enter agent name (e.g., gpu-node-01)
- Assign to a project or leave in default project
- Click Create Agent
- Copy the API token - you'll need this for agent configuration
The token looks like: csagent_abc123def456...
Part 2: Agent Deployment#
Step 1: Prepare Agent Binary#
On an Internet-connected system, download the Linux x64 binary:
wget https://github.com/unclesp1d3r/CipherSwarmAgent/releases/latest/download/CipherSwarmAgent_Linux_x86_64.tar.gz
Transfer to the agent system and extract:
tar -xzf CipherSwarmAgent_Linux_x86_64.tar.gz
chmod +x cipherswarm-agent
The binary is statically compiled with CGO_ENABLED=0, so it has no external library dependencies.
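Before moving the binary to a host where you cannot install packages, you can confirm it really has no dynamic-library dependencies. The `check_static` helper below is our own sketch:

```shell
# check_static FILE: succeed only if ldd finds no dynamic dependencies.
# glibc's ldd prints "not a dynamic executable" for static binaries.
check_static() {
  if ldd "$1" 2>&1 | grep -q "not a dynamic executable"; then
    echo "static: ok"
  else
    echo "warning: $1 appears dynamically linked" >&2
    return 1
  fi
}
```

Running `check_static ./cipherswarm-agent` should print `static: ok` for the CGO-disabled build.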
Step 2: Install Hashcat#
The agent requires hashcat to be installed. On the agent system:
# Debian/Ubuntu
apt-get install hashcat
# RHEL/CentOS
yum install hashcat
# Or download from https://hashcat.net/hashcat/
Verify installation:
hashcat --version
Step 3: Configure the Agent#
The agent uses a three-tier configuration system:
- Command-line flags (highest priority)
- Environment variables
- Configuration file (`cipherswarmagent.yaml`, lowest priority)
Option A: Environment Variables (recommended for airgapped):
export API_TOKEN="csagent_abc123def456..."
export API_URL="http://<server-ip>"
export DATA_PATH="/opt/cipherswarm/data"
export ALWAYS_USE_NATIVE_HASHCAT=true
export HASHCAT_PATH="/usr/bin/hashcat"
Option B: Configuration File:
Create ~/.cipherswarmagent.yaml:
api_token: csagent_abc123def456...
api_url: http://<server-ip>
data_path: /opt/cipherswarm/data
always_use_native_hashcat: true
hashcat_path: /usr/bin/hashcat
gpu_temp_threshold: 80
status_timer: 10
Agent Configuration Reference#
Required Flags:
| Flag | Environment Variable | Description | Example |
|---|---|---|---|
| `--api-token` | `API_TOKEN` | API authentication token from server | `csagent_abc123...` |
| `--api-url` | `API_URL` | CipherSwarm server URL | `http://192.168.1.100` |
Common Flags:
| Flag | Environment Variable | Default | Description |
|---|---|---|---|
| `--data-path` | `DATA_PATH` | `./data` | Directory for agent data and temp files |
| `--always-use-native-hashcat` | `ALWAYS_USE_NATIVE_HASHCAT` | `false` | Force use of system hashcat (recommended) |
| `--hashcat-path` | `HASHCAT_PATH` | auto-detect | Explicit hashcat binary path |
| `--gpu-temp-threshold` | `GPU_TEMP_THRESHOLD` | `80` | GPU temp limit (°C) before throttling |
| `--status-timer` | `STATUS_TIMER` | `10` | Status update interval (seconds) |
| `--debug` | `DEBUG` | `false` | Enable debug logging |
Fault Tolerance Flags:
| Flag | Environment Variable | Default | Description |
|---|---|---|---|
--task-timeout | TASK_TIMEOUT | 24h | Maximum task duration |
--download-max-retries | DOWNLOAD_MAX_RETRIES | 3 | Download retry attempts |
--sleep-on-failure | SLEEP_ON_FAILURE | 60s | Wait time after task failure |
Advanced Flags:
| Flag | Environment Variable | Default | Description |
|---|---|---|---|
| `--write-zaps-to-file` | `WRITE_ZAPS_TO_FILE` | `false` | Save cracked hashes to local files |
| `--zap-path` | `ZAP_PATH` | (empty) | Directory for zap files (if enabled) |
| `--retain-zaps-on-completion` | `RETAIN_ZAPS_ON_COMPLETION` | `false` | Keep zap files after task completion |
| `--extra-debugging` | `EXTRA_DEBUGGING` | `false` | Verbose debugging output |
Note on flag naming: All flags use kebab-case (e.g., `--api-token`). For backward compatibility, the agent still accepts deprecated underscore-style flags (e.g., `--api_token`), but these are not recommended and may be removed in future versions.
Step 4: Launch the Agent#
Test run (foreground):
./cipherswarm-agent
You should see:
- Authentication with server
- Benchmark tests running
- Status change to "Waiting for task"
Production deployment (as systemd service):
Create /etc/systemd/system/cipherswarm-agent.service:
[Unit]
Description=CipherSwarm Agent
After=network.target
[Service]
Type=simple
User=cipherswarm
Environment="API_TOKEN=csagent_abc123..."
Environment="API_URL=http://192.168.1.100"
Environment="DATA_PATH=/opt/cipherswarm/data"
Environment="ALWAYS_USE_NATIVE_HASHCAT=true"
Environment="HASHCAT_PATH=/usr/bin/hashcat"
ExecStart=/usr/local/bin/cipherswarm-agent
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
Enable and start:
systemctl daemon-reload
systemctl enable cipherswarm-agent
systemctl start cipherswarm-agent
systemctl status cipherswarm-agent
Part 3: Understanding Server-Agent Configuration Interaction#
The relationship between server and agent configuration determines the correct agent flags. This section explains how server settings affect agent configuration.
SSL/HTTPS Configuration#
Server Setting: DISABLE_SSL environment variable
- `DISABLE_SSL=true` (recommended for airgapped labs): Server accepts HTTP
- `DISABLE_SSL` unset (default): Server forces HTTPS
Agent Configuration:
| Server SSL Config | Agent API_URL | Agent insecure_downloads |
|---|---|---|
| `DISABLE_SSL=true` (HTTP) | `http://server-ip` | Not needed (HTTP only) |
| SSL enabled with valid cert | `https://server-ip` | `false` (default) |
| SSL enabled with self-signed cert | `https://server-ip` | `true` (dev/lab only) |
Important: The insecure_downloads flag only affects file downloads, not API authentication. It disables TLS certificate verification when downloading wordlists and other resources.
Security Note: Only use `insecure_downloads=true` in development or lab environments with self-signed certificates. Never disable certificate verification in production environments with real certificate authorities.
Storage Configuration#
Server Setting: ACTIVE_STORAGE_SERVICE environment variable
- `local` (default): Files stored in Docker volumes
- `s3`: Files stored in S3-compatible service
Agent Impact: Storage backend is transparent to agents. The server generates download URLs that work the same for both backends. No agent configuration changes needed when switching storage backends.
Heartbeat and Timing Configuration#
Server-Controlled Settings:
The server controls agent heartbeat frequency via the agent_update_interval, which is:
- Set randomly between 5-60 seconds when agent is created
- Returned to agent via API
- Used by agent to override its default 10-second interval
Agent Flags: The --heartbeat-interval flag sets a local default (10s), but the server overrides this automatically. You typically don't need to set this flag.
Network Port Configuration#
Server Port: nginx exposes port 80 in docker-compose-production.yml
Agent Configuration: Agents should use port 80 (or 443 if TLS added):
# Correct for production Docker deployment
API_URL=http://192.168.1.100
# NOT port 3000 (that's for native/development deployments)
# API_URL=http://192.168.1.100:3000 # WRONG for Docker
Common Mistake: The CipherSwarm web application runs on port 3000 inside the container, but nginx exposes it on port 80. Always use port 80 when connecting agents to a Docker Compose deployment.
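A quick pre-flight check from an agent host catches the wrong-port mistake before the agent is configured. The `check_server` helper below is our own sketch, built on the `/up` health endpoint from Part 1:

```shell
# check_server BASE_URL: expect HTTP 200 from the server's /up endpoint
check_server() {
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$1/up")
  if [ "$code" = "200" ]; then
    echo "server reachable"
  else
    echo "got status '$code' from $1/up (wrong port? server down?)" >&2
    return 1
  fi
}
```

Usage: `check_server http://192.168.1.100` should succeed, while `check_server http://192.168.1.100:3000` should fail against a Docker Compose deployment.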
Configuration Matrix#
Use this matrix to determine the correct agent configuration for your deployment scenario:
| Deployment Scenario | Server .env | Agent Configuration |
|---|---|---|
| Airgapped lab, HTTP, local storage | `DISABLE_SSL=true`, `ACTIVE_STORAGE_SERVICE=local` | `API_URL=http://<server-ip>`, `ALWAYS_USE_NATIVE_HASHCAT=true`, `HASHCAT_PATH=/usr/bin/hashcat` |
| Airgapped lab, HTTPS self-signed, local storage | `ACTIVE_STORAGE_SERVICE=local` (SSL enabled by default) | `API_URL=https://<server-ip>`, `INSECURE_DOWNLOADS=true`, `ALWAYS_USE_NATIVE_HASHCAT=true` |
| Airgapped lab, HTTP, MinIO storage | `DISABLE_SSL=true`, `ACTIVE_STORAGE_SERVICE=s3`, `AWS_*` variables set | `API_URL=http://<server-ip>`, `ALWAYS_USE_NATIVE_HASHCAT=true` (storage transparent) |
Part 4: Verification and Troubleshooting#
Verify Agent Connection#
In the web interface:
- Navigate to Agents
- Find your agent
- Check status:
- Pending: Initial state, waiting for benchmark
- Benchmarking: Running performance tests
- Active: Ready to accept tasks
- Offline: Heartbeat not received
Common Issues#
| Symptom | Cause | Solution |
|---|---|---|
| Agent shows "Connection refused" | Wrong port or server down | Verify port 80 is correct and server is running |
| Agent shows "SSL certificate verify failed" | Self-signed cert with HTTPS | Set INSECURE_DOWNLOADS=true or use HTTP |
| Agent shows "401 Unauthorized" | Wrong API token | Verify token from web interface matches config |
| Agent shows "Offline" in web UI | Heartbeat timeout (30 min) | Check agent logs, restart agent |
| Benchmark never completes | Missing hashcat or GPU drivers | Verify hashcat --version works, check GPU setup |
| Download errors | Network/firewall blocking downloads | Check firewall rules, verify LAN connectivity |
| Sidekiq jobs failing with Errno::ENOSPC or InsufficientTempStorageError | /tmp tmpfs mount exhausted during large file processing | Increase TMPFS_TMP_SIZE in .env file and restart services. Use formula: TMPFS_TMP_SIZE >= 1.5 × largest_attack_resource_file. For 100GB+ files, consider the disk-backed TMPDIR volume approach instead of RAM-backed tmpfs. See Docker Storage and Temp Management for sizing calculations and TMPDIR alternative. |
Debug Mode#
Enable verbose logging on agent:
export DEBUG=true
export EXTRA_DEBUGGING=true
./cipherswarm-agent
Check server logs:
docker compose -f docker-compose-production.yml logs -f web
Critical: Agents Require Network Connectivity#
Important: CipherSwarm agents are not standalone offline tools. They require continuous network connectivity to the server for:
- Authentication: Agents authenticate with the server on startup
- Configuration: Agents fetch configuration (including heartbeat intervals) from the server
- File Downloads: Agents download hashlists, wordlists, and rules from the server for each task
- Heartbeat: Agents send periodic heartbeats (every 10-60 seconds) to maintain connection
- Task Management: Agents receive task assignments and submit results via API calls
For airgapped deployments, the CipherSwarm server must be deployed inside the airgapped network, and agents must have network connectivity to that internal server via LAN.
Part 5: Scaling and Production Operations#
Scaling Web Servers#
The production docker-compose file supports horizontal scaling based on the formula: n + 1 web replicas where n = number of active agents.
Scaling Rationale: Each agent maintains a persistent connection for heartbeats and task updates. The n + 1 formula ensures sufficient capacity with one extra replica for handling administrative web interface traffic.
Examples:
- 8 agents → 9 web replicas
- 16 agents → 17 web replicas
- 32 agents → 33 web replicas
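The n + 1 rule, together with the per-replica budget from the Resource Allocation note below, reduces to plain shell arithmetic:

```shell
# n + 1 rule: one web replica per active agent, plus one for the admin UI
agents=8
replicas=$((agents + 1))

# Each replica needs roughly 1 CPU and 512 MB RAM
ram_mb=$((replicas * 512))

echo "web replicas: ${replicas} (approx. ${ram_mb} MB RAM, ${replicas} CPUs)"
# Apply it with:
#   docker compose -f docker-compose-production.yml up -d --scale web=${replicas}
```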
Scaling Procedure:
- Run database migrations once (before scaling):
docker compose -f docker-compose-production.yml run --rm -e RUN_DB_PREPARE=true web bin/rails db:migrate
- Scale web replicas:
docker compose -f docker-compose-production.yml up -d --scale web=9
The nginx load balancer automatically discovers scaled replicas using Docker's embedded DNS resolver. No nginx configuration changes are needed.
Resource Allocation:
- Each web replica: 1 CPU / 512 MB RAM
- Example: 9 web replicas = 9 CPUs / 4.5 GB RAM minimum
Migration Best Practice: Always run `db:migrate` as a one-shot command with `RUN_DB_PREPARE=true` before scaling. Never set `RUN_DB_PREPARE=true` in the `.env` file, as this would cause all replicas to run migrations simultaneously, leading to database deadlocks.
Monitoring Resource Usage#
Monitor Docker container resource usage:
# View resource consumption
docker stats
# Check specific service
docker compose -f docker-compose-production.yml ps
docker compose -f docker-compose-production.yml logs --tail=100 web
Watch for these indicators that you need to scale up:
- High CPU usage (>80%) on web replicas
- Slow API response times in agent logs
- Increased heartbeat failures or timeouts
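The CPU indicator above can be made scriptable with a small filter over `docker stats` output. The `high_cpu` helper name and the 80% threshold are our own choices:

```shell
# high_cpu THRESHOLD: print container names whose CPU% exceeds THRESHOLD.
# Expects lines of "<name> <cpu>%", as produced by the docker stats
# format string shown in the usage line below.
high_cpu() {
  awk -v t="$1" '{ gsub(/%/, "", $2); if ($2 + 0 > t) print $1 }'
}
```

Usage: `docker stats --no-stream --format '{{.Name}} {{.CPUPerc}}' | high_cpu 80`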
Backup and Restore#
Backup database:
docker compose -f docker-compose-production.yml exec postgres-db \
pg_dump -U cipherswarm -d cipherswarm > backup_$(date +%Y%m%d).sql
Backup storage volume:
docker run --rm -v cipherswarm_storage:/data -v $(pwd):/backup \
alpine tar czf /backup/storage_$(date +%Y%m%d).tar.gz /data
Restore database:
docker compose -f docker-compose-production.yml exec -T postgres-db \
psql -U cipherswarm -d cipherswarm < backup_20260227.sql
Upgrading#
See the Air-Gapped Deployment Guide for detailed upgrade procedures including:
- Export new images
- Backup database
- Load new images
- Run migrations
- Restart services
Summary#
You have deployed:
- Server: Docker Compose stack with nginx, Rails web app, PostgreSQL, Redis, and Sidekiq workers
- Agents: Go binary agents connected to the server via LAN
Key Takeaways:
- Agents require network connectivity to the server (continuous heartbeat)
- Server SSL configuration determines agent `API_URL` protocol (http vs https)
- Storage backend (local vs S3) is transparent to agents
- Agent heartbeat frequency is controlled by the server
- Use port 80 for Docker deployments, not 3000
Next Steps:
- Create a project in the web interface
- Upload hash lists and wordlists (hash lists support tus resumable uploads for large files, bypassing Active Storage tmpfs downloads during processing)
- Create a campaign and launch attacks
- Monitor agent status and task progress in real-time
Additional Resources:
- Air-Gapped Deployment Guide - comprehensive offline deployment documentation
- Agent Configuration Guide - complete agent configuration reference
- Production Load Balancing - horizontal scaling guide