Airgapped Network Deployment Tutorial
Type: Document | Status: Published | Created: Feb 27, 2026 | Updated: Mar 27, 2026 by Dosu Bot

Airgapped Network Deployment Tutorial#

Overview#

This tutorial guides system administrators through deploying CipherSwarm in an airgapped (non-Internet-connected) network environment. You will deploy the server using Docker Compose and connect Linux x64 binary agents to it.

Target Audience: Linux system administrators familiar with Docker and networking but not necessarily software development.

Prerequisites:

  • Docker and Docker Compose installed on server
  • Linux systems with network connectivity to server (isolated LAN)
  • Downloaded CipherSwarm Docker images and agent binary
  • Basic understanding of hash cracking concepts (you know what hashcat does)
  • Sufficient RAM for tmpfs mounts. The default 768MB (512MB for /tmp and 256MB for /rails/tmp) supports attack resources up to ~1GB. For larger files, size tmpfs using the formula: TMPFS_TMP_SIZE >= 1.5 × largest_attack_resource_file. For 100GB+ files, this requires 150GB+ of RAM for tmpfs, or use the disk-backed TMPDIR volume approach instead. See Docker Storage and Temp Management for comprehensive sizing guidance.

What is CipherSwarm?#

CipherSwarm is a distributed hash cracking system consisting of:

  1. Server: A Ruby on Rails web application that manages cracking campaigns, distributes tasks, and stores results
  2. Agents: Go-based workers that run hashcat on cracking nodes and report progress back to the server

Architecture: agents on the isolated LAN communicate with the server over HTTP(S); the server distributes tasks to agents, and agents report progress and results back.

Key Terms:

  • Campaign: A hash cracking job (e.g., "crack all MD5 hashes using rockyou.txt")
  • Attack: A specific attack strategy (dictionary, mask, combinator, etc.) within a campaign
  • Task: A work unit assigned to an agent (e.g., "crack hashes 1-1000 with this wordlist")
  • Heartbeat: Periodic check-in from agent to server to maintain connection
  • Benchmark: Initial performance test agents run to measure hash cracking speed

Part 1: Server Deployment#

Step 1: Prepare Docker Images#

On an Internet-connected system, export the required Docker images:

# Pull images
docker pull ghcr.io/unclesp1d3r/cipherswarm:latest
docker pull postgres:latest
docker pull redis:latest
docker pull nginx:alpine

# Save to tar archives
docker save ghcr.io/unclesp1d3r/cipherswarm:latest -o cipherswarm.tar
docker save postgres:latest -o postgres.tar
docker save redis:latest -o redis.tar
docker save nginx:alpine -o nginx.tar

Transfer the tar files to the airgapped server via approved method (USB, secure file transfer, etc.).

Step 2: Load Images on Airgapped Server#

docker load -i cipherswarm.tar
docker load -i postgres.tar
docker load -i redis.tar
docker load -i nginx.tar

# Verify
docker images | grep -E "cipherswarm|postgres|redis|nginx"

Step 3: Configure the Server#

Download docker-compose-production.yml and place it on the server. This file defines five services:

  1. nginx: Reverse proxy exposing port 80
  2. web: Rails application (horizontally scalable)
  3. postgres-db: PostgreSQL database
  4. redis-db: Redis cache and job queue
  5. sidekiq: Background job workers (4 replicas)

tmpfs Mounts: The docker-compose files include tmpfs mounts for /tmp (512MB) and /rails/tmp (256MB) on both web and sidekiq services. These prevent filesystem exhaustion during Active Storage blob downloads. The tmpfs size is controlled by the TMPFS_TMP_SIZE and TMPFS_RAILS_TMP_SIZE environment variables. For deployments processing 100GB+ attack resources, use the formula TMPFS_TMP_SIZE >= 1.5 × largest_attack_resource_file (e.g., 150GB+ for 100GB wordlists). For very large files, the disk-backed TMPDIR volume approach is preferred over RAM-backed tmpfs. See Docker Storage and Temp Management for detailed sizing guidance and the TMPDIR alternative.

Create a .env file in the same directory. You can start with the provided .env.example template and customize it for your air-gapped environment:

# Required - Security
RAILS_MASTER_KEY=<generate-or-copy-from-config/master.key>
POSTGRES_PASSWORD=<your-secure-database-password>
TUSD_HOOK_SECRET=<generate-random-secret>

# Required - Networking
APPLICATION_HOST=<server-ip-or-hostname>

# Recommended for Airgapped Labs
DISABLE_SSL=true

# Optional - Storage (defaults to local)
ACTIVE_STORAGE_SERVICE=local

Note on RAILS_MASTER_KEY: This is a cryptographic key used by Rails to encrypt credentials. If you're setting up CipherSwarm for the first time, you can generate one with openssl rand -hex 32. If migrating from an existing deployment, copy the key from config/master.key on the original system.
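All three secrets can be generated in one pass using the openssl commands mentioned above (a sketch; the base64 length for POSTGRES_PASSWORD is a reasonable choice, not a project requirement):

```shell
# Generate the three secrets referenced in .env above.
RAILS_MASTER_KEY=$(openssl rand -hex 32)      # 64 hex characters
POSTGRES_PASSWORD=$(openssl rand -base64 24)  # 32-character password
TUSD_HOOK_SECRET=$(openssl rand -hex 32)

# Append them to the .env file in the current directory.
printf 'RAILS_MASTER_KEY=%s\nPOSTGRES_PASSWORD=%s\nTUSD_HOOK_SECRET=%s\n' \
  "$RAILS_MASTER_KEY" "$POSTGRES_PASSWORD" "$TUSD_HOOK_SECRET" >> .env
```

Store the generated .env securely; losing RAILS_MASTER_KEY makes encrypted credentials unrecoverable.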

For a complete reference of all environment variables with detailed explanations, see Environment Variables Reference.

Critical Environment Variables for Air-Gapped Deployment#

The following environment variables are critical for air-gapped deployment. For complete details on these and all other variables, see the Environment Variables Reference.

Required Variables:

| Variable | Description | Air-Gapped Context |
| --- | --- | --- |
| RAILS_MASTER_KEY | Rails credentials encryption key | Transfer from config/master.key on the original system |
| POSTGRES_PASSWORD | Database password (strictly enforced) | Use a strong password (min 16 chars). The container will fail to start if it is not set. |
| TUSD_HOOK_SECRET | Shared secret for authenticating tusd webhook requests | Generate with openssl rand -hex 32. Required in production to prevent cache poisoning attacks. |
| APPLICATION_HOST | Server hostname/IP for URL generation | Use an internal hostname or IP (e.g., cipherswarm.lab or 192.168.1.100) |

Important Variables for Air-Gapped Environments:

| Variable | Description | Recommended Value | Reason |
| --- | --- | --- | --- |
| DISABLE_SSL | Disable HTTPS enforcement | true | Lab environments typically don't have SSL certificates |
| ACTIVE_STORAGE_SERVICE | Storage backend | local | No external S3 service available |
| REDIS_URL | Redis connection | redis://redis-db:6379/0 | Must match the Docker Compose service name |

Optional S3 Storage Variables (when using MinIO in air-gapped network):

When using S3-compatible storage with MinIO deployed inside your air-gapped network, see the Environment Variables Reference for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_ENDPOINT, AWS_BUCKET, AWS_FORCE_PATH_STYLE, and AWS_REGION.

Optional tmpfs Sizing Variables (for large attack resources):

| Variable | Description | Default | When to Set |
| --- | --- | --- | --- |
| TMPFS_TMP_SIZE | Size of the /tmp tmpfs mount for blob downloads | 512m | When the largest attack resource exceeds 1GB. Use the formula: >= 1.5 × largest_file |
| TMPFS_RAILS_TMP_SIZE | Size of the /rails/tmp tmpfs mount for framework temp files | 256m | Rarely needed. Only if the Bootsnap cache exceeds 256MB |

For 100GB+ attack resources, set TMPFS_TMP_SIZE=150g (or higher) and increase the Sidekiq container memory limit proportionally. Alternatively, use the disk-backed TMPDIR volume approach documented in Docker Storage and Temp Management.
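The sizing formula is simple integer arithmetic; replace largest_bytes with the actual size of your largest resource (obtainable with, e.g., stat -c %s file):

```shell
# Apply the 1.5x rule to the largest attack resource file.
largest_bytes=107374182400   # example: a 100 GiB wordlist
needed_bytes=$(( largest_bytes * 3 / 2 ))                  # 1.5x headroom
needed_gb=$(( (needed_bytes + 1073741823) / 1073741824 ))  # round up to GiB
echo "TMPFS_TMP_SIZE=${needed_gb}g"
```

For the 100 GiB example this prints TMPFS_TMP_SIZE=150g, matching the guidance above.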

Step 4: Deploy the Server#

docker compose -f docker-compose-production.yml up -d

Wait for all services to start:

docker compose -f docker-compose-production.yml ps

All services should show healthy status.
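The wait can be scripted with a small polling helper (a hypothetical sketch; it assumes the compose ps STATUS column reports "starting" or "unhealthy" for services that are not yet ready):

```shell
# Hypothetical helper: poll a status command until no service reports
# "starting" or "unhealthy" (up to roughly 2.5 minutes).
wait_healthy() {
  for _ in $(seq 1 30); do
    pending=$("$@" | grep -c -e starting -e unhealthy) || pending=0
    [ "$pending" -eq 0 ] && return 0
    sleep 5
  done
  return 1
}
# Usage:
# wait_healthy docker compose -f docker-compose-production.yml ps
```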

Step 5: Initialize the Database#

On first deployment, the database must be initialized before starting web replicas. The entrypoint script only runs database migrations when the RUN_DB_PREPARE environment variable is set, which prevents migration races when multiple web replicas start simultaneously:

docker compose -f docker-compose-production.yml run --rm -e RUN_DB_PREPARE=true web bin/rails db:prepare

This command:

  • Creates the database if it doesn't exist
  • Runs pending migrations
  • Seeds initial data (creates admin user)

Check the output for default admin credentials (typically admin/password - change immediately).

Important for Scaling: Only run db:prepare once before scaling up web replicas. Do not set RUN_DB_PREPARE=true in the .env file or all replicas will attempt to run migrations simultaneously, causing database deadlocks.

Step 6: Verify Server Deployment#

  1. Health check:
curl http://<server-ip>/up
# Expected: HTTP 200 OK
  2. Web interface: Navigate to http://<server-ip> and log in with the admin credentials

  3. System health: Check http://<server-ip>/system_health - all services should be green

Step 7: Create Agent Credentials#

In the web interface:

  1. Navigate to Agents → New Agent
  2. Enter agent name (e.g., gpu-node-01)
  3. Assign to a project or leave in default project
  4. Click Create Agent
  5. Copy the API token - you'll need this for agent configuration

The token looks like: csagent_abc123def456...

Part 2: Agent Deployment#

Step 1: Prepare Agent Binary#

On an Internet-connected system, download the Linux x64 binary:

wget https://github.com/unclesp1d3r/CipherSwarmAgent/releases/latest/download/CipherSwarmAgent_Linux_x86_64.tar.gz

Transfer to the agent system and extract:

tar -xzf CipherSwarmAgent_Linux_x86_64.tar.gz
chmod +x cipherswarm-agent

The binary is statically compiled with CGO_ENABLED=0, so it has no external library dependencies.

Step 2: Install Hashcat#

The agent requires hashcat to be installed. On the agent system:

# Debian/Ubuntu
apt-get install hashcat

# RHEL/CentOS
yum install hashcat

# Or download from https://hashcat.net/hashcat/

Verify installation:

hashcat --version

Step 3: Configure the Agent#

The agent uses a three-tier configuration system:

  1. Command-line flags (highest priority)
  2. Environment variables
  3. Configuration file (cipherswarmagent.yaml)

Option A: Environment Variables (recommended for airgapped):

export API_TOKEN="csagent_abc123def456..."
export API_URL="http://<server-ip>"
export DATA_PATH="/opt/cipherswarm/data"
export ALWAYS_USE_NATIVE_HASHCAT=true
export HASHCAT_PATH="/usr/bin/hashcat"

Option B: Configuration File:

Create ~/.cipherswarmagent.yaml:

api_token: csagent_abc123def456...
api_url: http://<server-ip>
data_path: /opt/cipherswarm/data
always_use_native_hashcat: true
hashcat_path: /usr/bin/hashcat
gpu_temp_threshold: 80
status_timer: 10

Agent Configuration Reference#

Required Flags:

| Flag | Environment Variable | Description | Example |
| --- | --- | --- | --- |
| --api-token | API_TOKEN | API authentication token from the server | csagent_abc123... |
| --api-url | API_URL | CipherSwarm server URL | http://192.168.1.100 |

Common Flags:

| Flag | Environment Variable | Default | Description |
| --- | --- | --- | --- |
| --data-path | DATA_PATH | ./data | Directory for agent data and temp files |
| --always-use-native-hashcat | ALWAYS_USE_NATIVE_HASHCAT | false | Force use of system hashcat (recommended) |
| --hashcat-path | HASHCAT_PATH | auto-detect | Explicit hashcat binary path |
| --gpu-temp-threshold | GPU_TEMP_THRESHOLD | 80 | GPU temp limit (°C) before throttling |
| --status-timer | STATUS_TIMER | 10 | Status update interval (seconds) |
| --debug | DEBUG | false | Enable debug logging |

Fault Tolerance Flags:

| Flag | Environment Variable | Default | Description |
| --- | --- | --- | --- |
| --task-timeout | TASK_TIMEOUT | 24h | Maximum task duration |
| --download-max-retries | DOWNLOAD_MAX_RETRIES | 3 | Download retry attempts |
| --sleep-on-failure | SLEEP_ON_FAILURE | 60s | Wait time after task failure |

Advanced Flags:

| Flag | Environment Variable | Default | Description |
| --- | --- | --- | --- |
| --write-zaps-to-file | WRITE_ZAPS_TO_FILE | false | Save cracked hashes to local files |
| --zap-path | ZAP_PATH | (empty) | Directory for zap files (if enabled) |
| --retain-zaps-on-completion | RETAIN_ZAPS_ON_COMPLETION | false | Keep zap files after task completion |
| --extra-debugging | EXTRA_DEBUGGING | false | Verbose debugging output |

Note on flag naming: All flags use kebab-case (e.g., --api-token). For backward compatibility, the agent still accepts deprecated underscore-style flags (e.g., --api_token), but these are not recommended and may be removed in future versions.

Step 4: Launch the Agent#

Test run (foreground):

./cipherswarm-agent

You should see:

  1. Authentication with server
  2. Benchmark tests running
  3. Status change to "Waiting for task"

Production deployment (as systemd service):

Create /etc/systemd/system/cipherswarm-agent.service:

[Unit]
Description=CipherSwarm Agent
After=network.target

[Service]
Type=simple
User=cipherswarm
Environment="API_TOKEN=csagent_abc123..."
Environment="API_URL=http://192.168.1.100"
Environment="DATA_PATH=/opt/cipherswarm/data"
Environment="ALWAYS_USE_NATIVE_HASHCAT=true"
Environment="HASHCAT_PATH=/usr/bin/hashcat"
ExecStart=/usr/local/bin/cipherswarm-agent
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Enable and start:

systemctl daemon-reload
systemctl enable cipherswarm-agent
systemctl start cipherswarm-agent
systemctl status cipherswarm-agent

Part 3: Understanding Server-Agent Configuration Interaction#

The relationship between server and agent configuration determines the correct agent flags. This section explains how server settings affect agent configuration.

SSL/HTTPS Configuration#

Server Setting: DISABLE_SSL environment variable

  • DISABLE_SSL=true (recommended for airgapped labs): Server accepts HTTP
  • DISABLE_SSL unset (default): Server forces HTTPS

Agent Configuration:

| Server SSL Config | Agent API_URL | Agent insecure_downloads |
| --- | --- | --- |
| DISABLE_SSL=true (HTTP) | http://server-ip | Not needed (HTTP only) |
| SSL enabled with valid cert | https://server-ip | false (default) |
| SSL enabled with self-signed cert | https://server-ip | true (dev/lab only) |

Important: The insecure_downloads flag only affects file downloads, not API authentication. It disables TLS certificate verification when downloading wordlists and other resources.

Security Note: Only use insecure_downloads=true in development or lab environments with self-signed certificates. Never disable certificate verification in production environments with real certificate authorities.
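If you opt for the self-signed scenario, a lab certificate can be generated as follows (a sketch; the CN is an example and should match APPLICATION_HOST, and wiring the certificate into nginx is outside the scope of this tutorial):

```shell
# Create a self-signed certificate and key valid for one year.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt -subj "/CN=cipherswarm.lab"
```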

Storage Configuration#

Server Setting: ACTIVE_STORAGE_SERVICE environment variable

Agent Impact: Storage backend is transparent to agents. The server generates download URLs that work the same for both backends. No agent configuration changes needed when switching storage backends.

Heartbeat and Timing Configuration#

Server-Controlled Settings:

The server controls agent heartbeat frequency via the agent_update_interval setting, which agents fetch from the server as part of their configuration; it overrides any locally configured interval.

Agent Flags: The --heartbeat-interval flag sets a local default (10s), but the server overrides this automatically. You typically don't need to set this flag.

Network Port Configuration#

Server Port: nginx exposes port 80 in docker-compose-production.yml

Agent Configuration: Agents should use port 80 (or 443 if TLS added):

# Correct for production Docker deployment
API_URL=http://192.168.1.100

# NOT port 3000 (that's for native/development deployments)
# API_URL=http://192.168.1.100:3000 # WRONG for Docker

Common Mistake: The CipherSwarm web application runs on port 3000 inside the container, but nginx exposes it on port 80. Always use port 80 when connecting agents to a Docker Compose deployment.

Configuration Decision Tree#

Two questions determine the correct agent configuration: is SSL enabled on the server, and which storage backend does it use? The configuration matrix below covers the common combinations.

Configuration Matrix#

| Deployment Scenario | Server .env | Agent Configuration |
| --- | --- | --- |
| Airgapped lab, HTTP, local storage | DISABLE_SSL=true<br>ACTIVE_STORAGE_SERVICE=local | API_URL=http://<server-ip><br>ALWAYS_USE_NATIVE_HASHCAT=true<br>HASHCAT_PATH=/usr/bin/hashcat |
| Airgapped lab, HTTPS self-signed, local storage | ACTIVE_STORAGE_SERVICE=local<br>(SSL enabled by default) | API_URL=https://<server-ip><br>INSECURE_DOWNLOADS=true<br>ALWAYS_USE_NATIVE_HASHCAT=true |
| Airgapped lab, HTTP, MinIO storage | DISABLE_SSL=true<br>ACTIVE_STORAGE_SERVICE=s3<br>AWS_* variables set | API_URL=http://<server-ip><br>ALWAYS_USE_NATIVE_HASHCAT=true<br>(storage transparent) |

Part 4: Verification and Troubleshooting#

Verify Agent Connection#

In the web interface:

  1. Navigate to Agents
  2. Find your agent
  3. Check status:
    • Pending: Initial state, waiting for benchmark
    • Benchmarking: Running performance tests
    • Active: Ready to accept tasks
    • Offline: Heartbeat not received

Common Issues#

| Symptom | Cause | Solution |
| --- | --- | --- |
| Agent shows "Connection refused" | Wrong port or server down | Verify port 80 is correct and the server is running |
| Agent shows "SSL certificate verify failed" | Self-signed cert with HTTPS | Set INSECURE_DOWNLOADS=true or use HTTP |
| Agent shows "401 Unauthorized" | Wrong API token | Verify the token from the web interface matches the config |
| Agent shows "Offline" in web UI | Heartbeat timeout (30 min) | Check agent logs, restart the agent |
| Benchmark never completes | Missing hashcat or GPU drivers | Verify hashcat --version works, check GPU setup |
| Download errors | Network/firewall blocking downloads | Check firewall rules, verify LAN connectivity |
| Sidekiq jobs failing with Errno::ENOSPC or InsufficientTempStorageError | /tmp tmpfs mount exhausted during large file processing | Increase TMPFS_TMP_SIZE in the .env file and restart services, using TMPFS_TMP_SIZE >= 1.5 × largest_attack_resource_file. For 100GB+ files, consider the disk-backed TMPDIR volume approach instead of RAM-backed tmpfs. See Docker Storage and Temp Management. |

Debug Mode#

Enable verbose logging on agent:

export DEBUG=true
export EXTRA_DEBUGGING=true
./cipherswarm-agent

Check server logs:

docker compose -f docker-compose-production.yml logs -f web

Critical: Agents Require Network Connectivity#

Important: CipherSwarm agents are not standalone offline tools. They require continuous network connectivity to the server for:

  • Authentication: Agents authenticate with the server on startup
  • Configuration: Agents fetch configuration (including heartbeat intervals) from the server
  • File Downloads: Agents download hashlists, wordlists, and rules from the server for each task
  • Heartbeat: Agents send periodic heartbeats (every 10-60 seconds) to maintain connection
  • Task Management: Agents receive task assignments and submit results via API calls

For airgapped deployments, the CipherSwarm server must be deployed inside the airgapped network, and agents must have network connectivity to that internal server via LAN.

Part 5: Scaling and Production Operations#

Scaling Web Servers#

The production docker-compose file supports horizontal scaling based on the formula: n + 1 web replicas where n = number of active agents.

Scaling Rationale: Each agent maintains a persistent connection for heartbeats and task updates. The n + 1 formula ensures sufficient capacity with one extra replica for handling administrative web interface traffic.

Examples:

  • 8 agents → 9 web replicas
  • 16 agents → 17 web replicas
  • 32 agents → 33 web replicas

Scaling Procedure:

  1. Run database migrations once (before scaling):

    docker compose -f docker-compose-production.yml run --rm -e RUN_DB_PREPARE=true web bin/rails db:migrate
    
  2. Scale web replicas:

    docker compose -f docker-compose-production.yml up -d --scale web=9
    

The nginx load balancer automatically discovers scaled replicas using Docker's embedded DNS resolver. No nginx configuration changes are needed.

Resource Allocation:

  • Each web replica: 1 CPU / 512 MB RAM
  • Example: 9 web replicas = 9 CPUs / 4.5 GB RAM minimum
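The replica count and minimum RAM follow directly from the n + 1 rule and the 512 MB-per-replica allocation (a sketch; set AGENTS to your actual agent count):

```shell
# n + 1 sizing: web replicas and minimum RAM for the web tier.
AGENTS=8
WEB_REPLICAS=$(( AGENTS + 1 ))        # 8 agents -> 9 replicas
WEB_RAM_MB=$(( WEB_REPLICAS * 512 ))  # 512 MB per replica
echo "--scale web=${WEB_REPLICAS}  (${WEB_REPLICAS} CPUs, ${WEB_RAM_MB} MB RAM minimum)"
```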

Migration Best Practice: Always run db:migrate as a one-shot command with RUN_DB_PREPARE=true before scaling. Never set RUN_DB_PREPARE=true in the .env file, as this would cause all replicas to run migrations simultaneously, leading to database deadlocks.

Monitoring Resource Usage#

Monitor Docker container resource usage:

# View resource consumption
docker stats

# Check specific service
docker compose -f docker-compose-production.yml ps
docker compose -f docker-compose-production.yml logs --tail=100 web

Watch for these indicators that you need to scale up:

  • High CPU usage (>80%) on web replicas
  • Slow API response times in agent logs
  • Increased heartbeat failures or timeouts

Backup and Restore#

Backup database:

docker compose -f docker-compose-production.yml exec postgres-db \
  pg_dump -U cipherswarm -d cipherswarm > backup_$(date +%Y%m%d).sql

Backup storage volume:

docker run --rm -v cipherswarm_storage:/data -v $(pwd):/backup \
  alpine tar czf /backup/storage_$(date +%Y%m%d).tar.gz /data
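Database dumps can also be scheduled. A hypothetical crontab entry (the paths /opt/cipherswarm and /backups are examples, not project defaults):

```
# Nightly database dump at 02:00 with dated filenames.
# Note: % must be escaped as \% inside a crontab command.
0 2 * * * cd /opt/cipherswarm && docker compose -f docker-compose-production.yml exec -T postgres-db pg_dump -U cipherswarm -d cipherswarm > /backups/backup_$(date +\%Y\%m\%d).sql
```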

Restore database:

docker compose -f docker-compose-production.yml exec -T postgres-db \
  psql -U cipherswarm -d cipherswarm < backup_20260227.sql

Upgrading#

See the Air-Gapped Deployment Guide for detailed upgrade procedures including:

  1. Export new images
  2. Backup database
  3. Load new images
  4. Run migrations
  5. Restart services

Summary#

You have deployed:

  1. Server: Docker Compose stack with nginx, Rails web app, PostgreSQL, Redis, and Sidekiq workers
  2. Agents: Go binary agents connected to the server via LAN

Key Takeaways:

  • Agents require network connectivity to the server (continuous heartbeat)
  • Server SSL configuration determines agent API_URL protocol (http vs https)
  • Storage backend (local vs S3) is transparent to agents
  • Agent heartbeat frequency is controlled by the server
  • Use port 80 for Docker deployments, not 3000

Next Steps:

  • Create a project in the web interface
  • Upload hash lists and wordlists (hash lists support tus resumable uploads for large files, bypassing Active Storage tmpfs downloads during processing)
  • Create a campaign and launch attacks
  • Monitor agent status and task progress in real-time

Additional Resources: