SlateDB's performance characteristics are primarily determined by the object store being used. This page provides guidance on expected performance and how to tune SlateDB for your use case.
## Latency
SlateDB's write latency is dominated by object store PUT operations. The following table shows expected latencies for different object stores:
| Object Store | Expected Write Latency | Notes |
|---|---|---|
| S3 Standard | 50-100ms | Network latency dominates |
| S3 Express One Zone | 5-10ms | Single-digit millisecond latency |
| Google Cloud Storage | 50-100ms | Network latency dominates |
| Azure Blob Storage | 50-100ms | Network latency dominates |
| MinIO | 5-20ms | Depends on network and disk |
Read latency is dominated by the database's working set size and read pattern:
- Working sets that fit in local (memory and disk) caches will respond very quickly (< 1ms).
- Read patterns that do sequential reads will respond very quickly (< 1ms), since SST blocks are pre-fetched sequentially and cached locally.
- Read patterns that access keys that are (lexicographically) close to each other will respond very quickly (< 1ms), since blocks are cached locally after the first read.
- Reads scattered across the keyspace, on a dataset larger than a single machine's memory or disk, are more likely to see latency spikes at object-storage levels (50-100ms for S3 Standard). Such workloads can still work well, but require more tuning, partitioning, and so on.
## Throughput
SlateDB's write throughput is limited by:
- Object store PUT rate limits: Amazon S3, for example, allows 3,500 PUT requests per second per prefix; other stores have their own limits.
- Network bandwidth: The time it takes to upload SSTs to object storage.
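The PUT-rate limit is rarely the binding constraint for a single writer: each WAL flush issues at least one PUT, so the flush interval caps the writer's PUT rate. A back-of-envelope sketch (the 3,500/s figure is the per-prefix limit cited above):

```rust
// Each WAL flush issues at least one object-store PUT, so the flush
// interval bounds a single writer's PUT rate.
fn wal_puts_per_second(flush_interval_ms: u64) -> u64 {
    1_000 / flush_interval_ms
}

fn main() {
    let rate = wal_puts_per_second(50); // 50ms flush interval
    // A single writer flushing every 50ms issues ~20 WAL PUTs/s,
    // far below a 3,500 PUT/s-per-prefix limit.
    assert!(rate < 3_500);
    println!("{rate} WAL PUTs/s"); // prints "20 WAL PUTs/s"
}
```

In practice, sustained write throughput is bounded by upload bandwidth and compaction, not the request-rate ceiling.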
Read throughput is limited by:
- Object store GET rate limits: Amazon S3, for example, allows 5,500 GET requests per second per prefix; other stores have their own limits.
- Network bandwidth: The time it takes to download SSTs from object storage.
- Disk I/O: The time it takes to read SSTs from disk when object store caching or Foyer hybrid caching is enabled.
## Tuning
SlateDB provides several configuration options to tune performance. See `slatedb::config` for the current tuning surface.
### Write Performance
- `flush_interval`: How long SlateDB waits before flushing the mutable WAL to object storage. Lower values reduce durable write latency but increase object-store PUT frequency.
- `l0_sst_size_bytes`: The size of L0 SSTs. Larger SSTs provide better compression but take longer to upload.
- `max_unflushed_bytes`: The total amount of unflushed WAL and memtable data allowed in memory before SlateDB applies backpressure to writers.
- `l0_max_ssts`: The maximum number of L0 SSTs SlateDB allows before it stops flushing memtables and waits for compaction to catch up.
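Putting the write-side knobs together, a minimal sketch, assuming a `Settings` struct in `slatedb::config` with these field names (the values shown are illustrative, not recommendations; check `slatedb::config` for the current shape):

```rust
use std::time::Duration;
use slatedb::config::Settings;

// Illustrative values only; consult slatedb::config for current fields.
let settings = Settings {
    // Flush the mutable WAL every 50ms: lower durable write latency,
    // at the cost of more frequent object-store PUTs.
    flush_interval: Some(Duration::from_millis(50)),
    // Larger L0 SSTs compress better but take longer to upload.
    l0_sst_size_bytes: 64 * 1024 * 1024,
    // Apply backpressure to writers beyond 512 MiB of unflushed data.
    max_unflushed_bytes: 512 * 1024 * 1024,
    // Pause memtable flushes once 8 L0 SSTs are awaiting compaction.
    l0_max_ssts: 8,
    ..Settings::default()
};
```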
### Read Performance
- `min_filter_keys`: SlateDB only builds filters for SSTs with at least this many keys. Raising it avoids filter overhead on small SSTs.
- `DbBuilder::with_filter_policies(...)`: Configures the filter policies used for SST construction and evaluation. Defaults to a single bloom filter with 10 bits per key. Pass `BloomFilterPolicy::new(bits_per_key)` to tune bloom filter density (higher values reduce false positives but increase filter size), or supply a custom `FilterPolicy` (e.g., prefix bloom). See the `with_filter_policies` docs on `DbBuilder`, `CompactorBuilder`, and `DbReaderBuilder`.
- `object_store_cache_options.root_folder`: Enables the local disk cache for object-store data when set.
- `object_store_cache_options.part_size_bytes` and `object_store_cache_options.preload_disk_cache_on_startup`: Tune how the disk cache is chunked and optionally warmed on startup.
- `DbBuilder::with_sst_block_size(...)`: Sets the SST block size. Smaller blocks can help point lookups; larger blocks usually favor scans and compression.
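The read-side options above can be combined in one place. A sketch, assuming the field and method names listed above; the argument shapes, and helpers such as `with_settings` and `build`, are illustrative assumptions, so consult the `DbBuilder` and `slatedb::config` docs for exact signatures:

```rust
use slatedb::DbBuilder;
use slatedb::config::{ObjectStoreCacheOptions, Settings};

// Illustrative read-tuning sketch; not a drop-in snippet.
let settings = Settings {
    // Skip building filters for SSTs with fewer than 1,000 keys.
    min_filter_keys: 1_000,
    object_store_cache_options: ObjectStoreCacheOptions {
        // Setting root_folder enables the local disk cache.
        root_folder: Some("/var/cache/slatedb".into()),
        // Fetch and cache object-store data in 4 MiB parts.
        part_size_bytes: 4 * 1024 * 1024,
        // Warm the disk cache when the database opens.
        preload_disk_cache_on_startup: true,
        ..Default::default()
    },
    ..Settings::default()
};

// with_settings/build are assumed builder methods for illustration.
let db = DbBuilder::new("/tmp/slatedb", object_store)
    .with_settings(settings)
    // Smaller blocks favor point lookups; larger favor scans.
    .with_sst_block_size(4 * 1024)
    .build()
    .await?;
```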
### Memory Usage
- `max_unflushed_bytes`: The primary writer-side memory bound for buffered but not yet flushed data.