Testing Documentation#
This document provides comprehensive testing strategies and
guidelines for DaemonEye, covering unit testing, integration testing,
performance testing, and security testing.
Table of Contents#
[TOC]
Testing Philosophy#
DaemonEye follows a comprehensive testing strategy that ensures:
- Reliability: Robust error handling and edge case coverage
- Performance: Meets performance requirements under load
- Security: Validates security controls and prevents vulnerabilities
- Maintainability: Tests are easy to understand and modify
- Coverage: High test coverage across all components
Testing Strategy#
Three-Tier Testing Architecture#

- Unit Tests: Test individual components in isolation
- Integration Tests: Test component interactions and data flow
- End-to-End Tests: Test complete workflows and user scenarios
Testing Pyramid#
```text
┌─────────────────┐
│    E2E Tests    │  ← Few, slow, expensive
│    (Manual)     │
├─────────────────┤
│   Integration   │  ← Some, medium speed
│      Tests      │
├─────────────────┤
│   Unit Tests    │  ← Many, fast, cheap
│   (Automated)   │
└─────────────────┘
```
Unit Testing#
Core Testing Framework#
DaemonEye's unit tests build on tokio's async test support, with tempfile for isolated fixtures:
```rust
#[cfg(test)]
mod tests {
    use super::*;
    use tempfile::TempDir;

    #[tokio::test]
    async fn test_process_collection() {
        let collector = ProcessCollector::new();
        let processes = collector.collect_processes().await.unwrap();
        assert!(!processes.is_empty());
        assert!(processes.iter().any(|p| p.pid > 0));
    }

    #[tokio::test]
    async fn test_database_operations() {
        let temp_dir = TempDir::new().unwrap();
        let db_path = temp_dir.path().join("test.db");
        let db = Database::new(&db_path).await.unwrap();

        let process = ProcessInfo {
            pid: 1234,
            name: "test_process".to_string(),
            // Remaining fields use defaults
            ..ProcessInfo::default()
        };

        db.insert_process(&process).await.unwrap();
        let retrieved = db.get_process(1234).await.unwrap();
        assert_eq!(process.pid, retrieved.pid);
        assert_eq!(process.name, retrieved.name);
    }
}
```
Mocking and Test Doubles#
Use mocks for external dependencies:
```rust
use mockall::mock;

mock! {
    pub ProcessCollector {}

    #[async_trait]
    impl ProcessCollectionService for ProcessCollector {
        async fn collect_processes(&self) -> Result<CollectionResult, CollectionError>;
        async fn get_system_info(&self) -> Result<SystemInfo, CollectionError>;
    }
}

#[tokio::test]
async fn test_agent_with_mock_collector() {
    let mut mock_collector = MockProcessCollector::new();
    mock_collector
        .expect_collect_processes()
        .times(1)
        .returning(|| Ok(CollectionResult::default()));

    let agent = daemoneye_agent::new(Box::new(mock_collector));
    let result = agent.run_collection_cycle().await;
    assert!(result.is_ok());
}
```
Property-Based Testing#
Use property-based testing for complex logic:
```rust
use proptest::prelude::*;

proptest! {
    #[test]
    fn test_process_info_serialization(process in any::<ProcessInfo>()) {
        let serialized = serde_json::to_string(&process).unwrap();
        let deserialized: ProcessInfo = serde_json::from_str(&serialized).unwrap();
        prop_assert_eq!(process, deserialized);
    }

    #[test]
    fn test_sql_query_validation(query in "[a-zA-Z0-9_\\s]+") {
        // Property: validation should never panic, whatever the outcome
        let _ = validate_sql_query(&query);
    }
}
```
Integration Testing#
Database Integration Tests#
Test database operations with real SQLite:
```rust
#[tokio::test]
async fn test_database_integration() {
    let temp_dir = TempDir::new().unwrap();
    let db_path = temp_dir.path().join("integration_test.db");
    let db = Database::new(&db_path).await.unwrap();

    // Test schema creation
    db.create_schema().await.unwrap();

    // Test data insertion
    let process = ProcessInfo {
        pid: 1234,
        name: "test_process".to_string(),
        executable_path: Some("/usr/bin/test".to_string()),
        command_line: Some("test --arg value".to_string()),
        start_time: Some(Utc::now()),
        cpu_usage: Some(0.5),
        memory_usage: Some(1024),
        status: ProcessStatus::Running,
        executable_hash: Some("abc123".to_string()),
        collection_time: Utc::now(),
    };
    db.insert_process(&process).await.unwrap();

    // Test data retrieval
    let retrieved = db.get_process(1234).await.unwrap();
    assert_eq!(process.pid, retrieved.pid);

    // Test query execution
    let results = db
        .query_processes("SELECT * FROM processes WHERE pid = ?", &[1234])
        .await
        .unwrap();
    assert_eq!(results.len(), 1);
}
```
IPC Integration Tests#
Test inter-process communication:
```rust
#[tokio::test]
async fn test_ipc_communication() {
    let temp_dir = TempDir::new().unwrap();
    let socket_path = temp_dir.path().join("test.sock");

    // Start server
    let server = IpcServer::new(&socket_path).await.unwrap();
    let server_handle = tokio::spawn(async move { server.run().await });

    // Wait for server to start
    tokio::time::sleep(Duration::from_millis(100)).await;

    // Connect client
    let client = IpcClient::new(&socket_path).await.unwrap();

    // Test request/response
    let request = IpcRequest::CollectProcesses;
    let response = client.send_request(request).await.unwrap();
    assert!(matches!(response, IpcResponse::Processes(_)));

    // Cleanup
    server_handle.abort();
}
```
Alert Delivery Integration Tests#

Test alert delivery mechanisms:
```rust
#[tokio::test]
async fn test_alert_delivery() {
    let mut alert_manager = AlertManager::new();

    // Add test sinks
    let syslog_sink = SyslogSink::new("daemon").unwrap();
    let webhook_sink = WebhookSink::new("http://localhost:8080/webhook").unwrap();
    alert_manager.add_sink(Box::new(syslog_sink));
    alert_manager.add_sink(Box::new(webhook_sink));

    // Create test alert
    let alert = Alert {
        id: Uuid::new_v4(),
        rule_name: "test_rule".to_string(),
        severity: AlertSeverity::High,
        message: "Test alert".to_string(),
        process: ProcessInfo::default(),
        timestamp: Utc::now(),
        metadata: HashMap::new(),
    };

    // Send alert
    let result = alert_manager.send_alert(alert).await;
    assert!(result.is_ok());
}
```
End-to-End Testing#
CLI Testing#
Test command-line interface:
```rust
// Command::cargo_bin and the predicate helpers come from the
// assert_cmd and predicates crates, not std::process.
use assert_cmd::Command;
use predicates::prelude::*;

#[test]
fn test_cli_help() {
    let mut cmd = Command::cargo_bin("daemoneye-cli").unwrap();
    cmd.arg("--help")
        .assert()
        .success()
        .stdout(predicate::str::contains("DaemonEye CLI"));
}

#[test]
fn test_cli_query() {
    let mut cmd = Command::cargo_bin("daemoneye-cli").unwrap();
    cmd.args(["query", "SELECT * FROM processes LIMIT 1"])
        .assert()
        .success();
}

#[test]
fn test_cli_config() {
    let mut cmd = Command::cargo_bin("daemoneye-cli").unwrap();
    cmd.args(["config", "show"])
        .assert()
        .success()
        .stdout(predicate::str::contains("app:"));
}
```
Full System Testing#
Test complete system workflows:
```rust
#[tokio::test]
async fn test_full_system_workflow() {
    let temp_dir = TempDir::new().unwrap();
    let config_path = temp_dir.path().join("config.yaml");

    // Create test configuration
    let config = Config::default();
    config.save_to_file(&config_path).unwrap();

    // Start procmond (clone the path so each task owns its copy)
    let procmond_config = config_path.clone();
    let procmond_handle = tokio::spawn(async move {
        let procmond = ProcMonD::new(&procmond_config).await.unwrap();
        procmond.run().await
    });

    // Start daemoneye-agent
    let agent_config = config_path.clone();
    let agent_handle = tokio::spawn(async move {
        let agent = daemoneye_agent::new(&agent_config).await.unwrap();
        agent.run().await
    });

    // Wait for services to start
    tokio::time::sleep(Duration::from_secs(2)).await;

    // Test CLI operations
    let mut cmd = Command::cargo_bin("daemoneye-cli").unwrap();
    cmd.args([
        "--config",
        config_path.to_str().unwrap(),
        "query",
        "SELECT COUNT(*) FROM processes",
    ])
    .assert()
    .success();

    // Cleanup
    procmond_handle.abort();
    agent_handle.abort();
}
```
Performance Testing#
Automated Benchmarks#
DaemonEye provides a dedicated benchmarking workflow for performance
testing:
- Manual Trigger: Performance benchmarks are triggered manually via workflow_dispatch
- Configurable Suites: Select which benchmark suite to run ("all", "performance_benchmarks", or "process_collector_benchmarks")
- Regression Detection: Tests detect performance regressions and log warnings for review
- Baseline Comparison: Benchmark results are cached and compared against a baseline from the main branch
- Load Testing: Automated load tests validate system behavior under stress in a separate job
- Results Archival: Benchmark and load test results are uploaded as artifacts with 30-day retention
Developers can access benchmark results from the GitHub Actions
workflow artifacts. Performance regressions are logged as warnings but
do not fail the build, allowing for manual review and assessment.
Load Testing#
Test system performance under load:
```rust
use criterion::{Criterion, black_box, criterion_group, criterion_main};

fn benchmark_process_collection(c: &mut Criterion) {
    let mut group = c.benchmark_group("process_collection");

    group.bench_function("collect_processes", |b| {
        b.iter(|| {
            let collector = ProcessCollector::new();
            black_box(collector.collect_processes())
        })
    });

    group.bench_function("collect_processes_parallel", |b| {
        b.iter(|| {
            let collector = ProcessCollector::new();
            black_box(collector.collect_processes_parallel())
        })
    });

    group.finish();
}

fn benchmark_database_operations(c: &mut Criterion) {
    let mut group = c.benchmark_group("database_operations");

    group.bench_function("insert_process", |b| {
        let db = Database::new(":memory:").unwrap();
        let process = ProcessInfo::default();
        b.iter(|| black_box(db.insert_process(&process)))
    });

    group.bench_function("query_processes", |b| {
        let db = Database::new(":memory:").unwrap();
        // Insert test data
        for i in 0..1000 {
            let process = ProcessInfo {
                pid: i,
                ..Default::default()
            };
            db.insert_process(&process).unwrap();
        }
        b.iter(|| black_box(db.query_processes("SELECT * FROM processes WHERE pid > ?", &[500])))
    });

    group.finish();
}

criterion_group!(
    benches,
    benchmark_process_collection,
    benchmark_database_operations
);
criterion_main!(benches);
```
Memory Testing#
Test memory usage and leaks:
```rust
#[tokio::test]
async fn test_memory_usage() {
    let initial_memory = get_memory_usage();

    // Run operations that should not leak memory
    for _ in 0..1000 {
        let collector = ProcessCollector::new();
        let _processes = collector.collect_processes().await.unwrap();
        drop(collector);
    }

    // Yield so pending async cleanup can run (Rust has no garbage collector)
    tokio::task::yield_now().await;

    let final_memory = get_memory_usage();
    let memory_increase = final_memory.saturating_sub(initial_memory);

    // Memory increase should be minimal
    assert!(memory_increase < 10 * 1024 * 1024); // 10 MB
}

fn get_memory_usage() -> usize {
    // Platform-specific memory usage detection
    #[cfg(target_os = "linux")]
    {
        let status = std::fs::read_to_string("/proc/self/status").unwrap();
        for line in status.lines() {
            if line.starts_with("VmRSS:") {
                let parts: Vec<&str> = line.split_whitespace().collect();
                return parts[1].parse::<usize>().unwrap() * 1024; // kB to bytes
            }
        }
        0
    }
    #[cfg(not(target_os = "linux"))]
    {
        // Fallback for other platforms
        0
    }
}
```
Stress Testing#
Test system behavior under stress:
```rust
#[tokio::test]
async fn test_stress_collection() {
    let collector = ProcessCollector::new();

    // Run collection continuously for 60 seconds
    let start = Instant::now();
    let mut count = 0;
    while start.elapsed() < Duration::from_secs(60) {
        let processes = collector.collect_processes().await.unwrap();
        count += processes.len();

        // Small delay to prevent overwhelming the system
        tokio::time::sleep(Duration::from_millis(100)).await;
    }

    // Should have collected a reasonable number of processes
    assert!(count > 0);
    println!("Collected {count} processes in 60 seconds");
}
```
Security Testing#
Fuzz Testing#
Test with random inputs using cargo-fuzz; each fuzz target lives in its own file under fuzz/fuzz_targets/:

```rust
// fuzz/fuzz_targets/process_info.rs
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // Deserialization must never panic on arbitrary input
    if let Ok(process_info) = ProcessInfo::from_bytes(data) {
        let _ = process_info.pid;
        let _ = process_info.name;
    }
});
```

```rust
// fuzz/fuzz_targets/sql_query.rs
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // SQL query validation must never panic on arbitrary input
    if let Ok(query) = std::str::from_utf8(data) {
        let _ = validate_sql_query(query);
    }
});
```
Security Boundary Testing#
Test security boundaries:
```rust
#[tokio::test]
async fn test_privilege_dropping() {
    let collector = ProcessCollector::new();

    // Should start with elevated privileges
    assert!(collector.has_privileges());

    // Drop privileges
    collector.drop_privileges().await.unwrap();

    // Should no longer have privileges
    assert!(!collector.has_privileges());

    // Should still be able to collect processes (with reduced capabilities)
    let processes = collector.collect_processes().await.unwrap();
    assert!(!processes.is_empty());
}

#[tokio::test]
async fn test_sql_injection_prevention() {
    let db = Database::new(":memory:").await.unwrap();

    // Test various SQL injection attempts
    let malicious_queries = vec![
        "'; DROP TABLE processes; --",
        "1' OR '1'='1",
        "'; INSERT INTO processes VALUES (9999, 'hacker', '/bin/evil'); --",
    ];

    for query in malicious_queries {
        let result = db.execute_query(query).await;
        // Should either reject the query or sanitize it safely
        match result {
            Ok(_) => {
                // If the query succeeds, verify no damage was done
                let count = db.count_processes().await.unwrap();
                assert_eq!(count, 0); // No processes should exist
            }
            Err(_) => {
                // Query was rejected, which is also acceptable
            }
        }
    }
}
```
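The `validate_sql_query` helper exercised above is not defined anywhere in this document. As a rough illustration of the allowlist style the tests imply (an assumption, not DaemonEye's actual implementation), a validator for single read-only statements might look like:

```rust
/// Hypothetical sketch: accept only a single SELECT statement and
/// reject separators and comment tokens that enable injection.
/// DaemonEye's real validator is not shown in this document.
fn validate_sql_query(query: &str) -> Result<(), String> {
    let trimmed = query.trim();
    // Only read-only SELECT statements are allowed
    if !trimmed.to_ascii_uppercase().starts_with("SELECT") {
        return Err("only SELECT statements are allowed".to_string());
    }
    // Reject statement separators and SQL comments outright
    if trimmed.contains(';') || trimmed.contains("--") || trimmed.contains("/*") {
        return Err("statement separators and comments are rejected".to_string());
    }
    Ok(())
}

fn main() {
    assert!(validate_sql_query("SELECT * FROM processes").is_ok());
    assert!(validate_sql_query("'; DROP TABLE processes; --").is_err());
    assert!(validate_sql_query("SELECT 1; DELETE FROM processes").is_err());
}
```

A real validator would parse the statement rather than scan substrings, but the shape matches the property tested earlier: it returns a `Result` and never panics.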
Input Validation Testing#
Test input validation:
```rust
#[test]
fn test_input_validation() {
    // Test valid inputs
    let valid_process = ProcessInfo {
        pid: 1234,
        name: "valid_process".to_string(),
        executable_path: Some("/usr/bin/valid".to_string()),
        command_line: Some("valid --arg value".to_string()),
        start_time: Some(Utc::now()),
        cpu_usage: Some(0.5),
        memory_usage: Some(1024),
        status: ProcessStatus::Running,
        executable_hash: Some("abc123".to_string()),
        collection_time: Utc::now(),
    };
    assert!(valid_process.validate().is_ok());

    // Test invalid inputs
    let invalid_process = ProcessInfo {
        pid: 0,                                 // Invalid PID
        name: "".to_string(),                   // Empty name
        executable_path: Some("".to_string()),  // Empty path
        command_line: Some("a".repeat(10_000)), // Too long
        start_time: Some(Utc::now()),
        cpu_usage: Some(-1.0),                  // Negative CPU usage
        memory_usage: Some(0),
        status: ProcessStatus::Running,
        executable_hash: Some("invalid_hash".to_string()),
        collection_time: Utc::now(),
    };
    assert!(invalid_process.validate().is_err());
}
```
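The validation rules implied by the invalid example above (non-zero PID, non-empty name, bounded command line, non-negative CPU usage) can be sketched as follows. The struct and the numeric limits here are simplified assumptions for illustration, not the real `ProcessInfo`:

```rust
// Simplified stand-in for ProcessInfo; field set and limits are assumptions.
struct ProcessInfo {
    pid: u32,
    name: String,
    command_line: Option<String>,
    cpu_usage: Option<f64>,
}

impl ProcessInfo {
    fn validate(&self) -> Result<(), String> {
        if self.pid == 0 {
            return Err("pid must be non-zero".to_string());
        }
        if self.name.is_empty() {
            return Err("name must not be empty".to_string());
        }
        // Hypothetical cap; the real limit is not stated in this document
        if self.command_line.as_deref().is_some_and(|c| c.len() > 4096) {
            return Err("command line too long".to_string());
        }
        if self.cpu_usage.is_some_and(|c| c < 0.0) {
            return Err("cpu_usage must be non-negative".to_string());
        }
        Ok(())
    }
}

fn main() {
    let valid = ProcessInfo {
        pid: 1234,
        name: "valid_process".to_string(),
        command_line: Some("valid --arg value".to_string()),
        cpu_usage: Some(0.5),
    };
    assert!(valid.validate().is_ok());

    let invalid = ProcessInfo {
        pid: 0,
        name: String::new(),
        command_line: Some("a".repeat(10_000)),
        cpu_usage: Some(-1.0),
    };
    assert!(invalid.validate().is_err());
}
```

Checking one rule at a time and returning the first failure keeps error messages specific, which matters when a test asserts on why validation failed.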
Test Configuration#
Test Environment Setup#
```yaml
# test-config.yaml
app:
  log_level: debug
  scan_interval_ms: 1000
  batch_size: 10
database:
  path: ':memory:'
  max_connections: 5
  retention_days: 1
alerting:
  enabled: false
testing:
  enable_mocks: true
  mock_external_services: true
  test_data_dir: /tmp/daemoneye-test
  cleanup_after_tests: true
```
Test Data Management#
```rust
pub struct TestDataManager {
    temp_dir: TempDir,
    test_data: HashMap<String, Vec<u8>>,
}

impl TestDataManager {
    pub fn new() -> Self {
        Self {
            temp_dir: TempDir::new().unwrap(),
            test_data: HashMap::new(),
        }
    }

    pub fn add_test_data(&mut self, name: &str, data: &[u8]) {
        self.test_data.insert(name.to_string(), data.to_vec());
    }

    pub fn get_test_data(&self, name: &str) -> Option<&[u8]> {
        self.test_data.get(name).map(|v| v.as_slice())
    }

    pub fn create_test_database(&self) -> PathBuf {
        let db_path = self.temp_dir.path().join("test.db");
        let db = Database::new(&db_path).unwrap();
        db.create_schema().unwrap();
        db_path
    }
}
```
Continuous Integration#
GitHub Actions Workflows#
DaemonEye uses two separate GitHub Actions workflows for testing:
Main CI Workflow (.github/workflows/ci.yml)#

The main CI pipeline runs on every push and pull request:

```yaml
name: Tests
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: jdx/mise-action@v3
        with:
          install: true
          cache: true
          github_token: ${{ secrets.GITHUB_TOKEN }}
      - name: Ensure rustfmt and clippy are installed
        run: rustup component add rustfmt clippy
      - name: Check formatting
        run: just lint-rust
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: jdx/mise-action@v3
        with:
          install: true
          cache: true
          github_token: ${{ secrets.GITHUB_TOKEN }}
      - name: Run tests (all features)
        run: mise x -- cargo nextest run --profile ci --all-features
      - name: Build release
        run: mise x -- cargo build --release --all-features
  test-cross-platform:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    needs: quality
    steps:
      - uses: actions/checkout@v6
      - uses: jdx/mise-action@v3
        with:
          install: true
          cache: true
          github_token: ${{ secrets.GITHUB_TOKEN }}
      - run: mise x -- cargo nextest run --profile ci --all-features
      - run: mise x -- cargo build --release --all-features
  coverage:
    runs-on: ubuntu-latest
    needs: [test, test-cross-platform, quality]
    steps:
      - uses: actions/checkout@v6
      - uses: jdx/mise-action@v3
        with:
          install: true
          cache: true
          github_token: ${{ secrets.GITHUB_TOKEN }}
      - name: Generate coverage
        run: mise x -- cargo llvm-cov --all-features --no-report
      - name: Combine coverage reports
        run: mise x -- cargo llvm-cov report --lcov --output-path lcov.info
      - name: Upload to Codecov
        uses: codecov/codecov-action@v5
        with:
          files: lcov.info
          fail_ci_if_error: false
          token: ${{ secrets.CODECOV_TOKEN }}
```
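The `--profile ci` flag passed to cargo-nextest above refers to a profile defined in `.config/nextest.toml`. The repository's actual profile is not reproduced in this document; a minimal sketch of what such a profile typically contains:

```toml
# .config/nextest.toml — illustrative values, not the repo's actual settings
[profile.ci]
retries = 2                         # retry flaky tests before reporting failure
fail-fast = false                   # run the full suite even after a failure
failure-output = "immediate-final"  # print failures as they happen and again at the end
```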
Benchmarks Workflow (.github/workflows/benchmarks.yml)#

The benchmarks workflow is triggered manually and runs independently:

```yaml
name: Benchmarks
on:
  workflow_dispatch:
    inputs:
      suite:
        description: Benchmark suite to run
        required: false
        default: all
        type: choice
        options:
          - all
          - performance_benchmarks
          - process_collector_benchmarks
jobs:
  benchmarks:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    steps:
      - uses: actions/checkout@v6
        with:
          fetch-depth: 0
      - uses: jdx/mise-action@v3
        with:
          install: true
          cache: true
          github_token: ${{ secrets.GITHUB_TOKEN }}
      - name: Restore baseline benchmarks
        uses: actions/cache/restore@v4
        with:
          path: target/criterion
          key: criterion-baseline-${{ runner.os }}
      - name: Run benchmarks
        env:
          BENCH_SUITE: ${{ inputs.suite }}
        run: |
          if [ "$BENCH_SUITE" = "all" ]; then
            mise x -- cargo bench --package procmond 2>&1 | tee bench-output.txt
          else
            mise x -- cargo bench --package procmond --bench "$BENCH_SUITE" 2>&1 | tee bench-output.txt
          fi
      - name: Check for performance regression
        run: |
          if grep -q "Performance has regressed" bench-output.txt; then
            echo "::warning::Performance regression detected in benchmarks"
            grep -A2 "Performance has regressed" bench-output.txt
          else
            echo "No performance regressions detected"
          fi
      - name: Save baseline benchmarks
        uses: actions/cache/save@v4
        if: github.ref == 'refs/heads/main'
        with:
          path: target/criterion
          key: criterion-baseline-${{ runner.os }}
      - name: Upload benchmark results
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: benchmark-results
          path: bench-output.txt
          retention-days: 30
  load-tests:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - uses: actions/checkout@v6
      - uses: jdx/mise-action@v3
        with:
          install: true
          cache: true
          github_token: ${{ secrets.GITHUB_TOKEN }}
      - name: Run load tests
        run: NO_COLOR=1 TERM=dumb mise x -- cargo test --package procmond --test load_tests -- --ignored --nocapture 2>&1 | tee load-test-output.txt
      - name: Upload load test results
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: load-test-results
          path: load-test-output.txt
          retention-days: 30
```
CI Jobs Overview#
The main CI pipeline includes the following jobs:
- quality: Runs code formatting and linting checks
- test: Executes the full test suite with all features enabled
- test-cross-platform: Tests on Ubuntu, macOS, and Windows
- coverage: Generates and uploads code coverage reports

The benchmarks workflow includes two independent jobs:

- benchmarks: Runs performance benchmarks with configurable suite selection (15-minute timeout)
- load-tests: Runs load tests under stress conditions (10-minute timeout)
Accessing Benchmark Results#
Benchmark results are available through the dedicated benchmarks
workflow:
- Manual Trigger: Navigate to the Actions tab, select the "Benchmarks" workflow, and choose "Run workflow" to trigger it manually
- Suite Selection: Choose which benchmark suite to run: "all" (default), "performance_benchmarks", or "process_collector_benchmarks"
- Workflow Artifacts: Download the benchmark-results and load-test-results artifacts from the workflow summary page
- CI Logs: View benchmark output directly in the workflow logs
- Performance Alerts: Regressions are logged as warnings for manual review without failing the workflow

The benchmarks workflow stores baseline results from the main branch and compares all subsequent runs against this baseline to detect performance regressions.
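The same trigger-and-download flow can also be driven from a terminal, assuming the GitHub CLI (`gh`) is installed and authenticated; the workflow file name and artifact names below are the ones used on this page:

```bash
# Trigger the benchmarks workflow with a specific suite
gh workflow run benchmarks.yml -f suite=process_collector_benchmarks

# After the run completes, download its artifacts
gh run download --name benchmark-results
gh run download --name load-test-results
```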
Test Reporting#
```rust
use insta::assert_snapshot;

#[test]
fn test_config_serialization() {
    let config = Config::default();
    let serialized = serde_yaml::to_string(&config).unwrap();

    // Snapshot testing for configuration
    assert_snapshot!(serialized);
}

#[test]
fn test_alert_format() {
    let alert = Alert {
        id: Uuid::parse_str("123e4567-e89b-12d3-a456-426614174000").unwrap(),
        rule_name: "test_rule".to_string(),
        severity: AlertSeverity::High,
        message: "Test alert message".to_string(),
        process: ProcessInfo::default(),
        // A fixed timestamp keeps the snapshot deterministic
        timestamp: DateTime::parse_from_rfc3339("2025-01-01T00:00:00Z")
            .unwrap()
            .with_timezone(&Utc),
        metadata: HashMap::new(),
    };

    let formatted = alert.format_json().unwrap();
    assert_snapshot!(formatted);
}
```
Test Maintenance#
Test Organization#
```text
tests/
├── unit/
│   ├── collector_tests.rs
│   ├── database_tests.rs
│   └── alert_tests.rs
├── integration/
│   ├── ipc_tests.rs
│   ├── database_tests.rs
│   └── alert_delivery_tests.rs
├── e2e/
│   ├── cli_tests.rs
│   └── system_tests.rs
└── common/
    ├── test_helpers.rs
    └── test_data.rs
```
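With a layout like this, subsets of the suite can be selected by name, assuming cargo-nextest is installed; the filter strings below are illustrative:

```bash
# Run only tests whose names contain "integration"
cargo nextest run -E 'test(integration)'

# Run the tests from a single test binary
cargo nextest run --test cli_tests
```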
Test Utilities#
```rust
// tests/common/test_helpers.rs
pub struct TestHelper {
    temp_dir: TempDir,
    config: Config,
}

impl TestHelper {
    pub fn new() -> Self {
        let temp_dir = TempDir::new().unwrap();
        let config = Config::default();
        Self { temp_dir, config }
    }

    pub fn create_test_database(&self) -> Database {
        let db_path = self.temp_dir.path().join("test.db");
        Database::new(&db_path).unwrap()
    }

    pub fn create_test_config(&self) -> PathBuf {
        let config_path = self.temp_dir.path().join("config.yaml");
        self.config.save_to_file(&config_path).unwrap();
        config_path
    }

    pub fn cleanup(&self) {
        // Cleanup test resources (TempDir removes its files on drop)
    }
}

impl Drop for TestHelper {
    fn drop(&mut self) {
        self.cleanup();
    }
}
```
This testing documentation provides comprehensive guidance for
testing DaemonEye. For additional testing information, consult the
specific test files or contact the development team.
Source note: Populated from the public repo
(docs/src/testing.md) on 2026-04-18. This page was
previously empty; the content above mirrors the repo at the time of
sync.