The Gold Digger project has implemented a comprehensive integration testing framework to ensure robust, production-like validation of its query tool. The following overview details enhancements across containerized database usage, data and output validation, error handling, performance benchmarking, security, cross-platform compatibility, and CI integration.
Real MySQL/MariaDB Container Usage
Integration tests use the Rust testcontainers crate to spin up isolated MySQL and MariaDB containers. Both TLS and non-TLS configurations are supported, with ephemeral certificates generated per test run for secure scenarios. The framework includes health checks, retry logic, and resource management tailored for CI environments. Containers are seeded with comprehensive schemas and test data, covering all major MySQL/MariaDB types and edge cases. Platform-specific optimizations ensure reliable startup and cleanup on Linux and macOS, with graceful skipping if Docker is unavailable or resources are insufficient.
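The Docker preflight check mentioned above can be sketched using only the standard library. This is a minimal illustration, not Gold Digger's actual implementation; the helper name and the "skip when unavailable" policy are assumptions:

```rust
use std::process::Command;

/// Returns true when a working Docker daemon is reachable.
/// Hypothetical preflight helper; the real check may also inspect
/// available memory and disk before allowing container tests to run.
fn docker_available() -> bool {
    Command::new("docker")
        .arg("info")
        .output()
        .map(|out| out.status.success())
        .unwrap_or(false) // binary missing or daemon down -> skip tests
}
```

A test harness would call this once at startup and emit an actionable skip message instead of failing when it returns false.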
Data Type and Output Format Validation
Tests validate the handling of all MySQL/MariaDB data types, including VARCHAR, TEXT, INTEGER, DECIMAL, DATE, BINARY, JSON, ENUM, SET, and BOOLEAN. Output format validation covers CSV (RFC 4180 compliance), JSON (structure and key ordering), and TSV (delimiter and quoting), with checks for correct handling of NULLs, Unicode, and special characters. Format-specific validators are implemented for each output type, ensuring compliance and consistency across platforms and database versions. Example validator trait:
pub trait FormatValidator {
    fn validate(&self, output: &str, expected: &TestCase) -> Result<ValidationResult>;
}
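As an illustration of the RFC 4180 rules such a validator enforces, a minimal quoting helper might look like this (the function name is hypothetical, shown only to make the rules concrete):

```rust
/// Quote a field per RFC 4180: wrap it in double quotes when it contains
/// a comma, a double quote, or a line break, doubling any embedded quotes.
/// Hypothetical helper; Gold Digger's validators check, rather than
/// produce, this quoting.
fn quote_csv_field(field: &str) -> String {
    let needs_quoting = field.chars().any(|c| matches!(c, ',' | '"' | '\n' | '\r'));
    if needs_quoting {
        format!("\"{}\"", field.replace('"', "\"\""))
    } else {
        field.to_string()
    }
}
```

A validator can apply the same rules in reverse: any unquoted field containing a delimiter or line break is a compliance failure.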
Error Handling
Error scenario tests verify Gold Digger’s responses to invalid SQL syntax, non-existent tables, connection failures, permission denied, and file I/O errors. Each scenario checks for appropriate exit codes and meaningful error messages. Errors are classified for container setup, database seeding, CLI execution, output validation, performance threshold breaches, and configuration issues. Recovery strategies include retries, health checks, test isolation, and resource cleanup. Example error enum:
#[derive(Debug, thiserror::Error)]
pub enum IntegrationTestError {
    #[error("Container setup failed: {0}")]
    ContainerSetup(#[from] testcontainers::core::error::TestcontainersError),
    // ... other variants
}
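The retry recovery strategy can be sketched generically; the attempt count and backoff schedule below are illustrative assumptions, not the framework's actual tuning:

```rust
use std::{thread, time::Duration};

/// Retry a fallible setup step (e.g. container startup) with linear
/// backoff. A generic sketch of the recovery strategy described above.
fn retry<T, E>(
    attempts: u32,
    delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut last_err = None;
    for n in 0..attempts {
        match op() {
            Ok(value) => return Ok(value),
            Err(e) => {
                last_err = Some(e);
                thread::sleep(delay * (n + 1)); // linear backoff between tries
            }
        }
    }
    Err(last_err.expect("attempts must be non-zero"))
}
```

Pairing this with a health-check closure (e.g. "can I open a connection?") gives the container startup resilience the framework relies on in CI.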
Performance Benchmarks
Performance tests measure query execution time, output generation time, and memory usage using the sysinfo crate for cross-platform compatibility. Benchmarks include warm-up runs to avoid first-run noise, and metrics such as rows processed and output file size. Thresholds are set to detect regressions, and recommendations are provided for consistent CI benchmarking (e.g., CPU governor settings, process priority).
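The warm-up-then-measure loop described above can be sketched as follows; the warm-up and run counts are illustrative assumptions:

```rust
use std::time::{Duration, Instant};

/// Time an operation after discarding warm-up runs, which absorb
/// first-run noise such as cache fills and lazy initialization.
/// Returns the average duration over the measured runs.
fn measure<F: FnMut()>(mut op: F, warmup: u32, runs: u32) -> Duration {
    for _ in 0..warmup {
        op(); // warm-up results are intentionally discarded
    }
    let start = Instant::now();
    for _ in 0..runs {
        op();
    }
    start.elapsed() / runs
}
```

A benchmark would compare the returned average against a stored threshold and fail the test (or flag a regression) when it is exceeded.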
Security Tests
Security validation ensures credentials are never logged, error messages do not expose sensitive information, TLS certificate handling is robust, and connection strings with special characters are handled safely. Tests verify credential redaction in logs and error output, validate TLS connection establishment and certificate verification, and check for proper error handling in security-related scenarios.
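Credential redaction for connection strings can be illustrated with a simplified sketch; this is not Gold Digger's redaction code, and a production version must also cope with percent-encoded `:` and `@` characters:

```rust
/// Redact the password in a URL-style connection string:
/// mysql://user:secret@host/db -> mysql://user:***@host/db
/// Simplified sketch for illustration only.
fn redact_url(url: &str) -> String {
    if let (Some(scheme_end), Some(at)) = (url.find("://"), url.rfind('@')) {
        let creds_start = scheme_end + 3;
        if at > creds_start {
            // Look for the user:password separator inside the credentials.
            if let Some(colon) = url[creds_start..at].find(':') {
                let colon = creds_start + colon;
                return format!("{}***{}", &url[..colon + 1], &url[at..]);
            }
        }
    }
    url.to_string() // no credentials present; nothing to redact
}
```

Security tests then assert that every log line and error message has passed through such a redaction step before being emitted.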
Cross-Platform Compatibility
Integration tests are designed to run consistently on Linux, macOS, and Windows. Platform-specific path handling and line ending normalization are included in output format validators. Docker preflight checks and resource recommendations ensure tests only run where supported, with actionable skip messages for unsupported platforms or insufficient resources. Cleanup routines are optimized for each OS to prevent resource leaks.
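The line ending normalization applied by the output validators can be as simple as this minimal sketch:

```rust
/// Normalize Windows (CRLF) and bare-CR line endings to LF so output
/// comparisons behave identically on Linux, macOS, and Windows.
/// The CRLF replacement must run first so its LF is not rewritten twice.
fn normalize_line_endings(s: &str) -> String {
    s.replace("\r\n", "\n").replace('\r', "\n")
}
```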
Implementation Plan
The integration testing enhancements were implemented in four phases:
- Core Infrastructure: Set up testcontainers integration, database schema and seeding, test runner framework, and output validation utilities.
- Comprehensive Test Coverage: Implement data type validation, output format compliance, error scenario tests, and CLI integration tests.
- Performance and Security: Add benchmarking framework, security validation, regression detection, and cross-platform validation.
- CI Integration and Optimization: Integrate with CI pipeline, optimize test execution, add reporting (JUnit/XML), and maintenance tools.
CI Workflow Integration
Tests are integrated into GitHub Actions CI workflows. The pipeline includes:
- Fast test subset for pull requests (<5 minutes)
- Full suite for main branch (<15 minutes)
- Performance benchmarks on release candidates
- Cross-platform validation matrix (Linux, macOS, Windows)
- Docker service enabled for container-based tests
- Artifact collection for failed tests (container logs, output files)
- JUnit/XML report generation for CI annotations
- Flaky test quarantine and retry logic for unstable environments
Example CI job configuration:
jobs:
  test-features:
    runs-on: ubuntu-latest
    steps:
      - uses: taiki-e/checkout-action@v1
      - name: Setup Rust
        uses: actions-rust-lang/setup-rust-toolchain@v1
      - name: Install cargo-nextest
        uses: taiki-e/install-action@v2
        with:
          tool: cargo-nextest
      - name: Run tests
        run: cargo nextest run
Test Organization Example
tests/
├── integration/
│ ├── mod.rs # Common test utilities and setup
│ ├── data_types.rs # Data type handling tests
│ ├── output_formats.rs # CSV/JSON/TSV format validation
│ ├── error_scenarios.rs # Error handling and exit code tests
│ ├── performance.rs # Performance benchmarks
│ ├── cli_integration.rs # CLI flag/config tests
│ └── security.rs # Credential/security tests
├── fixtures/
│ ├── schema.sql # Test database schema
│ ├── seed_data.sql # Test data
│ └── test_queries/ # Predefined queries
└── integration_tests.rs # Main test entry point
This framework ensures Gold Digger’s integration tests are thorough, reliable, and maintainable, providing high confidence in the tool’s correctness and security across environments.