CI/CD Performance Targets
The CI/CD pipeline for this project enforces explicit performance targets to ensure reliability and efficiency: startup time under 250 ms (measured with the criterion benchmarking framework), memory usage that scales with row width (O(row_width)) rather than row count, streaming of multi-gigabyte datasets (1 GB+ processed without full materialization), and large exports (100,000 rows) completing in under 30 seconds. Unit test coverage is targeted above 90% for core modules, with property-based and error-scenario coverage across all exit codes and data types. These targets are validated both locally and in CI to ensure consistent performance and reliability across platforms [source] [source].
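The O(row_width) memory target implies streaming one row at a time rather than materializing the dataset. A minimal std-only sketch of that pattern (the `write_rows` helper and its CSV-ish row type are illustrative, not the project's actual API):

```rust
use std::io::{self, Write};

/// Streams rows to `out` one at a time, reusing a single line buffer,
/// so peak memory is proportional to the widest row, not the row count.
fn write_rows<W: Write, I>(out: &mut W, rows: I) -> io::Result<usize>
where
    I: IntoIterator<Item = Vec<String>>,
{
    let mut line = String::new();
    let mut written = 0;
    for row in rows {
        line.clear(); // reuse the allocation instead of growing per row
        line.push_str(&row.join(","));
        line.push('\n');
        out.write_all(line.as_bytes())?;
        written += 1;
    }
    Ok(written)
}
```

Because rows are consumed from an iterator and flushed immediately, a 1 GB+ export never holds more than one row (plus the output buffer) in memory.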
Benchmarking Parameters
Performance benchmarking in the pipeline uses the criterion crate (with HTML reporting) to measure and validate startup time, memory efficiency, and streaming throughput for CSV, JSON, and TSV format writers. Property-based testing with the proptest crate verifies type safety, memory behavior with large datasets, and edge cases such as NULL handling and character encoding. Benchmark regression detection is integrated into the CI workflow to catch performance degradations early. Benchmarks are run using justfile recipes, for example:
bench:
    cargo bench --features criterion
Test execution time, memory usage, and throughput are tracked as primary benchmarking parameters [source].
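Timing-based parameters such as the startup budget reduce to measuring a closure and comparing the result against a threshold. A std-only sketch of that measurement (the `timed` helper is hypothetical; the real benchmarks use criterion as described above):

```rust
use std::time::{Duration, Instant};

/// Times `f` and returns its result together with the elapsed wall-clock time.
fn timed<T>(f: impl FnOnce() -> T) -> (T, Duration) {
    let start = Instant::now();
    let value = f();
    (value, start.elapsed())
}
```

A smoke test can then assert `elapsed < Duration::from_millis(250)` against the startup path, while criterion provides the statistically rigorous version of the same measurement.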
Caching Strategies
The CI/CD pipeline implements OS-aware caching to optimize build and test performance. Caches are isolated per runner OS to avoid conflicts and maximize cache hit rates. Dependency caching is used for Rust build artifacts, and pre-commit validation steps leverage these caches to speed up repeated runs. The caching strategy is integrated into the workflow configuration, ensuring that each platform (Ubuntu, macOS, Windows) maintains its own cache keys and storage [source].
Resource Limits
Resource limits are enforced both at the infrastructure and test levels. Integration tests using Docker containers require a minimum of 2GB of available memory for MySQL container operations. The CI matrix runs on Ubuntu 22.04, macOS 13, and Windows 2022 runners, each with sufficient resources to handle large dataset exports and concurrent test execution. Memory usage is monitored during performance tests, and jobs are designed to run independently and in parallel, with proper resource cleanup after completion [source].
Performance Monitoring and Optimization
Performance is monitored in the pipeline through benchmark regression detection, memory usage validation, and execution time tracking. The CI workflow includes jobs for property testing (with extended timeouts), benchmark runs, and coverage reporting using cargo-tarpaulin. Performance baselines are established on reference hardware, and test results are compared against these baselines to detect regressions. The pipeline enforces zero-tolerance quality gates using commands like just fmt-check and just lint to maintain code quality and performance standards. Optimization efforts focus on improving caching strategies, consolidating workflows, and enhancing actionable failure reporting [source] [source].
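Regression detection of the kind described comes down to comparing a measurement against a stored baseline with a tolerance. A hypothetical std-only sketch (the 10% tolerance in the example is an assumption, not the project's configured threshold):

```rust
/// Returns true when `measured` exceeds `baseline` by more than
/// `tolerance` (e.g. 0.10 for a 10% regression budget).
fn is_regression(baseline: f64, measured: f64, tolerance: f64) -> bool {
    measured > baseline * (1.0 + tolerance)
}
```

In a CI job, `baseline` would come from results recorded on the reference hardware, and a `true` result would fail the quality gate with an actionable report.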
Local GitHub Actions Simulation
Local simulation of GitHub Actions workflows is supported to validate CI/CD changes before pushing to remote repositories. The project provides separate test commands for local versus CI execution, and Docker environment detection is used to conditionally run integration tests. The justfile includes a command for local CI validation:
just act-ci-dry
This command leverages tools like act to simulate GitHub Actions locally, allowing developers to catch issues early and ensure that workflows behave as expected in the CI environment. Docker-dependent tests are categorized with the #[ignore] attribute and can be selectively run based on local Docker availability [source] [source].
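The #[ignore]-plus-Docker-detection pattern can be sketched with the standard library alone; the `docker_available` helper and the test name below are illustrative, not the project's actual code:

```rust
use std::process::Command;

/// Returns true when a usable Docker daemon responds to `docker info`.
fn docker_available() -> bool {
    Command::new("docker")
        .arg("info")
        .output()
        .map(|out| out.status.success())
        .unwrap_or(false)
}

/// Run explicitly with `cargo test -- --ignored` when Docker is present.
#[test]
#[ignore = "requires a local Docker daemon"]
fn exports_from_mysql_container() {
    if !docker_available() {
        eprintln!("skipping: Docker not available");
        return;
    }
    // ... start the MySQL container and exercise the export path ...
}
```

Keeping the detection at runtime means the same test binary behaves correctly on developer machines without Docker and on CI runners that provide it.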