CI and Automation Hooks
Type: Document
Status: Published
Created: Oct 31, 2025
Updated: Mar 9, 2026
Updated by: Dosu Bot

The DaemonEye project uses a modern, maintainable continuous integration (CI) and automation setup based on GitHub Actions. This system replaced the previous CircleCI configuration, which was fully removed from the repository to simplify project setup and align with current development workflows (PR #102).

CI and Automation Workflows

All CI/CD workflows are defined in the .github/workflows/ directory. Key workflows include:

  • ci.yml: Main CI pipeline for code quality, testing, cross-platform builds, and coverage reporting.
  • benchmarks.yml: Performance benchmarks and load tests, run on-demand.
  • docs.yml: Documentation build and deployment to GitHub Pages.
  • codeql.yml: Code analysis and security scanning.
  • scorecard.yml: Supply-chain security analysis using OpenSSF Scorecard.

Scorecard Supply-Chain Security Workflow

The scorecard.yml workflow runs supply-chain security analysis using OpenSSF Scorecard. It is triggered by pushes to the main branch, by a weekly schedule, and by branch protection rule events. The workflow:

  • Checks out the repository code.
  • Runs Scorecard analysis and generates SARIF results.
  • Publishes results to the OpenSSF REST API for badge integration (on public repositories).
  • Uploads results as artifacts and to GitHub's code scanning dashboard.
  • Supports optional configuration for private repositories and branch protection checks.

This workflow helps maintainers monitor supply-chain security posture and enables consumers to access Scorecard results and badges. For details on configuration and authentication, see the Scorecard Action documentation.
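The steps above can be sketched as a single analysis job. This is an illustrative outline, not the repository's exact scorecard.yml; step ordering, permissions, and action versions are assumptions:

jobs:
  analysis:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # needed to upload SARIF to code scanning
      id-token: write          # needed to publish results to the OpenSSF REST API
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
      - uses: ossf/scorecard-action@v2
        with:
          results_file: results.sarif
          results_format: sarif
          publish_results: true   # only honored on public repositories
      - uses: actions/upload-artifact@v4
        with:
          name: scorecard-sarif
          path: results.sarif
      - uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: results.sarif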

Workflow Triggers and Automation Hooks

Workflows are triggered on push and pull request events to the main branch. Additionally, all major workflows support explicit user action triggers via the workflow_dispatch event, allowing maintainers to manually start CI or documentation jobs as needed (ci.yml, docs.yml, codeql.yml). Documentation updates are deployed automatically on push, pull request, or manual dispatch.

The Scorecard workflow is triggered by pushes to main, by a weekly schedule (every Tuesday at 11:28 UTC), and by branch protection rule events. It ensures supply-chain security checks are regularly updated and visible.
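Those triggers correspond to an on: block along these lines (a sketch; the cron expression encodes the Tuesday 11:28 UTC schedule described above):

on:
  push:
    branches: [main]
  branch_protection_rule:
  schedule:
    - cron: '28 11 * * 2'   # Tuesdays at 11:28 UTC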

Job Structure and Dependencies

The main CI workflow runs all jobs unconditionally to ensure comprehensive testing on every push and pull request. Quality checks run in a dedicated quality job, which includes rustfmt for formatting and clippy for linting. Testing is performed in test and test-cross-platform jobs, with the cross-platform job using a matrix strategy to test on Linux, macOS, and Windows runners.

Job dependencies are structured as follows:

  • quality runs independently
  • test runs independently
  • test-cross-platform depends on quality
  • coverage depends on test, test-cross-platform, and quality

The Scorecard workflow runs its analysis job only on the default branch or for pull requests, and uploads results to both the code scanning dashboard and as artifacts.

Coverage Reporting, Quality Checks, and Security Analysis

Coverage reporting is integrated using cargo-llvm-cov, which generates coverage reports and uploads them to Codecov and Qlty using their respective GitHub Actions (ci.yml). Some workflows also use cargo-tarpaulin to generate HTML coverage reports (testing.md). Quality gates include passing all tests, maintaining >85% code coverage, successful linting and formatting, security audits, and performance benchmarks (issue #61).
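A coverage job of this shape typically installs cargo-llvm-cov, produces LCOV output, and hands it to the uploader actions. A minimal sketch, assuming the taiki-e installer action and the Codecov action; the Qlty upload step would be analogous with its own action:

coverage:
  needs: [test, test-cross-platform, quality]
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: taiki-e/install-action@cargo-llvm-cov
    - run: cargo llvm-cov --workspace --lcov --output-path lcov.info
    - uses: codecov/codecov-action@v4
      with:
        files: lcov.info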

The Scorecard workflow adds supply-chain security checks, including branch protection, maintained status, and other Scorecard metrics. Results are published for visibility and badge integration.

Documentation Automation

Documentation is built using mdBook and Rustdoc, with plugins for enhanced features. The workflow builds and deploys documentation to GitHub Pages, triggered by code changes or manual dispatch (docs.yml).
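A build-and-deploy pipeline for this pattern might look like the following sketch. The book path, action versions, and the use of the official Pages actions are assumptions, not copied from docs.yml:

jobs:
  docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: peaceiris/actions-mdbook@v2
        with:
          mdbook-version: latest
      - run: mdbook build docs          # hypothetical book location
      - uses: actions/upload-pages-artifact@v3
        with:
          path: docs/book
  deploy:
    needs: docs
    runs-on: ubuntu-latest
    permissions:
      pages: write
      id-token: write
    environment:
      name: github-pages
    steps:
      - uses: actions/deploy-pages@v4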

Benchmarks Workflow

Performance benchmarks and load tests are executed in a separate .github/workflows/benchmarks.yml workflow that runs on-demand via manual trigger (workflow_dispatch). This workflow is independent of the main CI pipeline, allowing developers to run performance tests when needed without impacting regular CI execution times.

The workflow provides a configurable suite input parameter with the following options:

  • all (default): Runs all benchmark suites
  • performance_benchmarks: Runs only the performance benchmark suite
  • process_collector_benchmarks: Runs only the process collector benchmark suite

The workflow contains two independent jobs that run in parallel:

Benchmarks Job (15-minute timeout):

  • Runs cargo bench --package procmond (or specific suites based on input)
  • Restores baseline benchmarks from cache for regression detection
  • Checks for performance regressions and logs warnings if detected (does not fail the build)
  • Saves new baseline benchmarks to cache only when running on the main branch
  • Uploads benchmark results as artifacts with 30-day retention

Load Tests Job (10-minute timeout):

  • Runs cargo test --package procmond --test load_tests -- --ignored --nocapture
  • Validates system behavior under stress
  • Uploads load test results as artifacts with 30-day retention

Unlike the previous CI integration, performance regressions now log warnings but do not fail the build, allowing for more flexible performance monitoring while still alerting maintainers to potential issues.
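One common way to express "warn but do not fail" in a workflow step is the ::warning:: workflow command behind a non-fatal check. A sketch; the comparison script itself is hypothetical:

- name: Check for performance regressions
  run: |
    # Hypothetical comparison script; exits non-zero on regression.
    if ! ./scripts/compare_benchmarks.sh; then
      echo "::warning::Performance regression detected; see benchmark artifacts."
    fi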

Mergify Integration

The project uses Mergify (.mergify.yml) to automate handling of bot-generated pull requests, streamlining dependency updates and automated maintenance PRs. Mergify automatically approves and merges PRs from Dependabot, Dosubot, and release-plz after CI checks pass, and keeps these PRs up to date with the main branch.
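A .mergify.yml rule implementing this behavior might look like the following sketch. The check names are placeholders, and Dosubot and release-plz would be matched with additional author conditions in the same way:

pull_request_rules:
  - name: auto-approve and merge Dependabot PRs after CI passes
    conditions:
      - author = dependabot[bot]
      - "#check-failure = 0"
    actions:
      review:
        type: APPROVE
      queue: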

Maintaining and Extending Automation Workflows

To maintain and extend the automation workflows:

  • Follow the conventions in the workflow YAML files and contributing documentation.
  • Use explicit triggers (workflow_dispatch) for manual runs when testing new workflow changes or deploying documentation.
  • Expand the CI matrix to support new platforms or Rust versions by updating the matrix strategy in the workflow files.
  • Integrate new quality checks, coverage tools, or security analysis by adding steps to the relevant jobs in workflow files.
  • Maintain high code coverage and quality by regularly updating and enhancing unit, integration, property-based, and performance tests. Use tools like llvm-cov, criterion, and proptest as specified in the workspace Cargo.toml (issue #61).
  • Monitor CI metrics with daily coverage reports, weekly performance reviews, and periodic test suite maintenance.
  • Use the organized test directory structure for integration, benchmarks, property-based tests, and fixtures to support comprehensive testing strategies.
  • For supply-chain security, monitor Scorecard results and badge status, and update workflow configuration as needed for new checks or requirements.

Example: To add a new platform to the cross-platform test matrix, update the include section in the test-cross-platform job of ci.yml:

strategy:
  matrix:
    include:
      - os: ubuntu-24.04
        platform: "Linux"
      - os: macos-15
        platform: "macOS"
      - os: windows-2025
        platform: "Windows"
      # Add new platform below
      - os: ubuntu-22.04
        platform: "Linux"

For new coverage, quality, or security tools, add installation and execution steps to the relevant jobs, ensuring results are uploaded or reported as needed. For Scorecard, configure authentication and publishing as described in the Scorecard Action documentation.
