Contributing to DaemonEye
Type: External
Status: Published
Created: Apr 18, 2026
Updated: Apr 18, 2026
Updated by: Dosu Bot

Contributing to DaemonEye#

Thank you for your interest in contributing to DaemonEye! This guide
will help you get started with contributing to the project.


Table of Contents#

[TOC]


Getting Started#

Prerequisites#

Before contributing to DaemonEye, ensure you have:

  • Rust 1.91+: Latest stable Rust toolchain
  • Git: Version control system
  • Docker: For containerized testing (optional)
  • Just: Task runner (install with
    cargo install just)
  • Editor: VS Code with Rust extension
    (recommended)

Fork and Clone#

  1. Fork the repository on GitHub
  2. Clone your fork locally:
    git clone https://github.com/your-username/daemoneye.git
    cd daemoneye
  3. Add the upstream repository:
    git remote add upstream https://github.com/EvilBit-Labs/daemoneye.git

Development Setup#

  1. Install dependencies:
    just setup
  2. Run tests to ensure everything works:
    just test
  3. Build the project:
    just build

Development Environment#

Project Structure#

```
DaemonEye/
├── procmond/          # Process monitoring daemon
├── daemoneye-agent/   # Agent orchestrator
├── daemoneye-cli/     # CLI interface
├── daemoneye-lib/     # Shared library
├── docs/              # Documentation
├── tests/             # Integration tests
├── examples/          # Example configurations
├── justfile           # Task runner
├── Cargo.toml         # Workspace configuration
└── README.md          # Project README
```

Workspace Configuration#

DaemonEye uses a Cargo workspace with the following structure:
```toml
[workspace]
resolver = "2"
members = ["procmond", "daemoneye-agent", "daemoneye-cli", "daemoneye-lib"]

[workspace.dependencies]
tokio = { version = "1.0", features = ["full"] }
clap = { version = "4.6.0", features = ["derive", "completion"] }
serde = { version = "1.0", features = ["derive"] }
sqlx = { version = "0.7", features = ["runtime-tokio-rustls", "sqlite"] }
sysinfo = "0.38.4"
tracing = "0.1"
thiserror = "1.0"
anyhow = "1.0"
```

Development Commands#

Use the just task runner for common development
tasks:
```bash
# Setup development environment
just setup

# Build the project
just build

# Run tests
just test

# Run linting
just lint

# Format code
just fmt

# Run benchmarks
just bench

# Run procmond benchmarks
just bench-procmond

# Generate documentation
just docs

# Clean build artifacts
just clean
```

Commit Signing and GPG Setup#

Workspace tooling enforces signed commits
(git.enableCommitSigning) and a rebase-first sync strategy.
Configure GPG before making changes to avoid commit failures.

  1. Install GnuPG
    • macOS: brew install gnupg pinentry-mac
    • Linux: install gnupg and an appropriate
      pinentry package via your package manager
    • Windows: install Gpg4win and
      enable the GPG Agent during setup
  2. Generate a signing key (4096-bit RSA
    recommended)

    gpg --full-generate-key
    Choose RSA and RSA, key size 4096, set an
    expiration (or none), and provide your name and email that matches the
    user.email configured for Git.
  3. Locate the long key ID
    gpg --list-secret-keys --keyid-format=long
    Copy the 16-character key ID that appears after
    sec rsa4096/.
  4. Export the public key for review systems
    gpg --armor --export <KEY_ID> > ~/.gnupg/daemoneye.pub
    Upload this armored key to GitHub (Settings → SSH and GPG keys) and
    any internal key servers used by your organization.
  5. Configure Git to sign by default
    git config --global user.signingkey <KEY_ID>
    git config --global commit.gpgsign true
    git config --global tag.gpgsign true
    git config --global gpg.program "$(command -v gpg)"
    If the workspace settings conflict with per-repo preferences, create
    a project-local override via .git/config after following
    the required steps.
  6. Ensure the GPG agent is active
    • macOS: add
      pinentry-program /opt/homebrew/bin/pinentry-mac to
      ~/.gnupg/gpg-agent.conf, then run
      gpgconf --kill gpg-agent
    • Windows: restart the "GnuPG Agent" service from the Gpg4win config
      utility
    • Linux: ensure gpg-agent starts from your desktop
      session (eval "$(gpg-agent --daemon)" in shell profile if
      necessary)
  7. Verify signing works
    echo "test" | gpg --clearsign
    git commit --allow-empty -S -m "test: verify signing"
    If prompted for a passphrase, select "save in keychain/keyring" so
    future commits succeed from VS Code as well as the CLI.

Troubleshooting#

  • gpg: signing failed: No secret key — confirm the key ID
    in git config matches
    gpg --list-secret-keys
  • gpg: signing failed: Inappropriate ioctl for device —
    configure an appropriate pinentry program and restart
    gpg-agent
  • Emergency opt-out (e.g., broken key) — run
    git config commit.gpgsign false locally and notify
    the maintainers; automated pipelines still require signatures for merged
    commits.

These steps ensure developers understand the enforced workflow before
VS Code rejects unsigned commits. Refer back to this section any time a
workstation is rebuilt.

Code Standards#

Rust Standards#

DaemonEye follows strict Rust coding standards:

  1. Edition: Always use Rust 2024 Edition
  2. Linting: Zero warnings policy with
    cargo clippy -- -D warnings
  3. Safety: unsafe_code = "forbid"
    enforced at workspace level
  4. Formatting: Standard rustfmt with 119
    character line length
  5. Error Handling: Use thiserror for
    structured errors, anyhow for error context

Code Style#

```rust
// Use thiserror for library errors
#[derive(Debug, Error)]
pub enum CollectionError {
    #[error("Permission denied accessing process {pid}")]
    PermissionDenied { pid: u32 },
    #[error("Process {pid} no longer exists")]
    ProcessNotFound { pid: u32 },
    #[error("I/O error: {0}")]
    IoError(#[from] std::io::Error),
    #[error("No processes were collected")]
    NoProcessesCollected,
}

// Use anyhow for application error context
use anyhow::{Context, Result};

// Helper function to validate collected process data
fn validate_process_data(processes: &[ProcessInfo]) -> Result<(), CollectionError> {
    if processes.is_empty() {
        return Err(CollectionError::NoProcessesCollected);
    }
    // Additional validation logic could go here
    // For example: checking for required fields, data integrity, etc.
    Ok(())
}

pub async fn collect_processes() -> Result<Vec<ProcessInfo>, CollectionError> {
    let mut system = sysinfo::System::new_all();
    system.refresh_all();
    let processes = system
        .processes()
        .values()
        .map(|p| ProcessInfo::from(p))
        .collect::<Vec<_>>();
    // Validate collected data before returning
    validate_process_data(&processes)?;
    Ok(processes)
}

// Document all public APIs
/// Collects information about all running processes.
///
/// # Returns
///
/// A vector of ProcessInfo structs containing details about each process.
///
/// # Errors
///
/// This function will return an error if:
/// - System information cannot be accessed
/// - Process enumeration fails
/// - No processes could be collected from the system
/// - Memory allocation fails
///
/// # Examples
///
/// ```rust
/// use daemoneye_lib::collector::ProcessCollector;
///
/// let collector = ProcessCollector::new();
/// let processes = collector.collect_processes().await?;
/// println!("Found {} processes", processes.len());
/// ```
pub async fn collect_processes() -> Result<Vec<ProcessInfo>, CollectionError> {
    // Implementation
}
```

Naming Conventions#

  • Functions: snake_case
  • Variables: snake_case
  • Types: PascalCase
  • Constants: SCREAMING_SNAKE_CASE
  • Modules: snake_case
  • Files: snake_case.rs
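
As a quick illustration of these conventions together (the identifiers below are hypothetical, not taken from the DaemonEye codebase):

```rust
// Illustrative only; these names are not part of the DaemonEye codebase.

/// Constants: SCREAMING_SNAKE_CASE
const MAX_RETRY_COUNT: u32 = 3;

/// Types: PascalCase
struct ScanReport {
    /// Fields and variables: snake_case
    process_count: usize,
}

/// Functions: snake_case
fn summarize_scan(report: &ScanReport) -> String {
    format!(
        "scanned {} processes (max {} retries)",
        report.process_count, MAX_RETRY_COUNT
    )
}

fn main() {
    let report = ScanReport { process_count: 42 };
    println!("{}", summarize_scan(&report));
}
```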

Documentation Standards#

All public APIs must be documented with rustdoc comments:
```rust
/// A process information structure containing details about a running process.
///
/// This structure provides comprehensive information about a process including
/// its PID, name, executable path, command line arguments, and resource usage.
///
/// # Examples
///
/// ```rust
/// use daemoneye_lib::ProcessInfo;
///
/// let process = ProcessInfo {
///     pid: 1234,
///     name: "example".to_string(),
///     executable_path: Some("/usr/bin/example".to_string()),
///     command_line: Some("example --arg value".to_string()),
///     start_time: Some(Utc::now()),
///     cpu_usage: Some(0.5),
///     memory_usage: Some(1024),
///     status: ProcessStatus::Running,
///     executable_hash: Some("abc123".to_string()),
///     collection_time: Utc::now(),
/// };
/// ```
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct ProcessInfo {
    /// The process ID (PID) of the process
    pub pid: u32,
    /// The name of the process
    pub name: String,
    /// The full path to the process executable
    pub executable_path: Option<String>,
    /// The command line arguments used to start the process
    pub command_line: Option<String>,
    /// The time when the process was started
    pub start_time: Option<DateTime<Utc>>,
    /// The current CPU usage percentage
    pub cpu_usage: Option<f64>,
    /// The current memory usage in bytes
    pub memory_usage: Option<u64>,
    /// The current status of the process
    pub status: ProcessStatus,
    /// The SHA-256 hash of the process executable
    pub executable_hash: Option<String>,
    /// The time when this information was collected
    pub collection_time: DateTime<Utc>,
}
```

Testing Requirements#

Test Coverage#

All code must have comprehensive test coverage:

  • Unit Tests: Test individual functions and
    methods
  • Integration Tests: Test component interactions
  • End-to-End Tests: Test complete workflows
  • Property Tests: Test with random inputs
  • Fuzz Tests: Test with malformed inputs
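
As a dependency-free sketch of the property-testing idea: rather than a few hand-picked cases, assert an invariant over many generated inputs. A real suite would put this inside a #[test] and use a crate such as proptest; the encode/decode helpers below are hypothetical, not DaemonEye APIs.

```rust
// Property-testing sketch: check a roundtrip invariant over many generated
// inputs. Illustrative only; these helpers are not DaemonEye APIs.

/// Hypothetical helper: render a PID list as a comma-separated string.
fn encode_pids(pids: &[u32]) -> String {
    pids.iter().map(u32::to_string).collect::<Vec<_>>().join(",")
}

/// Hypothetical inverse of `encode_pids`.
fn decode_pids(s: &str) -> Vec<u32> {
    if s.is_empty() {
        return Vec::new();
    }
    s.split(',').map(|p| p.parse().expect("valid pid")).collect()
}

fn main() {
    // Cheap pseudo-random generator so the sketch needs no dependencies.
    let mut seed: u64 = 0x2545_F491;
    for len in 0..100 {
        let pids: Vec<u32> = (0..len)
            .map(|_| {
                seed = seed.wrapping_mul(6364136223846793005).wrapping_add(1);
                (seed >> 33) as u32
            })
            .collect();
        // Property: decoding an encoding returns the original input.
        assert_eq!(decode_pids(&encode_pids(&pids)), pids);
    }
    println!("roundtrip property held for 100 generated inputs");
}
```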

Test Structure#

```rust
#[cfg(test)]
mod tests {
    use super::*;
    use insta::assert_snapshot;
    use tempfile::TempDir;

    #[tokio::test]
    async fn test_process_collection() {
        let collector = ProcessCollector::new();
        let processes = collector.collect_processes().await.unwrap();

        assert!(!processes.is_empty());
        assert!(processes.iter().any(|p| p.pid > 0));
    }

    #[test]
    fn test_process_info_serialization() {
        let process = ProcessInfo::default();
        let serialized = serde_json::to_string(&process).unwrap();
        let deserialized: ProcessInfo = serde_json::from_str(&serialized).unwrap();
        assert_eq!(process, deserialized);
    }
}
```

Running Tests#

```bash
# Run all tests
cargo test

# Run specific test
cargo test test_process_collection

# Run tests with output
cargo test -- --nocapture

# Run integration tests
cargo test --test integration

# Run benchmarks
cargo bench

# Run fuzz tests
cargo fuzz run process_info
```

Pull Request Process#

Before Submitting#

  1. Sync with upstream:
    git fetch upstream
    git checkout main
    git merge upstream/main
  2. Create a feature branch:
    git checkout -b feature/your-feature-name
  3. Make your changes:
    • Write code following the coding standards
    • Add comprehensive tests
    • Update documentation
    • Run all tests and linting
  4. Commit your changes:
    git add .
    git commit -m "feat: add new feature description"

Commit Message Format#

Use conventional commits format:
```
<type>[optional scope]: <description>

[optional body]

[optional footer(s)]
```

Types#

  • feat: New feature
  • fix: Bug fix
  • docs: Documentation changes
  • style: Code style changes
  • refactor: Code refactoring
  • test: Test changes
  • chore: Build process or auxiliary tool changes
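
For illustration, a commit header in this format can be validated with a few lines of Rust. This checker is hypothetical, not part of the project's tooling, and ignores refinements such as the `!` breaking-change marker.

```rust
// Hypothetical conventional-commit header checker; not part of the DaemonEye
// tooling, shown only to make the format concrete.

const TYPES: [&str; 7] = ["feat", "fix", "docs", "style", "refactor", "test", "chore"];

/// Returns true if `header` looks like `type[(scope)]: description`.
fn is_conventional(header: &str) -> bool {
    // Split `type[(scope)]` from the description at the first ": ".
    let Some((prefix, description)) = header.split_once(": ") else {
        return false;
    };
    // Drop an optional `(scope)` suffix to recover the bare type.
    let ty = prefix.split('(').next().unwrap_or(prefix);
    TYPES.contains(&ty) && !description.is_empty()
}

fn main() {
    assert!(is_conventional("feat(collector): add process filtering by name"));
    assert!(!is_conventional("added some stuff"));
    println!("header checks passed");
}
```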

Examples#

```
feat(collector): add process filtering by name

Add ability to filter processes by name pattern using regex.
This improves performance when monitoring specific processes.

Closes #123
```

```
fix(database): resolve memory leak in query execution

Fix memory leak that occurred when executing large queries.
The issue was caused by not properly cleaning up prepared statements.

Fixes #456
```

Pull Request Guidelines#

  1. Title: Clear, descriptive title
  2. Description: Detailed description of changes
  3. Tests: Ensure all tests pass
  4. Documentation: Update relevant documentation
  5. Breaking Changes: Clearly mark any breaking
    changes
  6. Related Issues: Link to related issues

Pull Request Template#

```markdown
## Description

Brief description of the changes made.

## Type of Change

- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Documentation update
- [ ] Performance improvement
- [ ] Code refactoring

## Testing

- [ ] Unit tests pass
- [ ] Integration tests pass
- [ ] End-to-end tests pass
- [ ] Manual testing completed
- [ ] Performance tests pass

## Checklist

- [ ] Code follows the project's style guidelines
- [ ] Self-review of code completed
- [ ] Code is properly commented
- [ ] Documentation updated
- [ ] No new warnings introduced
- [ ] Breaking changes documented

## Related Issues

Closes #123
Fixes #456
```

Issue Reporting#

Bug Reports#

When reporting bugs, please include:

  1. Environment: OS, Rust version, DaemonEye
    version
  2. Steps to Reproduce: Clear, numbered steps
  3. Expected Behavior: What should happen
  4. Actual Behavior: What actually happens
  5. Logs: Relevant log output
  6. Screenshots: If applicable

Feature Requests#

When requesting features, please include:

  1. Use Case: Why is this feature needed?
  2. Proposed Solution: How should it work?
  3. Alternatives: Other solutions considered
  4. Additional Context: Any other relevant
    information

Issue Templates#

Use the provided issue templates:

  • Bug Report: .github/ISSUE_TEMPLATE/bug_report.md
  • Feature Request:
    .github/ISSUE_TEMPLATE/feature_request.md
  • Question: .github/ISSUE_TEMPLATE/question.md

Documentation#

Documentation Standards#

  • Keep documentation up to date
  • Use clear, concise language
  • Include code examples
  • Follow markdown best practices
  • Use consistent formatting

Documentation Structure#

```
docs/
├── src/
│   ├── introduction.md
│   ├── getting-started.md
│   ├── architecture/
│   ├── technical/
│   ├── user-guides/
│   ├── api-reference/
│   ├── deployment/
│   ├── security.md
│   ├── testing.md
│   └── contributing.md
├── book.toml
└── README.md
```

Building Documentation#

```bash
# Install mdbook
cargo install mdbook

# Build documentation
mdbook build

# Serve documentation locally
mdbook serve
```

Community Guidelines#

Code of Conduct#

We are committed to providing a welcoming and inclusive environment
for all contributors. Please:

  • Be respectful and constructive
  • Focus on what is best for the community
  • Show empathy towards other community members
  • Accept constructive criticism gracefully
  • Help others learn and grow

Communication#

  • GitHub Issues: For bug reports and feature
    requests
  • GitHub Discussions: For questions and general
    discussion
  • Pull Requests: For code contributions
  • Discord: For real-time chat (invite link in
    README)

Recognition#

Contributors are recognized in:

  • CONTRIBUTORS.md file
  • Release notes
  • Project documentation
  • Community highlights

Development Workflow#

Branch Strategy#

  • main: Production-ready code
  • develop: Integration branch for features
  • feature/*: Feature development branches
  • bugfix/*: Bug fix branches
  • hotfix/*: Critical bug fixes

Release Process#

  1. Version Bumping: Update version numbers
  2. Changelog: Update CHANGELOG.md
  3. Documentation: Update documentation
  4. Testing: Run full test suite
  5. Release: Create GitHub release
  6. Distribution: Publish to package managers

Continuous Integration#

All pull requests must pass:

  • Unit tests
  • Integration tests
  • Linting checks
  • Security scans
  • Performance benchmarks
  • Documentation builds

Getting Help#

If you need help contributing:

  1. Check Documentation: Review this guide and other
    docs
  2. Search Issues: Look for similar issues or
    discussions
  3. Ask Questions: Use GitHub Discussions or
    Discord
  4. Contact Maintainers: Reach out to project
    maintainers

License#

By contributing to DaemonEye, you agree that your contributions will
be licensed under the same license as the project (Apache 2.0 for core
features, commercial license for business/enterprise features).


Thank you for contributing to DaemonEye! Your contributions help
make the project better for everyone.


Source note: Populated from the public repo
(docs/src/contributing.md) on 2026-04-18. This page was
previously empty; the content above mirrors the repo at the time of
sync.