Archived version; content may be outdated.

Type: Topic
Status: Published
Created: Feb 24, 2026 by Dosu Bot
Updated: Mar 2, 2026 by Dosu Bot

Response Automation#

Response Automation is Dosu's core capability for automatically generating, evaluating, and publishing AI responses to user queries in GitHub issues, discussions, pull request review comments, GitLab merge requests, and Slack channels. The system enables teams to provide faster, more consistent support while maintaining quality through configurable oversight mechanisms.

When someone asks a question, Dosu searches indexed code, documentation, past issues, and other connected data sources to find relevant context, then generates a response that synthesizes what it found, with citations linking back to source material. This approach ensures responses are grounded in the actual codebase and documentation rather than generic information.

The system provides three configurable publishing modes—Mention, Auto Draft, and Auto Reply—that balance automation with human oversight. Combined with sophisticated evaluation mechanisms that assess response quality before publication, Response Automation enables teams to gradually increase automation as they gain confidence in the system's performance.

Response Generation Pipeline#

Overview#

When a question arrives, the pipeline searches indexed code, documentation, past issues, and other connected data sources, then synthesizes the findings into a response with citations linking back to the original materials. This citation-based approach provides transparency and lets users verify information by reviewing the sources directly.

Response Personalization#

Dosu tailors responses based on the user's technical background and expertise level. Users can add a bio to their profile (found under Settings > Profile) describing their experience, technical expertise, or role. When a bio is provided, Dosu automatically incorporates this information into the response generation process to adjust response style, terminology, and technical depth to match the user's background.

The system fetches the user's bio from their profile during the orchestration workflow and integrates it into the AI system prompts. This enables the AI to customize responses based on the user's self-described expertise level and preferences. For example, a user who identifies as a senior backend engineer will receive responses with more technical detail and implementation specifics, while a product manager might receive explanations that focus on functionality and outcomes with less implementation detail.

This personalization is optional—if no bio is provided, Dosu generates responses based on the content and context of the question alone.

Triggering Responses#

Responses are triggered by mentioning @dosu (or @dosubot or @dosu-bot) in GitHub issues, discussions, pull request review comments, or Slack channels. The system detects these mentions with case-insensitive regex matching. Users can also edit an existing message to add an @dosu mention, and Dosu will respond to the edited message.
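
An illustrative equivalent of that detection (the actual pattern is defined in backend/core/db/types.py, so treat this regex as a sketch, not the exact implementation):

```python
import re

# Covers the three documented handles: @dosu, @dosubot, and @dosu-bot,
# matched case-insensitively anywhere in the message body.
MENTION_RE = re.compile(r"@dosu(-?bot)?\b", re.IGNORECASE)

def has_dosu_mention(text: str) -> bool:
    return MENTION_RE.search(text) is not None
```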

In Auto Reply and Auto Draft modes, Dosu can respond without explicit mentions, though it evaluates each message to determine if a response is appropriate.

Supported Channels#

Dosu supports several channel types for Response Automation:

  • GitHub Issues: Threaded conversations on repository issues
  • GitHub Discussions: Forum-style Q&A threads
  • GitHub Pull Request Comments: Two distinct channel types:
    • GITHUB_PR: General comments on the overall pull request (conversation-level comments)
    • GITHUB_PR_REVIEW: Code-level review comments on specific lines in pull request diffs (inline comments during code review). Only review threads where Dosu is mentioned or has participated are synced, ensuring Dosu focuses on conversations where its input is relevant.
  • GitLab Merge Requests: Code review discussions
  • Slack Channels: Team workspace conversations
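
The channel types above might be modeled as an enum. Only GITHUB_PR and GITHUB_PR_REVIEW are names confirmed by this document; the remaining member names and all values are hypothetical placeholders.

```python
from enum import Enum

class ChannelType(Enum):
    GITHUB_ISSUE = "github_issue"            # hypothetical name
    GITHUB_DISCUSSION = "github_discussion"  # hypothetical name
    GITHUB_PR = "github_pr"                  # documented: conversation-level PR comments
    GITHUB_PR_REVIEW = "github_pr_review"    # documented: inline review comments
    GITLAB_MR = "gitlab_mr"                  # hypothetical name
    SLACK = "slack"                          # hypothetical name
```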

Data Source Integration#

Connected data sources determine what content Dosu indexes and searches. These include:

  • Code repositories
  • Documentation sites and wikis
  • Past issues and discussions
  • Pull requests and code reviews
  • Internal knowledge bases

Dosu can also answer questions about Dosu itself by searching Dosu's official product documentation, enabling self-service support for platform questions.

Technical Architecture#

The response generation pipeline follows an event-driven architecture:

  1. Webhook Ingestion: GitHub webhooks are received and validated using HMAC SHA-256 signature verification, then published to Google Cloud Pub/Sub topics for durable event processing

  2. Review Thread Syncing: For pull request review comments, the system uses a GraphQL-based fallback pattern to navigate from the comment node ID to the parent review thread. When a pull_request_review_comment webhook event is received (created/edited/deleted actions), the ReviewThreadSyncer queries the GitHub GraphQL API to find the parent pull request, fetches all review threads for that PR, identifies the thread containing the comment, and syncs it to the database with channel type GITHUB_PR_REVIEW. Review thread data includes code position metadata such as path, line, start_line, diff_side, start_diff_side, is_outdated, subject_type, original_line, and original_start_line. The system filters at both the router and syncer levels to sync only threads where Dosu is mentioned or has participated, ensuring Dosu focuses on conversations where its input is relevant.

  3. Trigger Detection: AgentRunContextGenerator determines the trigger reason (MENTION, AUTO, DM, etc.) by analyzing message content

  4. Workflow Orchestration: The root workflow orchestrates the entire pipeline including validation, context generation, data source filtering, and response execution

  5. AI Generation: The orchestrator agent transforms thread state into agent input, fetches the user's bio from their profile, runs AI search, and returns structured output with topics and citations. The user's bio is integrated into the AI system prompts to customize response generation based on the user's technical background and preferences

  6. Search Capabilities: The search client provides semantic and full-text search across indexed code, documentation, threads, pull requests, and knowledge store

  7. Citation Tracking: Citations track source URLs, titles, and line numbers; they are converted and saved to the database linked to messages or pages
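
A citation record as described in step 7 might look like the sketch below. Field names and the rendering helper are assumptions for illustration, not the actual Citation model from backend/agent/experimental/tools/search/types.py.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Citation:
    # Mirrors what the docs say citations track: source URL, title,
    # and line numbers. Names here are illustrative.
    url: str
    title: str
    start_line: Optional[int] = None
    end_line: Optional[int] = None

    def to_markdown(self) -> str:
        # Render a link with an optional line anchor, e.g. "[f.py](url#L10-L20)".
        anchor = ""
        if self.start_line is not None:
            anchor = f"#L{self.start_line}"
            if self.end_line is not None:
                anchor += f"-L{self.end_line}"
        return f"[{self.title}]({self.url}{anchor})"
```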

Evaluation Mechanisms#

Two-Gate Architecture#

Response Automation uses a two-gate evaluation system to ensure quality. The first gate determines whether to respond at all, while the second gate evaluates whether a generated response should be auto-published or held for human review.

Should-Reply Gate#

Dosu evaluates whether to respond based on the nature of the request and the available data sources. The should-reply gate checks four conditions: (1) the user is actively seeking help, (2) the request is within scope, (3) the available tools can help, and (4) the user hasn't already solved the problem.

The should-reply gate receives deployment guidelines (from the deployment's description field) to account for deployment-specific policies when deciding whether to respond. These guidelines allow the gate to make more informed decisions based on the deployment owner's guidance and preferences. This aligns the should-reply gate with the orchestrator agent, which also receives deployment guidelines.

Dosu will skip responding if the message doesn't appear to be a help request, if it lacks relevant tools to answer, or if the user has already solved their problem.
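
The four criteria above reduce to a simple decision rule. The real gate is an AI evaluation over message content, so the boolean sketch below only encodes the documented conditions; all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ReplySignals:
    seeking_help: bool    # (1) user actively seeking help
    in_scope: bool        # (2) request within scope
    tools_can_help: bool  # (3) available tools can help
    already_solved: bool  # (4) user has already solved the problem

def should_reply(s: ReplySignals) -> str:
    # All of (1)-(3) must hold and (4) must not; otherwise skip.
    if s.seeking_help and s.in_scope and s.tools_can_help and not s.already_solved:
        return "REPLY"
    return "SKIP"
```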

Bug Report Recognition#

Dosu recognizes that reporting a bug and asking for help are the same thing. When a user files a bug report with specific errors or unexpected behavior, they are seeking help, whether that's investigation, root cause analysis, workarounds, or pointers to related issues.

Users who provide specific technical details—error messages, stack traces, version numbers, reproduction steps, or logs—are recognized as seeking help. Even when a bug report may need developer attention for a fix, Dosu can provide value by investigating the codebase, identifying the relevant code or commit, finding related issues, or suggesting workarounds.

Should-Publish Gate#

In Auto Reply mode, the should-publish gate evaluates response quality before auto-publishing; it returns a PUBLISH or HOLD decision with a quality score (0.0-1.0), a confidence score (0.0-1.0), and reasoning.

The should-publish gate receives deployment guidelines (from the deployment's description field) to account for deployment-specific policies when deciding whether to auto-publish responses. These guidelines help the gate make more informed decisions based on the deployment owner's quality standards and publication preferences. This aligns the should-publish gate with the orchestrator agent, which also receives deployment guidelines.

PUBLISH criteria include: user needs help, response directly addresses the problem, response is concrete (not vague), and the conversational context is straightforward.

HOLD criteria include: the message is not a help request, the response is generic or speculative, the thread is contentious, or the message is directed at a specific person or team. The system follows a "when in doubt, HOLD" philosophy, prioritizing precision over recall.

Confidence Threshold#

The response confidence threshold is set at 0.75 (75%); responses scoring below it are held for review. The orchestrator workflow integrates the quality gates and forces preview generation if quality checks fail.
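
Combining the should-publish decision with the confidence threshold might look like the following sketch. The threshold value and the PUBLISH/HOLD fields come from this document; the structure and function names are assumptions.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.75  # documented: responses below this are held for review

@dataclass
class PublishDecision:
    decision: str      # "PUBLISH" or "HOLD"
    quality: float     # 0.0-1.0
    confidence: float  # 0.0-1.0
    reasoning: str

def outcome(d: PublishDecision) -> str:
    # "When in doubt, HOLD": an explicit HOLD or sub-threshold confidence
    # routes the response into a reviewable draft instead of posting it.
    if d.decision == "HOLD" or d.confidence < CONFIDENCE_THRESHOLD:
        return "PREVIEW_ONLY"
    return "PUBLISH"
```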

Publishing Modes#

Mode Overview#

Dosu provides three publishing modes that offer different levels of automation and human oversight:

Mention Mode (default): Dosu responds only when explicitly mentioned. This provides maximum control and is ideal for teams starting with Dosu or operating in sensitive contexts where every response should be intentional.

Auto Draft Mode: Dosu generates a draft response for a maintainer's review before posting. This mode is useful when you want oversight before responses go live, balancing automation with quality assurance.

Auto Reply Mode: Dosu responds automatically to new issues or discussions without waiting for a mention or review. This mode incorporates the evaluation gates described above to maintain quality while maximizing responsiveness.

Mode Selection Strategy#

Most teams start with Mention mode until they're confident in response quality. Teams typically progress from Mention to Auto Draft to Auto Reply as they become comfortable with Dosu's performance in their specific context.

Technical Implementation#

Reply modes are defined by the AgentReplyMode enum: MENTION, AUTO_DRAFT, and AUTO_POST. The AutoReplySettings class controls whether auto-reply is enabled and whether review is required, and ChannelConfig translates auto_reply settings into user-facing reply modes through its reply_mode property.
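
A sketch of how that translation might work. The enum member names are documented; the mapping itself is inferred from the mode descriptions, not taken from the actual reply_mode property.

```python
from dataclasses import dataclass
from enum import Enum

class AgentReplyMode(Enum):
    MENTION = "mention"
    AUTO_DRAFT = "auto_draft"
    AUTO_POST = "auto_post"

@dataclass
class AutoReplySettings:
    enabled: bool
    review_required: bool

def reply_mode(settings: AutoReplySettings) -> AgentReplyMode:
    # Inferred mapping: auto-reply off -> MENTION; on with review -> AUTO_DRAFT;
    # on without review -> AUTO_POST.
    if not settings.enabled:
        return AgentReplyMode.MENTION
    if settings.review_required:
        return AgentReplyMode.AUTO_DRAFT
    return AgentReplyMode.AUTO_POST
```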

The publishing logic's _should_post() method determines whether to post or save as draft based on: preview mode, is_response_required flag, and the channel's auto_reply.enabled/review_required settings. The _publish_message() method creates messages and either posts them or saves as "shadow" messages (drafts) based on _should_post() result; it returns PREVIEW_ONLY status for drafts.
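
The _should_post() inputs listed above can be sketched as follows. Treating is_response_required as "an explicit mention always posts" is our assumption; the actual precedence of these checks lives in backend/api/publishing/abstract.py.

```python
from dataclasses import dataclass

@dataclass
class AutoReply:
    enabled: bool
    review_required: bool

def should_post(preview_mode: bool, is_response_required: bool, auto_reply: AutoReply) -> bool:
    # Previews are never posted publicly.
    if preview_mode:
        return False
    # Assumption: an explicitly required response (e.g. a direct mention) posts.
    if is_response_required:
        return True
    # Unsolicited replies post only when auto-reply is on and review is off;
    # otherwise the message is saved as a "shadow" draft (PREVIEW_ONLY).
    return auto_reply.enabled and not auto_reply.review_required
```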

Platform-specific publishers (GitHub, Slack, Dosu App) inherit the abstract publisher pattern and implement platform-specific posting logic. For GitHub pull request review comments, the system uses _update_review_comment() to edit existing review comments and _add_review_thread_reply() to add new replies to review threads, both utilizing GitHub's GraphQL API.
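
The review-thread operations above go through GitHub's GraphQL API. The query below is an illustrative equivalent: the operation name and field selection are ours, not the getReviewThreads query from pull_requests.graphql, though the fields shown (path, line, startLine, diffSide, isOutdated) match the metadata this document says is synced.

```python
# Illustrative GraphQL for fetching a PR's review threads; the real queries
# live in backend/core/github/gql/queries/pull_requests.graphql.
REVIEW_THREADS_QUERY = """
query ReviewThreads($owner: String!, $name: String!, $number: Int!) {
  repository(owner: $owner, name: $name) {
    pullRequest(number: $number) {
      reviewThreads(first: 50) {
        nodes {
          path
          line
          startLine
          diffSide
          isOutdated
          comments(first: 50) {
            nodes { id body }
          }
        }
      }
    }
  }
}
"""
```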

Escalation Strategies#

Graduated Automation Approach#

Response Automation provides three graduated levels of human oversight that correspond to the publishing modes:

  • Mention Mode: Maximum oversight—humans initiate every response by explicitly mentioning Dosu
  • Auto Draft Mode: Moderate oversight—Dosu generates responses automatically, but humans review before publication
  • Auto Reply Mode: Minimum oversight—Dosu publishes responses automatically with quality evaluation gates

This graduated approach allows teams to increase automation incrementally as they gain confidence in system performance.

Preview Mode#

Preview Mode creates a deployment with all public-facing features disabled, generating response drafts that appear only in the dashboard for review. This mode is ideal for testing Dosu without affecting repository users.

Technically, AgentTriggerReason.PREVIEW_GENERATION prevents posting regardless of deployment configuration. The system forces preview generation when confidence falls below the 0.75 threshold, and the should-publish evaluation gate can override the trigger reason to PREVIEW_GENERATION if a response should be held for review.

Review Dashboard#

The dashboard Review page allows users to generate and preview responses before they're posted publicly, providing oversight and quality control. This feature is particularly useful for Preview Mode deployments or when using Auto Draft reply mode.

The Review dashboard filters threads by PREVIEW_ONLY status to show only drafts awaiting review. When viewing a document page with a pending review, users see a banner notification with a "View Review" link that navigates to the review page. The review page displays the proposed changes directly without tabs or wrapper content, allowing users to accept, decline, or edit the changes through a compact toolbar. The toolbar groups action buttons (Accept, Decline, Edit) together and includes a book icon to navigate back to the document page if needed. Users can trigger manual preview generation via the API endpoint /threads/generate-preview/{thread_id}, and publish draft messages with approval or edits through the publish_message_internal() endpoint.

Configuration and Settings#

Enabling Response Automation#

To enable Response Automation: Connect your repository or Slack workspace to Dosu, navigate to Settings > Deployments in the Dosu dashboard, select or create a deployment, and configure the reply mode under Issues or Discussions. For Slack, configure reply settings in the Slack section of your deployment.

Response Guidelines#

Each deployment can have custom response guidelines that shape how Dosu writes responses. Use this feature to enforce tone, formatting, or content requirements specific to your project. Response guidelines are found under Settings > Deployments > Deployment > Response Guidelines.

Discussion Category Filtering#

In GitHub Discussions, Dosu only responds to the categories you specify. By default, this includes the Q&A and Questions categories. You can adjust this in your deployment settings under Discussions > Included Categories. This filtering prevents Dosu from responding in announcement or showcase categories where Q&A isn't expected.
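
A minimal sketch of that filter, using the documented default categories; the function name and set-based representation are assumptions.

```python
# Documented defaults; adjustable under Discussions > Included Categories.
DEFAULT_INCLUDED_CATEGORIES = {"Q&A", "Questions"}

def category_included(category: str, included: set[str] = DEFAULT_INCLUDED_CATEGORIES) -> bool:
    # Dosu only considers discussions whose category is explicitly included,
    # skipping e.g. announcement or showcase categories.
    return category in included
```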

System Architecture#

Deployment Architecture#

A Deployment connects Dosu to a specific GitHub repository, GitLab project, or Slack workspace. Each deployment has its own configuration that determines how Dosu behaves in that location.

Deployments can connect to:

  • GitHub repositories: Dosu can comment on issues, discussions, general pull request comments (channel type GITHUB_PR), and code-level pull request review comments (channel type GITHUB_PR_REVIEW); publish documentation updates via pull requests
  • GitLab projects: Dosu can comment on merge requests; publish documentation updates via merge requests
  • Slack workspaces: Team members can ask questions in connected channels

Webhook and Event Processing#

The architecture receives GitHub webhooks and validates them using HMAC SHA-256 signature verification with a configured webhook secret to ensure only legitimate events are processed.
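
Webhook validation of this kind can be sketched with Python's standard library. The "sha256=" prefix and the X-Hub-Signature-256 header follow GitHub's documented webhook format; the function name is ours.

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, payload: bytes, signature_header: str) -> bool:
    # GitHub sends "sha256=<hexdigest>" in the X-Hub-Signature-256 header,
    # computed as HMAC-SHA256 over the raw request body.
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature_header)
```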

Events are published to Google Cloud Pub/Sub topics, decoupling webhook receipt from event processing. This architecture provides durability, allows for parallel processing, and enables retry logic for failed operations.

Usage and Best Practices#

Best Practices for Question Askers#

Include relevant context like error messages, code snippets, and what you've already tried. Be specific about what you're trying to accomplish, and reference specific files or features when applicable.

Providing detailed technical information—such as version numbers, reproduction steps, and error logs—helps Dosu generate more accurate and actionable responses.

Best Practices for Maintainers#

Connect comprehensive data sources (e.g., documentation, code, and past issues) so Dosu can find relevant context. Keep documentation current, as Dosu searches what's indexed.

Start with Mention mode to build confidence, then progressively enable Auto Draft or Auto Reply mode as you observe consistent quality. Use custom response guidelines to align Dosu's tone and style with your project's communication standards.

Live Examples#

To see Response Automation in action, view these open-source repositories with Dosu enabled:

Relevant Code Files#

| Functional Area | File Path | Description |
| --- | --- | --- |
| Webhook Ingestion | backend/cloudfunctions/github_webhook/main.py | Receives GitHub webhooks, validates HMAC signatures, publishes to Pub/Sub |
| Event Routing | backend/cloudfunctions/github_event_handler/handler.py | Routes events with priority-based queue management and deduplication |
| Review Thread Syncer | backend/api/github/events/review_thread_sync.py | Syncs GitHub PR review threads; queries GraphQL to navigate from comment node to parent thread, fetches review data with code position metadata, filters to threads where Dosu is mentioned or participated |
| Review Comment Converter | backend/api/github/events/converters/review_comment.py | Converts review comment webhook events (create/edit/delete) to message params, triggers GraphQL fallback for thread sync |
| Event Router | backend/api/github/events/router.py | Routes webhook events including pull_request_review_comment, filters by Dosu mentions at router level |
| GraphQL Queries | backend/core/github/gql/queries/pull_requests.graphql | GraphQL queries for review threads (getReviewThreads), mutations for adding replies (addPullRequestReviewThreadReply) and updating comments (updatePullRequestReviewComment) |
| Mention Detection | backend/core/db/types.py | Defines @dosu/@dosubot/@dosu-bot regex pattern matching |
| Trigger Context | backend/agent/run_context.py | Determines trigger reason (MENTION, AUTO, PREVIEW_GENERATION, etc.) |
| Root Workflow | backend/agent/workflows/root.py | Orchestrates entire response generation pipeline with DBOS workflows |
| Orchestrator Agent | backend/agent/experimental/orchestrator/run.py | Main AI agent that searches data sources and generates responses |
| Search Client | backend/agent/retrieval/search_client.py | Semantic and full-text search across indexed content |
| Citation Types | backend/agent/experimental/tools/search/types.py | Citation and SourceMetadata models for attribution |
| Should Reply Gate | backend/agent/experimental/auto_reply/agents/should_reply.py | Evaluates whether to respond to messages (REPLY/SKIP decision) |
| Should Publish Gate | backend/agent/experimental/auto_reply/agents/should_publish.py | Evaluates response quality for auto-publishing (PUBLISH/HOLD decision) |
| Quality Criteria | backend/agent/experimental/auto_reply/agents/prompts/should_publish/system.j2 | Prompt defining PUBLISH/HOLD criteria and "when in doubt, HOLD" philosophy |
| Orchestrator Workflow | backend/agent/workflows/steps/run_orchestrator_agent.py | Integrates quality gates, applies confidence thresholds, forces preview generation |
| Reply Mode Config | backend/core/db/types.py | AgentReplyMode enum (MENTION, AUTO_DRAFT, AUTO_POST) and channel configs |
| Abstract Publisher | backend/api/publishing/abstract.py | Core publishing logic with _should_post() determining draft vs. publish |
| GitHub Publisher | backend/api/publishing/github.py | Posts/edits GitHub comments or saves as "shadow" drafts |
| Review Dashboard UI | frontend/app/app/[locale]/(auth)/(main)/review/[[...id]]/page.tsx | Review page with thread list filtered to PREVIEW_ONLY drafts |
| Publishing API | backend/public_api/messages/router.py | Preview generation and draft publishing endpoints |
Related Topics#

  • Deployments: Configuration of Dosu connections to repositories and workspaces
  • Data Sources: Systems that provide indexed content for response generation
  • GitHub Integration: Platform-specific features for GitHub repositories
  • Slack Integration: Platform-specific features for Slack workspaces
  • Response Guidelines: Customization of response tone and style