pr-review
Type: External
Status: Published
Created: Mar 25, 2026
Updated: Apr 4, 2026
Updated by: Dosu Bot

# AI PR Review

Manual-trigger AI security review for pull requests. Comment `/review` on any PR to get a focused security review.

## Trigger Commands

| Command | Model | Use When |
| --- | --- | --- |
| `/review` | Fast (default: `gpt-5.4-mini`) | Quick check, most PRs |
| `/review fast` | Fast (default: `gpt-5.4-mini`) | Same as `/review` |
| `/review deep` | Deep (default: `gpt-5.4`) | Complex changes, security-sensitive code |

## What It Reviews

The reviewer is tuned for Pipelock's security model. It flags:

- Weakened isolation or sandbox boundaries
- Implicit trust of model output
- Unsafe tool input/output handling
- Auth, policy, or permission bypass risk
- Race conditions in enforcement paths
- Missing validation where untrusted data crosses boundaries
- Logging or audit gaps
- Prompt injection escape vectors

It ignores style nits and generic suggestions. If nothing is wrong, it says so explicitly.

## Setup

### Required GitHub Secrets

Set these in **Settings > Secrets and variables > Actions**:

| Secret | Required | Description |
| --- | --- | --- |
| `LITELLM_BASE_URL` | If using LiteLLM | Your LiteLLM proxy URL (e.g., `https://litellm.example.com/v1`) |
| `LITELLM_API_KEY` | If using LiteLLM | API key for the LiteLLM proxy |
| `OPENAI_API_KEY` | If not using LiteLLM | Direct OpenAI API key (fallback) |
| `PR_REVIEW_MODEL_FAST` | No | Model for `/review`, `/review fast`, `/review tests`, `/review docs`, `/review stats` (default: `gpt-5.4-mini`) |
| `PR_REVIEW_MODEL_DEEP` | No | Model for `/review deep` (default: `gpt-5.4`) |

`GITHUB_TOKEN` is provided automatically by GitHub Actions.

### LiteLLM vs Direct OpenAI

**LiteLLM (preferred):** Set `LITELLM_BASE_URL` and `LITELLM_API_KEY`. Point the proxy at whatever upstream model you want (OpenAI, Anthropic, local). The script sends OpenAI-compatible requests to your LiteLLM proxy.

**Direct OpenAI (fallback):** Set only `OPENAI_API_KEY`. The script calls `api.openai.com` directly.

If both are set, LiteLLM takes priority.
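The documented fallback order could be sketched as follows. Function and variable names are illustrative assumptions; only the priority rule (LiteLLM first, then direct OpenAI) comes from this doc.

```python
import os

def resolve_endpoint() -> tuple[str, str]:
    """Return (base_url, api_key), preferring a LiteLLM proxy if configured."""
    litellm_url = os.environ.get("LITELLM_BASE_URL")
    litellm_key = os.environ.get("LITELLM_API_KEY")
    if litellm_url and litellm_key:
        # LiteLLM takes priority even when OPENAI_API_KEY is also set
        return litellm_url, litellm_key
    openai_key = os.environ.get("OPENAI_API_KEY")
    if openai_key:
        # Fallback: talk to the OpenAI API directly
        return "https://api.openai.com/v1", openai_key
    raise RuntimeError("No LLM credentials configured: set LiteLLM or OpenAI secrets")
```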

### Switching Models

Override the model via secrets:

    PR_REVIEW_MODEL_FAST=gpt-5.4-mini   # fast (~$0.02/review)
    PR_REVIEW_MODEL_DEEP=gpt-5.4        # thorough (~$0.07/review)

With LiteLLM, use any model your proxy supports:

    PR_REVIEW_MODEL_DEEP=anthropic/claude-sonnet-4-20250514
    PR_REVIEW_MODEL_FAST=groq/llama-3.3-70b-versatile

## Cost Control

- Only runs when manually triggered (no auto-review on push)
- Diff is truncated to ~100k chars (~25k tokens) to cap costs
- `/review fast` uses a cheaper model by default
- `/review deep` is opt-in for thorough analysis
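The diff cap could be implemented with a trim like the following. This is a minimal sketch assuming a simple character limit; the constant name and the truncation marker are illustrative, not taken from the actual script.

```python
# ~100k chars ≈ ~25k tokens, per the cost-control note above.
MAX_DIFF_CHARS = 100_000

def truncate_diff(diff: str, limit: int = MAX_DIFF_CHARS) -> str:
    """Trim an oversized diff so one huge PR can't blow up token costs."""
    if len(diff) <= limit:
        return diff
    # Keep the head of the diff and flag the cut so the model knows
    # the review input is incomplete.
    return diff[:limit] + f"\n... [diff truncated at {limit} chars] ..."
```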

## Files

| File | What |
| --- | --- |
| `.github/workflows/pr-review.yaml` | GitHub Actions workflow |
| `scripts/pr-review.py` | Review script (fetches diff, calls LLM, posts comment) |
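The "posts comment" step likely goes through the GitHub REST API, where PR comments are created via the issues endpoint. A hedged sketch of building that request (the helper name is hypothetical; the endpoint and headers follow the public GitHub API):

```python
import json
import urllib.request

def build_comment_request(repo: str, pr_number: int, token: str, body: str):
    """Build a POST request that publishes `body` as a comment on a PR.

    PR conversation comments use the issues endpoint:
    POST /repos/{owner}/{repo}/issues/{issue_number}/comments
    """
    url = f"https://api.github.com/repos/{repo}/issues/{pr_number}/comments"
    data = json.dumps({"body": body}).encode()
    return urllib.request.Request(
        url,
        data=data,
        headers={
            # GITHUB_TOKEN from the workflow run works here
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
```

In the workflow, the token would come from the automatically provided `GITHUB_TOKEN`, and the request would be sent with `urllib.request.urlopen(...)` or similar.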