Claude Code · Advanced · 11 min read

GitHub Actions Workflows — Put Claude Code on Repeated Team Jobs

Where Claude Code fits in GitHub Actions, review automation, and repo maintenance workflows without over-automating the wrong things

github-actions · automation · review · team-workflow

Official References: GitHub Actions · Claude Code Overview · Claude Code SDK

Curriculum path

  1. CLAUDE.md Mastery — repo memory and rules
  2. Effective Prompting — task framing and constraints
  3. MCP Power Tools — connect tools and live context
  4. Multi-Agent Workflows — delegation and parallel execution
  5. Hooks Automation — local workflow enforcement
  6. GitHub Actions Workflows — repeated team automation ← You are here

Official docs used in this guide

  • Claude Code in GitHub Actions (GitHub Actions)
  • Automation and orchestration surface (SDK)
  • How Claude Code fits into broader workflows (Overview)

Where Claude Code Fits in Team Automation

Claude Code is strongest in day-to-day interactive development, but Anthropic also supports using it in GitHub Actions and automation flows.

That opens up good team use cases such as:

  • PR review against a checklist
  • issue triage and summaries
  • maintenance documentation refresh
  • structured fix suggestions on failed workflows
  • repeatable repo tasks triggered by labels or comments

Good Automation Jobs

Claude Code works well in automation when the work is:

  • bounded
  • reviewable
  • attached to a real artifact like a PR or issue
  • still valuable even when a human approves the final outcome

Examples:

  • "Review this PR for security and migration risks"
  • "Summarize failing checks and propose likely fixes"
  • "Draft release notes from merged PRs"

Bad Automation Jobs

Avoid pushing Claude Code automation into workflows that need:

  • broad product invention
  • unrestricted secret access
  • high-risk production changes without review
  • vague goals with no validation path

If a job would scare you when run by a shell script, it should also scare you when run by an AI workflow.

The Practical Pattern

A healthy GitHub Actions workflow usually looks like this:

  1. trigger from PR, issue, or label
  2. pass clear repo context
  3. run Claude on a narrow job
  4. capture output as a comment, summary, or artifact
  5. keep a human reviewer in the merge path

That balance matters. The goal is not "remove humans." It is "reduce repetitive team work."
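As a sketch, the five steps map onto a workflow shaped like this. The workflow name, label, and prompt are illustrative; only the action reference and API key input come from the official setup:

```yaml
# Minimal shape of the pattern above (names and prompt are illustrative).
name: Narrow Claude Job
on:
  pull_request:
    types: [labeled]              # 1. trigger from a label
jobs:
  claude-task:
    if: github.event.label.name == 'claude-review'
    runs-on: ubuntu-latest
    permissions:
      contents: read              # 2. repo context stays read-only
      pull-requests: write       # 4. output is captured as a PR comment
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |               # 3. one narrow job, clear deliverable
            Summarize the risk areas of this PR in a single comment.
            Do not approve or merge; a human reviewer makes the final call.
```

The prompt itself restates step 5: the workflow comments, and a person stays in the merge path.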


Basic Setup

Add the API Key

Go to your repository's Settings → Secrets and variables → Actions and add ANTHROPIC_API_KEY. Without this, no workflow will run.

Use the Official Action

Anthropic provides anthropics/claude-code-action@v1 as the official way to run Claude Code inside GitHub Actions. No manual installation or environment setup required.

- uses: anthropics/claude-code-action@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    prompt: "Describe the task you want Claude to perform here."

If your repository has a CLAUDE.md file at the root, the action reads it automatically to understand your coding conventions and architecture. This is one of the simplest ways to improve automation quality.


PR Review Automation Workflow

Claude automatically posts review comments whenever a PR is opened or updated with new commits.

name: Claude Code Review
on:
  pull_request:
    types: [opened, synchronize]
 
jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Review this PR for:
            - Security vulnerabilities
            - Performance issues
            - Code style consistency
            Provide actionable feedback as PR comments.

pull-requests: write is required in permissions. Without it, the action cannot post comments.

Specific prompts produce better results. "Review this code" is weak. "Focus on migration risks and missing error handling" gives Claude a clear job with measurable output.


Issue Triage Workflow

When a new issue is filed, Claude reads it, suggests labels, and posts an initial triage comment.

name: Issue Triage
on:
  issues:
    types: [opened]
 
jobs:
  triage:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      issues: write
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Analyze this newly filed issue:
 
            Title: ${{ github.event.issue.title }}
            Body: ${{ github.event.issue.body }}
 
            Please do the following:
            1. Classify as bug report, feature request, or question
            2. If reproduction steps or context are missing, post a comment requesting them
            3. Suggest appropriate labels (bug, enhancement, question, needs-info)
            4. Assess urgency (critical, high, medium, low)
 
            Post a triage summary as an issue comment.

Issue triage is especially effective for open-source projects where the volume of incoming issues makes manual first-response slow and inconsistent.


Failure Log Analysis Workflow

When a CI workflow fails, Claude reads the logs, analyzes the root cause, and posts a summary comment on the related PR.

name: Analyze Failure
on:
  workflow_run:
    workflows: ["CI"]
    types: [completed]
 
jobs:
  analyze:
    if: ${{ github.event.workflow_run.conclusion == 'failure' }}
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      actions: read
    steps:
      - name: Get workflow logs
        id: logs
        uses: actions/github-script@v7
        with:
          result-encoding: string
          script: |
            // downloadWorkflowRunLogs returns a ZIP archive, so pull the
            // plain-text logs of the first failed job instead
            const { data } = await github.rest.actions.listJobsForWorkflowRun({
              owner: context.repo.owner,
              repo: context.repo.repo,
              run_id: context.payload.workflow_run.id,
            });
            const failed = data.jobs.find((j) => j.conclusion === 'failure');
            if (!failed) return 'No failed job found.';
            const logs = await github.rest.actions.downloadJobLogsForWorkflowRun({
              owner: context.repo.owner,
              repo: context.repo.repo,
              job_id: failed.id,
            });
            return String(logs.data).slice(0, 8000);
 
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Here are the logs from a failed CI workflow:
 
            ${{ steps.logs.outputs.result }}
 
            Please analyze:
            1. What failed first
            2. The most likely root cause
            3. Concrete next steps to fix it
 
            Post the analysis as a comment on the related PR.

Passing full logs burns tokens quickly. Trimming to the relevant portion with slice(0, 8000) or similar keeps costs reasonable without losing the signal.
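An alternative to slicing from the front is filtering for error-adjacent lines before the Claude step. A minimal sketch, using a simulated log file so it runs standalone (file names are illustrative):

```shell
# Simulate a CI log, then keep only error-adjacent lines so the
# prompt stays small.
printf 'step 1 ok\nstep 2 ok\nError: build failed at src/app.ts\n  at compile (build.js:10)\nstep 4 skipped\n' > ci.log
# -i: case-insensitive match, -A 2: keep two lines of context after each hit
grep -iE -A 2 'error|fail' ci.log | tail -c 8000 > trimmed.log
cat trimmed.log
```

This keeps the failure and its stack context while dropping the noise that came before it.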


Security and Permissions

Minimal Permissions Principle

Only grant the permissions a workflow actually needs. If it only posts comments, pull-requests: write is enough. contents: write is only needed when the action modifies files directly.

permissions:
  contents: read        # read source code
  pull-requests: write  # post PR comments
  issues: write         # post issue comments and apply labels
  # actions: read       # read workflow logs (only when needed)

Secret Management

  • Always store ANTHROPIC_API_KEY in GitHub Secrets. Never put it directly in workflow files.
  • Use GitHub Environments if you need per-environment key separation.
  • Rotate keys on a regular schedule.

Rate Limiting and Cost Awareness

Every GitHub Actions run that calls the Claude API consumes Anthropic API credits. A few things to watch:

  • In large repositories, PRs can open very frequently. Attaching a review workflow to every PR will accumulate costs quickly.
  • Use paths filters to run the workflow only when relevant files change.
  • Use concurrency to prevent duplicate runs on the same PR.
on:
  pull_request:
    paths:
      - "src/**"
      - "tests/**"
 
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

Best Practices

1. Start with read-only actions

Begin with comments, summaries, and drafts before attempting write actions or auto-merge. Build confidence in the output quality before expanding scope.

2. Keep humans in the merge path

Even with high-quality Claude comments, the final merge decision should stay with a human reviewer. Autonomous merge behavior is tempting but makes root cause analysis much harder when something goes wrong.

3. Set timeout limits

Prevent runaway workflows with timeout-minutes.

jobs:
  review:
    runs-on: ubuntu-latest
    timeout-minutes: 10

4. Use CLAUDE.md to give repo context to the action

Put your coding conventions, architecture decisions, and review checklists in CLAUDE.md at the repository root. The action picks it up automatically and produces significantly better output when it understands the repo.

Example CLAUDE.md content for automation:

## Coding Rules
- TypeScript strict mode required
- All API endpoints must have error handling
- New features require tests
 
## Review Checklist
- Check for SQL injection vectors
- Verify authentication and authorization
- Validate all external input

5. Write specific prompts

Vague prompts produce vague results. Be explicit about what Claude should check and what format the output should take. Treat the prompt like a job description for a careful reviewer.
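For contrast, here is a weak prompt next to a sharpened one for the same review step. The checklist items are examples, not required fields of the action:

```yaml
# Weak: Claude has to guess what "review" means here.
prompt: "Review this code"

# Stronger: scope, priorities, and output format are all explicit.
prompt: |
  Review only the files changed in this PR.
  Focus on: migration risks, missing error handling, and unvalidated input.
  For each finding, give the file, the line, the risk, and a suggested fix.
  If nothing is wrong, reply with a single sentence saying so.
```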


Pair It with Repository Guidance

Automation gets better when the repo already has strong instructions.

Useful guidance sources:

  • CLAUDE.md for coding rules and architecture
  • issue templates for task shape
  • review checklists for stable policy
  • hooks for local enforcement during interactive work

If your repo guidance is weak, GitHub automation will just scale ambiguity faster.

SDK-Based CI Automation

Beyond the official action, you can use the Claude Code CLI directly in CI for more fine-grained control.

name: Claude Code CI Tasks
on:
  push:
    branches: [main]
 
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
 
      - name: Install Claude Code
        run: npm install -g @anthropic-ai/claude-code
 
      - name: Security Audit
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          claude -p "Audit the files changed in this push for security vulnerabilities. \
            Focus on: SQL injection, XSS, auth bypasses. \
            Output a markdown summary." \
            --output-format text > security-report.md
 
      - name: Upload Report
        uses: actions/upload-artifact@v4
        with:
          name: security-report
          path: security-report.md
  • The CLI gives you more control than the action: custom output formats, piping, and chaining.
  • Use --model haiku for cost-efficient bulk analysis.
  • Combine with --allowedTools "Read,Glob,Grep" to restrict to read-only operations.
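Combined, a cost-constrained read-only step might look like this. The prompt and report file name are illustrative; the flags are the ones mentioned above:

```yaml
# Sketch: cheap, read-only CLI invocation inside a job's steps.
- name: Cheap read-only audit
  env:
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
  run: |
    claude -p "List the three riskiest files in src/ and explain why." \
      --model haiku \
      --allowedTools "Read,Glob,Grep" \
      --output-format text > risk-report.md
```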

Automated Release Notes

When a release is created, Claude analyzes the commit history and generates release notes automatically.

name: Release Notes
on:
  release:
    types: [created]
 
jobs:
  generate-notes:
    runs-on: ubuntu-latest
    permissions:
      contents: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Full history for commit analysis
 
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Generate release notes for tag ${{ github.event.release.tag_name }}.
 
            Analyze all commits since the previous tag:
            1. Group changes by category (Features, Bug Fixes, Performance, Documentation)
            2. Write user-facing descriptions (not commit hashes)
            3. Highlight breaking changes with ⚠️ prefix
            4. Include migration steps if needed
 
            Format as GitHub-flavored markdown.
            Update the release body with the generated notes.
  • fetch-depth: 0 is critical — Claude needs full git history to compare tags.
  • Works best with Conventional Commits (feat:, fix:, breaking:).
  • Pair with CLAUDE.md rules about your changelog format for more consistent output.
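The raw material Claude summarizes is just the commit range between tags. A sketch with a throwaway repo standing in for your project, so it runs anywhere (commit messages and tag name are made up):

```shell
# Build a tiny throwaway repo with one tagged release and one commit after it
git init -q demo
git -C demo -c user.email=ci@example.com -c user.name=ci \
  commit -q --allow-empty -m "feat: add login flow"
git -C demo tag v1.0.0
git -C demo -c user.email=ci@example.com -c user.name=ci \
  commit -q --allow-empty -m "fix: handle empty auth token"
# The commit range Claude would group into Features / Bug Fixes / etc.
git -C demo log --no-merges --pretty='- %s' v1.0.0..HEAD
```

With Conventional Commit prefixes in place, the categorization step is close to mechanical.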

Matrix Strategy for Parallel Reviews

Run multiple specialized reviews in parallel using GitHub Actions matrix strategy.

name: Multi-Aspect Review
on:
  pull_request:
    types: [opened, synchronize]
 
jobs:
  review:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        aspect:
          - name: security
            prompt: "Review for OWASP Top 10 vulnerabilities, injection risks, and auth issues"
          - name: performance
            prompt: "Review for N+1 queries, memory leaks, unnecessary re-renders, and heavy computations"
          - name: maintainability
            prompt: "Review for code duplication, complex functions (>20 lines), missing error handling, and unclear naming"
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            You are a ${{ matrix.aspect.name }} specialist.
            ${{ matrix.aspect.prompt }}
 
            Be specific: reference file names and line numbers.
            If no issues found, say "✅ No ${{ matrix.aspect.name }} issues found."
  • Each aspect runs in a separate job — truly parallel, not sequential.
  • Each review focuses on one dimension, preventing overly broad superficial reviews.
  • Cost: roughly 3x a single review, but much more thorough.
  • Add if: contains(github.event.pull_request.labels.*.name, 'deep-review') to trigger only when needed.
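That label gate attaches at the job level, above the strategy block. A sketch of where it sits (the label name is illustrative):

```yaml
jobs:
  review:
    # Only run the full matrix when a maintainer opts in with a label
    if: contains(github.event.pull_request.labels.*.name, 'deep-review')
    runs-on: ubuntu-latest
    strategy:
      matrix:
        aspect: [security, performance, maintainability]
```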

Dependency Update Automation

Run a weekly dependency check and automatically create PRs for security patches.

name: Weekly Dependency Check
on:
  schedule:
    - cron: "0 9 * * 1"  # Every Monday at 9 AM UTC
  workflow_dispatch:  # Manual trigger
 
jobs:
  check-deps:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
 
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Check for outdated dependencies in this project:
 
            1. Run `npm outdated` (or equivalent for the package manager used)
            2. For each outdated package, check:
               - Is it a major version bump? (potential breaking changes)
               - Does the changelog mention security fixes?
               - Are there known vulnerabilities? (check `npm audit`)
            3. Create a summary categorized by:
               - 🔴 Security patches (update immediately)
               - 🟡 Minor updates (safe to update)
               - 🟠 Major updates (needs review)
            4. If there are security patches, create a PR with those updates
 
            Be conservative — only auto-update for security patches.
            Major version bumps should only be recommended, not applied.
  • Schedule with cron for regular maintenance.
  • workflow_dispatch allows manual runs when needed.
  • Conservative approach: auto-PR for security only, recommend for everything else.
  • Pair with CLAUDE.md rules about dependency management policies.

Claude Code vs Codex in CI

Claude Code and Codex can both help in CI/CD, but they tend to shine in slightly different ways.

  • Claude Code: stronger when you want rich review, synthesis, and workflow commentary
  • Codex: stronger when you want bounded execution, code changes, and task isolation

A useful rule of thumb:

  • use Claude when the workflow is heavy on explanation and review
  • use Codex when the workflow is heavy on execution and validation

Start with Commenting, Not Merging

The safest first step is usually automation that comments, summarizes, or drafts.

Examples:

  • review comments on PRs
  • failure summaries on CI jobs
  • generated release notes

Jumping straight to autonomous merge behavior is usually the wrong starting point.

The Real Value

The real payoff is not flashy autonomy. It is making repeated team workflows faster, more consistent, and easier to review.

That is where Claude Code automation becomes genuinely useful.

Connected Guides