Code reviews are essential, but let's be honest: they can be time-consuming. What if you could have an AI assistant that reviews every pull request automatically? That's exactly what we're going to set up today using OpenCode and Ollama Cloud.
OpenCode is an open-source CLI coding agent that uses LLMs to help with code review. Combined with Ollama Cloud's powerful models and GitHub Actions, it becomes an automated code reviewer that never sleeps.
What You'll Get
By the end of this guide, you'll have a GitHub Action that automatically:
- Reviews every pull request when it's opened or updated
- Identifies code quality issues and potential bugs
- Flags security concerns
- Suggests improvements and best practices
- Comments directly on specific lines of code
- Categorizes issues by type and severity
All of this happens automatically without any manual intervention.
Setting Up Your Ollama Cloud API Key
First, you need to get your Ollama Cloud API key and add it to your GitHub repository:
- Log in to your Ollama Cloud account and navigate to the API keys section
- Generate a new API key
- Go to your GitHub repository
- Navigate to Settings → Secrets and variables → Actions → New repository secret
- Name it `OLLAMA_API_KEY`, paste your API key, and click Add secret
Your API key is now securely stored and ready to use.
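If you prefer the command line, the same secret can be set with the GitHub CLI instead of the web UI (assuming `gh` is installed and authenticated against your repository):

```shell
# Store the Ollama Cloud key as a repository secret named OLLAMA_API_KEY.
# Replace the placeholder value with your actual key, or omit --body
# and gh will prompt for it interactively (keeping it out of shell history).
gh secret set OLLAMA_API_KEY --body "your-ollama-cloud-api-key"
```

Either route ends in the same place: the secret becomes available to workflows as `secrets.OLLAMA_API_KEY`.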
Creating the GitHub Action Workflow
Now let's create the workflow file that will trigger the automated review.
In your repository, create a new file at `.github/workflows/opencode-review.yml`:
```yaml
name: opencode-review

on:
  pull_request:
    types: [opened, synchronize, reopened, ready_for_review]

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
      pull-requests: write
      issues: write
    steps:
      - uses: actions/checkout@v6
      - uses: anomalyco/opencode/github@latest
        env:
          OLLAMA_API_KEY: ${{ secrets.OLLAMA_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          model: "ollama-cloud/gpt-oss:20b"
          use_github_token: true
          prompt: |
            Review this pull request:
            - Check for code quality issues
            - Look for potential bugs / security issues.
            - Suggest improvements
            - Look for best practices like proper variable naming convention.
            - Don't modify any files. Just provide review comments as detailed as possible.
            - The review comments should highlight specific lines of code in the pull request.
            - Add categories to each comment like [BUG], [IMPROVEMENT], [BEST PRACTICE], [SECURITY] and severity levels like [LOW], [MEDIUM], [HIGH]. Try to avoid unnecessary nitpicks unless they are important and hint towards a larger issue.
            - If there are no issues, say "No issues found. Good job!"
```
Let's break down what's happening here:
Understanding the Workflow
Trigger Events: The workflow runs when a PR is opened, updated (synchronize), reopened, or marked as ready for review. This ensures every significant change gets reviewed.
Permissions: We give the action the necessary permissions to read code, write comments on PRs, and manage issues. This is crucial for the bot to post its review comments.
The Model: We're using ollama-cloud/gpt-oss:20b, which is a 20-billion parameter model. It's powerful enough to catch complex issues but fast enough to review PRs quickly.
The Prompt: This is where the magic happens. The prompt tells the AI exactly what to look for and how to format its responses. You can customize this based on your team's needs.
Customizing the Review Prompt
The default prompt is comprehensive, but you might want to tailor it to your project. Here are some ideas:
For API Projects
```yaml
prompt: |
  Review this pull request with emphasis on:
  - API design and RESTful principles
  - Database query optimization
  - Error handling and edge cases
  - Security vulnerabilities (SQL injection, XSS, etc.)
  - Authentication and authorization
  - Use categories [BUG], [SECURITY], [PERFORMANCE], [API_DESIGN] with severity levels
```
For Stricter Reviews
```yaml
prompt: |
  Perform a thorough code review:
  - Every deviation from best practices
  - All potential bugs, even minor ones
  - Security issues at all levels
  - Performance implications
  - Code maintainability and readability
  - Test coverage gaps
  - Documentation quality
  - Use categories [BUG], [SECURITY], [PERFORMANCE], [DOCS], [TESTS] with severity levels
```
Testing Your Setup
Once you've committed the workflow file, it's time to test it:
- Create a new branch in your repository:

```shell
git checkout -b test-opencode-review
```

- Make some changes to a file (intentionally add something that could be improved):

```shell
echo "console.log('test')" >> test.js
git add test.js
git commit -m "Add test file"
git push origin test-opencode-review
```

- Open a pull request from your branch to main
- Watch the magic happen! Within a minute or two, you should see the OpenCode bot reviewing your PR
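If you live in the terminal, the PR can be opened from there as well with the GitHub CLI (assuming `gh` is authenticated and the branch has been pushed):

```shell
# Open a pull request from the current branch into main
gh pr create --base main --title "Add test file" --body "Testing the OpenCode review bot"
```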
For example, if you left a console.log in production code, you might see:
```
[BEST PRACTICE] [LOW]
Remove console.log statement before production. Consider using a proper logging library instead.
```
Handling False Positives
Like any automated tool, the AI might occasionally flag something that's actually fine. Here's how to handle it:
- Refine your prompt: Add specific instructions about what not to flag
- Add comments in your code: The AI respects code comments, so you can explain why something is done a certain way
- Use ignore patterns: Add specific patterns to ignore in the prompt
Example of ignoring test files:
```yaml
prompt: |
  Review this pull request, but ignore files in the __tests__ directory and files ending with .test.js or .spec.js.
  Focus on:
  - Production code quality
  - ... (rest of your prompt)
```
Advanced Configuration
Using Different Models
Ollama Cloud offers various models. You can switch them based on your needs:
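Swapping the model is a one-line change in the `with:` block. The names below are illustrative; check Ollama Cloud's model catalog (or the Models API JSON mentioned under Troubleshooting) for the exact identifiers available to your account:

```yaml
with:
  # Larger sibling of gpt-oss:20b: slower per review, but may catch
  # subtler issues in complex diffs (verify the name is available to you)
  model: "ollama-cloud/gpt-oss:120b"
  use_github_token: true
```

A reasonable rule of thumb: start with the smaller model for fast feedback on every push, and only move up if reviews feel shallow.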
Running Reviews Only on Specific Files
You can add a filter to review only certain file types:
Note that a step-level `if:` with `contains(github.event.pull_request.changed_files, '.ts')` won't work here: in the pull request event payload, `changed_files` is just the *number* of changed files, not their names. Filter at the trigger level with `paths` instead, so the workflow only runs when matching files change:

```yaml
on:
  pull_request:
    types: [opened, synchronize, reopened, ready_for_review]
    paths:
      - "**.ts"
      - "**.tsx"
```

You can reinforce this in the step's prompt, e.g. "Review only TypeScript and TSX files in this pull request...".
Skipping Reviews When Needed
Sometimes you want to skip the automated review (like for documentation-only changes). Add [skip-review] to your PR title, and modify the workflow:
```yaml
jobs:
  review:
    if: "!contains(github.event.pull_request.title, '[skip-review]')"
    runs-on: ubuntu-latest
    # ... rest of the workflow
```
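For the documentation-only case specifically, you can skip the title convention entirely and let GitHub Actions ignore those changes at the trigger level. A sketch, assuming docs live in `docs/` and Markdown files; adjust the patterns to your repository layout:

```yaml
on:
  pull_request:
    types: [opened, synchronize, reopened, ready_for_review]
    paths-ignore:
      - "**.md"
      - "docs/**"
```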
Real-World Benefits
After setting this up in several projects, here's what I've noticed:
- Catches bugs early: The AI often spots edge cases and potential bugs before human reviewers
- Consistent standards: Every PR gets the same level of scrutiny
- Learning tool: Junior developers learn from the suggestions
- Saves time: Human reviewers can focus on architecture and business logic instead of style issues
- 24/7 availability: Reviews happen instantly, even at 3 AM
Troubleshooting
The workflow doesn't trigger
- Check that the workflow YAML file has no syntax errors and is in the correct location: `.github/workflows/opencode-review.yml`
- Make sure the workflow file exists on the base branch (usually `main` or `master`)
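Two quick checks from the command line can help here (assuming the GitHub CLI is authenticated; actionlint is a separate, widely used workflow linter you'd need to install):

```shell
# Lint the workflow file for YAML and expression errors
actionlint .github/workflows/opencode-review.yml

# Confirm the workflow has actually triggered any runs
gh run list --workflow=opencode-review.yml
```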
No comments appear on the PR
- Check the Actions tab in your GitHub repository for error messages
- Verify your `OLLAMA_API_KEY` is set correctly in repository secrets
Too many comments
- Refine your prompt to be more selective and use file filters to focus on critical paths
- Add severity thresholds to only show HIGH and MEDIUM issues
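A severity threshold can be expressed directly in the prompt. A minimal sketch, appended to whichever prompt you're already using:

```yaml
prompt: |
  # ... your existing review instructions ...
  - Only post comments for issues you would rate [MEDIUM] or [HIGH] severity.
  - Silently skip anything you would rate [LOW].
```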
ProviderModelNotFoundError
- If you're using other models from different providers, ensure that the model name is correct and supported by OpenCode. Check the Models API JSON for valid provider/model names.
Wrapping Up
You now have an AI-powered code review bot that automatically reviews every pull request in your repository.
The best part? This is just the beginning. You can customize the prompt, adjust the model, and fine-tune the workflow to match your team's specific needs. Experiment with different configurations and see what works best for you.
Happy coding, and may your PRs always be bug-free! 🚀