Conversation
📝 Walkthrough

A new GitHub Actions workflow for staging deployments to EC2 is introduced, which handles credential assumption, executes deployment commands via AWS SSM, and monitors completion. Additionally, the guardrail validator normalization is updated to drop the "name" field from incoming validator configurations.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/app/schemas/guardrail_config.py (1)
78-88: ⚠️ Potential issue | 🟡 Minor

Remove the duplicate `name` entry from `drop_fields`.

Line 87 repeats `"name"`, which is already present on line 80. This is behaviorally harmless but triggers Ruff B033 and adds noise.

Proposed fix:

```diff
 drop_fields = {
     "id",
     "name",
     "organization_id",
     "project_id",
     "stage",
     "is_enabled",
     "created_at",
     "updated_at",
-    "name",
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/schemas/guardrail_config.py` around lines 78 - 88, Remove the duplicate "name" entry from the drop_fields set in guardrail_config.py: locate the drop_fields set definition and delete the repeated "name" element so each field appears only once (this will silence Ruff B033 and clean up the set literal).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/cd-staging.yml:
- Around line 3-5: The workflow triggers on branches: [enhancement/cd_staging]
but the deployment step pulls from origin main; update the CD logic in
.github/workflows/cd-staging.yml so the deployed ref matches the trigger—for
example, change the git pull/reset in the EC2 deployment step to use the exact
commit ref ${{ github.sha }} (or checkout with ref: ${{ github.sha }}) or else
pull/reset to the staging branch name (the same branch listed in branches:
[enhancement/cd_staging]) instead of always pulling origin main; ensure the
deployment command references github.sha or the incoming ref to deploy the same
ref that triggered the workflow.
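A minimal sketch of the first alternative the prompt describes, pinning the remote checkout to `${{ github.sha }}`. The step name, the `$INSTANCE_ID` variable, and the trimmed command list are illustrative assumptions, not the workflow's actual contents:

```yaml
# Hypothetical workflow fragment: deploy the exact commit that triggered the run.
- name: Deploy pinned commit via SSM   # step name is an assumption
  run: |
    aws ssm send-command \
      --instance-ids "$INSTANCE_ID" \
      --document-name "AWS-RunShellScript" \
      --parameters commands='["set -eux","sudo -iu ubuntu bash -lc \"cd /data/kaapi-guardrails && git fetch origin ${{ github.sha }} && git reset --hard ${{ github.sha }}\""]'
```

Because the deployed ref is the triggering commit itself, this stays correct even if the trigger branch list changes later.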
- Around line 48-55: The script currently runs `aws ssm wait command-executed`
under bash's default `set -e`, so if the waiter fails the subsequent `aws ssm
get-command-invocation` never runs; modify the block around `aws ssm wait
command-executed` to temporarily disable errexit (e.g., `set +e`), run `aws ssm
wait command-executed --command-id "$CMD_ID" --instance-id "$INSTANCE_ID"` and
capture its exit code into a variable, then always run `aws ssm
get-command-invocation --command-id "$CMD_ID" --instance-id "$INSTANCE_ID"
--query
'{Status:Status,Stdout:StandardOutputContent,Stderr:StandardErrorContent}'
--output json` to fetch stdout/stderr, and finally restore errexit (e.g., `set
-e`) and exit or handle based on the saved exit code so failures are preserved
while invocation output is still captured.
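The steps above can be sketched generically. Here `false` stands in for the SSM waiter and `echo` stands in for `aws ssm get-command-invocation`, so the snippet is runnable without AWS credentials; the function name is illustrative:

```shell
#!/usr/bin/env bash
# Run a command that may fail without letting `set -e` abort the script,
# always run a follow-up command, then propagate the saved exit code.
set -euo pipefail

wait_then_fetch() {
  set +e
  false                 # stand-in for: aws ssm wait command-executed ...
  local wait_rc=$?
  set -e

  # Still runs even though the waiter failed, so output is not lost:
  echo "invocation output captured"

  return "$wait_rc"     # preserve the waiter's exit code for the caller
}

if wait_then_fetch; then
  echo "waiter succeeded"
else
  echo "waiter failed with rc=$?"
fi
```

The same shape applies verbatim to the workflow step: wrap only the waiter in `set +e` / `set -e`, then exit with the saved code after fetching the invocation output.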
---
Outside diff comments:
In `@backend/app/schemas/guardrail_config.py`:
- Around line 78-88: Remove the duplicate "name" entry from the drop_fields set
in guardrail_config.py: locate the drop_fields set definition and delete the
repeated "name" element so each field appears only once (this will silence Ruff
B033 and clean up the set literal).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: eecfbdc4-2233-4a65-9639-5b6e7f14ba5d
📒 Files selected for processing (3)
- .github/workflows/cd-staging.yml
- .github/workflows/continuous-integration.yml
- backend/app/schemas/guardrail_config.py
```yaml
on:
  push:
    branches: [enhancement/cd_staging]
```
Deploy the same ref that triggered the workflow.
This workflow runs on enhancement/cd_staging, but the EC2 command pulls origin main, so staging pushes can deploy unrelated code from main. Either trigger from main, or pull/reset to the staging branch or exact ${{ github.sha }}.
🐛 Proposed fix if staging branch deployment is intended

```diff
-          --parameters commands='["set -eux","sudo -iu ubuntu bash -lc \"cd /data/kaapi-guardrails && git fetch --all && git pull origin main && docker compose build && docker compose run --rm backend uv run alembic upgrade head && docker compose up -d --remove-orphans && docker image prune -f\""]' \
+          --parameters commands='["set -eux","sudo -iu ubuntu bash -lc \"cd /data/kaapi-guardrails && git fetch origin enhancement/cd_staging && git reset --hard origin/enhancement/cd_staging && docker compose build && docker compose run --rm backend uv run alembic upgrade head && docker compose up -d --remove-orphans && docker image prune -f\""]' \
```

Also applies to: 37-37
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/cd-staging.yml around lines 3 - 5, The workflow triggers
on branches: [enhancement/cd_staging] but the deployment step pulls from origin
main; update the CD logic in .github/workflows/cd-staging.yml so the deployed
ref matches the trigger—for example, change the git pull/reset in the EC2
deployment step to use the exact commit ref ${{ github.sha }} (or checkout with
ref: ${{ github.sha }}) or else pull/reset to the staging branch name (the same
branch listed in branches: [enhancement/cd_staging]) instead of always pulling
origin main; ensure the deployment command references github.sha or the incoming
ref to deploy the same ref that triggered the workflow.
```shell
aws ssm wait command-executed \
  --command-id "$CMD_ID" \
  --instance-id "$INSTANCE_ID"
aws ssm get-command-invocation \
  --command-id "$CMD_ID" \
  --instance-id "$INSTANCE_ID" \
  --query '{Status:Status,Stdout:StandardOutputContent,Stderr:StandardErrorContent}' \
  --output json
```

(No newline at end of file)
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's see the full file to understand the script context
wc -l .github/workflows/cd-staging.yml
```

Repository: ProjectTech4DevAI/kaapi-guardrails
Length of output: 113

🏁 Script executed:

```shell
# Get the full context around the lines in question
cat -n .github/workflows/cd-staging.yml | sed -n '35,70p'
```

Repository: ProjectTech4DevAI/kaapi-guardrails
Length of output: 1410

🏁 Script executed:

```shell
# Also check if there are any other workflow files in the repo that might show similar patterns
fd -type f '\.yml$' '.github/workflows/' | head -10
```

Repository: ProjectTech4DevAI/kaapi-guardrails
Length of output: 248

🏁 Script executed:

```shell
# Check if there's any documentation or comments in other workflow files about error handling
rg "set \+e|set -e" .github/workflows/ -A 2 -B 2
```

Repository: ProjectTech4DevAI/kaapi-guardrails
Length of output: 781

🏁 Script executed:

```shell
# Let's check the complete workflow step to ensure we have full context
sed -n '43,60p' .github/workflows/cd-staging.yml
```

Repository: ProjectTech4DevAI/kaapi-guardrails
Length of output: 628
Capture SSM command output even when the waiter fails.
GitHub Actions runs bash with set -e enabled by default. If aws ssm wait command-executed exits non-zero, the script stops before aws ssm get-command-invocation runs, hiding the command's stdout/stderr when debugging is most critical. Disable errexit before the waiter, capture its exit code, fetch the invocation details, then restore the original exit status:
Proposed fix

```diff
+          set +e
           aws ssm wait command-executed \
             --command-id "$CMD_ID" \
             --instance-id "$INSTANCE_ID"
+          WAIT_RC=$?
+          set -e
           aws ssm get-command-invocation \
             --command-id "$CMD_ID" \
             --instance-id "$INSTANCE_ID" \
             --query '{Status:Status,Stdout:StandardOutputContent,Stderr:StandardErrorContent}' \
             --output json
+          exit "$WAIT_RC"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
set +e
aws ssm wait command-executed \
  --command-id "$CMD_ID" \
  --instance-id "$INSTANCE_ID"
WAIT_RC=$?
set -e
aws ssm get-command-invocation \
  --command-id "$CMD_ID" \
  --instance-id "$INSTANCE_ID" \
  --query '{Status:Status,Stdout:StandardOutputContent,Stderr:StandardErrorContent}' \
  --output json
exit "$WAIT_RC"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/cd-staging.yml around lines 48 - 55, The script currently
runs `aws ssm wait command-executed` under bash's default `set -e`, so if the
waiter fails the subsequent `aws ssm get-command-invocation` never runs; modify
the block around `aws ssm wait command-executed` to temporarily disable errexit
(e.g., `set +e`), run `aws ssm wait command-executed --command-id "$CMD_ID"
--instance-id "$INSTANCE_ID"` and capture its exit code into a variable, then
always run `aws ssm get-command-invocation --command-id "$CMD_ID" --instance-id
"$INSTANCE_ID" --query
'{Status:Status,Stdout:StandardOutputContent,Stderr:StandardErrorContent}'
--output json` to fetch stdout/stderr, and finally restore errexit (e.g., `set
-e`) and exit or handle based on the saved exit code so failures are preserved
while invocation output is still captured.
Target issue is #70
Summary