Conversation
Note: Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 Walkthrough

Adds an optional AWS Secrets Manager overlay for Postgres settings during Settings construction; adds AWS-related env vars to `.env.example`.
Sequence Diagram

```mermaid
sequenceDiagram
    participant App as Application
    participant Config as Config Module
    participant AWS as AWS Secrets Manager
    participant Settings as Settings Model
    App->>Config: request Settings
    Config->>Config: read env (USE_AWS_SECRETS, AWS_SECRETS_REGION, AWS_POSTGRES_SECRET_NAME)
    alt USE_AWS_SECRETS true and secret name present
        Config->>AWS: boto3.client("secretsmanager", region_name?)
        Config->>AWS: get_secret_value(SecretId=...)
        AWS-->>Config: SecretString (JSON)
        Config->>Config: parse JSON -> dict
        alt parsed JSON is object
            Config->>Settings: overlay POSTGRES_USER/PASSWORD/HOST/PORT for provided keys
        else
            Config-->>App: raise ValueError ("must be a JSON object")
        end
    else
        Config->>Settings: initialize from env only
    end
    Settings-->>App: return Settings instance
```
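The flow in the diagram can be sketched as a small helper. This is a minimal sketch rather than the PR's actual code: the secret field keys (`username`/`password`/`host`/`port`) and the injectable `client_factory` (a stand-in for `boto3.client` so the sketch runs without AWS access) are assumptions.

```python
import json
import os
from typing import Any, Callable


def load_postgres_overlay(client_factory: Callable[..., Any]) -> dict[str, Any]:
    """Sketch of the diagrammed flow: toggle check, fetch, parse, overlay map."""
    secret_name = os.getenv("AWS_POSTGRES_SECRET_NAME", "").strip()
    if os.getenv("USE_AWS_SECRETS", "").strip().lower() not in ("1", "true", "yes") or not secret_name:
        return {}  # "initialize from env only" branch of the diagram
    region = os.getenv("AWS_SECRETS_REGION")
    kwargs = {"region_name": region} if region else {}
    client = client_factory("secretsmanager", **kwargs)
    data = json.loads(client.get_secret_value(SecretId=secret_name)["SecretString"])
    if not isinstance(data, dict):
        raise ValueError(f"Secret {secret_name!r} must be a JSON object")
    # Overlay only the keys the secret actually provides.
    field_map = {
        "username": "POSTGRES_USER",
        "password": "POSTGRES_PASSWORD",
        "host": "POSTGRES_HOST",
        "port": "POSTGRES_PORT",
    }
    return {field: data[key] for key, field in field_map.items() if key in data}
```

In the real code the returned mapping would be applied onto the Settings fields; injecting the client factory keeps the function testable with a fake client.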
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
- 📝 Generate docstrings
- 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
docker-compose.yml (1)
33-43: ⚠️ Potential issue | 🟠 Major

Redis `sh -c` / healthcheck variable expansion happens at docker-compose parse time, not in the container.

`${REDIS_PASSWORD:-}` in `command:` and `test:` is interpolated by docker-compose when parsing the file on the host (before container startup), using only host-level environment variables. The `env_file: .env` entry applies to the container's runtime environment only; it doesn't reach these expansions.

If users set `REDIS_PASSWORD` only in `.env` as documented in `.env.example`, the conditional resolves to `if [ -n "" ]` at parse time, so Redis starts without authentication. Meanwhile, the backend (which reads `.env`) attempts to authenticate and fails.

Fix: Move the password configuration logic into a container startup script or shell wrapper that runs inside the container where `env_file` values are visible, or explicitly document that `REDIS_PASSWORD` must also be exported in the host environment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker-compose.yml` around lines 33 - 43, The docker-compose uses host-side variable expansion in the service command and healthcheck (the sh -c wrapper and the test: ["CMD-SHELL", ...]) so ${REDIS_PASSWORD:-} is evaluated at compose-parse time instead of inside the container; move the conditional and password usage into a container-run script (e.g. create a redis-entrypoint.sh executed as command/entrypoint) that reads REDIS_PASSWORD from the container environment (provided via env_file: .env), and update the healthcheck to call redis-cli inside the container using the runtime $REDIS_PASSWORD (or call the same entrypoint healthcheck helper) so no host-side ${REDIS_PASSWORD:-} expansion remains in the docker-compose command or test entries.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/core/config.py`:
- Around line 36-47: The AWS secrets path is skipped when environment vars are
only in the project's .env because load_secret_from_aws reads os.getenv
directly; change the flow so get_settings (or the caller that already constructs
Settings) provides the values to load_secret_from_aws instead of letting it call
os.getenv itself. Update load_secret_from_aws to accept explicit parameters
(e.g., use_aws_secrets, secret_names, aws_secrets_region) and have get_settings
instantiate a base Settings (or use dotenv_values to read the env_file) to
obtain those fields and pass them in; this ensures local .env-loaded values are
respected and removes direct os.getenv usage inside load_secret_from_aws.
- Around line 43-56: Wrap the boto3 client creation and each
client.get_secret_value call in try/except blocks inside the settings loader
(get_settings) so AWS errors don't crash startup: catch
botocore.exceptions.ClientError (inspect error.response["Error"]["Code"] and
handle common codes like ResourceNotFoundException, DecryptionFailure,
InvalidParameterException), botocore.exceptions.NoCredentialsError, and
EndpointConnectionError; log a clear error via the app logger and either raise a
controlled configuration exception (e.g., RuntimeError/ConfigError) or
skip/continue depending on missing/optional secrets; when parsing the secret
result check for "SecretString" and fallback to decoding base64 from
"SecretBinary" before json.loads, and validate the parsed value is a dict before
merged.update; ensure references are to boto3.client("secretsmanager"),
client.get_secret_value(SecretId=name), names, merged, and get_settings so the
changes are localized to this secret-loading logic.
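A minimal sketch of the error handling these prompts ask for. `fetch_secret_payload` is a hypothetical helper name; a broad `except Exception` stands in for the specific `botocore` exception classes (ClientError, NoCredentialsError, EndpointConnectionError) the prompt lists, and the `SecretBinary` base64 fallback follows the pattern described in the review.

```python
import base64
import json
from typing import Any


def fetch_secret_payload(client: Any, secret_name: str) -> dict[str, Any]:
    """Fetch one secret and return its parsed JSON body, raising a
    controlled error instead of leaking raw boto3/KeyError/JSON exceptions."""
    try:
        response = client.get_secret_value(SecretId=secret_name)
    except Exception as exc:  # real code: catch botocore exception classes
        raise RuntimeError(f"Failed to fetch AWS secret {secret_name!r}") from exc

    raw = response.get("SecretString")
    if raw is None:
        binary = response.get("SecretBinary")
        if binary is None:
            raise RuntimeError(f"Secret {secret_name!r} has no SecretString/SecretBinary")
        raw = base64.b64decode(binary).decode("utf-8")

    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise RuntimeError(f"Secret {secret_name!r} is not valid JSON") from exc
    if not isinstance(data, dict):
        raise RuntimeError(f"Secret {secret_name!r} must be a JSON object")
    return data
```

Every failure mode surfaces as a `RuntimeError` naming the secret, so startup fails with an actionable message instead of a raw stack trace.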
In `@docker-compose.yml`:
- Around line 11-18: The docker-compose change replaces strict shell-expansion
guards with silent defaults for sensitive vars (POSTGRES_PASSWORD,
POSTGRES_USER, POSTGRES_DB), which can hide missing production creds; revert the
sensitive entries in docker-compose.yml to use the required-variable form (e.g.,
${POSTGRES_PASSWORD:?POSTGRES_PASSWORD is required}) or, alternatively, update
the backend validator function _check_default_secret in
backend/app/core/config.py to reject additional weak defaults such as "postgres"
(and any other common defaults) so deployments fail fast when a secure secret is
not provided; modify either the docker-compose variable expansions or the
_check_default_secret logic accordingly to enforce non-default passwords.
---
Outside diff comments:
In `@docker-compose.yml`:
- Around line 33-43: The docker-compose uses host-side variable expansion in the
service command and healthcheck (the sh -c wrapper and the test: ["CMD-SHELL",
...]) so ${REDIS_PASSWORD:-} is evaluated at compose-parse time instead of
inside the container; move the conditional and password usage into a
container-run script (e.g. create a redis-entrypoint.sh executed as
command/entrypoint) that reads REDIS_PASSWORD from the container environment
(provided via env_file: .env), and update the healthcheck to call redis-cli
inside the container using the runtime $REDIS_PASSWORD (or call the same
entrypoint healthcheck helper) so no host-side ${REDIS_PASSWORD:-} expansion
remains in the docker-compose command or test entries.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: bd813ba9-0791-4cb2-b8cc-9cc7e3783fb8
📒 Files selected for processing (3)
- .env.example
- backend/app/core/config.py
- docker-compose.yml
Codecov Report: ✅ All modified and coverable lines are covered by tests.
🧹 Nitpick comments (2)
backend/app/tests/core/test_config.py (2)
12-142: Add type hints to test methods and fixture per coding guidelines.

Per the repo guideline "Always add type hints to all function parameters and return values in Python code", the fixture and each test method should annotate `monkeypatch` and the return type. Minor, but consistent with the rest of the codebase.

♻️ Example diff

```diff
-def _clean_aws_env(monkeypatch):
+def _clean_aws_env(monkeypatch: pytest.MonkeyPatch) -> None:

-    def test_toggle_off_returns_empty(self, monkeypatch):
+    def test_toggle_off_returns_empty(self, monkeypatch: pytest.MonkeyPatch) -> None:
```

Apply the same pattern (`monkeypatch: pytest.MonkeyPatch`, `mock_boto: MagicMock`, `-> None`) to the other tests. As per coding guidelines: "Always add type hints to all function parameters and return values in Python code".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/tests/core/test_config.py` around lines 12 - 142, Add type hints to the fixture and test methods: annotate the fixture _clean_aws_env(monkeypatch: pytest.MonkeyPatch) -> None and update each TestLoadSecretFromAws test signature to include parameter and return annotations (e.g. monkeypatch: pytest.MonkeyPatch, mock_boto: MagicMock) and add -> None on each test function (for example test_toggle_off_returns_empty(self, monkeypatch: pytest.MonkeyPatch) -> None, test_single_secret_maps_all_fields(self, mock_boto: MagicMock, monkeypatch: pytest.MonkeyPatch) -> None, etc. Ensure the imports for pytest.MonkeyPatch and MagicMock are present or adjusted if needed.
43-142: LGTM — solid coverage of the new helper.

The suite exercises the toggle, empty/whitespace names, single-secret mapping, missing-key skipping, left-to-right merge precedence, non-dict payload error, region precedence (`AWS_SECRETS_REGION` over `AWS_DEFAULT_REGION`, implicitly via the dedicated fallback test), and the no-region client construction path. Patching `boto3.client` at module scope correctly intercepts the function-local `import boto3`, since it rebinds the attribute on the module. Good use of the autouse fixture to prevent env leakage from `.env.test`.

One optional addition worth considering: a test asserting `AWS_SECRETS_REGION` takes precedence when both it and `AWS_DEFAULT_REGION` are set, to lock in the precedence contract from `config.py`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/tests/core/test_config.py` around lines 43 - 142, Add a new test that verifies AWS_SECRETS_REGION takes precedence over AWS_DEFAULT_REGION: patch boto3.client, set monkeypatch.setenv("USE_AWS_SECRETS","true"), monkeypatch.setenv("AWS_SECRETS_REGION","ap-south-1"), monkeypatch.setenv("AWS_DEFAULT_REGION","us-east-1"), call load_secret_from_aws("kaapi-db", {"password":"POSTGRES_PASSWORD"}) and assert mock_boto.assert_called_once_with("secretsmanager", region_name="ap-south-1"); name it e.g. test_explicit_region_overrides_default to mirror existing tests and keep mocking/get_secret_value behavior consistent with other tests.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@backend/app/tests/core/test_config.py`:
- Around line 12-142: Add type hints to the fixture and test methods: annotate
the fixture _clean_aws_env(monkeypatch: pytest.MonkeyPatch) -> None and update
each TestLoadSecretFromAws test signature to include parameter and return
annotations (e.g. monkeypatch: pytest.MonkeyPatch, mock_boto: MagicMock) and add
-> None on each test function (for example test_toggle_off_returns_empty(self,
monkeypatch: pytest.MonkeyPatch) -> None,
test_single_secret_maps_all_fields(self, mock_boto: MagicMock, monkeypatch:
pytest.MonkeyPatch) -> None, etc. Ensure the imports for pytest.MonkeyPatch and
MagicMock are present or adjusted if needed.
- Around line 43-142: Add a new test that verifies AWS_SECRETS_REGION takes
precedence over AWS_DEFAULT_REGION: patch boto3.client, set
monkeypatch.setenv("USE_AWS_SECRETS","true"),
monkeypatch.setenv("AWS_SECRETS_REGION","ap-south-1"),
monkeypatch.setenv("AWS_DEFAULT_REGION","us-east-1"), call
load_secret_from_aws("kaapi-db", {"password":"POSTGRES_PASSWORD"}) and assert
mock_boto.assert_called_once_with("secretsmanager", region_name="ap-south-1");
name it e.g. test_explicit_region_overrides_default to mirror existing tests and
keep mocking/get_secret_value behavior consistent with other tests.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 84e834a1-e759-42ba-ac0c-4304ac256cf4
📒 Files selected for processing (1)
backend/app/tests/core/test_config.py
```python
    if not names:
        return {}

    import boto3
```
I think we can keep it lazy; as far as I know, boto3 is pretty heavy to import. If USE_AWS_SECRETS=false, the function returns early and we never pay that cost.
This is what came up in Claude:
Lazy import boto3 (config.py:43) is fine, but boto3 is already a hard dep of the project (used in cloud/storage.py). There's no import-cost benefit — move it to the module top for consistency with the rest of the codebase.
```python
logger = logging.getLogger(__name__)


def load_secret_from_aws(
```
move this inside the settings and use it like this:

```python
@computed_field  # type: ignore[prop-decorator]
@property
def RABBITMQ_URL(self) -> str:
    return f"amqp://{self.RABBITMQ_USER}:{self.RABBITMQ_PASSWORD}@{self.RABBITMQ_HOST}:{self.RABBITMQ_PORT}/{self.RABBITMQ_VHOST}"
```
ignore if it is irrelevant
In config settings there's no need to use os.getenv.
So I moved the logic inside Settings as a model_validator(mode="after"); it reads self.USE_AWS_SECRETS / self.AWS_SECRETS_NAMES / self.AWS_SECRETS_REGION directly and overlays the AWS values onto the POSTGRES_* fields. No more os.getenv.
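A minimal sketch of that overlay-in-validator pattern, using plain `pydantic.BaseModel` and a hypothetical `_fetch_secret` stand-in for the Secrets Manager call (the real Settings class would populate the toggle fields from the env via pydantic-settings and fetch with boto3):

```python
from typing import Any

from pydantic import BaseModel, model_validator


def _fetch_secret(name: str) -> dict[str, Any]:
    # Hypothetical stand-in for client.get_secret_value(...); the real
    # code parses the secret's SecretString JSON.
    return {"username": "aws_user", "password": "aws_pass"}


class DBSettings(BaseModel):
    USE_AWS_SECRETS: bool = False
    AWS_POSTGRES_SECRET_NAME: str = ""
    POSTGRES_USER: str = ""
    POSTGRES_PASSWORD: str = ""

    @model_validator(mode="after")
    def _overlay_aws_secrets(self) -> "DBSettings":
        # Reads the toggle and secret name from model fields, not os.getenv.
        if not (self.USE_AWS_SECRETS and self.AWS_POSTGRES_SECRET_NAME):
            return self
        data = _fetch_secret(self.AWS_POSTGRES_SECRET_NAME)
        for key, field in {"username": "POSTGRES_USER", "password": "POSTGRES_PASSWORD"}.items():
            if key in data:
                setattr(self, field, data[key])
        return self
```

With the default `validate_assignment=False`, the `setattr` calls inside the after-validator do not re-trigger validation, so there is no recursion concern here.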
```python
    if os.getenv("USE_AWS_SECRETS", "").strip().lower() not in ("1", "true", "yes"):
        return {}

    names = [n.strip() for n in secret_names.split(",") if n.strip()]
```
rename n for better readability
Updated in the new codebase.
```python
logger = logging.getLogger(__name__)


def load_secret_from_aws(
```
we can rename load_secret_from_aws (singular) to load_secrets_from_aws (plural), as we are pulling multiple secrets
I made some updates in the code and also renamed the function accordingly.
```python
    if not names:
        return {}

    import boto3
```
This is what came up in Claude:
Lazy import boto3 (config.py:43) is fine, but boto3 is already a hard dep of the project (used in cloud/storage.py). There's no import-cost benefit — move it to the module top for consistency with the rest of the codebase.
```python
import json
import logging
import secrets
import warnings
import os
```
we can order this alphabetically
@AkhileshNegi Thanks for the suggestion! I'd prefer to keep it in
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/app/core/config.py (1)
8-22: ⚠️ Potential issue | 🟡 Minor

Use `typing.Self` for Python 3.11+.

Line 22 imports `Self` from `typing_extensions`; Python 3.11+ includes `Self` in the standard `typing` module. Update the import accordingly.

♻️ Proposed fix

```diff
-from typing import Any, Literal
+from typing import Any, Literal, Self
@@
-from typing_extensions import Self
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/core/config.py` around lines 8 - 22, The code imports Self from typing_extensions but on Python 3.11+ Self is available in the standard typing module; update the import by removing "from typing_extensions import Self" and instead reference typing.Self (e.g., add "from typing import Self" or import typing and use typing.Self) so any usages of Self in this file use the stdlib symbol (locate the import of Self at the bottom of the import block and replace it accordingly).
♻️ Duplicate comments (1)
backend/app/core/config.py (1)
222-234: ⚠️ Potential issue | 🟠 Major

Wrap Secrets Manager calls in controlled config errors.

`boto3.client(...)`, `get_secret_value(...)`, and `["SecretString"]` can still raise raw boto3/KeyError/JSON errors during `settings = get_settings()`. Catch and re-raise with a clear configuration error, and handle missing `SecretString` explicitly.

🛡️ Proposed fix

```diff
         region = self.AWS_SECRETS_REGION or self.AWS_DEFAULT_REGION
         client_kwargs: dict[str, Any] = {"region_name": region} if region else {}
-        client = boto3.client("secretsmanager", **client_kwargs)
+        try:
+            client = boto3.client("secretsmanager", **client_kwargs)
+        except Exception as exc:
+            raise RuntimeError("Failed to create AWS Secrets Manager client") from exc
         for secret_name, field_map in active:
             self._load_secret(client, secret_name, field_map)
@@
-        data = json.loads(client.get_secret_value(SecretId=secret_name)["SecretString"])
+        try:
+            response = client.get_secret_value(SecretId=secret_name)
+        except Exception as exc:
+            raise RuntimeError(
+                f"Failed to fetch AWS secret '{secret_name}'"
+            ) from exc
+
+        secret_string = response.get("SecretString")
+        if secret_string is None:
+            raise ValueError(
+                f"Secret '{secret_name}' must include SecretString"
+            )
+
+        data = json.loads(secret_string)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/core/config.py` around lines 222 - 234, Wrap the Secrets Manager interactions in controlled configuration errors: in the code paths that call boto3.client(...) (in the secret-loading setup) and inside _load_secret(client, secret_name, field_map) around client.get_secret_value(...) and json.loads(...), catch boto3/botocore exceptions (e.g., ClientError, NoCredentialsError, EndpointConnectionError), KeyError, and JSONDecodeError (and TypeError) and re-raise a clear configuration exception (create/use a ConfigError or ConfigurationError) with a message that includes the secret_name and the original exception (use "from err" to preserve trace); also explicitly check the response for the "SecretString" key and raise the same ConfigError if it's missing instead of indexing into the dict.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.env.example:
- Around line 100-103: The dotenv-linter requires the AWS-related env keys to be
sorted; currently the block shows USE_AWS_SECRETS, AWS_SECRETS_REGION,
AWS_POSTGRES_SECRET_NAME. Reorder the three variables so they appear as
AWS_POSTGRES_SECRET_NAME, AWS_SECRETS_REGION, USE_AWS_SECRETS (preserving their
values/comments) in the .env.example file to satisfy the linter.
In `@backend/app/core/config.py`:
- Around line 57-61: Add a post-overlay validation on the Settings model to fail
fast when DB fields are empty: in the Settings class add a pydantic
root_validator (or validate method) that checks POSTGRES_SERVER, POSTGRES_USER,
POSTGRES_PASSWORD and POSTGRES_DB are non-empty after environment/overlay
loading and raise a clear ValueError if any required field is blank; reference
the Settings class and the
POSTGRES_SERVER/POSTGRES_USER/POSTGRES_PASSWORD/POSTGRES_DB fields so missing DB
configuration cannot silently instantiate.
- Around line 241-243: The loop assigns raw JSON secret strings directly to
typed attributes (field_map, data, setattr), bypassing Pydantic validation and
allowing wrong types (e.g., "5432" to int). Instead, collect the mapped
key→field pairs from data into a temp dict, run it through Pydantic validation
(e.g., use self.__class__.parse_obj or construct a new model instance) to
coerce/validate types, and then assign the validated attributes back to self (or
replace self's state) rather than calling setattr with the raw values.
---
Outside diff comments:
In `@backend/app/core/config.py`:
- Around line 8-22: The code imports Self from typing_extensions but on Python
3.11+ Self is available in the standard typing module; update the import by
removing "from typing_extensions import Self" and instead reference typing.Self
(e.g., add "from typing import Self" or import typing and use typing.Self) so
any usages of Self in this file use the stdlib symbol (locate the import of Self
at the bottom of the import block and replace it accordingly).
---
Duplicate comments:
In `@backend/app/core/config.py`:
- Around line 222-234: Wrap the Secrets Manager interactions in controlled
configuration errors: in the code paths that call boto3.client(...) (in the
secret-loading setup) and inside _load_secret(client, secret_name, field_map)
around client.get_secret_value(...) and json.loads(...), catch boto3/botocore
exceptions (e.g., ClientError, NoCredentialsError, EndpointConnectionError),
KeyError, and JSONDecodeError (and TypeError) and re-raise a clear configuration
exception (create/use a ConfigError or ConfigurationError) with a message that
includes the secret_name and the original exception (use "from err" to preserve
trace); also explicitly check the response for the "SecretString" key and raise
the same ConfigError if it's missing instead of indexing into the dict.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 717ebcf0-047f-47c8-b7cb-3b8dc3fb935e
📒 Files selected for processing (3)
- .env.example
- backend/app/core/config.py
- backend/app/tests/core/test_config.py
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/app/tests/core/test_config.py
```
# AWS Secrets Manager
USE_AWS_SECRETS=false
AWS_SECRETS_REGION=
AWS_POSTGRES_SECRET_NAME=
```
Reorder the new keys to satisfy dotenv-linter.
The new block is in the right template file, but the linter expects these keys ordered as AWS_POSTGRES_SECRET_NAME, AWS_SECRETS_REGION, then USE_AWS_SECRETS.
🧹 Proposed fix
```diff
 # AWS Secrets Manager
-USE_AWS_SECRETS=false
-AWS_SECRETS_REGION=
 AWS_POSTGRES_SECRET_NAME=
+AWS_SECRETS_REGION=
+USE_AWS_SECRETS=false
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

```
# AWS Secrets Manager
USE_AWS_SECRETS=false
AWS_SECRETS_REGION=
AWS_POSTGRES_SECRET_NAME=
```

After:

```
# AWS Secrets Manager
AWS_POSTGRES_SECRET_NAME=
AWS_SECRETS_REGION=
USE_AWS_SECRETS=false
```
🧰 Tools
🪛 dotenv-linter (4.0.0)
[warning] 102-102: [UnorderedKey] The AWS_SECRETS_REGION key should go before the USE_AWS_SECRETS key
(UnorderedKey)
[warning] 103-103: [UnorderedKey] The AWS_POSTGRES_SECRET_NAME key should go before the AWS_SECRETS_REGION key
(UnorderedKey)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.env.example around lines 100 - 103, The dotenv-linter requires the
AWS-related env keys to be sorted; currently the block shows USE_AWS_SECRETS,
AWS_SECRETS_REGION, AWS_POSTGRES_SECRET_NAME. Reorder the three variables so
they appear as AWS_POSTGRES_SECRET_NAME, AWS_SECRETS_REGION, USE_AWS_SECRETS
(preserving their values/comments) in the .env.example file to satisfy the
linter.
```diff
 POSTGRES_SERVER: str = ""
 POSTGRES_PORT: int = 5432
-POSTGRES_USER: str
+POSTGRES_USER: str = ""
 POSTGRES_PASSWORD: str = ""
 POSTGRES_DB: str = ""
```
Fail fast when required Postgres settings remain empty.
Making POSTGRES_SERVER and POSTGRES_USER optional helps AWS overlays, but staging/production can now instantiate Settings with empty DB fields if the secret name is missing or AWS secrets are disabled. Add a post-overlay validation for required DB settings.
🛡️ Proposed fix
```diff
     @model_validator(mode="after")
     def _enforce_non_default_secrets(self) -> Self:
+        if self.ENVIRONMENT in {"staging", "production"}:
+            required_db_settings = {
+                "POSTGRES_SERVER": self.POSTGRES_SERVER,
+                "POSTGRES_USER": self.POSTGRES_USER,
+                "POSTGRES_PASSWORD": self.POSTGRES_PASSWORD,
+                "POSTGRES_DB": self.POSTGRES_DB,
+            }
+            missing = [
+                name for name, value in required_db_settings.items() if not value
+            ]
+            if missing:
+                raise ValueError(
+                    f"Missing required database settings: {', '.join(missing)}"
+                )
+
         self._check_default_secret("SECRET_KEY", self.SECRET_KEY)
         self._check_default_secret("POSTGRES_PASSWORD", self.POSTGRES_PASSWORD)
         self._check_default_secret(
```

Also applies to: 250-258
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/core/config.py` around lines 57 - 61, Add a post-overlay
validation on the Settings model to fail fast when DB fields are empty: in the
Settings class add a pydantic root_validator (or validate method) that checks
POSTGRES_SERVER, POSTGRES_USER, POSTGRES_PASSWORD and POSTGRES_DB are non-empty
after environment/overlay loading and raise a clear ValueError if any required
field is blank; reference the Settings class and the
POSTGRES_SERVER/POSTGRES_USER/POSTGRES_PASSWORD/POSTGRES_DB fields so missing DB
configuration cannot silently instantiate.
```python
        for key, field in field_map.items():
            if key in data:
                setattr(self, field, data[key])
```
🧩 Analysis chain
🌐 Web query:
Pydantic v2 BaseModel validate_assignment default setattr assignment validation model_validator after
💡 Result:
In Pydantic v2, `validate_assignment=True` (set via `model_config = ConfigDict(validate_assignment=True)`) enables validation on attribute assignment (`setattr`) after model initialization. By default, models are mutable, allowing `instance.field = value` without validation. When `validate_assignment=True`, assignment triggers validation similar to initialization: the value passes through field validators (including type coercion and `@field_validator` in 'before'/'plain'/'after'/'wrap' modes), but model validators (`@model_validator`) are generally not fully triggered in the same way. Key behaviors and issues:

- Field validators run on the assigned field value during `setattr`.
- `@model_validator(mode='after')` can cause `RecursionError` if it assigns to fields (triggers validation again) (sources 13, 20).
- Workaround: use `self.__dict__['field'] = value` or `object.__setattr__(self, 'field', value)` inside model validators to bypass validation.
- `@model_validator(mode='before')` may not apply correctly to single-field assignment data (open bug, source 10).
- The value may be set before model 'after' validation, so rejecting in a model validator doesn't revert it (source 21).

The default is `validate_assignment=False`. It only affects post-init `setattr`, not init validation. For complete model-level validation on assignment, rely primarily on field validators; model validators have limitations with assignment.
Citations:
- 1: https://pydantic.dev/docs/validation/latest/concepts/models/
- 2: https://pydantic.dev/docs/validation/latest/concepts/validators/
- 3: https://pydantic.dev/docs/validation/2.0/usage/model_config
- 4: Model Validators are not adhered to when using `validate_assignment` (pydantic/pydantic#8474)
- 5: `validate_assignment=True` && use of `@model_validator(mode="after")` leads to RecursionError (pydantic/pydantic#6597)
- 6: `validate_assignment=True` & `@model_validator(mode='after')` crashes (pydantic/pydantic#9576)
- 7: `validate_assignment=True` and `model_validator "after"` doesn't prevent the improper value from getting through (pydantic/pydantic#7105)
- 8: `validate_assignment=False` has no effect (pydantic/pydantic#9807)
🏁 Script executed:

```shell
cat -n backend/app/core/config.py | head -250 | tail -50
```

Repository: ProjectTech4DevAI/kaapi-backend
Length of output: 2165

🏁 Script executed:

```shell
wc -l backend/app/core/config.py
```

Repository: ProjectTech4DevAI/kaapi-backend
Length of output: 105

🏁 Script executed:

```shell
cat -n backend/app/core/config.py
```

Repository: ProjectTech4DevAI/kaapi-backend
Length of output: 11070
Add validation and type coercion before assigning secret values to typed fields.
Using setattr without validation bypasses Pydantic's field validators. Since the secret data comes from JSON (line 234), values are strings. The code assigns these directly: if the secret contains "port": "5432", it sets POSTGRES_PORT (typed as int) to the string "5432", violating the type contract. Validate and coerce the value to the target field's type before assignment.
Proposed fix
```diff
         applied: list[str] = []
         for key, field in field_map.items():
             if key in data:
-                setattr(self, field, data[key])
+                value = data[key]
+                if field == "POSTGRES_PORT":
+                    if isinstance(value, bool):
+                        raise ValueError(
+                            f"Secret '{secret_name}' key '{key}' must be an integer port"
+                        )
+                    try:
+                        value = int(value)
+                    except (TypeError, ValueError) as exc:
+                        raise ValueError(
+                            f"Secret '{secret_name}' key '{key}' must be an integer port"
+                        ) from exc
+                    if not 1 <= value <= 65535:
+                        raise ValueError(
+                            f"Secret '{secret_name}' key '{key}' must be a valid TCP port"
+                        )
+                elif not isinstance(value, str):
+                    raise ValueError(
+                        f"Secret '{secret_name}' key '{key}' must be a string"
+                    )
+
+                setattr(self, field, value)
                 applied.append(field)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/core/config.py` around lines 241 - 243, The loop assigns raw JSON
secret strings directly to typed attributes (field_map, data, setattr),
bypassing Pydantic validation and allowing wrong types (e.g., "5432" to int).
Instead, collect the mapped key→field pairs from data into a temp dict, run it
through Pydantic validation (e.g., use self.__class__.parse_obj or construct a
new model instance) to coerce/validate types, and then assign the validated
attributes back to self (or replace self's state) rather than calling setattr
with the raw values.
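The route-through-Pydantic approach this prompt suggests can be sketched with a hypothetical mini-model that mirrors the target field types and coerces/validates before assignment. `_PgOverlay`, `apply_overlay`, and the field subset are illustrative, not the PR's actual code:

```python
from typing import Any

from pydantic import BaseModel


class _PgOverlay(BaseModel):
    # Hypothetical helper model mirroring the Settings field types; used
    # only to coerce/validate raw secret values before assignment.
    POSTGRES_HOST: str = ""
    POSTGRES_PORT: int = 5432


def apply_overlay(settings: Any, data: dict[str, Any], field_map: dict[str, str]) -> list[str]:
    """Map secret keys to Settings fields, validate, then assign."""
    mapped = {field: data[key] for key, field in field_map.items() if key in data}
    # Coerces "5432" -> 5432, or raises a clear ValidationError on bad input.
    validated = _PgOverlay.model_validate(mapped)
    for field in mapped:
        setattr(settings, field, getattr(validated, field))
    return sorted(mapped)
```

Because the values pass through the model before `setattr`, a string port from JSON arrives on the settings object as an `int`, and type errors surface as validation failures rather than silent corruption.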
Actionable comments posted: 1
♻️ Duplicate comments (3)
backend/app/core/config.py (3)
246-254: ⚠️ Potential issue | 🟠 Major

`setattr` bypasses field validation; missing keys are silently ignored.

Two concerns on this loop:

1. No type coercion (duplicate of prior review). `model_config` does not enable `validate_assignment`, so `setattr(self, "POSTGRES_PORT", data["port"])` will set the raw value: if the secret stores `"port": "5432"` (JSON string) or `"port": 5432` (int), only one matches the declared type. Coerce/validate explicitly for `POSTGRES_PORT` (and any other non-`str` fields added later) before assignment.
2. Silent partial application (new). Keys in `field_map` that are absent from the secret payload are skipped without any log entry. A rotated secret missing `username` or `host` will leave those fields at their (now-empty) defaults and be detected only when a DB connection is attempted. Consider logging the missing subset alongside `applied` so operators can spot incomplete secrets:

🛡️ Proposed adjustment

```diff
         applied: list[str] = []
+        missing: list[str] = []
         for key, field in field_map.items():
             if key in data:
                 setattr(self, field, data[key])
                 applied.append(field)
+            else:
+                missing.append(key)
         logger.info(
-            f"[_load_secret] Loaded AWS secret | "
-            f"{{'name': '{secret_name}', 'fields': {sorted(applied)}}}"
+            f"[_load_secret] Loaded AWS secret name={secret_name!r} "
+            f"applied={sorted(applied)} missing_keys={sorted(missing)}"
         )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/core/config.py` around lines 246 - 254, The _load_secret loop uses setattr to apply secret values from field_map to self which bypasses pydantic validation and silently skips missing keys; update the loop in _load_secret to (1) explicitly coerce/validate non-str fields (e.g., POSTGRES_PORT) before assignment by converting/parsing values or calling the model's validators, and then assign the validated value to the attribute instead of raw setattr, and (2) compute and log the missing fields (field_map keys not present in data) alongside applied so the logger.info message for _load_secret includes both the sorted applied list and a sorted missing list to surface partial/rotated secrets.
**228-240**: ⚠️ Potential issue | 🟠 Major

**Unhandled boto3 exceptions will crash app startup.**

`boto3.client("secretsmanager", ...)` and `client.get_secret_value(...)` run during `Settings` construction, which is executed at module import time via `settings = get_settings()` (line 279). Any of the following will propagate and abort startup with a raw stack trace:

- `botocore.exceptions.NoCredentialsError` / `PartialCredentialsError`
- `botocore.exceptions.NoRegionError` (when no region is provided and none in the default config)
- `botocore.exceptions.EndpointConnectionError` (network issues)
- `botocore.exceptions.ClientError` with codes like `ResourceNotFoundException`, `AccessDeniedException`, `DecryptionFailure`, `InvalidParameterException`
- `KeyError` on `response["SecretString"]` for binary secrets
- `json.JSONDecodeError` for malformed payloads

Wrap the boto3 calls with targeted exception handling, log a clear `[_load_secret]`-prefixed message, and raise a controlled configuration error so operators get an actionable failure mode rather than a stack trace.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/core/config.py` around lines 228 - 240, Wrap the Secrets Manager client creation and the call inside _load_secret with targeted exception handling: catch botocore exceptions (e.g., NoCredentialsError, PartialCredentialsError, NoRegionError, EndpointConnectionError, ClientError), KeyError for missing "SecretString", and json.JSONDecodeError; when caught, log a clear message prefixed with "[_load_secret]" including the secret_name and the exception details, and then raise a controlled configuration exception (e.g., a custom ConfigError or ValueError) instead of letting the raw exception propagate; update the client creation block where boto3.client("secretsmanager", **client_kwargs) is called and the call to client.get_secret_value(...) inside _load_secret to implement this behavior so startup failures produce actionable, logged errors.
**57-59**: ⚠️ Potential issue | 🟠 Major

**Empty defaults for `POSTGRES_SERVER` / `POSTGRES_USER` can silently produce a broken DB URI.**

Relaxing these to `""` is necessary for the AWS overlay path, but with `USE_AWS_SECRETS=False` (or `AWS_POSTGRES_SECRET_NAME` unset) a misconfigured deployment will construct `Settings` successfully and fail only later when `backend/app/core/db.py` builds the engine at module import — with a cryptic SQLAlchemy/psycopg error rather than a clear configuration error.

Consider adding a post-overlay check in `_enforce_non_default_secrets` (or a new validator) that rejects empty `POSTGRES_SERVER` / `POSTGRES_USER` / `POSTGRES_DB` for `staging`/`production` so startup fails fast with a clear message.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/core/config.py` around lines 57 - 59, The POSTGRES_* fields can be empty after overlays which causes cryptic DB errors later; update the Settings validation by adding a post-overlay check in _enforce_non_default_secrets (or a new validator called from it) to raise a clear ValueError when running in "staging" or "production" and any of POSTGRES_SERVER, POSTGRES_USER, or POSTGRES_DB are empty strings, unless USE_AWS_SECRETS is true (or AWS_POSTGRES_SECRET_NAME is set); reference the Settings class and the _enforce_non_default_secrets function to locate where to add this check so startup fails fast with a descriptive message about missing DB configuration.
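The suggested fail-fast check might look like the following. The field names mirror the review, but the simplified model and validator name are assumptions; in the real code the check would live alongside `_enforce_non_default_secrets`:

```python
from pydantic import BaseModel, model_validator


class Settings(BaseModel):
    ENVIRONMENT: str = "local"
    POSTGRES_SERVER: str = ""
    POSTGRES_USER: str = ""
    POSTGRES_DB: str = ""

    @model_validator(mode="after")
    def _require_db_settings(self) -> "Settings":
        # Runs after field parsing (and, in the real Settings, would run
        # after any secret overlay): staging/production must not ship
        # empty DB fields, so startup fails with a clear message.
        if self.ENVIRONMENT in ("staging", "production"):
            missing = [
                name
                for name in ("POSTGRES_SERVER", "POSTGRES_USER", "POSTGRES_DB")
                if not getattr(self, name)
            ]
            if missing:
                raise ValueError(f"Missing required DB settings: {missing}")
        return self


Settings()  # local: empty defaults are allowed

try:
    Settings(ENVIRONMENT="production", POSTGRES_USER="app")
except ValueError as exc:  # pydantic's ValidationError subclasses ValueError
    print(exc)  # names POSTGRES_SERVER and POSTGRES_DB as missing
```

This fails at `Settings` construction rather than at first DB connection, which is the actionable error the review asks for.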
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/core/config.py`:
- Around line 224-226: The _apply_aws_secrets function silently returns when
secret_configs yields no entries (active is empty) causing silent no-op if
USE_AWS_SECRETS=True but no AWS secret names are set; update _apply_aws_secrets
to detect this case and emit a clear warning or error via the app logger (e.g.,
log a warning mentioning USE_AWS_SECRETS and the expected env vars like
AWS_POSTGRES_SECRET_NAME) before returning or raise a configuration error so
operators see the misconfiguration instead of proceeding with defaults.
---
Duplicate comments:
In `@backend/app/core/config.py`:
- Around line 246-254: The _load_secret loop uses setattr to apply secret values
from field_map to self which bypasses pydantic validation and silently skips
missing keys; update the loop in _load_secret to (1) explicitly coerce/validate
non-str fields (e.g., POSTGRES_PORT) before assignment by converting/parsing
values or calling the model's validators, and then assign the validated value to
the attribute instead of raw setattr, and (2) compute and log the missing fields
(field_map keys not present in data) alongside applied so the logger.info
message for _load_secret includes both the sorted applied list and a sorted
missing list to surface partial/rotated secrets.
- Around line 228-240: Wrap the Secrets Manager client creation and the call
inside _load_secret with targeted exception handling: catch botocore exceptions
(e.g., NoCredentialsError, PartialCredentialsError, NoRegionError,
EndpointConnectionError, ClientError), KeyError for missing "SecretString", and
json.JSONDecodeError; when caught, log a clear message prefixed with
"[_load_secret]" including the secret_name and the exception details, and then
raise a controlled configuration exception (e.g., a custom ConfigError or
ValueError) instead of letting the raw exception propagate; update the client
creation block where boto3.client("secretsmanager", **client_kwargs) is called
and the call to client.get_secret_value(...) inside _load_secret to implement
this behavior so startup failures produce actionable, logged errors.
- Around line 57-59: The POSTGRES_* fields can be empty after overlays which
causes cryptic DB errors later; update the Settings validation by adding a
post-overlay check in _enforce_non_default_secrets (or a new validator called
from it) to raise a clear ValueError when running in "staging" or "production"
and any of POSTGRES_SERVER, POSTGRES_USER, or POSTGRES_DB are empty strings,
unless USE_AWS_SECRETS is true (or AWS_POSTGRES_SECRET_NAME is set); reference
the Settings class and the _enforce_non_default_secrets function to locate where
to add this check so startup fails fast with a descriptive message about missing
DB configuration.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 06ebc20c-146c-48fa-a55b-0f9963087706
📒 Files selected for processing (1)
backend/app/core/config.py
Summary
Target issue is: #771 (comment)
Summary:
Adds opt-in AWS Secrets Manager support for Postgres credentials, so staging/prod can stop shipping database-related credentials in `.env`.

- `load_secret_from_aws(secret_names, field_map)`: takes a comma-separated list of secret IDs, fetches each, merges their JSON payloads left-to-right, and remaps keys (e.g. `password` → `POSTGRES_PASSWORD`) onto `Settings` fields.
- New env vars: `USE_AWS_SECRETS`, `AWS_SECRETS_NAMES` (comma-separated), `AWS_SECRETS_REGION` (falls back to `AWS_DEFAULT_REGION`).
- `docker-compose.yml`: replaced `${VAR:?...}` guards on Postgres/RabbitMQ/Redis with `${VAR:-default}` defaults so `docker compose up` on the host doesn't fail when those creds live in AWS rather than `.env`.
- `.env.example`: documents the new variables.

Checklist
Before submitting a pull request, please ensure that you mark these tasks.

- Run `fastapi run --reload app/main.py` or `docker compose up` in the repository root and test.
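The merge-and-remap behaviour described in the summary can be sketched as follows. Fetching from Secrets Manager is left out, and the helper name is illustrative rather than the PR's actual `load_secret_from_aws`:

```python
def merge_and_remap(payloads: list[dict], field_map: dict[str, str]) -> dict:
    """Merge secret payloads left-to-right, then remap keys to Settings fields.

    Later payloads win on key collisions; keys absent from field_map are dropped.
    """
    merged: dict = {}
    for payload in payloads:
        merged.update(payload)
    return {field_map[key]: value for key, value in merged.items() if key in field_map}


# Two secrets from AWS_SECRETS_NAMES: the second overrides the shared "password".
payloads = [
    {"username": "app", "password": "old"},
    {"password": "rotated", "host": "db.internal", "extra": "ignored"},
]
field_map = {
    "username": "POSTGRES_USER",
    "password": "POSTGRES_PASSWORD",
    "host": "POSTGRES_SERVER",
}
print(merge_and_remap(payloads, field_map))
# → {'POSTGRES_USER': 'app', 'POSTGRES_PASSWORD': 'rotated', 'POSTGRES_SERVER': 'db.internal'}
```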