Centralize pagination test, add constraints #313
Conversation
for more information, see https://pre-commit.ci
Hey - I've left some high level feedback:
- The `test_pagination_invalid_type` docstring and parameter name no longer match its behavior (it now also checks range violations, not just types); consider renaming the test and updating the docstring to reflect that it validates both type and bounds.
- Since `Pagination.limit` and `offset` now enforce specific bounds (`gt=0`, `le=1000`, `ge=0`), you might want to add a small test explicitly covering the boundary values (e.g., `limit=1`, `limit=1000`, `offset=0`) to ensure the edges are accepted as intended.
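To make the second point concrete, here is a hedged sketch of such a boundary test. The `Pagination` model below is a stand-in built from the bounds quoted in the review (`gt=0`, `le=1000`, `ge=0`); the project's real model lives in `src/routers/dependencies.py`, and its defaults may differ:

```python
from pydantic import BaseModel, Field, ValidationError


class Pagination(BaseModel):
    # Stand-in model; bounds taken from the review comment, defaults assumed.
    limit: int = Field(default=100, gt=0, le=1000)
    offset: int = Field(default=0, ge=0)


def accepts(**kwargs: int) -> bool:
    """Return True if Pagination validates the given keyword arguments."""
    try:
        Pagination(**kwargs)
    except ValidationError:
        return False
    return True


# The three edges the reviewer asks to cover are accepted...
assert accepts(limit=1)     # lower edge of gt=0
assert accepts(limit=1000)  # upper edge of le=1000
assert accepts(offset=0)    # lower edge of ge=0
# ...while the first value past each bound is rejected.
assert not accepts(limit=0)
assert not accepts(limit=1001)
assert not accepts(offset=-1)
```

Testing one value on each side of every bound is what catches off-by-one mistakes such as writing `ge=1` instead of `gt=0`.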
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The `test_pagination_invalid_type` docstring and parameter name no longer match its behavior (it now also checks range violations, not just types); consider renaming the test and updating the docstring to reflect that it validates both type and bounds.
- Since `Pagination.limit` and `offset` now enforce specific bounds (`gt=0`, `le=1000`, `ge=0`), you might want to add a small test explicitly covering the boundary values (e.g., `limit=1`, `limit=1000`, `offset=0`) to ensure the edges are accepted as intended.
⚠️ Warning: Rate limit exceeded
Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 53 minutes and 37 seconds. CodeRabbit enforces hourly rate limits for each developer per organization; further reviews are re-allowed after a brief timeout.
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
🧹 Nitpick comments (1)
tests/dependencies/pagination_test.py (1)
19-31: Add an explicit boundary case for `limit=0`.

You validate negative and oversized limits, but not the exact lower boundary introduced by `gt=0`. Adding `limit=0` improves regression protection.

✅ Suggested test addition

```diff
 @pytest.mark.parametrize(
     ("kwargs", "expected_field"),
     [
         ({"limit": "abc", "offset": 0}, "limit"),
+        ({"limit": 0, "offset": 0}, "limit"),
         ({"limit": -5, "offset": 0}, "limit"),
         ({"limit": 2000, "offset": 0}, "limit"),
         ({"limit": 5, "offset": "xyz"}, "offset"),
         ({"limit": 5, "offset": -5}, "offset"),
     ],
     ids=[
         "bad_limit_type",
+        "zero_limit",
         "negative_limit",
         "limit_too_large",
         "bad_offset_type",
         "negative_offset",
     ],
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/dependencies/pagination_test.py` around lines 19 - 31, Add an explicit boundary case for limit=0 to the invalid-params parametrization: include the tuple ({"limit": 0, "offset": 0}, "limit") in the list of test cases and add the corresponding id "zero_limit" to the ids list so the test asserts that a zero limit (violating gt=0) is treated as invalid; update the parameter list entries and ids to include these new items in tests/dependencies/pagination_test.py.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tests/dependencies/pagination_test.py`:
- Around line 19-31: Add an explicit boundary case for limit=0 to the
invalid-params parametrization: include the tuple ({"limit": 0, "offset": 0},
"limit") in the list of test cases and add the corresponding id "zero_limit" to
the ids list so the test asserts that a zero limit (violating gt=0) is treated
as invalid; update the parameter list entries and ids to include these new items
in tests/dependencies/pagination_test.py.
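Applied to a test, the parametrization above could look like the sketch below. The `Pagination` model here is a stand-in built from the bounds the review quotes; the real repo test may exercise an HTTP endpoint through a test client instead of constructing the model directly, and the `loc` tuple shape assumes pydantic v2:

```python
import pytest
from pydantic import BaseModel, Field, ValidationError


class Pagination(BaseModel):
    # Stand-in for the model in src/routers/dependencies.py.
    limit: int = Field(default=100, gt=0, le=1000)
    offset: int = Field(default=0, ge=0)


@pytest.mark.parametrize(
    ("kwargs", "expected_field"),
    [
        ({"limit": "abc", "offset": 0}, "limit"),
        ({"limit": 0, "offset": 0}, "limit"),
        ({"limit": -5, "offset": 0}, "limit"),
        ({"limit": 2000, "offset": 0}, "limit"),
        ({"limit": 5, "offset": "xyz"}, "offset"),
        ({"limit": 5, "offset": -5}, "offset"),
    ],
    ids=[
        "bad_limit_type",
        "zero_limit",
        "negative_limit",
        "limit_too_large",
        "bad_offset_type",
        "negative_offset",
    ],
)
def test_pagination_rejects_invalid_values(kwargs: dict, expected_field: str) -> None:
    with pytest.raises(ValidationError) as exc_info:
        Pagination(**kwargs)
    # Every case should fail validation on exactly the offending field.
    assert exc_info.value.errors()[0]["loc"] == (expected_field,)
```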
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 59c00b5d-556c-4bcd-9351-278ae5478b65
📒 Files selected for processing (3)
- src/routers/dependencies.py
- tests/dependencies/pagination_test.py
- tests/routers/openml/task_list_test.py
💤 Files with no reviewable changes (1)
- tests/routers/openml/task_list_test.py
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files

```
@@           Coverage Diff           @@
##             main     #313   +/-   ##
=======================================
  Coverage   93.58%   93.58%
=======================================
  Files          72       73      +1
  Lines        3117     3119      +2
  Branches      220      221      +1
=======================================
+ Hits         2917     2919      +2
  Misses        143      143
  Partials       57       57
```

☔ View full report in Codecov by Sentry.
Closes #302
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/routers/openml/datasets_list_datasets_test.py`:
- Around lines 110-112: The test currently forces PHP results to LIMIT_DEFAULT by
doing php_json = php_json[:LIMIT_DEFAULT], which ignores an explicit requested
limit; change this to slice using the effective requested limit: read the test's
requested limit (e.g. from the variable used to build the request such as params
or limit), parse it to an int if present, and set effective_limit =
requested_limit if provided else LIMIT_DEFAULT, then do php_json =
php_json[:effective_limit] so php_json and py_json are compared using the same
limit; keep references to php_json, py_json and LIMIT_DEFAULT to locate the
change.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: af95c178-862d-4a5f-85f9-9abf21660286
📒 Files selected for processing (5)
- src/routers/dependencies.py
- tests/dependencies/pagination_test.py
- tests/routers/openml/datasets_list_datasets_test.py
- tests/routers/openml/migration/tasks_migration_test.py
- tests/routers/openml/task_list_test.py
💤 Files with no reviewable changes (1)
- tests/routers/openml/task_list_test.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/dependencies/pagination_test.py
For now I think a limit of 1000 seems reasonable, but we'll have to evaluate this later (and also whether or not we should have per-endpoint limits).
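If per-endpoint limits do turn out to be needed, one common pattern is a small factory that builds the pagination model with a per-endpoint upper bound. A sketch, assuming the current design is a pydantic model used as a dependency; all names and numbers below are illustrative, not the repo's:

```python
from pydantic import BaseModel, Field, ValidationError


def make_pagination(max_limit: int) -> type[BaseModel]:
    """Build a pagination model whose upper bound on `limit` is chosen
    per endpoint instead of using a single global cap of 1000."""

    class Pagination(BaseModel):
        limit: int = Field(default=100, gt=0, le=max_limit)
        offset: int = Field(default=0, ge=0)

    return Pagination


# Hypothetical per-endpoint choices:
TaskPagination = make_pagination(max_limit=10_000)  # cheap rows, allow more
DatasetPagination = make_pagination(max_limit=500)  # expensive rows, allow fewer

assert TaskPagination(limit=5_000).limit == 5_000
try:
    DatasetPagination(limit=5_000)
except ValidationError:
    pass  # exceeds this endpoint's cap of 500
```

Each generated model still works as a FastAPI dependency the same way a single shared `Pagination` would, so the evaluation can be deferred without repainting the endpoints.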