feat: add notifications system with unread count, mark-as-read, and mark-all-read #48
Conversation
Walkthrough
Adds a notifications feature.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Server as src/worker.py
    participant Database as SQLite
    rect rgba(0, 100, 200, 0.5)
        Note over Client,Server: List Notifications Flow
        Client->>Server: GET /api/notifications?unread_only=true&limit=10
        Server->>Server: verify_token()
        Server->>Database: SELECT id,type,title,message,is_read,related_id,created_at FROM notifications WHERE user_id=? AND is_read=0 ORDER BY created_at DESC LIMIT 10
        Database-->>Server: rows
        Server->>Database: SELECT COUNT(*) FROM notifications WHERE user_id=? AND is_read=0
        Database-->>Server: count
        Server-->>Client: {notifications: [...], unread_count: N}
    end
    rect rgba(100, 150, 50, 0.5)
        Note over Client,Server: Mark-as-Read Flow
        Client->>Server: POST /api/notifications/:id/read
        Server->>Server: verify_token()
        Server->>Database: UPDATE notifications SET is_read=1 WHERE id=? AND user_id=?
        Database-->>Server: rows_affected
        alt rows_affected > 0
            Server-->>Client: 200 {success: true}
        else
            Server-->>Client: 404 Not Found
        end
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@schema.sql`:
- Around line 97-110: Add an index to support the listing query ORDER BY: create
a new index named idx_notif_user_created_at on the notifications table covering
(user_id, created_at DESC) so the query WHERE user_id = ? ORDER BY created_at
DESC LIMIT ? can use the index instead of filesorting; reference the existing
notifications table and indexes (idx_notif_user, idx_notif_unread) and add the
new index with CREATE INDEX IF NOT EXISTS idx_notif_user_created_at ON
notifications(user_id, created_at DESC).
In `@src/worker.py`:
- Around line 1487-1500: Collapse the two nearly-identical queries into one by
building the SELECT/ORDER BY once and conditionally appending the "AND is_read =
0" predicate when unread_only is true, using env.DB.prepare to execute the final
SQL with bound parameters; wrap this DB call in a try/except that checks for
_is_no_such_table_error(error) and, if true, calls init_db(env) and retries the
query (mirror the pattern used in api_list_activities), and apply the same
no-such-table + init_db retry pattern to api_unread_count,
api_mark_notification_read, and api_mark_all_read so the service self-heals on
fresh deploys.
- Around line 1452-1466: The error handler in _create_notification must await
the async capture_exception and pass the env so Sentry can read SENTRY_DSN on
cold isolates; change the except block to await capture_exception(..., _env=env)
and keep the broad Exception catch, and update the related_id parameter type
from str = None to Optional[str] (adding Optional to the function's typing
imports) to satisfy RUF013.
- Around line 1452-1567: The notification feature is incomplete: hook the
internal helper _create_notification into at least one real flow (e.g., call
_create_notification(env, host_id, "peer_request", title, message, related_id)
from api_join or the appropriate handler) so notifications are created
end-to-end and add a TODO comment if you don't wire all types now; also add
tests exercising api_list_notifications (including unread_only filtering and
limit clamping), api_unread_count, api_mark_notification_read (cover 200 and 404
when marking others' notifications), and api_mark_all_read (idempotency and
cross-user isolation) to ensure authz and behavior are validated. Ensure tests
authenticate via verify_token or test helper, create notifications via
_create_notification or DB inserts, and assert responses and DB state for user A
vs user B.
- Around line 1479-1485: The current parsing of req.url for unread_only and
limit is fragile; replace the substring checks with proper query parsing using
urllib.parse.urlparse and urllib.parse.parse_qs: parse the query from req.url,
determine unread_only by checking if the 'unread_only' key exists and treating
values like 'true', '1', or case-insensitive 'true' as true, and compute limit
by reading the first value of the 'limit' param, converting to int with safe
fallback (default 20) and clamping between 1 and 50; update the variables
unread_only and limit (the same ones currently set) and remove the raw substring
logic to match the rest of this file's usage of parse_qs.
- Around line 1502-1534: Change all bracket-style D1 row access to attribute
access to match the rest of the file and avoid JsProxy subscript errors: in the
notifications builder (the loop over rows.results where the dict is assembled
using r["id"], r["type"], r["title"], r["message"], r["is_read"],
r["related_id"], r["created_at"] and the unread_count query result
unread_count["cnt"]) switch to r.id, r.type, r.title, r.message,
bool(r.is_read), r.related_id, r.created_at and unread_count.cnt; likewise in
api_unread_count replace row["cnt"] with row.cnt. While here, you can simplify
the notifications assembly into a list comprehension that maps rows.results to
the dict using attribute access to address the PERF401 hint.
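The collapse-into-one-query suggestion above can be sketched in plain Python. The helper name below is hypothetical; the real handler would pass the resulting SQL to `env.DB.prepare(...).bind(...)`:

```python
# Illustrative sketch of the single-query approach: build the SELECT once and
# append the unread predicate only when requested. The function name is made
# up for this example; only the SQL shape mirrors the review suggestion.
def build_list_sql(unread_only: bool) -> str:
    sql = (
        "SELECT id, type, title, message, is_read, related_id, created_at "
        "FROM notifications WHERE user_id = ?"
    )
    if unread_only:
        sql += " AND is_read = 0"
    return sql + " ORDER BY created_at DESC LIMIT ?"

print(build_list_sql(True))
```

Both variants share one ORDER BY/LIMIT tail, so only the predicate differs between the unread-only and full listings.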
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: alphaonelabs/coderabbit/.coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 460292f6-59e1-4edc-a5d1-799a833027f7
📒 Files selected for processing (2)
schema.sql, src/worker.py
Addressed all CodeRabbit feedback:
♻️ Duplicate comments (4)
src/worker.py (4)
1479-1485: ⚠️ Potential issue | 🟡 Minor — Parse query parameters with `parse_qs`.
The raw substring/split parsing can misread keys like `xlimit`, ignores normal variants like `unread_only=TRUE`, and is easy to regress. `parse_qs` is already imported here.
Suggested fix:

```diff
-    url = req.url
-    unread_only = "unread_only=true" in url
+    params = parse_qs(urlparse(req.url).query)
+    unread_only = ((params.get("unread_only") or ["false"])[0].lower()
+                   in ("true", "1", "yes"))
     try:
-        raw_limit = int(url.split("limit=")[1].split("&")[0]) if "limit=" in url else 20
+        raw_limit = int((params.get("limit") or ["20"])[0])
         limit = max(1, min(raw_limit, 50))
-    except (ValueError, IndexError):
+    except ValueError:
         limit = 20
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/worker.py` around lines 1479 - 1485, Replace the fragile substring parsing of req.url with parse_qs: call parse_qs(urlparse(req.url).query) to get params, then set unread_only = any(v.lower() == "true" for v in params.get("unread_only", [])) (or bool presence if you prefer); read raw_limit_str = params.get("limit", ["20"])[0], parse int with try/except to set raw_limit, then clamp into limit = max(1, min(raw_limit, 50)). Update uses of url, unread_only, raw_limit, and limit accordingly so keys like xlimit or TRUE/true/True are handled correctly and malformed ints fall back to 20.
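The `parse_qs` approach described in this comment can be exercised standalone. The function name and example URLs below are hypothetical; only the parsing and clamping logic mirrors the suggestion:

```python
from urllib.parse import urlparse, parse_qs

# Sketch of the suggested parsing: proper query decoding, case-insensitive
# unread_only, and a limit clamped to 1..50 with a safe fallback of 20.
def parse_notification_params(url: str) -> tuple[bool, int]:
    params = parse_qs(urlparse(url).query)
    unread_only = (params.get("unread_only") or [""])[0].lower() in ("true", "1")
    try:
        raw_limit = int((params.get("limit") or ["20"])[0])
    except ValueError:
        raw_limit = 20
    return unread_only, max(1, min(raw_limit, 50))

print(parse_notification_params("https://example.test/api/notifications?unread_only=TRUE&limit=200"))
```

Unlike substring checks, this ignores unrelated keys such as `xlimit=5` and handles `unread_only=TRUE` correctly.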
1502-1520: ⚠️ Potential issue | 🟠 Major — Use D1 row attribute access consistently.
The rest of this file reads D1 rows as attributes (`row.id`, `row.cnt`). The bracket access here can fail for Worker/D1 proxy rows, and the list-comprehension form also addresses the Ruff `PERF401` hint.
Suggested fix:

```diff
-    notifications = []
-    for r in rows.results or []:
-        notifications.append({
-            "id": r["id"],
-            "type": r["type"],
-            "title": r["title"],
-            "message": r["message"],
-            "is_read": bool(r["is_read"]),
-            "related_id": r["related_id"],
-            "created_at": r["created_at"],
-        })
+    notifications = [
+        {
+            "id": r.id,
+            "type": r.type,
+            "title": r.title,
+            "message": r.message,
+            "is_read": bool(r.is_read),
+            "related_id": r.related_id,
+            "created_at": r.created_at,
+        }
+        for r in (rows.results or [])
+    ]
@@
-        "unread_count": unread_count["cnt"] if unread_count else 0,
+        "unread_count": unread_count.cnt if unread_count else 0,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/worker.py` around lines 1502 - 1520, The notification assembly and unread_count access use dict-style bracket access which can fail for Worker/D1 proxy rows; update the code to use attribute access consistently (e.g., use r.id, r.type, r.title, r.message, r.is_read, r.related_id, r.created_at) and replace the loop with a list comprehension to satisfy PERF401 (build notifications from rows.results using attribute access). Likewise, read the unread_count via attribute access (e.g., unread_count.cnt) when handling the result of env.DB.prepare(...).bind(user["id"]).first() before returning the ok({ "notifications": notifications, "unread_count": ... }).
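The attribute-access list comprehension can be sketched with `SimpleNamespace` stand-ins for D1 proxy rows (the sample row values are made up, not taken from the PR):

```python
from types import SimpleNamespace

# Stand-in rows mimicking D1 results, which expose columns as attributes
# rather than dict keys.
rows = [
    SimpleNamespace(id="n1", type="peer_request", title="t", message="m",
                    is_read=0, related_id=None, created_at="2024-01-01"),
]

# List-comprehension form with attribute access, as the comment suggests
# (also satisfies the Ruff PERF401 hint).
notifications = [
    {
        "id": r.id,
        "type": r.type,
        "title": r.title,
        "message": r.message,
        "is_read": bool(r.is_read),
        "related_id": r.related_id,
        "created_at": r.created_at,
    }
    for r in rows
]
print(notifications[0]["id"])
```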
539-541: ⚠️ Potential issue | 🟡 Minor — Keep the runtime DDL in sync with schema.sql.
`schema.sql` now creates the recent-listing index, but `/api/init` still won't create it because `_DDL` stops at `idx_notif_unread`. Fresh databases initialized through the Worker will miss the index used by `ORDER BY created_at DESC`.
Suggested fix:

```diff
     "CREATE INDEX IF NOT EXISTS idx_notif_user ON notifications(user_id)",
     "CREATE INDEX IF NOT EXISTS idx_notif_unread ON notifications(user_id, is_read)",
+    "CREATE INDEX IF NOT EXISTS idx_notif_created ON notifications(user_id, created_at DESC)",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/worker.py` around lines 539 - 541, The runtime DDL array (_DDL) in src/worker.py stops after the "CREATE INDEX IF NOT EXISTS idx_notif_unread ON notifications(user_id, is_read)" entry and therefore does not create the recent-listing index that schema.sql now defines; update the _DDL array to include the same CREATE INDEX statement for the recent-listing index (the index used by ORDER BY created_at DESC) so fresh DBs initialized via the Worker match schema.sql and include that index. Ensure you add the exact CREATE INDEX IF NOT EXISTS statement (matching the one in schema.sql) to the _DDL list alongside the existing "idx_notif_user" and "idx_notif_unread" strings.
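As a local sanity check for the index suggestion, SQLite's `EXPLAIN QUERY PLAN` shows a `(user_id, created_at DESC)` index satisfying the listing query without a temp B-tree sort. The table definition below is a minimal stand-in mirroring the PR description, not the actual schema.sql:

```python
import sqlite3

# Minimal stand-in for the notifications table described in the PR.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE notifications (
    id TEXT PRIMARY KEY,
    user_id TEXT NOT NULL,
    type TEXT,
    title TEXT,
    message TEXT,
    is_read INTEGER DEFAULT 0,
    related_id TEXT,
    created_at TEXT DEFAULT (datetime('now'))
);
CREATE INDEX IF NOT EXISTS idx_notif_created
    ON notifications(user_id, created_at DESC);
""")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id, type, title FROM notifications "
    "WHERE user_id = ? ORDER BY created_at DESC LIMIT ?",
    ("u1", 10),
).fetchall()

# Each plan row's last column is the detail string; with the index in place
# it should show an index search and no "USE TEMP B-TREE FOR ORDER BY" step.
detail = " ".join(row[3] for row in plan)
print(detail)
```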
1464-1465: ⚠️ Potential issue | 🔴 Critical — Use `_env=env` so notification failures stay non-blocking.
Line 1465 passes `env=env`, but `capture_exception` accepts `_env`, not `env`. If the insert fails, this raises `TypeError` inside the exception handler and breaks the helper's "swallow errors" contract.
Suggested fix:

```diff
     except Exception as exc:
-        await capture_exception(exc, env=env, where="_create_notification")
+        await capture_exception(exc, _env=env, where="_create_notification")
```

As per coding guidelines, "Ensure proper error handling and logging."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/worker.py` around lines 1464 - 1465, In the except block inside _create_notification, change the capture_exception call to pass the environment using the correct keyword parameter (_env=env) so that capture_exception receives the env value and does not raise a TypeError; this preserves the helper’s “swallow errors” behavior and keeps notification failures non-blocking.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/worker.py`:
- Around line 1479-1485: Replace the fragile substring parsing of req.url with
parse_qs: call parse_qs(urlparse(req.url).query) to get params, then set
unread_only = any(v.lower() == "true" for v in params.get("unread_only", []))
(or bool presence if you prefer); read raw_limit_str = params.get("limit",
["20"])[0], parse int with try/except to set raw_limit, then clamp into limit =
max(1, min(raw_limit, 50)). Update uses of url, unread_only, raw_limit, and
limit accordingly so keys like xlimit or TRUE/true/True are handled correctly
and malformed ints fall back to 20.
- Around line 1502-1520: The notification assembly and unread_count access use
dict-style bracket access which can fail for Worker/D1 proxy rows; update the
code to use attribute access consistently (e.g., use r.id, r.type, r.title,
r.message, r.is_read, r.related_id, r.created_at) and replace the loop with a
list comprehension to satisfy PERF401 (build notifications from rows.results
using attribute access). Likewise, read the unread_count via attribute access
(e.g., unread_count.cnt) when handling the result of
env.DB.prepare(...).bind(user["id"]).first() before returning the ok({
"notifications": notifications, "unread_count": ... }).
- Around line 539-541: The runtime DDL array (_DDL) in src/worker.py stops after
the "CREATE INDEX IF NOT EXISTS idx_notif_unread ON notifications(user_id,
is_read)" entry and therefore does not create the recent-listing index that
schema.sql now defines; update the _DDL array to include the same CREATE INDEX
statement for the recent-listing index (the index used by ORDER BY created_at
DESC) so fresh DBs initialized via the Worker match schema.sql and include that
index. Ensure you add the exact CREATE INDEX IF NOT EXISTS statement (matching
the one in schema.sql) to the _DDL list alongside the existing "idx_notif_user"
and "idx_notif_unread" strings.
- Around line 1464-1465: In the except block inside _create_notification, change
the capture_exception call to pass the environment using the correct keyword
parameter (_env=env) so that capture_exception receives the env value and does
not raise a TypeError; this preserves the helper’s “swallow errors” behavior and
keeps notification failures non-blocking.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: alphaonelabs/coderabbit/.coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 5c178efd-a266-4f54-879c-e2d7c4fa3c43
📒 Files selected for processing (2)
schema.sql, src/worker.py
Fixed: switched from bracket access (r["id"]) to attribute access (r.id) throughout the list builder, and used list comprehension instead of append loop (PERF401). Also fixed unread_count["cnt"] → unread_count.cnt in api_unread_count. |
@coderabbitai please confirm that all notifications are encrypted |
Here's a comparison of what the rest of the schema does vs. what the notifications table does: other sensitive text fields are encrypted at rest, while notification title and message are inserted in plaintext.
Recommended fix — apply the same encrypt/decrypt pattern:
# In _create_notification() — encrypt before insert
await env.DB.prepare(
"INSERT INTO notifications (id, user_id, type, title, message, related_id)"
" VALUES (?, ?, ?, ?, ?, ?)"
).bind(
new_id(),
user_id,
type_,
encrypt(title, env.ENCRYPTION_KEY), # ← encrypt
encrypt(message, env.ENCRYPTION_KEY), # ← encrypt
related_id,
).run()
# In api_list_notifications() — decrypt on read
notifications = [
{
"id": r.id,
"type": r.type,
"title": decrypt(r.title, env.ENCRYPTION_KEY), # ← decrypt
"message": decrypt(r.message, env.ENCRYPTION_KEY), # ← decrypt
"is_read": bool(r.is_read),
"related_id": r.related_id,
"created_at": r.created_at,
}
for r in rows.results
]
In transit: Cloudflare Workers enforces HTTPS by default, so data is encrypted in transit — no application-level change needed there.
Would you like to open a follow-up issue to track the at-rest encryption gap for the notifications table?
Actionable comments posted: 1
♻️ Duplicate comments (2)
src/worker.py (2)
1482-1488: ⚠️ Potential issue | 🟡 Minor — Parse notification query parameters with `parse_qs` instead of URL substring checks.
The current parsing lets unrelated params like `xlimit=5` affect `limit`, and `unread_only=true` can match outside the actual `unread_only` key. The rest of this file already uses `urlparse(...).query` + `parse_qs`, which is safer and clearer.
Proposed fix:

```diff
-    url = req.url
-    unread_only = "unread_only=true" in url
+    params = parse_qs(urlparse(req.url).query)
+    unread_only = (params.get("unread_only") or [""])[0].lower() in {"true", "1"}
     try:
-        raw_limit = int(url.split("limit=")[1].split("&")[0]) if "limit=" in url else 20
-        limit = max(1, min(raw_limit, 50))
-    except (ValueError, IndexError):
-        limit = 20
+        raw_limit = int((params.get("limit") or ["20"])[0])
+    except (TypeError, ValueError):
+        raw_limit = 20
+    limit = max(1, min(raw_limit, 50))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/worker.py` around lines 1482 - 1488, Replace the fragile substring parsing around url, unread_only, and limit with proper query parsing: use urllib.parse.urlparse(req.url).query and urllib.parse.parse_qs to get params, then set unread_only = params.get("unread_only", [""])[0] == "true", compute raw_limit = params.get("limit", ["20"])[0], convert raw_limit to int with a try/except to fall back to 20, and clamp limit = max(1, min(raw_limit, 50)); update the code that currently assigns url, unread_only and limit to use these parsed values (referencing the variables url, unread_only, limit in this block).
1459-1468: ⚠️ Potential issue | 🔴 Critical — Use the async AES helpers; the current sync shims always raise.
`encrypt()`/`decrypt()` are deprecated shims that raise `RuntimeError`, so notification creation fails and listing any notification row returns 500. Also, `capture_exception(..., env=env)` uses an unsupported keyword; use `_env=env` so failures are actually swallowed and logged as intended.
Proposed fix:

```diff
     await env.DB.prepare(
         "INSERT INTO notifications (id, user_id, type, title, message, related_id)"
         " VALUES (?, ?, ?, ?, ?, ?)"
     ).bind(new_id(), user_id, type_,
-           encrypt(title, enc), encrypt(message, enc),
+           await encrypt_aes(title, enc), await encrypt_aes(message, enc),
            related_id).run()
 except Exception as exc:
-    await capture_exception(exc, env=env, where="_create_notification")
+    await capture_exception(exc, _env=env, where="_create_notification")
```

```diff
 notifications = [
     {
         "id": r.id,
         "type": r.type,
-        "title": decrypt(r.title or "", env.ENCRYPTION_KEY),
-        "message": decrypt(r.message or "", env.ENCRYPTION_KEY),
+        "title": await decrypt_aes(r.title or "", env.ENCRYPTION_KEY),
+        "message": await decrypt_aes(r.message or "", env.ENCRYPTION_KEY),
         "is_read": bool(r.is_read),
         "related_id": r.related_id,
         "created_at": r.created_at,
     }
```

Also applies to: 1505-1516
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/worker.py` around lines 1459 - 1468, Replace the sync, deprecated encrypt(...) calls with the async AES helper (await encrypt_async(...)) so the title and message are encrypted asynchronously before binding (e.g., await encrypt_async(title, enc) and await encrypt_async(message, enc)); also change capture_exception(exc, env=env, where="_create_notification") to use the supported keyword _env (await capture_exception(exc, _env=env, where="_create_notification")) so exceptions are swallowed/logged as intended; apply the same two changes in the other block referenced (the similar code at the second location).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/worker.py`:
- Around line 539-541: The Worker DDL is missing the notifications index defined
in schema.sql: add the index creation statement for idx_notif_created to the
same DDL list that contains "idx_notif_user" and "idx_notif_unread" so the
Worker creates "CREATE INDEX IF NOT EXISTS idx_notif_created ON
notifications(user_id, created_at DESC)"; update the array or list in
src/worker.py where those two CREATE INDEX strings are declared (the block that
builds the notifications table/indexes) so fresh environments include the same
index used by queries that filter by user_id and ORDER BY created_at DESC LIMIT.
---
Duplicate comments:
In `@src/worker.py`:
- Around line 1482-1488: Replace the fragile substring parsing around url,
unread_only, and limit with proper query parsing: use
urllib.parse.urlparse(req.url).query and urllib.parse.parse_qs to get params,
then set unread_only = params.get("unread_only", [""])[0] == "true", compute
raw_limit = params.get("limit", ["20"])[0], convert raw_limit to int with a
try/except to fall back to 20, and clamp limit = max(1, min(raw_limit, 50));
update the code that currently assigns url, unread_only and limit to use these
parsed values (referencing the variables url, unread_only, limit in this block).
- Around line 1459-1468: Replace the sync, deprecated encrypt(...) calls with
the async AES helper (await encrypt_async(...)) so the title and message are
encrypted asynchronously before binding (e.g., await encrypt_async(title, enc)
and await encrypt_async(message, enc)); also change capture_exception(exc,
env=env, where="_create_notification") to use the supported keyword _env (await
capture_exception(exc, _env=env, where="_create_notification")) so exceptions
are swallowed/logged as intended; apply the same two changes in the other block
referenced (the similar code at the second location).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: alphaonelabs/coderabbit/.coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 22b40573-5584-4ac7-beaa-6668dae4656e
📒 Files selected for processing (1)
src/worker.py
Fixed: title and message fields are now encrypted with encrypt() before insert in _create_notification() and decrypted with decrypt() on read in api_list_notifications() — consistent with the rest of the codebase. All 238 tests passing ✅ |
Fixed: added idx_notif_created to the Worker DDL to match schema.sql — fresh environments initialized via /api/init will now create all three notification indexes correctly. |
Adds a notifications system to keep users informed about relevant platform events.
Changes
schema.sql + DDL:
- `notifications` table: user_id, type, title, message, is_read flag, optional related_id, timestamp

src/worker.py — 4 new API endpoints + 1 internal helper:
- `GET /api/notifications` — list notifications with optional `unread_only=true` and `limit` params
- `GET /api/notifications/unread-count` — lightweight badge count endpoint
- `POST /api/notifications/:id/read` — mark a single notification as read
- `POST /api/notifications/read-all` — mark all notifications as read
- `_create_notification()` — internal helper for other handlers to call (grading, peer requests, new assignments)

Design
- `_create_notification()` silently swallows errors so notification failure never breaks the parent operation

Testing
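The "silently swallows errors" design above can be sketched as a tiny contract test. The names are illustrative, not the actual worker helpers:

```python
# Sketch of the never-break-the-parent contract: the helper catches every
# exception from the insert and only reports it, never re-raises.
def create_notification_safely(insert, report):
    try:
        insert()
    except Exception as exc:
        report(exc)  # log/report the failure, but swallow it

errors = []
create_notification_safely(lambda: 1 / 0, errors.append)   # failing insert
create_notification_safely(lambda: None, errors.append)    # succeeding insert
print(len(errors))
```

The parent operation (grading, joining a session) proceeds either way; only the failing call is recorded.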
Notifications System
This PR adds a persistent notifications system with unread-count tracking and read-status management.
Purpose
Enable per-user notifications (assignment/session/peer events) with endpoints for listing, counting unread items, and marking notifications read—while ensuring notification failures do not break parent flows.
Database Changes
- `notifications` table: id, user_id (FK → users.id with ON DELETE CASCADE), type, title, message, is_read (INTEGER DEFAULT 0), related_id, created_at (DEFAULT datetime('now')).
- Indexes: `idx_notif_user` (user_id), `idx_notif_unread` (user_id, is_read) for badge queries, and `idx_notif_created` (user_id, created_at DESC) to optimize recent-ordering queries.

API Endpoints (all require JWT auth and enforce per-user access)
Internal Implementation
Security & Review Notes
Impact & Testing