Skip to content

feat: add notifications system with unread count, mark-as-read, and mark-all-read#48

Merged
A1L13N merged 9 commits intoalphaonelabs:mainfrom
ayesha1145:feat/notifications
Apr 23, 2026
Merged

feat: add notifications system with unread count, mark-as-read, and mark-all-read#48
A1L13N merged 9 commits intoalphaonelabs:mainfrom
ayesha1145:feat/notifications

Conversation

@ayesha1145
Copy link
Copy Markdown
Contributor

@ayesha1145 ayesha1145 commented Apr 20, 2026

Adds a notifications system to keep users informed about relevant platform events.

Changes

schema.sql + DDL:

  • New notifications table: user_id, type, title, message, is_read flag, optional related_id, timestamp
  • ON DELETE CASCADE on user FK
  • Indexes on user_id and (user_id, is_read) for fast badge count queries

src/worker.py — 4 new API endpoints + 1 internal helper:

  • GET /api/notifications — list notifications with optional unread_only=true and limit params
  • GET /api/notifications/unread-count — lightweight badge count endpoint
  • POST /api/notifications/:id/read — mark a single notification as read
  • POST /api/notifications/read-all — mark all notifications as read
  • _create_notification() — internal helper for other handlers to call (grading, peer requests, new assignments)

Design

  • All endpoints require JWT authentication
  • Users can only read/mark their own notifications (enforced by user_id check)
  • _create_notification() silently swallows errors so notification failure never breaks the parent operation
  • Supports notification types: assignment_graded, peer_request, new_assignment, session_start (extensible)

Testing

  • All 238 existing tests passing ✅

Notifications System

This PR adds a persistent notifications system with unread-count tracking and read-status management.

Purpose

Enable per-user notifications (assignment/session/peer events) with endpoints for listing, counting unread items, and marking notifications read—while ensuring notification failures do not break parent flows.

Database Changes

  • New notifications table: id, user_id (FK → users.id with ON DELETE CASCADE), type, title, message, is_read (INTEGER DEFAULT 0), related_id, created_at (DEFAULT datetime('now')).
  • Indexes: idx_notif_user (user_id), idx_notif_unread (user_id, is_read) for badge queries, and idx_notif_created (user_id, created_at DESC) to optimize recent-ordering queries.
  • Schema bootstrapping updated in worker init to create the table/indexes.

API Endpoints (all require JWT auth and enforce per-user access)

  • GET /api/notifications — list authenticated user’s notifications; supports unread_only=true and a limit parameter (clamped to 1–50). Responses include id, type, title, message, is_read (bool), related_id, created_at, and unread_count.
  • GET /api/notifications/unread-count — returns unread badge count for the authenticated user.
  • POST /api/notifications/:id/read — mark a single notification as read (404 if not owned by user).
  • POST /api/notifications/read-all — mark all unread notifications for the authenticated user as read.

Internal Implementation

  • _create_notification(...) helper inserts notification rows, now encrypts title and message before storage, and suppresses insert errors (logs via await capture_exception) so callers aren’t interrupted.
  • New API handler functions: api_list_notifications, api_unread_count, api_mark_notification_read, api_mark_all_read; JWT verification enforced via verify_token.
  • Code-quality/performance fixes: replaced fragile query parsing with parse_qs, switched to attribute access on DB rows and list comprehensions, fixed unread_count field access, and added a TODO noting _create_notification is not yet wired to other flows.
  • Supported notification types: assignment_graded, peer_request, new_assignment, session_start.

Security & Review Notes

  • Titles and messages are now encrypted at rest (encrypt/decrypt pattern applied); data-in-transit is protected via HTTPS.
  • Reviewer raised the encryption gap earlier; this PR closes that gap for notification content.
  • _create_notification swallows errors intentionally to avoid breaking parent operations; failures are reported to Sentry.

Impact & Testing

  • Minimal immediate user-facing changes beyond new notification features; read/unread flows and badge counts are available via API.
  • Database FK cascade maintains referential integrity; added index avoids filesort for ordered queries.
  • All 238 existing tests pass.

@coderabbitai
Copy link
Copy Markdown

coderabbitai Bot commented Apr 20, 2026

Warning

Rate limit exceeded

@ayesha1145 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 35 minutes and 51 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 35 minutes and 51 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: alphaonelabs/coderabbit/.coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: d2529b07-570d-47ab-b34b-7e257892b571

📥 Commits

Reviewing files that changed from the base of the PR and between ac0d4a8 and 8745230.

📒 Files selected for processing (1)
  • src/worker.py

Walkthrough

Adds a notifications feature: new notifications DB table with indexes and corresponding API endpoints to list notifications, get unread count, mark one or all as read, plus helper to create notifications (with encrypted title/message and exceptions captured to Sentry).

Changes

Cohort / File(s) Summary
Database Schema
schema.sql
Added notifications table (id, user_id FK → users(id) ON DELETE CASCADE, type, title, message, is_read INTEGER DEFAULT 0, related_id, created_at DEFAULT datetime('now')). Added indexes: idx_notif_user (user_id), idx_notif_unread (user_id, is_read), and idx_notif_created (user_id, created_at DESC).
Notification API & Worker
src/worker.py
Bootstrapped notifications DDL in init. Added API routes and handlers: GET /api/notifications (supports unread_only and limit clamped 1–50), GET /api/notifications/unread-count, POST /api/notifications/:id/read, POST /api/notifications/read-all. Added helper _create_notification(...) which encrypts title/message, inserts rows, and captures insert exceptions to Sentry. All handlers require token auth and return JSON status/payloads (list responses include unread_count).

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Server as src/worker.py
    participant Database as SQLite

    rect rgba(0, 100, 200, 0.5)
    Note over Client,Server: List Notifications Flow
    Client->>Server: GET /api/notifications?unread_only=true&limit=10
    Server->>Server: verify_token()
    Server->>Database: SELECT id,type,title,message,is_read,related_id,created_at FROM notifications WHERE user_id=? AND is_read=0 ORDER BY created_at DESC LIMIT 10
    Database-->>Server: rows
    Server->>Database: SELECT COUNT(*) FROM notifications WHERE user_id=? AND is_read=0
    Database-->>Server: count
    Server-->>Client: {notifications: [...], unread_count: N}
    end

    rect rgba(100, 150, 50, 0.5)
    Note over Client,Server: Mark-as-Read Flow
    Client->>Server: POST /api/notifications/:id/read
    Server->>Server: verify_token()
    Server->>Database: UPDATE notifications SET is_read=1 WHERE id=? AND user_id=?
    Database-->>Server: rows_affected
    alt rows_affected > 0
      Server-->>Client: 200 {success: true}
    else
      Server-->>Client: 404 Not Found
    end
    end
Loading

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 71.43% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly and specifically describes the main change: adding a notifications system with three key features (unread count, mark-as-read, mark-all-read). It aligns well with the PR objectives and the bulk of the changeset in both schema and worker code.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

Copy link
Copy Markdown

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 6

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@schema.sql`:
- Around line 97-110: Add an index to support the listing query ORDER BY: create
a new index named idx_notif_user_created_at on the notifications table covering
(user_id, created_at DESC) so the query WHERE user_id = ? ORDER BY created_at
DESC LIMIT ? can use the index instead of filesorting; reference the existing
notifications table and indexes (idx_notif_user, idx_notif_unread) and add the
new index with CREATE INDEX IF NOT EXISTS idx_notif_user_created_at ON
notifications(user_id, created_at DESC).

In `@src/worker.py`:
- Around line 1487-1500: Collapse the two nearly-identical queries into one by
building the SELECT/ORDER BY once and conditionally appending the "AND is_read =
0" predicate when unread_only is true, using env.DB.prepare to execute the final
SQL with bound parameters; wrap this DB call in a try/except that checks for
_is_no_such_table_error(error) and, if true, calls init_db(env) and retries the
query (mirror the pattern used in api_list_activities), and apply the same
no-such-table + init_db retry pattern to api_unread_count,
api_mark_notification_read, and api_mark_all_read so the service self-heals on
fresh deploys.
- Around line 1452-1466: The error handler in _create_notification must await
the async capture_exception and pass the env so Sentry can read SENTRY_DSN on
cold isolates; change the except block to await capture_exception(..., _env=env)
and keep the broad Exception catch, and update the related_id parameter type
from str = None to Optional[str] (adding Optional to the function's typing
imports) to satisfy RUF013.
- Around line 1452-1567: The notification feature is incomplete: hook the
internal helper _create_notification into at least one real flow (e.g., call
_create_notification(env, host_id, "peer_request", title, message, related_id)
from api_join or the appropriate handler) so notifications are created
end-to-end and add a TODO comment if you don't wire all types now; also add
tests exercising api_list_notifications (including unread_only filtering and
limit clamping), api_unread_count, api_mark_notification_read (cover 200 and 404
when marking others' notifications), and api_mark_all_read (idempotency and
cross-user isolation) to ensure authz and behavior are validated. Ensure tests
authenticate via verify_token or test helper, create notifications via
_create_notification or DB inserts, and assert responses and DB state for user A
vs user B.
- Around line 1479-1485: The current parsing of req.url for unread_only and
limit is fragile; replace the substring checks with proper query parsing using
urllib.parse.urlparse and urllib.parse.parse_qs: parse the query from req.url,
determine unread_only by checking if the 'unread_only' key exists and treating
values like 'true', '1', or case-insensitive 'true' as true, and compute limit
by reading the first value of the 'limit' param, converting to int with safe
fallback (default 20) and clamping between 1 and 50; update the variables
unread_only and limit (the same ones currently set) and remove the raw substring
logic to match the rest of this file's usage of parse_qs.
- Around line 1502-1534: Change all bracket-style D1 row access to attribute
access to match the rest of the file and avoid JsProxy subscript errors: in the
notifications builder (the loop over rows.results where the dict is assembled
using r["id"], r["type"], r["title"], r["message"], r["is_read"],
r["related_id"], r["created_at"] and the unread_count query result
unread_count["cnt"]) switch to r.id, r.type, r.title, r.message,
bool(r.is_read), r.related_id, r.created_at and unread_count.cnt; likewise in
api_unread_count replace row["cnt"] with row.cnt. While here, you can simplify
the notifications assembly into a list comprehension that maps rows.results to
the dict using attribute access to address the PERF401 hint.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: alphaonelabs/coderabbit/.coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 460292f6-59e1-4edc-a5d1-799a833027f7

📥 Commits

Reviewing files that changed from the base of the PR and between 101794d and 3a7150b.

📒 Files selected for processing (2)
  • schema.sql
  • src/worker.py

Comment thread schema.sql Outdated
Comment thread src/worker.py
Comment thread src/worker.py
Comment thread src/worker.py
Comment thread src/worker.py
Comment thread src/worker.py Outdated
@ayesha1145
Copy link
Copy Markdown
Contributor Author

Addressed all CodeRabbit feedback:

  • Fixed critical: added await to capture_exception call in _create_notification — without it the coroutine was never awaited and Sentry reports were silently dropped
  • Fixed type hint: related_id: str = Nonestr | None = None (PEP 484 / RUF013)
  • Fixed major: replaced fragile URL substring matching with parse_qs for unread_only and limit query params — consistent with the rest of the codebase
  • Fixed major: switched from bracket access (r["id"]) to attribute access (r.id) throughout — matches codebase convention and avoids JsProxy TypeError at runtime. Used list comprehension (PERF401)
  • Fixed trivial: added idx_notif_created ON notifications(user_id, created_at DESC) to support the listing query's ORDER BY without a filesort

Re: _create_notification having no callers — intentional for this PR. The helper is designed to be wired into grading, peer requests, and assignment publishing in follow-up PRs. Added a TODO comment to document intent.

Copy link
Copy Markdown

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

♻️ Duplicate comments (4)
src/worker.py (4)

1479-1485: ⚠️ Potential issue | 🟡 Minor

Parse query parameters with parse_qs.

The raw substring/split parsing can misread keys like xlimit, ignores normal variants like unread_only=TRUE, and is easy to regress. parse_qs is already imported here.

Suggested fix
-    url = req.url
-    unread_only = "unread_only=true" in url
+    params = parse_qs(urlparse(req.url).query)
+    unread_only = ((params.get("unread_only") or ["false"])[0].lower()
+                   in ("true", "1", "yes"))
     try:
-        raw_limit = int(url.split("limit=")[1].split("&")[0]) if "limit=" in url else 20
+        raw_limit = int((params.get("limit") or ["20"])[0])
         limit = max(1, min(raw_limit, 50))
-    except (ValueError, IndexError):
+    except ValueError:
         limit = 20
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/worker.py` around lines 1479 - 1485, Replace the fragile substring
parsing of req.url with parse_qs: call parse_qs(urlparse(req.url).query) to get
params, then set unread_only = any(v.lower() == "true" for v in
params.get("unread_only", [])) (or bool presence if you prefer); read
raw_limit_str = params.get("limit", ["20"])[0], parse int with try/except to set
raw_limit, then clamp into limit = max(1, min(raw_limit, 50)). Update uses of
url, unread_only, raw_limit, and limit accordingly so keys like xlimit or
TRUE/true/True are handled correctly and malformed ints fall back to 20.

1502-1520: ⚠️ Potential issue | 🟠 Major

Use D1 row attribute access consistently.

The rest of this file reads D1 rows as attributes (row.id, row.cnt). The bracket access here can fail for Worker/D1 proxy rows, and the list-comprehension form also addresses the Ruff PERF401 hint.

Suggested fix
-    notifications = []
-    for r in rows.results or []:
-        notifications.append({
-            "id":         r["id"],
-            "type":       r["type"],
-            "title":      r["title"],
-            "message":    r["message"],
-            "is_read":    bool(r["is_read"]),
-            "related_id": r["related_id"],
-            "created_at": r["created_at"],
-        })
+    notifications = [
+        {
+            "id":         r.id,
+            "type":       r.type,
+            "title":      r.title,
+            "message":    r.message,
+            "is_read":    bool(r.is_read),
+            "related_id": r.related_id,
+            "created_at": r.created_at,
+        }
+        for r in (rows.results or [])
+    ]
@@
-        "unread_count":  unread_count["cnt"] if unread_count else 0,
+        "unread_count":  unread_count.cnt if unread_count else 0,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/worker.py` around lines 1502 - 1520, The notification assembly and
unread_count access use dict-style bracket access which can fail for Worker/D1
proxy rows; update the code to use attribute access consistently (e.g., use
r.id, r.type, r.title, r.message, r.is_read, r.related_id, r.created_at) and
replace the loop with a list comprehension to satisfy PERF401 (build
notifications from rows.results using attribute access). Likewise, read the
unread_count via attribute access (e.g., unread_count.cnt) when handling the
result of env.DB.prepare(...).bind(user["id"]).first() before returning the ok({
"notifications": notifications, "unread_count": ... }).

539-541: ⚠️ Potential issue | 🟡 Minor

Keep the runtime DDL in sync with schema.sql.

schema.sql now creates the recent-listing index, but /api/init still won’t create it because _DDL stops at idx_notif_unread. Fresh databases initialized through the Worker will miss the index used by ORDER BY created_at DESC.

Suggested fix
     "CREATE INDEX IF NOT EXISTS idx_notif_user   ON notifications(user_id)",
     "CREATE INDEX IF NOT EXISTS idx_notif_unread ON notifications(user_id, is_read)",
+    "CREATE INDEX IF NOT EXISTS idx_notif_created ON notifications(user_id, created_at DESC)",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/worker.py` around lines 539 - 541, The runtime DDL array (_DDL) in
src/worker.py stops after the "CREATE INDEX IF NOT EXISTS idx_notif_unread ON
notifications(user_id, is_read)" entry and therefore does not create the
recent-listing index that schema.sql now defines; update the _DDL array to
include the same CREATE INDEX statement for the recent-listing index (the index
used by ORDER BY created_at DESC) so fresh DBs initialized via the Worker match
schema.sql and include that index. Ensure you add the exact CREATE INDEX IF NOT
EXISTS statement (matching the one in schema.sql) to the _DDL list alongside the
existing "idx_notif_user" and "idx_notif_unread" strings.

1464-1465: ⚠️ Potential issue | 🔴 Critical

Use _env=env so notification failures stay non-blocking.

Line 1465 passes env=env, but capture_exception accepts _env, not env. If the insert fails, this raises TypeError inside the exception handler and breaks the helper’s “swallow errors” contract.

Suggested fix
     except Exception as exc:
-        await capture_exception(exc, env=env, where="_create_notification")
+        await capture_exception(exc, _env=env, where="_create_notification")

As per coding guidelines, “Ensure proper error handling and logging.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/worker.py` around lines 1464 - 1465, In the except block inside
_create_notification, change the capture_exception call to pass the environment
using the correct keyword parameter (_env=env) so that capture_exception
receives the env value and does not raise a TypeError; this preserves the
helper’s “swallow errors” behavior and keeps notification failures non-blocking.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@src/worker.py`:
- Around line 1479-1485: Replace the fragile substring parsing of req.url with
parse_qs: call parse_qs(urlparse(req.url).query) to get params, then set
unread_only = any(v.lower() == "true" for v in params.get("unread_only", []))
(or bool presence if you prefer); read raw_limit_str = params.get("limit",
["20"])[0], parse int with try/except to set raw_limit, then clamp into limit =
max(1, min(raw_limit, 50)). Update uses of url, unread_only, raw_limit, and
limit accordingly so keys like xlimit or TRUE/true/True are handled correctly
and malformed ints fall back to 20.
- Around line 1502-1520: The notification assembly and unread_count access use
dict-style bracket access which can fail for Worker/D1 proxy rows; update the
code to use attribute access consistently (e.g., use r.id, r.type, r.title,
r.message, r.is_read, r.related_id, r.created_at) and replace the loop with a
list comprehension to satisfy PERF401 (build notifications from rows.results
using attribute access). Likewise, read the unread_count via attribute access
(e.g., unread_count.cnt) when handling the result of
env.DB.prepare(...).bind(user["id"]).first() before returning the ok({
"notifications": notifications, "unread_count": ... }).
- Around line 539-541: The runtime DDL array (_DDL) in src/worker.py stops after
the "CREATE INDEX IF NOT EXISTS idx_notif_unread ON notifications(user_id,
is_read)" entry and therefore does not create the recent-listing index that
schema.sql now defines; update the _DDL array to include the same CREATE INDEX
statement for the recent-listing index (the index used by ORDER BY created_at
DESC) so fresh DBs initialized via the Worker match schema.sql and include that
index. Ensure you add the exact CREATE INDEX IF NOT EXISTS statement (matching
the one in schema.sql) to the _DDL list alongside the existing "idx_notif_user"
and "idx_notif_unread" strings.
- Around line 1464-1465: In the except block inside _create_notification, change
the capture_exception call to pass the environment using the correct keyword
parameter (_env=env) so that capture_exception receives the env value and does
not raise a TypeError; this preserves the helper’s “swallow errors” behavior and
keeps notification failures non-blocking.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: alphaonelabs/coderabbit/.coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 5c178efd-a266-4f54-879c-e2d7c4fa3c43

📥 Commits

Reviewing files that changed from the base of the PR and between 3a7150b and 01087cb.

📒 Files selected for processing (2)
  • schema.sql
  • src/worker.py

@ayesha1145
Copy link
Copy Markdown
Contributor Author

Fixed: switched from bracket access (r["id"]) to attribute access (r.id) throughout the list builder, and used list comprehension instead of append loop (PERF401). Also fixed unread_count["cnt"] → unread_count.cnt in api_unread_count.

@A1L13N
Copy link
Copy Markdown
Contributor

A1L13N commented Apr 21, 2026

@coderabbitai please confirm that all notifications are encrypted

@coderabbitai
Copy link
Copy Markdown

coderabbitai Bot commented Apr 21, 2026

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
echo "=== Checking for encryption-related code in schema.sql ==="
fd -e sql | xargs grep -i -n "encrypt\|cipher\|aes\|hmac\|tls\|ssl\|secret\|hash" 2>/dev/null

echo "=== Checking for encryption-related code in worker.py ==="
rg -i -n "encrypt\|cipher\|aes\|hmac\|tls\|ssl\|fernet\|cryptography\|hashlib" --type py

echo "=== Checking notification columns for any encrypted/hashed fields ==="
rg -n "notifications" --type py -A 5 -B 2

echo "=== Checking for HTTPS enforcement or TLS config ==="
rg -i -n "https\|tls\|ssl_context\|certifi\|wss" --type py

echo "=== Checking for any key management or secrets handling ==="
rg -i -n "SECRET_KEY\|ENCRYPTION_KEY\|fernet\|kms\|vault" --type py

echo "=== Checking schema for notifications table ==="
fd -e sql | xargs cat

Length of output: 13248


@A1L13N — After inspecting both schema.sql and src/worker.py, I can confirm that notification content is currently NOT encrypted at rest, which is inconsistent with the rest of the codebase's security posture.

Here's a comparison of what the rest of the schema does vs. what the notifications table does:

Table Sensitive fields Encrypted at rest?
users username, email, name, role ✅ Yes — encrypt() + HMAC-SHA256 blind indexes
activities description ✅ Yes — explicitly commented
sessions description, location ✅ Yes — explicitly commented
notifications title, message, type, related_id No — plaintext

The title and message fields in the notifications table are stored as plaintext, with no encrypt() wrapper applied either at insert time in _create_notification() or at read time in api_list_notifications(). Since these fields can carry PII (e.g., peer request sender names, assignment grades, session details), this is a gap that should be addressed to match the existing security pattern.

Recommended fix — apply the same encrypt()/decrypt() pattern already used elsewhere in the codebase:

# In _create_notification() — encrypt before insert
await env.DB.prepare(
    "INSERT INTO notifications (id, user_id, type, title, message, related_id)"
    " VALUES (?, ?, ?, ?, ?, ?)"
).bind(
    new_id(),
    user_id,
    type_,
    encrypt(title, env.ENCRYPTION_KEY),    # ← encrypt
    encrypt(message, env.ENCRYPTION_KEY),  # ← encrypt
    related_id,
).run()

# In api_list_notifications() — decrypt on read
notifications = [
    {
        "id":         r.id,
        "type":       r.type,
        "title":      decrypt(r.title, env.ENCRYPTION_KEY),    # ← decrypt
        "message":    decrypt(r.message, env.ENCRYPTION_KEY),  # ← decrypt
        "is_read":    bool(r.is_read),
        "related_id": r.related_id,
        "created_at": r.created_at,
    }
    for r in rows.results
]

In transit: Cloudflare Workers enforces HTTPS by default, so data is encrypted in transit — no application-level change needed there.

Would you like to open a follow-up issue to track the at-rest encryption gap for the notifications table?

Copy link
Copy Markdown

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

♻️ Duplicate comments (2)
src/worker.py (2)

1482-1488: ⚠️ Potential issue | 🟡 Minor

Parse notification query parameters with parse_qs instead of URL substring checks.

The current parsing lets unrelated params like xlimit=5 affect limit, and unread_only=true can match outside the actual unread_only key. The rest of this file already uses urlparse(...).query + parse_qs, which is safer and clearer.

Proposed fix
-    url = req.url
-    unread_only = "unread_only=true" in url
+    params = parse_qs(urlparse(req.url).query)
+    unread_only = (params.get("unread_only") or [""])[0].lower() in {"true", "1"}
     try:
-        raw_limit = int(url.split("limit=")[1].split("&")[0]) if "limit=" in url else 20
-        limit = max(1, min(raw_limit, 50))
-    except (ValueError, IndexError):
-        limit = 20
+        raw_limit = int((params.get("limit") or ["20"])[0])
+    except (TypeError, ValueError):
+        raw_limit = 20
+    limit = max(1, min(raw_limit, 50))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/worker.py` around lines 1482 - 1488, Replace the fragile substring
parsing around url, unread_only, and limit with proper query parsing: use
urllib.parse.urlparse(req.url).query and urllib.parse.parse_qs to get params,
then set unread_only = params.get("unread_only", [""])[0] == "true", compute
raw_limit = params.get("limit", ["20"])[0], convert raw_limit to int with a
try/except to fall back to 20, and clamp limit = max(1, min(raw_limit, 50));
update the code that currently assigns url, unread_only and limit to use these
parsed values (referencing the variables url, unread_only, limit in this block).

1459-1468: ⚠️ Potential issue | 🔴 Critical

Use the async AES helpers; the current sync shims always raise.

encrypt()/decrypt() are deprecated shims that raise RuntimeError, so notification creation fails and listing any notification row returns 500. Also, capture_exception(..., env=env) uses an unsupported keyword; use _env=env so failures are actually swallowed and logged as intended.

Proposed fix
         await env.DB.prepare(
             "INSERT INTO notifications (id, user_id, type, title, message, related_id)"
             " VALUES (?, ?, ?, ?, ?, ?)"
         ).bind(new_id(), user_id, type_,
-               encrypt(title, enc), encrypt(message, enc),
+               await encrypt_aes(title, enc), await encrypt_aes(message, enc),
                related_id).run()
     except Exception as exc:
-        await capture_exception(exc, env=env, where="_create_notification")
+        await capture_exception(exc, _env=env, where="_create_notification")
     notifications = [
         {
             "id":         r.id,
             "type":       r.type,
-            "title":      decrypt(r.title or "", env.ENCRYPTION_KEY),
-            "message":    decrypt(r.message or "", env.ENCRYPTION_KEY),
+            "title":      await decrypt_aes(r.title or "", env.ENCRYPTION_KEY),
+            "message":    await decrypt_aes(r.message or "", env.ENCRYPTION_KEY),
             "is_read":    bool(r.is_read),
             "related_id": r.related_id,
             "created_at": r.created_at,
         }

Also applies to: 1505-1516

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/worker.py` around lines 1459 - 1468, Replace the sync, deprecated
encrypt(...) calls with the async AES helper (await encrypt_async(...)) so the
title and message are encrypted asynchronously before binding (e.g., await
encrypt_async(title, enc) and await encrypt_async(message, enc)); also change
capture_exception(exc, env=env, where="_create_notification") to use the
supported keyword _env (await capture_exception(exc, _env=env,
where="_create_notification")) so exceptions are swallowed/logged as intended;
apply the same two changes in the other block referenced (the similar code at
the second location).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/worker.py`:
- Around line 539-541: The Worker DDL is missing the notifications index defined
in schema.sql: add the index creation statement for idx_notif_created to the
same DDL list that contains "idx_notif_user" and "idx_notif_unread" so the
Worker creates "CREATE INDEX IF NOT EXISTS idx_notif_created ON
notifications(user_id, created_at DESC)"; update the array or list in
src/worker.py where those two CREATE INDEX strings are declared (the block that
builds the notifications table/indexes) so fresh environments include the same
index used by queries that filter by user_id and ORDER BY created_at DESC LIMIT.

---

Duplicate comments:
In `@src/worker.py`:
- Around line 1482-1488: Replace the fragile substring parsing around url,
unread_only, and limit with proper query parsing: use
urllib.parse.urlparse(req.url).query and urllib.parse.parse_qs to get params,
then set unread_only = params.get("unread_only", [""])[0] == "true", compute
raw_limit = params.get("limit", ["20"])[0], convert raw_limit to int with a
try/except to fall back to 20, and clamp limit = max(1, min(raw_limit, 50));
update the code that currently assigns url, unread_only and limit to use these
parsed values (referencing the variables url, unread_only, limit in this block).
- Around line 1459-1468: Replace the sync, deprecated encrypt(...) calls with
the async AES helper (await encrypt_async(...)) so the title and message are
encrypted asynchronously before binding (e.g., await encrypt_async(title, enc)
and await encrypt_async(message, enc)); also change capture_exception(exc,
env=env, where="_create_notification") to use the supported keyword _env (await
capture_exception(exc, _env=env, where="_create_notification")) so exceptions
are swallowed/logged as intended; apply the same two changes in the other block
referenced (the similar code at the second location).
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: alphaonelabs/coderabbit/.coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 22b40573-5584-4ac7-beaa-6668dae4656e

📥 Commits

Reviewing files that changed from the base of the PR and between 01087cb and ac0d4a8.

📒 Files selected for processing (1)
  • src/worker.py

Comment thread src/worker.py
@ayesha1145
Copy link
Copy Markdown
Contributor Author

Fixed: title and message fields are now encrypted with encrypt() before insert in _create_notification() and decrypted with decrypt() on read in api_list_notifications() — consistent with the rest of the codebase. All 238 tests passing ✅

@ayesha1145
Copy link
Copy Markdown
Contributor Author

Fixed: added idx_notif_created to the Worker DDL to match schema.sql — fresh environments initialized via /api/init will now create all three notification indexes correctly.

@A1L13N A1L13N merged commit 3dd0c37 into alphaonelabs:main Apr 23, 2026
2 checks passed
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants