Toxicity evaluation #90
Co-authored-by: dennyabrain <denny.george90@gmail.com>
Resolved conflicts keeping nsfw_text validator features from this branch. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…fier Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…t model Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
📝 Walkthrough

Adds an NSFW text validator: schema, config class, enum and manifest registration, docs, Docker pre-download of the HF model, tests, evaluation runner, and required dependencies.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant API
    participant ValidatorConfig
    participant GuardrailsHub
    participant HFCache as "HF Cache\n(/app/hf_cache)"
    Client->>API: POST /validate (includes `nsfw_text` config)
    API->>ValidatorConfig: parse & resolve `NSFWTextSafetyValidatorConfig`
    ValidatorConfig->>GuardrailsHub: build() -> instantiate NSFWText(model_name, device, threshold, on_fail)
    GuardrailsHub->>HFCache: load model/tokenizer (cache_dir=/app/hf_cache)
    GuardrailsHub-->>API: validation result (pass/fail or fix)
    API-->>Client: response (success, data, errors)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 10
🧹 Nitpick comments (4)
backend/app/tests/test_guardrails_api_integration.py (2)
348-366: The low-threshold test doesn’t validate threshold behavior. Using a clearly explicit sentence likely fails at both low and default/high thresholds, so this test may still pass if threshold handling is broken. Consider an A/B assertion with a borderline input across two thresholds.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/tests/test_guardrails_api_integration.py` around lines 348 - 366, The test test_input_guardrails_with_nsfw_text_with_low_threshold uses an obviously explicit sentence so it doesn't verify threshold behavior; replace the input with a borderline phrase (e.g., mildly suggestive but not explicit) and assert A/B behavior by posting the same input twice: once with a low threshold (e.g., threshold=0.1) expecting success False (validator fails) and once with a high threshold (e.g., threshold=0.9) expecting success True (validator passes); refer to the VALIDATE_API_PATH call and the "validators" payload to implement the two assertions within this test (or split into two tests) so threshold sensitivity is validated.
331-383: Two NSFW exception tests overlap heavily. `test_input_guardrails_with_nsfw_text_on_explicit_content` and `test_input_guardrails_with_nsfw_text_exception_action` currently validate almost the same path. Merging them would keep coverage while reducing integration test runtime.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/tests/test_guardrails_api_integration.py` around lines 331 - 383, Two tests duplicate coverage: test_input_guardrails_with_nsfw_text_on_explicit_content and test_input_guardrails_with_nsfw_text_exception_action both post similar NSFW inputs and assert failure; merge them by removing one and consolidating into a single test (e.g., keep test_input_guardrails_with_nsfw_text_on_explicit_content) that covers both input variants or parameterize the test to run both payloads, ensuring you still call VALIDATE_API_PATH with request_id/organization_id/project_id and the nsfw_text validator (on_fail: "exception") and assert response.status_code == 200 and body["success"] is False; update or remove the redundant test function to avoid duplicate assertions and reduce runtime.
backend/app/tests/test_toxicity_hub_validators.py (1)
295-442: Consider parametrizing repeated NSFW config tests. This block is clear, but a lot of cases follow the same patch/build/assert pattern. Consolidating with
`pytest.mark.parametrize` would reduce repetition and future maintenance overhead.
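One possible shape for that consolidation, sketched with a stand-in config class (the real suite patches the Guardrails hub `NSFWText` class; the class below is illustrative only, not the project's code):

```python
import pytest


class NSFWTextSafetyValidatorConfig:
    """Stand-in for the real Pydantic config; build() returns kwargs for clarity."""

    def __init__(self, threshold: float = 0.8, device: int = -1):
        self.threshold = threshold
        self.device = device

    def build(self) -> dict:
        # The real build() instantiates the Guardrails hub NSFWText validator.
        return {"threshold": self.threshold, "device": self.device}


@pytest.mark.parametrize(
    ("kwargs", "expected"),
    [
        ({}, {"threshold": 0.8, "device": -1}),                  # defaults
        ({"threshold": 0.0}, {"threshold": 0.0, "device": -1}),  # lower bound
        ({"threshold": 1.0, "device": 0}, {"threshold": 1.0, "device": 0}),
    ],
)
def test_build_params(kwargs, expected):
    assert NSFWTextSafetyValidatorConfig(**kwargs).build() == expected
```

The `_on_fix` and `ValidationError` cases the comment lists would stay as standalone tests, since they assert distinct behavior rather than a kwargs mapping.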
Verify each finding against the current code and only fix it if needed. In `@backend/app/tests/test_toxicity_hub_validators.py` around lines 295 - 442, Refactor the repeated patch/build/assert patterns in TestNSFWTextSafetyValidatorConfig by consolidating similar test cases into parametrized tests using pytest.mark.parametrize: identify groups like build-with-defaults/custom/threshold/device/model/on_fail mappings (tests test_build_with_defaults, test_build_with_custom_params, test_build_with_threshold_at_zero, test_build_with_threshold_at_one, test_build_with_device_none, test_build_with_model_name_none, test_on_fail_fix_resolves_to_callable, test_on_fail_exception_resolves_to_exception_action, test_on_fail_rephrase_resolves_to_callable) and replace each group with a single parametrized function that supplies (input-config-kwargs, expected kwargs or callable/OnFailAction) and inside the test perform the same with patch(_NSFW_PATCH) call to config.build() and assert mock_validator.call_args or result; keep the standalone tests that exercise _on_fix behavior and ValidationError cases (test_on_fix_sets_validator_metadata_when_fix_value_empty, test_on_fix_does_not_set_metadata_when_fix_value_present, test_invalid_on_fail_raises, test_wrong_type_literal_rejected, test_extra_fields_rejected, test_threshold_must_be_numeric) unchanged.backend/app/core/validators/config/nsfw_text_safety_validator_config.py (1)
11-11: Constrain `validation_method` to known values. Using plain `str` here allows typos to pass schema validation and fail later at runtime. Restrict this field to explicit literals used by the validator interface.
Proposed fix:
```diff
- validation_method: str = "sentence"
+ validation_method: Literal["sentence", "full"] = "sentence"
```
Verify each finding against the current code and only fix it if needed. In `@backend/app/core/validators/config/nsfw_text_safety_validator_config.py` at line 11, Change the validation_method field from a bare str to a Literal of the allowed values to prevent typos: update validation_method: str = "sentence" to validation_method: Literal["sentence", "document"] = "sentence" (or the exact literals used by the validator interface), and add the necessary import for Literal from typing (or typing_extensions if supporting older Python versions); ensure the class in nsfw_text_safety_validator_config.py uses the Literal type so schema/type checkers enforce the allowed values.
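To illustrate the effect of the suggested change, a minimal stand-in model; the exact literal values are an assumption (the diff above uses "full" while the prompt uses "document"):

```python
from typing import Literal

from pydantic import BaseModel, ValidationError


class NSFWTextSafetyValidatorConfig(BaseModel):
    # Stand-in with only the field under discussion.
    validation_method: Literal["sentence", "full"] = "sentence"


assert NSFWTextSafetyValidatorConfig().validation_method == "sentence"

try:
    NSFWTextSafetyValidatorConfig(validation_method="sentnce")  # typo
except ValidationError:
    print("typo rejected at schema validation time")
```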
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/api/API_USAGE.md`:
- Around line 100-104: The documentation contains a duplicated "type=" filter
line in API_USAGE.md (the two similar bullet lines listing allowed type values);
remove the stale duplicate so only one definitive "type=..." entry remains (keep
the version that includes nsfw_text if that is the intended supported value),
ensuring the Optional filters section lists each query param exactly once and
update any surrounding punctuation/formatting to remain consistent.
In `@backend/app/core/validators/config/nsfw_text_safety_validator_config.py`:
- Line 10: Add a schema-level constraint to the threshold field on
NSFWTextSafetyValidatorConfig so invalid floats outside [0.0, 1.0] fail
validation; replace the loose float declaration for threshold with a Pydantic
Field that enforces ge=0.0 and le=1.0 (keeping the default 0.8) and import Field
from pydantic if not already present.
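A sketch of the bounded field; only the threshold field is shown, the rest of the config is omitted:

```python
from pydantic import BaseModel, Field, ValidationError


class NSFWTextSafetyValidatorConfig(BaseModel):
    # Reject thresholds outside [0.0, 1.0] at schema-validation time.
    threshold: float = Field(default=0.8, ge=0.0, le=1.0)


assert NSFWTextSafetyValidatorConfig().threshold == 0.8
assert NSFWTextSafetyValidatorConfig(threshold=1.0).threshold == 1.0

try:
    NSFWTextSafetyValidatorConfig(threshold=1.5)
except ValidationError:
    print("out-of-range threshold rejected")
```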
In `@backend/app/core/validators/README.md`:
- Around line 430-434: Update the "Default stage strategy" section so it mirrors
the earlier recommendation by adding `nsfw_text` to both the `input` and
`output` lists; ensure the operational summary later in the README also includes
`nsfw_text` for input and output and that the wording/justification matches the
earlier "input" and "output" recommendation for consistency across the document.
In `@backend/app/evaluation/toxicity/run.py`:
- Around line 32-41: The VALIDATORS dict currently always instantiates
LlamaGuard7B which requires remote inference; change the registration so remote
validators are only added when a runtime flag is enabled (e.g.,
enable_remote_validators or USE_REMOTE_VALIDATORS env var) or provide a separate
OFFLINE_VALIDATORS mapping; specifically, update the code that builds VALIDATORS
to conditionally include the "llamaguard_7b" entry (the LlamaGuard7B(...)
lambda) based on that flag, or move LlamaGuard7B into a distinct
REMOTE_VALIDATORS collection and merge it into VALIDATORS only when the flag is
true so the runner can operate fully offline without creating LlamaGuard7B.
- Around line 94-98: The module currently executes the evaluation loop at import
time (iterating DATASETS and calling run_dataset, then printing OUT_DIR); wrap
that logic in a main guard so it only runs when executed as a script. Move the
for dataset_name, dataset_cfg in DATASETS: loop and the final print("Done.
Results saved to", OUT_DIR) into an if __name__ == "__main__": block (keeping
references to DATASETS, run_dataset, and OUT_DIR intact) so importing the module
won't trigger dataset reads or model loading.
- Around line 63-69: The current use of .astype(str) turns NaN into the literal
"nan" and causes validator.validate (called via p.record and assigned to
df[f"{validator_name}_result"]) to score missing text; change the preprocessing
to either skip null rows or replace missing values with an empty string before
validation (e.g., operate on df[text_col].fillna("").astype(str) or filter
df[text_col].notna()), then call p.record(lambda t: validator.validate(t,
metadata={}), <cleaned text>) so missing inputs are not treated as real samples
and latency/statistics aren't skewed.
- Around line 51-54: After calling df["y_true"] = df[label_col].map(label_map)
when label_map is not None, immediately validate for unmapped/NaN values and
raise an error listing the unexpected original labels instead of proceeding;
e.g., check df["y_true"].isna(), compute unexpected =
df.loc[df["y_true"].isna(), label_col].unique(), and raise ValueError with those
values (keep the existing astype(int) branch unchanged when label_map is None).
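The flag-gated registration suggested above for run.py could be sketched as follows; the env var name and the validator constructors are assumptions from this review, and the lambdas keep construction lazy so the module imports without network access:

```python
import os

# Constructors are deferred behind lambdas so nothing is instantiated at import.
OFFLINE_VALIDATORS = {
    "nsfw_text": lambda: NSFWText(threshold=0.8, on_fail="exception"),
    "toxic_language": lambda: ToxicLanguage(on_fail="exception"),
}
REMOTE_VALIDATORS = {
    # Requires remote inference; only registered when explicitly enabled.
    "llamaguard_7b": lambda: LlamaGuard7B(),
}

VALIDATORS = dict(OFFLINE_VALIDATORS)
if os.environ.get("USE_REMOTE_VALIDATORS", "").lower() in {"1", "true", "yes"}:
    VALIDATORS.update(REMOTE_VALIDATORS)
```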
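The suggested main guard for run.py might look like this; `DATASETS`, `run_dataset`, and `OUT_DIR` stand in for the module's real symbols, and the `.items()` iteration is an assumption:

```python
DATASETS = {}  # dataset_name -> dataset config (placeholder)
OUT_DIR = "outputs/toxicity"


def run_dataset(dataset_name, dataset_cfg):
    ...  # load the dataset, run each validator, write predictions/metrics


def main() -> None:
    for dataset_name, dataset_cfg in DATASETS.items():
        run_dataset(dataset_name, dataset_cfg)
    print("Done. Results saved to", OUT_DIR)


if __name__ == "__main__":
    # Importing this module no longer triggers dataset reads or model loading.
    main()
```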
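The null-safe preprocessing suggested above, sketched on a toy frame; the column name and the stub in place of `validator.validate` are assumptions:

```python
import pandas as pd

df = pd.DataFrame({"text": ["hello", None, "world"]})
text_col = "text"

# Option A: treat missing text as empty strings instead of the literal "nan".
cleaned = df[text_col].fillna("").astype(str)

# Option B: drop null rows so latency and accuracy stats aren't skewed.
df = df[df[text_col].notna()].copy()


def validate_stub(t: str) -> bool:
    # Stand-in for validator.validate(t, metadata={}).
    return len(t) > 0


df["nsfw_text_result"] = df[text_col].astype(str).map(validate_stub)
```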
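The unmapped-label guard suggested above could be sketched as follows; the column name and label map are illustrative, not taken from the datasets:

```python
from typing import Optional

import pandas as pd


def map_labels(df: pd.DataFrame, label_col: str, label_map: Optional[dict]) -> pd.Series:
    """Map raw labels to 0/1, failing loudly on anything unmapped."""
    if label_map is None:
        return df[label_col].astype(int)  # unchanged branch from the review comment
    y_true = df[label_col].map(label_map)
    if y_true.isna().any():
        unexpected = df.loc[y_true.isna(), label_col].unique()
        raise ValueError(f"Unmapped labels in {label_col!r}: {list(unexpected)}")
    return y_true.astype(int)


df = pd.DataFrame({"task_1": ["HOF", "NOT", "HOF"]})
assert map_labels(df, "task_1", {"HOF": 1, "NOT": 0}).tolist() == [1, 0, 1]
```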
In `@backend/app/tests/test_guardrails_api_integration.py`:
- Around line 331-346: The test
test_input_guardrails_with_nsfw_text_on_explicit_content only asserts success is
False and can pass for infrastructure/errors rather than actual NSFW detection;
update the test (and the similar test around lines 368-383) to include a
positive control that should pass NSFW validation (e.g., a benign clean input)
and assert it returns success is True, and for the explicit-content case assert
the failure payload structure/content (inspect body["errors"] or body["reason"]
depending on how VALIDATE_API_PATH returns validator failures) to verify the
nsfw_text validator actually flagged the input; use integration_client and the
same request payload shape but change "input" and check specific fields
(validator type "nsfw_text", on_fail outcome, and expected failure
message/shape) instead of only asserting success is False.
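One way to tighten those assertions is a small helper shared by both tests; the error-payload shape here is an assumption about what the endpoint returns:

```python
def assert_nsfw_outcome(body: dict, expect_flagged: bool) -> None:
    """Assert both the success flag and that the failure came from nsfw_text."""
    if expect_flagged:
        assert body["success"] is False
        # Distinguish a genuine validator failure from an infrastructure error.
        assert any("nsfw" in str(err).lower() for err in body.get("errors", []))
    else:
        # Positive control: a benign input must pass cleanly.
        assert body["success"] is True
        assert not body.get("errors")
```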
In `@backend/Dockerfile`:
- Around line 53-56: Update the Dockerfile pre-download step to pin the Hugging
Face model by adding a fixed revision SHA to both
AutoTokenizer.from_pretrained(...) and
AutoModelForSequenceClassification.from_pretrained(...) calls; then add a
model_revision field to the NSFWTextSafetyValidatorConfig and pass that value
through when constructing the NSFWText validator so the validator defaults use
the same pinned revision SHA; ensure the same revision string is used in the
Dockerfile prefetch and the NSFWTextSafetyValidatorConfig.model_revision to
guarantee reproducible builds and evaluations.
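The pinned prefetch could be sketched like this; the model name is assumed to be the hub validator's default, and the revision SHA is a placeholder, not a real commit:

```python
def prefetch_nsfw_model(
    model_name: str = "michellejieli/NSFW_text_classifier",  # assumed default
    revision: str = "<pinned-commit-sha>",  # placeholder; pin a real SHA
    cache_dir: str = "/app/hf_cache",
) -> None:
    """Download tokenizer and model at a fixed revision into the build cache."""
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    AutoTokenizer.from_pretrained(model_name, revision=revision, cache_dir=cache_dir)
    AutoModelForSequenceClassification.from_pretrained(
        model_name, revision=revision, cache_dir=cache_dir
    )
```

The same revision string would then be surfaced as a `model_revision` field on `NSFWTextSafetyValidatorConfig` so the runtime loads exactly what the image prefetched.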
In `@backend/pyproject.toml`:
- Around line 50-58: The uv configuration in [tool.uv.sources] currently pins
the torch package to the pytorch-cpu index (name "pytorch-cpu" / url
"https://download.pytorch.org/whl/cpu" with explicit = true), which prevents
installing CUDA-enabled wheels and conflicts with the NSFW validator's
documented device="cuda" option; fix this by either removing or making
non-explicit the torch entry so normal indices can resolve CUDA wheels, or
introduce separate dependency profiles (e.g., "pytorch-cpu" and "pytorch-cuda")
and update documentation and the NSFW validator docs/devices accordingly so
device="cuda" is only advertised when the CUDA profile is used (reference the
[tool.uv.sources] / [[tool.uv.index]] entries and the NSFW validator
device="cuda" documentation).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5638df70-fc69-4bc5-a07e-10e516566e61
📒 Files selected for processing (11)
- backend/Dockerfile
- backend/app/api/API_USAGE.md
- backend/app/core/enum.py
- backend/app/core/validators/README.md
- backend/app/core/validators/config/nsfw_text_safety_validator_config.py
- backend/app/core/validators/validators.json
- backend/app/evaluation/toxicity/run.py
- backend/app/schemas/guardrail_config.py
- backend/app/tests/test_guardrails_api_integration.py
- backend/app/tests/test_toxicity_hub_validators.py
- backend/pyproject.toml
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/evaluation/README.md`:
- Around line 290-295: The fenced code block containing the four output paths
(lines showing outputs/toxicity/predictions_hasoc.csv,
outputs/toxicity/metrics_hasoc.json, outputs/toxicity/predictions_sharechat.csv,
outputs/toxicity/metrics_sharechat.json) in README.md should include a language
identifier to satisfy MD040; update the opening fence from ``` to ```text so the
block becomes a ```text fenced block; keep the same contents and closing fence
unchanged.
- Around line 288-295: Update the documented example output paths under the
"**Output per dataset:**" section so they match the folder-tree layout (use
per-dataset subfolders), i.e., change the flat paths like
`outputs/toxicity/predictions_hasoc.csv` and
`outputs/toxicity/metrics_sharechat.json` to the nested forms
`outputs/toxicity/hasoc/predictions.csv`, `outputs/toxicity/hasoc/metrics.json`,
`outputs/toxicity/sharechat/predictions.csv`, and
`outputs/toxicity/sharechat/metrics.json` so the README's example paths align
with the folder structure.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8b240205-a6e8-4279-a05f-da2c128ffb09
📒 Files selected for processing (2)
- backend/app/evaluation/README.md
- backend/scripts/run_all_evaluations.sh
✅ Files skipped from review due to trivial changes (1)
- backend/scripts/run_all_evaluations.sh
Summary
Target issue is #89
Explain the motivation for making this change. What existing problem does the pull request solve?
Adds an offline evaluation script for toxicity detection covering three guardrail validators across two benchmark datasets.
New file: backend/app/evaluation/toxicity/run.py
Updated: backend/app/evaluation/README.md
Updated: backend/scripts/run_all_evaluations.sh
Checklist
Before submitting a pull request, please ensure that you mark these tasks.
Run `fastapi run --reload app/main.py` or `docker compose up` in the repository root and test.
Notes
Please add here if any other information is required for the reviewer.
Summary by CodeRabbit
- New Features
- Evaluation
- Tests