[TAKEN] Fairness and Robustness in Risk Detection Models (2 Projects)
Risk detection models (such as IBM’s Granite Guardian) are increasingly used to flag harmful prompts and responses in large language model pipelines. These systems are trained on human and synthetic data to identify risks across multiple dimensions, but their reliability and fairness are not guaranteed: they may over-flag certain groups, miss subtle harms, or be …