AI researchers are increasingly called upon to anticipate and mitigate the harms arising from future use of their research, e.g. when publishing papers at major conferences [1][2][3] and journals or when seeking funding from bodies such as the EU [4]. The earlier the stage of the research, the more challenging it is to imagine and anticipate its future applications and the associated risks. At the same time, researchers increasingly have to report such ethical assessments to third parties, including institutional ethics review panels, publication editorial boards, research funders, and regulatory authorities and auditors, e.g. under the GDPR or the AI Act. The current diversity of criteria across these bodies presents a major risk assessment challenge for AI researchers.
This project will analyse the state of the art in the range of ethical and legal obligations facing AI researchers, and will guide researchers to consider more systematically the different potential applications of their research (including malicious use and dual use in weapons systems) and to identify the risks of harm these could pose, so that mitigations can be considered and reported to the relevant authorities. It will also aim to develop an open, machine-readable format for documenting AI risks, to aid their review and comparison as the understanding of types of risk, and of the best ways to mitigate their harms, improves.
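To make the goal concrete, below is a minimal sketch in Python of what one record in such a machine-readable risk format might look like. The schema and all field names (risk_id, application, mitigations, etc.) are hypothetical assumptions for illustration only, not part of any existing standard or of this project's eventual design:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class RiskRecord:
    """One documented risk arising from a potential application of a piece of research.

    All field names are illustrative assumptions, not an existing standard.
    """
    risk_id: str                 # stable identifier, for cross-referencing between reviews
    application: str             # the anticipated (mis)use scenario being considered
    harm: str                    # the harm that scenario could cause
    affected_groups: list[str]   # who would bear the harm
    likelihood: str              # e.g. "low" / "medium" / "high"
    severity: str                # e.g. "low" / "medium" / "high"
    mitigations: list[str] = field(default_factory=list)
    reporting_targets: list[str] = field(default_factory=list)  # e.g. ethics panel, funder

# Example: documenting a dual-use risk
record = RiskRecord(
    risk_id="R-001",
    application="Repurposing the model for autonomous targeting in weapons systems",
    harm="Physical harm from weaponised deployment",
    affected_groups=["civilians in conflict zones"],
    likelihood="low",
    severity="high",
    mitigations=["restricted model release", "licence terms barring military use"],
    reporting_targets=["institutional ethics review panel", "research funder"],
)

# Serialising to an open format such as JSON keeps records machine readable,
# so different review bodies can compare and aggregate risks with common tooling.
print(json.dumps(asdict(record), indent=2))
```

An open textual serialisation along these lines would let risk records be versioned alongside a paper or funding proposal and re-validated as review criteria evolve.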
- ACL ethics review guidelines: https://aclrollingreview.org/ethicsreviewertutorial
- NeurIPS ethics guidelines: https://neurips.cc/public/EthicsGuidelines
- ACM FAccT 2022 paper: https://doi.org/10.1145/3531146.3533780
- EU funding ethics guidance, "Ethics By Design and Ethics of Use Approaches for Artificial Intelligence" (Horizon Europe): https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/guidance/ethics-by-design-and-ethics-of-use-approaches-for-artificial-intelligence_he_en.pdf