Anything and everything RL

Do you have experience with, and your own FYP/MSc project idea related to, Reinforcement Learning (“plain”, multi-objective, multi-agent, transfer, lifelong, explainable, inverse, etc.), whether to develop new algorithms or to apply existing ones to a new application area? Contact me with your idea to see if we can formulate the topic together.

Cooperative driving using multi-agent RL

This project will explore the use of existing multi-agent RL techniques, and the development of new ones, to achieve cooperation and coordination between autonomous vehicles in order to reduce traffic congestion. Prior experience with RL is essential.
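As a toy illustration of the coordination problem such techniques address, the sketch below runs two independent Q-learners in a hypothetical one-shot "merge" game; the payoff numbers and the stateless bandit framing are illustrative assumptions, not part of any traffic model used in the project.

```python
import numpy as np

# Two vehicles each pick GO (0) or YIELD (1) at a merge point.
# Both GO collides (large penalty); both YIELD wastes time.
# payoff[a0, a1] = (reward for agent 0, reward for agent 1) -- made-up values.
rng = np.random.default_rng(1)
payoff = np.array([[[-10, -10], [2, 1]],
                   [[1, 2], [0, 0]]])

Q = np.zeros((2, 2))  # Q[agent, action]: each agent learns independently
alpha, eps = 0.1, 0.2

for _ in range(5000):
    # epsilon-greedy action selection, independently per agent
    acts = [rng.integers(2) if rng.random() < eps else int(np.argmax(Q[i]))
            for i in range(2)]
    r = payoff[acts[0], acts[1]]
    for i in range(2):
        # stateless Q-update towards the observed joint-action reward
        Q[i, acts[i]] += alpha * (r[i] - Q[i, acts[i]])
```

Independent learners like these typically settle into one vehicle going and the other yielding, but coordination is not guaranteed; that gap is exactly what dedicated multi-agent RL methods target.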

Multi-objective optimization in reinforcement learning

This project will apply and extend the multi-objective (MO) Deep W-Networks technique [1] to a number of benchmark environments from the multi-objective gym [2], and benchmark it against other MO RL techniques. References: [1] J. Hribar, L. Hackett, I. Dusparic. Deep W-Networks: Solving Multi-Objective Optimisation Problems With Deep Reinforcement Learning. International Conference on Agents and Artificial Intelligence (ICAART), 2023 …
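To give a flavour of the multi-objective RL setting, here is a minimal tabular sketch that keeps one Q-table per objective and acts on a linear scalarisation of them. This is a simplified stand-in, not Deep W-Networks itself (which learns per-objective policies plus W-values); the toy environment, weights, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_objectives = 4, 2, 2
Q = np.zeros((n_objectives, n_states, n_actions))  # one Q-table per objective
weights = np.array([0.7, 0.3])  # assumed preference over the two objectives
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(s, a):
    # Toy environment: action 0 favours objective 0, action 1 favours objective 1.
    r = np.array([1.0 - a, float(a)])
    return rng.integers(n_states), r

s = 0
for _ in range(2000):
    # epsilon-greedy on the scalarised Q-values
    scalar_q = weights @ Q[:, s, :]
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(scalar_q))
    s2, r = step(s, a)
    # every objective's table bootstraps from the shared greedy action
    a2 = int(np.argmax(weights @ Q[:, s2, :]))
    for o in range(n_objectives):
        Q[o, s, a] += alpha * (r[o] + gamma * Q[o, s2, a2] - Q[o, s, a])
    s = s2
```

Keeping the per-objective Q-values separate (rather than scalarising the rewards before learning) is what lets MO methods reason about the trade-off between objectives explicitly.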

Software testing techniques for reinforcement learning-based systems

This project will investigate existing software testing techniques, and develop new ones, for testing reinforcement learning-based software applications. References: [1] Yuteng Lu, Weidi Sun, Meng Sun. Towards mutation testing of Reinforcement Learning systems. Journal of Systems Architecture, Volume 131, 2022. https://www.sciencedirect.com/science/article/pii/S1383762122001977 [2] Miller Trujillo, Mario Linares-Vásquez, Camilo Escobar-Velásquez, Ivana Dusparic, and Nicolás Cardozo. …

Explainable/Trustworthy Reinforcement Learning

In recent years, causal inference has emerged as an important approach for addressing a range of issues within RL. Providing agents with the ability to leverage causal knowledge has been identified as a key ingredient in developing human-centered explanation methods. Namely, when using AI systems, humans tend to be interested in answering questions such as “What caused the …

Counterfactual explanations for Explainable and Trustworthy Reinforcement Learning

Explanations targeted at non-expert users of AI systems are necessary to encourage collaboration and to ensure user trust in the black-box system. Counterfactuals are user-friendly explanations that offer the user actionable advice on how to change their input features in order to achieve a desired output. While researched in depth in supervised learning, counterfactual explanations are seldom …
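The idea of "actionable advice on how to change input features" can be sketched concretely: below is a hypothetical brute-force counterfactual search over a toy linear classifier, looking for the smallest single-feature change that flips the decision. The weights, step size, and search strategy are illustrative assumptions, not a method from the literature.

```python
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # assumed model weights
b = -0.25

def predict(x):
    return int(w @ x + b > 0)

def counterfactual(x, step=0.1, max_steps=100):
    """Return a copy of x, changed in one feature, whose prediction flips."""
    target = 1 - predict(x)
    best = None
    for i in range(len(x)):
        for direction in (+1, -1):
            cand = x.astype(float).copy()
            for _ in range(max_steps):
                cand[i] += direction * step
                if predict(cand) == target:
                    cost = abs(cand[i] - x[i])       # size of the suggested change
                    if best is None or cost < best[0]:
                        best = (cost, cand.copy())
                    break
    return None if best is None else best[1]

x = np.array([0.0, 0.5, 0.0])    # classified as 0 by the toy model
cf = counterfactual(x)           # nearest single-feature change classified as 1
```

In RL the analogous question is harder: the "output" is a sequence of decisions by a policy, not a single label, which is one reason counterfactual explanations remain underexplored there.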