How can you trust and rely on automated decisions that are made by complex machine learning models, especially when such models are not held accountable for their predictions? Angelos Chatzimparmpas’ research simplifies the oversight and management of machine learning algorithms so that human experts can evaluate how automated computational solutions reach their results.

Our society relies on intelligent machines that are already very accurate and powerful. They can solve demanding problems, ranging from recommender systems that suggest which movie to watch next to medical diagnosis support when a patient is admitted to a hospital.

But experts in high-risk domains need to know how a particular prediction was made before they can trust the process of machine learning and its results. In his doctoral thesis, Angelos Chatzimparmpas has looked for efficient solutions to this problem.

– My research shows how users of machine learning can benefit from visual analytics tools and systems that provide explainability, increase trustworthiness, and steer machine learning methods through interactive visual representations, that is, special types of charts and graphs that can be influenced by users, says Angelos.

As part of his doctoral project, Angelos and his colleagues have provided an online survey browser that is now used worldwide by researchers, practitioners, and students interested in visualization for machine learning (https://trustmlvis.lnu.se/). Moreover, they have designed and developed multiple visual analytics approaches that allow machine learning experts and model developers to improve all stages of an end-to-end machine learning workflow.

Both academic research and business intelligence can benefit from the approaches presented in his dissertation. As an example from the financial world, decisions to decline loan applications need to be more transparent, explaining precisely why an application was turned down.

To build trust between humans and computers, it is important to be able to understand how an algorithm works and to explain how it arrives at a certain result once a prediction is made. This is exactly the core challenge addressed by Angelos’ study.

– Together with the growing application of complex machine learning techniques to many analytical tasks, there is an increasing need for interpretable and explainable solutions, says Angelos. I think it is safe to predict that visual analytics for explainable and trustworthy machine learning will continue to be at the forefront as a research topic in the near future.

The future will also bring interesting new challenges for Angelos: from March 2023, he will be joining the MU Collective research lab at Northwestern University, USA. He will continue his research there as a postdoctoral scholar, aiming to formalize the idea of visualization as a model check for machine learning and to expand even further on this topic.

He will also develop novel ways to evaluate how well exploratory and confirmatory visual analysis tools work.

More information

The doctoral work was carried out at the Department of Computer Science and Media Technology. The research is part of the ISOVIS research group and the Linnaeus University Centre for Data Intensive Sciences and Applications (DISA).

Contact

Angelos Chatzimparmpas, doctor in computer and information science, +46 (0)470-70 81 77, angelos.chatzimparmpas@lnu.se

Press contact:

Ulrika Bergström

Phone: 0480-49 70 55

Mobile phone: 070-259 36 29