Trustworthiness and sustainability are two fundamental pillars underlying modern society. Simultaneously, with the development of information technologies, AI has become ubiquitous in almost all our routines.
We increasingly interact not only with each other, but also with AI-empowered machines. It is therefore increasingly necessary to properly assess how, and how much, humans trust AI tools and devices, and how these tools contribute to a sustainable society.
Consequently, AI can no longer be seen as an isolated technical discipline under the umbrella of the natural sciences and pure technological knowledge.
This was the background for the first official NordSTAR workshop, where questions like “how successful has AI research been at developing trustworthy and sustainable AI?” and “what interdisciplinary challenges do we need to face and solve?” were discussed.
The workshop was opened by the research director at OsloMet, Yngve Foss, followed by the NordSTAR directors Pedro Lind and Anis Yazidi.
The first presentation of the day was held by Alexander Buhmann and Christian Fieseler from the Norwegian Business School (BI). In their talk, titled Deep Learning meets Deep Democracy, they shared their research on deliberative governance and responsible innovation in artificial intelligence.
They have developed a framework of responsibilities for AI innovation, and a deliberative governance approach for enacting these responsibilities.
Henrik Skaug Sætra from Østfold University College then gave his talk, titled AI in context and the sustainable development goals.
The presentation was based on Sætra’s book of the same title. In his talk, and in more depth in the book (routledge.com), he shows how AI can potentially affect all the sustainable development goals, both positively and negatively.
The last speaker of the day was Audun Jøsang from the University of Oslo. In his talk, titled Assessing trust in IT systems, he presented some key elements and a framework for reasoning from his book “Subjective Logic: A Formalism for Reasoning Under Uncertainty” (mn.uio.no).
The workshop ended with two round tables. For the first round table, NordSTAR had invited experts from different fields and contexts to give their perspectives on trust and sustainability in AI:
- Morten Dahlbeck from Faktisk.no
- Jawad Saleemi from Ruter
- Freyja Jørgensen from Simula Research Laboratory/Gründergarasjen
- Arber Berisha from BearingPoint
- Emilia Struck from the International Consortium of Investigative Journalism
- Laurence Habib, head of the Department of Computer Science, OsloMet.
The panelists were asked whether they think artificial intelligence today is sustainable and trustworthy, and what AI experts should consider to make the tools they develop more trustworthy and sustainable.
In the first discussion, all panelists agreed that much work remains to make artificial intelligence more trustworthy and sustainable.
Transparency was mentioned as a key factor for AI experts to consider when developing tools. With regard to sustainability, they have to be aware of the carbon footprint: for example, training a single AI model can emit as much carbon as five cars over their lifetimes.
The second round table discussed how the development of trustworthy and sustainable AI will change basic AI research. The panel consisted of experts in artificial intelligence and collaborators working closely with AI:
- Alexander Buhmann & Christian Fieseler (Norwegian Business School)
- Ira Haraldsen (Oslo University Hospital)
- Helge Røsjø (Akershus University Hospital)
- Audun Jøsang (University of Oslo)
- Elena Parmiggiani (NTNU, NordSTAR)
- Michael Riegler (SimulaMet, NordSTAR)
- Henrik Sætra (Østfold University College)
- Henrik Wiig (OsloMet)
In this discussion, the panelists raised several important points. Interdisciplinary research will become more important, and more collaboration across disciplines is needed.
More people should have knowledge of artificial intelligence, and the students in the field need to understand AI in the context of trustworthiness and sustainability.
Usefulness should be considered in the context of trustworthiness: how is a process done today, and how can AI support it? Both humans and machines are flawed and biased, but by working together they can increase accuracy.
The workshop was closed by Vahid Hassani, Vice-Dean at OsloMet, who emphasized the importance of inviting groups of people with different backgrounds and perspectives to discuss big topics like sustainability and trustworthiness in AI.
The image shows the second round table. From left: Pedro Lind, Michael Riegler, Helge Røsjø, Audun Jøsang, Henrik Sætra, Henrik Wiig, Elena Parmiggiani, Ira Haraldsen.