Have you ever wondered how Google and Facebook always manage to show you advertisements for the thing you were just thinking about?
Artificial intelligence, commonly referred to as AI, forms the backbone of the programmes that collect your data and determine what ads to show you.
That same kind of AI might help design therapeutic drugs tailored to your specific illness or guide your car safely down the highway. But as AI grows more complex and requires more processing power, it becomes harder to assure people it will work as intended.
The Nordic Centre for Sustainable and Trustworthy AI research (NordSTAR) is one of five OsloMet research groups recently granted the Centre of Research Excellence designation.
The centre's goal is to establish a new tradition of research on sustainable and trustworthy AI.
Building the team
Professors Pedro Lind and Anis Yazidi, two researchers at the OsloMet - SimulaMet AI Lab, assembled NordSTAR to build the trust and sustainability required for AI's successful future.
"We are all from AI labs at OsloMet but we didn't feel like we had an overall focus on basic AI research," Lind explains.
"Being named a Centre of Research Excellence means we can now merge different competencies at our labs into one common goal and take part in large-scale projects in Norway, Europe, and worldwide."
NordSTAR combines technical AI experts in application design, cyber security, quantum-computing-based AI, and biologically inspired AI with social science and ethics experts who study how humans interact with AI.
Together, this diverse group of researchers is developing new principles to make AI trustworthy by design and, in the process, a more sustainable technology.
Understanding the machines
Older AI systems were based on well-understood principles of statistics and probability. Today's most powerful AI systems are more like a black box: they adapt and change themselves so much that not even their programmers can tell what is going on inside.
These modern AI systems, known as neural networks, learn in much the same way the human brain does and can involve millions of variables and complex operations.
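To give a sense of that scale, here is a back-of-the-envelope calculation (an illustrative sketch, not a NordSTAR system): even a modest fully connected network, with layer sizes chosen here purely as an example, already has tens of millions of trainable variables.

```python
# Count the trainable parameters in a hypothetical fully connected network.
# A layer with n_in inputs and n_out outputs has n_in * n_out weights
# plus n_out bias terms.
def count_parameters(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

# Example: a small image classifier taking a 224 x 224 greyscale image,
# with two hidden layers and 10 output classes.
sizes = [224 * 224, 512, 256, 10]
print(count_parameters(sizes))  # roughly 25.8 million parameters
```

Each of those parameters is adjusted automatically during training, which is why no human can simply read off the reasoning behind a trained network's decisions.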
"Some AI systems can surpass humans in cognitive tasks, but it’s often impossible to know the logic behind the AI's decisions," Yazidi explains. "We want to make tools that can be understood and explained so they can be used and trusted by people who aren’t computer scientists."
Trust for experts and the public
To build trust in AI tools, NordSTAR researchers will work alongside the experts who use them. For example, the computer scientists on the team might help a cardiologist decide whether a patient has heart disease by developing an algorithm that analyses ECG data.
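In highly simplified form, such a decision-support tool might look like the toy sketch below. Everything here is hypothetical: the two ECG-derived features (heart-rate variability and QRS duration), the synthetic data, and the simple logistic-regression model are illustrative stand-ins, not NordSTAR's actual methods.

```python
# Toy decision-support sketch: logistic regression on two hypothetical
# ECG-derived features, trained on entirely synthetic data.
import math
import random

random.seed(0)

# Synthetic training set: (hrv, qrs, label), label 1 = "at risk".
data = []
for _ in range(200):
    healthy = random.random() < 0.5
    hrv = random.gauss(60 if healthy else 30, 8)    # heart-rate variability (ms)
    qrs = random.gauss(90 if healthy else 130, 10)  # QRS duration (ms)
    data.append((hrv, qrs, 0 if healthy else 1))

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1 / (1 + math.exp(-z))
    e = math.exp(z)
    return e / (1 + e)

# Fit the weights with plain stochastic gradient descent on the logistic loss.
w_hrv, w_qrs, b = 0.0, 0.0, 0.0
lr = 0.001
for _ in range(500):
    for hrv, qrs, y in data:
        err = sigmoid(w_hrv * hrv + w_qrs * qrs + b) - y
        w_hrv -= lr * err * hrv
        w_qrs -= lr * err * qrs
        b -= lr * err

def predict(hrv, qrs):
    """Estimated probability that a patient with these features is at risk."""
    return sigmoid(w_hrv * hrv + w_qrs * qrs + b)

print(predict(62, 88))   # healthy-looking profile: probability near 0
print(predict(28, 135))  # concerning profile: probability near 1
```

Unlike a deep neural network, a small model like this one is easy to inspect: the two learned weights directly show how each feature pushes the prediction, which is the kind of explainability the centre aims to preserve in more powerful systems.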
Once the tool is ready, the social scientists will get feedback from the cardiologists who use it. The next version of the AI tool will then build on their comments and concerns.
Trust building also extends to public interactions with AI. New technologies like quantum computing will give us exceptionally fast searches but may also make our current encryption useless.
AI can give us autonomous vehicles or try to predict crime rates, but if the data going in is missing information about traffic rules or is biased against certain groups of people, the algorithms may yield disastrous results.
NordSTAR's work to develop "good AI" will reduce these risks, allowing people to take advantage of AI's potential to produce better results than humans can.
The future of NordSTAR
As a newly established Centre of Research Excellence, NordSTAR will have access to expanded resources to conduct research within the university and develop national and international collaborations. This will give them a platform to share their vision for sustainable and trustworthy AI.
In addition, the centre will receive NOK 1 million each year through 2024 and support for graduate students and postdocs. The team plans to apply to be a Centre of Excellence (SFF) in 2025.