Ethical Policy for CHAIR

This is the Ethical Policy for Chalmers AI Research Centre, prepared by the AI Ethics Committee and approved by the centre's Steering Group on May 9, 2019.


Artificial intelligence (AI) technologies already have a profound impact on the lives of most humans and on society as a whole, and this impact is likely to increase dramatically in the coming years. The impact can be for better or for worse, and it is therefore crucial that ethical considerations weigh heavily in the development, implementation, dissemination and application of AI systems. Important ethical considerations that arise in various AI settings include, but are not limited to, issues of fairness vs bias, transparency vs opacity, accountability, human autonomy, privacy and integrity, democratic participation, safety and sustainability.

With this in mind, the ethics perspective should permeate all research projects and all other activities at Chalmers AI Research Centre (CHAIR), including those that are carried out in collaboration with an external partner. It should do so from the very start of a project, including the planning, proposal and funding application stages. All calls for project proposals within CHAIR will include instructions to explicitly address ethical considerations. 

While project managers and participants always bear full responsibility for ethical concerns pertaining to the project, CHAIR leadership is nevertheless responsible for fostering an environment that cultivates informed discussion on ethical issues, as well as for supporting only projects that adequately address relevant ethical concerns.

An overarching principle is that AI systems whose risk of causing harm is not clearly outweighed by their beneficial effects should not be built or disseminated. When weighing benefit against harm, it is not always sufficient to consider the problem from the viewpoints of developers, owners and users of an AI system; in many cases, it is also necessary to consider further stakeholders, including third parties affected by the use, as well as effects on the environment. This fundamental principle should never be allowed to be overridden by commercial, military or other considerations.

It should be recognized that an action is not automatically ethically justified or ethically permissible just because it is legal. Furthermore, it does not suffice to focus solely on the direct effects of an AI system: it is also necessary to consider its possible indirect or longer-term consequences, such as the risk of contributing to an AI arms race or of being integrated into a system that can spiral out of control. 

Finally, it should be noted that ethical AI is not only about avoiding unethical actions or harmful consequences, but even more about developing and using AI to bring about good consequences, such as equity, accessibility for the disabled, humanitarian action, environmental protection, human flourishing and a sustainable society.


Annotated Ethics Policy (June 2020): AnnotatedEthicsPolicy_2020.pdf

Published: Wed 02 Sep 2020.