An inspiring afternoon on AI with many angles

Henrik Berglund introduced Professor Daron Acemoglu, MIT, who spoke remotely from Boston.

On February 7, the Information and Communication Technology Area of Advance hosted the seminar AI, Economy and Societal Impact. About 150 people attended the seminar in Studion at Johanneberg Science Center, and almost as many followed it online.

AI’s impact on society is powerful and has many layers, as we learned in excellent talks by Hannah Ruschemeier, Daron Acemoglu (remote), and Shalom Lappin. Many questions were raised, and the panel discussion at the end of the day, moderated by Devdatt Dubhashi and Henrik Berglund, concluded that this was a useful and important seminar that covered legal and political issues as well as technical aspects.

Regulating AI is about regulating power

The seminar’s first speaker, Hannah Ruschemeier, Junior Professor of Public Law, Data Protection Law, and Law of the Digital Transformation at the University of Hagen, gave an inspiring talk about the normative challenges of AI and the new type of power that comes with it.

Prof. Ruschemeier discussed power structures and the concept of data power. AI needs data. Humans produce data. So AI is dependent on humans. And now everything can be data; there is no irrelevant data anymore.

Almost nothing on the internet is free. You always pay with your data. But there is no price tag on it.

Hannah Ruschemeier, Junior Professor of Public Law, Data Protection Law, and Law of the Digital Transformation at the University of Hagen

Hannah Ruschemeier also addressed systematic violations of law and rights. She pointed out that the big players seem to have adopted a new attitude: act first and face the legal consequences later. Until now, the norm has been to act in accordance with the law, not the other way around. This, together with the tendency for new norms to be set by private actors in parallel with those of the state, can be problematic.

The last part of Prof. Ruschemeier’s presentation covered the AI Act and the Digital Services Act (DSA), and she ended with a smile and the conclusion that “not all hope is lost”.

Rethinking AI

Next was Daron Acemoglu, Institute Professor at MIT and prominent economist, who spoke remotely from Boston, Massachusetts, USA. Prof. Acemoglu opened his presentation by saying that there is something wrong with the way we are focusing on AI right now, and that there are steps we need to take. He continued with the question of how to do generative AI better.

Prof. Acemoglu identified four roadblocks to the pro-human vision of AI: excessive automation, loss of informational diversity, misalignment between human cognition and AI algorithms, and, finally, control of information.

In his reasoning, Daron Acemoglu discussed questions such as whether AI is digging its own grave: if everyone uses LLMs for information, who will produce new information? He also addressed how humans may misinterpret or mistrust algorithmic recommendations, or overreact to certain types of information. Finally, he raised the questions of who controls information, who benefits from it, and how it can be used in misleading ways.

Concluding his presentation, Prof. Acemoglu argued that redirecting AI to make it more pro-worker and information-democratic is a possible way to make generative AI better. He noted, though, that this is not where we are heading: the development of AI tools is privatized and driven by incentives other than social ones.

Towards smaller and more transparent deep learning models

The third speaker of the day, Professor Shalom Lappin, Queen Mary University of London, University of Gothenburg, and King’s College London, presented what he called an internal point of view, from within the development of AI.

He first gave some background on large language models (LLMs) and their importance for the development of AI and tools like ChatGPT. He then described how these models require massive amounts of data and computing power for pre-training; GPT-4, for example, uses as much electrical power as a small American town. Like the speakers before him, Prof. Lappin pointed out that this is a serious problem: it concentrates monopoly power in the hands of the large tech companies that have the resources to develop, train, and support LLMs. Consequently, researchers, students, and startups have all become clients of these companies.

Prof. Lappin concluded his talk by pointing out the importance of smaller and more transparent deep learning systems, which would facilitate the development of innovative alternative architectures and designs.

Reflections on the day

At the end of the day, the panel looked ahead, reflecting on how to design law for the future and on the importance of finding systemic solutions and collective directions. They also noted that more people with technical competence are needed in politics. Another question was how academia can influence future development; here, the panel stressed the importance of steering research funding, but also of continuing to provide critical education to students.

If you missed the event, you can still watch it on Chalmers’ YouTube channel.


Author

Ulrika Avedal Åberg