Some thoughts on AI and Ethics

“What is your vision of a good and resilient digital society?” I asked my students last week, at the end of my lecture at Karlshochschule International University. One of them admitted: “I don’t know. I am pessimistic.” And I said: “Yes, I know. It is hard to think of a positive vision when we are pessimistic. But we have to.” So we did. And when those young students finally expressed their visions, a picture arose of a digital society that is inclusive, democratic and transparent. Where people are more aware and less harsh. A digital society with a high level of digital literacy and responsibility. One of the students claimed access as a fundamental right. I liked the vision they pictured. It will be those young people who will shape and form the digital society of our future. And they are right. They shared a good and positive vision of how it should be.

In the lectures before, we had been talking about transparency as a means to make decision processes visible. Implicit and invisible decision processes mask power and hide hierarchies. Only what is visible can be verified, handled and critically thought of. The central question in digital public space is “Who decides?”, I told them.

Always ask: Who decides?

Of course we mentioned AI and the challenges it brings for the future. How can we make the decision processes of Artificial Intelligence transparent and verifiable? Wikipedia, for example, as a non-profit, collaboratively written online encyclopaedia, documents any changes to its texts in a detailed edit history. Statements have to be backed by a secondary source, which makes them verifiable. Wikipedia is a tertiary source that refers to secondary sources. Artificial Intelligence – as in ChatGPT or assistants like Alexa and Siri – uses the open data provided by Wikipedia, but also immense amounts of data crawled from the digital space, without quoting or naming those sources. We could say that AI is a quaternary source that no longer names its sources, neither the tertiary nor the secondary ones, which makes it nontransparent and unverifiable. It takes users with a high level of digital literacy to decode and frame its results. Artificial Intelligence needs critical thinkers and a lot of intelligence on the other side of the screen.
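
To make this concrete: Wikipedia’s edit history is not just a principle but a public interface that anyone can query. Here is a minimal sketch in Python using the public MediaWiki API; the article title and the limit of five revisions are arbitrary choices for illustration.

```python
import requests

# Query the public MediaWiki API for the most recent revisions of an article.
# Every edit is recorded with author, timestamp and edit summary, which is
# what makes Wikipedia's decision processes visible and verifiable.
API_URL = "https://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "prop": "revisions",
    "titles": "Artificial intelligence",   # any article title works here
    "rvprop": "timestamp|user|comment",
    "rvlimit": 5,                          # the five most recent edits
    "format": "json",
}

response = requests.get(API_URL, params=params, timeout=10)
response.raise_for_status()

pages = response.json()["query"]["pages"]
for page in pages.values():
    for rev in page.get("revisions", []):
        print(rev["timestamp"], rev["user"], "-", rev.get("comment", ""))
```

Every line this prints names a decision and the person who made it. That is exactly the kind of traceability that current AI systems do not offer for their outputs.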

Only what is visible can be critically thought of

I had to think of this when I read what is happening right now within OpenAI, the company that created ChatGPT. And that adds another aspect, an important one: not only the products that are generated by AI need ethical framing and transparency. The institutions that conduct AI research and development, the companies and organisations themselves, need ethical framing and transparency too. The questions “Who decides?” and “How can we make the decision processes transparent?” are just as crucial at that organisational meta-level.

There seems to have been considerable uproar inside the supervisory board of OpenAI: the board dismissed CEO Sam Altman, due to what were assumed to be strategic incompatibilities. One of the main investors, Microsoft, offered to take him and his team on. But only four days later Altman came back to OpenAI as CEO, and the supervisory board was reconstituted; it seems the old board has been dismissed and replaced. Declarations and counter-declarations, dismissals and reappointments. Turmoil, rumor, ups and downs. What is going on there?

Responsible AI governance

Having been on a supervisory board myself for eight years (at Wikimedia Germany), I find this very questionable and irritating. I am not interested in who is right or wrong. I am interested in the fundamental question that occurs to me here – and it is exactly the same question I have been discussing with my students: Who decides? Why? Based on which assumptions? Where are the hidden and implicit structures of power and decision making?

No matter what the background of this particular case is, it shows that we really need a good framing for AI. The processes of decision making and control should be clear. And not only for AI itself, but also for the organisations that create, run and launch AI projects. We should ask: What should the governance of a responsible AI organisation look like?

Balancing ethics and profits

We need an ethical framework beyond the specific interests of companies. How do we balance economic interests and societal responsibility? That does not have to be a contradiction. But we have to define how we understand it. As we can see, there will always be the risk that even projects which started with a non-profit and open-source approach get on the profit track when investors step in. In the field of Artificial Intelligence this will practically always be the case: there will be no AI without Big Tech, as it takes immense effort and funding to run AI projects successfully.

So what should responsible AI company governance look like? It should be as my students defined it in their vision: the decision-making processes should be transparent. Only what is visible can be critically thought of, corrected and improved. The company’s own ethical framework and identity should be stable. The controlling and supervisory system needs to be explicit and functioning. It should not be possible to take it over by a coup. Where are the checks and balances? Which instance corrects wrong decisions? What is “wrong”? How do we define that? Who defines it?

It needs ethical framing

Artificial intelligence is such an incredibly strong technology that it has to be framed by cultural techniques that are even stronger. How can we expect users to deal responsibly with AI if not even the companies themselves manage to? We need responsible and ethical framing on all three levels: the individual, the organisational and the societal. We need a clear compass of what we define as right or wrong, based on transparent negotiation processes. We should not shy away from these questions. Au contraire: we have to be bold. And we have a lot of work to do. All of us.

Crossposted on https://www.linkedin.com/pulse/some-thoughts-ai-ethics-sabria-david-vobte

