I Used To Work In the UK Government’s AI Risk Team. When I Raised Ethical Concerns, They Retaliated With Intimidation and Surveillance.

Whistleblowing in AI: A Disturbing Personal Account from Within Government

Greetings, readers,

I’d like to share a candid account of my experience working in the UK government’s Central AI Risk Function. My team had a critical mission: to help ensure that Artificial Intelligence technologies are deployed ethically and to mitigate risks of bias and discrimination. What I encountered within that team, however, raised serious ethical concerns of its own, and I felt compelled to voice them.

Unfortunately, instead of an open conversation, my attempts to raise these issues were met with a series of unsettling responses. I was subjected to surveillance, intimidation, and even lockouts from my team's systems, which I can only read as institutional retaliation against those who dare to speak up.

Over the past few weeks, I have compiled a comprehensive archive documenting these events. It includes not only evidence of what I faced but also legal analysis and reflections on the implications for AI governance going forward.

I’m eager to hear from others concerned about the future of whistleblowing in the technology sector, particularly in government. Under the systems we currently have, is public accountability for AI ethics even feasible?

I invite you to share your thoughts, experiences, or questions related to this pressing topic. Let’s foster a dialogue about the path ahead for ethical considerations in the evolving landscape of Artificial Intelligence.
