~~A distinguished technology ethics expert combining rigorous academic research with practical policy impact in AI governance and digital rights.~~ I wrote this when I thought using LLMs to condense my CV made sense… So, let’s start again and try to write this in a way that doesn’t bore even me.
I want to help organizations see what they are actually not prepared for: the collapse of costly signals and trust mechanisms.
Passionate about: machines that can produce reasons. And how that will change everything. Not the Superintelligence story, but the Trust and Transparency story.
The longer story: I spent 10 years in academia publishing on trust, transparency and bias in AI, as a philosopher within (and later leading my own) interdisciplinary teams. https://philpeople.org/profiles/michele-loi
Then ChatGPT made me stop and rethink everything. Here’s what I now think is important:
1) Organizations will create new forms of discrimination while trying to filter out AI slop.
2) Institutions need to redesign how they verify and trust work in an AI world.
3) Individuals need new ways to maintain credibility when everyone can generate expert-sounding output.
Let’s talk about what that means for your organization.