Trustability
It all started on Medium
As some of you already know, I write a little bit over on Medium. When I first started I tried submitting to the very excellent ‘Philosophy Today’. It’s a Medium publication that allows readers to follow topics of interest. I liked the content that I saw there and so decided to pitch a few things.
The editors there (Matt Fujimoto and Romaric Jannel) are both great: kind, encouraging and really helpful in getting me to shape that work for that…medium.
You can check out some examples of my work here and here.
Over time, Romaric and I chatted more and more about trust. We kicked about a few ideas and, in the end, we've just published this article in the mainstream academic journal AI and Ethics. I'll talk a bit about the article in a moment.
What I wanted to talk about first, though, was the experience.
When I started writing on Medium, I was not looking for an academic collaborator. The point of writing for audiences on Medium and Substack was to reach beyond the academic sphere and find non-philosophers. I think I’ve managed to do that to some extent. And that’s been good.
But that I didn’t intend to find a collaborator is rather the point of this post.
I'm reminded constantly of the ways in which my academic career has been full of surprises. My work on the philosophy of time was never done with the intention of producing video content. But here I am.
I look about 14 in that video.
And the video content was not produced with the intention of becoming a philosophical advisor to an SME. But that happened too. And all of these little turns that my career has taken have moved me to a different place, given me different skills and insights, and helped me to see the world in a different way.
They have all been profoundly positive experiences in one way or another.
And so while I did not join Medium (or Substack!) with the intention of finding new collaborators, I am very glad that I have.
The paper?
Ah, yes, the paper. What does it do?
The paper introduces and defends a distinction between trustability and trustworthiness in the context of trust relationships, particularly concerning AI systems. We argue that trustability is a logical precondition for trust—whether an entity is even the kind of thing that can be trusted. It’s a “gatekeeping status” that determines if trust can coherently apply at all.
Trustworthiness, by contrast, concerns whether an entity that can be trusted actually merits trust based on its properties and behavior. We argue that many people treat AI systems as trustable, and we go on to argue that this is a category error with ethical consequences. Current AI lacks “dependence-responsiveness” (the capacity to recognize and respond to someone’s reliance on it), which is necessary for genuine trust.
When trustability isn’t met, the appropriate stance is reliance with accountability rather than trust. The paper argues this distinction helps clarify when trust in AI is not just misplaced but structurally incoherent, providing a framework for better governance and design of AI systems.
If you want to check it out, you can find it online here.

I'm very glad you did! It was not in my plans either; working on the philosophy of trust was not in my plans at all. But today, I do understand why I found your work so interesting from the start, and I am pretty sure I will continue to deal with that question. (I would be glad to do so with you, for sure!)
Working on that topic also helps me to see more clearly other questions that I was working on. So, thank you 🙏