I’ve been fortunate to have joined Bluesky in its early days. It’s opened up invite codes and is growing rapidly, even though it is still in “beta”. It’s been an excellent experience so far and I’m hopeful it will become the default approach to running a Twitter-like social media experience.
Bluesky's CEO, Jay Graber, has shared her concept of composable moderation and how it works in decentralised social media:
Centralized social platforms delegate all moderation to a central set of admins whose policies are set by one company. This is a bit like resolving all disputes at the level of the Supreme Court. Federated networks delegate moderation decisions to server admins. This is more like resolving disputes at a state government level, which is better because you can move to a new state if you don’t like your state’s decisions — but moving is usually difficult and expensive in other networks. We’ve improved on this situation by making it easier to switch servers, and by separating moderation out into structurally independent services.
We’re calling the location-independent moderation infrastructure “community labeling” because you can opt-in to an online community’s moderation system that’s not necessarily tied to the server you’re on.
So for now I have the following settings at a server level. It is awesome to have this level of control. Although the moderation system is still learning to distinguish one kind of content from another, it's getting better.
And eventually …
- Anyone can create a label set, then add admins or mods to help manage it
- Mods can add labels that the set defines to accounts or content (“rude”, “troll”, etc.)
- Anyone can subscribe to the set and have the labels be applied to their experience
Anyone should be able to create moderation labels, and anyone should be able to subscribe to labels that third parties create.
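To make the idea above a bit more concrete, here is a minimal sketch of what a label set and a subscriber's filtered view might look like. All type and field names are hypothetical illustrations of the concept, not the actual AT Protocol schema or any real Bluesky API.

```typescript
// Hypothetical model of a community label set and how a subscriber's
// timeline could be filtered by it.

type Label = "rude" | "troll" | "spam";

interface LabelSet {
  id: string;       // e.g. "no-trolls"
  admins: string[]; // accounts allowed to manage the set
  mods: string[];   // accounts allowed to apply labels
  labels: Label[];  // the labels this set defines
}

interface LabelEvent {
  setId: string;    // which label set the label belongs to
  label: Label;
  subject: string;  // URI of the labelled post or account
  createdBy: string; // the mod who applied it
}

interface Post {
  uri: string;
  author: string;
  text: string;
}

// A subscriber's view: hide any post (or post by an account) that carries
// a label from a set they have opted in to.
function filterTimeline(
  posts: Post[],
  events: LabelEvent[],
  subscribedSets: Set<string>
): Post[] {
  const hidden = new Set(
    events
      .filter((e) => subscribedSets.has(e.setId))
      .map((e) => e.subject)
  );
  return posts.filter((p) => !hidden.has(p.uri) && !hidden.has(p.author));
}
```

The appeal is that the filtering step lives with the subscriber, not the server, so two people on the same server can opt in to entirely different label sets.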
Sounds brill. I very much welcome a world where folks can have a safer, cleaner, and hopefully less toxic social experience.
I’m also a software tester and often like to explore risks – the things that threaten the value of a product/service/feature.
What risks could you imagine with a composable moderation approach? How might it be abused?
Some thoughts:
- What happens if a server creates a “No Trolls” label set and a mod decides to delete it without warning? Where does that leave the people subscribing to it? I guess they find another “no trolls” label to subscribe to. Or maybe there’s a timed lock on deleting a label that notifies subscribers it will be removed in x days’ time, perhaps defaulting to 7 days (see the sketch after this list).
- What happens if a label set is a trick? As in, someone believes they are subscribing to “Blocked Nazis” only to discover it’s a trap: a feed full of white supremacists posting racist content. It sounds like it will be essential to be able to report label sets to a super admin for rapid review (and takedown).
- What level of security should a label hold? As in, what would happen if a bad actor hacked a mod account, gained access to label admin, and started allowing the very content the label is meant to hide?
- How can mods have a space to discuss what goes into a label set and what doesn’t? Should this be included in the label creation/support system, or is it just assumed it will happen in whatever way the mods wish to communicate? I think there’s value in offering it as part of the composable moderation backend/mod/admin view
- How might users of a label share feedback on the usefulness of the label? Could that feedback mechanism be abused/gamed? How do you protect the feedback mechanism from the very people that the label is attempting to protect others from?
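For the first bullet above, here is a rough sketch of how a timed deletion lock might work: deleting a label set only schedules its removal, subscribers are notified, and the deletion takes effect after a grace period defaulting to 7 days. The function names, notification mechanism, and grace-period handling are all my own assumptions, not anything Bluesky has described.

```typescript
// Hypothetical grace-period flow for deleting a label set.

const DEFAULT_GRACE_PERIOD_DAYS = 7;

interface PendingDeletion {
  setId: string;
  requestedBy: string;
  effectiveAt: Date;
}

// Instead of deleting immediately, record a pending deletion and warn
// every subscriber so they have time to find an alternative label set.
function scheduleLabelSetDeletion(
  setId: string,
  requestedBy: string,
  subscribers: string[],
  notify: (account: string, message: string) => void,
  graceDays: number = DEFAULT_GRACE_PERIOD_DAYS
): PendingDeletion {
  const effectiveAt = new Date(Date.now() + graceDays * 24 * 60 * 60 * 1000);
  for (const account of subscribers) {
    notify(
      account,
      `The label set "${setId}" will be removed on ${effectiveAt.toISOString()}. ` +
        `You may want to subscribe to an alternative before then.`
    );
  }
  return { setId, requestedBy, effectiveAt };
}

// The actual removal only goes ahead once the grace period has elapsed.
function canDeleteNow(pending: PendingDeletion, now: Date = new Date()): boolean {
  return now >= pending.effectiveAt;
}
```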
What other risks do you imagine with a composable moderation approach?
Let’s use our moderation experience and help the folks at Bluesky. It seems they are genuinely trying to build something that moves social media in a positive direction. And as a community professional, I think it would be cool to be part of that.