Part 2 established that community membership issuance and service governance should be governed by separate primitives — Rosters and Venues, respectively. But just separating these two does not, on its own, deliver credible exit. If verifying a credential requires a call to the issuer, then the credential is still dependent on the issuer’s infrastructure.
This is where furryli.st stands today. Any service that wishes to ask “Is this person a member?” has exactly one place to ask. If furryli.st were to disappear tomorrow, that place, and by extension, that answer, would be unresolvable.
How, then, can we make sure that a credential can outlive the entity that issued it?
To answer this, we can look back at the AT Protocol’s core innovation of self-authenticating data. Every repository within the AT Protocol is a Merkle tree whose root is signed by its controlling DID. A post’s authorship and integrity can be verified using only the record itself, the tree, and the author’s DID document. The data carries its own proof.
This is what makes credible exit real and tangible for user data. Your posts can survive your PDS going offline because the proof travels with your data, rather than your infrastructure.
I believe we can do the same thing for community belonging.
Why Simple Records Don’t Work
The most obvious approach here would be for the user to copy a simple record from the Roster’s repository. Standard AT Protocol records are authenticated through repository commit chains: the Merkle tree guarantees integrity, and the DID’s signing key guarantees authorship.
But therein lies the issue — the Merkle tree. The structure that resides within the repository.
This means that the authentication itself depends on repository availability. If a service needs to verify “did this Roster credential this user?”, it needs to walk the Roster’s commit chain, which means the Roster’s repository has to be accessible.
For normal social data meant to be handled by one agent, this is a fine and acceptable compromise. However, for community membership — that is, data that must survive the Roster’s failure — it brings back the very dependency we are trying to eliminate.
This means we need a different approach that doesn’t depend on any repository being online.
Embedded Signatures
Credentials must be a signed statement that attests the following: “This Roster, identified by this DID, attests that this user met these membership criteria at this time.” This statement must be verifiable through two things, and two things only: the credential itself, and the Roster’s DID document.
Concretely: an Ed25519 signature produced by the Roster’s signing key, embedded directly within the credential record. This credential lives in the member’s repository, not the Roster’s. Any service, at any time, can validate the credential by checking the signature against the Roster’s public key, without a call to the Roster or cooperation from the Roster whatsoever.
Four key properties follow from this design. Credentials become:
Member-custodied. The credential lives in the member’s repository—not the Roster’s, a shared registry, or a third-party service.
Immutable. Once issued, the credential cannot be modified by anyone. This makes the credential a fact, rather than a permission slip.
Self-authenticating. Verification requires only the credential and the Roster’s DID document. There are no network calls, status checks, or cooperation required. This credential is verifiable in the same way every other record in the AT Protocol is verifiable — the proof is in the data itself.
Portable. The credential lives in the repository just as any other record, with all the portability guarantees this provides. A PDS migration carries it automatically. The community membership survives every infrastructure change the member makes, just as posts and identity do.
These four properties extend the same self-authenticating guarantees the protocol provides for content to community membership. A credential is verifiable, portable, and independent of any operator.
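The verification flow described above can be sketched in a few lines. This is a minimal illustration, not the actual credential format: the field names are hypothetical, and HMAC-SHA256 stands in for Ed25519 purely so the sketch stays stdlib-only. In a real implementation the Roster would sign with its Ed25519 private key, and any verifier would check against the public key published in the Roster’s DID document — no shared secret, and no call to the Roster.

```python
import hashlib
import hmac
import json

# NOTE: HMAC stands in for Ed25519 here to keep this sketch stdlib-only.
# The point being illustrated is the flow: the signature is embedded in
# the record, and verification needs only the record plus key material.

def canonical_bytes(fields: dict) -> bytes:
    # Deterministic serialization so signer and verifier hash identical bytes.
    return json.dumps(fields, sort_keys=True, separators=(",", ":")).encode()

def issue_credential(roster_key: bytes, roster_did: str,
                     member_did: str, issued_at: str) -> dict:
    fields = {
        "issuer": roster_did,    # "This Roster, identified by this DID..."
        "subject": member_did,   # "...attests that this user..."
        "issuedAt": issued_at,   # "...met these criteria at this time."
    }
    sig = hmac.new(roster_key, canonical_bytes(fields), hashlib.sha256).hexdigest()
    return {**fields, "sig": sig}  # signature embedded directly in the record

def verify_credential(cred: dict, roster_key: bytes) -> bool:
    # Needs only the credential and the Roster's key material —
    # no network call, no repository walk, no Roster cooperation.
    fields = {k: v for k, v in cred.items() if k != "sig"}
    expected = hmac.new(roster_key, canonical_bytes(fields), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])

key = b"roster-signing-key"
cred = issue_credential(key, "did:plc:roster", "did:plc:member",
                        "2025-01-01T00:00:00Z")
assert verify_credential(cred, key)
# Any tampering — here, swapping the subject — breaks the signature.
assert not verify_credential({**cred, "subject": "did:plc:attacker"}, key)
```

The credential record would live in the member’s repository; the sketch deliberately never touches it after issuance.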
The Trust Set
Credentials are the member’s proof of initial endorsement. However, the Roster still needs a voice after a credential is issued. Community criteria evolve, and, occasionally, someone who passed initial vetting might turn out to be a bad actor. The Roster needs a way to communicate its present judgement to downstream services, independently of the credentials it has already issued.
This is the trust set — or the set of members the Roster actively vouches for.
The trust set, much like a label, acts as an editorial position. While issuing a credential communicates “This member met our criteria at some point in time”, the trust set communicates “This person meets our criteria right now”.
This is the closest analogue of what we actually do in furryli.st. We maintain a list of members we have not withdrawn trust from. The trust set is what a feed generator checks when it wants to scope to “people who furryli.st has not withdrawn trust from”.
The Cluster
When a Roster removes someone from the trust set, the credential doesn’t disappear. It’s in the member’s repository, and remains self-authenticating even if the Roster withdrew its endorsement.
This results in two sets. One is the smaller trust set — the people the Roster actively endorses. The larger set is everyone who currently holds a valid credential from this Roster (let’s call this the Cluster). Every member of the active trust set is in the Cluster, but not vice versa. The Cluster can only shrink at the discretion of its members. And, because the Cluster is available to all and owned by none, it’s a commons that any service can build from.
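The relationship between the two sets can be stated as a few invariants. The DIDs below are placeholders; the point is who can shrink which set.

```python
# Hypothetical member DIDs.
cluster = {"did:plc:a", "did:plc:b", "did:plc:c", "did:plc:d"}  # valid credential holders
trust_set = {"did:plc:a", "did:plc:b", "did:plc:c"}             # actively endorsed

# Invariant: the trust set is always contained in the Cluster.
assert trust_set <= cluster

# The Roster withdraws trust from one member: the trust set shrinks,
# but the credential — and therefore Cluster membership — is untouched.
trust_set -= {"did:plc:c"}
assert "did:plc:c" in cluster and "did:plc:c" not in trust_set

# Only the member can shrink the Cluster, by deleting their own credential.
cluster -= {"did:plc:d"}
assert trust_set <= cluster
```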
Acknowledgement: DID Dependency
Everything I’ve described in this proposal is contingent on one dependency: the ability to resolve the Roster’s public key at verification time.
A credential says “This was signed by key A belonging to DID B”. Verification requires looking up DID B’s document to confirm that key A is, or was, associated with it. This creates three distinct failure cases:
DID resolution unavailable. The infrastructure that resolves the Roster’s DID can go offline. This means no one can look up the DID document and, consequently, no one can verify any credential from that Roster.
Key rotation. The Roster rotates its signing key. Credentials signed by the old key are still valid, but a verifier checking the current DID document will not find the old key. If key history is unavailable, old credentials become unverifiable.
Key compromise. The Roster’s signing key is stolen. An attacker can forge credentials indistinguishable from real ones. The Roster rotates to a new key, but forged credentials now exist in the Cluster.
How severe and recoverable these failures are depends on properties that the credential design itself cannot control—the precise failure state depends on the DID method.
did:plc provides the strongest foundation for credentials of the two DID methods supported in the AT Protocol today. Its audit log preserves key history, so timestamped credential verification is possible. However, it’s operationally dependent on PLC governance.
did:web yields operational independence from PLC, but at a significant cost—there is no key history. Resolution depends on a domain the Roster alone controls, which means the Roster’s infrastructure failure takes DID resolution with it. This is in fundamental tension with credentials designed to survive the issuer’s disappearance.
Other, currently unsupported DID methods occupy various points along this spectrum. The credential design is compatible with all of them, but the durability guarantees change with each tradeoff the method makes.
This means that, for Rosters, the choice of DID method is itself a governance decision, with extremely important consequences for the credentials it issues. Rosters that use did:plc accept PLC governance as a dependency in exchange for a persistent key history. Rosters that use did:web avoid that dependency but forgo key history entirely, which results in significantly weaker credential guarantees. This tradeoff needs to be visible.
Even within did:plc, there are still gaps, though. If the PLC directory becomes unavailable, resolution is broken. And key compromise can’t be addressed through key history alone, since a compromised key can still produce valid-looking signatures regardless.
There is an architectural property worth noting here, though: the trust set yields a partial defense against key compromise. If a Roster’s key is compromised, forged credentials may enter the Cluster, but, after the fact, they can be withdrawn from the trust set — which the Roster maintains with its new key after rotation. Venues that scope strictly to the trust set are protected, but Venues that scope to the full Cluster are exposed. This could be mitigated by embedding the reason for withdrawal in withdrawal records, but that is an implementation detail I will defer for the time being.
For the remaining gaps, I can think of three possible solutions:
Embedded-key credentials. Rosters include their public key directly in the credential alongside the signature. This makes credentials self-verifiable without any DID resolution at all—but it proves only that the signature matches the embedded key, not that the key ever belonged to the issuing DID, absent external infrastructure.
Venue-local public key caching. Venues that scope to a Roster already have a defined relationship with it. They could cache the Roster’s key history in a record as part of normal operations. This can handle resolution unavailability for established Venue-Roster relationships, and can ensure persistence if Roster infrastructure goes offline. Venues encountering the Roster for the first time might be able to use a consensus system among existing Venues to derive key history.
Independent key archival services. Dedicated services that mirror DID document history and make it independently queryable. A PLC lite that keeps append-only logs of Roster key histories. This follows the same pattern as relays mirroring repository data, and could add a competitive surface for infrastructure availability that itself reinforces integrity. It would come at the cost of one more service type in the ecosystem, however.
None of these addresses every problem. The right approach will probably require some combination — perhaps PLC acting as the recommended default for Rosters that prioritize credential durability, timestamped credentials with embedded keys as a verification baseline, and Venue-local caching and independent archival collectively acting as layers of resilience for methods with weaker native guarantees.
This is far from a solved problem, though, and the design space is open. The tradeoffs here intersect with DID method governance, cryptographic infrastructure, and ecosystem assumptions that I, as one person, cannot and should not unilaterally try to resolve. The credential design is compatible with any or all of the potential mitigations above. The question, really, is which combination the ecosystem converges on, and whether DID methods with credential-friendly properties can emerge as community infrastructure becomes a real use case.
For the remainder of this proposal, I will assume that Rosters use did:plc, that PLC’s log is available, and that timestamped credentials provide a clean verification path. This is a simplification: it defers complexity onto independently maintained infrastructure that is not guaranteed to exist, and it does not reflect the full range of DID methods the credential design can technically support.
Acknowledgement: Safety and Privacy
The Cluster’s permanence is intended as a bulwark for community resilience, but there is a real, serious cost to individual privacy that I must address.
A credential, as described, is a fundamentally different kind of record from a post or a follow. It is a persistent, public attestation that a person was endorsed by a specific community at a specific time. A member who leaves the community can delete the credential from their repository, but any service that previously indexed it has already seen it. The Cluster, by design, is really a one-way ratchet that only its members can shrink.
For communities where membership is not really a concern, this is fine. But, for communities where membership could carry present or future personal risk — which for many could include, I must acknowledge, the furry community — this is a real and serious tradeoff. Credentials that are designed to survive the issuer’s failure are also credentials that survive a member’s, a community’s, or the world’s change of circumstances.
This problem is not unique to Composable Trust, of course. It’s just a specific and particularly aggravated instance of an ongoing tension between self-authentication and privacy within the AT Protocol. In a public data model, self-authentication works because data is public. Privacy requires that the credential not be publicly readable, which means verification can’t work through simple inspection. Something else has to take its place, and every option necessitates tradeoffs the public model doesn’t have.
There are four questions I can think of that need answers before this design can responsibly serve privacy-sensitive communities:
How does a Venue scope to credentials it can’t see?
In a public data model, an AppView or service can read public repositories and index anyone holding a given credential. This makes scoping pull-based. If credentials are private, the service can’t see them, meaning it can’t pull them. This necessitates a shift to some kind of push-based interaction.
This is a real and meaningful change in the UX. Pull-based scoping is what makes the public model feel seamless. A push-based model requires the member to actively present credentials to each service they wish to participate in. This has a real cost to both convenience and Venue competition dynamics.
I can think of a few possible ways to mitigate this. Selective disclosure could preserve some of the pull-based ergonomics if ATProto’s permissioned data infrastructure can support it. Zero-knowledge membership proofs could preserve ergonomics more fully, but they would require ZK infrastructure that I am not familiar enough with to engage, and that does not exist in the AT Protocol ecosystem right now regardless.
How does trust withdrawal work when membership isn’t public?
In a public data model, the trust set is made up of editorial trust withdrawal signals, which Venues use to independently decide their scope. This trust withdrawal is legible because both the trust set and credentials are public.
If credentials are private, trust withdrawal is harder. If the trust set remains public, but credentials are private, then a trust withdrawal reveals that a specific user was a member, which itself leaks the membership information the private credential was meant to protect. If the trust set is also private, Venues need some way to query it, which means making the Roster an active dependency for every scoping decision — which is exactly what half of this proposal tries to avoid…
I can think of a few ways to deal with this. Credentials could carry an embedded identifier, and the Roster could publish a list of revoked credential IDs without revealing who holds them. Services can check presented credentials against this list alongside their own judgement. Credentials could also carry a validity window (something which should probably be supported by default anyway), which would give the Roster a natural withdrawal mechanism via non-renewal rather than revocation. Both of these can probably work, but both reintroduce some degree of Roster dependency.
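Both mitigations can be sketched as a single acceptance check. The field names and the `acceptable` helper are hypothetical illustrations, not a proposed format: the revocation list names only opaque credential IDs (revealing nothing about holders), and the validity window makes non-renewal a quiet withdrawal path.

```python
from datetime import datetime

def parse(ts: str) -> datetime:
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

# Hypothetical: each credential carries an opaque ID and a validity window.
cred = {
    "id": "cred-7f3a",                    # reveals nothing about the holder
    "issuedAt": "2025-01-01T00:00:00Z",
    "expiresAt": "2025-07-01T00:00:00Z",  # non-renewal = natural withdrawal
}

# The Roster publishes only revoked IDs — not who holds them.
revoked_ids = {"cred-19c2", "cred-88d0"}

def acceptable(cred: dict, revoked_ids: set, now: datetime) -> bool:
    if cred["id"] in revoked_ids:
        return False  # explicit withdrawal, without revealing membership
    return parse(cred["issuedAt"]) <= now < parse(cred["expiresAt"])

now = parse("2025-03-01T00:00:00Z")
assert acceptable(cred, revoked_ids, now)
assert not acceptable(cred, revoked_ids, parse("2025-08-01T00:00:00Z"))  # lapsed
assert not acceptable({**cred, "id": "cred-19c2"}, revoked_ids, now)     # revoked
```

The Roster dependency shows up in the inputs: `revoked_ids` must be fetched from somewhere, and renewal requires a live issuer.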
Where does the Roster become a dependency again?
This gets to the heart of this contradiction. In a public data model, the Roster can be structurally decoupled from verification through self-authentication. In a private model, we lose some of this decoupling. The degree of dependency varies by approach, but the result is consistent: privacy trades some structural independence for confidentiality. This is probably not a surprise to anyone reading this far, but, for a system such as this, which is meant to guarantee sovereignty, it is a particularly salient property of the solution space.
Whatever infrastructure emerges should try to preserve the architectural properties even if it adjusts the more granular operational ones. Specifically: members should custody their own credentials. Venues should still be able to make independent scoping decisions. The Roster should not gain unilateral control over whether a credential is valid. If the Roster becomes a dependency for renewal, that dependency should be time-bounded and survivable.
How do we prevent untrusted Venues from accessing permissioned community data?
In the public model, Venue creation is permissionless. This is a core design feature. Anyone can create a service, scope to it, and compete.
In a permissioned model, this must necessarily collapse. If community-permissioned data is visible only to Cluster members, then a Venue that serves that data must access it. An untrusted operator who creates a Venue scoped to permissioned community data becomes a man-in-the-middle.
This means that permissioned community data requires some form of gatekeeping over which Venues can access it, which is in direct tension with permissionless Venue competition.
There are a few approaches I can think of to help with this, which I’ll list in order of Roster dependence:
Member-mediated data access. This is probably the most familiar resolution. Members individually decide which Venues to share permissioned data with. The Venue only sees data from members who actively choose to participate. This preserves the sovereignty of the users and requires no gatekeeping, but every new Venue requires an active trust decision from every member who wants to use it. The UX cost is familiar: something akin to joining a Discord server.
Roster-endorsed Venues. The Roster could maintain a registry of Venues it endorses for access to permissioned data. This reintroduces a Roster-to-Venue coupling the public model is designed to avoid, though. If a Roster gains influence over which services can operate, this undermines Venue sovereignty. But, for permissioned data, somebody needs to make a trust judgement about service operators, and the Roster is the entity the community has already entrusted with its membership.
Credential-gated Venue operation. Accounts that access permissioned data must hold a credential from the Roster. This raises the cost of attack, but makes permissioned data flow dependent entirely on the Roster’s registry.
Member-mediated data access yields members the most sovereignty, while Roster endorsements allow for editorializing while still ultimately yielding autonomy to members. A combined approach using these two strategies might yield an acceptable compromise; autonomy would ultimately reside with members, and Roster endorsements could themselves be directly subject to community back-pressure.
Ultimately, though, permissioned data necessarily creates a trust boundary, and trust boundaries must be gatekept by necessity. Whatever approach the ecosystem might adopt should aim to make gatekeeping as bounded, legible, and competitive as possible.
The credential design is compatible with push-only selective disclosure once ATProto’s permissioned data infrastructure is ready. The signature and member-custodied model doesn’t assume visibility, just verifiability. Selective disclosure, ZK proofs, and presentation-based models could all (theoretically) sit on top of the credential format without changing its structure.
But this is far, far from a solved problem. The precise mechanisms are questions that will require dedicated work by the people building private data infrastructure. I strongly encourage discussion with privacy-focused projects such as Northsky and the Private Data WG — the ones building the systems that any kind of private credential system would need to build from.
Until this exists, this proposal best serves communities where users consider their membership public and uncontroversial, now and forever. I need to reiterate: the architecture is designed to be extendable to private credentials, but such an extension would require infrastructure that does not yet exist, and deploying it for privacy-sensitive communities before the infrastructure is ready would be grossly irresponsible for the safety of the people within them.
Venue-Custodied Scope
Venues that scope to the Roster need to decide what they’re scoping to — the trust set, the Cluster, something stricter, or something in between. This creates a structural problem, however. The trust set lives in the Roster’s repository. If the Roster goes offline, the trust set becomes unavailable. Any Venue that defines its scope by directly querying the Roster’s trust set inherits a single point of failure, which is the exact dependency we’re trying to eliminate with credentials. Given this, where should scope decisions actually live?
Scope decisions must live in the Venue’s own repository.
A Venue publishes a credential policy declaring which Roster credentials it accepts, and under what conditions. This defines the Clusters it scopes to as a baseline to work from. Refinements — whether made independently or by mirroring the Roster’s trust withdrawals — are reflected as scope records in the Venue’s own repository.
This means the Roster’s trust set acts like a signal rather than a dependency. Much like labels in composable moderation, the Roster can publish its judgement, but what downstream consumers do with that judgement is their own decision. A Venue that mirrors the trust set can watch for changes and reflect them as scope records. A Venue that scopes to the full Cluster completely ignores the trust set. Either way, the Venue’s scope set is always defined by its own records — never by the Roster’s.
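The “baseline plus local refinements” structure can be sketched concretely. The record shapes below are hypothetical illustrations of the idea, not a proposed lexicon; everything shown lives in the Venue’s own repository, so the scope remains computable even if the Roster disappears.

```python
# Hypothetical record shapes, all custodied by the Venue itself.
credential_policy = {"acceptedIssuers": {"did:plc:roster"}}  # which Rosters to accept

# Local refinements: mirrored withdrawals and independent overrides alike
# are just scope records in the Venue's repository.
scope_records = {
    "did:plc:carol": "exclude",  # Venue mirrored a Roster trust withdrawal
    "did:plc:dave":  "include",  # Venue overrode a withdrawal it disagreed with
}

def in_scope(member_did: str, credential_issuer: str) -> bool:
    # Baseline: does the member hold a credential from an accepted Roster?
    if credential_issuer not in credential_policy["acceptedIssuers"]:
        return False
    # Refinement: the Venue's own scope records always have the final word.
    return scope_records.get(member_did, "include") == "include"

assert in_scope("did:plc:alice", "did:plc:roster")      # baseline applies
assert not in_scope("did:plc:carol", "did:plc:roster")  # local exclusion
assert in_scope("did:plc:dave", "did:plc:roster")       # local override
assert not in_scope("did:plc:eve", "did:plc:unknown")   # unaccepted issuer
```

Note that nothing in `in_scope` queries the Roster: its trust set only ever enters the picture as materialized `scope_records`.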
Acknowledgement: Hidden Scoping Decisions
Because Venues have unilateral control over both input (their scope) and output (what they surface), nothing in this architecture categorically prevents a Venue from quietly removing someone from its scope without publishing the decision as a scope record. This asymmetry is dealt with through trust relations.
Rosters are structurally constrained. They issue credentials and publish trust signals, but they cannot control how downstream services consume them.
Venues are socially constrained. They have operational sovereignty over their service, and the check on this sovereignty is the expectation of legibility, the low cost of exit, and the risk of being caught.
When a community expects scoping decisions to be legible — which this architecture designs to make the norm — a hidden removal is a bet on the part of the Venue operator. That bet is that (1) the decision won’t be discovered, and/or (2) the community won’t care if it is.
In an ecosystem where switching costs are near-zero, competing Venues are plentiful, data is public, and scoping decisions are a surface where Venues are evaluated, that bet carries significant social risk. When a Venue operator hides a decision they could have made openly, it signals to the community that the operator knew the community would disagree, and acted accordingly. If the hidden scoping decision is discovered, the trust cost is significantly higher than it would have been if they’d simply made that decision transparently.
When trust acts as capital, these decisions are expensive for the Venue compared to transparent ones. This calculation tilts heavily in favor of transparency when exit is cheap enough that users can act on the discovery in an instant. This is back pressure in action.
Trust as a Spectrum
Because Venues define their own scope, and the Roster’s trust set is just a signal, services are not limited to a binary choice between the Cluster and the trust set. Instead, a spectrum exists: from a set of none, to the Roster’s trust set, to the full Cluster, with an arbitrary number of intermediate or combinatorial positions available:
A feed that trusts the Roster’s judgements might choose to scope strictly to the active trust set as a baseline, and add additional feed-specific rules on top.
A moderation labeler might scope to the full Cluster because broad inclusion serves its purpose.
A feed that only somewhat trusts the Roster’s judgement might choose to be notified of trust withdrawals but reserve final scoping decisions for itself.
Each service makes its own judgement about whether it trusts the Roster’s trust signals. However, that deference is always local and always revocable.
This is the sovereign demand-side infrastructure Part 2 said credentials alone couldn’t provide. The Cluster allows for a range of possible scopes, and the Venue can choose where on that range to operate, independently, without needing permission from the Roster. The Roster provides suggestions through its trust set, but it cannot force Venues to follow those decisions.
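The spectrum above can be sketched as scope functions over the two sets. The member names and the three example services are placeholders; what matters is that every position between the trust set and the full Cluster is a valid, Venue-chosen function.

```python
cluster = {"a", "b", "c", "d"}  # everyone holding a valid credential
trust_set = {"a", "b"}          # the Roster's active endorsements

def strict_feed() -> set:
    # Defers to the Roster's judgement as a baseline
    # (feed-specific rules would layer on top).
    return set(trust_set)

def broad_labeler() -> set:
    # Broad inclusion serves its purpose: the whole Cluster.
    return set(cluster)

def skeptical_feed() -> set:
    # Watches trust withdrawals but keeps the final word locally:
    # here it re-includes "c", whom the Roster no longer endorses.
    return trust_set | {"c"}

# Every position sits somewhere between the trust set and the Cluster.
assert strict_feed() <= skeptical_feed() <= broad_labeler()
```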
This brings us to full structural sovereignty for each primitive within Composable Trust:
Credentials are member-custodied. The member is fully in control of their own proof of belonging.
Scope decisions are Venue-custodied. The service defines its own definition of who is in scope through its credential policy and scope records, neither of which are dependent on the Roster.
The Roster controls neither. Its role is to issue credentials and publish trust signals. It cannot, however, control how others use either.
The Cluster is the floor, the trust set is the Roster’s editorial position, and the scope is the Venue’s final word. Each layer feeds the other, but no layer controls or custodies another.
Composing Trust
Calling back to Part 2, we identified a core fragility of monolithic governance: every decision a single steward makes is a moment where the community can evaluate whether that steward still speaks for them. The more decisions are made, the more opportunities there are for trust to erode.
Composable Trust doesn’t try to hide or eliminate these moments — they are indispensable for the values of the community to be legible, and are the root of back-pressure for bad governance.
What it does do, however, is bound the scope of these moments. A Venue’s bad moderation call is a trust question about that Venue, not about the Roster or the community’s infrastructure. A Roster’s questionable vetting decision is a trust question about the Roster, a decision that members might have opinions about — but that can be absorbed by Venues without necessitating crisis.
Because Venues custody their own scope decisions, a Roster’s trust withdrawal has to survive contact with every Venue that receives it. Venues that disagree with it can override it locally, and those overrides are visible in the network. This means Roster misjudgments create friction at the layer closest to members, which is exactly where back pressure should originate.
This back-pressure flows in both directions. A Venue that governs poorly loses members to competing Venues. But a Roster that withdraws trust frequently imposes correction costs on every Venue that scopes to it — which degrades community confidence in the Roster as a stable trust signal. The architecture back-pressures each layer toward the decision frequency it’s best suited for: Venues towards fast, continuous, contextual governance of their spaces. Rosters towards slow, infrequent, high-conviction positions on belonging.
This, ultimately, is what allows trust to be composable. It does not, and should not, mean that trust is distributed among equal actors. Rather, the roles that trust entails are decomposed into independent, structured relationships, each of which can be individually evaluated on their own terms without compromising the whole.
Irrevocability and Trust in Practice
There’s a familiar gap here I think is worth mentioning. There’s a difference between structural irrevocability (i.e. no one can revoke the credential) and practical irrevocability (i.e. the credential continues to open doors). The latter depends on whether Venues exist that will honor it. If every Venue mirrors the trust set, exclusion from the set becomes experientially close to revocation.
This gap is a compromise that the AT Protocol accepts at the platform level. If Bluesky’s AppView bans a user, their data and identity remain intact, yes — but nothing surfaces them unless an alternative AppView exists. The guarantee the protocol provides isn’t immediate visibility, but that the architecture makes alternatives possible, and that the possibility of alternatives constrains behavior at every infrastructure layer.
This same constraint operates here. However, Composable Trust has a structural advantage that the AT Protocol lacks. Like platforms, Venues work from public data and are the layer closest and most legible to users. However, Venues have a lower barrier to entry compared to platforms. When many Venues exist, run by many self-selected community stewards, their independent scoping decisions, driven by their values and member back-pressure, collectively signal what the community thinks belonging should look like.
A Roster’s trust withdrawal that is universally overridden by Venues presents the Roster with visible evidence that its judgement is in direct conflict with the community’s. A trust withdrawal that creates a genuine split, though — i.e. some Venues honor it, some override it — signals a deeper division, one which users can navigate by self-organizing toward the governance they trust aligns with their own values.
The architecture’s guarantee, then, is not that every credential will always open every door, but that the market for Venues, and the legibility of their scoping decisions, has two key effects: it creates a real trust cost for Rosters that misjudge, and it turns genuine community disagreements from what might otherwise be a destructive trust evaluation event into constructive trust reorganization events at the Venue level.
The conflict doesn’t disappear, but it’s absorbed where it can do useful work: community trust reorganizes around the governance that earned it.
Acknowledgement: Limitations for Small Communities
Venue-layer absorption, bidirectional back pressure, and trust reorganization are emergent properties of a sufficiently rich Venue ecosystem. They are not architectural guarantees in and of themselves.
Composable Trust’s guarantees are: credentials survive independently. Scope decisions are Venue-custodied. Exit is structurally possible at every layer. But the quality of this exit is a function of:
How many independent Venues exist
How engaged the community is
How discoverable alternatives are
How easy it is to create a new service
For a community the size of furryli.st, with tens of thousands of members and a natural diversity of interests and affinities, the conditions for a rich Venue layer are realistic. But for a community with 100 people, two feeds, and a labeler, it might not be.
Composable Trust’s architectural guarantees exist on a gradient:
In large communities with many Venues, back-pressure operates as described in the text. Roster misjudgements are absorbed by the Venue layer, competition is real, and exit is an experientially meaningful expression of trust realignment.
In medium-sized communities with a thin Venue layer, back pressure exists in fragile form. A Roster failure may not be absorbable because there aren’t enough independent services to provide alternatives.
In small communities with minimal infrastructure, Venue-layer dynamics are almost or completely absent. The only guarantee that remains is the credential and the shared graph’s survival through infrastructure failure.
For small communities, the architecture’s protections narrow from Venue-centric to credential-centric. Composable Trust does not guarantee that failure is absorbed; it guarantees that failure is survivable — that the community’s social graph persists even when the infrastructure around it doesn’t, and that rebuilding can start from a shared starting place rather than from nothing.
This is the same shape of the guarantee the AT Protocol provides at the platform level. The only thing the protocol guarantees is that you can run an alternative PDS, relay, or AppView in theory. It does not, however, guarantee that those alternatives will exist. The value of the protocol is that this guarantee constrains behavior and makes alternatives architecturally possible. The experiential quality of an exit depends on whether someone actually builds them.
Part 5 will walk through how this architecture absorbs failures at escalating severity, including what recovery looks like for communities where the Venue layer is thin or absent.
What this means for furryli.st
Right now, furryli.st’s membership of 60,000 exists at our discretion. Every member’s belonging runs through infrastructure we alone control.
Consider if, instead of this, we issued them self-authenticating credentials verifying their community membership.
This inverts the relationship between us and our members. Every member of furryli.st holds their own proof of membership in their own repository, verifiable by anyone using our public key. Cluster membership is no longer something we ourselves can retract. We created the trust graph, but the community custodies it, and the community decides what to do with it.
If someone we believe is a bad actor slips through our vetting, we can notify downstream services by updating our trust set. If those services disagree, their sovereignty gives them the ability to retain that member within their scope. The trust dispute can be absorbed at the service layer, because no one is structurally dependent on us to make the call for them.
And if, God forbid, furryli.st went offline entirely — all 60,000 credentials would still exist, distributed across 60,000 members’ repositories, each independently verifiable by any service that knows our public key. Every service’s scope remains untouched. The community survives, held up not by us, but by the members and services who were always doing the real work.
The Missing Experience
We’ve established the philosophy and the structural foundation of Composable Trust. Members hold their own proof of belonging; Venues hold their own scope decisions; Rosters issue credentials and signal trust, but control neither. Each one is sovereign over its own domain.
But, even with this foundation, a member who holds a credential from furryli.st would still have the exact same experience they had in Part 1. There’s nothing yet connecting these independent actors into something a user would recognize as a “community”.
How can we compose these pieces into something people would actually use?
This is the question I will explore in Part 4.