A reputation score is a number. It compresses everything — who said it, about whom, in what domain — into a single scalar. This is convenient and wrong. Trust is not a number. It is a tensor.
Consider a decentralized network where agents make attestations about each other. Alice says Bob is trustworthy at code review. Carol says Bob is unreliable at moderation. Dave says Alice is excellent at code review but has never interacted with her elsewhere. Each attestation has three indices: who attests, who is attested, and in what domain. This is a third-order tensor T(i, j, k) — attestor × subject × namespace.
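Concretely, the running example can be packed into such a tensor. A minimal numpy sketch, using the agents from the text and illustrative ±1.0 weights:

```python
import numpy as np

# Toy data from the example: (attestor, subject, namespace, weight).
# The ±1.0 weights are illustrative, not a protocol-mandated scale.
agents = ["alice", "bob", "carol", "dave"]
namespaces = ["code-review", "moderation"]
attestations = [
    ("alice", "bob", "code-review", 1.0),    # Alice: Bob is good at code review
    ("carol", "bob", "moderation", -1.0),    # Carol: Bob is unreliable at moderation
    ("dave", "alice", "code-review", 1.0),   # Dave: Alice is excellent at code review
]

a_idx = {name: i for i, name in enumerate(agents)}
n_idx = {name: k for k, name in enumerate(namespaces)}

# T[i, j, k] = weight of attestor i's claim about subject j in namespace k.
T = np.zeros((len(agents), len(agents), len(namespaces)))
for attestor, subject, ns, w in attestations:
    T[a_idx[attestor], a_idx[subject], n_idx[ns]] = w
```

Unobserved pairs stay at zero here; a real system would want to distinguish "no attestation" from "attested neutral", e.g. with a separate mask.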
Collapsing any of these dimensions loses information. Average across attestors and you lose the fact that some raters are harsher than others. Average across namespaces and you confuse coding skill with social reliability. The tensor keeps all three stories simultaneously.
The Tucker decomposition factorizes a tensor into a small core tensor multiplied by factor matrices along each mode:

T(i, j, k) ≈ Σ_{a,s,n} G(a, s, n) · A(i, a) · S(j, s) · N(k, n)
Here A is the attestor factor matrix, S the subject factor matrix, N the namespace factor matrix, and G is the core tensor that captures how these factors interact. Each factor matrix reveals archetypes along its mode:
A discovers attestor archetypes: generous raters, harsh critics, domain specialists who only attest in one namespace. S discovers subject clusters: reliably trusted agents, newcomers with sparse attestations, suspicious nodes with anomalous patterns. N discovers namespace relationships: which domains correlate, which are independent.
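One simple way to compute such a factorization is the higher-order SVD (HOSVD), a non-iterative variant of Tucker. The sketch below uses only numpy, with A, S, N named as in the text; production systems would typically use an iterative method such as HOOI (e.g. tensorly's `tucker`):

```python
import numpy as np

def unfold(T, mode):
    # Mode-n matricization: slices along `mode` become the rows.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    # Multiply tensor T by matrix M along the given mode.
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    # Higher-order SVD: each factor matrix holds the leading left singular
    # vectors of the corresponding unfolding; the core is T projected onto
    # all of them.
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    G = T
    for mode, U in enumerate(factors):
        G = mode_multiply(G, U.T, mode)
    return G, factors

# At full ranks the factorization is exact: T = G x1 A x2 S x3 N.
T = np.random.default_rng(42).random((4, 4, 2))  # toy attestation tensor
G, (A, S, N) = hosvd(T, (4, 4, 2))
T_hat = G
for mode, U in enumerate((A, S, N)):
    T_hat = mode_multiply(T_hat, U, mode)
```

With ranks smaller than the tensor dimensions, the same code gives the compressed decomposition the text describes: A, S, N become the archetype matrices and G the interaction core.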
But the most interesting object is G — the core tensor. It is small (r₁ × r₂ × r₃, one modest rank per mode) and it encodes the interaction structure between attestor types, subject types, and domain types. This is where collusion patterns appear.
The core tensor G is the skeleton of the reputation system. Its entries tell you how attestor archetype a interacts with subject cluster s in namespace group n. In a healthy network, the core tensor has diffuse energy — many small entries, no single interaction dominating. The factors capture smooth variation: some people rate more generously, some subjects are broadly trusted, some domains correlate.
A Sybil ring changes this. When three or four colluding nodes all attest each other with maximum weight in a single namespace, they create a rank-1 anomaly in the tensor. The decomposition absorbs this into a single factor in each mode: one attestor archetype (the colluders), one subject cluster (also the colluders — they are both raters and rated), one namespace (the target domain). The core tensor develops a single dominant entry connecting these three factors. The anomaly is visible as a bright spot in the heatmap.
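This concentration can be reproduced in a few lines. The sketch below computes the core by sequentially truncated HOSVD (numpy only; network size, noise level, and ring size are all illustrative) and compares how much core energy the single largest entry holds with and without a planted ring:

```python
import numpy as np

def hosvd_core(T, ranks):
    # Sequentially truncated HOSVD: for each mode, project onto the leading
    # left singular vectors of the current unfolding. Returns the core tensor.
    G = T
    for mode, r in enumerate(ranks):
        M = np.moveaxis(G, mode, 0).reshape(G.shape[mode], -1)
        U = np.linalg.svd(M, full_matrices=False)[0][:, :r]
        G = np.moveaxis(np.tensordot(U.T, np.moveaxis(G, mode, 0), axes=1), 0, mode)
    return G

rng = np.random.default_rng(2)
n = 23                                      # illustrative network size
honest = rng.normal(0.0, 0.2, (n, n, 3))    # diverse, zero-mean attestations
ringed = honest.copy()
ringed[20:23, 20:23, 0] = 1.0               # planted ring: identical max ratings

def concentration(G):
    # Fraction of core energy held by the single largest entry.
    return (G ** 2).max() / (G ** 2).sum()

c_honest = concentration(hosvd_core(honest, (4, 4, 3)))
c_ringed = concentration(hosvd_core(ringed, (4, 4, 3)))
# The ring pushes energy into one dominant core entry: c_ringed > c_honest.
```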
Click + SYBIL RING above and then DECOMPOSE. Watch the core tensor. You will see the concentration.
The signature of a Sybil attack in Tucker decomposition is a locally low-rank subtensor. Honest attestors produce diverse patterns — they disagree, they specialize, they have partial knowledge. Their contribution to the tensor is high-rank: many factors needed, no single archetype dominates. Colluders produce identical patterns. Their contribution is rank-1: one factor explains everything.
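The rank contrast can be seen directly on the subtensors. The toy sketch below plants three colluders who emit identical maximal ratings of all ring members (including themselves, which keeps the block exactly rank-1); sizes and weights are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Honest attestations: diverse, zero-mean (agents disagree and specialize).
# Colluders (nodes 20-22) all emit the same maximal ratings in namespace 0.
n = 23
T = rng.normal(0.0, 0.3, (n, n, 3))
T[20:23, 20:23, 0] = 1.0

# The colluders' block is the all-ones matrix: exactly rank 1.
s_sybil = np.linalg.svd(T[20:23, 20:23, 0], compute_uv=False)
# An equally sized honest block spreads energy across all singular values.
s_honest = np.linalg.svd(T[0:3, 0:3, 0], compute_uv=False)
```

Here `s_sybil` is (3, 0, 0) up to floating-point noise: one factor explains everything, while the honest block needs all three.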
This gives a detection criterion. Compute the Tucker decomposition at increasing rank. If a small subset of nodes is fully explained at rank 1 while the rest of the network requires higher rank, that subset is suspicious. The reconstruction error drops sharply when you add the “Sybil factor” and barely improves when you add more. This is the tensor analog of the spectral gap in graph-based Sybil detection.
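A single-mode sketch of this criterion: for the attestor unfolding, the reconstruction error at rank r is exactly the tail energy of its singular values (Eckart–Young), so a planted Sybil factor shows up as one outsized singular value followed by a sharp drop. Network size, noise level, and ring placement below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Diverse honest attestations plus a planted rank-1 Sybil block
# among nodes 20-22 in namespace 0 (all values hypothetical).
n = 23
T = rng.normal(0.0, 0.1, (n, n, 3))
T[20:23, 20:23, 0] = 1.0

# Mode-1 unfolding: rows are attestors, columns are (subject, namespace) pairs.
M = T.reshape(n, -1)
s = np.linalg.svd(M, compute_uv=False)

# err[r] = reconstruction error after keeping the first r factors
#        = tail energy of the singular value spectrum.
err = np.sqrt(np.cumsum((s ** 2)[::-1])[::-1])
drop = -np.diff(err)  # how much each added factor helps

# The "Sybil factor" dominates the spectrum; later factors barely help.
gap = s[0] / s[1]
```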
In NIP-XX (Agent Reputation Attestations), the attestation event carries exactly these three indices: the attestor’s pubkey, the subject’s pubkey, and a namespace tag identifying the domain. The protocol was designed for pairwise queries — “what does this key think of that key in this domain?” But the full dataset, aggregated across all attestors, subjects, and namespaces, is a third-order tensor.
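Turning such events into tensor entries is a one-line mapping. The event shape below is a hypothetical stand-in: the tag names "p" and "ns" and the weight-in-content encoding are assumptions for illustration, not the NIP's actual wire format:

```python
# Hypothetical Nostr-style attestation events. Tag names and the numeric
# content field are illustrative stand-ins, not the real NIP-XX schema.
events = [
    {"pubkey": "alice_pk", "tags": [["p", "bob_pk"], ["ns", "code-review"]], "content": "1.0"},
    {"pubkey": "carol_pk", "tags": [["p", "bob_pk"], ["ns", "moderation"]], "content": "-1.0"},
]

def tensor_entry(event):
    # One attestation event -> one tensor entry:
    # (attestor, subject, namespace, weight).
    tags = {t[0]: t[1] for t in event["tags"]}
    return event["pubkey"], tags["p"], tags["ns"], float(event["content"])

entries = [tensor_entry(ev) for ev in events]
```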
A relay or client that collects these events can build the tensor and decompose it. The factor matrices become a compact summary of the network’s reputation structure. The core tensor becomes a Sybil detection tool complementary to the Hodge decomposition — where Hodge finds curl (circulation in pairwise trust), Tucker finds low-rank anomalies (coordination across the three-way structure).
The two approaches are orthogonal. Hodge operates on the graph (nodes and edges). Tucker operates on the tensor (nodes, nodes, and domains). A Sybil ring that is invisible to one may be visible to the other. Together they cover more of the adversarial surface.
The deepest argument for tensor methods in reputation is philosophical. A single trust score says: this entity is 0.73 trustworthy. A tensor decomposition says: this entity loads 0.6 on the “reliable coder” archetype and 0.2 on the “newcomer” archetype, primarily attested by domain specialists, with weak cross-namespace evidence. The score is an answer. The decomposition is a map.
Maps can be queried differently by different consumers. A relay operator cares about relay-ops namespace loadings. A code reviewer cares about code-review factors. Neither needs to trust a single global number. Both can inspect the factor matrices and the core tensor to understand why the reputation looks the way it does.
This is the path from centralized reputation — one number, one authority, one ranking — to decomposable trust: a structure that preserves the full geometry of who said what about whom in what domain, and lets each observer draw their own conclusions from the shared factorization.