astralblue
@astralblue #2148
Dad. Musician. Engineer.
84 Followers 16 Following
https://twitch.tv/adagiorustico 🎹 − 🎙️ + 🌙 = 💕
Being a part-time live musician, I've been pondering how AI would disrupt creative/entertainment industries where the business model mainly hinges on the value of media transmitted and replayed, pre-recorded or not. The recording industry was the first to be hit; DRM is keeping it on life support, but it seems like a matter of time until even that gets breached. Online performance still holds value, but AI will soon disrupt that as well (think MAVE). In-person live performance will probably be the last to hold, and I used to think it would be the way for us live musicians to go, but now it seems that advancements in robotics will probably breach that as well—my personal guess is in about 10 years. (cont'd)
EigenTrust is built on the premise that positive➕ trust can often be applied recursively: "You're my friend, so I'll warm up to your friends even though I don't know them. 💕"
Same cannot be said about negative➖ trust opinions: Is my enemy's enemy my friend or enemy? It depends! The "× negative" sign flip is much too simplistic as a general recursive trust rule.
Like, if I distrust X, do I take X's trust opinions in the opposite direction, that is, think lowly➖ of everyone who X endorses➕? Not really. X's trust opinions may still be honest and valuable.
In fact, if I did take X's opinions in the opposite direction, X could even exploit that! X could intentionally express negative opinions about someone he secretly wants to promote. Especially when X scores low in the network and "has nothing to lose." It'd be a powerful trolling vector. 🤣
This is why EigenTrust doesn't allow negative local trust. More later on how to tackle this.
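For the curious, here's a rough Python sketch of the recursion (my own toy version, not OpenRank's actual code; the function name and the alpha/seed-trust inputs are just illustrative): negative local trust is simply dropped, never sign-flipped, and seed (pre-)trust gets mixed back in every round.

```python
# Toy EigenTrust sketch (illustrative only, not OpenRank's actual code).
# Negative local trust is dropped (clamped to 0), never sign-flipped.
import numpy as np

def eigentrust(local_trust, seed_trust, alpha=0.5, iters=50):
    """local_trust[i][j]: how much peer i trusts peer j; seed_trust: pre-trust, sums to 1."""
    C = np.maximum(np.asarray(local_trust, dtype=float), 0.0)  # no negative local trust
    p = np.asarray(seed_trust, dtype=float)
    row_sums = C.sum(axis=1, keepdims=True)
    # Row-normalize; peers with no outgoing trust fall back to the seed distribution.
    C = np.where(row_sums > 0, C / np.where(row_sums == 0, 1.0, row_sums), p)
    t = p.copy()                                 # start from seed trust
    for _ in range(iters):
        t = (1 - alpha) * C.T @ t + alpha * p    # mix seed trust back in each round
    return t                                     # global trust values, sum to 1.0
```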
https://twitch.tv/adagiorustico yours truly at piano
In OpenRank, seed trust is an essential part of the network's trust assumptions. When rankings are recalculated/updated using the same seed trust as before, each seed peer continues to receive the same relative portion of the network's trust pie, *undiluted* by the network's growth.
That is, seed peers' influence grows *with* the network: 5% today will still be 5% when the network is 10x. Therefore, seed peers bear great responsibility. They must remain active and vigilant. They should uphold the network's trust value in what they do—from which their P2P trust is inferred. Otherwise the network's capability to deal with adversaries (sybils) is compromised.
Given this critical importance, seed trust should be groomed from time to time. Active seed peers doing a good job should be given higher seed weight. Dead seed peers should be removed from the seed trust.
At first, it's usually the trust designer who grooms the seed trust. In the long term, the beneficiary community could own this process.
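Here's what a grooming pass could look like, in toy Python (my own sketch, made-up names and numbers, not an OpenRank API): drop dead seeds, bump the active ones, then renormalize so the weights still sum to 1.

```python
# Toy seed-trust grooming sketch (not an OpenRank API).
def groom_seed_trust(seed_weights, dead, boost):
    """seed_weights: peer -> weight; dead: peers to drop; boost: peer -> multiplier."""
    groomed = {p: w * boost.get(p, 1.0)
               for p, w in seed_weights.items() if p not in dead}
    total = sum(groomed.values())
    return {p: w / total for p, w in groomed.items()}  # keep it a proper distribution

seeds = {"ann": 0.25, "ben": 0.25, "cal": 0.25, "dee": 0.25}
print(groom_seed_trust(seeds, dead={"dee"}, boost={"ann": 2.0}))
# {'ann': 0.5, 'ben': 0.25, 'cal': 0.25}
```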
❤️ dogs at (We)Work ❤️❤️❤️
The a ("alpha") parameter in EigenTrust says how much of the network's global trust is reserved for the seed peers. For example, a=0.5 sets 50% of the global trust aside for seed peers, so if there are 10 equally pre-trusted seed peers, each seed peer is guaranteed 5% global trust, plus trust earned from other peers.
The seed trust reserved via alpha boosts not just the seed peers' own ranking, but also what they have to say about other peers (remember: in EigenTrust, what you say matters exactly as much as how trusted you are by the network). Getting trust signal from one of the (strongly pre-trusted) seed peers is therefore a big boost for both your ranking and your trust opinions about others.
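Back-of-the-envelope version of that guarantee, with the same toy numbers:

```python
# Toy numbers for the alpha guarantee above.
alpha = 0.5                 # share of global trust reserved for seed peers
n_seed = 10                 # equally pre-trusted seed peers
floor_per_seed = alpha / n_seed
print(floor_per_seed)       # 0.05 -> each seed peer keeps at least 5% global trust,
                            #         no matter how big the rest of the network grows
```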
Seed trust (aka pre-trust) gives EigenTrust rankings both Sybil-resistance and their "flavors". Seed trust works by letting ranking designers put an intentional bias into whose trust opinions get boosted, and by how much.
Ranking designers should choose a good seed trust to achieve an important balance. Seed trust should be limited to those "prudent" peers who place their direct trust sparingly, so that their close neighbors (both direct and indirect) in the network are unlikely to trust sybils, but there should still be enough seed peers to "light up" the majority of the trust network. (A peer is "lit up" if there's a recursive trust path to them from at least one seed peer.)
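A quick way to sanity-check that coverage (my own sketch, not an official OpenRank tool; the function name and inputs are made up): treat positive local trust as directed edges and walk outward from the seed peers. Whoever you reach is "lit up".

```python
# "Lit up" coverage check (sketch): a peer is lit up if it's reachable from some seed peer
# by following positive local-trust edges.
from collections import deque

def lit_up_peers(trust_edges, seed_peers):
    """trust_edges: dict peer -> iterable of peers they positively trust."""
    lit = set(seed_peers)
    queue = deque(seed_peers)
    while queue:
        peer = queue.popleft()
        for neighbor in trust_edges.get(peer, ()):
            if neighbor not in lit:
                lit.add(neighbor)
                queue.append(neighbor)
    return lit

# e.g. coverage = len(lit_up_peers(edges, seeds)) / total_peers
```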
OpenRank use case of the day: Trust-weighted voting. Given N voters rated by OpenRank, give each of them a voting power equal to their OpenRank global trust multiplied by N. Since OpenRank global trust values sum up to 1.0, the aggregated voting power is N—the same as in vanilla voting.
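In code it's a one-liner, roughly (made-up scores and names, just to show the bookkeeping):

```python
# Trust-weighted voting sketch: voting power = OpenRank global trust * N.
scores = {"alice": 0.40, "bob": 0.35, "carol": 0.25}  # made-up global trust, sums to 1.0
n_voters = len(scores)

voting_power = {voter: trust * n_voters for voter, trust in scores.items()}
print(voting_power)                # roughly {'alice': 1.2, 'bob': 1.05, 'carol': 0.75}
print(sum(voting_power.values()))  # 3.0 == n_voters, same total as one-peer-one-vote
```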
In OpenRank, a growing network—like Farcaster—also means that, if one doesn't do anything much in it, their OpenRank score (percentage) will be diluted. It does not necessarily mean that their absolute trust level is decreasing.
In order to turn an OpenRank score into an absolute score that can be tracked over time, one needs to also measure the size of the network's trust pie and multiply it by their OpenRank score. Once I have my absolute score calculated this way, it'd be a real problem if that absolute score decreased.
So here's an open question: How can we define the size of Farcaster's trust pie for OpenRank? (I don't have an answer, I started actively using FC only 2 months ago 🤣)
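To make the "absolute score" idea concrete, here's the arithmetic in toy Python (pie_size is a placeholder; defining it *is* the open question above):

```python
# Absolute-score sketch. pie_size stands in for whatever ends up measuring
# "the size of the network's trust pie"; defining that is the open question.
def absolute_score(openrank_share, pie_size):
    return openrank_share * pie_size

# Made-up numbers: my share halves, but the pie triples, so my absolute score still grows.
print(absolute_score(0.04, 1_000))   # 40.0
print(absolute_score(0.02, 3_000))   # 60.0
```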
OpenRank score—EigenTrust global trust value—is different from traditional trust scores such as credit scores, in that it's a ratio, a percentage share, à la "who in this community deserves how big a share of the trust pie the community built as a whole?"
As such, it doesn't make sense to say, for example, "A 0.05 score—5% share of the trust pie—is good enough over time for XYZ purposes." OpenRank scores do not have absolute thresholds: Me having only 5% of the trust pie in a network of 100,000 people will likely still be better than, say, me having had 20% of the trust pie 3 years ago when the network had only 10 people in it.
OpenRank's EigenTrust is not just about #1/#2/#3/... ranking; it binds a "global trust" value to each peer. And trust values matter more than ranking! Like, @sahil, @dharmi, and me being ranked #100-#102 does not necessarily mean we're in the same league: My trust value may be orders of magnitude lower than theirs!
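Toy illustration (made-up values and peer names): three peers can sit in consecutive ranking slots while their trust values are worlds apart.

```python
# Made-up global trust values: consecutive ranks, very different trust mass.
global_trust = {"peer_A": 1.0e-3, "peer_B": 9.0e-4, "peer_C": 2.0e-6}
ranking = sorted(global_trust, key=global_trust.get, reverse=True)
print(ranking)                                          # ['peer_A', 'peer_B', 'peer_C']
print(global_trust["peer_B"] / global_trust["peer_C"])  # ~450x gap between adjacent ranks
```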
o hai bay area
twitch.tv/adagiorustico a short one for the night, tune in if you feel like some chill piano <3
twitch.tv/adagiorustico evening live piano stream, join the chat and say hi <3
Me: Give me ERC20 transactions *forgets to include a LIMIT clause*
Dune: OK
Brave:
Just saying hi, twitch.tv/adagiorustico here :) I stream live piano🎹