On self-ranking in Fractal Democracy

in #gems · 2 years ago

Image from https://www.picserver.org

A core value proposition of Fractal Democracy (FD) is to accurately quantify, on a blockchain, the respect a community manifests for each of its members. For this purpose, members are randomly distributed into groups of 5 to 6 people and required to rank each other's contributions to the community.

Consider a group of 6 people in which 3 rank themselves as Level 6 contributors. Irrespective of whether they genuinely believe their contributions are the best, only 3 people out of 6 are surely trying to measure the system impartially. That is, there is a probable 50% loss in measuring power for the selection of the most relevant position of that group. Moreover, the 2 players who missed the L6 spot will, without a second thought, automatically rank themselves as L5. This implies a probable 40% loss in measuring power for the second most relevant position of this group. Because we are dealing with probabilistic events, there is no guarantee of this loss in any specific group; however, the expected value of the overall accuracy of the system is reduced. This reduction goes against the value proposition of FD.
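The loss figures above can be reproduced with a toy back-of-the-envelope sketch (the simplifying assumption, not stated in the original, is that each self-ranking member contributes zero impartial signal toward the position he claims):

```python
# Toy sketch of the measuring-power loss described above.
# Assumption: a self-ranking member provides no impartial signal
# for the position he claims for himself.

group_size = 6
self_rankers_l6 = 3  # members who rank themselves Level 6

# Only the members not voting for themselves measure impartially.
loss_l6 = self_rankers_l6 / group_size
print(f"L6 measuring-power loss: {loss_l6:.0%}")  # 50%

# The two members who missed L6 now claim L5 automatically,
# out of the 5 members still competing for that spot.
remaining = group_size - 1  # the L6 winner no longer competes
auto_l5 = 2
loss_l5 = auto_l5 / remaining
print(f"L5 measuring-power loss: {loss_l5:.0%}")  # 40%
```

This is only the single-group expected loss; as the post notes, any specific group may dodge it, but the average accuracy of the system still drops.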

Most physicists have a hard time understanding the work of their fellow colleagues; programmers usually take some time to assess the code of other programmers. Now, put designers in an FD room and ask them to rank the work of physicists, or ask musicians to rank the work of developers; even ask laymen to rank the work of experts, and require this ranking to take place in just a few minutes. What do you end up with? Answer: a little signal and a lot of noise. This noise will propagate to the following rounds. Both the noise and its propagation undermine the value proposition of FD.

To put it more graphically, we already have a system that gives an edge to convincing extroverts; something in Marlon Blumer's style. Would Mr Blumer vote for himself in the first round of FD on any given week? What about the proponent of Terra-Luna, Mr Do Kwon? How are we more likely to protect a community from these kinds of characters: by allowing them to vote themselves up, or by preventing them from doing so? Now, suppose that they both make it to the second round. Will they cancel each other out? Or is it more likely that they will affect the second-round measurement, and the community incurs the extra work of having to cut through their mutually amplified BS?

Fortunately, mathematics provides an answer to this question. According to basic probability theory, noise tends not to cancel out but to add up. Sum, for example, two independent normally distributed random variables: the variance of the result is the sum of the variances. Now, instead of adding them, subtract them: the variance of the difference of independent random variables is also the sum of the variances. No matter how Mr Blumer and Mr Kwon interact in the second round, the noise will always increase.
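This variance-addition property is easy to verify numerically. A minimal sketch (the specific variances below are illustrative, not from the post):

```python
import random
import statistics

random.seed(0)
n = 200_000

# Two independent Gaussian noise sources: think of them as the errors
# each character injects into the second-round measurement.
a = [random.gauss(0, 1) for _ in range(n)]  # variance 1
b = [random.gauss(0, 2) for _ in range(n)]  # variance 4

# Whether the noises reinforce or oppose each other, variances add:
var_sum = statistics.pvariance([x + y for x, y in zip(a, b)])
var_diff = statistics.pvariance([x - y for x, y in zip(a, b)])
print(var_sum, var_diff)  # both close to 1 + 4 = 5
```

Var(X+Y) = Var(X) + Var(Y) and Var(X−Y) = Var(X) + Var(Y) hold for independent X and Y; the minus sign never helps, which is the post's point about the two noisy actors "cancelling out".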

No, this problem will not occur only during the first iterations. During the first iterations you may filter Mr Blumer and Mr Kwon out, but many others will keep coming; there are billions of people out there. At a rate of 1 weekly meeting, how long will it take to arrive at a healthy consensus about who is a trustworthy contributor? How does this change if you are using a sub-optimal protocol? At what cost? Isn't it far more convenient to protect the system from the very beginning and prevent people from self-ranking?

What about distant communities of simple-minded people who will never pay attention to these nuances, but who still need to achieve consensus effectively? Will we present them with a sub-optimal product and abandon them to the mercy of their own noise?

In a previous post, I stated that the level of sincerity each individual brings to his presentations is one of the variables that makes it harder to obtain predictions from any FD model. This is because this variable has no clear probability distribution: people come in all imaginable levels of sincerity when their reputation is at stake. There are those who provide inaccurate information out of self-sabotage; within this group, some believe that such behavior constitutes a virtue. There are those who exaggerate; there are those who don't exaggerate but omit important aspects. In short, the problem comes both from those who overestimate their contribution and from those who downplay it. All of this is improved by preventing any kind of self-ranking.

The larger the community becomes, the harder it gets for anyone to grasp the implications of any of the contributions of his breakout-session mates, and hence the higher the incentive for each member to simply rank himself up or down. This creates a vicious cycle of information loss. It not only goes against the value proposition of FD; it is potentially lethal to it. What is needed is a system that maximizes honest contribution assessment, even in large communities; not one that counteracts Metcalfe's law by making the system less valuable with every new member it incorporates.
Image from https://ak7.picdn.net/shutterstock/
Fractal Democracy, it has been stated many times, is a tool for measuring. Any measuring device comes with measurement uncertainty, and repeating experiments increases accuracy. Nonetheless, it would be considered foolish not to increase device precision and rely only on performing more experiments. That would be the equivalent of preferring to take 20 measurements every time, for every circuit, instead of better calibrating the voltmeter. To truly accomplish the value proposition of FD, noise must be minimized at every level. Otherwise, FD would not be an effective measuring tool and we should re-brand it as something else.
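The voltmeter analogy can be made quantitative with the standard error of the mean: averaging n readings shrinks noise only by a factor of √n, so a better-calibrated instrument easily beats brute repetition. A sketch with illustrative numbers (the 0.4 V and 0.05 V figures are assumptions for the example, not from the post):

```python
import math

def standard_error(sigma, n):
    """Uncertainty of the average of n readings with per-reading noise sigma."""
    return sigma / math.sqrt(n)

# A poorly calibrated voltmeter (sigma = 0.4 V) averaged 20 times...
print(standard_error(0.4, 20))  # ~0.089 V
# ...is still worse than a well-calibrated one (sigma = 0.05 V) read once.
print(standard_error(0.05, 1))  # 0.05 V
```

Repetition helps, but only with diminishing (1/√n) returns; lowering the instrument's own noise, like removing self-ranking from FD, improves every single measurement.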

If someone really considers that his contribution deserves the highest respect from the community, then he should go all the way and allow the community to decide for itself, instead of trying to affect the measurement by being a judge in his own cause.

If someone really considers his contribution to be the highest, then he should incentivize his breakout-session mates to examine it carefully. This is best achieved by preventing those fellows from focusing primarily on their own contributions and self-ranking themselves to the top. In other words, the highest contributor needs (and deserves) other people to raise their eyes and look at his work with 100% attention, free from the bias of any individual interest.


Good read Aguerrido.
I'm personally developing my own thoughts on this particular example, but agree strongly with this:
"What is needed is a system that maximizes honest contribution assessment, even in large communities; not one that counteracts Metcalfe's law by making the system less valuable with every new member it incorporates."
A challenge for growing companies is enacting the same simple values of a smaller company, but at a larger scale. Growing companies often appear to fail because they lose those simple values.
Considering now while we are small, how to keep the simple values, seems to be a worthy goal.

Thoughts here and in the previous posts assume that there exists an objective (correct) evaluation of human work. Any talk about "noise" would not make sense otherwise.

@jamesmart already made a similar point, but I will say it in my own words: there's no such thing as objective (or correct) evaluation of human work.

Let's say we have an ideal-world AI which would evaluate contributions. The result would still be subjective. Why? Because any such algorithm would have to have criteria by which it evaluates, and that would make it biased toward those specific criteria as opposed to some other values. So in the end, the biases of the algorithm's designer would make the whole evaluation biased. Different designers will have different values, which will make them prefer different criteria. How do we choose criteria, then?

Therefore, the best we can do is: 1) attempt to reach consensus on what we as a community value and 2) try to rank each other according to that consensus. Fractally consensus meetings try to do both in the same process.

I think the analogy of the fractally process as a measurement tool which, with enough samples, will approach some correct measurement is misleading, since it assumes that a "correct" measurement exists as some physical reality, free from human interpretation.