Hello Brainiacs! For my first post to the VerifyYourBrain Tribe, I'd like to propose a model for decentralized moderation. It's not perfect, but with your help we can get it there.
When I initially read the Unofficial Whitepaper, moderation was the only section that gave me doubts. It sounded a bit too centralized for my tastes, so I had a solo brainstorming session and this is what I came up with.
The General Idea
First of all, no one should ever be judged on an assumption, without evidence, in a way that would cause harm. Nor should any one person be able to derail the reach of another. Negative action should only be taken against a user after a review by a number of their peers.
The basic idea is to add a flag option (i.e. add a new tag) to every article, with a selection of offenses that is easy to choose from: something like a labelled hot button that only needs a tap or click to activate. You simply tap or click the button that most closely fits the offense in question, such as Plagiarism, Child Porn, or Spam.
Depending on the choice, a relevant option will appear, like:
{{Edit}}
I really like Trostparadox's and Calamam's idea of requiring a user to reach a certain level in the Tribe before they are able to moderate. In my opinion, how often they engage in the community and the quality of that engagement should be the criteria given the highest weight. A rough sketch of how such a check could work is below.
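To make that concrete, here is a minimal sketch of an eligibility check. The `TribeUser` shape, level requirement, weights, and thresholds are all placeholders I made up for illustration; the real criteria would be decided by the Tribe.

```typescript
// Hypothetical moderation-eligibility check. None of these names or numbers
// come from the whitepaper -- they are placeholders for discussion.
interface TribeUser {
  level: number;               // level reached in the Tribe
  postsPerWeek: number;        // how often they engage
  avgEngagementScore: number;  // community-judged quality of that engagement (0-10)
}

const MIN_LEVEL = 3; // placeholder minimum level before moderating

function canModerate(user: TribeUser): boolean {
  if (user.level < MIN_LEVEL) return false;
  // Weight frequency and quality of engagement most heavily, as suggested above.
  const engagement =
    0.4 * Math.min(user.postsPerWeek, 10) + 0.6 * user.avgEngagementScore;
  return engagement >= 5; // placeholder threshold
}
```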
For plagiarism, you should first check the text or image through a third party like Google reverse image lookup or the SmallToolsSEO Plagiarism Checker. This will turn up a URL for the original, so after choosing this infraction the appropriate next step would be for a text box to appear asking for that URL.
For spam, maybe have a similar option where three text boxes appear, so the URLs of multiple offending links can be added.
For Child Porn it might be a little trickier, even though the offense itself is more obvious. The issue here is that if immediate action can be taken on the word of one user, there's a high likelihood it could be abused.
That being said, since this particular offense is the most serious, the article could be minimized until more users weigh in, or muted until it can be reviewed, so it isn't left up and causing serious repercussions for the platform. A sketch of what these flag payloads might carry is below.
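Here is a minimal sketch of the data each offense type could attach to a flag, assuming a flag is just a tagged report object. Every name here (`Flag`, `FlagReport`, and the fields) is hypothetical, not an existing VYB structure.

```typescript
// Rough sketch of what a flag ("new tag") could carry for each offense type.
type Flag =
  | { kind: 'plagiarism'; originalUrl: string }        // URL of the original, found via a 3rd-party check
  | { kind: 'spam'; linkUrls: string[] }               // up to three offending links
  | { kind: 'child-porn'; minimizeImmediately: true }; // minimized/muted pending review

interface FlagReport {
  postId: string;   // the flagged article
  reporter: string; // account that tapped the hot button
  flag: Flag;
  createdAt: Date;
}

// Example: a plagiarism flag with the URL entered in the pop-up text box.
const example: FlagReport = {
  postId: 'some-post',
  reporter: 'some-user',
  flag: { kind: 'plagiarism', originalUrl: 'https://example.com/original-article' },
  createdAt: new Date(),
};
```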
All actions should be reversible as well, to guard against brigading. Since the VYB concept was created precisely because the downvote can be, and is being, abused, we need to allow flags to be reversed in the same way they are added.
Because these types of features tend to be used out of emotion rather than objectivity much of the time, I think we still need a group that manually reviews each flag before permanent action is taken. It would probably be a good idea for these positions to be elected.
Adding a system like this will help ensure fewer infractions slip through the cracks. A small group of users cannot evaluate every post, but they can review a smaller set of posts that has already been curated for them. This lightens everyone's responsibilities a little.
Order of Actions After the Initial Flag
After the flag has been initiated, the tag can replace the main tag, so everyone can see there may be an issue. The number of users needed to validate the flag can be determined after an initial debate, and it can be increased as the userbase grows.
One way to show where the tag is in the validation process is to show just the tag initially. After another user or three agree, it could show a checkmark or some other symbol. On final validation it could show the checkmark enclosed, or the post could simply be muted at that point. Another option is to just collapse the post while it's waiting for review.
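As a sketch, the visible stages could be driven by a simple confirmation count. The state names and thresholds below are placeholders; the actual numbers would come out of the initial debate and rise with the userbase.

```typescript
// Hypothetical flag lifecycle, driven only by how many users have agreed.
type FlagState = 'flagged' | 'peer-confirmed' | 'validated';

const PEER_CONFIRMATIONS = 3;    // "another user or three" agree -> show a checkmark
const VALIDATION_THRESHOLD = 10; // final validation -> enclosed checkmark, mute, or collapse

function flagState(confirmations: number): FlagState {
  if (confirmations >= VALIDATION_THRESHOLD) return 'validated';
  if (confirmations >= PEER_CONFIRMATIONS) return 'peer-confirmed';
  return 'flagged';
}
```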
The VYB Whitepaper suggests creating a small group to take charge of moderation. There are many issues with this method. Here are just some of them:
Not everyone (or enough people to be fair) is online at the same time, so catching issues early is unlikely.
This is, of course, centralized, which opens the process up to skepticism and bias. While bias will always exist, a static group would allow bias in a consistent direction.
Obviously, it goes against the Hive blockchain's goal of becoming fully decentralized.
A couple of years ago, Minds.com created a jury system of 12 random users to judge flagged content. As most of us who like to use the shuffle option when listening to music know, these "random" selections rarely work as well as they should: the same users were being chosen often, and many users were never chosen at all. This could be a flaw in the algorithm, or it could be built in.
While this system is still in use, it has caused more issues than it's worth. Many legitimate posts and accounts were removed and banned. This is still happening today, and the platform isn't growing.
Another platform I'm familiar with that is trying to decentralize its moderation is Bastyon.com. Currently, its system mutes an account once its reputation reaches -30 and bans it on a second infraction. It also has a flagging feature that is controlled by an algorithm.
The formula for the flagging feature requires the number of flags to be at least 1/3 of the upvotes, with a minimum of ten flags needed to mute a post. Again, two strikes are given before an account is banned.
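Written out, that mute condition looks something like the sketch below. This is my reading of their formula (both the one-third ratio and the ten-flag minimum as hard requirements), not Bastyon's actual code.

```typescript
// Mute a post only if flags reach the ten-flag floor AND at least 1/3 of the upvotes.
function shouldMutePost(flags: number, upvotes: number): boolean {
  return flags >= 10 && flags >= upvotes / 3;
}

shouldMutePost(9, 3);   // false -- under the ten-flag minimum
shouldMutePost(12, 30); // true  -- 12 >= 10 and 12 >= 30/3
```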
They are working on a new system similar to the Minds.com jury system, which I'll let the screenshot from their FAQ explain.
The moderation proposal I outline above is a tweaked version of what den.social uses on its platform, mixed with Bastyon.com's flagging feature. There are many benefits to a feature that works like my proposal:
It's decentralized. We, the users, are responsible for moderation, as it should be.
It gives us the tools to act as we see fit. A small, static group of users will have trouble dealing with issues in a timely manner, but the userbase as a whole will not.
It can scale with the userbase as it grows. As more users are browsing throughout the day, the thresholds can be increased to keep the process as fair as possible while remaining effective (see the sketch after this list).
It can help increase interaction and give the userbase a sense of ownership. If we feel invested and understand how important it is to keep our environment inviting and fair for us all, we are more likely to read content we wouldn't normally open. And if it's up to standard, while it's open and has been read, why not give it a vote and drop a comment?
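To illustrate the scaling point from the list above, here is one hypothetical way the validation threshold could grow with the active userbase. The base value and step size are made-up numbers for the community to tune.

```typescript
// Raise the number of confirmations needed as more users are active each day.
function validationThreshold(activeUsers: number): number {
  const base = 10;       // minimum confirmations on a small userbase
  const perThousand = 5; // extra confirmations per 1,000 active users
  return base + Math.floor(activeUsers / 1000) * perThousand;
}

validationThreshold(500);  // 10
validationThreshold(4200); // 30
```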
Well, there it is. What do you think? What would you add or subtract? Do you have an idea for a fairer, more efficient system?
Let's hear it. All Tribes are an experiment, so let's experiment.
@scolaris, tagged as promised👍
Please make sure to take the time to get outside and bond with your environment. Your health will thank you at every level of your being, and please share your experiences with the world. Sharing personal knowledge with the community benefits us all, because this interaction is essential to our evolution.
Thank you and I hope your day unfolds on your terms.