I have made a decision to alter and/or remove various restrictions on Voat. I’ve thought a lot about this and it’s something both @Atko and I believe needs to be reevaluated.
Voat has always had a problem with spam. @Amalek would spam posts and hijack the new queue, making it unusable. MH101, and later @SaneGoatiSwear, would hijack comment pages, making them unusable. The rules Voat uses were put in place to combat this behavior. They are old rules, mostly unchanged from the initial versions of this site. Most, if not all, of them were direct responses to spam attacks. It was never Voat's intention to limit non-spam accounts, but that is what has happened as an indirect result of these rules.
Voat will not keep in place a system that permanently limits a segment of users from debating and conversing. This isn’t Free Speech as I see it or as I want it.
Voat will shortly be going live with a new code base, and I want to have a new system designed and ready for when this happens, so I am posting this announcement to get feedback from the community.
The main areas of concern:
- Commenting restrictions on negative CCP accounts that aren't spamming their comments
- Limiting any account that spams comments
TL;DR
We need to allow unpopular opinions while preventing comment spam.
How do we do it?
All options are on the table
https://voat.co/v/announcements/1330806
10246470? ago
:D
Spam is an issue, and we don't want it overrunning the website. But at the same time you're right: these restrictions have been inhibiting people who have done nothing wrong but share too many unpopular opinions, and that isn't in the spirit of Voat.
We should consider what tools we have available. The /v/ReportSpammers community is hard-working, dedicated to keeping Voat free of spam, and very capable of growing. Spam is against Voat's rules; accounts that spam get permanently banned from the website. We determine that accounts are spamming by responding to user reports against specific accounts, evaluating their comments and submissions, and then deciding whether they have indeed spammed. If they have, you eventually ban them. I think that's the basic process.
Waiting for a spammer to accrue negative CCP is relatively slow. What we could do instead is this: if an account receives spam reports, and one of the trusted community members in /v/ReportSpammers marks a report as actual spam, then from that moment the account could be restricted until you or someone else is able to review the reports and ban the guilty users.
As far as I am aware this follows the same process as right now, except it would not restrict any account's commenting ability based on CCP, only on confirmed spam reports. As I understand it, this should restrict guilty accounts much faster than negative CCP would have, without restricting non-spam accounts. All we need is a sufficiently large and trusted group of report markers, plus enough awareness in the Voat community at large to file spam reports instead of downvotes in the first place.
The community at large can vote on who they want / trust to mark reports as actual spam, and we can keep those who have been doing a perfect job already (@Cynabuns namely. I'm sure @NeedleStack would do well also).
I can adjust anything I've written above for feasibility reasons, but I think some interpretation of this will work well for Voat without punishing the innocent.
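In rough Python, the flow I'm picturing would look something like this. Just a sketch of the idea, not Voat's actual code: the names (Account, Report, mark_as_spam, the marker list) are all made up for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical list of community-elected report markers.
TRUSTED_MARKERS = {"Cynabuns", "NeedleStack"}

@dataclass
class Account:
    name: str
    restricted: bool = False   # restricted accounts can't comment until reviewed
    banned: bool = False

@dataclass
class Report:
    reporter: str
    target: Account
    confirmed_by: Optional[str] = None

def mark_as_spam(report: Report, marker: str) -> None:
    """A trusted marker confirms the report; the target is restricted
    immediately, pending admin review, instead of waiting for negative CCP."""
    if marker not in TRUSTED_MARKERS:
        raise PermissionError(f"{marker} is not an elected report marker")
    report.confirmed_by = marker
    report.target.restricted = True

def admin_review(report: Report, is_actually_spam: bool) -> None:
    """Admin makes the final call: ban the guilty, unrestrict the innocent."""
    if is_actually_spam:
        report.target.banned = True
    else:
        report.target.restricted = False
```

The key point is that the restriction flips on at confirmation time and only becomes a ban (or gets lifted) at admin review.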
MadWorld ago
Yes, using negative CCP as the signal for spamming is way too slow and undesirable, as it gets mixed up with unpopular opinions. I have thought about using a neural network or plagiarism detection. It sounds interesting at first, but it's really just a cat-and-mouse game: sooner or later the spammers will find new ways to cheat the system. Human judgement will remain the best defense.
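Even a toy version of the plagiarism-detection idea shows why. Something like the sketch below (stdlib only; the thresholds are pure guesses on my part) catches copy-paste spam, but a spammer only has to reword each comment slightly to slip under the similarity cutoff.

```python
from difflib import SequenceMatcher

def looks_like_comment_spam(comments, similarity=0.9, repeats=3):
    """True if at least `repeats` pairs of comments are near-duplicates."""
    duplicate_pairs = 0
    for i, a in enumerate(comments):
        for b in comments[i + 1:]:
            if SequenceMatcher(None, a.lower(), b.lower()).ratio() >= similarity:
                duplicate_pairs += 1
    return duplicate_pairs >= repeats
```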
@PuttItOut, I would propose something like this toward the spammers:
Something optional to keep the users motivated, but I suspect voaters might not care much since they love voat so much.
PuttItOut ago
I really like the idea of automatically making a post in v/reportspammers whenever a trigger level is detected. This is a very transparent way of verifying the accuracy of the code.
If we move to any sort of reporting system, we have already decided we will have to build a confidence interval for users. Done right, the system would be able to flag spam from reports very quickly, depending on who is reporting the content and their history of reports versus outcomes.
This can also be gamed, so we will still need accountability and can't trust the system fully.
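Very roughly, the weighting I have in mind could look like this. A sketch only: the smoothing, the threshold idea, and the function names are assumptions, not the real implementation.

```python
def reporter_confidence(confirmed, rejected):
    """Fraction of a reporter's past reports that admins confirmed,
    smoothed so brand-new reporters start near 0.5 rather than 0 or 1."""
    return (confirmed + 1) / (confirmed + rejected + 2)

def spam_score(reporter_histories):
    """Sum of reporter confidences for one piece of content;
    each history is a (confirmed, rejected) pair for one reporter."""
    return sum(reporter_confidence(c, r) for c, r in reporter_histories)

# e.g. flag or restrict for admin review once spam_score(...) crosses some
# threshold, while keeping human accountability since weights can still be gamed.
```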
guinness2 ago
But how would this solution cope with shit like:
- bots that create a single new account just so they can maliciously downvote a random post in a sub or on the front page;
- bots that create a single new account just to make random false reports to /v/ReportSpammers, so the mods are too busy dealing with fake reports to keep up with the real ones?
@MadWorld
MadWorld ago
The new code base will have Vote Identity 2.7/2.8 built in to restrict the number of alt accounts that can vote on submissions and comments, assuming the bots have acquired the minimum of 100 CCP. With a few exceptions, I believe it won't be possible to simply keep creating new accounts to get around the barrier. When fake reports are identified, the bot accounts will be restricted or banned. This ban, in combination with Vote Identity 2.7/2.8, could be used to prevent bots from creating new alt accounts. Note that both spammers and false accusers can be restricted or banned.
In the case of pure upvoting/downvoting that leaves no trace of spamming, how would a bot acquire enough CCP to cast the downvote in the first place? It cannot earn enough CCP without making meaningful comments, and only accounts that have earned that CCP are permitted to downvote. If a bot is smart enough to do that, possibly using AI, it sits on the borderline between legitimate user and spammer.
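To illustrate the point, a check along these lines would stop both cases. The 100 CCP figure is Voat's existing downvote rule; everything else here, including the function and how a "vote identity" is represented, is my own guess.

```python
MIN_CCP_TO_DOWNVOTE = 100  # Voat's existing minimum CCP to downvote

def vote_allowed(account_ccp, vote_identity, identities_already_voted, is_downvote):
    """Ignore a vote if another account sharing the same vote identity already
    voted on this item; downvotes additionally require the minimum CCP."""
    if vote_identity in identities_already_voted:
        return False
    if is_downvote and account_ccp < MIN_CCP_TO_DOWNVOTE:
        return False
    identities_already_voted.add(vote_identity)
    return True
```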
guinness2 ago
Ha ha: Vote Identity 2.7/2.8 sounds fantastic!
Thank you for being made from pure awesome!
Hearing this makes me even more excited about Voat's bright future!
MadWorld ago
Yeah, it is awesome! I discovered both rules while testing on the preview site. I thought they were pre-existing rules, but they turned out to be new features. The vote identity is probably a hash based on some hardware attributes of the machine.
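Purely speculating, a hardware-based identity could be something as simple as hashing a canonical string of machine/client attributes, so alts on the same machine collapse to one identity. The attribute names here are invented examples; the real scheme isn't public.

```python
import hashlib

def vote_identity(attributes):
    """Hash a stable set of client/machine attributes into one opaque token."""
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# e.g. vote_identity({"ip_subnet": "203.0.113.0/24", "screen": "1920x1080", "tz": "UTC-5"})
```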
guinness2 ago
Ha ha ha: I can't wait for "certain users" to throw a fit when they learn about this... they'll claim that malicious spamming and brigading attacks on other users are how they express their free speech!