Everyone, I have something very important to say about The Agora.

The Problem

Let me be super clear here about something people don’t seem to understand about Lemmy and the fediverse. Votes mean absolutely nothing. No, less than nothing.

In the fediverse, anyone can open an instance and create as many users as they want, so one person can easily vote 10,000 times. I’m serious. This is not hard to do.

Voting at best is a guide to what is entertaining.

As soon as you allow an incentive, the vast majority of votes will be fake. They might already be mostly fake.

If you try to make any decision using votes as a guide, someone WILL manipulate votes to control YOU.

One solution (think of others too!)

A council of trusted users.

The admin and top mods could set up a group to decide who to ban and which instances to defederate from. You will not get it right 100% of the time, but you also won’t be controlled by one guy in his basement running 4 instances and 1,000 alts.

Now I’m gonna go back to shitposting.

  • jjagaimo@lemmy.one · 1 year ago

    A public/private key pair is more effective. That’s how “https” sites work: SSL/TLS uses certificates to authenticate who is who. Every site with https has an SSL certificate, which basically contains the public key of the site. The site can then use its private key to sign the data it sends you, and you can verify that it actually came from them by checking the signature with their public key. Certificates are granted by a certificate authority, which is basically the identity service you are talking about. Certificates are usually themselves signed by the certificate authority, so you can tell that someone didn’t just man-in-the-middle-attack you and swap out the certificate, and the site can still directly serve you the certificate instead of you needing to go elsewhere to find it.
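
    To make that concrete, here’s a minimal sketch of the sign-and-verify step in Python, using the third-party cryptography package (the key and message are just placeholders; real TLS is more involved than this):

    ```python
    # pip install cryptography
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The site keeps the private key secret; the certificate carries the public key.
    site_private_key = Ed25519PrivateKey.generate()
    site_public_key = site_private_key.public_key()

    message = b"page content sent to the client"
    signature = site_private_key.sign(message)

    # Anyone holding the public key can check the site really produced this.
    try:
        site_public_key.verify(signature, message)
        print("valid: data came from the holder of the private key")
    except InvalidSignature:
        print("invalid: data was forged or tampered with")
    ```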

    The problem with this is severalfold. You would need some kind of digital identity organization(s) to be handling sensitive user data. This organization would need to:

    1. Be trusted. Trust is the key to making these things work. Certificate authorities are often large companies with a vested interest in keeping people’s business, so they are highly unlikely to mess with people’s data. If you can’t trust the organization, you can’t trust any certificate issued or signed by it.

    2. Be secure. Leaking data or being compromised is completely unacceptable for this type of service.

    3. Know your identity. The ONLY way to be 100% sure that it isn’t someone just making a new account and a new key or certificate (e.g. bots) would be to verify someone’s details through some kind of identification. This is pretty bad for several reasons. Firstly, it puts more data at risk in the event of a security breach. Secondly, there is the risk of doxxing, or of connecting your real identity to your online identity, should your data be leaked. Thirdly, it could allow impersonation using leaked keys (though I’m sure there’s a way to cryptographically timestamp things and then just mark the key as invalid; see the sketch after this list). Fourth, you could allow one person to make multiple certificates for various accounts to keep them separately identifiable, but this would also potentially enable making many alts.
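
    As a rough illustration of the timestamp-and-revoke idea in point 3 (everything here is hypothetical; a real system would need a trusted timestamping service, since a thief with a stolen key could otherwise backdate signatures):

    ```python
    import json
    import time

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    key = Ed25519PrivateKey.generate()
    revoked_after = {}  # key id -> unix time after which the key is not trusted

    def sign_with_timestamp(message: str) -> bytes:
        # NOTE: in reality "ts" must come from a trusted timestamping service.
        payload = json.dumps({"msg": message, "ts": time.time()}).encode()
        return payload + key.sign(payload)  # Ed25519 signatures are always 64 bytes

    def verify(blob: bytes, public_key, key_id: str) -> bool:
        payload, sig = blob[:-64], blob[-64:]
        try:
            public_key.verify(sig, payload)
        except InvalidSignature:
            return False
        cutoff = revoked_after.get(key_id)
        # Accept only signatures made before the key was marked invalid.
        return cutoff is None or json.loads(payload)["ts"] < cutoff

    blob = sign_with_timestamp("hello")
    print(verify(blob, key.public_key(), "key-1"))   # True
    revoked_after["key-1"] = time.time() - 3600      # pretend it leaked an hour ago
    print(verify(blob, key.public_key(), "key-1"))   # False: signed after the cutoff
    ```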

    There may be less aggressive ways of verifying the individual humanness of a user, or just preventing bots as in that 3rd point. For example: a simple sign-up with questions to weed out bots, which generates an identity (certificate/key) that you can then add to your account. That would move the bot target from the various Lemmy instances solely to the certificate authorities. Certificate authorities would probably need to be a small number of trusted sources, as making it “spin up your own” means anyone could do just that with less pure intentions, or with modified code that lets them impersonate other users as bots. That sucks because it goes against the fundamental idea that anyone should be able to do it themselves, and against the open-source ideology. Additionally, you would need to invest in tools to prevent DDoS attacks and ChatGPT bots.
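
    A sketch of that sign-up flow (the function names and bot check are invented for illustration; real issuance would presumably use proper X.509 certificates rather than bare signed keys):

    ```python
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    authority_key = Ed25519PrivateKey.generate()   # held by the certificate authority
    authority_public = authority_key.public_key()  # known to every instance

    def issue_identity(passed_bot_check: bool, user_public_bytes: bytes) -> bytes:
        """Authority side: vouch for a key only after the sign-up questions."""
        if not passed_bot_check:
            raise PermissionError("failed bot screening")
        return authority_key.sign(user_public_bytes)

    def accept_signup(user_public_bytes: bytes, attestation: bytes) -> bool:
        """Instance side: accept the account if a trusted authority vouched for it."""
        try:
            authority_public.verify(attestation, user_public_bytes)
            return True
        except InvalidSignature:
            return False

    user_key = Ed25519PrivateKey.generate()
    user_public_bytes = user_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )
    attestation = issue_identity(True, user_public_bytes)
    print(accept_signup(user_public_bytes, attestation))  # True
    ```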

    There most certainly exist user authentication authorities; however, it wouldn’t surprise me a bit if there were no suitable drop-in solutions for this. This in and of itself is a fairly difficult project because of the scale needed to start, as well as the effort put into verifying that users are human. It’s also a service that would have to be completely free to be accepted, yet cannot just shut down, at the risk of preventing further users from signing up. I considered perhaps charging instances a small fee (e.g. $1/mo) if they have over a certain threshold of users, to allow issuing further certificates to their instance, but it’s the kind of thing I think would need to be decoupled from Lemmy to have a chance of surviving through more widespread use.

    • Ajen@sh.itjust.works · 1 year ago

      Interesting idea, but I don’t think it would be practical to verify identities for a global community. If you’ve ever worked in a bar or another business that checks IDs (and are from the US), you know how hard it is just to verify the identity of US citizens. If you’re considering a global community, US and EU users would be the easiest to verify, and citizens of smaller countries would be much harder. How do you handle countries that have extremely corrupt governments, where it’s easy to bribe an official for “real” documents for fictitious people?

      • jjagaimo@lemmy.one · 1 year ago

        There are companies that do this with IDs, but they are typically already global corporations or SSL certificate authorities. One example is Verisign; another is GlobalSign. Their products are unsuitable, however, because they connect your real identity to the account. They could be useful for a one-time humanness verification, though.

        The main goal would be to decouple the humanness check from Lemmy and hand it to an authority meant just to create certificates that cannot be linked back to the person. You could probably rate-limit how many new certificates each person can create after the human check, as sketched below. This would allow creating alts but limit the number of bots one person could create, as they’d need to pass the automated verification each time.
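
        Something like this, roughly (the window and limit are made-up numbers):

        ```python
        import time
        from collections import defaultdict

        WINDOW = 30 * 24 * 3600        # 30 days; arbitrary
        MAX_CERTS_PER_WINDOW = 3       # alts allowed per window; arbitrary

        issue_log = defaultdict(list)  # verified-person id -> issue timestamps

        def may_issue_certificate(person_id: str) -> bool:
            """Allow a new certificate only if the verified person is under the cap."""
            now = time.time()
            recent = [t for t in issue_log[person_id] if now - t < WINDOW]
            if len(recent) >= MAX_CERTS_PER_WINDOW:
                issue_log[person_id] = recent
                return False
            recent.append(now)
            issue_log[person_id] = recent
            return True
        ```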

        One issue would be trust, because you would need to trust the authority’s word that the person who created the certificate was human.

    • DarkwingDuck@sh.itjust.works · 1 year ago

      What the fuck happened to the internet? What happened to “never share your real name or any identifying information on the internet”?

      > some kind of digital identity organization(s) to be handling sensitive user data

      Like Equifax? Excuse me if I am a little skeptical of “trusted” organizations handling my data.

      • jjagaimo@lemmy.one · 1 year ago

        I literally addressed this. My point is that we’d need to give out personally identifying information to be 100% sure, so the best approach at the moment would instead be to just verify humanness as well as possible (e.g. better captchas, AI/ChatGPT response detection, etc.) and shift account sign-up to the authority’s side, accepting that less than 100% of accounts are unique individuals and preventing bots in other ways.

        Also, “trusted organizations handling your data” is exactly how 99% of the modern internet works. Rarely, if ever, do we give thought to the fact that companies like Verisign exist, or that people regularly give credit card information to websites. At the same time, companies and corporations aren’t just some random schmuck spinning up their own authentication service.