CypherEthics: Difference between revisions
Revision as of 05:57, 21 October 2023

Social Punishment

Social punishment in a cypherpunk society would be much more efficient. The ability to programmatically participate in social movements using publicly available algorithms is potent.

It would have a massive impact on influencers, who depend on social network penetration. If I can describe to my computer who I trust and give it some angles on what I trust them for, I can have it incorporate those filters into my inbound datastream.

There is far too much great information out there to consume it all. We all need to curate. Having an algorithmic curator of our inbound information is a critical quality of life issue.
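To make the idea of a personal algorithmic curator concrete, here is a minimal sketch. Everything in it is hypothetical - the trust table, the item shape, the threshold - not any real API; it only illustrates "describe who I trust and for what, then filter my inbound stream accordingly":

```python
# Illustrative sketch of a personal, trust-based feed filter.
# All names and data shapes here are hypothetical, not a real API.

# Who I trust, and for what (0.0 - 1.0, per topic).
trust = {
    "alice": {"security": 0.9, "politics": 0.2},
    "bob":   {"politics": 0.8},
}

def score(item, trust):
    """Sum the topic-specific trust of everyone who endorsed this item."""
    return sum(
        trust.get(endorser, {}).get(item["topic"], 0.0)
        for endorser in item["endorsed_by"]
    )

def curate(feed, trust, threshold=0.5):
    """Keep only items endorsed strongly enough by people I trust on that topic."""
    return [item for item in feed if score(item, trust) >= threshold]

feed = [
    {"id": 1, "topic": "security", "endorsed_by": ["alice"]},
    {"id": 2, "topic": "politics", "endorsed_by": ["alice"]},
]
print([item["id"] for item in curate(feed, trust)])  # [1]
```

The point is that the filter runs on my machine against criteria I wrote, rather than inside a platform's opaque ranking system.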

We have them now. They're called Google, Facebook, ClearChannel, and MSNBC. But you don't get much input on what they should focus on.

So you get what they're pushing. It is push media, to a substantial degree, even if it is curated to your taste profile.

That is not what the Internet was built for.

Reddit is The Least Evil?!?

The most recent dustup with Spez and Condé Nast sheds a harsh light on social media. Reddit is the most open of all the social media platforms. And they just took a big step toward closing down large-scale direct access to the content people are producing for them.

We don't have a Wikipedia of social media, and we need one. I'm not sure how far along Mastodon or Nostr are, but they - or others like them - need juice.

When we used to talk about launching a platform, way back in the dark ages of the public Internet, we always talked about "the killer app." Any launch needs something so good that people will make the effort to adopt the platform.

But I digress - see more about that on CypherBusiness. The main note here is this: if Reddit is the best, and it is still pretty bad, we have to band together and fix this stuff. Software engineers have had it really good for the past couple of decades, and a lot of us - like me - have been living the high life for much of that time. We can, should, and must do more. Nobody else can.

Read More: https://en.wikipedia.org/wiki/2023_Reddit_API_controversy

Social Media Will Save Humanity

Key Points

  • Greed in itself cannot motivate a person to act for the good of others.
  • Benevolence can and does motivate people to act for the good of others.
  • Social media is biased in favor of mass action (a mathematical truth at the network-theory level).
  • ML influence of human cognition is the riskiest experiment humanity has ever run.
    • ML could result in annihilation just as nuclear proliferation could.
    • And it could carry us, unnoticing, to where we no longer think for ourselves.
    • Any ML that influences what information is presented to people must be Open.

Benevolent IT People

  • IT people have to be STEM-intelligent and data-driven to be successful.
  • Benevolent people can and do act for the good of others.
  • IT skills are required to work the problem.
  • IT skills imply approaching problems rationally and analytically.
  • Benevolence is required to get mass effect.
  • Open Source influence tech would foment rational, benevolent direction.
  • Making influence tech available to activist IT people is pro-social.

However

  • Benevolent people can be misled to believe they are acting for the good of others. (Reagan and the religious right)
  • Greedy people can be misled to believe acting for the good of another, even at an apparent expense to themselves, is in their best interest. (Trump: "They're attacking you, but I'm in their way.")
  • Oligarchs own the social media platforms.

Conclusion

  • It won't be easy.
  • It is necessarily, mathematically, possible - perhaps inevitable.

Bonus Points

Decline of Disinformation Research

Academics, universities and government agencies are overhauling or ending research programs designed to counter the spread of online misinformation amid a legal campaign from conservative politicians and activists who accuse them of colluding with tech companies to censor right-wing views.

The escalating campaign — led by Rep. Jim Jordan (R-Ohio) and other Republicans in Congress and state government — has cast a pall over programs that study not just political falsehoods but also the quality of medical information online.

Facing litigation, Stanford University officials are discussing how they can continue tracking election-related misinformation through the Election Integrity Partnership (EIP), a prominent consortium that flagged social media conspiracies about voting in 2020 and 2022, several participants told The Washington Post. The coalition of disinformation researchers may shrink and also may stop communicating with X and Facebook about their findings.

The National Institutes of Health froze a $150 million program intended to advance the communication of medical information, citing regulatory and legal threats. Physicians told The Post that they had planned to use the grants to fund projects on noncontroversial topics such as nutritional guidelines and not just politically charged issues such as vaccinations that have been the focus of the conservative allegations.

Social Media Failure

People Aren't Failing

  • Gaza / Israel
    • 3 kinds of posts
      • People grinding an axe (little sustained upvoting)
      • People complaining about people grinding their axes (moderate sustained upvoting)
      • People saying, "Take a minute and let's think about this..." (much sustained upvoting)
    • Brigading definitely happens, and poisons the well.
    • But that's not because the masses are failing.
    • They are genuinely trying, and could be succeeding, if the mechanism didn't suck.
  • House Speaker Clown Show
    • /r/conservative - genuine introspection
    • /r/conservative also enforces viewpoints.
      • but often only after an initial period, when a story first drops, before the "party line" has traction, during which they are genuinely contemplative.
    • They are genuinely trying, and could be succeeding, if the mechanism didn't suck.

Systems Are Failing People

  • Upvotes, Downvotes, Anonymity
    • Comments are pseudonymous, with long-term implications for an ID
      • Though mass access to posts is not super-well supported.
    • Updoots are anonymous from the outside.
      • And not super-well curated from the inside.
      • Even if they intend to do well, they aren't doing well.
      • Some platforms don't even intend to do well (in the benevolent sense of "do well at doing good").
  • Decentralize
    • Centralized response can help, and it will put out some fires, but it cannot do everything.
    • We The People have the strongest vested interest in the well-being of We The People.
    • Centralized epidemic response depends on decentralized immune systems.
    • That is every bit as true of information infections as it is of biological infections.
  • Much Greater Focus on Reactions
    • People love to react to things.
    • It's much easier than writing a cogent message.
    • Currently we write our reactions because "thumbs up" and "thumbs down" are not sufficiently expressive.
    • While this is obvious when someone posts about their Mom's cancer treatment, it is at least equally true of critical analysis.
    • And it is a very treatable gap in the immune response.
    • Reaction Emojis.

Critique Tags

Why

If we require decentralized defense against information infections, it necessarily follows that we need decentralized signaling about the quality of information packets that are transiting the social network.

It is not possible to have a system that reacts in a decentralized fashion by acting upon signals that are being generated from a centralized source.

One might assume that the most important information in a social network is the contents of the messages passed between nodes, but I think that is untrue, or at best partly true. At least as important is the ability to distinguish between messages that are benevolent and those that are malevolent, and between messages that are robust and those that are flawed.

Script

The US has had over 1.1 million deaths from COVID-19. A social media study found that just 52 doctors, who were superspreaders of vaccine misinformation, had a total of 18 million followers. In a trial that followed 20,000 vaccinated people and 20,000 placebo controls for 28 days, 16 placebo recipients were hospitalized and 7 died, while zero vaccinated participants were hospitalized or died. More than a hundred thousand people died because of lies propagated over social media. Trump was elected because of social media. Trump used social media to try to overthrow the government and is likely to try it again. Hamas and Likud are using social media to fan the flames of war. Putin uses social media to set Americans at each other's throats and to manipulate Russian citizens into continuing a brutal war he cannot win.

So far, our approaches to mitigating misinformation are dismal failures. Social media is a broken product that is killing people.

Our approach to mitigating misinformation is currently almost entirely centralized. We rely on the major social media corporations to protect us against toxic, infectious misinformation. For many reasons, their efforts are not working. There are good reasons, like respecting free speech, and bad ones, like everything Elon Musk has done to Twitter.

But the individual reasons that they fail are not the main problem. The main problem is that centralized, for-profit social media is not the right tool for combating misinformation. There is no clear-cut and universal profit motive behind it, and even if there were, centralized solutions to misinformation are ineffectual.

Information transmission through humans follows biological infection models. And like biological infection, even when we have a centralized solution like vaccines, the efficacy is massively dependent on broad-based distribution of immune response.

Fighting misinformation in social media requires a decentralized solution, both from a network transmission reduction standpoint and from a diversity of immune response standpoint.
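The immune-response point can be illustrated with a toy SIR-style simulation. This is illustrative only - a randomly mixed population with made-up parameters, not a calibrated epidemiological model - but it shows how strongly the fraction of "immune" (signal-equipped, skeptical) people shapes how far a rumor spreads:

```python
import random

def spread(n_people, immune_frac, contacts_per_step=3,
           p_transmit=0.5, steps=30, seed=1):
    """Toy SIR-style spread of a rumor through a randomly mixed population.
    Illustrative only: parameters are invented, not fitted to any data."""
    random.seed(seed)
    people = list(range(n_people))
    immune = set(random.sample(people, int(n_people * immune_frac)))
    susceptible = [p for p in people if p not in immune]
    infected = set(susceptible[:5])       # five initial spreaders
    ever_infected = set(infected)
    for _ in range(steps):
        newly = set()
        for _spreader in infected:
            for _ in range(contacts_per_step):
                target = random.randrange(n_people)
                if (target not in immune and target not in ever_infected
                        and random.random() < p_transmit):
                    newly.add(target)
        ever_infected |= newly
        infected = newly
    return len(ever_infected) / n_people

# More "immune" people -> the same rumor reaches a far smaller share.
print(spread(1000, 0.1), spread(1000, 0.8))
```

With 10% immunity the rumor breaks out; with 80% immunity it sputters and dies, exactly the herd-immunity effect the analogy predicts.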

One of the components that needs to be addressed is decentralization of the signals about information quality. Currently we rely on the central systems to detect misinformation, but they are not able to apply the same amount of computing power that the public applies to social media. And I'm not talking about the information agents that we should be developing, I'm talking about the meatware computers we carry around in our heads.

Humans are naturally skilled bullshit detectors. And while there are highly proficient bullshit artists, average day-to-day bullshit cannot withstand the scrutiny of even a handful of randomly selected people. With the right tools for propagating bullshit-detection signals, the total distributed computing power of human bullshit detectors is enormous. But we currently apply it with just two tools: upvotes, which are easily machine-processed and a clear signal, but carry only a single bit; and replies, which allow a much more nuanced response, but are extremely noisy for machine processing, requiring NLP and sentiment analysis.
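The gap between those two tools can be quantified in information-theoretic terms. Taking the 24-symbol vocabulary proposed later in this document, a single up/down vote carries one bit, one label drawn from 24 symbols carries about 4.6 bits, and allowing any subset of the 24 carries up to 24 bits per reaction:

```python
import math

# Information capacity of one reaction, in bits.
upvote_bits = math.log2(2)             # up or down: 1 bit
single_label_bits = math.log2(24)      # one label from a 24-symbol set: ~4.58 bits
multi_label_bits = math.log2(2 ** 24)  # any subset of 24 labels: 24 bits

print(upvote_bits, round(single_label_bits, 2), multi_label_bits)
```

Unlike a reply, all of that signal arrives in a form a machine can aggregate without NLP.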

In addition, the identities of upvoters are typically hidden, which facilitates brigading.

I want to address this using Nostr. It is a decentralized communication protocol similar to IRC (Internet Relay Chat, from 1988), but with modernized features like cryptographic identities and message signing.

Nostr includes the notion of labeling, which is like an upvote but with four major upgrades:

  1. Every label is signed by the labeler. You can base your trust in any particular label on the labeler's integrity and domain knowledge.
  2. Labels can be more diverse than an up/down vote. You can mark a comment with labels like Insightful or Fact-Based.
  3. Labelers can attach more than one label. You can mark a comment as both Ad-Hominem and Emotionally Charged.
  4. Labels can belong to an ontology. You can enumerate the valid selections and massively reduce the complexity and noise associated with natural-language sentiment analysis.
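To make the four upgrades concrete, here is roughly what a Nostr label event (kind 1985, per NIP-32) looks like, as I read the spec - the namespace and placeholder values are hypothetical, and field details should be checked against the NIP itself:

```python
import json
import time

# Sketch of a NIP-32-style Nostr label event, as I understand the spec.
# The namespace and placeholder strings are hypothetical. The "id" and
# "sig" fields are omitted: they are derived from the serialized event
# and the labeler's private key.
label_event = {
    "kind": 1985,                      # NIP-32 label event
    "pubkey": "<labeler-public-key>",  # upgrade 1: every label is attributable
    "created_at": int(time.time()),
    "tags": [
        ["L", "social.example.critique"],                  # upgrade 4: ontology namespace
        ["l", "insightful", "social.example.critique"],    # upgrade 2: richer than up/down
        ["l", "fact-based", "social.example.critique"],    # upgrade 3: multiple labels
        ["e", "<id-of-labeled-event>"],                    # the event being labeled
    ],
    "content": "",
}

print(json.dumps(label_event, indent=2))
```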

NIP nn: Critiques

A Critique is a NIP-25 Reaction Event which contains one or more of the following emojis in its content field. Clients SHOULD interpret each emoji as an expression meaning "this signer believes the reacted-to event satisfies this condition".

Each individual emoji in a Critique is a Reaction, meaning that a Reaction Event may contain more than one Reaction.

Each Reaction MUST be one of the following 24 symbols (3 currently reserved), which are taken from Unicode 15.1 and use the official Unicode encoding and short name, with spaces replaced with underscores in the short name where necessary.

The list of symbols is deliberately kept very short to create guardrails that guide the reacting user into a machine-comparable set of options. While machine learning systems can extract signal from noisier inputs, it is considerably more efficient to hand-hold the human in the process of creating a less noisy signal in the first place.

It is intended that this list of symbols will evolve very slowly once established, that the original list (once it reaches "recommended" status) will be respected in perpetuity, and that new symbols will be added on the order of one per decade. The three "reserved" placeholders are intended to absorb the first decade's worth of additions.
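The parsing rule above - each recognized emoji in the content field is one Reaction, drawn from the enumerated set - can be sketched as follows. The symbol set below is a hypothetical stand-in, since the actual 24-symbol list is not reproduced in this draft:

```python
# Sketch of critique extraction from a NIP-25 Reaction Event's content field.
# The symbol table is a hypothetical stand-in for the draft's 24-symbol list;
# these three happen to be single-codepoint emojis, which keeps the sketch to
# per-character scanning.
CRITIQUE_SYMBOLS = {
    "\U0001F9E0": "BRAIN",      # stand-in for e.g. "insightful"
    "\U0001F4CA": "BAR_CHART",  # stand-in for e.g. "fact-based"
    "\U0001F525": "FIRE",       # stand-in for e.g. "emotionally charged"
}

def parse_critiques(reaction_event):
    """Return the recognized critiques in a reaction event's content field.
    This sketch ignores unrecognized characters rather than rejecting the event."""
    content = reaction_event.get("content", "")
    return [CRITIQUE_SYMBOLS[ch] for ch in content if ch in CRITIQUE_SYMBOLS]

event = {"kind": 7, "content": "\U0001F9E0\U0001F525"}
print(parse_critiques(event))  # ['BRAIN', 'FIRE']
```

A real implementation would also need to decide whether an event containing no valid symbols is an ordinary NIP-25 reaction or an invalid Critique.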