CypherEthics

From Traxel Wiki

Principles

  • The way to harden your views is not to indulge them but to challenge them.
    • Emotional outbursts - anger, vitriol, toxicity, slurs, insults - are self-indulgence.
  • ???
  • Opinions are like assholes.
  • Comments should be Reasoned, Constructive, and Substantive
    • Reasoned: Avoid Logical Fallacies
    • Constructive: No baiting or attacking. Use critique flags to notify moderation systems of destructive content. Do not, yourself, become destructive.
    • Substantive: Trivial fluff creates the appearance of shallowness.

Discussion Style

  • State facts neutrally, even if you don't feel neutral about the topic. If the objective is to communicate facts, emotion is at best unproductive.

Bullshit

John Petrocelli, TEDx: https://www.youtube.com/watch?v=WaOiRRqHNNk

  1. Don't call bullshit, unless you're sure it's bullshit.
  2. Be considerate. Consider the possibility that you're confused. Try being uncertain and inviting the potential bullshitter to elaborate.
  3. Attack the claim and not the person.
  4. Reduce confusion to an understandable error in reasoning. Doing so is more forgiving and more palatable to the bullshitter.
  5. When you find yourself guilty of bullshit, just admit the fault. We all make mistakes. Don't double down on your bullshit.

It often takes only one person to stop the propagation of bullshit.

Cashflow

  • Consider creating a cryptocurrency.
    • IC retains some amount of currency.
    • Using or holding that currency creates value.
    • IC gradually dilutes its stake by giving currency from its pool as rewards.
    • Think about the DAO model for cashflow. EG: If IC is giving money away, can currency holders vote to give themselves the money?

Social Punishment

Social punishment in a cypherpunk society would be much more efficient. The ability to programmatically participate in social movements using publicly available algorithms is potent.

It would have a massive impact on influencers, who depend on social network penetration. If I can describe to my computer who I trust and give it some angles on what I trust them for, I can have it incorporate those filters into my inbound datastream.

There is far too much great information out there to consume it all. We all need to curate. Having an algorithmic curator of our inbound information is a critical quality of life issue.

We have them now. They're called Google, Facebook, ClearChannel, and MSNBC. But you don't get much input on what they should focus on.

So you get what they're pushing. It is push media, to a substantial degree, even if it is curated to your taste profile.

That is not what the Internet was built for.

Reddit is The Least Evil?!?

The most recent dustup with Spez and Conde Nast sheds a harsh light on social media. Reddit is the most open of all the social media platforms. And they just took a big step toward closing down large scale direct access to the content people are producing for them.

We don't have a Wikipedia of social media, and we need it. I'm not sure how far along Mastodon or Nostr are, but they - or others like them - need juice.

When we used to talk about launching a platform, way back in the dark ages of the public Internet, we always talked about "the killer app." Any launch needs something so good that people will make the effort to adopt the platform.

But I digress - see more about that on CypherBusiness. The main note here is this: If Reddit is the best, and it is pretty bad, we have to band together and fix this stuff. Software engineers have had it really good for the past couple of decades, and a lot of us - like me - have been living the high life for much of that time. We can, should, and must do more. Nobody else can.

Read More: https://en.wikipedia.org/wiki/2023_Reddit_API_controversy

Social Media Will Save Humanity

Key Points

  • Greed in itself cannot motivate a person to act for the good of others.
  • Benevolence can and does motivate people to act for the good of others.
  • Social media is biased in favor of mass action (a mathematical truth at the network-theory level).
  • ML influence of human cognition is the riskiest experiment humanity has ever run.
    • ML could result in annihilation just as nuclear proliferation could.
    • And it could carry us, unnoticing, to where we no longer think for ourselves.
    • Any ML that influences what information is presented to people must be Open.

Benevolent IT People

  • IT people have to be STEM-intelligent and data-driven to be successful.
  • Benevolent people can and do act for the good of others.
  • IT skills are required to work the problem.
  • IT skills imply approaching problems rationally and analytically.
  • Benevolence is required to get mass effect.
  • Open Source influence tech would foment rational, benevolent direction.
  • Making influence tech available to activist IT people is pro-social.

However

  • Benevolent people can be misled to believe they are acting for the good of others. (Reagan and the religious right)
  • Greedy people can be misled to believe that acting for the good of another, even at an apparent expense to themselves, is in their best interest. (Trump: "They're attacking you, but I'm in their way.")
  • Oligarchs own the social media platforms.

Conclusion

  • It won't be easy.
  • It is necessarily, mathematically, possible - perhaps inevitable.

Bonus Points

Decline of Disinformation Research

Academics, universities and government agencies are overhauling or ending research programs designed to counter the spread of online misinformation amid a legal campaign from conservative politicians and activists who accuse them of colluding with tech companies to censor right-wing views.

The escalating campaign — led by Rep. Jim Jordan (R-Ohio) and other Republicans in Congress and state government — has cast a pall over programs that study not just political falsehoods but also the quality of medical information online.

Facing litigation, Stanford University officials are discussing how they can continue tracking election-related misinformation through the Election Integrity Partnership (EIP), a prominent consortium that flagged social media conspiracies about voting in 2020 and 2022, several participants told The Washington Post. The coalition of disinformation researchers may shrink and also may stop communicating with X and Facebook about their findings.

The National Institutes of Health froze a $150 million program intended to advance the communication of medical information, citing regulatory and legal threats. Physicians told The Post that they had planned to use the grants to fund projects on noncontroversial topics such as nutritional guidelines and not just politically charged issues such as vaccinations that have been the focus of the conservative allegations.

Social Media Failure

People Aren't Failing

  • Gaza / Israel
    • 3 kinds of posts
      • People grinding an axe (few sustained upvotings)
      • People complaining about people grinding their axes (moderate sustained upvotings)
      • People saying, "Take a minute and let's think about this..." (much sustained upvoting)
    • Brigading definitely happens, and poisons the well.
    • But that's not because the masses are failing.
    • They are genuinely trying, and could be succeeding, if the mechanism didn't suck.
    • Update 2023-12-11: The Likud machine has pretty much won.
  • House Speaker Clown Show
    • /r/conservative - genuine introspection
    • /r/conservative also enforces viewpoints.
      • but often only after an initial period, when a story first drops, before the "party line" has traction, during which they are genuinely contemplative.
    • They are genuinely trying, and could be succeeding, if the mechanism didn't suck.

Systems Are Failing People

  • Upvotes, Downvotes, Anonymity
    • Comments are pseudonymous, with long-term implications for an ID
      • Though mass access to posts is not super-well supported.
    • Updoots are anonymous from the outside.
      • And not super-well curated from the inside.
      • Even if they intend to do well, they aren't doing well.
      • Some platforms don't even intend to do well (in the benevolent sense of "do well at doing good").
  • Decentralize
    • Central can help, and it will put things out, but it cannot do everything.
    • We The People have the strongest vested interest in the well-being of We The People.
    • Centralized epidemic response depends on decentralized immune systems.
    • That is every bit as true of information infections as it is of biological infections.
  • Much Greater Focus on Reactions
    • People love to react to things.
    • It's much easier than writing a cogent message.
    • Currently we write our reactions because "thumbs up" and "thumbs down" are not sufficiently expressive.
    • While this is obvious when someone posts about their Mom's cancer treatment, it is at least equally true of critical analysis.
    • And it is a very treatable gap in the immune response.
    • Reaction Emojis.

Critique Tags

Why

If we require decentralized defense against information infections, it necessarily follows that we need decentralized signaling about the quality of information packets that are transiting the social network.

It is not possible to have a system that reacts in a decentralized fashion by acting upon signals that are being generated from a centralized source.

One might assume that the most important information in a social network is the contents of the messages passed between nodes, but I think that is untrue, or at best partly true. Equally important is the ability to distinguish messages that are benevolent from those that are malevolent, and messages that are robust from those that are flawed.

Script

In 2007, I was working on ad targeting, and a friend got me interested in linear algebra, the new way to do Artificial Intelligence. I used it to target ads. For every dollar the human targeting experts sold, my machine sold $3.80.

I was shaken by the abstract implication of that. I put the right product in front of the right person at the right time, and they bought it. Going a step further, if I could put the right stimulus in front of a person at the right time, I could influence them toward whatever I wanted them to do. Find a guy who is mad at his boss, tell him whiskey will make him feel better, he'll buy Jack Daniels. Find a person who got fired, blame it on the opposing politician, they'll vote for you.

The US has had 1.1 million deaths from COVID-19. Just 52 doctors were the main superspreaders of vaccine misinformation, with a total of 18 million followers. Hundreds of thousands of Americans died.

Trump was elected because of social media, and used it in his effort to overthrow the government. Hamas and Likud are using social media to fan the flames of war. Putin uses social media to keep Russian citizens supporting a war he cannot win.

We are failing to mitigate misinformation. Social media is a broken product that is killing people.

Our approach to misinformation is almost entirely centralized. We rely on the social media companies to fix it. But there is no clear profit motive driving them to do it, and even if there were, centralized solutions to misinformation are inefficient.

Information transmission through humans follows biological infection models. And like biological infections, the most effective response stops the spread at the individual organism. Fighting misinformation in social media requires a decentralized solution.

And that's what I'm working on with the discussion system I'm building around the Reddit summary engine. It runs on Nostr Protocol, which is like IRC - Internet Relay Chat - from way back in 1988. Nostr adds some modern upgrades like JSON formatting, cryptographic identities, and message signing.

The first piece I want to show you enables better signaling about information quality. Humans are naturally skilled bullshit detectors. Lots of people get fooled, but with enough eyes on it, most bullshit can't stand. With the right tools for propagating bullshit detection signals, we can dramatically improve mitigation of misinformation.

We currently use three main tools for decentralized feedback:

Comment, Like, and Subscribe!

Likes and subscribes are each a single bit of data, and aren't publicly linked to an identity, which makes them easy to game. Comments contain more data and are linked to an identity, but require natural language processing and sentiment analysis, making them noisy for machine processing.

Nostr includes the notion of labeling, which is similar to Likes or Upvotes but with four major upgrades:

  1. Every label is signed by the labeler. You can base your trust in a label on the labeler's identity, history, or network of trust.
  2. Labels can be more diverse than up/down vote. You can mark comments with labels like Insightful, Fact-Based, or Off-Topic.
  3. Labels can belong to an ontology. You can enumerate the valid selections, and massively reduce the complexity and noise associated with natural language sentiment analysis.
  4. The ontology can be multi-class, meaning labelers can attach more than one label. You can mark a comment as both Ad-Hominem and Emotionally Charged, or as Well-Structured and Fact-Based.
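The four upgrades above can be sketched as a minimal labeling event. This is an illustrative sketch only: the kind number and `L`/`l` tag layout follow Nostr's labeling convention (NIP-32) as I understand it, while the "critique" namespace and the label names are hypothetical placeholders, and the event is left unsigned.

```python
# Sketch of a multi-class Nostr labeling event (assumptions: kind 1985
# and the "L"/"l" tag layout come from NIP-32; the "critique" namespace
# and the label names are hypothetical, not an established ontology).
import json
import time

def make_label_event(pubkey: str, target_event_id: str, labels: list[str]) -> dict:
    """Build an unsigned labeling event attaching one or more
    ontology labels to another event (upgrade 4: multi-class)."""
    tags = [
        ["e", target_event_id],   # the event being labeled
        ["L", "critique"],        # label namespace (hypothetical)
    ]
    # One "l" tag per label drawn from the enumerated ontology (upgrade 3).
    tags += [["l", label, "critique"] for label in labels]
    return {
        "pubkey": pubkey,         # upgrade 1: the label is tied to an identity
        "created_at": int(time.time()),
        "kind": 1985,             # NIP-32 label event kind
        "tags": tags,
        "content": "",
    }

event = make_label_event("npub_example", "abc123", ["Ad-Hominem", "Emotionally-Charged"])
print(json.dumps(event, indent=2))
```

A real client would then sign the event and publish it to relays; verifiers could weight the label by the signer's history.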

And at last, here is the proposed multi-class ontology.

What

How

Challenge

The way to harden your views is not to indulge them but to challenge them.

NIP nn: Critiques

A Critique is a NIP-25 Reaction Event which contains one or more of the following emojis in its content field. Clients SHOULD interpret each emoji as an expression meaning "this signer believes the reacted-to event satisfies this condition".

Each individual emoji in a Critique is a Reaction, meaning that a Reaction Event may contain more than one Reaction.

Each Reaction MUST be one of the following 24 symbols (3 currently reserved), which are taken from Unicode 15.1 and use the official Unicode encoding and short name, with spaces replaced with underscores in the short name where necessary.

The list of symbols is deliberately kept very short to create guardrails that guide the reacting user toward a machine-comparable set of options. While machine learning systems can extract signal from noisier inputs, it is considerably more efficient to hand-hold the human in creating a less noisy signal in the first place.

It is intended that this list of symbols will evolve very slowly once established, that the original list (once reaching "recommended" status) will be respected in perpetuity, and that the number of new symbols over time will be on the order of one per decade. The three "reserved" placeholders are intended to absorb the first decade's worth of additions.
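A client-side sketch of reading Critiques out of a NIP-25 Reaction Event might look like the following. The emoji set here is a hypothetical stand-in, since the 24-symbol list is not yet enumerated on this page, and the symbols chosen are single-codepoint; a real implementation would need grapheme-cluster handling for multi-codepoint emoji.

```python
# Sketch of extracting Critiques from a kind-7 Reaction Event's content
# field. CRITIQUE_SYMBOLS is a placeholder, NOT the proposed 24-symbol
# ontology; all three placeholders are single Unicode codepoints, so a
# simple per-character scan is sufficient here.
CRITIQUE_SYMBOLS = {"🧠", "🔥", "🎯"}

def extract_critiques(reaction_event: dict) -> set[str]:
    """Return the set of recognized critique symbols in a reaction
    event; characters outside the ontology are ignored."""
    if reaction_event.get("kind") != 7:  # NIP-25 reactions are kind 7
        return set()
    return {ch for ch in reaction_event.get("content", "") if ch in CRITIQUE_SYMBOLS}

event = {"kind": 7, "content": "🧠🎯x", "tags": [["e", "abc123"]]}
print(extract_critiques(event))
```

Because each emoji is an independent Reaction, a single event can carry several Critiques at once, which is what makes the multi-class ontology workable.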