CypherEthics

Social Punishment

Social punishment in a cypherpunk society would be far more efficient than it is today. The ability to participate in social movements programmatically, using publicly available algorithms, is potent.

It would have a massive impact on influencers, who depend on social network penetration. If I can describe to my computer whom I trust and give it some sense of what I trust them for, I can have it incorporate those filters into my inbound data stream.

There is far too much great information out there to consume it all. We all need to curate. Having an algorithmic curator for our inbound information is a critical quality-of-life issue.
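As a minimal sketch of what that could look like, assuming nothing beyond the idea above: the reader keeps a trust list pairing each source with the topics they are trusted on, and the curator scores each inbound item against it. All names and structures here (TrustEntry, Item, curate, the threshold) are hypothetical, invented for illustration.

```python
# Minimal sketch of a user-controlled feed curator. Everything here is
# hypothetical -- the wiki text describes the idea, not an implementation.
from dataclasses import dataclass, field


@dataclass
class TrustEntry:
    """Who I trust, and the topics ("angles") I trust them on."""
    source: str
    topics: set = field(default_factory=set)
    weight: float = 1.0  # how strongly I trust this source


@dataclass
class Item:
    """One piece of inbound content."""
    source: str
    topics: set
    text: str


def score(item: Item, trust: dict) -> float:
    """Weight an item by declared trust and topic overlap; strangers score 0."""
    entry = trust.get(item.source)
    if entry is None:
        return 0.0
    return entry.weight * len(item.topics & entry.topics)


def curate(feed: list, trust: dict, threshold: float = 1.0) -> list:
    """Keep only items that clear the reader's own threshold."""
    return [item for item in feed if score(item, trust) >= threshold]


if __name__ == "__main__":
    trust = {
        "alice": TrustEntry("alice", {"cryptography", "privacy"}, weight=2.0),
        "bob": TrustEntry("bob", {"cooking"}, weight=1.0),
    }
    feed = [
        Item("alice", {"privacy"}, "New key-exchange writeup"),
        Item("bob", {"politics"}, "Hot take on the election"),
        Item("mallory", {"privacy"}, "Totally trustworthy post"),
    ]
    for item in curate(feed, trust):
        print(f"{item.source}: {item.text}")
```

The point is not this particular scoring rule; it is that the filter definition lives with the reader, in the open, rather than inside a platform's recommendation engine.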

We have them now. They're called Google, Facebook, ClearChannel, and MSNBC. But you don't get much input on what they should focus on.

So you get what they're pushing. It is push media, to a substantial degree, even if it is curated to your taste profile.

That is not what the Internet was built for.

Reddit is The Least Evil?!?

The most recent dustup with Spez and Condé Nast sheds a harsh light on social media. Reddit is the most open of all the social media platforms. And they just took a big step toward closing down large-scale direct access to the content people are producing for them.

We don't have a Wikipedia of social media, and we need one. I'm not sure how far along Mastodon or Nostr are, but they - or others like them - need juice.

When we used to talk about launching a platform, way back in the dark ages of the public Internet, we always talked about "the killer app." Any launch needs something so good that people will make the effort to adopt the platform.

But I digress - see more about that on CypherBusiness. The main note here is this: If Reddit is the best, and it is pretty bad, we have to band together and fix this stuff. Software engineers have had it really good for the past couple of decades, and a lot of us - like me - have been living the high life for much of that time. We can, should, and must do more. Nobody else can, and we've had it very good.

Read More: https://en.wikipedia.org/wiki/2023_Reddit_API_controversy

Social Media Will Save Humanity

Key Points

  • Greed in itself cannot motivate a person to act for the good of others.
  • Benevolence can and does motivate people to act for the good of others.
  • Social media is structurally biased in favor of mass action (this is a mathematical truth at the network-theory level; see the sketch after this list).
  • ML influence on human cognition is the riskiest experiment humanity has ever run.
    • ML could result in annihilation, just as nuclear proliferation could.
    • And it could carry us, without our noticing, to a point where we no longer think for ourselves.
    • Any ML that influences what information is presented to people must be Open.
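To make the network-theory point above concrete, here is a toy threshold-cascade simulation (a standard Granovetter/Watts-style model; the code and parameters are illustrative assumptions, not anything from this page). Each node starts inert and begins acting once a couple of its neighbors do; a small seed of actors is routinely amplified into mass action by the structure of the network itself.

```python
# Toy threshold cascade on a random graph: a small seed of "acting" nodes
# tips most of the network once each node needs only a couple of acting
# neighbors to join in. Parameters are illustrative assumptions.
import random


def simulate(n=2000, links_per_node=8, need=2, seed_fraction=0.05, seed=42):
    """Return how many of n nodes end up acting after seeding a small fraction."""
    rng = random.Random(seed)

    # Build a random graph where every node has at least `links_per_node` neighbors.
    neighbors = [set() for _ in range(n)]
    for u in range(n):
        while len(neighbors[u]) < links_per_node:
            v = rng.randrange(n)
            if v != u:
                neighbors[u].add(v)
                neighbors[v].add(u)

    # Seed a small fraction of nodes as already acting.
    active = set(rng.sample(range(n), int(n * seed_fraction)))

    # A node starts acting once at least `need` of its neighbors act.
    changed = True
    while changed:
        changed = False
        for u in range(n):
            if u not in active and sum(v in active for v in neighbors[u]) >= need:
                active.add(u)
                changed = True
    return len(active)


if __name__ == "__main__":
    print(f"{simulate()} of 2000 nodes acting")  # typically the large majority
```

The amplification itself is content-neutral, which is the sense in which the bias toward mass action is structural rather than editorial.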

Benevolent IT People

  • IT people have to be STEM-intelligent and data-driven to be successful.
  • Benevolent people can and do act for the good of others.
  • IT skills are required to work the problem.
  • IT skills imply approaching problems rationally and analytically.
  • Benevolence is required to get mass effect.
  • Open Source influence tech would foster rational, benevolent direction.
  • Making influence tech available to activist IT people is pro-social.

However

  • Benevolent people can be misled into believing they are acting for the good of others. (Reagan and the religious right)
  • Greedy people can be misled into believing that acting for the good of another, even at apparent expense to themselves, is in their own best interest. (Trump: "They're attacking you, but I'm in their way.")
  • Oligarchs own the social media platforms.

Conclusion

  • It won't be easy.
  • It is necessarily, mathematically, possible - perhaps inevitable.

Bonus Points

Decline of Disinformation Research

Academics, universities and government agencies are overhauling or ending research programs designed to counter the spread of online misinformation amid a legal campaign from conservative politicians and activists who accuse them of colluding with tech companies to censor right-wing views.

The escalating campaign — led by Rep. Jim Jordan (R-Ohio) and other Republicans in Congress and state government — has cast a pall over programs that study not just political falsehoods but also the quality of medical information online.

Facing litigation, Stanford University officials are discussing how they can continue tracking election-related misinformation through the Election Integrity Partnership (EIP), a prominent consortium that flagged social media conspiracies about voting in 2020 and 2022, several participants told The Washington Post. The coalition of disinformation researchers may shrink and also may stop communicating with X and Facebook about their findings.

The National Institutes of Health froze a $150 million program intended to advance the communication of medical information, citing regulatory and legal threats. Physicians told The Post that they had planned to use the grants to fund projects on noncontroversial topics such as nutritional guidelines and not just politically charged issues such as vaccinations that have been the focus of the conservative allegations.