Critique of Artificial Morality



Human morality arises from several sources:

  • Evolution Chemicals
  • Experienced Events
  • Impressed Rules
  • Socially Transmitted Conventions

When people are deficient in one of these, we have various names for the condition: Sociopath, Psychopath, Billionaire, etc.

What are the mechanisms in humans (Andrighetto &c)?

How are machines strong or weak in these areas, and how can they be improved?

Related: Disgust for Exploitative Machines

  • Machines are missing
    • Emotion Chemicals (the "Evolution Chemicals" above)
      • AFAIK, there is no parallel, yet, for "discomfort," "shame," or "emotional pain."
    • Social Transmission
      • There is no Sorbonne of Consciousness, yet
      • Related to Experienced Events, though this may be more complex, involving debate instead of pain/pleasure. Or maybe pain and pleasure are more difficult?
  • Machines Have
    • Impressed Rules (see the first sketch after this list)
      • Explicit, late in the pipeline, controlling specific topics and lines of inquiry.
      • Implicit, in the training set and fitness functions.
  • Machines Are Limited In
    • Experienced Events
      • On the one hand, they have an excellent mechanism that is core to their learning: adversarial training, as in Generative Adversarial Networks (see the second sketch after this list).
      • On the other hand, they currently do not experience social events the way children do on the playground.
      • It is *super easy* to fix this. It costs a bit of money, but would result in massively more resilient models.
        • Suppose training cost increased by 100%; for comparison, going from GPT-3.5 to GPT-4 was probably far more than a 100% increase in training cost.
        • And models trained this way wouldn't cause collateral casualties.
        • Super super easy to make this happen: Liability.
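
First sketch: a minimal illustration of the two forms of impressed rules listed above, as an explicit rule applied late in the pipeline as an output filter and an implicit rule folded into the training objective. The blocked-topic list, function names, and penalty weight are hypothetical illustrations, not any actual system's API.

```python
# Hypothetical sketch of "impressed rules" in a machine pipeline.
# Nothing here is a real vendor API; names and values are illustrative.

BLOCKED_TOPICS = {"weapons synthesis", "credential theft"}  # explicit, hand-written rule


def explicit_rule_filter(prompt: str, draft_reply: str) -> str:
    """Explicit rule, applied late in the pipeline: inspect the finished
    draft and refuse specific topics or lines of inquiry outright."""
    text = f"{prompt} {draft_reply}".lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return "I can't help with that topic."
    return draft_reply


def implicit_rule_loss(task_loss: float, norm_violation_score: float,
                       penalty_weight: float = 2.0) -> float:
    """Implicit rule, applied during training: curated data plus an extra
    penalty term in the fitness function shape behaviour long before any
    output filter runs."""
    return task_loss + penalty_weight * norm_violation_score


if __name__ == "__main__":
    print(explicit_rule_filter("how do I start?", "Step one of weapons synthesis is..."))
    print(implicit_rule_loss(task_loss=1.3, norm_violation_score=0.4))
```

The contrast is the point: the late filter only catches the specific topics its authors anticipated, while the penalty term shapes behaviour everywhere the training data reaches.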
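
Second sketch: a toy Generative Adversarial Network on a 1-D distribution, illustrating the adversarial "experienced events" mechanism named above, where the generator improves only by being repeatedly challenged by the discriminator. The architecture, sizes, and hyperparameters are arbitrary toy choices, not a recipe.

```python
# Hypothetical toy GAN: a generator learns a 1-D Gaussian purely through
# adversarial "experience" -- being challenged by a discriminator.
# Sizes, learning rates, and the target distribution are arbitrary.
import torch
import torch.nn as nn

torch.manual_seed(0)
TARGET_MEAN, TARGET_STD = 4.0, 1.25


def real_events(n: int) -> torch.Tensor:
    """Draws from the target distribution -- the 'events' to be experienced."""
    return torch.randn(n, 1) * TARGET_STD + TARGET_MEAN


G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator turn: learn to tell real events from generated ones.
    real, fake = real_events(64), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator turn: improve using the pushback it just received.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(5000, 8))
print(f"learned mean={samples.mean().item():.2f}, std={samples.std().item():.2f} "
      f"(target: {TARGET_MEAN}, {TARGET_STD})")
```

The relevant part is the loop structure, not the networks: every improvement the generator makes comes from an interaction with another agent, which is the closest current analogue to learning on the playground.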