• scruiser@awful.systems · 20 days ago

    If your decision theory can’t address weird, totally-plausible-in-the-near-future hypotheticals with omniscient God-AIs offering you money in boxes if you jump through enough cognitive hoops, what is it really good for?

    • diz@awful.systems · 19 days ago

      Tbh whenever I try to read anything on decision theory (even written by people other than rationalists), I end up wondering how they think a redundant autopilot (with majority voting) could ever work. In an airplane, that is.

      Considering just the physical consequences of a decision doesn’t work: unless there’s a fault, one channel’s output can’t change what comes out of the voting electronics, so the alternative decisions it makes for the no-fault case never make it through (see the sketch at the end of this comment).

      Each one simulating the two or more other autopilots is sci-fi-brained idiocy. Requiring that the autopilots be exact copies is stupid (what if we had two different teams write different implementations? I think Airbus actually sort of did that).

      Nothing is going to be simulating anything, and to make matters even worse for philosophers, amateur and academic alike, the whole reason for redundancy is that sometimes there is a glitch that makes the channels not compute the same values, so any attempt to be clever with “ha, we just treat copies as one thing” doesn’t cut it either.
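
      (A toy sketch of the voting point above, with made-up names and a plain 2-out-of-3 vote, nothing like real avionics code: flipping one channel’s output changes nothing downstream as long as the other two agree.)

```python
# Toy 2-out-of-3 voter: each "autopilot" channel outputs a command,
# and only the majority value ever reaches the actuators.
from collections import Counter

def vote(commands):
    """Return the value that at least two of the three channels agree on."""
    value, count = Counter(commands).most_common(1)[0]
    return value if count >= 2 else None  # no majority -> flag a fault

print(vote([2.0, 2.0, 2.0]))  # no fault: 2.0 goes through
print(vote([5.0, 2.0, 2.0]))  # one channel decides differently: still 2.0
```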

      • aio@awful.systems · 4 days ago

        Related to aviation systems, here’s a writeup about a complicated incident where a discrepancy between two “identical” components was a contributing factor:

        It was possible for one channel to detect that the plane was airborne while the other channel did not […] With one channel in flight mode and the other in ground mode, the SECs believe that there has been a failure of one of the LGCIUs, and they both shut off.
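
        (Not the actual SEC logic, just a toy restatement of the failure mode in that quote, with invented names: two channels are supposed to agree, and the handling for “they disagree” is to assume one of the boxes has failed and drop out.)

```python
# Toy restatement of the quoted behaviour (invented names, not real avionics):
# two redundant air/ground sensing channels feed a consumer that treats any
# disagreement as "one of the sources has failed" and shuts itself off.
def sec_stays_on(lgciu1_airborne: bool, lgciu2_airborne: bool) -> bool:
    if lgciu1_airborne != lgciu2_airborne:
        # One channel in flight mode, the other in ground mode:
        # assume an LGCIU failure and shut off.
        return False
    return True

print(sec_stays_on(True, True))    # channels agree: keep running
print(sec_stays_on(True, False))   # discrepancy: shut off (both SECs, per the quote)
```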

      • scruiser@awful.systems · 18 days ago

        Yeah, even if computers predicting other computers didn’t require overcoming the halting problem (and thus contradict the foundations of computer science), actually implementing such a thing reliably with computers smart enough to qualify as AGI seems absurdly impossible.
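
        (The standard diagonalization sketch of why that can’t work in general, with made-up names: hand any would-be predictor a program that asks the predictor about itself and then does the opposite.)

```python
# No predictor can be right about a program that consults the predictor
# about itself and then does the opposite of whatever was predicted.
def make_contrarian(predict):
    """predict(program) is supposed to return what program() will return."""
    def contrarian():
        return not predict(contrarian)  # do the opposite of the prediction
    return contrarian

def naive_predictor(program):
    return True  # claims every program returns True

c = make_contrarian(naive_predictor)
print(naive_predictor(c), c())  # True False -- the prediction was wrong
```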

        • diz@awful.systems · 18 days ago

          To be entirely honest, I don’t even like the arguments against EDT (evidential decision theory).

          Smoking lesion is hilarious. So there’s a lesion that is making people smoke. It is also giving them cancer in some unrelated way which we don’t know, trust me bro. Please bro, don’t leave this decision to the lesion, you gotta decide to smoke, it would be irrational to decide not to smoke if the lesion’s gonna make you smoke. Correlation is not causation, gotta smoke, bro.

          Obviously in that dumb-ass hypothetical, the conditional probability is conditional on the decision, not on the lesion, while the smoking in the cancer cases is conditional on the lesion, not on the decision. If those two really were indistinguishable, then the right decision would be not to smoke. And more generally, adopting causal models without statistical data to back them up is called “being gullible”.
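
          (Toy numbers, all made up, just to pin down that distinction: in a population where the lesion drives both smoking and cancer, smoking “predicts” cancer, but conditioning on a decision made independently of the lesion shows no difference.)

```python
import random
random.seed(0)

population = []
for _ in range(100_000):
    lesion = random.random() < 0.2
    # Non-deliberators smoke iff the lesion makes them;
    # deliberators decide by coin flip, independently of the lesion.
    deliberator = random.random() < 0.5
    smokes = (random.random() < 0.5) if deliberator else lesion
    cancer = random.random() < (0.3 if lesion else 0.02)
    population.append((deliberator, smokes, cancer))

def p_cancer(group):
    group = list(group)
    return sum(c for _, _, c in group) / len(group)

# Whole population: the lesion drives both, so smoking "predicts" cancer.
print(p_cancer(p for p in population if p[1]))        # noticeably higher (~0.14)
print(p_cancer(p for p in population if not p[1]))    # lower (~0.04)

# Condition on the decision (deliberators only): the difference vanishes.
print(p_cancer(p for p in population if p[0] and p[1]))       # ~0.08
print(p_cancer(p for p in population if p[0] and not p[1]))   # ~0.08, same either way
```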

          The tobacco companies actually did manufacture the data, too, that’s where “type-A personality” comes from.