‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says::Pressure grows on artificial intelligence firms over the content used to train their products

  • dhork@lemmy.world
    1 year ago

    In Wikipedia’s case, the text is (well, at least so far) written by actual humans. And whatever you think about the ethics of Wikipedia editors, they are humans too. Human oversight is required for Wikipedia to function properly. If Wikipedia moved to a model where an AI crawls the web for knowledge and writes articles based on it with limited human involvement, then it would be similar. But that’s not what they are doing.

    The Internet Archive is on somewhat less steady legal ground (see the recent legal actions), but in its favor, it is only storing information for archival and lending purposes, not using that information to generate derivative works which it then sells. (And it is the lending that is getting it into trouble right now, not the archiving.)

    • phillaholic@lemm.ee
      11 months ago

      The Internet Archive has no ground to stand on at all. It would be one thing if they only allowed downloading of orphaned or unavailable works, but that’s not the case.

    • randon31415@lemmy.world
      11 months ago

      Wikipedia has had bots writing articles since the 2000 census information was first published. That census article-writing bot was actually the impetus for Wikipedia to create the WP:bot policies.