• ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 5 months ago

Initial training of these models is expensive, but once a model is trained it can be run on a laptop. The cost of initial training can also be addressed by doing it in a distributed fashion. There are also open source projects, such as Petals, that let you run distributed models BitTorrent-style. Other approaches like LoRA allow taking existing models and tuning them for a particular task without training from scratch (see the sketch below). There’s a pretty good article from Steve Yegge on the recent advances in open source models.
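To make the LoRA point concrete, here’s a minimal sketch using the Hugging Face peft library; the base model name, rank, and other hyperparameters are placeholder assumptions, not a recipe:

```python
# Minimal LoRA fine-tuning setup (sketch). Only small low-rank adapter
# matrices are trained; the base model's weights stay frozen, so the
# trainable parameter count drops by orders of magnitude.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,              # adapter rank (illustrative)
    lora_alpha=16,    # scaling factor (illustrative)
    lora_dropout=0.05,
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

The resulting adapter weights are only a few megabytes, which is a big part of why fine-tuned variants of open models spread so quickly.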

I do agree that avoiding regulation and scrutiny are most definitely additional goals these companies have. They want to keep this tech opaque and frame themselves as responsible guardians of a technology that shouldn’t fall into the hands of the unwashed masses, who can’t be trusted with it.