Raccoonn@lemmy.ml to Memes@lemmy.ml · 1 year ago
AI will never be able to write like me.
125 comments · cross-posted to: memes@lemmy.world
saigot@lemmy.ca · 1 year ago
If it were done with enough regularity to be a problem, one could just put an LLM like this in between to preprocess the data.
Azzu@lemm.ee · 1 year ago
That doesn’t work: you can’t train models on another model’s output without degrading the quality. At least not currently.
Vashtea@sh.itjust.works · 1 year ago
I don’t think he was suggesting training on another model’s output, just using AI to filter the training data before it is used.
FooBarrington@lemmy.world · 1 year ago
No, that’s not true. All current models use output from previous models as part of their training data. You can’t rely on it exclusively, but that’s not strictly necessary.
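The "LLM in between" idea from this thread can be sketched as a simple quality gate over candidate training documents: score each one, keep only those above a threshold, and never feed the filtered-out text to training. This is a minimal illustration, not anyone's actual pipeline; `quality_score` is a hypothetical stand-in for a real LLM-based classifier (here it just penalizes very short or highly repetitive text).

```python
def quality_score(text: str) -> float:
    """Hypothetical quality model; a real pipeline would call an LLM here.
    This stub penalizes very short or highly repetitive text."""
    words = text.split()
    if len(words) < 3:
        return 0.0
    # Ratio of unique words: repetitive text scores low.
    return len(set(words)) / len(words)

def filter_training_data(docs, threshold=0.5):
    """Keep only documents the quality model scores above the threshold."""
    return [d for d in docs if quality_score(d) >= threshold]

corpus = [
    "spam spam spam spam spam",                          # repetitive
    "a varied, human-written sentence about raccoons",   # passes
    "ok",                                                # too short
]
print(filter_training_data(corpus))
# → ['a varied, human-written sentence about raccoons']
```

The point of contention in the thread is what the filter costs: this approach only decides which existing documents to train on, which is different from generating new training text with a model.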