• 0 Posts
  • 14 Comments
Joined 11 months ago
Cake day: August 7th, 2023

  • BetaDoggo_@lemmy.world to Memes@lemmy.ml · It's pronounced AYE...
    10 months ago

    The issue is the marketing. If they only marketed language models for the things they can actually be trusted with: summarization, cleaning up text, writing assistance, entertainment, and so on, there wouldn’t be nearly as much debate.

    The creators of the image generation models have done a much better job of this, partly because the limitations are visible at a glance rather than requiring a fact check on every generation. They also aren’t claiming that they’re going to revolutionize all of society, which helps.


  • LLMs only predict the next token. Sometimes those predictions are correct, sometimes they’re incorrect. Larger models trained on more examples make better predictions, but they are still just predictions. This is why incorrect responses often sound plausible even when they don’t make logical sense.

    Fixing hallucinations is therefore about driving down the rate of inaccurate predictions, not patching a discrete bug in the model itself.
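
    To make the "only predicting the next token" point concrete, here's a deliberately tiny sketch: a bigram model that always picks the most frequent continuation. The vocabulary and counts are invented for illustration; real LLMs use neural networks over huge vocabularies, but the output is still just a most-likely-next-token guess, with no notion of truth.

    ```python
    from collections import Counter

    # Hypothetical bigram counts from a tiny pretend corpus.
    bigrams = {
        "the": Counter({"cat": 3, "dog": 2}),
        "cat": Counter({"sat": 4, "ran": 1}),
        "sat": Counter({"down": 5}),
    }

    def predict_next(token: str) -> str:
        """Return the most frequent next token — a prediction, not a fact."""
        counts = bigrams.get(token)
        if counts is None:
            return "<unk>"  # model has never seen this context
        return counts.most_common(1)[0][0]

    # Generate a short continuation greedily, one token at a time.
    sequence = ["the"]
    for _ in range(3):
        sequence.append(predict_next(sequence[-1]))
    print(" ".join(sequence))  # the cat sat down
    ```

    The model outputs "the cat sat down" because those were the most common continuations in its training data, not because it knows anything about cats. A "hallucination" is just this same mechanism producing a fluent continuation that happens to be false.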



  • DuckDuckGo doesn’t have anywhere near the data-collection capacity that Google does, and their ads are keyword-based rather than influenced by other user data. Their search engine is really the only thing I’d recommend using, however, since their browser extension and browser don’t offer anything that others don’t.