There’s an extraordinary amount of hype around “AI” right now, perhaps even greater than in past cycles, where we’ve seen an AI bubble about once per decade. This time, the focus is on generative systems, particularly LLMs and other tools designed to generate plausible outputs: responses that either make people feel like they’re correct, or that are good enough for domains where correctness doesn’t matter.

But we can tell the traditional tech industry (the handful of giant tech companies, along with startups backed by the handful of most powerful venture capital firms) is in the midst of building another “Web3”-style froth bubble because they’ve again abandoned one of the core values of actual technology-based advancement: reason.

  • Sonori@beehaw.org · 4 months ago

    OpenAI’s algorithm, like all LLMs, is designed to give you the next most likely word in a sentence based on what most frequently came next in its training data. Their main strategy has actually been to stick with an older, simpler transformer algorithm, and to just vastly increase the amount of scraped text and the recency bias with each new release.

    I would argue that any system that works by stringing pseudorandom words together based on how often they appear in its input sources is not going to be able to do anything but generate bullshit, albeit bullshit that may happen to be correct by pure accident when it’s near-verbatim quoting said input sources.
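
The frequency-driven next-word idea the comment describes can be sketched as a toy bigram model. This is not OpenAI's actual architecture (real LLMs use transformers over subword tokens, not word-frequency tables); the corpus and function names here are invented for illustration:

```python
import random
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often each other word follows it."""
    words = text.split()
    followers = defaultdict(Counter)
    for w1, w2 in zip(words, words[1:]):
        followers[w1][w2] += 1
    return followers

def next_word(followers, word):
    """Pseudorandomly pick the next word, weighted by how often
    it followed `word` in the training text."""
    candidates = followers[word]
    choices, weights = zip(*candidates.items())
    return random.choices(choices, weights=weights)[0]

# Tiny made-up corpus for demonstration.
corpus = "the cat sat on the mat and the cat ran to the mat"
model = train_bigrams(corpus)
# After "the", the model can only emit "cat" or "mat" -- whatever
# followed "the" in its input -- which is why output that looks
# correct is often just a near-verbatim echo of the source text.
```

Scaled up by many orders of magnitude and swapped onto a transformer, this is still the basic objective: predict the most plausible continuation, not the true one.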