There’s an extraordinary amount of hype around “AI” right now, perhaps even greater than in past cycles; we’ve seen an AI bubble roughly once per decade. This time, the focus is on generative systems, particularly LLMs and other tools designed to produce plausible outputs: responses that either feel correct to the reader, or that are good enough for domains where correctness doesn’t matter.
But we can tell the traditional tech industry (the handful of giant tech companies, along with startups backed by the handful of most powerful venture capital firms) is in the midst of inflating another “Web3”-style froth bubble, because it has once again abandoned one of the core values of actual technology-based advancement: reason.
The fact that you think “AGI” is a new term, or that you think the “G” stands for “Generative”, shows how much you know about the field. Maybe you should go read up on literally any of it before you come at me with this attitude and your “due to this fact” pseudo-intellectual bullshit.
The “G” stands for “General”, friend. It distinguishes an Artificial Intelligence that is narrow in the scope of its knowledge from intelligences like us, which can adapt to new tasks and improve themselves. We do not have Artificial General Intelligence yet, but the ones we have are getting there, and faster than you could possibly imagine.
Tell me, oh Doctor of Neuropsychology and Computer Science: how do people learn? How do people generate new information?
Actually, no, fuck that. I have a better question: define “intelligence”. Let’s hear it; I’ve wanted to act the Picard in a Data trial since I was a kid, since around the time the term “Artificial General Intelligence” was coined, in fact: nineteen ninety fucking seven.
You’ve shown your IQ right there. No time to waste with you. Goodbye.