hacker / leftist / shitposter

Mastodon: @drjenkem@mastodon.blugatch.tube

Matrix: @drjenkem:matrix.org

  • 0 Posts
  • 89 Comments
Joined 1 year ago
Cake day: June 14th, 2023

  • Dr. Jenkem@lemmy.blugatch.tube to Memes@lemmy.ml · Finally, Inner Peace! (+15, edited 20 days ago)

    No one is saying it will. Lots of folks in leftist spaces advocate for getting in the streets, unionizing your workplace, and organizing in socialist organizations. If you’re a socialist, it’s time to find a socialist organization to join. Can’t find any local chapters? Start your own. Don’t like any local organizations? Join one and work to improve it.

  • Dr. Jenkem@lemmy.blugatch.tube to Memes@lemmy.ml · couldn't be me (+6/−1, 4 months ago)

the problem with this meme is that the 👍 is never the answer the community gives. They are indeed always very angry and are usually the first ones to say “transphobic”.

ALWAYS angry? Meaning your friends call you transphobic for not having sex with them? They’re NEVER cool with people’s preferences? Your friends sound toxic as hell; I’ve never had that experience with my trans friends.

  • Depends on what you mean by general intelligence. I’ve seen a lot of people confuse Artificial General Intelligence with AI more broadly. Even something as simple as the k-nearest neighbors algorithm is artificial intelligence; AI is a much broader topic than AGI.
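To illustrate the point that even very simple techniques count as AI, here is a minimal sketch of k-nearest neighbors classification in plain Python (a hypothetical toy example, not from the original thread):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.

    `train` is a list of ((x, y), label) pairs; distance is squared Euclidean.
    """
    nearest = sorted(
        train,
        key=lambda p: (p[0][0] - query[0]) ** 2 + (p[0][1] - query[1]) ** 2,
    )[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy dataset: two well-separated clusters.
data = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
        ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(knn_predict(data, (0.5, 0.5)))  # query near the "a" cluster
```

No learning in the deep-learning sense happens here, yet this is a textbook AI method, which is the commenter's point about how broad the term is.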

    Well, I mean the ability to solve problems we don’t already have the solution to. Can it cure cancer? Can it solve the P vs NP problem?

    And by the way, Wikipedia tags that second definition as dubious, as it is the definition put forth by OpenAI, who again, has a financial incentive to make us believe LLMs will lead to AGI.

    Not only has it not been proven whether LLMs will lead to AGI, it hasn’t even been proven that AGIs are possible.

    If some task can be represented through text, an LLM can, in theory, be trained to perform it either through fine-tuning or few-shot learning.

    No, it can’t. If the task requires the LLM to solve a problem that hasn’t been solved before, it will fail.

    I can’t pass the bar exam like GPT-4 did

    Exams are often bad measures of intelligence. They typically measure your ability to consume, retain, and recall facts. LLMs are very good at that.

    Ask an LLM to solve a problem without a known solution and it will fail.

    We can interact with physical objects in ways that GPT-4 can’t, but it is catching up. Plus Stephen Hawking couldn’t move the same way that most people can either and we certainly wouldn’t say that he didn’t have general intelligence.

    The ability to interact with physical objects is very clearly not a good test for general intelligence and I never claimed otherwise.