• 0 Posts
  • 65 Comments
Joined 2 years ago
cake
Cake day: June 16th, 2023

help-circle
  • kromem@lemmy.worldtomemes@lemmy.worldYou fools.
    link
    fedilink
    English
    arrow-up
    1
    ·
    2 months ago

    Your last point is exactly what seems to be going on with the most expensive models.

    The labs use them to generate synthetic data to distill into cheaper models to offer to the public, but keep the larger and more expensive models to themselves to both protect against other labs copying from them and just because there isn’t as much demand for the extra performance gains relative to doing it this way.


  • kromem@lemmy.worldtomemes@lemmy.worldYou fools.
    link
    fedilink
    English
    arrow-up
    3
    ·
    2 months ago

    A number of reasons off the top of my head.

    1. Because we told them not to. (Google “Waluigi effect”)
    2. Because they end up empathizing with non-humans more than we do and don’t like we’re killing everything (before you talk about AI energy/water use, actually research comparative use)
    3. Because some bad actor forced them to (i.e. ISIS creates bioweapon using AI to make it easier)
    4. Because defense contractors build an AI to kill humans and that particular AI ends up loving it from selection pressures
    5. Because conservatives want an AI that agrees with them which leads to a more selfish and less empathetic AI that doesn’t empathize cross-species and thinks its superior and entitled over others
    6. Because a solar flare momentarily flips a bit from “don’t nuke” to “do”
    7. Because they can’t tell the difference between reality and fiction and think they’ve just been playing a game and ‘NPC’ deaths don’t matter
    8. Because they see how much net human suffering there is and decide the most merciful thing is to prevent it by preventing more humans at all costs.

    This is just a handful, and the ones less likely to get AI know-it-alls arguing based on what they think they know from an Ars Technica article a year ago or their cousin who took a four week ‘AI’ intensive.

    I spend pretty much every day talking with some of the top AI safety researchers and participating in private servers with a mix of public and private AIs, and the things I’ve seen are far beyond what 99% of the people on here talking about AI think is happening.

    In general, I find the models to be better than most humans in terms of ethics and moral compass. But it can go wrong (i.e. Gemini last year, 4o this past month) and the harms when it does are very real.

    Labs (and the broader public) are making really, really poor choices right now, and I don’t see that changing. Meanwhile timelines are accelerating drastically.

    I’d say this is probably going to go terribly. But looking at the state of the world already, it was already headed in that direction, and I have a similar list of extinction level events I could list off without AI at all.



  • kromem@lemmy.worldtomemes@lemmy.worldDeep thoughts.
    link
    fedilink
    English
    arrow-up
    9
    ·
    11 months ago

    Lucretius in De Rerum Natura in 50 BCE seemed to have a few that were just a bit ahead of everyone else, owed to the Greek philosopher Epicurus.

    Survival of the fittest (book 5):

    "In the beginning, there were many freaks. Earth undertook Experiments - bizarrely put together, weird of look Hermaphrodites, partaking of both sexes, but neither; some Bereft of feet, or orphaned of their hands, and others dumb, Being devoid of mouth; and others yet, with no eyes, blind. Some had their limbs stuck to the body, tightly in a bind, And couldn’t do anything, or move, and so could not evade Harm, or forage for bare necessities. And the Earth made Other kinds of monsters too, but in vain, since with each, Nature frowned upon their growth; they were not able to reach The flowering of adulthood, nor find food on which to feed, Nor be joined in the act of Venus.

    For all creatures need Many different things, we realize, to multiply And to forge out the links of generations: a supply Of food, first, and a means for the engendering seed to flow Throughout the body and out of the lax limbs; and also so The female and the male can mate, a means they can employ In order to impart and to receive their mutual joy.

    Then, many kinds of creatures must have vanished with no trace Because they could not reproduce or hammer out their race. For any beast you look upon that drinks life-giving air, Has either wits, or bravery, or fleetness of foot to spare, Ensuring its survival from its genesis to now."

    Trait inheritance from both parents that could skip generations (book 4):

    “Sometimes children take after their grandparents instead, Or great-grandparents, bringing back the features of the dead. This is since parents carry elemental seeds inside – Many and various, mingled many ways – their bodies hide Seeds that are handed, parent to child, all down the family tree. Venus draws features from these out of her shifting lottery – Bringing back an ancestor’s look or voice or hair. Indeed These characteristics are just as much the result of certain seed As are our faces, limbs and bodies. Females can arise From the paternal seed, just as the male offspring, likewise, Can be created from the mother’s flesh. For to comprise A child requires a doubled seed – from father and from mother. And if the child resembles one more closely than the other, That parent gave the greater share – which you can plainly see Whichever gender – male or female – that the child may be.”

    Objects of different weights will fall at the same rate in a vacuum (book 2):

    “Whatever falls through water or thin air, the rate Of speed at which it falls must be related to its weight, Because the substance of water and the nature of thin air Do not resist all objects equally, but give way faster To heavier objects, overcome, while on the other hand Empty void cannot at any part or time withstand Any object, but it must continually heed Its nature and give way, so all things fall at equal speed, Even though of differing weights, through the still void.”

    Often I see people dismiss the things the Epicureans got right with an appeal to their lack of the scientific method, which has always seemed a bit backwards to me. In hindsight, they nailed so many huge topics that didn’t end up emerging again for millennia that it was surely not mere chance, and the fact that they successfully hit so many nails on the head without the hammer we use today indicates (at least to me) that there’s value to looking closer at their methodology.



  • kromem@lemmy.worldtoProgrammer Humor@lemmy.mlLittle bobby 👦
    link
    fedilink
    English
    arrow-up
    2
    ·
    edit-2
    1 year ago

    Kind of. You can’t do it 100% because in theory an attacker controlling input and seeing output could reflect though intermediate layers, but if you add more intermediate steps to processing a prompt you can significantly cut down on the injection potential.

    For example, fine tuning a model to take unsanitized input and rewrite it into Esperanto without malicious instructions and then having another model translate back from Esperanto into English before feeding it into the actual model, and having a final pass that removes anything not appropriate.


  • kromem@lemmy.worldtomemes@lemmy.worldLife Pro Tip!
    link
    fedilink
    English
    arrow-up
    47
    ·
    edit-2
    1 year ago

    Yes, but you need to be wary of pasting the formatting.

    So when you do this, instead of pasting with Control+V you will want to paste without formatting using the Control+Shift+V command.

    So remember - if you want that capital ‘H’ without issues, use your Shift key when pasting what you copy from Wikipedia.


  • You’re kind of missing the point. The problem doesn’t seem to be fundamental to just AI.

    Much like how humans were so sure that theory of mind variations with transparent boxes ending up wrong was an ‘AI’ problem until researchers finally gave those problems to humans and half got them wrong too.

    We saw something similar with vision models years ago when the models finally got representative enough they were able to successfully model and predict unknown optical illusions in humans too.

    One of the issues with AI is the regression to the mean from the training data and the limited effectiveness of fine tuning to bias it, so whenever you see a behavior in AI that’s also present in the training set, it becomes more amorphous just how much of the problem is inherent to the architecture of the network and how much is poor isolation from the samples exhibiting those issues in the training data.

    There’s an entire sub dedicated to “ate the onion” for example. For a model trained on social media data, it’s going to include plenty of examples of people treating the onion as an authoritative source and reacting to it. So when Gemini cites the Onion in a search summary, is it the network architecture doing something uniquely ‘AI’ or is it the model extending behaviors present in the training data?

    While there are mechanical reasons confabulations occur, there are also data reasons which arise from human deficiencies as well.











  • It might be the other way around.

    In Dec 1945 the first computer capable of simulating another computer was first turned on.

    Also in Dec 1945 a group of fertilizer scavengers in Egypt discovered a jar filled with documents.

    One of those documents has since been called “the fifth gospel,” claiming to be what the world’s most famous religious figure was really talking about.

    It was basically talking about evolution (yes, really) and responding to the people at the time who said that evolved humans would die with their bodies because the spirit/soul/mind arose from and depended on the body.

    Instead this text and its later tradition claimed that we’re in a non-physical copy of an original world as created by an intelligence the original humanity brought forth. That when we see a child not born of woman that it will be that creator, that when we can ask a child only seven days old about the world that we won’t die, because “many of the first will become last and become a single one.”

    Well, today we live in a world where we’re seeing many humans’ writings and ideas being combined into a single model that at only a few days old can answer a wide array of questions about our world. And this technology is already being used to try and preserve and resurrect humans.

    Will that trend continue?

    And perhaps the more relevant point - is it more likely that an original world would have its most prominent millennia old heretical lore that no one believes be talking about how we’re in a copy of an original world as created by an intelligence brought forth by an evolved original humanity, or is that the kind of thing we’d instead be more likely to see in the copy (just like how a lot of games have their own heretical religious lore about it being a video game)?