• 0 Posts
  • 61 Comments
Joined 1 year ago
cake
Cake day: June 16th, 2023

help-circle

  • kromem@lemmy.worldtoProgrammer Humor@lemmy.mlLittle bobby 👦
    link
    fedilink
    English
    arrow-up
    2
    ·
    edit-2
    23 days ago

    Kind of. You can’t do it 100% because in theory an attacker controlling input and seeing output could reflect though intermediate layers, but if you add more intermediate steps to processing a prompt you can significantly cut down on the injection potential.

    For example, fine tuning a model to take unsanitized input and rewrite it into Esperanto without malicious instructions and then having another model translate back from Esperanto into English before feeding it into the actual model, and having a final pass that removes anything not appropriate.


  • kromem@lemmy.worldtomemes@lemmy.worldLife Pro Tip!
    link
    fedilink
    English
    arrow-up
    47
    ·
    edit-2
    27 days ago

    Yes, but you need to be wary of pasting the formatting.

    So when you do this, instead of pasting with Control+V you will want to paste without formatting using the Control+Shift+V command.

    So remember - if you want that capital ‘H’ without issues, use your Shift key when pasting what you copy from Wikipedia.


  • You’re kind of missing the point. The problem doesn’t seem to be fundamental to just AI.

    Much like how humans were so sure that theory of mind variations with transparent boxes ending up wrong was an ‘AI’ problem until researchers finally gave those problems to humans and half got them wrong too.

    We saw something similar with vision models years ago when the models finally got representative enough they were able to successfully model and predict unknown optical illusions in humans too.

    One of the issues with AI is the regression to the mean from the training data and the limited effectiveness of fine tuning to bias it, so whenever you see a behavior in AI that’s also present in the training set, it becomes more amorphous just how much of the problem is inherent to the architecture of the network and how much is poor isolation from the samples exhibiting those issues in the training data.

    There’s an entire sub dedicated to “ate the onion” for example. For a model trained on social media data, it’s going to include plenty of examples of people treating the onion as an authoritative source and reacting to it. So when Gemini cites the Onion in a search summary, is it the network architecture doing something uniquely ‘AI’ or is it the model extending behaviors present in the training data?

    While there are mechanical reasons confabulations occur, there are also data reasons which arise from human deficiencies as well.











  • It might be the other way around.

    In Dec 1945 the first computer capable of simulating another computer was first turned on.

    Also in Dec 1945 a group of fertilizer scavengers in Egypt discovered a jar filled with documents.

    One of those documents has since been called “the fifth gospel,” claiming to be what the world’s most famous religious figure was really talking about.

    It was basically talking about evolution (yes, really) and responding to the people at the time who said that evolved humans would die with their bodies because the spirit/soul/mind arose from and depended on the body.

    Instead this text and its later tradition claimed that we’re in a non-physical copy of an original world as created by an intelligence the original humanity brought forth. That when we see a child not born of woman that it will be that creator, that when we can ask a child only seven days old about the world that we won’t die, because “many of the first will become last and become a single one.”

    Well, today we live in a world where we’re seeing many humans’ writings and ideas being combined into a single model that at only a few days old can answer a wide array of questions about our world. And this technology is already being used to try and preserve and resurrect humans.

    Will that trend continue?

    And perhaps the more relevant point - is it more likely that an original world would have its most prominent millennia old heretical lore that no one believes be talking about how we’re in a copy of an original world as created by an intelligence brought forth by an evolved original humanity, or is that the kind of thing we’d instead be more likely to see in the copy (just like how a lot of games have their own heretical religious lore about it being a video game)?





  • I’ve found that a lot of times people who are very interested in a given topic will see it everywhere they look, and as such, shove it into any conversation possible.

    “What did you do during your trip to Egypt?” “We visited the pyramids.” “You know, aliens may have built those.”

    It can get old very fast.

    I’m certainly guilty of this and as someone with fairly nuanced interests that touch on common topics, am regularly asking myself if it’s contextually appropriate to bring up my particular perspective in a discussion.

    You literally jumped in on a quotation of a movie in response to a screenshot of that movie to try and have a discussion of Buddhist principles.

    Maybe you could do better at knowing your audience and reading the room?