• 0 Posts
  • 8 Comments
Joined 9 months ago
Cake day: September 27th, 2023

  • Mirodir@discuss.tchncs.de to Lemmy Shitpost@lemmy.world · Automation
    8 days ago

    So is the example with the dogs/wolves and the example in the OP.

    As for how hard they are to resolve: the dogs/wolves one might be quite difficult, but for the example in the OP it wouldn’t be hard to feed in all images (during training) with randomly chosen backgrounds, removing the model’s ability to draw any conclusions from the background.

    However, this would probably unearth the next issue: the human graders, who were presumably used to create the original training dataset, have their own biases based on race, gender, appearance, etc. This doesn’t necessarily mean they were racist/sexist/etc., just that they may struggle to detect certain emotions in certain groups of people. The model would then replicate those issues.
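    The background-randomization idea above can be sketched as a simple augmentation step. This is a minimal illustration, not any real pipeline: it assumes images are NumPy arrays with a boolean foreground mask, and it swaps everything outside the mask for a random solid color so the background carries no signal.

```python
import numpy as np

def randomize_background(image, mask, rng):
    """Composite the masked foreground onto a random solid-color background.

    image: (H, W, 3) float array in [0, 1]
    mask:  (H, W) bool array, True where the subject (foreground) is
    """
    background = rng.uniform(0.0, 1.0, size=3)          # random RGB color
    # Keep subject pixels, replace everything else with the random color.
    return np.where(mask[..., None], image, background)

rng = np.random.default_rng(0)
img = np.full((4, 4, 3), 0.5)            # dummy "photo"
fg = np.zeros((4, 4), dtype=bool)
fg[1:3, 1:3] = True                      # pretend the subject occupies the center
augmented = randomize_background(img, fg, rng)
```

    In a real training loop you would apply this per sample per epoch (or paste the foreground onto random real photos), so the model sees each subject against many different backgrounds.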


  • Eh, nothing I did was “figuring out which loophole [they] use”. I’d think most people in this thread talking about the mathematics that could make it a true statement are fully aware that the companies are not using any loophole and just say “above average” to save face. It’s simply a nice brain teaser to some people (myself included) to figure out under which circumstances the statement could be always true.

    Also, if you wanna be really pedantic, the math is not about the companies, but a debunking of the original Tweet, which confidently yet incorrectly says that this statement couldn’t always be true.


  • It’s even simpler. A strictly increasing sequence will always have element n higher than the average of elements m through n, for any m < n.

    Or in other words: if the number of calls increases every day, it will always be above average no matter the window used. With slightly larger windows you can even have some local decreases and still have it be true, as long as the overall trend is increasing (you’ve demonstrated the extreme case of that).
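    The claim is easy to check numerically. A small sketch (the call counts are made up): the latest element of a strictly increasing sequence beats the mean of any trailing window, and with a wider window even a local dip can leave the latest value above average.

```python
def above_trailing_average(xs, window):
    """True if the last element exceeds the mean of the last `window` elements."""
    tail = xs[-window:]
    return xs[-1] > sum(tail) / len(tail)

calls = [3, 5, 8, 13, 21, 34]   # strictly increasing daily call counts
# Holds for every window size >= 2, since all earlier elements are smaller.
assert all(above_trailing_average(calls, w) for w in range(2, len(calls) + 1))

# With a large enough window, a local decrease doesn't break it:
dipped = [3, 5, 4, 8, 13]       # 5 -> 4 is a local dip
assert above_trailing_average(dipped, 5)   # 13 > (3+5+4+8+13)/5 = 6.6
```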


  • so the names of the ai characters HAVE to be stored in game…

    Some games also generate names on the fly based on rules. For example, KSP stitches names together from a prefix and a suffix, then rejects a few unfortunate possible combinations such as Dildo, prompting a reroll.

    I suspect that with your game, they just fed it a dictionary of common words without properly vetting it.
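    The stitch-check-reroll pattern described above looks roughly like this. To be clear, the prefix/suffix lists and the blocklist here are invented for illustration, not KSP’s actual data.

```python
import random

# Illustrative only: these lists are made up, not KSP's real name parts.
PREFIXES = ["Jeb", "Bil", "Bob", "Dil", "Mal"]
SUFFIXES = ["do", "ly", "bert", "fred", "rie"]
BLOCKED = {"Dildo"}  # unfortunate combinations that force a reroll

def generate_name(rng):
    """Stitch a prefix and suffix together, rerolling blocked results."""
    while True:
        name = rng.choice(PREFIXES) + rng.choice(SUFFIXES)
        if name not in BLOCKED:
            return name

rng = random.Random(42)
names = [generate_name(rng) for _ in range(5)]
```

    A small blocklist plus a reroll loop is much cheaper than hand-vetting every combination, which is presumably why the dictionary-of-common-words approach goes wrong without that final check.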





  • This exact image (without the caption-header, of course) was on one of the slides for a machine-learning related course at my college, so it’s definitely out there somewhere and was likely part of the training sets used by OpenAI. Also, the image in those slides has a different watermark at the bottom left, so it’s fair to assume it’s made its rounds.

    Contrary to this post, it was used as an example of a problem that machine learning can solve far better than any algorithm humans would come up with.