• 0 Posts
  • 102 Comments
Joined 1 year ago
Cake day: June 30th, 2023

  • 'Were probably'? That's a giant understatement and you know it.

    AI will save billions of lives and improve living standards for everyone on the planet. It'll be just like mobile phones, where the biggest benefits came to the poorest communities - tech haters often ignore this reality: millions of children in Africa, Asia, etc. were only able to get access to education through mobile infrastructure.

    The internet has given everyone access to huge amounts of educational resources, and that's only increased as the technology matures. Current LLMs are amazing for language learners and for people who need things like English articles explained in their own language. I just asked ChatGPT to explain the code I'm working on in Tagalog and it did it without hesitation (I can't speak for the accuracy personally, but it looks legit) - it even translated variable names but not function calls.

    And this is before we've scratched the surface of its utility. I'll tell you one thing: if you ever say to your grandkids 'oh, I was against AI when it came out', they'll look at you like you'd look at someone who said they didn't think maths would catch on, or that iron would never be as popular as bronze.


  • Yeah, the amount of good AI can do for the world is staggering; even just giving a speed boost and quality improvement to open source devs will unlock a lot of new potential.

    The problem is that people in a certain age bracket often fear change, because they feel they've put effort into learning how things work, and if things change then all that effort will be worthless.

    It doesn't really matter though. Gangs of idiots literally smashed the prototype looms when they were demonstrated: despite the cost of cloth being one of the major factors in poverty at the time, a handful of people took it on themselves to fight to maintain the status quo. Of course we know how it turned out, the same way it always does…

    Areas that resisted technological and social growth stagnated and got displaced by those which welcomed it.


  • Do you get the super creepy 'Eastern European girls are popular because of their traditional values…' ones, and the ones that go 'Chinese women want to come and study here, but need loving middle-aged men to have sex with them while they're here…'? I'm paraphrasing, of course, but that's basically what they say.

    Makes me laugh that everything is sanitized because advertisers don't want to appear next to anything untoward, and then the adverts are for human traffickers.

    Though honestly, the 'student was expelled for inventing a more efficient heater' advert is possibly worse. Of course he was expelled, he broke the laws of physics - he should have been arrested! Why is such an obvious scam allowed, but a friendly Australian can't call his cobber a cunt?



  • If you ask it to make up nonsense and it does, you can't get angry lol. I normally use it to help analyse code or write sections of code, and sometimes to teach me how certain functions or principles work - it's incredibly good at that. I do need to verify it's doing the right thing, but I do that with my own code too, and I'm not always right either.

    As a research tool it's great at taking a basic, dumb description and pointing me to the right things to look for, especially in areas with a lot of technical terms or obscure concepts.

    And yes, they can occasionally make mistakes or invent things, but if you ask properly and verify what you're told then it's pretty reliable - far more so than a lot of humans I know.


  • Why would I rebut that? I'm simply arguing that they don't need to be 'intelligent' to accurately determine the colour of the sky, and that if you expect an intelligence to know the colour of the sky without ever seeing it, then you're being absurd.

    The way the comment I responded to was written bears no relation to reality, and I addressed that.

    Again, as I said in other comments, you're arguing that an LLM is not Will Smith in I, Robot or Scarlett Johansson playing the role of a USB stick - but that's not what anyone sane is suggesting.

    A fork isn't great for eating soup, and neither is a knife, but that doesn't mean they're not incredibly useful eating utensils.

    Try thinking of an LLM as a type of NLP (natural language processing) tool, which allows computers to use normal human text as input to perform a range of tasks. It's hugely useful and unlocks a vast amount of potential, but it's not going to slap anyone for joking about its wife.


  • People do that too - actually, we do it a lot more than we realise. Studies of memory, for example, have shown that we create details we expect to be there to fill in blanks, and that we convince ourselves we remember them even when presented with evidence that refutes it.

    A lot of the newer implementations use more complex methods of fact verification. It's not easy to explain, but essentially it comes down to the weight you give different layers. GPT-5 is already training and likely to be out around October, but even before that we're seeing pipelines using LLMs to code task-based processes - an LLM is bad at chess, but it could easily install Stockfish in a VM and beat you every time.
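    The 'install Stockfish in a VM' idea is the tool-use pattern: the model routes a task to a specialised engine instead of answering directly. A minimal sketch of that routing, with every function stubbed out (in a real pipeline the engine call would be an actual UCI connection to Stockfish):

    ```python
    # Toy sketch of LLM tool delegation: the "LLM" only decides which
    # tool handles the task; it never plays chess itself. All names
    # here are hypothetical stand-ins.

    def llm_choose_tool(task: str) -> str:
        """Stand-in for an LLM picking the right tool for a task."""
        if "chess" in task.lower():
            return "chess_engine"
        return "direct_answer"

    def chess_engine(task: str) -> str:
        """Stand-in for a real engine (e.g. Stockfish over UCI in a VM)."""
        return f"engine move for: {task}"

    def direct_answer(task: str) -> str:
        """Stand-in for the LLM answering on its own."""
        return f"LLM answer to: {task}"

    TOOLS = {"chess_engine": chess_engine, "direct_answer": direct_answer}

    def pipeline(task: str) -> str:
        return TOOLS[llm_choose_tool(task)](task)

    print(pipeline("play chess from the starting position"))
    ```

    The point of the pattern is that the weak spot (board evaluation) is handed to a tool that's superhuman at it, while the LLM keeps the job it's good at: understanding the request.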


  • That's only true on a very basic level. I understand that Turing's maths is complex and unintuitive, even more so than calculus, but it's a well-established fact that relatively simple mathematical operations can have emergent properties when they interact, producing far more complexity than initially expected.

    It's the same way the giraffe gets its spots, and the same way all the hardware of our brain is built: a strand of code is converted into physical structures that interact and result in more complex behaviours. The actual reality is just maths, and that maths is almost entirely probability when you get down to it. We're all just next-word-guessing machines.

    We don't guess words like a Markov chain; instead we use a rather complex token system in our brain which then gets converted to words. LLMs do this too - that's how they can learn about a subject in one language and then explain it in another.

    Calling an LLM predictive text is a fundamental misunderstanding. It's somewhat true on a technical level, but only once you understand that predicting the next word can be a hugely complex operation - and that it's the fundamental maths behind all human thought as well.

    Plus, they're not really just predicting one word ahead anymore; they do structured generation, much like image generators do. First they get the higher-level principles to a valid state, then propagate down into structure and form before making word and grammar choices. You can manually change values in the different layers and watch the output change; exploring the latent space like this makes it clear that it's not simply guessing the next word, but guessing the next word which will best fit into a required structure to express a desired point. I don't know how other people come up with sentences, but that feels a lot like what I do.
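    For contrast, here's what a literal next-word guesser - a first-order Markov chain - actually looks like (toy example on a tiny made-up corpus); the token-based structured generation described above is a very different beast:

    ```python
    import random
    from collections import defaultdict

    # Toy first-order Markov chain: the next word depends ONLY on the
    # current word's observed successors. This is the naive "predictive
    # text" picture, with no structure, plan, or meaning behind it.
    corpus = "the sky is blue the sky is vast the sea is blue".split()

    successors = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        successors[current].append(nxt)

    def next_word(word: str) -> str:
        """Sample a successor seen in the corpus, or end if none exist."""
        options = successors.get(word)
        return random.choice(options) if options else "<end>"

    print(next_word("sky"))  # only "is" ever follows "sky" in this corpus
    ```

    A chain like this can only parrot bigrams it has seen; it can't carry a point across a sentence, which is exactly the capability gap the comment is pointing at.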



  • I use LLMs to create things no human has likely ever said, and they're great at it. For example:

    'While juggling chainsaws atop a unicycle made of marshmallows, I pondered the existential implications of the colour blue on a pineapple's dream of becoming a unicorn.'

    When I ask it to do the same using neologisms, the output is even better. One of the words was 'exquimodal', which I then asked it to invent an etymology for; it came up with one combining 'exquisitus' and 'modal' to define it as something beyond traditional measures, which fits perfectly into the sentence it created.

    You can't ask a parrot to invent words with meaning and use them in context; that's a step beyond repetition. Of course it's not fully dynamic, self-aware reasoning, but it's certainly not just being a parrot.


  • But also, the number of people who seem to think we need a magic soul to perform useful work is way, way too high.

    The main problem is that idiots seem to have watched one too many movies about robots with souls and gotten confused between real life and fantasy - especially shitty journalists way out of their depth.

    This big gotcha of 'they don't live up to the hype' is 100% from people who heard 'AI' and thought of bad Will Smith movies. LLMs absolutely live up to the sensible things people actually hoped for, and have exceeded those expectations. They're also incredibly good at a huge range of very useful tasks which have traditionally been considered as requiring intelligence. But they're not magically able to do everything - of course they're not; that's not how anyone actually involved said they would work or expected them to.




  • I think a lot of it is because Hollywood became a Henry Ford production line: one part feeds into the next, so they'd have empty studios and idle workers if the next idea isn't ready to go.

    Also, it literally doesn't matter. This Marvel film has literally the same plot and jokes as the last one? That's OK - we cooked up a drama where we pretend villainous gamers are against it to get people talking, we seeded stories about it into the media we own, and we forced our celebrities to pretend to love it…

    They can make the absolute worst shit, and as long as they link it to something vaguely related to some culturally significant thing it'll be huge - even more so if they can link it to a social divide or political division they have no intention of ever actually caring about.

    Childhood toy + social flag = money. It works for comic books ('I had a Flash t-shirt when I was six, I have to like these new films'). It worked with Barbie ('this pro-consumerism corporate tat, heavily criticised by notable feminists, has made a film attempting to shoehorn social progress back into corporate-friendly, sales-generating mush; they say the baddies don't like it, so I have to go see it!'). And it works with endless sequels ('this franchise now makes zero sense, has the most painfully predictable plots, has gone so far off the rails jumping sharks that literally nothing makes sense, and there are zero stakes to any of it, which totally ruins everything that made the first one good…') - and you can't even tell what I'm talking about with that last one, because it's all of them (I was thinking John Wick, btw).

    Make something actually good and no one will care unless the media circus tells them to; that's how you get a flop. Make something even slightly challenging intellectually, or from a certain point of view, and instantly most of your audience is gone or angry. But be like Barbie and put sparkle on social concepts 90% of the world has agreed on for decades, while actively avoiding anything more contentious, and you don't need to worry about alienating the audience or going over their heads.

    And for some reason people just won't stop watching. They won't watch indie stuff made with passion, or small-budget things, no matter how good they are, because they HAVE to see the big releases - like you'll lose touch with society and be unable to make friends if you don't force yourself to endure at least a dozen painfully dull industry movies a year.


  • Really though, writing should be the least important part of a journalist's job. Digging through stories and finding the truth, or understanding the complex strands of a story, should be - and that often involves going back and editing, restructuring, re-editing, reworking and adding to it over and over again.

    It gets really hard to see your writing with fresh eyes once you've got it so perfectly constructed in your head; it's super easy to miss awkward mistakes that have crept in. This is why editors were a thing, but newspapers rarely bother anymore - or the editor is too focused on political and social acceptability to notice grammar or word-choice errors.


  • Because writing doesn't really work like that; the reason we get bland writing is that they keep adding extra chefs.

    They get these professional writers who learned a formula in school and apply it to sections of someone else's work, then wonder why the result is an ugly tapestry of formulaic rubbish.

    All the things people love are written by people with passion for the project. Then they get a budget increase, professional industry writers get brought in, and it's all shitty generic snappy dialogue and dramatic posing that feels uncomfortable and awkward in the scene.



  • Nothing to control the motor, nothing to control the heater, nothing to handle timing or turn the water in and out on and off?

    Even a really shitty one has a door lock sensor, a temperature sensor, a turbidity sensor…

    Which means logic gates, transformers, and components to shift voltages or control power flow.

    That's before you even get into the logic of the control programs, or advanced features like weight-based energy saving.

    A microcontroller connected to a few relays and sensors could replace all the complex stuff, and it'd cost far less; plus it could tell you which sensor is out. It also lets you do otherwise very complex things, like reprogramming the current job while it's running, or syncing with other devices to limit the maximum power load.
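    As a rough illustration of how little logic a basic cycle needs, here's a toy state machine for a wash program. It's plain Python standing in for microcontroller firmware; the step names and relay names are made up, and on real hardware the dict values would become GPIO writes and the door check would come from the lock sensor:

    ```python
    # Toy wash-cycle state machine. Each step names the relays that
    # should be energised; real firmware would also wait on sensor
    # readings (water level, temperature) before advancing.

    CYCLE = [
        ("fill",  {"water_valve": True}),
        ("heat",  {"heater": True}),
        ("wash",  {"motor": True}),
        ("drain", {"drain_pump": True}),
        ("spin",  {"motor": True, "drain_pump": True}),
    ]

    def run_cycle(door_locked: bool) -> list[str]:
        """Run each step in order; refuse to start unless the door is locked."""
        if not door_locked:
            return ["error: door open"]
        log = []
        for step, relays in CYCLE:
            active = [name for name, on in relays.items() if on]
            log.append(f"{step}: {'+'.join(active)} on")
        return log

    for line in run_cycle(door_locked=True):
        print(line)
    ```

    Because the whole program is just a data table, 'reprogramming the current job' is editing a list rather than rewiring a cam timer - which is the point being made above.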



  • Yeah, but washing machines use either a really simple microcontroller or a whole load of really complex voltage-based logic and control-board electronics that even the guy who designed it couldn't fix without a lot of written notes and maths.

    There's more to go wrong on an old washing machine, and each control board was unique to the machine, so tracking down a replacement is hard. A nice simple Raspberry Pi Pico you can flash over WiFi would make it easy to switch out one heater for another without too much thought about impedance, or to upgrade the turbidity sensor without desoldering resistors.

    Plus it gives you infinite control over the program cycles, allowing you to set up the best wash method for your detergent and lifestyle.

    Of course, you can only do that with an open source one. I think it's coming - the year of the open source desktop kitchen work surface is coming soon.