I was watching the RFK Jr. questioning today, and when Bernie was talking about healthcare and wages I felt he was the only one who gave a real damn. I also thought, “Wow, he’s kinda old,” so I asked my phone how old he actually was. Gemini, however, wouldn’t answer a simple, factual question about him. What the hell? (The answer is 83 years old, btw. Good luck, America.)

  • tal@lemmy.today
    10 hours ago

    Yeah, I still struggle to see the appeal of Chatbot LLMs.

    I think that one major application is to avoid having humans on support sites. Some people aren’t willing or able, for whatever reason, to search a site for information, but they can ask questions in plain language. I’ve seen a ton of companies with AI-driven support chatbots.

    There are sexy chatbots. What I’ve seen of them hasn’t really impressed me, but you don’t always need an amazing performance to keep an aroused human happy. I do remember, back when I was much younger, trying to gently tell a friend who had spent multiple days chatting with “the sysadmin’s sister” on a BBS that he’d been talking to a chatbot – and that was a far simpler system than what we have now. There’s probably real demand, though I think this is going to become commodified pretty quickly.

    There’s the “works well with voice control” aspect that I mentioned above. That’s a real thing today, especially when, say, driving a car.

    It’s just not – certainly not in 2025 – a general replacement for Web search for me.

    I can also imagine some ways to improve it down the line. One obvious point you raise is that if a human can judge the reliability of information on a website, then giving that human direct access to the website is useful. I feel like I’ve got pretty good heuristics for that. Not perfect – I certainly can get stuff wrong – but probably better than current LLMs manage.

    But…a number of people must be really appallingly awful at this. People would not be watching conspiracy theory material on wacky websites if they had a great ability to evaluate it. It might be possible to have a bot with heuristics solid enough to filter out or deprioritize sources based on reliability. A big part of what Web search does today is exactly that – it wants to get a relevant result into the top few hits and filter out the dreck – and I bet there’s a lot of room to improve on it.

    Say I’m viewing a page of forum text. Google’s PageRank or similar can’t treat different content on the page as having different reliability, because all it can do is send you to the page or not, at some priority. But an AI-based system could, say, profile individual users on a forum for reliability and give a finer-grained response. Maybe a Reddit thread has material from User A, whom the ranking algorithm doesn’t consider reliable, and User B, whom it does.
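    To make that idea concrete, here’s a rough Python sketch of per-author weighting. Everything in it (`Post`, `user_reliability`, `rank_posts`, the scores) is made up for illustration; it doesn’t reflect how Google, Reddit, or any real ranker works.

    ```python
    # Hypothetical sketch: weight forum posts by how reliable their author is,
    # so two answers on the *same* page can be ranked differently.
    # All names and numbers here are invented for illustration.

    from dataclasses import dataclass


    @dataclass
    class Post:
        author: str
        text: str
        relevance: float  # how well the post matches the query, 0..1


    # Assumed per-user reliability scores, e.g. learned from how often a
    # user's past claims were corroborated elsewhere. Purely invented.
    user_reliability = {
        "user_a": 0.2,  # the ranker doesn't trust this user much
        "user_b": 0.9,  # considered reliable
    }


    def rank_posts(posts: list[Post]) -> list[Post]:
        """Order posts by relevance weighted by the author's reliability;
        unknown authors get a neutral 0.5."""
        return sorted(
            posts,
            key=lambda p: p.relevance * user_reliability.get(p.author, 0.5),
            reverse=True,
        )


    if __name__ == "__main__":
        thread = [
            Post("user_a", "Confident but dubious claim", relevance=0.9),
            Post("user_b", "Sourced, careful answer", relevance=0.8),
        ]
        for post in rank_posts(thread):
            print(post.author, "->", post.text)
    ```

    In this toy version, User B’s slightly less relevant but better-sourced answer outranks User A’s confident one, which is the kind of finer-grained judgment a page-level ranker can’t express.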