• 0 Posts
  • 30 Comments
Joined 1 year ago
Cake day: July 6th, 2023

  • Okay… I even came with receipts on this one. Am I just annoying? What’s with the downvotes, even on comments where people are suggesting target date funds? The fund will bounce back; it’s just a huge dip for one that was supposed to be, according to professionals, safe for retirement use. So sure, I can read the downvotes as disagreement with sensationalism, but I was contesting the suggestion that no funds dropped in that time. If it’s because I got spammy, fair enough… I assume most people don’t reread the other comments after their first pass, but I can stop.

    For reference, target date funds are still usually fine, but a total stock market index fund has historically come out ahead over most ten-year periods (rough comparison sketched below), so whether they are actually worth it is questionable.
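
    To make that concrete, here’s a back-of-the-envelope compounding comparison in Python. The 8% and 6% annual returns are hypothetical round numbers for illustration, not real fund figures:

    ```python
    # Hypothetical compounding comparison: total stock index vs. target date fund.
    def grow(principal: float, annual_return: float, years: int) -> float:
        """Compound a lump sum at a fixed annual return."""
        return principal * (1 + annual_return) ** years

    start = 10_000
    stock_index = grow(start, 0.08, 10)  # assumed ~8%/yr for a total stock index
    target_date = grow(start, 0.06, 10)  # assumed ~6%/yr for a target date fund

    print(f"Total stock index after 10y: ${stock_index:,.0f}")  # ~$21,589
    print(f"Target date fund after 10y:  ${target_date:,.0f}")  # ~$17,908
    ```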

  • Check the Vanguard Target Retirement Income Fund (VTINX) and other similar funds. There was a dip starting in late 2021 that absolutely destroyed a number of retirements, my parents’ included, despite these being low-risk options. Total bond index funds also suffered, since rising rates push bond prices down (see the sketch below), and those are about as low-risk as you can get. Every other fund I have is doing great, but the ones that are supposed to be safe are not.
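
    For anyone wondering why bonds got hit: bond prices fall when rates rise, and to first order the percentage drop is the fund’s duration times the rate change. A minimal sketch, with a made-up duration and rate move, not actual fund data:

    ```python
    # First-order bond price sensitivity: % price change ≈ -duration * rate change.
    # Both numbers below are assumptions for illustration, not actual fund figures.
    duration = 6.5       # assumed portfolio duration, in years
    rate_change = 0.025  # assumed rise in yields: 2.5 percentage points

    price_change = -duration * rate_change
    print(f"Approximate price change: {price_change:.1%}")  # roughly -16%
    ```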

  • And I wouldn’t call a human intelligent if TV were anything to go by. Unfortunately, humans do things they don’t understand constantly and confidently. It’s commonplace, and you could call it “fake it until you make it,” but a lot of the time it’s more a case of people genuinely believing they understand something.

    LLMs act confident that their output will satisfy their fitness function, but they have no ability to see farther than that at this time. Just sounds like politics to me.

    I’m being a touch facetious, of course, but the idea that the line has to be drawn at that term, intelligence, is a bit too narrow for me. I prefer the terms Artificial Narrow Intelligence and Artificial General Intelligence, as they are better defined.

    Narrow refers to a system designed for one task and one task only, such as LLMs, which are trained to minimize a loss function of people accepting the output as “acceptable” language, a highly volatile target. AGI, or Strong AI, is AI that can generalize outside of its targeted fitness function, and do so continuously. I don’t mean something like a computer vision neural network that can classify an anomaly as something the car should stop for. That’s out-of-distribution handling, sure, but if the network can reliably score what falls in bounds under its loss function, then anything that falls significantly outside can be easily flagged (toy sketch below). That’s not true generalization, more domain recognition, but it is important in a lot of safety-critical applications.
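
    Here’s a toy version of that flagging idea, just to illustrate the distinction I’m drawing. The logits and threshold are made up; a real system would calibrate this against a trained model:

    ```python
    import numpy as np

    def softmax(logits: np.ndarray) -> np.ndarray:
        """Convert raw class scores into probabilities."""
        exps = np.exp(logits - logits.max())
        return exps / exps.sum()

    def is_out_of_distribution(logits: np.ndarray, threshold: float = 0.7) -> bool:
        """Flag the input if the model isn't confident in any known class."""
        return softmax(logits).max() < threshold

    print(is_out_of_distribution(np.array([6.0, 1.0, 0.5])))  # False: clearly in-domain
    print(is_out_of_distribution(np.array([1.2, 1.0, 1.1])))  # True: no class stands out
    ```

    That’s domain recognition: the model can tell when it’s unsure, but it still can’t do anything sensible with the flagged input.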

    This is an important conversation to have, though. The way we use language is highly personal, based on our experiences, and that makes reaching a shared understanding in natural language hard. Constructed languages aren’t the answer, because any language in use undergoes change. If the term AI is to change, people will have to understand that the scientific term will not, and pop sci magazines WILL get harder to understand. That’s why I propose splitting the ideas in a way that allows for more nuanced discussion, instead of redefining terms that appear in thousands of groundbreaking research papers spanning a century, which would make reading that research a matter of historical linguistics as well as mathematical understanding. Jargon is already hard enough as it is.