Just some Internet guy

He/him/them 🏳️‍🌈

  • 0 Posts
  • 214 Comments
Joined 2 years ago
Cake day: June 25th, 2023

  • 1: is lemmy good for macro blogging? Like how you’d use something like Tumblr or the like.

    Lemmy is a lot closer to Reddit, and is centered on communities, not people. I think you’d have a better experience on one of the microblogging platforms for that use case.

    2: when you create a community for yourself and post in it, does it reach other people or is it only if they actively search for it? Is it common here to create a community just for yourself to post blogs and the like? Can you even do that?

    That’s a big “it depends” as some instances have bots to go subscribe to every community and pull it all in. Lemmy only federates content to instances that have at least one subscriber to the community, so discoverability would be a problem.

    3: how does the federation thing work exactly? I’m from an instance that has downvotes disabled, so what happens when someone tries to downvote me?

    You just don’t see them and they’re not counted in the score displayed to you. They’re still added up in the back end, unless you post to a community with downvotes disabled, in which case they’re discarded entirely. But since this community is on lemmy.ml and that instance accepts downvotes, they work as you’d expect. You still won’t see them on your side.

    4: is lemmy safe from AI scraping or nah? Is this platform good for artists compared to something like mastodon, twitter, or bluesky?

    No, far from it. Everything is visible publicly, and when it’s public there’s little to do to stop AI scraping.

    5: is there search engine crawling on lemmy? Are all posts on here possible to show up in search engines or nah? How do things work on that front?

    Yes. I don’t even need to crawl Lemmy to index it, all the other instances are willingly sending it to me in real time. I have a copy of everything my instance has seen.

    6: how’s development? Is lemmy going to continue to build and improve or are things gonna stay as they are for the foreseeable future?

    Only the developers can really answer that, but it seems slow but steady.

    7: how privacy friendly and secure is lemmy really? I’m guessing a lot better than reddit, but just curious.

    Zero, none. There is zero privacy on Lemmy because the fediverse is inherently public. I can see who voted what, I could see the entire edit history of a given post or comment, I could store all deleted posts and comments, the data is all on my server should I want to do anything with it.

    So your privacy will depend solely on your OpSec: don’t share personally identifiable information or anything.

    8: are there normal people or communities here? From what I’m seeing all of lemmy seems primarily focused on politics and tech, am not seeing much beyond that.

    Those do drown out pretty much everything else, but you can look at Lemmy Explorer to find communities you like and subscribe to them, and then browse by subscriptions. The default feed is basically a firehose of literally everything going on in every community at once.

    Some people also opt to just block the communities they’re not interested in, so that all that’s left is the interesting ones and you don’t miss anything.






  • To kind of visually see it, I found this thread of someone who took oscilloscope captures of the output of their UPS, and they’re all pseudo-sines: https://forums.anandtech.com/threads/so-i-bought-an-oscilloscope.2413789/

    As you can see, the power isn’t very smooth at all. It’s good enough for a lot of use cases and lower end power supplies, because they just shove that into a bridge rectifier and capacitors. Higher end power supplies have tighter margins, and are also more likely to have safety features to protect the PC, so they can go into protection mode and shut off. Bad power can mean dips in power to the system, which can cause calculation errors, which is very undesirable, especially on a server. It probably also messes with power factor correction circuits, something cheap PSUs often skimp on but a good high quality unit would have, and it may shut down because of it.

    As you can see in those images too, it spends a significant amount of time at 0V (no power, that’s at the middle of the screen), whereas a sine wave spends an infinitesimally short time at 0: it goes positive and then negative immediately. All that time spent at 0, you rely on big capacitors in the PSU to hold enough charge to make it to the next burst of power. With a sine wave they’d hold just long enough (we’re going down to 12V and 5V from 120/240V input, so the amount of time normally spent at or below ±12V is actually fairly short).
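    To put a rough number on that last point, here’s a quick back-of-the-envelope sketch, assuming 120 V RMS mains and ignoring rectification details:

```python
import math

# Fraction of each mains cycle a 120 V RMS sine spends at or below
# ±12 V in magnitude (the PSU's highest output rail).
V_RMS = 120.0
V_PEAK = V_RMS * math.sqrt(2)   # ~170 V peak
V_RAIL = 12.0

# |sin(theta)| <= k holds for a fraction 2*asin(k)/pi of every cycle
frac = 2 * math.asin(V_RAIL / V_PEAK) / math.pi
print(f"time spent at or below ±{V_RAIL:.0f} V: {frac:.1%} of each cycle")
# → time spent at or below ±12 V: 4.5% of each cycle
```

    So a true sine wave sits in that dead zone only about 4.5% of the time, versus the long flat stretches at 0 V you can see in the pseudo-sine captures.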

    It’s technically the same average power, so most devices don’t really care. It really depends on the design of the particular unit, some can deal with some really bad power inputs and manage just fine and some will get damaged over long term use. Old linear ones with an AC transformer on the input in particular can be unhappy because of magnetic field saturation and other crazy inductor shenanigans.

    Pure sine UPSes are better because they’re basically the same as what comes out of the wall outlet. Line interactive ones are even better because they’re ready to take over the moment power goes out and exactly at the same spot in the sine wave so the jitter isn’t quite as bad during the transition. Double conversion is the top tier because they always run off the battery, so there’s no interruption for the connected computer at all. Losing power just means the battery isn’t being charged/kept topped off from the wall anymore so it starts discharging.



  • I would probably skip Lemmy Easy Deploy and just do a regular deployment so it doesn’t mess with your existing setup. Getting it running with just Docker is not that much harder, and you just need to point your NGINX at it. Easy Deploy kind of assumes it’s got the whole machine to itself, so it’ll try to bind to the same ports as your existing NGINX, as does the official Ansible playbook.

    You really just need a Postgres instance, the backend, pictrs, the frontend, and some NGINX glue to make it work. I recommend stealing the files from the official Ansible playbook, as there are a few gotchas in the NGINX config: the frontend and backend share the same host and one is just layered on top of the other.
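    As a rough sketch of what that stack looks like (image names, ports, and env vars here are from memory of the official docker-compose and may be out of date — verify against the current official files, and note the NGINX glue config is omitted):

```yaml
# docker-compose.yml — minimal Lemmy stack behind your own NGINX (sketch)
services:
  postgres:
    image: postgres:16-alpine          # database for the backend
    environment:
      POSTGRES_USER: lemmy
      POSTGRES_PASSWORD: changeme      # match this in lemmy.hjson
      POSTGRES_DB: lemmy
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data

  lemmy:
    image: dessalines/lemmy:latest     # the backend (API, federation)
    volumes:
      - ./lemmy.hjson:/config/config.hjson
    depends_on: [postgres, pictrs]

  lemmy-ui:
    image: dessalines/lemmy-ui:latest  # the frontend
    environment:
      LEMMY_UI_LEMMY_INTERNAL_HOST: lemmy:8536
    ports:
      - "127.0.0.1:1234:1234"          # point your existing NGINX here

  pictrs:
    image: asonix/pictrs:latest        # image hosting
    volumes:
      - ./volumes/pictrs:/mnt
```

    The key difference from Easy Deploy is that nothing binds to 80/443; your existing NGINX proxies to the loopback port instead.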




  • Good thing you don’t need to watch his videos to use his tools. Not a huge fan either, but the tool works and gets the job done. I wouldn’t use linutil because it’s kind of a mess, and I imagine winutil ain’t that much better, but I don’t know how to do all those tweaks myself so I welcome them anyway. If it’s useful to at least one person then it has some value.

    If we cared that much about the people behind the software rather than the software on its own merits, we’d be rushing to eliminate GNU from our systems, because RMS is known for some pretty disgusting takes. The guy behind Hyprland is also fairly toxic, but Hyprland is still nice.


  • I love Linux but I don’t think it’s for you yet, at least not without a lot of sacrifices and compromises. If 3 and 6 and possibly 8 are non-negotiable then they’re dealbreakers. Some of it can be somewhat handled with things like virtual machines and GPU passthrough, but that will absolutely involve a bunch of terminal work to get running well, and possibly extra hardware.

    I should also mention that I’m a goal oriented person. I just want to use it, I don’t want to tinker with it. That goes for pretty much any tool. I consider the OS a tool.

    Sometimes, achieving goals requires upgrading your skills and taking the time to learn them properly, and for Linux the terminal is the most powerful tool you could have. We don’t use it because we have to, we use it because it’s a powerful tool that can get just about anything done.

    In your case, using tools to debloat Windows might be the best bet. I’ve been using winutil for my Windows VMs, works great and removes most of the crap: https://github.com/ChrisTitusTech/winutil


  • IMO a lot of what makes nice self-hostable software is clean and sane software in general. A lot of stuff tend to end up trying to be too easy and you can’t scale up, or stuff so unbelievably complicated you can’t scale it down. Don’t make me install an email server and API keys to services needed by features I won’t even use.

    I don’t particularly mind needing a database and Redis and the likes, but if you need MySQL and PostgreSQL and Redis and memcached and an ElasticSearch cluster and some of it is Go, some of it is Ruby and some of it is Java with a sprinkle of someone’s erlang phase, … no, just no, screw that.

    What really sucks is when Docker is used as a bandaid to hide all that insanity under the guise of easy self-hosting. It works, but it’s still a pain to maintain and debug, and it often uses way more resources than it really needs. Well written software is flexible and sane.

    My stuff at work runs equally fine locally in under a gig of RAM and barely any CPU at idle, and yet spans dozens of servers and microservices in production. That’s sane software.


  • If you look at it from a different angle and ask: who might be interested in a user being reported, given that each instance operates independently? The answer is all of them.

    • The instance you’re on, because the content might violate the local instance’s rules, and the admin might want to delete it, even if just from that instance.
    • The instance hosting the community, because regardless of the other two instances they might not want that there.
    • The instance of the user being reported, because it’s their user and if they’re causing trouble they might want to ban the account.

    The rest comes naturally: obviously if the account is banned at the source it’s effectively banned globally. If it’s banned on the community’s instance, then you won’t see that user there but might on other instances. And your instance can ban the user, in which case they’re freely posting on other instances but you won’t see it from your perspective.



  • Max-P@lemmy.max-p.me to Linux@lemmy.world, re: “I’m fairly sure linux just hates me.” · 2 months ago

    And when I was going to the library, asking questions online, and then printing the answers a week later, everyone was saying “Oh, try these other distros…”

    It is ASTOUNDING to me how linux users think. The answer to every problem someone else is facing is “Your way is stupid, that’s why it doesn’t work. Do it MY way, on MY distros”

    So, this isn’t very fun advice, but a big part of a Linux distro is its starting configuration, dependencies and everything. So it is true that changing your distro can make a whole bunch of things work out of the box that didn’t before, especially if you were on a weird distro. Now, can you make it work on any distro? Also yes, but it’s more effort, and if you’re printing help pages at a library, yeah, it might be a better choice to just try a bunch of distros and at least find the one that gets you on the Internet out of the box. Bazzite for example literally ships with Steam, Wine, and a whole bunch of gaming utilities out of the box, so for a gamer more stuff works right away, and that’s great if you hate tinkering.

    There’s a lot of complicated legal and philosophical stuff in the Linux world where some drivers are either not shipped by default because it’s proprietary and it makes puritans angry, or legally, the firmware just cannot be distributed by the distro.

    And sometimes, you really just don’t vibe with the distro. Ubuntu has a way of doing things (2013 would put you in the Unity era, which was pretty terrible and widely hated for that reason). Fedora has a different way of doing things. Mint takes a bunch of stuff people hate about Ubuntu and fixes it. Pop!_OS takes stuff they hate from Ubuntu and fixes it. If you hated Ubuntu, why bother with distros that are just trying to fix it?

    And when picking niche distros like Zorin you also significantly reduce your help pool, because they’re not that popular, so people don’t know how they work. You ask me something about Void Linux and I’m gonna be like, I don’t know man, I have no idea how to solve your problem on that distro. But I do know how on Arch and Debian based distros. Niche distros are a sharp double-edged sword: they can be very nice because they line up exactly with what you want, or you could be fighting endlessly with the distro because you’re trying to do the polar opposite of what it wants you to do.

    You can go beyond that obviously, Linux is endlessly customizable, but that takes experience and skill with Linux to do successfully because it might involve compiling stuff from source and whatnot. It’s not hard, but it does come with a lot of pitfalls on its own.

    Everything went fine until the actual install at 5:59 of the video. At 6:00 he jump cuts to after the installation. The installation itself took roughly 5 hours.

    And then it took roughly 30 minutes to boot. I googled it, and it should only take 15-20 minutes to install, and boot almost instantly.

    What’s the performance of your USB stick? You can check with utilities like CrystalDiskMark on Windows, so you get the Windows numbers. My guess is your USB stick is a USB 2.0 stick and is horribly slow at anything other than bulk file reading and writing. That could explain why it was so bad and that’d make it not Linux’s fault. The live USB would decompress into RAM, so it’s faster because it’s compressed (less data to read), and the data is then in RAM where it’s very very fast to access.

    I actually had the same issue but in reverse: I needed to run motherboard software just once on Windows to configure a few features once forever. I installed it on the only USB stick I had, a 32GB Verbatim from Microcenter. It took hours to install, a solid half an hour to make it to the Windows desktop, and probably like an hour to manage to open up Edge, download the software, install it and run it. It was absolutely horrible.

    I tried sudo mount /dev/sdc/ but terminal spit out an error.

    That’s not a valid command because /dev/sdc/ isn’t a folder, it’s a device file. It probably would have worked with sudo mount /dev/sdc, without the trailing /. But that still would only work if the drive is listed in /etc/fstab as to where it should be mounted. You ideally want sudo mount /dev/sdc /some/other/path so it ends up mounted where you want it. Or just mount it from the file manager which will do it for you in a temporary location.


    I don’t really have better advice for you and Linux. For some people it just works out of the box; other people have more annoying hardware that’s a pain to get working. With the attitude you have, you don’t seem to want to take the time to learn how Linux works and get used to it, so you end up frustrated and you just go nowhere. Sometimes it really just takes persistence to get through it.

    You’re not helping yourself at all doing weird setups like temporarily installing it on a USB stick. As I said in my other comment, you started off with an impossible task (that app will not work in Wine), so you’re also thrown into a rabbit hole of commands and troubleshooting that has no chance of succeeding. That is very demotivating in itself.

    The “easy” Linux distros can be convenient, but ultimately if you want to have a good Linux experience you must be willing to learn how it works and get familiar with how things are done. You’re relearning to use a computer all over again; it’s not an easy task, and it takes persistence. The issues must be turned into learning experiences, not frustration and points towards “I’m done with this stupid OS”.




  • The issue DNS solves is the same as the phone book. You could memorize everyone’s phone number/IP, but it’s a lot easier to memorize a name or even guess the name. Want the website for walmart? Walmart.com is a very good guess.

    Behind the scenes the computer looks it up using DNS and it finds the IP and connects to it.

    The way it started, people were maintaining and sharing hosts files. A new system would come online, and people would take its IP and add it to their hosts file. It was quickly found that this really doesn’t scale well: you could want to talk to dozens of computers you’d have to find the IP for! So DNS was developed as a central directory service any computer can query, with a hierarchy to distribute it and all. And it worked, really well, so well we still use it extensively today. The desire to delegate directory authority is how the TLD system was born. The hosts file didn’t use TLDs, just plain names, as far as I know.
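    That era actually survives to this day as /etc/hosts on Linux and macOS, which is still checked before DNS is ever asked. It’s the same idea: a flat, hand-maintained name-to-IP list (the second entry below is a made-up example using a documentation IP range):

```
# /etc/hosts — flat name-to-IP mappings, no hierarchy, no TLDs required
127.0.0.1     localhost
203.0.113.7   fileserver    # illustrative entry: plain name, no domain
```

    Every machine that wants to reach “fileserver” needs its own copy of that line, and every copy has to be updated by hand when the IP changes — which is exactly the scaling problem DNS solved.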


  • There’s definitely been a surge in speculation on domain names. That’s part of the whole dotcom bubble thing. And it’s why I’m glad TLDs are still really hard to obtain, because otherwise they would all be taken.

    Unfortunately there’s just no other good way to deal with it. If there’s a shared namespace, someone will speculate on the good names.

    Different TLDs can help with that a lot by having their own requirements. .edu for example, you have to be a real school to get one. Most ccTLDs you have to be a citizen or have a company operating in the country to get it. If/when it becomes a problem, I expect to see a shift to new TLDs with stronger requirements to prove you’re serious about your plans for the domain.

    It’s just a really hard problem when millions of people are competing to get a decent globally recognized short name, you’re just bound to run out. I’m kind of impressed at how well it’s holding up overall despite the abuse, I feel like it’s still relatively easy to get a reasonable domain name especially if you avoid the big TLDs like com/net/org/info. You can still get xyz for dirt cheap, and sometimes there’s even free ones like .tk and .ml were for a while. There’s also several free short-ish ones, I used max-p.fr.nf for a while because it was free and still looks like a real domain, it looks a lot like a .co.uk or something.