• 19 Posts
  • 48 Comments
Joined 8 months ago
Cake day: February 10th, 2024


  • mox@lemmy.sdf.org to Linux@lemmy.world · *Permanently Deleted* (edited 29 days ago)

    > Ted was being an unconscionably rude fucker, but - diatribe aside - his process question is a reasonable one, although his solution “well you’re SOL” was poor, undiplomatic, and unhelpful.

    Maybe so. What I watched of the video had little surrounding context, though.

    I’ve seen more than a few abrasive outbursts from people who care a lot about what they’re doing. When I see video of one, I try to keep in mind that they don’t often come out of nowhere. There’s a good chance that there was a much longer preceding exchange (perhaps not even in person) wherein the speaker had been trying to explain their perspective calmly and politely, but the other person was persistently missing it, due either to stubborn selfishness or to honest lack of understanding. Frustrated people sometimes resort to a blunt approach to try to get their message through.

    In any case, I’m with you in noticing that important issues are being raised here. They’re not easy to solve, so it’s no surprise to see frustration along the way, but they still might lead to a good outcome.

    Drew DeVault recently wrote up an idea similar to one that has been on my mind lately: What might come of a bunch of passionate Rust developers making a new kernel exposing Linux ABIs? It would be much faster and easier than a new kernel from scratch, because there’s already a working reference implementation in C. That seems like an effective way to work through design challenges without disrupting the existing system and development process, and once proven to work, might guide a better-defined path to integration with (or even replacement of) the C kernel. It would certainly have less friction than what we’re seeing now.

    https://drewdevault.com/2024/08/30/2024-08-30-Rust-in-Linux-revisited.html
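
    To make that concrete: the heart of such a kernel would be a dispatcher keyed on Linux’s stable syscall numbers, with everything behind it free to be reimplemented. A minimal userspace sketch (the syscall numbers and the -ENOSYS convention are the real x86-64 ABI; the handler stubs and names are invented for illustration):

    ```rust
    // Toy sketch, not real kernel code: a Linux-ABI kernel's core is a
    // dispatcher keyed on Linux's stable syscall numbers.
    const SYS_WRITE: u64 = 1; // real x86-64 syscall numbers, part of the ABI
    const SYS_EXIT: u64 = 60;

    fn sys_write(_fd: u64, buf: &[u8]) -> i64 {
        // A real kernel would copy from the user buffer to the fd's backend.
        buf.len() as i64
    }

    fn sys_exit(code: u64) -> i64 {
        // A real kernel would tear down the task; here we just report it.
        println!("task exited with status {code}");
        0
    }

    /// What the syscall trap handler would call after saving registers.
    fn dispatch(nr: u64, arg0: u64, buf: &[u8]) -> i64 {
        match nr {
            SYS_WRITE => sys_write(arg0, buf),
            SYS_EXIT => sys_exit(arg0),
            _ => -38, // -ENOSYS: the contract for unimplemented syscalls
        }
    }

    fn main() {
        assert_eq!(dispatch(SYS_WRITE, 1, b"hello"), 5);
        assert_eq!(dispatch(9999, 0, &[]), -38);
    }
    ```

    Userspace programs only see the numbers and the calling convention, which is what makes the ABI a stable target to reimplement against.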


  • mox@lemmy.sdf.org to Linux@lemmy.world · *Permanently Deleted* (edited 1 month ago)

    Your perspective makes more sense when you put it that way.

    I think it’s important to understand that “having to learn Rust” is a proxy for “having to learn, become proficient in, become expert in, commit to regularly using, and take on the additional work of managing bindings between a large, continually changing codebase and Rust, with no foreseeable end”. Multiply that by the number of kernel developers who would be affected, and remember that Rust in particular is famously time-consuming and (at least for some) often painful to use.

    It’s not, “I don’t want to learn this”. (The people maintaining the kernel surely learn new things all the time in the course of their work, after all, as do most advanced programmers.) It’s more like, “I cannot reasonably take on such an enormous additional workload.”

    The Rust camp in this disagreement doesn’t seem to grasp that yet. If everyone involved figures out a way to bridge that gap, I expect the frustrations will go away.
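
    To illustrate what “managing bindings” means in practice, here is an invented, library-style sketch (frob_device is hypothetical, and the kernel’s real bindings are organized quite differently; linking this would require the C object that actually provides the symbol):

    ```rust
    // Invented sketch: a made-up "kernel" C function and the hand-written
    // Rust declaration that must track it.

    use std::os::raw::{c_int, c_uint};

    // C today:    int frob_device(unsigned int id);
    // C tomorrow: int frob_device(unsigned int id, unsigned long flags);
    // The declaration below matches today's signature; after the C change
    // it is stale and must be updated in lockstep with the C code.
    extern "C" {
        fn frob_device(id: c_uint) -> c_int;
    }

    // Safe wrapper the rest of the Rust code would call.
    pub fn frob(id: u32) -> Result<(), i32> {
        // SAFETY: relies on the documented C contract (returns 0 or -errno).
        let ret = unsafe { frob_device(id) };
        if ret == 0 { Ok(()) } else { Err(ret) }
    }
    ```

    Every C signature change ripples into declarations and wrappers like these, which is the recurring cost being weighed.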


  • mox@lemmy.sdf.org to Linux@lemmy.world · *Permanently Deleted* (edited 1 month ago)

    > Mainlining memory safety improvements, in C, for C code should be welcomed and it is very concerning if she indeed got shunned because the end goal was to offer lifetime guarantees (which to my admittedly non-expert eye sounds like it would be a good thing for memory safety in general).

    It would be a good thing. Nobody is debating that. It’s why Linus agreed to start experimenting with Rust in certain parts of the kernel.

    However, trying to integrate one very specific approach to it into a large, already-working system that works quite differently is a lot harder than writing from scratch one small component that mainly has to work in its own native ecosystem (as Lina has done).

    Without good and realistic answers to how the long-term maintenance of such changes would be managed, it is myopically unrealistic to propose those changes, let alone to push this hard for them and be so dismissive of the folks who actually have the experience and responsibility to keep it all running. Especially when it’s something that the entire world has come to depend upon in one way or another, as is the case with the Linux kernel.
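
    For readers wondering what “lifetime guarantees” look like in practice, here is a toy userspace sketch, with invented names, of the property the compiler enforces:

    ```rust
    // In C, nothing stops code from stashing a pointer to data that its
    // owner frees later; the compiler is silent and the crash comes at
    // runtime. Rust encodes the ownership rule in the type.

    struct Registry<'a> {
        // The registry may only hold references that outlive it ('a).
        entries: Vec<&'a str>,
    }

    fn main() {
        let mut registry = Registry { entries: Vec::new() };
        let name = String::from("dev0");
        registry.entries.push(&name);
        // drop(name); // <-- uncommenting this is a compile error:
        //                    `name` is still borrowed by `registry`.
        println!("{:?}", registry.entries);
    }
    ```

    C has no equivalent check; the same mistake there compiles silently and fails at runtime.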

    > > The concern from those contributors (and we might soon see the same in QEMU) is that these bindings are essentially a weaponization which forces the great majority of contributors to learn Rust or drop out. Essentially a hostile takeover.

    > Seems like a moral panic over absolutely nothing (where are the Rust developers allegedly forcing people to learn Rust? all I’ve seen in these threads today is Rust developers asking for an open mind and a willingness to collaborate), and that the response to this “concern” is to block any and all changes that might benefit Rust adoption is really concerning (but unfortunately not surprising) behavior.

    The problem isn’t the immediate thing they’re asking for; it’s the inevitable chain reaction of events that will follow. They don’t seem to understand the bigger picture, so they don’t have answers for how it would be managed. The obvious but unstated solution would be that many kernel developers would have to invest an enormous amount of time (which they might not have) to become proficient in Rust and adapt an enormous amount of surrounding code to it, on top of their existing responsibilities. More than a few people (who are very much in a position to know) see that as unviable, at least for now.

    No viable alternative has been offered. Hence the objection. And, since the vocal minority keep on pushing for their changes without addressing the issues that have been raised, the only sensible response is to reject their request.


  • mox@lemmy.sdf.org to Linux@lemmy.world · *Permanently Deleted* (edited 1 month ago)

    > Rust is—again for better or worse—something Linus thinks is good for the project, and thus learning Rust at least enough to not break the builds is a requirement for the project.

    That misrepresents the situation. Linus accepted Rust provisionally, and only into certain parts of the kernel (drivers). It’s more of an experiment than what you wrote would suggest.

    > Rust is mainstream now,

    Rust is highly visible now, due in no small part to its deafening evangelism. But it is not remotely mainstream in the sense of being a prevailing language, nor in the sense of being representative of the majority. It brings to the table a novel way to solve certain problems, and that is useful, but let’s not mistake that as the only way or those as the only problems.

    > Rust is mainstream now, and “i don’t want to learn this” is a dogshit technical justification.

    That is a straw man.



  • mox@lemmy.sdf.org to Linux@lemmy.world · *Permanently Deleted* (edited 1 month ago)

    The subject is considerably more complex and nuanced than expressed by these one or two (obviously frustrated) people. I won’t presume to capture all the issues, but this person on HN does a decent job of laying out some of them:

    > You have a minority who wants to impose a change, and the concerns outlined in that video by the audience member reflect genuine concerns from many other maintainers and contributors.
    >
    > That this discussion repeats itself can only be taken to be either:
    >
    > 1. Evil C programmers are stodgy and old, and can’t/won’t get with the program, boo!
    >
    > 2. The Rust minority has, as of yet, failed to properly answer what happens when C APIs change in either signature or semantics, either of which can break the Rust bindings. Some questions:
    >
    > • Who tests to avoid this?
    > • Who’s expected to fix it? The one changing the C code (who might not know Rust), or a separate bindings team?
    > • Is there a process? A person to contact/raise the issue with? To get help or to have the work done?
    > • What happens if the bindings cannot be fixed in time for the next kernel release? Put differently, will Rust binding changes hold back changes to the C code?
    >
    > If broken bindings indeed can hold back changes, then C changes are held back by Rust, and the onus is on the committer to either forego improving/evolving the C API or pick up Rust and fix the bindings as well. In that case, yes, the Rust bindings will either freeze the C API or force the individual contributor to learn Rust.
    >
    > That people repeat their concerns isn’t so much an expression of stupidity as a result of the people driving Rust into the kernel having yet to properly communicate how they envision this process working, I suppose.

    And then there is this angle, which also exists:

    > The concern from those contributors (and we might soon see the same in QEMU) is that these bindings are essentially a weaponization which forces the great majority of contributors to learn Rust or drop out. Essentially a hostile takeover.
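
    To make the “semantics” half of that quoted concern concrete, here is an invented, library-style illustration (get_name and its contract are made up; real kernel interfaces are more involved):

    ```rust
    // The C signature stays identical, so nothing breaks at compile time,
    // but the ownership contract changes:
    //
    //   before: char *get_name(int id);  /* caller must free() the result */
    //   after:  char *get_name(int id);  /* points into a cache; do NOT free */
    //
    // The wrapper below is correct under the old contract and a double-free
    // under the new one. Only a human who read the C change can catch it.

    use std::ffi::CStr;
    use std::os::raw::{c_char, c_int};

    extern "C" {
        fn get_name(id: c_int) -> *mut c_char;
        fn free(p: *mut c_char); // the C library's free()
    }

    pub fn name_of(id: i32) -> Option<String> {
        unsafe {
            let p = get_name(id);
            if p.is_null() {
                return None;
            }
            let s = CStr::from_ptr(p).to_string_lossy().into_owned();
            free(p); // fine under the old contract, a bug under the new one
            Some(s)
        }
    }
    ```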


  • mox@lemmy.sdf.org to Linux@lemmy.world · *Permanently Deleted* (edited 1 month ago)

    In the spirit of making suggestions, as this frustrated Rust developer was doing, and in the spirit of reducing vulnerabilities, as Rust itself is trying to do:

    Screenshots on Lemmy are mostly hosted on remote sites, so they don’t show up for people who block off-site media (e.g., to avoid tracking). They don’t work with screen readers (e.g., for the vision impaired). They don’t work with search at all.

    And since Mastodon won’t show anything unless the visitor allows JavaScript, and since it’s a distributed platform instead of a single well-known site, a would-be visitor would have to allow JavaScript on random websites in order to view Mastodon posts. That would expose them to tracking and browser exploits.

    For these reasons, quoting the text you want to share would be better than screenshots or Mastodon links, for convenience, utility, and safety.

  • That 10+ year old “legacy” hardware was practically a supercomputer not long before that, and there aren’t many things we do with computers now that we didn’t back then.

    Sadly, developers and publishers have collectively decided that efficiency no longer matters, since they can instead pressure users into short hardware upgrade cycles, forever. They reduce their own costs by skimping on design and optimization time, and pass those costs on to everyone else. The transferred expense is immeasurable, as it includes money, time, wasted power, wasted materials, and pollution, multiplied by however many users they have.

    > I live booted just to see the slideshow that was Windows 10 on 2GB of RAM, and Debian ran really smooth.

    Yep. It warms my heart a little whenever I hear from someone discovering what a difference efficient software makes. Thanks for the story.


  • mox@lemmy.sdf.org to Linux@lemmy.world · I Love Linux (because it isn't Windows) (edited 2 months ago)

    > Does it matter?

    Yes. Of course it matters. You just disputed an observation about relative amounts, with only a single amount to support your argument. With no point of comparison, your argument is meaningless.

    But now I see you already provided an answer in an earlier comment: “multiple decades of using Windows”. Compared to your “~3 years” with Linux. That doesn’t refute my observation at all, now does it?

    (We don’t even have to consider the likelihood that you’ve also spent more time per year on Windows than you have on Linux, since the difference in years is so significant on its own.)

    > Do a Google for “how to x on Linux” and tell me you’re not instructed to enter a bunch of commands you don’t understand into a terminal

    If you were to complain that googling for random people’s ideas on how to solve a problem tends to yield more helpful results with the older and globally dominant desktop OS than it does with the younger one with a tiny minority desktop market share, then I might say you were right about that. But instead you wrote, “it’s just not true,” about something that you’re not in a position to know. That’s a bit of an overreach, don’t you think?

    It’s fine not to like a thing. It’s fine not to understand a thing. But to go around condemning it as inferior based on your subjective and limited experience is unfair, and more than a little biased.

    > How many years do you think it should take to become familiar with the basic functions of an OS?

    Hard to say, given that most of us have been using our OS of choice for long enough to no longer clearly remember how long it took us. It’s complicated by the fact that so many people learn Windows as their first OS, so their expectations and habits are built around it from a young age, and those shape their approach and assumptions when trying something different. But in my family, grandma got familiar and productive with the basic functions of Linux in roughly 2-3 months. I imagine it varies a lot from person to person.



  • > Windows is, by a landslide, the easier system to use, regardless of what the reasons are.

    Sometimes people find a thing easier to use, but then it turns out they only believe that because they have a lot more (or more recent) experience with it than the alternative.

    I have used both Windows and Linux extensively. The easier system to use is always the one I’m more familiar with. (This became obvious when I tried using Windows again after being away from it for a decade or two.)


  • Want to see a really big difference? Try doing updates (or using Windows at all) with “only” 4GB of RAM and a mechanical hard drive. You can do it in a virtual machine if you don’t have a spare system sitting around. Use Windows 10 or newer for best effect. (Good luck if it needs more than a few weeks of updates; you might be waiting and rebooting for quite a while before it finishes.)

    One might argue that this is unrealistic, because modern Windows system requirements state up front that such modest hardware isn’t enough, but that’s not the point.

    Do the same thing on any modern Linux distro, and notice the difference. Now consider how much more efficient Linux is at making use of your hardware, no matter how much RAM or how fast the disk.

  • > Why would I want to scan a QR code on my phone to read shit on a tiny screen you could’ve just printed on the computer’s display?

    Because getting it off your crashed computer’s display and into text format, so it can be grepped or posted in a bug report, is a cumbersome task. (OCR tools are not ubiquitous, convenient, or reliable.) And it’s an impossible task when half the crash dump has scrolled off the screen.

    > Also this is gonna play out great in secured environments where cameras are a no-no.

    It’s optional.

    > Leave shit like this to the fuckers with no taste at Microsoft. Kernel panics are supposed to be verbose.

    That’s how I felt when the BSoD screen was introduced, but with this new way of using it to reliably deliver more information than ever before, it’s starting to look useful.
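
    For the curious, the idea is easy to play with in userspace. A sketch using the third-party qrcode crate (an assumption here, added via `qrcode = "0.14"` in Cargo.toml; the kernel’s actual implementation has its own encoder and draws to the framebuffer instead of stdout):

    ```rust
    use qrcode::QrCode;

    fn main() {
        // Stand-in for the tail of a panic log that would otherwise scroll away.
        let panic_text = "kernel panic - not syncing: example\nRIP: 0010:demo+0x1/0x10";

        // Encode the text; scanning the code recovers it verbatim, ready to
        // paste into a bug report or grep, with no OCR step in between.
        let code = QrCode::new(panic_text.as_bytes()).expect("text fits in a QR code");
        println!(
            "{}",
            code.render::<char>().quiet_zone(false).module_dimensions(2, 1).build()
        );
    }
    ```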