Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 8 Posts
  • 595 Comments
Joined 2 years ago
Cake day: October 4th, 2023


  • I’ve never used the software package in question.

    If you already own the software, and the hardware it uses to talk to the microcontroller is a serial port or a USB-attached serial adapter, then you can most likely just run it under WINE. WINE isn’t a VM but a Windows compatibility layer, so you don’t need to run a copy of Windows at all. It’d be my first shot: you get to use the program like any other Linux program, without spending extra memory and overhead on a Windows VM.

    So, say the program in question has an installer, picbasic-installer.exe.

    You’re going to want to install WINE first. I don’t use Arch, so I’ll leave the details to you, but its package manager is pacman, and there may be a graphical frontend that you prefer to use.
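    From the command line, it should look something like this (I don’t run Arch day-to-day, so treat the exact invocation as an assumption):

    # wine lives in Arch's multilib repo; enable it in /etc/pacman.conf if it isn't already
    $ sudo pacman -S wine
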

    Then, in a terminal, invoke picbasic-installer.exe (assuming that’s what the installer is called) under WINE:

    $ wine picbasic-installer.exe
    

    That’ll run the installer.

    My guess is that that much won’t have problems: WINE will run the thing, and it’ll probably let you compile BASIC programs.

    You can then go ahead and fire up your PICBASIC PRO program. I don’t know how you launch Windows programs in your Arch environment. In general, WINE installers will drop a .desktop file under ~/.local/share/applications, and that can be started the way any other application can. I use a launcher program, tofi, to start programs like that under sway via tofi-drun, but you probably have a completely different environment set up. Your desktop environment on Arch likely has an application menu, or a searchable launcher, that will include WINE programs with a .desktop file; KDE Plasma, GNOME, Cinnamon, and the rest each have their own route, but I don’t use those, so I’ll leave that part to you.
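    If nothing in your environment picks up the .desktop entry, you can always launch the program straight from a terminal; the path below is purely a guess (look under ~/.wine/drive_c for wherever the installer actually put things):

    # path is hypothetical; check ~/.wine/drive_c for the real install location
    $ wine "$HOME/.wine/drive_c/Program Files (x86)/PICBASIC PRO/PBP.exe"
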

    Where you’re likely to run into problems is the serial port. If the PICBASIC PRO program wants to talk to that microcontroller programmer via a serial port (what Windows would call COM1 or COM2), it needs to reach /dev/ttyS0 or /dev/ttyS1 on Linux, or, if it’s USB-attached, /dev/ttyUSB0, /dev/ttyUSB1, etc.
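    WINE maps DOS-style COM ports to Unix devices via symlinks in its dosdevices directory, so if the program insists on a particular COM port, you can wire that up by hand. The device name here is an assumption; check which one your programmer actually enumerates as:

    $ ls -l ~/.wine/dosdevices/
    # map COM1 to the USB serial adapter (assumed here to be ttyUSB0)
    $ ln -s /dev/ttyUSB0 ~/.wine/dosdevices/com1

    The other snag is permissions: ordinary users typically don’t have permission to write directly to those device files by default.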

    There are a couple ways to grant permission, but one of the most-straightforward ways is to add your user to a group that has permission.

    The basic Unix file permission system has each file — including device files, like /dev/ttyS0 — owned by one user and one group.

    On my Debian trixie system:

    $ ls -l /dev/ttyS0
    crw-rw---- 1 root dialout 4, 64 Jan 15 20:46 /dev/ttyS0
    $
    

    So that serial port device file is owned by the user root, which has read and write privileges (the first “rw”), and by the group dialout, which also has read and write privileges (the second “rw”). Any user who belongs to that group will be able to write to the serial ports.

    On my system, my user doesn’t belong to the “dialout” group:

    $ groups
    tal cdrom floppy sudo audio dip video plugdev users render netdev bluetooth lpadmin scanner docker libvirt ollama systemd-journal
    $
    

    So I’m going to want to add my user to that group:

    $ sudo usermod -aG dialout tal
    $
    

    Group membership gets assigned to your processes when you log in (that is, usermod just changes which groups your login session, and all of its child processes, will have). Technically, you don’t have to log out: you could run sg dialout at this point and, from that shell, run wine and see if it works (there’s an example below), but I’d probably log out and back in again, to keep things simplest. After you do that, you should see that you’re in the “dialout” group:

    $ groups
    tal <list of groups> dialout
    $
    

    After that, you should be able to use the program and write code to the microcontroller.
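
    If you want to test immediately without logging out, the sg route looks like this (program path hypothetical, as before):

    $ sg dialout     # start a shell whose processes also carry the dialout group
    $ groups         # should now include dialout
    $ wine "$HOME/.wine/drive_c/Program Files (x86)/PICBASIC PRO/PBP.exe"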




  • https://stackoverflow.com/questions/30869297/difference-between-memfree-and-memavailable

    Rik van Riel’s comments when adding MemAvailable to /proc/meminfo:

    /proc/meminfo: MemAvailable: provide estimated available memory

    Many load balancing and workload placing programs check /proc/meminfo to estimate how much free memory is available. They generally do this by adding up “free” and “cached”, which was fine ten years ago, but is pretty much guaranteed to be wrong today.

    It is wrong because Cached includes memory that is not freeable as page cache, for example shared memory segments, tmpfs, and ramfs, and it does not include reclaimable slab memory, which can take up a large fraction of system memory on mostly idle systems with lots of files.

    Currently, the amount of memory that is available for a new workload, without pushing the system into swap, can be estimated from MemFree, Active(file), Inactive(file), and SReclaimable, as well as the “low” watermarks from /proc/zoneinfo.

    However, this may change in the future, and user space really should not be expected to know kernel internals to come up with an estimate for the amount of free memory.

    It is more convenient to provide such an estimate in /proc/meminfo. If things change in the future, we only have to change it in one place.

    Looking at the htop source:

    https://github.com/htop-dev/htop/blob/main/MemoryMeter.c

       /* we actually want to show "used + shared + compressed" */
       double used = this->values[MEMORY_METER_USED];
       if (isPositive(this->values[MEMORY_METER_SHARED]))
          used += this->values[MEMORY_METER_SHARED];
       if (isPositive(this->values[MEMORY_METER_COMPRESSED]))
          used += this->values[MEMORY_METER_COMPRESSED];
    
       written = Meter_humanUnit(buffer, used, size);
    

    It’s adding used, shared, and compressed memory to get the amount actually tied up, but it disregards cached memory entirely, which, per the commit message above, is problematic: some of that cache may not actually be reclaimable.

    top, on the other hand, uses the kernel’s MemAvailable directly, as does free, shown here from the same procps-ng codebase:

    https://gitlab.com/procps-ng/procps/-/blob/master/src/free.c

    	printf(" %11s", scale_size(MEMINFO_GET(mem_info, MEMINFO_MEM_AVAILABLE, ul_int), args.exponent, flags & FREE_SI, flags & FREE_HUMANREADABLE));
    

    In short: you probably want to trust /proc/meminfo’s MemAvailable (which is what top will show), and htop is probably giving a misleadingly low number.
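
    If you want to check the kernel’s estimate directly:

    $ grep MemAvailable /proc/meminfo
    $ free -h    # the "available" column is the same MemAvailable figure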



  • “If databases are involved they usually offer some method of dumping all data to some kind of text file. Usually relying on their binary data is not recommended.”

    It’s not really about text versus binary. The problem is that a normal backup program that treats a live database file as just another file to copy is liable to have the DBMS write to the database mid-backup, producing a backed-up file that’s a mix of old and new versions and may be corrupt.

    Either:

    1. The DBMS needs to have a way to create a dump that won’t change during the backup, possibly triggered by the backup software if it’s aware of the DBMS (see the example just below this list)

    or:

    2. One needs filesystem-level support to grab an atomic snapshot (e.g. take a snapshot using something like btrfs, then back up the snapshot rather than the live filesystem). This avoids the database file changing while the backup runs.
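
    For #1, most DBMSes ship a dump tool that can produce a consistent dump even while the database is live. With PostgreSQL, for example (database name hypothetical):

    $ pg_dump -Fc mydb > mydb.dump   # dump taken from a single consistent snapshot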

    In general, if this is a concern, I’d tend to favor #2 as an option, because it’s an all-in-one solution that deals with all of the problems of files changing while being backed up: DBMSes are just a particularly thorny example of that.
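
    A minimal sketch of what #2 can look like, assuming btrfs and hypothetical paths:

    # take a read-only snapshot of the subvolume holding the data
    $ sudo btrfs subvolume snapshot -r /srv /srv/.backup-snap
    # back up the frozen snapshot rather than the live tree
    $ rsync -a /srv/.backup-snap/ /mnt/backup/srv/
    # clean up once the backup completes
    $ sudo btrfs subvolume delete /srv/.backup-snap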

    Full disclosure: I mostly use ext4 myself, rather than btrfs. But I also don’t run live DBMSes.

    EDIT: Plus, #2 also provides consistency across different files on the filesystem, though that’s usually less critical. You won’t run into a situation where software updates File A, then does a sync(), then updates File B, but your backup program grabs the new version of File B and the old version of File A. Absent help from the filesystem, your backup program can’t know where write barriers spanning different files fall.

    In practice, that’s not usually a huge issue, since fewer programs are affected by this than by write ordering internal to a single file, but under Unix filesystem semantics a program is permitted to expect that write order to persist, and to kerplode if it doesn’t…and a traditional backup won’t preserve it the way that a backup with help from the filesystem can.



  • I think that the problem will be if software comes out that doesn’t target home PCs. That’s not impossible. I mean, it happens today with Web services. Closed-weight AI models aren’t going to be released to run on your home computer. I don’t use Office 365, but I understand that at least some of it is a cloud service.

    Like, say the developer of Video Game X says “I don’t want to target a ton of different pieces of hardware. I want to tune for a single one. I don’t want to target multiple OSes. I’m tired of people pirating my software. I can reduce cheating. I’m just going to release for a single cloud platform.”

    Nobody is going to take your hardware away. And you can probably keep running Linux or whatever. But…not all the new software you want to use may be something that you can run locally, if it isn’t released for your platform. Maybe you’ll use some kind of thin-client software — think telnet, ssh, RDP, VNC, etc for past iterations of this — to use that software remotely on your Thinkpad. But…can’t run it yourself.

    If it happens, I think that that’s what you’d see. More and more software would just be available only to run remotely. Phones and PCs would still exist, but they’d increasingly run a thin client, not run software locally. Same way a lot of software migrated to web services that we use with a Web browser, but with a protocol and software more aimed at low-latency, high-bandwidth use. Nobody would ban existing local software, but a lot of it would stagnate. A lot of new and exciting stuff would only be available as an online service. More and more people would buy computers that are only really suitable for use as a thin client — fewer resources, closer to a smartphone than what we conventionally think of as a computer.

    EDIT: I’d add that this is basically the scenario that the AGPL is aimed at dealing with. The concern was that people would just run open-source software as a service. They could build on that base, make their own improvements. They’d never release binaries to end users, so they wouldn’t hit the traditional GPL’s obligation to release source to anyone who gets the binary. The AGPL requires source distribution to people who even just use the software.


  • I will say that, realistically, in terms purely of physical distance, a lot of the world’s population is in a city and probably isn’t too far from a datacenter.

    https://calculatorshub.net/computing/fiber-latency-calculator/

    It’s about five microseconds of latency per kilometer down fiber optics, so ten microseconds per kilometer for a round trip. A datacenter 100 km away adds only about a millisecond of round-trip propagation delay.

    I think a larger issue might be bandwidth for some applications. Like, if you want to unicast uncompressed video to every computer user, say, you’re going to need an ungodly amount of bandwidth.

    DisplayPort looks like it’s currently up to 80Gb/sec. Okay, not everyone is currently saturating that, but if you want comparable capability, that’s what you’re going to have to be moving from a datacenter to every user. For video alone. And that’s assuming that they don’t have multiple monitors or something.

    I can believe that it is cheaper to have many computers in a datacenter. I am not sold that any gains will more than offset the cost of the staggering fiber rollout that this would require.

    EDIT: There are situations where it is completely reasonable to use (relatively) thin clients. That’s, well, what a lot of the Web is — browser thin clients accessing software running on remote computers. I’m typing this comment into Eternity before it gets sent to a Lemmy instance on a server in Oregon, much further away than the closest datacenter to me. That works fine.

    But “do a lot of stuff in a browser” isn’t the same thing as “eliminate the PC entirely”.







  • In reply to “Setting up LaTeX on debian?” on Linux@lemmy.world:

    I’ve always just written single-file LaTeX, but it looks like the settings.sty failure you’re getting is because of this:

    % Most commands and style definitions are in settings.sty.
    \usepackage{settings}
    

    “By installing texlive from source, and installing CurVe to the working directory, I was able to fix that problem.”

    I’m not sure how this would resolve the issue — I’d think that you’d still need settings.sty. It looks to me like Debian trixie packages CurVe in texlive-pictures, so I don’t think that you need to manually install texlive or CurVe from source:

    $ apt-file search curve.cls
    texlive-pictures: /usr/share/texlive/texmf-dist/tex/latex/curve/curve.cls
    $ apt show texlive-pictures
    [snip]
    curve -- A class for making curriculum vitae
    [snip]
    $ sudo apt install texlive texlive-pictures
    [snip]
    $ pdflatex test.tex
    [snip]
    ! LaTeX Error: File `settings.sty' not found.
    

    I think that that example CV you have is missing some of the LaTeX source, the stuff that’s in its settings.sty. Like, it might not be the best starting point, unless you’ve resolved that bit.
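
    If you just want to get past the missing-file error and see how far the rest compiles, you could drop a stub settings.sty next to the .tex file. This is purely a hypothetical placeholder, since the real file’s contents are unknown; any commands the CV expects from it will still come up undefined:

    % settings.sty -- hypothetical stub; the original file's definitions are unknown
    \ProvidesPackage{settings}
    \endinput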

    EDIT: If you just want a functioning CurVe example, I can render this one:

    https://github.com/ArwensAbendstern/CV-LaTeX/tree/master/simple CurVe CV English

    You need to download CV.ltx and experience.ltx; then $ pdflatex CV.ltx renders it to a PDF for me.





  • “I assume you’ve tried multiple USB ports?”

    That’s a thought.

    “Check fdisk -l and see if it shows up in there.”

    It won’t hurt, but if he’s not seeing anything with lsblk, fdisk -l probably won’t show it either, as they’re both iterating over the block devices.

    Honestly, if he doesn’t know that the hard drive itself functions, the drive not working would be my prime theory as to culprit. I have had drive enclosures not present a USB Mass Storage device to a computer if they can’t talk to the hard drive over SATA.
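
    One way to narrow that down is to check whether the kernel sees the enclosure at the USB level at all, independent of any block device showing up:

    $ lsusb                  # does the enclosure's USB-SATA bridge chip appear?
    $ sudo dmesg --follow    # watch kernel messages, then plug the drive in

    If the bridge shows up in lsusb but no /dev/sd* device ever appears, that points at the drive or the enclosure’s SATA side rather than at the computer’s USB ports.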