Trying a switch to tal@lemmy.today, at least for a while, due to recent kbin.social stability problems and to help spread load.

  • 0 Posts
  • 20 Comments
Joined 1 year ago
Cake day: June 13th, 2023



  • Reddit had the ability to have a per-subreddit wiki. I never dug into it on the moderator side, but it was useful for some things like setting up pages with subreddit rules and the like. I think that moderators had some level of control over it, at least to allow non-moderator edits or not, maybe on a per-page basis.

    That could be a useful option for communities; I think that in general, there is more utility for per-community than per-instance wiki spaces, though I know that you admin a server with one major community which you also moderate, so in your case, there may not be much difference.

    I don’t know how amenable django-wiki is to partitioning things up like that, though.

    EDIT: https://www.reddit.com/wiki/wiki/ has a brief summary.



  • I broadly agree that “cloud” has an awful lot of marketing fluff to it, as with many previous buzzwords in information technology.

    However, I also think that there was legitimately a shift from a point in time where one got a physical box assigned to them to the point where VPSes started being a thing to something like AWS. A user really did become increasingly-decoupled from the actual physical hardware.

    With a physical server, I care about the actual physical aspects of the machine.

    With a VPS, I still have “a VPS”. It’s virtualized, yeah, but I don’t normally deal with it dynamically.

    With something like AWS, I’m thinking more in terms of spinning up and spinning down instances when needed.

    I think that it’s reasonable to want to describe that increasing abstraction in some way.

    Is it a fundamental game-changer? In general, I don’t think so. But was there a shift? Yeah, I think so.

    And there might legitimately be some companies for which that is a game-changer, where the cost-efficiencies of being able to scale up dynamically to handle peak load on a service are so important that it permits their service to be viable at all.
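    For what it’s worth, that “spin up for peak load, tear down afterwards” workflow can be sketched roughly like this with AWS’s Python SDK – just an illustration, where the AMI ID is a placeholder and credentials/region are assumed to already be configured:

        # Rough sketch of dynamically spinning an instance up and back down with boto3.
        # The AMI ID is a placeholder; credentials and region are assumed to be configured.
        import boto3

        ec2 = boto3.client("ec2")

        # Spin up an instance to absorb peak load...
        reservation = ec2.run_instances(
            ImageId="ami-xxxxxxxx",   # placeholder AMI
            InstanceType="t3.micro",
            MinCount=1,
            MaxCount=1,
        )
        instance_id = reservation["Instances"][0]["InstanceId"]

        # ...and tear it down again when the load drops.
        ec2.terminate_instances(InstanceIds=[instance_id])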



  • I mean, scrolling down that list, those all make sense.

    I’m not arguing that Google should have kept them going.

    But I think that it might be fair to say that Google did start a number of projects and then cancel them – even if sensibly – and that for people who start to rely on them, that’s frustrating.

    In some cases, like with Google Labs stuff, it was very explicit that anything there was experimental and not something that Google was committing to. If one relied on it, well, that’s kind of their fault.





  • It’s available for me on kbin.social as of this writing, and I subscribed.

    As far as I can tell, what one needs to do on kbin is search for communityname@instance. I don’t think that “!” goes in the search string.

    But that search has already been run by now.

    For people on kbin.social, you should be able to see it at:

    https://kbin.social/m/battlestations@lemmy.world

    If you’re on another kbin instance, do the above search. I’m still a little fuzzy about the right syntax in a comment to produce a link to perform such an initial search in a cross-lemmy/kbin, cross-instance fashion. I think that it should be:

    !@battlestations@lemmy.world
    
    

    Giving the following:

    @battlestations

    That generated link does work for me on kbin.social, but I could be wrong about it working elsewhere.

    I really wish that this particular issue would be made clear, as it’s important for community discoverability.

    EDIT: Nope, generated link does not work on lemmy.world, so doesn’t work on lemmy, at least.

    EDIT2: On fedia.io, another kbin instance, the link also doesn’t work, so someone on the instance may need to have already subscribed for the link to be auto-generated. The ability to have a link format that directs to one’s local instance in a way that works on all lemmy and kbin instances, regardless of whether anyone has subscribed, would be really nice.

    EDIT3: Trying:

    [battlestations@lemmy.world](/search?q=battlestations%40lemmy.world)
    
    

    Yields

    battlestations@lemmy.world

    Which works to generate a search on kbin.social.

    It also appears to work on fedia.io, so this is probably the right way to do a link, at least for kbin users.

    EDIT4: It also appears to work for lemmy instances! This should probably be the new syntax used on newcommunities@lemmy.world to link to a community!
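    For anyone who wants to generate that link format for some other community, the pattern is easy to produce programmatically; here’s a small sketch (the function name is just illustrative):

        # Build the relative-search link format described above for any community.
        from urllib.parse import quote

        def community_search_link(community: str, instance: str) -> str:
            # e.g. [battlestations@lemmy.world](/search?q=battlestations%40lemmy.world)
            name = f"{community}@{instance}"
            return f"[{name}](/search?q={quote(name, safe='')})"

        print(community_search_link("battlestations", "lemmy.world"))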




  • tal@kbin.social to Selfhosted@lemmy.world: Why is DNS still hard to learn?

    Yeah, I don’t think I really agree with the author about the difficulty with dig. Maybe it could be better, but as protocols and tools go, DNS is a case where the tool does a pretty good job of coverage. Maybe not for DNSSEC – I don’t know how dig does there – and knowing to use +norecurse is maybe not immediately obvious, but I can list a lot of network protocols for which I wish there were an equivalent to dig.

    However, a lot of what the author seems to be complaining about is not really stuff at the network level, but stuff happening at the host level. And it is true that there are a lot of parts in there if one considers name resolution as a whole, not just DNS, and there is no one tool that can look at the whole process.

    If I’m doing a resolution with Firefox, I’ve got a browser cache for name resolutions independent of the OS. I may be doing DNS over HTTPS, and that may always happen or only be a fallback. I may have a caching nameserver at the OS level. There’s the /etc/hosts file. There’s configuration in /etc/resolv.conf. There’s NIS/yp. Windows has its own name resolution machinery hooked into Windows domains, with several mechanisms for resolving names, whether via broadcasts when no domain controller is present or via the DC when one is; Apple has Bonjour, and more-generally there’s zeroconf. The order in which all of this happens isn’t immediately obvious, and there’s no tool that can monitor the whole process end to end – these are indeed independent systems that kind of grew organically.
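    As a rough illustration of the difference, compare an OS-level lookup, which walks the whole host-side chain, with a bare DNS query, which just talks to a nameserver. This is only a sketch; the direct query assumes the third-party dnspython package is installed:

        # OS-level resolution vs. a direct DNS query.
        import socket

        # getaddrinfo() goes through the host-level chain: /etc/hosts,
        # nsswitch/resolv.conf configuration, any local caching, and so on.
        for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 443):
            print("OS-level result:", sockaddr[0])

        # A direct DNS query (assumes the third-party dnspython package).
        # It still reads resolv.conf to pick a nameserver, but skips
        # /etc/hosts, NSS modules, browser caches, and the rest.
        import dns.resolver
        for record in dns.resolver.resolve("example.com", "A"):
            print("DNS answer:", record.address)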

    Maybe it’d be nice to have an API to let external software initiate name resolutions via the browser and get information about what’s going on, and then have a single “name resolution diagnostic” tool that could span several of these name resolution systems, describe what’s happening, and help highlight problems. I can say that gethostbyname() could also use a diagnostic call to extract more information about what a resolution attempt actually did and why it failed; libc doesn’t expose a lot of useful diagnostic information to the application, though it does know what it is doing during a resolution attempt.


  • > make dig’s output a little more friendly. If I were better at C programming, I might try to write a dig pull request that adds a +human flag to dig that formats the long form output in a more structured and readable way, maybe something like this:

    Okay, fair enough.

    > One quick note on dig: newer versions of dig do have a +yaml output format which feels a little clearer to me, though it’s too verbose for my taste (a pretty simple DNS response doesn’t fit on my screen)

    Man, that is like the opposite approach to what you want. If YAML output is easier to read, that’s incidental; that’s intended to be machine-readable, a stable output format.
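    As an aside, if someone actually wants to consume that machine-readable output, something like the following works – assuming a dig new enough to support +yaml and the PyYAML package:

        # Treat dig's +yaml output as data rather than something to eyeball.
        # Assumes a dig build with +yaml support and PyYAML installed.
        import subprocess
        import yaml

        raw = subprocess.run(
            ["dig", "+yaml", "example.com", "A"],
            capture_output=True, text=True, check=True,
        ).stdout

        messages = yaml.safe_load(raw)   # a list of query/response message objects
        print(type(messages), len(messages))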


  • There are a lot of subtle differences. That being said, they’re mutually-intelligible. Even if you don’t know the other variety, you can probably figure out just about everything from context.

    https://en.wikipedia.org/wiki/Comparison_of_American_and_British_English

    EDIT: Even that’s not a comprehensive list, though. For example, style guides for American English typically use title case for headlines, where nearly all words are capitalized (“Sinead O’Connor Mourned in Irish Mountain Village Where She Once Lived”) and British English style guides typically use sentence case (“Sadiq Khan wins high court battle over London Ulez extension”), though that’s really a matter of style and not an absolute divide between the two.

    Or how the British usually use “River” first (“the River Thames”) and the Americans “River” second (“the Mississippi River”) in names.


  • Duplicity uses the rsync algorithm (librsync) internally for efficient transport. I have used that. I’m presently using rdiff-backup, driven by backupninja out of a cron job, to back up to a local hard drive; it does incremental backups (which would address @Nr97JcmjjiXZud’s concern). That also builds on librsync. There’s also rsbackup, which likewise uses rsync and which I have not used.

    Two caveats I’d note, which may or may not be a concern for one’s specific use case (they apply to rdiff-backup, and I believe both also apply to the other two rsync-based solutions above, though it’s been a while since I’ve looked at them, so don’t quote me on that):

    • One property that a backup system can have is to make backups immutable – so that only the backup system has the ability to purge old backups. That could be useful if, for example, the system with the data one is preserving is broken into – you may not want someone compromising the backed up system to be able to wipe the old backups. Rdiff-backup expects to be able to connect to the backup system and write to it. Unless there’s some additional layer of backups that the backup server is doing, that may be a concern for you.

    • Rdiff-backup doesn’t do dedup of data. That is, if you have a 1GB file named “A” and one byte in that file changes, it will only send over a small delta and will efficiently store that delta. But if you have another 1GB file named “B” that is identical to “A” in content, rdiff-backup won’t detect that and only use 1GB of storage – it will require 2GB and store the identical files separately. That’s not a huge concern for me, since I’m backing up a one-user system and I don’t have a lot of duplicate data stored, but for someone else’s use case, that may be important. Possibly more-importantly to OP, since this is offsite and bandwidth may be a constraining factor, the 1GB file will be retransferred. I think that this also applies to renames, though I could be wrong there (i.e. you’d get that for free with dedup; I don’t think that it looks at inode numbers or something to specially try to detect renames).
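    To make the dedup point concrete, here’s a toy sketch of what content-based dedup would do – purely illustrative, and explicitly not how rdiff-backup behaves:

        # Toy illustration of content-based dedup: identical files map to one stored blob.
        # rdiff-backup does NOT do this; it would store A and B separately.
        import hashlib

        def dedup_store(paths):
            store = {}  # content hash -> the one path whose content we keep
            for path in paths:
                h = hashlib.sha256()
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                store.setdefault(h.hexdigest(), path)  # identical content stored once
            return store

        # Two identical 1GB files "A" and "B" would yield a single entry here,
        # whereas a non-deduplicating backup transfers and stores both.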



  • I remember this story from about twenty years back hitting the news:

    https://www.theregister.com/2001/04/12/missing_novell_server_discovered_after/

    Missing Novell server discovered after four years

    In the kind of tale any aspiring BOFH would be able to dine out on for months, the University of North Carolina has finally located one of its most reliable servers - which nobody had seen for FOUR years.

    One of the university’s Novell servers had been doing the business for years and nobody stopped to wonder where it was - until some bright spark realised an audit of the campus network was well overdue.

    According to a report by Techweb it was only then that those campus techies realised they couldn’t find the server. Attempts to follow network cabling to find the missing box led to the discovery that maintenance workers had sealed the server behind a wall.


  • I do not myself like /r/TwoXChromosomes, and I don’t think that it was a great idea to make it a default sub on Reddit back when that happened – IIRC the rationale was to try to improve Reddit’s appeal to women. But it’s…kind of a bitter, angry place, and I don’t think that that makes for a very good default experience for new users.

    However, I certainly think that people who want that environment should have access to it. I mean, people need to vent about stuff. So, hey, good to have it as an option for people.