• 1 Post
  • 65 Comments
Joined 1 year ago
Cake day: June 9th, 2023





  • The problem I’ve observed with XMPP as an outsider is the lack of a standard. Each server or client has its own supported features and I’m not sure which one to choose.

    That’s a valid concern, but I wouldn’t call it a problem. There are practically two kinds of clients/servers: the ones which are maintained, and which work perfectly well together, and the rest, the unmaintained/abandoned part of the ecosystem.

    And with the protocol being so stable and largely backwards/forwards compatible, those unmaintained clients will still work, just not with the latest and greatest features (XMPP has the machinery to let clients and servers advertise their supported features, so the experience stays at least cohesive).
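
    As an illustration of that discovery machinery (XEP-0030, “service discovery”), here is a minimal sketch using the Python slixmpp library that asks a server which features it advertises; the account, password and target server are placeholders, not anything specific to this thread:

    ```python
    # Minimal sketch: query an XMPP server for the features it advertises (XEP-0030).
    # Requires `pip install slixmpp`; the JID, password and target are placeholders.
    import slixmpp

    class DiscoProbe(slixmpp.ClientXMPP):
        def __init__(self, jid, password, target):
            super().__init__(jid, password)
            self.target = target
            self.register_plugin('xep_0030')  # Service Discovery
            self.add_event_handler('session_start', self.probe)

        async def probe(self, event):
            self.send_presence()
            await self.get_roster()
            # disco#info returns the feature list of the target entity
            info = await self['xep_0030'].get_info(jid=self.target)
            for feature in sorted(info['disco_info']['features']):
                print(feature)
            self.disconnect()

    if __name__ == '__main__':
        xmpp = DiscoProbe('user@example.org', 'hunter2', 'example.org')
        xmpp.connect()
        xmpp.process(forever=True)
    ```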

    Which client would you recommend?

    Depends on which platform you are on and your type of usage. You should be able to pick one from the list at https://joinjabber.org ; that should keep you away from the fringe/unmaintained stuff. Personally I use gajim and monocles.


    They both qualify as “open, federated messaging protocols”, with XMPP being the older one (about 25 years old) and an internet standard (IETF), but at this point we can consider Matrix to be quite old too (10 years old). On paper they are quite interchangeable: they both focus on bridging with established protocols, etc.

    Where things differ, though, is that Matrix is practically a single-vendor implementation: the same organization (Element/New Vector/however it’s called these days) develops both the reference client and the reference server. The server, incidentally, is super complex, poorly documented (the code is the documentation), and practically incompatible with the other (semi-official) implementations. That is a red flag, because it also happens that this organization was built on venture-capital money with no financial stability in sight. XMPP is a much more diverse and accessible ecosystem: there are multiple independent teams and companies implementing servers and clients, and the protocol itself is very stable, versatile and extensible. This is how you find XMPP today running the backbone of the modern internet: dispatching notifications to all Android devices, acting as the signalling system behind millions of IoT devices, and providing messaging to billions of users (WhatsApp is, by the way, based on XMPP).

    Another significant difference is that, despite 10 years of existence and millions invested into it, Matrix still has not reached stability (and probably never will): the organization recently announced Matrix 2.0 as the (yet another) definitive answer to the protocol’s shortcomings, without changing anything about what makes the protocol so painful to work with, and the requirements (compute, memory, bandwidth) to run Matrix at even a small scale remain orders of magnitude higher than XMPP’s. This has discouraged many organizations (even serious ones, like Mozilla and KDE) from running Matrix themselves, and it further contributes to the de-facto centralization and single point of control that federated protocols are meant to prevent.






  • Well, that is boldly assuming:

    • that endlessly duplicating services across containers causes no overhead: you probably already have a SQL server, a Redis server, a PHP daemon, a web server, … but a Docker image neither knows nor cares about what you already run, so it happily duplicates all of it, wasting storage and memory (a quick way to check this on your own host is sketched after this list)

    • that the sum of those individual components works as well and as efficiently as a single (highly-optimized) pooled instance: every service/database in its own container duplicates tight event loops, socket communication, JITs, caches, … instead of pooling them and optimizing globally for the whole server, wasting threads, causing CPU cache misses, missing optimization paths, and increasing CPU load in the process

    • that those images are configured according to your actual end-users’ needs, and not to some packager’s conception of a “typical user”: do you do mailing? A/V calling? Collaborative document editing? Your container probably includes (and runs) all of those things, and more, whether you want it or not

    • that those images are properly tuned for your hardware, which amounts to betting on the packager to know in advance (and for every deployment) your usable memory, storage layout, available cores/threads, baseline load and service prioritization

    And all of this is before even assuming that Docker’s own abstractions are free (which they are not).
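
    Here is the rough sketch mentioned above, using the Python Docker SDK; grouping containers by base image name is just a heuristic, and nothing here is specific to any particular application:

    ```python
    # Rough sketch: count how many copies of the same base image are running
    # side by side on one host. Grouping by image name is only a heuristic.
    import collections
    import docker  # pip install docker

    client = docker.from_env()
    counts = collections.Counter()

    for container in client.containers.list():
        # tags look like "postgres:15", "redis:7", "php:8.2-fpm", ...
        for tag in container.image.tags:
            counts[tag.split(':')[0]] += 1

    for image, n in counts.most_common():
        if n > 1:
            print(f'{image}: {n} instances running side by side')
    ```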








  • Interesting. Were the installed apps/features comparable between the OC and NC instances? I can’t even find an “email”-equivalent app for ownCloud in their marketplace.
    I don’t want to sound like I’m coming to the defence of NC, but I’d be curious to find an as-factual-as-possible comparison between “bare-bones NC” and “bare-bones OC”.


  • The thing is that I have experience with other complex or high-usage PHP applications and I know how to optimize things. What I see in NC is poorly structured code, with warnings and errors thrown around left and right.

    Well, on my instance the logs are pretty quiet, and I am not a PHP developer, so I can’t form an opinion on the overall architecture. But if you take the time to write down what you feel is wrong with the Nextcloud codebase, I’m pretty sure many people (me first) would read it with interest and perhaps even do something about it (typically the kind of “HN frontpage” content, if it’s well written).

    The OP also said that ownCloud gave him a much better experience out of the box, and that’s still a “complex” PHP application.

    Last I heard of ownCloud, people were saying it had been rewritten in Go or something similar. Funny bit of history: Nextcloud forked off of ownCloud, got a ton of mindshare in the early days, and quickly became the better/faster of the two (perf was one compelling reason for me to migrate back then). I wouldn’t mind NC following suit (in the end, we all benefit from this kind of competition).

    NC webmail is unusable

    I don’t plan on ever using it, but thanks for the heads-up. That said, if you feel that Roundcube performs better, it happens that someone has packaged it for NC, so you should be able to use that instead of the troublesome client.


  • Yeah sure. I’m not the only one complaining, as you can read in this post

    I’m not saying that you are the only one complaining, but from what I can tell, most people in your situation are deploying their instance from “cookie-cutter” Docker images. In practice, that often means the same machine ends up hosting multiple web servers, database servers, application servers, etc. And those servers are built around heavily-optimized event loops that assume direct access to the full server’s resources. So if you want predictable, good performance, there’s no way around tweaking some knobs and being very mindful of how each and every service is deployed alongside the next one. And of course, you can’t trust someone else to know better than you what’s running in your box (not even the Nextcloud developers) and which service should get preference over which under heavy load.
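
    To make the “tweaking some knobs” part a bit more concrete, here is a small sketch with the Python Docker SDK; the container name and the limits are made-up placeholders, not recommendations:

    ```python
    # Sketch: cap one container so it cannot starve its neighbours under load.
    # The container name and the limits below are placeholders.
    import docker  # pip install docker

    client = docker.from_env()
    app = client.containers.get('nextcloud-app')  # hypothetical container name
    app.update(
        cpu_shares=512,       # half the default relative CPU weight (1024)
        mem_limit='1g',       # hard memory cap
        memswap_limit='1g',   # no extra swap on top of the cap
    )
    ```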

    Nextcloud has this going against it: it uses advanced PHP features and large objects that need to be cached at different layers. That makes it a slightly more complex app than your go-to PHP CRM, but it’s not unheard of either (you’d be in the same spot hosting a large MediaWiki or WordPress).

    Does that make it garbage? Well, you are entitled to your own opinion, of course.

    Also your comment tells me that you’re full of shit, because you’re implying that both a generic Docker setup and mine are all shit. You can’t have it both ways. What are you suggesting? That the NC guys did a bad job on their Docker images?

    Do I deserve the insult? I did answer the Docker part, though. In general, I’d say that you are better off not using Docker in prod, unless you have the time and energy to rebuild images so they fit your pre-existing deployment (which nobody does), and then to invest the time in fine-tuning across multiple containers (which very much goes against the “fire and forget” mindset of most Docker users).

    How many users? How much data?

    About a dozen users, 2 TB, upwards of 700k files

    Btw do you use the webmail at all or are you about to tell me that these screenshots are hallucinations?

    I’m definitely not using the webmail. If you have performance issues, you should rather start with just the “core” (i.e. files) and add apps incrementally.

    And again, I’m not saying that Nextcloud doesn’t deserve to be optimized, or somehow made more foolproof. I too went through a phase of “that can’t be real, this cannot be that slow” and walked my way out of it.