I’m in the process of re-configuring my home lab and would like to get some help figuring out log collection. My setup was a hodgepodge of systems/OSes using rsyslog to send syslogs to a syslog listener on my qnap but that’s not going to work anymore (partly because the qnap is gone).

My end goal is to be as homogeneous as I can manage: mostly Debian 12 systems (physical and VM) and Docker containers. Does anyone know of a FOSS solution that can ingest journald and syslog, and is it even possible to send Docker logs to a log collector?
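
For context on the Docker part: Docker can already forward container logs through its logging drivers, so any collector that speaks syslog or journald can receive them. A minimal /etc/docker/daemon.json sketch, assuming the built-in syslog driver and a placeholder hostname (loghost.lan):

    {
      "log-driver": "syslog",
      "log-opts": {
        "syslog-address": "udp://loghost.lan:514",
        "tag": "{{.Name}}"
      }
    }

Setting "log-driver": "journald" instead writes container output into the host journal, where a journald-aware collector picks it up alongside everything else.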

Thanks

  • Dogeek@sh.itjust.works

You could use Grafana Loki to handle logs. It’s similar to Prometheus, so if you’re already using that and/or Grafana it’s an easy setup, and the API is really simple too.
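
    As a sketch of how light that setup is, here is a minimal Promtail config that ships the systemd journal to Loki (the Loki URL and labels below are placeholders, not anything from this thread):

        # promtail-config.yaml (example)
        server:
          http_listen_port: 9080

        positions:
          filename: /tmp/positions.yaml

        clients:
          # placeholder Loki endpoint
          - url: http://loki.lan:3100/loki/api/v1/push

        scrape_configs:
          - job_name: journal
            journal:
              max_age: 12h
              labels:
                job: systemd-journal
            relabel_configs:
              - source_labels: ['__journal__systemd_unit']
                target_label: unit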

    • farcaller@fstab.sh

I second this. Loki for logs, VictoriaMetrics for metrics: it’s significantly more lightweight than an ELK stack (and any lag is irrelevant for a homelab), and VM is similarly much more careful with RAM than Prometheus.
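
      A rough docker-compose sketch of that combination, using the projects’ published images and default ports (add volumes for persistence; this is just a starting point, not a full config):

          services:
            loki:
              image: grafana/loki:latest        # log store
              ports:
                - "3100:3100"
            victoriametrics:
              image: victoriametrics/victoria-metrics:latest   # Prometheus-compatible metrics store
              ports:
                - "8428:8428"
            grafana:
              image: grafana/grafana:latest     # dashboards; add Loki and VM as data sources
              ports:
                - "3000:3000"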

      • keyez@lemmy.world

Much less intensive than anything Elasticsearch-based. I have Loki, Grafana, and 3 Promtail clients running in my env (switched from Graylog/Elasticsearch), and over the last few days Loki is sitting at 3 GB of memory and 8% CPU while processing logs for about 6 devices.

  • tko@tkohhh.social

I use a Graylog/OpenSearch/MongoDB stack to log everything. I spent a good amount of time writing parsers for each source, but the benefit is that everything is normalized, which makes searching easier. I’m happy with it as a solution!
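
    For a feel of what those parsers look like, here is a small, entirely hypothetical Graylog pipeline rule that tags messages from one source (the field values are made up for illustration):

        rule "tag nginx logs"
        when
          // hypothetical condition: the sending host's name contains "nginx"
          has_field("source") && contains(to_string($message.source), "nginx")
        then
          set_field("app", "nginx");
        end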

    • vegetaaaaaaa@lemmy.world

I also use Graylog to aggregate logs from various devices (mostly from rsyslog over SSL/TLS). The only downsides for me are the license (not a big problem for a personal setup) and the resource usage of the overall Graylog/Elasticsearch stack. I still think it’s great.
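
      For the rsyslog-over-TLS part, the client-side forwarding config looks roughly like this (paths, hostname, and port are placeholders; the rsyslog-gnutls package is needed, and Graylog must have a matching TLS syslog input):

          # /etc/rsyslog.d/50-graylog.conf (example path)
          global(
            DefaultNetstreamDriver="gtls"
            DefaultNetstreamDriverCAFile="/etc/ssl/certs/my-ca.pem"
          )
          action(
            type="omfwd" target="graylog.lan" port="6514" protocol="tcp"
            StreamDriver="gtls" StreamDriverMode="1"
            StreamDriverAuthMode="x509/name"
            StreamDriverPermittedPeers="graylog.lan"
          )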

I use this Ansible role to install and manage it.

For simpler setups with resource constraints, I would just use an rsyslog server as the aggregator instead of Graylog, and lnav for the analysis/filtering/parsing part.
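
      A sketch of that aggregator side, writing one file per sending host so lnav has something tidy to open (ports and paths are just examples):

          # /etc/rsyslog.d/10-collect.conf (example path)
          module(load="imudp")
          module(load="imtcp")
          input(type="imudp" port="514" ruleset="remote")
          input(type="imtcp" port="514" ruleset="remote")

          # one log file per sending host
          template(name="PerHost" type="string"
                   string="/var/log/remote/%HOSTNAME%/syslog.log")
          ruleset(name="remote") {
            action(type="omfile" dynaFile="PerHost")
          }

      After that, pointing lnav at /var/log/remote/ (it accepts files or directories) covers the filtering and parsing side.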

  • Lodra@programming.dev

You can also take a look at OpenTelemetry. It’s a huge open source project with lots of functionality. It handles logs just fine and can also provide metrics and traces. Might be overkill for your needs, but it’s an excellent tool.
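
    As a sketch: the contrib build of the OpenTelemetry Collector (otelcol-contrib) ships journald and syslog receivers, so a logs-only pipeline could look something like this (the debug exporter just prints to stdout; you would swap in whatever backend you actually use, and the port is an example):

        receivers:
          journald:
            directory: /var/log/journal
          syslog:
            udp:
              listen_address: "0.0.0.0:5514"   # example port
            protocol: rfc3164

        processors:
          batch:

        exporters:
          debug:

        service:
          pipelines:
            logs:
              receivers: [journald, syslog]
              processors: [batch]
              exporters: [debug]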

  • wildbus8979@sh.itjust.works

All these newfangled projects, but really I just use remote rsyslogd. Works just fine, super robust, easy setup. You can literally be up and running within minutes.
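
    The client side really is a one-liner dropped into /etc/rsyslog.d/ (the hostname is a placeholder; @@ means TCP, a single @ means UDP), plus an imudp/imtcp input on the receiving box:

        # /etc/rsyslog.d/90-forward.conf (example path)
        *.*  @@loghost.lan:514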

  • DefederateLemmyMl@feddit.nl

FWIW I use an Elastic stack for that: Filebeat and Journalbeat to collect logs, Logstash to sort and parse them, and Elasticsearch to store them. Not sure if it satisfies your FOSS requirement, as I don’t believe it’s entirely open source.
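
    Note that Journalbeat is deprecated in newer Elastic releases in favor of Filebeat’s journald input, so a minimal filebeat.yml under that assumption might look like this (the Logstash host is a placeholder):

        filebeat.inputs:
          - type: journald
            id: everything                 # read the whole local journal

        output.logstash:
          hosts: ["logstash.lan:5044"]     # placeholder Logstash host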