I’ve noticed a bit of panic around here lately, and since I’ve had to continuously fight against pedos for the past year, I’ve developed tools to help me detect and prevent this kind of content.

As luck would have it, we recently published one of our anti-CSAM checker tools as a Python library that anyone can use. So I thought I could use this to help lemmy admins feel a bit safer.

The tool can either go through all the images in your object storage and delete any CSAM it finds, or it can run continuously and scan and delete all new images as well. The suggested approach is to run it once with --all, and then run it as a daemon and leave it running.
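
For anyone curious what that sweep looks like in practice, here’s a minimal sketch of the “go through everything in object storage, delete what gets flagged” flow. This is not the library’s actual API: the bucket name, the endpoint, and the is_csam() classifier call are all placeholders.

```python
# Rough sketch of an "--all" style sweep over S3-compatible object storage.
# Bucket, endpoint and is_csam() are placeholders, not the library's real API.
import io

import boto3
from PIL import Image

s3 = boto3.client("s3", endpoint_url="https://objects.example.com")
BUCKET = "lemmy-media"  # hypothetical bucket name


def is_csam(image: Image.Image) -> bool:
    """Placeholder for the actual GPU-backed classifier."""
    return False  # swap in the real check


paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        try:
            image = Image.open(io.BytesIO(body))
        except Exception:
            continue  # not an image, skip it
        if is_csam(image):
            s3.delete_object(Bucket=BUCKET, Key=obj["Key"])
```

The daemon mode is the same idea in a loop, only looking at objects newer than the last pass.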

A better option would be to be able to retrieve the exact images uploaded via the lemmy/pict-rs API, but we’re not quite there yet.

Let me know if you run into any issues or have improvements to suggest.

EDIT: Just to clarify, you should run this on your desktop PC with a GPU, not on your lemmy server!

  • snowe@programming.dev

    Cloudflare still has false positives, and the NCMEC does not care if they get false positives. If you read some of those links I provided, it wouldn’t be considered a generic filtering operation, at least from how I’m reading it. I wouldn’t take the chance, especially not with running the software on your own hardware in your own house, split from the server.

    I think you’re not in the US? So it’s probably different in your jurisdiction. I just want to make it clear that in the US, from what I’ve read up on, this would be considered against the law. You are running software to filter for CSAM, so you are obligated to report it. It’s up to a year of jail time for not doing so.

    • db0@lemmy.dbzer0.comOP

      One can easily hook this script up to forward reports to whoever needs them, but I think they might be a bit annoyed after you send them a couple hundred thousand false positives without any CSAM.
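
      For what it’s worth, “hooking this script to forward” would just mean adding something like the sketch below; the reporting endpoint and payload fields here are invented for illustration and are not part of the tool.

```python
# Hypothetical forwarding hook: POST each flagged object to whatever reporting
# endpoint applies in your jurisdiction. URL and payload fields are made up.
import requests

REPORT_URL = "https://reporting.example.org/submit"  # placeholder endpoint


def forward_report(bucket: str, key: str, score: float) -> None:
    payload = {
        "bucket": bucket,
        "object_key": key,
        "classifier_score": score,  # whatever confidence the scanner produced
    }
    requests.post(REPORT_URL, json=payload, timeout=10).raise_for_status()
```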

      • snowe@programming.dev

        The problem is you aren’t warning people that deleting CSAM without following your applicable laws can potentially get people who use your tool thrown in jail. You went ahead and built the tool without detailing any of the applicable laws around it. Cloudflare explicitly calls that out in their documentation because it’s very important. I really like the stuff you put out, but this is not the way to do it. I know lots of people on Lemmy hate CF and any sort of large company, but running this stuff yourself without understanding the law is sure to get someone in trouble.

        I don’t even know why you think I was recommending that your system forward the reports to the authorities. I didn’t sleep very much last night, so I must have glossed over it, but I see nowhere that I said that.

        • db0@lemmy.dbzer0.comOP

          Honestly, I think you’re grossly overstating the legal danger a random small lemmy sysadmin is going to get into for running an automated tool like this.

          In any case, you’ve made your point. People can now make their own decisions on whether it’s better to pretend nothing is wrong on their instance, or whether they at least want this sort of blanket cleanup. Far be it from me to tell anyone what to do.

          I don’t even know why you think I was recommending that your system forward the reports to the authorities

          You may not have meant it, but you strongly implied something of the sort. But since this is not what you’re suggesting, I’m curious to hear what your optimal approach to this problem would be.

          • snowe@programming.dev

            You may not have meant it, but you strongly implied something of the sort. But since this is not what you’re suggesting, I’m curious to hear what your optimal approach to this problem would be.

            The optimal approach is to use the existing systems that massive corporations already use to solve this problem. I know everyone on lemmy hates that, but this isn’t something to mess around with. The reason this is optimal is that NCMEC provides the hashes only to these companies. You’re not going to be able to get the hashes (this is a good thing… imagine some child abuser getting access to these hashes and then using them to evade detection). So if you can’t get these hashes (and you shouldn’t want them either), then you should use a service that has them. It is by far the best way to filter and has been proven successful time and time again.

            The easiest is Cloudflare’s, and yes, you will have to use them as your DNS, which I also understand the vast majority of admins hate. But there are other options as well:

            • PhotoDNA
            • Safer
            • Facebook PDQ
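
            To make the distinction concrete: these services work by matching a perceptual hash of each upload against a curated database of known material, not by running an open-ended classifier. The sketch below shows the general shape of that approach using the generic imagehash pHash and a made-up known_bad.txt hash list; the real PhotoDNA/PDQ algorithms and NCMEC hash sets are exactly what small operators can’t get.

```python
# Illustration of hash-list matching in general (NOT PhotoDNA or PDQ): compare
# a perceptual hash of an upload against a list of known-bad hashes. The real
# services keep their hash databases private; "known_bad.txt" is hypothetical.
import imagehash
from PIL import Image

with open("known_bad.txt") as f:
    known_bad = [imagehash.hex_to_hash(line.strip()) for line in f if line.strip()]


def matches_known_material(path: str, max_distance: int = 4) -> bool:
    """True if the image's pHash is within max_distance bits of a known hash."""
    h = imagehash.phash(Image.open(path))
    return any(h - bad <= max_distance for bad in known_bad)
```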

            Because access to the original hash databases is considered sensitive, NCMEC will not provide these to smaller platforms. Neither will Microsoft provide the source code of its PhotoDNA algorithm except to its most trusted partners, because if the algorithm became widely known, it is thought that this might enable abusers to bypass it.

            That article actually points out that a solution called Safer, which uses machine learning and image recognition, has very flawed results and is incredibly biased. So if these massive platforms can’t get this kind of image recognition right, then it’s probably best not to waste money and time on it. The article even points out that for smaller platforms it’s not worth it.

            We also know in general terms that machine learning algorithms for image recognition tend to be both flawed overall, and biased against minorities specifically. In October 2020, it was reported that Facebook’s nudity-detection AI reported a picture of onions for takedown. It may be that for largest platforms, AI algorithms can assist human moderators to triage likely-infringing images. But they should never be relied upon without human review, and for smaller platforms they are likely to be more trouble than they are worth.

                • db0@lemmy.dbzer0.comOP

                  Have you already registered for these services and are you using them on your lemmy? If so, their success is something that will be demonstrated in time.

                  • snowe@programming.dev

                    They have been used on millions of websites already. It’s pretty clear that it works. It doesn’t need to be used on lemmy to prove it works. And my application is currently in review so no I haven’t used it. But that really doesn’t matter. Especially if you’re comparing it to a tool written by one person that has been out for a few days.