I want to move away from Cloudflare tunnels, so I rented a cheap VPS from Hetzner and tried to follow this guide. Unfortunately, the WireGuard setup didn’t work. I’m trying to forward all traffic from the VPS to my homeserver and vice versa. Are there any other ways to solve this issue?

VPS Info:

OS: Debian 12

Architecture: ARM64 / aarch64

RAM: 4 GB

Traffic: 20 TB

  • Admiral Patrick@dubvee.org · 3 months ago

    You don’t want to forward all traffic. You can do SNAT port forwards across the VPN, but that requires the clients on your LAN to use the VPS as their gateway (I do this for a few services that I can’t run through a proxy; it’s clunky but works well).

    Typically, you’ll want to proxy requests to your services rather than forwarding traffic.

    1. Set up WireGuard or OpenVPN on the VPS as a VPN server. Allow the listener port through the firewall (I use ufw on Debian, but you can use iptables if you want)
    2. Install HAProxy or Nginx (or Nginx Proxy Manager) on the VPS to act as your frontend. Those will listen on ports 80/443 and proxy requests to your backend servers. They’ll also be responsible for SSL termination, and your public-facing certs will be set there.
    3. Point your DNS records for your services to the VPS’s public IPv4
    4. On your LAN, configure your router to connect to the VPS as a VPN client and route into your LAN from the VPN subnet -or- install the VPN client (WG/OVPN) on each host
    5. In your VPS’s reverse proxy (HAProxy, etc), set the backend server address and port to the VPN address of your host
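
    As a sketch of steps 2 and 5, a minimal VPS-side HAProxy config could look like this (untested; the domain, certificate path, backend port 8080 and the 10.0.0.2 VPN address are all placeholders for your own values):

    ```haproxy
    # Hypothetical HAProxy frontend on the VPS (step 2): public TLS
    # termination, then proxy to the home server over the VPN (step 5).
    frontend https-in
        mode http
        bind :443 ssl crt /etc/haproxy/certs/myapp.example.com.pem
        default_backend home

    backend home
        mode http
        # Backend address = the home host's VPN IP, per step 5
        server homeserver 10.0.0.2:8080 check
    ```

    Nginx works the same way conceptually: a server block listening on 443 with proxy_pass pointed at the peer’s VPN address.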

    I’ve done this since ~2013 (before CF tunnels were even a product) and it has worked great.

    My original use case was to set up direct connectivity between a Raspberry Pi with a 3G dongle and a server at home on satellite internet. Both ends of that were behind CG-NAT, so this was the solution I came up with.

    • lemmyvore@feddit.nl · 3 months ago

      Out of curiosity, why not a simple reverse proxy on the VPS (that only adds the client’s real IP to headers), tunneled to a full reverse proxy on the home server (that does host routing and everything else) through an SSH tunnel?

        • lemmyvore@feddit.nl · 3 months ago

          Variant 1:

          • SSH tunnel established outgoing from home server to VPS_PUBLIC_IP:22, which makes an encrypted tunnel that “forwards” traffic from VPS_PUBLIC_IP:443 to HOME_LOCALHOST:443.
          • Full reverse proxy listening on HOME_LOCALHOST:443 and does everything (TLS termination, host routing, 3rd-party auth etc.)
          • Instead of running the home proxy on the host you can of course run it inside a container; you just need to also run the SSH tunnel from inside that container.

          Pro: very secure, the VPS doesn’t store any sensitive data (no TLS certificates, only an SSH public key) and the client connections pass through the VPS double-encrypted (TLS between the client browser and the home proxy, wrapped inside SSH).

          Con: you don’t get the client’s IP. When the home apps receive the connections they appear to originate at the home end of the SSH tunnel, which is a private interface on the home server.

          Variant 2 (in case you need client IPs):

          • SSH tunnel established same way as variant 1 but listens on VPS_LOCALHOST:PORT.
          • Simple reverse proxy on VPS_PUBLIC_IP:443. It terminates the TLS connections (decrypts them) using each domain’s certificate. Adds the client IP to the HTTP headers. Forwards the connection into VPS_LOCALHOST:PORT which sends it to the home proxy.
          • Full reverse proxy at home set up the same way as variant 1, except you can listen on 80 and skip TLS termination because it’s redundant at this point – the connection has already been decrypted and will arrive wrapped inside SSH.

          Pro: by decrypting the TLS connection the simple proxy can add the client’s IP to the HTTP headers, making it available to logs and apps at home.

          Con: the VPS needs to store the TLS certificates for all the domains you’re serving, you need to copy fresh certificates to the VPS whenever they expire, and the unencrypted connections are available on the VPS between the exit from TLS and the entry into the SSH tunnel.
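
          For illustration, the variant-2 VPS proxy could be sketched in nginx like this (untested; the domain, cert paths and tunnel port 8080 are placeholders, with the SSH tunnel assumed to listen on VPS_LOCALHOST:8080):

          ```nginx
          # Hypothetical variant-2 VPS frontend: terminate TLS, record the
          # real client IP in headers, pass plain HTTP into the SSH tunnel.
          server {
              listen 443 ssl;
              server_name myapp.example.com;                  # placeholder
              ssl_certificate     /etc/ssl/private/myapp.pem; # copied from home
              ssl_certificate_key /etc/ssl/private/myapp.key;

              location / {
                  proxy_pass http://127.0.0.1:8080;           # VPS_LOCALHOST:PORT
                  proxy_set_header Host $host;
                  proxy_set_header X-Real-IP $remote_addr;
                  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              }
          }
          ```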

          Edit: Variant 3? proxy protocol

          I’ve never tried this, but apparently there’s a so-called proxy protocol that can be used to attach information such as the client IP to TLS connections without terminating them.

          You would still need a VPS proxy and a home proxy like in variant 2, and they both need to support proxy protocol.

          The frontend (VPS) proxy would forward connections in stream mode and use proxy protocol to add client info on the outside.
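
          In nginx terms, that frontend might be sketched with the stream module (untested; 8443 as the tunnel port is a placeholder):

          ```nginx
          # Hypothetical variant-3 VPS forwarder: TLS passes through untouched,
          # a PROXY protocol header carrying the client address is prepended.
          stream {
              server {
                  listen 443;
                  proxy_pass 127.0.0.1:8443;   # VPS end of the SSH tunnel (placeholder)
                  proxy_protocol on;
              }
          }
          ```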

          The backend (home) proxy would terminate TLS and do host routing etc., but it can also unpack the client IP from the proxy protocol and place it in HTTP headers for apps and logs.
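
          For illustration, a PROXY protocol v1 header is just a plain-text line prepended to the forwarded connection, with the client IP as its third field (example values made up):

          ```shell
          # A PROXY protocol v1 header as the home proxy would receive it:
          hdr="PROXY TCP4 203.0.113.7 10.0.0.1 51234 443"
          # Pull out the real client address (field 3):
          client_ip=$(echo "$hdr" | awk '{print $3}')
          echo "$client_ip"    # prints 203.0.113.7
          ```

          This is the address the home proxy can then place into X-Forwarded-For for apps and logs.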

          Pro: It’s basically the best of both variant 1 and 2. TLS connections don’t need to be terminated half-way, but you still get client IPs.

          Please note that it’s up to you to weigh the pros and cons of having the client IPs or not. In some circumstances it may actually be a feature to not log client IPs, for example if you expect you might be compelled to provide logs to someone.

            • lemmyvore@feddit.nl · 3 months ago

              The SSH tunnel is just one command, but you may want to use autossh to restart it if it fails.
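
              For example, a systemd unit can keep the autossh tunnel up across reboots (a sketch; the unit name, user, 8443 tunnel port and host are placeholders, and it assumes key-based auth to the VPS already works):

              ```ini
              # /etc/systemd/system/vps-tunnel.service (hypothetical name)
              [Unit]
              Description=Persistent reverse SSH tunnel to the VPS
              After=network-online.target
              Wants=network-online.target

              [Service]
              # -M 0 disables autossh's monitor port; SSH keepalives detect failures.
              ExecStart=/usr/bin/autossh -M 0 -N \
                -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
                -o ExitOnForwardFailure=yes \
                -R 8443:localhost:443 tunnel@VPS_PUBLIC_IP
              Restart=always
              RestartSec=10

              [Install]
              WantedBy=multi-user.target
              ```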

              If you choose variant 2 you will need to configure a pass-through reverse proxy on the VPS that does TLS termination (uses correct certificates for each domain on 443). Look into nginx, caddy, traefik or haproxy.

              For the full home proxy you will once again need a proxy but you’ll additionally need to do host routing to direct each (sub)domain to the correct app. You’ll probably want to use the same proxy as above to avoid learning two different proxies.

              I would recommend either caddy (both) or nginx (vps) + nginx proxy manager (home) if you’re a beginner.

              • AlexPewMaster@lemmy.zipOP · 3 months ago

                How do I make the SSH tunnel forward traffic? It can’t be as easy as just running ssh user@SERVER_IP in the terminal.

                (I only need variant 1 btw)

                • lemmyvore@feddit.nl · 3 months ago

                  You also add the -R parameter:

                  ssh -R SERVER_IP:443:HOME_PROXY_IP:HOME_PROXY_PORT user@SERVER_IP
                  

                  https://linuxize.com/post/how-to-setup-ssh-tunneling/ (you want the “remote port forwarding”). ssh -R, -L and -D options are magical, more people should learn about them.

                  You may also need to open access to port 443 on the VPS. How you do that depends on the VPS service, check their documentation.

                  • AlexPewMaster@lemmy.zipOP · 3 months ago

                    Hi, whenever I try to enter the ports 80 and 443 at the beginning of the -R parameter, I get this error: Warning: remote port forwarding failed for listen port 80. How do I fix this?

    • AlexPewMaster@lemmy.zipOP · 3 months ago

      The biggest obstacle for me is the connection between the VPS and my homeserver. I have tried this today and I tried pinging 10.0.0.2 (the homeserver IP via WireGuard) and get this as a result:

      PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
      From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
      ping: sendmsg: Destination address required
      From 10.0.0.1 icmp_seq=2 Destination Host Unreachable
      ping: sendmsg: Destination address required
      ^C
      --- 10.0.0.2 ping statistics ---
      2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 1019ms
      

      Not sure why though.

      • Admiral Patrick@dubvee.org · 3 months ago

        Can you post your WG config (masking the public IPs and private key if necessary)?

        With WireGuard, the AllowedIPs setting is basically the routing table for it.

        Also, you don’t want to set the endpoint address (on the VPS) for your homeserver peer since it’s behind NAT. You’ll only want to set that on the ‘client’ side. Since you’re behind NAT, you’ll also want to set the persistent keepalive in the client peer so the tunnel remains open.
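
        As a minimal sketch of that arrangement (placeholder keys; AllowedIPs on the home side here routes only the VPN subnet rather than all traffic):

        ```ini
        # VPS wg0.conf, peer entry for the home server: note there is no
        # Endpoint, because the NATed home side dials in on its own.
        [Peer]
        PublicKey = <home server public key>
        AllowedIPs = 10.0.0.2/32            # doubles as the route to this peer

        # Home wg0.conf, peer entry for the VPS (the 'client' side):
        [Peer]
        PublicKey = <VPS public key>
        AllowedIPs = 10.0.0.0/24            # or 0.0.0.0/0 to route everything
        Endpoint = VPS_PUBLIC_IP:51820
        PersistentKeepalive = 25            # keeps the NAT mapping open
        ```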

        • AlexPewMaster@lemmy.zipOP · 3 months ago

          Hi, thank you so much for trying to help me, I really appreciate it!

          VPS wg0.conf:

          [Interface]
          Address = 10.0.0.1/24
          ListenPort = 51820
          PrivateKey = REDACTED
          
          PostUp = iptables -t nat -A PREROUTING -p tcp -i eth0 '!' --dport 22 -j DNAT --to-destination 10.0.0.2; iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source SERVER_IP
          PostUp = iptables -t nat -A PREROUTING -p udp -i eth0 '!' --dport 55107 -j DNAT --to-destination 10.0.0.2;
          
          PostDown = iptables -t nat -D PREROUTING -p tcp -i eth0 '!' --dport 22 -j DNAT --to-destination 10.0.0.2; iptables -t nat -D POSTROUTING -o eth0 -j SNAT --to-source SERVER_IP
          PostDown = iptables -t nat -D PREROUTING -p udp -i eth0 '!' --dport 55107 -j DNAT --to-destination 10.0.0.2;
          
          [Peer]
          PublicKey = REDACTED
          AllowedIPs = 10.0.0.2/32
          

          Homeserver wg0.conf:

          [Interface]
          Address = 10.0.0.2/24
          PrivateKey = REDACTED
           
          [Peer]
          PublicKey = REDACTED
          AllowedIPs = 0.0.0.0/0
          PersistentKeepalive = 25
          Endpoint = SERVER_IP:51820
          

          (REDACTED would’ve been the public / private keys, SERVER_IP would’ve been the VPS IP.)

          • Admiral Patrick@dubvee.org · 3 months ago

            On the surface, that looks like it should work (assuming all the keys are correct and 51820/udp is open to the world on your VPS).

            Can you ping the VPS’s WG IP from your homeserver and get a response? If so, try pinging back from the VPS after that.

            Until you get the bidirectional traffic going, you might try pulling out the iptables rules from your wireguard script and bringing everything back up clean.

            • AlexPewMaster@lemmy.zipOP · 3 months ago

              I do not get a response when pinging the VPS’s WG IP from my homeserver. It might have something to do with the firewall that my VPS provider (Hetzner) is using. I’ve now allowed port 51820 on both UDP and TCP and it’s still the same as before… This is weird.

              • Admiral Patrick@dubvee.org · 3 months ago

                I’m not familiar with Hetzner, but I know people use them; I haven’t heard of any kind of blocks on WG traffic (though I’ve read they do block outbound SMTP).

                Maybe double-check your public and private WG keys on both ends. If the keys aren’t right, it doesn’t give you any kind of error; the traffic is just silently dropped if it doesn’t decrypt.

                • AlexPewMaster@lemmy.zipOP · 3 months ago

                  Hmm, the keys do match on the two different machines. I have no idea why this doesn’t work…

                  • Admiral Patrick@dubvee.org · 3 months ago

                    Dumb question: you’re starting wireguard right? lol

                    In most distros, it’s systemctl start wg-quick@wg0 where wg0 is the name of the config file in /etc/wireguard

                    If so, then maybe double/triple check any firewalls / iptables rules. My VPS providers don’t have any kind of firewall in front of the VM, but I’m not sure about Hetzner.

                    Maybe try stopping wireguard, starting a netcat listener on 51820 UDP and seeing if you can send to it from your homelab. This will validate that the UDP port is open and your lab can make the connection.

                    ### VPS
                    user@vps:  nc -l -u VPS_PUBLIC_IP 51820
                    
                    ### Homelab
                    user@home:  echo "Testing" | nc -u VPS_PUBLIC_IP 51820
                    
                    ### If successful, VPS should show:
                    user@vps:  nc -l -u VPS_PUBLIC_IP 51820
                    Testing
                    
                    

                    I do know this is possible as I’ve made it work with CG-NAT on both ends (each end was a client and routed through the VPS).