  • Yes, ULA is one of the exceptions I mentioned. It covers fc00::/7, which runs from fc00 to fdff, though I believe most use just the upper half, fd00::/8 (there's a quick sketch of this at the end of this comment). I use one for an intermediate network between my edge router and my primary firewall so it doesn't consume one of my limited /64 networks.

    I haven’t played with IPv6 NAT much. I know its use is a bit discouraged, as NAT was always designed as a stopgap measure for IPv4 exhaustion. It might be a good option if you need additional space and your ISP doesn’t support additional prefixes. Just keep in mind that if you use these addresses in DNS, they won’t be accessible externally.
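
    Since we're on the topic, here's a minimal Python sketch (standard ipaddress module only; the random /48 and the names are purely illustrative) of what fc00::/7 covers and how a locally assigned fd00::/8 prefix is typically built, RFC 4193 style:

    ```python
    # Sketch: inspect the ULA range and build a random locally-assigned /48 (RFC 4193 style).
    import ipaddress
    import secrets

    ula = ipaddress.ip_network("fc00::/7")
    print(ula[0], "-", ula[-1])   # fc00:: through fdff:ffff:ffff:ffff:ffff:ffff:ffff:ffff

    # Locally assigned ULAs use the upper half (fd00::/8): 0xfd plus a random 40-bit Global ID.
    global_id = secrets.randbits(40)
    prefix = ipaddress.IPv6Network(((0xFD << 120) | (global_id << 80), 48))
    print(prefix)                                 # e.g. fdxx:xxxx:xxxx::/48
    print(next(prefix.subnets(new_prefix=64)))    # first /64 you could hand to a segment
    ```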


  • It’s a bit complicated and depends on your ISP’s support level.

    If your ISP supports basic IPv6, they will likely use SLAAC or DHCPv6 to advertise the /64 that any directly connected devices, like your router, can use (/64 being the default size for a single LAN segment, even on point-to-point connections). If you have devices behind that router that want to use IPv6, you will need additional prefixes. The most common method nowadays is Prefix Delegation (DHCPv6-PD), where your router asks the upstream router for an additional routeable prefix to use on its other interfaces (see the sketch at the end of this comment). The RFC for prefix delegation recommends a /48, but many ISPs are not delegating that much. I only get half of a /60 from my ISP’s modem.

    If the ISP just provides you a static routeable prefix, then you would just assign that to your router’s interface and enable SLAAC/DHCPv6 to give out that prefix. This only needs to be configured on a single device, which is why hard-coding IPv6 addresses on servers and workstations isn’t recommended.

    Keep in mind that your router will also need a firewall, as all of these IPv6 prefixes are routeable and public. While finding a host in IPv6 space is like finding a needle in a haystack, you could still find yourself having a bad day if you treat it like private IPv4 space.

    The end result, though, is that you set up DNS so that devices register their IPv6 addresses and it just works. There’s also the mDNS protocol, which supports IPv6 and will do segment-local resolution for device names.
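
    As promised, a minimal Python sketch of the prefix-delegation math (2001:db8:abcd:10::/60 is a documentation prefix standing in for whatever your ISP delegates, and the segment names are made up):

    ```python
    # Sketch: carve a delegated prefix into per-segment /64s.
    import ipaddress

    delegated = ipaddress.ip_network("2001:db8:abcd:10::/60")   # what DHCPv6-PD might hand you
    lans = list(delegated.subnets(new_prefix=64))               # a /60 yields 16 /64s

    for name, net in zip(["lan", "iot", "guest", "dmz"], lans):
        print(f"{name}: {net}")   # each segment gets its own /64 to advertise via SLAAC/DHCPv6
    ```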


  • On one hand, you definitely don’t want to be assigning manual/static IPv6 addresses to all your devices, because if your prefix ever changes you’ll have to update it everywhere. IPv6 doesn’t really have a concept of private address space (with a few exceptions). On the other hand, most modern IPv6 stacks support dynamic protocols like SLAAC while also letting you pin a static suffix onto the published prefix (e.g. you want :0:0:1234:1 to go to your server; if SLAAC gets the prefix 200x::5678/64, your server would assign itself 200x::5678:0:0:1234:1).

    DHCPv6 fixes a lot of these headaches for managed networks by allowing you to reserve a specific IPv6 address for a given DUID.

    IMO, it’s your network, do what you want. I have two jump Raspberry Pis that I gave static suffixes so I always know where they are without relying on DNS or whatever. Edit: I apparently misremembered how I had these set up. I use a custom interface-up script to take the SLAAC prefix and append the custom suffix to it as a secondary IP (roughly the idea sketched below).
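
    A rough Python sketch of that prefix-plus-suffix combination (my actual version is an interface hook script; the documentation prefix and the ::1234:1 suffix here are just example values):

    ```python
    # Sketch: combine a learned /64 prefix with a fixed 64-bit suffix (interface ID).
    import ipaddress

    prefix = ipaddress.ip_network("2001:db8:0:5678::/64")   # stand-in for whatever SLAAC learned
    suffix = 0x0000_0000_1234_0001                           # the "0:0:1234:1" half of the address

    addr = ipaddress.ip_address(int(prefix.network_address) | suffix)
    print(addr)   # 2001:db8:0:5678::1234:1
    ```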


  • You’ll probably have to provide the netmask info for us to review. If you’re using /24 then those all reside in the same network so I would expect them to be in the same broadcast domain.

    If you have mismatched netmasks, a device could be trying to route traffic to the gateway, which then reflects it back. Ensure your devices have the same network, netmask, and broadcast IP (e.g. 192.168.1.0/24 will have a broadcast IP of 192.168.1.255).
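
    A quick Python sketch of that check (the addresses and masks are just examples):

    ```python
    # Sketch: the netmask determines the network and broadcast; mismatched masks split hosts apart.
    import ipaddress

    net = ipaddress.ip_network("192.168.1.0/24")
    print(net.network_address, net.broadcast_address)    # 192.168.1.0 192.168.1.255

    a = ipaddress.ip_interface("192.168.1.10/24")
    b = ipaddress.ip_interface("192.168.1.200/25")       # mismatched mask on the second host
    print(a.network == b.network)                        # False -> traffic goes via the gateway
    ```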


  • For the disks, you may have a small issue with having multiple types of disks in a single RAID10, as those disks might have slightly different physical attributes. ZFS is an option here: you can create two mirror vdevs, one per drive type, and add them to the same zpool, which effectively creates the RAID10 you’re looking for. You would typically not use LVM on top of ZFS, but if you go with a traditional RAID10 instead, LVM would let you create logical volumes that can be expanded easily at a later time.

    Another ZFS option is to use RAIDZ1 with the 4 disks in a single vdev. The vdev will use one disk’s worth of space, spread across all the disks, to maintain parity. You will have 12TB of usable storage out of your 16TB raw, and you can lose one drive with no data loss.
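
    A back-of-the-envelope Python sketch of the two layouts (assuming 4x4TB to match the 16TB raw figure; ZFS metadata and slop space are ignored):

    ```python
    # Rough usable-capacity comparison, ignoring ZFS overhead.
    disks_tb = [4, 4, 4, 4]                                 # assumed: 4 x 4TB = 16TB raw

    # RAID10-style: two 2-way mirror vdevs striped together in one pool
    mirrors_tb = min(disks_tb[0:2]) + min(disks_tb[2:4])
    print("2x mirror vdevs:", mirrors_tb, "TB usable")      # 8 TB

    # Single RAIDZ1 vdev: one disk's worth of parity across the vdev
    raidz1_tb = (len(disks_tb) - 1) * min(disks_tb)
    print("RAIDZ1 vdev:", raidz1_tb, "TB usable")           # 12 TB
    ```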


  • Since we don’t know what server or VM tech you’re using, the advice will be pretty generic. For self-hosting, you can likely get away with your iSCSI traffic sharing the LAN interface with your usual VM traffic, but if you need high throughput you will want iSCSI-optimized NICs and to turn on jumbo frames (an MTU of 9000 is the standard here; a quick way to check an interface on Linux is sketched at the end of this comment). This requires a switch that supports jumbo frames as well.

    For Windows, I find the iSCSI support to be very lacking. Every time I have used it, I have had sporadic loss of connectivity, failures to mount on boot, and other issues. I would avoid it.

    For ESXi, you can map an iSCSI LUN as a datastore and create VMDKs on top. This functions the same as using actual FC LUNs or NFS mounts, and I have had no issues with reliability. There’s also RDM (raw device mapping), which mounts the iSCSI LUN directly as a disk of the VM. If you’re using vSphere I would advise against this, as you lose the ability to vMotion or use DRS.
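
    As promised, a small sanity-check sketch in Python (Linux-only; "eth1" is a placeholder for whichever NIC carries the iSCSI traffic, and remember the switch has to support jumbo frames too):

    ```python
    # Sketch: read an interface's MTU from sysfs to confirm jumbo frames are actually enabled.
    from pathlib import Path

    iface = "eth1"   # placeholder NIC name
    mtu = int(Path(f"/sys/class/net/{iface}/mtu").read_text())
    print(f"{iface} MTU = {mtu}", "(jumbo frames)" if mtu >= 9000 else "(standard frames)")
    ```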


  • Cool. Yeah, as a professional I am constantly aware of data integrity and have most of my shit stored on redundant drives. I had a WoW Guild Officer who shared his home setup with like 8x12TB drives in Windows Storage Spaces with no redundancy that was like 80% full. I had to ask how he slept at night knowing he could lose 80TB of data at any time.

    Personally, my TrueNAS has 5x1.92TB SSDs set up as two mirror vdevs plus a hot spare for my iSCSI LUNs, and 8x1.2TB 10K drives in a RAIDZ2 (two-disk parity) for my NAS storage.


  • I believe ZFS works best when it has direct access to the disks, so having an md array underneath it is not best practice. I’m not sure how well ZFS handles external disks, but that is something to consider. As for the drive sizes and redundancy, each type should have its own vdev. So you should be looking at a vdev of the 2x6TB in a mirror and a vdev of the 2x12TB in a mirror for maximum redundancy against drive failure, totaling 18TB usable in your pool. Later on, if you need more space, you can create new vdevs and add them to the pool.

    If you’re not worried about redundancy, then you could bypass ZFS and just set up a RAID-0 through mdadm or add the disks to an LVM VG to use all the capacity, but remember that you might lose the whole volume if a disk dies. Keep in mind that this includes accidentally unplugging an external disk.


  • Honestly I would describe it as Ark-lite. It has base building and taming, which is pretty fun, and you can also get random encounters at your base. The leveling system is a bit grindy. There are dungeons and bosses in the world to go find and explore. The map is huge; I think I’ve hardly explored a tenth of it.

    Been playing about 15 hours or so and enjoyed it, but the game is definitely early access. I’ve had a number of crashes, fallen through the world a few times, etc. I’d give it a month or two if that bothers you.