
• Connection refused is generally a network-level issue: either a firewall is rejecting the TCP handshake, or (more likely) Samba is not listening on that IP.

    So you are attempting to connect to your Samba server and the OS (not Samba) is saying “there is no service running on that port, so I am refusing your connection request.”

    So you have one of these problems

    • Samba isn’t running in the first place
    • Samba is crashing and systemd is restarting it; if the problem is intermittent this is the most likely cause
    • You have firewall issues (you said the firewall was off, but are the machines on the same subnet? There might be other firewalls in your network rejecting the connection)
    • Samba is listening on a different interface. I see you have lo/eth0 in your config; most distros don’t use eth0 anymore, are you sure that is correct?

    Even if those interface names are correct, network managers sometimes rename interfaces. So when Samba starts it might see the wrong interface name, but by the time you log in it is correct. I would just remove that line, and the “bind interfaces only” line as well, unless you specifically need to bind to specific interfaces.

    Try connecting to Samba from the server itself on 127.0.0.1, which will probably work just fine because lo is almost certainly a valid interface.
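
    For example, something like this (assuming smbclient is installed; the username is a placeholder for whatever Samba user you set up):

    smbclient -L 127.0.0.1 -U yourusername

    If that lists your shares, Samba itself is up and the problem is somewhere between the client and the server (firewall, routing, or the interface binding).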

    You can also look at what interfaces/IP samba is listening on by running one of these commands as root

    ss -tlnp | grep 445

    netstat -nlp | grep 445


  • When you saw that 20v on the board I assume that was right next to the charge port? There are often fuses that should be very close to that connector that you can check for continuity on. Usually marked with zeros because they act like a zero ohm resistor.

    Even if the fuse is blown that might just be a sign that something further down the line failed but it would be an easy thing to check at least.


  • I am not exactly an expert at this but it could just be from heat. Do you have a multimeter to check if current can pass through it still?

    Either way it seems like this shouldn’t be affecting the laptop when plugged in, because it is right next to the battery connector and the traces appear to be related to it.

    Do you get anything at all (battery/power LEDs) trying to run off of the battery? Is it possible that the charge port failed and the battery is just dead now? Maybe check the battery voltage to see how far drained it is.



  • Nope, the Switch only keeps saves on internal storage, or synced to their cloud if you pay for it. When doing transfers between devices like this there is no copy option, only a move-and-delete.

    There are some legitimate reasons they want to prevent this like preventing users from duplicating items in multiplayer games, etc. Even if you got access to the files they are encrypted so that only your user can use them.

    I think the bigger reason they do this is there are occasionally exploits that are done through corrupted saves. So preventing the user from importing their own saves helps protect the switch from getting soft modded.

    If you mod your Switch you can get access to the save files, and since the modded firmware has full access it can also decrypt them so that you can back them up. One of several legitimate reasons to mod your Switch.


  • Probably a terrible idea but have you considered a private Lemmy instance? At the end of the day Lemmy/PieFed/Reddit are just forums with conversation threads and upvotes.

    Lemmy is probably way more of a resource hog than the various PHP options, but from a usability standpoint, if you have a favorite Lemmy mobile app it would work for your private instance as well.

    There appears to be a private instance mode that disables federation.


  • Since the ER-X is Linux under the hood the easiest thing to do would be to just ssh in and run tcpdump.

    Since you suspect this is from the UDR itself you should be able to filter for the IP of the UDR’s management interface. That should get you destination IPs, which will hopefully help track it down.
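
    Something along these lines should work (a rough sketch; eth0 and 192.168.1.2 are placeholders for your WAN-facing interface and the UDR’s management IP):

    tcpdump -ni eth0 host 192.168.1.2 and not port 22

    The -n skips DNS lookups so you see raw destination IPs, and excluding port 22 keeps your own ssh session out of the capture.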

    Not sure what would cause that sort of traffic, but I know there used to be a WAN speed test on the Unifi main page which could chew up a good amount of traffic. Wouldn’t think it would be constant though.

    Do you have other Unifi devices that might have been adopted with layer 3 adoption? Depending on how you set up layer 3 adoption, even devices that are local to your network might be using hairpin NAT on the ER-X, which can look like internet activity destined for the UDR even though it is all local.


  • I am assuming this is the LVM volume that Ubuntu creates if you selected the LVM option when installing.

    Think of LVM like a simpler, more flexible version of RAID0. It isn’t there to offer redundancy, but it can make multiple disks aggregate their storage/performance into a single block device. It doesn’t have all of the performance benefits of RAID0, particularly with sequential reads, but in the case of file servers with multiple active users it can probably perform even better than a RAID0 volume would.

    The first thing to do would be to look at what volume groups you have. A volume group is one or more drives that creates a pool of storage that we can allocate space from to create logical volumes. Run vgdisplay and you will get a summary of all of the volume groups. If you see a lot of storage available in the ‘Free PE/Size’ (PE means physical extents) line that means that you have storage in the pool that hasn’t been allocated to a logical volume yet.
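
    As a rough example of what that looks like (the names and numbers here are purely illustrative, assuming the default 4 MiB extent size):

    vgdisplay
      VG Name               ubuntu-vg
      VG Size               7.27 TiB
      Alloc PE / Size       25600 / 100.00 GiB
      Free  PE / Size       1880064 / 7.17 TiB

    A large ‘Free PE / Size’ like that means the space is already in the volume group and you only need to grow (or create) a logical volume to use it.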

    If you have a set of OS disks and a separate set of storage disks it is probably a good idea to create a separate volume group for your storage disks instead of combining them with the OS disks. This keeps the OS and your storage separate so that it is easier to do things like rebuilding the OS or migrating to new hardware. If you have enough storage to keep your data volumes separate you should consider ZFS or btrfs for those volumes instead of LVM. ZFS/btrfs have a lot of extra features that can protect your data.

    If you don’t have free space then you might be missing additional drives that you want to have added to the pool. You can list all of the physical volumes which have been formatted for use with LVM by running the pvs command. The pvs command shows you each formatted drive and whether it is associated with a volume group. If you have additional drives that you want to add to your volume group you can run pvcreate /dev/yourvolume to format them.

    Once the new drives have been formatted they need to be added to the volume group. Run vgextend volumegroupname /dev/yourvolume to add the new physical device to your volume group. You should re-run vgdisplay afterwards and verify the new physical extents have been added.

    If you are looking to have redundancy in this storage you would usually build an mdadm array and then do the pvcreate on the volume created by mdadm. LVM is usually not used to give you redundancy; other tools are better for that. Typically LVM is used for pooling storage, snapshots, multiple volumes from a large device, etc.

    So one way or another your additional space should be in the volume group now, however that doesn’t make it usable by the OS yet. On top of the volume group we create logical volumes. These are virtual block devices made up of physical extents on the physical disks. If you run lvdisplay you will see a list of logical volumes that were created by the Ubuntu installer which is probably only one by default.

    You can create new logical volumes with the lvcreate command, or resize the volume that is already there with lvresize/lvextend. I see other posts already explained those commands in more detail.

    Once you have extended the logical volume (the virtual block device) you have to extend the filesystem on top of it. That procedure depends on what filesystem you are using on your logical volume. Likely resize2fs for ext4 by default in Ubuntu, or xfs_growfs if you are on XFS.
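
    To put all of that together, a typical “add one new disk and grow the root filesystem” session looks roughly like this (the device, volume group, and logical volume names are the Ubuntu installer defaults and may differ on your system; double check with lsblk before running pvcreate, it will happily wipe the wrong disk):

    pvcreate /dev/sdb                                  # format the new disk for LVM
    vgextend ubuntu-vg /dev/sdb                        # add it to the existing volume group
    lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv     # grow the logical volume into the new space
    resize2fs /dev/ubuntu-vg/ubuntu-lv                 # grow the ext4 filesystem (xfs_growfs for XFS)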



  • The problem is that on top of the pins occasionally not making good contact on these new connectors, Nvidia has been cheaping out on how power is delivered to the card.

    They used to have three shunt resistors that the card could use to measure voltage drop. That meant that the six power pins were split into pairs, and if any pair didn’t make contact the card could detect it and prevent itself from powering up.

    There could be a single pin in each of those pairs not making contact, meaning the remaining pins are forced to handle double their rated current. Losing one pin in every pair is an unlikely worst case, but a single pin in a single pair failing could be fairly common.

    But on the 40 series they dropped to two shunt resistors. So instead of three pairs, they can only monitor two bundles of three wires, meaning the card can only detect that the plug isn’t seated correctly if all three wires in the same bundle are disconnected.

    You could theoretically have only two out of six power pins plugged in and the card would think everything is fine. Each of those two remaining pins being forced to handle three times their normal current.

    And on the 5090 FE they dropped down to one shunt resistor… So five of the six pins can be disconnected and the card thinks everything is fine, forcing six times the current down a single wire.
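
    For rough numbers (assuming a card pulling around 600W at 12V, so about 50A total): spread across all six pins that is roughly 8A per pin, comfortably inside the ~9.5A each pin is rated for. With only two pins making contact that becomes about 25A per pin, and with one pin it is the full 50A down a single wire, several times the rating, which is exactly how connectors melt.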

    https://www.youtube.com/watch?v=kb5YzMoVQyw

    So the point of these fused cables is to work around a lack of power monitoring on the card itself with cables that destroy themselves instead of melting the connector on your $2000 GPU.



  • Good catch on the redundancy, at the time posting this I didn’t realize I needed the physical space/drives to set up that safety net. 8 should be plenty for the time being. Say if I wanted to add another drive or two down the road, what sort of complications would that introduce here?

    With TrueNAS your underlying filesystem is ZFS. When you add drives to a pool you can add them:

    • individually (RAID0 - no redundancy, bad idea)
    • in a mirror (RAID1 - usually two drives, a single drive failure is fine)
    • raidz1 (RAID5 - any single drive in the set can fail; one drive’s worth of data goes to parity). Generally a max of about 5 drives in a raidz1: if you make the stripe too wide, then when a drive fails and you start a rebuild the chances of one of the remaining drives you are reading from failing, or at least failing to read some data, increase quickly.
    • raidz2 (RAID6 - any two drives can fail; two drives’ worth of data goes to parity). I’ve run raidz2 vdevs up to about 12 drives with no problems. The extra parity drive means the chances of data corruption, or of another drive failing while you are rebuilding, are much lower.
    • raidz3 (triple parity - any three drives can fail; three drives’ worth of data goes to parity). I’ve run raidz3 with 24-drive-wide stripes without issues, though this was usually for backup purposes.
    • draid (any parity level and stripe width you want). This is generally for really large arrays, like 60+ disks in a pool.

    Each of these sets is called a vdev. Each pool can have multiple vdevs and there is essentially a RAID0 across all of the vdevs in the pool. ZFS tends to scale performance per vdev so if you want it to be really fast, more smaller vdevs is better than fewer larger vdevs.

    If you created a mirror vdev with two drives, you could add a second mirror vdev later. Vdevs can be of different sizes, so it is okay if the second pair of drives is a different size. So if you buy two 10TB drives later they can be added to your original pool for 18TB usable.
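
    If you ever do that from the command line instead of the TrueNAS UI, it is a single command (just a sketch, assuming a pool named tank and two new disks sdc/sdd):

    zpool add tank mirror /dev/sdc /dev/sdd

    TrueNAS exposes the same operation in its pool management UI as adding another vdev to the pool.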

    What you can’t do is change a vdev from one type to another. So if you start with a mirror you can’t change to a raidz1 later.

    You can mix different vdev types in a pool though. So you could have two drives in a mirror today, and add an additional 5 drives in a raidz1 later.

    Drives in a vdev can be different sizes but the vdev gets sized based on the smallest drive. Any drives that are larger will be wasting space until you replace that smaller drive with a similar sized one.

    A rather recent feature lets you expand raidz1/2/3 vdevs. So you could start with two drives today in a raidz1 (8TB usable), and add additional 8TB or higher drives later adding 8TB of usable space each time.

    If you have a bunch of mismatched drives of different sizes you might want to look at UnRAID. It isn’t free but it is reasonably priced. Performance isn’t nearly as good but it has its own parity system that allows for mixing drives of many sizes and only your single largest drive needs to be used for parity. It also has options to add additional parity drives later so you can start at RAID5 and move to RAID6 or higher later when you get enough drives to warrant the extra parity.


  • My server itself is a little HP mini PC. i7, 2 TB SSD, solid little machine so far. Running Proxmox with a single debian VM which houses all my docker containers - I know I’m not using proxmox to its full advantage, but whatever it works for me. I mostly just use it for its backup system.

    Not sure how mini you mean but if it has spots for your two drives this should be plenty of hardware for both NAS and your VMs. TrueNAS can run VMs as well, but it might be a pain migrating from Proxmox.

    Think of Proxmox as a VM host that can do some NAS functions, and TrueNAS as a NAS that can do some VM functions. Play with them both, they will have their own strengths and weaknesses.

    I’ve been reading about external drive shucking, since apparently that’s a thing? Seems like my best bet here would be to crack both of these external drives open and slap them into a NAS. 16TB would be plenty for my use.

    It’s been a couple of years since I have shucked drives, but occasionally the drives are slightly different from normal internal drives. There were some Western Digital drives that used one SATA power pin (the 3.3V power-disable pin) differently from normal; they worked in most computers, but power supplies that had that pin wired required you to mask the pin before the drive would fire up.

    I wouldn’t expect any major issues just saying you should research your particular model.

    You say 16TB with two 8TB drives so I assume you aren’t expecting any redundancy here? Make sure you have some sort of backup plan because those drives will fail eventually, it’s just a matter of time.

    You can build those as some sort of RAID0 to get you 16TB or you can just keep them as separate drives. Putting them in a RAID0 gives you some read and write performance boost, but in the event of a single drive failure you lose everything.

    If 8TB is enough, you want to put them in a mirror, which gives you 8TB of storage and allows a drive to fail without losing any data. There is still a read performance boost, but maybe a slight loss on write performance.

    Hardware: while I like the form factor of Synology/Terramaster/etc, seems like the better choice would be to just slap together my own mini-ITX build and throw TrueNAS on it. Easy enough, but what sort of specs should I look for? Since I already have 2 drives to slap in, I’d be looking to spend no more than $200. Alternatively, if I did want the convenience and form factor of a “traditional” NAS, is that reasonable within the budget? From what I’ve seen it’s mostly older models in that price range.

    If you are planning on running Plex/Jellyfin an Intel with UHD 600 series or newer integrated graphics is the simplest and cheapest option. The UHD 600 series iGPU was the first Intel generation that has hardware decode for h265 so if you need to transcode Plex/Jellyfin will be able to read almost any source content and reencode it to h264 to stream. It won’t handle everything (i.e. AV1) but at that price range that is the best option.

    I assume I can essentially just mount the NAS like an external drive on both the server and my desktop, is that how it works? For example, Jellyfin on my server is pointed to /mnt/external, could I just mount a NAS to that same directory instead of the USB drive and not have to change a thing on the configuration side?

    Correct. Usually a NAS offers a couple of protocols. For Linux, NFS is the typical protocol used for that. For Windows it would be a Samba (SMB) share. NFS isn’t the easiest to secure, so you will either end up with some IP ACLs or just allowing access to any machine on your internal network.
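
    Mounting it on the server side is one line in /etc/fstab (a sketch; the NAS hostname and export path are placeholders for whatever you configure on the TrueNAS side):

    nas.local:/mnt/tank/media  /mnt/external  nfs  defaults,_netdev  0  0

    With that in place Jellyfin keeps pointing at /mnt/external and never knows the storage moved.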

    If you are keeping Proxmox in the mix you can also mount your NFS share as storage for Proxmox to create the virtual hard drives on. There are occasionally reasons to do this like if you want your NAS to be making snapshots of the VMs, or for security reasons, but generally adding the extra layers is going to cut down performance so mounting inside of the VM is better.

    Will adding a NAS into the mix introduce any buffering/latency issues with Jellyfin and Navidrome?

    Streaming apps will be reading ahead and so you shouldn’t notice any changes here. Library scans might take longer just because of the extra network latency and NAS filesystem layers, but that shouldn’t have any real effect on the end user experience.

    What about emulation? I’m going to set up RomM pretty soon along with the web interface for older games, easy enough. But is streaming roms over a NAS even an option I should consider for anything past the Gamecube era?

    Anything past the GameCube era is probably large ISO files. Any game from a disc is going to be designed to load data from a disc with loading screens, and an 8TB HDD over 1Gb Ethernet is faster than most of those discs can be read. The PS4 for example only reads discs at 24MB/s. Nintendo Switch cards aren’t exactly fast either, so I don’t think they should be a concern.

    It wouldn’t be enough for current gen consoles that expect NVMe storage, but it should be plenty fast for running roms right from your NAS.


  • Btrfs is a copy on write (COW) filesystem. Which means that whenever you modify a file it can’t be modified in place. Instead a new block is written and then a single atomic operation is done to flip that new block to be the location of that data.

    This is a really good thing for protecting your data from things like power outages or system crashes because the data is always in a good state on disk. Either the update happened or it didn’t there is never any in-between.

    While COW is good for data integrity it isn’t always good for speed. If you are doing lots of updates that are smaller than a block, you first have to read the rest of the block and then seek to the new location and write out the new block. On SSDs this isn’t an issue, but on HDDs it can slow things down and fragment your filesystem considerably.

    Btrfs has a defragmentation utility though so fragmentation is a fixable problem. If you were using ZFS there would be no way to reverse that fragmentation.
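
    For reference, that cleanup is just this (the path is whatever subvolume or directory you care about):

    btrfs filesystem defragment -r /mnt/data

    Be aware that defragmenting on btrfs can break the sharing with snapshots/reflinked copies, so it can temporarily increase space usage.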

    Other filesystems like ext4/xfs are “journaling” filesystems. Instead of writing new blocks or updating each block immediately they keep the changes in memory and write them to a “journal” on the disk. When there is time those changes from the journal are flushed to the disk to make the actual changes happen. Writing the journal to disk is a sequential operation making it more efficient on HDDs. In the event that the system crashes the filesystem replays the journal to get back to the latest state.

    ZFS has a journal equivalent called the ZFS Intent Log (ZIL). You can put the ZIL on fast SSDs (a separate log device, often called a SLOG) while the data itself is on your HDDs. This also helps with the fragmentation issues for ZFS, because ZFS will write incoming writes to the ZIL and then flush them to disk every few seconds, meaning fewer, larger writes to the HDDs.
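
    Adding one looks roughly like this (assuming a pool named tank and a spare SSD/NVMe device; note the ZIL/SLOG only absorbs synchronous writes, so it mostly helps things like NFS, databases, and VMs):

    zpool add tank log /dev/nvme0n1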

    Another downside of COW is that because the filesystem is assumed to be so good at preventing corruption, in some extremely rare cases if corruption gets written to disk you might lose the entire filesystem. There are lots of checks in software to prevent that from happening but occasionally hardware issues may let the corruption past.

    This is why anyone running ZFS/btrfs for their NAS is recommended to run ECC memory. A random bit flipping in ram might mean the wrong data gets written out and if that data is part of the metadata of the filesystem itself the entire filesystem may be unrecoverable. This is exceedingly rare, but a risk.

    Most traditional filesystems on the other hand were built assuming that they had to cleanup corruption from system crashes, etc. So they have fsck tools that can go through and recover as much as possible when that happens.

    Lots of other posts here talking about other features that make btrfs a great choice. If you were running a high performance database a journaling filesystem would likely be faster, but maybe not by much, especially on SSDs. But for an end user system the snapshots/file checksumming/etc are far more important than a tiny bit of performance. For the potential corruption issues, if you are lacking ECC, backups are the proper mitigation (as of DDR5, all RAM sticks have at least on-die ECC).



  • I’ve had one of these 3d printed keys in my wallet as a backup in case I get locked out for 5 years now. I certainly don’t use it often but yeah it holds up fine.

    The couple of times I have used it, it worked fine, but you certainly want to be a little extra careful with it. My locks are only 5-ish years old so they all turn rather easily, and I avoid the door with the deadbolt when I use it because that would probably be too much for it.

    Mine is PETG but for how thin it is, it flexes a lot. I figured flexing is better than snapping off, but I think PLA or maybe a polycarbonate would function better. A nylon would probably be too flexible like the PETG.


  • Netflix already had Dolby Vision and HDR; this is just adding HDR10+. HDR10+ is similar to Dolby Vision in that it gives your TV dynamic metadata for the HDR, constantly adjusting the min/max brightness of the scene.

    For dynamic metadata Dolby Vision support is much more common in TVs, some brands like LG don’t have any support for HDR10+ even in their high end TVs.

    I am pretty sure from a content perspective Dolby Vision is also much more prevalent. It does look like most streamers support HDR10+, but I don’t think much of their content is available in HDR10+.

    Anyways still a good change. HDR10+ is royalty free unlike Dolby Vision, and it is backwards compatible with regular HDR TVs.


  • All of the modern yubikeys (and it looks like the nitro keys as well) can have fido2 enabled so that you can use them as a hardware token for sites that support passkeys. I think yubikeys come with only OTP enabled so you need to download their utility to enable the other modes.

    If you are a Linux user (that’s required to be on Lemmy, right?) you can use either the fido2 or ccid (smart card through pkcs11) mode to keep SSH keys protected. The fido2 ssh key type (ed25519-sk) hasn’t been around that long so some services might not support it. The pkcs11 version gives you a normal RSA key, but it is harder to get set up, and it doesn’t have any way to verify user presence if you want that extra security. With fido2 you can optionally require that you physically touch the key after entering the PIN.
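
    Generating one of the fido2-backed SSH keys is just this (assuming reasonably recent OpenSSH on both ends):

    ssh-keygen -t ed25519-sk -O verify-required

    The private key stub that lands in ~/.ssh is useless without the physical key, and verify-required is what forces the PIN (user verification) on top of the default touch requirement.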

    There are also pkcs11 and fido2 pam modules so you can use it as a way to login/sudo on your system with an easy to use pin.

    And if you have a luks encrypted volume you can unlock that volume with your pin at boot with either pkcs11 or fido2.

    Unlocking LUKS2 volumes with TPM2, FIDO2, PKCS#11 Security Hardware on systemd 248
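
    On a distro with systemd 248 or newer, enrolling the key is roughly a one-liner (a sketch; replace /dev/nvme0n1p3 with your LUKS partition):

    systemd-cryptenroll --fido2-device=auto /dev/nvme0n1p3

    Then add fido2-device=auto to that volume’s options in /etc/crypttab so it gets tried at boot.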

    If you are on an Ubuntu based distro initramfs-tools doesn’t build the initramfs with the utilities required for doing that. The easiest way to fix that is to switch to dracut.

    Dracut is officially “supported” on 24.10 and is planned to be the default for Ubuntu 25.10 forward, but it can work on previous versions as well. For 24.04 I needed hostonly enabled and hostonly_mode set to sloppy. Some details on that in these two links:

    https://askubuntu.com/questions/1516511/unlocking-luks-root-partition-with-fido2-yubikey-and-ideally-without-dracut

    https://discourse.ubuntu.com/t/please-try-out-dracut/48975

    So a single hardware token can handle your passkeys, your ssh keys, computer login, and drive encryption. Basically you will never have to type a password ever again.