

One correction to this:
The Arch package manager is Pacman, not AUR. AUR is the Arch User Repository and is definitely not stable :)
Some ISPs block that site via DNS. If you switch your DNS server to something like 1.1.1.1 it may work.
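If you're on a distro running systemd-resolved, one way to make that switch is in its config file (a sketch; other setups would edit /etc/resolv.conf or change the DNS server in NetworkManager instead):

```ini
# /etc/systemd/resolved.conf -- point lookups at Cloudflare instead of the ISP resolver
[Resolve]
DNS=1.1.1.1 1.0.0.1
```

Then restart the resolver with sudo systemctl restart systemd-resolved. If the site loads after that, it was DNS-level blocking.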
So, the questions really are can your hardware support Windows 11 and if not can you easily flip to Linux.
The Asus Z170 motherboard looks like it supports TPM 2.0, but it doesn’t look like the i7-6700K does, as that is a 6th gen Skylake CPU and Win11 support starts at 8th gen. You might double-check that with the PC Health Check tool Microsoft offers, though.
Cakewalk and Ableton appear to work in Linux, but not without some tweaking.
My suggestion would be to do nothing. If you can’t update without a rebuild and you can’t migrate without a lot of work, just do nothing. Your Windows 10 installation will still work. You won’t receive any additional updates for it, but if that is the best solution for you at this time, then that’s what you should go with.
For the kiddo: Get a body wrap. It lets you hold the baby to you securely while you do other things. I worked on-call shifts handling downed MPLS circuits for a carrier back in the day with my daughter strapped to me. A couple of years later she would get to visit me at work. She was the only 2 year old who technically had PBX configuration experience (I didn’t know the keyboard was still connected).
It seems like a well supported shell on Windows
But you aren’t using Windows. You’re also now adding a .NET Core requirement for any Linux box wanting to use it. That means limited functionality, as it’s not the full-blown .NET Framework. So, compared to something like bash, you now have added requirements with less functionality.
To answer your original question though, a lot of people prefer zsh as it’s got a crazy amount of customization you can do. People also like fish due to it being very friendly and interactive.
I’ve used i3wm for a long time now before switching to hyprland. The top useful thing: Workspaces. Even without tiling, workspaces give a massive productivity boost. You can have email clients open on one, monitoring systems on another, browsing on a third, gaming on a fourth. When you combine with tiling, everything is in its own perfect space and nothing overlaps. This is especially useful on single-monitor or laptop setups as you don’t need multiple monitors to keep track of everything.
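For anyone new to that workflow, a minimal i3 config sketch ($mod is the standard i3 modifier variable; the app classes and workspace assignments are just examples):

```
# Switch between workspaces
bindsym $mod+1 workspace number 1
bindsym $mod+2 workspace number 2
bindsym $mod+3 workspace number 3
bindsym $mod+4 workspace number 4

# Throw the focused window onto another workspace
bindsym $mod+Shift+1 move container to workspace number 1
bindsym $mod+Shift+2 move container to workspace number 2

# Pin apps so mail/monitoring/browser always open in the same place
assign [class="thunderbird"] 1
assign [class="firefox"] 3
```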
I also see people struggle with notifications getting tiled. You probably don’t want a “bluetooth connected” message to take up half your screen, so you’ll want to make sure to configure those windows properly. At least in i3wm/hyprland, you can use the window class name to exclude a window from tiling (e.g. for_window [class="mako"] floating enable in i3, or windowrulev2 = float,class:^(mako)$ in Hyprland).
At most I have about 3 windows open at a time per workspace with 4 workspaces being used at a time for specific tasks. With the combo of tiling and workspaces I have never run into an instance of “clutter” on my desktop. This is off a single monitor setup too that I also use on my laptop.
I’ve had good experience with smollm2:135m. The test case I used was determining why an HTTP request from one system was not received by another system. In total, there are 10 DB tables it must examine not only for logging but for configuration to understand if/how the request should be processed or blocked. Some of those were mapping tables designed such that table B must be used to join table A to table C, table D must be used to join table C to table E. Therefore I have a path to traverse a complete configuration set (table A <-> table E).
I had to describe each field being pulled (~150 fields total), but it was able to determine the correct reason for the request failure. The only issue I’ve had was a separate incident with a different LLM, when I tried to use AI to generate golang template code for a database library I wanted to use. It didn’t use that library and recommended a different one instead. When instructed that it must use this specific library, it refused (politely). That caught me off-guard. I shouldn’t have to create a scenario where the AI goes to jail if it fails to use something. I should just have to provide the instruction and, if that instruction is reasonable, await output.
You could use AI for self-healing network infrastructure, but in the context of what this tool would do, I’m struggling. You could monitor logs or IDS/IPS, but you’d really just be replacing a solution that already exists (SNMP). And yeah, SNMP isn’t going to be pattern matching, but your IDS would already be doing that. You don’t need your traffic pattern matching system pattern matched by AI.
Would you really trust your system to something that can do this? I wouldn’t…
I wouldn’t trust a Sales team member with database permissions, either. This is why we have access control in sysadmin. That AI had permission to operate as the user in Replit’s cloud environment. Not a separate restricted user, but as that user and without sandboxing. That should never happen. So, if I were managing that environment I would have to ask the question: is it the AI’s fault for breaking it or is it my fault for allowing the AI to break it?
AI is known for pulling all kinds of shit and lying about it.
So are interns. I don’t think you can hate the tool for it being misused, but you certainly can hate the user for allowing it.
Yeah, GPG keys expire, but that happens with all package management systems if left alone long enough. I mean, you’d have to maintain like 3 packages (linux, wireguard-tools, archlinux-keyring). On Debian you’d have to maintain the kernel, debian-archive-keyring, and wireguard-tools. It’s the same.
If it’s solely for setting up a WireGuard server, it doesn’t need to be a rolling release. Nothing should really need changing.
I really like btop/bpytop too. It’s more useful than glances imo.
lazydocker: terminal based docker management
ncdu: disk usage analyzer
nmtui: terminal based network management
browsh: terminal based web browser with a headless Firefox backend

One other thing I didn’t mention is that it depends on the backup tool you use. Not all of them are filesystem aware. What that means is that if you have hardlinks present, those will not be preserved.
That can be important to remember, as it will bork things down the road when restoring. If you aren’t familiar with linking: hard links point to the actual data (think of it like a pointer in C), while soft links (symbolic) point to a file path.
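Since a backup tool that isn’t filesystem aware will turn hard links back into independent copies, it’s worth seeing the difference concretely. A minimal shell sketch (file names are arbitrary):

```shell
cd "$(mktemp -d)"                 # scratch directory
echo "hello" > original.txt

ln original.txt hard.txt          # hard link: a second name for the same data
ln -s original.txt soft.txt       # soft link: a file that stores the path "original.txt"

stat -c '%h' original.txt         # prints 2 -- two names now share one inode

rm original.txt
cat hard.txt                      # still prints "hello"; the data survives
cat soft.txt || echo "dangling"   # the symlink now points at a missing path
```

(rsync, for example, only preserves hard links if you pass -H.)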
Have any other distros been tried on this box and do the same issues present with them? I think the recommended PSU combined with an RX580 is 600W, so you might try swapping PSUs. Another option if you don’t have a spare to test with is to undervolt the GPU. If it stabilizes at that point, it would suggest the PSU needs replacement. At least that way you wouldn’t be dropping money on a hunch.
Another good indicator of it being a GPU/PSU issue is the fact you mention not being able to get past the login screen. Both X11 and Wayland (especially Wayland) crank up VRAM usage at that point due to compositor caching and whatnot.
For me, I tend to focus on specific directories I know I’d need data from (or that will just be a hassle to rewrite config for). I have a scripts folder that gets backed up, Books, .mozilla, etc. A lot of things I just know I won’t need, like .cache. That folder is 7GB and mostly just the cache from yay needing to be cleared out.
I don’t back up my entire home directory because I’m worried ACLs may change, or other little issues that will take more time than it’s worth to correct. That said, you could. If you’re worried about something like that, you could pull the existing ACLs with find ~/ -type f -exec getfacl --absolute-names {} + > home_acls_backup.txt and then restore them with setfacl --restore=home_acls_backup.txt
I haven’t really used KDE much, but I know it keeps theme data in .local/share that you’d want (and probably the .cache folder as well). GNOME keeps theme data in .themes, .icons, .fonts. They might just be defaults, but if you have anything custom, you’d want those folders too.
For Arch you run pacman -Qe, which lists all installed packages that were not installed as a dependency.
I output that to a location via a golang script, which is monitored by the pCloud client for automatic backup along with a lot of other configs from $HOME/.config.
I then have a systemd service that fires the script and a timer to kick off that service periodically.
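That service/timer pair can be sketched like this (the unit names, the backup path, and the use of plain pacman -Qqe in place of the author’s Go script are all made up for illustration):

```ini
# ~/.config/systemd/user/pkglist-backup.service (hypothetical name)
[Unit]
Description=Dump explicitly installed packages for backup

[Service]
Type=oneshot
# Stand-in for the Go script; -Qqe prints bare package names pacman can re-read later
ExecStart=/bin/sh -c 'pacman -Qqe > %h/backups/pkglist.txt'
```

```ini
# ~/.config/systemd/user/pkglist-backup.timer (hypothetical name)
[Unit]
Description=Run pkglist-backup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with systemctl --user enable --now pkglist-backup.timer. On a fresh install you can then reinstall everything with pacman -S --needed - < pkglist.txt.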
You can use the gparted tool to graphically remove the partition(s) and then format them to whatever file system type you are interested in and just have those mounted as extra data drives. Or merge them into your Linux partition (depending on setup). That will require gparted to be run with sudo as you are interacting with disks.
Alternatively, you can use a tool like fdisk to change partitioning in the terminal. You can pull the disk info using something like lsblk, so if you had a specific drive it might be sudo fdisk /dev/nvme0n1; then you’d want to print the current table and look through the help.
I’ve been an Arch user for about 15 years now, and I’ve never posted to the forums. Not because I’m great at this and don’t break things. I constantly break things and need to fix them. I don’t ask questions there because before you’ll get any help, you’re going to get sat down and told (in great detail sometimes) how you are the stupidest piece of shit on Earth.
I think it’s easier and shorter to say what is the same between the two than what is different, but some things that are different:
Performance is dependent on use case, but in general:
In what context? For gaming maybe, but that’s one single use. There is more to computers than video games, at least for the majority of Linux users. I wouldn’t trust Windows on any server I run.