• 0 Posts
  • 891 Comments
Joined 2 years ago
Cake day: June 17th, 2023

  • Yes, magnets can affect HDDs. But the magnet needs to be very strong and very close to the drive. I wouldn’t worry unless you are directly attaching it to your HDD, and even then it probably won’t do much, if anything at all.

    Remember, HDDs already have strong permanent magnets inside them, probably far stronger than the one on the bottom of that support.


  • Yes, they can. But they do not mix well with required checks. From GitHub’s own documentation:

    If a workflow is skipped due to path filtering, branch filtering or a commit message, then checks associated with that workflow will remain in a “Pending” state. A pull request that requires those checks to be successful will be blocked from merging.

    If, however, a job within a workflow is skipped due to a conditional, it will report its status as “Success”. For more information, see Using conditions to control job execution.

    So even with GitHub Actions you cannot mix a required check with path/branch (or any other) filtering on a workflow, as the checks will stay pending forever and you will never be able to merge the branch. You can do one or the other, but not both at once - and for larger, complex projects you tend to want both. Instead you need workflows that always start and do internal checks to detect whether they actually need to run. And this is with GitHub Actions - it is worse for external CI/CD tooling.
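
    For example, a minimal sketch of the trap (the workflow name, paths and commands here are assumptions, not from any real repo):

    ```yaml
    # .github/workflows/docs-test.yml (hypothetical)
    # On a PR that touches nothing under docs/**, this workflow never starts,
    # so a required check on the "docs-test" job sits in "Pending" forever and
    # blocks the merge. Moving the filter into the job as an `if:` condition
    # would instead report "Success" when skipped.
    name: docs-test
    on:
      pull_request:
        paths:
          - "docs/**"
    jobs:
      docs-test:
        runs-on: ubuntu-latest
        steps:
          - run: make test-docs # assumed test command
    ```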


  • If you have folderA and folderB, each with their own set of tests, you don’t need folderA’s tests to run on a change to folderB. Most CI/CD systems can do this easily enough with two different reports. But you cannot mark both reports as required, because they won’t both always run. Instead you need complicated fan-out pipelines in your CI/CD system so that only one report goes back to GH, or you need to always spawn a job for both folders and have the ones that don’t need to run return success (sketched below). Neither of these is very good, and it gets very complex when you are working with large monorepos.

    It would be much better if the CI/CD system, which knows which pipelines it needs to run for a given PR, could tell GH which tests are required for that PR, and if you could configure GH to wait for that report from the CI/CD system. Or at the very least, if auto-merge was blocked by any failed check while the manual merge button was only blocked by required checks.
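
    A minimal sketch of the always-spawn workaround mentioned above (dorny/paths-filter is a real third-party action that does the change detection; the folder layout and test commands are assumptions):

    ```yaml
    # .github/workflows/tests.yml (hypothetical): always starts, detects what
    # changed internally, and jobs skipped by the `if:` still report "Success".
    name: tests
    on: [pull_request]
    jobs:
      changes:
        runs-on: ubuntu-latest
        outputs:
          folderA: ${{ steps.filter.outputs.folderA }}
          folderB: ${{ steps.filter.outputs.folderB }}
        steps:
          - uses: actions/checkout@v4
          - uses: dorny/paths-filter@v2
            id: filter
            with:
              filters: |
                folderA:
                  - 'folderA/**'
                folderB:
                  - 'folderB/**'
      test-folderA:
        needs: changes
        if: needs.changes.outputs.folderA == 'true'
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: make -C folderA test # assumed test command
      test-folderB:
        needs: changes
        if: needs.changes.outputs.folderB == 'true'
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: make -C folderB test # assumed test command
    ```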




  • Even if true, SIMD is doing the heavy lifting here. Probably followed by the fact that almost any rewrite of a code base will result in performance improvements, simply because you are now more familiar with the domain and where the bottlenecks are. I would be surprised if the assembly was responsible for more than about 1% of the gains here. So why highlight the fact assembly was used at all? It is just misleading. If you want to show how ASM is so much better, you need a much better example than this. For all we know the use of ASM could have made things slower and harder to develop. There are just no details at all as to why ASM is beneficial here, except that some author seems to love it.


  • We have a few non-required checks here and there - mostly because you need an admin to mark a check as required, and that can be annoying to arrange. And we still occasionally get code merged in that fails those checks. Hell, I have merged in code that fails the checks. Sometimes checks take a while to run, and there is this nice merge-when-ready button in GH. But it will gladly merge your code in once all the required checks have passed, ignoring any non-required ones.

    And it is such a useful button to have, especially in a large codebase with lots of developers - just merge the code in when it is ready, instead of forgetting about it for a few hours and possibly having to rebase and run all the checks again because of some minor merge conflict…

    But GH required checks are just broken for large code bases as well. We don’t always want to run every check on every code change - we don’t need to run all the unit tests when only documentation has changed. But required checks are all or nothing: they need to return something or else you cannot merge at all (though this might apply more to external checks than to GH Actions). I really wish there was an “all checks that run must pass” and an “at least one check must run” option. Or if external checks could tell GH when they are required or not. Either way, there is a lot of room for improvement in GH PR checks.
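
    The same behaviour applies if you enable auto-merge from the CLI instead of the button (the PR number here is hypothetical):

    ```sh
    # Enables auto-merge: GH merges as soon as all *required* checks pass,
    # ignoring any non-required checks that are still running or failing.
    gh pr merge 1234 --auto --squash
    ```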






  • For a lot of things I would rather have something web based than app based. I hate having to download some random app from some random company just to interact with something one time. Why do all restaurants, car parks, etc. require an app rather than just having a simple site? Not everything should be native first IMO.





  • I am not afraid of some tech journey, but even though arch seems the coolest, with Wayland, kde, hyperland customization, i am not confident enough to use it for work.

    The only way you will gain confidence in it is to try it out. But also, most distros use Wayland these days, and it comes down more to the desktop environment you use than to the distro. Hyprland is a Wayland compositor and is in the repos of most, if not all, major distros; you should be able to install it on pretty much anything. You can replace the desktop environment, or install multiple ones side by side, on just about any distro - the biggest difference between distros is which one they come with by default.

    That said, if you are looking for a highly customized experience then Arch tends to be the way to go, as there is less extra fluff to remove or work around when getting the system exactly how you want it. The hardest part of Arch is installing it the first time. After that it is really not any harder to use or maintain. IMO it is easier to maintain, as you have a much better understanding of how your system is set up - you are the one that set it up in the first place.

    I heard it can completely crash your system if your a noob.

    You can break any distro if you mess with things. The only big difference is that Arch encourages/requires more messing around at the start than other distros do. And IMO it is easier to fix if you do mess things up - you can always just boot a live USB and reinstall broken packages or reconfigure things without needing a full reinstall. You can basically follow the install guide again for the bits that are broken to fix just about anything, and that is only if you break something critical to booting. In my early days I broke (requiring a full reinstall) way more Ubuntu installs than I have ever broken my Arch ones since. It really comes down to how much you tinker with things, and how much you know about what you are tinkering with, rather than which base distro you use.
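
    As a rough sketch of that repair flow from a live USB (the device names are assumptions - adjust them to your own partition layout):

    ```sh
    mount /dev/sda2 /mnt      # root partition (assumed device)
    mount /dev/sda1 /mnt/boot # separate boot/EFI partition, if you have one (assumed)
    arch-chroot /mnt          # drop into the broken install
    pacman -Syu               # bring the system up to date
    pacman -S linux           # reinstall a broken package, e.g. the kernel
    mkinitcpio -P             # regenerate the initramfs if boot files were damaged
    exit                      # leave the chroot, then reboot
    ```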

    And you can always try the install process and play around with different distros in a VM to get a feel for what they are like. So don’t be afraid to try out various ones and find the one you like the most. Your choice is never set in stone either. Just make sure you have good backups of everything you care about, and the worst that can happen is you need to reinstall and restore your backups every once in a while.


  • but my main needs are not really discussed

    So in essence i need something stable that is relatively easy to use and has great ue5 and gaming perf.

    That is probably the most common set of requirements people ask for. In reality, with a few exceptions, there is not that much difference between distros given those requirements. UE5 is newer, so the biggest factor is that distros shipping newer software might run it slightly better than distros shipping older software. In practice it has been out long enough that you won’t see much difference unless you want to play something new on the day of release (but these days those are all buggy messes anyway… not sure your choice of distro will make as big a difference as waiting a few weeks/months for the initial patches to roll out).

    Remember, all distros are essentially based on the same software; the biggest differences are which desktop environment they ship with and which versions of software they ship (and how long they stay on those versions). By far the biggest difference you will notice is the desktop environment, and all distros essentially package the same set of them - each might come with a different one by default, but they typically have all the popular ones in their repos.

    i need something stable… great gaming perf

    In particular these two points. Do you know what you are asking for here? These are the most bland and wishy-washy requirements. Everyone wants something stable and fast; I have never seen anyone ask for something that crashes all the time and is slow. But worse, these two tend to sit at opposite ends of a spectrum: if you optimize for one, you tend to trade off the other.

    Even stability has multiple meanings. In terms of crash stability you will find all distros to be about the same. No distro wants to ship buggy, crashy software, but at times they all do, and it is really the luck of the draw as to when it happens to you, based on what software you use, how you configure your system and what hardware you have. Some combinations just don’t work for some weird reason, and you won’t know until you hit one. This is why you hear some people claim one distro is a buggy mess while some other one is rock solid, while someone else argues the exact opposite. All mainstream distros are about as good as each other in this regard, and you are just unlucky if you do run into that type of issue. The biggest problems tend to come when a new major version of something lands - and, like with gaming, it can pay to wait a few months for issues to be patched before jumping to the latest big distro version.

    The other side of stability is API stability - that is, how little things change over time as new versions get released. There are two main types of distros in this regard. Point release distros freeze major versions of packages between their major releases, so you won’t get new features during a release cycle, and then you deal with all the breaking changes from newer software at once whenever a new distro version comes out. Rolling release distros upgrade major versions constantly and so generally track much closer to the latest versions of things. Really, the big trade-off here is not whether you encounter breaking changes.

    Any distro will need to deal with them at some point; the choice is how often. You can sit on the same version of a point release distro and only deal with breaking changes once every few years, or once every 6 months, or you can deal with things as they come out on a rolling release distro. While it might sound nice to only deal with it every few years, it also means dealing with all the changes at once, which can be much more disruptive when you finally do. Quite often I find the slower-upgrading distros are better off with a full reinstall of the latest version than an upgrade from one version to the next. Personally I prefer dealing with small things frequently, as they tend to be easier to fix and less disruptive over the long run. When I was running Kubuntu I used to end up reinstalling it every 6 months as the upgrades never worked for me (though this was a long time ago), but my oldest Arch install lasted probably 5-10 years.

    At the same time, how frequently you get the latest versions of things determines how quickly you get performance optimizations and support for newer hardware and newer games - but also any new bugs or regressions. It is a double-edged sword. This is why stability and performance tend to be a lever you tune between, rather than two separate things you can both achieve. Just like overclocking: the more performance you squeeze out of a system, the less stable it tends to become overall. Everyone wants the most stable and fastest system, but in reality everyone has a different limit on how much, and what kind of, stability they are willing to give up for different levels of performance.

    But out of the box, you will find most distros to be within a couple of percent of each other, and which is fastest will vary depending on which games you play and what hardware you have. They all tend to have quite a bit of headroom to optimize for specific use cases, since they all optimize for the general case, which typically means trading off performance in one thing for another. But again, we are talking about tiny differences overall.


  • If the package is popular then it is very likely already packaged by your distro, and you should always go there first if you care that much. If the package is not popular enough to be packaged by a distro, then how does another centralized approach help? Either it is fully curated like a distro package list, and so likely also won’t contain some random small project, or it is open for anyone to upload scripts to, and so becomes vulnerable to malicious scripts. Worse yet, people would be able to upload scripts for projects they don’t control, as the developers of those projects likely won’t.

    Basically, if it is open it is not really any safer than separate dev-owned websites, and if it is curated it does not offer better package support than distro repos.

    Maybe the server was hacked and the script was changed?

    The same thing can happen to any system though. What happens if the servers for this service are hacked? Being a central point makes you a bigger target, and with more people able to change things (assuming you are not going to be the only one curating packages) you have a bigger attack surface. And once hacked, it can compromise far more downloads than a single package ever could.

    Your solution does not improve security - it just shuffles it around a bit. It sounds nice on paper, but when you look at it in more detail there are a lot more things you need to consider to create a system that is actually more secure than what we currently have.



  • There is also no way to verify that the software being installed is not going to do anything bad. If you trust the software, why not trust the installation scripts from the same authors? What would a third-party location bring to improve security?

    And generally, what you are describing is a software repo - you know, the one that comes with your distro.