• 3 Posts
  • 1.31K Comments
Joined 2 years ago
Cake day: September 24th, 2023





  • The only rule you need is: preserve history that is worth preserving.

    99% of the time, that means you should squash the commits in a PR. Most PRs should be small enough that they don’t need finer-grained history than a single commit.

    I will grant a couple of exceptions:

    1. Sometimes you have refactorings where you e.g. move a load of files and then do something else… Or do a big search and replace and then fix the errors. In these cases it’s nice to have the file moves or search/replace in separate commits to a) make review easier, b) make the significant changes easier to see, and c) let git track file moves reliably.
    2. Sometimes you have a very long lived feature branch that multiple people have worked on for months. That can be worth keeping history for.
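    The first exception can be sketched with plain git. All repo and file names here are made up; the point is that a commit containing only moves lets git detect the renames:

```shell
set -e
# Scratch repo with one tracked file (names hypothetical)
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email dev@example.com; git config user.name dev
mkdir -p src/old_dir
echo "content" > src/old_dir/file.txt
git add .; git commit -qm "initial"

# Commit the pure move on its own, with no content edits mixed in...
git mv src/old_dir src/new_dir
git commit -qm "Move old_dir to new_dir (moves only)"

# ...so git can track the file across the rename:
git log --oneline --follow -- src/new_dir/file.txt
```

    Because the move commit is 100% renames, `git log --follow` (and review diffs) see a rename rather than an unrelated delete-plus-add.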

    Unfortunately, if you enable merge queues on GitHub it forces you to pick one method for all PRs, which is kind of dumb. We just use squash merges for everything and accept that sometimes it’s not the best.
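    For reference, the squash workflow GitHub automates is roughly the following local sequence (branch and file names hypothetical):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email dev@example.com; git config user.name dev
echo base > base.txt; git add .; git commit -qm "initial"

# A feature branch with several work-in-progress commits...
git checkout -qb feature
echo one > one.txt; git add .; git commit -qm "wip 1"
echo two > two.txt; git add .; git commit -qm "wip 2"

# ...lands on main as a single commit:
git checkout -q main
git merge --squash -q feature
git commit -qm "Add feature X (squashed)"
git log --oneline    # main now has just two commits: initial + the squash
```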




  • I don’t think anyone disputes that; it’s just that nobody has come up with anything better.

    Take-home exercises used to be a potentially better option (though they definitely have other big downsides), but they aren’t a sensible choice in the age of AI.

    Just taking people’s word for it is clearly worse.

    Asking to see people’s open source code is unfair to people who don’t have any.

    The only other option I’ve heard (which I quite like the sound of, but haven’t had a chance to try) is to get candidates to do “live debugging” on a real-world bug. But I expect that would draw exactly the same criticisms as live coding interviews do.

    What would you do?









  • Yeah I’ve seen Nix and Guix suggested but they seem like a huge extra layer of complexity.

    > Also, strict backwards compatibility in APIs is totally worth it. It makes developing larger systems so much easier.

    Usually not for first-party code. It adds extra maintenance burden for little benefit.

    For example, suppose you want to add an extra parameter to a function. In a monorepo you just do that and fix all the compilation errors. Send one PR through CI. Job done.

    With submodules… you have to add a new function so the old one stays backwards compatible. Deal with a default value for the old call, maybe add a deprecation warning. Oh, and you need to version your library now. Then good luck finding all the places that function is called and updating them…
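    In the monorepo case, the whole “extra parameter” change is one mechanical sweep over the tree. The function name, parameter, and paths below are made up, and GNU sed is assumed:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
mkdir -p src
# Hypothetical call sites scattered across the monorepo
echo 'img = render(width, height)' > src/a.py
echo 'thumb = render(width, height)' > src/b.py

# One atomic change: the new signature lands at every call site at once,
# reviewed and CI-tested as a single PR
grep -rl 'render(width, height)' src \
  | xargs sed -i 's/render(width, height)/render(width, height, dpi)/'

grep -rn 'dpi' src    # every caller updated together
```

    With submodules, each repo would instead need its own PR, its own release, and a window where old and new signatures coexist.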



    • You have to tediously run `git submodule update --init --recursive` every time you check out a commit. There’s an option to do it automatically, but it’s super buggy and will break your `.git` directory.
    • Switching between branches that have different sets of submodules doesn’t really work: Git won’t remove/recreate the submodules the way it will for normal directories. The worst case is changing a directory into a submodule or vice versa.
    • If you’re working on a feature that spans several submodules, you have to switch branches in every one of them instead of just once.
    • Making co-dependent changes across submodules is a nightmare.
    • If you’re using submodules for first-party code (not uncommon), it basically creates a new public interface where you didn’t have one before. Now you have to worry about backwards compatibility, and testing becomes much harder. Monorepos don’t have that problem.
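    The first bullet is easy to reproduce in a scratch setup. Everything below is hypothetical; the `protocol.file.allow` overrides are only needed because these repos live on the local filesystem, and the auto-update option referred to above is presumably `submodule.recurse`:

```shell
set -e
work=$(mktemp -d); cd "$work"

# A library repo, and an app repo that embeds it as a submodule
git init -q -b main lib
(cd lib && git config user.email dev@example.com && git config user.name dev \
    && echo v1 > lib.txt && git add . && git commit -qm "v1")
git init -q -b main app
(cd app && git config user.email dev@example.com && git config user.name dev \
    && git -c protocol.file.allow=always submodule add -q "$work/lib" lib \
    && git commit -qm "add lib submodule")

# A fresh clone leaves the submodule directory empty...
git clone -q app app2
cd app2
test ! -e lib/lib.txt && echo "lib is empty until initialised"

# ...until you remember to run:
git -c protocol.file.allow=always submodule update --init --recursive
cat lib/lib.txt
# `git config submodule.recurse true` makes checkout do this automatically
# (presumably the option described above as buggy)
```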

    The list goes on… Some of them depend on exactly what you’re using them for.

    The slightly frustrating thing is that there isn’t a great answer for what to use instead. Git subtree has its own problems. Language-specific package managers do too. And there aren’t any good language-agnostic package managers that I know of.

    I’m really hoping Jujutsu will come up with a better solution, because one is possible. But it’s hard, and they’re constrained by Git compatibility, so I won’t hold my breath.