

Yeah, also calling anything where you raise your hand a “Nazi-like salute” is dumb af. Musk does enough real shit without having to clutch at straws like this.
The UK is definitely going to try to ban VPNs. Some MPs are already talking about it.
I doubt this will fly in the UK. They are legally required to implement “highly effective” age verification, which definitely rules out AI-based age estimation and probably face recognition (Pornhub doesn’t support that, for example). It’s going to be credit card checks all round, yay.
The only rule you need is: preserve history that is worth preserving.
99% of the time, that means you should squash commits in a PR. Most PRs should be small enough that they don’t need finer-grained history than a single commit.
I will grant a couple of exceptions.
Unfortunately, if you enable merge queues on GitHub it forces you to pick one method for all PRs, which is kind of dumb. We just use squash merges for everything and accept that sometimes it’s not the best.
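For anyone unfamiliar, a squash merge is just this, done for you at merge time (branch name and message hypothetical):

    git checkout main
    git merge --squash my-feature
    git commit -m "My feature (squashed)"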
Not really, because I’ve never seen a setup that requires every commit in a branch to compile and pass tests. Only the merge commit needs to.
Also if your PR is so big that it would be painful to bisect within it, then it should be broken into smaller PRs.
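And if you do need to bisect at PR granularity, Git can walk only the first-parent (merge) commits, which sidesteps broken intermediate commits entirely (needs Git 2.29+; the good/bad refs here are hypothetical):

    git bisect start --first-parent
    git bisect bad HEAD
    git bisect good v1.2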
Stock ctrl-r is kinda shit but you can make it a lot better with McFly or Atuin.
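Both are a one-line shell hook, e.g. for bash (zsh and fish variants exist):

    # Atuin
    eval "$(atuin init bash)"

    # or McFly
    eval "$(mcfly init bash)"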
I don’t think anyone disputes that, it’s just that nobody has come up with anything better.
Take-home exercises were a potentially better option (though they definitely have other big downsides), but they aren’t a sensible choice in the age of AI.
Just taking people’s word for it is clearly worse.
Asking to see people’s open source code is unfair to people who don’t have any.
The only other option I’ve heard of - which I quite like the sound of but haven’t had a chance to try - is to get candidates to do “live debugging” on a real-world bug. But I expect that would draw exactly the same criticisms as live coding interviews do.
What would you do?
I don’t have a particular story. A lot of industries use “competency-based questions”, you know, “tell us about a time when…”.
They’re awful. If you don’t know what I’m talking about count yourself lucky.
Which would you rather: “write some code to filter the numbers out of a list”, or “tell us about a time when you worked with someone difficult. How did you win them over and subsequently become best friends?”.
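For scale, one plausible reading of that coding question fits in a single line (a sketch only):

    # drop purely numeric entries from a list
    printf '%s\n' foo 1 bar 22 | grep -vE '^[0-9]+$'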
No, it has very significant differences from Git beyond the CLI.
It is no secret that git’s interface is a bit too complex
Right but that’s mostly because the CLI is a mess, not because the fundamental data model is bad.
Some bad interview questions are like that, sure. But they’re supposed to be things you are very unlikely to have done before and can reasonably figure out. It’s not too hard to come up with simple questions like that. (Though I will grant many people don’t seem to bother.)
IMO this is not a helpful way to put it. They measure skill under stress. Stress may have a large effect on skill level for some people, but it’s highly unlikely that the effect is so large that performance is completely random.
I failed because I’d misformatted the for loop
Unlikely that you failed the interview because of a basic syntax mistake.
My personal preference when evaluating candidates’ ability to code is reading their actual production code
This would be a great interview method! But 99% of people are not working on open source code professionally so it doesn’t really work in general.
You don’t know how good you’ve got it. The hiring process in other industries is much worse.
Yeah I’ve seen Nix and Guix suggested but they seem like a huge extra layer of complexity.
Also, strict backwards compatibility in APIs is totally worth it. It makes developing larger systems so much easier.
Usually not for first party code. It adds extra maintenance burden for little benefit.
For example, suppose you want to add an extra parameter to a function. In a monorepo you just do that and fix all the compilation errors. Send one PR through CI. Job done.
With submodules… you have to add a new function so it’s backwards compatible, deal with a default value for the old call, and maybe add a deprecation warning. Oh, and you need to version your library now. Then good luck finding all the places that function is called and updating them…
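As a sketch of the day-to-day difference (repo and function names hypothetical):

    # monorepo: change the signature, fix every caller, one PR:
    git commit -am "Add timeout parameter to fetch_data()"

    # submodules: release libfoo v2 with the new function, then in
    # every consumer repo:
    git submodule update --remote libfoo
    git commit -am "Bump libfoo to v2, migrate to fetch_data_v2()"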
Yeah me too but if you keep reading they didn’t actually “move on” in the way that it sounds.
You have to run git submodule update --init --recursive every time you check out a commit. There’s an option to do it automatically, but it’s super buggy and will break your .git directory. The list goes on… Some of them depend on exactly what you’re using them for.
The slightly frustrating thing is that there isn’t a great answer for what to use instead. Git subtree has its own problems. Language-specific package managers do too. There aren’t any good language-agnostic package managers that I know of.
I’m really hoping Jujutsu will come up with a better solution, because one is possible. But it’s hard, and they’re constrained by Git compatibility, so I won’t hold my breath.
I agree, but if you take away the hard numbers from this (which you should), all you’re left with is what we all already knew from experience: fast languages are more energy efficient. C, Rust, Go, Java, etc. are fast; Python, Ruby, etc. are super slow.
It doesn’t add anything at all.
IMO it’s not as good a language as Rust, so I wouldn’t learn it for the purpose of making something. However, it’s very easy to learn (at least to a productive level), so you may as well if you want to.
Just work through Go by Example and see what you think.
By far the best thing about Go is the tooling. Language itself is eh.
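For the unfamiliar, “tooling” here means the stuff that ships in the box:

    go fmt ./...    # formatting, no arguments about style
    go vet ./...    # static analysis
    go test ./...   # test runner, no framework needed
    go build ./...  # fast builds, static binaries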
Really tempting, but you can get such good computers second hand these days. I got a Ryzen 9 3950X (a few years old, but 16 cores and still awesome) with 128 GB of RAM and a 1 TB SSD for £325. No way I’m paying 6 times that for a new machine that’s 50% faster at best.