cross-posted from: https://midwest.social/post/31797333

I came across the post about the Milk-V Titan, where a comment asked whether the lack of the V extension would hinder running Ubuntu 25.10, which targets a particular RISC-V configuration. It made me wonder whether there's an opportunity here for microkernels to exploit.

Now, up-front: it's been literally decades since I took an OS design class, and my knowledge of OS design is superficial; and while I've always been interested in RISC architectures, the depth of my knowledge there also dates back to the '90s. In particular, my knowledge of RISC-V's extension design approach is really, really shallow. It's all at a lower level than I've concerned myself with for years and years. So I'm hoping for an ELI-16 conversation.

What I was thinking is that a challenge of RISC-V's design is that operating systems can't rely on extensions being available, which (in my mind) means either a lot of very specific kernel builds – potentially an exponential number – or a similar number of code paths in the kernel, making for more complicated and consequently buggier kernels (per the McConnell rule). It made me wonder whether this is an opportunity for microkernels to shine, by exploiting an ability to load extension-specific modules based on a given CPU's capability set.
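To make the "capability set" idea concrete: on Linux/RISC-V, the available extensions show up as an ISA string (e.g. in the `isa` line of `/proc/cpuinfo`), where single-letter extensions follow the base in one run and multi-letter ones are separated by underscores. Here's a toy sketch of parsing such a string into a capability set – `parse_isa` is an invented helper for illustration, not anything a kernel actually uses:

```python
def parse_isa(isa: str) -> set[str]:
    """Parse a RISC-V ISA string into a set of extension names.
    Illustrative only; real kernels use their own detection code."""
    isa = isa.lower().strip()
    if isa.startswith(("rv32", "rv64")):
        body = isa[4:]
    elif isa.startswith("rv128"):
        body = isa[5:]
    else:
        raise ValueError(f"not a RISC-V ISA string: {isa!r}")
    parts = body.split("_")
    exts = set(parts[0])                     # leading run: one letter each (i, m, a, f, d, c, v, ...)
    exts.update(p for p in parts[1:] if p)   # multi-letter extensions: zicsr, zifencei, ...
    return exts

caps = parse_isa("rv64imafdc_zicsr_zifencei")
print("v" in caps)   # no vector extension on this core, so fall back to a scalar code path
```

A kernel (or loader) with this information could then pick between an extension-specific code path and a fallback, instead of needing a separate build per permutation.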

As I see it, the practicality of this depends on whether the extensions could be isolated in kernel modules, or whether (like the FP extension) some are so intrinsic that even the core kernel would need to vary. Even so, wouldn't a set of core-kernel build permutations be smaller, more manageable, and less bug-prone than the same permutations of a monolithic kernel?
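The module-selection step itself is simple in principle: each module declares which extensions it requires, and at boot you load only those whose requirements are met. A toy sketch (the module names and `select_modules` helper are made up for illustration):

```python
# Hypothetical module table: each module lists the ISA extensions it requires.
MODULES = {
    "vector_memops": {"v"},             # vectorized memory routines
    "crypto_accel":  {"zknd", "zkne"},  # scalar AES acceleration
    "base_sched":    set(),             # needs no optional extensions
}

def select_modules(caps: set[str], modules=MODULES) -> list[str]:
    """Return the modules whose required extensions are all present
    in the CPU's capability set."""
    return sorted(name for name, required in modules.items()
                  if required <= caps)

# A core without V loads only the baseline module;
# a core with V additionally gets the vector module.
print(select_modules({"i", "m", "a", "f", "d", "c"}))
print(select_modules({"i", "m", "a", "c", "v"}))
```

The open question raised above is the hard part: whether extension-dependent code can actually be confined to modules like this, or whether it leaks into the core kernel.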

Given the number of different possible RISC-V combinations, would a micro kernel design not have an intrinsic advantage over monolithic kernels, and be able to exploit the more modular nature of their design?

  • ikidd@lemmy.world · 1 month ago

    I think of RISC-V like processors in the 90s: not very fast, so everything has a price. The IPC cost of a microkernel communicating with all the user-space processes that would normally live inside a monolithic kernel has real consequences for performance on RISC-V. Today, on AMD64, a microkernel's advantage of surviving crashes of its dependent processes would be great to have, since IPC calls barely register on the performance of modern processors and memory. But RISC-V has a ways to go to reach that level of performance. IPC isn't cheap, especially if you want the level of complexity and wide compatibility of modern Linux.

    • Modern microkernel designs have largely addressed the performance issues. L4Linux claims a maximum throughput penalty of a mere 5%:

      "For L4Linux, the AIM benchmarks report a maximum throughput which is only 5% lower than that of native Linux" — TU Dresden

      It's not going to win benchmark competitions, but it's also probably not going to have a substantial impact on average workloads.

      I get that you're saying that µ-kernels would compound the performance challenges of current RISC-V implementations (it's an extremely young architecture), and I'm not suggesting that a microkernel would have a performance benefit. If the tradeoff is easier support for the permutations of the RISC-V feature set and fewer bugs in simpler code, it may be worth it.