• 34 Posts
  • 886 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • Not necessarily and also not how I understood the post. It’s certainly an option and an ergonomic one at that. So it’s not unthinkable they go that route.
    But you could also offer a choice of backend when creating a new graph. The text-based one lets you sync your files between devices, e.g. via source control, and thereby offers asynchronous collaboration, while the DB-based approach forfeits source control and instead keeps its data consistent across simultaneous edits from multiple clients itself (roughly the split sketched below).
    But managing that consistency takes quite considerable effort, as we're witnessing, and for no clear advantage, just a different tradeoff. And I, at least, think it would be a shame to let the work that has already gone into the text backend go to waste. I think the devs might share that opinion.
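
    To make that backend split concrete, here's a minimal Python sketch of what a per-graph storage choice could look like. Everything in it is hypothetical: the `GraphStorage` interface and both implementations are illustrations of the tradeoff, not code from any actual project.

    ```python
    # Hypothetical sketch of a per-graph storage backend choice.
    # Names and interface are made up for illustration, not taken from any real codebase.
    import sqlite3
    from abc import ABC, abstractmethod


    class GraphStorage(ABC):
        """Minimal interface a graph could be created against."""

        @abstractmethod
        def read_page(self, name: str) -> str: ...

        @abstractmethod
        def write_page(self, name: str, content: str) -> None: ...


    class TextFileStorage(GraphStorage):
        """Plain text files on disk; syncing and merging are left to git
        or whatever file-sync tool the user already has."""

        def __init__(self, root: str) -> None:
            self.root = root

        def read_page(self, name: str) -> str:
            with open(f"{self.root}/{name}.md", encoding="utf-8") as f:
                return f.read()

        def write_page(self, name: str, content: str) -> None:
            with open(f"{self.root}/{name}.md", "w", encoding="utf-8") as f:
                f.write(content)


    class DatabaseStorage(GraphStorage):
        """Single SQLite file; the app itself must arbitrate simultaneous
        edits from multiple clients instead of delegating that to git."""

        def __init__(self, path: str) -> None:
            self.conn = sqlite3.connect(path)
            self.conn.execute(
                "CREATE TABLE IF NOT EXISTS pages (name TEXT PRIMARY KEY, content TEXT)"
            )

        def read_page(self, name: str) -> str:
            row = self.conn.execute(
                "SELECT content FROM pages WHERE name = ?", (name,)
            ).fetchone()
            return row[0] if row else ""

        def write_page(self, name: str, content: str) -> None:
            with self.conn:  # commits on success
                self.conn.execute(
                    "INSERT OR REPLACE INTO pages (name, content) VALUES (?, ?)",
                    (name, content),
                )
    ```

    The point of the sketch is only that the two backends sit behind the same interface, so the choice could be made per graph rather than for the whole app.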

  • The reason I compare them to autocomplete is that they’re token predictors, just like autocomplete.
    They take your prompt and predict the first word of the answer. Then they take the result and predict the next word, and repeat until a minimum length is reached and the answer seems complete (roughly the toy loop sketched below). Yes, they're a tad smarter than plain autocomplete, but they understand just as little of the text they produce. The text will be mostly grammatically correct, but they don't understand it. Much like a compiler can tell you whether your code is syntactically correct, but can't judge its logic.
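
    In toy Python, that loop amounts to something like this. `predict_next_token` is a made-up stand-in for the actual model (a real LLM scores every token in its vocabulary at each step); the point is only the shape of the loop, not the prediction itself.

    ```python
    # Toy illustration of autoregressive generation: the "model" only ever
    # picks the next token given everything produced so far; it never plans
    # the whole answer. predict_next_token is a stand-in, not a real API.

    def predict_next_token(tokens: list[str]) -> str:
        """Made-up 'model': a fixed table of most likely next words."""
        table = {
            "the": "answer",
            "answer": "seems",
            "seems": "complete",
            "complete": "<eos>",
        }
        return table.get(tokens[-1], "the")


    def generate(prompt: str, min_length: int = 8, max_length: int = 50) -> str:
        tokens = prompt.split()
        while len(tokens) < max_length:
            next_token = predict_next_token(tokens)
            tokens.append(next_token)
            # Stop once the answer is long enough and "seems complete",
            # here signalled by an end-of-sequence marker.
            if len(tokens) >= min_length and next_token == "<eos>":
                break
        return " ".join(tokens)


    print(generate("Explain this:"))
    ```

    Each step is only locally plausible; nothing in the loop ever checks whether the answer as a whole is true or logically sound.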

  • Or, and hear me out on this, you could actually learn and understand it yourself! You know? The thing you go to university for?
    What would you say if, say, it came to light that an engineer had outsourced the structural analysis of a bridge to some half-baked autocomplete? I'd lose all trust in that bridge and all respect for that engineer, and I'd hope they'd be stripped of their title and held personally responsible.

    These things are currently worse than useless precisely because they're sometimes right: that gives people the false impression that they can actually be relied on.

    Edit: just came across this MIT study regarding the cognitive impact of using LLMs: https://arxiv.org/abs/2506.08872