• mech@feddit.org · 8 points · 22 hours ago

    This brings me back to my first startup job, supporting a piece of software used by several large corporations you’ll know.

    Configuration and data entry were done by directly manipulating the prod database, with commands that were stored locally in text files, pre-written in SQL Manager or generated from Excel macros.

    If you fucked up, you copied over the parts you had changed from the testing db, which doubled as the only backup.

    If you started a transaction and forgot to commit or roll back, the application on the customer’s end would lock up.

    The web frontend was based on hand-written HTML+CSS spaghetti code and Microsoft Silverlight.

    Without any prior job experience or SQL knowledge, I got an account with domain admin rights and full write access to prod on my first day.

    Only two people could work on the server at a time because they didn’t want to pay for more terminal server licenses.

  • RustyNova@lemmy.world · 3 points · 23 hours ago

    I feel this pain. Having to deal with an inherently sync database in an async app is painful. You need to make sure the transaction is never stalled partway through, set the timeout to something reasonable instead of an iPad kid’s attention span, and handle a whole host of other things.
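
    A minimal sketch of the usual workaround, assuming Python’s stdlib sqlite3 and asyncio (the events table and file name are made up for illustration): push each blocking call onto a worker thread so the event loop never stalls mid-transaction, and give the connection a busy timeout.

        import asyncio
        import sqlite3

        def write_event(payload):
            # timeout=5.0 is SQLite's busy timeout: wait up to 5s for a lock
            # instead of failing instantly with "database is locked".
            conn = sqlite3.connect("app.db", timeout=5.0)
            try:
                conn.execute("CREATE TABLE IF NOT EXISTS events (payload TEXT)")
                with conn:  # opens a transaction; commits, or rolls back on error
                    conn.execute("INSERT INTO events (payload) VALUES (?)", (payload,))
            finally:
                conn.close()

        async def main():
            # The blocking call runs on a worker thread, not on the event loop.
            await asyncio.to_thread(write_event, "hello")

        asyncio.run(main())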

    Can’t wait for Turso to be stable.

  • marcos@lemmy.world · 25 points · 2 days ago

    SQLite doesn’t do highly concurrent tasks. Your life will be much, much better if you don’t even try.

    It also doesn’t do Windows file shares, because at the speed Windows manages its shares, any access to a shared file is effectively highly concurrent.

    • okwhateverdude@lemmy.world · 7 points · 1 day ago

      SQLite doesn’t do highly concurrent tasks. Your life will be much, much better if you don’t even try.

      One small counterpoint: with WAL (write-ahead logging), it is much more feasible to manage more concurrency.
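
      For anyone who hasn’t used it, turning WAL on is a one-liner, and the setting persists in the database file (a sketch in Python’s stdlib sqlite3; the file name is arbitrary):

          import sqlite3

          conn = sqlite3.connect("app.db")
          # Write-ahead logging: readers no longer block the writer and the
          # writer no longer blocks readers. Still only one writer at a time.
          conn.execute("PRAGMA journal_mode=WAL")
          conn.close()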

      It also doesn’t do Windows file shares, because at the speed Windows manages its shares, any access to a shared file is effectively highly concurrent.

      I didn’t know this! But then again, I don’t think I’ve ever used SQLite on Windows before.

    • tatterdemalion@programming.dev (OP) · 12 points · 1 day ago

      There is a subtle scenario with read-modify-write transactions in MVCC where SQLite lacks some grace (in my opinion).

      In MVCC, transactions work with a point-in-time (read “between atomic transactions”) consistent “read snapshot” of the database.

      Consider this example:

      1. Transaction A begins and reads from table foo.
      2. Transaction B begins and updates table foo.
      3. Both transactions commit.

      There is no conflict here because these transactions are isolated from each other via the snapshot mechanism. Transaction A’s read snapshot is immutable and will not see any writes from transaction B, even if they are happening concurrently.

      Now what happens in this example (from the OP):

      1. Transaction A begins and reads from table foo.
      2. Transaction B begins and updates table foo.
      3. Transaction B commits.
      4. Transaction A tries to update foo.

      This is a true conflict because both transactions are trying to write to foo, and transaction A’s writes might be based on what it just read. There is no consistent way for A to proceed, because B already wrote to foo, invalidating A’s read snapshot.

      So SQLite handles this by returning an error to A, effectively requiring A to restart the transaction.
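
      For the curious, here’s a minimal Python sketch of that second example (file and table names are made up; WAL mode is what gives each transaction the snapshot behavior described above):

          import sqlite3

          # Two connections to the same file, in autocommit mode so that
          # BEGIN/COMMIT are fully under our control.
          a = sqlite3.connect("demo.db", isolation_level=None)
          b = sqlite3.connect("demo.db", isolation_level=None)
          a.execute("PRAGMA journal_mode=WAL")
          a.execute("CREATE TABLE IF NOT EXISTS foo (n INTEGER)")
          a.execute("INSERT INTO foo VALUES (1)")

          a.execute("BEGIN")                                # 1. A begins...
          print(a.execute("SELECT n FROM foo").fetchone())  # ...and reads foo

          b.execute("BEGIN")
          b.execute("UPDATE foo SET n = 2")                 # 2. B updates foo
          b.execute("COMMIT")                               # 3. B commits

          try:
              a.execute("UPDATE foo SET n = n + 1")         # 4. A tries to update foo
          except sqlite3.OperationalError as e:
              print("A must restart:", e)  # SQLITE_BUSY_SNAPSHOT under the hood
              a.execute("ROLLBACK")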

      There are other ways this could be handled, though. The DB could optimistically retry the transaction for you. There is even a special BEGIN IMMEDIATE; statement that takes the database write lock up front so that the transaction doesn’t get starved by other writers. But SQLite puts all of the responsibility on users to handle this.
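
      To illustrate, here’s a sketch of the retry loop SQLite leaves you to write yourself (the helper name is hypothetical, foo is the table from the example, and the connection is assumed to be opened with isolation_level=None so transactions are explicit):

          import sqlite3
          import time

          def increment_with_retry(conn, retries=5):
              for attempt in range(retries):
                  try:
                      # IMMEDIATE takes the database write lock up front, so the
                      # read below can't be invalidated by a concurrent writer.
                      conn.execute("BEGIN IMMEDIATE")
                      n = conn.execute("SELECT n FROM foo").fetchone()[0]
                      conn.execute("UPDATE foo SET n = ?", (n + 1,))
                      conn.execute("COMMIT")
                      return
                  except sqlite3.OperationalError:
                      if conn.in_transaction:
                          conn.execute("ROLLBACK")
                      time.sleep(0.05 * (attempt + 1))  # crude backoff, then retry
              raise RuntimeError("gave up: transaction kept getting starved")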

      I’m not an expert, so there could be a very good reason that SQLite works this way, but it feels a bit annoying as a user.

      I don’t actually know off the top of my head how PostgreSQL handles this particular scenario.

      • FishFace@piefed.social · 4 points · 23 hours ago

        Wouldn’t that be an error due to serialisation failure in Postgres, if you enabled serialisable isolation?

        If you didn’t, you could get anything, because unless I’m mistaken this is the typical example used to illustrate what goes wrong when you don’t have full serialisable isolation.

        • tatterdemalion@programming.dev (OP) · 1 point · 23 hours ago

          I believe both SERIALIZABLE and REPEATABLE READ isolation levels would raise an error for this example.

          And I looked this up: Postgres won’t retry the transaction for you either. Though ultimately I think this is a good default behavior.
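
          So in Postgres you end up writing roughly this yourself (a sketch assuming psycopg2, where serialization_failure is SQLSTATE 40001; the wrapper name and work callback are made up):

              import psycopg2
              from psycopg2 import errors

              def run_serializable(dsn, work, retries=5):
                  conn = psycopg2.connect(dsn)
                  try:
                      for _ in range(retries):
                          try:
                              with conn:  # commits on success, rolls back on error
                                  with conn.cursor() as cur:
                                      cur.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")
                                      work(cur)
                              return
                          except errors.SerializationFailure:
                              continue  # the whole transaction is safe to re-run
                      raise RuntimeError("gave up after repeated serialization failures")
                  finally:
                      conn.close()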

          • FishFace@piefed.social · 2 points · 22 hours ago

            Why would repeatable read raise an error? I’m a little rusty, but I thought only serialisable could do that, and that repeatable read simply ensured that the committed writes of transaction 2 would not be read in transaction 1, so updating over the top of 2 is fine.

    • themaninblack@lemmy.world · 7 points · 1 day ago

      I’m going to try. Could be:

      1. A long-running UPDATE, which can temporarily lock all of the data being updated. Basically, a lock is when the relevant data is frozen while the transaction executes. This can happen at the field, row, or table level in most robust database management systems, but in SQLite, while a create, update, or delete is actually being written to disk, the whole file (the database) is locked, even for processes that only want to read.

      The solution is to wait for completion, but your query could take 7 million years to complete so… you might not have the patience. You could also just exhaust the compute/memory resources of the machine.

      This feels bad when you expected it to be a simple transaction, or when you only expected the update to apply to a small subset of data. It’s possible that you’re using a suboptimal query strategy (e.g. many JOINs, lack of indices, not using WITH; see the query-plan sketch below for a quick way to check) or that you’re running your UPDATE on a huge number of records instead of the three you expected to change.

      And/or

      2. A deadlock, meaning the same data is being operated on at the same time, but the operations can’t execute because there is a competing/circular lock.

      The use of BEGIN means that the transaction has started; you usually use COMMIT to actually finish and complete the transaction. If another query is operating on the same data during this time, even data that is incidental and only used to make a JOIN work, the locks can overlap in a way that makes the transactions hang, because the DB engine can’t work out which lock to release first.

      SQLite is single-file based and has a more basic and broad lock vs. Postgres or other DBMSes. This means that SQLite doesn’t deadlock, because it processes each transaction one after another, but this paradigm may slow everything down vs. MariaDB, Postgres, etc.
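
      On the indexing point above, EXPLAIN QUERY PLAN is a quick way to check whether a query will scan the whole table (a sketch in Python’s stdlib sqlite3; the table and index names are made up):

          import sqlite3

          conn = sqlite3.connect(":memory:")
          conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")

          # Without an index, the plan reports a full-table SCAN...
          print(conn.execute(
              "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?", ("bob",)
          ).fetchall())

          conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

          # ...and with one it reports SEARCH ... USING INDEX idx_orders_customer.
          print(conn.execute(
              "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?", ("bob",)
          ).fetchall())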

      Also see ACID compliance for further reading (https://en.wikipedia.org/wiki/ACID)