

That depends on whether you consider an LLM to be reading the text, or reproducing it.
Outside of the kind of malfunction caused by overfitting, such as when the same text appears again and again in the training data, it’s not difficult to argue that an LLM does the former, not the latter.
They definitely weren’t working on Starship back then. Their first successful launch was in 2008, so at that point they were working on the Falcon 1.
You can’t claim that all the work they’ve ever done has just been early versions of Starship, because the Falcon rockets are among the most successful rockets in history. They’re a perfectly good product, and the fact that SpaceX has gone on to try to build something even better isn’t remotely the same as changing direction so often that you never actually get anywhere, the way the Orion program has.