

The issue is that foreign companies operating outside US jurisdiction aren't bound by US copyright law, so if we hobble US AI companies with training restrictions, our country loses the AI war.
I get that AI training on public content seems unfair, but there isn't really a way to prevent AI scraping (domestic or foreign) short of removing all public content from the internet.
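For what it's worth, the closest thing to an opt-out today is robots.txt, and it only works on crawlers that choose to honor it. A minimal sketch (GPTBot and CCBot are real crawler user agents for OpenAI and Common Crawl; nothing technically enforces compliance):

    # robots.txt — advisory only: compliant crawlers will skip the site,
    # but non-compliant crawlers can ignore this file entirely
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

A scraper that doesn't respect it, foreign or domestic, just reads the pages anyway, which is the point: it's a request, not a technical barrier.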
I think a 10x productivity gain is a reasonable long-term goal, given continued improvements in models, agentic systems, and tooling, plus learning to use them properly.
It's already close for some use cases; understanding a new codebase with the help of Cursor's agent, for example, is kind of insane.
We've only had these tools for a few years, and I expect software development will be unrecognizable in another ten.