

Is this a robots.txt alternative?
The reason is that company decisions are largely driven by investors, and investors want their big investments in AI to return something.
Investors want constant growth, even if it has to be shoehorned in.
Arch is not the most widely supported distro (as in supported by the creators of programs). You will see it supported most by some of the more indie open source programs, but beyond that, Debian and Ubuntu are more likely to be explicitly supported.
Arch definitely requires you to read. It’s a distro for those who want a greater amount of choice and freedom in their system. If you prefer an out-of-the-box experience, try another distro.
Arch’s limitation is that you kinda have to stick with the latest version of things. This is usually a good limitation, and imo better than the limitation of having to stick with an old frozen version.
Depending on the package, trying an older version may not work or even break the system if dependencies or reverse dependencies are expecting it to be a certain version, which is often the case.
Bringing up RSS feeds is actually a very good point, because although you can paginate or partition your feeds, I have never seen a feed that does so, even ones with decades of history. But if needed, partitioning is an option, so you don’t have to pull all of a feed’s posts but only recent ones, or those in a date/time range.
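To illustrate, partitioning a feed could look roughly like this (a sketch only; the URL scheme is made up, loosely in the spirit of RFC 5005’s `rel="next"` paging for Atom feeds):

```python
from datetime import datetime, timedelta

def paginate_feed(posts, page_size):
    """Split a reverse-chronological post list into pages.

    Each page carries a 'next' pointer to older entries, so a reader
    only pulls recent posts unless it wants to backfill.
    The post format here is hypothetical: (timestamp, title).
    """
    pages = []
    for i in range(0, len(posts), page_size):
        chunk = posts[i:i + page_size]
        pages.append({
            "entries": chunk,
            # hypothetical URL scheme for the next (older) page
            "next": f"/feed?page={i // page_size + 1}" if i + page_size < len(posts) else None,
        })
    return pages

posts = [(datetime(2024, 1, 1) - timedelta(days=d), f"post {d}") for d in range(25)]
pages = paginate_feed(posts, page_size=10)
# a reader fetches only pages[0], then follows "next" links if needed
```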
I would also respectfully disagree that people don’t subscribe to hundreds of RSS feeds. I would bet most people who consistently use RSS feed readers have more than 100 feeds, me included.
And lastly, even if you follow 10,000 feeds, yes, it would require a lot more time than reading from a single database, but it is still on the order of double-digit seconds at most. If you compare 10,000 static file fetches with 10,000 database writes across different instances, I think the static files would fare better. And that’s not to mention that with the push model you are likely to end up doing more writes than you would do reads with the pull model (users with 100k followers are far more common than users with 100k subscriptions).
And just to emphasize, I do agree that double-digit seconds would be quite long for a user’s loading time, which is why I would expect to fetch regularly, so the user logs onto a pre-made news feed.
Sure, but constantly having to do it is not really a bad thing, given it is automated and those reads are quite inexpensive compared to a database query. It’s a lot easier to handle heavy loads when serving static files.
Yes, precisely. The existing implementation in the Fediverse does the opposite: everyone you follow has to insert their posts into the feed of everyone who follows them, which has its own issues.
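For contrast, that push model (fan-out on write) looks roughly like this (a hedged sketch; the names are illustrative, not any real ActivityPub API):

```python
# Fan-out on write: delivering one post means writing it into every
# follower's feed. This is where the write amplification comes from.

def fan_out_on_write(post, followers, inboxes):
    """Deliver one post by appending it to each follower's feed."""
    for follower in followers:
        inboxes.setdefault(follower, []).append(post)  # one write per follower

inboxes = {}
fan_out_on_write("hello", followers=[f"user{i}" for i in range(100_000)], inboxes=inboxes)
# a single post from a popular account triggers 100,000 writes
```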
Oh my bad, I can explain that.
Before I do, one benefit of this method is that your timeline is entirely up to your client. Your instance becomes primarily tasked with making your posts available, and clients have the freedom of implementing the reading and news feed / timeline formation.
Hence, there are a few ways to do this. The best one is probably a mix of those.
This is not a good approach, but I mention it first because it’ll make explaining the next one easier.
Cons: loading time for the user may be long; depending on how many subscriptions they have, it could be several seconds, and the P90 may even be in the double digits.
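A hedged sketch of this fetch-on-request approach, assuming each followed account exposes a static, newest-first feed (`fetch_feed` is a stand-in for a real HTTP request):

```python
import asyncio
import heapq

FAKE_FEEDS = {
    # stand-in data: each feed is a list of (timestamp, post), newest first
    "a": [(30, "a3"), (10, "a1")],
    "b": [(20, "b2")],
}

async def fetch_feed(url):
    # placeholder: a real client would do an HTTP GET of the account's feed file
    await asyncio.sleep(0)
    return FAKE_FEEDS[url]

async def build_timeline(subscriptions):
    # fetch all subscriptions concurrently on user request
    feeds = await asyncio.gather(*(fetch_feed(u) for u in subscriptions))
    # each feed is already sorted newest-first, so a k-way merge is cheap
    return list(heapq.merge(*feeds, key=lambda e: e[0], reverse=True))

timeline = asyncio.run(build_timeline(["a", "b"]))
```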
Think of it like a periodic job (hourly, every 10 minutes, etc.) that fetches posts in the same manner as described above, but instead of doing it when the user requests their feed, it is done in advance.
Pros: fast loading time for the user, since they log onto a pre-made feed. Cons: the feed can lag behind by up to one fetch interval.
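The periodic-job idea could look roughly like this (a hedged sketch; `build_timeline` stands in for the on-demand fetching described above):

```python
import time

# A background task rebuilds each user's timeline on a schedule,
# so a login just reads the cached result.

CACHE = {}

def build_timeline(subscriptions):
    # a real client would fetch each subscription's static feed here
    return [f"latest from {s}" for s in subscriptions]

def refresh_all(users):
    """Run hourly / every 10 minutes by a scheduler."""
    for user, subs in users.items():
        CACHE[user] = {"built_at": time.time(), "posts": build_timeline(subs)}

refresh_all({"alice": ["feedA", "feedB"]})
# a request for alice's feed now just reads CACHE["alice"]["posts"]
```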
In this approach, we primarily do the second method, to achieve fast loading time. But to get more up-to-date content, we also simultaneously fetch the latest in the background, and interleave or add the latest posts as the user scrolls.
This way we get both fast initial load times and recent posts.
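A hedged sketch of the combination (all names illustrative; posts are (timestamp, id) pairs here):

```python
# Serve the pre-built (cached) feed right away, then merge freshly
# fetched posts in as they arrive in the background.

def hybrid_timeline(cached_posts, fresh_posts):
    """Interleave fresh posts into the cached feed, newest first, de-duplicated."""
    seen = set()
    merged = []
    for ts, post in sorted(cached_posts + fresh_posts, key=lambda e: e[0], reverse=True):
        if post not in seen:
            seen.add(post)
            merged.append((ts, post))
    return merged

cached = [(10, "old1"), (5, "old2")]   # from the periodic job
fresh = [(12, "new1"), (10, "old1")]   # fetched in the background; overlap is expected
result = hybrid_timeline(cached, fresh)
```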
Surely there are other good approaches. As I said at the beginning, clients have the freedom to implement this however they like.
If a CDN is involved, we would have to properly take care of the invalidations and whatnot. We would have to run a batch process to update the CDN files so that we are not doing it too often; doing it every minute or so is still plenty fast for social media use cases.
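The batching idea could be sketched like this (hedged; `cdn_invalidate` is a hypothetical stand-in for a real CDN purge API):

```python
# Collect feed files that changed and invalidate them on the CDN in one
# batch (e.g. once a minute) instead of on every single post.

PENDING = set()

def mark_dirty(path):
    """Called whenever a feed file is regenerated."""
    PENDING.add(path)

def flush(cdn_invalidate):
    """Run by a scheduler every minute or so; purges each dirty path once."""
    dirty = sorted(PENDING)
    PENDING.clear()
    for path in dirty:
        cdn_invalidate(path)
    return dirty

purged = []
mark_dirty("/users/alice/feed.json")
mark_dirty("/users/bob/feed.json")
mark_dirty("/users/alice/feed.json")  # duplicates collapse within a batch
flushed = flush(purged.append)
```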
Have to emphasize that I am not an expert, so I may be missing a big pitfall here.
Don’t think it has that info.
Who split?
He most likely isn’t the one who did it. The eyebrows from his suspect photo are way off.
They likely wanted to charge someone so the public doesn’t get ideas about doing a similar thing and not getting caught.
Anyone looking for the best package manager only needs to look at Portage/emerge and Nix.
I tried LFS once and accidentally ran one or more of the commands on my host machine, rendering it unusable.
Does anyone know of a similar comparison but with more modest GPUs, like a 3060 Ti or equivalent? I feel like Phoronix did something like that, but I can’t manage to find it.
I disagree that this is a concern. If you are already exaggerating about federation wars, chances are you have already tried Lemmy and know a good bit about selecting instances. The average user will not care as much as you do.
The average user will go to the join-lemmy site, will not care at all about the different instances, and will likely choose the biggest one or the first one they see. None of them will think “oh no, this one is involved in federation wars,” because that’s not something you find out before knowing a bit about the fediverse.
Ahh, so the game itself can use Vulkan and it is not necessary for Sway itself to use Vulkan?? Wow, well that makes me very happy, thanks a lot!!
I did that, but it did not produce any logs either :(
I did not try that, but I did try Battle.net through Bottles and I get a similar issue. It does not launch, and there are no errors.
Why the interest in BEAM?