• 2 Posts
  • 637 Comments
Joined 2 years ago
Cake day: July 5th, 2023

  • From a business perspective it makes sense to throw all the rendering to the devices to save cost.

    Not just to save cost. It’s basically OS-agnostic from the user’s point of view. The web app works fine in desktop Linux, MacOS, or Windows. In other words, when I’m on Linux I can have a solid user experience on apps that were designed by people who have never thought about Linux in their life.

    Meanwhile, porting native programs between OSes often means someone’s gotta maintain the libraries that call the right desktop/windowing APIs, and handle the behavior differences between each version of Windows, MacOS, and the various Linux windowing systems, not all of which work in expected or consistent ways.
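
    To make that concrete, here’s a rough hypothetical sketch (TypeScript on Node; the function name and commands are just illustrative, not from any particular project) of the per-OS branching even a trivial desktop feature ends up needing, while a web app leaves all of it to the browser:

    ```typescript
    // Hypothetical example: "reveal this file in the OS file manager".
    // A native/desktop app has to pick the right mechanism per OS; a web app
    // never has to ask which OS it's running on.
    import { spawn } from "node:child_process";
    import { dirname } from "node:path";

    function revealInFileManager(filePath: string): void {
      switch (process.platform) {
        case "darwin":
          spawn("open", ["-R", filePath]);            // macOS: reveal in Finder
          break;
        case "win32":
          spawn("explorer", [`/select,${filePath}`]); // Windows: select in Explorer
          break;
        default:
          spawn("xdg-open", [dirname(filePath)]);     // most Linux desktops: open the folder
          break;
      }
    }
    ```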


  • The Walkman and other tape players were so much superior to CD players for portability and convenience. Batteries lasted a lot longer for portable tape players than for CD players. Tapes could be remixed easily, so you could bring a specific playlist (or 2 or 3) with you. And tapes were much more resilient than CDs.

    The superior audio quality of CDs didn’t matter as much when you were using 1980s-era headphones. Even in a boombox, the spinning disc was still susceptible to skips from bumps or movement, and the higher-speed motor and more complex audio processing drained batteries much faster. And back then, rechargeable batteries weren’t really a thing, so people were just burning through regular single-use alkaline batteries.

    It wasn’t until the 90’s that decent skip protection, a few generations of miniaturization and better battery life, and improved headphones made portable CD players competitive with portable tape players.

    At the same time, cars started to get CD players, but a typical person doesn’t buy a new car every year, so it took a few years before a decent share of the cars on the road had them.


  • You don’t remember NetZero, do you? A dial-up ISP that gave you free Internet access on the condition that you gave up like 25% of your screen to animated banner ads while you were online.

    Or BonziBuddy? Literal spyware.

    What about all the MSIE toolbars, some of which had spyware, and many of which had ads?

    Or just plain old email spam in the days before more sophisticated filters came out?

    C’mon, you’re looking at the 1990s through rose-tinted glasses. I’d argue that the typical web user saw more ads in 1998 than in 2008.


  • No, the 1990s internet just hadn’t actually fulfilled the full potential of the web.

    Video and audio required plugins, most of which were proprietary. Kids today don’t realize that before YouTube, the best place to watch trailers for upcoming movies was on Apple’s website, as they tried to increase adoption of QuickTime.

    Speaking of plugins, much of the web was hidden behind embedded Flash elements, and linking to resources was limited. I could view something in my browser, but if I sent the URL to a friend they might still need to navigate within that embedded element to get to whatever it was I was talking about.

    And good luck getting plugins if you weren’t on the operating system the site expected. Microsoft was so busy fracturing web standards with Windows and MSIE that most site publishers simply ignored Mac and Linux users (and even ignored any browser other than MSIE).

    Search engines were garbage. Yahoo actually provided decent competition to search engines by paying humans to manually maintain an index and review user submissions to decide whether a new site should be added.

    People’s identities were largely tied to their internet service provider, which might have been a phone company, university, or employer. The publicly available email services, not tied to an ISP, employer, or university, were unreliable and inconvenient. We literally had to disconnect from the internet in order to dial in and fetch mail with Eudora or whatever.

    Email servers held mail only long enough for you to download your copy, and then deleted it from the server (typical POP3 behavior at the time). If you wanted to read an archived email, you had to go back to the specific computer you had downloaded it to, because you couldn’t just log into the email service from somewhere else. This was a pain when you used computer labs at your university (because very few of us had laptops).

    User interactions with websites were clunky. Almost everything a user submitted to a site required an actual HTTP POST transaction and a reload of the entire page. AJAX changed the web significantly in the mid-2000s (a rough sketch of the difference is at the end of this comment). The simple act of dragging a Google Maps map around and zooming in and out was revolutionary.

    Everything was insecure. Encryption was rare, and even if present was usually quite weak. Security was an afterthought, and lots of people broke their computers downloading or running the wrong thing.

    Nope, I think 2005-2015 was the golden age of the internet. Late enough to where the tech started to support easy, democratized use, but early enough that the corporations didn’t ruin everything.
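
    Since I brought up that POST-and-reload vs. AJAX difference above, here’s a rough hypothetical sketch in TypeScript (the endpoint and element id are made up for illustration):

    ```typescript
    // 1990s style: a <form method="POST"> submit throws away the whole page and
    // the browser renders the server's response from scratch.
    // Mid-2000s "AJAX" style, below: send the same POST in the background and
    // patch only the part of the page that changed.
    async function postComment(text: string): Promise<void> {
      const response = await fetch("/comment", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ text }),
      });
      const saved: { text: string } = await response.json();

      // Update one element in place: no full-page reload, no lost scroll position.
      const list = document.getElementById("comment-list");
      list?.insertAdjacentHTML("beforeend", `<li>${saved.text}</li>`);
    }
    ```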


  • Yeah, you’re describing an algorithm that incorporates data about the user’s previous likes. I’m saying that any decent user experience will include prioritization and weighting of different posts on a user-by-user basis, so the provider has no choice but to put together a ranking/recommendation algorithm that does more than simply sort all available elements in chronological order.
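
    As a minimal hypothetical sketch of what “more than chronological” could mean (TypeScript; the signals and weights are invented for illustration, not any real service’s algorithm):

    ```typescript
    // Recency boosted by how much this user engages with the author, and by
    // whether they were tagged. All names and weights are placeholders.
    interface Post {
      id: string;
      authorId: string;
      postedAt: number; // Unix epoch seconds
    }

    interface UserSignals {
      likesByAuthor: Map<string, number>; // how often this user liked each author
      taggedIn: Set<string>;              // post ids where this user was tagged
    }

    function score(post: Post, signals: UserSignals, now: number): number {
      const ageHours = (now - post.postedAt) / 3600;
      const recency = 1 / (1 + ageHours);
      const affinity = 1 + 0.1 * (signals.likesByAuthor.get(post.authorId) ?? 0);
      const tagBoost = signals.taggedIn.has(post.id) ? 2 : 1;
      return recency * affinity * tagBoost;
    }

    function rankFeed(posts: Post[], signals: UserSignals, now: number): Post[] {
      return [...posts].sort((a, b) => score(b, signals, now) - score(a, signals, now));
    }
    ```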





  • Windows is the first thing I can think of that used the word “application” in that way, and I think that goes back to before Windows could even be considered an OS (when it still depended on MS-DOS). Even then, the Windows API spelled it out: Application Programming Interface.

    Here’s a Windows 3.1 programming guide from 1992 that freely refers to programs as applications:

    Common dialog boxes make it easier for you to develop applications for the Microsoft Windows operating system. A common dialog box is a dialog box that an application displays by calling a single function rather than by creating a dialog box procedure and a resource file containing a dialog box template.



  • Some people actively desire this kind of algorithm because they find it easier to find content they like this way.

    Raw chronological order tends to overweight frequent posters. If you follow someone who posts 10 times a day and 99 people who post once a week, that one account is 1% of the people you follow but roughly 40% of the posts you see (about 70 out of roughly 169 posts per week).

    One simple algorithm that is almost always better for the user experience is to retrieve the most recent X posts from each followed account and then sort that combined set chronologically (a rough sketch follows at the end of this comment). Once you’re doing that, though, you’re probably thinking about ways to optimize the experience further. What should the value of X be? Do you want to hide posts the user has already seen, unless there’s been a lot of comment/follow-up activity? Do you want to prioritize posts where the user was specifically tagged in a comment, or in the post itself? If so, by how much?

    It’s a non-trivial problem that would require thoughtful design, even for a zero advertising, zero profit motive service.
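
    Here’s that “most recent X per account, then chronological” idea as a hypothetical TypeScript sketch (the Post shape and the X = 5 default are placeholders):

    ```typescript
    // Cap each followed account at its X most recent posts, then merge and sort
    // chronologically. Tuning X, dropping already-seen posts, boosting posts the
    // user is tagged in, etc. is exactly the non-trivial design work described above.
    interface Post {
      id: string;
      authorId: string;
      postedAt: number; // Unix epoch seconds
    }

    function buildFeed(postsByAccount: Map<string, Post[]>, x = 5): Post[] {
      const picked: Post[] = [];
      for (const posts of postsByAccount.values()) {
        const recent = [...posts]
          .sort((a, b) => b.postedAt - a.postedAt) // newest first
          .slice(0, x);                            // per-account cap
        picked.push(...recent);
      }
      // The merged feed stays chronological, but no single account can flood it:
      // without the cap, one account posting 10x/day contributes ~70 of ~169
      // weekly posts (~40%) against 99 accounts posting once a week.
      return picked.sort((a, b) => b.postedAt - a.postedAt);
    }
    ```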



  • My anecdotal observation is the same. Most of my friends in Silicon Valley are using MacBooks, including some at fairly mature companies like Google and Facebook.

    I had a 5-year sysadmin career, dealing with some Microsoft stuff (especially identity/accounts/mailboxes through Active Directory and Exchange), but mainly doing Linux-specific work on headless servers, with desktop Linux at home.

    When I switched to a non-technical career field I went with a MacBook for my laptop daily driver on the go, and kept desktop Linux at home for about 6 or 7 more years.

    Now, basically a decade after that, I’m pretty much only driving MacOS on a laptop as my normal OS, with no desktop computer (just a docking station for my Apple laptop). It’s got a good command line, I can still script things, and I can still rely on a pretty robust FOSS software repository in Homebrew. The filesystem in MacOS also makes a lot more sense to me than the Windows lettered drives and reserved/specialized folders I can never remember anymore. And nothing beats the hardware (battery life, screen resolution, touchpad feel, lid hinge quality), in my experience.

    It’s a balance. You want the computer to facilitate your actual work, but you also don’t want to spend too much time and effort administering your own machine. So the tradeoff is between the flexibility of doing things your way and outsourcing a lot of it to the maintainers’ defaults (whether you’re on Windows, MacOS, or a specific desktop environment in Linux), while staying mindful of whether your own tweaks will break on some update.

    So it’s not surprising to me when programmers/developers happen to be issued a MacBook at their jobs.



  • Installing MacOS on Intel Macs is really easy if you still have your recovery partition. It’s not hard even if you’ve overwritten the recovery partition, so long as you can image a USB drive with a MacOS installer (which is trivial if you have another Mac running MacOS).

    I haven’t messed around with the Apple silicon versions, though. Maybe I’ll give it a try sometime; used M1 MacBooks are selling for pretty cheap.



  • Which is such a high dollar amount that this simply cannot be USD.

    So I haven’t used Windows on my own machines in about 20 years, but back when I built my own PCs that seemed about right. I looked up the price history, and I hadn’t realized that Microsoft reduced its license prices around Windows 8.

    I remember 20 years ago, Windows XP Home was $199 and Professional was $299 for a new license on a new computer. Vista and 7 were similarly priced.

    Since Windows 8, though, I just don’t understand their pricing or licensing terms.


  • I think back to the late 90’s investment in rolling out a shitload of telecom infrastructure, with a bunch of telecom companies building out lots and lots of fiber. Perhaps even more important than the physical fiber were the poles, conduits, and other physical infrastructure housing that fiber, which could be upgraded as each generation of tech was released.

    Then, in the early 2000’s, that industry crashed. Nobody could make their loan payments on the things they had paid billions to build, and it wasn’t profitable to charge people for the use of those assets while paying interest on the money borrowed to build them, especially after the dot-com crash, when all the internet startups no longer had unlimited budgets to throw at them.

    So thousands of telecom companies went into bankruptcy and sold off their assets. Those fiber links and routes still existed, but nobody turned them on. Google quietly acquired a bunch of “dark fiber” in the 2000’s.

    When the cloud revolution happened in the late 2000’s and early 2010’s, the telecom infrastructure was ready for it. The companies that built that stuff were no longer around, but what they built finally became useful. Not at the prices paid to build it, but when purchased in a fire sale, those assets could be profitable again.

    That might happen with AI. Early movers overinvest and fail, leaving what they’ve developed to be used by whoever survives. Maybe the tech never becomes worth what was paid for it, but once it’s built, whoever buys it for cheap might be able to profit at that lower price, and it might prove useful in a more modest, realistic scope.


  • For example, as a coding assistant, a lot of people quite like them. But as a replacement for a human coder, they’re a disaster.

    New technology is best when it can meaningfully improve the productivity of a group of people so that the group can shrink. The technology doesn’t take any one identifiable job, but now an organization of 10 people, set up in a way that’s conscious of the technology’s capabilities and limitations, can do what used to require 12.

    A forklift and a bunch of pallets can make a warehouse more efficient, when everyone who works in that warehouse knows how the forklift is best used, even when not everyone is a forklift operator themselves.

    Same with a white-collar office where there’s less need for people physically scheduling things and taking messages, because everyone knows how to use an electronic calendar and email system for coordinating those things. There might still be a need for pooled assistants and secretaries, but maybe not as many in any given office as before.

    So if we want an LLM to chip in and reduce the amount of time a group of programmers needs to put out a product, the manager of that team, and all the members of that team, need to have a good sense of what that LLM is good at and what it isn’t. Obviously autocomplete was a productivity enhancer long before LLMs came around, and extensions of that general concept may be helpful for the more tedious or repetitive tasks, but any team that uses it will need to do so with full knowledge of its limitations and where it best supplements the humans’ own tasks.

    I have no doubt that some things will improve and people will find workflows that leverage the strengths while avoiding the weaknesses. But it remains to be seen whether it’ll be worth the sheer amount of money spent so far.