We used to make these all the time as kids. You need a second pair of magnets in push configuration on the back for this to work.
I would 100% volunteer to be the first person to cross the event horizon of a spinning supermassive black hole, just to see what’s on the other side.
Like yeah it’s guaranteed to be a one-way trip and probably a horrible death, but there’s also the possibility that it’s actually a gateway to alternate universes, and that’s something I’d give anything to see with my own eyes.
I played Powerslave: Exhumed but it wasn’t quite the same game I remembered playing. I think it’s more of a “reimagined” version than a remastered one.
I’m hopeful that this means the game could be ported to modern platforms so it doesn’t have to be run in DOSbox.
TFW you post propaganda using a picture of an American vehicle
I like how you’ve deliberately ignored the specifically chosen wording of my statement, and completely disregarded the rest of my point, simply because you perceive it as counter-factual in your world-view, thus exhibiting the exact kind of behavior you were talking about. That’s really funny.
A neurotypical human mind, acting rationally, is able to remember the chain of thought that led to a decision, understand why they reached that decision, find the mistake in their reasoning, and start over from that point to reach the “correct” decision.
Even if they don’t remember everything they were thinking about, they can reason based on their knowledge of themselves and try to reconstruct their mental state at the time.
This is the behavior people expect from LLMs, without understanding that it’s something LLMs are fundamentally incapable of.
One major difference (among many others, obviously) is that AI models as currently implemented don’t have any kind of persistent working memory. All they have for context is the last N tokens they’ve generated, the last N tokens of user input, and any external queries they’ve made. All the intermediate calculations (the “reasoning”) that led to them generating that output are lost.
Any instance of an AI appearing to “correct” its mistake is just the model emitting what it predicts a correction would look like, given the current context window.
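The statelessness described above can be sketched in a few lines. This is a toy illustration, not any real vendor’s API: the conversation history is re-sent on every call, truncated to the last N whitespace-separated “tokens” (the `MAX_CONTEXT` value and the `visible_context` helper are made up for the example).

```python
# Toy sketch: a "chat" loop is stateless. Every call re-sends only the
# visible history, truncated to the last N tokens. Anything that falls
# off the front -- including the model's earlier "reasoning" -- is gone.

MAX_CONTEXT = 8  # hypothetical context window size, in whitespace tokens


def visible_context(history, max_tokens=MAX_CONTEXT):
    """Return only the last `max_tokens` tokens of the conversation."""
    tokens = " ".join(history).split()
    return tokens[-max_tokens:]


history = [
    "step1: assume x=2",
    "step2: so x+1=3",
    "user: why 3?",
]

# The model answering "why 3?" sees only these surviving tokens; the
# internal activations that produced step1/step2 no longer exist.
print(visible_context(history))
```

Note that the earliest token (`step1:`) has already fallen out of the window by the third turn; a real model in this position can only guess at its own prior reasoning from whatever text remains.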
Humans also learn from their mistakes and generally make efforts to avoid them in the future, which doesn’t happen for LLMs until that data gets incorporated into the training for the next version of the model, which can take months to years. That’s why AI companies are trying to capture and store everything from user interactions, which is a privacy nightmare.
It’s not a compelling argument to compare AI behavior to that of a dysfunctional human brain and go “see, humans do this too, teehee!” Not when the whole selling point of these things is that they’re supposed to be smarter and less fallible than most humans.
I’m deliberately trying not to be ableist in my wording here, but it’s like saying, “hey, you know what would do wonders for productivity and shareholder value? If we fired half our workforce, then found someone with no experience, short-term memory loss, ADHD and severe untreated schizophrenia, then put them in charge of writing mission-critical code, drafting laws, and making life-changing medical and business decisions.”
I’m not saying LLMs aren’t technically fascinating and a breakthrough in AI development, but the way they have largely been marketed and applied is scammy, misleading, and just plain irresponsible.
All aboard the enshittification train! Choo choo!
I mean, it’s been well underway for a while now but this is certainly a transfer over to an express train.
I get scoffed at every time I call LLMs “glorified auto-correct” so it’s nice being validated.
Anyone who actually has a grasp of how Large Language Models work should not be surprised by this, but too many people, even engineers who should really know better, have drunk the Kool-aid.
A triumphant return to the series’ roots with the exact same game-breaking bugs as Battlefield 3 had. Nice job, EA.
Normalize not naming new languages with a single letter.
Yeah, but the malware can just wait for a system upgrade where you sign a new boot image and slip itself in then.
It works for Windows because theoretically only Microsoft would have the signing key and it’s not just sitting on disk somewhere. But then you’re just trusting Microsoft, and also subject to vendor lock-in.
Actually, I would love for you to explain to me how Secure Boot alone would protect someone from any of that. If you want to protect files, you need full disk encryption, not Secure Boot.
Or are you seriously expecting a government-level threat actor to bother to:
That’s the great thing about fascist governments, is they have no need to be that sneaky. They can just change the laws to make whatever you’re doing illegal and jail you until you agree to give up your documents, or simply hit you with a $5 wrench until you tell them the password.
For a home desktop that’s never left unattended with anyone untrustworthy, I don’t see that Secure Boot is worth the effort in setting up.
Given that you have to re-sign the boot image every time you upgrade, any malware already running with root privileges on the machine could easily slip itself into the new signed image.
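To make the re-signing step concrete: on a typical sbctl-based setup (this is a sketch, assuming Arch Linux with sbctl installed; the hook path and target are illustrative), the kernel gets re-signed automatically after every upgrade by a hook like this:

```ini
; /etc/pacman.d/hooks/95-sbctl-sign.hook  (illustrative sketch)
[Trigger]
Operation = Install
Operation = Upgrade
Type = Path
Target = boot/vmlinuz-linux

[Action]
Description = Re-signing kernel for Secure Boot
When = PostTransaction
Exec = /usr/bin/sbctl sign -s /boot/vmlinuz-linux
```

The point of the comment above follows directly: anything running as root can modify `/boot/vmlinuz-linux` (or the hook itself) before `sbctl sign` runs, and the tampered image gets signed along with everything else.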
The best security is not running untrusted software to begin with.
No information on the 9000 series, why? Kinda sus.
Imagine how many emergency room visits could be avoided every year if they just taught this in sex ed class.
I don’t DM and tell.
Thank you boo
I don’t even need to read the message to know it’s a scam. No one ever DMs me otherwise.
Kinder, the Brookings fellow, said she worries that companies soon will simply eliminate the entire bottom rung of the career ladder.
What the fuck do they think is gonna happen when the current seniors start to retire? Are they just betting that AI is gonna be good enough to replace all of them then?
Cue all these companies in 5-20 years’ time having to completely rewrite their software stacks because they have no fucking clue how any of it works anymore.
Big if true, but this unfortunately just seems like wild speculation.
There are articles going back to before the election talking about how Trump hadn’t been seen in several days. I couldn’t find anything more recent than June. He apparently has a habit of flaking on commitments, which doesn’t surprise me in the slightest.