different language auto-correct really didn’t help my inborn lack of spelling, and I (apparently) didn’t even glance back to check what I wrote.
I haven’t had a case that didn’t mirror the camera bump in a long while, unfortunately.
Side note: basically every smartphone out there has orientation sensors, so it should be just as easy as downloading a bubble level app from the app store.
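The math behind such an app is tiny: the accelerometer reports the gravity vector, and the tilt is just the angle between that vector and the device’s z-axis. A minimal sketch (the function name and raw readings are hypothetical; a real app would read the platform sensor API and low-pass-filter the values):

```python
import math

def tilt_deg(ax: float, ay: float, az: float) -> float:
    """Tilt from level, given raw accelerometer readings (m/s^2).

    0 degrees when the phone lies flat (gravity entirely on z),
    90 degrees when it stands upright on an edge.
    """
    return math.degrees(math.atan2(math.hypot(ax, ay), az))
```

A phone lying flat reads roughly (0, 0, 9.81) and gives ~0°; standing upright it reads roughly (9.81, 0, 0) and gives ~90°.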
not when almost every phone has a camera bump, volume rockers and a power button.
i.e. no long flat sides, that still allow you to see the screen.
When asked about Nintendo’s solution for backwards compatibility with Switch games and the GameCube classics available on the system, the developers confirmed these games are actually emulated. (This is similar to what Xbox does with backwards compatibility).
“It’s a bit of a difficult response, but taking into consideration it’s not just the hardware that’s being used to emulate, I guess you could categorize it as software-based,” Sasaki said of the solution.
They are (mostly?) talking about GameCube here, right?
Or is that the reason for the Switch-emulator witch hunt: they actually “bought” the tech?
I doubt that it’ll really have killer features.
You’ll most likely be able to exchange the dual-hotend toolhead for a laser toolhead, it’ll have a heated AMS, and it may have a vinyl-cutter head.
I don’t really think I’d want to laser on my heated bed, or cut on it either. The fumes from lasering will impact the durability of anything in the printer, and without really serious ventilation it will produce lots of dust (well, ash).
Cutting on the same head is weird, as a cutter needs to resist a bit of cutting force.
The dual-nozzle design is interesting, but I think it’s still vastly inferior to multiple toolheads; with anything over 2 materials there is still filament cutting required. Depending on how they solved feeding the two hotends, I’m not sure how there won’t be quite a bit of added complexity when loading the AMS, where you have to think about which head needs which filament.
Using a single extruder gear for both hotends also increases the risk of cross-contamination. I’ve never had a printer that didn’t occasionally chew filament.
Moving the hotends on linear rails and having a mechanical drop-stopper on the hotend all increase complexity; I’m not sure how bad blobs of doom will get here.
If they actually use their touted servo design on the CoreXY kinematics, that will be interesting, because conventional wisdom says it doesn’t really improve 3D-printing performance. At least not until you get to ridiculous builds (think Minuteman).
Cost will be interesting: apparently the H2D was touted to “be above the current X1 line”, and if that were to include the X1E and its $2500 price tag it would be… rather expensive.
But even if it’s “just” more expensive than the X1C at $1200/$1450, coming to… idk, $1500 in its bare configuration, that’s a rather big chunk of change for a hobbyist. And they will (hopefully) have lost lots of enthusiasts with their firmware stunt.
Something kinda cool that could theoretically be done is print smoothing with the laser: print it, change the tool, then laser (at least) the stair-stepping on the top away.
I’ve had partial clogs that manifest like that.
Cold pulls (several) ended up resolving my issue.
My best explanation was that there was some debris in the nozzle, which would sometimes (nearly) seal the nozzle, and at other times be retracted with the filament and get stuck somewhere else, so that filament flowed freely.
The whole idea is they should be safer than us at driving. It only takes fog (or a painted wall) to conclude that won’t be achieved with cameras only.
Well, I do still think that cameras could reach “superhuman” levels of safety.
(Very dense) fog makes the cameras useless; a self-driving car would have to slow way down or shut itself off. If the cameras are part of a variety of inputs, they drop out as well, reducing the available information. How would you handle that then? If the car would have to drop out / slow down just as much, you gain nothing again. /e: my original interpretation is obviously wrong; you get the additional information whenever the environment permits.
As for the painted wall: cameras should be able to detect that. It’s just that Tesla presumably hasn’t implemented defenses against active attacks yet.
You had a lot of hands in this paragraph. 😀
I like to keep spares on me.
I’m exceptionally doubtful that the related costs were anywhere near this number.
Cost has been developing rapidly. Pretty sure several years ago (about when Tesla first started announcing it would be ready in a year or two) it was in the tens of thousands. But you’re right, more current estimates seem to be more in the range of $500-2000 per unit, and 0-4 units per car.
it’s inconceivable to me that cameras only could ever be as safe as having a variety of inputs.
Well, diverse sensors always reduce the chance of confident misinterpretation.
But they also mean you can’t “do one thing, and do it well”: now you have to do 2-4 things (camera, lidar, radar, sonar) well. If one were to get to the point of having either one really good data source or four really shitty ones, it becomes conceivable to me.
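Back-of-envelope, with made-up numbers and the (unrealistic) assumption that failure modes are independent: the chance that every sensor is fooled at once is the product of the individual chances, which is why four mediocre sensors can still rival one very good one.

```python
from functools import reduce

def p_all_fooled(p_each: list[float]) -> float:
    """Probability that every sensor misreads simultaneously,
    assuming independent failure modes (a strong assumption:
    fog degrades camera and lidar at the same time)."""
    return reduce(lambda a, b: a * b, p_each, 1.0)

p_all_fooled([0.001])                # one very good sensor
p_all_fooled([0.1, 0.1, 0.1, 0.1])  # four mediocre ones, ~1e-4
```

In practice the independence assumption is exactly what breaks down in fog, which is the point of the comment above.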
From what I remember there is distressingly little oversight for allowing self-driving cars on the road, as long as the company is willing to be on the hook for accidents.
well, Apollo was not part of the Saturn program, was it?
The rocket did fine; even during Apollo 13 it wasn’t the Saturn that failed.
They talk about the safety record of Saturn rockets without mentioning that using those isn’t currently possible
And that, at least from my memory, multiple people in the Saturn program considered it extremely good luck not to have had a failure that led to deaths.
Judging by the fact that he has an Imagineer video out (effectively) at the same time as the Space Mountain mapping, I’d expect that Disney was fully aware of what he was doing, and the whole sneaky thing was just to make it more appealing to viewers.
They do.
But “all self-driving cars” are practically only from Waymo.
Level 4 autonomy is the point at which a human is no longer required to be able to intercede at any moment, and as such no longer has to be actively paying attention and sober.
Tesla is not there yet.
On the other hand, this is an active attack against the technology.
Mirrors or any super-absorber (possibly Vantablack or similar) would fuck up LIDAR. Which is a good reason for diversifying the sensors.
Then again, I can understand Tesla going “Humans use visible light only; in principle that has to be sufficient for a self-driving car as well”, because in principle I agree. In practice… well, while this seems much more like click-bait than an actual issue for a self-driving taxi, diversifying your input chain makes a lot of sense in my book. On the other hand, if it cost me $20k more down the road and cameras reached the same safety, I’d be a bit pissed.
And, with Alzheimer’s, he might have gone through grief and panic multiple times.
Shining3D also makes professional-level scanners that cost as much as a car, so I banked on them putting some of that expertise into their consumer models and went with the Einstar.
Seems like you were correct.
The hardware is fine. If you’re not experienced, the 3000 km will fuck you though. Stuff will arise where you will need to get at it.
I’ve been using two laptops as “servers” for years.
well, the first one died after about 6 years of use.
But I can get at them reasonably.
That is more about editing and good lighting.
there are always layer lines.
Depending on your needs I would recommend:
Voron (DIY, will take about a week to build and a week to tune),
Prusa (depends on your preference, assemble yourself or prebuilt; depending on how much time you have, a MK4S with MMU3, or the Core One, which will take several months to get to you, with the MMU3 added later once it becomes compatible),
Qidi (cheap, Chinese, will likely work decently after some tuning).
there isn’t a “best” really. Depends on your wants/needs.
For true openness, I don’t think anything beats a Voron.
Prusa is great (good track record, good support; hardware is not open anymore though; can work out of the box), but expensive.
Plenty of others get you something largely decent for low prices (Qidi, Creality, for example), but long-term support seems likely to be lacking, and there are always reports of issues for some and great results for others.
The security argument doesn’t hold water when you’re pushed toward cloud use for transmitting data where your own network cable would suffice.
Define APIs and API keys (local and cloud).
Instant safe communication, local and/or cloud.
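For what it’s worth, “safe communication over your own network cable” is a solved problem: a pre-shared API key and an HMAC already give you authenticated local commands, no cloud round-trip needed. A minimal sketch (the key and payloads are made up):

```python
import hashlib
import hmac

API_KEY = b"example-local-key"  # hypothetical pre-shared key

def sign(payload: bytes) -> str:
    """Tag a command so the printer can verify who sent it."""
    return hmac.new(API_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    """Constant-time check on the printer side."""
    return hmac.compare_digest(sign(payload), tag)
```

A real setup would also want TLS or a nonce against replay attacks; the point is only that none of this requires a vendor cloud in the middle.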
I don’t see it this way, for multiple reasons.
If my understanding is correct, they are (imho) misleading, if not lying, in this post, when they say:
these claims are entirely false:
Bambu Lab will remotely disable your printer (“brick” it).
Firmware updates will block your printer’s ability to print.
But they integrate a certificate which has a validity date.
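Concretely, a baked-in certificate expiry means the device eventually stops trusting its own connections unless an update ships a fresh certificate. A sketch of the effect, using the text form of notAfter that ssl.SSLSocket.getpeercert() returns (the dates here are made up):

```python
import ssl
import time

def cert_expired(not_after: str, now=None) -> bool:
    """True once the clock passes the certificate's notAfter date.

    not_after uses the OpenSSL text format, e.g.
    'Jan 5 09:34:43 2030 GMT'.
    """
    if now is None:
        now = time.time()
    return now > ssl.cert_time_to_seconds(not_after)

cert_expired("Jan 5 09:34:43 2018 GMT")  # already past: True
cert_expired("Jan 5 09:34:43 2038 GMT")  # still in the future: False
```

Whether the firmware actually refuses to work at that point is up to the vendor, which is exactly the lock-in concern.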
Once that update is on, you’re kind of locked to their releases. Yes, after the backlash they have now realized that they were putting up the walls a bit too quickly. But I do not see anything in there that says “we were wrong to do it this way”, which they were.
There is little reason to put, by default, the cloud in between your PC and your printer, which may sit 2 m or less apart. That never makes anything more secure.
Neat.
Curious to see whether the reliability of ejection (and adhesion) will really end up being worth the added mechanical complexity.
Remember, most prints take hours, and taking them off the plate takes… a few seconds to minutes.
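To put numbers on that (all made up): the seconds saved per removal are noise next to the print time, so the only way auto-ejection pays off is if it lets the printer start the next queued job while nobody is around.

```python
def prints_per_day(print_hours: float, idle_hours: float) -> float:
    """Jobs per day when each print sits idle for idle_hours
    until a human clears the plate and starts the next one."""
    return 24.0 / (print_hours + idle_hours)

prints_per_day(4.0, 6.0)  # cleared by hand ~6 h later: 2.4/day
prints_per_day(4.0, 0.0)  # auto-ejected immediately: 6.0/day
```

So the feature is really about unattended queueing, not about the removal step itself.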
The US military-industrial complex is reading this as “Oh damn, we need to build much more of these then”.