Look into dead reckoning vs. lockstep for networking. Lockstep requires determinism at the simulation layer; dead reckoning is much more tolerant of differences and latency. Quake and most action games tend to use dead reckoning (with more modern ones adding time rewind and some other neat tricks).
It's very common that replay/demo functionality uses the network stack if it's present in a game.
I used to be a professional sailor, and love finding nautical terminology in programming. At sea, dead reckoning is navigating using the ship's speed and heading, adding tide and wind, to calculate a fix based on the last known position. The term dates back to the 1600s.
It is fun to point at a chart and confidently state “We’re here! I reckon...”
There's a book I read a while back named "Longitude" that chronicles the storied scientific quest to improve upon dead reckoning by devising ever more accurate timepieces for use on ships. IIRC it was a fun read, if anyone else finds that sort of thing interesting (as I do).
It's a great read! A story of how the scientific elite stalled progress because the right answer wasn't the one they hoped it would be, and didn't come from the sort of person they thought it should.
If you get the chance, you can see some of Harrison's chronometers at the Royal Observatory in London, though I don't know if they're always on display.
I'll add a recommendation for Sextant by David Barrie.
"Build Your Own Metal Working Shop From Scrap" by David Gingery which covers everything from building a foundry to making all your tools from first principles using nothing but river sand and junk metal for smelting.
"On Trails" by Robert Moore that discusses how walking paths from the first peoples persist, grow and change over hundreds of years, along with advances in walking trail design in recent years to become a part time recreational activity vs the pure utility of terrain traversal as they first were. Covers how a trail is a "living thing", as it were, because any who tread on it help reinforce it. Covers non human trails like ants and their reenforcement via pheromones and the like.
An interesting thing about a lockstep solution which only considers inputs is that any RNG required in the game must be generated from the input history somehow. This could lead to players being able to manipulate their luck with extremely precise inputs.
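A toy sketch of what input-derived RNG looks like; the hashing scheme here is purely illustrative, not any particular engine's approach:

```python
import hashlib
import random

def roll_from_inputs(input_history):
    """Derive an RNG roll purely from the input history, as a lockstep
    simulation with no external entropy source would have to."""
    digest = hashlib.sha256(repr(input_history).encode()).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    return rng.randint(1, 100)

# Identical input histories always produce identical "luck", so a player
# who can reproduce exact inputs can reproduce (and thus hunt for) the roll.
roll_a = roll_from_inputs([("p1", "jump", 120), ("p2", "attack", 121)])
roll_b = roll_from_inputs([("p1", "jump", 120), ("p2", "attack", 121)])
assert roll_a == roll_b
```

This is exactly why precise inputs become a luck-manipulation tool: varying an input by one frame reseeds the roll.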
The other interesting trick is that you need a separate RNG for visual-only effects such as particles, distinct from the one you use for the physics simulation. Depending on the game, during replays you could position the camera differently, and then particle effects would render differently depending on what's on screen. Obviously that shouldn't affect the way objects decide to break during the physics simulation.
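A minimal sketch of the two-stream idea (class and field names are hypothetical):

```python
import random

class GameRng:
    """Two independent streams: one for the deterministic physics
    simulation, one for visual-only effects such as particles.
    Camera-dependent particle spawns only drain the visual stream,
    so the simulation stays bit-identical across clients and replays."""
    def __init__(self, seed):
        self.sim = random.Random(seed)   # synchronized, checked for desyncs
        self.visual = random.Random()    # free-running, never synchronized

rng_client = GameRng(seed=42)
rng_server = GameRng(seed=42)

# The client renders particles (visual stream); the headless server does not.
_ = [rng_client.visual.random() for _ in range(100)]

# The simulation stream is unaffected and the two machines still agree.
assert rng_client.sim.random() == rng_server.sim.random()
```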
That could lead to other subtle problems elsewhere, though, because it requires synchronizing the seed. If you can't do that, it could lead to problems, e.g. when comparing offline speedruns where everyone would have a different seed: some players could have more luck than others even with the same inputs, which would be unfair. (Though I can't think of anything else at the moment.)
If you synchronize the seed at game start for speedruns, the seed is the same for everyone, and players can again manipulate their luck, so nothing was gained.
If you run a game entirely between colluding parties, cheating speedrunners can just hack it to do whatever they want anyway. See the Dream Minecraft thing from several years back. Speedrunning claims may be cheated in a thousand ways. It's up to the people who care about it to establish and enforce rules.
But if you're running a multiplayer game with random elements and aren't colluding, you don't have to let a malicious party set the RNG seed to whatever they like just because you agree on it at game start. There are any number of simple cryptographic protocols that allow each peer to contribute equally to the RNG state by having a separate commitment phase. And it's a lot easier to run a quick cryptographic setup than it is to do constant input-driven adjustment.
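A minimal commit-reveal sketch of that commitment phase (protocol details are illustrative, not any particular engine's implementation):

```python
import hashlib
import secrets

def commit(contribution: bytes) -> bytes:
    """Commitment = hash of the contribution; reveals are checked against it."""
    return hashlib.sha256(contribution).digest()

# Phase 1: each peer picks a random contribution and sends only its commitment.
contrib_a, contrib_b = secrets.token_bytes(32), secrets.token_bytes(32)
commit_a, commit_b = commit(contrib_a), commit(contrib_b)
# ...peers exchange commit_a / commit_b over the network here...

# Phase 2: contributions are revealed and verified against the commitments.
assert commit(contrib_a) == commit_a and commit(contrib_b) == commit_b

# Shared seed: neither peer could choose it unilaterally, because each had
# to commit before seeing the other's contribution.
seed = int.from_bytes(hashlib.sha256(contrib_a + contrib_b).digest()[:8], "big")
```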
Typical deterministic game engines will do this: generate the seed once, send it to every machine as part of the initial game state, and then check it across machines on every simulation frame (or periodically) to detect desyncs.
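A sketch of the kind of per-frame check that catches desyncs; the state layout here is hypothetical, but the pattern (hash your state, exchange hashes, compare) is the general one:

```python
import hashlib

def state_checksum(frame, rng_seed, entities):
    """Cheap per-frame checksum of the simulation state; peers exchange
    these small digests and compare to catch desyncs early."""
    h = hashlib.sha256()
    h.update(frame.to_bytes(8, "little"))
    h.update(repr(rng_seed).encode())
    for e in sorted(entities):   # iteration order must itself be deterministic
        h.update(repr(e).encode())
    return h.hexdigest()[:16]

# Two machines in lockstep agree...
a = state_checksum(frame=100, rng_seed=42, entities=[("crate", 3, 7)])
b = state_checksum(frame=100, rng_seed=42, entities=[("crate", 3, 7)])
assert a == b
# ...until their state differs, which the checksum exposes immediately.
c = state_checksum(frame=100, rng_seed=42, entities=[("crate", 3, 8)])
assert a != c
```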
DonHopkins on Feb 16, 2022, on: Don't use text pixelation to redact sensitive info...
When I implemented the pixelation censorship effect in The Sims 1, I actually injected some random noise every frame, so it made the pixels shimmer, even when time was paused. That helped make it less obvious that it wasn't actually censoring penises, boobs, vaginas, and assholes, because the Sims were actually more like smooth Barbie dolls or GI-Joes with no actual naughty bits to censor, and the players knowing that would have embarrassed the poor Sims.
The pixelized naughty bits censorship effect was more intended to cover up the humiliating fact that The Sims were not anatomically correct, for the benefit of The Sims own feelings and modesty, by implying that they were "fully functional" and had something to hide, not to prevent actual players from being shocked and offended and having heart attacks by being exposed to racy obscene visuals, because their actual junk that was censored was quite G-rated. (Or rather caste-rated.)
But when we later developed The Sims Online based on the original The Sims 1 code, its use of pseudo random numbers initially caused the parallel simulations running in lockstep on the client and headless server to diverge (causing terribly subtle, hard-to-track-down bugs), because the headless server wasn't rendering the randomized pixelization effect but the client was. So we had to fix the client to use a separate user interface pseudo random number generator that didn't have any effect on the simulation's deterministic pseudo random number generator.
[4/6] The Sims 1 Beta clip ♦ "Dana takes a shower, Michael seeks relief" ♦ March 1999:
(You can see the shimmering while Michael holds still while taking a dump. This is an early pre-release so he doesn't actually take his pants off, so he's really just sitting down on the toilet and pooping his pants. Thank God that's censored! I think we may have actually shipped with that "bug", since there was no separate texture or mesh for the pants to swap out, and they could only be fully nude or fully clothed, so that bug was too hard to fix, closed as "works as designed", and they just had to crap in their pants.)
The other nasty bug involving pixelization that we did manage to fix before shipping, but that I unfortunately didn't save any video of, involved the maid NPC, who was originally programmed by a really brilliant summer intern, but had a few quirks:
A Sim would need to go potty, and walk into the bathroom, pixelate their body, and sit down on the toilet, then proceed to have a nice leisurely bowel movement in their trousers. In the process, the toilet would suddenly become dirty and clogged, which attracted the maid into the bathroom (this was before "privacy" was implemented).
She would then stroll over to the toilet, whip out a plunger from "hammerspace" [1], thrust it into the toilet between the pooping Sim's legs, and proceed to move it up and down vigorously by its wooden handle. The "Unnecessary Censorship" [2] strongly implied that the maid was performing a manual act of digital sex work. That little bug required quite a lot of SimAntics [3] programming to fix!
Multi-app works pretty well too; when I need to cross-reference between apps, throwing them up on the split halves is way better than swapping back and forth.
FlatBuffers lets you mmap directly from disk; that trick alone makes it really good for use cases that can take advantage of it (fast access to read-only data). If you're clever enough to tune the ordering of fields, you can give it good cache locality and really make it fly.
We used to store animation data in mmapped FlatBuffers at a previous gig and it worked really well. The kernel would happily prefetch on access and page out under pressure; we could have tens of MBs of animation data and only pay a couple hundred KB based on access patterns.
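A rough illustration of the access pattern in Python, using a toy fixed-layout file rather than a real FlatBuffer (the FlatBuffers API differs, but the mmap behavior is the same: map the file and read fields in place, paying only for the pages you touch):

```python
import mmap
import os
import struct
import tempfile

# Write a toy "animation" file: a frame-count header, then one float per frame.
path = os.path.join(tempfile.mkdtemp(), "anim.bin")
frames = [0.0, 0.5, 1.0, 0.75]
with open(path, "wb") as f:
    f.write(struct.pack("<I", len(frames)))
    f.write(struct.pack(f"<{len(frames)}f", *frames))

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    count = struct.unpack_from("<I", mm, 0)[0]
    # Reads happen directly against the mapping: only the pages actually
    # touched are faulted in, and the kernel can page them out under pressure.
    third = struct.unpack_from("<f", mm, 4 + 2 * 4)[0]  # frame index 2
    mm.close()

assert count == 4 and third == 1.0
```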
One capability mechanism that's in wide use but not really well known or touched on in the article is Android's RPC mechanism, Binder (and a lot of its history predates Android, from what I recall).
Binder handles work just like object capabilities: you can only use what's sent to you, and a process can delegate other Binder handles.
Android hides most of this behind its permission model, but the capabilities still exist and can be implemented by anyone in the system.
Yes, and macOS/iOS have XPC, which is similar to Binder. Binder is a BeOS-era thing; parts of Android were written by former Be engineers, so the API terminology is the same (binders, loopers, etc.).
Binder is also somewhat like Mojo in that you can do fast in-process calls with it, iirc. The problem is that, as you note, this isn't very useful in the Android context because within a process there's no way to keep a handle private. Mojo's ability to move code in and out of processes actually is used by Chrome extensively, usually either for testing (simpler to run everything in-process when debugging) or because not every OS it runs on requires the same configuration of process networks.
That only applies when dynamic dispatch is involved and the linker can't trace the calls. For direct calls and generics (which idiomatic Rust code tends to prefer over dyn traits), LTO will prune extensively.
Depends on what is desired; in this case it would fail (through the `?`) and report that it's not a valid HTTP URI. This would be for a generic parsing library that allows multiple schemes to be parsed, each with its own parsing rules.
If you want to mix schemes you need to be able to handle all of them: either go through all the variations you want to test (through the same generics), or just accept that you need a full URI parser and lose the generic.
See, the trait system in Rust actually forces you to discover your requirements at a very core level. It is not a bug but a feature. If you need HTTPS, then you need to include the code to do HTTPS, of course, and then LTO shouldn't remove it.
If your library cannot parse FTP, either you enable that feature, add that feature, or use a different library.
That assumes people know what they're doing in C/C++. I've seen just as many bloated codebases in C++, if not more, because the defaults for most compilers are not great, and it's very easy for things to get out of hand with templates, excessive use of dynamic libraries (which inhibit LTO), or using shared_ptr for everything.
My experience is that Rust guides you towards defaults that tend not to hit those things, and for the cases where you really do need that fine-grained control, unsafe blocks with direct pointer access are available (and I've used them when needed).
Is there a name for a fallacy like "appeal to stupidity" or something where the argument against using a tool that's fit for the job boils down to "All developers are too dumb to use this/you need to read a manual/it's hard" etc etc?
I think there is something to be said for having good defaults and tools that don't force you to stay on top of every last detail, lest things get out of control.
It also depends on the team. Some teams have a high density of seasoned experts who've made the mistakes and know what to avoid, but I think the history of memory vulnerabilities shows that it's very hard to keep that bar consistent across large codebases or dispersed teams.
This is ultimately the crux of the issue. If Google, Microsoft, Apple, whatever, cannot manage to hire engineers that can write safe c/c++ all the time (as has been demonstrated repeatedly), it’s time to question whether the model itself makes sense for most use cases.
Grandparent can't argue that these top-tier engineers aren't RTFMing here. Of course they are. Even after reading the manual they still cannot manage to write perfectly safe code, because it is extremely hard to do.
Personally, my argument would be that the problems at the low level are just hard problems, and doing them in Rust you'll trade one set of problems (memory safety) for another, probably unexpected behaviour with memory layouts and lifetimes at the very low level.
It's not that all developers are dumb/stupid. It's that even the smartest developers make mistakes and thus having a safety net that can catch damaging mistakes is helpful.
I've read several posts here where people say things like "this is badly designed because it assumes people read the documentation".
???????
Yes you need to read the docs. That is programming 101. If you have vim set up properly then you can open the man page for the identifier under your cursor in a single keypress. There is ZERO excuse not to read the manual. There is no excuse not to check error messages. etc.
Yet we consistently see people that want everything babyproofed.
On the other hand, there's no excuse for designers & developers (or their product manager, if that's the one in authority) not to work their ass off on the ergonomics/affordance of the tools they release to any public (be it end users or developers, which are the end users of the tool makers, etc.).
It benefits literally everyone: the users, the product's reputation and value, the builders' reputation, the support team, etc.
Do people read the docs? Often, no, they don't. So, are you creating tools for the people we have, or for the people you think we should have? If the latter, you are likely to find that your tool makes less impact than you think it should.
Computer languages are not tools for illiterates. You need to learn what you're doing. And yet, programmers do so less than we think they should. If we don't license programmers (to weed out the under-trained), then we're going to have to deal with languages being used by people who didn't read the docs. We should give at least some thought to having them degrade gracefully in that situation.
RAII is fine when it is the right tool for the job. Is it the right tool for every job? Certainly there are other more or less widely practiced approaches. In some situations you can come up with something that is provably correct and performs better (in space and/or time). Then there are just trade-offs.
WDT (watchdog timer) patterns are highly underrated. Even in pure software there's value in degrading and recovering gracefully vs. systems that have to be "perfect" 100% of the time and then force user intervention when they go wrong.
One of my favorite blog posts on the topic is https://ferd.ca/the-zen-of-erlang.html, which does a great job of covering how Erlang approached it, with lots of lessons that can be applied more broadly.
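A minimal software-watchdog sketch of the idea (the `Watchdog` class here is hypothetical, not from any particular library): if the worker stops "petting" the watchdog, a recovery action fires instead of the system hanging until a user intervenes.

```python
import threading
import time

class Watchdog:
    """Minimal software watchdog: if pet() isn't called again within
    `timeout` seconds, the recovery callback runs automatically."""
    def __init__(self, timeout, on_expire):
        self.timeout, self.on_expire = timeout, on_expire
        self._timer = None

    def pet(self):
        if self._timer:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self.on_expire)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        if self._timer:
            self._timer.cancel()

recovered = []
dog = Watchdog(timeout=0.1, on_expire=lambda: recovered.append("restart worker"))
dog.pet()
time.sleep(0.3)   # worker "hangs" and never pets the watchdog again
assert recovered == ["restart worker"]
dog.stop()
```

The same shape generalizes: Erlang supervisors are essentially this pattern applied to whole process trees, with restart instead of manual intervention as the default response to failure.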
Both are true, as praise is absolutely a currency that can be devalued like any other.
Given with abandon, even honestly, it loses its value to both the giver and the recipient. That's something many in management roles never appreciate, and it's one of the reasons some give it too sparingly: they've found that abundant praise loses its utility and come to the incorrect conclusion that scant praise is best.
You could already talk freely to the MQTT broker on the printer, and it was already secured with a unique password. This feels like making it a second-class feature that could disappear at a future point.
They've suffered real brand damage. Either set of changes (the original, or these) seems like it should win over unconvinced potential customers, yet they've actively turned some away.
I don't really see what having a "developer mode" offers here beyond the existing solution. The current MQTT is already locked down with a unique password, and AFAIK the endpoint was read-only anyway.
Don't get me wrong I'm glad they're responding to feedback but the feedback shouldn't have been required in the first place.
I'm all for better security on products (especially ones that heat up to 300°C!), but interoperability with open standards makes for a better product overall, and given the direction we've seen in the IoT space I think they've done quite a bit of damage (even if not intentionally) by not taking more care in this area.
Developer mode is just "how it works today" mode. It's insecure, and uses private APIs, and thus shouldn't be used, but people will anyway, so they're listening to their customers.
> Developer mode is just "how it works today" mode.
For LAN mode, yes, but for how the printer works, not exactly. Right now you can print from a 3rd-party slicer (Orca Slicer) and at the same time use the Bambu Handy mobile app to, for example, monitor the print. LAN mode disables the cloud connection (it always has), which means that with the revised changes you have to choose either a 3rd-party slicer _or_ an active cloud connection.
Adapting to Bambu Connect means passing the print files to another app, and it's unclear if that app will work without a cloud account. Plus Orca does lose functionality like controlling the printer directly from the slicer, and one of the critiques of the new system is needing a separate app in the first place.
If you mean that OrcaSlicer will reimplement Bambu Connect protocols inside — I doubt that will happen, since Bambu Connect is not open source, so this would involve reverse engineering the protocols and potentially including Bambu Connect certificates.