Seeing this article, and how much webextensions manage to mess up the browser, I'm wondering how bad this experiment would've been with the legacy XUL extensions. Maybe they had a point in getting rid of them...
The DS has you dealing with two cores you need to write firmware for, which have to communicate to do anything useful; a cartridge protocol to fetch any extra code or assets that won't all fit into RAM at runtime; instruction and data caches; an MMU; ...
And that's without mentioning some of the more complex peripherals like the touch screen and wifi.
All official games used the same firmware for one of the cores, a copy of which is embedded into every single cartridge ROM. There are some homebrew firmwares included in the respective SDKs, but they aren't well documented for standalone use.
Granted, none of the above is completely impossible, but think of how much code you'd need for a simple demo (button input, a sprite moving across the screen): the DS requires a nontrivial amount of code and knowledge to get started without an SDK, especially for a beginner. Meanwhile, you can do something similar in less than 100 lines of ASM/C on the GBA.
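To give a sense of scale, here's roughly what such a GBA demo looks like: a bare-metal sketch (register addresses from the GBA memory map, macro names my own) that moves a white pixel with the D-pad in the mode 3 bitmap. It assumes a standard homebrew startup file (crt0) to reach main(), and runs on hardware or an emulator like mGBA, not on a host PC:

```c
/* Minimal GBA demo sketch: mode 3 bitmap, move a pixel with the D-pad. */
#include <stdint.h>

#define REG_DISPCNT  (*(volatile uint16_t *)0x04000000)
#define REG_VCOUNT   (*(volatile uint16_t *)0x04000006)
#define REG_KEYINPUT (*(volatile uint16_t *)0x04000130)
#define VRAM         ((volatile uint16_t *)0x06000000)

#define MODE3     0x0003  /* 240x160, 15bpp bitmap */
#define BG2_ON    0x0400
#define KEY_RIGHT 0x0010
#define KEY_LEFT  0x0020
#define KEY_UP    0x0040
#define KEY_DOWN  0x0080

int main(void) {
    REG_DISPCNT = MODE3 | BG2_ON;
    int x = 120, y = 80;
    for (;;) {
        while (REG_VCOUNT >= 160) {}  /* wait until we leave vblank */
        while (REG_VCOUNT < 160) {}   /* then wait for the next vblank */
        VRAM[y * 240 + x] = 0;        /* erase old pixel */
        uint16_t keys = ~REG_KEYINPUT;  /* inputs are active-low */
        if ((keys & KEY_RIGHT) && x < 239) x++;
        if ((keys & KEY_LEFT)  && x > 0)   x--;
        if ((keys & KEY_DOWN)  && y < 159) y++;
        if ((keys & KEY_UP)    && y > 0)   y--;
        VRAM[y * 240 + x] = 0x7FFF;   /* draw white pixel */
    }
}
```

That's the whole program: no firmware, no cache or MPU setup, no cartridge protocol, just poking memory-mapped registers.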
Agreed. I spent a lot of time programming the GBA in the early 2000s (back when the state of the art devkit was a flash cartridge writer with parallel cable...) and I consider it the last "grounded" console that Nintendo made, where you immediately and directly get to touch hardware right off the bat, without any gyrations. After having worked with the SNES in the 90s the GBA was a very familiar and pleasant platform to experience, in many ways similar to and built upon the SNES' foundation.
I've never coded for the SNES, but the GBA having access to a mainline, modern C compiler is a massive buff. Also, emulators for it have always been available on practically any computer, console, and mobile phone, and there are many so-called "emulation handhelds" on the market in similar form factors. If you really need an upgraded OG experience, many upgrade kits for the original handheld exist as well.
None of this fixes the audio, but it sure gets damn close.
Just curious what you mean by "fixing the audio"? In GBA emulation or on the hardware?
I'm aware that if you need/want PCM audio, there's going to be mixing, probably with a software library, and significant CPU use for it. Is emulated GBA audio buggy?
One of my first gigs was Game Boy and Game Gear programming. I know the GBA allows DMG audio compatibility and, with all its constraints, well it sure does keep things simple. And emulation is reliable AFAIK.
I see what happened, I was replying to a different comment, that did mention the GBA audio, when I wrote that, but somehow ended up replying to this one.
The DS, more specifically its ARM946E-S, has an MPU, not an MMU (you're confusing it with the 3DS's ARM11). Not that it makes much of a difference anyway: you configure either once or twice, then leave them be.
Honestly, I think why the GBA is more popular than the DS for that kind of thing is because it only has one screen (much less awkward to emulate), has high-quality emulators that are mostly free of bugs (mGBA most notably), and its aspect ratio is better than the DS anyway (3:2 upscales really well on 16:10 devices). That is to say, it's much easier to emulate GBA software on a phone or a Steam Deck than it is to emulate DS software.
gah, you're right, I was thinking of memory protection (as in, marking the relevant regions as read-write and read-execute) when I wrote MMU.
It's of course optional, and you can ignore it for trivial examples, but most games and SDKs will tweak it all the time when loading additional code modules from the cartridge.
It's just another way in which the DS is more complex to use properly without an SDK to do this for you - there's just more to think about. At least compared to how the GBA lacks all of this and the entire cartridge is mapped into memory at all times.
I agree, the GBA is a pleasure to work with. It's just a shame that the poor quality of the (stock) screens, low resolution, and lousy sound hardware make it feel like such a downgrade from the otherwise gnarlier and technically inferior SNES.
There's a pretty big renaissance of GBA clones out there right now that put better screens and speakers to the platform. And of course with emulators you can get all the modern hardware affordances for the platform.
The screen can be improved, but the resolution and sound system can't be.
The issue with the sound isn't just the speakers - you could always use headphones, after all. The GBA only has the original GB's primitive PSG (two square waves, a noise channel, and a short programmable 4-bit waveform) plus two 8-bit PCM channels. 8-bit PCM samples are unavoidably noisy with lots of aliasing, and all sound mixing, sequencing, envelopes, etc. for those channels needs to be done in software, which tends to introduce performance and battery life constraints on quality, channel count, effects, and sample rate.
The SNES, by comparison, plays high-quality 16-bit samples at 32kHz, and everything GBA devs had to cut corners on is done in hardware: eight separate channels, no software mixing needed, built-in envelopes and echo.
Compare the SNES FFVI soundtrack to the GBA version; the difference is dramatic. Frankly, using high quality speakers or headphones just makes the quality difference more obvious.
In addition to the screen and the sound, don't forget having just 2 face buttons after 4 buttons had become standard and almost mandatory. Many ports suffer mightily in the control department.
Not a Go dev, but I typically set up CI with the oldest toolchains I support (usually a Debian release), and only bump those versions when I really need something from the latest ones. Locally I build with the most recent tools. This gives good enough coverage for very little work, as I notice when I start using something newer and can bump the toolchain accordingly.
Sure, but if you start a new small project and throw it on GitHub, it's not totally insane to just put the version you tested. Just because someone put up their tiny library doesn't mean they've put in the effort to figure out which version they need.
I could've sworn Firefox had an "all tabs" preview button that looked like 4 blocks in a grid, before the Australis era. I can't find any pictures or video footage of it in action, however.
I just think it goes to show how little the window management of desktop OSes has improved over the years that desktop applications have had to up the ante...
I also think the differing behaviour between apps implementing split panes (e.g. keyboard controls for creating/switching) is very annoying. Sometimes this flies in the face of the desktop's native window splitting or tab support, as when an app stops supporting multiple windows. For example, current browsers don't have a good way to configure tab-free usage, and at some point removed support for setting the window icon to the site's favicon.
Yeah. I'm surprised this along with the money thing are listed in the article at all. These are the sort of things you learn within the first month of writing assembly, and were widely used across the industry at the time (and times prior). The bit shifting optimization is performed by GCC even at -O0, and likely already was at the time, as it's one of the simpler optimizations to make. It's like calling "xor eax, eax" a masterful optimization tactic for clearing a register.
Looking at the macro-level optimizations like the rest of the article does is significantly more interesting.
I did a bunch of distro hopping in the 90's but locked onto Debian (mainly testing, now largely unstable) not long after. I'm still just not sure what compels people elsewhere. Especially now: the Debian installer was vicious if you were a newbie, but I hear it's pretty ok now.
This is largely a me problem! I don't understand what the value add of other offerings is. It's unclear what else would be good, or why. Debian feels like it really has what's needed now. Things work. Hardware support is good. Especially in the systemd era, so much of what used to make distros unique is just no longer a factor; there's common machinery behind most of a Linux system's operation. My gut tells me we're forking and flavoring over not much at all. Aside from learning some new commands, learning Arch has been such a non-event for me recently. It feels like we're having weird popularity contests over nothing. And that amplifies my sense of: why not just use Debian?
But I also have two-and-a-half-plus decades of Linux behind me, and my inability to differentiate and assess with beginner's eyes is absolutely key to all this. I try to ask folks, but it's still unclear what the real motivations are, and more, what the real differences are.
The real differences are things that maintainers do. Like how... OBS I think? ...had a bunch of people come in with issues that only existed in the Debian version. Debian software has a bunch of patches, Arch software has far fewer and sticks closer to upstream, other distros will vary. Derivatives also made nonfree easier to set up, which was especially important when MP3 was still encumbered. Nowadays Debian still has the reputation of having old, outdated versions of software, which is going to be hard to shake, especially considering stability is meant to be their main draw.
It's worth noting that while these videos may have been unintentional, this was also an era when YouTube was still inventing itself. Sure, there was real content creation, but the structures of sponsorships and ad revenue that can be a real income today weren't there. Let's Plays were just starting to dominate the platform, and people were still figuring out how to make money off of that.
As a result, there was a lot of this type of content: barely edited, poorly performed, honest moments of real life, amateurish creations of every kind, be it digital animation, music, acting, etc. I feel these IMG_xxxx videos reflect some of the vibe of that era. Now, sharing videos with people is easy enough in group chats, and YouTube content feels so manufactured that sharing this sort of thing there feels less appropriate.
One might say that early Youtube was mainly thought of as a video pastebin that allowed (JS-assisted) hotlink-embedding into other pages. Youtube was to video as image-hosting sites like Imgur were to images. Which was important, in both cases, because not just video but even (HQ) images were hard to host yourself at the time, and also hard to send to other people without hosting them somewhere.
With both video and image-sharing sites, you didn't really expect the site itself to function as a social network that was worth "browsing." Rather, you expected the "front page view" to be an upload view; and from there, to take your uploaded assets and embed them onto a page to put them into proper context. And it's these webpages-that-contextualize-image/video-assets that you'd share links to, on forums and on early social bookmarking platforms (Fark, StumbleUpon, etc.)
I love wondering if and how this kind of "Wild West frontier" in technology and communication and social interaction will ever come again:
Say we colonize Mars. Streaming anything from Earth means a round-trip delay of anywhere from 6 to 44 minutes (Mars is 3-22 light-minutes away). Martians may invent their own planetary social network and share their own weird Martian memes for a while.
Or interstellar colony ships traveling for decades between the stars, and then practically cut off from Earth at whatever new exoplanet we land on.
There will definitely be lots of "golden eras of creativity" still to come, if we survive that long.
Mars' gravity is only 38% of Earth's, so I think quite a few would be crazy feats of strength or odd trajectories of objects. At least they would be if I were making them.
Any time someone carves out a new space online, the same sort of thing happens. Pioneers create infrastructure. Early adopters rush to explore the new medium. New possibilities or new constraints spur creativity. Then, usually one of two things happens: the new space was a brief fad, and it dies away; or the masses arrive and it undergoes an eternal September, standardization, commercialization, enshittification, drama… in other words, becomes integrated into the wider net. Those fed up leave and begin to carve out a new space…
Some initiatives (like the Gemini Protocol) remain (for now) in a tenuous niche where mass adoption seems impossible and yet they also don’t seem to be going away.
Yeah, the _reason_ this was in the iPhone is that YouTube was a normal and reasonable (if unusual - because sharing videos online was unusual) way to share videos with friends and family. And people cared way less about privacy back then.
How do you back up and restore things without root? I've found that even with root, these days many backups are useless thanks to using hardware-backed encryption...
There are various options, but if you use syncthing, it is as easy as creating a share on your PC or backup machine and on the phone. Everything gets synced automatically.
Well, everything you choose to sync gets synced: photos, documents, etc. You can also set up apps so that they make automatic local backups on your phone in a folder that gets synced. There are multiple Android apps for Syncthing.