More like people try doing anything other than use the base OS, and realize the bottom-tier x86 mini-PCs are 3-4x faster for the same price, and can encode a basic video stream without bogging down.
If the RPI came with any recent mid-tier Snapdragon SOC, it might be interesting. Or if someone made a Linux distro that supports all devices on one of the Snapdragon X Elite laptops, that would be interesting.
Instead, it's more like the equivalent of a cheap desktop with integrated GPU from 20 years ago, on a single board, with decent linux support, and GPIO. So it's either a linux learning toy, or an integrated component within another product, and not much in between.
Qualcomm has rebranded a Snapdragon with four Cortex-A78 cores (and four small Cortex-A55 cores), from the expensive smartphones of 2021, as the "Dragonwing" QCM6490, and they now sell it for embedded devices.
There are at least 3 or 4 SBCs with it, in RPI sizes and prices.
Cortex-A78 is much faster than the Cortex-A76 from RK3588 or the latest RPI (e.g. at least 50% faster at the same clock frequency), and its speed at the same clock frequency does not differ much from that of recent medium-size cores like Cortex-A720 or Cortex-A725.
Cortex-A78 is the stage when Arm stopped making significant micro-architectural changes in medium-sized cores. The later improvements were in the bigger Cortex-X cores. The main disadvantage of the older Cortex-A78 is that it does not implement the SVE instruction set of the Armv9-A ISA.
While mini-PCs with Intel/AMD CPUs are usually preferable, for an ARM SBC I would no longer buy any model that has older cores than Cortex-A78.
Besides the Qualcomm Dragonwing based SBCs, there are also Cortex-A78 based SBCs with Mediatek or NVIDIA CPUs, but those are more expensive.
> So it's either a linux learning toy, or an integrated component within another product, and not much in between.
Raspberry Pi are excellent at being general-purpose, full-Linux boxes that consume very low power (some can idle at <1W). Perfect for ambient computing, cron-jobs, MQTT-related hackery, VPN gateways, ad-blocking DNS servers, or anything else that isn't CPU-bound, but benefits from being always available[1].
1. In my case, this ironically includes orchestrating higher-wattage computers via Wake-on-Lan and powering them down when not needed
Since the introduction of the OG Raspberry Pi, 14 years ago, there's been an ongoing cognitive problem wherein people look at the price of a brand-new, never-used SBC that can be purchased from a reliable retail company.
Then they also look at the price of a used corpo PC (that is bigger, and noisier) that some rando in Iowa is selling on eBay.
And then they boldly compare the prices of the two things as if these details just don't exist.
But the details do exist. The details show that the two things are not the same. They can never be the same.
One is a shiny fresh apple that is free of blemishes, and the other is a bruised old grapefruit that someone has already started eating. They're both fruit, but they're very different things.
I've used them for mostly dedicated tasks, at least the RPi3 and older. I've used the RPi3 as CUPS servers at a couple of sites, for a few printers. Been running for many years now 24/7 with no issues. As I could buy those SBCs for the original low price and the installation was a total no-brainer, I would never consider using any kind of mini PC for that.
I have a couple of RPi4 with 8GB and 4GB RAM respectively, these I have been using as kind-of general computers (they're running off SSDs instead of SD cards). I've had no reason so far to replace them with anything Intel/AMD. On the other hand they can't replace my laptop computer - though I wish they could, as I use the laptop computer with an external display and external keyboard 100% of the time, so its form factor is just in the way. But there's way too little RAM on the SBCs. It's bad enough on the laptop computer, with its measly 16GB.
I built a nice little cyberdeck around an RPi 5 but it's turned out to be very disappointing. I was counting on classic X11's virtual display stuff to enable a 1080x480 screen to be usable with panning (virtual 720p or something, just a cool vertical pan). Problem is, the X11 support sucks, and so there's almost no 2D acceleration, so this simple thing that used to work great on a 486 with an ATI SVGA doesn't work very well at all on a machine a thousand times faster. Wayland has of course no support for a feature like this one, so I'm stuck with a screen too narrow to use, and performance for everything else that's pretty sub-par.
Aah, I had totally forgotten about that X11 feature, I did use it for something very many years ago.
I have only used the default setup (which is presumably Wayland) on the Pi, looks good but I don't actually use display features much.
Yeah, Raspberry Pi even sells a keyboard form factor (the Pi 400), and there was a Raspi laptop made from a 3D-printable case and basic peripherals (screen, keyboard with mouse nub). A cheap quasi-open-source laptop at the time.
People do all manner of wacky stuff with Pis that could be more easily done with traditional machines. Kubernetes clusters and emulation boxes are the more common use cases; the former can be done with VMs on a desktop and the latter is easily accomplished via a used SFF machine off of eBay. I've also heard multiple anecdotes of people building Pi clusters to run agentic development workflows in parallel.
I think in all cases it's the sheer novelty of doing something with a different ISA and form factor. Having built and racked my share of servers I see no reason to build a miniature datacenter in my home but, hey, to each their own.
I concur with this. The novelty of the Pi is getting a computer somewhere that you normally wouldn't due to the size and complexity. GPIO is a very nice addition, but it looks like conventional USB to GPIO is a thing so it's not really a huge driver to use a Pi.
> Is the 1 percenters getting dumber or acting like it?
I feel like their messages are designed to derail people's train of thought.
People start to realize that technology isn't fulfilling and they need to reassess their lives? Nah... introspection is a modern invention and that act of reflection is actually the source of your discontent. Stop thinking about it and just go with the flow, you'll be much happier when you stop concerning yourselves with the state of the environment, other people's well-being, whether your work is fulfilling, or the fact that you have no retirement.
The rub is that people don't want transmission networks to go away. They just don't want to pay for the maintenance.
In many US municipalities the cost of infrastructure is rolled into the per-unit fee, meaning high consumers pay more. This works fine until folks adopt solar and their net consumption goes negative.
The right answer is a connection fee based on the cost to maintain your hookup to the grid.
> The government does most things poorly and with little regard to budget or quality.
That's a common line by conservatives who are actively sabotaging government with policies and laws which they then point to as evidence of such inefficiencies.
> It is interesting that IBM dominated this generation of consoles, and was vanquished in the next.
IBM's Power was the only logical option at the time.
These consoles were being designed around 2000. Intel and AMD weren't partnering on bespoke CPUs at that time. I don't even think AMD would have been considered a viable partner. Neither had viable 64 bit options and part of console marketing at the time was the ever increasing bit depths.
Prior console generations had used MIPS, which wasn't keeping up with ever increasing performance expectations, and players like Toshiba and Sony were looking for a higher performance CPU architecture. IBM's Power architecture was really the only option. Sony, Toshiba, and IBM partnered to develop a new 64 bit microarchitecture called Cell.
Microsoft's first console was basically a PC and that's how everyone saw it. The 360 was an opportunity for Microsoft to show that it could compete with the big boys. It was also an opportunity to keep a toe dipped in RISC, because it had dropped support for RISC CPUs with Windows 2000.
Yeah that part didn't make sense, not to mention that neither the PS3 nor the 360 were running 64-bit software. They didn't have enough memory for it to be worth it.
You don't need memory to make 64 bit software worth it, just 64 bit math requirements - which basically no video game console has, as from what I understand 32-bit floating point continues to be state of the art in video game simulations.
Fundamentally it's still a memory limitation, just in terms of memory latency/cache misses instead of capacity. If you double the size of your numbers you're doubling the space it takes up and all the problems that come with it.
No it isn't. The 64-bit capabilities of modern CPUs have almost nothing to do with memory. The address space is rarely 64 bits of physical address space anyways. A "64-bit" computer doesn't actually have the ability to deal with 64 bits of memory.
If you double the size of numbers, sure, it takes up twice the space. If the total size is still less than one page it isn't likely to make a big difference anyway. What really makes a difference is trying to do 64-bit mathematics with 32-bit hardware. This implies some degree of emulation with a series of instructions, whereas a 64-bit CPU could execute that in one instruction. That one instruction very likely executes in fewer cycles than the series of other instructions. Otherwise no one would have bothered with it.
"Bitness" of a CPU almost always refers to memory addressing.
Now you could build a weird CPU that has "more memory" than it has addressable width (the 8086 is kind of like this with segmentation and 8/16 bit) but if your CPU is 64 bit you're likely not to use anything less than 64 bit math in general (though you can get some tricks with multiple adds of 32 bit numbers packed).
But a 32 bit CPU can do all sorts of things with larger numbers, it's just that moving them around may be more time-consuming. After all, that's basically what MMX and friends are.
The original 8087 implemented 80-bit operands in its stack.
It would also process binary-coded decimal integers, as well as floating point.
"The two came up with a revolutionary design with 64 bits of mantissa and 16 bits of exponent for the longest-format real number, with a stack architecture CPU and eight 80-bit stack registers, with a computationally rich instruction set."
Typically, it doesn't have the ability to deal with a full 64 bits of memory, but it does have the ability to deal with more than 32 bits of memory, and all pointers are 64 bits long for alignment reasons.
It's possible but rare for systems to have 64-bit GPRs but a 32-bit address space. Examples I can think of include the Nintendo 64 (MIPS; apparently commercial games rarely actually used the 64-bit instructions, so the console's name was pretty much a misnomer), some Apple Watch models (standard 64-bit ARM but with a compiler ABI that made pointers 32 bits to save memory), and the ill-fated x32 ABI on Linux (same thing but on x86-64).
That said, even "32-bit" CPUs usually have some kind of support for 64-bit floats (except for tiny embedded CPUs).
The 360 and PS3 also ran like the N64. On PowerPC, 32 bit mode on a 64 bit processor just enables a 32 bit mask on effective addresses. All of the rest is still there, like the upper halves of the GPRs and the instructions like ldd.
Parts of the 360 did. The hypervisor ran in 64bit mode, and used multiple simultaneous mirrors of the physical address space with different security properties as part of its security model.
It's not like the games weren't running in 64 bit mode too (on both consoles)
They had full access to the 64 bit GPRs. There wasn't anything technically stopping game code from accessing the 64 bit address space by reinterpreting a 64 bit int as a pointer (except that nothing was mapped there).
It's only the pointers that were 32 bit, and that was nothing more than a compiler modification (like the linux x32 ABI).
They did it to minimise memory space/bandwidth. With only 512 MB of memory, it made zero sense to waste the full 8 bytes per pointer. The savings quickly add up for pointer heavy structures.
I remember this being a pain point for early PS3 homebrew. Stock gcc was missing the compiler modifications, and you had a choice between compiling 32 bit code (which couldn't use the 64bit GPRs) or wasting bandwidth on 64 bit pointers (with a bunch of hacky adapter code for dealing with 32 bit pointers from Sony libraries).
The difference is that on PowerPC, 32bit mode on 64bit processors (clearing the SF bit in the MSR) is just enabling a hardware 32bit mask on the effective address before it gets translated into a virtual address.
Unlike on x86-64 and arm64, there's no free (or even that cheap) way to do an ILP32 abi purely in software. x86 and arm allow encodings for memory reference instructions that only use the bottom half of the registers (the E* registers on x86, and the W* registers on arm64). No such encoding exists on PowerPC for memory reference instructions, so you'd be stuck manually masking each generated pointer.
Because of that, the compiler hacks you're talking about are kind of the opposite of what you're describing. The hacks exist because in the upstream gcc PowerPC backend, having 32bit pointers in hardware and having operations on 64bit quantities sat behind the same feature flag, despite technically being separately enableable on actual hardware - it was just very rare to do so. So the goal of the hacks was to describe to the compiler a target that has 32bit hardware pointers but can still issue instructions like ldd to operate on the full 64bit GPRs.
You have to remember that the AMD and Intel of today are very different companies than they were 20-25 years ago. AMD split off its fab capabilities, acquired ATI, adopted TSMC as a fab, and developed a custom silicon business.
At that time AMD wasn't in the custom CPU business, AMD64 was a new unproven ISA, and x86 based CPUs of that time were notoriously hot for a console. These were also some of the reasons why Microsoft moved away from the Pentium III it had used in the original Xbox.
The PS3 was launched in 2006 but the hardware design was decided years earlier to provide a reference platform for the software.
Because consoles don't use off-the-shelf CPUs for many reasons. Neither Intel nor AMD of that time would even consider making a bespoke CPU for Sony or MS.
Even if they could have used an off-the-shelf SKU it wouldn't have been viable - neither one had a part that fit the power envelope (not that it helped the Xbox...).
Consoles used off-the-shelf CPUs until the 6th generation. Even the Dreamcast and the first Xbox used off-the-shelf CPUs, it was only the PS2 and the GameCube that started the trend of using custom-made CPUs.
The PSX's CPU is semi-custom. The core is a reasonably stock R3000 CPU, but the MMU is slightly modified and they attached a custom GTE coprocessor.... I guess you can debate if attaching a co-processor counts as custom or not (but then the ps4/xbone/ps5/xbs use unmodified AMD jaguar/zen2 cores)
IMO, the N64's CPU counts as off-the-shelf... however the requirements of the N64 (especially cost requirements) might have slightly leaked into the design of the R4300i. But the N64's RSP is a custom CPU, a from scratch MIPS design that doesn't share DNA with anything else.
But the Dreamcast's CPU is actually the result of a joint venture between Hitachi and Sega. There are actually two variants of the SH4, the SH4 and SH4a. The Dreamcast uses the SH4a (despite half the documentation on the internet saying it uses the SH4), which adds a 4-way SIMD unit that's absolutely essential for processing vertices.
We don't know how much influence Sega's needs had over the whole SH4 design, but the SIMD unit is absolutely there for the Dreamcast, I'm pretty sure it's the first 4-way floating point SIMD on the market. The fact that both the SH4/SH4a were then sold to everyone else, doesn't mean they were off the shelf.
Really, the original Xbox using an off-the-shelf CPU is an outlier (technically it's a custom SKU, but really it's just a binned die with half the cache disabled).
> actually the hardest part of a locally hosted voice assistant isn't the llm. it's making the tts tolerable to actually talk to every day.
I would argue that the hardest part is correctly recognizing that it's being addressed. 98% of my frustration with voice assistants is them not responding when spoken to. The other 2% is realizing I want them to stop talking.
My partner is on a conference call, I hop in the car to go run an errand. Suddenly I'm on a conference call.
My partner is in the kitchen listening to a podcast, I hop in our other car and suddenly I'm listening to a podcast.
My partner is sitting in the car having a driveway moment, I arrive home with the other car and now I'm having her driveway moment.
My partner is on a conference call at her desk and picks up her phone to respond to a message and then you hear "shit shit shit, hold on a moment!" and then frantic typing and clicking.
Core evolved from the Banias (Centrino) CPU core, which was based on the P3, not the P4. Banias used the front-side bus from the P4 but not the cores.
Banias was hyper optimized for power, the mantra was to get done quickly and go to sleep to save power. Somewhere along the line someone said "hey what happens if we don't go to sleep?" and Core was born.
Were people actually doing that?