Has anyone who has set up MicroCeph determined the overhead of the required multiple OSDs? The docs make it sound scary, but it's not clear if that's because people run it on a Pi with an SD card for block storage or because someone once ran 18TB of OSDs in production that then fell over.
I do continue to be impressed/over-awed by how effectively the Ceph docs scare you about just how many system resources you need to run a mid-tier, not-that-fast storage cluster. Bother.
Impressive as hell software, and I am so glad to have it. But man, the insistence on mountains of RAM per TB and on massive IO is intimidating.
How is changing the architecture of a platform that only you make hardware for doing the impossible?
They could change the architecture again tonight and start releasing new machines with it. Users would adopt it because there is literally no other choice.
Every machine they release will be the fastest and most capable on the platform, because there is no other option.
Exactly this! Rosetta + the whole app developer community who really quickly released builds for M chips (voluntary or forced, but it did happen).
I had the initial M1 Air, and it was remarkable how usable it was. You'd expect all sorts of friction and issues, but mostly things just worked (very fast). Even with some Rosetta overhead it was still fast compared to Intel Macs.
Rosetta 1 delivered 50-80% of the performance of native, during the PPC->Intel transition. It turns out, you can deliver not particularly impressive performance and still not ruin your app ecosystem, because developers have to either update to target your new platform, or leave your platform entirely.
You can also voluntarily cut off huge chunks of your own app ecosystem intentionally, by giving up 32bit support and requiring everything to be 64bit capable.
...because users have no other choice when a single vendor controls both the hardware and the software. They can either use the apps still available to them, or they can leave. And the cost of leaving is a lot higher for users.
Yes. Apple put custom hardware support in the M series chips based on the needs of Rosetta 2. The x86_64 performance on Rosetta 2 was often higher at launch than the prior generation of Intel chips running those same binaries natively.
Microsoft and Qualcomm already knew that the performance of x86 app emulation on Windows was killing the ARM machine lineup, so Qualcomm was working on extensions to their chips and Microsoft on Windows support for them, but ARM64EC and Prism didn't launch until two years after the M1 shipped.
Being unexpectedly unemployed also starts a timer of sorts, not on your terms. Regardless of how you feel about the event, the longer unemployment persists, the more it is seen as a negative signal by those who would hire you for your next role. It compounds: the longer the gap on your resume, the harder it becomes to find a job.
Yeah, that is what I was going to do until I discovered the two-VM limit. I was building a macOS GitHub Actions farm, or rather, looking into it. I had written most of the code, but my momentum screeched to a halt when I discovered the two-VM limit for macOS VMs.
macOS is proprietary software. You need a license for every copy you run, whether it's in a VM or not. The VM limit is written into the macOS EULA.
> to install, use and run up to two (2) additional copies or instances of the Apple Software, or any prior macOS or OS X operating system software or subsequent release of the Apple Software, within virtual operating system environments on each Apple-branded computer you own or control that is already running the Apple Software, for purposes of: (a) software development; (b) testing during software development; (c) using macOS Server; or (d) personal, non-commercial use.
Yes. Apple's not going to come after you for running too many VMs on your personal machine, but if you're running a commercial enterprise involving macOS VMs they do care.
Yes. And the license only allows you to run macOS guests on macOS hosts. So using ESXi means you don't have any license for whatever macOS guests you run.
You are confusing macOS guests on KVM (Linux) with macOS guests on ESXi, which is a real enterprise product that officially enables you to run as many macOS VMs as your hardware supports.
Edit: so, this is the incus-ui-canonical package? It feels a bit ironic that Canonical ships this, because I thought the whole point of Incus was to avoid Canonical and the direction they were taking LXD.
Just like kind runs containerd inside Docker, you can also run dockerd inside containerd-backed pods.
Start a privileged pod with the dind image, copy or mount your compose.yaml inside, and you should be able to `docker compose up` and `down`, all without mounting a socket (which won't exist anyway on containerd CRI nodes).
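A minimal sketch of such a pod, for illustration (the pod name, ConfigMap name, and mount path are my own assumptions, not from the thread; only `docker:dind`, `privileged: true`, and `DOCKER_TLS_CERTDIR` come from the official dind image's documented usage):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dind-compose        # hypothetical name
spec:
  containers:
    - name: dind
      image: docker:dind    # official Docker-in-Docker image
      securityContext:
        privileged: true    # dind requires a privileged container
      env:
        - name: DOCKER_TLS_CERTDIR
          value: ""         # disable TLS; daemon is local to the pod
      volumeMounts:
        - name: compose
          mountPath: /workspace
  volumes:
    - name: compose
      configMap:
        name: compose-file  # assumed ConfigMap holding your compose.yaml
```

Then something like `kubectl exec -it dind-compose -- sh -c "cd /workspace && docker compose up -d"` should bring the stack up inside the pod, no host Docker socket involved.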
To go even further, kubevirt runs on kind, launch a VM with your compose file passed in via cloud-init.
At no point have I invented a new/better method. Perhaps your way is better.
I just recognise that Docker Compose is loved by most open source developers, and invariably any project you touch will have a Docker Compose setup by default. And it isn't going away, no matter how hard anyone tries to kill it. Some things are just too well designed. Docker Compose is one of those things.
I'm just making it possible to run those on Kubernetes seamlessly.
https://canonical-microceph.readthedocs-hosted.com/stable/tu...