am I crazy for thinking that the 16GB Pi 5 is just there to absorb money from people who purchase the most expensive version of things? Like really nobody needs that much RAM on a Pi?
I am running a bunch of stuff on my 8GB Pi and I've run out of memory to put more stuff on. I use it as a low-power server running a bunch of Docker containers. Some of these need at least 200 MB and some use 2 GB of memory.
I was going to buy a small NUC and load it up with memory, but I've acquired an old Mac Mini with 16GB of RAM, which will do.
Yes, you are crazy for thinking that. The extra RAM is useful for small LLMs and also for running lots of Docker containers. The very low power consumption makes it ideal for a low-end home server.
I use the 16GB SKU to host a bunch of containers and some light debugging tools, and the trickle of power it sips at idle, compared to my previous home server, will probably pay for the whole board within about 5 years.
Docker is about containerization/sandboxing, you don't need to duplicate the OS. You can run your app as the init process for the sandbox with nothing else running in the background.
That makes docker entirely useless if you use it just for sandboxing. Systemd services can do all that just fine, without all the complexity of docker.
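To make the comparison concrete, here's a sketch of the kind of sandboxing systemd offers per-service, without a container runtime. The directives are real systemd options; the service name and binary path are made up:

```ini
# /etc/systemd/system/myapp.service -- hypothetical name and path
[Unit]
Description=Sandboxed app without a container runtime

[Service]
ExecStart=/usr/local/bin/myapp
DynamicUser=yes          ; throwaway UID, no user account to manage
ProtectSystem=strict     ; read-only /usr, /boot, /etc
ProtectHome=yes          ; no access to /home
PrivateTmp=yes           ; private /tmp mount namespace
PrivateDevices=yes       ; minimal /dev
NoNewPrivileges=yes      ; no setuid escalation
MemoryMax=200M           ; cgroup memory cap, like docker's -m flag

[Install]
WantedBy=multi-user.target
```

Because the app binary links against the host's shared libraries, all of this sandboxing comes without duplicating any userspace in RAM.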
I think that on linux docker is not nearly as resource intensive as on Mac. Not sure of the actual (for example) memory pressures due to things like not sharing shared libs between processes, granted
Any node server app will be ~50-100 MiB (because that's roughly the size of node binary + shared deps + some runtime state for your app). If you failed to optimize things correctly, and you're storing and working with lots of data in the node process itself, instead of it serving as a thin intermediary between http service and a database/other backend services, you may get spikes of memory use well above that, but that should be avoided in any case, for multiple reasons.
And most of this 50-100 MiB will be shared if you run multiple node services on the same machine the old way. So you can run 6 node app servers this way, and they'll consume eg. 150MiB of RAM total.
With docker, it's anyone's guess how much running 6 node backend apps will consume, because it depends on how many things can be shared in RAM, and usually it will be nothing.
Only Java qualifies under your arbitrary rules, and even then I imagine it's trying to catch up to .NET (after all.. blu-ray players execute Java).. which can run on embedded systems https://nanoframework.net/
I listed some popular languages that web applications I happened to run dockerised are using. They are not arbitrary.
Normal web applications built with the popular languages I listed off the top of my head often take many hundreds of megabytes. That is a fact.
Comparing that to cut down frameworks with many limitations meant for embedded devices isn't a valid comparison.
"just as well"? lmao sure i guess i could just manually set up the environment and have differences from what im hoping to use in production
> 1GiB machine can run a lot of server software,
this is naive
it really depends if you're crapping out some basic web app versus doing something that's actually complicated and has a need for higher performance than synchronous web calls :)
in addition, my mq pays attention to memory pressure and tunes its flow control based on that. so i have a test harness that tests both conditions to ensure that some of my backoff logic works
> if RAM is not wasted on having duplicate OSes on one machine.
Yes, it's exactly how docker works if you use it for where it matters for a hobbyist - which is where you are installing random third-party apps/containers that you want to run on your SBC locally.
I don't know why people instantly forget the context of the discussion, when their favorite way of doing things gets threatened. :)
Context is hobbyists and the SBC market (mostly various ARM boards). Maybe I'm weird, but I really don't care about minor differences between my arch linux workstation and my arch linux arm SBCs, because 1) they're completely different architectures, so I can't avoid the differences anyway, 2) it's a hobby, I have at most one instance of any service, 3) most hobbyist-run services will not work with a shitton of data or have to handle 1000s of parallel clients
> Yes, it's exactly how docker works if you use it for where it matters for a hobbyist
What you described is exactly the opposite of how it works. There is no reasonable scenario in which that is how it works. In fact, what you're saying is the opposite of the whole point of containers versus using a VM.
> when their favorite way of doing things gets threatened
No, it's when someone (like you) thinks they have an absolute answer without knowing the context.
And by the way, in my scenario, container overhead is in the range of under a hundred MiB total. The thing I'm working on HAPPENS to require a fair amount of RAM.
But you confidently asserted that "1GiB machine can run a lot of server software". And that's true for many people (like you), but not true for a lot of other people (like me).
> most hobbyist run services will not work with a shitton of data or have to handle 1000s of parallel clients
neither of these are true for me but you need to take a step back and maybe stop making absolute statements about what people are doing or working on :)
you dont get to define "where it matters" for a hobbyist
> which is where you are installing random third-party apps/containers that you want to run on your SBC locally
this is such a consoomer take. for those of us who actually build software, we have actual valid reasons for using it during development
> they're completely different architectures, so I can't avoid the differences anyway
ironically this is a side benefit that modern containers are useful for
i think you have a fundamental misunderstanding of how containers work and why theyre useful for software development. your other posts in this thread only make me more sure of that. im not saying containers/etc are a perfect solution or always the right solution, but your misconceptions are separate from that
No I don't have a fundamental misunderstanding. In the entire thread I'm talking about docker, not "containers" in general. You seem to have a misunderstanding apparently.
I've been working with "containers" since before docker existed, and I've also written several applications that use the basic Linux technologies so-called "docker containers" are built on. You can use these technologies (various namespaces, etc.) in a way that does not waste RAM. That will not happen for common docker use, where you don't control the apps and base OS completely. You can make it efficient if you try hard, but you have to have a lot of control. The moment you start pulling random dockerfiles from random sources, you'll be wasting colossal amounts of resources compared to just installing packages on your host OS, which shares the maximum amount of resources.
And for all these "let's just have a big static binary and put it into a container" containers, which don't really have or need a real full OS userspace under them, there's barely any difference deployment-wise from just running them without docker. In fact, in this case docker is just a very complicated, duplicated extra layer over what systemd, which most people already have on their OS, does. So that's more wasted RAM and additional overhead for what is now reduced to a service-manager use case.
> No I don't have a fundamental misunderstanding. In the entire thread I'm talking about docker, not "containers" in general. You seem to have a misunderstanding apparently.
i said modern containers. and you do have a FUNDAMENTAL MISUNDERSTANDING. you are repeating falsehoods throughout this entire thread.
> That will not happen for common docker use
again you are asserting a "common" use of software, when the people youre replying to are clearly using it for development
> where you don't control the apps and base OS completely
stop saying "you" to me. id tell you to speak for yourself but you seem incapable of doing that
> And for all these "let's have just a big static binary and put it into a container" containers, that don't really have/or need a real full OS userspace under them, there's barely any difference deployment wise from just running them without docker.
ironically enough it does have differences, glaring big ones. in fact the deployment differences are about the only reason to use docker in this situation
another stark example of you popping off with incorrect assertions. and yes there are reasons not to use docker for this as well but it depends on multiple factors
> In fact docker is just a very complicated additional duplicated layer in this case for what systemd does, that most people already have on their OS. So that's another RAM waste and additional overhead from what is now reduced to a service manager in this use case scenario.
there are so many misconceptions in there asserted as if theyre the entire truth. yes people can use docker containers poorly but its not everyone.
> The moment you start pulling random dockerfiles from random sources, you'll be wasting colossal amounts of resources compared to just installing packages on your host OS, to share maximum amount of resources.
its a good thing that I'm not doing that! ive already stated that im using them to build software, not just "pulling random dockerfiles from random sources"
you are digging your heels in and you are now trying to assert a set of conditions and situation in which youre correct, even though youre dead wrong for the use cases that the people youre replying to are describing
you have repeated falsehoods as fact repeatedly and seem unable to adjust to people telling you "im not doing that thing youre complaining about"
frankly, i think youre out of your depth on this subject and youre trying to do anything you can to justify your original claim that 1GiB is enough, or whatever
TLDR
feel free to have the last word, im sure youll have lots of them. maybe youll get lucky and a few will be correct. im exiting this conversation
there are no real deployment differences, eg. systemd has portable services, full containers via nspawn, etc. and there are many other ways to realize what docker does with or without containers (eg. what yandex does internally by just packaging their internal software and parts of configuration into debian packages, and manage reproducibility that way)
and you don't provide any other technical arguments
what remains is you strongly telling me something I already acknowledged in the previous post (that you can perhaps make efficient use of docker, but it's hard to make it not waste resources in general use case)
I bought a Pi 500+ (basically a 16GB Pi 5 in a keyboard with a built-in NVMe hat) to use as a family computer; otherwise I agree. Unless you're planning on using it as an actual desktop, there's no real reason for that much RAM
Browsers treat RAM as infinite, if you want to for whatever reason open LinkedIn, you might wanna get a bigger model. I’d personally rather buy more ram than I need rather than deal with the cost of fixing / working around the issue in future
No you are not crazy. It's silly to try to use a raspberry pi 5 16GB (or equivalent priced product) as a desktop workstation with a GUI on it when much better actual x86-64 based workstations exist. Ones with real amounts of PCI-E lanes of I/O, NVME SSD interfaces on motherboard, multiple SATA3 interfaces on motherboard, etc. In very small form factors same as you'd see in any $bigcorp office cubicle.
It’s an incredibly lopsided machine. The Pi 5 is decently powerful, but you really really should not be attempting to use one as a desktop replacement. While theoretically possible you are so much better off with a $50 used SFF PC.
Old web stuff is still around. RSS feeds are out there. Some parts of masto are generally chill and filled with people having interesting convos.
You don't have to give up on everything to participate, but it can be a space to go to if you're tired of every social interaction being mediated by (I'm being glib) hustlers
I'll bite: how do we take advantage of ZFS layering if not via the docker-style layering?
I find dockerfile layering to be unsatisfying because step 5 might depend on step 2 but not 3 or 4... the linearisation of a DAG makes them harder to maintain and harder to cache cleanly (and we also end up with monster single-line RUN commands in the resulting images).
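One partial workaround (a sketch, with made-up file and stage names): multi-stage builds recover some of the DAG, since a stage depends only on the stages it references, a change in one branch doesn't invalidate the cache of the other, and BuildKit builds independent stages in parallel:

```dockerfile
FROM node:22-slim AS base
WORKDIR /app

# `deps` and `src` each depend only on `base`, not on each other, so
# editing source never busts the dependency-install cache, and vice versa.
FROM base AS deps
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

FROM base AS src
COPY src/ ./src/

# The final stage re-joins the two branches of the DAG explicitly.
FROM base AS app
COPY --from=deps /app/node_modules ./node_modules
COPY --from=src  /app/src ./src
CMD ["node", "src/server.js"]
```

It's still more verbose than declaring dependencies directly, but at least the cache invalidation follows the references rather than the line order.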
Proofs tend to get generated upstream of people trying to investigate something concrete about our models.
A computer might be able to autonomously prove that some function has some property, and that proof is entirely useless when nobody cares about that function!
Imagine if you had an autonomous SaaS generator. You end up with “flipping these pixels from red to blue as a service”, “adding 14 to numbers as a service”, “writing the word ‘dog’ into a database as a service”.
That is what autonomous proof discovery might end up being. A bunch of things that might be true but not many people around to care.
I do think there’s a loooot of value in the more restricted “testing the truthfulness of an idea with automation as a step 1”, and this is something that is happening a lot already by my understanding.
I really don’t! I switched it all off months ago - autocomplete, autocaps, all of it. I had reached the point where the constant frustration had to be worse than any productivity gain it was hoping to offer.
A few months on… I like it! The frustration is all gone, any errors are just on me now, and it forces me to slow down a bit and use the brain a bit more!
Not having to use stuff like whiteout and having undo is quite nice. Getting layers "for free" is nice. I've given myself permission to even do some digital manipulation like resizing on the fly rather than redrawing some eye.
But watching some pros go at it on paper + pen, I do get this feeling that when you don't have the undo button you really do gotta force yourself to get good at the nitty gritty. Really you need to get good at drawing lines nicely the first time when you're inking to paper.
Also, when going through this stuff slowly and annoyingly, or tracing other people's art, you really start internalizing how some visual effects are achieved with just a handful of lines. Six well-placed lines give you the impression of very voluminous hair, for example.
it does feel like touching the lower level parts of a craft can help so much with having good fundamentals at a higher level.
Who hasn't, as a kid, thought "Oh I can draw bubble letters" and then realize that it's actually kinda tough, and then after mastering it have some new appreciation for spacing lines out properly and knowing where the pen goes?
Seems like a useful way to get a feel for things. Everyone "knows" how perspective works, yet a lot of people can't commit it to a page. There's clearly some understanding of how things work hidden in being able to do the thing, isn't there?
> But watching some pros go at it on paper + pen, I do get this feeling that when you don't have the undo button you really do gotta force yourself to get good at the nitty gritty. Really you need to get good at drawing lines nicely the first time when you're inking to paper.
Totally! A lot of artists recommend to young folks that before they dive into Procreate / Illustrator - still get good at pen and paper and ink by hand. The lack of undo button forces you to make choices and commit to them. You also hear a lot of artists talking about how, past a certain point of creating a piece, you are now "solving problems" to finish it.
I highly recommend the Draftsmen podcast as a wonderful resource to learn. Marshall Vandruff is a master teacher and has many thoughtful things to say.
The precision and concentration also force you to slow down and think about the part once again. Is it correctly dimensioned and sized? Is the material the correct one? Can it be machined and assembled that way? How can it be inspected? Etc.
> But watching some pros go at it on paper + pen, I do get this feeling that when you don't have the undo button you really do gotta force yourself to get good at the nitty gritty. Really you need to get good at drawing lines nicely the first time when you're inking to paper.
Often you envision what the line will look like in your head before placing. And then you have the motor skills/experience to recreate that line well. They're just some of the micro-skills that encompass "drawing".
I think on the first point, we have to start calling out authors of packages which (IMO) have built out these deptrees to their own subpackages basically entirely for the purpose of getting high download counts on their github account
Like seriously... at 50 million downloads maybe you should vendor some shit in.
Packages like this which have _7 lines of code_ should not exist! The metadata of the lockfile is bigger than the minified version of this code!
At one point in the past like 5% of create-react-app's dep list was all from one author who had built out their own little depgraph in a library they controlled. That person also included download counts on their Github page. They have since "fixed" the main entrypoint to the rats nest though, thankfully.
> entirely for the purpose of getting high download counts on their github account
Is this an ego thing or are people actually reaping benefits from this?
Anthropic recently offered free Claude to open source maintainers of repositories with over X stars or over Y downloads on npm. I suppose it is entirely possible that these download statistics translate into financial gain...
I'm completely apathetic about spicy autocomplete for coding tasks and even I wonder which terrible code would be worse.
The guy who wrote is-even/is-odd was for ages using a needlessly obscure method that made it slower than `% 2 === 0`, because js engines were optimising that but not his arcane bullshit.
from a security perspective this is even worse than it looks. every one of those micro packages is an attack surface. we just saw the trivy supply chain get compromised today and thats a security tool. now imagine how easy it is to slip something into a 7 line package that nobody audits because "its just a utility." the download count incentive makes it actively dangerous because it encourages more packages not fewer.
> There is a user in the JavaScript community who goes around adding "backwards compatibility" to projects. They do this by adding 50 extra package dependencies to your project, which are maintained by them.
I remember seeing this one guy who infiltrated some gh org, and then started adding his own packages to their dependencies or something to pad up his resume/star count.
As usual, there's a cultural issue here. I know it's entirely possible to paste those seven lines of code into your app. And in many development cultures this will be considered a good thing.
If you're working with Javascript people, this is referred to as "reinventing the wheel" or "rolling your own", or any variation of "this is against best practice".
I think the fact that everyone cites the same is-number package when saying this is indicative of something though.
Like I legit think that we are all imagining this cultural problem that's widespread. My claim (and I tried to do some graph theory stuff on this in the past and gave up) is that in fact we are seeing something downstream of a few "bad actors" who are going way too deep on this.
I also dislike things like webpack making every plugin an external dep but at least I vaguely understand that.
Even there the "problem" was left-pad being used by one or two projects used in "everything".
So the problem isn't that everyone is picking up small deps, but that _some_ people who write libs that are very popular are picking up small deps and causing this to happen.
This is different because it doesn't really say that all JS developers are looking to include left-pad. But I _do_ think that lots of library authors are too excited to make these kinds of dep trees
The point isn't that everyone necessarily needs to write the same code manually. It's that an author could easily combine the entire tree of seven-line packages into the one package that create-react-app uses directly. There's no reason to have a dozen or so package downloads of seven lines each instead of one package that's still under a hundred lines; that's still a pretty small network request, and it's not like dead-code analysis to prune unused functions isn't a thing. If you somehow find yourself in a scenario where you'd be happy to download seven lines of code but downloading a few dozen more would be an issue, that's when you might want to consider pasting the seven lines manually, but I honestly can't imagine when that would be.
The article and (overall) this comments section has thankfully focused on the problem domain, rather than individuals.
As the article points out, there are competing philosophies. James does a great job of outlining his vision.
Education on this domain is positive. Encouraging naming of dissenters, or assigning intent, is not. Folks in e18e who want to advance a particular set of goals are already acting constructively to progress towards those goals.
People aren't criticizing the development philosophy in this subthread. This has been done by the article itself and by several people before.
What people are criticizing is the approach in pushing this philosophy into the ecosystem for allegedly personal gain.
The fact that this philosophy has been pushed by a small number of individuals shows it is not a widespread belief in the ecosystem. That they are getting money out of the situation suggests there is more to the philosophy than its technical merits.
As usual, he's copying someone else who's been doing this for years:
https://www.npmjs.com/package/is-number - and then look and see shit like is odd, is even (yes two separate packages because who can possibly remember how to get/compare the negated value of a boolean??)
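For anyone who hasn't looked inside: here's roughly what those packages boil down to, minus the package.json files and the transitive is-number dependency (function names mine, logic trivially reconstructed):

```javascript
// The entire useful payload of is-odd and is-even, in two lines:
const isOdd = (n) => Math.abs(n % 2) === 1; // abs() so -3 counts as odd
const isEven = (n) => !isOdd(n);            // the infamous boolean negation

console.log(isOdd(-3), isEven(42)); // true true
```

Two network requests, two lockfile entries, and two attack surfaces, to avoid typing a `!`.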
Honestly, for how much attention JavaScript has gotten in the last 15 years, it's ridiculous how shit its type system really is.
The only type related "improvement" was adding the class keyword because apparently the same people who don't understand "% 2" also don't understand prototypal inheritance.
That's a good point, it's only been around for 30 years, and used on 95% of websites. It's not really popular enough for a developer to take an hour or two to read how it works.
The word "used" is doing some heavy lifting there. Not all usage is equal, and the fact that it's involved under the hood isn't enough to imply anything significant. Subatomic physics is used by 100% of websites and has been around for billions of years, but that's not a reason to expect every web developer to have a working knowledge of electron fields.
Let's compromise and say that whoever is responsible for involving (javascript|electron fields) in the display of a website, should each understand their respective field.
I don't expect a physicist or even an electrical engineer or cpu designer to necessarily understand JavaScript. I don't expect a JavaScript developer to understand electron fields.
I do expect a developer who is writing JavaScript to understand JavaScript. Similarly I would expect the physicist/etc to understand how electrons work.
The issue with this framing is that understanding something isn't a binary; you don't need to be an expert in every feature of a programming language to be able to write useful programs in it. The comment above describing prototypical inheritance as esoteric was making the point that you conflated the modulus operator with it as if they're equally easy to understand. Your responses don't seem to indicate you agree with this.
It sounds like you expect everyone to understand 100% of a language before they ever write any code in it, and that strikes me as silly; not everyone learns the same way, and some people learn better through practice than by reading about things without practice. People sometimes have the perception that anyone who prefers a different way of learning than them is just lazy or stupid for not being able to learn in the way that they happen to prefer, and I think that's both reductive and harmful.
Given that they literally changed the language to support the class keyword, I think we can safely assume it isn't just the beginners who never bothered to learn how prototypical inheritance works.
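For context on the prototypes-vs-class point: a sketch (names mine) showing that `class` is mostly sugar over the prototype chain that was always there, with the method living on the prototype in both styles:

```javascript
// Pre-ES6 style: constructor function plus a prototype method.
function Dog(name) { this.name = name; }
Dog.prototype.speak = function () { return this.name + " says woof"; };

// ES6 `class`: same mechanics underneath.
class Cat {
  constructor(name) { this.name = name; }
  speak() { return this.name + " says meow"; }
}

// Either way, `speak` is shared via the prototype, not copied per instance.
console.log(new Dog("Rex").speak());                                  // "Rex says woof"
console.log(Object.getPrototypeOf(new Cat("Mia")) === Cat.prototype); // true
```

(`class` does add real semantics on top - required `new`, non-enumerable methods, strict-mode bodies - but the dispatch model is the same prototype lookup.)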
I don't know exactly what determines which restaurants opt out of disposable chopsticks, but I think, for example, "normal" tempura, katsudon, or soba restaurants will tend to be the ones.
I almost associate the cheapo reusable plastic chopsticks with some food courts or Matsuya at this point.