
Might want to add how this compares to other products in the space.

Some that come to mind that are potentially tangentially related/similar:

https://github.com/evidence-dev/evidence


Have you tried NixOS/flakes? What was your reaction?


I haven't tried it first hand.

I've written ~10k lines of Ansible playbooks and roles to fully automate setting up servers to deploy Docker-based web apps, so I do like the concept of declaring the state of a system in configuration and then having that become reality. I know NixOS is not directly comparable to Ansible, but in general I think IaC is a good idea.

It was important to me that my dotfiles work on a number of systems so I avoided NixOS. For example, the command line version works on Arch, Debian and Ubuntu based distros along with WSL 2 support and macOS too. The desktop version works on Arch and Arch based distros.

Beyond that, I also use my dotfiles on 2 different Linux systems so I wanted a way to differentiate certain configs for certain things. I also have a company issued laptop running macOS where I want everything to work, but it's a managed device so I can't go hog wild with full system level management.

Beyond that, since I make video courses I wanted to make it easy for anyone to replicate my set up if they wanted but also make it super easy for them to personalize any part of the set up without forking my repo (but they can still fork it if they want).

All of the above was achievable with shell scripts and symlinks. I might be wrong since I didn't research it in depth but I'm not sure NixOS can handle all of the above use cases in an easy to configure manner.


To have your Nix-based setup reproducible across different OSes (Arch, Debian, Ubuntu, WSL2, macOS, and NixOS), with an extensible base config that can be customized for different situations, the go-to framework is home-manager (not NixOS itself, which only applies when the whole machine runs NixOS, or NixOS on WSL 2).

https://github.com/nix-community/home-manager
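
For illustration, a minimal standalone home-manager flake is roughly this shape (the username, system, and module path are placeholders, not from anyone's actual setup):

    {
      inputs = {
        nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
        home-manager = {
          url = "github:nix-community/home-manager";
          inputs.nixpkgs.follows = "nixpkgs";
        };
      };

      outputs = { nixpkgs, home-manager, ... }: {
        # One entry per user/machine combination you want to manage.
        homeConfigurations."alice" = home-manager.lib.homeManagerConfiguration {
          pkgs = nixpkgs.legacyPackages."x86_64-linux";
          modules = [ ./home.nix ];
        };
      };
    }

Activated with `home-manager switch --flake .#alice`, no NixOS required.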


Nix offers a trade-off: near-perfect reproducibility in exchange for longer builds. Sometimes it's nice to just build a new .so for some library and let the rest of your binaries link to it without recompiling everything.

I'm not convinced about building whole systems around it. I can't remember the last time I ran into a reproducibility issue in practice, but I upgrade my system packages every day and that's definitely faster without Nix.


ABI stability exists for a reason.


Migrated from archlinux to nixos. I don't think I can use anything else now...

I have a CI at home that builds my NixOS config on a weekly basis with the latest flake inputs. The artifacts are pushed to atticd. With this setup, when I actually need to update my machines, it's almost instantaneous.
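
For the curious: the producer side is essentially a `nix build` of each host's system toplevel followed by an `attic push`, and on the consumer side each machine just needs the cache as a substituter. A sketch of the consumer half, where the endpoint and key are placeholders:

    # NixOS module fragment; replace the URL and public key
    # with your own atticd cache's values.
    {
      nix.settings = {
        substituters = [ "https://attic.example.com/system" ];
        trusted-public-keys = [ "system:your-cache-public-key" ];
      };
    }

With that in place, `nixos-rebuild switch` pulls the prebuilt store paths instead of compiling.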


Care to share some scripts on how you do it? I'm in a similar position, maintaining multiple desktops, laptops, and servers, but I don't know how to share the build artifacts.


Agreed. I'm at the same point in my Nix journey and would like to share build artifacts.


I have never been more stress-free than when I was running nixos as a daily driver. Had to return to macos as primary unfortunately but still use nix as much as possible.


I'm even so stress-free that I once rebuilt my kernel (including simple patches) under the hood of my daily production/home PC.

edit: Using nixos ofc, otherwise I would never do this.


I wish they would just rip the bandaid to stop everybody's entitled whining.

"We're sorry, what we were able to give you for $100/mo before now needs to be $200/mo (or more). We miscalculated/we were too generous/gave too much away for too little. It's a new technology, we are seeing a ton of demand, we are trying to run a business, hope you understand. If you don't want it, don't pay for it."


This is my take too, although I'm not prepared for a max400 reality to replace the max200, but... I hate all of the whingeing. Piggies at the buffet line seem to be the loudest on this subject.


I would understand the move, but boy would it play right into the "AI is only here to make the rich even richer" feeling wouldn't it?


If I strain really hard, I can come up with a reason why it might play into such a narrative.

/s


> "We're sorry, what we were able to give you for $100/mo before now needs to be $200/mo (or more). We miscalculated/we were too generous/gave too much away for too little. It's a new technology, we are seeing a ton of demand, we are trying to run a business, hope you understand. If you don't want it, don't pay for it."

Anthropic's thing has always been that they are perceived as slightly ahead of the competition, if they 2X their pricing then the competition that used to be "slightly worse" suddenly becomes an absolute bargain and guts their user base.


It is one thing to pay 100 a month to make calendar apps for your linkedin and birds on bicycles to get invited to talks, paying 200 HOWEVER


If we didn’t have the birds on bicycles, how would we know the models are getting better?


Are we at the point where there are external constraints that cash can't solve?


can't tell if you're being facetious but yes, there's not enough cash in the world to double energy/silicon fab capacity in a year. Infrastructure takes time, hardware is hard, and you have to be willing to bet that the demand will be there 5 years from now to make an investment today.


Until one buyer has the entire supply of world GPU production, cash can solve it by outbidding others.


TSMC would never allow all of their output to go to only one customer. You have an oversimplified view of this.


One could always make existing infrastructure more efficient. Nothing better than post-mature optimization.


Just put everyone on pay per use with the API and rip the band aid off.


Even the pay-per-use is heavily VC-subsidized at current prices.


All indications are that inference for API use is margin-positive for OpenAI and Anthropic; it's the subscriptions that aren't.

It will basically cut the hobbyist out and entrench large corporations that can pay the real costs.

If that happened and I was working for myself, I would just buy the beefiest computer I could finance and do everything locally.


Honestly, I wish they couldn't subsidize with VC cash and offer below cost to begin with. I wish it were illegal, basically. This enables things like Uber: more or less putting taxis out of business and then being worse than what they replaced.

I'd like to see a lot more than entitled whining. I would like to see the fist of regulation slammed down on the back of these tech shenanigans, where they know they'll never be able to match the prices they're starting with.


I wish they would too. I'd respect them more for the transparency. I think everyone's enshittification sensors have rightly been dialed up over the years, so without explanations for the regressions it just feels like another example.


I wish people would pay more attention to:

* Anthropic is in some way trying to run a business (not a charity) and at least (eventually?) make money and not subsidize usage forever

* "What a steal/good deal" the $100-$200/mo plans are compared to if they had to pay for raw API usage

and less on "how dare you reserve the right to tweak the generous usage patterns you open-ended-ly gave us, we are owed something!"


As an (ex) paying customer, I'm expecting some consistency. I used to be satisfied with the value I got, until the limits changed overnight and I'd get a tenth of my previous usage.

If Anthropic is allowed to alter the deal whenever, then I'd expect to be able to get my money back, pro-rata, no questions asked.


yes, $200/mo is a serious subscription, we are owed something, and I won't feel ashamed for saying that

especially when you're told that using the subagent for code review ("claude -p") is now billed via the API on top of the $200 sub


All those apply to OpenAI+Codex too, but they're far more generous with limits than Anthropic, and with granting fresh limits to apologize when they fuck up.


How big of a handicap on performance is the external enclosure for something like an RTX5090?


> Running DeepSeek V3 (685B) requires 8×H100 GPUs which is about $14k/month. Most developers only need 15-25 tok/s.

> deepseek-v3.2-685b, $40/mo/slot for ~20 tok/s, 465 slots total

> 465 users × 20 tok/s = 9,300 tok/s needed

> The node peaks at ~3,000 tok/s total. So at full capacity they can really only serve:

> 3,000 ÷ 20 = 150 concurrent users at 20 tok/s

> That's only 32% of the cohort being active simultaneously.


People presumably work 8 hours a day; I guess they're banking on that idea.


This only works if the users are evenly distributed around the globe (which is likely more or less the case). If the users concentrate in one region, the token rate will be terrible.


Can you help me understand why devenv is needed instead of a shell like this/what is gained?

    { pkgs }:
    
    pkgs.mkShell {
      nativeBuildInputs = with pkgs; [
        # build tools
        cmake
        ninja
        gnumake
        pkg-config
      ];
    
      buildInputs = with pkgs; [
        # java
        jdk8
    
        # compilers
        gcc
        clang
        llvmPackages.libcxx
    
        # libraries
        capstone
        icu
        openssl_3
        libusb1
        libftdi
        zlib
    
        # scripting
        (python3.withPackages (ps: with ps; [
          requests
          pyelftools
        ]))
      ];
    
      # capstone headers are in include/capstone/ but blutter expects include/
      shellHook = ''
        export CPATH="${pkgs.capstone}/include/capstone:$CPATH"
        export CPLUS_INCLUDE_PATH="${pkgs.capstone}/include/capstone:$CPLUS_INCLUDE_PATH"
      '';
    }


It is a more user friendly abstraction on top of Nix. Most people don’t want or need to understand the specifics of Nix or the Nix language.

Btw, I say this as a huge fan and heavy user of both Nix and NixOS.


To be honest, I don’t know. I just enjoy the simplicity of devenv. It’s the right amount of user friendly.


“Needed” is too strong, but this does not provide services, does not provide project-specific scripts, does not set up an LSP, does not set up git hooks, can't automatically dockerize your build, does not support multiple profiles (e.g. local and CI), etc.


The UX is the big benefit, especially on teams who may not even know what nix is. I held off on exposing my nix setups for a long time, but devenv has made it possible to check things in without losing a ton of time to tech support.


devenv lets you express shells as modules.

Modules let you express the system in smaller, composable, reusable parts rather than express everything in one big file. (There are other popular tools which support modules: NixOS, home-manager, flake-parts).

That devenv also provides "batteries included" modules for popular languages (including linters, LSPs) is also a benefit.


devenv also has tasks/services. For example you need to start redis, then your db, then seed it, and only then start the server. All of that could be aliases, yeah, but if you define them as aliases you can have them all up with `devenv up`. It even supports dependencies between tasks ("only run the db after migrations ran")
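
Roughly what that shape looks like in a devenv.nix (the service, task, and process names here are illustrative, not from any real project):

    { pkgs, ... }: {
      packages = [ pkgs.git ];

      # Managed services, started together by `devenv up`.
      services.redis.enable = true;
      services.postgres.enable = true;

      # A task that can be ordered relative to other tasks.
      tasks."app:seed".exec = "psql -f seed.sql";

      # A long-running process supervised alongside the services.
      processes.server.exec = "npm run dev";
    }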


really good question.

right now I have bought into the Nix koolaid a bit.

I have NixOS Linux machines and then nix-darwin on my Mac.

I use Nix to install Brew and then Brew to manage casks for things like Chrome, which I'm sure updates itself. So the flake.lock probably isn't super accurate for the apps you described.


> What’s next for Deno?

Who cares? Why does the world need so many fringe tools/runtimes? So much fragmentation. Why does every project have to be a long-term success? Put some stuff out of its misery. Don't waste the time of the already few open-source contributors who pour hours into something for no good reason.


Deno is much more than a fringe tool. It's a genuine improvement in many ways.


The world doesn't need a dozen JS runtimes.

The world doesn't need a dozen JS engines.

The world doesn't need many dozens of Linux distros.

The world doesn't need a handful of BSD distros.

The world doesn't need many dozens of package managers.

The world doesn't need hundreds of JS frameworks.

The world doesn't need dozens of programming languages or chat protocols or CI/CD systems.

The world doesn't need dozens of init systems, service managers, display servers, audio stacks, universal app formats, build tools/bundlers.

Deno may have dragged the JS runtime space forward, fully agree. Maybe it served its purpose and it is time to say goodbye.


If Deno moved things forward, doesn't that suggest that we do need efforts like this to support ongoing progress? There doesn't seem to be strong evidence to the contrary in the JS ecosystem.


The world doesn't need so many people or anything they have to offer it.


I'd argue that the mainstream, lowest-common-denominator tools are the ones which waste people's time. (Especially when they're backed by an incumbent. Deno, on the other hand, clicked immediately.)


any reason why you did

    const { rows } = client.query(
      "select id, name, last_modified from tbl where id = $1",
      [42],
    );
instead of

    const { rows } = client.query(
      "select id, name, last_modified from tbl where id = :id",
      { id: 42 },
    );


That is the way node-postgres works. pg-typesafe adds type safety but doesn't change the node-postgres methods.

