Hacker News | wouldbecouldbe's comments

I work with large files a lot, and running Claude Code on them is not token-intense at all, probably because it does a lot with scripts. It's a bit more raw, but I think in the end more powerful. You have to pick a good Excel library and language. I use Node; Python can probably work as well.
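For reference, a minimal sketch of the script-driven approach (file name, column names, and data here are made up, not from the comment): export the sheet to CSV and let a small Python script do the aggregation, so only the summary, never the raw rows, ends up in the model's context.

```python
# Sketch: process a large exported sheet with a script instead of pasting
# its contents into the prompt. Python's stdlib csv module streams row by
# row, so file size barely affects memory or tokens.
import csv
import io

# Stand-in for a large exported sheet; in practice: open("export.csv").
raw = "region,amount\nEU,100\nUS,250\nEU,50\n"

totals = {}
for row in csv.DictReader(io.StringIO(raw)):
    # Aggregate per region; only this summary dict would reach the LLM.
    totals[row["region"]] = totals.get(row["region"], 0) + int(row["amount"])

print(totals)  # {'EU': 150, 'US': 250}
```

The same shape works with an xlsx library (e.g. exceljs in Node or openpyxl in Python) when you need real Excel features rather than a CSV export.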

QA merged originally out of programming.

emerged?

Developers are a tough crowd: stubborn know-it-alls.

Yeah, I've had more downtime on managed DBs and cloud servers than on my own self-managed VPS. And if something happens, with a VPS I can normally fix it instantly, compared to waiting 20-60 minutes for a response just to be told they've started fixing it. And when they do fix it, that doesn't always mean your instance automatically works again.

Just ask Claude to do all that :). He is excellent at installing and managing new servers and making sure all security patches are applied. Just be careful if it's a high-risk project.

The irony is that deploying Next.js on the Railway platform is super slow since they use containers: 2 minutes on Vercel is like 12 minutes on Railway, while deployments on a VPS take only about 20 seconds.

*I know this is just build time, so it's different from their deployment time.


Containers aren't to blame, but overprovisioning and how many resources are dedicated to building. I'm not sure how Vercel gets things built in literal seconds, but hey, they are the creators of Next.js.

At DollarDeploy we also build in containers, but every build gets 4 GB / 2 CPUs, so it is quite fast, though not as fast as Vercel.


I deploy some projects via pm2 deploy directly on a server, and it's much faster than Vercel.

Not every project can be compiled on the production server, since compiling Next.js can take quite a lot of RAM; I would advise against it.

We have 128 GB of RAM on our production server, I think it will be fine ;)

Turbopack, custom runtime infrastructure on top of AWS Lambda.

Turbopack does not work for every app. I think they skip some build steps, like TypeScript validation, and aggressively cache node modules.

> skip some build steps

There was a change in Next 16, not Turbopack, that removed `ESLint` during `next build`: https://nextjs.org/blog/next-16#breaking-changes-and-other-u...

This behavior is the same whether you use Turbopack or webpack. It doesn't make sense for us to couple ourselves with ESLint when there are many viable alternatives. No other popular framework runs ESLint automatically during builds. This change in Next 16 brought us closer to parity with other frameworks and bundlers.
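For anyone relying on the old behavior, the usual replacement is to run lint as an explicit script and call it from CI alongside the build; a sketch (the script names are my own, not from the post):

```json
{
  "scripts": {
    "lint": "eslint .",
    "build": "next build"
  }
}
```

CI can then run `npm run lint` before `npm run build`, restoring the guarantee that lint failures block a release.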

> typescript validation

There's no change here with Turbopack. We do still run `tsc` automatically to check your types. That's part of `next build` and not Turbopack. However, we may remove this in the future for similar reasons.

There's no good reason for the bundler to call the typechecker. Bundlers strip types. Historically this was done with Babel in webpack. Modern versions of Next.js use SWC for type stripping in both webpack and Turbopack.

> aggressively cache node modules

We aggressively cache everything. We don't have special-casing for `node_modules`. See our blog post about our caching system: https://nextjs.org/blog/turbopack-incremental-computation

Interestingly vite does actually special-case and cache `node_modules`: https://vite.dev/guide/dep-pre-bundling

There are tradeoffs to both approaches, and I think Vite's choice makes sense in the context of their broader minimal-bundling-in-dev design, but it makes less sense for Turbopack (as well as webpack and Rspack) where we produce bundles in dev.


Unfortunately Turbopack has some bugs in bundling. It does not work on some of my projects. I can't reproduce it outside of the whole codebase, so no ticket.

The biggest issue is missing plugins, but they have an extension point to add them.

Naturally they expect plugins to be written in Rust, which might be an issue for some; then again, the Vite folks are also rewriting in Rust.


Well, that largely depends; lots of SaaS companies are running 90% operating profit margins.

Yeah, it's strange: if you are building a CMS in the AI era, I think you would want an LLM integration first, not a UX clone of WordPress. The UX of WP is a historical crutch, not a plus.


I'd go the other way: have less stuff. Just a text box, markdown, insane support for Mermaid and everything. Dump the "blocks" of WordPress etc.; too confusing and fiddly. Maybe have an AI assist to help with styling the rendered page a bit.


Yeah, same, but HN is AI-phobic, so it gets downvoted. So much cool stuff is now possible: LLMs with integrated review features, etc.


This is really great. Anyone know of a Dutch version?



Yeah, that's a great start; however, the .md files for every change are really helpful for going through history and understanding the steps with LLMs.


I exclusively use fullscreen mode for apps I'm actively using; on large screens I connect the workspaces, on small screens I swipe back and forth. So you never actually use that.

