Hacker News | KenoFischer's comments

While we have you here, could you fix the bash escaping bug? https://github.com/anthropics/claude-code/issues/10153

I'm still surprised I was the first one to notice when Linus tried to change this - I always thought it was a pretty well known behavior.


I'll submit my bit flip story for consideration also :) https://julialang.org/blog/2020/09/rr-memory-magic/


Congratulations to the Oxide team! It's a tough market out there :)! I'm still personally frustrated that I don't get to play with the hardware (too expensive for our internal server needs; not the right fit for our datacenter partners/customers), but I'm excited to see that they're successful and hopefully they'll come around to my use case eventually :). In the meantime, I appreciate that they're building largely in the open - every once in a while I'll glance at their issue tracker for light bedtime reading. Just recently we had some fun internally throwing our controls software at their thermal loop as a usage example - it's often hard to find compelling real-world systems to use as openly sharable examples (of course we have interesting customer problems, but that's all NDA'd), so having companies build real stuff in the open is fantastic. Great company, wish them the best.


> too expensive for our internal server needs; not the right fit for our datacenter partners/customers

You and me both. They're doing neat stuff, but I wonder how many other potential customers feel that way too.

What is Oxide's market? It feels a bit like advanced alien technology that is ultimately a little too weird and expensive for most enterprises to adopt.


I always thought a company like Railway would be an Oxide customer. But Railway is building their own servers in their own datacenters. So I am really curious who is small enough to buy Oxide, but large enough to need Oxide?


The same sorts of customers that SGI used to sell to in the pre-cloud era. DoD. Oil and gas. Finance.

People with deep pockets and good reasons to want to keep certain parts of their infra very close to home. Also the kind of people that expect very highly skilled people to show up and get their in-house app running.

(I was an SGI HPC customer once. I still miss the old SGI. Sigh.)


Maybe DigitalOcean?


how does it compare to Nutanix?


Off topic, but where the hell did Nutanix come from? I'd never heard of them until recently, and all of a sudden they're being marketed as a serious competitor to VMware etc.


They have been around for a long time and were one of the first to offer a hyper-converged solution, where all storage in the nodes is pooled and usable by any node. They also have their own hypervisor. You can get 4 nodes per 2U, so it's pretty dense. In the datacenter my company uses, another company had dozens of Nutanix boxes sitting in the hallway for months before they finally got installed. They're notoriously expensive, so they're really only used by companies with big IT budgets.


OK, must be one of those weird things where I just never noticed them. Super strange because I've been very aware of this space for decades.


They have been around for nearly 20 years. I viewed them as an also-ran until Broadcom decided they didn’t need any of us as VMware customers anymore. Now Nutanix seems like a viable path for on-prem VM workloads that need a new home for those who don’t want to part with an arm and a leg on licensing but can’t move to public cloud either. I’m not sure how much of that market Oxide can capture. Not sure Nutanix is still doing the hyperconverged hardware themselves anymore.


Nutanix / Oxide have a VERY different market / customer base.


I've been curious about Oxide for a year or two without fully understanding their product. People talking about the "hyperconverged" market in this thread gave me an understanding for the first time.

Given this, can you help me understand in what ways they are different?

When I went to the Nutanix website yesterday, the link showed that I'd previously visited them (not a surprise; I look up lots of things I see mentioned in discussions), but their website does an extremely poor job of explaining their business to someone who lacks foundational understanding, even after I'd just started reading about "hyperconverged."


If you want to KNOW the chain of custody for all of your OS and software, from the bootloader to the switch chip, and you want to run this virtualization platform airgapped, buying at rack scale, you want Oxide. They are making basically everything in-house. That's government, energy, finance, etc.: customers that need discretion, security, and something that works very reliably in a high-trust environment, with a pretty high level of performance.

Also check this out: https://www.linkedin.com/posts/bryan-cantrill-b6a1_unbeknown...

If you need a basic "vm platform", VMware, Proxmox, Nutanix, etc. all fit the bill with varying levels of feature and cost. Nutanix has also been making some fairly solid kubernetes plays, which is nice on hyperconverged infrastructure.

Then if you need a container platform, you go the opposite direction - Kubernetes/OpenShift and run your VMs from your container platform instead of running your containers from your VM platform.

As far as "hyperconverged"...

"Traditionally" with something like VMware, you ran a 3-tier infrastructure: compute, a storage array, and network switching. If you needed to expand compute, you just threw in another 1U-4U on the shelf. Then you wire it up to the switch, provision the network to it, provision the storage, add it to the cluster, etc. This model has some limitations but it scales fairly well with mid-level performance. Those storage arrays can be expensive though!

As far as "hyperconverged", you get bigger boxes with better integration. One-click firmware upgrades for the hardware fleet, if desired. Add a node, it gets discovered, automatically provisions to the rest of the configuration options you've set. The network switching fabric is built into the box, as is the storage. This model brings everything local (with a certain amount of local redundancy in the hardware itself), which makes many workloads blazing fast. You may still on occasion need to connect to massive storage arrays somewhere if you have very large datasets, but it really depends on the application workloads your organization runs. Hyperconverged doesn't scale compute as cheaply, but in return you get much faster performance.


https://news.ycombinator.com/item?id=30688865

Here is an answer by steveklabnik about this topic.


They've been marketed as a serious competitor to VMware for 15 years. Their sales reps might've just not found you until recently. We did a PoC with them 10 years ago, and I don't believe much has changed since.


Is that what DO is using at the moment? Never heard about it


Yeah, me too. I know that they have already explained why they can't, but I'd love for them to build a mini box that we could try it out on.


> Do they have "leggo my eggo" itself trademarked?

As a matter of fact, they do:

https://tsdr.uspto.gov/#caseNumber=77021301&caseType=SERIAL_...

The full complaint linked above has a full list of trademarks. There's also a claim for trade dress infringement, since the food truck uses the same font and red-yellow-white color scheme.


However, that particular phrase appears to be trademarked for: waffles, pancakes, french toast


Skimming the complaint, Kellogg looks to be arguing it is a well-known mark,[1] and is also making a trade dress claim.

[1] https://www.uspto.gov/ip-policy/trademark-policy/well-known-...


You're supposed to generate a random one, but the only consequence of not doing so is that you won't be able to register your package if someone else already took the UUID (which is a pain if you have registered versions in a private registry). That said, "vanity" UUIDs are a bad look, so we'd probably reject them if someone tried that today, but there isn't any actual issue with them.
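For illustration, the "random one" being described is a version-4 UUID, which has enough entropy that two independently generated packages will essentially never collide in a registry. A minimal sketch in Python (the function name is mine; Julia's own package tooling normally generates this for you):

```python
import uuid

def fresh_package_uuid() -> str:
    """Generate a random (version 4) UUID for a new package.

    Registries key packages by UUID rather than by name, so a
    randomly generated one avoids colliding with a UUID someone
    else has already registered.
    """
    return str(uuid.uuid4())

# Emit the line you'd put in the package's project file:
print(f'uuid = "{fresh_package_uuid()}"')
```

A hand-picked "vanity" value would still parse as a UUID, which is why nothing technically breaks; the randomness is only there to make accidental collisions vanishingly unlikely.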


Funny to see this come back and see my write-up linked. I did this 8 years ago and think I was the first on this particular board (although others had done similar on other boards). I still have a pile of them sitting on my desk because I accidentally kept bricking them by being ... not careful. That said, even at the time this board was already old, so I guess it's positively prehistoric at this point. I eventually stopped working on this because I thought that others were making sufficient progress. It hasn't really fully materialized yet, but between openbmc, opensil, DC-SCM and the work the oxide folks are doing, I'm still hopeful that we'll get out of server firmware hell eventually.


Out of curiosity: how "bricked" are these boards? Is there irreversible hardware damage (and, if so, how?), or has some firmware just gotten overwritten?


One of them I managed to fry the pcie root complex somehow, not sure exactly how. One I damaged the traces to the BMC SPI flash. Two others I think just have bricked firmware, but it's been years, so I don't remember for sure.


Tried it out.

1. First authentication didn't work on my headless system, because it wants an oauth redirect to localhost - sigh.

2. Next, WebFetch isn't able to navigate github, so I had to manually dig out some references for it.

3. About 2 mins in, I just got `ℹ Rate limiting detected. Automatically switching from gemini-2.5-pro to gemini-2.5-flash for faster responses for the remainder of this session.` in a loop with no more progress.

I understand the tool is new, so not drawing too many conclusions from this yet, but it does seem like it was rushed out a bit.


Similar. Yesterday things seemed to be going okay. It was trucking along, making code changes.

Then I hit the rate limit. - Fine, no worries, it'll be interesting to see if the quality changes.

Then it starts getting stuck and taking forever to complete anything. So I shut it down for the day.

Today, I start it back up and ask it to pick up where it left off, and it starts spinning. I forget about it and come back 7.5 hours later and it's still spinning. When I killed it, it said: 1 Turn, 90k input tokens, 6.5 hours of API time... WTH?

And now I'm just totally rate limited - `status: 429, statusText: 'Too Many Requests'` - every time. Also, I can't find any kind of usage data anywhere!
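For what it's worth, the standard client-side mitigation for persistent 429 responses is exponential backoff with jitter. A minimal sketch (the request function and its `(status, body)` return shape are hypothetical stand-ins for whatever HTTP client you use):

```python
import random
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Call request_fn, retrying on HTTP 429 (Too Many Requests).

    Sleeps a random amount up to base_delay * 2**attempt between
    tries ("full jitter"), so many clients don't retry in lockstep.
    Returns the last (status, body) pair after max_retries attempts.
    """
    for attempt in range(max_retries):
        status, body = request_fn()
        if status != 429:
            return status, body
        time.sleep(random.uniform(0, base_delay * 2 ** attempt))
    return status, body

# Usage with a stub that rate-limits the first two calls:
calls = {"n": 0}
def fake_request():
    calls["n"] += 1
    return (429, "Too Many Requests") if calls["n"] <= 2 else (200, "ok")

print(call_with_backoff(fake_request, base_delay=0.01))  # -> (200, 'ok')
```

Of course, backoff only helps with transient limits; it can't fix a quota that is simply exhausted.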


I had the same experience. I ran `gemini --debug`, and it at least spat out the authentication URL. However, I got the same `Rate limiting detected` for a simple refactoring task in a repo with 2 Python files after a few minutes.


Not working for me either; getting "too many requests". My own CLI tool I wrote for Gemini last week works better lol. Maybe I have something configured wrong? Though that really shouldn't be the case; my API key env var is correct.


What's the motivation for restricting to Pro+ if billing is via premium requests? I have a (free, via open source work) Pro subscription, which I occasionally use. I would have been interested in trying out the coding agent, but how do I know if it's worth $40 for me without trying it ;).


Great question!

We started with Pro+ and Enterprise first because of the higher number of premium requests included with the monthly subscription.

Whilst we've seen great results within GitHub, we know that Copilot won't get it right every time, and a higher allowance of free usage means that a user can play around and experiment, rather than running out of credits quickly and getting discouraged.

We do expect to open this up to Pro and Business subscribers - and we're also looking at how we can extend access to open source maintainers like yourself.


It's linked from the main website if you hit the "Log In" button and there was communication to customers about this, though I had the same initial reaction, which is why I looked around for corroboration before posting this.


Ah thanks, that's what I was looking for.

