Hacker News | AsmodiusVI's comments


GitLab. Everything GitHub wishes it could be, never given enough credit, and going to be key to the future of AI software development in a practical sense.

Time to get GitLab.


The TL;DR: cagent is a multi-agent runtime that orchestrates AI agents with specialized capabilities and tools, as well as the interactions between those agents. The example scenario I want to share with you is a Microsoft Dynamics 365 Business Central (programming language: AL) coding assistant that orchestrates three agents.
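For a sense of what that looks like, here is a minimal sketch of a cagent-style YAML config with a root agent delegating to sub-agents. The agent names, instructions, and model identifiers are my own illustrative assumptions, not the actual assistant described above; consult the cagent docs for the exact schema.

```yaml
# Hypothetical cagent config: an AL coding assistant (names/instructions are illustrative)
agents:
  root:
    model: gpt
    description: Coordinates AL coding tasks for Business Central
    instruction: |
      Route coding questions to the appropriate specialist agent
      and assemble their answers into a single response.
    sub_agents:
      - al_coder      # assumed name: writes AL code
      - al_reviewer   # assumed name: reviews generated code

  al_coder:
    model: gpt
    description: Writes AL objects (tables, pages, codeunits)

  al_reviewer:
    model: gpt
    description: Reviews AL code for correctness and style

models:
  gpt:
    provider: openai
    model: gpt-4o
```

The point is the shape, not the specifics: one runtime file declares the agents, their roles, and who delegates to whom.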


You're doing what you can, it's not easy. Thanks for handling this so well.


This is a hell of a lot like saying you ditched driving cars to ride a monorail with only two stops in the wrong neighborhood. You’re comparing apples and oranges and selectively leaving out info to highlight that.


Docker isn’t nearly the same $$$. Their catalog is growing.


Docker doesn’t have hardened / zero CVE containers


They do


And are engineers never the cause of mistakes? There can't possibly be any data to back up the claim that major outages are more often caused by leadership. I've been in SEVs caused simply by someone pushing a bad change to a switch network. Statements like these only go to show how much we have to learn, humble ourselves, and stop blaming others all the time.


Leadership can include engineers responsible for technical priorities. If you're down for that long, though, it's usually an organizational fuck-up, because the priorities didn't include identifying and mitigating systemic failure modes. The proximate cause isn't all that important, and the people who set organizational priorities are by and large not engineers.


PROLONGED outages are a failure mode that, more often than not, requires organizational dysfunction to happen.


Think of airplane safety; I think it's similar. A good culture makes it more likely that $root-cause is detected, tested, isolated, monitored, easy to roll back, and so on.


This article is so me it hurts. I live in the micro-efficiencies; calendar color-coding, protein packets in my laptop bag, dual-purpose walking meetings. It’s not about doing more; it’s about removing drag so I can stay focused on what actually matters. Optimization isn’t a hustle, it’s clarity.


Really appreciated this take, hits close to home. I’ve found LLMs great for speed and scaffolding, but the more I rely on them, the more I notice my problem-solving instincts getting duller. There’s a tradeoff between convenience and understanding, and it’s easy to miss until something breaks. Still bullish on using AI for exploring ideas or clarifying intent, but I’m trying to be more intentional about when I lean in vs. when I slow down and think things through myself.

