
As a security researcher, I am both salivating at the potential that the proliferation of TDD and other AI-centric "development" brings for me, and scared for IT at the same time.

Before, we just had code that devs didn't know how to build securely.

Now we'll have code that the devs don't even know what it's doing internally.

Someone found a critical RCE in your code? Good luck learning your own codebase starting now!

"Oh, but we'll just ask AI to write it again, and the code will (maybe) be different enough that the exact same vuln won't work anymore!" <- some person who is going to be updating their resume soon.

I'm going to repurpose the term, and start calling AI-coding "de-dev".



> Now we'll have code that the devs don't even know what it's doing internally.

I think that has already been true for some time on large projects that are continuously updated over many years, with lots of developers entering and leaving along the way, because nobody who has a choice wants to do that demoralizing job for long. (I was one of them in the 1990s; the job was later given to an Indian H1B who could not easily switch to something better, not before putting in a few years of torture to get a better resume, and possibly a green card.)

Most famous post here, but I would like to see what e.g. Microsoft's devs would have to say, or Adobe's:

https://news.ycombinator.com/item?id=18442941

Such code has long been held together by extensive test suites rather than by intimate knowledge of how it all works.

The task of the individual developer is to close bug tickets and add features, not to produce an optimal solution, or even to refactor. They long ago gave up on that as taking too long.
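The "held together by test suites" approach has a name: characterization (or "golden master") testing. A minimal sketch, with a made-up `legacy_price` function standing in for inherited code nobody fully understands; the assertions record what the code currently does, not what is correct:

```python
def legacy_price(quantity, unit_cost):
    # Hypothetical stand-in for an inherited function nobody fully understands.
    total = quantity * unit_cost
    if quantity > 100:
        total *= 0.9  # undocumented bulk discount, discovered by probing
    return round(total, 2)

def test_characterization():
    # Recorded outputs of the current behavior; any change that breaks
    # these must be reviewed, even if the new answer looks "obviously right".
    assert legacy_price(10, 2.5) == 25.0
    assert legacy_price(150, 2.5) == 337.5
    assert legacy_price(0, 99.0) == 0.0

test_characterization()
print("golden master holds")
```

The point is that the test suite, not any individual's understanding, becomes the definition of correct behavior.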


That's the reality of software development at scale: pretty soon no individual will know how everything works, and you need high-level architecture overviews on the one hand, and strict procedures, standards, tools, test suites, etc. on the other to make sure things keep working.

But the reality is that most of us will never work on anything that big. I think the biggest thing I've worked on was in the 500K LOC range, tops.


As the OP outlined, 10x is commonplace now; whereas my best day pre-AI may have been 500 LOC, now 5K LOC per day is routine. So a few months on a solo project has produced ~500K lines of code.

The code base is disproportionately testing automation, telemetry, and monitoring systems, but a lot of code nonetheless ;) So even a solo/small-team project depends on architecture, procedures, test suites, etc. over knowing every line of code.


Forking a 500K LOC project takes only 5 seconds on GitHub, so that is a 1000000x.


First time seeing that post, oh my, I suggest everyone read it. And this is what half the world runs on.


In my opinion, AI coding is basically gambling. The odds of getting a usable output are way better than piping from /dev/urandom, but ultimately it's still a probabilistic output: whether what you want is in fact what you get. Pay for some tokens, pull the slots, and hopefully your RCE goes away.
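The "pull the slots" framing matches how LLM decoding actually works: the model produces a distribution over tokens and one is sampled from it. A minimal sketch of temperature sampling over made-up logits (the three-token vocabulary and the logit values are illustrative, not from any real model):

```python
import math
import random

def sample(logits, temperature=1.0):
    """Sample a token index from logits via softmax with temperature.
    Higher temperature flattens the distribution, making unlikely
    tokens more probable."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

random.seed(0)
# The same "prompt" (same logits) can yield different tokens on repeated pulls.
draws = [sample([2.0, 1.0, 0.5]) for _ in range(10)]
print(draws)
```

This is why regenerating the same prompt can produce code that is "different enough" — or not — in ways nobody controls.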


Replace 'AI' with 'intern' for literally the same result.


People post comments like this hoping for the dopamine shot of creating a “gotcha” moment. The problem, however, is that these comments are insulting, reductive, and just a straight-up lie.


There are some bright interns, I’ve worked with a couple. I’ve also worked with a few on the other end of the bell curve and that post is about them.

I’d rather tell it as a joke than be blunt about the left tail of engineers being made redundant for life, slowly but inevitably.


That is expecting too much of most juniors and many seniors I have worked with.


> Now we'll have code that the devs don't even know what it's doing internally.

Haha, that already happens in almost any project after 2-3 years.


> that already happens in almost any project after 2-3 years.

Now with AI you’ll be able to not understand your code in only 2-3 days.

The next release will reduce the time to confusion to 2-3 hours.

Imagine a future where you’ll be able to generate a million lines of code per second, and not understand any of it.


IMHO we are in the process of moving up to a higher level of abstraction, so in the future maybe we won't need to care about the code anymore, the same way we don't need to care about how high-level language instructions work.


> Now with AI you’ll be able to not understand your code in only 2-3 days.

Rookie numbers.

With ADHD I lose all understanding of my code in 20-30 minutes.


Just a few days ago I spoke with a security guy who was telling me how frustrating it is to validate AI code.

The problem is marketing.

The cycling industry is akin to audiophiles and will swear on their lives that a $15,000 bicycle is the pinnacle of human engineering. This year's bike will go 11% faster than the previous model. But if you read the last 10 years of marketing materials and do the math, it should basically ride itself.
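Doing that math: an 11% speedup per model year, taken at face value and compounded over 10 years, would claim roughly a 2.84x faster bike:

```python
# Compound the claimed 11% yearly speedup over 10 model years.
speedup = 1.11 ** 10
print(f"{speedup:.2f}x")  # roughly 2.84x faster than the bike from a decade ago
```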

There's so much money in AI right now that you can't really expect anyone to say "well, we had hopes, but it doesn't really work the way we expected". Instead you have pitch after pitch, masses parroting CEOs, and everyone wants to get a seat on the hype train.

It's easy to debunk audiophiles or carbon enthusiasts, but it's not so easy with AI, because no one really knows how it works. OpenAI released a paper in which they stated, sorry for paraphrasing, "we did this, we did that, and we don't know why the results were different".


>Now we'll have code that the devs don't even know what it's doing internally.

I am working on a legacy project. This is already the case!


Is that a reason to start every project in the same state?


No. I am not recommending it. This is a cry for help!



