As a security researcher, I am both salivating at the potential that the proliferation of TDD and other AI-centric "development" brings for me, and scared for IT at the same time.
Before we just had code that devs don't know how to build securely.
Now we'll have code that the devs don't even know what it's doing internally.
Someone found a critical RCE in your code? Good luck learning your own codebase starting now!
"Oh, but we'll just ask AI to write it again, and the code will (maybe) be different enough that the exact same vuln won't work anymore!" <- some person who is going to be updating their resume soon.
I'm going to repurpose the term, and start calling AI-coding "de-dev".
> Now we'll have code that the devs don't even know what it's doing internally.
I think that has already been true for some time for large projects continuously updated over many years, with lots of developers entering and leaving the project because nobody who has a choice wants to do that demoralizing job for long. (I was one of them in the 1990s; the job was later given to an Indian H1B who could not easily switch to something better, not before putting in a few years of torture to get a better resume, and possibly a green card.)
Most famous post here, but I would like to see what e.g. Microsoft's devs would have to say, or Adobe's:
Such code has long been held together by extensive test suites rather than by intimate knowledge of how it all works.
The task of the individual developer is to close bug tickets and add features, not to produce an optimal solution, or even to refactor. They gave up on that long ago as taking too long.
That's the reality of software development at scale: pretty soon no individual knows how everything works, and you need high-level architecture overviews on the one hand, and strict procedures, standards, tools, test suites etc. on the other, to make sure things keep working.
But the reality is that most of us will never work on anything that big. I think the biggest thing I've worked on was in the 500K LOC range, tops.
As the OP outlined, 10x is commonplace now; whereas my best day pre-AI may have been 500 LOC, now 5K LOC per day is routine. So a few months on a solo project has produced ~500K lines of code.
The code base is disproportionately testing automation, telemetry and monitoring systems, but it's a lot of code nonetheless ;) So even solo/small-team projects depend on architecture, procedures, test suites etc. over knowing every line of code.
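As a rough sanity check of the numbers above (my own back-of-envelope arithmetic, not the commenter's exact figures, and the 100 working days for "a few months" is my assumption):

```python
# Back-of-envelope check: does 5K LOC/day really add up to ~500K LOC
# in "a few months"? Assumes ~100 weekdays, i.e. roughly five months.
loc_per_day = 5_000
working_days = 100
total_loc = loc_per_day * working_days
print(total_loc)  # 500000
```

So the ~500K figure is internally consistent at that pace, even before counting any pre-AI hand-written code.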
In my opinion, AI-coding is basically gambling. The odds of getting a usable output are way better than piping from /dev/urandom, but ultimately it's still probabilistic whether what you want is in fact what you get. Pay for some tokens, pull the lever, and hopefully your RCE goes away.
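To make the /dev/urandom baseline concrete, here's a toy Python sketch (my own illustration, not from the comment) of how unlikely raw random bytes are to even look like code, never mind be correct:

```python
import os

# Chance that one uniformly random byte is printable ASCII (0x20-0x7E):
p_printable = 95 / 256
# Chance that an entire 60-character line is printable, let alone parses:
p_line = p_printable ** 60  # on the order of 1e-26

sample = os.urandom(16)  # same entropy source as /dev/urandom
print(f"printable-line odds: {p_line:.1e}")
```

An LLM's output distribution is enormously better than that baseline, which is exactly the point: the improvement is in the odds, not in any guarantee.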
People post comments like this hoping for the dopamine hit of creating a "gotcha" moment. The problem, however, is that these comments are insulting, reductive, and just a straight-up lie.
IMHO we are in the process of moving up to a higher level of abstraction, so in the future maybe we won't need to care about the code anymore, the same way we don't need to care about how high-level language instructions work under the hood.
Just a few days ago I spoke with a security guy who was telling me how frustrating it is to validate AI code.
The problem is marketing.
The cycling industry is akin to audiophiles and will swear on their lives that a $15,000 bicycle is the pinnacle of human engineering. This year's bike will go 11% faster than the previous model. But if you read the last 10 years of marketing materials and do the math, it should basically ride itself.
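Taking the marketing at face value (my own arithmetic, not the commenter's), compounding "11% faster" over 10 model years gives:

```python
# Compound the yearly "11% faster" marketing claim over 10 model years.
claimed_speedup = 1.11 ** 10
print(round(claimed_speedup, 2))  # ~2.84x the speed of the decade-old bike
```

Nearly triple the speed of the same rider on a 10-year-old frame, which is obviously absurd; the same compounding test applies to year-over-year AI productivity claims.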
There's so much money in AI right now that you can't really expect anyone to say "well, we had hopes, but it doesn't really work the way we expected". Instead you have pitch after pitch, masses parroting CEOs, and everyone wants to get a seat on the hype train.
It's easy to debunk audiophiles or carbon enthusiasts, but it's not so easy with AI, because no one really knows how it works. OpenAI released a paper in which they stated, sorry for paraphrasing, "we did this, we did that, and we don't know why the results were different".