No killer product? Coding assistants and LLMs in general are the single most awe-inspiring achievement of humanity in my lifetime, technological or otherwise. They've already massively improved my life and others', and they're only going to get better. If pre- and post-Industrial Revolution used to be the major binary delineation of our history, I'm fairly confident it will soon be seen as pre- and post-AI instead.
I know, right? 8-year-old me dreamed of being able to articulate software to a computer without having to write code. It (along with the original Stable Diffusion) is definitely one of the coolest inventions to come along in my lifetime.
Coding assistants are currently quite hard to run locally with anything like SOTA abilities. Support in the most popular local inference frameworks is still extremely half-baked (e.g. no seamless offload for larger-than-RAM models; no support for tensor-parallel inference across multiple GPUs or multiple interconnected machines), and until that reliably improves, it's hard to justify spending money on uber-expensive hardware one might be unable to use effectively.
GPU and RAM prices have definitely not made consumer PCs cheaper than they were before Bitcoin blew up, or before AI blew up.
Maybe you could make an argument that they are more cost-efficient at a given price point... but that's not the same as cheaper when every application or program is poorly optimized. For example, why would a browser take up more than a GB or two of RAM?
And I'd postulate that R&D to develop localized AI is another example: the big players seem hellbent that there needs to be a moat, and that it's data centers... the absolute opposite of optimization.
We've had RAM shocks before. We nerds can't control Wall Street or the Virginians who like to break the world every so often for the lulz. However, a wobble on the curve doesn't change the curve's destination.
I've also been using the LLM in PostHog and it has been impressive. I need to check if I can also plug an MCP/Skill into my actual Claude Code so that I can cross-reference the data with my other data sources (Stripe, local database, access logs, etc.) for in-depth analysis.
This might be up your alley - have Posthog and a ton of other SaaS tools connected so you can run analysis across quant/qualitative data sources: https://dialog.tools
> Coding assistants and LLM's in general are the single most awe-inspiring achievement of humanity in my lifetime
Landing a man on the moon is way more impressive. Finding several vaccines for a once-in-a-century pandemic within a year of its outbreak is an achievement that in its impact and importance dwarfs what the entire LLM industry put together has achieved. The near-complete eradication of polio is, once again, way more important and impactful.
Those are all good things, but with the current AI boom we've invented something with the potential to invent those kinds of things on its own, if not now then in the near future. It's far more important and impactful to invent a digital mind that can invent an arbitrary number of vaccines than to just invent one vaccine, no matter how hard it was to invent the vaccine by hand.
Contingency plan? Just code without it like before. AI could disappear today and I would be very disappointed but it's not like I forgot how to code without it. If anything, I think it's made me a better programmer by taking friction away from the execution phase and giving me more mental space to think in the abstract at times, and that benefit has certainly carried over to my work where we still don't have copilot approved yet.
This entirely misses the point. Re-implementing code based on API surface and compatibility is established fair use if done properly (Compaq's clean-room BIOS reimplementation, Google v. Oracle). There's nothing wrong with doing that if you don't like a license. What's in question is doing this with AI that may or may not have been trained on the source. In the instance in the article where the result is very different, it's probably in the clear regardless. I'm sympathetic to the author, as I generally don't like the GPL either, outside specific cases where it works well like the Linux kernel.
The real test would be to see how much of the generated code is similar to the old code. Because then it is still a copyright violation. Just because you drew Mickey Mouse from memory doesn't absolve you if it looks close enough to the original Mickey Mouse.
That is, I believe, woefully inadequate. There are several levels of code similarity:
Level 0: the code is just copied
Level 1: the code only has whitespace altered, so the AST is the same
Level 2: the code has minor refactoring such as changed variable names and function names (in a compiled language the object code would be highly similar; this can easily be detected by tools like https://github.com/jplag/JPlag)
Level 3: the code has had significant refactoring such as moving functionality around, manually extracting code to new functions and manually inlining functions
Level 4: the code does the same conceptual steps as the old code but with different internal architecture
At least in the United States you have to reach Level 4, because concepts alone are not copyrightable. And I believe chardet has indeed reached Level 4 in this rewrite.
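To make the lower levels concrete, here's a minimal sketch (not JPlag itself; `normalized_tokens` and `similarity` are hypothetical helper names) of the kind of check a token-based plagiarism detector performs: erase identifiers before comparing, so Level 2 renaming can't hide a copy.

```python
import ast
import difflib

def normalized_tokens(source: str) -> list[str]:
    """Walk the AST and replace every identifier with a placeholder,
    keeping the structure (node types, constants) intact."""
    tokens = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name):
            tokens.append("ID")              # variable/function reference erased
        elif isinstance(node, ast.Constant):
            tokens.append(repr(node.value))  # literals kept, since they carry structure
        else:
            tokens.append(type(node).__name__)
    return tokens

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; renaming alone cannot lower it."""
    return difflib.SequenceMatcher(
        None, normalized_tokens(a), normalized_tokens(b)
    ).ratio()

original = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s\n"
renamed  = "def accumulate(values):\n    acc = 0\n    for v in values:\n        acc += v\n    return acc\n"

print(similarity(original, renamed))  # 1.0: identical once names are erased
```

Reaching Level 3 or 4 changes the token sequence itself, which is why structural rewrites (rather than renames) are what actually move the similarity score.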
I love KDE, especially since the Plasma 6 release, but oh man is the Settings program poorly designed and littered with settings 99% of users will never need.
So many options placed seemingly at random. Similar options like the lock screen, login screen, and desktop background settings are spread out over three different main categories.
Customization options so extensive and granular one can only wonder about their purpose. Even in their latest release blog post they chose to brag about the new ability to change intensity/thickness of frames. I don't think most people care about stuff like this.
Until recently, defaults were straight-up insane, like single-click to open folders/launch programs, touchpad scrolling being inverted, etc.
If you navigate to Settings -> Sound you'll be presented with some options, but also buttons in the top right that open a mostly empty screen with a few additional options. Why not split the whole page into parts and present everything on a single screen? Why not tabs?
Sometimes those buttons in the top right have different behavior. Some will open a whole new page, sometimes it's just a popup, and other times it's a dropdown.
And oh man, just navigating Settings sucks. The main list mixes single-level and two-level options, with two-level options opening another, mostly empty vertical pane, so the actual size of the right pane changes and the top text jumps around depending on what you press. So why do some settings have two levels, some have tabs, and some have those janky top-right buttons that need their own back button to appear in the interface whenever they're pressed? I'm not for or against any of those design choices, but why all of them at random? I just want some goddamn consistency.
Cherry on top is the bloat most distros choose to install alongside the Plasma desktop. Dragon Player? KMail? Does anyone even use these? I dislike GNOME a lot, but at least their preinstalled software is minimal, elegant, and actively supported/developed. Most KDE programs look like they stopped receiving updates in 2008.
I still think it's a great DE but there's much room for improvement.
Can only speak for myself but the problem is that with KDE there's always stuff I need to go in and change because I don't like the defaults, and then I fall into a rabbit hole of endless tweaking from which it's difficult to escape because no matter how much time I spend I can never get it to be just right.
Funny, I feel the same about GNOME. I haven't played with others enough to comment, I suppose, but all are missing some basic creature-comfort stuff like a full TCP/IP config dialog or a really fluid, working app store out of the box. Distros add these, but what is going on here?
The thing with GNOME is having to stack a bunch of extensions (most of which will only somewhat meet your needs) to get desired features, half of which will break periodically because there’s no stable extension API.
GNOME and KDE sit on extreme opposite ends of the minimalist/maximalist spectrum.
It's a quality issue in my experience. Nobody ever bothered with polishing the defaults, and the "option bombardment" is really bad, incoherent design rather than simply having too many things.
I remember spending hours customising the KDE 5 taskbar clock, trying to correct the padding. Eventually I gave up customising it and switched to GNOME.
KDE app customisation is also a mess compared to something like foobar2000.
The defaults have been polished more times than I can count and virtually every KDE release changes some defaults to be more user friendly. It's been getting better for a long time.
The wealth of things in the KDE settings are things people will likely never change, or things that can be tweaked but don't necessarily need to be. For comparison, look at GNOME's settings app. It has menus and options for all the things the average user needs (network settings, mouse and display options, etc.) but leaves out things that people need to change for specific workflows (like the option to have focus follow the mouse). A settings app should let the user set the things needed for the computer to work properly while separating deeper-level customization for those who want it.
I think emacs does a very good job at this. You can configure most of the settings people need to be productive in a text editor from the menu bar, while leaving the extremely rich customization of emacs to the options menu and elisp config files.
The better the code is, the less detailed a mental map is required. It's a bad sign if you need too much deep knowledge of multiple subsystems and their implementation details to fix one bug without breaking everything. Conversely, if drive-by contributors can quickly figure out a bug they're facing and write a fix by only examining the place it happens with minimal global context, you've succeeded at keeping your code loosely-coupled with clear naming and minimal surprises.
I think there has always been some truth to that, long before AI. Being driven to get up and just do the thing is the most important factor in getting things done. Expertise and competency are force multipliers, but you can pick those up along the way - I think people who prefer to front-load a lot of theory find this distasteful, sometimes even ego-threatening, but it's held true in my observations across my career.
Yes, sometimes people who barrel forward can create a mess, and there are places where careful deliberation and planning really pay off, but in most cases, my observation has been that the "do-ers" produce a lot of good work, letting the structure of the problem space reveal itself as they go along and adapting as needed, without getting hung up on academic purity or aesthetically perfect code; in contrast, some others can fall into pathological over-thinking and over-planning, slowing down the team with nitpicks that don't ultimately matter, demanding to know what your contingencies are for x y z and w without accepting "I'll figure it out when or if any of those actually happen" - meanwhile their own output is much slower, and while it may be more likely to work according to their own plan the first time without bugs, it wasn't worth the extra time compared to the first approach. It's premature optimization but applied to the whole development process instead of just a piece of code.
I think the over-thinkers are more prone to shun AI because they can't be sure that every line of code was done exactly how they would do it, and they see (perhaps an unwarranted) value in everything being structured according to a perfect human-approved plan and within their full understanding; I do plan out the important parts of my architecture to a degree before starting, and that's a large part of my job as a lead/architect, but overall I find the most value in the do-er approach I described, which AI is fantastic at helping iterate on. I don't feel like I'm committing some philosophical sin when it makes some module as a blackbox and it works without me carefully combing through it - the important part is that it works without blowing up resource usage and I can move on to the next thing.
The way the interviewed person described fast iteration with feedback has always been how I learned best - I had a lot of fun and foundational learning playing with the (then-brand-new) HTML5 stuff like making games on canvas elements and using 3D rendering libraries. And this results in a lot of learning by osmosis, and I can confirm that's also the case using AI to iterate on something you're unfamiliar with - shaders in my example very recently. Starting off with a fully working shader that did most of the cool things I wanted it to do, generated by a prompt, was super cool and motivating to me - and then as I iterated on it and incorporated different things into it, with or without the AI, I learned a lot about shaders.
Overall, I don't think the author's appraisal is entirely wrong, but the result isn't necessarily a bad thing - motivation to accomplish things has always been the most important factor, and now other factors are somewhat diminished while the motivation factor is amplified. Intelligence and expertise can't be discounted, but the importance of front-loading them can easily be overstated.
>Is this supposed to be an implicit dig at audiobooks? The scientific consensus seems to be that there's no difference to comprehension or retention
I wouldn't trust that "scientific consensus" if my life depended on it.
For starters, there's no scientific consensus.
The linked post refers to merely two studies, both of doubtful quality. And one says "it's no different"; the other says it's worse.
The one that says "it's no different" asked them to read/listen to a mere two chapters totaling ~3,000 words.
That's a Substack essay or New Yorker article length, not a book, and only of one text type (a non-fiction historical account; how does it translate to literature, technical, theoretical, philosophical writing, and so on?). The test to check retention was multiple choice, not qualitative comprehension. And there are several other issues besides.
And on the other study in the post, the audio group performed much worse.
The medium feels wholly immaterial in this case. The words reach your brain, and then it's up to you to think about them, imagine the scene, process ideas. Audiobooks let the narrator add inflection, which maybe takes a slight load off you, but I don't see the big deal. I've read lots of fiction, and listened to a lot on road trips, and I don't feel like my comprehension suffered in either case compared to the other. The important thing is you can have the same level of conversation about the material - I don't believe all this woo about reading being the only pure and intellectual way to process information.
Well, we don’t say that “seeing” a theater play is the same as “reading” a theater play - regardless of comprehension or retention - so why should we say that “listening” to a book is the same as “reading” a book?
Drawing these distinctions is complicated by multi-modal consumption. As an avid lifelong reader (nearly a book per week for about 50 years) I greatly enjoy reading on my kindle and seamlessly switching to listening while driving or doing the dishes. With most books these days it's probably 80% reading -- but in the past, when I had a long commute, it was closer to 50/50. When discussing a given book with others, it's practically irrelevant whether I read or listened to the audiobook narration.
As for theater plays, attending a live performance with actors is fundamentally different from reading the script.
I think GP is making a subtler point, not that listening to audio books is worse than reading books with your eyes, but that it's telling that people who listen to audio books themselves go out of their way to emphasize that it's equivalent to reading, thus betraying that in their own value system they put a higher value on (actual) reading.
Better at reading yes but not necessarily better at comprehension which is what I believe people are getting at in these discussions. I read and listen. Initially my comprehension and memory while listening was inferior, but you can learn the skill of deep concentration on audio (or some may have it natively).
I'm pretty sure it will vary a LOT from person to person... I remember what I see very well; what I hear, not nearly as much. I say this because when I was commuting I'd listen to a lot of audiobooks and podcasts... I didn't retain much at all. But I can skim a written article and retain a lot more. Further still, if I literally copy something I see by writing it down, it's hard for me not to remember it. That last bit got me through high school, as I never did any homework but always aced tests.
Everyone is definitely different in terms of how they learn best. That's not to say that listening to non-fiction is or isn't better for oneself than nothing, and even different forms of music may affect people differently. There's nothing wrong with entertainment or factual knowledge... (See "Fat Electrician" on YouTube/Pepperbox for a lot of both.)
I mean, no one is listening to an audiobook of An Eternal Golden Braid - even if one existed, it couldn't lead to an equivalent outcome compared to reading it. Let's not even get started on the impact on literary devices like wordplay and neologisms.
There doesn't need to be an implicit dig; audiobooks are explicitly a different medium, and in the Marshall McLuhan sense obviously thus impact comprehension, retention, and the overall grok.
This sounds like the kind of low-thought pattern-based repetitive task where you could tell an LLM to do it and almost certainly expect a fully correct result (and for it to find some bugs along the way), especially if there's some test coverage for it to verify itself against. If you're skeptical, you could tell it to do it on some files you've already converted by hand and compare the results. This kind of thing was a slam dunk for an LLM even a year or two ago.
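The spot-check itself is trivial to script. A hedged sketch (the inlined strings are stand-ins for your hand-converted file and the LLM's output; in practice you'd read both from disk):

```python
# Compare an LLM's conversion of a file against one you converted by hand.
# Identical output prints "match"; any divergence prints a unified diff.
import difflib

hand_converted = "export const limit = 10;\n"  # stand-in for your manual conversion
llm_converted = "export const limit = 10;\n"   # stand-in for the LLM's output

diff = list(difflib.unified_diff(
    hand_converted.splitlines(keepends=True),
    llm_converted.splitlines(keepends=True),
    fromfile="by_hand", tofile="by_llm",
))
print("match" if not diff else "".join(diff))
```

Run this over a handful of files you already converted manually; if the diffs are empty or only cosmetic, that's decent evidence the LLM can handle the rest of the batch.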
> It seems a lot of large AI models basically just copy the training data and add slight modifications
This happens even to human artists who aren't trying to plagiarize - for example, guitarists often come up with a riff that turns out to be very close to one they heard years ago, even if it feels original to them in the moment.