There is non-stimulant medication for ADHD as well. If you're really struggling, it might be worthwhile to suspend judgement and actually try it for a while. In the worst case you go back to how you were without medication. For many people the potential upside is worth the experiment.
Well, people who are not yet above a threshold of experience are not in a position to self-assess and course-correct if their long-term learning is being affected. And even less so if there is pressure to be hyper-productive with the help of AI.
Speculating here but I think even seniors who rely on AI all the time and enjoy the enhanced output are going to end up with impostor syndrome over the things they suspect they can no longer do without AI, and FOMO about all the projects they haven't yet attempted with AI despite working as hard as they can.
It’s particularly interesting that Anthropic came out yesterday and basically said, yeah, this stuff cannot be held right.
One can argue, convincingly perhaps, that Anthropic isn't right and/or is marketing. What they're saying could be complete BS, but the very fact that there is doubt suggests that most people believe no one exists who can hold it right.
I’m quite pro AI, but given the radical asymmetry between the upside and the downside (the upside is at best maximum bliss for all existing humans, which has a finite limit, while the downside is the end of humanity, which is essentially infinitely bad), our march forward in this area needs to be at least slightly more responsible than what we are doing now.
Many other patterns in the text follow the same shape; re-arranging them makes it more obvious:
Why do we estimate stories? Because developer time is expensive and someone has to budget for it.
Why do we prioritise features in backlogs? Because we can’t build everything and we need to choose what’s worth the cost.
Why do we agonise over whether to refactor this module or write that debug interface? Because the time spent on one thing is time not spent on another.
We have compilers: either it compiles or it doesn’t.
We have test suites: either the tests pass or they don’t.
Planning. Estimating. Feature prioritisation. Code review. Architecture review. Sprint planning. All of it is downstream of the assumption that writing code is the expensive part.
... type systems, linters, static analysis. Software gives us verification tools that most other domains lack.
Towards the end this article contradicts itself so severely I don't think a human wrote this.
But this isn’t really about AI enthusiasm or AI scepticism. It’s about industrialisation. It has happened over and over in every sector, and the pattern is always the same: the people who industrialise outcompete those who don’t. You can buy handmade pottery from Etsy, or you can buy it mass-produced from a store. Each proposition values different things. But if you’re running a business that depends on pottery, you’d better understand the economics.
So which is it?
Will an industrialised process always outcompete a pre-industrial process?
Or do they not compete at all, because they value different things?
Handmade pottery cannot compete on price with industrially made pottery, and therefore the majority of pottery is made industrially.
100% human-written code cannot compete on price with AI-assisted code, and therefore the majority of code will be written with the assistance of AI.
The aside about Etsy handmade pottery is there precisely because handmade pottery can't compete with industrially made pottery on price: it was killed in the mass-market pottery segment and had to find a tiny niche. Before industrialization, handmade pottery was mass-market pottery. It was outcompeted in the mass market and had to move into a niche.
And that part doesn't even translate to code. People are not buying lines of code, so you're not going to be buying handmade code.
Handmade pottery can offer variety (designs) not available in mass-produced pottery. When you look at software, you can't tell if it was 100% handwritten or written with the assistance of AI.
If the argument was about cost per unit output, bringing in Etsy didn't make sense at all, especially when they explicitly mention it was about valuing different things.
Handmade pottery can certainly be better quality than mass-produced pottery, just like handwritten code can be better quality than AI-assisted code. There is a spate of new macOS apps that are clearly AI-written, with memory leaks, high CPU usage, and UI that doesn't conform to macOS conventions (in one instance I'm aware of, the interface has changed completely between updates). Of course users can tell the difference.
If you're going to spend a lot of time making sure the AI-generated code is perfect, does the industrialisation analogy still hold? There's a spectrum here from vibe-coded to agentic to Copilot-level assistance to no AI assistance (which may be a little silly) of course.
This is interesting because the cost of cloning code is zero. The human written code could be cheaper than the AI one because of the cost of distribution. The same does not apply for pottery because to create/distribute an extra bowl, you need >0 resources.
My point (and the issue I have with the article) is that the quality of code (whatever that means) is not measured by the number of lines. Whether the code is generated by AI or humans, the market is not going to care. Just as it didn't care whether the code was written by someone in Silicon Valley or in the middle of East Asia.
Quality indie software in a niche that Ikea is not addressing can make a decent income unlike a lemonade stand.
And unlike at (this hypothetical) Ikea, you wouldn't have to maintain the impression of 20x AI-augmented output to avoid being fired. Well, you could still use AI as much as you want, but you wouldn't have to keep proving you're not underusing it.
> As far as I'm concerned we're content scarce and I don't care what makes the music - humans, robots, netherworld demons - I just want good music.
Presumably you've already listened to every piece of music ever recorded? Otherwise it seems it would be more efficient to do that first than wait for AI to generate it and you chancing upon it.
I think humans are machines, they are just vastly more advanced than any machine invented by humans. This is something I thought long before the current AI hype cycle.
What do you think are some important differences between machines and humans?
At an abstract enough level, not really. I treat them with care and try to give them whatever they need to do their thing. I want them to last as long as possible.
But in asking this question you must have some differences in mind. Could you speak to some of those?
What is the non machine part? What do you believe exists other than chemical and electrical systems?
Edit: If you mean machine in a more colloquial sense, that's fine. Let us first get clear whether we mean machine in that sense or in the sense of any physical mechanism.
If the question is what is there about us that's not covered by the body, we can mention things like: feelings, intentions, perceptions, acts of consciousness.
Or however else you want to divide up things that have to do with the mind.
Eliminativists/illusionists may completely deny such things. The rest can fall into many camps, some of them religious.
It's not like there are any surprising new parts. It's about how one chooses to interpret/conceive those we are familiar with.
And what part remains in that space after we have mapped all the brain signals and configurations corresponding to these feelings, intentions, and perceptions? I don't feel the need to bring up absurd, unproven concepts rather than wait for more data. It'd be like me saying there is something aphysical behind Mercury's orbital perturbations if I were born before SR and GR were discovered (as an example). No point in jumping to such an argument without first exhausting more believable causes. History is very strongly against any kind of bet on the aphysical.
My question to you would be, what do you think remains that's not a simple natural system if/after something like Neuralink is successfully established?
Forgive me if I ramble for too long. I've been seeing a lot of comments in this vein and the thoughts have accumulated.
Tacit in your question is the notion that the inquiries that are important are those that can result in predictive models of phenomena encountered in the world — hence feelings, intentions & perceptions turn into a shorthand for reported accounts of the same — and that given enough reports (data), we could build a dictionary that maps a bundle of reports to a(n equivalence class) of physical system(s).
But when we speak of having feelings, or acting on intentions, most often we are not using these as stand-ins for our failure to pin down the current state of our physical system. If I am exposed to fire, I want to get away — I am unconcerned with how well I could translate my report of the pain into a pattern of neural activations. The reality of pain for me is unaffected by the fidelity of my "experience report dictionary". And it is there whether it's a brush fire or a Neuralink streaming fire bits to my cortex.
If you decide that primacy ought to always be given to things as they can be modeled, you can choose to elevate the "experience report dictionary" and make the reality of experience a second-class citizen. Then you end up with an eliminativist ontology where indeed, we can rightly be called a mechanism.
But that is a "world-making" decision, a value judgement: "this is how things should be seen". It might be sponsored by our recent history, where we got high on the fruits of applied scientific modelling, nursed by the education which taught us that being a good engineer can have us continue in line with that, and pushed on us by impoverished modern eschatologies promising eternal youth, experience machines and what-not at this point. And it might seem preferable or more dependable than whatever equally impoverished, inhumane eschatologies we may have been presented with before.
It doesn't mean there isn't a whole world of places where we can go instead. But in general, we don't change our value judgements until the current one seems inadequate for some reason.
> If we created a molecule by molecule synthesis of a human being, you'd agree it is conscious and the same thing as a human created via typical reproduction, right?
Yes, and that was my point. If we can agree that a molecular synthesis of a human being, being a purely naturalistic physical process, is as good as any other human, then assuming some aphysical element to consciousness, we have a purely physical process for producing a system with aphysicality in it. Which means either it's not in fact aphysical, or we are left with the question of at what point during this assembly process the new special aspect arises.
It's my feeling that we are still getting ahead of ourselves in positing some supernatural element, much like the atomism question in ancient Greece. An honest thinker back then could have no really firm reason to support one side over the other, and they tended toward these kinds of endless circular metaphysical discussions. That is, until we had further data and observation tools which settled the question experimentally. Just like certain aspects of consciousness, atomism felt like an unsolvable question in some ways back then. I feel the problems we have with consciousness will eventually meet a similar fate. This bet has succeeded for millennia up to now.
Eternal youth and experience machines don't seem like problems with any conceptual difficulties. We already know electrical and chemical signals change what the brain perceives, and eternal youth is no more difficult a concept than making any other long-lasting machine. Obviously there is a long sequence of research problems to solve along the way, but none of it is conceptually impossible or blocked.
Another different question to help me understand what you think of this. I think you agree with me, but just to clarify. A human being is independent of the process of creation right? If we created a molecule by molecule synthesis of a human being, you'd agree it is conscious and the same thing as a human created via typical reproduction, right?
Oh that's what you're banging on about. You think AI is like a demon, or you think LLMs are people too, something like that, hence "I don't care what makes the music". That would otherwise be a spooky and implausible phrase that says something strange about what gives music quality, as if quality in music is something ethereal and mathematical and objective and detached from the human condition, and detached from artists. But if you think the AI counts as a person too then it seems less cold and abstract.
(Belatedly) yes. Kind of a big argument to grapple with, but let's start by considering everything. I mean, all the stuff, the abstract stuff, that's out there objectively in the universe and in the future, waiting to be discovered. I believe there's quite a considerable amount of it. It's all potentially of interest to us eventually, and only a teeny tiny part of it is comprehensible to us now. That part is at the leading edge, the cutting edge of our enquiries, and in order for us to see and comprehend and even care about that part, it has to relate to us. It has to be oriented to us and our thoughts and things we can use.
You see what I'm getting at? Humans don't really like abstract things. Mathematicians seem to, but I doubt that even mathematics truly has an objective abstract quality that's distant from human concerns. I reckon humans do human mathematics, and it probably has fashions, too, it's probably modern and current, that is, of its time and place.
So you could accept that, but still claim that music relates strongly to mathematics as we know it. Of course there's such a thing as the mathematics of music. I could dispute the value of that to the quality of the music, as being too abstract and niche compared to the evocative qualities of music, where it evokes things in our physical world: the sounds of hitting things with sticks, heartbeats, tones of voice, meaningful instruments such as bugles evoking battles, mazy noodling around evoking contemplative thoughts (is that abstract?) ... but either way, the point is that we live in a sort of parochial Bag End, if Middle Earth represents everything abstractly possible, and so we only understand hobbit things and only appreciate hobbit art. So to speak.
My experience of the book was similar: the first third was great. Great idea, brilliantly executed. Definitely worth it for the first third alone.
In a way, maybe it going off-piste is coherent with the idea of the first third. I'm sure this was not the author's intent, but fun from an ironic perspective.
It certainly invites that level of meta-commentary on its own structure, though I agree it's inadvertent. And I know at some point someone is going to invoke that point in full sincerity as if it's an answer; whatever that is (the satisfied meta-commentary that makes too much of irony, as if it's a sincere insight) looms as a possible and frustratingly shallow justification of the book. There's an interesting question there about the scales of abstraction at which anti-memes could function, and that's fascinating, but as you noted, in this instance not necessarily intentional.
It makes me think of the movie Doubt, where I remember being sincerely confused about the central accusation of the movie (though retrospectively it's obvious, and I knew it was at least one possibility, but I wasn't sure if there was perhaps a different interpretation), and was told that not being sure was the point, and that by expecting an answer I was missing the point, since the whole movie is about "doubt". I felt this explanation was, frankly, just stupid. Just because you're going meta doesn't mean any point coherently registered in the form of meta-analysis is insightful. But anyway, I'm off the rails a bit now, going after imaginary adversaries, but I agree with everything you've said.
That was very well articulated. I'm going to hold on to your point about trivial meta-analyses masquerading as serious ones, sadly a very common type of gotcha in tech-aligned circles.
Thanks! I was mostly building off of your point I think. You have to imagine there's a short term or phrase for this. The overestimation of the value of going meta, or treating the move of going meta like it's a skeleton key when it's actually frivolous.
>> you will not have that delightful experience of encountering something unexpected along the way to filling it.
> There's nothing stopping you from doing that with an LLM.
There may be, though. The LLM's initial output may anchor your thinking in insidious ways that may not be obvious at all especially since you're feeling productive. I bet the lack of confidence around starting would also increase over time every time you use an LLM to get over the hump.
I'm not talking about using a default mode LLM with LinkedIn Standard Obsequious Bullshit as a conversational imperative that emerges from simple prompts interacting with the heaviest weights. It pushes back because I told it to and it has redirects around common LLM failure modes, and modes unique to how I use them. That's in a set of instructions I've had a bunch of different models tear apart so I could put it back together better.
I treat it and describe it as a language coprocessor, not a buddy. The instructions are the kernel I boot it with.
Yeah, precisely. My "Bobby" knows my voice, but is not me, and is bad at using it. It is aware of all the tropes, and I've built a writing skill that describes, in great detail, how I write. I have also set it up to challenge me, not make me feel good.
Moreover, it's not like I spend my entire writing time arguing with an LLM, lol. I spend more time writing myself and/or doing research on the internet without an LLM, because sometimes they still get things wrong.
My experience is the same. There are modest gains compensating for a lack of good documentation and the like, but the human bottlenecks in the process aren't useless bureaucracy. Deciding whether a feature, or a particular UX implementation of it, makes sense: these things can't be skipped, sped up, or handed off to any AI.