nilirl's comments | Hacker News

I finished writing this book 2 weeks ago.

Copywriting after AI

https://www.nair.sh/books/copywriting-after-ai

It's 88 pages of me describing my mental models for marketing, the ones I think still hold true even after the introduction of AI.


How to use a spreadsheet for creativity

https://www.nair.sh/guides-and-opinions/communicating-your-e...

I finished writing that over the weekend.

I talk about combinatorial creativity as a way to be creative under time pressure. I had fun writing it, it'd been on my mind for weeks.


Bottleneck for what? More features?

I don't think the amount of software is what determines whether a company does well.

I don't think capturing quantity of context is that important either.

Now, quality of context. How well do the humans reason?

Then, attitude. How well do the humans respond to bad situations?

Then, resource management. How well does the company treat people and money?

Finally, luck. How much of the uncontrollables are in our favor?

Those are pretty good bottlenecks for a company. I doubt an agent is fixing any of those. At least any time soon.


For business, software applications are tools that facilitate "the thing" that generates money. (We in the software world think that _thing_ is software and software _features_, but outside that world, there's usually a different _thing_.)

The bottleneck for making software applications better at being used by (non-software) businesses is making sure the software does all the software things that actually benefit the business. Save time. Make humans more productive. Reduce human error. Make the business more efficient. Increase profit margins.

All of those things are a bit difficult to predict and quantify. You start with ideas of what might help the business, you maybe design, prototype, trial. Ultimately you build or enhance software applications, and try to measure how well they're making the business better.

In all of this, making sure software is addressing the right problem in the right way, and ultimately making the business better - that's a hard problem! Regardless of how fast and easy it is to make software.

But yes, the speed can really help. You can prototype and trial and improve the feedback loop.


> But yes, the speed can really help. You can prototype and trial and improve the feedback loop.

Based on what I’ve seen, prototyping has always been easy. You don’t even have to build software for the first iteration. For UI work you can use a wire-framing tool.

What has happened is that we abandoned the faster iteration methods (design thinking, quick demos, UX research, …) and have gone all in on building the first idea that comes along and foisting it on the users. That process is very slow and more often goes wrong.


Dealing with this exact issue now. 'Design' has become a dirty word to some folks - so much so that they've come up with new 'ideas' like 'simulations' to get feedback from users...

Hmm not agile or waterfall....

Tsunami?


Software solves problems, not problems like "What is beauty?" Because that would fall within the purview of your conundrums of philosophy. Software solves practical problems, for instance: how am I going to stop some mean mother Hubbard from coding me a structurally superfluous backend? The answer, use an LLM, and if that don't work... Use more LLM.

> Bottleneck for what? More features?

Code changes. Not necessarily features, but also bug fixes, plain old maintenance, and even refactoring to improve testability.

With AI coding assistants, what in the past were considered junior dev tasks are now implemented with a quick prompt and an agent working in the background.

These junior dev tasks are now effortlessly delivered by coding assistants, with barely any human intervention. Backlogs are cleared faster than new items are added. And new items are added more and more because capacity to clear them is no longer an issue. The challenge is now keeping up with the volume of changes. I see this first-hand at my org.

> Those are pretty good bottlenecks for a company. I doubt an agent is fixing any of those. At least any time soon.

Just because you can think of other bottlenecks, that doesn't mean generating code was not a bottleneck, or that it isn't the bottleneck today. The mere notion of a backlog demonstrates that it is.


I was not merely stating other bottlenecks. I'm saying they're more important bottlenecks.

They can't all be equally important bottlenecks; a bottleneck is, by definition, the single component or sub-system most limiting the system's output.

What are we trying to output from our businesses? Code?

What is this magical context floating around every business that will unlock AI agents to produce ... what?

[Edit] I apologize for my tone. You're right, dealing with the speed of code generation is an unprecedented problem. I was making the argument that it's not the most important to the business and that rate of code change is very rarely the top concern. But that does not mean it's not the most important problem for someone. For the developers dealing with the system, it is.


> I was not merely stating other bottlenecks. I'm saying they're more important bottlenecks.

This is a pointless statement though. The fact that writing code is a bottleneck, and a critical one, doesn't mean it's the only thing standing between us and fixing/implementing something.

It's like downplaying the time international flights take because people also spend time going through security.

The truth of the matter is that the entire software development process is built around how slowly code gets written. Now code is ceasing to be the bottleneck, and the current software dev process is starting to look inadequate.


> Now code is ceasing to become a bottleneck and the current software dev process starts to emerge as inadequate

If you say so. It's a honeymoon period for executives, and a lot of them have already had cold showers. It's just a very uncomfortable topic, but it is already happening. Maybe not in your org, but I am seeing it already.

I am not against the new reality even if it were to solidify (which it will not; productivity gains as sold are illusory and plateau VERY quickly; we're talking days -- and contrary to what many on HN believe, not all work is pitch decks and rapid prototypes).

I really like the acceleration and removal of dumb grunt work. I love it. But the current LLMs, even Codex and Opus, _are_ doing dumb stuff even on max effort still. People get hyped up and overshoot as they always do.

It must be said that I've used Opus for some pretty high-level and high-quality architectural work and it did very well -- but it took a lot of effort and steering, leaving me questioning whether I couldn't have done the planning and scoping better and quicker myself.

Frontier LLMs do very well but people give them too much credit IMO.


No, it is not pointless to say other things are more important. An order of importance is how you prioritize things.

You're asserting that it is critical. Why is code change speed critical?

What is your analogy getting at? It evidently does not map to my argument.

You're asserting that slow code writing is what shaped our development process. Why? It used to be correctness and appropriateness. How is it suddenly speed?

Bottleneck for what?


> Backlogs are cleared faster than new items are added

Totally depends on what kind of product and codebase.

Last time I checked, the number of open issues in Claude Code repo has increased.

And I have seen tons of tickets that are open for years. Not because it's technically hard or anything. An intern can do that. Those tickets are not closed because nobody wants to deal with what comes after it.


> Last time I checked, the number of open issues in Claude Code repo has increased.

The Claude Code repo features bug reports that are a mishmash of complaints about prompt output, backend responses, documentation updates, browser extensions, etc.

Still, during the last week the repository reports ~2k closed issues vs ~1.3k new issues.

https://github.com/anthropics/claude-code/pulse


I'm afraid it'll lead to a weird music-ification of content.

Music can make you feel good and hold your attention purely by engaging our pattern recognition.

AI videos and photos seem to have a similar effect. Even if it's not real, they encode enough patterns from good human work to be able to engage our attention.

Just providing people with an attentional escape is valuable on the internet.


I feel that pressure of not knowing how to definitively compete on the internet, especially when there's so much AI created noise.

I'm a copywriter and I used to get hired to write posts on behalf of founders on LinkedIn or for their company blog.

Now, the last three jobs I had were all focused on sending cold email.


How does a human designer even compete? I just looked at all the demos and they look beautiful.

I hand designed my site https://www.nair.sh/ and it feels like it doesn't even compare.

Sure, there's some judgment as to what design is appropriate in a given situation, but it just feels so much harder for a human's design to feel valuable now.


Are you a designer? Everything AI does looks impressive if you are not familiar with it.

You're right in that our expertise can see how this was not generated with the same kind of thoughtfulness that we might apply.

But you're wrong in implying (if you are) that it's not valuable to be impressive to a non-expert.


Or isn't it? You are one step away from deploying superficially impressive things, without understanding what is lacking.

Yes, lacking for the expert.

To the non-expert, probably acceptable, even impressive.


That is precisely my point. The non-expert won't know what is missing and will be impressed, and there might be a price to pay. How would you like to trust your data to my vibe coded database, safety to my vibe coded mechanical designs, and health to my vibed up diagnosis?

What I'd said: it just feels like so much harder for a human's design to feel valuable now

I'm talking about competition; being valuable within a market; being seen as useful by others.

Maybe my focus on competition wasn't well communicated but you're making a precise but irrelevant point about personal integrity.


Maybe I misunderstood. I agree that not all buyers may appreciate the difference, and experts should educate them. Sometimes the price of their ignorance will educate them too.

I feel the designs they present are actually quite bad. Like... they are an anti-ad for this product. Just random fonts, bold, italics, underlines. Bad contrast, skinny small fonts.

Your site is actually really nice except the red color burns into my retina, so that's the only thing I would change about it (change your --primary to something more like #7c2c3e)


We are soon going to converge on all websites looking exactly the same, we’re almost there really

It’s just the same sterile template used for everything. Yeah, it looks good the first time you see it. But the 100th? It starts to look like noise.


Originality. The same as with art. Art and design are more than just a means to satisfy a need. They are an opportunity to explore, to question. When Georges Seurat developed pointillism, he wasn't trying to compete with the people who could imitate Raphael. He created his own direction.

Yes but you're talking about groundbreaking work.

There's so much joy to be found in regular human creating and sharing.

The creating part still remains because it's intrinsic but the sharing part feels discouraging now.

Regular, non-groundbreaking creative work seems ... less worthy of sharing?


> The creating part still remains because it's intrinsic but the sharing part feels discouraging now.

Why? Is a chair that you made with your own hands not as valuable to you because somebody else got one from Ikea? Would you not show it to your friends for this reason?


Why would you pick an example that does not have AI as a competitor?

If people could generate an infinite variety of chairs in a few seconds, then yes, my sharing would be discouraged.


I can't draw. I'm learning to draw. I really don't give a flying toss if AI can generate pencil drawings of my loved ones. They'll know I made the effort myself.

Why would you use an example that limits sharing to loved ones?

Your point is thin.


Are you really that desperate for approval from the anonymous masses?

How do human artists compete with AI-gen images?

Yes, we're building a dystopia where AIs do the work humans enjoy, and humans get to hold on to drudgery. What's not to like?

Nothing stops humans from doing what they enjoy

The amount of shit that I need to deal with so that I can pay the bills has gone up markedly, and promises to continue going up. The things the tech industry are building are making things worse for me. Please stop making my life worse.

Your point? It's an analogous problem.

I love writing but even there I have to work doubly hard to make sure I'm doing something valuable.

My point is that the space within which human creators can distinguish themselves is diminishing rapidly.


One thing I've struggled with before is building a collection of data models based off of a collection of PDF forms.

I wanted to abstract away the PDF form, building my own HTML form on top of a data model that can later be used to programmatically fill the PDF.

Since I had 100s of PDFs, I wanted an OCR+LLM pipeline to build a data model for each PDF. Unfortunately, OCR + LLM works ~90% of the time but sometimes fields are missed or mislabeled in the data model.

Does this sometimes get it wrong during programmatic filling? How do you deal with that?
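One way to catch the ~10% failure mode described above is to cross-check the LLM-produced data model against the field names actually present in the PDF's form dictionary (readable with a library like pypdf). A minimal stdlib sketch of that validation step; the field lists here are hypothetical examples, not from any real pipeline:

```python
import difflib

def validate_data_model(llm_fields, pdf_fields, cutoff=0.8):
    """Compare field names an OCR+LLM pass produced against the ground-truth
    field names read from the PDF's form dictionary. Returns fields the model
    missed entirely and names that look mislabeled (close but not exact)."""
    missed, mislabeled = [], {}
    for name in pdf_fields:
        if name in llm_fields:
            continue
        # A close-but-wrong name is likely a mislabel, not an omission.
        close = difflib.get_close_matches(name, llm_fields, n=1, cutoff=cutoff)
        if close:
            mislabeled[close[0]] = name
        else:
            missed.append(name)
    return missed, mislabeled

# Hypothetical example: the LLM garbled "last_name" and missed "date_of_birth".
pdf_fields = ["first_name", "last_name", "date_of_birth"]
llm_fields = ["first_name", "lastname", "address"]
print(validate_data_model(llm_fields, pdf_fields))
# → (['date_of_birth'], {'lastname': 'last_name'})
```

Flagged fields can then be routed to a human review queue instead of silently producing wrong programmatic fills.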


This was surprisingly hard to read. Not in terms of sentence structure but in terms of coherence and meaning.

Also the book is $60 on Kindle and $80 for paperback? Who's the target audience?


CS fundamentals is about framing an information problem to be solvable.

That'll always be useful.

What's less useful, and what's changed in my own behavior, is that I no longer read tool-specific books. I used to devour books from Manning, O'Reilly, etc. I haven't read a single one since LLMs took off.


Huh? The point of the article is that we should use git to store an LLMs output as it works?

How do any of the quotes and citations used coherently form that argument?

What is this writing style? Why does it feel like it doesn't want me to understand what the heck it's saying?


The point of the argument is that meaning emerges in conversation. A session between human and AI is a conversation.

Current AI storage paradigms offer lateral memory across the time axis. What exists around me?

A git branch is longitudinal memory across the time axis. What exists behind me?

Persist type-checked decision trees within it. Your git history just became a tamper-proof, reproducible O(1) decision tree. Execution becomes a tree walk.

It works, though it's not production ready yet.
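Taken concretely, "longitudinal memory in a git branch" could just mean persisting each decision as a commit and replaying the branch in order. A minimal, hypothetical Python sketch using plain subprocess calls to git (all function names here are mine, not from the article):

```python
import json
import subprocess
import tempfile

# Inline identity config so the sketch runs in a bare environment.
GIT = ["git", "-c", "user.name=demo", "-c", "user.email=demo@example.com"]

def run(args, cwd):
    return subprocess.run(GIT + args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

def record_decision(repo, decision):
    # Each decision becomes an empty commit whose message carries the payload.
    run(["commit", "--allow-empty", "-m", json.dumps(decision)], cwd=repo)

def replay_decisions(repo):
    # Walk the branch oldest-first: the "longitudinal memory" tree walk.
    log = run(["log", "--reverse", "--format=%s"], cwd=repo)
    return [json.loads(line) for line in log.splitlines() if line]

repo = tempfile.mkdtemp()
run(["init", "-q"], cwd=repo)
record_decision(repo, {"step": 1, "choice": "use sqlite"})
record_decision(repo, {"step": 2, "choice": "add cache"})
print(replay_decisions(repo))
```

Commit hashes give the tamper evidence; whether this beats an ordinary append-only log is a separate question.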

