For things like drawing (Procreate and co), editing images and even videos on the go, using it with a MIDI keyboard and AU plugins for gigs, reading ebooks, watching a movie in bed, etc., it's way better than both the Mac and the iPhone.
Paired with a BT keyboard, for niche stuff like focus writing apps (closer to fancy typewriter with no distractions than a full laptop or phone) it's also great.
They've used AI themselves to speedrun the company's turn from customer goodwill/"Don't be Evil" to full GoDaddy/Adobe-level scumbags. Companies usually wait several years after the IPO for this.
>I write detailed specs. Multifile with example code. In markdown.
Then hand over to Claude Sonnet. With hard requirements listed, I found that the generated code missed requirements, had duplicate code, or even unnecessary data-wrangling code (mapping objects into new objects of narrower types that won't be needed), along with tests that fake results or work around failures just to pass.
Stop doing that. Micromanage it instead. Don't give it the specs for the system, design the system yourself (can use it for help doing that), inform it of the general design, but then give it tasks, ONE BY ONE, to do for fleshing it out. Approve each one, ask for corrections if needed, go to the next.
Still faster than writing each of those parts yourself (a few minutes instead of multiple hours), but much more accurate.
Might as well just write the code yourself at that point. And as a bonus, end up with a much better understanding of the codebase (and way better code)
>Might as well just write the code yourself at that point
"We have this thing that can speed your code writing 10x"
"If it isn't 1000x and it doesn't give me a turnkey end to end product might as well write the whole thing myself"
People have forgotten balance. Which is funny, because the inability of the AI to just do the whole thing end to end correctly is what stands between 10 developers having a job versus 1 developer having a job telling 10 or 20 agents what to do end to end and collecting the full results in a few hours.
And if you do it the way I describe you get to both use AI, AND have "a much better understanding of the codebase (and way better code)".
Writing the code is usually not the bottleneck, so you don’t gain that much speeding it up. And as I said, you lose a lot of knowledge about the code when you don’t write it yourself.
Unless coding is most of your job, which is rare, you’re giving up really knowing what your software does in order to achieve a very minor speed up. Just to end up having to spend way more time later trying to understand the AI generated code when inevitably something breaks.
> And if you do it the way I describe you get to both use AI, AND have "a much better understanding of the codebase (and way better code)".
Using AI is not a goal in itself, so I don’t care about “getting to use AI”. I care about doing my job as efficiently as possible, considering all parts of my job, not just coding.
Writing the code is A bottleneck. Except if by "writing the code" people just mean the mere physical act of typing it in. Which is not what I mean.
But if someone thinks that just the design/architecture decisions take time, and the fleshing out in actual code does not, they're wrong.
Some coders seem to think they're high-end architects, and that fleshing out the design is a triviality that goes very fast. Watching high-end coders write, e.g. in coding session streams or just someone at your office, will show you it's never that fast.
In actual programming practice, even if you know the design end to end, even if it's a 100-line thing, writing it takes time.
Look up how to call those APIs you need.
Debug when you inevitably get some of them wrong.
Figure out that regex you need to write.
Fix the 2-3 things you do wrong on the first pass of the "trivial" algorithm.
Add some logic to catch and report errors and handle edge cases.
Add tests.
All these are "trivial", but combined can take a couple of hours for something the AI will, most of the time, spit out correctly the first time in a minute. And of course as you write you also explore dozens of decisions that could go either way, even with the same exact design and external interface to your code.
Getting that ready from the LLM within a minute means you can explore alternative designs, handle new issues that occurred, add more functionality to make it more usable and smarter, etc., all while you'd still be writing the original cruder version.
>Using AI is not a goal in itself, so I don’t care about “getting to use AI”. I care about doing my job as efficiently as possible, considering all parts of my job, not just coding.
Not the point. Nobody said AI is a goal in itself.
AI however does speed up the work, and if you take the black-and-white "if AI can't do it all by itself end-to-end without me intervening then I'd rather write everything myself" (what I respond to), then you're not doing your job "as efficiently as possible".
The goalposts move every month. We’re at the stage where handing an entire specification to a mid-tier AI and walking away while it does all the work and then being disappointed that it wasn’t perfect means it’s useless.
If I still have to do a ton of work to clean up whatever the AI shits out then it might as well have done nothing. The promise of these systems from the hypesters is that it can do everything, so don't be surprised when people expect exactly that.
Yes. The feature is quickly-produced slop. Future LLMs will train on it too, getting even more sloppy. And "fresh out of uni" juniors and "outsourced my work to AI" seniors won't know any better.
Was that in a country that had just gone on a genocidal rampage, lost the war after killing millions all around Europe, was divided into several parts (of which the USSR got to control one), and still developed into an independent country less than a decade later?
Yes, but you're leaving out the other 9 countries the Soviet Union occupied, where it immediately started killing parts of the population to keep its conquests: Poland, Austria's Soviet zone, Hungary, Romania, Bulgaria, Czechoslovakia, Estonia, Latvia, and Lithuania.
By contrast, the US retreated. And also didn't start killing any population.
"Killing their population" as in executing some Nazi collaborators, of whom there was no shortage at every level, up to full cooperation? Like the ones involved in the Axis alliance and in the eastern-front offensives that caused the deaths of millions of their own people?
>And also didn't start killing any population.
Yes, just Korea, Vietnam, Cambodia, and anybody who leaned national sovereignty/left in Latin America and later the Middle East.
>Meta has about 10% more employees now than they did at the end of 2021.
So? They likely already had too many in 2021.
>They currently have less than half the employees of Google or Apple; only a third of Microsoft.
Technology (hw/sw) wise, they also have 1/10 the internal tech and public product breadth and scope of Google, Apple, or Microsoft. Maybe 1/50 even. They do like 4-5 social media and chat apps (that they hardly ever update anymore), and some crappy VR stuff nobody cares for.
It seems you haven't done the due diligence on what the parent meant :)
It's not about "constructing a prompt" in the sense of building the prompt string. That of course wouldn't be costly.
It is about reusing llm inference state already in GPU memory (for the older part of the prompt that remains the same) instead of rerunning the prompt and rebuilding those attention tensors from scratch.
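For illustration only, here is a toy Python sketch of that idea: a cache keyed by hashes of token prefixes, where a stand-in `process_token` plays the role of an expensive attention pass, so extending an already-seen prefix only processes the new tokens. None of the names correspond to any real inference API.

```python
import hashlib

def process_token(state, token):
    # Stand-in for one expensive transformer forward step that
    # extends the accumulated inference state by one token.
    return state + [token.upper()]

class PrefixCache:
    """Toy prompt-prefix cache: maps hashed token prefixes to their state."""

    def __init__(self):
        self._cache = {}  # prefix hash -> computed state

    def _key(self, tokens):
        return hashlib.sha256("\x1f".join(tokens).encode()).hexdigest()

    def run(self, tokens):
        # Find the longest cached prefix, then only process the suffix.
        state, start = [], 0
        for i in range(len(tokens), 0, -1):
            k = self._key(tokens[:i])
            if k in self._cache:
                state, start = self._cache[k], i
                break
        computed = 0
        for i in range(start, len(tokens)):
            state = process_token(state, tokens[i])
            computed += 1
            self._cache[self._key(tokens[:i + 1])] = state
        return state, computed

cache = PrefixCache()
_, n1 = cache.run(["system", "prompt", "hello"])          # processes all 3
_, n2 = cache.run(["system", "prompt", "hello", "more"])  # processes only 1
```

Real inference servers do the analogous thing with the attention KV tensors in GPU memory rather than a Python dict, but the accounting is the same: the repeated prefix costs (almost) nothing on subsequent calls.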
You not only skipped the diligence but confused everyone by repeating what I said :(
That is what caching is doing: the LLM inference state is being reused. (Attention vectors are an internal artefact at this level of abstraction; effectively, at this level, the cache key is the prompt.)
The part of the prompt that has already been inferred no longer needs to be re-run as input; its cached inference state stands in for it. And none of that state is tokens.
>It seems you haven't done the due diligence on what part of the API is expensive - constructing a prompt shouldn't be same charge/cost as llm pass.
I think you missed what the parent meant then, and the confusing way you replied seemed to imply that they're not doing inference caching (the opposite of what you wanted to mean).
The parent didn't say that caching is needed merely to avoid reconstructing the prompt as a string. They just take for granted that it means inference caching, to avoid starting the session totally anew. That's how I read "from prompting with the entire context every time" (not the mere string).
So when you answered as if they're wrong, and wrote "constructing a prompt shouldn't be same charge/cost as llm pass", you seemed to imply "constructing a prompt shouldn't be same charge/cost as llm pass [but due to bad implementation or overcharging it is]".
If you have asd or adhd (not uncommon in programmers) it can be a definitive minus for well-being. But even if you don't, between office politics and idiotic corporate mandates, it can be draining.
Especially as for the average office worker, originally you had an office of your own or at worst shared with one or two other people, then starting from the 80s you had a cubicle, then we got the hellish open plans. You're asked to focus on a screen and a codebase in an environment full of distractions and full of activity around you.
And that's before we added any commute, and preparing for the commute, which can easily eat an additional 1-2 hours of your day, every day.
This is me. I'm not anti-social by any means, and I like people, but constant chatter around me drives me nuts. So I put my headphones on and now I'm unapproachable. It's tough.
This. And on top of that, headphones at office suck, at least for me.
They don't drown out enough even with large, well insulated cups. So you add noise cancelling. Which drowns out more but not everything. In fact it keeps some very annoying stuff around that is suddenly actually audible VS being drowned out without the headphones. And having noise cancelling on for 8 hours straight for days in a row actually creates some significant pain in my ears. The next idea is music to drown out what's left but that just distracts me too.
Remote is the only good way.
In fact, being remote means I have "social interaction budget" for the family again VS it all having been used up during work hours (being an introvert)
> The next idea is music to drown out what's left but that just distracts me too.
You could try using white noise, either an app or if you have a Mac or iPhone they have native white noise generation (Accessibility -> Hearing -> Background Sounds iirc)
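If you'd rather roll your own than rely on OS settings, a few seconds of loopable white noise can be generated with nothing but the Python standard library. This is a minimal sketch; the file name, duration, and amplitude are arbitrary choices:

```python
import random
import struct
import wave

RATE = 44100        # samples per second (CD quality)
SECONDS = 5
AMPLITUDE = 0.3     # keep well below 1.0 to protect your ears

# Write uniform random samples to a mono 16-bit WAV file you can loop.
with wave.open("white_noise.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)   # 16-bit signed samples
    wav.setframerate(RATE)
    frames = bytearray()
    for _ in range(RATE * SECONDS):
        sample = int(random.uniform(-1.0, 1.0) * AMPLITUDE * 32767)
        frames += struct.pack("<h", sample)
    wav.writeframes(bytes(frames))
```

Set the resulting file to loop in any audio player; uniform noise like this is harsher than the shaped "background sounds" the OS ships, so you may want to lower the amplitude further.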
Maybe I'll have to try something new at some point. Fair. It's been a while.
I just googled this and what I found was this for example:
The Sony WH-1000XM3 is much better at canceling noise above 100Hz than the Bose is. However, because the Bose QC35 II can block out more sub-100Hz noise, it does a better job at killing unwanted car engines and low rumbles.
So it sounds like it's just going to be a different kind of noise that still comes through. Instead of still hearing voices, I might hear more of the AC hum. Sounds like a wash unfortunately. And one the company won't pay for ;)
One thing that immediately turned me off when finding the Sonys on Amazon: It says "Alexa". Sorry, immediate and 150% no thank you, see you, bye.
No, just try a pair from Amazon and return if you need to. I can mow the lawn with these on and it's nearly silent. There's a feature to recalibrate for air temp and ambient noise (use this every time you put them on). They are really good.
Wearing over the ear headphones all day can contribute to cranial pressure, tiring out your jaw muscles and strain your temporomandibular joint.
It can also encourage ear infections and clogging of the eustachian tubes, because covering or plugging your ears slows down the self cleaning process.
At first you won't notice, but after a decade, these problems will slowly creep up on you and fixing them is very expensive, because you're basically slowly deforming your bones.
I personally wouldn't let kids/teenagers use headphones that apply any amount of noticeable pressure.
Yep, ADHD and God knows what else here. Oddly enough, I am too gregarious, and it often gets me in a lot of trouble. So, by being WFH, I am not surrounded by distractions, and I am much more productive.