Like in that comic strip[0], where one side uses AI to inflate their bullet points so the email looks better and has more content, and then the other side uses AI to summarize it back down to bullet points.
This probably happens a billion times a day. I shudder to think of the cost of it, especially knowing that LLMs aren't great at summarizing, nor are they flawless at expanding information.
Especially when it sits for a month and all the effort ends up either irrelevant or incompatible with the latest changes that finally landed. So much token wastage, to top off the recent chaos. Hopefully it improves just as fast as it materialised.
I think a lot of us are blinded by our own propaganda. I would expect many Chinese geeks to have the same values as us for the greater good of humanity.
If anyone has had 4.7 update any documents so far: notice how concise it is at getting straight to the point. It rewrote some of my existing documentation (using Windsurf as the harness). I'm not sure I liked the decrease in verbosity (it removed columns and combined/compressed concepts), but it makes sense given that the model outputting less saves cost.
To me this seems more like it's trained to be concise by default, which I guess can be countered with preference instructions if required.
What's interesting to me is that they're using a new tokeniser. Does that mean they trained a new model from scratch? Or did they take an existing model and further train it with a swapped-out tokeniser?
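For what it's worth, swapping a tokeniser on an existing model usually means re-initialising the embedding matrix for the new vocabulary. One common heuristic is to warm-start each new token's embedding as the mean of the old embeddings of the pieces it decomposes into under the old tokeniser. A toy sketch of that idea (the vocabularies, embeddings, and greedy tokenizer here are all made up for illustration; real pipelines would do something like `model.resize_token_embeddings(...)` in transformers plus a similar warm-start):

```python
# Toy sketch: warm-start embeddings for a new vocabulary by averaging
# the old embeddings of the old-vocab pieces each new token splits into.
# Everything below is hypothetical illustration, not any model's actual setup.

old_vocab = {"un": 0, "believ": 1, "able": 2, "the": 3}
old_emb = {
    0: [1.0, 0.0],
    1: [0.0, 1.0],
    2: [1.0, 1.0],
    3: [0.5, 0.5],
}

def old_tokenize(word):
    # Greedy longest-match over the old vocab (a stand-in for real BPE).
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in old_vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            raise ValueError(f"cannot tokenize {word!r}")
    return pieces

def init_new_embedding(new_token):
    # Mean of the old-piece embeddings as a warm start for the new token.
    pieces = old_tokenize(new_token)
    vecs = [old_emb[old_vocab[p]] for p in pieces]
    return [sum(c) / len(vecs) for c in zip(*vecs)]

print(init_new_embedding("unbelievable"))  # mean of "un", "believ", "able"
```

The warm start matters because randomly initialised embeddings for a large new vocabulary would otherwise need a lot of further training just to catch up with the rest of the model.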
The looped-model research / speculation is also quite interesting - if done right, there are significant speed-ups / resource savings.
On API use, I am noticing more verbose output across the board. When I task it with plans, it now creates more detailed task counts and task descriptions. It sticks more closely to its directions than 4.6 did.
Tl;dr's, quick references / quickstarts, cheat sheets, and FAQs are also some of the things they're great at generating.