razodactyl's comments

Seeing this too. Machines are great at pumping out content.

TL;DRs, quick references, quickstarts, cheat sheets, and FAQs are also things they're great at generating.


Like in that comic strip[0], where one side uses AI to inflate their bullet points to make the email look better and have more content, then the other side uses AI to summarize it back to bullet points.

[0] https://marketoonist.com/2023/03/ai-written-ai-read.html


This probably happens a billion times a day. I shudder to think of the cost of it, especially knowing how LLMs aren't great at summarizing, nor are they flawless at expanding information.

Especially when it waits a month and all the effort is either irrelevant or incompatible with the latest changes that finally got through. So much token wastage on top of the recent chaos. Hopefully it improves just as fast as it materialised.

It's called: the CEO isn't staying in their lane and is injecting incompetence into the company - look for a new job.

So... their billing system is running `$ claude | jq` somewhere?

See image attached: Tried Windsurf today - approaching daily usage on a Pro Trial - the daily usage isn't spread out over the total weekly usage?

Token prices for AI are getting a bit ridiculous. I'm thinking pi.dev is starting to make more sense.


And now we finally have CSS grid. Remember centering a div? Haha


I think a lot of us are blinded by our own propaganda. I would expect many Chinese geeks to have the same values as us for the greater good of humanity.


> I would expect many Chinese geeks to have the same values as us for the greater good of humanity.

Yes, they just can't talk about some of those values publicly.


They certainly can.


Please provide a link to a Chinese geek publicly posting in China that Xi Jinping needs to be replaced.

Equivalent: here, look, a US state-funded news agency posting discussions about how Trump needs to be replaced:

https://www.pbs.org/newshour/politics/democrats-grow-bolder-...


These pelicans are clearly indicative of good RL training algorithms.


If anyone's had 4.7 update any documents so far: notice how concise it is, getting straight to the point. It rewrote some of my existing documentation (using Windsurf as the harness). I'm not sure I liked the decrease in verbosity (it removed columns and combined/compressed concepts), but it makes sense given that the model outputs less to save cost.

To me this seems more like it's trained to be concise by default, which I guess can be countered with preference instructions if required.

What's interesting to me is that they're using a new tokeniser. Does that mean they trained a new model from scratch, or took an existing model and further trained it with a swapped-out tokeniser?
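For context on why swapping a tokeniser isn't a drop-in change: a model's embedding table is indexed by token id, so a new vocabulary that assigns different ids (and different segmentations) invalidates those rows unless they're retrained or remapped. A toy sketch, using hypothetical vocabularies and naive greedy longest-match (not any real tokeniser's algorithm):

```python
# Toy illustration: the same text maps to different id sequences under
# two tokenisers, so embeddings trained against one vocabulary are
# meaningless under the other. Vocabularies here are made up.

def tokenize(text, vocab):
    """Greedy longest-match tokenisation over a tiny vocabulary."""
    ids, i = [], 0
    while i < len(text):
        for end in range(len(text), i, -1):  # try longest piece first
            piece = text[i:end]
            if piece in vocab:
                ids.append(vocab[piece])
                i = end
                break
        else:
            raise ValueError(f"no token for {text[i]!r}")
    return ids

old_vocab = {"to": 0, "ken": 1, "iser": 2, " ": 3}
new_vocab = {"token": 0, "iser": 1, " ": 2, "to": 3, "ken": 4}

print(tokenize("tokeniser", old_vocab))  # [0, 1, 2]
print(tokenize("tokeniser", new_vocab))  # [0, 1]
```

Note the sequences differ in both ids and length, which is why a tokeniser swap usually implies at least retraining the embedding (and output) layers, if not more.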

The looped-model research / speculation is also quite interesting: if done right, there are significant speed-ups / resource savings.


Interesting. In conversational use, it's noticeably more verbose.


On API use, I'm noticing verbose output across the board. When I task it with plans, it now creates more detailed task counts and task descriptions. It sticks more closely to its directions than 4.6.


Bad feedback loops. With such a massive report, it's hard to tell whether the numbers are real or bad data.

The worst part is how big AI-generated reports are - so much total time spent reading fluff.

