dang, I pitched this on Reddit like 20 years ago. I've always wanted to know what putting the new page on the front page would do to content quality. Something like this https://keizo.github.io/hackernews/
We tried a variant of that once and it failed hard, because people have strongly different emotional responses to the front page vs. the newest page. Mixing the two produced a strong aversion.
Interesting! Like less overall engagement? Was it fully mixed or in distinct columns? I feel like that makes a difference.
I would at least interact with the new content if it was on the front page, versus almost never now. I assume the people active on the new page must be 1% of users, or mostly those directly involved with the story.
Anyway, thanks for the response and keep the place sane.
I came to the same conclusion, except I decided I could build something simpler. I'm still in the "one more feature bro" phase, but if this blog post resonates with anyone and you're open to a simple SaaS -- I'd love feedback: https://grugnotes.com
This. Lately, for some harder problems, I'll open two sessions. One writes a draft spec to a file. Then, in the second, I ask it to analyze, critique, etc. I often feed that response back to the first. A few ping-pongs later, I get a pretty polished plan, then open a new session to execute it.
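The ping-pong workflow above can be sketched roughly like this. `draft_session` and `critique_session` are hypothetical callables standing in for the two separate LLM sessions (e.g. two Claude Code instances); swap in whatever client you actually use:

```python
def refine_spec(task: str, draft_session, critique_session, rounds: int = 3) -> str:
    """Alternate between a drafting session and a critiquing session."""
    # Session 1 writes the initial draft spec.
    spec = draft_session(f"Write a draft spec for: {task}")
    for _ in range(rounds):
        # Session 2 analyzes and critiques the current draft...
        critique = critique_session(f"Analyze and critique this spec:\n{spec}")
        # ...and the critique is fed back to session 1 for a revision.
        spec = draft_session(f"Revise the spec to address this critique:\n{critique}")
    return spec
```

The final `spec` is what you'd hand to a fresh session to execute.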
Has anyone done simple latency profiling of the Gemini embedding API vs the OpenAI embedding API? That API call seems to be one of the biggest chunks of time in a simple RAG setup.
Gemini Flash and Groq are pretty fast, and that part is streamable. Curiosity got the best of me, so I had Claude Code write a quick test. Given that this test is simply 20 requests with a 1-second delay between them, run once, take it with a grain of salt -- but it's interesting still. An extra half second in a search is super noticeable, so Google is looking like a reasonable improvement.
OpenAI Statistics:
- Average: 0.360 seconds
- Median: 0.292 seconds
- Min: 0.211 seconds
- Max: 0.779 seconds
- Std Dev: 0.172 seconds
Google Gemini Statistics:
- Average: 0.304 seconds
- Median: 0.273 seconds
- Min: 0.250 seconds
- Max: 0.445 seconds
- Std Dev: 0.066 seconds
The key insights from these numbers:
- Google has much lower standard deviation (0.066 vs 0.172), meaning more consistent/predictable performance
- Google's worst-case (max) is much better than OpenAI's (0.445s vs 0.779s)
- OpenAI had a slightly better best-case (min) performance (0.211s vs 0.250s)
- Google's performance is more tightly clustered around its average, while OpenAI has more variability
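For anyone who wants to reproduce numbers like these, here's a rough sketch of the benchmark shape described above: time N sequential calls with a fixed delay between them, then summarize. `embed` is a placeholder callable; plug in the actual OpenAI or Gemini embedding client call you want to measure.

```python
import statistics
import time

def profile_latency(embed, n_requests: int = 20, delay: float = 1.0) -> dict:
    """Time n_requests sequential calls to `embed` and return summary stats."""
    timings = []
    for _ in range(n_requests):
        start = time.perf_counter()
        embed("hello world")  # the API call under test
        timings.append(time.perf_counter() - start)
        time.sleep(delay)  # pause between requests, as in the test above
    return {
        "average": statistics.mean(timings),
        "median": statistics.median(timings),
        "min": min(timings),
        "max": max(timings),
        "std_dev": statistics.stdev(timings),
    }
```

A single run like this only gives a point sample; running it at different times of day would say more about the tail behavior.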
I like this. Complexity bad, delete it! Most PKM, tools-for-thought, second-brain apps confuse me. I drank the Kool-Aid with Roam Research, but it drove me kinda nuts. I've spent almost 3 years making my own tool. I mostly use it as a paste-bin, todos, lists -- and for the only thing I would never delete: voice notes of funny sayings or interactions with my 3 y/o daughter. My project is over at https://grugnotes.com if anyone else fits the anti-note-app vibe I'm kinda leaning into.
I've been working on a Roam Research replacement for myself for almost 3 years now. Initially, my main thought with self-organizing notes was to tag things automatically and use AI for retrieval. Generally, I avoid letting AI "touch" your notes, but "breaking apart" a single note with AI is something I think about doing -- I just haven't done it yet. Either way, I always want a human in the loop. Given this is HN, I'm sure you're working on your own solution, but feel free to check out my project! :) https://grugnotes.com
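The "break a note apart, but keep a human in the loop" idea could be sketched something like this. `suggest_splits` and `confirm` are hypothetical callables (an LLM call returning proposed sub-notes, and a UI prompt for approval, respectively) -- nothing here is from an actual implementation:

```python
def split_note(note: str, suggest_splits, confirm) -> list[str]:
    """Return only the AI-proposed sub-notes the human approves."""
    proposals = suggest_splits(note)  # e.g. an LLM returning a list of strings
    approved = [p for p in proposals if confirm(p)]
    # If the human rejects everything, leave the original note untouched.
    return approved or [note]
```

The key property is that the AI only ever proposes; the write-back happens strictly on approval.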