jedbrooke's comments | Hacker News

Fun fact: an electric guitar can also be used as a microphone if you shout into it loudly enough.

We discovered this while fooling around with some guitars as teenagers. We had a 4-track input device recording vocals and instruments on separate tracks, but even after turning down the vocal track, we could still hear the vocals bleeding into the instrument track. We then, of course, followed it up with some experiments deliberately shouting into the guitar and enjoying the distorted recordings that came out of it.


Honestly, I get a pretty good improvement from just adding an "Emoji are forbidden" rule and a small list of banned words and phrases (the usual suspects, like "it's not just x, it's y", etc.)

Banning bold font and telling it to avoid lists and tables as much as possible is an immediate, massive improvement to LLM output quality.
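One cheap way to enforce rules like these is a post-check on the model's reply; a minimal sketch (the banned-phrase list and regexes here are my own illustrative choices, not any standard filter):

```python
import re

# Hypothetical style rules, mirroring the comment above: forbid emoji,
# bold markdown, and a few overused LLM phrases.
BANNED_PHRASES = [
    "it's not just",
    "delve",
    "game-changer",
]

EMOJI_RE = re.compile(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]")
BOLD_RE = re.compile(r"\*\*[^*]+\*\*")

def style_violations(text: str) -> list[str]:
    """Return a list of style-rule violations found in an LLM reply."""
    hits = []
    if EMOJI_RE.search(text):
        hits.append("emoji")
    if BOLD_RE.search(text):
        hits.append("bold")
    lowered = text.lower()
    hits += [p for p in BANNED_PHRASES if p in lowered]
    return hits
```

A check like this can gate a retry loop: if violations come back non-empty, re-prompt the model with the offending rule restated.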

We’ve come full circle and started using Graphics Processing Units to process graphics again


> "alt+cmd m j/k" (media -> vol up/down)

If only keyboards came with built-in buttons for adjusting the volume… oh wait. Unless, of course, you are suffering on a Touch Bar Mac, in which case I completely understand.


It's not about "having" or "not having" keys for specific actions; it's all about freedom and the feeling of control. When you take and apply the idea of modality, you quickly realize that you are no longer constrained by the number of combinations you can have or the type of keyboard you're using. Everything can be controlled by (mostly) using home-row keys - h/j/k/l - without having to memorize weird combinations of modifiers and keys - "was it Ctrl+Alt+Cmd F, or just Ctrl+Cmd F?"

alt+cmd (was a typo, I meant to say alt+space), which is configurable - I myself prefer using cmd+space. That opens the "main" modal, from where you can configure "conditional branching" - e.g. "m" for "media", or "a" for "apps". So with "alt+space m j/k" you can do volume down/up, while pressing h/l could be "previous/next song". Then "alt+spc a b" activates the browser, and "alt+spc a t" could be bound to activate the terminal, etc.

It only looks like you have to press more keys to achieve anything; in practice, you quickly develop muscle memory. Then switching between apps, moving windows around and resizing them, controlling playback, etc. all become incredibly productive without breaking your focus. You don't need to keep moving your hand to the mouse, and you don't need to memorize and deal with a myriad of modifier-driven key combinations - you control precisely what you need, without ever having to contort your fingers to hold modifiers, without ever thinking "what should I bind this action to? All the memorable keys are already taken, I suppose I'll just bind it to this impossible combo with a key that has no semantic meaning for the thing..." With Spacehammer you can create mnemonically handy actions, e.g. "o f" for "Open in Finder", while in another context that may work as "Open in Firefox".
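The branching described above amounts to walking a tree of submenus, one keypress per level. A hypothetical sketch (key names and action labels are illustrative only; Spacehammer's real config is written in Fennel on top of Hammerspoon):

```python
# Hypothetical modal keymap: the leader key (e.g. alt+space) opens the
# root menu, then each keypress walks one level deeper until a leaf
# (an action) is reached.
keymap = {
    "m": {                      # "media" submenu
        "j": "volume-down",
        "k": "volume-up",
        "h": "previous-song",
        "l": "next-song",
    },
    "a": {                      # "apps" submenu
        "b": "activate-browser",
        "t": "activate-terminal",
    },
}

def resolve(keys):
    """Walk the keymap with a sequence of keypresses; return the action."""
    node = keymap
    for key in keys:
        node = node[key]
    return node

# e.g. leader, then "m", then "k" -> "volume-up"
```

The win is that each key only needs to be unique within its submenu, so short mnemonic sequences replace finger-contorting modifier chords.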


> if only keyboards came with built in buttons for adjusting the volume…

99% of my working day, my fingers are on or near alt/cmd/m/j/k (a nice, easy position in the centre of the keyboard).

They are not on, or indeed anywhere even vaguely near, fn+f10/f11/f12 (which are, in fact, at diametrically opposite corners of the keyboard).


My external mechanical keyboard doesn’t have media keys.


Looks interesting. I agree that chat is not always the right interface for agents, and an LLM-boosted CLI sometimes feels like the right paradigm (especially for dev-related tasks).

How would you say this compares to similar tools, like Google's dotprompt? https://google.github.io/dotprompt/getting-started/


I hadn't heard of that before, but after looking into it, I think they're solving different problems.

Dotprompt is a prompt template format that lives inside app code to standardize how prompts are written.
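For context, a dotprompt file (as I understand it; this sketch is from memory, the model name and field layout are assumptions, so check the official docs) pairs YAML frontmatter with a Handlebars template, roughly:

```
---
model: googleai/gemini-1.5-flash
input:
  schema:
    name: string
---
Hello, {{name}}! Tell me a fun fact.
```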

Axe is an execution runtime you run from the shell. There's no code to write (unless you want the LLM to run a script). You define the agent in TOML, run it with `axe run <agent name>`, and pipe data into it.


I tried it in Chinese, and ChatGPT said no, then gave a history of Saint Nicholas.


Oof, off topic, but the trains were out of service here for my commute last night, so I thought from the headline this meant that somehow all trains everywhere had just stopped working. Glad to see it's just some SaaS product that's down.


Indeed! Remote bricking of trains is apparently a thing: https://www.thedrive.com/news/hackers-beat-anti-repair-softw...


It's also mandated by Congress in the US; it's called PTC (Positive Train Control), i.e. remote control.


This wasn't PTC. It was repair lockouts instituted by the manufacturer of the trains based on a GPS geofencing beacon.


Sure! I'm just pointing out that, technically, you can stop the trains remotely, by design.


PaaS


The Windows version is playable on macOS through Wine, even the modern version; I got it running on an M2 Mac mini on macOS 15 Sequoia.

EDIT: this was for HL1; I'm not sure about HL2.


So many times I catch myself asking a coding agent, e.g., "please print the output", and it responds by updating the file with "print(output)".

Maybe there’s something about not having to context switch between natural language and code just makes it _feel_ easier sometimes


I've found that a 250 W incandescent bulb (can be had for ~$10) paired with a 4000-lumen LED produces decent results on a budget. Search for "reptile" or "chicken" lamps; they are usually red. You can feel the HEAT from a 250 W light bulb.

The only thing to watch out for is making sure the lamp base you're using can support the high wattage.

