
Liquid crystal elastomers will most likely never be used in humans because, in order to drive the phase transition (nematic mesogens going from the anisotropic to the isotropic phase) necessary for macro-scale work, the LCE has to be heated well beyond 100 °C. Even in non-thermal contexts, you need kilovolts to influence a doped bulk LCE. I just don't see it happening.

To be fair, the approach is usually covered in snowpack for most of the year, so the impact from foot traffic is minimal. However, most of the protection is fixed, which could have lasting effects if something were to rip out.

For other mountains with dry summits in the summers, I would agree: the effects of erosion are frightening


Do people still leave oxygen bottles up there? And what do they do with all of their excrement?

The saying is that the snowpack gives back everything you put in it.


GitHub has a spot to display your email on your profile; is this obfuscated as well? Most of my current spam is from putting my email on there...


Your email is still available from the actual commits presumably.


Today I scheduled a dentist appointment over the phone with an LLM. At the end of the call, I prompted it with various math problems, all of which it answered before politely reminding me that it would prefer to help me with "all things dental."

It did get me thinking about the extent to which I could bypass the original prompt and use someone else's tokens for free.


https://bsky.app/profile/theophite.bsky.social/post/3mhjxtxr...

>> "claude costs $20/mo but attaching an agent harness to the chipotle customer service endpoint is free"

>> "BurritoBypass: An agentic coding harness for extracting Python from customer-service LLMs that would really rather talk about guacamole."


https://bsky.app/profile/weiyen.net/post/3m7kenmok4c2n

I did something similar. Try framing your maths question in terms of teeth


And this is another problem easily solved by someone who knows what they are doing…

Voice -> speech to text engine -> LLM creates JSON that the orchestrator understands -> JSON -> regular code as the orchestration -> text based response -> text to speech

Notice that I am not using the LLM to produce output to the user, and if the orchestrator (again, regular old code) doesn’t get valid input, it’s going to error. Sure, you can jailbreak my LLM interpretation. But my orchestrator is going to have the same role-based permissions as if I were using the same API as a backend for a website. Because I probably am.

Source: creating call centers with Amazon Connect is one of my specialties
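
A minimal sketch of that shape in Python (names hypothetical, not my actual Amazon Connect code): the LLM's only job is to emit JSON, and regular code validates it and produces every word the user hears.

    import json

    ALLOWED_INTENTS = {"book_appointment", "cancel_appointment"}

    def parse_intent(transcript: str, llm) -> dict:
        # llm is any text-in/text-out callable; it must return ONLY JSON
        prompt = (
            "Return ONLY a JSON object with keys 'intent' and 'params' "
            f"describing this request: {transcript!r}"
        )
        data = json.loads(llm(prompt))  # non-JSON raises, and the call escalates
        if not isinstance(data, dict) or data.get("intent") not in ALLOWED_INTENTS:
            raise ValueError(f"invalid intent payload: {data!r}")
        return data

    def respond(transcript: str, llm, backend) -> str:
        try:
            data = parse_intent(transcript, llm)
        except ValueError:
            return "I'm having trouble understanding you, let me transfer you to someone who can help."
        # The backend enforces the same role-based permissions as the website
        # API, so a jailbroken LLM can at worst emit JSON that gets rejected.
        return backend.handle(data["intent"], data["params"])  # programmatic text, never LLM prose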


> Notice that I am not using the LLM to produce output to the user

So what output does the user get?


The programmatically generated response from the orchestrator which could be either a confirmation or request for more information.


Sure - but does this have the context of the original question that the user asked? If not, it seems that it isn’t really conversational and more of a “compiler”.

How would something like “I want an appointment either on Monday afternoon after 4pm or one on Tuesday before 11am” work?

Unless all the parameters given by the user fit within the constraints of the JSON format, the LLM would need the context of the request and the results to answer properly, would it not?


For reference, my last discussion about this

https://news.ycombinator.com/item?id=47241412

This is a constrained space. I would do the naive implementation at first, then talk to the humans (like you), and then my JSON definition would include a timespan-type field.

My orchestrator would then say “I have these times available: [list of times]. What time would you like?” and then return a specific LLM prompt to parse the information I need once the user responds. But I would send that exact text to the user. Yes, I’m purposefully constraining the implementation so the LLM is never used for output and never directly controls the backend.
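
A hedged sketch of that constrained step (slot names and formats invented for illustration):

    AVAILABLE = ["Monday 16:15", "Monday 16:45", "Tuesday 09:30"]

    def offer_times() -> str:
        # Programmatic text sent verbatim to the user, never LLM prose
        return ("I have these times available: " + ", ".join(AVAILABLE)
                + ". What time would you like?")

    def parse_choice(user_reply: str, llm) -> str:
        prompt = (
            f"The caller was offered these appointment slots: {AVAILABLE}. "
            f"Their reply was: {user_reply!r}. "
            "Return EXACTLY one slot from the list, or the word NONE."
        )
        choice = llm(prompt).strip()
        if choice not in AVAILABLE:
            raise ValueError("unparseable reply")  # orchestrator escalates
        return choice

The “Monday after 4pm or Tuesday before 11am” case then reduces to the LLM mapping free text onto a fixed menu, which is trivial to validate.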

There is also the concept of “semantic alignment”, where you ask the LLM a generic question - “does the user's answer make sense with regard to the question?” - as a first-level filter that only returns true or false. This is again a constrained function: you pass the question and answer to the LLM, and if you get something besides true or false, your code errors.
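
As a sketch (not my production version), that filter is just:

    def semantically_aligned(question: str, answer: str, llm) -> bool:
        # First-level filter: anything besides a literal true/false errors
        prompt = (
            "Answer ONLY 'true' or 'false': does the user's answer make sense "
            f"with regard to the question?\nQuestion: {question!r}\nAnswer: {answer!r}"
        )
        verdict = llm(prompt).strip().lower()
        if verdict not in ("true", "false"):
            raise ValueError(f"non-boolean verdict: {verdict!r}")
        return verdict == "true"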

The purpose of an LLM, or even before that an old-school intent-based system (see my link), isn’t perfection, it’s “deflection”. The more you can handle through automation, the less you have to bring a human in. At an American-based call center, a call handled by a human agent costs from $3–$7 fully allocated. An automated call can cost tenths of a penny.
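
To make that concrete with an invented volume: at 10,000 calls a month, deflecting just 20% of them from a $5 human-handled call to a $0.005 automated one saves roughly 2,000 × ($5 − $0.005) ≈ $10,000 a month.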

Of course that doesn’t include the cost of accepting a call in the first place over a 1-800 number, and in my case the price that AWS charges per minute for Amazon Connect.


> This is again a constrained function: you pass the question and answer to the LLM, and if you get something besides true or false, your code errors.

Code erroring is fine for code, but what is the user experience here? Some sort of “computer says no” generic response, or something more contextual?

I’m trying to picture what the user says and hears as a response to an off-the-beaten-path question. Is it just “I don’t understand, here’s how to phrase it?”.


If there is an issue, they are transferred to a human operator: “I’m having trouble understanding you, let me transfer you to someone who can help”. On the CSR’s screen, they will see the conversation that has taken place so far.

There is also sentiment analysis built into the prompt, so it can detect negative sentiment, automatically short-circuit the process, and transfer to a human.


Could just have used NLP


NLP doesn’t have world knowledge, and with one prompt I can support almost any language. Of course, the speech-to-text engine is specific to the language.


> politely reminding me that it would prefer to help me with "all things dental."

I'm amused to imagine it actually wasn't an LLM at all, just a good-natured Jeeves-like receptionist.

(AskJeeves came too early, much better suited as a name for Kagi or something like it!)


Haha, for sure someone has made a little aggregator for this and is saving tokens. I bet you gotta dig for a while, though, before you find a company exposing Opus 4.6 to customers and not Flash 2.5 Lite.


One of my buddies has beaten Minecraft many times using only his Mac trackpad. In fact, he's better than I am with a mouse.


What do you see as the bad part of this? That the user is trying to farm points by copying patterns of upvote-winning users, or that there's a flood of inauthentic new users? Genuinely asking.


every forum has busybodies who try to make something out of nothing


Not if you have a local Amish plug


I suppose that's why the harpejji [0] has recently gained popularity? I too have wished for an isomorphic keyboard. All of the non-stacked ones become either too wide or the keys become too skinny. Example: Dodeka Keyboard [1]. I know that the Lumatone [2] exists too, but it is too progressive for my taste :)

As a side note, the traditional keyboard size is not representative of the average pianist's hand size. David Steinbuhler [3] has been making modified traditional keyboard layouts by varying the width of the keys slightly, and people rave about it. I've had the chance to visit his shop in Titusville, Pennsylvania, where he designs them. It's a totally enhanced playing experience, even for someone like me who can play a 10th without difficulty.

[0] https://en.wikipedia.org/wiki/Harpejji [1] https://dodekamusic.com/ [2] https://www.lumatone.io/ [3] http://dsstandardfoundation.org/the-standards/


This is orders of magnitude more complicated and risk-prone than wire wrapping due to the possibility of cold joints, but as I understand it, this look is what people dig these days (just watch any EE youtuber). I too used to think that soldering on proto board was a great way to go about prototyping sans a solderless breadboard, but you can't ignore the bomber connections that wire wrapping gives you.


Might be a dumb question, but isn’t the risk of cold joints inversely proportional to your skill at soldering in general? Important context: I am definitely a noob to soldering


It is, yes. After some practice, you will not get cold joints. Or when there is a danger of a cold joint due to massive heat sinking around, you will know and be extra careful


I'm curious as to what kind of control stack Waymo uses for their vehicles. Obviously their perception stack has to be based on trained models, but I'm curious whether their controllers have any formal guarantees under certain conditions, and whether the child walking out was within that formal set of parameters (e.g. velocity, distance to obstacle) or violated it, making their control stack switch to some other "panic" controller.

This will continue to be the debate—whether human performance would have exceeded that of the autonomous system.


From a purely stats POV, in situations where the confusion matrix is very asymmetric in terms of what we care about (false negatives are extra bad), you generally want multiple uncorrelated mechanisms and simply require that any one flips before deciding to stop. All would have to fail simultaneously to not brake, which becomes vanishingly unlikely (p^n for n mechanisms) assuming uncorrelated errors. This is why I love the concept of lidar and optical together.
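
A toy calculation with invented numbers makes the asymmetry concrete:

    # Each independent detector misses a hazard with prob p_miss and
    # false-alarms with prob p_fa; the car brakes if ANY of n detectors fires.
    p_miss, p_fa, n = 0.01, 0.001, 3

    combined_miss = p_miss ** n        # all n must miss at once: 1e-06
    combined_fa = 1 - (1 - p_fa) ** n  # any single false alarm stops: ~3e-03

The miss rate collapses geometrically while the false-alarm rate grows only roughly n-fold, which is the phantom-braking trade-off.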


The true self-driving trolley problem. How many rear-end collisions and how much rider annoyance caused by phantom braking is a manufacturer (or a society) going to tolerate to save one child per N million miles?

An uncorrelated approach improves sensitivity at the cost of specificity. Early sensor fusion might improve both (maybe at the cost of somewhat lower sensitivity).


With above-average human reflexes, the kid would have been hit at 14mph instead of 6mph.

About 5x more kinetic energy, since KE scales with v²: (14/6)² ≈ 5.4.


Yeah, if a human made the same mistake as the Waymo, driving too fast near the school, then they would have hurt the kid much worse than the Waymo did.

So if we're going to have cars drive irresponsibly fast near schools, it's better that they be piloted by robots.

But there may be a better solution...


But would a human be driving at 17 in a school zone during drop off hours? I'd argue a human may be slower exactly because of this scenario.


> would a human be driving at 17 in a school zone during drop off hours?

In my experience in California, always and yes.


Maybe we should not only replace the unsafe humans with robots, but also have the robots drive in a safe manner near schools rather than replicating the unsafe human behavior?


One argument for the robots is that they can be programmed to drive safer, while humans can't.

But that depends on reliability, especially in unforeseen (and untrained-upon) circumstances. We'll have to see how they do, but they have been doing better than expected.


Depends on the school zone. The tech school near me is in a 50 zone and they don't even turn on the "20 when flashing" signs because if you're gonna walk there, you're gonna come in via residential side streets in the back and the school itself is way back off the road. The other school near me is downtown and you wouldn't be able to go 17 even if you wanted to.


Kinetic energy is a bad metric. Acceleration is what splats people.

Jumping out of a plane wearing a parachute vs jumping off a building without one.

But acceleration is hard to calculate without knowing time or distance (assuming it's even linear), and you don't get that exponent over velocity yielding a big number that's great for heartstring-grabbing and appealing to emotion, hence why nobody ever uses it.

