
The SPICE core that ngspice is built on is terrible code. It has a long history going back to 1970s-era Fortran. Starting fresh is probably preferable.

That's not a revival, though; "revive" (at least to me) implies it was dead.

> The SPICE core that ngspice is built on is terrible code. It has a long history going back to 1970s-era Fortran. Starting fresh is probably preferable.

That code is also hyper-optimized for performance. I sincerely doubt you are going to match the performance easily with any random rewrite.

Now, if you had a very clear idea of why the code was making assumptions from the 1990s that are no longer valid, then you might stand a chance of producing something that would outperform it. Or, perhaps, if you had particular knowledge of modern high-performance numerical libraries that you could apply to the problem, then you might be able to beat it.

However, circuit simulation is remarkably difficult to get right (stiff systems with multiple time constants are not uncommon) and generally resistant to parallelization (each device can have its own model, each of which is a unique set of linear differential equations).
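
The stiffness point is easy to demonstrate. Here's a toy sketch (not simulator code; the equation and step size are made up for illustration) showing why stiff systems force circuit simulators into implicit integration methods:

```python
import math

# Toy stiff ODE: dy/dt = -1000*y, y(0) = 1. The exact solution decays
# almost instantly, but an explicit solver blows up at this step size.
LAM = -1000.0
h = 0.005
steps = 200

# Forward (explicit) Euler: y_{n+1} = y_n * (1 + h*lam)
y_fwd = 1.0
for _ in range(steps):
    y_fwd *= (1.0 + h * LAM)   # |1 + h*lam| = 4 > 1, so it diverges

# Backward (implicit) Euler: y_{n+1} = y_n / (1 - h*lam)
y_bwd = 1.0
for _ in range(steps):
    y_bwd /= (1.0 - h * LAM)   # unconditionally stable for lam < 0

print(abs(y_fwd))  # astronomically large: explicit method diverged
print(abs(y_bwd))  # tiny: implicit method tracks the decayed solution
```

With two time constants a microsecond apart on the same node, the fast one dictates the explicit step size even after its transient is long gone; implicit methods (what SPICE actually uses, at the cost of a linear solve per step) don't have that problem.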

If, however, the legacy of ngspice bugs you that much, go look at Xyce and see if that is more to your taste.


> and generally resistant to parallelization (each device can have its own model, each of which is a unique set of linear differential equations).

Solving sets of differential equations is parallelizable, though.

See, for example, how there are physics engines running on GPUs. That's mechanics rather than electric circuits, but it's differential equations all the same.


Which differential equations are you talking about? Linear ones have standard solutions and are definitely parallelisable (though you can basically just write the solution down by hand). Non-linear ones range from being well approximated by a linear solution with corrections to requiring relaxation methods (which are obviously not parallelisable).

Mechanics is generally linear, and for game physics engines fast is more valuable than correct (fast inverse square root being the obvious poster child). Add viscosity and you're in for a bad time.
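
For anyone who hasn't seen the "fast over correct" poster child, here's a rough Python port of the Quake III fast inverse square root (real implementations do this on raw 32-bit floats in C; the port is for illustration only):

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    """Quake III-style fast inverse square root: bit-level initial
    guess plus one Newton iteration. Fast, but only ~0.2% accurate."""
    # reinterpret the float32 bit pattern as an integer
    i = struct.unpack('>I', struct.pack('>f', x))[0]
    i = 0x5f3759df - (i >> 1)              # the famous magic constant
    y = struct.unpack('>f', struct.pack('>I', i))[0]
    y = y * (1.5 - 0.5 * x * y * y)        # one Newton-Raphson step
    return y

print(fast_inv_sqrt(4.0))  # close to 0.5, but deliberately not exact
```

A few parts per thousand of error is fine for shading a polygon; it's exactly the kind of trade a verification-grade simulator can't make.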


To be specific: a linear solver can be written in a week (I have done it).

A serious non-linear solver that handles legacy SPICE models is another beast entirely. And if you want to integrate modern advances in differential-algebraic systems, that takes it to a higher level still.
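
For a sense of scale on the "linear solver in a week" point: the core of a toy dense solver fits in a screenful. This is a minimal sketch (real circuit simulators use sparse LU factorization instead, and the example system is made up):

```python
def solve(A, b):
    """Dense Gaussian elimination with partial pivoting.
    Illustrative only; production solvers are sparse and factored."""
    n = len(A)
    # build an augmented copy so the caller's data isn't mutated
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        # partial pivoting: bring the largest remaining pivot up
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    # back-substitution
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c]
                              for c in range(k + 1, n))) / M[k][k]
    return x

print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # ~[0.8, 1.4]
```

The hard part isn't this; it's everything wrapped around it.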

These are not partial differential equations such as you find in Navier-Stokes. These are sparse non-linear differential equations that do not parallelize nearly as simply.
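
To make the "linear solution with corrections" idea concrete, here is one node of the Newton-Raphson inner loop as SPICE-family simulators structure it: each iteration replaces the nonlinear device with a linearized companion model (a conductance plus a current source) and solves the resulting linear circuit. The component values are invented for the sketch:

```python
import math

# A diode (I = Is*(exp(V/Vt) - 1)) fed from source Vs through resistor R.
# Illustrative values only, not from any real model card.
Is, Vt = 1e-14, 0.02585   # saturation current, thermal voltage
Vs, R = 5.0, 1000.0

v = 0.6                   # initial guess for the diode voltage
for _ in range(100):
    i_d = Is * (math.exp(v / Vt) - 1.0)   # diode current at the guess
    g = Is * math.exp(v / Vt) / Vt        # linearized conductance dI/dV
    ieq = i_d - g * v                     # companion current source
    # KCL at the node is now linear in v: (Vs - v)/R = g*v + ieq
    v_new = (Vs / R - ieq) / (1.0 / R + g)
    done = abs(v_new - v) < 1e-12
    v = v_new
    if done:
        break

residual = (Vs - v) / R - Is * (math.exp(v / Vt) - 1.0)
print(v, abs(residual))   # diode drop around 0.69 V, residual near zero
```

Now imagine tens of thousands of devices, each stamping its own companion model into one big sparse matrix every Newton iteration, every timestep, with the iterations coupled through that shared solve; that's why it doesn't parallelize the way independent rigid bodies in a game engine do.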

Another example of a related problem that parallelizes poorly even though it is linear is the FDTD formulation of Maxwell's equations. These are relatively simple systems, but the bottleneck is almost always memory bandwidth, which makes them so hard to parallelize efficiently.
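
A minimal 1-D FDTD (Yee scheme) sketch shows why, with normalized units and made-up grid parameters: each update touches every field value while doing almost no arithmetic per memory access, so adding cores mostly just queues up on the memory bus.

```python
# 1-D FDTD leapfrog update for Maxwell's equations, normalized units.
N = 200
ez = [0.0] * N          # electric field samples
hy = [0.0] * N          # magnetic field samples (staggered half-cell)
C = 0.5                 # Courant number (<= 1 for 1-D stability)

ez[N // 2] = 1.0        # impulse source at the center of the grid

for t in range(100):
    # E update: one neighbor difference, one multiply-add per cell
    for i in range(1, N):
        ez[i] += C * (hy[i] - hy[i - 1])
    # H update: the same tiny stencil, swept over the whole grid again
    for i in range(N - 1):
        hy[i] += C * (ez[i + 1] - ez[i])

print(max(abs(x) for x in ez))  # pulse has propagated; field stays bounded
```

Two streamed passes over the arrays per timestep, roughly one fused multiply-add per load: the arithmetic intensity is so low that the cores starve for data long before they run out of FLOPs.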


The people who need SPICE are dead serious about accuracy. A 1 ppm error is sometimes intolerable. So an optimization trick from a game engine is definitely not suitable for engineering simulation.

Dude, these are incredibly oversimplified models of real components. How are you getting 1 ppm when basic stuff like tempco and self-heating is missing from pretty much every vendor-provided SPICE model?

And correctness too. I guess there aren't that many hardcore electrical engineers/physicists/mathematicians who can make sure the results it produces are correct and sound, and debug weird issues stemming from numerical stability.

The sort of people who can do this are very rare, and it's not likely they will just randomly decide to donate their time to rewrite the codebase.


> That code is also hyper-optimized for performance. I sincerely doubt you are going to match the performance easily with any random rewrite.

Hyper-optimized for '70s-era Fortran isn't going to be all that optimized on modern CPUs.

I bet the compiler optimizations LLVM can apply to clean code alone would make it faster.


> Now, if you had a very clear idea of why the code was making assumptions from the 1990s that are no longer valid, then you might stand a chance of producing something that would outperform it. Or, perhaps, if you had particular knowledge of modern high-performance numerical libraries that you could apply to the problem, then you might be able to beat it.

But that's exactly the sort of exotic domain knowledge that AI models have that I don't.


That code was optimized for performance for 1980s hardware. It’s very far from optimized for modern CPUs.

Glasses are another place where the savings are astounding.

Eng-leadership pings seem to be one of the few strategies that have been working on the hiring side for finding great talent over the last few months.


Rinse aid is an interesting topic. There's some evidence that it causes damage to the gut. https://pubmed.ncbi.nlm.nih.gov/36464527/


Not to mention, lots of places have time-of-use electricity pricing, which makes it even worse. This is the problem with running my heat pump when it's cold: some of the coldest times (right before dawn) coincide with peak time-of-use prices.


Where do you live that the highest electricity prices are before dawn?


Reminds me of when we used to drive past a Pinetree Line station every summer on the way to visit my grandparents.


Is this detecting people who work overnights?


I believe so, yes. They tracked hours of light exposure at night over a week and found this result in the 90th–100th percentile. The 90th percentile here is pretty much going to be people working at night, yeah.


The 50–60, 60–70, 70–80, and 80–90 percentiles, however, are obviously not shift workers (unless they're shift workers working in the dark). Compare the lowest percentile of daytime light exposure with the highest percentile of nighttime light exposure and you can see there just aren't that many shift workers in the study. About 13% of the UK population is considered to work nights. We can safely assume night workers aren't represented across the 50th–90th percentiles, because there simply aren't enough of them to go around, and they would not be statistically significant.


Almost certainly some kind of 3rd variable, yeah.


I have a Kobo. The firmware is absolutely terrible. Somehow it's gotten itself into a state where it doesn't detect books added to the device, even after fully resetting the hardware.


Give KOReader a try; I regret not switching sooner.


This sounds like wire fraud...


That number for w20 will probably change. For example, when https://youtu.be/2GgcIuQ4X5k?t=324 was produced, w20 was predicted to be up 0.84% YoY.

