Hacker News | chipguy's comments

That's interesting, because that's how humans actually do algebra. Who cares about the decimal expansion of some irrational number if, for example, we end up dividing it by itself?


Unfortunately, most infinite-precision real-number implementations (including the one here, I bet, but I haven't checked) will not recognize that you're dividing something by itself. So you'll get a super-inefficient implementation of the number 1 :-).

[EDITED to add:] I checked; the implementation here indeed won't recognize that you're dividing something by itself. For the avoidance of doubt, I don't think trying to recognize such things would be an improvement, either.

(Also, if you ask for e.g. the digit immediately preceding the decimal point, you will wait forever, because no amount of precision will enable the implementation to tell whether your number is just under 1, exactly 1, or just over. So don't ask it for that; ask it for a very close approximation and draw your own conclusions.)
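To make the "ask for a close approximation" point concrete, here's a minimal sketch (my own toy model, not the implementation under discussion), where a computable real is modeled as a function from a precision parameter to a rational approximation:

```python
from fractions import Fraction

# Toy model: a computable real is a function that, given n, returns a
# rational within roughly 2**-n of the true value.
def div(x, y):
    # Naive division: just evaluate both operands at higher precision.
    # Nothing here notices that x and y might denote the same number.
    return lambda n: x(n + 10) / y(n + 10)

# Crude stand-in approximation of sqrt(2); a real package would refine
# this to arbitrary precision instead of relying on a float.
sqrt2 = lambda n: Fraction(round(2 ** 0.5 * 2 ** n), 2 ** n)

one_ish = div(sqrt2, sqrt2)

# Rather than asking "is this exactly 1?" (which cannot terminate in
# general), ask for a close approximation and draw your own conclusions:
print(float(one_ish(40)))  # -> 1.0
```

The quotient comes out as 1 here only because both operands produce the identical approximation at each precision; the division itself is still carried out numerically, with no symbolic recognition of x/x.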


The notion of "by itself" is dubious at best in this context. You can only check that two numbers are equal up to finite precision. For anything more, you need computer algebra, which is a tall order.

Even if you have computer algebra, mathematics says it's not possible to check that arbitrary expressions are equal: it's an undecidable problem. So with that, I don't fault a computable-reals package for not being able to detect that x == y in an expression x/y.


You can imagine an infinite-precision-real package that checks whether numerator and denominator are the exact same object and optimizes that case away.

(And I wonder idly from time to time about making one that does know a bit about some particularly nice classes of number -- e.g., algebraic numbers -- and does all the things you'd like it to as long as you stay within such a class. Obviously as soon as you ask it for sqrt(2)+pi all that would go out the window.)
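The object-identity optimization described above can be sketched in a few lines (a hypothetical wrapper of my own devising, not any existing package's API):

```python
import math

class Real:
    """Hypothetical wrapper: approx(n) yields a value within ~2**-n
    of the number. Names here are illustrative only."""
    def __init__(self, approx):
        self.approx = approx

    def __truediv__(self, other):
        # The optimization in question: if numerator and denominator are
        # the exact same object, the quotient is exactly 1 (for a nonzero
        # number), with no precision race required.
        if self is other:
            return Real(lambda n: 1.0)
        # Otherwise fall back to naive approximate division.
        return Real(lambda n: self.approx(n + 2) / other.approx(n + 2))

pi = Real(lambda n: math.pi)  # stand-in; a real package would refine with n
print((pi / pi).approx(1000))  # answers immediately via the identity check
```

Note this only catches the same *object*, not the same *value*: `Real(f) / Real(f)` with two distinct wrappers would still take the slow path, which is exactly the equality-checking problem discussed above.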


Also note that dividing something by itself is not necessarily 1.


I dunno, I think 0/0 is 1, but I cannot be sure.

https://www.wolframalpha.com/input/?i=lim+x%2Fx+as+x-%3E0


You can say 0/0 is 1, but it's also 2 and 3 and every other number, since x*0=0 for any x. That's why it's undefined.
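The "every other number" point can be seen numerically: two expressions that both take the 0/0 form at x = 0 can approach different values (a small illustrative snippet of my own):

```python
# Both quotients are "0/0" at x = 0, yet they approach different
# values -- which is exactly why 0/0 is left undefined.
x = 1e-12
print(x / x)        # -> 1.0
print((2 * x) / x)  # -> 2.0
```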


Could we define it as a one-to-many function that maps from 0/0 to the set of all complex numbers?

If 0/0 = Q, then

0 = 0 × Q. So, we get to define this operation. If we treat 0 as an integer, then we could treat Q (a set) like a 1 x inf. matrix and we have scalar multiplication.

But

0 × Q = [0]' × Q

So 0 can be an infinite set of 0. This is cyclical, I know. This cyclical nature leads to an interesting effect if we also define integer division. Try this around 0 in Q.

We could also define the × as an inner (dot) product, or as a cross product.

This is fun!


Is there a number system larger than the complex numbers? If so, you can't map to any concrete thing, as it could really be anything.

And I guess it is easy to define one more dimension of number.


That's not the impression I got from that thread. They seem to agree that this is bad for benchmarking, but remain undecided on whether that's good or bad for real-world processing.

It depends on the workload, so, as always, benchmark suites are to be taken with a grain of salt. More specific benchmarks, such as compiling a standard set of real software packages, can give a clearer picture of performance for those use cases.

Until we see more specific data on how these chips perform for certain tasks, this is just FUD.


Yes, that's why I qualified my "real-world tasks" with "random". What is clear is that:

* Ryzen has a longer branch prediction history than Intel's processors.

* This will give it an advantage on repetitive executions.

* It's a challenge to measure tasks robustly, since using repeated executions to gain confidence intervals can interfere with the measurement itself.

What's not clear is to what extent real-world tasks are repetitive enough to benefit or random enough to be negatively impacted. It's likely a mix of both.

By no means am I attempting to spread FUD — I find it quite interesting and wanted to spark a bit of discussion on it.
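On the measurement-interference point above, here's a generic warm-up sketch (not specific to any CPU or to branch prediction in particular; the same pitfall shows up with caches and other stateful hardware):

```python
import timeit

# Repeating a task to build confidence intervals also "trains" the
# machine (caches, and in native code the branch predictor), so later
# runs can flatter hardware that rewards repetition.
task = "sorted(range(10_000, 0, -1))"
times = timeit.repeat(task, number=1, repeat=50)

print("first (cold-ish) run:", times[0])
print("best run            :", min(times))
print("mean of all runs    :", sum(times) / len(times))
```

Reporting only the best or mean of many warm runs measures the repetitive regime; whether that matches real-world usage is exactly the open question.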


Pardon, I didn't mean to imply you were intentionally doing that. I'm just trying to make sure there's skepticism of the benchmarks, as well as skepticism of the idea that the boost from branch prediction is dishonest.


> More specific benchmarks, such as compiling a standard set of real software packages, can give a clearer picture of performance for those more specific use cases.

Is there a good place to go for this? I've tried to find software development focused benchmarks before, but I've come up mostly empty.


There are many different types of benchmarks, with many different CPUs/GPUs compared, here: https://openbenchmarking.org/tests/pts

For a more specific example, Linux kernel compilation benchmarks: https://openbenchmarking.org/showdown/pts/build-linux-kernel


Phoronix is a good place to go for compilation benchmarks - https://github.com/phoronix-test-suite/phoronix-test-suite


The link I posted in a sibling comment is a more direct way to get to the results of that suite.

