I appreciate the scientist’s honesty. When asked about big G and time invariance, he says he just takes it on faith that it has been the same forever. If more people admitted their leaps, I think the theistic schism would be far shallower.
> he says he just takes it on faith that it has been the same forever
It’s not faith, it’s a working assumption. There are in fact physicists working on figuring out what would change if some fundamental constants changed with time. Our best measurements and understanding right now are that they do not. If we show tomorrow that that’s wrong, then we’ll build better theories and move on. There is absolutely no faith involved.
The theistic schism? I had to look it up, and was none the wiser. Nobody can ever know an ultimate why, for obvious and well-established philosophical reasons. At least the scientists are trying to squeeze the knowledge gap down as small as possible instead of making up stories.
> Nobody can ever know an ultimate why, for obvious and well established philosophical reasons
Yes we can; you are just presupposing that philosophy is ultimately ineffective. For example, Hegel gave a presuppositionless development of all of metaphysics, among other things. It is not some kind of philosophical consensus that ultimate justification is impossible.
Fractal Patterns in Reasoning – David Atkinson and Jeanne Peijnenburg
Abstract This paper is the third and final one in a sequence of three. All three papers emphasize that a proposition can be justified by an infinite regress, on condition that epistemic justification is interpreted probabilistically. The first two papers showed this for one-dimensional chains and for one-dimensional loops of propositions, each proposition being justified probabilistically by its precursor. In the present paper we consider the more complicated case of two-dimensional nets, where each ‘child’ proposition is probabilistically justified by two ‘parent’ propositions. Surprisingly, it turns out that probabilistic justification in two dimensions takes on the form of Mandelbrot’s iteration. Like so many patterns in nature, probabilistic reasoning might in the end be fractal in character.
Only by embracing solipsism, in which case why are you here debating things, since your entire truth is derivable from your existence alone with no other observations or interactions required?
Interesting word soup. Ultimately, no, you cannot build a valid representation of the universe from nothing; you need observation and validation. You can presuppose whatever you want when you are talking about unprovable models, but it says more about you than about the universe. Until we have a reason to think that there is a "why", discussing what it is is completely unnecessary and futile, because 1) it does not change anything about our understanding or the predictions we can make, and 2) it is not something we can observe, measure or prove.
> If more people would admit their leaps I think the theistic schism would be far more shallow.
There’s an important gap here between science as practiced and science communication.
Working scientists will absolutely admit their ignorance, shaky foundations, etc. This is especially important in astronomy and cosmology, as the field is relatively young and experiments are impossible, outside of those that nature has already done for us. (Both evolutionary biology and linguistics have similar problems but cosmology has it especially hard.)
This, however, is a losing strategy for communication. Most people equate confidence with credibility (and by high school we’ve beaten children down enough that they do so as well), so if you do not sound confident people will not listen to you. (I could pontificate on how this is one of the greatest societal ills of our time, science or no science, but I won’t.) Even outside social situations, most people frankly cannot deal with holding a position and simultaneously not being confident in it, and absolutely cannot deal with holding an entire network of mutually-supporting positions and different degrees of confidence in each, while also having multiple alternatives with different degrees of plausibility for some of them. (This is somewhat more advanced than the programmer’s skill of relying on a deep stack of supporting services and debugging tools while keeping in mind that any given subset of them could be lying, which I’m sure you’re aware is also fairly difficult to communicate the experience of.)
Then there’s the active (if not always successful) effort towards never ever reasoning backwards from things you would prefer to be true or that would make the world nicer for you. (The “History Plots” section[1] of the biennial Review of Particle Physics is there solely as an admonishment never to go with the herd. And that’s for things that have no implications for anybody’s worldview, morality, or livelihood!) It is very uncomfortable to genuinely not know where you are going and also not be able to aim anywhere in particular. (It might among other things imply that the entirety of your life’s work only serves to seal off a dead end and you might not even live long enough to learn that. And either way you’re consigning yourself to a very lonely sort of life if you veer away from the mainstream.)
On the flip side from the vagueness, there’s the experience of doing everything you can to break something and failing, of your forefathers doing the same at their most imaginative and still failing. (The aforementioned RPP has pages and pages of tests for frickin’ energy conservation, without which most of physics and engineering just falls apart. And cosmologists can only dream of doing the same on the scales that are relevant to them, and indeed they do keep things like “modified Newtonian dynamics” around. Note that time invariance [as much as there is such a thing in general relativity] is energy conservation [ditto].) It is a sort of confidence that few others have justifiably had in their lives. (Few other things will infuriate a physicist more than offhand quoting a number with six significant digits. They know—in some cases from direct experience—that this sort of precision takes generations. And a well-established theory needs multiple times the effort.)
So when, say, a cosmologist says that cosmic inflation is a bit of a speculative crapshoot but probably true, the Big Bang is likely true, general relativity they’re fairly sure is true but it sure would be nice to find some cracks, the Standard Model is true despite everybody doing their level best to break it because the foundational issues are quite serious, the mass of a free electron is nearly certain, and the inability to surpass the speed of light is pretty much absolute—this is a dynamic range of confidence that none of us can adequately feel. Now take one of those statements in isolation and try to make your listener understand what the apparent equivocation in it really means.
(I do not believe the typical theist in a debate is on more than an advanced amateur level in all of this.)
Then you get into the cursed philosophical issues, like the (weak) anthropic principle (a class of “why” questions don’t and can’t actually have much of a meaningful answer) or nonexceptionalism in cosmology (it is possible that everything we can or will ever be able to see around us is in fact wildly atypical as a great cosmic joke, but if so we couldn’t ever know enough to join in and any science we do would be completely meaningless, so we might as well proceed on the assumption that it is not, and happily enough it’s been working out thus far.)
Thank you for the nice read. I empathize with many of your points; we are standing on the shoulders of giants. I push back on the claim about "our greatest societal ills", though. I think there is a difference between confident communication and being listened to. I have many times said "I don't know" with confidence, and I have made decisions on low-confidence bets but leant into them with all my heart. Sometimes it paid off and sometimes it did not. It has served me well in my career and in life. As a scientist at heart I still agree that too often confidence is given too much weight and the quiet voice in the room should also be heard. However, we should teach everybody to communicate confidently even if they sometimes communicate wrongly. Of course we should not confuse confidence with credibility, and we should accept that we know little for sure and are all just trying our best with the very limited understanding we have of our universe.
It might also be nice if cosmologists stopped denying that their Big Bang "Theory" is more accurately termed a mere Hypothesis. IIRC, 12 out of 13 predictions failing and necessitating "model" "tweaks" is not a fantastic track record for a Theory, which is supposed to robustly survive investigation.
> We can literally observe cosmic microwave background and it fits our prediction that the universe was denser and hotter.
I can't observe that, because I don't have the gear. (Nor the time, budget, inclination nor training, for that matter. :-) But I am happy to admit the possibility that some of those observations, as reported in the literature, are correct.
However, unlike a depressingly large percentage of my former scientific colleagues, I also appreciate just how much of what gets reported in the literature, from the conclusions all the way back to the raw data, is anything from sloppily wrong to flat out lies. Witness the decades-long fiasco in genetics that is only this past month being corrected:
TLDR: The original work by the CSAC reported only a fraction of the actually relevant data and hid the remainder where nobody was going to look. This was not the kind of Reproducibility Crisis mess, where an undergrad isn't paying attention when he grabs the electrophoresis gel off the shelf and then writes down the wrong brand name in his lab book. This was fraud. They intentionally misrepresented the data and hence the conclusions by an order of magnitude, which allowed them to delude the whole world for decades that "humans are 98.8% the same as chimps!"
Many people had their entire worldview swayed by this pronouncement, myself included. I don't like being lied to.
So yeah, you'll have to forgive me if I'm a bit skeptical when it comes to scientific observations and reportage that I'm a few $million shy of confirming myself. And I'll continue to think poorly of those who have been making lucrative careers out of doing "well-established" physics that "everybody" "accepts", only to have to quietly admit under scrutiny that their predictions didn't work out quite as nicely as the popular press has told us.
Furthermore,
> It is a scientific theory.
It is a scientific hypothesis. It has not been subjected to repeated experimental trials or observations and found to be correct.
A hypothesis does not become "well-established" simply because every college professor whose salary depends upon supporting the grant authority's narrative repeats it.
I am fully aware that some people (present company excluded; I'm not placing any blame here) have watered down the definition of these terms. They are wrong. I do not consent to and will not be bullied into accepting changes to my language. Especially nothing as important as the language of science.
> You might be confusing the established big bang with the more speculative cosmic inflation model. They're very closely related.
Perhaps. I was never too terribly interested in things "smaller than an electron"[1] or larger than a whale.
Lerner's arguments[2], particularly on relative elemental abundances, are persuasive to me. That may be because during my formative years I was a bit preoccupied with H vs D, because deuterated compounds for the NMR were too expensive for me to just play around with as I liked, so I had to tinker with spectroscopy/spectrophotometry instead. In any case, he's right. You can't have a cosmological constant be one value to account for the D and another value to account for the He3.
As for the CMB, he addresses that as well, though once again I haven't done the work to confirm either side myself.
Lerner has a whole basket full of other arguments as well, but I'm not a fan of lazy people posting Youtube links to hour long videos and saying "watch this!", so I shan't be a hypocrite. I believe that pdf should give a good flavor of it. It's been a while since I read it and I only skimmed it now, but I believe it a good representative sample of his other work.
[1] PS: Yes, I know, I know. Stop being pedantic. This site is a hobby and I'm not about to cheat and get a chatbot to write me a 12 page essay every time I want to save a few words. I get to abuse quotation marks when I'm feeling lazy.
I think you have too high an expectation of the scientific community.
People work there, and it will have people's dramas and problems, like everywhere else: fraud, crime, jealousy, simple mistakes, etc.
Despite their imperfections, the reason people with power trust their consensus more is that it is a lot more useful than the output of other groups of people.
If you reject this statement, you can start by joining the Amish, since virtually all modern technology is built on top of the scientific community's consensus and work.
> I think you have too high an expectation of the scientific community.
Indeed, I did. I joined expecting it to be above mere politics. I paid for my folly.
> People work there, and it will have people's dramas and problems, like everywhere else: fraud, crime, jealousy, simple mistakes, etc.
Yes, but it's far worse in the scientific communities. In the "real world" the average person is way better at doing their average job than the average scientist publishing their work.
Imagine even a civilizationally incompetent modern society today (without naming any names). Now imagine what it would be like if, >50% of the time you got into a taxi, something far worse than the expected result occurred: you got taken to the wrong destination, or cheated by the driver, or woke up in a bathtub missing your kidneys, etc. Extend that behavior to even a tiny fraction of the whole. That society would have collapsed already.
Compare that to any journal you please and let's see what percentage of their published works can be verified. Even for the "better" fields their rates are shockingly bad. Some are below 50%.
I don't know about you, but my standards for the behavior of scientists are considerably higher than that for taxi drivers.
> Despite their imperfections, the reason people with power trust their consensus more is because they are a lot more useful than other groups of people.
Hard disagree. The reason people with power "trust" scientific consensus is because they manufacture that consensus by controlling the funding. This is fact. It's not pleasant, and is far from uncontroversial. But it is what it is.
The people with power today who are telling you that <topic X> is "settled science" are the spiritual (and in some cases genetic!) descendants of the people who were telling Columbus that he was going to sail off the edge of the Earth and locking up Galileo. Eppur si muove!
If you want to naively believe that calling oneself a big-S Scientist makes one, if not immune, then perhaps we could say resistant, to that fraud, crime, corruption, etc., then I suppose you're entitled to that opinion. I look at the data on their output and my vote is to trash the lot of them and start over. They've fallen that far from grace.
I'm inclined to trust the ancient Greeks, sure. Modern scientists, not so much.
> If you reject this statement, you can start by joining the Amish, since virtually all modern technology is built on top of the scientific community's consensus and work.
Unfortunately the Amish are automatically suspicious of the academically tainted such as myself. Otherwise I'd love to chill with them. Their lives are blissfully stress-free compared to ours.
Your reference here is a 33-year-old paper whose quoted observations and theoretical claims are totally out of date. The measured light-element abundances are now consistent (and have been for decades).
The black-body distribution of the CMB is the (confirmed, of course) prediction of the Big Bang. The structure, age, etc. all depend on the cosmological model, and the claim that no such model can explain observations is ridiculous, given the counterexample of the ΛCDM model, the cornerstone of the field for decades now, which explains them all.
It's almost impressive how obstinately you've convinced yourself of something so blatantly wrong and out of date, using only a reference predating the entire modern era of cosmology that you even admitted to not having read "for a while." A far, far cry from engaging seriously in a topic.
Like with the frontier LLMs, seeing commentary on this site on topics that I'm an expert in makes me seriously doubt whether I should lend any credence at all to what's said about those that I'm not.
> They intentionally misrepresented the data and hence the conclusions by an order of magnitude, which allowed them to delude the whole world for decades that "humans are 98.8% the same as chimps!"
Your second link doesn't work, but more broadly you and I both know that there are lots and lots of different ways to measure sequence similarity.
The model failing is a question of how accurately you want it to model the world.
Many laypersons have absolutely no conception of how accurate those "failing" models were.
A good example is Newtonian physics. Strictly speaking it is a failing model: under certain conditions, if you look very closely, it falls apart. Yet every bridge you ever walked on and the most precise mechanical watches ever made were all calculated using only Newtonian physics. It is still accurate enough for most tasks on Earth.
A model can still be useful despite its limitations, you just need to know them. People who go "Ha! It is not accurate!" often have their own mental models of the world which are orders of magnitude worse, miss key bits or get other parts completely wrong (despite clear evidence to the contrary). As if a morbidly obese person for whom even walking presents a challenge made fun of an Olympic silver medalist for only getting second place. "Ha! You didn't get it 100% right, so now my fringe theory that fails to even explain the most basic observations must be seen as equally valid!"
So if you say it fails, consider how many digits after the decimal point it was accurate before it failed, and how many digits your own theory would manage.
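To put rough numbers on that, here is a minimal Python sketch (the speeds are illustrative); gamma - 1 is the fractional correction that special relativity adds on top of Newton:

```python
import math

c = 299_792_458.0  # speed of light, m/s

# gamma - 1 is the fractional correction special relativity adds on top of Newton
for label, v in [("car, 30 m/s", 30.0),
                 ("airliner, 250 m/s", 250.0),
                 ("low Earth orbit, 7800 m/s", 7800.0)]:
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    print(f"{label:26s} correction ~ {gamma - 1:.1e}")
```

Even at orbital speed, Newton is good to roughly nine decimal places, which is why it still carries bridges and watches.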
This is what has always made it hard for me to go beyond Newtonian physics. The only thing I know and use daily that relies on relativity is GPS, and having looked into the equations for how it accounts for this, it seemed to me that I could not rule out that the equations merely account for some arbitrary consistent (or random) error, not relativity specifically. No experiment I have run has ever needed precision beyond Newtonian physics, but I am not at the end of my career yet, so maybe relativity will become relevant some day. I will be looking forward to it if that is the case...
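For what it's worth, the two relativistic clock corrections GPS must apply can be estimated from textbook formulas; a quick sketch (the constants are nominal published values):

```python
import math

c = 299_792_458.0    # speed of light, m/s
GM = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_earth = 6.371e6    # mean Earth radius, m
r_sat = 2.656e7      # GPS orbital radius (~20,200 km altitude), m

v_sat = math.sqrt(GM / r_sat)  # orbital speed, roughly 3.9 km/s

sr = -0.5 * (v_sat / c) ** 2                # special relativity: orbiting clock runs slow
gr = GM / c**2 * (1 / R_earth - 1 / r_sat)  # general relativity: it runs fast higher up

for name, frac in [("special", sr), ("general", gr), ("net", sr + gr)]:
    print(f"{name:8s} {frac * 86400 * 1e6:+6.1f} microseconds/day")
```

The net is about +38 microseconds/day, which is hard to write off as an arbitrary fudge: ignored, it would translate into a position error growing by roughly 10 km per day.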
You could well live your whole life without needing anything more than Newtonian Physics. For most of us, relativity is a fun thought experiment. If you want to grapple with it, special relativity is the answer to "how can the speed of light be constant regardless of the speed of whoever is measuring it?" In his "vulgarisation" books, Einstein explains it with nothing more sophisticated than trains and stopwatches.
General relativity is more complex and quickly gets into the complicated mathematical weeds, but it is just as profound from a philosophical point of view: things do not merely affect other things around them, but instead change space-time itself. You can see, with a couple of clicks, observations of phenomena predicted by it, like black holes and gravitational lenses. It’s interesting to think about even if you are not directly affected.
Once upon a time most households had a small particle accelerator, used daily. While the electrons in the cathode ray tube (CRT) traveled at relativistic speeds (something like 0.1-0.3 c, from what I can tell), people did not need to know about special relativity to change the channel on their TV.
That said, those effects would have been small, and likely handled in practice as "some arbitrary consistent (or random) error."
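For scale, a hedged sketch in Python (the ~25 kV anode voltage is an assumed typical figure for a colour CRT, not a measured one):

```python
import math

c = 299_792_458.0   # speed of light, m/s
me_c2 = 510_998.95  # electron rest energy, eV
V = 25_000.0        # assumed anode voltage, volts (kinetic energy in eV)

# Classical: KE = (1/2) m v^2  ->  v = c * sqrt(2 KE / (m c^2))
v_classical = c * math.sqrt(2 * V / me_c2)

# Relativistic: KE = (gamma - 1) m c^2  ->  solve for v
gamma = 1 + V / me_c2
v_rel = c * math.sqrt(1 - 1 / gamma**2)

print(f"classical:    {v_classical / c:.3f} c")  # about 0.31 c
print(f"relativistic: {v_rel / c:.3f} c")        # about 0.30 c
```

The classical formula overshoots by a few percent at this voltage, an error small enough to fold into the set's calibration, consistent with the 0.1-0.3 c range above.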
Also, models are most interesting on the boundaries where they fail.
Take for example Lord Kelvin's model of thermal conduction in a solid Earth. He used it to incorrectly predict the age of the Earth, but if he had taken that failure to heart he could have used it to predict mantle convection and plate tectonics.
As I said in my effortpost above, the pdf I linked is a sample. There's more in his book, which I can't post here. And the videos he has put out are long, slow and some might find tedious, so I didn't bother to link them. (I didn't see your response while I was writing. Now I feel bad so I'm going to have to take a look and see what I can find that will post well.)
A simple search for "big bang predictions" will find plenty of even mainstream press discussing them, albeit in a positive light, usually along the lines of "oh look, some scientists are talking about how they discovered something really interesting!" What they really ought to be saying is "some scientists discovered that their hypotheses were wrong, their models failed to predict observable reality, and they were forced to make corrections that they shouldn't have to make if their hypotheses were actually a correct theory."
As in so many different kinds of scientific endeavors, if your "theory" is based on a "model" and you have to keep correcting your "constants", they aren't constants, they are variables. And you don't have a theory.
> The Big Bang is a physical theory that describes how the universe expanded from an initial state of high density and temperature. ... A wide range of empirical evidence strongly favors the Big Bang event, which is now widely accepted. ...
> The Big Bang models offer a comprehensive explanation for a broad range of observed phenomena, including the abundances of the light elements, the cosmic microwave background, large-scale structure, and Hubble's law.
> Precise modern models of the Big Bang appeal to various exotic physical phenomena that have not been observed in terrestrial laboratory experiments or incorporated into the Standard Model of particle physics. Of these features, dark matter is currently the subject of most active laboratory investigations. ... Viable, quantitative explanations for such phenomena are still being sought. These are unsolved problems in physics.
In reality it's just that the output of the procedural generation routines doesn't quite match that of the primary simulation loop. A classic worldbuilding inconsistency.
Even if your observation were 100% correct, it's also 100% irrelevant to the point.
We still refer to Maxwell's theory of electromagnetism as a theory even though we know quantum electrodynamics is a more precise match to the primary simulation loop.
I suppose a few might have decided to rename it Maxwell's hypothesis of electromagnetism, but I would consider them crackpots or dilettantes with little understanding of the meaning underlying those terms.
Classical logic has plenty of limitations/roadblocks, all logics do. Logic isn't a unified domain, but an infinite beach of structural shards, each providing a unique lens of study.
Classical logic was rejected in computer science because the non-constructive nature made it inappropriate for an ostensibly constructive domain. Theoretical mathematics has plenty of uses to prove existences and then do nothing with the relevant object. A computer, generally, is more interested in performing operations over objects, which requires more than proving the object exists.
Additionally, while you can implement evaluation of classical logic with a machine, it's extremely unwieldy and inefficient, and allows for a level of non-rigor that proves to be a massive footgun.
Classical logic isn’t rejected in computer science. Computer science papers don’t generally care if their proofs are non-constructive, just like in mathematics.
This entire thread is making clear that constructivists want to speak on behalf of everyone, while in the real world the vast majority of mathematicians and logicians don’t belong to their niche school of mathematics/philosophy.
Intuitionism is just disallowing the law of the excluded middle (that propositions are either true or they are not true). Disallowing non-constructive proofs is a related system to intuitionism called “constructivism”. There are rigorous formulations of mathematics that are constructive, intuitionist or even strict finitist.
But proving the object exists is still useful, of course: it effectively means you can assume an oracle that constructs this object without hitting any contradiction. Talking about oracles is useful in turn since it's a very general way of talking about side-conditions that might make something easier to construct.
Of course. Though it's also important to note: whether or not an object exists depends on the logic being used. And even if the object has some structural equivalent in the logic under consideration, not all of its provable structure need be shared between the two; that's before we get into how the axioms chosen on top of the logical system mutate all of this further.
It's not that classical logic is useless, it's just that it's not particularly appropriate to choose as the basis for a system built on algorithms. This goes both ways. Set theory was taken as the foundation of arithmetic, et al. because type theory was simply too unwieldy for human beings scrawling algebras on blackboards.
I am absolutely not even close to being an expert on the topic, but type theory wasn't all that well understood even relatively recently - Voevodsky coined the Univalence axiom in 2009 or so, while sets have been used for centuries.
So I'm not sure it would be "unwieldy"; it's a remarkably simple addition and it may avoid some of the pain points with sets? But again, I'm not even a mathematician.
Set theory was chosen because it was a comparatively simple proof of concept. You don't really refer to the foundation when scrawling algebra on a blackboard the way you would with a proof assistant, and this actually causes all sorts of issues down the line (it's a key motivation for things like HoTT).
> But proving the object exists is still useful, of course: it effectively means you can assume an oracle that constructs this object without hitting any contradiction.
I don’t think that logic holds in mainstream mathematics (it will hold in constructive mathematics by definition, and may hold in slightly more powerful philosophies of mathematics), because there we can prove the existence of many functions and numbers that aren’t computable.
Intuitionistic logic is a refinement of classical logic, not a limitation: for every proposition you can prove in classical logic there is at least one equivalent proposition in intuitionistic logic. But when your use of LEM is tracked by the logic (in intuitionistic logic a proof by LEM can only prove ¬¬A, not A, which are not equivalent) it's a constant temptation to try to produce a constructive proof that lets you erase the sin marker.
In compsci that's actually sometimes relevant, because the programs you can extract from a ¬¬A are not the same programs you can extract from an A.
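A minimal Lean 4 sketch of that tracking (core Lean only, no extra libraries; the theorem names are illustrative):

```lean
-- A → ¬¬A is fully constructive: the proof is just a function.
theorem dni (A : Prop) (h : A) : ¬¬A :=
  fun hn => hn h

-- Erasing the double negation needs classical reasoning.
theorem dne (A : Prop) (h : ¬¬A) : A :=
  Classical.byContradiction h

#print axioms dni  -- 'dni' does not depend on any axioms
#print axioms dne  -- 'dne' depends on axioms: [Classical.choice, propext, Quot.sound]
```

The `#print axioms` audit is exactly the "sin marker": a proof that never strips the ¬¬ stays axiom-free and keeps its computational content.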
I think stuff like "synthetic topology", "synthetic differential geometry", "synthetic computability theory", "synthetic algebraic geometry" are the most promising applications at the moment.
It can also find commonalities between different abstract areas of maths, since there are a lot of exotic interpretations of intuitionistic logic, and doing mathematics within intuitionistic logic allows one to prove results which are true in all these interpretations simultaneously.
I'm not sure if intuitionism has a "killer app" yet, but you could say the same about every piece of theory ever, at least over its initial period of development. I think the broad lesson is that the rules of logic are a "coordinate system" for doing mathematics, and changing the rules of logic is like changing to a different coordinate system, which might make certain things easier. In some areas of maths, like modern Algebraic Geometry, the standard rules of logic might be why the area is borderline impenetrable.
These are more like computational-ish interpretations of sheaves, topological spaces, synthetic geometry etc. The link of intuitionistic logic to computation is close enough that these things don't really make it "non-computational". One can definitely argue though that many mathematicians are finding out that things like "expressing X in a topos" are effectively roundabout ways of discussing constructive logic and constructivity concerns.
You're walking down a corridor. After hours and hours you ask, "Is it possible to figure out how far it is to the nearest exit?" Your classical logic friend answers: "Yes: either there is no exit, in which case the answer is infinity; or there is an exit, in which case we just have to keep walking until we find it. QED"
This kind of wElL AcTUaLly argument is not allowed in constructive logic.
As far as I understand it, classical mathematics is non-constructive. This means there are quite a few proofs that show that some value exists, but not what that value is. And in mathematics, a proof often depends on the existence of some value (you can't do an operation on nothing).
The thing is, it can be quite useful to always know what a value is, and there are some cool things you can do when you know how to get a value (such as create an algorithm to get said value). I'm still learning this stuff myself, but intuitionistic logic gets you a lot of interesting properties.
> As far as I understand it, classical mathematics is non-constructive.
It's not as simple as that. Classical mathematics can talk about whether some property is computationally decidable (possibly with further tweaks, e.g. modulo some oracle, or with complexity constraints) or whether some object is computable (see above), express decision/construction procedures etc.; it's just incredibly clunky to do so, and it may be worthwhile to introduce foundations that make it natural to talk about these things.
Would it be fair to say then that classical mathematics does not require computability, so it requires a lot more bookkeeping, while intuitionistic logic requires constructivism, so it's the air you live and breathe in, which is much more natural?
Intuitionistic logic is not really constrained to talking about constructive things: you just stuff everything else in the negative fragment. Does that ultimately make sense? Maybe, maybe not. Perhaps that goes too far in obscuring the inherent duality of classical logic, which is still very useful.
We still care about computation and algorithms even when proving theorems in a classical setting!
For e.g., imagine I'm trying to prove the theorem "x divides 6 => x != 5". Of course, one way would be to develop some general lemma about non-divisibility, but a different hacky way might be to say "if x divides 6 then x ∈ {1, 2, 3, 6}, split into 4 cases, check that x != 5 holds in all cases". That first step requires an algorithm to go from a given number to its list of divisors, not just an existence proof that such a finite list exists.
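As a sketch of that hacky route (assuming Lean 4 with Mathlib; `Nat.le_of_dvd` is what bounds the divisors and makes the case split finite):

```lean
import Mathlib.Tactic

-- Sketch: prove x ∣ 6 → x ≠ 5 by enumerating the finitely many candidates.
example (x : ℕ) (h : x ∣ 6) : x ≠ 5 := by
  -- a divisor of a positive number is bounded by it, so the split is finite
  have hx : x ≤ 6 := Nat.le_of_dvd (by norm_num) h
  -- enumerate x = 0, 1, ..., 6 and discharge each case arithmetically
  interval_cases x <;> omega
```

The enumeration step is the algorithmic content in question: `interval_cases` has to actually produce the candidate values, not merely know that a finite list of them exists.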
It’s not intuitive, it’s intuitionist. I’m not saying that to nitpick; it’s just important to make the distinction in this case, because it really isn’t intuitive at all in the usual sense.
Why you would use it is it’s an alternative axiomatic framework so you get different results. The analogy is in geometry if you exclude the parallel postulate but use all of the other axioms from Euclid you get hyperbolic geometry. It’s a different geometry and is a worthy subject of study. One isn’t right and the other wrong, although people get very het up about intuitionism and other alternative axiomatic frameworks in mathematics like constructivism and finitism.
Thank you for the correction. I actually didn’t realise that, so I have learned something.
Specifically for people who are interested it seems you have to replace the parallel postulate with a postulate that says every point is a saddle point (which is like the centre point of a pringle if you know what that looks like).
I thought of a concrete example of why you might use intuitionist logic. Take for example the “Liar’s paradox”, which centres around a proposition such as
A: this statement (A) is false
In classical logic, statements are either true or false. So suppose A is true. If A is true, then it therefore must be false. But suppose A is false. Well if it is false then when it says it is false it is correct and therefore must be true.
Now there are various ways in classical logic [1] to resolve this paradox, but in general there is a category of things for which the law of the excluded middle seems unsatisfactory. Intuitionist logic would allow you to refrain from asserting that A is either true or false, and working in that framework would allow you to derive different results from what you would get in classical logic.
It’s important to realise that when you use a different axiomatic framework the results you derive may only be valid in the alternative axiomatic system though, and not in general. Lean (to bring this back to the topic of TFA) allows you to check what axioms you are using for a given proof by doing `#print axioms`. https://lean-lang.org/doc/reference/latest/ValidatingProofs/...
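To make the `#print axioms` point concrete, a minimal Lean 4 sketch (core Lean only; the theorem name is illustrative):

```lean
-- A proof that leans on the law of the excluded middle...
theorem lem_demo (A : Prop) : A ∨ ¬A := Classical.em A

-- ...and the audit that reveals which axioms it smuggled in:
#print axioms lem_demo
-- 'lem_demo' depends on axioms: [Classical.choice, propext, Quot.sound]
```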
[1] eg you can say that all statements include an implicit assertion of their own truth. So if I say “2 + 2 = 4” I am really saying “it is true that 2+2=4”. So the statement A resolves to “This statement is true and this statement is false”, which is therefore just false in classical logic and not any kind of paradox.
In constructive logic, a proof of "A or B" consists of a pair (T,P). If T equals 0, then P proves A. If T equals 1, then P proves B. This directly corresponds to tagged union data types in programming. A "Float or Int" consists of a pair (Tag, Union). If Tag equals 0, then Union stores a Float. If Tag equals 1, then Union stores an Int.
In classical logic, a proof of "A or not A" requires nothing, a proof out of thin air.
Obviously, we want to stick with useful data structures, so we use constructive logic for programming.
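A minimal Lean 4 sketch of that correspondence (names are illustrative):

```lean
-- A constructive proof of "A or B" is literally a tagged value:
-- Or.inl tags a proof of A, Or.inr tags a proof of B.
example (A B : Prop) (a : A) : A ∨ B := Or.inl a

-- The same shape at the data level is a tagged union.
abbrev FloatOrInt := Sum Float Int

def describe : FloatOrInt → String
  | .inl f => s!"a Float: {f}"   -- tag 0: the payload is a Float
  | .inr i => s!"an Int: {i}"    -- tag 1: the payload is an Int
```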
Well, to translate my words to your liking: "In my opinion, everyone already uses a sort of constructive logic for programming."
I challenge you on "most proofs of algorithm correctness use classical logic". That means double negation elimination, or excluded middle. I bet most proofs don't use those. Give examples.
Oh, if you mean that most algorithm correctness proofs are finitary and therefore don't need to explicitly rely on the excluded middle, that may well be the case, but they certainly don't try to avoid it either. Look at any algorithm paper with a proof of correctness and see how many of them explicitly limit themselves to constructive logic. My point isn't that most algorithm/program proofs need the excluded middle, it's that they don't benefit from not having it, either.
> My point isn't that most algorithm/program proofs need the excluded middle, it's that they don't benefit from not having it, either.
Because if they benefited from it (in surfacing computational content, which is the whole point of constructive proof) they'd be contained within the algorithm, not the proof.
> in surfacing computational content, which is the whole point of constructive proof
The point of a constructive proof is that the proof itself is in some way computational [1], not that the algorithm is. When I wrote formal proofs, I used either TLA+ or Isabelle/HOL, neither of which are constructive. It's easy to describe the notion of "constructive computation" in a non-constructive logic without any additional effort (that's because non-constructive logics are a superset of constructive logics; i.e. they strictly admit more theorems).
> When I wrote formal proofs, I used either TLA+ or Isabelle/HOL, neither of which are constructive.
True, but this requires using different formal systems for the algorithm and the proof. Isabelle/HOL being non-constructive means you can't fully express proof-carrying code in that single system, without tacking on something else for the "purely computational" added content.
That's not true. Non-constructive logics are extensions of constructive logics. You can express any algorithm in TLA+, and much more than algorithms.
You are right that when using non-constructive logics it's not guaranteed that the proof is executable as a program, but that's not a downside. Having the proof be a program in some sense is interesting, but it's not particularly useful.
How do you express computational content in non-constructive logic while both making it usable from proofs (e.g. if I have some algorithm that turns A's into B's, I want that to be directly referenceable in a proof - if A's have been posited, I must be able to turn them into B's) and keeping its character as specifically computational? Expressing algorithms in a totally separate way from proofs is arguably not much of a solution.
Not only is it easy, the ability to extend the computable into the non-computable is quite convenient. For example, computable numbers can be directly treated as a subset of the reals.
You create a subset or model of what's computable. Then, work in it. You might also prove refinements from high- to low-level forms.
Interestingly, we handle static analysis the same way by using language subsets. The larger chunk is unprovable. So, we just work with what's easy to analyze. Then, wrap it in types or contracts to use it properly.
And plenty of testing for when the specs are wrong.
Proofs of safety are proving a negative: they're all about what an algorithm won't do. So constructivism is irrelevant to those, because the algorithm has provided all the constructive content already! Proofs of liveness/termination are the interesting case.
You might also add designing an algorithm to begin with, or porting it from a less restrictive to a more restrictive model of computation, as kinds of proofs in CS that are closely aligned to what we'd call constructive.
The difference only becomes evident when proving liveness/termination (since if your algorithm terminates successfully it has to construct something, and it only has to be proven that it's not incorrect) and then it turns out that these proofs do use something quite aligned to constructive logic.
... and also to classical logic. Liveness proofs typically require finding a variant that converges to some terminal value, and that's just as easy to do in classical logic as in constructive logic.
I've been using formal methods for years now and have yet to see where constructive logic makes things easier (I'm not saying it necessarily makes things harder, either).
How have you used the Curry Howard correspondence to make proving the correctness of non-trivial algorithms easier (than, say, Isabelle/HOL or TLA+ proofs)?
I hardly use automated formal methods. Disappointing, I know. I use it for thinking through C and LabVIEW programs. It helps with recognizing patterns in data structures and reasoning through code.
For example, malloc returns either null or a pointer. That is an "or" type, but C can't represent that. I use an if statement to decide which (or-elimination), and then call exit() in case of a null. exit() returns an empty type, but C can't represent that properly (maybe a Noreturn function attribute). I wrap all of this in my own malloc_or_error function, and I conclude that it will only return a valid pointer.
Instead of automating a correctness proof in a different language, I run it in my own head. I can make mistakes, but it still helps me write better code.
Oh, so I have used formal methods for many years (and have written about them [1]), including proof assistants, and have never found that constructive logic in general and type theory in particular makes proofs of program correctness any easier. The Curry-Howard correspondence is a cute observation (and it is at the core of Agda), but it's not really practically useful as far as proving algorithm correctness is concerned.
This isn’t quite right. Classical logic doesn’t permit going from “it is impossible to disprove” to “true”. For example, the continuum hypothesis cannot be disproven in ZFC (which is formulated in classical logic (the axiom of choice implies the law of the excluded middle)), but that doesn’t let us conclude that the continuum hypothesis is true.
Rather, in classical logic, if you can show that a statement being false would imply a contradiction, you can conclude that the statement is true.
In intuitionistic logic, you would only conclude that the statement is not false.
And, I’m not sure identifying “true” with “provable” in intuitionistic logic is entirely right either?
In intuitionistic logic, you only have a proof if you have a constructive proof.
But, like, that doesn’t mean that if you don’t have a constructive proof, that the statement is therefore not true?
If a statement is independent of your axioms when using classical logic, it is also independent of your axioms when using intuitionistic logic, as intuitionistic logic has a subset of the allowed inference rules.
If a statement is independent, then there is no proof of it, and there is no proof of its negation. If a proposition being true was the same thing as there being a proof of it, then a proposition that is independent would be not true, and its negation would also be not true.
So, it would be both not true and not false, and these together yield a contradiction.
Intuitionistic logic only lets you conclude that a proposition is true if you have a constructive/intuitionistic proof of it. It doesn’t say that a proposition for which there is no proof, is therefore not true.
As a core example of this, in intuitionistic logic, one doesn’t have the LEM, but, one certainly doesn’t have that the LEM is false. In fact, one has that the LEM isn’t false.
Ah, so if you refuted ¬p, you could construct ¬¬p in intuitionistic logic, but only in classical logic could you reduce that to p? Since truth in classical logic means what you said here, where you didn't actually construct what p is, you can't reduce it in intuitionistic logic.
Every post/comment is selecting across 100,000+ people worldwide for the individuals most likely to complain about it.
There’s no other place on earth where I can invite 100,000 people to disagree with me. The exception is maybe public office. (Which the vast majority of people shy away from, for just this reason.)
Seriously. Tor is primarily funded by the US government. Maybe this bug, or others, are deliberately left in for the sake of allowing backdoors; people should not forget this.
You mean like converting packet timestamps into a (uniformly) sampled time series (e.g., bytes or packets per ms) and running a NumPy/SciPy FFT on that series?
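Concretely, that flow might look like the following minimal sketch (the timestamps are synthetic stand-ins):

```python
import numpy as np

# Synthetic stand-in: random background arrivals plus a 20 Hz periodic source.
rng = np.random.default_rng(0)
timestamps = np.sort(np.concatenate([
    rng.uniform(0.0, 10.0, 5000),      # background traffic over 10 s
    np.arange(0.0, 10.0, 1.0 / 20.0),  # a 20 Hz beacon
]))

bin_s = 1e-3  # 1 ms bins
edges = np.arange(0.0, timestamps[-1] + bin_s, bin_s)
counts, _ = np.histogram(timestamps, bins=edges)  # packets per ms

spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
freqs = np.fft.rfftfreq(len(counts), d=bin_s)     # Hz; expect a peak near 20
```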
Something like Lomb–Scargle would possibly be a better fit I suppose. But yes that sort of flow, I could do it as a one off with a Python script as you state, but my interest is more if anyone has sunk their teeth into network packet analysis in the frequency domain from the ground up and wrapped up all the learnings into a thoughtfully designed interface.
I was searching for a Wireshark type plugin to do this but I couldn’t find anything.
Alternatively, equally useful would be learning about anyone who has started to do something like this and then realized that it didn’t actually help them analyze anything.
Jake VanderPlas also has an article on Understanding the Lomb-Scargle Periodogram [1] which I can recommend if you want to get into the details (it also includes a treatment of Fourier pairs + convolution to explain the 'artifacts' in DFT). There's a module for it in scipy, so it should be rather straightforward to try your analysis using timestamps for x and an array of ones for y. That algorithm is essentially a least-squares fit with sinusoids at pre-selected frequencies.
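A sketch of that suggestion, assuming `scipy.signal.lombscargle` (note it takes angular frequencies; the 20 Hz beacon is synthetic):

```python
import numpy as np
from scipy.signal import lombscargle

# Event times only: random background plus a 20 Hz periodic source.
rng = np.random.default_rng(1)
t = np.sort(np.concatenate([
    rng.uniform(0.0, 10.0, 2000),
    np.arange(0.0, 10.0, 1.0 / 20.0),
]))
y = np.ones_like(t)  # "timestamps for x and an array of ones for y"

freqs_hz = np.linspace(0.5, 50.0, 2000)
power = lombscargle(t, y, freqs_hz * 2.0 * np.pi)  # takes angular frequencies
# A peak near 20 Hz flags the periodicity hiding in the arrival times.
```

With y constant, the periodogram is essentially probing the sampling pattern itself, so a peak near the beacon frequency is what flags the periodic traffic.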
I've tried to use Lomb-Scargle to reduce the number of sampling points in magnetic resonance experiments, but had another dimension to take into account (similar to doing the analysis for each network port separately in your case). I got some spikes on some of the 'ports' which I couldn't reason about or reproduce when I did the same with periodic sampling and FFT. But the individual periodograms looked reasonable, if I remember correctly. Maybe we have a more regular user of LS around, who can point out common pitfalls. Otherwise you could generate some data from known frequencies to see what kind of artifacts you get.
You could maybe also take a look at the auto-correlation of the packet timestamps to see whether you can extract timescales on which patterns arise.
Same could be said for SITHOAG. Yet, modern preachers have found far more success with other approaches.
Consider: if the tone of your writing will put off anyone who disagrees with you, what’s the value in “livening it up”? Again, it’s preaching to the choir.
My strategy with pretty much everything I can is to deeply research to find any Made In USA variant of whatever it is I’m trying to purchase, and buy whatever that is regardless of price. I’ve never had that fail me.
For backpacks, my Waterfield pack has held up fantastically across several years of regularly absolutely stuffing it with gear for my work travel.
So many people carry this dull, heavy thing in their pockets to fend off all attempts to revive the sense of wonder they buried deep in their childhood.
For me, just the very fact that there exist time, space, laws of physics, enormous complexity stemming from deceptive "simplicity", is absolutely awe-inspiring.