I never said that my computer would run forever. I simply stated that it would fall into an infinite loop. You could argue that cosmic rays could cause a bit flip to get the computer out of an infinite loop, but that is out of scope.
I think our disagreement is specious because it’s trying to import terms like “infinite loop”, “halt”, “program” from a formal theory into the material world.
I think I can only ask, how do you define infinite loop and forever?
Yes. A computer can be in an infinite loop, lose power, and then stop functioning.
>how do you define infinite loop and forever
Infinite loop: a system is in an infinite loop when it keeps returning to the same state, making it unable to progress. (Depending on the context, the definition might change so that making progress is okay and the system only returns to a partial match of an existing state; think of an echo server, for example.)
Forever: A time period that ranges from t=now to t=infinity.
In terms of physical computers I don't agree with "must", but I will say there is a very high chance that it will. In terms of an arbitrary physical system, something like gravity doesn't just halt.
>In many cases physical systems correspond to formal computational systems.
It is possible to model physical systems. It can also be useful to reason about a simplified model of something.
>So then because the system is deterministic, we know that if it returns to a previous state it must keep returning to this state forever?
Assuming it's deterministic then yes. Depending on how complicated your model is it may be hard to go back to a state you were in before.
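To make the "revisited state implies looping forever" point concrete, here is a minimal Rust sketch; the step function and the finite state space are made up for illustration:

```rust
use std::collections::HashSet;

// In a deterministic system, revisiting any earlier state proves the
// system will cycle through the same sequence of states forever.
fn provably_loops(start: u64, step: impl Fn(u64) -> u64, budget: usize) -> bool {
    let mut seen = HashSet::new();
    let mut state = start;
    for _ in 0..budget {
        if !seen.insert(state) {
            return true; // repeated state: a cycle, hence an infinite loop
        }
        state = step(state);
    }
    false // no repeat found within the budget; undecided
}

fn main() {
    // A made-up deterministic map on a finite state space (0..100):
    // by the pigeonhole principle it must eventually repeat a state.
    assert!(provably_loops(7, |x| (3 * x + 1) % 100, 1_000));
}
```

The "budget" is the honest part: for a system whose state space you can't bound, failing to find a repeat tells you nothing, which is exactly the halting problem.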
Computationally this is correct; however, the Tetris-complete definition also sets requirements for interfacing with the outside world. In particular, the timing and I/O aren't covered by Turing completeness. A Turing-complete system could require billions of years to simulate a game of Tetris and still be Turing complete.
I/O is covered by Turing completeness. Whether timing is depends significantly on how exactly you want to formalize things, but certainly any TC system has the capacity to keep track of time if the pieces are manipulated at a constant rate.
Not really. A plain Turing machine only has one input and one output over the lifetime of the execution.
> certainly any TC system has the capacity to keep track of time if the pieces are manipulated at a constant rate.
Only if the Turing-equivalent machine is iterating at a specific rate and can pause. Lots of these systems run through the entire machine in a single shot and can't manage timing.
It's not so much keeping track of time, but that the time matches the real world outside the machine. The whole Tetris-complete formalization is about how a given machine (which, as a previous comment points out, doesn't need to be Turing complete) relates to the non-theoretical, non-computational world. Turing completeness neither requires nor even suggests any of those things; it's an orthogonal concept.
Yeah, I was about to mention Doom. There’s a whole subreddit dedicated to running Doom on the weird and wonderful. Even had an eInk device running Doom, which looked painful but cool.
It mentions Pokémon Yellow, but any code run there isn't really "run in the game"; rather, it uses bugs to hack together alternate instructions for the Game Boy to execute. Good example: https://www.youtube.com/watch?v=Vjm8P8utT5g (skip to 1:20 for craziness)
I agree, that one is cheating. It's not the game that is Turing complete in this case; it's the Game Boy itself. The Magic and Minecraft ones, for example, are legit, because "execution" happens within the game's engine (and, more importantly, its rules).
It has allowed us to emulate (numeric) const generics in crates like typenum (https://docs.rs/typenum) long before they were implemented as a proper language feature. Even today, typenum is more powerful in some regards than the stable version of const generics.
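For contrast, here is a minimal sketch of what stable const generics buy you today; typenum predates this feature entirely and still covers cases (such as richer type-level arithmetic on stable) that this doesn't:

```rust
// Stable const generics: array lengths are part of the type and are
// checked at compile time.
fn dot<const N: usize>(a: [i32; N], b: [i32; N]) -> i32 {
    a.iter().zip(b.iter()).map(|(x, y)| x * y).sum()
}

fn main() {
    assert_eq!(dot([1, 2, 3], [4, 5, 6]), 32); // 1*4 + 2*5 + 3*6
    // dot([1, 2], [4, 5, 6]); // rejected at compile time: lengths differ
}
```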
Another fun Rust one I found was trying to make a macro to do arbitrary levels of `Option` flattening. There's an existing method `Option::flatten`, but it only flattens a single level (i.e. `Option<Option<T>>` to `Option<T>`), which makes sense given that you can't really specify the return type any differently. I had a hunch that it might be possible to do this by making a trait with a generic type parameter specifying that a given "flattening" is possible, e.g. `Option<Option<Option<T>>>` would implement `Flatten<Option<Option<T>>>` and `Flatten<Option<T>>`. This did turn out to be possible to specify with a couple of impl blocks; however, the compiler tried to exhaustively enumerate all the possible implementations of this trait at all levels of `Option` nesting, whereas I had implicitly assumed it would just verify the correctness of the trait bounds I used in my code. The compiler generated all the implementations of flatten for `Option<T>`, `Option<Option<T>>`, `Option<Option<Option<T>>>`... until it understandably hit a max recursion limit. Since the compiler obviously could not tell that there was no actual end to the implementations, it suggested that I try increasing the recursion limit with a crate-level attribute. Just for fun, I followed its suggestion and kept increasing the limit to no avail, after which I took pity on the compiler and stopped forcing it to continue toiling at its Sisyphean endeavor.
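For reference, the single-level behaviour that motivated the exercise — std's real `Option::flatten` — looks like this:

```rust
fn main() {
    let two: Option<Option<i32>> = Some(Some(3));
    assert_eq!(two.flatten(), Some(3)); // exactly one level removed

    // Each call removes exactly one level, so deeper nesting needs one
    // call per level -- hence the wish for a generic Flatten trait.
    let three: Option<Option<Option<i32>>> = Some(Some(Some(3)));
    assert_eq!(three.flatten().flatten(), Some(3));
}
```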
As you discovered, Rust's Generics aren't like a C++ template.
The C++ template really is just fill-in-the-blanks (but with a special behaviour named SFINAE: Substitution Failure Is Not An Error), so I believe this trick would have worked if you'd been doing C++, producing all the necessary layers of flattening for whatever input you had.
To me, if I find that manually writing two more layers of flatten isn't enough, I would start to think this is the wrong problem to be solving.
Yep! I had heard this before, but like with a lot of things, I had to really dig into it and test the boundaries to fully understand at a fundamental level.
> To me, if I find that manually writing two more layers of flatten isn't enough, I would start to think this is the wrong problem to be solving.
Definitely! This was more of a fun intellectual exercise than anything else. I absolutely love seeing what can be done at compile time, and a lot of stuff that isn't useful in practice can still be pretty entertaining.
Rust lets me write safe implementations of well-known traits like Ord (the trait that compares things which have an ordering), but it promises that the outcome when these traits are relied upon elsewhere is always safe, even if my implementations are very silly.
So I can sort() things for which I've implemented a nonsensical ordering (e.g. they all claim they're before each other), and Rust promises that's still safe. This futile task might well not finish, and if it does finish, the items won't seem any more or less "sorted" than they were before I tried, but it won't crash or do anything unfathomable.
And indeed that's exactly what happens, cementing my faith in the idea of Rust.
[Not all of Rust's traits are safe. If you have to say "unsafe" to write the implementation then it wasn't safe, and if you do something nonsensical in an unsafe trait then you get to keep both halves when it breaks; misfortunate is only interested in safe traits.]
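A minimal sketch of that "nonsensical ordering" experiment. One caveat: because the comparator deliberately violates the total-order contract, recent versions of std's sort are allowed to detect this and panic, which is still safe (a clean unwind, not undefined behaviour):

```rust
use std::cmp::Ordering;
use std::panic;

// A deliberately nonsensical Ord: every value claims to sort before
// every other value, violating the total-order contract on purpose.
struct Before(u32);

impl PartialEq for Before {
    fn eq(&self, _: &Self) -> bool { false }
}
impl Eq for Before {}
impl PartialOrd for Before {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> { Some(self.cmp(other)) }
}
impl Ord for Before {
    fn cmp(&self, _: &Self) -> Ordering { Ordering::Less }
}

// Returns true if sorting stayed "safe": either it finished with all
// three elements intact (in some arbitrary order), or it panicked
// cleanly. Neither outcome is undefined behaviour.
fn sort_is_safe() -> bool {
    panic::catch_unwind(|| {
        let mut v = vec![Before(3), Before(1), Before(2)];
        v.sort();
        v.len() == 3
    })
    .unwrap_or(true)
}

fn main() {
    assert!(sort_is_safe());
}
```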
That's really cool! Surprisingly, I _did_ actually end up doing something like `BlackHole` at one point in a project. I was trying to implement a connection type that got automatically returned to a connection pool when dropped, but because of the trickiness of doing async things inside a `Drop` implementation, combined with some external requirements that assumed a language without RAII or ownership semantics, I couldn't just pass the connection's ownership back to the pool. I ended up making a replica of the connection's internals with I/O implementations that basically did the same thing as `BlackHole`, swapping that out for the real connection internals, passing a newly constructed connection back to the pool, and letting the original connection (now with black-hole internals) get dropped instead. In this case, the black-hole implementations were never actually invoked for anything and were just there to satisfy the type system, but it turns out there are bizarre cases where that might actually be useful!
Sort of reminds me of implementing multiplication in the type system of TypeScript with Peano arithmetic - endless recursion limit errors of Succ<Succ<Succ<Succ<Zero>>>>...
>> Magic: The Gathering is a popular and famously complicated trading card game about magical combat. In this paper we show that optimal play in real-world Magic is at least as hard as the Halting Problem, solving a problem that has been open for a decade. To do this, we present a methodology for embedding an arbitrary Turing machine into a game of Magic such that the first player is guaranteed to win the game if and only if the Turing machine halts. Our result applies to how real Magic is played, can be achieved using standard-size tournament-legal decks, and does not rely on stochasticity or hidden information. Our result is also highly unusual in that all moves of both players are forced in the construction. This shows that even recognising who will win a game in which neither player has a non-trivial decision to make for the rest of the game is undecidable. We conclude with a discussion of the implications for a unified computational theory of games and remarks about the playability of such a board in a tournament setting.
Nice, I didn't know of this one. It's the continuation and completion of work by the first author of the linked paper, Alex Churchill, who had earlier shown a method to prove M:tG Turing-Complete but in a setup that was not a full game:
I don't think folks realize how low a bar Turing Complete is.
A DFA with two stacks or two counters is Turing Complete.
Making a formal machine Turing Complete is trivial.
Any textbook on formal languages and automata. Michael Sipser's is my favorite. The Church–Turing thesis is sophomore CS stuff.
Two stacks can simulate a Turing tape: e.g., moving left is simulated by popping off one stack and pushing onto the other. With two counters, one counter can encode the tape contents while the other is used to track the tape position.
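The two-stack construction is small enough to sketch directly (a minimal illustration; cell values and the blank symbol are my choices):

```rust
// A Turing tape simulated by two stacks. `left` holds everything to the
// left of the head (top = nearest cell); the top of `right` is the cell
// currently under the head.
struct Tape {
    left: Vec<u8>,
    right: Vec<u8>,
}

impl Tape {
    fn new(input: &[u8]) -> Self {
        // Reverse so the first input symbol ends up on top of `right`.
        Tape { left: Vec::new(), right: input.iter().rev().copied().collect() }
    }
    fn read(&self) -> u8 {
        *self.right.last().unwrap_or(&0) // 0 is the blank symbol
    }
    fn write(&mut self, v: u8) {
        match self.right.last_mut() {
            Some(cell) => *cell = v,
            None => self.right.push(v),
        }
    }
    fn move_left(&mut self) {
        let cell = self.left.pop().unwrap_or(0); // blank past the edge
        self.right.push(cell);
    }
    fn move_right(&mut self) {
        let cell = self.right.pop().unwrap_or(0);
        self.left.push(cell);
    }
}

fn main() {
    let mut t = Tape::new(&[1, 0, 1]);
    assert_eq!(t.read(), 1);
    t.move_right();
    assert_eq!(t.read(), 0);
    t.write(1);
    t.move_left();
    assert_eq!(t.read(), 1);
}
```

Add a finite state table on top of this and you have a full Turing machine, which is the sense in which the bar is low.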
"It inspired movfuscator, the single instruction C compiler. Building on this, there is a branchless, mov-only version of the classic DOOM video game.
This is thought to be entirely secure against the Meltdown and Spectre CPU vulnerabilities, which require speculative execution on branch instructions.
The mov-only DOOM renders approximately one frame every 7 hours, so playing this version requires somewhat increased patience."
This means we need about twenty doublings of computing speed to play it smoothly. Will we ever reach that? And if so, when?
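A quick sanity check of the "twenty doublings" estimate; the ~35 fps "smooth" target here is my assumption, not from the quote:

```rust
fn main() {
    let seconds_per_frame = 7.0_f64 * 3600.0; // one frame every 7 hours
    let target_fps = 35.0; // assumed "smooth" frame rate
    let speedup_needed = seconds_per_frame * target_fps; // ~882,000x
    let doublings = speedup_needed.log2();
    // log2(882_000) is just under 20, i.e. "about twenty doublings".
    assert!(doublings > 19.0 && doublings < 20.0);
}
```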
> Technically, a problem is called PSPACE-complete if it is equal in computational power to a particular mathematical model of computation (called “polynomial-space-bounded Turing machines”).
I continue to object to the CSS one, because it's just applying a couple of gates/arithmetic operations at a time, and you have to manually copy the data to the next iteration.
Turing completeness is already a low bar, because arithmetic+iteration is enough, and this doesn't clear the bar.
This reminds me of Deutsch's idea of the "jump to universality" in a chapter of The Beginning of Infinity. The summary:
> All knowledge growth is by incremental improvement, but in many fields there comes a point when one of the incremental improvements in a system of knowledge or technology causes a sudden increase in reach, making it a universal system in the relevant domain. In the past, innovators who brought about such a jump to universality had rarely been seeking it, but since the Enlightenment they have been, and universal explanations have been valued both for their own sake and for their usefulness. Because error-correction is essential in processes of potentially unlimited length, the jump to universality only ever happens in digital systems.
I believe Deutsch might argue that when error correction is implemented, a system becomes digital. Babbage's early computer, for example: "thus Babbage's computers assigned only ten different meanings to the whole continuum of angles at which a cogwheel might be oriented. Making the representation digital in that way allowed the cogs to carry out error-correction automatically"
Interesting take on it. I was thinking mainly of DNA with its explicitly error-correcting features that try to fix transcription mistakes (at least as I understand it? I don't do much of the squishy science) but then branched off into control loops. Sure, if you're talking encodings and entropy etc. then digital is kind of implied, but I don't think the quote I replied to requires such things:
> in many fields there comes a point when one of the incremental improvements in a system of knowledge or technology causes a sudden increase in reach, making it a universal system in the relevant domain
If you're learning to balance a stick on your finger, there's no digital algorithm executing. You're just trying to tune your feedback loop to move the base of the inverted pendulum so as to keep it upright. Yet there's a clear point where you move from being able to balance the stick for 1 second, to 2 seconds, to 5 seconds, and suddenly you can now do it indefinitely until you mess up or get bored.
That stick carries no data, so it's not "error correction" as it normally means in this kind of context. And if you tried to apply any data in the angle it would immediately be destroyed. Or if you distinguished between "stick up" and "stick not up" you're now applying error correction to a digital system with 2 states.
Maybe it was not included because John Conway suspected from the start that it would be Turing complete. Because he suspected this, he contacted Martin Gardner and asked the public to search for an ever-growing pattern, resulting in the discovery of the glider gun. He realized that finding something like a glider gun would be the first step in establishing Turing completeness. It took some time for it to be proven completely, but from the start Conway's intuition was that the Game of Life would probably be Turing complete.
>DonHopkins on Sept 2, 2015 | parent | context | favorite | on: The most obsolete infrastructure money could buy –...
>I remember running across a Turing machine emulator implemented in TECO in Minsky's home directory that I'd REALLY love to get ahold of.
>hga on Sept 2, 2015 [–]
>Ask and yea shall receive:
MSG: APL 1
DISTRIB: *BBOARD
EXPIRES: 03/17/81 23:08:54
MINSKY@MIT-MC 03/11/81 23:08:54 Re: too-short programs
APL is compact, I suppose. So is TECO. When I wrote the following
Universal Turing Machine, which works, I actually understood it.
[ I've interpolated the non-printing characters as displayed by (Gnu) EMACS,
escape is ^], ^^ is one character, as is \356: ]
i1Aul qq+^^0:iqm^[29iiq\356y0L1 00L1 11L2 A1L1
y0L1 0yR2 1AR2 AyR6 yyL3 00L0 1AL3 A1L4 yyL4 0yR5 11L7 A1L4
yyR5 0yL3 1AR5 A1R5 yyR6 0AL3 1AR6 A1R6 y0R7 0yR6 11R7 A0R2
^[j<sR^[;-d-2ciql-^^^[ci"ed^^^[cii^[ciuq'^[>
j<sL^[;-d-2ciql-^^^[ci"ed^^^[cii-2c^[ciuq'^[>jxblx1lx2lx3lx4lx5lx6lx7hk
iyyAyyAyy^[32<i0^[>ji110101110000010011011^[ 1uq<htmbqq=>
I do not advise attempting to understand this code, which is
almost as bad as that for the Universal Turing machine.
>Please ack receipt of this and/or send me email (in my HN info); for others, note this is ITS TECO, which I was told was by far the most powerful version of it (fortunately, by the time I showed up learning it was no longer really necessary).
>DonHopkins on Sept 3, 2015 | root | parent [–]
>OOP ACK! It was a shot in the dark, but I am SO GLAD I asked!!! Thank you Harold!
>A Turing tarpit (or Turing tar-pit) is any programming language or computer interface that allows for flexibility in function but is difficult to learn and use because it offers little or no support for common tasks. The phrase was coined in 1982 by Alan Perlis in the Epigrams on Programming:
>>54. Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy.
>In any Turing complete language, it is possible to write any computer program, so in a very rigorous sense nearly all programming languages are equally capable. Showing that theoretical ability is not the same as usefulness in practice, Turing tarpits are characterized by having a simple abstract machine that requires the user to deal with many details in the solution of a problem. At the extreme opposite are interfaces that can perform very complex tasks with little human intervention but become obsolete if requirements change slightly.
>Some esoteric programming languages, such as Brainfuck, are specifically referred to as "Turing tarpits" because they deliberately implement the minimum functionality necessary to be classified as Turing complete languages. Using such languages is a form of mathematical recreation: programmers can work out how to achieve basic programming constructs in an extremely difficult but mathematically Turing-equivalent language.
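To see just how sparse the tarpit is, here is a minimal Brainfuck interpreter sketch (the input command `,` is omitted for brevity); note that even printing a single letter takes a loop:

```rust
// Minimal Brainfuck interpreter: 8 commands, a tape of byte cells, and
// a data pointer. Unknown characters are ignored, as the language specifies.
fn run_bf(src: &str) -> String {
    let code: Vec<char> = src.chars().collect();
    let mut tape = vec![0u8; 30_000];
    let (mut ip, mut dp) = (0usize, 0usize);
    let mut out = String::new();
    while ip < code.len() {
        match code[ip] {
            '>' => dp += 1,
            '<' => dp -= 1,
            '+' => tape[dp] = tape[dp].wrapping_add(1),
            '-' => tape[dp] = tape[dp].wrapping_sub(1),
            '.' => out.push(tape[dp] as char),
            // On a zero cell, '[' jumps forward past its matching ']'.
            '[' if tape[dp] == 0 => {
                let mut depth = 1;
                while depth > 0 {
                    ip += 1;
                    match code[ip] { '[' => depth += 1, ']' => depth -= 1, _ => {} }
                }
            }
            // On a nonzero cell, ']' jumps back to its matching '['.
            ']' if tape[dp] != 0 => {
                let mut depth = 1;
                while depth > 0 {
                    ip -= 1;
                    match code[ip] { ']' => depth += 1, '[' => depth -= 1, _ => {} }
                }
            }
            _ => {}
        }
        ip += 1;
    }
    out
}

fn main() {
    // Count a cell up to 8, multiply by 8 via a loop, add 1 -> 65 ('A').
    assert_eq!(run_bf("++++++++[>++++++++<-]>+."), "A");
}
```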
>As a programming language: The obscurity of the TECO programming language is described in the following quote from "Real Programmers Don't Use Pascal", a letter from Ed Post to Datamation, July 1983:
>>It has been observed that a TECO command sequence more closely resembles transmission line noise than readable text. One of the more entertaining games to play with TECO is to type your name in as a command line and try to guess what it does. Just about any possible typing error while talking with TECO will probably destroy your program, or even worse - introduce subtle and mysterious bugs in a once working subroutine.
https://everything2.com/title/Tetris+complete
Tetris completeness specifies the minimum requirements for a functional implementation of Tetris; note that the compute element only needs to be a linear bounded automaton, not a full Turing machine. From the link above:
* a display with at least 20 rows and 10 columns, and enough read/write random-access memory to store the current contents of the display,
* a way of repainting one row or column of the display without affecting the others,
* a timer with a resolution of at least 100 milliseconds,
* a small set of keys whose state can be read without blocking execution, which for Tetris are left, right, clockwise, and anticlockwise,
* a linear bounded automaton programmed for the particular puzzle game.