phkamp's comments

The other CPU that was designed for Ada succeeded spectacularly:

https://datamuseum.dk/wiki/Rational/R1000s400


I do not know much about the architecture of the Rational R1000s400, but despite that I am pretty certain that the claims it was particularly well suited to implementing Ada were not true.

Ada can be implemented on any processor without particular difficulty. There are perceived difficulties, but those are not specific to Ada.

Ada is a language that demands correct behavior from the processor, e.g. the detection of various error conditions. The same demands should be made for any program written in any language, but the users of other computing environments have been brainwashed by vendors into not demanding correct behavior from their computers, so that the vendors could increase their profits by not adding the circuits needed to enforce correctness.

Thus Ada may be slower than it should be on processors that do not provide appropriate means for error detection, like RISC-V.

However, that does not have anything to do with the language. The same problems will affect C, if you demand that so-called undefined behavior must generate exceptions to signal when errors happen. If you implement Ada in YOLO mode, like C is normally implemented, Ada will be as fast as C on any processor. If you compile C with the sanitizer options enabled, it will have the same speed as normal Ada on the same CPU.

In the case of the Rational R1000s400, besides the fact that it must have had features that would be equally useful for implementing any programming language, it is said that it also had an Ada-specific instruction for implementing task rendez-vous.

This must indeed have been helpful for Ada implementers, but it really is not a big deal.

The text says: "the notoriously difficult to implement Ada Rendez-Vous mechanism executes in a single instruction". I do not agree with "notoriously difficult".

It is true that on a CPU without appropriate atomic instructions and memory barriers, any kind of inter-thread communication becomes exceedingly difficult to implement. But with the right instructions, implementing the Ada rendez-vous mechanism is simple. Even an Intel 8088 would have had no difficulty implementing this, while with the 80486 and later CPUs maximum efficiency can be reached in such implementations.

While in Ada the so-called rendez-vous is the primitive used for inter-thread communication, it is a rather high-level mechanism, so it can be implemented with a lower-level primitive: the sending of a one-way message from one thread to another. One rendez-vous between two threads is equivalent to two one-way messages (i.e. from the 1st thread to the 2nd, then in the reverse direction). So implementing the simpler mechanism of one-way inter-thread messages correctly allows a trivial implementation of rendez-vous.
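The reduction can be sketched in a few lines; this is a Python illustration of the concept using two thread-safe queues as one-way message channels, not Ada runtime code, and all the names are invented:

```python
import queue
import threading

# Two one-way channels: the request flows caller -> acceptor,
# the reply flows acceptor -> caller.  One rendez-vous == two messages.
request = queue.Queue()
reply = queue.Queue()
results = []

def caller():
    request.put(21)              # message 1: the "entry call", with its argument
    results.append(reply.get())  # message 2: block until the reply arrives

def acceptor():
    arg = request.get()          # "accept" the entry call
    reply.put(arg * 2)           # complete the rendez-vous by answering

threads = [threading.Thread(target=caller), threading.Thread(target=acceptor)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The blocking `get()` on the reply channel is what gives the rendez-vous its synchronous character: the caller cannot proceed until the acceptor has finished the accept body.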

The rendez-vous mechanism was put in the language specification, even though it would have been better placed in a standard library, because this was mandated by the STEELMAN requirements published in 1978-06, one year before the closing of the DoD language contest.

So this feature was one of the last added to the language, because the Department of Defense requested it only in the last revision of the requirements.

An equivalent mechanism was described by Hoare in the famous CSP paper. However CSP was published a couple of months after the STEELMAN requirements.

I wonder whether the STEELMAN authors arrived at this concept independently, or whether they had read a preprint of the Hoare paper.

It is also possible that both STEELMAN and Hoare were independently inspired by the Interprocess Calls of Multics (1967), which were equivalent to the rendez-vous of Ada. However, the very close coincidence in time between the CSP publication and the STEELMAN revision of the requirements makes it plausible that a preprint of the Hoare paper could have prompted this revision.


I'm not "refusing to add TLS support"; I insist that the certificate is safely isolated in a separate process for security reasons. There are many ways to skin that cat.


Aside: Loved your bit talking about money and varnish in Gift Community[1]. And thanks for the Beerware License, I've started using it!

[1]: https://www.youtube.com/watch?v=tOn-L3tGKw0


And before anybody speculates too much about Matthias' use of "jail-like":

I think this can make a lot of sense, because there are many situations, in particular in embedded systems, where you can and should confine at a much smaller scale than jails are really convenient for.

It will also be interesting to see if "Cells" can make inroads into the territory the original ACLs abandoned, because writing the rules was so complex that it amounted to parallel meta-anti-software development.

Hat tip to Matthias from here.


To me that text reads a lot like an affidavit supporting a Qui Tam suit ?


Here is what I submitted:

I am a well known FOSS developer.

At one point code I had written protected half the passwords on the entire Internet, and today around a quarter of all HTTP(S) traffic on the internet goes through software I have written ("Varnish").

That, and the fata morgana of retirement shimmering on my horizon, makes it my considered opinion that FOSS is the gift EU does not deserve, and runs a great risk of destroying on first contact.

However, closed source as we know it is not compatible with an open, free and fair society, so I am more than on board with the EU's long overdue recognition of FOSS as the way forward and out of the grubby, greedy claws of "Big Tech" and their endless enshittification of our lives.

The kind of FOSS software relevant to this discussion is usually rock steady and dependable in ways that much commercial closed software, precisely because of its secrecy, can never be or become.

But the human communities which produce the FOSS software are fragile, fractious, and as a general rule composed of people who may be great programmers, but who have absolutely no experience in, and no interest in, fostering and stewarding stable human communities.

This is literally why there are who knows how many, different "distributions" of the Linux operating system, "window managers", "web-site frameworks" and programming languages.

Therefore the absolutely most important thing for the EU to understand about FOSS is that it is probably as close to the "ideal market", in the sense of economic theories, as anything will ever come: it literally costs nothing to become a competitor.

But that also means that if the EU member countries were to pick, no matter how fairly and competently, a set of FOSS software to standardize on, and pour money into the people behind it to provide the necessary resources to support and sustain the IT systems of all the administrations in the EU countries, that software would instantly stop being FOSS - no matter what words the license might contain - because it would no longer be part of the market.

In other words: the EU cannot "switch to FOSS"; it would no longer be FOSS if the EU did.

At the most fundamental level, the EU has three options:

1. Pick and bless a set of winners, consisting of:

a) Operating system, portable to any reasonable computer architecture.
b) Text-processing, suitable for tasks up to a book.
c) Spreadsheet.
d) Email client.
e) Web browser.
f) Accounting software, suitable for small organizations.

and fund organizations to maintain, develop and support the software for the future as open source, turning that software into infrastructure like water and electricity: free for all - individuals, startups and established companies alike - to use and benefit from.

2. Continuously develop/pick, bless and meticulously enforce open standards of interoperability, and then "let the competition loose".

3. Both. By providing a free baseline and de-facto reference implementations for the open standards, "the market" will be free to innovate, improve and compete, but cannot (re)create walled gardens.

To everybody, me included, option two seems the ideologically "pure" choice, because we have all been brought up to believe that "governments should not pick winners".

But governments have always picked winners. Today all of the EU has 230 VAC electrical grids, because the EU picked that as a winner, thereby leveling the market to everybody's benefit.

Therefore I will argue, that the wise choice for EU is option three.

First, it will be incredibly cheap, as in just tens of millions of Euro per year, to provide all EU citizens with a free and trustworthy software platform to run on their computers.

Second, it can be done incredibly fast: from the moment the EU makes the decision, the first version can be released in a matter of months, if not weeks.

Third, it will guarantee interoperability of data.

Sincerely,

Poul-Henning Kamp


"It literally costs nothing to become a competitor"

but

"The new strategy will address the economic and political importance of open source, as a crucial contribution to a strategic framework for EU technological sovereignty, competitiveness and cybersecurity." (as per the call document)

This means OSS, but with an ecosystem that does NOT rely on anything non-EU for development, maintenance and distribution. That brings the price from "literally costs nothing" to hundreds of millions of Euro.


A huge factor in the iAPX432's utter lack of success was technological restrictions, like pin-count limits, laid down by Intel top brass, which forced stupid and silly limitations on the implementation.

That's not to say that the iAPX432 would have succeeded under better management, but only that you cannot point to some random part of the design and say "That obviously does not work".


The important bit here is "their failed approach": just because Intel made a mess of it doesn't mean that the entire concept is flawed.

(Intel is objectively the luckiest semiconductor company, in particular if one considers how utterly incompetent their own "green-field" designs have been.

Think for a moment how lucky a company has to be, for the major competitor they tried to kill with all means available, legal and illegal, to save them, after they bet the entire farm on Itanic ?)


It isn't 100% proof that the concept is flawed, but the fact that the world's most successful CPU manufacturer for decades couldn't make segmentation work in multiple attempts is pretty strong evidence that at least there are, er, "issues" that aren't immediately obvious.

I think it is safe to assume that they applied what they learned from their earlier failures to their later failures.

Again, we can never be 100% certain of counterfactuals, but certainly the assertion that linear address spaces were only there for backwards compatibility with small machines is simply historically inaccurate.

Also, Intel weren't the only ones. The first MMU for the Motorola MC68K was the MC68451, which was a segmented MMU. It was later replaced by the MC68851, a paged MMU. The MC68451, and segmentation with it, was rarely used and then discontinued. The MC68851 was comparatively widely used, and later integrated in simplified form into later CPUs like the MC68030 and its successors.

So there as well, segmentation was tried first and then later abandoned. Which again, isn't definitive proof that segmentation is flawed, but way more evidence than you give credit for in your article.

People and companies again and again start out with segmentation, can't make it work and then later abandon it for linear paged memory.

My interpretation is that segmentation is one of those things that sounds great in theory, but doesn't work nearly as well in practice. Just thinking about it in the abstract, making an object boundary also a physical hardware-enforced protection boundary sounds absolutely perfect to me! For example something like the LOOM object-based virtual memory system for Smalltalk (though that was more software).

But theory ≠ practice. Another example of something that sounded great in theory was SOAR: Smalltalk on a RISC. They tried implementing a good part of the expensive bits of Smalltalk in silicon in a custom RISC design. It worked, but the benefits turned out to be minimal. What actually helped were larger caches and higher memory bandwidth, so RISC.

Another example was the Rekursiv, which also had object-capability addressing and a lot of other OO features in hardware. Also didn't go anywhere.

Again: not everything that sounds good in theory also works out in practice.


All the examples you bring up are from an entirely different time in terms of hardware, a time when the major technological limitations included how many pins a chip could have, and two-layer PCBs.

Ideas can be good, but fail because they are premature, relative to the technological means we have to implement them. (Electrical vehicles will probably be the future text-book example of this.)

The interesting detail of the R1000's memory model is that it combines segmentation with pages, removing the need for segments to be contiguous in physical memory. That gets rid of the fragmentation problem, which was a huge issue for the architectures you mention.

But there will obviously always be a tension between how much info you stick into whatever goes for a "pointer" and how big it becomes (i.e. "fat pointers"). Still, I think we can safely say that CHERI has documented that fat pointers are well worth their cost, and now we are just discussing what goes in them.


> examples from different time

Yes, because (a) you asked a question about how we got here and (b) that question actually has a non-rhetorical historical answer. This is how we got here, and that's when it pretty much happened.

> one of the major technological limitations were how many pins a chip could have and two-layer PCBs

How is that relevant to the question of segmentation vs. linear address space? In your esteemed opinion? The R1000 is also from that era, the 68451 and 68851 were (almost) contemporaries, and once the MMU was integrated into the CPU, the pin count would be exactly the same.

> Ideas can be good, but fail because they are premature

Sure, and it is certainly possible that in the future, segmentation will make a comeback. I never wrote it couldn't. I answered your question about how we got here, which you answered incorrectly in the article.

And yes, the actual story of how we got here does indicate that hardware segmentation is problematic, though it doesn't tell us why. It also strongly hints that hardware segmentation is superficially attractive but subtly and deeply flawed.

CHERI is an interesting approach and seems workable. I don't see how it's been documented to be worth the cost, as we simply haven't had wide-spread adoption yet. Memory pressure is currently the main performance driver ("computation is what happens in the gaps while the CPU waits for memory"), so doubling pointer sizes is definitely going to be an issue.

It certainly seems possible that CHERI will push us more strongly away from direct pointer usage than 64 bit already did, towards base + index addressing. Maybe a return to object tables for OO systems? And for large data sets, SOAs. Adjacency tables for graphs. Of course those tend to be safe already.


Author here.

This is one of those things where 99.999% of all IT people have never even heard, or imagined, that things can be different from "how we have always done it." (Obligatory Douglas Adams quote goes here.)

This makes a certain kind of people, self-secure in their own knowledge, burst out with words like "clueless", "fail miserably" etc., based on insufficient depth of actual knowledge. To them I can only say: study harder, this is so much more technologically interesting than you can imagine.

And yes, neither the iAPX432 nor, for that matter, the Z8000 fared well with their segmented memory models, but it is important to remember that they primarily failed for entirely different reasons, mostly out-of-touch top management, so we cannot, and should not, conclude from that that such memory models cannot possibly work.

There are several interesting memory models, which never really got a fair chance, because they came too early to benefit from VLSI technology, and it would be stupid to ignore a good idea, just because it was untimely. (Obligatory "Mother of all demos" reference goes here.)

CHERI is one such memory model, and probably the one we will end up with, at least in critical applications: Stick with the linear physical memory, but cabin the pointers.

In many applications, that can allow you to disable all the Virtual Memory hardware entirely. (I think the "CHERIot" project does this ?)

The R1000 model is different but, as far as I can tell, equally valid; it suffers from a much harder "getting from A to B" problem than CHERI does, yet I can see several kinds of applications where it would totally scream around any other memory model.

But if people have never even heard about it, or think that just because computers look a certain way today every other idea we tried must by definition have been worse, nobody will ever do the back-of-the-napkin math to see if it would make sense to try it out (again).

I'm sure there are also other memory concepts even I have not heard about. (Yes, I've worked with the IBM S/38.)

But what we have right now, huge flat memory spaces, physical and virtual, with a horribly expensive translation mechanism between them, and no pointer safety, is literally the worst of all imaginable memory models, for the kind of computing we do, and the kind of security challenges we face.

There are other similar "we have always done it that way" mental blocks we need to reexamine, and I will answer one tiny question below, by giving an example:

Imagine you sit somewhere in a corner of a HUGE project, like a major commercial operating system with all the bells and whistles, the integrated air-traffic control system for a continent, or the software for a state-of-the-art military gadget.

You maintain this library, which exports this function, which has a parameter which defaults to three.

For sound and sane reasons, you need to change the default to four now.

The compiler won't notice.

The linker won't notice.

People will need to know.

Who do you call ?
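The silent-change problem can be sketched in a few lines; Python here is purely for illustration, and the function names are invented:

```python
# library.py -- maintained in one corner of the huge project.
# One day the maintainer changes the default from 3 to 4; nothing
# in the toolchain connects that change to the callers below.
def retry_count(attempts=3):
    return attempts

# caller.py -- somewhere else entirely, silently relying on the default.
def connect():
    # No literal "3" appears here, so neither the compiler, the linker,
    # nor a grep over this file will flag the semantic change.
    return retry_count()
```

Changing `attempts=3` to `attempts=4` compiles, links and runs without a single warning, yet `connect()` now behaves differently.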

In the "Rational Environment" on the R1000 computer, you change 3 to 4 and, when you attempt to save your change, the semantic IDE refuses, informing you that it would change the semantics of the following three modules, which call your function without specifying that parameter explicitly - even if you do not have read permission to the source code of those modules.

The Rational Environment did that 40 years ago, can your IDE do that for you today ?

Some developers get a bit upset about that when we demo that in Datamuseum.dk :-)

The difference is that all modern IDEs regard each individual source file as "ground truth", but have nothing even remotely like an overview, or conceptual understanding, of the entire software project.

Yeah, sure, they know what include files/declarations/exports things depend on, and which source files to link into which modules/packages/libraries, but they do not know what any of it actually means.

And sure, grep(1) is wonderful, but it only tells you what source code you need to read - provided you have the permission to do so.

In the Rational Environment, ground truth is the parse tree plus what can best be described as a "preliminary symbol resolution", which is why it knows exactly which lines of code in the entire project call your function, with or without which parameters.

Not all ideas are good.

Not all good ideas are lucky.

Not all forgotten ideas should be ignored.


Poul, have you looked at the Mill architecture? It doesn't provide capabilities, but it provides "turfs". A turf is a set of [base..limit) regions in a single address space. Threads within a turf can only see what the turf allows them to see. Regions can be shared between turfs if so desired (and that is how one may communicate large amounts of data). When a thread makes a "portal call" it runs in another turf and can only see what is in that turf (plus call args, passed on the "belt"). It is not clear if this will go anywhere, and it might get forgotten, but it is an interesting architecture worth exploring.


What an impressively arrogant double-down.

You once again present no technical argument for your extraordinary claims and position, let alone the extraordinary evidence required. You present no technical analysis of known problems or limitations with historical or conceptual designs. You present no solutions, workarounds, or even argue those problems are no longer relevant.

You fail to even fallaciously argue that past success indicates future success. You jump straight to past failure indicates future success.

You just assert, without evidence, that old, flawed, failed, outcompeted technology is the future, and that anybody who uses facts and evidence to disagree is an arrogant, ignorant inferior to your staggering intellect. Bravo.

Word to the wise: if you let up on the self-aggrandizing arrogance and had a little epistemological humility when proposing that we revisit old ideas, instead of implying that people who disagree are troglodytes, then you would not come across as clueless.


Just think how many people the new owners can spy on, now that they control the backend-servers ?

Bandwidth is the only limit...


Somehow you overlooked that Article 8 has a second clause, even though it comes right after the bit you quoted ?

2. There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.


There's no such clause in the current version (and it's Article 7, not 8).

https://eur-lex.europa.eu/eli/treaty/char_2012/oj/eng


And now you overlooked article 52 ?

It's really very simple: NO human rights are absolute.


I overlooked nothing; I just pointed out that you are referring to the wrong document. But since you mention Article 52... it cannot be taken out of the context of the whole Title VII. Yes, human rights are not absolute. That does not mean they can be restricted arbitrarily. In the case of digital surveillance there are already some legal precedents in the EU (I know that in the EU precedents are not binding, but they may serve as an indication of possible decisions). E.g. in Germany, installing spyware is now allowed only to investigate serious crimes, according to a recent decision of the Constitutional Court.


Could you link the version you're reading? I don't see anything saying this in the version linked above. It reads to me like "there may be limitations put on these rights, but only to protect other rights, and even then the essence/spirit of the law should be maintained".


And that's precisely it.

Your right to encrypted communication, if there is such a thing, ends well before it is used to stage a coup or plan a bank robbery.


But the key words here are "in accordance with the law" and "necessary in a democratic society." That's a pretty high bar, not a free pass.


But it also leaves open the possibility for lawmakers to simply create a new law which allows snooping. What's "necessary in a democratic society" is also pretty open, and can change from one government to the next.


Scanning everyone's messages does not meet the bar of necessity, especially when you look at their reasoning: child safety. Every country in the EU should be ashamed of the funding they give police to investigate and prosecute known abuse and abuse materials. When they have properly financed policing, maybe then they can argue that additional steps are necessary - but not before.


That is pretty much carte blanche, not a high bar at all. We had house searches because someone called an official a dick.

The German "constitution" is simply not very good.


>That's a pretty high bar

Really? That reads as the lowest possible bar. The legislature just needs to pass a law that allows for the snooping, and it is then in 100% compliance with that section. Not even to mention "necessary in a democratic society" - I can't imagine wording more broad than that.


Which is /precisely/ what is going on right now:

ChatControl is a proposed new law, in compliance with the EU Treaty.

... Unless the EU courts find the law unconstitutionally broad.


And you can just say it is out of economic necessity, and the constitution would mean shit, even with the EU's questionable legitimacy.


They shouldn't have even bothered with the first part.


That doesn't say what kind of interference, nor does it say anyone is required to provide assistance to them.

