In Rust, a function definition left-hand-side looks like an annotated pattern, e.g.
foo(x : int)
Therefore, one would expect to annotate the return type as,
foo(x : int) : string
since the pattern shows foo applied to x. The Rust syntax is actually confusing both for Haskell/ML programmers (where the arrow comes from) and for mainstream programmers. It's too small an issue to change now, though.
Rust's support for proper "algebraic data types" is very good and gives it an advantage over languages like C++. However, there are some small surprises, such as forcing all enum constructors/fields to be public (one must therefore wrap the enum to make an abstract data type).
Every language has its warts and these are particularly minor ones.
That would imply that "foo(x: int)" is a string rather than a function.
Haskell doesn't use that notation either: it uses -> both for the parameter list and for the return type, and it keeps argument names separate from the signature. (Arguably these are poor choices: currying is neither an efficient CPU-native operation nor intuitive, so distinguishing between multiple arguments and returning closures is useful, and argument names are useful for documentation.)
> That would imply that "foo(x: int)" is a string rather than a function
But foo(x : int) is a string! It literally reads "foo applied to x". In the function definition, it appears to be used as a left-hand-side pattern which is "matched". The definition is written as if to say, whenever the term foo(x) is encountered, use this definition here. At least, that was my expectation.
> Haskell doesn't use that notation either
OCaml does and Haskell once had a proposal to add it. Haskell type signatures are normally written separately, but it does support annotating patterns with the right extensions.
No, foo(x: int) is not a string; it's not even an expression, or even an AST node. It's a fragment of the larger AST node:
fn foo(x: int) -> ReturnType {
body
}
The AST here splits into:
Function {
name: foo
signature: (x: int) -> ReturnType
body: body
}
I.e. the arrow binary op binds more tightly than the adjacency between foo and x: int. And the type of foo is a function, not a string.
A "better" way to write this (in that it breaks down the syntax into the order it is best understood) might be
static foo: (Int -> ReturnType) = {
let x = arg0;
body
}
Or to put it another way: reading foo(x: int) as "foo applied to x" in this case is a mistake, because that's not how things bind. You should read it as "foo is a (function that takes Int to String)". It's a syntactic coincidence that foo and x are beside each other, nothing more.
Ya, I'm not really going to defend the current syntax past "function syntax is hard".
It's mixing up assigning a global variable, specifying that variable's type, and destructuring an argument list into individual arguments, all in one line. I've played at making my own language, and this is one part I've never been satisfied with.
Personally I'd probably at least go with a `foo = <anonymous function>` syntax to split out the assigning part. But that's spending "strangeness budget" because that's not how C/Python/Java do it, and I can understand the decision to not spend that budget here...
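For what it's worth, JavaScript already works this way: a function is an anonymous value, and naming it is just ordinary assignment. A minimal sketch (the name `foo` is purely illustrative):

```javascript
// Defining a function is binding a name to an anonymous function value;
// the "assigning" part and the "function" part are syntactically separate.
const foo = (x) => `got ${x}`;

console.log(foo(3)); // prints "got 3"
```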
I don't think that would be the implication, because we are in the context of a function. Otherwise, we should also not write foo(x: int) but foo: int -> ?
I wouldn't say this is a wart or confusing (to me at least). As someone who uses Haskell, C++, and Rust regularly, I just accept that each language has its own syntax. It's true that Rust borrows ideas from many languages, but I view Rust's syntax as its own thing, and the meaning of the symbols are what they are. It doesn't have to do things the C++ way or the Haskell way. It does things the Rust way, and that's not a wart.
Having used both Haskell and mainstream programming languages, I did not at all think that was confusing. The type of "fn foo(x: int) -> string" is quite obviously "fn(x: int) -> string" for people coming from languages like C. I do not see how a colon would make anything more clear. Imagine the function "fn bar(x: fn(x: int) -> string)", would that be more clear with a colon?
On the other hand the enum thing is certainly surprising.
> Imagine the function "fn bar(x: fn(x: int) -> string)", would that be more clear with a colon?
In your example, why bother naming the inner "x" variable for the function param? It cannot be used on the right-hand-side (definition of "bar"). For that reason, the notation is not exactly "clear". In OCaml the annotation would be:
OCaml's syntax is more consistent, I agree, but its colon operator has different precedence than Rust's, so I am not sure its rationale applies to Rust.
This is true but Python has a user community which is several orders of magnitude broader, which confuses the issue a bit. These days if I was designing a language I'd probably ask “How would I explain this to someone who learned JavaScript/Python?” since even if you have great reasons for doing things differently it's a pretty reasonable way to predict sources of confusion for newcomers.
This is just how web browsers have rendered unstyled text for as long as I've been using web browsers (probably about as long as this website has existed).
I mean, just look at it. HTML 4 Transitional. Uppercase tags. And a note from the author that browsers don't handle the "new" standard completely, yet.
Reading another article in this series, "Can You Boil an Egg Too Long?" [1] really made me smile. Apparently no one knows exactly what happens if you boil an egg for multiple months or years. This seems such a trivial thing compared to all the other stuff humans have discovered. On the other hand this also means almost anyone can expand the limits of human knowledge: you just need an egg, a reliable source of heat and water, and lots of patience. Granted, the knowledge gained may not change the world, but you will still be the first who is in possession of that knowledge!
Does it though? I found a video of Julia using this method, but there is also step 4.5 where you crack the egg into an egg poacher, which is designed to help keep the egg nicely shaped. Maybe it's just the egg poacher doing its job?
> but you will still be the first who is in possession of that knowledge!
Are you sure about that? How do you know that no one has done that experiment? Expanding the limits of human knowledge is not just about learning something new, but also about sharing it in such a way that it becomes part of humanity's general knowledge base (even if still restricted to a relatively small group of people). Most people do not have the means to establish human knowledge in this way; and those that do are generally limited to a scope that does not include hard-boiling eggs.
The experiment might not be that simple, though. You would not want to boil away all the water.
That means using a closed system (might go BOOM), starting with lots of water (expensive), or finding a way to add water while keeping the water at boiling temperature (you don’t have to add cold water, so that is probably not that hard, but not trivial, either)
I don’t think it’s that big of an explosion risk; it’s just a matter of finding the right pressure to hold water at 212F permanently. Pressure cookers already exceed this IIRC, so off the shelf equipment should suffice.
"Iron eggs", eggs boiled multiple times across multiple days, is a delicacy in Taiwan. In case anyone is curious to see what an egg boiled for longer than usual (with soy sauce and spices) looks like.
They seem to harden more and more upon boiling, so I'm not sure I subscribe to the idea that _eventually_ a super-boiled egg would disintegrate into soup...
Okay, I got way more of a kick than necessary from the image of a map which depicts 'you' traversing from 'the land of normal eggs' to '?'.
Also - and more relevant to HN - this is the first time I noticed Randall Munroe of xkcd fame has written for the New York Times. Good job on him for landing that gig! :D
In a linked blog post [0] and on the Rust performance page [1] the performance metric "instructions" is used. What exactly is meant by that? Number of instructions executed?
> I think it's important to note that human pattern recognition is basically black-box as well.
Agreed. But as you note, even though humans are basically black boxes we can ask them questions in order to find out how they came to a particular conclusion. (How reliable the answers to these questions are is of course a different matter.)
So maybe we don't necessarily need fully interpretable models but simply a way to ask black-box models specific questions about their state, e.g., "To what degree does a person's age influence the output?".
> But as you note, even though humans are basically black boxes we can ask them questions in order to find out how they came to a particular conclusion.
No, you can't. If somebody treats you with suspicion, it's because of a combination of their news intake, their culture, local events, what their friends and family would think, the way you present yourself, and many other factors. You can always ask somebody to state their reason as a simple "if-then" statement, and they can make one up on the spot, but it'll be so oversimplified that it's basically a lie.
> So maybe we don't necessarily need fully interpretable models but simply a way to ask black-box models specific questions about their state, e.g., "To what degree does a person's age influence the output?".
You can already do that. Just change that number in the input and see how the output changes. To that extent, even the most black box AI model is more transparent than human decision making.
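As a sketch of what that kind of probing looks like (the model and its weights here are invented stand-ins, not any real system): feed the model the same input twice, changing only the field of interest, and compare the outputs.

```javascript
// Hypothetical black-box scoring model; the weights are made up for illustration.
function score(input) {
  return 2 * input.age + input.income / 1000;
}

// "To what degree does age influence the output?" -- probe by perturbation:
// vary only `age`, hold everything else fixed, and look at the difference.
const base = { age: 30, income: 50000 };
const older = { ...base, age: 60 };

console.log(score(older) - score(base)); // prints 60
```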
> You can always ask somebody to state their reason as a simple "if-then" statement, and they can make one up on the spot, but it'll be so oversimplified that it's basically a lie.
Well, I guess it depends on how self-aware a person is. I think the biggest danger is trying to rationally explain your decision when in fact it was based mostly on your feelings, in which case I agree that the explanation is "basically a lie". One needs to be honest when something is not based on a fact but on a feeling to prevent pointless discussions. (If I hold an opinion based on a feeling then you cannot convince me that I am wrong by giving me facts.)
> You can already do that. Just change that number in the input and see how the output changes.
Makes sense. But I guess transparent models would still be generally preferable because you can fully understand how the output is produced, whereas in black-box models you might have to ask quite a lot of questions to get a feeling for it, but even then you can't be sure that you have a full understanding of it.
> even though humans are basically black boxes we can ask them questions in order to find out how they came to a particular conclusion. (How reliable the answers to these questions are is of course a different matter.)
You punctuate the second sentence as though it were of secondary importance. But in many cases, we have little ability to figure out how we came to a conclusion, while being much better at fabricating plausible and politically acceptable answers. I put it to you that having questions answered with plausible fabrications is actually a significantly worse situation than not yet being able to ask the questions at all. At least in the latter situation, we know what we need to be working on.
> But in many cases, we have little ability to figure out how we came to a conclusion, while being much better at fabricating plausible and politically acceptable answers.
Hmm, to me it feels like I can explain the reasons why I came to a conclusion in many (but certainly not all) cases. You "just" need to clearly identify your feelings and emotions and separate them from your rational arguments.
Anyway, these are our own shortcomings and of course don't have to be adopted by any artificially built black-box model.
This is actually false. Early psychologists thought that humans could know everything about their own brain's processes, but observational psychology proved most of their assumptions wrong. It's called introspection, and it's almost always not indicative of a person's actual inner workings.
Expert systems from the 70s and 80s had a capability similar to this. They could explain how they reached a conclusion by reporting the rules they used to get there. The problem was that interviewing experts and coming up with a huge rules database was a ton of work and didn't scale very well.
Maybe the next direction in AI will be to bridge the gap between expert systems and black box models?
I was more thinking about higher level reasoning. But yes, a lot of the lower level stuff just appears as thoughts in my mind seemingly out of nowhere.
Introspectable higher-level reasoning is irrelevant. Any introspectable higher-level reasoning a person can do has already been automated for efficiency.
$$invalidate('items', items = items.filter(i => i !== item));
So this invalidates the whole array, right? Would this then re-render the whole array, i.e., remove and recreate all DOM nodes? And if so, does Svelte support more fine-grained ways to update arrays?
Svelte author here. There's a couple of things to note:
- as Glench mentioned, it's not destroying and recreating stuff unnecessarily. By default it will create or destroy blocks at the end of the `each`, if the length of the array has changed. If you use a `key` then it will diff the input (as opposed to the output) and move elements around accordingly.
- `$$invalidate` is just an implementation detail, and is subject to change. There are a couple of directions in which we plan to do so. One is to use bitmask-based change tracking, which would allow us to generate more compact code resulting in faster change checks (e.g. `changed & 7` instead of `changed.foo || changed.bar || changed.baz` when updating the view). Another is to track which nested properties of an object or array have changed — at the moment, `items[i] = item` invalidates all of `items`, but it would be great if we had a way to invalidate `items[i]` instead.
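A rough illustration of the bitmask idea, not Svelte's actual generated code (the property names and bit assignments are invented): give each tracked property one bit, OR bits in as properties are assigned, and test whole groups of properties with a single mask.

```javascript
// One bit per tracked property.
const FOO = 1, BAR = 2, BAZ = 4;

let changed = 0;
changed |= FOO; // foo was assigned
changed |= BAZ; // baz was assigned

// Instead of `changed.foo || changed.bar || changed.baz`,
// a single integer test covers all three properties at once:
if (changed & (FOO | BAR | BAZ)) {
  // update the part of the view that depends on foo/bar/baz
}

console.log(changed & 7); // prints 5 (FOO and BAZ bits set)
```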
The TL;DR is that Svelte overpromises. They can't possibly write code for every transformation combination as code size would grow exponentially (I'm not completely sure, but I think predicting transforms would involve the halting problem).
React is very far from the fastest vdom. It notably suffers from needing to work with non-DOM back-ends where a particular heuristic that is good in the DOM may either be useless or (worse) actively degrade performance. Preact, Snabbdom, or Inferno would probably be better points of comparison to Svelte's approach as they are much more optimized for the web.
DOM nodes can actually be recycled relatively easily. Cache the nodes by type then by class. Most recycled nodes would be a 100% match based on those two alone. If type and class match, then you have a super-high probability of dealing with old nodes for the same object, so few modifications are required. Most nodes without classes tend to have no attributes changed (<div>, <p>, etc) so they will match as well. Store those nodes with their vdom attached and you will have a record of what has been modified so patching is fast.
This optimization exists in some vdom implementations and would make the case in question much faster. There also seems to be an implication that the diffing algorithm will bloat. If you look at React, the diffing algorithm is a very small part of the codebase.
This is hardly just an academic optimization though. It is the key to reducing overhead on long lists. With long lists of complex objects, you cannot rely on patching text values because there will undoubtedly be actual DOM differences. Rather than dooming the entire list to poor performance, you can recycle those nodes and retain most of the performance of a flyweight scroller where nodes are all identical.
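A toy sketch of such a cache, with plain objects standing in for DOM nodes since there is no DOM here (the helper names are invented; a real implementation would pool actual elements): key the pool by tag and class, and pull a matching node back out instead of creating fresh.

```javascript
// Toy node cache keyed by tag name + class name.
const pool = new Map();
const keyOf = (node) => `${node.tag}|${node.className}`;

function recycle(node) {
  const key = keyOf(node);
  if (!pool.has(key)) pool.set(key, []);
  pool.get(key).push(node);
}

function acquire(tag, className) {
  const bucket = pool.get(`${tag}|${className}`);
  if (bucket && bucket.length) return bucket.pop(); // reuse a matching node
  return { tag, className };                        // otherwise create fresh
}

const row = { tag: "div", className: "row" };
recycle(row);
console.log(acquire("div", "row") === row); // prints true (same node reused)
```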
Either I have completely misunderstood you or you are simply wrong. It looks like you are implying that the vdom is a magical solution to long lists and that Svelte should have a problem here. Meanwhile, in the real world we have this: https://svelte.dev/repl/f78ddd84a1a540a9a40512df39ef751b?ver...
Like most other scrollers, the Svelte one tries to show only the elements on screen, plus a couple above and below, to keep up the illusion. The Google article makes it very clear that top performance requires re-using DOM nodes.
In that very simplistic example, there are only 4 nodes to each list, so creating and destroying them doesn't matter. I have a scroller in a business app where each item has a few hundred DOM nodes. Create and destroy them constantly and you'll definitely notice on a desktop and performance will crawl on a mobile device.
Instead, you want to save those DOM nodes in a cache and re-use them. In order to do this though, you must know what parts are default and what parts have been modified by their previous user. A vdom does this automatically, but the cost for Svelte to calculate which of the hundreds of properties have changed is too big, so they just throw it away and make another.
An even better optimization would be where each list item has the same node structure and only text or images change. I believe Svelte can handle this case. Unfortunately a lot of lists are slightly irregular in real-world applications. I work with these kinds of lists a lot and a vdom keeps it smoother than all the other libraries we've looked at.
Imagine you add a few attributes, some properties, and a couple event listeners to a node. In order to reuse the node, you have to reset it to factory settings.
How would Svelte know what things had been changed when looking at a node saved in a cache?
If it has to check every property, the cost to check will exceed the cost to create new. If it saves a list of changes, it's essentially created a vdom anyway.
Thank you for explaining that to me. It is good to know what technology to use if I ever run into this specific case. So far, all the projects I have worked on could be implemented with both React and Svelte :)
Last time I tried keeping a collection of simple DOM nodes to "recycle" my perf tests showed it was slower than just creating new nodes. So I wouldn't necessarily consider this an optimisation, although memory use might be better.
It depends on how you recycle them I guess. One of Inferno's biggest optimizations (according to its creator) is the reuse of DOM nodes and vdom fragments. To my knowledge, it's still the fastest vdom implementation around.
I guess whether a structural comparison (VDOM) or a value comparison (Svelte) is more efficient really depends on the concrete use-case.
For example, even changing a single value in a big list requires the whole VDOM to be recreated, but should be very efficient in Svelte as it can directly modify the specific DOM node. On the other hand, as pointed out in the article you linked, if changing a value affects a large part of the DOM, but does not actually change that much, then value-comparing frameworks (e.g. Svelte) probably do a lot of unnecessary work compared to VDOM-based approaches (intuitively I would think that this case does not occur that often).
You don't have to recreate the entire vdom if only one part changes. You need only recreate that component and its children (something like React's PureComponent or shouldComponentUpdate optimizations can prevent the worst cases without too much trouble).
Changing branches in a component is extremely common in code I write (very large business application with lots of rules).
Another important optimization is caching and re-using DOM nodes. With a vdom, you know exactly which properties and attributes differ from the default node. To reuse a node you need only update these properties to their new values or restore their original values. Without that tracking, it would be computationally cheaper to just create a new node.
One important example of this is our use of flyweight scroller patterns in lists. The content of the sub-tree changes, but most of the sub-components stay the same, so a vdom could keep most of the dom nodes around.
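The PureComponent-style bail-out mentioned above can be sketched like this (not React's actual code; plain objects stand in for rendered elements, and the names are illustrative): cache the last props and output per component, and skip re-rendering when nothing shallow-changed.

```javascript
// Shallow-equality bail-out, roughly in the spirit of PureComponent (simplified).
function shallowEqual(a, b) {
  const ka = Object.keys(a), kb = Object.keys(b);
  return ka.length === kb.length && ka.every((k) => a[k] === b[k]);
}

function memoize(render) {
  let lastProps = null, lastOutput = null;
  return (props) => {
    if (lastProps && shallowEqual(lastProps, props)) return lastOutput; // skip
    lastProps = props;
    lastOutput = render(props);
    return lastOutput;
  };
}

let renders = 0;
const item = memoize((props) => { renders++; return { tag: "li", text: props.text }; });

item({ text: "a" });
item({ text: "a" }); // same props: cached output, no re-render
console.log(renders); // prints 1
```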
A Svelte component is mutable & manages the related DOM elements which are also mutable. In the conditional, the tree is re-rendered.
Would React's diff algorithm be able to recycle the `<p>surgical</p>` DOM Node? How much more performant would detaching & reattaching `<p>surgical</p>` be than recreating `<p>surgical</p>`?
Assuming the `<p>surgical</p>` can be recycled, the use case having the largest effect is a large inner tree being wrapped/unwrapped; where the inner tree would simply be moved instead of recreated.
In Svelte, this can be optimized by using a subcomponent, as seen in the repl example. You can examine the compiled js.
I believe React recycles DOM nodes based on type (they decide some DOM nodes are faster to recreate instead of reuse depending on the circumstances)
I may be wrong, but I believe hyperapp recycles both the physical DOM node and the attached vdom node together (IIRC, they mark reused vdom nodes as "recycled" instead of directly comparing objects so they don't have to construct as many new vdom objects).
Preact kinda cheats here because they diff against the DOM directly.
When discussing structural changes, it seems like lifting constant fragments at compile time wasn't discussed. There's a Babel optimizer for React that does exactly this: lift those fragments into their own functions. Since those functions don't take any props or state, their values are cached almost indefinitely.
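The hoisting idea can be sketched with plain objects standing in for React elements (the element shapes here are invented for illustration): a fragment that uses no props or state is built once at module level and reused on every render, so a diff can skip it by identity.

```javascript
// Constant fragment hoisted out of the render function: built once, reused forever.
const header = { tag: "h1", children: ["Hello"] };

function render(props) {
  // Only the dynamic part is rebuilt on each call; `header` is always the
  // same object, so a vdom diff can bail out on it with an identity check.
  return { tag: "div", children: [header, { tag: "p", children: [props.body] }] };
}

console.log(render({ body: "x" }).children[0] === render({ body: "y" }).children[0]); // prints true
```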
The way I understood it is that it would only perform code generation for trusted kernel code, not for arbitrary code provided by the user. Doesn't this resolve most (all?) security concerns?
Not necessarily, because it means you need to be sure that the combination of the JIT, the trusted kernel code and user data will only ever result in safe code paths. E.g. consider a JIT that mistakenly optimizes away a bounds check in the original trusted code in certain cases where it is not safe to optimize it away.
This sounds nice! One question: you said that DML changes are handled via a "standard check in sql file". Does this simply mean a new SQL file for each migration? And how are DML changes connected to DDL changes? For example, if some code is two versions behind and is updated to the current schema, wouldn't this mean that the DDL is updated in one step to the current state, but the DML potentially in two steps, breaking the update?
That's correct. The DML changes as part of CI are somewhat new so we haven't ironed it all out yet.
Here's the scenario that I think you're laying out:
1. Commit A creates column foo
2. Commit B has DML that references column foo
3. Commit C removes column foo
This works fine if our CI deployer does each commit individually. First roll out any schema changes, then run any DML SQL.
However, our deployer might pick up all of those changes at once; since we roll out the schema migrations first (in this case a create + drop -> no-op) and then run the DML (which will error), this is an issue because of the rollup.
In practice, we have yet to see this case (most of the time, the dev who writes the DML is close enough to the code to know if it's going to be dropped soon, and we don't drop that many columns - in part because we know that there be dragons), but truthfully, I haven't thought about it much and need to think through what the impact is beyond this example. Thanks for helping me refine my thinking and I'll have something to ponder on this weekend!
> In general, if I need an rvalue and it's legal to convert the lvalue I have into an rvalue, the compiler should do it automatically.
This is already done in some places. Example:
#include <memory>

std::unique_ptr<int> get_int() {
    auto p = std::make_unique<int>(1);
    // `p` is an lvalue, but it is treated as an rvalue in the return statement.
    // (This would not compile otherwise, because `p` is not copyable.)
    return p;
}
Yes, RVO is one case where the compiler will do it automatically.
I think copy initialization of an object will have copy elision applied as well, resulting in the same performance, but I'm not entirely sure.
That is the right approach IMO. What I'm basically saying is "more of this". It's clearly possible for C++ compilers to do this in many more cases. They already do most of the hard parts just to produce some of the error messages that they do. Why not use that knowledge more often to help the programmer instead of burdening them?
> It's clearly possible for C++ compilers to do this in many more cases. [...] Why not use that knowledge more often to help the programmer instead of burdening them?
You're pretty quick to assume I don't. Believe me, I understand. I just don't agree that it's a good idea to mix up object lifetimes and execution context by using destructors to "magically" release locks etc. It never was. The mistakes were made years ago.
I'm not quick to judge (in this case). I'm judging after careful consideration, because I know the difference between good and bad patterns. The ones being hasty are those who mistake their own comfort level with something (often because they know nothing else) for actual merit.
C++ without deterministic destructors to manage lifetime would be a (fundamentally) different language.
Furthermore, I also disagree with you that it would be a better language. C++ has many warts, but object lifetime and RAII are amongst its strong suits (unarguably, I thought — yes, resource lifetime is complex, but it's inherently so; C++ just makes the complexity explicit and handles it in a good way). Handling resource lifetime in languages with nondeterministic GCs can be such a pain that it has a fundamental, detrimental impact on the architecture. Just look at .NET's handling of `Dispose`. I've written a ton of .NET GUI code, and handling resource lifetime (in particular GDI+) is an absolute pain point, which is uniquely caused by the lack of deterministic object lifetime.
> C++ without deterministic destructors to manage lifetime would be a (fundamentally) different language.
The problem is that it's not usefully deterministic once you add in exceptions, shared pointers, move semantics, lambda captures, etc. Not to mention every perverse combination of those things. Yes, it's deterministic in the tautological sense that almost everything is deterministic given enough information, but I'm not sure that's enough even for the people who write the compilers. Even they have bugs related to misunderstanding this "deterministic" system. I certainly wouldn't want to make it less deterministic, and defer certainly doesn't make it so.
> resource lifetime is complex, but it’s inherently so
A certain amount of complexity is natural, a certain amount is spurious and self-inflicted. See above for some of the causes of that spurious complexity.
> C++ just makes the complexity explicit
Being explicit about memory management isn't a goal. C is even more explicit about these things. Does that make it a better language? Even in C++, new/delete is more explicit than most of the current idioms, and every book on modern C++ recommends against them. Explicit memory management should be a last resort, for the cases not handled cleanly by the language's other constructs, and those should be kept to a minimum. C++ has notably failed at that. Literally every other popular language except for C does better.
> Handling resource lifetime in languages with nondeterministic GCs can be such a pain
Do you really see no alternatives between dumping all memory-management complexity on the programmer and a full tracing GC? The authors of Objective C's automatic reference counting or Rust's borrow checker might take issue with that, as would the people who developed the memory-lifetime rules and infrastructure for all of the bigger older C codebases I mentioned in another comment. It is in fact possible for object lifetimes to be far more deterministic than in C++ as it exists today, and I for one think that would be a good thing.
> The problem is that it's not usefully deterministic once you add in exceptions, shared pointers, move semantics, lambda captures, etc.
No, it really is, even in the presence of the things you mentioned. That’s the whole point.
> Being explicit about memory management isn't a goal. […]
I get the impression that you’re confusing explicit and manual memory management. They’re not the same.
> Do you really see no alternatives between dumping all memory-management complexity on the programmer and a full tracing GC?
Of course I do, but the alternatives aren’t without their own problems. I’m curious how Rust will fare but Objective C’s ARC, while attractively simple, has performance implications, and then there’s the problem with cycles.
> The problem is that it's not usefully deterministic once you add in exceptions, shared pointers, move semantics, lambda captures, etc. Not to mention every perverse combination of those things.
It really is though, and I can't stress that enough. I feel you're just throwing out keywords to make it sound more complex than it is.
You can stress an untrue statement all you like. Shout it to the heavens. It will still be untrue.
Thought for you to consider: the behavior might seem predictable to you but that's not a relevant standard. A chess opening, a volleyball play, a snowboard run might all seem straightforward to me, but that doesn't mean they'd suit everyone. It doesn't mean they're the best. It just means I've invested my time in learning to do those things those ways. It's too easy to say everyone should have to memorize the same lists of rules, to retreat into "we don't care about the blubs" arrogance, ignoring the fact that it just doesn't have to be that way. It is in no way necessary, for any purpose, to make every single programmer in a language spend so much of their time looking over their shoulder to make sure the compiler is doing the right thing. It's a waste no matter how good those programmers are.
> I feel you're just throwing out keywords
And I feel that you're just not even trying to understand their relevance because you've already decided on a conclusion. The lengths to which people in this thread go to rationalize the time they've already wasted is astonishing. People who use C++ should be the first to demand its improvement, but I guess not all humans are rational.
C++ destructors really are deterministic, even in the face of exceptions, lambdas, and moves: everything constructed gets destroyed. Modern wrinkles where the compiler is allowed to skip a construction and a destruction do not contradict that. You have to fool with heap memory or evoke UB to escape that law.
To be useful, determinism has to mean not only that something happens but that it happens at a predictable time. Otherwise you're into that tautological "everything is deterministic" territory, and you step into that even deeper when you acknowledge the "modern wrinkles". "Compiler is allowed" (but not required) is practically the definition of non-determinism.
"Compiler is allowed" not to destroy what it never constructed.
This is no different from the myriad other elisions modern compilers do -- and that CPU cores do.
Deterministically, destructors run exactly when the constructed object goes out of scope, after objects constructed later, before those constructed earlier. More determinism than that is something you will get from nobody.
You don't have to write any constructors at all. Write ordinary functions, and call them whenever you like. Been there, had enough of that for several lifetimes. Destructors are better.
The behavior is predictable to anyone versed in the language. C++ is a complex language to learn so that is inherently harder than most alternatives, but the concepts here are not.
I am first in line to acknowledge the flaws of C++, as well as to cheer on more modern approaches. But I do find your criticisms shallow and quite oddly chosen.
> The behavior is predictable to anyone versed in the language.
Ahh, there's that "don't care about the blubs" arrogance again. Never mind that your interlocutor is about 99.9% likely to be less of a blub than you are. Whether behavior is predictable given arbitrary amounts of information and effort is not the point. What matters is how much it distracts the programmer from the non-language-specific problem they're really trying to solve, and the answer for C++ remains way too damn much even when the programmer is highly skilled and well versed in the language.
C++ and even C have tons of subtle edges that are impossible to keep track of. You need to be experienced to have a fighting chance; I'm not defending that. But it is a fact of life, and a large part of those edges are relics from the past.
You can't just take a concept that is somewhat unique to C++ and proclaim that it is bad just because C++ has a lot of warts. Your criticism doesn't even register on the weirdness scale, in my opinion. It works as intended, is easy to reason about, and solves practical problems. Object lifetime is something I almost always miss when I don't use C++. A garbage collector is hell to work with in comparison.
What is unfortunate is that we don't really have any alternatives. Rust shows promise, but we've had to suffer through decades to even get to this point, which also implies that we have decades left. And that assumes that Rust continues at the pace it has, and preferably also that we get more alternatives to choose from.
I already mentioned "defer" as in Go or Zig. It's explicitly tied to scope exit, not object lifetime, so it avoids all of the problems inherent in confusing the two.
I didn't ask what language you would use. I asked what you would do in C++, given you're criticizing people for writing such code in the language. If you're put so much thought into this problem like you claim then surely you must have a better alternative in mind.
I don't care what people "would do" in C++ today, because my whole point is that C++ started down this bad path years ago. I'm not criticizing people for writing such code. I'm criticizing the standards-makers who made such hacks (seem) necessary. It's not about having put thought into it either. Lots of people had put plenty of thought into it when the various versions of C++ were standardized. This is about making the right choices from among the alternatives available, and that was not done.
Demanding a solution that is both applicable to C++ as it exists today and yet not in C++ today is demanding two contradictory things. It's demanding that the same thing both did and didn't happen. It's dishonest. You want a suggestion? Adopt "defer" for the next version of C++. It's the best we can do. We can't change the past, but we can learn from it if we don't get stuck trying to excuse past mistakes and attack those who point them out.
I'm curious, could you elaborate?