Hacker News | hmry's comments

It's been a few years since I worked with the dragon book, but I think the most common complaint was that it starts with like 350 pages on parser theory: generating bottom-up and top-down parsers from context free grammars, optimizing lexers for systems that don't have enough RAM to store an entire source file, etc... before ever getting to what most people who want to write a compiler care about (implementing type inference, optimizing intermediate representations, generating assembly code). Of course parsing is important, and very interesting to some. But there's a reason most modern resources skip over all of that and just make the reader write a recursive descent parser.

I guess "back in the day" you had to be able to write an efficient parser, as no parser generators existed. If you couldn't implement whatever you wanted due to memory shortage at the parser level, then obviously it's gonna be a huge topic. Even now I believe it is good to know about this - if only to avoid pitfalls in your own grammar.

I repeatedly skip parts that are not important to me when reading books like this. I grabbed a book about embedded design and skipped about half of it, which was bus protocols, as I knew I wouldn't need it. There is no need to read the dragon book from front to back.

  > But there's a reason most modern resources skip over all of that and just make the reader write a recursive descent parser.
Unless the reason is explicitly stated, there is no way to verify it's any good. There's a reason people use AI to do their homework; that doesn't mean it's a good one. I can think of plenty of arguments for why you wouldn't look into the pros and cons of different parsing strategies in an introduction to compilers, but "everyone is (or isn't) doing it" is not among them. In the end, it has to be written down somewhere, and if no other book is doing it for whatever reason, then the dragon book it shall be. You can always recommend skipping that part when someone asks which book to use.

The thing about parsing (and algorithms in general) is that it can be hair-raisingly complex for arbitrary grammars, but in practice people have discovered that designing simple, unambiguous grammars and avoiding problems like context-dependent parsing makes the parsing problem trivial.

Accepting such constraints is quite practical, and leads to little to no loss of power.

In fact, most modern languages are designed with little to no necessary backtracking and simple parsing, Go and Rust being noteworthy examples.


> In fact, most modern languages are designed with little to no necessary backtracking and simple parsing, Go and Rust being noteworthy examples.

But to understand how to generate grammars for languages that are easy to parse, you have in my opinion to dive quite deeply into parsing theory to understand which subtle aspects make parsing complicated.


My personal context, to understand where I'm coming from: I'm working on my own language, a curly-brace C-style language whose syntax is not that fancy (at least not in that regard). I also want my language to look familiar to most programmers, so I'm deliberately sticking close to established norms.

I'm thankfully past the parsing stage, and so far I haven't encountered many issues with ambiguity; when I did, I was able to fix them.

Also, in certain cases I'm quite liberal about allowing omission of parentheses and other such control tokens, which I know leads to some cases where either the code is ambiguous (as in, there's no strictly defined way the compiler is supposed to interpret it) or valid code fails to parse.

So far I have not tackled this issue, as it can always be fixed by the programmer manually adding back those parens, for example. I know this is not up to professional standards, but I like the cleanliness of the syntax and the simplicity of the compiler, and the issue is always fixable for me later. So this is a firm TODO for me.

Additionally, I have some features planned that would crowd up the syntax space in a way that I think would probably need some academic chops to fix, but I'm kinda holding off on those, as they are not central to the main gimmick™ and I want to release this thing in a reasonable timeframe.

I don't really have much of a formal education in this, other than reading a few tutorials and looking through a few implementations.

Btw, besides just parsing, there are other concerns in modern languages, such as IDE support (files should be parseable independently, etc.), error recovery, readable errors, and autocomplete hints, which I'm not sure are addressed in depth in the dragon book. These features I do want.

My two cents is that for a simple modern language, you can get quite far with zero semantic model, while with stuff like C++ (with macros), my brain would boil at the thought of having to write a decent IDE backend.


I actually think the parsing part is more important for laymen. Like, there may be a total of 10K programmers who are interested in learning compiler theory, but maybe only 100 of them are ever going to write the backend -- the rest of them are stuck with either toy languages, or use parsing to help with their job. Parsing is definitely more useful for most of us who are not smart enough :D

Yeah I agree, that seems very true. Although the average person probably also benefits more from learning about recursive descent and Pratt parsing than LL(k) parser generators, automata, and finding FIRST and FOLLOW sets :)

1) If someone killed my child, I would probably want to kill them back. And yet we don't consider that sufficient reason to make revenge killing legal. The wishes of the victims need to be weighed against the cost it imposes on everyone else, including those who are innocent. The cost of violating everyone's right to privacy, the social impacts of mass surveillance, and the risk of that data being abused.

2) > Isn't that a privacy risk?

Yes, it is!

> Should we ban cameras in smartphones?

No? How about making it difficult for the police to seize everyone's videos without a good reason? We already do that for phone videos; it's called a warrant. But Flock doesn't. They just ask cops to enter any arbitrary "reason" text into an HTML text box and instantly get access to everyone's videos. And if the people explicitly said they don't want those specific cops to have access, like many people decided about ICE? Well, just ask the next county over and use their system; it's not checked in any way.


People don't have a right to privacy in public (at least in the US). Do people not realize anyone can photograph or film them in public at any time? Heck, photographers can even turn around and sell the photos without the subject's consent. Case in point: https://en.wikipedia.org/wiki/Nussenzweig_v._DiCorcia

I'm really struggling to see the parallel between being filmed in public and committing revenge murder.


Sorry if I was unclear. My point was just that "if you were a victim, wouldn't you want this?" is not a very strong argument. What victims want does matter. But when it affects other people, their needs matter too.

Especially with mass-surveillance, which affects everyone. It's not possible to mass-surveil only people who would commit crimes, you need to surveil all innocent people too.


> My point was just that "if you were a victim, wouldn't you want this?" is not a very strong argument. What victims want does matter. But when it affects other people, their needs matter too.

Right and you used murder as an example. Do you think murder is even remotely comparable to putting up a security camera in a public space?

Yes, a victim might want some sort of response that is socially unacceptable, sure. But if you want to make a convincing argument you have to explain why the proposed response is unacceptable. Not some different, extreme, response of your own invention.

I'm really not sure how "committing vigilante murder is wrong" is supposed to be a good argument against putting up security cameras in a public space.


[flagged]


Which points do you disagree with?

The part where you prioritise your convenience over the life-long tragedy of someone else.

Privacy is not "convenience", I'm not sure how you arrive at that. And it's also not mine, it's everyone's.

I don't want children to die (obviously). I also don't want governments to track the movement of protestors and dissidents, police to stalk their ex-girlfriends, etc.

I don't think the effectiveness of mass AI surveillance in preventing crime is high enough to justify the drawbacks.


Two different meanings of "forever" there. An OS runs for an arbitrarily large finite time, which is different from an infinite time.

Same way you can count to any finite integer with enough time, but you can never count to infinity.

Those kinds of interactive programs take in a stream of input events, which can be arbitrarily long, but eventually ends when the computer is shut down.

Termination checkers don't stop you from writing these interactive loops; they only stop non-interactive loops.
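To make the distinction concrete, here's a minimal Go sketch (channel-based; the names and the event strings are purely illustrative) of such an interactive loop: it handles an arbitrarily long stream of input events, but it is guaranteed to terminate once the finite stream ends.

```go
package main

import "fmt"

// run consumes a stream of input events. The stream can be
// arbitrarily long, but it is finite: the loop ends when the
// channel is closed (i.e. when the "computer shuts down").
func run(events <-chan string) int {
	handled := 0
	for ev := range events { // terminates when events is closed
		fmt.Println("handling", ev)
		handled++
	}
	return handled
}

func main() {
	events := make(chan string, 3)
	events <- "keypress"
	events <- "click"
	events <- "shutdown"
	close(events)
	fmt.Println(run(events), "events handled")
}
```

Each iteration does a bounded amount of work per event, which is the productivity property a termination (or totality) checker can accept even though the total number of iterations is unbounded.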


I used to be able to run ROCm on my officially unsupported 7840U. Bought the laptop assuming it would continue to work.

Then in a random Linux kernel update they changed the GPU driver. Trying to run ROCm now hard-crashed the GPU, requiring a restart. People in the community figured out which patch introduced the problem, but years later... still no fix or revert. You know, because it's officially unsupported.

So "Just use HSA_OVERRIDE_GFX_VERSION" is not a solution. You may buy hardware based on that today, and be left holding the bag tomorrow.


The Switch release is rated PEGI 18, ESRB M 17+.

Agreed. Firefox ships with one, and it's very useful.

The HN title is not the article's title

-O3 also makes build times longer (sometimes significantly), and occasionally the resulting program is actually slightly slower than -O2.

IME -O3 should only be used if you have benchmarks that show -O3 actually produces a speedup for your specific codebase.


This varies a lot between compilers. Clang, for example, treats -O3 perf regressions as bugs (in many cases at least) and is a bit more reasonable with -O3 on. GCC goes full Mad Max and you don't know what it's going to do.

For companies too, judging by the number of LinkedIn posts along the lines of

"Our 4-person team's AI bill this month was $100K and I've never been more proud of an invoice"

"If your $250K a year engineers aren't spending $250K a year in tokens, you aren't getting your money's worth"

"If you aren't using at least $500 of tokens a day, it's time for a performance improvement plan"


What's the point if it's incompatible? The README suggests using Go's testing toolchain and type checker, but that's unreliable if the compiled code behaves differently from the tested code. That's like testing and type-checking your code with a C++ compiler but then running it through a C compiler for production.

Would have been a lot more useful if it tried to match the Go behavior and threw a compiler error if it couldn't, e.g. when you defer in a loop.

Is this just for people who prefer Go syntax over C syntax?
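For anyone unfamiliar with the defer-in-a-loop behavior mentioned above, here's a small sketch of the Go semantics a translator would have to reproduce exactly (function name and values are illustrative): deferred calls run at function return, in LIFO order, not at the end of each loop iteration.

```go
package main

import "fmt"

// deferOrder defers one call per loop iteration. None of them run
// inside the loop; all three run when the function returns, in
// last-in-first-out order, so they append 2, then 1, then 0 to the
// named return value.
func deferOrder() (out []int) {
	for i := 0; i < 3; i++ {
		defer func(n int) { out = append(out, n) }(i)
	}
	return // the deferred calls execute here
}

func main() {
	fmt.Println(deferOrder()) // prints [2 1 0]
}
```

A translator that silently gave these defers per-iteration (C-style scope-exit) semantics would pass Go's type checker and tests while producing a different program, which is the incompatibility concern above.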


I don't work on it regularly, but I have a proof-of-concept Go-to-C++ compiler that tries to get the exact same behaviour: https://github.com/Rokhan/gocpp

At the moment, it sort of works for simple one-file projects with no dependencies, if you don't mind that there is no garbage collector. (It tries to compile library imports recursively, but the linking logic is not implemented.)

