prhn's comments (Hacker News)

I'm just here to share my love for this film. I'm a big movie fan. I've been watching The Fifth Element since high school, and I've only grown to appreciate it more as I get older.

It's so full of life, creativity, color, humor, and themes we can all relate to (purpose, love, loss, etc).

This is peak Bruce Willis, and the movie is filled with other exceptional actors, including Gary Oldman and Ian Holm. Milla Jovovich is extremely entertaining to watch as a sort of fish-out-of-water, and I know Chris Tucker's character here isn't for everyone, but in my opinion it's right on-brand for the film. It's cracked me up for decades.

Mostly the effects have aged really well. That's generally thanks to heavy use of practical effects, as this article highlights.

I often get sad that this is becoming a lost art. Great filmmakers with big budgets are still doing this type of practical effects work (Nolan [Interstellar], Villeneuve [Dune]), but I think eventually it will be lost in time.


I cannot imagine anyone but Chris Tucker playing Ruby Rhod. He's one of the best parts of the film.

> Chris Tucker's character here isn't for everyone

Yeah this comment to me is incredibly surprising. Chris Tucker played an absolutely incredible character in that movie. So creative, so well executed, so memorable.

He was up there with Bruce Willis as top two in that film.

Such a brilliant movie - and definitely feels like a lost art.


I'm blown away by the idea of not using Chris Tucker for Ruby Rhod. It is like imagining anyone but Hugh Jackman as Wolverine. They are basically perfect castings.

You green?

Super green

yes, so many good moments, "btw, i have a recording of her talented voice" haha

Agreed -- it's a wonderful film, and deserves a special place right up there with Star Wars and Harryhausen for its practical effects.

While the article mentions Moebius, I think this level of praise still merits an extra Incal callout, even if it just serves as a recommendation to those who want more of this stuff: https://en.wikipedia.org/wiki/The_Incal


I must have watched it at least 8 times, and only on the 9th time did I pause and realize that in this movie the hero and villain never meet. Willis and Oldman almost cross at the elevator but never actually meet.

I got the 4K Blu-ray disc to watch with my kids a couple months ago and it has aged really well, particularly the special effects.

It's a wonderful movie, definitely one of my favorites.


> I often get sad that this is becoming a lost art. Great filmmakers with big budgets are still doing this type of practical effects work (Nolan [Interstellar], Villeneuve [Dune]), but I think eventually it will be lost in time.

Another one of the things I appreciated from George Miller with Mad Max: Fury Road. There's definitely CG used, but so many of the stunts were real and not Spider-Man-level nonsense.


In the recent Mad Max films, Miller used CG for compositing, but insisted that all the action be real. There are no CG people jumping bikes over 16-wheelers. CG was only used to get rid of safety equipment, change the sky, etc.. The results feel viscerally real.

Guitar dude's exploding rig was definitely CG. Don't kid yourself that it was limited to what you stated. Yes, the stunts were real humans, but it also had CG elements

Do you mean the flamethrower guitar? That was real.

I'm talking about the end of the flamethrower guy when the rig wrecks. There's a bunch of debris that flies around, including the steering wheel that comes at the camera spinning, timed exactly so the center wipes the frame. That sequence has lots of CG.

This has tended to be significantly overblown recently with a huge amount of 'no CGI' advertising coming from studios, which often verges into utter BS. There's an incredible amount of CG at every level of modern productions, regardless of how much stuntwork and practical effects were done as well. (this video series has a good breakdown on it, which has included studios releasing doctored 'behind the scenes' footage! https://www.youtube.com/watch?v=7ttG90raCNo ).

That's not to say that doing these things is pointless or unimpressive, but it's often used to denigrate and minimize the work of a lot of already quite underappreciated artists.


What do you mean "Chris Tucker's character here isn't for everyone"?!?!? I absolutely LOVE Ruby! You green???

The cast is just perfect IMHO. Super green! ;)

Also one of my all time favorites.


I thought slightly less of the casting for Fifth Element after I learned about the "Born Sexy Yesterday" thing in conjunction with Luc Besson's personal life. Same with Leon.

https://en.wikipedia.org/wiki/Born_Sexy_Yesterday

https://www.youtube.com/watch?v=0thpEyEwi80

https://en.wikipedia.org/wiki/Luc_Besson#Personal_life

While I enjoyed watching the movies, I feel like I would have to point out this dynamic if I were to show the movie to my kids.


Hmm, I mean that "thing" appears to be the opinion of one guy on YouTube. Which he is entitled to of course, but I don't necessarily agree.

Especially considering he's using Leeloo as "the most quintessential example" but then also "emphasizes that the Born Sexy Yesterday trope intensifies the dynamic by positioning women as submissive rather than equal partners", which is clearly not really the case here.

Or for example a scene early on where Korben tries to kiss her, to which she reacts with a gun to his head and says "never without my permission". Doesn't really sound very innocent or without agency to me.

I get the point of the analysis and it's certainly not completely wrong, but it seems to be a bit far-fetched and incoherent to be honest.


That's a pretty wild take, but ok. I think you really have to be digging deep and "looking for trouble" to take issue with a fun and relatively wholesome movie like Fifth Element.

Not to mention them getting together for the Fifth Element led to the Joan of Arc movie they did together afterwards (or at least contributed).

Let's just cancel everything.

> I'm just here to share my love for this film.

I love it too, and the best part is, I had not heard of it until my buddy dragged me to the theater to see it. I was completely blown away, and have watched it dozens of times over the years. I had the same experience when my mom took me to see The Matrix. I didn't watch much TV back then and didn't keep up with movie previews.


> but I think eventually it will be lost in time.

I don't believe it to be honest; model making and painting remains a popular hobby for millions of people, the only question is whether filmmakers will want to use it.

And recently, especially in e.g. Star Wars franchise entries, they have gone towards using models / sets again instead of just using CGI for everything.


CGI has always amazed me and, as a cinephile, felt like a wonderful advance for filmmaking, but I absolutely do not like how it replaced everything in movies for a while. It's absolute slop seeing actors in a green room trying to act through scenes, and the SFX puke we get instead of better directing, practical effects, and the magic of movies.

I agree that now we are finding our way back to a balance of using everything together to tell stories and I'm personally here for it.


I was flipping channels in a hotel and I assume one of the Peter Jackson Hobbit/Lord of the Rings films was on. The scene I watched was some sort of interior castle scene and it looked really bad. It felt very flat and cardboardy, like it was filmed on VHS.

But I wonder at what point digital effects become 'good enough' in some sense that they never look aged beyond the containing film. At some point surely there's no more perceptible 'resolution' to be had.

In practice, digital effects haven't approached being convincing the way practical effects are. In many cases, especially when used liberally, digital effects still clock as amazing digital effects rather than reality. That can be enjoyable, but I don't see a way forward other than recognizing CGI isn't the best solution for everything.

This is not true, you just don't notice the vast majority of effects. You sit down to watch a summer blockbuster, there are 1000 shots that have been altered, pretty much anything that isn't two people talking in a room.

The advertising tries to tell you "we did everything practical!", it's always a lie and you believe it.


This comment doesn't respond to what I actually said. I said that heavy-handed CGI tends to read as CGI. You responded by "informing" me that more nuanced CGI is commonplace. Everybody knows that.

That's not at all what you said in your first comment, this is a total back pedal.

Let's forget for a second that "heavy-handed CGI" is tautological (it wouldn't look "heavy-handed" if it looked real), and that some things, like energy beams, have no analogue in real life, so they're obviously effects.

You said "digital effects haven’t approached being convincing the way practical effects do" and the truth is this isn't true at all, you just don't know that you're seeing digital effects and you think you're looking at photography or something practical.


Not sure if you're misreading what I wrote or arguing in bad faith, but either way I'm done here.

The one scene I dislike in this movie is Korben lying on the bed talking to Spider about his ex.

It always just seemed out of place to me. Exclude that one scene and it's perfect as far as I'm concerned.


i don't remember that, was that when he was talking to the food truck guy? heh "last time i checked my msgs one was from my wife saying she was leaving me, the next msg was from my lawyer saying he was leaving.. with my wife." lol so many great lines.

literally watched it last night and was struck by how much "personality" it has.

hah whenever i see a Stay Clear sign i whisper to myself, "i'm trying". Oldman did an amazing job btw, i really enjoyed every scene he was in, "you saved my life, so i'll spare yours".

Came here to say the same

I learned this lesson a couple decades ago.

Managing windows with OS idiosyncrasies becomes a task in itself.

However, I've also learned recently it depends what you're doing.

For software development on a single laptop monitor, I just want one maximized window. If I have a near-retina-DPI monitor at 120Hz+ (I can't deal with low-DPI fuzziness and low refresh rates all day), I'll usually have a 3-4 window layout on a single monitor, with the IDE taking up half the screen.

There is a minor cognitive hit from switching focus between monitors for things like reading documentation, so I don't like doing that.

Music production? Man, I could probably use like 3+ monitors. Main stems view, a separate monitor for open VSTs, a separate monitor for video, a separate one for piano roll maybe. The window juggling gets really cumbersome on a single monitor.

My friend who is a professional musician (makes music for TV shows) uses 3 large TVs for music production.


> There is a minor cognitive hit from switching focus between monitors for things like reading documentation, so I don't like doing that.

Do you not feel like there's a similar hit from switching full screen windows? Or is your documentation within your full screen IDE?


> Do you not feel like there's a similar hit from switching full screen windows?

I feel like it should be, but in practice it isn't.

Sounds counter-intuitive, I know, but switching between windows on the same screen has near-zero context loss.

I also use a 3x3 grid of workspaces (center one is browser, all the others are dedicated to a single project/context/session/task each), and navigating workspaces (modifier+shift+arrows) also has near-zero contextual hit.

Even more counter-intuitively, while a second screen produces a large and irritating context-switch cost, using a little (physical, pen-and-paper) notepad next to me has even less context-switch loss than switching windows or workspaces do. It happens without me even realising it - sometimes I'd arise from a long session of coding and be surprised at some notes I made while coding.

There's probably something learnable about the human mind in all of this.


> Managing windows with OS idiosyncrasies becomes a task in itself.

Tiling window managers are a good solution.


Tiling merely changes the idiosyncrasies, and I say this as someone who primarily uses them. (hyprland in my case)

If you created a window right now, where will it go? Which window will it take its space from? Does it use your focused window? Your mouse position? If your WM supports mixed floating & tiling, how does it go when you flip a window between them? etc. That's all cognitive load when you aren't familiar and still requires some hand control when you are.


This is why I use no window management. Windows are arbitrary sizes, whatever I happened to drag out last time. Windows piled on top of each other. Some stuff in the back of the pile dates back weeks. A couple other piles of various windows in other desktop spaces. I like to think of it as a messy desk. Maybe closer to how we think in real life. Like you, I found tiling a lot of faff. What goes where? How big should they be? How can I fit xyz on both these windows when they can only be 5 inches wide to fit it all on the screen? All that friction and mental load fades away with the pile-of-junk method of window management. You'd be surprised how easily you find things in that pile too.

After having tried many tiling window managers over the years, I have also come to the conclusion that the ‘pile of windows’ model (sometimes spread across desktops) works best for me.

The most important thing is to have a way to search through the pile. (Raycast window search is pretty good.)



I haven't used hyprland. I can answer your questions for XMonad, assuming you're using a typical standard layout.

> If you created a window right now, where will it go?

The new window becomes the focused window. It's inserted into the master position. Existing windows shift down the (conceptual) stack.

> Does it use your focused window?

It uses the same screen space, yes.

> Which window will it take its space from?

All of the other visible windows. It recomputes the tiles so that all tiles except the master become smaller, to make room for the new one.

> Your mouse position?

By default, mouse position is ignored. XMonad is keyboard-centric by design. You can set a mouse-follow configuration variable if you want. I've never tried it.

> If your WM supports mixed floating & tiling, how does it go when you flip a window between them?

It recomputes the tiles in much the same way as above. It's as though you deleted the window from the tiling and it becomes floating. And vice versa. It's a very consistent model.

I find it very natural and predictable. As far as "cognitive load" goes, that seems like an exaggeration, but again I haven't used hyprland.

If by "hand control" you mean using the mouse, that's definitely not needed for window management. In fact by default, XMonad doesn't even support resizing tiles using the mouse, and I've never tried to enable that. I do commonly use the mouse for switching focus, usually because I'm navigating to some location in another window anyway, in which case focus moves automatically.
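The master/stack behavior described above can be sketched in a few lines. This is a purely illustrative Python model (XMonad itself is written in Haskell and its layouts are configurable); it assumes the default Tall-style layout: one master pane on the left, the remaining windows splitting the right column evenly, and new windows taking the master slot.

```python
# Illustrative master/stack tiling sketch, not XMonad's actual code.
# A "window" here is just any hashable identifier.

def tile(windows, screen_w, screen_h, master_ratio=0.5):
    """Return {window: (x, y, w, h)} for a master/stack layout.

    The first window gets the left master pane; the rest split the
    right column evenly (the last sliver of uneven division is ignored
    for simplicity).
    """
    if not windows:
        return {}
    if len(windows) == 1:
        return {windows[0]: (0, 0, screen_w, screen_h)}
    master, stack = windows[0], windows[1:]
    master_w = int(screen_w * master_ratio)
    rects = {master: (0, 0, master_w, screen_h)}
    stack_h = screen_h // len(stack)
    for i, win in enumerate(stack):
        rects[win] = (master_w, i * stack_h, screen_w - master_w, stack_h)
    return rects

def insert(windows, new):
    """A new window is inserted at the master position;
    existing windows shift down the conceptual stack."""
    return [new] + windows
```

So with two windows open, creating a third pushes both into the stack and every tile is recomputed deterministically, which is the predictability the comment above describes.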


I'm using Hyprland right now for its wayland support, but IMO so far the best mental model for window management I've seen is that of herbstluftwm with static layouts (you can still use dynamic tiling and tabs with it of course)

Yes, I tried a tiling window manager for about an hour and then stopped. It was absolute madness as the window size and positioning was seemingly random. I can't comprehend how anyone can use them

Used to use AwesomeWM for almost a decade before I forced myself to simplify to Gnome for other reasons, but I never felt like window sizing and positioning was random, it always opened in the logical place, which I kind of feel like is the whole value proposition of a window manager in the first place.

For curiosity's sake, what window manager did you try?


Tiling window manager means relying heavily on workspaces. You distribute the windows over workspaces.

And most algorithms for management are deterministic. The position and the size of the new windows is always known.


I prefer desk-based management. Windows like papers on my desk, piled on top of each other, peeking out from the sides. Seems chaotic, but it's more aligned with how your brain works in meatspace than looking at a bunch of things at once.

Do you use macOS? That's exactly how I feel on macOS because it is so so bad at this.

Dual 4k 27" monitors on Linux with KDE Plasma near perfect.


Even beyond the engineering there are 100 other things to do.

I launched a vibe coded product a few months ago. I spent the majority of my time

- making sure the copy / presentation was effective on product website

- getting signing certificates (this part SUCKS and is expensive)

- managing release version binaries without a CDN (stupid)

- setting up LLC, website, domain, email, google search indexing, etc, etc


Agreed. However, I just recently "launched" a side project and Cloudflare made a lot of the stuff you mentioned easier. I also found that using AI helped with setting up my LLC when I had questions.


Getting legal advice from AI is certainly an option you can take.


Legal advice is maybe the worst thing to get from an LLM.


Exactly. The "writing code" part is literally the easiest part of building a software business. And that was even before LLM assisted coding. Now it's pretty much trivial to just spew slop code until something works. The hard parts are still: making the right thing, making it good, getting feedback and idea validation, and the really hard part is turning it into a business.


At the risk of sounding ignorant, why didn't the various police cruisers and even the ambulance itself just push the damn thing out of the way? That's what the push bars attached to the front of their vehicles are for.


When my mom was a firefighter, and a car was blocking a hydrant, she happily broke windows and pushed the hose right through the car. Didn't happen a lot, but did happen more than once.


A human police officer eventually got into the driver's seat to move the car. They sat around for minutes before doing so. They could've gotten into it immediately.

But yea they absolutely could’ve also just slammed it and moved on too.


The cops only do that if they can be sure they'll be able to charge someone else for the damages: for their vehicle, for their "injuries", and for their department bonuses. They get none of that if they can't easily charge somebody with a crime. Ambulances aren't really made to do that and are crazy expensive, plus the drivers would likely take on a bunch of the liability from the crash, even though the ambulance itself would probably survive just fine because they're tough. The fire department is the only one with the trifecta: vehicles made to push other vehicles out of the way, pay that doesn't depend on how much they can extort people for, and no liability for damages.


Ambulances aren’t exactly designed to act as battering rams.

They ram a car and the radiator goes bust and now you’ve got an ambulance with no engine.

Or you just hurt the passengers inside the Waymo and now you’ve got two emergencies.


Because it's not a trailer for Grand Theft Auto 6.


Ambulances can be seriously damaged by attempting it. Police cruisers can do it, but then they may be sued for damages. I know that cars blocking fire hydrants were a serious problem in the past, and owners sued firefighters for pulling water hoses through their cars after breaking the windows; the law did not allow it even when a line through the car was the only option.


> owners sued firemen for pulling water hoses through their cars after breaking the windows - the law was not allowing it even if the line through the car was the only option.

I'll bet anything you have no citation for this.

Sovereign immunity and necessity combine to make sure that firefighters and cops can do whatever the fuck is required.

The aftermath is even more brutal. You will receive multiple tickets for this, you will receive a bill for damages to the hose they had to thread through your windows (or to the police car that rammed you out of the way), and your car insurance will point to a clause in their policy that says that you are personally on the hook for all of this.

You may even face civil or even criminal liability for any damages to whatever is on fire, or loss of life, that a good prosecutor or plaintiff's lawyer can convince a jury is directly traceable to your egregious conduct in parking your precious car in front of the damn fire hydrant.


There is no sovereign immunity for firefighters, that is for states. Firefighters are city employees and cities don't have sovereign immunity.


Right, I should have written governmental immunity.

In any case, it's a losing lawsuit.


Sounds like trolling, but the idea of Waymo suing a responder to a terrorist attack is too ridiculous.


My thoughts exactly.

What an embarrassment.

"Authorities" paralyzed by politeness when lives are in the balance.


He also played Ronan in Guardians of the Galaxy and King Thranduil in The Hobbit!


how dare you mention Lee Pace and -not- mention his role in Foundation, he carried that entire show on his way too muscley back


It’s not cancelled, there’s a fourth season on the way.

But yes, he and Jared Harris are pretty much the primary reasons to watch. And given Harris's limited screentime, Pace definitely carries it.


People are using past tense, as David S. Goyer is leaving the show behind.


The articles I can find say he's staying on as an EP, just stepping down as the main showrunner. That seems very different from leaving the show behind.


Yes, it could be there's no impact from any of it. I just remember seeing the headlines about the change.


oh no, this is how i found out my favorite show is dead wtf


You can still be a programmer and identify with and participate in that group. AI hasn't eliminated programmers or programming, and it never will.

However, my best advice as someone with many distinct interests is to avoid tying any one of these external things to your identity. Not a Buddhist, but I think that's the correct approach.

He sort of comes to this conclusion in the final "So then, who am I?" section. The answer is you are many things and you are nothing. You can live deeply in many groups and circles without making your identity dependent on them.

If you're a programmer, what happens when programming isn't needed anymore?

If you're a runner, what happens if you get injured?

It's always been helpful personally to remind myself that

I am not a programmer. I am a person who programs.

I am not a runner. I am a person who runs.


I align to everything in this post except the below excerpt. I think it's important to be a lot of things and nothing at the same time and to tie fulfilment to internal metrics rather than externalities.

> AI hasn't eliminated programmers or programming, and it never will.

It might not fully eliminate them tomorrow, but this technology is being pitched as at least displacing a lot of them and probably putting downwards pressure on their wages, which is really just as harmful to the profession. AI as it's being pushed is a direct attack on white collar CS jobs. There will always be winners and losers, but this is a field that will change in many ways in the not so distant future because of this technology - and most current CS prospects will probably not be happy with the direction the overall field goes.

Even if you do not personally believe this, you should be concerned all the same because this is the narrative major CEOs are pushing and we know that they can remain crazy longer than we can remain solvent, so to speak.


I don't dispute the shortening of attention spans, which seems to be directly related to the new forms of entertainment young people consume.

However. Films across the generations are very different in terms of how they lay out a narrative. Watch any film before 1980 and you'll start to see a pattern that the pacing and evolution of the narrative is generally very, very slow.

Art is highly contextualized by the period it's created in. I don't really think it's fair to expect people to appreciate art when it's taken completely out of its context.

Lawrence of Arabia, for example. What a brilliant, brilliant film. Beautiful, influential, impressively produced. And really, really boring and slow a lot of the time.

If I were a film professor today, hell even 20 years ago, I would not expect a modern film student to sit through that whole thing. I think it's my job as a professor to understand the context of the period, highlight the influential/important scenes, and get students to focus on those instead of having to watch 4 hours of slowly paced film making and possibly miss the important stuff.


> If I were a film professor today, hell even 20 years ago, I would not expect a modern film student to sit through that whole thing.

Our local cinematheque has just had a 70mm festival, where they of course screened Lawrence of Arabia. All screenings were sold out. My mom went to see another screening at the same time, and commented on how many young people were going to see Lawrence. The past couple of years there has been a strong uptick[1] here of younger people flocking to see older films.

[1]: https://www.nrk.no/kultur/analog-film-trender-blant-unge-1.1...


> And really, really boring and slow a lot of the time

If you only watch the story-driven scenes in Lawrence of Arabia, and skip the prolonged shots of the desert, you would miss out feeling the same vastness and heat Lawrence is feeling.

There is a limit to how much a film can make you think or feel. Films that reach the highest limits need "boring" voids in-between the primary scenes. These voids are not to ingest more, but to help digest what has been ingested in previous scenes, with subliminal scenes and silence that let the right thoughts and feelings grow.


> And really, really boring and slow a lot of the time.

It's not boring on a giant display with the original 6-track mix playing just a tad too loud all around you. I've seen it in 70mm at the AFI in Silver Spring, MD; candy for the eyes and ears.

It would likely be boring if played at a quiet volume on a small display. This is because movies are, in part, spectacle. Cirque du Soleil would likely be boring too if viewed very, very far away.


The pacing is irrelevant in this context. As a student the main point of watching these movies is not entertainment.

Although I will say it's pretty amazing that someone that supposedly has an interest in film would not be able to watch The Conversation or an even slower film like 2001.


No you're wrong. It's not about the era. Matt Damon talked about this on the Joe Rogan podcast recently. He was asked by Netflix to create a big action sequence in the first 5 mins so that people on their phones would get hooked into watching the entire movie. He was also asked to mention the plot of the movie several times throughout the movie because people on their phones will tend to miss plot details and it helped keep them engaged.

This is not about how movies are paced, it's about the way phones have changed attention spans.


No he's right, there is definitely a difference in pacing for films throughout the decades.

Much of the content that Netflix produces, however, is not made to be shown in a cinema-like setting; it's something people put on while doing something else, like TV. So whatever Damon was saying on the podcast makes sense in that context; it's not, however, indicative of a whole generation of movies. There are still plenty of films being made that require full attention for an extended period of time, many of which are also on Netflix. One could argue that there was never a time in history when more excellent, deep, and complex content was being made.

One other part is also that traditional TV (which arguably also never required full attention) has been replaced by new mediums. Personally I never owned a TV in my life.

The whole argument "phone bad" is a bit lazy IMO and doesn't at all take in account the nuance that would be required for a serious discussion.


There is different pacing throughout film history but that's not what the original article is about. The original article is talking about how film students can't sit through movies and that's because of attention spans and phones.


I think part of the problem too is that there's so much slop content now that isn't worth paying 100% attention to, which primes people to use their phone while watching, for example, Netflix. And there's no differentiation between "higher quality, pay attention to me" content that loses value when you don't pay attention, and "quick and dirty, low-budget background movie" content that loses value if you pay attention to anything beyond the few key moments covering the basic story line.


> If I were a film professor today, hell even 20 years ago, I would not expect a modern film student to sit through that whole thing.

Sorry, but this to me sounds completely insane. We're not even talking about the general population here, but people who are ostensibly serious about the art and craft of film making. And the bar is being set at literally just watching the movie, and not even some obscure marathon of a film that takes a degree to be appreciated, but a major mass-released picture that has already been enjoyed by countless people.


What seems to be missing for me at least is that I doubt I would have done well being assigned entire films on top of my regular course load worth of studies.

Paying attention to a film enough to emotionally connect with the content, take notes, synthesize an academic understanding of subtle things like the use of lighting, sound, camera work, etc while also doing the other several hours worth of homework from my other classes would be pretty daunting.

Much easier to get the CliffsNotes from the Internet and fake it... though I had CS, math, and Mandarin courses which were way, way heavier on the homework side than most other classes I took, so maybe I'm overthinking it.


I like a lot of long films, but at nearly four hours, Lawrence of Arabia is a marathon of a film. I've not seen it; I ordered a copy recently, but the order was cancelled, and I missed the Fathom screening for one reason or another. I'll see it eventually; I like long movies and movies involving sand, so it seems like an easy win.

I would think a film studies class might not want to spend so much time on a single film, so maybe several scenes would be more appropriate.


>And really, really boring and slow a lot of the time.

At no point was it "boring"


I guess that might be a modern interpretation. But I do disagree as well. I actually prefer older films because of the pacing, and fortunately live close enough to the TIFF cinema that I can see such films every other week.


We're not talking about random people pulled off the street and asked to watch Lawrence of Arabia. We're talking about film students. So I don't see how your post is relevant at all. It's like excusing poor literature students because your brother in law struggled with Moby Dick.


I imagine a film student watching the baptism scene of Michael Corleone and thinking it is boring.


> Watch any film before 1980 and you'll start to see a pattern that the pacing and evolution of the narrative is generally very, very slow.

Star Wars, Enter the Dragon, Game of Death, Mad Max, and many Bond films are fun counterexamples.


In fairness, Lawrence's own book on which the movie is based, Seven Pillars of Wisdom, is a disjointed, rambling, and usually boring book. The high points are really good, but you slog through a lot to get there.


Technically, yes it is still burglary.

It's an odd position to take, that a crime was not committed or the offense isn't as bad if the difficulties of committing the crime have been removed or reduced.


> odd position [...] offense isn't as bad if the difficulties of committing the crime have been removed or reduced

Not really, intent is a part of the crime. If the barrier for crime is extremely small, the crime itself is less egregious.

Planning a robbery is not the same as picking up a wallet on the sidewalk. This is a feature, not a bug.


This. 1000x this.

Yes, it’s still wrong to take things, but the guy should get something like community service teaching white hat techniques. The CEO should be charged with gross negligence, fraud, and any HIPAA/medical-records laws he violated - per capita. Meaning he should face 1M+ counts of …


What does "the crime is less egregious" even mean?

Morally, you burglarized a home.

Legally, at least in CA, the charge and sentencing are equivalent.

If someone also commits a murder while burglarizing you could argue the crime is more severe, but my response would be that they've committed two crimes, and the severity of the burglary in isolation is equivalent.


Now, how do we apply that to today’s current events?

Is it still a crime if the roadblocks to commit the crime are removed? Even applauded by some? What happens when the chief of police is telling you to go out and commit said crimes?

Law and order is dictated by the ruling party. What was a crime yesterday may not be a crime today.

So if all you did was turn a key and now you’re a burglar going to prison, when the CEO of the house spent months setting up the perfect crime scene, shouldn’t the CEO at least get an accomplice charge? Insurance fraud starts the same way…


It's a common attitude with people from low-trust societies. "I'm not a scammer - I'm clever. If you don't want us to scam your system why do you make it so easy?"


The Internet is the ultimate low-trust society. Your virtual doorstep is right next to ~8 billion other peoples' doorsteps. And attributing attacks and enforcing consequences is extremely difficult and rather unusual.

When people from high-trust societies move to a low-trust society, they either adapt to their new environment and take an appropriately defensive posture or they will get robbed, scammed, etc.

Those naïfs from high-trust societies may not be morally at fault, but they must be blamed, because they aren't just putting themselves at risk. They must make at least reasonable efforts to secure the data in their custody.

It's been like this for decades. It's time to let go of our attachment to heaping all the culpability on attackers. Entities holding user data in custody must take the blame when they don't adequately secure that data, because that incentivizes an improved security posture.

And an improved security posture is the only credible path to a future with fewer and smaller data breaches.

See also: https://news.ycombinator.com/item?id=25574200


We can start by stopping the use of "posture" like you’re squirming in your seat. I’ve heard that term for the last 10 years and never has it been useful. Policy, yes; Practice, if you must; Mandate, absolutely; Governance, required.

Using posture is akin to modeling or showing off clothes, the likes of which will never see the streets. Let’s all start agreeing that the term is a rug cover for whatever security wants it to be. Without checks and balances.

If your posture is having your rear end exposed and up in public then…


It's a generic, albeit somewhat euphemistic term. I agree we could do with some better messaging. Dirty and direct is usually more effective. How about this framing?

The Internet is a dark street in rural India and your dumbass company is a pretty young white woman walking around naked and alone at 2AM. It's not your fault morally if someone rapes you, but objectively you're an idiot if you do not expect it. Now, you getting raped doesn't just hurt you; it primarily hurts people your company stores data about. Those rapists aren't going away, so we need you to take basic precautions against getting raped and we're gonna hold you accountable for doing dumb shit that predictably leads you to getting raped.

> If your posture is having your rear end exposed and up in public then…

Right, that is most companies' current security posture: Naked butt waving in the air. "Improving your security posture" is just a euphemism for "pull your pants up and put your butt down".

> Using posture is a kin to modeling or showing off clothes, the likes of which will never see the streets. Let’s all start agreeing that the term is a rug cover for whatever security wants it to be. Without checks and balances.

No, I will not agree with that; that's ridiculous. "Improve [y]our security posture" is not some magic talisman used to seize unchecked power within an organization. It's basically just the Obama Doctrine brought to computer security: "Don't do stupid shit".


“Improve [y]our security posture” absolutely is a rug cover without a definition of posture. Does that mean more monitoring? More security team members?

Posture is no replacement for a plan.

Originally it was “how we follow our plan” but that has since been thrown out the window. Now, posture is code word for cover.

I don’t mean to vent it’s just tiring having to deal with varying degrees of posturing where everyone is just haphazardly laying on a couch watching TV.


Welcome to America


Powerful.


This is surprisingly basic knowledge for ending up on the front page.

It’s a good intro, but I’d love to read more about when to know it’s time to replace my synchronous inter service http requests with a queue. What metrics should I consider and what are the trade offs. I’ve learned some answers to this question over time, but these guys are theoretically message queue experts. I’d love to learn about more things to look out for.

There are also different types of queues/exchanges and this is critical depending on the types of consumer or consumers you have. Should I use direct, fan out, etc?

The next interesting question is when should I use a stream instead of a queue, which RabbitMQ also supports.

My advice, having just migrated a set of message queues and streams from AWS (ActiveMQ) to RabbitMQ, is to think long and hard before you add one. They become a black box of sorts and are way harder to debug than simple HTTP requests.

Also, as others have pointed out, there are other important use cases for queues which come way before microservice comms. Async processing to free up servers is one. I’m surprised none of these were mentioned.


> This is surprisingly basic knowledge for ending up on the front page.

Nothing wrong with that! Hacker News has a large audience of all skill levels. Well written explainers are always good to share, even for basic concepts.


In principle, I agree, but “a message queue is… a medium through which data flows from a source system to a destination system” feels like a truism.


For me, I've realized I often cannot possibly learn something if I can't compare it to something prior first.

In this case, as another user mentioned, the decoupling use case is a great one. Instead of two processes/API directly talking, having an intermediate "buffer" process/API can save you headache
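As a toy illustration of that buffering idea (an in-process stand-in I'm adding here, not a real broker), the producer only ever touches the buffer, never the consumer:

```python
import queue
import threading

# In-process stand-in for a message queue: the producer never calls the
# consumer directly; it only appends to a bounded buffer.
buf = queue.Queue(maxsize=10)

def producer():
    for i in range(5):
        buf.put(i)      # returns as soon as the item is buffered
    buf.put(None)       # sentinel: no more work

def consumer(results):
    while (item := buf.get()) is not None:
        results.append(item * 2)   # stand-in for slow downstream work

results = []
threads = [threading.Thread(target=producer),
           threading.Thread(target=consumer, args=(results,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)   # [0, 2, 4, 6, 8]
```

A real message queue adds persistence and network transport on top, but the decoupling property is the same: the producer is unaffected by how slow (or temporarily down) the consumer is, up to the buffer size.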


To add to this,

The concept of connascence, rather than coupling, is what I find more useful for trade-off analysis.

Synchronous connascence means that you only have a single architectural quantum, under Neal Ford’s terminology.

As Ford is less religious and more respectful of real world trade offs, I find his writings more useful for real world problems.

I encourage people to check his books out and see if it is useful. It was always hard to mention connascence as it has a reputation of being ivory tower architect jargon, but in a distributed system world it is very pragmatic.


Agree! In fact, I would appreciate more well written articles explaining basic concepts on the front page of Hacker News. It is always good to revisit some basic concepts, but it is even better to relearn them. I am surprised by how often I realize that my definition of a concept is wrong or just superficial.


Also it's nice to have a set of well-written explainers for when someone asks about a concept.


This has more depth on System V/POSIX IPC, and a youtube video.

https://www.softprayog.in/programming/interprocess-communica...

Fun fact: IPC was introduced in "Columbus UNIX."

https://en.wikipedia.org/wiki/CB_UNIX


> when to know it’s time to replace my synchronous inter service http requests with a queue

I've found that once it's inconveniently long for a synchronous client side request, it's less about the performance or metrics and more about reasoning. Some things are queue shaped, or async job shaped. The worker -> main app communication pattern can even remain sync http calls or not (like callback based or something), but if you have something that has high variance in timing or is a background thing then just kick it off to workers.

I'd also say start simple and only go to Kafka or some other high dev-time overhead solution when you start seeing Redis/Rabbit stop being sufficient. Odds are you can make the simple solution work.


I think the article would be a little bit more useful to non-beginners if it included an update on the modern landscape of MQs. Are people still using apache kafka lol?

it is a fine enough article as it is though!


Kafka is a distributed log system. Yes, people use Kafka as a message queue, but it's often a wrong tool for the job, it wasn't designed for that.


> but I’d love to read more about when to know it’s time to replace my synchronous inter service http requests with a queue. What metrics should I consider and what are the trade offs. I’ve learned some answers to this question over time, but these guys are theoretically message queue experts. I’d love to learn about more things to look out for.

Not OP but I have some background on this.

An Erlang loss system is like a set of phone lines. Imagine a special call center where you have N operators, each of whom takes a call, talks for some time (serving the customer) and hangs up. Unlike many call centers, however, they don’t keep you in line: if all operators are busy, the system hangs up and you have to explicitly call again. This is somewhat similar to a server with N threads.

Let's assume N=3.

Under common mathematical assumptions (constant arrival rate, time between arrivals modeled by a Poisson distribution, exponential service time) you can define:

1) “traffic intensity” (rho) is the ratio between the arrival rate and the service rate (intuitively, how “heavy” arrivals are with respect to “departures”)

2) the blocking probability is given by the Erlang B formula (sorry, not easy to write here) for parameters N (number of threads) and rho (traffic intensity). Basically, if traffic intensity = 1 (arrival rate = service rate), the blocking probability is 6.25%. If the service rate is twice the arrival rate, this drops to approximately 1.3%. If the service rate is 1/10 of the arrival rate, the blocking probability is 73.2%.
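Since the formula is hard to typeset here, a short sketch (mine, not part of the original explanation) using the standard Erlang B recurrence reproduces these numbers:

```python
def erlang_b(servers: int, rho: float) -> float:
    """Blocking probability of an M/M/N/N loss system (Erlang B).

    Uses the numerically stable recurrence:
        B(0) = 1
        B(n) = rho * B(n-1) / (n + rho * B(n-1))
    """
    b = 1.0
    for n in range(1, servers + 1):
        b = rho * b / (n + rho * b)
    return b

# N = 3 operators, arrival rate equal to service rate (rho = 1):
print(round(erlang_b(3, 1.0), 4))   # 0.0625, i.e. 6.25%
```

The recurrence avoids the large factorials in the closed-form expression, so it stays accurate even for many servers.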

I will try to write down part 2 when I find some time.

EDIT - Adding part 2

So, let's add a buffer. We said we have three threads, right? Let's say the system can handle up to 6 requests before dropping: 1 being processed by each thread plus 3 additional buffered requests. Under the same distribution assumptions, this is known as an M/M/3/6 queue.

Some math crunching under the previous service and arrival rate scenarios:

- if the service rate equals the arrival rate, the blocking probability drops to roughly 0.2%. Of course, there is now a non-zero wait probability (close to 9%).

- if the service rate is twice the arrival rate, the blocking probability is 0.006% and the wait probability is about 1.5%.

- if the service rate is 1/10 of the arrival rate, the blocking probability is 70% and the waiting probability is 29%.
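A quick way to check these numbers is the textbook M/M/c/K steady-state distribution (my sketch, not from the comment: p_n is proportional to rho^n/n! for n <= c, and rho^n/(c! * c^(n-c)) above that):

```python
from math import factorial

def mmck_probs(c: int, K: int, rho: float) -> list[float]:
    """Steady-state probabilities p_0..p_K of an M/M/c/K queue (rho = lambda/mu)."""
    weights = [rho**n / factorial(n) if n <= c
               else rho**n / (factorial(c) * c**(n - c))
               for n in range(K + 1)]
    total = sum(weights)
    return [w / total for w in weights]

c, K = 3, 6
p = mmck_probs(c, K, 1.0)
blocking = p[K]           # arriving request is dropped: system full
waiting = sum(p[c:K])     # all servers busy, but buffer still has room
print(f"P(block) = {blocking:.4f}, P(wait) = {waiting:.4f}")
# P(block) = 0.0022, P(wait) = 0.0876
```

Plugging in rho = 10 instead gives a blocking probability around 70% and a waiting probability around 29%, matching the heavy-load scenario.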

This means that a buffer reduces request drops due to busy resources, but also introduces a waiting probability. Pretty obvious. Another obvious thing is that you need additional memory for that queue length. Assuming queue length = 3, and 1 KB messages, you need 3 KB of additional memory.

A less obvious thing is that you are adding a new component. Assuming "in series" behavior, i.e. requests cannot be processed when the buffer system is down, this decreases overall availability if the queue is not properly sized. What I mean is that, if the system crashes when more than 4 KB of memory are used by the process, but you allow queue sizes up to 3 (3 KB + 3 KB = 6 KB), availability is not 100%, because in some cases the system accepts more requests than it can actually handle.

An even less obvious thing is that things, in terms of availability, change if you consider server and buffer as having distinct "size" (memory) thresholds. Things get even more complicated if server and buffer are connected by a link which itself doesn't have 100% availability, because you also have to take into account the link unavailability.


I only really ever play one game, so that's not a blocker for me.

I would have switched by now but film and audio production software, including VSTs, don't seem to be greatly supported on Linux. I'd love to hear from someone if you are successfully doing this.


> I only really ever play one game, so that's not a blocker for me.

I play loads of games; it's mainly AAA multiplayer titles that aren't able to run on Linux due to kernel-level anti-cheat - nearly everything else runs well with minimal effort using Proton via Steam (either installed via Steam or imported as a non-Steam game).


Music production is indeed still a blocker. I used to use Windows for that; I am now on macOS for work and music (much better than Windows in every way! I use an old trashcan Mac Pro with Monterey for my studio computer) and Debian for my personal machines.


I'd say less than 0.00000001 percent of the world is in the same use case as you.

