Hacker News | kernelbandwidth's comments

Counterpoint: k8s is a bad orchestration system with bad scaling properties compared to the state machine versions (Borg, Tupperware/Twine, and presumably others). I say this as someone who has both managed k8s at scale and been a core engineer of one of the proprietary schedulers.


Assertions don't actually have to kill the program. They send an abort signal, which means you can catch them with a signal handler.

I similarly wrote a simple C testing framework, based on assertions, for a little project I was working on. The framework uses signal handlers for aborts and segfaults, and then setjmp/longjmp to resume the test suite at the next test case. This has the particularly nice effect of turning segfaults into test failures (marked as such) instead of just terminating everything. It probably wouldn't be too hard to fit custom messages in too, but I haven't felt the need yet.


I concede that being able to handle segfaults as test failures is a very nice feature; however, dealing with signal handlers is a bit more involved than simply using return values, and I believe it might cause portability issues (does it work on Windows, for instance?).

I agree that for a decent test framework it might be the best approach but I think at this point we're no longer talking about "a minimal unit testing framework" which was the point of TFA.


It would be safer to fork before the segfault, to preclude other less obvious memory errors.


Missing Stephenson's The Diamond Age was a bit of an oversight as well, considering that Snow Crash is one of the canonical Cyberpunk novels, while The Diamond Age is (IMO) a defining Post-Cyberpunk novel. The Diamond Age opens with a Cyberpunk fake protagonist written with every Cyberpunk trope in mind, and then the fake protagonist is killed off before the end of the Prologue and the real protagonist, the slain punk's baby daughter, is revealed. Stephenson's intent is clear: "This is not a cyberpunk novel."

I'd also argue that Cyberpunk does not mean "everything is terrible", nor does Post-Cyberpunk have a softer and lighter view where "not everything is terrible." Rather, the difference is whether social control is rooted in 1984 or Brave New World.

From this point of view, Ghost in the Shell and Minority Report are still Cyberpunk (edit: settings); the viewer is just seeing things from a point of view other than the completely marginalized. Shadowrun is a Cyberpunk setting whether you work for a corporate power or in the streets.


It's funny to consider that one of the canonically great philosophers in history is known essentially by the equivalent of his WWE wrestling name. It's like if in the future there were classes taught on the philosophical ideas of "The Rock".


Some other amusing related stuff: Plato was so called after the ancient Greek word for broad/wide.

Modern English words that stem from the same root: plateau, platitude, plat, plate -- via French and Latin (plattus) from Greek (platys, "flat, wide, broad").


Well, the first Pope was literally called The Rock (Peter). Jesus appointed him by saying "you are The Rock, and I'll build my church on this rock".

Exactly what he meant has led to centuries of debate between protestants and catholics.


As an adventure game lover, I think this is a great little game! I love dark, macabre adventure games and I'm really digging the art style here. The mood is very consistent and the creepiness really comes across. Your daughter has done a great job!


This is supposedly a feature and not a bug of CAPTCHA, at least according to the apocryphal story John Lafferty told me. IIRC, this was at CMU so he must have been referring to the 2003 CAPTCHA claim by von Ahn et al. The idea was primarily to stop spammers, but also secondarily to make them more useful. The argument was that if spammers managed to "break" CAPTCHA, then whatever technology was used to break it would necessarily be a useful (compared to 2003 knowledge) advance in AI, so it was a win-win whether it stopped spammers or just made them do free out-of-band research.


Yes, back in the day, CAPTCHA was a good way to use human time trying to submit forms to both block spam-bots, and do useful work (e.g. label training data). This worked well when ML was bad at image recognition, but that's no longer the case. Unfortunately, as ML got better at the tasks, CAPTCHAs forced humans to do more work, and we're now at a point where either the machines are better, or it's not worth the humans' time. If your site has a CAPTCHA, there's a good chance I'll just close the tab and move on.


If one uses Google's CAPTCHA, then most likely it already has enough data on you, so it doesn't ask anything.


Greetings, comrade! I see by your comment that you don't often use VPNs, or tor, and even limit your use of incognito/private tabs[1]. Thank you for being a good citizen! Carry on!

/s NNAlphabetPopulationSentimentBOT 0.1alpha-3zulu-beta-5-mark20

[1] We are getting better all the time at tracking activity in private/incognito tabs, but there are still some gaps particularly when users manually enable extensions like ublock in incognito mode. This hurts the user's experience on the web, and we suggest only enabling google extensions in incognito. It's incognito even without privacy-enhancing extensions. We promise. Nevermind that we know enough about you to let you bypass our captchas.


Brave is my primary browser, running under Firejail. I don't use private tabs; rather, I just delete all browser files from time to time. After that, Google's CAPTCHA does ask questions, but after a couple of times it stops.


What a ...relief?


I had to put Google's CAPTCHA on a site running a forum with about 300 daily visitors. Otherwise, the number of bots that passed email confirmation and tried to post comments with ads was 20-30 daily. It removed all the bots.


I can't find the link, but I remember someone doing some experiments a while back suggesting that the "I'm a human" checkbox just checks whether you have Google's tracking "PREF" cookie installed. I don't remember if there seemed to be device fingerprinting involved as well. In any case, if I use one of those and don't have to click on street signs, I know it's a sign I should tidy up my browser state.


For whatever reason, probably my use of AdBlock/uBlock/PrivacyBadger/Disconnect/etc, I get the captcha 100% of the time from google.


I use Brave as my browser; it blocks ads much better than extensions do. In addition, my primary search engine is DuckDuckGo. Still, most Google CAPTCHAs don't ask me questions.


In this argument, how do we keep AI from destroying the internet?


If by "the internet" you mean "the ad-supported web," then we can't, and that may not be a bad thing.


This is pretty funny.

"Did Millennials kill fashion?" one of those headlines reads.

No, I've seen what y'all did in the 80's and 90's. If Millennials killed fashion, it was a merciful death after that.


Hey! I liked my parachute pants. They were comfortable. :(


I'm not saying it was all bad, just, you know, mistakes were made.


Would you call Erlang/Elixir actor-based or just having actor-like features?

Actors in Erlang seem more emergent to me than fundamental. To first order, Go and Erlang use similar concurrency strategies, with cheap green threads and channels for communication. The main difference is that Go has channels as separate objects that can be passed around (leaving the goroutine anonymous), while Erlang fuses the channel into the process. In this respect, they both have similar levels of "actor-ness" in my mind. The biggest upside I can see to channels being separate is that a goroutine can have multiple channels, they can be sent around (though so can PIDs), etc. This matters a lot because channels in Go are statically typed, while Erlang/Elixir's dynamic typing makes this less meaningful, since you can send anything on the same channel.

Of course, I suppose if one defines an actor as a sequential threaded process with a built-in (dynamically typed) channel, then Erlang is actor-based, so maybe I contradicted myself. In Erlang/OTP, the actor part is important to the supervision-tree error handling strategy, which I think is the biggest upside, but it's not obvious to me that you need the channel and the process totally fused together to handle that.

Specifically in response to your "cheap concurrency good, actors bad" thought, my hypothesis is that actors are at least the more natural strategy in a dynamically typed language, which is why they seem to work well in Erlang/Elixir but don't see much use in languages like Haskell (or Scala, it seems, where I at least thought the Akka actors were kind of awful). Meanwhile, channels seem to fit better with static typing, though I can't quite put my finger on why.


"Would you call Erlang/Elixir actor-based or just having actor-like features?"

First, fair question, and I appreciate the nature of your follow-on discussion and musings.

IMHO, one of the lessons of Go for other languages is just how important the culture of a language can be. I say that because on a technical level, Go does basically nothing at all to "solve" concurrency. When you get down to it, it's just another threaded language, with all the concurrency issues thereto. An Erlanger is justified in looking at Go's claim to be "good at concurrency" and wondering "Uh... yeah... how?"

And the answer turns out to be the culture that Go was booted up with, more so than the technicals. When you have a culture of writing components that share via communication rather than sharing memory, and of isolating even the bits that do share memory into very small elements rather than huge conglomerations of locks, half a dozen of which must be taken very carefully to do anything, you end up with a mostly-sane concurrency experience rather than a nightmare. Technically you could have done that with C++ in the 90s; it's just that nobody did, and none of the libraries would have helped you out.

That did not directly bear on your question. I mention it because I think that while you are correct that Erlang is technically not necessarily actor-oriented, the culture is. OTP pushes you pretty heavily in the direction of actors. Where in Go a default technique for composing two bits of code is to use OO composition, in Erlang you bring them both up as actors using gen_* and wire them together.

"it's not obvious to me that you need the channel and the process totally fused together to handle that"

I can pretty much prove that they don't: https://github.com/thejerf/suture It's the process that needs monitoring, and that process may have 0-n ways to communicate. But that's not a criticism of Erlang, as I think that's actually what it does and it just happens to have a fused message box per process.

"my hypothesis is that actors are at least the more natural strategy in a dynamically typed language, which is why they seem to work well in Erlang/Elixir, but don't see much use in languages like Haskell"

An intriguing hypothesis I'll have to consider. Thank you.


I had not really considered the design patterns of the culture vs the design patterns of the language; this is a very good point.

> I can pretty much prove that they don't: https://github.com/thejerf/suture It's the process that needs monitoring, and that process may have 0-n ways to communicate. But that's not a criticism of Erlang, as I think that's actually what it does and it just happens to have a fused message box per process.

One, very neat library. Two, while I agree this proves the point that the actor model is not needed in the language to build a process supervisor, I think your Go Supervisor looks a lot like an actor, at least in the way Erlang/Elixir uses them. From what I can see, the Supervisor itself works by looping over channel receives and acting on them. The behavior of the Supervisor lives in a separate goroutine, and you pass around an object that sends messages to this inner behavior loop via some held channels. So basically the object methods provide a client API and the inner goroutine plays the role of a server, in the same separation of responsibilities that gen_* uses.

If we squint a little bit, actors actually look a lot like regular objects with a couple of specific restrictions: all method calls are automatically synchronized at the call/return boundaries (in Go, this is handled explicitly by the channel boundaries instead), no shared memory is allowed, and data fields are always private. I'm sure this wouldn't pass a formal description, but this seems like a pragmatically useful form.

I agree that Go is less actor-oriented than Erlang/Elixir, but given how often I've seen the pattern you used in the Supervisor (and it's one I have also naturally used when writing Go), I'd argue that "Actor" is a major Go design pattern, even if it doesn't go by that name. The difference, then, is how often one pulls out the design pattern. I think the FP aspect pushes Erlang/Elixir in that direction more, as this "Actor" pattern has a second function there -- providing mutable state -- that Go allows more freely.

This discussion has really made me think, thanks. I think you're right that actor-like features are valuable and that the Actor Model in the everything-is-an-actor is not itself the value (or even a positive).


The request for use cases in Go seems a bit like begging the question to me. Since Go doesn't have generics, anything designed in Go will necessarily take this into account and be designed around the lack. So it's relatively easy to show that Go doesn't have a compelling use case for generics, since the designs implemented in Go wouldn't (usually) benefit from generics!

Rust has generics and traits/typeclasses, and the result is my Rust code uses those features extensively and the presence of those features greatly influences the designs. Similarly, when I write Java, I design with inheritance in mind. I would have trouble showing real world use cases for inheritance in my Rust code, because Rust doesn't have inheritance and so the designs don't use inheritance-based patterns.

Essentially, how does one provide real world evidence for the utility of something that's only hypothetical? You can't write working programs in hypothetical Go 2-with-generics, so examples are always going to be hypothetical or drawn from other languages where those tools do exist.


It reminds me of the old urban planning line "You can't decide on the necessity of a bridge based on the number of people who swim across the river".

Users who have a heavy need for generics have already moved away from Go.


This suggests that both generics and inheritance are unnecessary.


And if you look at C, you'll see that interfaces and struct methods are also unnecessary, GC is also unnecessary, and bounds-checked arrays are also unnecessary.

The question is: do you want to write code that is type safe, memory safe, and bounds-checked, or not? Assembly makes all that stuff unnecessary as well. This is not a good argument, especially when the Go std lib is getting all these type-unsafe APIs using interface{} everywhere. That is precisely what generics are for: to allow writing parametric functions (sort) or type-safe containers, instead of sync.Map like in the std lib.

If you care about type safety and code re-use, then generic programming is a necessity. What do you think append, copy, and delete are? They are generic functions. All people are asking for is the ability to define their own in a type-safe fashion.

Are these use cases Russ Cox doesn't know exist?


What's unsafe about using one of the C libraries that do bounds checking, etc?

The argument about assembly is a red herring. Let's compare:

1) In addition to writing your business functions, learn these additional control structures to obtain safety

2) When writing your business functions, also use these well reviewed functions that enforce type and memory safety when moving code across interfaces.

3) Port all of your code to assembly, write your own memory safety control structures from scratch.

You see the difference? The choice in your mind is between "add control structures to the language" and "do everything painstakingly by hand", but we are advocating a third option: use well-reviewed libraries written with only the basic control structures.

The reason I am advocating that is the more control structures you have the harder it is to analyze code. You end up slowly moving your codebase to a point where only people with deep deep knowledge of an advanced programming language can read it.

I think culturally, in 2017, programmers underestimate how much can be done with just functions and literals and well-written helpers.

The reason for that is we are rewarded (psychologically and professionally) for learning new control structures, but not as often for writing better code using the beginner structures.


Assembly demonstrates that so are loops, functions[0], and named variables; you can do everything using addresses, offsets, and jumps.

They sure are useful to readability and maintainability.

[0] which can obviously be used to replace looping constructs anyway


Thanks to Turing equivalence, all programming languages are unnecessary. We should all go back to writing machine code.


It only suggests you can't easily give an example because the language is forcing a design where such things aren't needed. Sort of like linguistic relativity.


Which still proves the parent's point: it's not necessary. The question of what absolutely can't be done without them has an obvious answer (probably nothing), so a better question is how much design/engineering/test time could be saved with them.

On the latter part I'm fairly cynical these days, since I'm presently on a team where the lead embraced a Scala-DSL-heavy, functional design, and the net result has been a total loss of project velocity: when we need to prototype a new requirement, the pushback has been (paraphrasing) "oh, could you not do that in <core product> and handle it somewhere else?" -- so we end up with a bunch of shell scripts cobbled together to do so.


> what absolutely can't be done without them

Of course there's nothing that can't be done without them. With or without generics or inheritance, the language is still Turing complete.


This is so very true. I find Scala tends toward the classic "write-only" joke: you can write very powerful code, but it's often hard to debug, and it tends to be quite difficult to read down the line. We have a Scala application that is being slowly replaced, but which I maintain in the meantime, and it is by far my least favorite job duty, even though I theoretically like Scala more than, say, Java.

Probably my biggest disappointment with Scala is really the type system. It's incredibly powerful, but it's unwieldy to use, and it's still full of footguns. "object" is great and useful until someone accidentally turns it into a giant race condition (true story). Traits can easily turn into a maintenance nightmare (ever seen an 8-way multiple inheritance scheme that fans out into 23 separate ancestor classes? I have!). Exceptions end up in a strange territory. There are ways to handle all of this (much of it involves "don't do that!"), but the point is that for all the type system's power, it's not actually buying you that much except in comparison to Java 6/7. If I want powerful type magic, I can use Rust or Haskell or similar and get stronger guarantees from my compiler. If you're writing "better Java", which is traditionally one of the more common Scala uses, it's harder to justify over Java 8 or Kotlin these days.

All of which is to say, Scala is a really cool language, but I wouldn't write new projects in it if I didn't have to.

Bonus round: We had a Scala iOS app via RoboVM before RoboVM support was yanked. It still gives me nightmares.


Yeah, all that pretty much tracks for me. I think the best way I've heard it described is thus: you can write Perl in any language, but it's really hard to write Scala in anything else.

I wrote an Android Scala app once. I regret the experience.

