No, it's not right. When put in context, the quote claims that that manner of speaking is used because the speaker has an unwarranted belief that they've done something absolutely incredible and unprecedented. In actuality, the manner of speaking is being used because the intended audience of the article is likely to have little-to-no knowledge of the technical details of what the speaker is talking about.
For example, if the article were aimed at folks familiar with the underlying techniques, the last two paragraphs of the "Enforcing Determinism" section could be compressed to something like this: [0]
Each FCM is time-synced and runs a realtime OS. Failures to meet processing deadlines (or excessive clock drift) reset the FCM. Each FCM uses triply-redundant RAM and NICs. *All* components use ECC RAM. Any failures of these components reset the FCM or other affected component.
But you can't assume that a fairly nontechnical audience will understand all that, so your explanation grows long because of all the basic information it has to contain. People looking for an excuse to sneer will often misinterpret this as the speaker failing to recognize that the information they're providing is basic.
[0] I'm assuming that the time being wildly out of sync indicates FCM failure and triggers a reset.[1] I'm also assuming that a sufficiently large failure of a network switch results in the reset of that switch. If the article were intended for a more technical audience, that level of detail might have been included, but it wasn't, so it isn't.
[1] If it didn't, why even bother syncing the time? I find it a little hard to believe that the FCMs care about anything other than elapsed time, so all you really care about is whether they're all ticking at the same rate. I expect the way you detect this is by checking for time sync across the FCMs, correcting minor drift, and resetting FCMs with major drift.
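A sketch of the drift policy I'm imagining, in TypeScript; the thresholds and names are entirely made up, since the article doesn't specify any of this:

```typescript
// Hypothetical drift-handling policy: compare one FCM's clock against
// the median of all FCM clocks, slew it on minor drift, reset it on
// major drift. All thresholds and names are invented for illustration.

const CORRECT_THRESHOLD_MS = 5;  // minor drift: correct the clock
const RESET_THRESHOLD_MS = 50;   // major drift: reset the FCM

type Action = "ok" | "correct" | "reset";

function driftAction(clockMs: number, allClocksMs: number[]): Action {
  const sorted = [...allClocksMs].sort((a, b) => a - b);
  const median = sorted[Math.floor(sorted.length / 2)];
  const drift = Math.abs(clockMs - median);
  if (drift >= RESET_THRESHOLD_MS) return "reset";
  if (drift >= CORRECT_THRESHOLD_MS) return "correct";
  return "ok";
}
```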
Which works out to $100 USD/year. You might think that's trivial, but when you start provisioning multiple environments across multiple projects, it starts to add up.
It's a shame that Google hasn't managed to come up with a scale-to-zero option or a compatible serverless alternative.
Sheet Ninja is 108 USD/year with tiny capacities on every metric. SQLite is free and would stomp this in every respect on low-budget hosting. Even a tiny API that stores CSVs would be orders of magnitude more efficient.
But what would scare me the most is that Google can easily shut this thing down.
It is trivial to set up a database on GCP if you know what you're doing, and I would pay Google for that stability and for support in setting up multi-tenancy and multiple regions.
Using Google Sheets as a backend will just cause them to start charging everyone for it later.
Sheet Ninja isn't free. Even on their side, "free" does not mean what you think it means.
Set up a DB project and use the same Cloud SQL instance for all DBs. I did that for years on non-prod and experimental projects.
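For reference, that setup is only a couple of commands; a sketch with made-up instance and database names (check the current gcloud docs before relying on exact flags or tiers):

```shell
# One shared Cloud SQL instance for all non-prod databases.
# Instance name, tier, and region are placeholders.
gcloud sql instances create shared-dev \
  --database-version=POSTGRES_15 \
  --tier=db-f1-micro \
  --region=us-central1

# One database per project on the same instance.
gcloud sql databases create project-a --instance=shared-dev
gcloud sql databases create project-b --instance=shared-dev
```

The shared-core tier keeps the cost of the whole setup close to the single-instance price discussed above, at the expense of isolation between projects.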
$100 is a bargain for what you get in terms of resiliency
Unless things have improved, it's also hideously slow: trivial queries on a small table take tens of milliseconds. Though I guess that if the alternative is Google Sheets, that's not really a concern.
When I first saw ads for a "lose weight with a magic drug in a syringe" in the NY subway, I thought it was some intelligent, sarcastic public-awareness campaign.
Then I realized the US is selling a strong, sometimes dangerous diabetes drug as an "eat more, weigh less" spell. I'm beyond amazed! Freedom is awesome.
It sums up the nature of humans today nicely: we are ready to pay any amount once, but when it switches to a subscription we won't pay even a tenth of the one-time price.
Thanks for publicly confirming that there are a lot more of us.
I'm just tired of hearing how great new ideas will save our overblown pseudo-microservice architectures, and in the evenings I keep running into projects that just solve problems without using state-of-the-art tech or unnecessary solutions and architectures.
I'm not into RoR, because I was mainly a PHP rescuer at the beginning of my career, but both are just problem solvers: sit down, write minimal (in PHP's case, not-so-cool-looking) code, and proceed to the next task.
I've just started using RoR for a live greenfield project since New Year.
Honestly, it's a breath of fresh air.
It's the closest I've come to that old-school "in the box" desktop development experience you used to get building desktop software with Visual Studio, IntelliJ IDEA, NetBeans, Eclipse, or any of the other IDEs of the 90s/00s (I never used Delphi or VB, but I imagine they were even more so in that sense than the ones I've listed), only it's web development.
For me web development has always felt like a frustrating ordeal of keeping track of 10,000 moving parts that add noise and cognitive load and distract you from fixing the actual problems you're interested in solving. This means the baseline ancillary workload is always frustratingly high. I.e., there's too much yak-shaving.
Whereas Rails seems to drag that all the way down to a level where it feels more similar to the minimal yak-shaving needed to (at least superficially) build, run, and distribute desktop software. Not that this is without its challenges, because every deployment environment is a little different in the desktop world, but the day-to-day developer experience is much lower friction than modern web development in general.
Also, no sodding TypeScript to deal with. I hate TypeScript: an ugly, verbose, boilerplatey abomination that takes one of the nicest and most fun features of JavaScript (duck typing) and simply bins it off. Awful.
TS doesn't "bin off" duck typing; it's a fundamentally structural type system. It's statically analyzed ducks all the way down, and when nominal behavior is preferred, people have to bend over backwards. Either you are using the wrong vocabulary or I don't think you've bothered to actually learn Typescript. In any case, it's the programming language that successfully brought high-level type-system concepts like type algebra and conditional types to their widest audience ever, and it deserves a ton of credit for that. The idea that JS, Ruby, Python, and PHP developers would be having fairly deep conversations about how best to model data in a type system was laughable not that long ago.
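A quick illustration of what "statically analyzed ducks" means in practice: TypeScript accepts any value with the right shape, with no declared relationship required.

```typescript
// TypeScript's type system is structural: any value with the right
// shape is accepted, regardless of how it was declared.
interface Quacker {
  quack(): string;
}

function makeItQuack(d: Quacker): string {
  return d.quack();
}

// This object was never declared as a Quacker, but it has the right
// shape (plus an extra property), so it type-checks.
const rubberDuck = { quack: () => "squeak", floats: true };

console.log(makeItQuack(rubberDuck)); // "squeak"
```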
> Either you are using the wrong vocabulary or I don't think you've bothered to actually learn Typescript.
All right, fine: TypeScript uses structural typing, which is, if you like, a specialisation of duck typing, but whatever: compared with JS's unadorned duck typing it still leads to embellishment of the resulting code in ways that I don't enjoy.
I've been using TypeScript across different projects at different companies since 2013 and I've absolutely given it an honest go... but I just don't like it. I even allowed its use at a mid-size company where I was CTO because it fit well with React and a sensible person picks their battles, but I still didn't like it.
I'm now in the very privileged position where I don't have to use it, and I don't even have to allow it a foot in the door.
Now I'm sure that won't last forever, and I'll have to work with TypeScript again. I'll do it, because I'm a professional, but I'm still entitled to an opinion, and that opinion remains that I don't like the language. After 13 years of use I feel pretty confident my opinion has settled and is unlikely to change: I find it deeply unenjoyable to work with. But on the plus side, in the era of LLMs perhaps I no longer need to worry so much about having to deal with it directly when it eventually does impinge upon my professional life again.
I found that it doesn't just lead to embellishment: (1) the problems it did flag mostly would be caught by minimal testing, whereas (2) it regularly missed deeper problems. An example of the latter: with TanStack Query (React Query) API caching, you have different data shapes for infinite scroll vs. non-infinite scroll. There were circumstances where an app confused them, and TypeScript had nothing to say. Nominal typing easily handles these cases and, in my experience, caught more actual problems.
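For what it's worth, the usual TS workaround for exactly this is "branded" types, which approximate nominal typing; a sketch with invented names (this is not TanStack's actual API):

```typescript
// Branded types: a phantom field makes two structurally identical
// shapes nominally distinct, so the compiler can tell them apart.
// All names here are hypothetical.
type Brand<T, B extends string> = T & { readonly __brand: B };

type PageResult = Brand<{ rows: string[] }, "single-page">;
type InfinitePageResult = Brand<{ rows: string[] }, "infinite">;

function renderSinglePage(r: PageResult): number {
  return r.rows.length;
}

const page = { rows: ["a", "b"] } as PageResult;

// renderSinglePage({ rows: ["a"] } as InfinitePageResult);
//   ^ compile-time error: the brands don't match, even though the
//     runtime shapes are identical.
console.log(renderSinglePage(page)); // 2
```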
> the problems it did flag mostly would be caught by minimal testing
Testing is more expensive up front and in maintenance than type annotations. A test suite comprehensive enough to replace type annotations would have an ass load of assertions that just asserted variable type; if you were involved in early pre-TS Node, you remember those test suites and how they basically made code immutable.
> (2) it regularly missed deeper problems
This is a skill issue. If your types do not match runtime behavior and you choose to blame the programming language rather than your usage of it, that's on you. There are a lot of unsafe edges to TS, but a diligent and disciplined engineer can keep them isolated and wrapped in safe blocks. Turn off `any`, turn on all the maximal strictness checks, and see if your code still passes, because if what you said about infinite scroll is true, it won't.
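Concretely, that means something like the following in tsconfig.json (these are real compiler options; the exact set is a matter of taste):

```json
{
  "compilerOptions": {
    "strict": true,                   // enables noImplicitAny, strictNullChecks, etc.
    "noUncheckedIndexedAccess": true, // indexing may yield undefined; handle it
    "noImplicitOverride": true,
    "exactOptionalPropertyTypes": true
  }
}
```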
And one of its many problems is that it tries quite hard to pass itself off as not having those unsafe edges which, ironically, makes it easier to get tripped up by them.
> the problems it did flag mostly would be caught by minimal testing
Yeah, I agree, and the thing is, you're going to write automated tests whether you're developing in JavaScript or TypeScript, so the extra cruft of TypeScript seems even less worthwhile.
The argument I've heard people put forward is that JS is fine for small projects or a couple of developers but doesn't scale to large projects or teams. I don't know how large large is, but I've worked on a project with around 30 devs where the front-end was all JavaScript and everyone was touching it and, sure, that project had some problems, but the root cause of those problems wasn't using JavaScript.
But we need contracts that go way further than what static typing provides. If they add dependent types plus the ability to enforce the types at runtime, so that you can use them on arbitrary inputs, then maybe it will be truly useful.
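TypeScript's types are erased at runtime, so the usual approximation of runtime enforcement today is a user-defined type guard (or a schema library like Zod that generates one). A hand-rolled sketch with invented names:

```typescript
// A type guard validates an unknown value at runtime and, on success,
// narrows its static type. Names here are hypothetical.
type User = { name: string; age: number };

function isUser(value: unknown): value is User {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as Record<string, unknown>).name === "string" &&
    typeof (value as Record<string, unknown>).age === "number"
  );
}

const input: unknown = JSON.parse('{"name": "Ada", "age": 36}');
if (isUser(input)) {
  // Inside this branch, `input` is statically typed as User.
  console.log(input.name.toUpperCase()); // "ADA"
}
```

This is still a far cry from dependent types, since nothing checks that the guard's body actually matches the type it claims to verify.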
You are absolutely not alone, brother (or sister). In the past few years, as a lot of the millennial generation started getting their first jobs, shiny-object syndrome took over, and almost everything being made now is a distributed-monolith pile of spaghetti-code trash.
There are endless tools available, and quick internet dopamine feedback loops, but almost no wisdom.
Give it a few more years and more inflation, and the remaining 35% of millennials will get out there to find their first jobs, and then the impact will be even worse.