The shrapnel damage is pretty much established: there are multiple videos from inside the plane during the flight, prior to the crash, in which you can see shrapnel damage on the fuselage. The survivors also confirm it, and report a bang (which might have been mistaken for a bird strike).
The question is, who shot the plane? This part is pure speculation at this point.
> The question is, who shot the plane? This part is pure speculation at this point.
It doesn't seem too difficult to put together what likely happened. Grozny was under active drone attack at the time, with air defenses working. And Russian air-defense crews are pretty infamous for their itchy trigger fingers at this point.
And notably, the "drones" were civilian propeller aircraft fitted out to fly an unmanned suicide trajectory. I'm not sure they would even look all that different from a small jet like this on a SAM operator's screen.
The airliner has a transponder and a radio. Pretty sure the drone does not.
The transponder code, assigned by ATC, would identify the aircraft as a civilian airliner when it checks in, and on the screens of the SAM operators.
Also, the speed and altitude of the airliner, even approaching Grozny, would not be the same as a drone's. Airliners, even on approach, are somewhat faster, probably 200-250 mph or more, and much higher in altitude, at least 5,000 ft and probably more like 10,000 ft until close to the airport.
Out of curiosity, why wouldn't a hostile power also put a transponder on their drone (maybe even one replaying a nearby plane's code)? Surely that could help it blend in and avoid defenses.
More importantly, it's not uncommon when crossing Air Route Traffic Control Center (ARTCC) regions (e.g. from Washington Center to New York Center) for controllers to instruct pilots to change squawk codes. The same applies when crossing from one country's airspace to another.
One of these drones would be unable to change transponder codes in flight and talk with controllers via relay without a bunch of extra avionics; adding that capability would probably double the cost of the drone, or at least significantly increase it.
So even doing something creative, like spoofing the squawk code of another aircraft, probably wouldn't help.
Also, Mode C and Mode S transponders (the latter used with ADS-B, which feeds all the flight-tracking websites) transmit altitude.
A SAM operator will figure out fairly quickly that an aircraft is spoofing its transponder if it is supposed to be at 10,000 ft and 250 mph but, according to primary radar tracking, is much lower and slower.
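The cross-check described above can be sketched in code. This is purely illustrative: the function name and the tolerance values are my assumptions, not anything from a real air-defense system.

```python
# Hypothetical plausibility check: does the transponder's reported state
# roughly match what primary radar actually sees? Tolerances are assumed
# for illustration only.

def transponder_plausible(reported_alt_ft, reported_speed_mph,
                          radar_alt_ft, radar_speed_mph,
                          alt_tolerance_ft=2000, speed_tolerance_mph=50):
    """Return True if the transponder's claims agree with the primary
    radar track; False suggests the transponder is being spoofed."""
    alt_ok = abs(reported_alt_ft - radar_alt_ft) <= alt_tolerance_ft
    speed_ok = abs(reported_speed_mph - radar_speed_mph) <= speed_tolerance_mph
    return alt_ok and speed_ok

# A "drone" squawking like an airliner at 10,000 ft / 250 mph, while
# radar sees it at 1,500 ft and 90 mph, fails the check:
print(transponder_plausible(10_000, 250, 1_500, 90))   # False
# A real airliner roughly matching its reported state passes:
print(transponder_plausible(10_000, 250, 9_400, 230))  # True
```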
A regional jet on approach and a prop aircraft in cruise don't necessarily look that different in ground speeds, altitudes, or even radar cross section to most radars.
This is Russia's war coming home to roost. They had better admit that they are engaged in an actual war, and stop allowing civilian aircraft into areas that are attacked frequently.
Secondary surveillance radar does not depend on tracked aircraft knowing their own position, though.
ADS-B is an augmentation of that, which makes receivers simpler, but ATC generally does not rely on it exclusively (except in some very remote regions), nor on any type of active/cooperative signal or response – if everything else fails (maliciously or accidentally), there's usually primary radar as well.
These "drones" are more like enclosed ultralights, heavily loaded, doing 50-80 mph, a speed at which an airliner would have already stalled and be dropping out of the sky.
Ukraine is modifying a large variety of smaller aircraft to be suicide drones. Yes, A-22s/A-33s are used which cruise at like 100-120MPH or so (though there's been some talk of turboprop conversions of the same, too). But other small civilian aircraft which cruise at more like 160MPH have been employed, versus a late approach speed of the Embraer of 180-190MPH or so.
And remember, radars report groundspeed, which can easily vary by +/- 25 mph from actual (and will be reading the Embraer's speed on the low side).
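A quick back-of-the-envelope check of that claim, using the speeds quoted in this thread (the 160 mph and 180-190 mph figures above; the interval logic is my own illustration):

```python
# With a +/- 25 mph radar groundspeed error, do a fast converted prop
# aircraft and a regional jet on late approach overlap in speed?

RADAR_ERR_MPH = 25  # assumed radar groundspeed uncertainty

def radar_band(true_speed_mph):
    """Range of groundspeeds the radar might plausibly report."""
    return (true_speed_mph - RADAR_ERR_MPH, true_speed_mph + RADAR_ERR_MPH)

def bands_overlap(a, b):
    """True if two (low, high) intervals intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

drone = radar_band(160)  # faster converted civilian prop aircraft
jet = radar_band(185)    # Embraer on late approach, low end of 180-190

print(bands_overlap(drone, jet))  # True: indistinguishable by speed alone
```

A slow 50-80 mph ultralight, by contrast, would fall well outside the jet's band, which is the previous commenter's point.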
It's one of the reasons Russia was very hesitant to shoot them down initially. Some of the planes were Cessnas and similar single-engine prop planes, loaded with explosives and remotely controlled:
My brother is flying to Japan tomorrow and we were talking about how the flight has gotten longer because of all the active war zones that they can't fly over any more.
Itavia 870 comes to mind: in the '80s, a fighter jet being smuggled to Gaddafi hid under a civilian aircraft, and the speculation is that France shot at the pair, killing 81. https://en.wikipedia.org/wiki/Itavia_Flight_870
That’s what I understood from a documentary, I think from Al Jazeera: a civilian pointed out that there was a double radar echo from the plane, meaning there were probably two planes, one on top of the other.
Sure, the article mentions a theory that Gaddafi was in a fighter jet and the civilian plane was collateral damage from a dogfight, but given the lack of information in the public space, both theories still have a lot in common.
The aircraft was sent away / prohibited from landing after reporting the explosion over Grozny, and had to divert to Kazakhstan to land without real pitch control.
The pilots probably should have ignored that. There’s no such thing as “denying landing clearance” to an aircraft experiencing an emergency. There’s only “get out of the way of the incoming emergency.” But I can’t blame them for not wanting to take their chances when they were just shot by local air defense.
The more obvious explanation would be that air traffic controllers surmised that it was at risk of being shot at again if it continued attempting a landing in Grozny, and that the safest thing to do was to divert it out of Russia.
Regional air traffic control is in Rostov-on-Don - you’d think they’d at least be able to get the military controllers at Rostov (Southern Military District HQ) on the horn?
Assuming they have an established channel of communication, yes, they would have. But imagine trying to communicate which blip on the radar screen is an actual civilian aircraft, and hoping they're able to track it and make sure they fire only on other targets.
In particular, some of the damage has a "linear" quality that could plausibly come from a continuous-rod warhead, which would be typical for a system like the Pantsir. If, as the Russians claim, it was a bird strike, you wouldn't expect debris from the failure of the engine to make a pattern like that on the tail, unless the entire engine body broke apart.
Likewise, it didn't look like it was on a glide path; rather, as discussed, the hydraulics failed and they had to fly the plane with engine thrust alone. Obviously, for very fine corrections on final approach that becomes difficult, and the result was what we saw. All of that is consistent with a missile.
Hedge funds are supposed to be uncorrelated, but I don't think many long-only equity funds make that claim. They just count it a win if they do better than the market (which often happens, but seldom continues).
Then there are the financial advisors. Last time I talked to one, the selling point was nothing but "trust us, we have a team of Ph.Ds."
One could equally say that it's mainly the mediocre people who are excited, while the smart people actually see the limitations. For people doing non-trivial things, the output of ChatGPT currently seems rather far from being of any deep value. It may replace some mundane tasks, yes, but mundane tasks are anyway just for the mediocre.
> but mundane tasks are anyway just for the mediocre.
So when you’re working on something you just let other people take care of the inevitable mundane tasks that arise in daily knowledge work? I’m guessing you’ve never emptied the garbage can at the office either as that would be unworthy of your eminent intellect.
Not sure that I would recommend actually hiring that guy. He does not seem to understand that modelling is only one part of the equation. If you model something non-trivial, you usually cannot just hand over your R scripts to someone else and say: please implement this in Python/C. You have implementation constraints, like latency or scalability, which feed back into the modelling itself. Furthermore, good luck letting someone translate your non-trivial math into another language and hoping they won't break it in some subtle or non-subtle way. It's just far more efficient to have someone who can do the prototype directly in Python or C, and then let a professional developer optimize specific parts of the code.
We might go back to this model over time. I guess it is no secret that academia is in strong decline. A lot of excellent people just don't bother to enter that circus anymore. They might do their own independent research after they have succeeded financially.
It is possible to do surprising amounts on a shoestring budget. I've seen cheap (sub-$1k) electron microscopes go up for sale, as well as a ton of other lab equipment.
At its start, Quanta Magazine was a good read. But it degraded very quickly once they started using it mainly to put scientists from under-represented groups into the spotlight. Nowadays it's not worth any attention.
I am not sure you aren't trading "high human efficiency" for an increased risk of blowing up at some point. Good luck doing forecasting without a thorough understanding of priors and statistics in general.
Agreed, I see the "lower barrier to entry" in this particular case as coming with potentially huge risks. IMO, statistics is vastly, vastly, vastly under-appreciated and under-estimated.
I think that term already has usage as a proxy for "lowest sampling variance"; for example the Gauss Markov theorem shows that OLS is the most efficient unbiased linear estimator.
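For reference, the Gauss-Markov statement being invoked here, in its standard textbook form (my paraphrase, not from the thread):

```latex
% Standard Gauss-Markov setup: linear model with uncorrelated,
% homoskedastic, zero-mean errors.
y = X\beta + \varepsilon, \qquad
\mathbb{E}[\varepsilon] = 0, \qquad
\operatorname{Var}(\varepsilon) = \sigma^2 I .

% The OLS estimator
\hat{\beta}_{\mathrm{OLS}} = (X^\top X)^{-1} X^\top y

% is BLUE (best linear unbiased estimator): for every other linear
% unbiased estimator \tilde{\beta},
\operatorname{Var}(\tilde{\beta}) - \operatorname{Var}(\hat{\beta}_{\mathrm{OLS}}) \succeq 0 ,

% i.e. OLS has the lowest sampling variance in that class, which is the
% sense of "most efficient" used above.
```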
I guess this is echoing your point 2, but I would generally say that "principled" statistical models are less efficient these days than DL (see: HMC being much slower than variational Bayes). Priors are usually overrated, but I think the bigger risk is that basic mistakes are made because people don't understand what assumptions go into "basic" machine-learning ideas like train/test splits or model selection. I'm not sure it warrants a lot of panic, though.
To get admitted to the University of Tokyo, you have to score at the top of the common national university entrance examination, and then at the top of the University of Tokyo's own entrance examination. So only the best of the best in Japan get admitted. Highly competitive, as a degree from that university sets you up for life.
I have been at a university with that much competition (except that I enrolled back when the competition was less fierce, and observed this as a teaching assistant), and it is very common for the competition to last only until matriculation, because university curricula in those countries are more lenient. I would not expect most CS students at the University of Tokyo to be able to do the same.
Just 'cause you're smart doesn't mean you know how to design CPUs (or write C compilers). And if the national university entrance exam is testing for VLSI design domain knowledge, they're doing something wrong.
It sounds like UTokyo is like MIT. My first test there covered material that wasn't in the book and wasn't covered in the lectures. When I asked my instructor about it, he said, "Oh, you're supposed to know to do research outside of class." And when I asked my classmates, they said, "Oh, yeah. My fraternity maintains a file of past tests for each instructor. You can get a good idea of what's going to be on various tests from reviewing the files. And if you don't understand something, you can get a frat brother to tutor you on it."
(As an aside... one of the frats had THE EXACT SAME TEST I had just taken. The prof didn't even change the problems. sigh)
So... my experience with MIT was that it's a place where bright kids go to get taught by upper-classmen and the classes are there only to prove you have some sort of mastery, or at least familiarity, with the material.
At Xmas that year I was bemoaning this fact to a friend from high school. His father was a physics prof at the local state university and overheard my complaints. He offered to give me a place in the lab if I xferred over. I took him up on his offer and wound up with a desk in the undergrad office, an account on the departmental VAX (this was a big thing back in the day) and a key to the physics building and the optics lab. I could drop in to my professor's offices virtually any time and most of them were excellent in explaining the finer points of quantum chromodynamics or math methods. I was recruited to be on the "let's build a super-cheap STM" team and landed a scholarship award for my work. After taking a hiatus to defend democracy I returned and landed a part-time gig at the Superconducting Super-Collider. That year I got my own MicroVAX. (Thank you, congress for all that SSC money.) But, of course, it didn't last. (Curse you, congress for taking all that SSC money away.)
My point may be that smart kids will do well at whatever university they attend. Also, I think I did MUCH better at the local state school than I would have at MIT. (Though I didn't actually graduate w/ a Physics B.S. as I planned. IBM hired me before I graduated and it was a PAIN IN THE ** to matriculate with even a B.A.)
Also. Lori Glaze (NASA director of Planetary Science) was a class-mate of mine, so... you know... it couldn't have been THAT bad of a school.
Now... as a software development manager, the thing that REALLY impressed me was the ability of a group of kids to effectively work together. The United States pumps out a lot of very bright CS grads, and our national mythos of "the rugged individual" helps in some ways, but I'm going to guess most CS students aren't getting a lot of experience in groups larger than 2 or 3 people. When I hire recent grads from US schools, the most important thing we have to teach them is how to play well with others. Sounds like the UTokyo team already learned this lesson.
It’s about being able to get in; getting out is irrelevant. People like me are screwed for life because we didn't get into any good schools in the first place.
The problem is, there is a big divide in the user base: for a normal user, 8 GB is mostly sufficient. If you have a lot of Chrome tabs open, 16 GB makes your day. But if you are any sort of a creator, you cannot have enough ... my 256 GB is barely enough.
8 gigabytes is not enough to run Windows. It's OK for an iPad, but any Windows computer with 8 GB of RAM is essentially unusable and will crawl to a halt the second you open a web browser.
We use uBlock Origin and enable compressed RAM as virtual swap everywhere.
Chrom*-based browsers have switches optimized for low-end machines, starting with --light; that's it, append that parameter to your desktop shortcut and things will speed up a bit.
Using the web today without uBO is suicidal.
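For the "compressed RAM as virtual swap" part, here is one way to set it up manually on Linux with zram. This is a sketch only: the compression algorithm, the 4G size, and the priority value are my assumptions, and many distros ship a packaged equivalent (e.g. zram-generator or zram-tools) that does this for you.

```shell
# Minimal manual zram swap setup (assumed sizes/values; adjust to taste).
sudo modprobe zram                                   # load the zram module
echo zstd | sudo tee /sys/block/zram0/comp_algorithm # pick compression algo
echo 4G   | sudo tee /sys/block/zram0/disksize       # size of compressed device
sudo mkswap /dev/zram0                               # format it as swap
sudo swapon -p 100 /dev/zram0                        # high priority: used before disk swap
swapon --show                                        # verify the device is active
```

Swapping to compressed RAM is far faster than swapping to disk, which is why it helps exactly the "too many Chrome tabs" case discussed above.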
By now, the good students and post-docs know what is going on: the system is already so broken that there is a high chance you end up with a PI who got the job because he/she/* is good at making friends with some VIPs. The actual research is what you are hired for; have fun turning the "great vision" of these PIs into something that makes at least a bit of sense. And only if you are likewise the type to make friends with VIPs will you have a chance at a tenure track.