Are the black areas in this picture [0] residue or destruction from removing the heatspreader? What about the large areas that are not as dark? Is that residue from the solder/thermal paste?
I'm especially curious because the die shot from AMD's Ryzen has smaller but kind of similar looking areas. [1] But there they look too uniform to be damage.
It's a flip-chip design, so the heatspreader and thermal paste do not touch the part you are seeing in these images. The areas I think you are referring to (the big "blob" areas) are computer-placed-and-routed standard cells (primarily transistors), which the PNR tool brute-forces into an efficient placement under certain constraints. While it does not produce the prettiest or cleanest design, it does a good enough job optimizing for speed and area for most designs.
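To give a feel for what "brute forces efficient placement under constraints" means, here's a toy sketch of placement by simulated annealing: a handful of cells get shuffled around a grid to minimize total wirelength. This is purely illustrative (the net topology, grid size, and cooling schedule are made up); a real PNR tool also handles timing, congestion, and legalization.

```python
import random, math

# Toy standard-cell placement: minimize total wirelength of a few
# 2-pin nets on a small grid, using simulated annealing.

random.seed(0)
GRID = 8
cells = list(range(10))                        # ten "standard cells"
nets = [(i, (i + 1) % 10) for i in range(10)]  # a ring of 2-pin nets

# random initial placement: cell -> (x, y), no two cells on one site
sites = random.sample([(x, y) for x in range(GRID) for y in range(GRID)],
                      len(cells))
place = dict(zip(cells, sites))

def wirelength(p):
    # Manhattan distance per net (half-perimeter wirelength for 2 pins)
    return sum(abs(p[a][0] - p[b][0]) + abs(p[a][1] - p[b][1])
               for a, b in nets)

start = cost = wirelength(place)
temp = 10.0
while temp > 0.01:
    a, b = random.sample(cells, 2)
    place[a], place[b] = place[b], place[a]        # propose swapping two cells
    new = wirelength(place)
    if new <= cost or random.random() < math.exp((cost - new) / temp):
        cost = new                                  # accept the move
    else:
        place[a], place[b] = place[b], place[a]     # reject, undo the swap
    temp *= 0.999

print(f"wirelength: {start} -> {cost}")
```

The occasional acceptance of a worse placement (the `math.exp` line) is what lets the tool escape local minima, which is also why the result looks "brute-forced" rather than hand-drawn.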
Oh come on, this was supposed to be a harmless tongue-in-cheek comment. I come from a country which is far behind Russia in terms of technology and I really didn't think anyone could take it as some way of mocking. My bad
I thought it was funny and particularly relevant because the article is written in a macro->micro progression in the tear-down. Hacker News does tend to want to stay dry. It reminds me of my second post, where I got dinged trying to be funny. https://news.ycombinator.com/item?id=10211855
These are mostly for government structures, the military, etc.
No one is using them (or, say, Elbrus) at home for PCs/workstations. Maybe only some geeks, and just for fun.
Usually you can't even buy them in normal computer stores (at least I have never seen any).
It's probably only used in military technology and government devices; the only advantages are control of the supply chain and security (less risk of backdoors requiring manufacturer cooperation), and thus suitability for the local defense industry. For example, I recall seeing some announcement about a locally designed tablet computer with this or a similar CPU, designed for military map/geolocation use.
For both supercomputers and ordinary workstations they'd use the same hardware as everyone else; the price/performance is better that way due to economies of scale.
No, actually these are mainly for IoT-type use, multimedia and network devices, and CNC systems (some companies already have ready CNC solutions with Baikal).
As for PCs (as in office PCs) - https://en.wikipedia.org/wiki/Elbrus-8S is the main trend for state departments and other places where security should be higher than in your usual office. The 8S will be replaced with the 16S in 2018.
I think homegrown chips like these have many applications in weapons systems. Russia wouldn't want their UAVs or missile defence systems to run on Intel CPUs or other COTS CPUs with embedded black-box OSes, for example.
Right now you have to be a legal entity (hope this is the correct term) to buy them, but I'm pretty sure their board will be available to the public soon: https://habrahabr.ru/post/320840/ (review in Russian)
Hey, chipmakers deserve more credit than that! I'm a chip architect; we license from IMG, and we license IP from others as well, but then there's a shitton of work and plenty of things that can go wrong, costing loads of money and causing big delays. Even for chip makers rolling their own IP, the IP itself is rarely the biggest risk; the biggest risk is integrating everything into a working, marketable chip delivered more or less on time and within budget.
Incidentally, the reason why chip companies are worth so much more than IP companies is that the cost and risk of making chips is much larger. IP and chip companies are both fabless, if the costs and risks were about the same, then I don't see why profits wouldn't be about the same.
Now it could be that in this instance, marketability, schedule and budget weren't a particularly big deal, and maybe the performance is so bad that this in itself made a lot of problems disappear. But I think it's likely that someone still sweated quite a bit to make it work. (TFA claims they were the first to implement this CPU in silicon, BTW - again, it could be easy enough if they didn't care about performance at all, or if Imagination did all the work on the synthesis scripts and the backend or held their hand, but if they wanted high performance and had to optimize synthesis and placement themselves, that's serious work. TSMC won't do it for you, either - they want a GDS-II file, and I don't think they outsourced the actual chip design.)
Don't ask me, it was mostly luck in my case. In general I think in chips, even more than elsewhere, you want to work at a small place to get big responsibilities quickly, and from that angle, right now doesn't look like a great time to enter the industry: FinFET mask costs are 10x what they used to be in bulk CMOS, so the expected payout needed to justify risking the capital went up a lot, so fewer projects start small. Maybe if node shrinks stop and mask costs go back down, people will start making plenty of specialized lower-volume chips, since you won't be able to win just by shrinking high-volume architectures, and it will be a good time to enter the industry again.
> "... since FinFET mask costs are 10x what they used to be in bulk CMOS"
Not true, it is closer to 2 to 3x, and it is (slowly) getting cheaper. While it will take a long time (~3 years) for 14/16nm FinFET processes to reach price parity with what 28nm is now, it is dropping in price faster than when the 28nm generation came out due to the massive volumes of Apple, NVIDIA, etc.
It is looking like the 28nm generation will be a mainstay node, with continued investments by the major pure play fabs to keep bringing costs lower.
>> It is looking like the 28nm generation will be a mainstay node, with continued investments by the major pure play fabs to keep bringing costs lower.
Is that because it's essentially the last planar node? IIRC 20nm kinda sucked for both planar and FinFET so 28 is the last planar and 14/16 is looking like a long term node as well. Is that why you think 28 will be a mainstay?
I'm seeing 28, 14 and 7 as pretty much stable and widespread over the next 10 years, with 14 and 7 being significant for cost/perf and cost/density reasons.
Like yosefk, I sort of fell into it. I worked for a company architecting stuff in PALs/FPGAs; it came time to build chips, someone else laid out gates and I did the high-level stuff. The next time around it was easier for me to do it in Verilog - after that I changed companies and built a bunch of SoCs with a larger team, working as a logic designer.
After a decade or so I actually made a conscious decision to stop building chips. Once the novelty wore off, I found I was doing a month's creative work a year and 11 months of timing and DV. As a systems software hack I'd get something working today, and something else tomorrow - much more of a sense of accomplishment. These days I do a bit of hardware and a bit of software - mostly embedded systems - and I definitely enjoy it more.
Erm... A shitton of tools. Roughly: you write and validate an HLL description, translate it to gates - or more generally/correctly, library cells - then place the cells and route the connecting wires. GDS-II has everything at an exact place, and from that they make wafer masks. But each stage involves many tools, and the stages aren't fully independent of each other.
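To make the "translate it to library cells" step concrete, here's a toy sketch that maps a boolean expression onto a netlist of 2-input NAND cells, which is roughly what the logic-synthesis stage does before placement and routing. The cell name `NAND2` and the one-cell-type library are made up for illustration; real libraries have hundreds of cells and the mapping is driven by timing and area.

```python
# Toy "synthesis": lower a boolean expression into a netlist of
# 2-input NAND library cells. Each entry is (cell_type, output_net, input_nets).

netlist = []
counter = 0

def fresh():
    # allocate a new intermediate net name
    global counter
    counter += 1
    return f"n{counter}"

def NAND(a, b):
    out = fresh()
    netlist.append(("NAND2", out, (a, b)))
    return out

# derived gates, expressed only with the NAND2 cell
def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))

# y = (a AND b) OR c, lowered to five NAND2 cells
y = OR(AND("a", "b"), "c")

print(f"{len(netlist)} cells, output on {y}")
for cell in netlist:
    print(cell)
```

The real flow then feeds a cell netlist like this (plus timing constraints) to the place-and-route tools, which is where the other 90% of the work and the tool count lives.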
Or you can use Chuck Moore's famous 500 lines of Forth to produce a chip for 180 nm that can't access DRAM and go on about everyone doing it wrong.
It's OK with the Russians. The performance is good enough. They probably care more about verification against security issues, and could do interesting stuff there, but we surely won't hear about that.
Intel has two architecture teams, one in Israel and the other in Oregon. They also have fabs all over, with their big 14nm ones being in Oregon (D1X, D1D, D1C) and Arizona (Fab 12, 32 and 42).
Altogether, they have more fabs in the US than outside (7 vs 3).
I know, I was just making the point that you can buy an Intel CPU today that was designed in Israel, manufactured in Ireland (Fab 24), and then packaged somewhere else, and it's still "American".
Interesting chip - under 5W usage and yet powerful enough for lots of things. I wonder what one of these with 8GB RAM, 1x 10GbE, and 2x SATA connectors would draw for power.
EDIT: the Broadcom chipset used in the Raspberry Pi 3 is still on a 40nm process, according to what I can find. So not sure why people would question the 28nm process' usefulness.
It's a 2014 MIPS CPU core, why do you believe it's "junk" (unless you have prejudice against MIPS)?
I'm not sure how Russian it is, though - given that its CPU core was developed by a UK company - and I doubt the other stuff, like the Ethernet core, are in-house designs either - and the silicon is manufactured in Taiwan... Not to diminish the significant amount of engineering skill necessary to put all the stuff together, debug and test it and whatever else, but that doesn't look like much of a domestic product to me.
It seems to be a reasonably competitive SoC for IoT applications. The ARM SoC on a RPi is in the 40 nm range. This should have reasonably better performance, and MIPS used to be a nice architecture.
It's a rather boring chip - but that's OK. Given what it costs to take a 28nm design into production, you generally do need to be conservative, especially on a first go-around.
When I hear that I think "China?" - and they do have much more impressive tech. There's a JV that has a license to AMD's Ryzen* tech for servers, and another company with a 64-core ARM server chip.
(* - I was thinking it was for Fail^H^H^H^HBulldozer at first, which would probably have set them back several years ;) )
This is only a guess, but I don't expect this chip to be 'competitive' -- my assumption is that they're trying to maintain domestic expertise to reduce critical reliance on imported computers. Though the fact that the core is licensed from the UK and the chip manufactured abroad complicates this some.
[0] https://s.zeptobars.com/baikal-Si-HD.jpg
[1] http://i.imgur.com/le2atYb.jpg