Agree with the author that UEFI is bad for security. You have this huge binary UEFI blob in a pre-OS boot environment that is not open source. After the motherboard/laptop manufacturer loses interest, and they lose interest as soon as the product stops selling new units, UEFI remains unpatched and insecure.
The boot loader should be simple and relatively dumb IMHO; then it is secure. If it has to be bigger, then it should be open source.
Management processors like the Intel ME, built into the CPU and its firmware, are another x86 insecurity.
UEFI is poorly understood by approximately everybody who doesn't work directly with it and it's frustrating to see so much misinformation out there.
UEFI does not mean that there is a huge binary blob that does not run open source. UEFI is a spec. It defines many steps that must be taken to boot in a compliant way. Large portions of the code that runs in UEFI compliant systems in the wild today are in fact based on an open source 'core' available on GitHub. It is entirely possible to perform a UEFI boot on an entirely open firmware stack, though this tends not to be done. Large silicon vendors like to keep their silicon initialization code proprietary and secret, and they often 'require' tweaks to the open source version of the UEFI 'core' to meet their needs (read: it's seen as easier/cheaper/more-business-friendly to fork the open source core, sprinkle two or three changes throughout, and keep the result closed), but there's no reason it needs to be that way.
The author is wrong - there is no 'UEFI kernel' running at any ring after boot. UEFI leaves some code and data in ordinary, OS-accessible memory which can be jumped to and run by the OS if desired to perform some UEFI-related task like setting a boot variable. This code is not protected or hidden or in a special ring and does not require any special steps to invoke. It just sits there waiting to be called and can be modified or deleted if the OS chooses to do so.
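On Linux you can see this for yourself: the kernel exposes those leftover UEFI variables through efivarfs, where each file is a 4-byte little-endian attributes word followed by the variable's payload. A minimal parsing sketch (the variable name/GUID in the comment is the standard global-variable GUID, shown only as an illustration; it assumes an EFI-booted kernel with efivarfs mounted):

```python
import struct

# UEFI variable attribute bits (from the UEFI spec)
EFI_VARIABLE_NON_VOLATILE = 0x1
EFI_VARIABLE_BOOTSERVICE_ACCESS = 0x2
EFI_VARIABLE_RUNTIME_ACCESS = 0x4

def parse_efivar(blob: bytes):
    """Split an efivarfs file into (attributes, payload).

    Files under /sys/firmware/efi/efivars begin with a 4-byte
    little-endian attributes word; the rest is the variable data.
    """
    if len(blob) < 4:
        raise ValueError("too short to be an efivarfs variable")
    (attrs,) = struct.unpack_from("<I", blob, 0)
    return attrs, blob[4:]

# Synthetic example: NV + BS + RT attributes, payload 0x0001 (a boot index)
attrs, data = parse_efivar(b"\x07\x00\x00\x00\x01\x00")
assert attrs & EFI_VARIABLE_RUNTIME_ACCESS
assert data == b"\x01\x00"

# On a real system you would read, e.g.:
# parse_efivar(open("/sys/firmware/efi/efivars/"
#     "BootCurrent-8be4df61-93ca-11d2-aa0d-00e098032b8c", "rb").read())
```

Nothing about this is hidden or privileged: it's just files the firmware left behind, readable (and, with care, writable) by ring 0.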
SMM is actually a special ring with its own privileges, but there is nothing baked into UEFI which requires using it or leaving code running there. UEFI is extensible and so some platform performing a UEFI boot can leverage a hardware feature like SMM to maintain some control over a platform, but that requires the firmware developers to go out of their way to do that and it can only be done on hardware equipped for it.
But you know what? UEFI is not at fault there. If a platform performed a UEFI boot without touching/configuring SMM at all, then the OS or the bootloader could do the same thing. The hardware capability exists and is accessible by ring 0 until somebody flips a switch to remove that accessibility.
Proprietary, insecure software is a problem. Making firmware fall into that hole is really bad. But UEFI doesn't make that happen. UEFI is just a way of booting that doesn't specifically disallow it, because it's designed to be flexible and extensible and powerful so that a lot of needs can be met. It's completely possible to put together a firmware image which is UEFI compliant and goes out of its way to disable SMM (or any other hardware feature), and boot to an OS that wipes UEFI traces from memory if it feels like it.
Something like Intel's ME existing as an option for businesses who want it is fine. Injecting it into every platform and making it roughly impossible to disable is not. Either way, UEFI is not implicated.
UEFI is not the bad guy. Those who ship UEFI compliant systems which happen to suck are the bad guys. They do it with UEFI, they did it before UEFI, and they would do it without UEFI.
UEFI is still a bad guy, because it is overengineered. And precisely because of that, FastBoot mode was invented. Neither Windows nor Linux (nor anything else) requires 90% of UEFI's features.
FastBoot just caches a bunch of known things that your system was able to boot with and then doesn't bother re-initializing/re-training/re-discovering various bits and bobs in the hardware.
It's not taking away some 90% of what UEFI does - it's letting UEFI do its thing, writing down what was done for your hardware configuration and the location of your boot image, and then reusing exactly that again on each boot.
That's simple optimization - not a fundamental change to UEFI. Though it is good optimization which certainly could have been baked in from the start.
I never said FastBoot takes 90% away. But it is an example of how simpler things can help firmware be faster and more efficient. I have quite a lot of experience with legacy BIOS modification and RE, UEFI modification and RE, and coreboot development, so I know what I am talking about. One more example of such overengineering is the ACPI standard; especially since version 6.0, it is tied to UEFI. UEFI was not designed to be open anyway, since it effectively hides the hardware initialization (the PI stage) in binary blobs.
UEFI is a beast from the worst times of Microsoft and Intel. This is why it uses PE (Portable Executable) as its format and didn't even bother with optimization, which caused some vendors to invent the TE (Terse Executable) format, which is a bit leaner. And the code: the EDK1/EDK2 code is a perfect example of poorly written code. Compare it to the coreboot or Linux kernel codebases.
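For the curious, the difference is visible in the first two bytes of an image: PE files begin with the legacy DOS stub signature 'MZ', while TE strips the DOS stub and full COFF headers and replaces them with a small header whose signature is 'VZ'. A hedged sketch of sniffing the format (signature values as defined in the UEFI PI spec; everything past the signature is omitted here):

```python
def sniff_firmware_image(blob: bytes) -> str:
    """Guess whether a firmware executable is PE/COFF or Terse Executable.

    PE images start with the DOS stub signature 'MZ'; TE images start
    with the two-byte signature 'VZ' of the stripped-down TE header.
    """
    if blob[:2] == b"MZ":
        return "PE"
    if blob[:2] == b"VZ":
        return "TE"
    return "unknown"

assert sniff_firmware_image(b"MZ\x90\x00") == "PE"
assert sniff_firmware_image(b"VZ\x4c\x01") == "TE"
```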
> Those who ship UEFI compliant systems which happen to suck are the bad guys.
This. The UEFI implementation on my XPS 13 9343 can't pass a kernel command line, unfortunately. Ideally, I would have liked to boot straight into an EFISTUB kernel. Thankfully, there is rEFInd, which I boot as a secondary bootloader.
Well simple and dumb generally is good for all engineered things, as long as they aren't so dumb that they can't do their job.
This principle applies especially well to boot loaders, because the job of a bootloader is to hand control off to some other, more sophisticated piece of software.
Unpatched UEFI is more about the reputation of the manufacturer/provider of UEFI. If a major manufacturer releases a motherboard, you can be sure that (1) it is patched often, and (2) they use similar components across many motherboards, so bugs and vulnerabilities are patched across many boards simultaneously and are worked out sooner.
Patches existing for the firmware vulnerabilities of major manufacturers is good (I'll take your word for it, having not looked recently, but I know that a few years back this was not the case and known vulnerabilities could be found easily on shipping products).
The pathway from the patch existing to the patch being applied is overgrown with flammable brush. Infrequently traveled. Not healthy. There are efforts to fix this, but they don't have too much momentum at the moment.
1a. Often - that much is true; initially. 1b. For the lifetime of the product? Nope nope nope nope, not in my wildest dreams. What's wrong with a stable, well-built, functioning motherboard? Nothing, just that some years have passed and the mfg no longer has an incentive for support.
> Secure firmware is the foundation of secure systems. If we want to build slightly more secure systems they will require open, auditable and measured firmware. If we can’t read and audit the firmware code, we can’t reason about what is going on during the critical phases of the boot process; if we can’t modify and reproducibly build the firmware, we can’t fix vulnerabilities or tailor it to our needs; and if the firmware isn’t measured and attested, we can’t be certain that our system hasn’t been tampered with.
I agree. This is a very PC-focused article, but firmware is everywhere. It needs to be updateable, and making it open makes it far easier to update.
As far as UEFI goes, I'd just like to point out Microsoft's open source Firmware efforts (Project Mu). https://microsoft.github.io/mu/ The goal is to make firmware easier to service and easier to update with security fixes for older projects.
While it's not perfect, it is a great step forward. I think we need to see more of this in the future from other companies.
(Disclaimer: I work for Microsoft and contribute to Project MU).
Reading up on Project MU now, and I'm sorry, what is this about firmware as a service? This seems like exactly what I don't want. XaaS (X as a Service) is great when there's something external that is only temporarily or optionally required (or too expensive). Otherwise it's an ongoing dependency. But with firmware, once I own the hardware, I should own the firmware too.
Without knowing more about this specific Firmware as a Service I can only imagine how this will actually look. Maybe it just means that updates are automatic? Even that alone is an interesting debate.
Otherwise, Project Mu looks like a modern wrapper around UEFI. What's being done to address the fundamental issues? Firmware code running after boot, modifications to firmware possible by changing code between boots, etc.
The only firmware code running after boot that UEFI mandates is not below ring 0 and is fully optional - called only if and when the OS asks for it. The UEFI runtime services table is not a kernel and is parked in ordinary memory waiting to be jumped to/called.
SMM is supported but not mandated, which is exactly how any hardware feature should be treated. Blame those enabling the SMM code you don't like. Or blame the hardware manufacturer for putting the feature in at all.
UEFI is not your enemy. Its only sin is being overly complicated, which is (somewhat) debatable given the complexity of systems and OSes needing to be bootable.
so... as I see it the argument is that simply allowing the features is a "flaw" in the design, since, essentially, companies can't be trusted to do the right thing?
As a tinkerer I'd like to install my own boot code, disable default features, and all that (I do some of it already), but I fear it's still far from the norm.
I guess "my" question is, what features am I missing and are there better ways to get them without being so intrusive?
The intrusive parts that you see in a lot of firmware today fall into two categories.
One is like Intel ME. It supports use cases you as an end user do not give one single damn about and could easily do without, if given the option. Clean and remove as much as possible with no remorse.
The other is like SMM. Believe it or not, it's only accidentally the intrusive and insecure monster it has become. The point was exactly what it says on the tin: System Management Mode. The OS would ask the 'system' to do something it didn't know how to do, like change some power configuration in a laptop, and the firmware would handle it and then give control back. But these operations were delicate and needed to not be interrupted. And the hardware involved was delicate and needed to not be touched in the wrong way, lest the system hang or even fry. So it was locked away where the OS couldn't touch it. And then people started noticing they could use an untouchable special execution mode in other ways, and, well, here we are.

The unfortunate thing is that because of how it started, you would feel some pain on most systems today trying to get rid of it. Your OS does not have drivers to change the CPU thermal characteristics properly and in accordance with silicon design (because the vendor did not develop one or make the information available to the outside world). So removing SMM will make many 'nice' features stop working. You may lose the ability to suspend and resume. Your power draw may be stuck too high or too low. It's possible on some systems you would be fine, but on others not so much.
Like the people demanding that graphics vendors provide open source drivers rather than binary blobs in the hope of making things better, really what's needed here is advocates pushing for the tasks performed in SMM today to be migrated to OS drivers, and from there to open source drivers.
You as a single tinkerer today aren't likely to get all this stuff working, given the non-existent documentation.
Blame the silicon vendor for doing things in the most scary, back-door-ish way possible. UEFI didn't make them do that. The same crappy business practices that drive all bad proprietary software decisions did.
It is because the UEFI standard was designed to give the vendors such freedom. The only way to make vendors do the sensible things is to force them with as strict a standard as possible, defining almost everything about openness and security. Even then there will be vendors who: 1) can't read the specification, 2) do the thing their own way, 3) obfuscate code for the sake of obfuscation, or 4) write the code in the worst way possible.
I'd expect those in charge of maintaining and actually implementing these things for customers would also be aware. Project MU seems like at least in many ways a step in the right direction from Microsoft.
OS drivers seems ideal as long as it's open source, otherwise I'd expect Linux (or other OSes) hardware support to suffer.
Also interestingly, on the topic of sleep states: I have a Lenovo Thinkpad X1 Carbon (6th edition) and its sleep was broken (and is still partially broken) because Lenovo only implemented the new S0 low power idle "Modern Standby" (what a name!), without implementing the S3 mode as well. Sigh...
> I have a Lenovo Thinkpad X1 Carbon (6th edition) and its sleep was broken (and still
> partially broken) because Lenovo only implemented the new S0 low power idle "Modern
> Standby" (what a name!), without implementing the S3 mode as well.
Good news: please update your firmware, and you will gain the option to either use this new S0 mode (for Windows) or the S3 mode (for Linux) in the firmware. This is not perfect if you're dual booting, but if you're only running Linux it solves the issue.
Also, you can now update your firmware from Linux using fwupdmgr. Be aware there's a Lenovo bug if you're dual booting: you'll end directly on Windows after an update. So keep a Refind USB key around, use it to boot Linux, and reinstall EFI grub. If you're pure Linux, no such problem.
Yeah, I agree: firmware as a service doesn't exactly capture what Project Mu is trying to do. But the point is, firmware should be as easy to update as any service, rather than some huge monolithic codebase that was forked from the master (TianoCore) and then hammered on until the platform booted.
In a perfect world a product would ship with perfect, bug-free, and secure firmware, and it would never need updates. And ideally, manufacturers would allow users to more easily install their own UEFI/firmware onto their devices, but that brings some added security challenges.
Since developers make mistakes, updates are currently the best solution we have. Making those updates more affordable to service, making the changes transparent to the end-users via OSS, and making it easier to apply those updates are all things Project Mu is trying to accomplish.
IIRC Microsoft was instrumental with a few other companies in developing the rather arcane and overbearing ACPI standard in the 90's that continues to make it difficult for non-Windows operating systems to reliably work with a laptop's hardware even today.
I've played with U-Boot on ARM platforms. It's a breath of fresh air. It loads the OS and then just gets out of the way. This is what simplified PC firmware should be.
While there's a lot of MU I could take or leave and some I don't really see the value of, I greatly enjoy that there's finally at least one effort out there to wrangle the build system. Having not used MU's build yet, I can't say if you've succeeded, but I applaud the effort since current "standard" build processes and scripts and wrappers in use are.... not good.
I wish the smartphone side of things got more attention and someone engineered their way to an open source baseband firmware a la OpenWrt [0]. Not sure why, after the breakthrough for GSM/2G with OsmocomBB [1], no viable libre alternative for LTE or 4G has emerged [2].
The smartphone is an always-on, always-connected computer with 2B or more (30% of world's population) unsuspecting (to almost a point of being gullible) BSD/Linux users who are exposed and don't even know it, to an unprecedented degree, to adversaries with deep pockets (ad-networks [3], nation-states [4], carriers [5]) who don't need a second invitation. Most of the privacy and security battles, I feel, will be won and lost with smartphones. That's discounting IoT security altogether which is a scary proposition in itself for rather silly reasons [6][7].
This is a very PC centric article, but the same can be said about any connected device--from cars, to baby monitors, to buttplugs. If it has an internet connection, the firmware should either be open source or in open source escrow, so that if the company dies or decides to not support their device anymore, the hardware itself can continue to live.
The patch needs to be provided by phone and tablet manufacturers. Except that many otherwise capable phones are not supported anymore and will not be fixed.
Were the firmware of these devices open source, the community could fix this (given that the firmware does not have to be signed, or a signing key can be added). But no, many devices will remain forever vulnerable.
Including my phone: 4G RAM, 32G internal storage, excellent battery and screen, great computing capabilities, in excellent physical shape. It will probably last a few years more. It was last updated in November 2017 by its manufacturer. Some parts will never be updated again, and there is no way to audit this stuff.
This is a shame.
Edit: and I'm lucky my phone resembles an Android One phone, so some stuff can be taken from this phone to update mine.
Perhaps software in general shouldn't be provided _as is_ anymore. The idea that someone provides a software and it's your problem if it doesn't work is really... _too easy_.
Company A sells you a cell phone. In a reasonable time (5 years? 10?) a flaw is found. Can you, the customer, fix it? No, because it depends on proprietary code, a key, some DRM, whatever.
So company A should fix it or be accountable for the problem. Being sued, paying for it. Or open the hardware so that user can fix it.
Right, and anyway, when you get some software, you should be able to fix/improve it yourself if you want to and need to, and redistribute the fix to other people who might be interested. You also should be able to study the software provided to you before running it if you want to.
Most people don't want to actually do that, but could anyway benefit from the inspection and fixes coming from third parties.
And when I buy some piece of hardware, I expect the manufacturer to fully support the device, as you said, when used the way it was intended to, but let me use it another way if I want to (which the reliance on closed binary blobs does not allow).
Moreover, it's time we consider it mandatory that the user has access to the code running on their device. People are not dumb. More and more, people want to know where their food comes from and how it is produced. The same transparency should be obvious for what the computer does and how it is built.
If some things theoretically require the user not to see the code, maybe these things should not exist in the first place. "Oh, here is a product! But for your own good, you do not get to know how it works and what it does or does not do behind your back." This does not follow.
Source escrow is an interesting concept. My initial thought is that it would create huge perverse incentives. A company might continue to release small, nonsense updates to products they otherwise don't care about, just to avoid giving the source away. Meanwhile, as the owner of a device, I would be eagerly awaiting the death of the company, so I can get my hands on that juicy source code. I certainly wouldn't recommend their products! Anything to make them die more quickly.
It gets even more interesting when the "secret sauce" is in software. Say I market "SmartButton 1.0", which is nothing more than an ESP8266 connected to a button, but with some cunning algorithms and proprietary protocols that make it useful. Under an escrow system, I'll have a competitor on my hands the second I stop supporting SmartButton 1.0. Even if I'm already on SmartButton v19.
The SmartButton scenario is exactly the incentive we want, isn't it?
If you have made genuine improvements in versions 2-19, releasing version 1 shouldn't hurt too much. If, on the other hand, your versions 1 and 19 are still substantially similar, you shouldn't stop supporting version 1 to save a small cost and completely destroy the value of the product for your customers.
One of the biggest problems is the lower-level chip vendors, who often require NDAs and won't allow their code to be shared publicly. The device maker has to comply with this or find another chip, which may not be available in sufficient quantities or at a realistic price point. The chip vendors don't necessarily go out of business, even if the device maker does.
Considering the global impact on security, this is an area that would make sense for regulation. At some point, the chip vendors should have to release their code to maintainers. I'd even be fine with limiting this to after the chip goes EOL! Perhaps it could come with guarantees reducing patent infringement risks, which may be where much of the vendor reluctance comes from.
Although mentioned in the article, I would like to emphasize that https://puri.sm are selling laptops with disabled and cleaned (with me_cleaner) bios. I hope more companies follow.
Purism seems to be rather inept; they shat the bed with their recent Librem One product launch, whereby they rebranded Tusky and disabled all moderation tools on their Mastodon instance, then were surprised when their employees started quitting due to this ridiculous behaviour.
Purism doesn't seem to want to invest in the software that makes their services work, hence the commentary by Matrix devs on which services are helping to push development forward (and thus deserve subscribers).
> Chromebooks use both, coreboot on x86, and u-boot for the rest.
This isn't entirely true. Coreboot is used on a number of ARM Chromebooks, including rk3288 and rk3399 based devices. It seems like u-boot is used less and less in the space. Libreboot has builds for a few devices that kill the annoying "untrusted os" message, and even allow you to set your own trust root.
I'm not sure -- I haven't messed with a Pixelbook. On Chromebooks, the trust root lives in the bootloader's flash with no security beyond the write protect screw.
A quick googling led to a reddit post that indicates it's possible with the Pixelbook, but likely a PITA:
I am also perplexed why the Raptor Talos II does not receive more attention in the open hardware/software community. For me it is basically everything I've ever wanted from a libre system.
The criticism as I understood it was, "I can get a car for that!" To really sting, that criticism should imply a decent run-of-the-mill car, not a clunker.
Therefore, I wanted to know the car make/model. Because $6,000 for a decent run-of-the-mill car isn't bad.
Then yes-- for the same price as that new workstation you can buy a used car with 170,000 miles on it that gets terrible gas mileage and will probably require regular service at luxury part prices.
I have a completely honest question here that I'm hoping some people can answer. Is open source really more secure? My default answer would be yes absolutely but when I think about it I'm not sure I understand why.
If something is open source then bugs and security problems can be found more easily and then fixed. This sounds great to me and I'm sure that works out just fine most of the time. This makes me wonder though...are there really fewer intrusions into production systems that are built entirely on open source software than there are in ones built with lots of proprietary, closed source software? What does the data look like about this stuff?
I can't speak to the data analysis part, though I do believe some people have looked into it, and hopefully they can add their thoughts.
From my experience, the answer is: it depends very much on the community the project has.
First, the obvious positives: you could have lots of people with lots of different kinds of experience looking at the code, finding and fixing things.
This is how I got involved in Firebug back in the day. But I also noticed that while millions of developers used it daily, the number that got all the way to the issue reporter was small, and the number that posted fixes in an issue was minimal (I got to know them by name). Only once do I remember a security issue being reported, considering that extensions had such broad and unlimited access back then.
So, if it does not invite that kind of community, then it is possible to be a net negative with only blackhats having a reason to inspect the code. OR, you have a social problem within the community (also common), where people assume that with such a large community, surely someone looked at X. Everyone thinks that, so no one looks at X. Years later someone does and finds some surprising things in code that withstood the test of time.
That said, I think the case of UEFI would be different. It might be a good candidate for shared source at least, if it isn't already.
>If the source is freely available, then every day someone is going to read it and maybe see/fix the bug.
How many years did Heartbleed go unnoticed? How many exploits in open source software get reported here?
It's not true that someone reads all of the open source code every day. The truth is, few people ever read any of it, and fewer still have the domain expertise necessary to be able to spot and patch any obvious bug, much less subtle ones. And yet this metaphysical belief in the "many eyes" persists.
Sure, it exists, but there are supposed to be eyes on the proprietary code as well, and the effect is probably smaller than people think, with no one outside of a project's maintainers ever actually studying the code for most open source projects.
I'd like to add one thing to this: Heartbleed also went unnoticed because the OpenSSL code and build process were in such a state that simply looking at it, and having to build it, cost an insane amount of effort.
So if you truly want to benefit from open source firmware, it also needs to come with at least some minimal level of quality. Things such as good build documentation, automated builds in CI, and low requirements for setting up development builds are often not present in software all of us deem critical.
It is much more inviting to contribute to a project, use it, and submit improvements when the barriers to entry are low.
Open source software is just software. That is to say it is just as secure, or insecure, as any other available software.
The open source model, however, allows for incremental improvements, patching, security updates and auditing from the community that the typical closed source model neglects to provide.
I think the trend now is to believe that closed software that is actively maintained by a well resourced party is more secure than open software that is barely maintained by whoever contributes.
Binary blobs for hardware that has long shipped doesn't really fall into the "actively maintained" category. At least not reliably.
It's not more secure: it's just potentially easier to review for security. The lifecycle of software and quality of review determine the trustworthiness of software. I wrote about it in detail a while back. Roryokane was nice enough to host a cleaned up version here:
I talked about the review that went into high-assurance, secure, closed-source products back in the 1980's and 1990's. Most FOSS can't touch those products in trustworthiness. Here's a link to that:
Alex Gaynor gave me my favorite paper illustrating how OSS != security using fuzzing numbers from Linux kernel. It gets probably more contributions and eyeballs than about any project. I'll let you look at the results:
It seems like it depends on your threat model. If what your company is doing is valuable enough and you have a large enough organization, a motivated attacker will have access to the system’s source to run their offline analysis of it, regardless.
Background checks and interviews aren’t much of a barrier…
The issue is that open source can generally be patched by a sufficiently motivated individual when the security hole is found. If you have a proprietary firmware blob, that isn't going to happen unless there is monetary incentive for the manufacturer to do so.
Let's not forget that each security fix made to Open Source software is also a recipe on how to pwn people who didn't update to that fix yet. A project changelog is in part a list of holes that can be exploited.
Note that the recently-popular "Docker Desktop" product (I believe it uses the xhyve framework in OSX) that brings up a VM to run a linux-based Docker daemon on a local OSX system is not f/oss. The source isn't even publicly available.
It surprised me when I found out, considering that all of the other tools shipped by Docker have been.
Great overview of much of the firmware and general "OS" stack (for lack of a better word).
I'm surprised not to read a mention of the microcode that all the instructions our programs ask the processor to run are actually converted into. I suppose that's starting to get too close to a discussion of open hardware, which this post mostly sidesteps. Both are important issues.
Great blog post, Jess! I think this is an extension of Kerckhoffs's principle, that a secure cryptosystem should keep your data secure even if everything (except the key) is compromised: https://en.wikipedia.org/wiki/Kerckhoffs%27s_principle
Is there a Dockerfile or bash script anywhere that demonstrates how to install all these tools on bare metal? I operate at a higher level in the tech stack and I'm unfamiliar with these tools and how they work. A Dockerfile would be nice because then you could create a virtualish environment where you could play with the new stuff in docker exec before blowing away the old stuff.
Not a dockerfile, but it may be worth looking at buildroot [0] and qemu [1]. I'd like to say that I started 5 years ago with these tools and ended up working on embedded systems, but it's more like I started 5 years ago and ended up with drawers full of unsupported ARM boards.
Not according to AMD. It's more an organizational issue, since to make it open, they'll need to maintain two versions. One without DRM garbage (HDCP), and one with it. And it costs more to do it naturally. As usual, DRM poison ruins technology.
It's a nice system, and definitely cheaper, but it isn't a reasonable price comparison as it isn't even remotely in the same performance ballpark as POWER9.
Depends on the workload and the # of cores in the POWER9 chip(s). The Socionext has 24 cores!
Edit: Admittedly only massively parallel workloads will be faster than the POWER9, and only if the POWER9 has limited cores. A bit of a stretch for most use cases.
“Rings 1 & 2 - Device Drivers: drivers for devices, the name pretty much describes itself.”
STOP and GEMSOS did use four rings. The evaluators griped that UNIX didn't. Microkernel proponents kept pushing mainstream OSes to move drivers from kernel mode to another mode. Maybe my memory is off again, but aren't the drivers for *nixes and most monolithic kernels in kernel mode (ring 0)? Maybe Xen does something different with its use of protected mode. Then some things moved to user mode later on, like FUSE.
“Each of these kernels have their own networking stacks and web servers. The code can also modify itself and persist across power cycles and re-installs. We have very little visibility into what the code in these rings is actually doing”
Which is why I put money down that the backdoors NSA paid for in direct money and/or defense contracts would be in management systems. That we’d definitely find services with 0-days in there. Sure enough…
“Linux is already quite vetted and has a lot of eyes on it since it is used quite extensively.”
That’s total nonsense. Empirical evidence below that’s been consistent over long periods of time. If anything, using Linux is guaranteeing you vulnerabilities if they can call anything in the kernel. If a subset or just one function, maybe OK. Careful analysis case by case on that. We’d be better off with something clean-slate for this purpose that can reuse Linux drivers where necessary. Then, we’d check the drivers and the interfaces.
It is true that it’s better than the closed-source stuff they’re using, has better tooling, folks understand it better, and so on. All true.
“We need open source firmware for the network interface controller (NIC), solid state drives (SSD), and base management controller (BMC).”
The problem with this and Intel/AMD internals is that they're secretive partly to avoid patent suits and new competition. You're not getting this stuff opened, not easily at least. It might be better to literally do a closed-source product for them vetted by multiple parties. Otherwise, get the actual specs under NDA to build the open-source code against, in a way that doesn't leak much of the specs. Alternatively, you'd have to build your own hardware doing this yourself with whatever the I.P. vendors give you. I mean, good luck on the reverse-engineering efforts, but these are usually lagging behind.
“We need to have all open source firmware to have all the visibility into the stack but also to actually verify the state of software on a machine.”
You actually need open, secure hardware for that since attackers are now hitting hardware. I kept telling people this would happen. Just wait till they do analog and RF more. What she’s actually saying here is “verify the state of the machine if the hardware works and is honest and doesn’t do anything malicious between verifications.”
“ is the same code running on hardware for all the various places we have firmware. We could then verify that a machine was in a correct state without a doubt of it being vulnerable or with a backdoor.”
Case in point: I put a secret coprocessor on the machine for “diagnostic purposes,” it can read state of system, it can leak over RF or network, and we leak stuff out of that signed, crypto code. Good thing no major vendors are including hidden or undocumented coprocessors on their chips. ;)
“Chromebooks are a great example of this, as well as Purism computers. You can ask your providers what they are doing for open source firmware or ensuring hardware security with roots of trust.”
She ends with some good advice: buy stuff that's more open and secure to get more of it. Market demand incentivizes suppliers. That could solve a lot of these problems if enough people do it.
Rings 1&2 are basically useless on x86_64 because they give you the same access to memory as the kernel, they just don't let you execute privileged instructions directly.
On 32-bit x86, ring 1 at least got used for hypervisors (VMware, VirtualBox, and Xen off the top of my head). I half remember that OS/2 used the middle rings too.
I think the protection model of four rings was just copied from VAX, being the closest thing to big iron that x86 protected mode was inspired from.
re 1&2. Ok, that's what I was thinking. Thanks for the refresher.
re protection model. Nah, it was MULTICS, from a Saltzer and Schroeder paper. They're among the pioneers of INFOSEC in high-assurance security, which I'm often talking about here. They describe their reasoning about that here [1]. It, segments, and an IOMMU were in SCOMP, the first system certified to high security. Early promoter Roger Schell got an ex-Burroughs guy that Intel hired to add the rings and segments to their chips so high-assurance security kernels could use them. The one he backed and got certified, GEMSOS, leveraged about every security feature on Intel CPU's. STOP used all the rings; GEMSOS had a hybrid scheme. BAE was still selling STOP, with Aesec still selling GEMSOS. Threw in a link on security kernels if you want to check that out. Today's state of the art moved on to secure hardware/software architectures using a mix of formal verification and language-level security on top of other QA activities. The competition used type enforcement [3] and capability security [4].
The four rings and their purpose was copied from VAX and VMS. It was specifically added by Intel trying to convince DEC to port VMS.
The GE-645 had 16 protection rings in hardware IIRC, and was designed so that unprivileged software would see essentially an unlimited number of rings. It's a very different model in practice (and way better IMO).
IMO we're probably going to have to migrate back to a model where memory regions are described to the hardware, to protect against the cache-based Spectre variants the same way we protect against Meltdown.
"The four rings and their purpose was copied from VAX and VMS. It was specifically added by Intel trying to convince DEC to port VMS."
Oh yeah, I forgot VMS had rings. Cutler did VMS and Windows NT, so that would make sense. It might have been segments the Burroughs guy sold them on. Anyway, I just found a nice link with more details about VMS vs. Windows needs for rings, for anyone curious.
Still curious, though, since they were doing all their security pushing subversively back then, as hardly anyone cared about it. Found the [long, long] interview below:
"I took a Digital Equipment Corporation PDP-11/45. I picked the 11/45 because it had hardware segmentation, and the other DEC equipment didn’t; and used the 11/45 to build a security kernel; a guy named Lee Schiller built a first demonstration security kernel. The first running security kernel was on the DEC PDP-11/45, which was a legitimate security kernel, and was tamperproof, small and verifiable, and non-by-passable, and had three protection rings in the hardware. They didn’t know it, but they did. And so that was close enough to allow us to build that security kernel." (my emphasis)
He talks like it was an accident of the design. Later, he says they weren't trying to sell security to the government like IBM was, were surprised they were doing security kernels on it, and otherwise kept pushing it as a minicomputer. Really strange. I don't know much about why PDP-11/45 and VAX were designed the way they are, which VAX's had rings first, or why. My traceability stops there on them for now. Here's what I was remembering about Schell and Intel x86:
"During that time I also consulted with Intel on the x86 architecture. Ted Glaser had been with them as a significant consultant during development, since he was an architect at Burroughs, and then it was naturally he would be an architect and consultant. And he had recommended that I consult with him on some of the security issues, which I did, and had some of what I think the architect for the x86 called small but significant impacts on the x86 architecture—a reasonable characterization—so that it would support a high assurance security. And it did. I mean, the architecture; what they had originally did have flaws, and problems, and those we believe were wrung out so that the x86 architecture was one that could support that. And so the papers you saw then later with GEMSOS from Gemini Computers and such, you know, leveraged that. But the x86 was just evolving at that time and since I knew enough about what it was, and there was enough published, my research results at the postgraduate school looked forward to that. We took a z2000 microprocessor and actually laid out how we could add hardware, much as we did with the SCOMP in order to add segmentation and protection rings to a commodity microprocessor, knowing that Intel was actually going to build those into its chips."
His wording makes it seem like they added it because Glaser asked him what they were doing for security. He then started developing and commissioning software for it before the features were released to market. I'll also note that, being an acquisitions guy, he often sold things in ways that had little to do with security. The reason was virtually no buyers or sellers cared about security at the time. IBM (NSA partner) and Burroughs/Unisys were two of few exceptions. NSA mostly did COMSEC, looking down on "COMPUSEC." It's possible he and Glaser added them for security but sold them on compatibility with some OS or other non-security benefits. Pure speculation: no data yet to reconcile the different stories. There's the citation, though.
Actually, at least a bit of it does exist. There are two different "OpenBMC"s. The IBM/Rackspace one is used for POWER9, as in the Summit and Sierra supercomputers.
Another effort in the free space -- a different part from Talos -- is EOMA68 https://www.crowdsupply.com/eoma68 with a parallel effort for RISC-V.
It's a nice exception to the rule. IBM has enough patents to crush anyone that messes with them, so they're not as worried. Don't forget older PPC and SPARC boxes with Open Firmware, too. I have one at the house from 2003 that can run YouTube vids.
I haven't forgotten Openboot, but as far as I know, ALOM wasn't part of it, and I doubt anything current comes with a free version. The two OpenBMCs aren't purely IBM, and that's more than one example apart from RISC-V possibilities. BMC is particularly important, because remote access is critical for large-scale management, typically implemented with a lot of problems, and often exposed highly insecurely.
There's obviously a very real problem, but POWER9 seems to be an encouraging example that deserves support, and even Talos has some non-free firmware, as far as I remember (apart from add-on graphics).
"two OpenBMCs aren't purely IBM, and that's more than one example apart from RISC-V possibilities. BMC is particularly important, because remote access is critical for large-scale management, typically implemented with a lot of problems, and often exposed highly insecurely. "
Very well said. I've definitely thought about this. I was just turning ideas around instead of digging super deep. Still, one problem I had was how to sell the security-enhanced solution to businesses that were already leveraging backdoored, low-quality products. I'm concerned there would be a lot of "who gives a shit" reaction to the product.
The trick I advocated long ago was to embed and/or disguise security products as stuff with (non-security benefit worth buying here). The trick would be to figure out whatever chip, PCI card, etc had useful functionality to add to their servers. And, btw, it also has an ultra-secure interface to the buggy management systems. Back in the day, people like the folks behind Diamondtek LAN got secure tunnels and management systems certified by NSA for this stuff. There might still be a tiny market. Nonetheless, I'd rather have a non-security benefit, esp performance or monitoring, to sell them on with the security features subsidized by its sales. This concept is partly inspired by Bell's "selfless acts of security."
First of all love it that someone is thinking about bootloaders. Thank you and I hope you're successful in this project.
I think the article, though, only targets desktop PCs/laptops/servers and mobile phones. I'm also not sure whether it is talking about first-stage or second-stage bootloader vulnerabilities.
In the embedded world there often is no second-stage loading; there are simply bootloaders. There are many, many bootloaders, and open source is the most popular option here, at both the first and second stage.
Here's a table of hardware filtered by the bootloaders used:
I think we can use the research done on open-source router OSes like OpenWrt [1] to design a BIOS that works across all devices. One interesting point to note is that on many routers the entire bootloader can be replaced easily using network booting. It takes seconds to flash the ROM (network booting is insecure in theory but secure in practice, since you need physical connectivity to boot via the network).
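As a concrete illustration of that network-flash path, here is a hedged sketch of a typical TFTP recovery flash. The details vary a lot per router: the IP addresses, interface name, image filename, and button sequence below are examples, not universal, and some bootloaders instead pull the image from a TFTP server running on your PC rather than accepting a push.

```shell
# 1. Give your PC a static address on the recovery subnet the bootloader
#    expects (192.168.1.x is common, but check your router's documentation).
sudo ip addr add 192.168.1.2/24 dev eth0

# 2. Power the router on while holding its reset button so the bootloader
#    enters recovery mode, then push the image to it over TFTP in binary mode.
tftp -m binary 192.168.1.1 -c put openwrt-sysupgrade.bin
```

The whole transfer takes seconds, which is what makes experimenting with replacement firmware on routers so much less painful than on PCs.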
While many modern machines support network booting, replacing the first-stage bootloader (BIOS) is (impossibly) hard.
Linux distributions use GRUB, which is nice and also open source. But again, it's a second-stage bootloader that comes into play after the BIOS (first-stage bootloader) has executed.
I'd love to see more development in U-Boot, as they have already done the hard work of supporting multiple devices [2], and amazingly they also support booting directly from an SD card (not an SD card adapter via a USB stick).
Here is the list of supported architectures:
/arc
/arm
/m68k
/microblaze
/mips
/nds32
/nios2
/openrisc
/powerpc
/riscv
/sandbox
/sh
/x86
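For anyone wanting to try U-Boot, here is a hedged build sketch for an ARM board that boots from SD. The board defconfig (a 32-bit Raspberry Pi 3 config) and the cross-toolchain prefix are examples, not recommendations; pick whatever under `configs/` matches your hardware.

```shell
# Host prerequisites (Debian/Ubuntu package names).
sudo apt-get install -y gcc-arm-linux-gnueabihf bison flex git

# Fetch U-Boot and find a defconfig for your board.
git clone https://source.denx.de/u-boot/u-boot.git
cd u-boot
ls configs | grep -i rpi            # browse available Raspberry Pi configs

# Configure and cross-compile (defconfig name is an example).
make rpi_3_32b_defconfig
make CROSS_COMPILE=arm-linux-gnueabihf- -j"$(nproc)"
```

The resulting `u-boot.bin` (or `u-boot.img`, depending on the board) then gets copied onto the SD card's boot partition; where exactly it goes, and whether a vendor first-stage loader still runs before it, is board-specific.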
Another key point to note is that, as a user, I have very little control over my (first-stage) bootloader. Since it is loaded from a ROM that I can't replace or rewrite, even if open-source firmware exists I can't use it. While I can install a new operating system, I have not found any easy way to switch firmwares. Unless a project like the Linux Foundation takes it up and brings the stakeholders together around open-source firmware, I think it will be really difficult to get adoption.
On the other hand, the bootloader is probably the only piece of software left that gives device manufacturers some kind of control over their hardware. What's in it for them to use a free, open-source technology?