That's never the goal with securing a system, though. The goal is to mitigate risk to a level acceptable to the various stakeholders involved based on what said stakeholders value.
Some systems that use paper ballots photograph each ballot before it is counted. This helps with auditing later on, since any recount can be checked against the photographs and every count must match exactly.
Preventing "lost or replaced" with paper ballots is a straightforward exercise in good human process. There are states that do not have good human process for managing their ballots, despite the example of other states that do. Those states are either incompetent or malicious.
What we need is a web deliverable (i.e. not a "web technology based" Electron desktop app) Neovim frontend. Something like a Wasm & WebGL based UI layer connected via msgpack to the editor core that's running in a sandbox on GitLab's infrastructure.
I'm truly sorry if the above drivel is either impossible or insane; as an embedded guy this is most definitely way outside my wheelhouse. I should get back to repairing my oscilloscope.
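For what it's worth, the protocol side of the idea above is not insane: Neovim's UI protocol really is msgpack-RPC, and `nvim_get_api_info` is a real API method. As a sketch of how small the wire format is, here's a hand-rolled encoder (Python just for brevity; a real web frontend would do this in JS/Wasm, and the WebSocket-style transport to a sandboxed core is the speculative part):

```python
def msgpack_rpc_request(msgid: int, method: str, params: list) -> bytes:
    """Hand-encode a msgpack-RPC request: the 4-element array
    [type=0, msgid, method, params]. Minimal encoder: only small
    non-negative ints, short strings, and small arrays."""
    def enc(obj):
        if isinstance(obj, bool):
            return b"\xc3" if obj else b"\xc2"   # true / false
        if isinstance(obj, int) and 0 <= obj <= 0x7F:
            return bytes([obj])                  # positive fixint
        if isinstance(obj, str) and len(obj.encode()) <= 31:
            data = obj.encode()
            return bytes([0xA0 | len(data)]) + data  # fixstr
        if isinstance(obj, list) and len(obj) <= 15:
            return bytes([0x90 | len(obj)]) + b"".join(enc(x) for x in obj)
        raise ValueError(f"unsupported value: {obj!r}")
    return enc([0, msgid, method, params])

# e.g. ask the editor core for its API metadata
wire = msgpack_rpc_request(1, "nvim_get_api_info", [])
```

The whole request is 22 bytes on the wire; the UI layer's job is mostly just shipping frames like this back and forth and painting the redraw events that come back.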
WebGL is more or less just a set of standardized (JavaScript) bindings to OpenGL ES... which, I'm sure, actually runs on top of DirectX in some places (via ANGLE on Windows)...
Everything required to draw high-performance, hardware-accelerated text already exists in most browsers before you even get to these bindings; going through them would furthermore mean bolting on a redundant glyph renderer.
That is to say, WebGL wouldn't provide you much gain. Though, I'm sure some very amusing post-processing could then be done to your code.
Perhaps the future of programming is in the web with lens flare.
Serious or not, here's hoping it will once the feature is out of its infancy.
Tangentially, Ace[0] appears to be the only editor outside Vim proper that has managed to implement a half decent Vim mode. It's the only such editor I've come across where visual block editing doesn't end in fits of rage.
While it may not have been a serious question, vim key bindings (or emacs or whatever) are important to many people. You spend so much of your time in your editor/IDE and become an expert in it. Learning to work in another one is painful until you're properly productive again.
Seriously, how hard can it be? It's not like it's a completely new form of interaction. Vim and emacs are the ones that are exceptions in regard to UX.
I use qutebrowser (has vim-bindings), sway wm (I've customised it to have vim-bindings), zsh (with vim-bindings), weechat (with vim-bindings), mutt (still getting started with mutt, but it has vim-bindings too), and vim itself. These are pretty much the only pieces of software I interact with, so you can imagine how central vi/vim-bindings are in my life.
The basics are easy. A rudimentary normal mode with hjkl for movement is not hard. Block select, sensible paragraph hops, copy/paste registers, repeatable macros, etc., are much harder to get right, and rough corners there can be a deal-breaker. Vim is a lot more than moving a cursor with your right hand.
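To illustrate why the easy part is easy, here's a toy sketch of hjkl movement (hypothetical, clamping to buffer bounds only; none of the count/register/macro machinery that makes a real Vim mode hard):

```python
def move(key: str, row: int, col: int, lines: list[str]) -> tuple[int, int]:
    """Toy normal-mode cursor movement: h/j/k/l only, clamped to the buffer."""
    if key == "h":
        col -= 1
    elif key == "l":
        col += 1
    elif key == "j":
        row += 1
    elif key == "k":
        row -= 1
    # clamp to valid positions (cursor sits on a character, vim-style)
    row = max(0, min(row, len(lines) - 1))
    col = max(0, min(col, max(0, len(lines[row]) - 1)))
    return row, col

buf = ["hello", "world!"]
pos = (0, 0)
for k in "jlll":            # down, then right three times
    pos = move(k, *pos, buf)
```

A weekend project gets you this far; it's everything layered on top (operators composing with motions, registers, visual block) where half-decent Vim modes fall apart.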
Not only are the labels shifted by more than 50 percentage points, but the scales are slightly different too, meaning not even the slopes of the lines can be compared.
"How to lie while telling the truth, with figures"
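To make the scale complaint concrete with hypothetical numbers: the apparent slope on screen is the data slope scaled by the y-axis's pixels-per-unit, so two panels plotting the exact same change over slightly different axis spans show visibly different slopes:

```python
def apparent_slope(d_value: float, d_time: float,
                   axis_span: float, axis_height_px: float) -> float:
    """On-screen slope = data slope scaled by pixels-per-unit of the y-axis."""
    return (d_value / d_time) * (axis_height_px / axis_span)

# Hypothetical numbers: the same +5-point change per year, drawn on two
# panels of equal pixel height whose y-axes span 40 vs. 50 units.
left_panel  = apparent_slope(5, 1, axis_span=40, axis_height_px=300)
right_panel = apparent_slope(5, 1, axis_span=50, axis_height_px=300)
```

Identical data, a 25% difference in visual steepness; which is exactly why lines across mismatched axes can't be eyeballed against each other.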
You should really check your facts before posting a statement like this.
While the "VM" in LLVM was historically short for Virtual Machine, it really has nothing to do with one anymore. It's a compiler backend used by Clang (the C/C++ compiler) and Rust.
LLVM IR apparently is not restricted to C/C++/ObjC/Swift etc.; it is a general-purpose IR, and LLVM itself is an infrastructure containing many facilities for compiler backends (mostly analyses and transformations).
My understanding is that IR is designed as though there were a VM to run it, but in practice, IR is immediately used to generate code for a target architecture.
That might have been true originally, but I don't think anyone uses LLVM like a JVM/CLR-esque VM any more. As the parent states, the original Low-Level Virtual Machine initialism was even retracted, meaning the project's name is just the "arbitrary" sequence of letters LLVM, with no particular meaning assigned to them.
>Scala Native provides an interop layer that makes it easy to interact with foreign native code. This includes C and other languages that can expose APIs via C ABI (e.g. C++, D, Rust etc.)
From that page, it looks like Scala-C interop is decent, but that's a far cry from C++/Rust interop. For C++ at least, you more or less need to write a pure-C wrapper API to call from Scala, since it doesn't handle C++ types.
Interesting, I guess I understood that wrong. Looks like no "easy" interop, but it's there if you really need it and don't mind the extra work.
I've been playing with this and trying to convert a ~20-line helper script that I use at work (one that would really benefit from skipping JVM warmup), and I've already run into missing core library features like parallel collections and regexes.
This thing will be really great when it's ready, but it's not even close yet.
I'm not sure what exactly you mean by that, but what I was trying to say is that you can't expose Rust directly, you need to expose a C ABI. Which is totally doable, but is not just "drop in Rust code and it works."
While interesting, this seems like a bad idea. You're uploading your backups, no matter how encrypted, to a place where they will be publicly available to download.
Most cloud backup services are worse: they do no client-side encryption, so your files are freely available to the service provider or anyone who breaks in.
I'd be much more comfortable with this personally. Trust the math, not the people.
Absolutely, but having another layer blocking access to your data is definitely a good thing. It's a good idea to encrypt your files yourself before uploading them to a public cloud.
Exactly. I'd rather trust well-proven math than people or infrastructure. One famous example nowadays is Bitcoin... nobody has been able to break the fundamental math behind it.
> Exactly. I'd rather trust well-proven math than people or infrastructure. One famous example nowadays is Bitcoin... nobody has been able to break the fundamental math behind it.
Well, there was the integer overflow bug years ago (the 2010 "value overflow" incident) where someone could essentially create money out of thin air. But that's the only one I know of, and it's a pretty amazing security track record for such a high-profile and lucrative target.
That said, this is just me being pedantic, I agree I'd much rather trust solid crypto than a promise from a person somewhere, even if that promise is in writing.
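Notably, that overflow wasn't broken math at all; it was summing output amounts in a signed 64-bit integer. A simplified sketch of the failure mode (the validity check is reduced to its essentials, and the amounts are chosen to wrap, in the shape of the real transaction):

```python
def to_int64(x: int) -> int:
    """Wrap an arbitrary Python int into signed 64-bit range, as C would."""
    x &= (1 << 64) - 1
    return x - (1 << 64) if x >= (1 << 63) else x

def outputs_valid(amounts: list[int], total_in: int) -> bool:
    """Buggy check: sums in int64, so two huge outputs can wrap to a
    negative total and sail past the 'outputs <= inputs' test."""
    total = 0
    for a in amounts:
        total = to_int64(total + a)
    return total <= total_in

# Two absurdly large outputs, each just under 2**63 satoshis:
# individually representable, but their sum wraps negative.
huge = 9_223_372_036_854_277_039
ok = outputs_valid([huge, huge], total_in=50 * 100_000_000)
```

The fix was an explicit range check on each amount and on the running sum; the cryptography was never involved, which rather supports the "trust the math" point.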
Depends on how long you want your data to be private, though. There's no guarantee that the encryption won't be broken in a decade or three. And, even if it's not mathematically broken, increased computing power (quantum?) could make brute-forcing fairly trivial.
True, Tarsnap is pretty on-point there, but it's also not cheap. $0.25/GB is much more than S3 ($0.023) or B2 ($0.005); the Tarsnap dev says it's because he does block-level deduplication, which makes it that much more valuable. But there are other tools, like Duplicati, that can do encrypted, deduplicated backups and be used with cheaper services. With that considered, Tarsnap is 50x the price of B2, and that's without counting bandwidth.
Or if you're a cheap fuck like me, you want to go even lower, and then there's OVH Hubic: $50 for 10 TB for a year, with no additional bandwidth cost.
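Putting the storage prices mentioned in this thread side by side (normalized to $/GB-month, bandwidth excluded; Hubic's $50 for 10 TB/year treated as 10,000 GB over 12 months):

```python
# Storage-only prices quoted above, in $ per GB per month.
price_per_gb_month = {
    "Tarsnap": 0.25,
    "S3": 0.023,
    "B2": 0.005,
    "Hubic": 50 / 10_000 / 12,   # ~$0.0004
}

tarsnap_vs_b2 = price_per_gb_month["Tarsnap"] / price_per_gb_month["B2"]
# storage cost of keeping 100 GB for a year, per provider
cost_100gb_year = {k: v * 100 * 12 for k, v in price_per_gb_month.items()}
```

So the spread across these options is roughly three orders of magnitude; whether dedup, durability, or a one-person trust model justifies the top end depends entirely on how much data you have.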
Previously: business use typically comes with expectations that tend not to align well with consumer grade products (specifically: availability and performance).
Edit: Turns out the answer is yes, no commercial use.
1.2 Using Your Files with the Services. You may use the Services only to store, retrieve, manage, organize, and access Your Files for personal, non-commercial purposes using the features and functionality we make available. You may not use the Services to store, transfer, or distribute content of or on behalf of third parties, to operate your own file storage application or service, to operate a photography business or other commercial service, or to resell any part of the Services.
True, I guess it's seemed cheap to me because I store relatively little data on Tarsnap. (In fact, I don't think I've added any funds to my Tarsnap account in like 2 years.) If you're dealing with larger quantities of data I could see how other options would be the way to go.
Nobody cares about data junk, especially your personal data junk if it is all encrypted. I don't think many people will look at your data there. If the sort of strong encryption you'd use for public cloud backups breaks, we'll have much bigger problems than exposed backups.
You want math? How many combinations are afforded by your "long, carefully chosen password" in a symmetric system? How many core-seconds per hour does a typical botnet scriptmonger control? Cryptanalysis of GPG doesn't even matter if Eve has enough time to brute-force your symmetric key.
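To put rough numbers on that (the alphabet size and guess rate below are assumptions, and real setups add KDF stretching that slows every guess):

```python
ALPHABET = 26 + 26 + 10 + 32      # lower, upper, digits, common symbols = 94
GUESSES_PER_SEC = 1e12            # assumed: a large botnet, fast unsalted guesses
SECONDS_PER_YEAR = 3600 * 24 * 365

def years_to_exhaust(length: int) -> float:
    """Worst-case time to enumerate every password of the given length."""
    return ALPHABET ** length / GUESSES_PER_SEC / SECONDS_PER_YEAR

short = years_to_exhaust(8)    # falls in well under a day at this rate
long_ = years_to_exhaust(16)   # astronomically long
```

Under these assumptions, an 8-character password is gone almost immediately, while 16 truly random characters push past the age of the universe; the keyspace, not the cipher, is usually the weak point.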