I used Cygwin pretty heavily in the late 90s and early 2000s. It was slow. I had scripts that took days to run dealing with some network file management. When I moved them over to Linux out of frustration (I think I brought in something like a Pentium 90 laptop, a Gateway Solo I think?) they were done in tens of minutes.
I'm sure they did the best they could ... it was just really painful to use.
This matches my experience as well. Some of my earliest rsync experiences were with the Cygwin version, and I can remember scratching my head and wondering why people raved about this tool that ran so slowly. Imagine my surprise when I tried it on Linux. Night and day!
Wish I played that interview game better. I saw the success coming from a mile away (2022) but I can't vibe with people in the hire game right. It's like eye contact, smiling, facial expressions, stuff like that.
I guess there's a bunch of tools to not suck at this. Anyone had success here? The AI tools say I'm great because they can't pick up the kind of problems I'm talking about.
Pretend to be and/or actually be motivated by things other than money; that's the biggest thing interviewers drop people over, even though they're motivated by money to be there themselves.
I had access to a VMS system in my BBS days, and I had no idea it wasn't just some hard to use BBS software. When it clicked that it was a real operating system on a giant machine (I believe 11/380) it changed everything for me!
Apparently, the only place the VAX 380 exists is in a writing sample by Pearson Education. Otherwise, there is no evidence of DEC ever producing something called "VAX 380".
It looks entirely made up because the procedure described is also entirely alien to me, and I had professional experience with both VMS and Ultrix when they were still supported by DEC. (And it's certainly not BSD...)
Could be. You could also have been thinking of the 11/730, which was a cost-reduced 11/750 and thus the second-slowest VAX model DEC ever sold.
The slowest would be the 11/725, which was a cost-reduced 11/730 that had a reduced clock speed and half of the bus slots filled with epoxy to limit expansion. The 11/725 was so slow that using it was an act of masochism; it was slower than your 11/23+.
Those models were pretty rare, though. Even though they were cheaper than an 11/750, the performance drop from the 750 to the 730 was too severe to justify even the reduced cost. Replacing PDP-11s being used in industrial applications might have saved it, but the 730 was still too expensive versus the existing PDP-11 products, and the 725's limited expansion made it less attractive than those same PDP-11s. The PDP-11 thus outlived both the 725 and the 730.
The VMS shell had so many good ideas. If I ever write a shell, I'm including VMS-style abbreviations. If any modern POSIX shell implements such a feature, let me know, because if there isn't one I'll have to write it.
Not quite the same, but fish shell has programmable abbreviations. I type “tf<space>” and it expands that inline to “opentofu”. It used to say “terraform” before we upgraded; I didn’t even have to change the commands I type.
fish isn't POSIX though. I'm guessing zsh can probably do something similar, but command completion just isn't the same as shortening "mkdir test" to "mkd test".
I've only ever read about VMS in an historic context, like Wikipedia articles and blog posts. DEC and VMS are not well known. That's a shame, considering how much influence they had, especially on WinNT.
I don't know about VMS specifically (more people will just know it as the thing the VAX runs), but DEC is very well known to anyone in the computer space.
The PDP series brought us Unix and GNU, and the VAX was the only mainframe capable of competing with IBM. DEC was the largest terminal manufacturer (they made the VT100 and VT220; if you've ever run a terminal emulator, chances are it's emulating one of those or a machine that did). They created CP/M (and by extension DOS). DEC is very well known.
Oh, I'm not in any way saying it didn't haha. Every other point still stands. Besides, even if it didn't directly influence DOS it did heavily influence another Microsoft operating system (NT)
The main complexity of IPv6 is still having to maintain an IPv4 installation. The vast majority of non-phone devices do not work in an IPv6-only world because CLAT hasn't been baked into the OS from the very beginning. It still isn't a first-class citizen on Linux servers, desktops, IoT, or Windows. I believe macOS integrates it now.
Couple that with approximately zero services requiring IPv6 and the collapsing cost of IPv4 addressing, and IPv6 becomes very much a hidden protocol for phones. When I tether off my phone I get an IPv4 address; the phone may well do a 4-to-6 translation and then something else does a 6-to-4 translation. That doesn’t matter, I can still open a socket to 1.1.1.1 from my application.
Had IPv4 been transparently supported IPv6 wouldn’t have taken 30 years and a whole new ecosystem (phones) to get partway there.
If anything, IPv6 is extremely easy to use, especially with SLAAC: On any kind of standard network, you turn on IPv6 on your machine, and, given physical connectivity, bam! You're on the internet.
It only gets complex if you try to micro-manage it.
Oh no, last time I asked on HN I got 24 to 48 easy steps involving a lot more acronyms than this (please don't repeat them).
IPv6 is easy to use only if you let your one router manage everything and you give up control of your home network.
Edit: again, please don't help. There have been HNers trying to help before, but my home network is non trivial and all the "easy" autoconfiguration actually gets in the way.
There are no more acronyms. SLAAC (stateless address autoconfiguration) just means automatic client configuration. That's the only one you need.
> give up control of your home network.
What does that even mean? What do you gain by deciding your Apple TV should be at 192.168.0.3? With IPv6, you can just `ping appletv` and it works fine. What more "control" do you need?
I mean generally I want fixed IPs on my local network for robustness.
With IPv6 I actually want it more and it becomes possible since we can just use the MAC address as an IP address.
I have IPv6 service at my ISP right now but I'm hesitant to turn it on on my local network because it does make my firewalling concerns much more critical.
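The MAC-as-address idea mentioned above is roughly what SLAAC's older EUI-64 scheme does: the interface ID is derived deterministically from the MAC, so the address stays consistent across reboots. A sketch, with a made-up MAC:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the 64-bit EUI-64 interface ID from a 48-bit MAC address."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                         # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    # Format as four 16-bit IPv6 groups, leading zeros suppressed
    return ":".join(f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2))

print(eui64_interface_id("00:1a:2b:3c:4d:5e"))  # 21a:2bff:fe3c:4d5e
```

Worth noting that most modern stacks default to randomized "privacy" addresses (RFC 4941) instead, precisely so the MAC is not exposed on the wire.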
You're assuming there is only one internet connection in my home network, for example. The "easy" trick where your ISP gives you routable addresses does not work when there's more than one exit.
Still want to help? :)
And really... everyone is pushing for SSL everywhere - among other things so that the ISP doesn't MITM your traffic.
Why would you allow the ISP to know what machines are inside your home network then?
This doesn’t change anything about the NAT or firewall story, and having two different connections is just as complex with IPv4. Aside from being a fairly exotic setup for personal use anyway.
What would your ISP do with the information that there are 73 unique addresses in your network at this point in time? Especially given that devices may mint any number of them for different reasons, so you can’t even really assume that corresponds to the number of physical devices in your network?
> I mean generally I want fixed IPs on my local network for robustness.
Same here, which is why I use DHCPv6. It's pretty easy to set up, nearly everything supports it, and it's super reliable.
The only catch is that Android refuses to support DHCPv6 for some reason, which is kinda annoying since it means that you need to keep SLAAC enabled if you have any Android devices on your network. Which means that your DHCPv6-supporting devices will end up with two addresses, but there aren't any real downsides to that.
I don't care to remember them, but I do want them to be consistent so there's no dependency on DNS.
My home network isn't the Internet and isn't large: in that circumstance, DNS is a much more complicated system to keep running than just fixed IP addresses.
Above a certain scale, that flips but not at the home level.
A router is something that can be switched off sometimes, or break and have its replacement delayed.
I don't want all my IoT devices going down because they can't resolve hostnames - that's why I set fixed IP addresses for them. It means the way they communicate with each other and with my network is well-defined, and it works as long as they have Layer 2 connectivity (easy to keep up: any one AP being online is enough, whereas my internet connection or the router providing it can vanish).
Honestly, it sounds more like your network is fragile rather than robust. A robust network would be able to handle the IPs changing, rather than needing them permanently set to some specific value.
You are allowed to state your opinion, as am I. My issue with your opinion is that it is grounded in false belief and a lack of knowledge, and rehashing it here reproduces those opinions in others.
NAT rewrites the addresses of a connection, it doesn't filter them. If a connection arrives with the destination address already set to one of your machines, NAT won't prevent it.
NAT is not a security device. A firewall, which will be part of any sane router's NAT implementation, is a security device. NAT is not a firewall, but is often part of one.
Any sane router also uses a firewall for IPv6. A correctly configured router will deny inbound traffic for both v4 and v6. You are not less secure on IPv6.
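As a sketch (table and chain names arbitrary), a stateful nftables ruleset along these lines covers both families with a single `inet` table, defaulting to drop on inbound:

```
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        ct state established,related accept   # replies to connections we opened
        ct state invalid drop
        iif "lo" accept
        # ICMPv6 neighbour discovery must be allowed or IPv6 breaks entirely
        icmpv6 type { nd-neighbor-solicit, nd-neighbor-advert, nd-router-advert } accept
    }
}
```

The point being: this is the same deny-unsolicited-inbound posture people attribute to NAT, except it's explicit, and it applies identically to v4 and v6.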
Even a correctly-configured NAT will let connections in from outside, and a lot of people don't understand this.
Personally I'd count "your security thing doesn't actually do the thing it's supposed to do" as being pretty bad on the security scale. At least people understand firewalls.
NAT doesn't apply to inbound connections if you don't have a matching port forward rule, so it kind of doesn't matter how NAT works here. This is pure routing, not NAT.
IPv4 requires a DHCP server. It requires assigning a range of addresses that's usually fairly small, and requires manual configuration as soon as you need more than 254 devices on a network. The range must never conflict with any VPN you use. And there's more. Compare to IPv6: Nothing. All of these just go away.
And concerning the NAT: That's just another word for firewall, which you still have in your router, which still needs to forward packets, and still can decide to block some of them.
Windows[0]: Static IP configuration is as simple as typing an IP address into the pretty dialog box. No DHCP required.
Linux[1]: # ip addr add <address>/<prefix-length> dev <device> will set a static IP address
>It requires assigning a range of addresses that's usually fairly small, and requires manual configuration as soon as you need more than 254 devices on a network.
Is 65,536 (172.16.0.0/16) or 16 million addresses (10.0.0.0/8) "fairly small"? Are DHCP servers unable to parse networks that "big"?
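For what it's worth, the address counts being thrown around can be checked with Python's `ipaddress` module:

```python
import ipaddress

# Address counts for the private ranges discussed above
for cidr in ("192.168.0.0/24", "172.16.0.0/16", "10.0.0.0/8"):
    net = ipaddress.ip_network(cidr)
    print(cidr, net.num_addresses)
# 192.168.0.0/24 256
# 172.16.0.0/16 65536
# 10.0.0.0/8 16777216
```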
>Compare to IPv6: Nothing. All of these just go away.
They most certainly do. But they're not "problems" with RFC1918 addressing and aren't "problems" at all with IPv4.
There are many issues with IPv4 and the sooner it dies, the better. But the ones you mention aren't issues at all.
If you're going to dunk on IPv4, then dunk on it for the actual reasons it needs to go, not made up "problems."
There's a really interesting terminal image viewer based on cellular automata that I made almost 20 years ago ... it makes every image a bit animated in weird ways: https://github.com/kristopolous/ascsee
The whole premise of prediction markets is that the few people whose perceptions do match outcomes make bets that push the money-weighted average perception toward the outcomes. If perceptions still don't match outcomes at that point, the average return is zero minus transaction costs, with high variance.
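A toy sketch of that money-weighted averaging, with made-up numbers: traders who commit more money pull the price further toward their own belief, so well-funded, well-calibrated traders dominate the price.

```python
# Toy model (all numbers hypothetical): the market price as the
# money-weighted average of traders' subjective probabilities.
beliefs = [0.40, 0.50, 0.70, 0.72]   # each trader's perceived probability
stakes  = [10, 10, 50, 100]          # money each trader is willing to commit

price = sum(b * s for b, s in zip(beliefs, stakes)) / sum(stakes)
print(round(price, 3))  # 0.682 - dominated by the two larger stakes
```

Real market microstructure is order-driven rather than a simple average, but this captures why the marginal, best-informed money sets the price.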
Yes, but I always found that objection a bit silly. It's like pointing out that real cows are obviously not perfect spheres nor do they live in a vacuum.
> [...] if prices perfectly reflected available information, there is no profit to gathering information, in which case there would be little reason to trade and markets would eventually collapse.[2]
That's a stupid way to formulate this. Markets wouldn't "collapse". They would get slightly less efficient until equilibrium is restored to where arbitragers can make enough money to keep prices at that level of efficiency.
Maybe not "collapse" in the sense of going to zero, but if there were no profit to trading, then the quant trading industry would not exist; trading profits would collapse.
Meanwhile Two Sigma is hiring alpha quants to be AI research scientists at $250k starting salary + bonuses.
Even if we're just talking about the HFT/sell-side, there clearly exist various anomalous inefficiencies that can be exploited.
As I said, if we woke up this morning and prices were magically efficient in an idealised sense, at most a few quants would go home and retire early, and tomorrow we'd be back at the level of (in)efficiency that allows people to be market makers.
How can prices reflect all available information if there's no profit to collecting the information and there are no informed quant traders? Who is collecting the information exactly so that prices can reflect it and what is their incentive for doing so? Efficiency doesn't happen magically or automatically - traders create it. It's like a kaggle contest* to process information, with the incentive being profit.
You don't believe in the existence of residual return orthogonal to priced cross-sectional risk factors (alpha)? E.g. trends, momentum, volatility clustering: many easily demonstrable inefficiencies. VPIN and order-flow toxicity are highly predictive features. Most HFT market making, especially in crypto, involves hybrid alpha in addition to the (visible) bid-ask spread, which is itself an "inefficiency" that compensates market makers like Jane Street and other successful firms that operate on the assumption that weak-form EMH is not accurate.
I would have hoped that by now it was obvious that we are talking about a _specific_ weak form of the EMH that takes friction into account?
What is your whole first paragraph about? Who are you trying to convince? Where's the strawman that claimed that the strongest version of EMH that you can imagine is literally true?
There's no single weak form of EMH that could be accurate or inaccurate: there are many versions of the EMH in various strengths and dimensions (that can be accurate or inaccurate).
To be more specific: Jane Street believes (or acts like they believe) that markets are at least efficient enough that it takes a lot of effort for them to make money. As a very, very weak form: someone doing chart astrology, eh, I mean technical analysis, on S&P 500 stocks won't beat the market. But even much stronger versions than this are defensible.
The real strong forms that say that all information is precisely reflected in prices are a simplifying assumption you can sometimes make to make your life easier. Just like you sometimes neglect friction in physics. But when you want to decide how long your train needs to emergency brake, you kinda need to take friction into account. Similarly, when trying to make money in the market or trying to understand how others like Jane Street make money, the strongest EMH is not a good guide.
Question is about EMH and how you expect efficiency to be achieved absent profit for collecting the information.
There are 3 accepted forms of EMH. I'm talking about the weak form - just price history and nothing else. E.g. formulaic alphas have demonstrable predictive value in modeling.
All that to say you believe trading profits are real. Maybe you just need to learn more about what a buy-side alpha quant at Two Sigma does for a living. Trading models can be robust and exploit real inefficiencies. Weak-form EMH is demonstrably false on its face, as you agree.
You're deliberately ignoring that you linked an article on EMH as informative and true, and now don't want to defend it. How do markets become efficient and reflect (any) information if nobody can profit by collecting information and trading on it? EMH states it's impossible to beat the market and that all available information is priced in. How, magically?
The way you're speaking about trading in terms of technical analysis implies you have retail trading exposure and have no idea what institutional alpha quants do.