
I used Cygwin pretty heavily in the late 90s and early 2000s. It was slow. I had scripts that took days to run dealing with some network file management. When I moved them over to Linux out of frustration (I think I brought in something like a Pentium 90 laptop, a Gateway Solo I think?) they were done in tens of minutes.

I'm sure they did the best they could ... it was just really painful to use.


This matches my experience as well. Some of my earliest rsync experiences were with the Cygwin version, and I can remember scratching my head and wondering why people raved about this tool that ran so slowly. Imagine my surprise when I tried it on Linux. Night and day!

Wish I played that interview game better. I saw the success coming from a mile away (2022) but I can't vibe with people in the hiring game right. It's like eye contact, smiling, facial expressions, stuff like that.

I guess there's a bunch of tools to not suck at this. Anyone had success here? The AI tools say I'm great because they can't pick up the kind of problems I'm talking about.


Pretend to be, and/or actually be, motivated by things other than money; that's the strongest reason interviewers drop people, even though they're motivated by money to be there.

Interesting. I genuinely do not care about money.

The motivation of money is literally zero to me. Maybe that's a problem as well: they want people who are motivated by money acting like they aren't?

I wanted in because I saw them doing exciting, impactful things. That's literally it.

I dunno. I've been struggling with this for decades.


Just act hard for the duration of the interview season.

That's the key really. It's all kayfabe. I saw Kilo as a win too from the product announcement.

I told myself about ten years ago that I need to move when I see things I get that intuition on. But I haven't figured out how to get in the door yet.


What exactly did you predict in 2022?

I can call the winners but then I go with the losers

Finally got to log into a VMS system! I was looking to do that over 20 years ago but never could find one.

Somehow I still remembered most of the shell syntax from a book I read about it, probably in 2001. Don't ask me ... I don't know how either.

Got bored in about 10 minutes but still, another box checked off!


I had access to a VMS system in my BBS days, and I had no idea it wasn't just some hard-to-use BBS software. When it clicked that it was a real operating system on a giant machine (I believe 11/380) it changed everything for me!

There was no 11/380 but there was an 11/780.

Apparently, the only place the VAX 380 exists is in a writing sample by Pearson Education. Otherwise, there is no evidence of DEC ever producing something called "VAX 380".

https://www.pearsonhighered.com/assets/samplechapter/0/2/0/5...


It looks entirely made up because the procedure described is also entirely alien to me, and I had professional experience with both VMS and Ultrix when they were still supported by DEC. (And it's certainly not BSD...)

I know I just made it up! I have an 11/23+ and I'm guessing I was thinking of that 3!

Could be. You could also have been thinking of the 11/730, which was a cost-reduced 11/750 and thus the second-slowest VAX model DEC ever sold.

The slowest would be the 11/725, which was a cost-reduced 11/730 that had a reduced clock speed and half of the bus slots filled with epoxy to limit expansion. The 11/725 was so slow that using it was an act of masochism; it was slower than your 11/23+.

Those models were pretty rare, though. Even though they were cheaper than an 11/750, the performance drop from the 750 to the 730 was too severe to justify even the reduced cost. Replacing PDP-11s in industrial applications might have saved the line, but the 730 was still too expensive versus the existing PDP-11 products, and the 725's limited expansion made it even less attractive than those same PDP-11 products. The PDP-11 thus outlived both the 725 and the 730.


I have a VMS system running under simh! I also have an actual AlphaServer (DS10) running OpenVMS but it's very loud so I don't turn it on often.

The VMS shell had so many good ideas. If I ever write a shell, I'm including VMS-style abbreviations. If there is any modern POSIX shell that implements such a feature, let me know, because if there isn't, I have to write one.

Not quite the same, but fish shell has programmable abbreviations. I type “tf<space>” and it expands that inline to “opentofu”. It used to say “terraform” before we upgraded; I didn't even have to change the commands I type.
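For anyone curious, a minimal sketch of that setup (the `tf` name is this commenter's example; `abbr` is fish's built-in, and this is fish syntax, not POSIX):

```shell
# fish syntax: define an abbreviation that expands inline
# as soon as you press space or enter after typing it.
abbr -a tf opentofu

# Repointing the same abbreviation later is one line; commands
# you type stay the same while the expansion changes.
abbr -a tf terraform
```

Because the expansion happens in the command line itself (unlike an alias), what lands in your history is the full command, not the abbreviation.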

fish isn't POSIX though. I'm guessing zsh can probably do something similar, but command completion just isn't the same as being able to shorten "mkdir test" to "mkd test".

I've only ever read about VMS in a historical context, like Wikipedia articles and blog posts. DEC and VMS are not well known. That's a shame, considering how much influence they had, especially on WinNT.

I don't know about VMS specifically (more people will just know it as the thing the VAX runs), but DEC is very well known to anyone in the computer space.

The PDP series brought us Unix and GNU, and the VAX was the only mainframe capable of competing with IBM. DEC was the largest terminal manufacturer (they made the VT100 and VT220; if you've ever run a terminal emulator, chances are it's emulating one of those or a machine that did). They created CP/M (and by extension DOS). DEC is very well known.


CP/M was created by Digital Research, a completely different company. There is no direct relation to DEC (Digital Equipment Corp.)

well nevermind

Even without CP/M, DEC still had incredible influence! The first multi-user system I used was a VAX, back in the late 80's.

Oh, I'm not in any way saying it didn't haha. Every other point still stands. Besides, even if it didn't directly influence DOS it did heavily influence another Microsoft operating system (NT)

check out decuserve.org for more vms

if it was easier to use and less of a PITA, it wouldn't be taking decades.

The main complexity of IPv6 is still having to maintain an IPv4 installation. The vast majority of non-phone devices do not work in an IPv6-only world because CLAT hasn't been baked into the OS from the very beginning. It still isn't a first-class citizen on Linux servers, desktops, IoT, or Windows. I believe macOS integrates it now.

Couple that with approximately zero services requiring IPv6 and the collapsing cost of IPv4 addressing, and IPv6 becomes very much a hidden protocol for phones. When I tether off my phone I get an IPv4 address; the phone may well do a 4:6 translate and then something else does a 6:4 translate. That doesn't matter, I can still open a socket to 1.1.1.1 from my application.

Had IPv4 been transparently supported, IPv6 wouldn't have taken 30 years and a whole new ecosystem (phones) to get partway there.


If anything, IPv6 is extremely easy to use, especially with SLAAC: On any kind of standard network, you turn on IPv6 on your machine, and, given physical connectivity, bam! You're on the internet.

It only gets complex if you try to micro-manage it.


> especially with SLAAC

Oh no, last time I asked on HN I got 24 to 48 easy steps involving a lot more acronyms than this (please don't repeat them).

IPv6 is easy to use only if you let your one router manage everything and you give up control of your home network.

Edit: again, please don't help. There have been HNers trying to help before, but my home network is non trivial and all the "easy" autoconfiguration actually gets in the way.


There are no more acronyms. SLAAC means automatic client configuration. That's the only one you need.

> give up control of your home network.

What does that even mean? What do you gain by deciding your Apple TV should be at 192.168.0.3? With IPv6, you can just `ping appletv` and it works fine. What more "control" do you need?


> you can just `ping appletv` and it works fine.

How many services does it take to make this work?

mDNS is quite fragile.


I haven’t seen a bog-standard router yet that didn’t just do it out of the box.

I mean generally I want fixed IPs on my local network for robustness.

With IPv6 I actually want it more and it becomes possible since we can just use the MAC address as an IP address.

I have IPv6 service at my ISP right now but I'm hesitant to turn it on on my local network because it does make my firewalling concerns much more critical.
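The MAC-to-address scheme mentioned above is modified EUI-64, the classic SLAAC interface identifier. A POSIX shell sketch with an assumed example MAC (real SLAAC does this per interface in the kernel): split the MAC in half, insert ff:fe, and flip the universal/local bit of the first octet.

```shell
# Assumed example MAC, chosen for illustration only.
mac="52:54:00:12:34:56"

# Split into the six octets.
IFS=: read -r a b c d e f <<EOF
$mac
EOF

# Flip bit 0x02 (the universal/local bit) of the first octet,
# as RFC 4291 specifies for modified EUI-64.
a=$(printf '%02x' $(( 0x$a ^ 0x02 )))

# Interface identifier: first half, ff:fe, second half.
# This becomes the low 64 bits of the SLAAC address.
iid=$(printf '%s%s:%sff:fe%s:%s%s' "$a" "$b" "$c" "$d" "$e" "$f")
echo "$iid"
```

Worth noting: many modern stacks default to stable privacy addresses (RFC 7217) rather than raw EUI-64, which is part of why "just use the MAC" tends to require explicit configuration today.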


> I mean generally I want fixed IPs on my local network for robustness.

What do you mean by robustness? Isn't it really stable hostnames that you want? I don't understand how fixed IPs increase resilience (to what?).

> I'm hesitant to turn it on on my local network because it does make my firewalling concerns much more critical.

Block everything coming in from outside the network. Allow established connections. That's all there is to it.
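As a sketch, those two rules map to something like the following in nftables (table layout and the loopback rule are assumptions, not a drop-in config):

```shell
# Assumed minimal nftables policy: default-deny inbound, allow replies.
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
nft add rule inet filter input ct state established,related accept
nft add rule inet filter input iifname "lo" accept
```

In practice you would also want a rule accepting ICMPv6, since neighbor discovery (and thus IPv6 itself) breaks without it.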


You're assuming there is only one internet connection in my home network, for example. The "easy" trick where your ISP gives you routable addresses does not work when there's more than one exit.

Still want to help? :)

And really... everyone is pushing for SSL everywhere - among other things so that the ISP doesn't MITM your traffic.

Why would you allow the ISP to know what machines are inside your home network then?


This doesn’t change anything about the NAT or firewall story, and having two different connections is complex with IPv4 just as well. Aside from being a fairly exotic setup for personal use anyway.

What would your ISP do with the information that there are 73 unique addresses in your network at this point in time? Especially given that devices may mint any number of them for different reasons, so you can’t even really assume that corresponds to the number of physical devices in your network?


> Aside from being a fairly exotic setup for personal use anyway.

So I should cancel one of my pipes because the "committee" overcomplicated things in the name of autoconfiguration?

> What would your ISP do with the information that there are 73 unique addresses in your network at this point in time?

Sell it, of course. Good info for targeted marketing/political propaganda per household.

> I haven’t seen a bog-standard router yet that didn’t just do it out of the box.

Which one, the one from ISP A or the one from ISP B? :)


> So I should cancel one of my pipes because the "committee" overcomplicated things in the name of autoconfiguration?

That is absolutely not what I said. It’s a more complex setup than a single connection with either protocol, and can be solved with both.

> Which one, the one from ISP A or the one from ISP B? :)

Realistically it is going to return an AAAA record with both addresses, maybe also the link-local one; any of them works locally. That is a non-issue.


> I mean generally I want fixed IPs on my local network for robustness.

Same here, which is why I use DHCPv6. It's pretty easy to set up, nearly everything supports it, and it's super reliable.

The only catch is that Android refuses to support DHCPv6 for some reason, which is kinda annoying since it means that you need to keep SLAAC enabled if you have any Android devices on your network. Which means that your DHCPv6-supporting devices will end up with two addresses, but there aren't any real downsides to that.


> since we can just use the MAC address as an IP address

With IPv4 you need to remember ... one number per machine. The one at the end, since it's usually a /24 and everything has the same prefix.

I'm sure it's trivial to remember mac addresses from different vendors with no connection to each other too :)

> Isn't it really stable hostnames that you want?

Hostnames are another layer. Your apple tv example may advertise itself on its own. My toys don't all do that.


That’s kind of my point, though. There is no reason at all to remember IP addresses.

I don't care to remember them, but I do want them to be consistent so there's no dependency on DNS.

My home network isn't the Internet and isn't large: DNS is a much more complicated system to keep running than just fixed IP addresses in that circumstance.

Above a certain scale, that flips but not at the home level.


At the home level, you have a home router that can do mDNS out of the box. All devices are reachable by their hostname.

A router which can be switched off sometimes, or break and delay replacement.

I don't want all my IoT devices going down because they can't resolve hostnames - that's why I set fixed IP addresses for them. It means how they communicate with each other and my network is well-defined, and works provided they have Layer 2 (easy to keep up - it works provided any 1 AP is online, whereas my internet or the router providing it can vanish).


> I mean generally I want fixed IPs on my local network for robustness.

With IPv6 you can assign fixed unique local addresses in addition to dynamic public addresses from your ISP.


What firewalling? You don’t have an ipv4 firewall?

Honestly, it sounds more like your network is fragile rather than robust. A robust network would be able to handle the IPs changing, rather than needing them permanently set to some specific value.

The internet, in very large volume, disagrees. Am I not allowed to document the widely held common sentiment?

You are allowed to state your opinion, as am I. My issue with your opinion is that it is grounded in false belief and a lack of knowledge, and rehashing it here reproduces those opinions in others.

So, like ipv4, but you lose the protection and privacy afforded by the NAT?

What protection? What privacy? Smoke and mirrors, mostly.

NAT is a firewall with extra steps. IPv6 reduces complexity. Privacy (illusion of it, anyway, just like in ipv4 NAT) is handled by private addresses.

…and if you really want to, NAT for ipv6 just works.


It's the illusion of a firewall too.

NAT changes the apparent destination address of a connection; it doesn't filter them. If a connection arrives with the destination address already set to one of your machines, NAT won't prevent it.


NAT is not a security device. A firewall, which will be part of any sane router's NAT implementation, is a security device. NAT is not a firewall, but is often part of one.

Any sane router also uses a firewall for IPv6. A correctly configured router will deny inbound traffic for both v4 and v6. You are not less secure on IPv6.


A misconfigured firewall is a gaping hole. A misconfigured NAT just fails to let data from outside into your local network.

So a firewall is actually worse than NAT.


Even a correctly-configured NAT will let connections in from outside, and a lot of people don't understand this.

Personally I'd count "your security thing doesn't actually do the thing it's supposed to do" as being pretty bad on the security scale. At least people understand firewalls.


> Even a correctly-configured NAT will let connections in from outside, and a lot of people don't understand this.

Yes, that's called port forwarding and it is a normal thing. You actually want that.


It will let them in without a port forward in place. The port forward just rewrites the IP on an incoming connection, nothing more.

Only if you can reuse an opened connection, but that will work with a firewall too.

You don't need any tricks like that. Regular new connections will work.

No it won't, because that's not how NAT works.

It will, and if you test it then it does.

NAT doesn't apply to inbound connections if you don't have a matching port forward rule, so it kind of doesn't matter how NAT works here. This is pure routing, not NAT.


IPv4 requires a DHCP server. It requires assigning a range of addresses that's usually fairly small, and requires manual configuration as soon as you need more than 254 devices on a network. The range must never conflict with any VPN you use. And there's more. Compare to IPv6: Nothing. All of these just go away.

And concerning the NAT: that's just another word for firewall, which you still have in your router, which still needs to forward packets, and still can decide to block some of them.


>IPv4 requires a DHCP server.

Windows[0]: Static IP configuration is as simple as typing an IP address into the pretty dialog box. No DHCP required.

Linux[1]: `ip addr add <ipv4-address>/<prefix-length> dev <device>` will set a static IP address

>It requires assigning a range of addresses that's usually fairly small, and requires manual configuration as soon as you need more than 254 devices on a network.

Is 65,536 (172.16.0.0/16) or 16 million addresses (10.0.0.0/8) "fairly small"? Are DHCP servers unable to parse networks that "big"?

>Compare to IPv6: Nothing. All of these just go away.

They most certainly do. But they're not "problems" with RFC1918 addressing and aren't "problems" at all with IPv4.

There are many issues with IPv4 and the sooner it dies, the better. But the ones you mention aren't issues at all.

If you're going to dunk on IPv4, then dunk on it for the actual reasons it needs to go, not made up "problems."


The DHCP server is in the router, just like you need a router for SLAAC.

There's a really interesting terminal image viewer based on cellular automata I made almost 20 years ago ... makes every image a bit animated in weird ways: https://github.com/kristopolous/ascsee


So this is just a jsonl viewer in a tui framework wrapper?

Almost 30 years ago I wrote an article advocating for a domain-level back button with a quasi-mode, like Ctrl, to traverse domains.

Would have fixed this. Too late now


Why would outcomes match perceptions?

The whole premise of gambling is that they don't


The whole premise of prediction markets is that the few people whose perception do match outcomes make bets to push the money-weighted average perception toward outcomes. If perceptions still don't match outcomes at that point, average return is 0 minus transactions, with high variance.

Huh? That sounds like ideology and not empirical observation.

That's just how limit order books work with mark-to-market pricing

Could you point me towards some resource that would help me understand what you wrote? Genuinely curious about how this stuff works


That's pure ideology, not empirical observation. There's, you know, even a large section there in that article pointing that out.

The index fund industry would like to have a word with you.


Yes, but I always found that objection a bit silly. It's like pointing out that real cows are obviously not perfect spheres nor do they live in a vacuum.

> [...] if prices perfectly reflected available information, there is no profit to gathering information, in which case there would be little reason to trade and markets would eventually collapse.[2]

That's a stupid way to formulate this. Markets wouldn't "collapse". They would get slightly less efficient until equilibrium is restored to where arbitragers can make enough money to keep prices at that level of efficiency.


Maybe not "collapse" in the sense of going to zero, but if there were no profit to trading, then the quant trading industry would not exist and trading profits would collapse.

Meanwhile Two Sigma is hiring alpha quants to be AI research scientists at $250k starting salary + bonuses.

Even if we're just talking about the HFT/sell-side, there clearly exist various anomalous inefficiencies that can be exploited.

Fama's guy doesn't agree either [1]

https://www.ft.com/content/813b3d76-6ef1-427d-a2e0-76540f58a...


As I said, if we woke up this morning and prices were magically efficient in an idealised sense, at most a few quants would go home and retire early, and tomorrow we'd be back at the level of (in)efficiency that allows people to be market makers.

How can prices reflect all available information if there's no profit to collecting the information and there are no informed quant traders? Who is collecting the information exactly so that prices can reflect it and what is their incentive for doing so? Efficiency doesn't happen magically or automatically - traders create it. It's like a kaggle contest* to process information, with the incentive being profit.

You don't believe in the existence of residual return orthogonal to priced cross-sectional risk factors (alpha)? E.g. trends, momentum, volatility clustering, etc., many easily demonstrable inefficiencies. VPIN and order flow toxicity are highly predictive features. Most HFT market making, especially in crypto, involves hybrid alpha in addition to the (visible) bid-ask spread, which is itself an "inefficiency" to compensate market makers like Jane Street and other successful firms that operate on the assumption that weak form EMH is not accurate.

* https://www.kaggle.com/competitions/jane-street-real-time-ma...


I don't know what your question is about?

I would have hoped that by now it was obvious that we are talking about a _specific_ weak form of the EMH that takes friction into account?

What is your whole first paragraph about? Who are you trying to convince? Where's the strawman that claimed that the strongest version of EMH that you can imagine is literally true?

There's no single weak form of EMH that could be accurate or inaccurate: there are many versions of the EMH in various strengths and dimensions (that can be accurate or inaccurate).

To be more specific: Jane Street believes (or acts like they believe) that markets are at least efficient enough that it takes a lot of effort for them to make money. As a very, very weak form: someone doing chart astrology, eh, I mean technical analysis, on S&P 500 stocks won't beat the market. But even much stronger versions than this are defensible.

The real strong forms that say all information is precisely reflected in prices are a simplifying assumption you can sometimes make to make your life easier. Just like you sometimes neglect friction in physics. But when you want to decide how long your train needs to emergency brake, you kinda need to take friction into account. Similarly, when trying to make money in the market or trying to understand how others like Jane Street make money, the strongest EMH is not a good guide.


Question is about EMH and how you expect efficiency to be achieved absent profit for collecting the information.

There are 3 accepted forms of EMH. I'm talking about weak form - just price history and nothing else. E.g. formulaic alpha have demonstrable predictive value in modeling.

All that to say, you believe trading profits are real. Maybe you just need to learn more about what a buy-side alpha quant at Two Sigma does for a living. Trading models can be robust and exploit real inefficiencies. Weak form EMH is demonstrably false on its face, as you agree.


> Question is about EMH and how you expect efficiency to be achieved absent profit for collecting the information.

Huh, no?


You're deliberately misunderstanding: you linked an article on EMH as informative and true, and then don't want to defend it. How do markets become efficient and reflect (any) information if nobody can profit by collecting information and trading on it? EMH states it's impossible to beat the market and that all available information is priced in. How, magically?

The way you're speaking about trading in terms of technical analysis implies you have retail trading exposure and have no idea what institutional alpha quants do.

This explainer might help you understand: https://youtu.be/RpCzaEn4rnc?t=257


But in aggregate they might.

Worth mentioning my tmux LLM chat helper sidechat: https://github.com/day50-dev/sidechat

I use it every day.

