Hacker News | ryanelfman's comments

This resonated hard with me. Sounds like depression.


Do you foresee the government doing anything to reduce wait times for visas/green cards?


The green card process through USCIS seems to be speeding up significantly.


Hey Peter, in one of the other comments you mentioned that the NVC immigration process is a disaster, whereas here I see that the GC process through USCIS is speeding up.

As I understand it, once USCIS approves a petition, it goes to the NVC, where paperwork is checked to be documentarily qualified and then consular interviews are scheduled.

Is there a difference between the two? Asking because ultimately both are responsible for issuing GCs.


I was referring to the final stage of the GC process (not the penultimate I-140 stage): either an I-485 application filed with USCIS or an immigrant visa application filed with a US Consulate abroad via the NVC. The former is speeding up and the latter is a slow mess.


Thanks Peter :)

That explains it.


Sorta the purpose of the Actor model, but that has other complexities.


Try out .NET 5 with Blazor. You will be able to use one language (C#) on both the server and client side, and a single package manager (NuGet). Install Visual Studio and you're good to go.


Great summary


That's not necessarily true. Those forces will still be there, but the lessons about what not to do have also been learned. People can improve and learn over time.


Yes but if you have improved over time, then at least the most recently developed parts of the code would be high-quality and would not need rewriting. So you would only need to rewrite some encapsulated legacy parts of the codebase, which is completely different from a full rewrite.


I love DDG! Switched to it about a year ago and haven't looked back. The odd time I need to use Google at work for a specific technical lookup, but otherwise I don't need to. I love to support the underdog who values privacy. I recommend it to everyone.


Why not use New Relic?


My early impressions of New Relic were that it was a really fast and easy way to correlate all the default logging of the web application stack. I imagine Netflix would want something a little deeper and more customized than this; also, deploying New Relic at scale would be costly and would at the very least require an audit of potential performance implications.


I've been creating a platform for some time now that is similar except it has to do with trading future revenue streams. Check it out http://www.gimmeview.com


Sometimes I feel like microservices is pushed by hosting providers to get more money from all the additional deployments...


I'm not a computer programmer or scientist, but I work with a group of them. The argument I've heard them make - and I may well be misrepresenting it here - is that microservices are often used as a way to avoid getting better at parallel processing/programming. They're working on a huge amount of data processing using Go, if that provides any context. I'd be curious what others think of this idea.


Microservices enforce the removal of shared state by virtue of the subsystems being physically separate. This is how they enable concurrency.

The same effect can be had without actually creating microservices. All that is needed is well defined and controlled interfaces.
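A minimal sketch of that idea in Go: instead of two components mutating a shared variable, one component owns the state outright and everything else talks to it over a typed channel. That's the same discipline microservices impose physically, recovered in-process with a well-defined interface. (The `counter` function and its payloads are invented for illustration.)

```go
package main

import "fmt"

// counter owns its running total; callers can only reach it through
// the request/result channels, so there is no shared mutable state.
func counter(requests <-chan int, results chan<- int) {
	total := 0
	for n := range requests {
		total += n
		results <- total
	}
}

func main() {
	requests := make(chan int)
	results := make(chan int)
	go counter(requests, results)

	for _, n := range []int{1, 2, 3} {
		requests <- n
		fmt.Println(<-results) // prints 1, then 3, then 6
	}
	close(requests)
}
```

The channel pair is the "well defined and controlled interface": swap it for an HTTP endpoint and you have a microservice; keep it in-process and you have the same isolation without the deployment overhead.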


Yes, partly it's easier parallelism. Generally, easier ways of doing things are successful.

And partly, it's more efficient use of resources because it's finer-grained, and therefore cheaper.


I think it's likely that the recent faddishness of Go, which upon actual use turns out to be difficult to use for anything /except/ microservices, itself is causing microservice adoption.


If I can rephrase this without it being a criticism: Go encourages particular patterns of use, much like any technology.

Rails, for example, strongly encourages programmers to keep a company's entire operations in a single application and memory space. Many Rails shops eventually discover that this is suboptimal for their needs, for example when one particular part of all of their operations needs to be scaled up substantially, but scaling a "monorail" requires memory proportionate to the total size of all operations times the highest desired throughput of any piece of the system. I'm aware of several Rails shops which needed to retroactively decompose a monorail, and many of them rewrote the performance-intensive part in Go, as Go is bugs-in-your-teeth fast for many common workloads.

Just like Rails "wants" to be a monorail, Go feels to me like it wants to be a collection of small, X00 to ~2k line programs, talking to each other via JSON messages passed either over HTTP or a queueing system. (Use NSQ! It's fantastic!)

Partly this is due to affordances in Go's design for e.g. deploying systems. If you want to re-deploy, just compile (for free) and copy the binary everywhere. Partly it is due to Golang not yet having much in the way of community norms for building really big systems. Dependency management is a very unsolved problem and gets worse the larger the individual pieces of your system get. Golang also isn't very opinionated about project structure in the way Rails is, which counsels keeping parts of your system bite-sized as a way of imposing structure on top of it. (By comparison, you can drop any intermediate Rails programmer into virtually any Rails program and say "Find the login page. Find the $FOO business logic." and they'll be able to do it in a few seconds.)
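The "small program talking JSON over HTTP" shape described above can be sketched in a few lines with nothing but the standard library; the `Greeting` payload and `/greet` route here are made up for illustration:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Greeting is an invented example payload; the point is the shape:
// one small binary exposing one JSON endpoint.
type Greeting struct {
	Name    string `json:"name"`
	Message string `json:"message"`
}

// greet holds the (trivial) business logic, kept separate from HTTP
// plumbing so it stays easy to test.
func greet(name string) Greeting {
	return Greeting{Name: name, Message: "hello, " + name}
}

func greetHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(greet(r.URL.Query().Get("name")))
}

func main() {
	http.HandleFunc("/greet", greetHandler)
	// Deployment is just compiling and copying the static binary.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

A few hundred lines of this, a queue or HTTP between the pieces, and you have the architecture Go keeps nudging you toward.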


Microservices do, to an extent, allow you to avoid multi-threaded applications, which are considered the most "pure" way to do parallel processing. However, even though it might be considered "impure", I think microservices are a really effective way to manage parallel computation, especially when you're using languages which have leaky threads that end up accidentally sharing state you don't intend to share.

