I'd be hard pressed to find one anywhere near a major urban area that's not in the South. They do exist sporadically in rural areas and along highways in many other regions.
Maybe it's just my domain, but most perf problems I've seen relate to architecture and infrastructure decisions, not really code or algorithms. Especially the "microservice" mantra: splitting simple functions out into containers, running them on k8s, all in the name of "scalability," then being surprised when a simple API call takes several seconds due to all the crud it has to go through.
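To put rough numbers on it, here's a toy back-of-the-envelope model. Every figure below is an assumption for illustration, not a measurement:

    # Toy latency model (all numbers are made up for illustration).
    # A monolith does each step in-process; a microservice chain pays
    # network, proxy, and (de)serialization overhead on every hop.
    WORK_MS = 5           # actual business logic per step
    HOP_OVERHEAD_MS = 50  # assumed: connection + TLS + sidecar + serialization

    def monolith_latency(steps: int) -> int:
        # in-process function calls are effectively free
        return steps * WORK_MS

    def microservice_latency(steps: int) -> int:
        # every step becomes a network hop with fixed overhead
        return steps * (WORK_MS + HOP_OVERHEAD_MS)

    print(monolith_latency(8))      # 40 ms
    print(microservice_latency(8))  # 440 ms, before retries and tail latency

The point is that the overhead is per hop and fixed, so it dominates as soon as the actual work per service is small.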
I used to think this was a good idea. However, the reality is that when you join the military, it should be a reasonable assumption that you are going to be sent somewhere to actually fight at some point.
My proposed alternative would be: when you sign up for the military, you are presented with a list of "regions" in which you are willing to be deployed in a combat role, with pay/benefits scaling according to how many regions (and how "in demand" those regions are) you are willing to be deployed to. So you could potentially have a group that would not fight in the Middle East but would fight in the Pacific, etc. Of course you can't have too many people declining service in too many regions, but if it starts to get expensive to find recruits willing to fight in a given region, that's a clear sign something's amiss with popular sentiment.
I'm not a libertarian but I think they have a point here - conscription is tantamount to slave labor. The fact that it was accepted by societies for hundreds of years doesn't make it any less so.
All work can be compared to slave labor to the extent that you need money to live. Just think of it as practical training for which you also get paid.
Personally I think it'd be a massive net benefit for society if every able person had a decent standard of first-aid training and a bunch of other general competencies that gave them useful job and emergency skills, without having to pay for that training themselves.
This is the case only if the new interpreter does not simply include the layer the old interpreter has for translating bytecode to native instructions. Once you have that, you can simply bootstrap any new interpreter from the previous one. Even in the case of supporting new architectures, you can still work at the Python level to produce the necessary binary, although the initial build would have to be done on an already supported architecture.
The usual understanding of "interpreter" in a CS context is a program that executes source code directly, without a compilation step. However, the binary that translates an intermediate bytecode to native machine code is at least sometimes called a "bytecode interpreter".
This is still incorrect. A bytecode interpreter, as its name indicates, interprets bytecode. Typically, compiling bytecode to native machine code is the work of a JIT compiler.
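For concreteness, here's a minimal sketch of what a bytecode interpreter does, using a made-up toy instruction set (not CPython's real one):

    # Minimal bytecode interpreter for a toy stack machine.
    # This is an illustration, not CPython's actual bytecode format.
    PUSH, ADD, MUL, PRINT = range(4)

    def interpret(code):
        stack = []
        pc = 0
        while pc < len(code):
            op = code[pc]
            pc += 1
            if op == PUSH:
                stack.append(code[pc]); pc += 1   # PUSH takes an operand
            elif op == ADD:
                stack.append(stack.pop() + stack.pop())
            elif op == MUL:
                stack.append(stack.pop() * stack.pop())
            elif op == PRINT:
                print(stack.pop())

    # (2 + 3) * 4 -> prints 20. Every opcode is dispatched at runtime;
    # no native machine code is ever generated.
    interpret([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT])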
Yes, that's another great example of the same kind of thing - creating a JIT from an interpreter. It remains true that interpreters do not directly generate machine code.
What are your goals, to let everyone know that interpreters, definitionally, don't generate code? This isn't debate club.
I dropped a cool link that shows we have a machine that turns interpreters into compilers. I am talking about the machine. You are talking about the definition. We aren't talking about the same thing.
Partly, it's simply that words matter. An interpreter is not a compiler, even if partial evaluators and Futamura transforms are very cool. Posting about them somewhere other than a thread confused about what interpreters are might have been more fruitful.
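For anyone curious, here's the shape of the idea done by hand on a toy stack machine like the one sketched above: specializing an interpreter to one fixed program eliminates the dispatch loop, which is essentially the first Futamura projection. The "target" here is Python closures rather than machine code, so this is only a sketch of the concept:

    # Hand-rolled sketch of the first Futamura projection: specializing
    # an interpreter to a fixed program yields a "compiled" version of
    # that program. Specialization targets Python closures, not machine
    # code, but the runtime opcode dispatch is gone.
    PUSH, ADD, MUL = range(3)

    def specialize(code):
        ops = []                       # one residual closure per opcode
        pc = 0
        while pc < len(code):
            op = code[pc]; pc += 1
            if op == PUSH:
                n = code[pc]; pc += 1
                ops.append(lambda s, n=n: s.append(n))
            elif op == ADD:
                ops.append(lambda s: s.append(s.pop() + s.pop()))
            elif op == MUL:
                ops.append(lambda s: s.append(s.pop() * s.pop()))

        def compiled():
            s = []
            for f in ops:              # straight-line run, no dispatch
                f(s)
            return s.pop()
        return compiled

    prog = specialize([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL])
    print(prog())  # 20

A real partial evaluator does this mechanically and can target machine code, which is the link's point: interpreter in, compiler out.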
My experience with Apple hardware has been that it generally holds up. I'm only on my third iPad since I bought the original in 2011. My iPhones have all lasted at least four years.
The screen on my MacBook Air has been the exception. I wonder why they can't just use the same display on those that they do on the iPad. It seems better quality, as well.