Came to say the same thing. In Rust, you can also have fluent interfaces that return a different type at each step of the build, which makes the construction process itself type-safe: methods for later steps simply don't exist until the earlier ones have been completed.
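A minimal sketch of that "typestate" builder style — all the names here (RequestBuilder, NeedsUrl, etc.) are invented for illustration, not from any real crate:

```rust
// Each builder step returns a *different* type, so skipping a required
// step is a compile error rather than a runtime one.
struct NeedsUrl;
struct NeedsMethod { url: String }
struct Ready { url: String, method: String }

struct RequestBuilder<S>(S);

impl RequestBuilder<NeedsUrl> {
    fn new() -> Self { RequestBuilder(NeedsUrl) }
    fn url(self, url: &str) -> RequestBuilder<NeedsMethod> {
        RequestBuilder(NeedsMethod { url: url.to_string() })
    }
}

impl RequestBuilder<NeedsMethod> {
    fn method(self, method: &str) -> RequestBuilder<Ready> {
        RequestBuilder(Ready { url: self.0.url, method: method.to_string() })
    }
}

impl RequestBuilder<Ready> {
    // build() is only defined once both fields are set.
    fn build(self) -> String {
        format!("{} {}", self.0.method, self.0.url)
    }
}

fn main() {
    let req = RequestBuilder::new().url("/users").method("GET").build();
    // `RequestBuilder::new().build()` would not compile: build() only
    // exists for RequestBuilder<Ready>.
    println!("{}", req);
}
```

The point is that "find all usages" and dead-code checks aside, the compiler itself rejects half-constructed values here.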
Static analysis is your friend here. TypeScript, Rust, and C# can all identify dead code and report errors or warnings from that, and can perform "find all usage" searches across a workspace.
This is my main criticism of the Ruby community. Code that defeats static analysis is not only accepted, but welcomed.
Python's lukewarm acceptance of such code is already enough to make big projects impractical, because every large codebase will accumulate some undecidable metaprogramming spread through it. But in Ruby it's everywhere.
I think he means that you seem to be speaking for the entire Ruby community (that is, everyone who writes Ruby code). Not all of that community writes tests: I've had coworkers who worked with Ruby at multiple jobs in different industries, and in every case the Ruby application had few or no tests.
If your code base is only used within a single workspace (and you know this with certainty) then I completely agree. The issue arises when your code is used outside of your workspace as a shared code base or a public facing API.
This is an excellent point, but it should be noted that once you delete the unused API endpoint, finding the rest of the dead code is almost trivial with static analysis.
Let's hope good abstractions are developed so that this level of math isn't what we have to deal with on a daily basis. I could imagine that happening. The math behind how an individual transistor works is pretty hairy too, but we mostly don't have to care.
At this point aren't you still hard coding commands, but using "links" rather than URLs? And doesn't the dependency on every link in the chain outweigh one easily maintained URL?
You're hard-coding 'terms' from a hypermedia 'vocab'.
The thing is, if you use well-known terms, your client will work against ANY API that uses those terms, not just the one that you coded it to work against.
Not only that, but the APIs will have flexibility to move things around, or delegate functionality to other hosts by linking through.
As for the cost of walking the chain: once a target resource has been found, it can be cached, and you only need to re-walk if the resource starts returning 404/410.
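The walk itself is cheap to express. Here's a sketch of a client that navigates by well-known link relations ("terms") instead of hard-coded URLs — the in-memory "server", the rel names, and the paths are all invented for illustration:

```rust
use std::collections::HashMap;

// rel term -> target URL, i.e. the links a resource advertises.
type Resource = HashMap<&'static str, &'static str>;

// Follow a chain of rel terms from a starting URL. The client never
// hard-codes any URL except the entry point.
fn follow<'a>(
    server: &'a HashMap<&'static str, Resource>,
    start: &'static str,
    rels: &[&str],
) -> Option<&'a str> {
    let mut current = start;
    for &rel in rels {
        let resource = server.get(current)?;
        current = *resource.get(rel)?;
    }
    Some(current)
}

fn main() {
    // The server can move things around freely; only the rel terms
    // are part of the contract.
    let mut server = HashMap::new();
    server.insert("/", HashMap::from([("orders", "/orders")]));
    server.insert("/orders", HashMap::from([("latest", "/orders/42")]));

    // Walk once, cache the result; re-walk only if the cached URL
    // later comes back 404/410.
    let target = follow(&server, "/", &["orders", "latest"]).unwrap();
    println!("{}", target);
}
```

Point the same `follow` at a different server that uses the same terms and it still works, which is the interoperability claim upthread.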
There are commercial products that you can leave running all the time to generate data from your packets on the fly, such as ExtraHop (http://extrahop.com). There are also continuous PCAP tools, but they need massive amounts of storage and in larger environments the lookback you get is limited.
This looks like something out of the design team for Windows 8 (in both good and bad ways). It is visually striking, a dramatic simplification of what currently exists, and it makes several assumptions about what app developers actually need and how they can plug into OS-provided UI surfaces. One of the most painful lessons for Microsoft with Windows 8 was that over-standardizing things like content sharing and tagging leads apps into situations where the affordances don't work.