Oh, of course. It's okay to kill migrant workers, as long as I don't create pretty products. Apple is like any other company, it doesn't matter how they paint themselves. What matters is the people's lives that are being ruined.
> Oh, of course. It's okay to kill migrant workers, as long as I don't create pretty products.
Comments like this are not conducive to anything other than trolling or ranting. It might leave you feeling good about yourself, but it doesn't help to change the world around you in any meaningful, or at least positive, way.
I in no way claimed that it was 'ok to kill migrant workers.' I offered an explanation for why people focus on Apple more than other companies. When someone (Apple) paints themselves as 'hip' and 'progressive' (via marketing), people will fall over themselves to point out hypocrisy when they find it. It's just the way that people work.
Records and retention is a large part of my current project. It permeates even the lowest levels, and it's practically impossible for the narrative described to occur. I say practically impossible because there is always some way some idiot can screw up big enough that this happens, but this narrative didn't happen. I'd stake 3 digits on it.
It would be great to post an image of what this looks like in a couple of code examples. I've been working on this as well, primarily because I haven't seen a syntax file that gets it right; usually it's rushed or sloppy, and some just get the syntax outright wrong, with odd false positives appearing all over.
I think the problem is also Vim, which doesn't have a really great way to do syntax highlighting. At least not an easy way. Highlighting all runs in a single thread and blocks, which makes it slow on large files, and the regular expressions seem fragile. I think a better way to do syntax highlighting would involve a real lexer (and perhaps a parser generator like Bison) instead of regular-expression hell.
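To make the lexer idea concrete, here is a minimal sketch of lexer-based highlighting: a single tokenizer pass classifies the text instead of many overlapping per-line regexes. The token names and the toy keyword set are hypothetical, not Vim's actual machinery.

```python
import re

# Each token class gets one named pattern; alternation order gives
# keywords priority over plain identifiers.
TOKEN_SPEC = [
    ("KEYWORD", r"\b(?:if|then|else|end|function)\b"),
    ("STRING",  r'"[^"]*"'),
    ("NUMBER",  r"\b\d+\b"),
    ("IDENT",   r"[A-Za-z_]\w*"),
    ("OTHER",   r"."),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def highlight(source):
    """Return (token_kind, text) pairs for a source string, skipping whitespace."""
    return [(m.lastgroup, m.group())
            for m in MASTER.finditer(source)
            if not m.group().isspace()]
```

One pass over the buffer yields classified spans that an editor could color, rather than re-matching fragile patterns line by line.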
> A more pessimistic outcome is that powerful AI arrives before we've really thought through the moral and ethical issues, and it possibly destroys us, or is developed and exploited by military interests to who knows what outcome.
But it's not a very likely one. Spontaneous AI is almost impossible, primarily because we design and control the substrate. AI is very, very hard, and when we get there, it is very unlikely that it will turn on us, or at least not for a millennium or two. This fear is the product of some seriously good sci-fi stories, not of much logic or fact.
Once we get smarter than human AI, it can improve itself or make even better AIs, which make better AIs, and so on. It could very rapidly become far, far more intelligent than us. A being that powerful could do whatever it wants. Manipulate humans, hack the entire internet, design nanotechnology, etc. It's very unlikely that it would have human morality, and so wouldn't care about eliminating us to turn the Earth into a giant supercomputer, or just prevent us from ever creating competing AIs.
I'm good with options, but I like the way Lua is, as it is. I don't really accept that types are needed for safety in my games, or any application I've used Lua for (okay, almost exclusively games). What would be the use cases for this? In my tiny brain, it makes as much sense as dynamic C.
I make an online card game. The rules are pretty simple, about as complex as Duel of Champions. There are no ongoing effects, no instants, no stack, no replacement effects, and so on. What the game does have is literal thousands of triggered abilities.
When combat happens, the attacker's triggered abilities will happen, then the defender's triggered abilities, then the attacker will attack the defender. Lots of triggered abilities interact with the attacker or defender, and some may cause it to die (it dies immediately, not at the end of combat). Subsequent triggered abilities might crash if they depend on the attacker or defender existing.
In my implementation, these triggered abilities are functions with signature
Having the type system enforce null checks on uses of other_card (and also on all uses of cards from other zones, like the opponent's battlefield or the player's hand) would have saved me a lot of trouble. As it is, I use a fuzzer to play lots of games between AI players and look for errors, but most of the errors I look for could have been caught by a compiler instead.
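The hazard described above can be sketched briefly. This is a hypothetical Python illustration (the actual game is not shown in the thread, and all names here are made up): one triggered ability kills the attacker mid-combat, so a later trigger must guard against the card no longer existing.

```python
class Card:
    def __init__(self, name, health):
        self.name, self.health, self.alive = name, health, True

def trigger_burn(attacker, defender):
    # This trigger can kill the attacker before combat finishes resolving.
    attacker.health -= 3
    if attacker.health <= 0:
        attacker.alive = False

def trigger_taunt(attacker, defender):
    # Without this guard, the code below would act on a dead card --
    # exactly the class of bug a null-aware type system could force us to handle.
    if not attacker.alive:
        return
    attacker.health -= 1

def resolve_combat(attacker, defender, triggers):
    # Attacker's triggers fire in order; each may invalidate assumptions
    # the next one makes.
    for trigger in triggers:
        trigger(attacker, defender)
```

A fuzzer finds the missing-guard crashes only when it happens to hit the dying-attacker path; a compiler enforcing the check would flag every unguarded use.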
The main advantage is that anything you can turn into a type error will be reported at compilation time, much earlier in the development cycle. This is not just a matter of safety but also helps in terms of refactoring and testing.
This includes things like typos (wrong method names, etc.), but where types really shine is refactoring. If you want to add or remove a parameter from a function, or change a string field to an integer, then you can use guidance from the type checker to be sure that you didn't forget to update anything. It's a bit like adding bumpers to the sides of the bowling lane - it lets you be more reckless, updating things until the compiler shuts up without needing to carefully go through each line.
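The refactoring hazard is easy to sketch. In this made-up example, a parameter is added to a function and a stale call site is missed; a static type checker would flag the call before the program ever runs, whereas dynamically it only surfaces when that path executes.

```python
def greet(name: str, greeting: str) -> str:
    # 'greeting' was added in a refactor.
    return f"{greeting}, {name}!"

def stale_caller():
    # Written before the refactor; nobody updated it.
    return greet("world")

# Dynamically, the mistake only surfaces when this path actually runs:
try:
    stale_caller()
    caught = False
except TypeError:
    caught = True
```

A checker like mypy reports the bad call at every stale site at once, which is what makes the "update until the compiler shuts up" workflow possible.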
---
Also, don't forget that gradual typing solutions such as Typed Lua are all about letting you program dynamically if you want. The idea is that you would only convert to the typed dialect in the parts of your program where that would suit you.
Generally, statically typed languages are less bug prone than dynamically typed languages. Dynamically typed languages are OK for smaller projects but as projects grow in size, you have to write a lot of tests to be sure that everything works.
Also, projects written in statically typed languages are easier to read and navigate for humans and it's also easier to write static analysis tools for them.
> Dynamically typed languages are OK for smaller projects but as projects grow in size, you have to write a lot of tests to be sure that everything works.
Or build your large systems as compositions of smaller systems. If dynamically typed languages are really okay for smaller systems, then avoiding the bad architecture of large, tightly-coupled systems in favor of loosely coupled compositions of smaller subsystems means that it is also okay for large systems.
> Also, projects written in statically typed languages are easier to read and navigate for humans
This is not my experience.
> and it's also easier to write static analysis tools for them.
Well, yes: since a statically typed language requires a static analysis tool to exist (the compiler must include one), it's not at all surprising that the structure of statically typed languages is always designed specifically to serve static analysis tools.
IME, that's why they've historically been harder to read and navigate for humans, though some exceptional modern statically typed languages have largely closed that gap (but not reversed it, IMO.)
>If dynamically typed languages are really okay for smaller systems, then avoiding bad architecture of large tightly-coupled systems in favor of loosely coupled compositions of smaller subsystems means that it is also okay for large systems.
It doesn't mean that at all. You might as well say "if shoes are good enough for going to the corner store, then multiple pairs of shoes are good enough for going to the moon". The notion that large systems are no more complex than small systems if you simply make the large system out of small systems is not supported by anything I can find.
>IME, that's why they've historically been harder to read and navigate for humans
That doesn't make any sense at all. How is having information written down instead of having to constantly remember and/or deduce it an impediment to reading?
> It doesn't mean that at all. You might as well say "if shoes are good enough for going to the corner store, then multiple pairs of shoes are good enough for going to the moon".
A large system can be decomposed into a set of loosely-coupled smaller systems (indeed, that's often a preferred architecture for a variety of reasons independent of whether the language is static or dynamic).
A trip to the moon cannot be decomposed into a series of walks equivalent to a walk to the corner store.
Therefore, no, your analogy is not valid.
> That doesn't make any sense at all. How is having information written down instead of having to constantly remember and/or deduce it an impediment to reading?
Visual (and mental, really) noise, mostly -- there is a reason why clear, effective, easy-to-read writing is often not writing that avoids all potential ambiguities and defines as much as possible to avoid people having to deduce things. Making everything explicit is generally in tension with readability, not an aid to it.
If there are consequential performance gains to be had, then even better. An interesting case is Dart. The implementers have a strong dynamic typing bias and wanted to keep Dart as dynamically typed as possible. Their position was that they didn't need any static typing for speed. They have since relented.
You realize that you just planted a good solid troll magnet of the "I never make any type errors" variety. Some people get annoyed by the notion that you can enlist the compiler (using succinct syntax) to write and validate tests that they ought to be writing. They would rather write the tedious tests themselves.
@spankalee They had a blog post along those lines, sadly not bookmarked, but you might be able to find it.
What makes you say that Dart "relented" on not using type annotations in the runtime?
The VM basically throws the type annotations away. In production mode, the type annotations can be completely wrong and your program will still function, and still be just as fast.
There isn't really much evidence of this out there, other than groupthink, in my opinion.
What makes a language 'okay for smaller projects' but suddenly unsuitable when it gets larger? Having a minimal set of types doesn't necessarily mean you're going to have 'more bugs as the codebase grows' - the only place it really makes sense to say that dynamically-typed languages 'break' is when they don't have the types that the application needs.
In Lua's case, the only thing I can think of it missing is native integer support, and that's really it. Everything else: you can get there with nil, number, string, function, CFunction, userdata, and table. (But CFunction and userdata should be enough for anyone! :)
> What makes a language 'okay for smaller projects' but suddenly unsuitable when it gets larger
As projects get larger the benefits of tooling (automated testing, automated type checking, efficient compilation, etc) become more relevant and the costs of using the tools become less relevant compared to the costs of writing the program itself.
> the only place it really makes sense to say that dynamically-typed languages 'break' is when they don't have the types that the application needs
Actually, I don't think any dynamic languages used in practice actually do this right now. Most dynamic languages only check base types (numbers, null, "object") but fail to check for higher order types (functions and objects). For example if you code
MyClass x = new OtherClass();
x.foo();
wouldn't it be better to get an error on the first line than on the second? Especially when you consider that the method call can be delayed and only happen much later, in some other function, without the truly guilty line even showing up in the stack trace.
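The "guilty line missing from the stack trace" problem is easy to demonstrate. In this illustrative sketch (all names made up), the real mistake is where a non-callable is passed in, but the error only appears much later, at the eventual call site.

```python
def make_handler(callback):
    # No check that 'callback' is actually callable happens here.
    return lambda event: callback(event)

handler = make_handler("not a function")  # <- the real bug is on this line

# ...much later, in unrelated code:
try:
    handler("click")  # TypeError raised here, far from the guilty line
    failed_late = False
except TypeError:
    failed_late = True
```

A checker of higher-order types would reject the `make_handler("not a function")` line itself, which is the line you actually need to fix.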
> In Lua's case, the only thing I can think of it missing is native floating-point support, and that's really it. Everything else: you can get there with nil, number, string, function, CFunction, userdata, and table. (But CFunction and userdata should be enough for anyone! :)
I think you're missing the point; static typing doesn't let you write more programs, it specifically lets you write fewer programs, by ruling out ill-typed programs. If I see a Haskell variable of type "Int -> String" then I can be quite confident that it's a function turning Ints into Strings; in, say, Python I can't be confident of much without analysing the entire codebase.
If nothing else, it kind of follows from Curry-Howard correspondence.
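The "can't be confident of much" point can be shown with a toy Python function (names invented for illustration): nothing constrains what callers pass, so its behavior, and even its return type, depends on inspecting every call site. Compare a Haskell value of type "Int -> String", which can only be a function turning Ints into Strings.

```python
def describe(x):
    # Nothing in the signature constrains x; a reader must trace all callers
    # to know what this returns.
    if isinstance(x, int):
        return str(x)   # int -> str
    return len(x)       # str/list/... -> int
```

The same name yields values of different types depending on the argument, which is exactly the freedom a static type rules out.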
What makes it okay? This Python program will not throw an exception when run
def f(x):
    return x

if False:
    f(1, 2)
even though the types are wrong. Are you 100% certain that you will never make a bug like this, even when there are 10^13 conditions and the call stack is deeper than the Mariana Trench? Note that this is not the only type of bug that static typing prevents.
> What makes a language 'okay for smaller projects' but suddenly unsuitable when it gets larger?
One and one makes two. Two and two makes four. Four multiplied by eighteen is seventy-two.
Compare to:
1+1=2
2+2=4
4*18=72
Notation, aka language, has scalability as a feature, whether the designers intended it or not. Dynamically typed languages are nice, I love them for prototyping, and some of them, I think, are suitable for large scale applications. But not all of them. The notations made available in some languages just make large scale development easier or harder (for various reasons). Consider using maps/JSON objects to produce data constructs; this is a sort of duck typing unless you add some sort of schema checker to it, at which point it's either a runtime error or you're running your code through a static analysis tool. Why not use an actual type system for this that the compiler checks at compile time?
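The maps-vs-types point can be sketched in a few lines (the record and its fields are made up for illustration): with a plain dict, a typo'd key is silently absorbed, while a declared record type gives a checker something to reject before the program runs.

```python
from dataclasses import dataclass

# Duck-typed record: a misspelled key just yields None, no error anywhere.
user = {"name": "ada", "email": "ada@example.com"}
typo_value = user.get("emial")  # typo: silently absorbed

# Declared record: the same typo becomes a checkable error.
@dataclass
class User:
    name: str
    email: str

u = User(name="ada", email="ada@example.com")
# 'u.emial' would be an AttributeError at runtime, and a static checker
# flags it without running the program at all.
```

The schema-checker route catches the dict typo only at runtime (or via a separate analysis pass), which is the author's point: at that stage you have rebuilt a type system by hand.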
I feel like a broken record this past week.
Dynamic languages are good, I can't imagine doing the sort of exploratory/prototyping stuff I've done in them with typical statically typed languages. I wouldn't even object to using Erlang for a large scale project (actually, I'd love to do this); it's got features in both syntax (bit syntax, I love it) and semantics (concurrency) that make it ideal for certain problem domains. It's got Dialyzer, which gets back a lot of the static analysis that statically typed languages offer.

On the other hand, JavaScript offers duck typing, weak typing, dynamic typing; it's fine for certain applications (and essential for anything living in a web browser these days), but its type system holds it back for large scale work. Experienced programmers may be able to get past it with relatively few errors, but it's still going to suffer huge performance issues because there's only so much that can be known about the values passing through a block of code. Does `a + b` mean we're adding two numbers? A number and a string? Two strings? The result changes depending on those circumstances, and the interpreter/compiler just doesn't know enough to optimize it significantly. Similarly, it doesn't know that one is an error right away; the actual error may start here but only appear in some function that's a child/parent/cousin in the call tree, obfuscating the cause and creating a great deal of work for the developer and tester.
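The `a + b` ambiguity is sketched here in Python (JavaScript behaves analogously, with implicit coercions on top): one expression means entirely different operations depending on runtime types, so neither the compiler nor a reader can pin down what a block of code does up front.

```python
def add(a, b):
    # The meaning of '+' is decided at runtime by the operand types:
    # numeric addition, string concatenation, list concatenation, ...
    return a + b
```

Calling `add(1, 2)`, `add("1", "2")`, and `add([1], [2])` exercises three different operations through the same line of code, which is exactly what blocks aggressive optimization and early error reporting.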
As in JavaScript, all numbers in Lua are floating point numbers and all number operations are floating point operations. Integers can be represented accurately up to 2^53.
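The 2^53 cutoff can be checked directly; Python's floats are the same IEEE 754 doubles that Lua uses for its numbers:

```python
# Doubles represent every integer exactly up to 2**53...
exact = float(2**53) == 2**53

# ...but beyond that, consecutive integers collide: 2**53 + 1 rounds
# back down to 2**53 when stored as a double.
collides = float(2**53 + 1) == float(2**53)
```

Past 2^53 the gap between representable doubles exceeds 1, so integer arithmetic silently loses precision.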
Not a rhetorical question: how many flight-control codebases are written in a dynamically typed language? Or, for that matter, life support systems deployed in ICUs? Really curious.
About the dynamic vs Static typing debate, I think it progresses firmly along Upton Sinclair'ish lines,
"It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"
I don't know the answer, sadly. My sardonic response is to ask how many are written in what would be called a modern statically typed language.
That is, I expect the amount of tooling around them is ridiculously high. Probably done with a model-based development system. Probably not, however, what is typically seen as static typing in the modern landscape (i.e. Haskell).
Honestly, I wouldn't be surprised to find some lisp in there, just because. :)
> what would be called a modern statically typed system?
Now we are splitting hairs and looking for the one true Scotsman. You asked for quantitative evidence, but the problem is that you have already made up your mind (perhaps only subconsciously) and you will find a way to look away from the evidence or logic (which you term just an appeal). Look up birthers, for instance.
I for one haven't heard of any flight control or life support software written in a dynamically typed language, whereas I have heard of plenty written in what would qualify as a statically typed rather than a dynamically typed language. Ada in particular comes to mind. I don't discount that there might exist some written in dynamic languages. Let's agree that they are quite hard to find. I won't be surprised by prototypes written in dynamic languages, though.
> I wouldn't be surprised to find some lisp in there, [...] I'm open to being proven wrong.
Now that would be the Russell's teapot fallacy. I am not ruling it out, but for such assertions, the burden, I think, lies on the one making the assertion. You handed out an epistemic sucker punch there. The proof that you desire requires presenting the entire set of Lisp programs, or the entire set of flight control programs; both are quite ridiculous. BTW, I do love Lisp a whole freaking lot.
Curious about this: say you have to be (god forbid) irradiated. There are two pieces of software written by two groups: one uses a statically typed language, the other a dynamically typed one, and this is all the information you have. Which one would you pick? There is no right or wrong answer.
I will pick the one written by the group that uses a statically typed language, because of their cultural obsession with correctness. That is just a Bayesian prior; there are exceptions, of course. SQLite is indeed afflicted by the same obsession.
EDIT: @taeric
> If they are in a static language, then this should be an easy debate for you. Merely give me some evidence and we are done.
Ah! Then look up Ada's résumé.
> I'd pick the one that has the better track record and has been proven with more tests.
You are dodging the (silly) question, sir :) I said that's the only thing you know. Anyway, it wasn't an important question, and here's wishing that you never need to get irradiated by any such thing, seriously.
:( Why are we replying without going down the chain? I almost didn't even see you put more of a response to me.
Regardless, I'm familiar with Ada. I'm also familiar with some fun disasters involving software written in Ada.[1] Now, do I blame that it was in Ada? No. However, the attitude you are displaying of "if all you know is it was in Ada, then it is safer than the alternatives" seems to be the exact problem that led to that disaster.
If the only thing I know of two irradiation devices is that one was statically typed and the other wasn't, I'd likely pass on both. Or I'd want to know how much radiation each is capable of outputting. Consider: x-ray machines far predate what we recognize as programmable computers. And much more goes into the safety of these devices than just the language used.
> :( Why are we replying without going down the chain?
I dislike deeply nested, indentation-cramped, tangential threads, so sometimes if I don't have anything that important to say I inline it. More so when everyone has left the building.
> I'm also familiar with some fun disasters using software by Ada.
I don't think we were ever discussing whether using Ada or some other statically type-checked language immunizes a project against all other peripheral errors. That would be a supremely ridiculous and stupid claim to make.
Typechecking proves either that the code is free of type errors or that the typechecker has a bug. In the latter case, you fix it and the benefit percolates to every piece of software verified by it. Testing proves only that the code passes those specific test cases, and only if the programmer wrote them at all.
BTW the very first line says the fault had nothing to do with Ada. The source of the error was beyond the scope of typechecking.
To backtrack, our discussion was on whether there are notable examples of software written in dynamically typed languages where the cost of error is high.
Runtime error while the space shuttle descends back to Earth? No thank you. On a fly-by-wire unit, when the pilot is breaking out of a loop? Again, no thank you. There are times when, no pun intended, you just cannot bail out and wash your hands of the situation without terrible consequences. In such cases I would rather have the error surfaced in an environment where I can control the consequences, meaning early, and as exhaustively as possible.
Most importantly typechecking does not preclude/replace testing, it complements it.
Not everything can be statically verified or insured against, but that does not mean that, just because we cannot verify everything statically, we should verify nothing statically.
BTW, typical x-ray machines do not have the power to kill you in a single exposure; radiotherapy units absolutely can. Example: the Therac-25.
> The accidents occurred when the high-power electron beam was activated instead of the intended low-power beam, and without the beam spreader plate rotated into place. Previous models had hardware interlocks in place to prevent this, but Therac-25 had removed them, depending instead on software interlocks for safety. The software interlock could fail due to a race condition.
I would rather have it proved (exhaustively, rather than with typical tests, which would be anecdotal) that such errors cannot happen.
Well, apologies for the nesting. :( I agree it is ugly, but it is easier to spot changes. (Also... I went to sleep. Sorry. And I was tired; my tone has been terrible in some of these messages. Apologies for that as well, to all involved.)
I am aware, and agree with the opinion, that Ada had nothing to do with the error. However, I don't think many would disagree that the fact that the code was in Ada gave the team that decided to reuse it more faith that it was "safely" reusable than had it been written in another language.
This is still not the fault of Ada, but it was the exact reasoning you are using to say I should prefer the statically typed solution over another, if that is all I know. My point is if that is all you know, you don't know enough to make a decision.
So, in my mind, our discussion was not over whether there are notable examples of software written in dynamically typed languages where the cost of error is high. The discussion is whether there have been studies showing that statically typed languages produce less bug prone software.
So, to your own example, what was Therac written in? Because, honestly, right off I don't know. I would not be surprised to know it was in a statically typed language. And note, I would not use that as an example of where typed safe languages are failures.
And then, to wrap back around to your question: it doesn't take a lot of googling to find that NASA used to use a Symbolics Lisp machine in their work. http://stackoverflow.com/a/563378/392812
> that the fact the code was in Ada gave the team that decided to reuse it more faith that it was "safely" reusable than had it been written in another language.
And you are drawing ridiculous analogies from the Ariane example. Ada had no bearing on the failure; they plugged in a controller for a different vehicle entirely.
The question is about relative confidence between a statically typed and a runtime typed software. The fact remains that given a fixed/finite budget of testing, wise people will not even think of deploying a runtime typed system for such tasks. These tasks cannot afford runtime errors, so it is imperative that due diligence be made to prove that they cannot happen.
(You cannot send several manned missions to the moon just to test.) I am not aware of runtime-typed systems that afford such proofs before running, and if some do, they are statically proven systems to begin with.
Revisit my points about runtime errors on a flight controller. Or, to be less fancy, a runtime error in trading software: "oops, sent the wrong million dollar transaction request and bailed out because of a runtime error." Has happened, and has run companies into the ground.
> what was Therac written in?
Given its age, I would assume it was some fairly low-level language. Whatever it was, it did not prove that such races cannot happen; in other words, it was open to runtime error, which is exactly what we want to eliminate as far as possible. "We will run it a few times and see what gives" is not a tenable strategy for many important tasks.
> It doesn't take a lot of googling to find that Nasa used to use a Symbolics Lisp machine
You are being slightly disingenuous here; I was talking about deployed flight controllers that actually control the thing when it's in flight.
As I have said, it's like arguing with birthers: you have made up your mind, and no matter what I say you will try to avoid the logic of eliminating costly runtime errors.
What the shit? Your question: "You are given two systems, one implemented in language class X, one implemented in language class Y, pick the one you trust." Me: "Not enough information to make a valid answer."
To be clear, I think mistakes have been made with all paradigms of programming. In many cases, the mistakes were made completely outside the realm of the program. Like the Ariane example. I only mentioned it as a "failure" of static typing because of your bloody question.
Seriously, it is your hypothetical: "Here is a rocket where the control system was implemented in Ada. Do you trust it more than a competing one that was implemented in machine language?" That is essentially the question you gave me. I turned it around precisely to show that just knowing the type of language that something was implemented in is bloody worthless.
I have made up my mind: to remain skeptical of claims that are not backed by empirical evidence. I'm optimistic enough to think that static typing is actually a good thing. The arguments used to support it lack any backing in verification. (Which I find ironic, since they are supposed to be about verification.)
Ah, I see what happened. We somehow managed to talk past each other, go out of sync, and end up responding to something that was not on the other's mind.
If you thought I am not in agreement with this comment of yours, then let me assure you that I am (with certain differences).
My point remains that your Ariane example is irrelevant: it shows that statically typed systems can still fail when there are failure modes outside the purview of the typechecker. That is obvious and was never in contest. What was in contest, however, was the relative safety of statically typed vs. dynamically typed. Your example has no bearing on clearing that question up. I am claiming superiority of the static paradigm because no one has been forthcoming in putting their money and lives in the control of a dynamically typed controller. You are just refusing to concede this because it affects your view in some way, and I don't expect to see any change in that behavior.
Let me put forward my main claim (and ignore the silly hypothetical ones): dynamically typed/checked languages are unsuitable and dangerous for cases where runtime errors are egregiously costly, because they cannot prove that such eventualities will not happen before they actually do. Because of the risks involved, such systems are sensibly not implemented in dynamically typed/checked languages. Statically typed languages cannot immunize against all possible failures, yet they are better than dynamically typed languages in this scenario because they can eliminate a large class of runtime errors that dynamically typed languages simply cannot (if they can, they are statically typed by definition), whereas testing applies equally to both.
BTW a dynamically typed language with exhaustive static analysis is a statically typechecked language.
Makes sense. Though, I disagree with your last assertion. That is, no matter how exhaustively a solution has been explored, if the implementation language is a dynamic one, then it is implemented in a dynamic language.
As an example, take any of the MIX algorithms in Knuth's books. They are more thoroughly documented and explained than any other software I have ever seen. They still aren't statically typed. (Simply put, if someone makes a mistake in transcribing or "cleaning" one up, it will not be caught by a type checker.)
And, yes, I do think we ultimately agree. The heart of my question is just wanting evidence. I even agree with the premise that "in the absence of other evidence" I would have more faith in a statically typed solution than otherwise. I just wouldn't have that much faith without more. :)
I'm not trying to play the Scotsman's game. Though I can confess that that is the direction I am somewhat steering this; not my main goal. My aim is more against the claims of the parent post. Note that there is a big difference between static tooling and statically typed languages. I have high regard for both. And I've seen more evidence of the former in older, less statically typed languages than I have of the latter.
So, part of my point is I haven't ever "heard" what language those systems are written in, period. It isn't that I have heard they are in dynamic or static languages. I flat out never hear. If they are in a static language, then this should be an easy debate for you. Merely give me some evidence and we are done.
I do know that the programs in older languages I have seen are typically not in what one would call a statically typed language. There is plenty of static tooling on them, but to pretend that there is statically "typed" tooling around any software that ran something such as the space program feels like it is reaching. Heavily.
So, to your silly question. I'd pick the one that has the better track record and has been proven with more tests. I don't care if they are dynamic tests, statically typed assertions, statically analyzed assertions on a dynamically typed codebase, whatever. There are too many tools to care. Because I can hazard a guess that all of the things that have irradiated me in my years have not been by the strongest statically typed languages out there. :)
Empirical evidence. Studies indicating that software developed in statically-typed languages has fewer defects. Research into the speed of development and the ability to accommodate major changes in requirements and functionality would be useful as well.
Last I looked, there isn't a great deal of literature on this, and what there is is fairly old (dealing with '80s-'90s languages) and inconclusive.
No we weren't. We were talking about "being less bug prone" and "easier to read."
Though, I'm not entirely sure that really changes much. :(
Look, I'm not against static typing. I confess to being somewhat in love with lisp at the moment. And, I'm trying to learn MIXAL for some unholy reason. At the same time, I try to type my systems as well as I can. I am far from convinced that typing will be the way to go.
Especially if you consider auxiliary tools. Having used Coverity some, it is downright impressive the stability you can bring to a C codebase with proper discipline and the appropriate tooling. Compare that to something such as a Scala codebase, where right now your only recourse is the compiler.
That is, static analysis does not begin and end with the compiler. Sadly, in moving to "newer" statically typed languages, you throw out many of the tools that currently exist for the older languages.
And this is not just for "correctness". Having finally added "-march=native" to my flags for a build of software, I'm literally amazed at how well optimized a compile can be versus just using "-O3". Optimizations in the modern world are quite amazing without necessarily needing "better types."
Isn't that a touch of a diversion? Given the large amount of code written in C, it makes sense that C is the most heavily targeted area. This list[1] shows that there are plenty of options to go around. Even Perl gets some love. :)
You do realize that all those tools for dynamic languages are code formatters, right? That they won't catch any real bugs? Why do you think they can't catch real bugs? Hint: it has to do with static typing, or the lack thereof.
That is like claiming a tool like lint can't be used to catch bugs. And... not all of them are formatters. Did you read all of the tools in the "mixed languages" section? Quite a few of the really good ones can find mistakes in dynamic languages.
Shit, this is no different than the article making the rounds talking about the many "gotchas" in bash scripting. Just making a tool that spots those is likely to catch and help correct errors.
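To make the "lint can catch real bugs" point concrete, here is a minimal, stdlib-only sketch of the classic "undefined name" check that tools like pyflakes perform on dynamic code. The helper name and its (deliberate) simplifications are mine; real linters track scopes and assignment order far more carefully.

```python
# Minimal sketch of a lint-style check: flag names that are read but
# never assigned anywhere in a module -- the classic "undefined name"
# warning. No type information is needed to catch this class of bug.

import ast
import builtins

def undefined_names(source: str) -> set[str]:
    tree = ast.parse(source)
    assigned = set(dir(builtins))   # treat builtins as always defined
    loaded = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                loaded.add(node.id)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            assigned.add(node.name)
            assigned.update(a.arg for a in node.args.args)
        elif isinstance(node, ast.Import):
            assigned.update(alias.asname or alias.name for alias in node.names)
    return loaded - assigned

buggy = "total = 1\nprint(totl + 1)\n"   # typo: totl
print(undefined_names(buggy))            # {'totl'}
```

This catches the bug without running the program, which is the point: "no static types" does not mean "no static analysis."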
The presentation was interesting, but if there were any studies cited I didn't see them. I'm looking for deductive research into the proposal that using a modern statically-typed language results in a better outcome than a modern dynamically-typed one.
The related research I mentioned is useful because if development in a given language improves correctness but also increases development time by an order of magnitude it might still be more effective to develop software in the "worse" language and spend the additional available time mitigating the risk or building additional features.
No, googling those results doesn't really work. :) You get a ton of dogma with a lot of logical arguments. All of which are very appealing. Appealing arguments worry me, though.
What I have not seen is evidence that any of your claims are true. I have seen easy-to-read programs in both dynamic and static languages. So far, I've seen very little correlation between the two as to which is desirable. The static languages are typically a bear on some algorithms because you have to provide so much more to the compiler for it to trust you. This does feel like it would lead to fewer bugs, but it works against the "readable" and possibly the "more productive" ideas.
How do you determine the return type of a C function? You look at the function prototype. How do you determine the return type of a Python function? You have to read the whole thing. If you are working with large code bases, reading the whole thing is not really possible.
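The prototype-vs-body point can be shown in a tiny Python sketch (the `mean` functions are hypothetical examples of mine): in plain Python the signature says nothing about what comes back, while optional annotations (PEP 484) put that information back where a reader or a tool can see it without reading the body.

```python
# In C the prototype declares the return type up front, e.g.
#     double mean(const double *xs, size_t n);
# In plain Python the signature tells you nothing:

def mean(xs):
    if not xs:
        return None              # caller can't see this without reading the body
    return sum(xs) / len(xs)     # ...or this

# Optional type annotations restore the C-like property:

from typing import Optional

def mean_annotated(xs: list[float]) -> Optional[float]:
    if not xs:
        return None
    return sum(xs) / len(xs)

print(mean_annotated([1.0, 2.0, 3.0]))   # 2.0
```

The annotated version is machine-checkable too, which is roughly the middle ground between the two camps in this thread.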
You are just trying to make an appealing argument. This literally adds nothing to the debate. A simple survey of open source software would be more convincing. And even that wouldn't be definitive.
It is an appeal to how logical the example seems. That is, it sounds like it would be applicable "at large." However, evidence seems to show that, "at large," it is mostly a non sequitur.
That is, you get better systems the more programmers you have with a good grasp of the whole system. Ironically, there are better arguments against needing to know the specifics of the system -- that is, the individual function level -- than against knowing the high-level picture. Consider: Linus knows the Linux kernel better than I can really comprehend. I doubt he knows every function's return value.
Ruby isn't fast enough, and it would chew battery on a phone and make some apps impossible to do correctly. Ruby is a great language, it's my favourite hands down, but all that sugar comes at a cost. Even in Ruby, we often use C libs to do the heavy lifting; they're just wrapped in Ruby using the C extension framework.
> RubyMotion for Android features a completely new Ruby runtime specifically designed and implemented for Android development. This is a new implementation of the Ruby language; it does not share code with RubyMotion for the Objective-C runtime. We are using the RubySpec project to make sure the runtime behaves as expected.
> ...
> We feature an LLVM-based static compiler that will transform Ruby source files into ARM machine code. The generated machine code contains functions that conform to JNI so that they can be inserted into the Java runtime as is.
> ...
> RubyMotion Android apps are packaged as .apk archives, exactly like Java-written apps. They weigh about 500KB by default and start as fast as Java-written apps.
It's a bold claim, but I haven't seen anything to back it up. I haven't tried every app made with the framework, but I haven't seen many heavy apps that would require the power. Every game seems to be a puzzler of some sort. Nothing really tests whether it is fast enough for gaming or time-critical operations.
That doesn't actually dispute either of the claims you're saying it does. Everything you just quoted can be true and Ruby could still be too slow compared to Objective-C and chew up too much battery.
Scala, yes, but not C++ at all. Indirectly from C#, perhaps (and there is a fair bit of that), but there is very little that borrows from C++ at the language level.
Oh, I would love to see a write-up comparing it with Scala idioms. As someone who has done C# for a while, I find a lot of idioms translate very well to Swift.
The if let x = whatever {} syntax in Swift is damn brilliant after looking at a ton of C# code with the as check.
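For anyone who hasn't seen it, a minimal sketch of the contrast (the `parse` helper is a hypothetical example of mine): C#'s idiom is `var x = obj as Foo; if (x != null) { ... }`, while Swift folds the cast/unwrap and the nil check into a single construct.

```swift
// Hypothetical helper: returns nil when the string isn't a number.
func parse(_ s: String) -> Int? {
    return Int(s)
}

// Swift's optional binding: the check and the unwrap are one step,
// and `n` is a non-optional Int inside the braces.
if let n = parse("42") {
    print("parsed \(n)")     // prints "parsed 42"
} else {
    print("not a number")
}
```

Compared to the C# `as` pattern, there is no window where you can accidentally use the still-nullable variable.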
That was a calculated decision: they assumed they would lose more in the suit than by opening up their patents. This isn't the same at all, though it is a direct response to the question.