from Hacker News

Diminishing returns of static typing

by robgering on 10/2/17, 3:09 PM with 617 comments

  • by alkonaut on 10/2/17, 7:32 PM

    There are 3 main areas of interest in the discussion of benefits of static vs dynamic typing.

    - Quality (How many bugs)

    - Dev time (How fast to develop)

    - Maintainability (how easy to maintain and adapt for years, by others than the authors)

    The argument is often that there is no formal evidence for static typing one way or the other. Proponents of dynamic typing often argue that Quality is not demonstrably worse, while dev time is shorter. Few of these formal studies however look at software in the longer perspective (10-20 years). They look at simple defect rates and development hours.

    So too much focus is spent on the first two (which might not even be two separate items, as quality is certainly related to development speed and time to ship). But in my experience those two factors aren't even important compared to the third. For any code base that isn't a throwaway like a one-off script, one facing say 10 or 20 years of maintenance, the ability to maintain/change/refactor/adapt the code far outweighs the other factors. My own experience says it's much (much) easier to make quick and large scale refactorings in static code bases than dynamic ones. I doubt there will ever be any formal evidence of this, because you can't make good experiments with those time frames.

  • by agentultra on 10/2/17, 3:32 PM

    I think what's often missing from these arguments is that statically checking (or inferring) homogenous lists is probably one of the most superficial uses of the type system in Haskell (and indeed not the interesting feature most power-users of Haskell are interested in as far as I can tell).

    What is interesting is using the type system to specify invariants about data structures and functions at the type level before they are implemented. This has two effects:

    The developer is encouraged to think of the invariants before trying to prove that their implementation satisfies them. This approach to software development asks the programmer to consider side-effects, error cases, and data transformations before committing to writing an implementation. Writing the implementation proves the invariant if the program type checks.

    (Of course Haskell's type system in its lowest-common denominator form is simply typed but with extensions it can be made to be dependently typed).

    The second interesting property is that, given a sufficiently expressive type system (which means Haskell with a plethora of extensions... or just Idris/Lean/Agda), it is possible to encode invariants about complex data structures at the type level. I'm not talking about enforcing homogenous lists of record types. I'm talking about ensuring that Red-Black Trees are properly balanced. This gets much more interesting when embedding DSLs into such a programming language that compile down to more "unsafe" languages.
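    A dependently typed language can carry such invariants in the type itself; a rough runtime analogue of the underlying "smart constructor" idea, sketched in Python (the `NonEmpty` type here is my own illustration, not from Haskell or Idris):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

@dataclass(frozen=True)
class NonEmpty(Generic[T]):
    # Invariant: _items always holds at least one element, enforced by of().
    _items: tuple

    @staticmethod
    def of(first: T, *rest: T) -> "NonEmpty[T]":
        # Non-emptiness is guaranteed by the signature itself:
        # the caller must supply `first`.
        return NonEmpty((first, *rest))

    def head(self) -> T:
        return self._items[0]  # total: the list cannot be empty

print(NonEmpty.of(1, 2, 3).head())  # 1
```

    The difference is that here the invariant is checked by construction at runtime, whereas a dependent type system would discharge it at compile time.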

  • by catpolice on 10/2/17, 5:02 PM

    Static typing prevents bugs in code to the degree that the programmer can correctly encode the desired behavior of the program into the type system. Relatively little behavior can be encoded in inexpressive type systems, so there's a lot of room for bugs that have nothing to do with types. A lot more behavior (e.g. the sorts of invariants mentioned in agentultra's top level comment) can be encoded in a more expressive type system, but you then have the challenge of encoding it /correctly/. A lot of that kind of thinking is the same as the kind of thinking you'd have to do writing in a dynamic language, but you get more assurances when your type system gives you feedback about whether you're thinking about the problem right.

    For my money, I work in a primarily dynamic language and I already have a set of practices that usually prevent relatively simple type mismatches so I very rarely see bugs slip into production that involve type mismatches that would be caught by a Go-level type system, and just that level of type information would add a lot of overhead to my code.

    But if I were already using types, a more expressive system could probably catch a lot of invariance issues. So I feel like the sweet spot graph is more bimodal for me: the initial cost of switching to a basic static type system wouldn't buy me a lot in terms of effort-to-caught-bugs-ratio, but there's a kind of longer term payout that might make it worth it as the type system becomes more expressive.

  • by simon_o on 10/2/17, 3:34 PM

    The biggest issue with claims like "there are only diminishing results when using a type system better than the one provided in my blub language" is that it assumes people keep writing the same style of code, regardless of the assurances a better type system gives you.

    "I don't see the benefit of typed languages if I keep writing code as if it was PHP/JavaScript/Go" ... OF COURSE YOU DON'T!

    This misses most of the benefits: the main benefits of a better type system aren't realized by writing the same code; they are realized by writing code that leverages the new possibilities.

    Another benefit of static typing is that it applies to other peoples' code and libraries, not only your own.

    Being able to look at the signatures and be certain about what some function _can't_ do is a benefit that untyped languages lack.
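    A hypothetical sketch of that idea using Python type hints (checked by a tool like mypy rather than at runtime): a parametric signature already rules out whole classes of behavior.

```python
from typing import TypeVar

T = TypeVar("T")

def first(xs: list[T]) -> T:
    # The signature alone tells you what this function *can't* do:
    # with no way to conjure a T out of thin air, it can only
    # return an element it was given.
    return xs[0]

print(first([10, 20, 30]))  # 10
```

    In a language with full parametricity (Haskell, ML) this guarantee is airtight; in Python it holds only as far as the type checker is trusted.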

    I think the failure of "optional" typing in Clojure is a very educational example in this regard.

    The failure of newer languages to retrofit nullability information onto Java is another one.

  • by flavio81 on 10/2/17, 5:30 PM

    What amuses me in all "static typing versus..." discussions is that it is usually a comparison between two camps:

    Camp A: Languages with mediocre static typing facilities, for example:

         -- C (weakly typed)
         -- C++ (weakly typed in parts, plus over-complicated
            type features) 
         -- TypeScript (the runtime is weakly typed, 
            because it's Javascript all the way down)
    
    Camp B: Languages with mediocre dynamic typing facilities, for example:

         -- Javascript (weakly typed) 
         -- PHP 4/5 (weakly typed) 
     -- Python and Ruby (no powerful macro system to 
        help you keep complexity well under control 
        or take full advantage of dynamism)
    
    
    
    Both camps are not the best examples of static or dynamic typing. A good comparison would be between:

    Camp C: Languages with very good static typing facilities, for example:

         -- Haskell
         -- ML
         -- F#
    
    Camp D: Languages with very good dynamic typing facilities, for example:

         -- Common Lisp
         -- Clojure
         -- Scheme/Racket
         -- Julia
         -- Smalltalk
    
     
    I think that as long as you stay in camp (A) or (B), you'll not be entirely satisfied, and you will get criticism from the other camp.
  • by fny on 10/2/17, 4:47 PM

    There's one huge benefit to static typing people often forget: self documentation.

    While, yes, top-quality dynamic code will have documentation and test cases to make up for this deficiency, it's often still not good enough for me to get my answer without spelunking the source or StackOverflow.

    I feel like I learned this the hard way over the years after having to deal with my own code. Without types, I spend nearly twice as long to familiarize myself with whatever atrocity I committed.

  • by mpartel on 10/2/17, 4:16 PM

    Having programmed in languages ranging from Ruby to Coq, for web apps and games, I feel the sweet spot is somewhere in the neighborhood of Java/C#, i.e. include generics but maybe leave out stuff like higher kinds and super-advanced type inference (and null!).

    The main use case of generics, making collections and datastructures convenient and readable, is more than enough to justify the feature in my view, since virtually all code deals with various kinds of "collections" almost all of the time. It's a very good place to spend a language's "complexity budget".

    I wrote an appreciable amount of Go recently, with advice and reviews from several experienced Go users, and the experience pretty much cemented this view for me. An awful lot of energy was wasted memorizing various tricks and conventions to make do with loops, slices and maps where in other languages you'd just call a generic method. Simple concurrency patterns like a worker pool or a parallel map required many lines of error-prone channel boilerplate.
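    For contrast, a sketch of the "just call a generic method" experience in Python (not the Go code in question): a worker pool and parallel map collapse into one library call.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(fn, items, workers=4):
    # Worker pool + parallel map: the channels, fan-out and fan-in
    # that need hand-written boilerplate in 2017-era Go are hidden
    # behind a single generic .map() call.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))

print(parallel_map(lambda x: x * x, range(5)))  # [0, 1, 4, 9, 16]
```

    `pool.map` preserves input order, so the result reads like a sequential map that merely ran faster.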

  • by mattnewton on 10/2/17, 3:36 PM

    I just don’t buy that go is some sort of sweet spot because it doesn’t have generics. Generics pretty much exist for maps and slices, because they are needed in real programs. The language designers just don’t let you make your own generic collections.
  • by evmar on 10/2/17, 4:10 PM

    In this thread: people will bring out the same tired arguments for or against static typing, without commenting on the actual content of the post, which was quite good!

    I have come to see that type systems, like many pieces of computer science, can either be viewed as a math/research problem (in which generally more types = better) or as an engineering challenge, in which you're more concerned with understanding and balancing tradeoffs (bugs / velocity / ease of use / etc., as described in the post). These two mindsets are at odds and generally talk past each other because they don't fundamentally agree on which values are more important (like the great startups vs NASA example at the end).

  • by oldandtired on 10/3/17, 12:51 AM

    It has been interesting to see the to and froing of arguments for and against static typing in the discussions here.

    Though I am not a type theorist (I only dabble in compilers and language design), I have noted that many people conflate static typing and dynamic typing with other additional ideas.

    Static typing has certain benefits but also has certain disadvantages, dynamic typing has certain benefits but also has certain disadvantages.

    What I find interesting is that few people fall into the soft typing arena, using static typing where applicable and advantageous and using dynamic typing where applicable and advantageous.

    Static typing has a tendency in many languages to explode the amount of code required to get anything done, dynamic typing has a tendency to produce somewhat brittle code that will only be discovered at runtime. The implementation of static typing in many languages requires extensive type annotation which can be problematic.

    But what is forgotten by most is that static typing is, for the compiler itself, a dynamic runtime typing situation, even when the compiler is written in a statically typed language.

    Instead of falling into either camp, we need to develop languages that give us the best of both worlds. Many of the features people here have raised as being part of the static typing framework have rightly been pointed out as being part of the editor tooling being used, and are not specifically part of the static typing regime.

    Many years ago a similar discussion was held on Lambda-the-Ultimate, and the sensible heads came to the conclusion that soft typing was the best goal to head for. Yet, in the intervening years, when watching language design aficionados at work, they head towards full static typing or full dynamic typing and rarely head in the direction of soft typing (taking advantage of both worlds).

    So, the upshot is that this discussion will continue to repeat itself for the foreseeable future and there will continue to NOT be a meeting of minds over the subject.

  • by willtim on 10/2/17, 4:57 PM

    Our industry has not yet even scratched the surface of what types can offer: Types for enforcing architectures and controlling effects, types for checking correct use/free of scarce resources, types for verifying protocol implementations etc etc. Currently, half the industry is using schema-less json and dynamic languages; so really it is far too early to generally talk about any diminishing returns.
  • by solatic on 10/2/17, 5:12 PM

    OP draws a false one-dimensional relationship between types vs tests in terms of code quality. Writing expressive types instead of tests does much more than affect a quality curve - it changes the way you approach the problem you are trying to solve. The classic Haskell example is understanding how IO being a monad allows you to push impurity to the edge of your system.

    Start-ups decide not to write MVPs in languages like Haskell or Idris not because those languages aren't "rapid" enough, but because it's too difficult to find programmers experienced in those languages on the labor market. It's already difficult enough to find competent programmers - no founder wants to make their hiring woes even more difficult.

  • by barrkel on 10/2/17, 3:38 PM

    There's a point beyond which you spend more time proving things about your code than writing it, all the way up to the point where your ability to prove things about your code in your chosen type system starts to affect the kinds of solutions you can construct, and a different kind of complexity creeps in; representational complexity rather than implementation complexity. This can be a source of error, not just inefficiency.
  • by mannykannot on 10/2/17, 5:14 PM

    Firstly, thank you for wanting to take an open-minded look into the issue, rather than simply defend a position that you have already committed to.

    You write "Why then is it, that we don't all code in Idris, Agda or a similarly strict language?... The answer, of course, is that static typing has a cost and that there is no free lunch."

    I take it that you wrote "of course" here assuming that there must be some objective reason for the choice, and that it depends solely on strictness. But languages don't differ only in their strictness, so choices may be made objectively on the basis of their other differences, and we also know that choices are sometimes made on subjective or extrinsic grounds, such as familiarity. I don't know what proportion of professional programmers are familiar enough with Idris or Agda to be able to judge the value proposition of their strictness, but I would guess that it is rather small.

    Now, to look at the sentences I elided in the above quote: "Sure, the graph above is suggestively drawn to taper off, but it's still monotonically increasing. You'd think that this implies more is better." As the graph is speculative, it cannot really be presented as evidence for the proposition you are making. I could just as well speculate that static program checking does not do much for program reliability until you are checking almost every aspect of program behavior, and that simple syntactical type checking is of limited value. That would be consistent with the fact that there is little empirical evidence for the benefit of this sort of checking, and explain why most people aren't motivated to take a close look at Idris or Agda. In this equally-speculative view of things, current language choices don't necessarily represent a global optimization, but might be due to a valley of much more work for little benefit between the status quo and the world of extensive-but-expensive static checking.

  • by geokon on 10/2/17, 5:44 PM

    I think talking about a sweet spot is correct

    I've been thinking about the trajectory of C++ language development recently, and the emphasis has definitely been on making generics more and more powerful. You watch CppCon talks and see all this super expressive template spaghetti, and see that while it's definitely a better way to write code, the syntax is just horrifying and hard to "get over".

    Just like when "auto" took off and people started thinking about having "const by default", I'm starting to think that generic by default is the way to go. The composability of generic code is incredibly powerful and needs to be more accessible.

    However, at the other end of the spectrum, dynamic code leaves a lot of performance on the table and leads to runtime errors.

  • by CoolGuySteve on 10/2/17, 3:29 PM

    When I went from working at Apple to a language implementation group at another company, my views on Objective-C's duck typing + warnings for classes being useful and good were pretty heretical. It's nice to see other people agree with me.

    Especially when it comes to GUI programming, I really don't care if a BlueButton.Click() got called instead of RedButton.Click().

  • by ruskimalooski on 10/2/17, 3:35 PM

    These graphs really mean nothing. There is no data behind them. I might as well make a graph that conveys a non-descript correlation between how much an article bashes static typing & assertion and how high it is on HN.
  • by k__ on 10/2/17, 3:29 PM

    I had the same experience, but I also have to say that the static type systems of some FP-languages feel really light-weight.

    So yeah, static typing doesn't buy you much, but in some languages it's at least cheap.

  • by stephengillie on 10/2/17, 3:26 PM

    One of my favorite parts of Powershell is optional typing. Variables are a generic "Object" type by default, which can hold anything from a string to array to "Amazon.AWS.Model.EC2.Tag" or other custom types.

    Or, type can be specified when setting the variable:

        [String]$myString = "Hello World!"

    This would generate a type error:

        [Int]$myString = "Hello World!"

    Often, typed and untyped variables will sit together:

        [Int]$EmployeeID,[String]$FullName,$Address = $Input -split ","

  • by coding123 on 10/2/17, 7:46 PM

    I'm converting a codebase of Javascript of about 200+ js files to Typescript today. I am about 5% complete... already found two places where the argument list was wrong and was being sent into a void. I also see the code that was making up for the fact that the third argument was being ignored (basically patching downstream because they thought the feature was broken).

    Now, this codebase was written with a high degree of quality (it's pretty good but not perfect), but the lack of compile-time (and of course runtime) checks has caused waste.

    The second phase of my project is to convert all promises to RX Observables :)

  • by cm2187 on 10/2/17, 3:25 PM

    The benefit of static typing isn't just reliability. Tooling is another major argument. Won't appeal to certain hardcore programmers who think that even notepad has too many features. But it is great for refactoring, finding all references to a function or a property or navigating through the code at design time. Basically all the features visual studio excels at for .net languages.

    And I disagree with the barrier to entry argument. Static typing, by enabling rich tooling, helps a beginner (like it helped me) a lot more by giving live feedback on your code, telling you immediately where you have a problem and why, telling you through a drop down what other options are available from there, etc. Basically makes the language way more self-discoverable than having to RTFM to figure out what you can do on a class.

  • by seasoup on 10/2/17, 4:44 PM

    I really enjoyed how the analysis shows that different developers can have different equally valid opinions on this topic. It's where you place your values and preferences of programming, modified by what you are programming. The failure state of a cat photo sharing web app likely isn't as dramatic or important as that of a financial system or driverless car code. Great article.
  • by btown on 10/2/17, 3:42 PM

    Also depends on your problem domain. If you have good test coverage but you're parsing strings found in the wild, you're going to spend a lot more time "debugging" your assumptions than AttributeErrors which would be caught by typing. Bug free code is not always the same as working code.

    Disclaimer: Python user scarred by email header RFC violations

  • by noncoml on 10/2/17, 5:17 PM

    I think there are two kinds of statically typed languages: the ones where static typing is there to help the compiler (e.g. C) and the ones where it's there to help the user (e.g. TypeScript).

    I think Go, with its lack of algebraic types, is more of the first, helping the compiler, so I wouldn't use it as a good example of static typing.

    Haskell, OCaml and Rust would make excellent case studies, but we have nothing to compare against.

    So IMHO the best way to compare static typing vs dynamic typing is by comparing Typescript against JS. And in my experience the difference when writing code is huge. It completely eliminates the code-try-fix cycle during development.

  • by thesz on 10/2/17, 4:06 PM

    The effort to fix a defect is proportional to the time between the introduction of the defect and its discovery.

    This is a basic intuition behind all good practices, including CI, QA, etc.

    Types allow one to discover program defects (even generalized ones, when using some of the programming languages) in (almost) shortest possible amount of time.

    Types also allow one to constrain effects of various kinds (again, use a good language for this), and that constraint can make code simpler, safer and, in the end, more performant.

  • by valuearb on 10/2/17, 4:56 PM

    The two languages I develop in are Javascript and Swift. Couldn't be more different in type safety.

    I love everything about Swift except the compile times and occasionally inscrutable compile error messages.

    I love the interactivity of Javascript, but despise the lack of types, it's like I'm sketching out the idea for a program instead of directly defining what it is. And the lack of types burns me occasionally.

  • by avg_programmer on 10/2/17, 5:58 PM

    What are the costs of statically typed languages? The author stated "thinking about the correct types" and "increases compile times" among some other, weaker (imo) costs. What is wrong with "thinking about the correct types"? You are thinking about the same things in a dynamic language, right?

    For example, say you need to know about things that are "thennable". Whether you are in a statically typed language or not, you are still checking for the same thing: does it have the then() method? The tradeoff is in reading vs implementing code. With a statically typed language, you can easily search for implementers of the Thennable interface and you are guaranteed to be shown every implementer. The downside is that you have to write a few more lines of code to satisfy the static typing. With a dynamically typed language, you have to find the implementers yourself, but you can just slap a then method on anything and it will work.

    I am biased toward static typing so I am interested to hear counter points.
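    A sketch of the "thennable" check as a structural type, using Python's `typing.Protocol` (the names here are illustrative): the interface is written down and searchable, yet conformance is still duck-typed.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Thennable(Protocol):
    def then(self, callback): ...

class Task:
    # Task never declares that it implements Thennable;
    # having a then() method is enough.
    def __init__(self, value):
        self.value = value

    def then(self, callback):
        return Task(callback(self.value))

print(isinstance(Task(1), Thennable))   # True
print(isinstance("hello", Thennable))   # False
```
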
  • by _Codemonkeyism on 10/2/17, 3:45 PM

    I like for example Refined

    https://github.com/fthomas/refined

    not only for the static checking,

        scala> val i: Int Refined Positive = -5
        <console>:22: error: Predicate failed: (-5 > 0).
                val i: Int Refined Positive = -5
    
    but the expressive descriptions of a domain model.
  • by hwayne on 10/2/17, 5:03 PM

    Sometimes I wonder if we're arguing the wrong thing, where we think we're arguing static vs dynamic typing but what we're _actually_ arguing is static vs no-static typing. Haskell is static and not dynamic. Ruby is dynamic but not static. Python, starting with 3.5, is sorta both. C# is definitely both.

    All static typing means is that type information exists at compile time. All dynamic typing means is that type information exists at runtime. You generally need _at least_ one of the two, and the benefits each gives you are partially hobbled by the drawbacks of the other, so most dynamic languages choose not to have static typing. I also feel that dynamic languages don't really lean into dynamic typing benefits, though, which is why this becomes more "static versus no static".

    One example of leaning in: J allows for some absolutely crazy array transformations. I don't really see how it could be easily statically-typed without losing almost all of its benefits.

  • by hellofunk on 10/2/17, 6:02 PM

    There is one aspect to this debate that is worth pointing out. What about generative testing, which is possible in statically or dynamically typed languages? The article mentions that testing is perhaps more important in a dynamically typed language since there is less compiler support. But for example, Clojure rolled out the very clever clojure.spec library that allows you to precisely specify all details relating to function arguments, data structures, etc., in an even more fine-grained way than just types; you can specify that the second argument to a function must be larger than the first, or that a function should only return a value between 5 and 10, etc. These "specs" have the interesting property of being run-time checked or compile-time checked in the form of automatic tests, which can generate inputs based on the specs.

    In such a case, the line between these two type environments narrows.
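    A toy sketch of the generative half of that idea in Python (not clojure.spec itself): state the spec as a predicate, generate random inputs, and check that the property holds.

```python
import random

def clamp(x, lo=5, hi=10):
    return max(lo, min(hi, x))

def check(prop, gen, trials=1000):
    # Generative testing: random inputs stand in for hand-picked cases.
    for _ in range(trials):
        x = gen()
        assert prop(x), f"property failed for {x!r}"
    return True

# Spec: clamp must always return a value between 5 and 10.
print(check(lambda x: 5 <= clamp(x) <= 10, lambda: random.randint(-100, 100)))  # True
```

    Real property-based testing libraries add shrinking (minimizing a failing input) on top of this loop, which is where most of their value lies.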

  • by bad_user on 10/2/17, 5:41 PM

    Those line charts are totally made up, with arguments pulled out of thin air to support this line:

    > "Go reaps probably upwards of 90% of the benefits you can get from static typing"

    That 90% number is totally made up as well. I don't see evidence that the author actually worked with Haskell, Idris, or Agda, these being the three static languages mentioned. The article is basically hyperbole.

    If I am to pull numbers out of my ass, I would say that Go reaps only 10% of the benefits you get with static typing. This is an educated guess, because:

    1. it gives you no way to turn a type name into a value (i.e. what you get with type classes or implicit parameters), therefore many abstractions are out of reach

    2. no generics means you can't abstract over higher order functions without dropping all notions of type safety

    3. goes without saying that it has no higher kinded types, meaning that expressing abstractions over M[_] containers is impossible even with code generation

    So there are many abstractions that Go cannot express because you lose all type safety, therefore developers simply don't express those abstractions, resorting to copy/pasting and writing the same freaking for-loop over and over again.

    This is a perfect example of the Blub paradox btw. The author cannot imagine the abstractions that are impossible in Go, therefore he reaches the conclusion that the instances in which Go code succumbs to interface{} usage are acceptable.

    > "It requires more upfront investment in thinking about the correct types."

    This is in general a myth. In dynamic languages you still think about the shape of the data all the time, except that you can't write it down, you don't have a compiler to check it for you, you don't have an IDE to help you, so you have to load it in your head and keep it there, which is a real PITA.

    Of course, in OOP languages with manifest typing (e.g. Java, C#) you don't get full type inference, which does make you think about type names. But those are lesser languages, just like Go and if you want to see what a static type system can do, then the minimum should be Haskell or OCaml.

    > "It increases compile times and thus the change-compile-test-repeat cycle."

    This is true, but irrelevant.

    With a good static language you don't need to test that often. With a good static type system you get certain guarantees, increasing your confidence in the process.

    With a dynamic language you really, really need to run your code often, because remember, the shape of the data and the APIs are all in your head, there's no compiler to help, so you need to validate that what you have in your head is valid, for each new line of code.

    In other words this is an unfair comparison. With a good static language you really don't need to run the code that often.

    > "It makes for a steeper learning curve."

    The actual learning is in fact the same, the curve might be steeper, but that's only because with dynamic languages people end up being superficial about the way they work, leading to more defects and effort.

    In the long run with a dynamic language you have to learn best practices, patterns, etc. things that you don't necessarily need with a static type system because you don't have the same potential for shooting yourself in the foot.

    > "And more often than we like to admit, the error messages a compiler will give us will decline in usefulness as the power of a type system increases."

    This is absolutely false: the more static guarantees a type system provides, the more compile-time errors you get, and a compile-time error happens where the mistake is actually made, whereas a runtime error can happen far away, like a freaking butterfly effect, sometimes in production instead of crashing your build. So whenever you have the choice, always choose compile-time errors.

  • by iamleppert on 10/2/17, 7:10 PM

    It's far more useful to implement validation and type checking via introspection and interrogation of type, quantity, structure, size, or some other property at runtime in a dynamic programming language than to pedantically have to type all your variables. Most interesting types are far from the basics of different size numbers, strings and objects anyway. It's better to accept a fast and quick runtime type error than a lengthy compile-time type checking process, because less code needs to be evaluated to expose the type error. See the "Worse is better" principle in language design.

    Wouldn't it be great if we can use the computer to figure out what the types should be by a runtime evaluation of the code and save precious human time for things only humans can do?
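    A minimal sketch of that idea in Python (the decorator is my own illustration; tools in this vein exist that generate annotations from observed calls): record the types actually seen at runtime, the raw material from which annotations could later be generated.

```python
import functools

observed = {}

def record_types(fn):
    # Record the runtime types seen at each call site, so a tool
    # could later propose annotations instead of a human writing them.
    @functools.wraps(fn)
    def wrapper(*args):
        sig = tuple(type(a).__name__ for a in args)
        result = fn(*args)
        observed.setdefault(fn.__name__, set()).add((sig, type(result).__name__))
        return result
    return wrapper

@record_types
def add(a, b):
    return a + b

add(1, 2)
add("foo", "bar")
print(sorted(observed["add"]))  # [(('int', 'int'), 'int'), (('str', 'str'), 'str')]
```
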

    I don't have to think or decorate my speech with types of noun, verb, pronoun, adjective etc. when I speak, but I'm still able to communicate very effectively, because your brain is automatically adding the correct type information based on context that helps you understand what I'm saying, even with words that have multiple types. Granted, natural language is different than programming language but there was once a trend to try and make programming languages more like human language, not less so.

  • by platz on 10/2/17, 7:08 PM

    https://www.theatlantic.com/technology/archive/2017/09/savin...

    Software failures are failures of understanding, and of imagination.

    The problem is that programmers are having a hard time keeping up with their own creations.

    Dynamic typing simply doesn't scale.

  • by jon49 on 10/6/17, 7:31 PM

    Languages like F# hit a nice sweet spot between static typing and dynamic typing. F# has Type Providers that "generate" code on the fly as you are typing. You don't need to specify all the types; it will infer many of them for you. So you almost feel like you are writing in a dynamic language, but it tells you if you are writing something incorrectly.

    I would not consider a language to be modern unless it has Type Providers; I consider this to be such an essential feature. I believe Idris and F# are the only languages that have it. People are trying to push TypeScript to add it - who knows if it will happen.

    Many are saying that if you have a dynamic language you just need to be disciplined and write many tests. With good statically typed languages like F# you don't even need tests for certain business logic, since the way you write your code makes "impossible states impossible", see https://www.youtube.com/watch?v=IcgmSRJHu_8
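    A rough Python analogue of "impossible states impossible" (the domain here is my own illustration, not from the talk): model state as a union of shapes so that invalid combinations simply have no representation.

```python
from dataclasses import dataclass
from typing import Union

# A (is_loaded, data, error) record can represent impossible states,
# e.g. is_loaded=True with data=None. A sum type cannot:
# each variant carries exactly the fields that exist in that state.

@dataclass
class Loading:
    pass

@dataclass
class Loaded:
    data: str

@dataclass
class Failed:
    error: str

State = Union[Loading, Loaded, Failed]

def describe(s: State) -> str:
    if isinstance(s, Loaded):
        return f"got {s.data}"
    if isinstance(s, Failed):
        return f"error: {s.error}"
    return "still loading"

print(describe(Loaded("users")))  # got users
```

    In F# the compiler additionally checks that the match over variants is exhaustive, which is the part Python cannot enforce.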

  • by hyperpallium on 10/2/17, 9:53 PM

      1. performance dominates (like 80:20)
      2. tooling
      3. doc (becomes crucial on large projects)
      4. correctness
    
    Formal correctness doesn't really matter. Anecdotally (since that's really all we have), I find that in practice very few bugs are caught by the type checker.

    Further, code is usually not typed as accurately as the language allows. i.e. the degree of type-checking is a function of the code; the language only provides a maximum. In a sense, every value has a type, even if it's not formally specified or even considered by the programmer, in the same sense that every program has a formal specification, even if it's not formally specified.

    Upfront design is the price. Which is difficult to pay when the requirements are changing and/or not yet known.

  • by js8 on 10/2/17, 4:34 PM

    Like other commenters, I disagree that there are diminishing returns to static typing itself; rather, there are diminishing returns to proper engineering in certain cases (i.e. doing something as perfectly as possible).

    By adding types (and, in the extreme, dependent types), you're allowing the compiler to prove more things about the code (to check correctness or generate more optimal code). If you actually need to prove more things, then it's better to leave that to the compiler rather than to a human.

    Of course, if you're writing e.g. a web scraping script, you don't need these guarantees and you don't have to care about types. But the better the engineering you want, the more static typing will help, and there are no diminishing returns.

  • by FranOntanaya on 10/2/17, 5:21 PM

    It bothers me that types as representation of hardware constraints are mixed up with types as a machine readable subset of validation.

    It makes the higher-level types seem more transcendental than they are, and also seems to relegate actual validation to second-rate status. At the end of the day, if an argument is the right scalar or interface, you'll get the same result at runtime whether you hinted it (for quality-of-life improvements) or checked it with some boilerplate validation. Worst-case scenario, people will forgo encoding known stricter constraints after generally hinting the expected type.
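
    A small TypeScript sketch of that distinction (the function name is hypothetical): the hint guarantees the scalar's type at compile time, while the known stricter constraint still needs ordinary runtime validation:

```typescript
// The annotation "number" is the machine-readable subset of validation;
// the stricter, known constraint (a TCP port is 1..65535) still has to
// be checked as boilerplate at runtime.
function bindPort(port: number): string {
  if (!Number.isInteger(port) || port < 1 || port > 65535) {
    throw new RangeError(`invalid port: ${port}`);
  }
  return `listening on ${port}`;
}

console.log(bindPort(8080)); // listening on 8080
```

    The worry in the comment is exactly that, having written `port: number`, people stop bothering with the range check.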

  • by tabtab on 10/2/17, 9:30 PM

    I've generally felt that each shines in different areas. Static typing is best for lower-level infrastructure and shared APIs, while dynamic is better for gluing these all together toward the "top" of the stack, closer to the UI and biz logic. The problem is that languages tend to be all one or the other, so we have to make a choice. What's needed is a language (or language interface convention) that can straddle both, where a given class or library can be "locked down" type-wise to various degrees as needed.
  • by cleandreams on 10/2/17, 5:42 PM

    My 2 cents: dynamic typing works okay for library consumers. For libraries themselves, though, or platform code, the disadvantages are real. It is harder to fix and extend code when you don't know who calls it, how they call it, or what they get in return. Complex code becomes littered with 'black holes'. That is a big part of why Facebook implemented Hack. I heard a talk by one of the developers; even now there are PHP black holes in the Facebook code base that they can't migrate to Hack.
  • by lisper on 10/2/17, 10:54 PM

    100% statically-type-checked code != 100% bug-free code. That would require solving the halting problem. So you have to test everything anyway if you need high reliability.
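
    A trivial TypeScript sketch of that point: the function below type-checks cleanly, yet carries a logic bug that only a test (or a proof far beyond ordinary type systems) would catch:

```typescript
// Fully typed, and compiles without complaint...
function daysInMonth(month: number): number {
  const days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31];
  // Bug: February is hard-coded to 28, ignoring leap years,
  // and nothing stops month from being 0 or 13 either.
  return days[month - 1];
}

console.log(daysInMonth(2)); // 28, even in a leap year
```

    The types rule out "called with a string", not "wrong about February"; that residue of behavior is what tests are for.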
  • by snambi on 10/2/17, 4:52 PM

    Any non-trivial program, meaning 100K+ lines of code that involves many developers over 2+ years, should be written in a statically typed language.
  • by tiuPapa on 10/2/17, 7:10 PM

    So the article does praise Go, but what about Rust? Does it hit that sweet spot? Is it a language a startup should use?
  • by ratherbefuddled on 10/3/17, 12:08 AM

    I guess the only bit I don't really agree with is this:

    > upfront investment in thinking about the correct types

    being a cost. Surely you have to do this whether the compiler will check your work or not, and if you just don't do the thinking you'll end up with bugs? Isn't this a benefit?

  • by zengid on 10/2/17, 7:39 PM

    Couldn't these discussions benefit from an inclusion of actual empirical evidence? Here's a list of some such studies: http://danluu.com/empirical-pl/
  • by z3t4 on 10/3/17, 7:35 AM

    While the made-up graphs might help in understanding his reasoning, I think it's way too abstract/philosophical. It's like walking into a dark room and making assumptions and arguments based on your belief about what color the walls are.
  • by magice on 10/2/17, 11:15 PM

    https://dl.acm.org/citation.cfm?id=2635922

    Just ONE study, so don't take too much heed. That said, apparently:

    * Strongly typed, statically compiled, functional, managed-memory languages are the least buggy

    * Perl is INVERSELY correlated with bugs. Interestingly, Python is positively correlated with bugs. There goes the theory about how Python code reads like running pseudo-code... Snake (python's, to be more precise) oil?

    * Interestingly, unmanaged-memory languages (C/C++) have a high association with bugs across the board, rather than just memory bugs.

    * Erlang and Go are more prone to concurrency bugs than Javascript ¯\_(ツ)_/¯. Lesson: if you ain't gonna do something well, just ban it.

    All in all, interesting paper.

  • by shalabhc on 10/2/17, 4:56 PM

    Question for all static or dynamic typing proponents: do you see your language/type-system as a great and scalable way to program large distributed systems in 10 years? 20 years?
  • by amelius on 10/2/17, 6:44 PM

    Can't we have tools that automatically perform the static typing for us, perhaps in an interactive way?

    (I'm not talking about systems which just infer types automatically).

  • by vhiremath4 on 10/2/17, 10:00 PM

    > “And more often than we like to admit, the error messages a compiler will give us will decline in usefulness as the power of a type system increases.”

    Can someone explain this?

  • by woolvalley on 10/2/17, 7:07 PM

    I would like lots of static typing, even more than we have now, but with the ability to turn it off for faster compile times during some parts of development.
  • by jugg1es on 10/3/17, 3:37 AM

    In my experience with growing companies, even business-critical code bases get rewritten within 3-4 years to account for flexibility that the previous strongly-typed system just can't handle. A well designed system uses strong types for the "knowns" but allows changes via dynamic types for the "unknowns". Those are the systems that last.
  • by danharaj on 10/2/17, 3:40 PM

    Just a technical point that hints at a significant philosophical idea: The asymptote cannot reach 100% of program behavior in any finitary way. That would solve the halting problem. The x-axis should go off to infinity. Also, it's not a smooth progression. There are huge jumps in expressivity involved here. Going from Java-style types to Hindley-Milner to full System F are all massive jumps in expressivity. There are also incompatible features of type theories. Type theories are a fractal of utility and complexity.

    A type system doesn't only describe the behavior of the program you write. It also informs you of how to write a program that does what you want. That's why functional programming pairs so well with static typing, and in my opinion why typed functional languages are gaining more traction than lisp.

    How many ways are there to do something in lisp? Pose a feature request to 10 lispers and they'll come back with 11 macros. God knows how those macros compose together. On the other hand, once you have a good abstraction in ML or Haskell it's probably adhering to some simple, composable idea which can be reused again and again. In lisp, it's not so easy.

    A static type system that's typing an inexpressive programming construct is kind of a pain because it just gets in the way of whatever simple thing you're trying to do. A powerful programming construct without a type system is difficult to compose because the user will have to understand its dynamics with no help from the compiler and no logical framework in which to reason about the construct.

    So, a static type system should be molded to fit the power of what it's typing.

    The fact that every Go programmer I talk to has something to say about their company's boilerplate factory for getting around the lack of generics tells me something. This is only a matter of taste to a point. In mathematics there are a vast possibility of abstract concepts that could be studied, but very few are. It's because there's some difficult to grasp idea of what is good, natural mathematics. The same is in programming: there are a panoply of programming constructs that could be devised, but only some of them are worth investigating. Furthermore, for every programming construct you can think of there's only going to be a relatively small set of natural type systems for it in the whole space of possible type systems.

    Generics are a natural type system for interfaces. The idea that interfaces can be abstracted over certain constituents is powerful even if your compiler doesn't support it. If it doesn't, it just means that you have to write your own automated tools for working with generics. It's not pretty.
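
    A minimal TypeScript sketch of that last point: one generic definition replacing a per-type "boilerplate factory":

```typescript
// One definition, checked for every element type, versus one
// hand-written copy per type in a language without generics.
class Stack<T> {
  private items: T[] = [];
  push(item: T): void {
    this.items.push(item);
  }
  pop(): T | undefined {
    return this.items.pop();
  }
}

const ints = new Stack<number>();
ints.push(42);
// ints.push("forty-two"); // compile error: not a number
console.log(ints.pop()); // 42
```

    The abstraction exists either way; the only question is whether the compiler checks each instantiation or a code generator stamps out N unchecked copies.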

  • by tree_of_item on 10/2/17, 3:33 PM

    Yeah, actually I'm gonna go ahead and roll my eyes at the idea that parametric polymorphism is on the wrong side of the "diminishing returns of static typing". Less than ONE percent of Go code would benefit from type-safe containers?
  • by 201709User on 10/2/17, 3:18 PM

    If I don't have to maintain the thing you can give me any Python, JS or Go you want!
  • by katastic on 10/2/17, 11:38 PM

    This site has a strange fascination with hatred of static languages. I really don't get it. My only guess is that modern colleges teach dynamic languages to students and so they're more familiar with it. Perhaps their teachers even stress that static languages are inferior.

    To me, it's the right tool for the right job. I have no problem spinning up a static language for performance and outsourcing the scripting to a dynamic language like Python, getting the best of both worlds in terms of performance and rapid development.

  • by zzzcpan on 10/2/17, 3:55 PM

    "I don't think it's particularly controversial, that static typing in general has advantages"

    That's not really true; it's just a belief. Here's an example to start understanding these things: the exact same program written in a very high-level, very expressive language like Perl, instead of Go, is going to have at least 3 times less code, and since defect rates per line of code are comparable, you would end up with at least 3 times fewer bugs. Suddenly the reliability argument for static typing doesn't make any sense. That's because in PL research there is a huge gap in understanding of how programmers actually think.

  • by guicho271828 on 10/2/17, 4:42 PM

    Correct and useless programs are useless. Quite simple.
  • by brango on 10/2/17, 3:48 PM

    Why my favorite color is red not blue...
  • by nwellinghoff on 10/2/17, 4:46 PM

    Time and time again I can make a well-written, functioning program in Java or C# at least twice as fast as using JS and its brethren. Sure, it might have more "lines". Who freaking cares? My team and I square off all the time: "OK, you use Node, I'll use Java", and the Java dev always wins. It's just so much faster, cleaner, and more mature. It's NO CONTEST.