
A friend of mine asked the following question on our local tech Slack. Despite the fact that I’m extremely happy with dynamic typing, I’m pretty curious as well, so I’ll re-ask on his behalf:
> At the risk of stirring up a hornet’s nest, can anyone recommend a book or article that does a good job of outlining the pros and cons of static and dynamic typing?


I think this is the best comparison I’ve ever seen. :smile:

In my opinion a completely legitimate question. I’m still undecided if I prefer static or dynamic typing. :thinking_face: It also depends on the specific languages you compare.

I’m somewhat confused by the third “advantage” listed for “dynamically-typed languages”. Depending on the language, many kinds of “syntax and semantic errors” would also be/cause bugs in programs in dynamically-typed languages. Also, in the example of `if(1)`, it depends on the type system of the respective language whether this is allowed. To me it also seems the article confuses (to some extent) dynamic and weak typing.

@sschwarzer an unfortunately easy mistake to make; I once tried to get on the “alignment axes for everything” with type-systems, and I’m afraid I still didn’t get it quite right: https://benknoble.github.io/blog/2019/08/27/types-alignment/ feedback welcome

Not necessarily a pro and con list, but I enjoyed this lit review: http://danluu.com/empirical-pl/

For me, which one I like is entirely based on the context/parameters of the problem.
- Only me, or me + one other team member, and for a project size where it’s possible to keep everything in my head at once? Dynamic.
- Need to prototype or build something fast? Dynamic.
- Big project, or lots of moving parts, many team members each working on their part, and I need to know that a change doesn’t break some random part of the code (and “testing” isn’t possible for 100% coverage)? Static.

I totally agree that the “You spend less time debugging syntax and semantic errors” point was written poorly

@ben.knoble I’d put Python in the strong typing category. For example, trying to add a number and a string causes a `TypeError`, as does adding a byte string (`bytes` type) and a unicode string (`str` type). What operations in Python do you have in mind that would suggest weak typing?

@ben.knoble Also, I think calling dynamic typing “chaotic” is too strong or at least ambiguous.

[Let me pretend to be Shriram] What’s the definition of strong/weak typing?

@sorawee The “definition” I know is that weak typing implies (more) implicit conversions. For example, a language might evaluate the expression `1 + "1"` to `2` or to `"11"`. I’d say the categorization into strong/weak typing is according to what degree such implicit conversions are “typical” in a language. For example, many languages allow type conversions for numbers, like `1.0 + 1` -> `2.0`, without necessarily being considered weakly typed. “Weakly typed” is typically associated with languages that do string <-> number conversions as above.
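Python is a handy illustration of that distinction: numeric promotion is implicit and uncontroversial, while string <-> number conversion must be spelled out.

```python
# Implicit int -> float promotion: generally not considered "weak typing".
print(1.0 + 1)       # 2.0

# String <-> number mixing needs explicit conversion in Python:
print(int("1") + 1)  # 2
print("1" + str(1))  # 11
```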

But that’s not really about types, right? It’s about the semantics of `+`.

@massung It just occurred to me that the first point can be interpreted in a slightly different way. If I have static typing, I have to hold less in my head at once, even if I could (with some more mental effort). Maybe that’s the reason I feel a bit more relaxed if I have a compiler “looking over my shoulder.” :slightly_smiling_face:

@sorawee I think it can be interpreted either way. :slightly_smiling_face:

yeah. I’ve had the misfortune of working on a rather large Python project w/ a team of 10 people. The number of times someone would do a moderately-sized refactor and end up breaking multiple things and no one would find out until days (or weeks) later when some rare function finally got called was rather high.
Others would argue for things like “more unit tests” and “greater code coverage”. But, IMO, that’s what static typing is in many ways: “free code coverage” done by the compiler.

@sorawee That’s a good point. For example, in the `if(1)` case, you could say `if` is able to interpret integers as conditions (semantics of `if`), or you could say that in `if` an integer is converted implicitly to a boolean (weak typing).
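For what it’s worth, Python takes the first reading: `if` accepts any object and asks for its truth value via a well-defined protocol, rather than doing an ad-hoc conversion.

```python
# Python's `if` interprets any object as a condition via its truth value.
if 1:
    print("1 is truthy")

# The same protocol, made explicit with bool():
print(bool(1), bool(0), bool(""), bool([0]))  # True False False True
```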

Yes, maybe an issue / PR is better for discussion


The `.1` is because of macro expansion.

This looks like a stub. Should probably be removed? https://docs.racket-lang.org/csv/index.html

Well, it’s a user package, so it’s up to the author what they want to do.

> It’s about the semantics of `+`.
Don’t get me started on `+` and operators in languages :wink:. This is one of those areas I’m pretty passionate about. Operators have known semantics that - IMO - shouldn’t be messed with.
For example, `+` and `*` are commutative. Any programming language that allows `list + list` or `string + string` or `string * int` gets a big negative strike against it from me. Make up a new operator (`++`, `,`, `&`) if you feel compelled to use one.
I also wish there were a way in languages to enforce such properties at the compiler level. Maybe there’s a language that does that, but I haven’t come across one yet. :disappointed:
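Python is one of the languages this complaint applies to: it allows all three of the overloads named above, and the `+` it gives strings and lists is not commutative.

```python
# All three overloads from the complaint, as Python actually behaves:
print([1, 2] + [3])   # [1, 2, 3]   list + list
print("ab" + "cd")    # abcd        string + string
print("ab" * 3)       # ababab      string * int

# And this `+` is not commutative:
assert "ab" + "cd" != "cd" + "ab"
```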

ML-like langs do the separate-operators thing; the only langs I know of with the ability to enforce things like commutativity require proofs (interactively, automatically, or proof-carrying-code)

Hi! How can I transform this into a list without waiting for an EOF?
(define-values (in out) (make-pipe))
(displayln "1" out)
(displayln "2" out)
(displayln "3" out)
(port->lines in) ; '(1 2 3)

Have you guys by any chance studied operators in Mathematica?

Not me, sorry

Call `(read in)` 3 times


Pretty simple to remember the precedence :wink:

does not work, the same thing happens

Ah, right, I read it was based on M-expressions originally

What do you mean? [samth@homer:~/.../racket-test/tests/match (master) plt] r
Welcome to Racket v8.0.0.11 [cs].
> (define-values (in out ) (make-pipe ))
> (displayln "1" out)
> (displayln "2" out)
> (displayln "3" out)
> (list (read in) (read in) (read in))
'(1 2 3)

The representation of expressions is pretty flexible: https://reference.wolfram.com/language/tutorial/TextualInputAndOutput.html#5084
It “feels” like they are using S-expressions, but I don’t know the internal representation.

I do not know in advance what the size of the list is. That’s why I use a pipe. I only know when the process is finished. It’s like a buffer

Then the other side needs to close the port

[samth@homer:~/.../racket-test/tests/match (master) plt] r
Welcome to Racket v8.0.0.11 [cs].
> > (define-values (in out ) (make-pipe ))
#<procedure:>>
> > (displayln "1" out)
#<procedure:>>
> > (displayln "2" out)
#<procedure:>>
> > (displayln "3" out)
#<procedure:>>
> (close-output-port out)
> (port->lines in)
'("1" "2" "3")
>

Yes, that worked! Thank you very much

I’m looking at fully-expanded code and don’t see the suffix `.1`. I am so confused what’s happening where, when, and why.

@massung Although I can kind of live with `+` concatenating strings, I prefer it if the language uses another operator. Even though the use of `+` for strings may be intuitive, I like it if the operator hints at the types of the combined objects.
Edit: That said, many ambiguities for human readers can be “resolved” by good naming.

I suppose it depends on the types of errors. In that Python project, if many/most of the errors would’ve been caught by a statically typed compiler, then it’s clear. In my case, the vast majority of uncaught errors would not have been caught by a statically typed compiler. It’s probably fairly domain specific.

In our case they would have been caught. They were things like changing the parameters to a function (removing, adding, or changing the expected types from string to dict, etc)

but i totally get what you are saying :slightly_smiling_face:

I’m primarily a solo developer with decent practices. If I was on a large team of various skill levels, I’m sure I could be tempted to use static typing again :)

…although, I’d probably break the large team into smaller teams, and the large projects into smaller projects, and introduce good testing methodology…. but I do see value in static typing

I’ve yet to find a string-append operator I like, though: `^` in ML just doesn’t do it for me.

Does `~a` count?

Heh, that one’s not too bad, at least if you’re familiar with common-lisp-style formatting

I’ve liked `,` from Smalltalk. Even `..` in Lua works.

^vim had an issue where `.` was ambiguous in some contexts (string append or dictionary access?); later versions prefer `..`

Any good references on bootstrapping compilers (technical or academic)? Chez Scheme in particular would be fun, but anything that explains the core ideas will do

Maybe this is more specific than you want, but here’s some work from the Spoofax world: https://dl.acm.org/doi/10.1145/3093335.2993242

Ah, that looks really fun, thanks. Its references are probably also sufficient for what I wanted, so bonus points

Nim uses `&` to concatenate strings. In my opinion, although it looks a bit odd, you could see it as another “and” (i.e. with `+` and `&` as two spellings of “and”).

Probably not what you’re looking for, but “Reflections on Trusting Trust” by Ken Thompson is relevant and worth reading.

I understood the “semantic errors” point not to be that mistakes like those don’t cause bugs, but that in those languages it’s reasonably common to run across a situation where two types that were conceived separately just so happen to share some of their interface, thus making it possible for some of the code that was written for one type to be reused for the other without bugs. Usually this is duck typing (two interfaces whose method names happen to collide, or which overload the same operators), and the serendipity can lead to surprises too (like a matrix type overloading `*` but not making it commutative).
Static type systems where it’s normal to type objects at the granularity of individual methods seem more common now (Go, TypeScript), and not every untyped language has to allow values to have fine-grained parts of their interface in common either, so it’s not necessarily a long-term distinguishing factor between static typing and dynamic typing. It’s more of an observation about existing languages.
In that regard, a lot of the other supposed benefits of dynamic typing can be nitpicked: Dynamically typed programs are often more verbose if they have to write out a bunch of manual error checking. They can take a while to compile if they make extensive use of macros. I often see people say they’re less tolerant to change because refactoring can’t rely on automation or machine checking as much.
The supposed benefits of static typing can be nitpicked too: Not every statically typed language can rely on mature tooling. Not every one is equipped to communicate with databases. Some statically typed languages are weakly typed enough that they let a lot of errors through (C) or can introduce novel surprises (Scala’s resolution of implicit arguments).
But I think these are mostly distractions that the post does well to gloss over. People who want this kind of overview of the differences probably aren’t immediately trying to figure out what their pie-in-the-sky dream language would look like, nor what flaws can possibly exist. They’re probably mostly interested in what the currently available options look like. Personally, the ability to have control over whole-program compile times (if not the ability to keep them short, per se) is one reason I’ve stuck with dynamic typing for so long. For some people, their reason might be the ability to find serendipitous reuse opportunities across types of different original intent (if for whatever reason TypeScript or Go don’t meet their qualifications). And someone with a particular need for IDE tooling is probably going to opt for statically typed languages that already have that tooling integration ready to go.
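A tiny Python sketch of the “serendipitous reuse” point: two types conceived separately happen to share a method name, so code written with one in mind works for the other. The class and function names here are made up for illustration.

```python
class LogFile:
    """One type, written first."""
    def lines(self):
        return ["log line 1", "log line 2"]

class ChatTranscript:
    """An unrelated type that happens to share the .lines() name."""
    def lines(self):
        return ["hello", "bye"]

def count_lines(source):
    # Written with LogFile in mind, but duck typing makes it work
    # for anything that answers to .lines().
    return len(source.lines())

print(count_lines(LogFile()))         # 2
print(count_lines(ChatTranscript()))  # 2
```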

actually even in math, `+` is not always commutative, e.g. in infinite series: https://www.isical.ac.in/~arnabc/prob1/series.html

so I think it’s okay for programming languages to not require `+` to be commutative

also in abstract algebra/category theory, a near-semiring (https://en.wikipedia.org/wiki/Near-semiring) does not require `+` to be commutative, only for (S, +, 0) to be a monoid

and if `*` is defined as multiplication, then matrix multiplication is famously non-commutative: https://en.wikipedia.org/wiki/Matrix_multiplication#Non-commutativity
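The matrix case is easy to check concretely; here is a small sketch with plain nested lists (the `matmul` helper name is made up for illustration):

```python
def matmul(a, b):
    """Multiply two 2x2 matrices represented as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]  # swaps columns when on the right, rows when on the left

print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]]  -- A*B != B*A
```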