In a dynamically typed system, values have types at runtime but variables and functions do not. In a statically typed system, variables and functions have types known and checked at compile time. E.g. in Python, `x` can be anything; at runtime, if it is `1` it's a number, and if it is `"foo"` it's a string. You would only know which type `x` was at runtime, and it could be different each time you ran the program. In a language like Java, you would write `int x` if `x` was to be a number, and you would know at compile time that `x` always has to be an `int`.
"Explicit" and "implicit" types both refer to static type systems. The defining characteristic of a static system is that the types are known at compile time, but not necessarily that they have to be written out. In Java, types are explicit--you have to write them out. So in Java, a method might look something like:
public int foo(String bar, Object baz) { ... }
The types are both known at compile time (static) and written out (explicit). However, there are also languages that do not force you to write the type out. They can infer the type of a function from its body and how it is used. An example would be OCaml, where you can write something like:
let foo x = x + 1
Since you used `+`, OCaml can figure out that `x` has to be an `int` all on its own. So the type of `foo` (`foo : int -> int`) is known at compile time, just like in the Java example. It is entirely static. However, since the compiler can figure out what the types have to be on its own, you do not have to write them out yourself: they're implicit.
In short: whether a type system is explicit or implicit is a property of static systems. It is a completely different question from whether a type system is dynamic or static.
Often, you have type systems that are at times explicit and at times implicit.
For example, I believe C# lets you infer types using the `var` keyword. So instead of writing `int x = 10`, you can write `var x = 10` and the compiler figures out that `x` has to be an `int`. C++ does something similar with `auto`. These systems are usually explicit but have some inference.
On the flip side, there are systems that are usually implicit but sometimes force you to write out a type signature. Haskell is a great example. Most of the time, Haskell can infer the types for you. However, you can sometimes write code that is ambiguous, like `show . read`, where Haskell cannot figure out the types on its own. In this case, you would be forced to explicitly specify the type of either `show` or `read`. Additionally, some more advanced features of the type system (like rank-n polymorphism) make inference undecidable--that is, it is not guaranteed to halt. This means that code using these features often needs explicit type signatures.
I don't see extension methods and implicit interfaces as the same at all.
First let's speak to purpose.
Extension methods exist as syntactic sugar, specifically to give you the ability to use a method as if it were a member of an object, without having access to the internals of that object. Without extension methods you can do exactly the same thing; you just don't get the pleasant syntax of `someObjectYouCantChange.YourMethod()` and instead have to call `YourMethod(someObjectYouCantChange)`.
The purpose of implicit interfaces, however, is to let you implement an interface on an object you don't have access to change. This gives you the ability to create a polymorphic relationship between any object you write yourself and any object whose internals you don't have access to.
Now let's speak to consequences.
Extension methods really have none; this is perfectly in line with the strict access constraints .NET uses to support distinct perspectives on a model (the perspective from inside, outside, an inheritor, and a neighbor). The consequence is just some syntactic pleasantness.
Implicit interfaces, on the other hand, have a few consequences:
- Accidental interface implementation: this can be a happy accident, or an accidental LSP violation--meeting someone else's interface which you didn't intend to, while not honoring the intent of its contract.
- The ability to easily make any method accept a mock of any given object, simply by mirroring that object's interface (or even just creating an interface that meets that method's requirements and no more).
- The ability to create adapters and other similar patterns more easily for objects whose innards you can't meddle with.
- The ability to delay interface implementation and add it later without having to touch the actual implementation, only implementing it when you actually want to create another implementor.
The two concepts are very, very similar. In normal OOP languages, we attach a vtable (or for interfaces: an itable) to each object. This allows us to invoke methods similar to `this->vtable.p(this)`.
In Haskell, the method table is more like an implicit hidden argument. A function constrained by a typeclass, such as `f :: Class a => a -> a`, would look like the C++ function `template<typename A> A f(Class<A> instance, A x)`, where `Class<A>` is an instance of typeclass `Class` for type `A`. A method would be invoked like `instance.p(x)`. The instance is separate from the values. The values still retain their actual type. While typeclasses allow some polymorphism, this is not subtyping polymorphism. That makes it impossible to make a list of values that satisfy a `Class`. E.g. assuming we have `instance Class Int ...` and `instance Class String ...`, we cannot create a heterogeneous list type like `[Class]` that has values like `[42, "foo"]`. (This is possible when you use the “existential types” extension, which effectively switches to the Go approach.)

In Go, a value doesn't implement a fixed set of interfaces. Consequently, it can't have a vtable pointer. Instead, interface values are implemented as fat pointers that include one pointer to the data and another pointer to the itable.
The itable is combined with the data into a fat pointer when you cast from an ordinary value to an interface type. Once you have an interface type, the actual type of the data has become irrelevant. In fact, you can't access the fields directly without going through methods or downcasting the interface (which may fail).
Go's approach to interface dispatch comes at a cost: each polymorphic pointer is twice as large as a normal pointer, and casting from one interface to another involves copying the method pointers into a new itable. But once we've constructed the itable, this allows us to cheaply dispatch method calls through any interface, something traditional OOP languages struggle with. Below, m is the number of methods in the target interface, and b is the number of base classes. In traditional OOP languages, the typical cost of method dispatch is much better than the worst case, since method lookup can often be cached, but the worst-case complexities are quite horrible.
In comparison, Go has O(1) or O(m) upcasting, and O(1) method dispatch. Haskell has no upcasting (constraining a type with a type class is a compile-time effect), and O(1) method dispatch.