Introducing Spellcode

Since Samuel Gélineau, for one, has now expressed an interest in reading more about Spellcode, I guess I should start using this blog for one (or, rather, another) of its intended purposes. Although much of Spellcode’s design is already fairly solidified, I haven’t yet formulated a way to adequately explain most of my individual design decisions to another human, so this first post will be pretty much just a vague overview of what kind of language Spellcode will be.


I suppose I should get the obvious question out of the way first:

What is Spellcode?

Spellcode is a (currently work-in-progress) programming language that I am designing because I feel I’ve seen enough of a variety of languages and paradigms that my personal vision of an ideal language is now fairly solid, potentially solid enough to be realized as an actual, usable language. There is no implementation yet, but my mental picture of Spellcode is clear enough by now that I can describe the language’s major properties and core behavior. For those who love buzzwords, Spellcode:

  • is purely functional (at its core).
  • has full first-class modules.
  • is strongly, statically, and fully dependently typed.
  • is polymorphic over execution models and computational contexts.
  • has full first-class constraint-based subtyping.
  • has controlled and demarcated value-level reification of type-level information. (This allows you to preserve automatic parametricity wherever possible.)
  • has controlled and demarcated totality, partiality, copartiality, and bipartiality.
  • has full lexical, syntactic, and semantic self-alteration capabilities. (This goes way beyond macros!)

Okay, so I made up a few of those words. ;) Still, I think that list accurately describes some (although definitely not all) of Spellcode’s properties and abilities, and the following sections will explore each of the first five points a bit. (The last three will probably each require a full blog post of their own, and I don’t feel ready to condense them completely into words yet anyway.)

Purely Functional Core

There’s not much to say regarding this particular aspect of Spellcode. Most of the people whom I expect to be reading this will already know pretty much what I mean, but just to clarify for those who don’t: Spellcode will have a fairly minimal core language out of which, and on top of which, everything else will be built (i.e., it will need a large standard library to supplement that minimal core). The core language will be purely functional; that is, it will not have any side effects, not even side effects that many self-described purely functional languages don’t consider to be side effects. For example, Spellcode’s core language will refuse to commit itself to a particular evaluation strategy (without you explicitly telling it to); it will refuse to duplicate and/or discard information (again, without you explicitly telling it to); etc., etc., etc. (as the King of Siam would say).
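
Spellcode’s own mechanism for this is its own story, but GHC Haskell’s LinearTypes extension (GHC 9.0 or later) gives a small taste of the “no silent duplication or discarding” half of the idea. This is just an analogy I’m drawing, not Spellcode:

    {-# LANGUAGE LinearTypes #-}

    -- A linear function must use its argument exactly once; rearranging the
    -- pair is fine, but silently duplicating or dropping either component
    -- would be rejected by the type checker.
    swapOnce :: (a, b) %1 -> (b, a)
    swapOnce (x, y) = (y, x)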

First-Class Modules

This one is also a pretty straightforward language feature: In Spellcode, modules will be ordinary values with appropriate types that may be manipulated in arbitrary ways from within the language itself. This gets you parametrized modules, module-based mixins, the cake pattern on steroids, path-dependent values and types, and a whole host of other goodies, completely for free, simply because modules are just like any other value.
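
Spellcode modules won’t look like Haskell, but as a very loose Haskell analogy (the names OrdModule, reverseOrd, and intOrd below are made up purely for illustration), think of a record of operations that is passed around, parametrized, and combined like any other value:

    -- A record of operations standing in for a "module" value.
    data OrdModule a = OrdModule
      { cmp   :: a -> a -> Ordering
      , maxOf :: a -> a -> a
      }

    -- A "parametrized module": it takes one module value and builds another.
    reverseOrd :: OrdModule a -> OrdModule a
    reverseOrd m = OrdModule
      { cmp   = \x y -> cmp m y x
      , maxOf = \x y -> if cmp m y x == LT then y else x  -- max under the flipped order
      }

    -- An ordinary module value for Int.
    intOrd :: OrdModule Int
    intOrd = OrdModule compare max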

Full Dependent Typing

As a dependently typed language, Spellcode erases the barrier between types and values. The only thing special about a type, as compared to any other value, is that it is (potentially) “inhabited” by some other values. In Spellcode, even that isn’t always strictly true! (See the section on Constraint-Based Subtyping.)

Like most other dependently typed languages, Spellcode will allow you to make a function’s result type depend (possibly nontrivially) on its input value (pi types), and it will allow you to make the type of a field in a record depend (again, possibly nontrivially) on the value of some other field in that record (sigma types). If you stick with the guaranteed-total sublanguage, this is about as far as you can go.
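
Spellcode’s concrete syntax for these isn’t something I’m ready to show yet, but GHC Haskell can approximate both forms with singletons and GADTs. In this sketch (my names, not Spellcode’s), the result type of replicateV depends on its input value, and the type of the second field of SomeVec depends on its first field:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat

    -- Singleton naturals: a runtime value that reflects a type-level number.
    data SNat (n :: Nat) where
      SZ :: SNat 'Z
      SS :: SNat n -> SNat ('S n)

    -- Length-indexed vectors.
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- Pi-like: the result type depends on the (singleton-reflected) input value.
    replicateV :: SNat n -> a -> Vec n a
    replicateV SZ     _ = VNil
    replicateV (SS n) x = VCons x (replicateV n x)

    -- Sigma-like: the type of the second field depends on the first field.
    data SomeVec a where
      SomeVec :: SNat n -> Vec n a -> SomeVec a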

However, unlike most other dependently typed languages, if you are willing to venture out into the wacky world of partiality, copartiality, and bipartiality, then Spellcode will also allow you to make the type of a function’s argument arbitrarily dependent on the value of its result! Spellcode will even allow you to produce cyclic dependent references, but, of course, this again comes with the price of leaving the total sublanguage and striking out on your own across the chaotic wasteland of partial programming. :)

Computational-Contextual Polymorphism

There’s much more to this than first meets the eye, but for now, I will say that the primary way I anticipate this being used (at least at first) is to be polymorphic over whether your function will be applied in a strict context or a non-strict one; essentially, you can be polymorphic over whether something is a value or a computation (in something close to the senses used by Paul Levy). This is not, by any means, the only application of this feature, but the general case will most likely have to wait for another day, and a dedicated post of its own, to explain.
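
Spellcode’s version of this will be much more general, but here is a crude Haskell illustration of just the strict-versus-non-strict half (the Context class and its instances are my own invention for this sketch, not Spellcode): a function is written once, and the caller’s context decides how its argument is treated.

    newtype Strict a = Strict a
    newtype Lazy   a = Lazy a

    -- The "context" decides how an argument reaches the function.
    class Context c where
      apply :: (a -> b) -> c a -> b

    instance Context Strict where
      apply f (Strict x) = x `seq` f x   -- force the argument before the call

    instance Context Lazy where
      apply f (Lazy x) = f x             -- hand over the unevaluated thunk

    -- Written once, usable in either context.
    double :: Context c => c Int -> Int
    double = apply (* 2)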

Constraint-Based Subtyping

Ah, constraint-based subtyping! This feature is, in many ways, Spellcode’s raison d’être: to prove by example that integrating full constraint-based subtyping with full dependent typing is not only possible, but workable and, just possibly, even useful.

Like computational-contextual polymorphism, constraint-based subtyping is a deep well of power and flexibility, and it would be impossible for me to summarize all of what I see in it in just a few paragraphs. In lieu of such a complete examination, I will instead describe its core ideas and inspiration, and (for now) let the reader extrapolate from there. :)

The inspiration for constraint-based subtyping came to me when looking at how GHC Haskell handles type classes. No, I don’t mean dictionary passing; I mean how they appear to the programmer. The programmer declares a constraint as part of a type signature, and that constraint potentially allows both the constrained type and the associated implementation to make use of whatever additional information that particular constraint represents. Constraints in Haskell have a subtype relation, although the programmer cannot directly make use of it (without some very odd and generally not-well-supported tricks). The subtype relation on constraints in Haskell is fairly simple: it is essentially the “implies” relation on the things that the constraints (abstractly) represent; e.g., if f is an Applicative then it must also be a Functor, therefore Applicative f implies Functor f, and therefore Applicative f is a subtype (or subconstraint, I guess) of Functor f, for any given f :: * -> *.
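
For instance, in plain GHC Haskell (not Spellcode), a signature constrained by Applicative already grants access to everything Functor provides, precisely because of that implication:

    -- The Applicative constraint implies Functor, so fmap is available
    -- even though Functor is never named in the signature.
    doubleInside :: Applicative f => f Int -> f Int
    doubleInside = fmap (* 2)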

The subtyping relation on constraints in Haskell gives rise to a subtyping relation on constrained types, but in the opposite direction from the original relation. In other words, constrained types are contravariant in their constraints. The first key insight here is that this is a consequence of Haskell constraints being tied to forall; if Haskell had first-class exists, then only universally constrained types would be contravariant in their constraints; existentially constrained types would instead be covariant in their constraints.
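
That contravariance is already observable in GHC Haskell with RankNTypes (the function names below are just for illustration): a value that only assumes Functor is accepted wherever a value assuming Applicative is demanded.

    {-# LANGUAGE RankNTypes #-}

    -- Demands an argument that works for *any* Applicative.
    needsApplicative :: (forall f. Applicative f => f Int -> f Int) -> [Int]
    needsApplicative g = g [1, 2, 3]

    -- Only ever assumes Functor, the weaker constraint.
    onlyFunctor :: forall f. Functor f => f Int -> f Int
    onlyFunctor = fmap (+ 1)

    -- Accepted: Applicative f implies Functor f, so the Functor-constrained
    -- value sits below the Applicative-constrained type in the subtype order.
    example :: [Int]
    example = needsApplicative onlyFunctor   -- [2,3,4]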

This gets extended yet further in Spellcode, with local-based quantification producing constraint-invariant types and arb-based quantification producing constraint-bivariant types. The general form of a quantifier in Spellcode is quant vars where cst in expr, where:

  • where and in are keywords.
  • quant is exactly one of the keywords forall, exists, local, and arb.
  • vars is a (possibly empty) sequence of identifiers to bind as fresh variables within the scope of the quantifier.
  • cst is an arbitrary expression that may use any of the variables introduced in vars, and whose type must be (a subtype of) Constraint i, where i’s type must be (a subtype of) Level.
  • expr is an arbitrary expression that may use any of the variables introduced in vars, and whose type is the type of the whole quantifier expression and must be an inhabitant of Type (succ i), where i is shared with cst.
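
For comparison only (this is my rough Haskell rendering, not Spellcode syntax), the first two quantifiers have familiar GHC counterparts; local and arb have no direct counterpart there:

    {-# LANGUAGE RankNTypes, GADTs #-}

    -- Roughly "forall a where Show a in (a -> String)":
    type ForallShow = forall a. Show a => a -> String

    -- Roughly "exists a where Show a in a": the witness is packaged together
    -- with evidence for its constraint.
    data ExistsShow where
      ExistsShow :: Show a => a -> ExistsShow

    describe :: ForallShow
    describe = show

    someValue :: ExistsShow
    someValue = ExistsShow (42 :: Int)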

There is much more I could write about Spellcode, but it will have to wait until I have properly reformatted it mentally such that it is suitable for translation into English. Questions and comments on what I’ve written so far are very welcome!

15 thoughts on “Introducing Spellcode”

    • forall produces universal quantification
      exists produces existential quantification
      local produces fixed quantification
      arb produces arbitrary quantification

      In very approximate words:

      forall means that the enclosed value uses the introduced variable(s) to refer to every possible (set of) value(s) that meet(s) the constraint(s)
      exists means that the enclosed value uses the introduced variable(s) to refer to at least one (set of) value(s) that meet(s) the constraint(s)
      local means that the enclosed value uses the introduced variable(s) to refer to exactly one (set of) value(s) that meet(s) the constraint(s)
      arb means that the enclosed value uses the introduced variable(s) to refer to unidentifiable value(s) that ignore the constraints entirely

      • Could you say a word or two about the use cases for `local` and `arb`? `local` is intriguing and the “one and exactly one” condition vaguely reminds me of the global uniqueness property of type classes. `arb` seems utterly pointless, but I’m probably missing something (which I never had).

        • `arb` probably seems pointless because it mostly is; the only reason it takes constraints at all is that it completes the lattice. It is a bit like (and, actually, closely related to) the issue of declaration-site variance in languages like Scala, C#, and Ceylon: they all have the options of covariance, contravariance, and invariance, but bivariance (the other side of the lattice from invariance, making the type parameter phantom) is generally ignored. I could just as easily ignore `arb` for the same reason, but I won’t, because I value completing the lattice more than conserving keywords. :)
