Version

This is the fifth major draft (version 2.5) of this document since 2009.
Pull requests are always accepted for changes and additional content. This is a living document. The only way this document will stay up to date is through the kindness of readers like you and community patches and pull requests on Github.
If you’d like a physical copy of the text you can either print it for yourself (see Printable PDF) or purchase one online:
Author
This text is authored by Stephen Diehl.
 Web: www.stephendiehl.com
 Twitter: https://twitter.com/smdiehl
 Github: https://github.com/sdiehl
Special thanks to Erik Aker for copyediting assistance.
License
Copyright © 2009-2020 Stephen Diehl
The code included in the text is dedicated to the public domain. You can copy, modify, distribute and perform the code, even for commercial purposes, all without asking permission.
You may distribute this text in its full form freely, but may not reauthor or sublicense this work. Any reproductions of major portions of the text must include attribution.
The software is provided “as is”, without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the software or the use or other dealings in the software.
What is Haskell?
At its heart Haskell is a lazy, functional, statically-typed programming language with advanced type system features such as higher-rank, higher-kinded parametric polymorphism, monadic effects, generalized algebraic data types, ad-hoc polymorphism through type classes, associated type families, and more. As a programming language, Haskell pushes the frontiers of programming language design more so than any other general purpose language while still remaining practical for everyday use.
Beyond language features, Haskell remains an organic, community-driven effort, run by its user base instead of by corporate influences. While there are some Haskell companies and consultancies, most are fairly small and none have an outsized influence on the development of the language. This is in stark contrast to ecosystems like Java and Go where Oracle and Google dominate all development. In fact, the Haskell community is a synthesis between multiple disciplines of academic computer science and industrial users from large and small firms, all of whom contribute back to the language ecosystem.
Originally, Haskell was borne out of academic research. Designed as an ML dialect, it was initially inspired by an older language called Miranda. In the early 90s, a group of academics formed the GHC committee to pursue building a research vehicle for lazy programming languages as a replacement for Miranda. This was a particularly in-vogue research topic at the time and as a result the committee attracted various talented individuals who initiated the language and ultimately laid the foundation for modern Haskell.
Over the last 30 years Haskell has evolved into a mature ecosystem, with an equally mature compiler. Even so, the language is frequently reimagined by passionate contributors who may be furthering academic research goals or merely contributing out of personal interest. Although laziness was originally the major research goal, this has largely become a quirky artifact that most users of the language are generally uninterested in. In modern times the major themes of the Haskell community are:
 A vehicle for type system research
 Experimentation in the design space of typed effect systems
 Algebraic structures as a method of program synthesis
 Referential transparency as a core language feature
 Embedded domain specific languages
 Experimentation toward practical dependent types
 Stronger encoding of invariants through type-level programming
 Efficient functional compiler design
 Alternative models of parallel and concurrent programming
Although these are the major research goals, Haskell is still a fully general purpose language, and it has been applied in wildly diverse settings from garbage trucks to cryptanalysis for the defense sector and everything in between. With a thriving ecosystem of industrial applications in web development, compiler design, machine learning, financial services, FPGA development, algorithmic trading, numerical computing, cryptography research, and cybersecurity, the language has a lot to offer to any field or software practitioner.
Haskell as an ecosystem is purely organic: it takes decades to evolve, it makes mistakes, and it is not driven by any one ideology or belief about the purpose of functional programming. This makes Haskell programming simultaneously frustrating and exciting; and therein lies the intellectual siren song that has drawn many talented programmers to dabble in this beautiful language at some point in their lives.
See:
How to Read
This is a guide for working software engineers who have an interest in Haskell but don’t know Haskell yet. I presume you know some basics about how your operating system works, the shell, and some fundamentals of other imperative programming languages. If you are a Python or Java software engineer with no Haskell experience, this is the executive summary of Haskell theory and practice for you. We’ll delve into a little theory as needed to explain concepts but no more than necessary. If you’re looking for a purely introductory tutorial, this probably isn’t the right start for you, however this can be read as a companion to other introductory texts.
There is no particular order to this guide, other than the first chapter which describes how to get set up with Haskell and use the foundational compiler and editor tooling. After that you are free to browse the chapters in any order. Most are divided into several sections which outline different concepts, language features or libraries. However, the general arc of this guide bends toward more complex topics in later chapters.
As there is no ordering after the first chapter I will refer to concepts globally without introducing them first. If something doesn’t make sense just skip it and move on. I strongly encourage you to play around with the source code modules for yourself. Haskell cannot be learned from an armchair; instead it can only be mastered by writing a ton of code for yourself. GHC may initially seem like a cruel instructor, but in time most people grow to see it as their friend.
GHC
GHC is the Glorious Glasgow Haskell Compiler. Originally written in 1989, GHC is now the de facto standard for Haskell compilers. A few other compilers have existed along the way, but they either are quite limited or have bit rotted over the years. At this point, GHC is a massive compiler and it supports a wide variety of extensions. It’s also the only reference implementation for the Haskell language and as such, it defines what Haskell the language is by its implementation.
GHC is run at the command line with the command ghc.
GHC’s runtime is written in C. GHC uses machinery from the GCC infrastructure for its native code generator, and can also use LLVM for code generation. GHC is supported on the following architectures:
 Linux x86
 Linux x86_64
 Linux PowerPC
 NetBSD x86
 OpenBSD x86
 FreeBSD x86
 MacOS X Intel
 MacOS X PowerPC
 Windows x86_64
GHC itself depends on the following Linux packages:
 build-essential
 libgmp-dev
 libffi-dev
 libncurses-dev
 libtinfo5
ghcup
There are two major packages that need to be installed to use Haskell:
 ghc
 cabal-install
GHC can be installed on Linux and Mac with ghcup by running the following command:
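The usual bootstrap command is a sketch like the following; verify it against the official ghcup instructions before piping it into a shell:

```bash
curl --proto '=https' --tlsv1.2 -sSf https://get-ghcup.haskell.org | sh
```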
This can be used to manage multiple versions of GHC installed locally.
To select which version of GHC is available on the PATH, use the set command.
This can also be used to install cabal.
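For example, using 8.8.1 as an illustrative version number (the exact spelling of the cabal subcommand has varied between ghcup releases):

```bash
ghcup install 8.8.1   # install a specific GHC version
ghcup set 8.8.1       # make it the active ghc on the PATH
ghcup install cabal   # install cabal-install
```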
To modify your current shell to include ghc and cabal, source the ghcup environment file, or permanently add the following to your .bashrc or .zshrc file:
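Assuming the default ghcup install location, the line is:

```bash
[ -f "$HOME/.ghcup/env" ] && source "$HOME/.ghcup/env"
```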
Package Managers
There are two major Haskell packaging tools: Cabal and Stack. Both take differing views on versioning schemes but can more or less interoperate at the package level. So, why are there two different package managers?
The simplest explanation is that Haskell is an organic ecosystem with no central authority, and as such different groups of people with different ideas and different economic interests about optimal packaging built their own solutions around two different models. The interests of an organic community don’t always result in open source convergence; however, the ecosystem has seen both package managers reach much greater levels of stability as a result of collaboration. In this article, I won’t offer a preference for which system to use: it is left up to the reader to experiment and use the system which best suits your or your company’s needs.
Project Structure
A typical Haskell project hosted on Github or Gitlab will have several executable, test and library components across several subdirectories. Each of these files will correspond to an entry in the Cabal file.
More complex projects consisting of multiple modules will include multiple project directories like those above, but these will be nested in subfolders with a cabal.project or stack.yaml in the root of the repository.
An example Cabal project file, named cabal.project, for the multi-component library repository above would include these lines:
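A minimal sketch, with hypothetical component directories lib-one and lib-two:

```
packages:
  ./lib-one
  ./lib-two
```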
By contrast, an example Stack project file stack.yaml for the above multi-component library repository would be:
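Again a sketch with the same hypothetical directories:

```yaml
resolver: lts-14.22
packages:
  - lib-one
  - lib-two
```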
Cabal
Cabal is the build system for Haskell. Cabal is also the standard build tool for Haskell source supported by GHC. Cabal can be used simultaneously with Stack or standalone with cabal new-build.
To update the package index from Hackage, run:
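```bash
cabal update
```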
To start a new Haskell project, run:
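```bash
cabal init
```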
This will result in a .cabal file being created with the configuration options for our new project.
Cabal can also build dependencies in parallel by passing -j<n>, where n is the number of concurrent builds.
Let’s look at an example .cabal file. There are two main entry points that any package may provide: a library and an executable. Multiple executables can be defined, but only one library. In addition, there is a special form of executable entry point test-suite, which defines an interface for invoking unit tests from cabal.
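A sketch of such a file, with hypothetical package and module names:

```
name:          example
version:       0.1.0.0
build-type:    Simple
cabal-version: >= 1.10

library
  exposed-modules:  Example
  hs-source-dirs:   src
  build-depends:    base >= 4.6 && < 5
  default-language: Haskell2010

executable example
  main-is:          Main.hs
  hs-source-dirs:   app
  build-depends:    base >= 4.6 && < 5, example
  default-language: Haskell2010
```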
For a library, the exposed-modules field in the .cabal file indicates which modules within the package structure will be publicly visible when the package is installed. These modules are the user-facing APIs that we wish to expose to downstream consumers.

For an executable, the main-is field indicates the module that exports the main function responsible for running the executable logic of the application. Every module in the package must be listed in one of the other-modules, exposed-modules or main-is fields.
To run an “executable” under cabal, execute the command:
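With exe:example standing in for the name of your executable component:

```bash
cabal run exe:example
```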
To load the “library” into a GHCi shell under cabal, execute the command:
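With lib:example standing in for the name of your library component:

```bash
cabal repl lib:example
```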
The component name metavariable is either one of the executable or library declarations in the .cabal file and can optionally be disambiguated by the prefix exe: or lib: respectively.
To build the package locally into the ./dist/build folder, execute the build command:
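```bash
cabal build
```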
To run the tests, our package must itself be reconfigured with the --enable-tests flag and the build-depends options. The test-suite must be installed manually, if not already present.
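One way to do this (flag spellings per recent cabal versions):

```bash
cabal configure --enable-tests
cabal test
```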
Moreover, arbitrary shell commands can be invoked with the GHC environment variables set. It is quite common to run a new bash shell with this command such that the ghc and ghci commands use the package environment. This can also run any system executable with the GHC_PACKAGE_PATH variable set to the library’s package database.
The haddock documentation can be generated for the local project by executing the haddock command. The documentation will be built to the ./dist folder.
When we’re finally ready to upload to Hackage (presuming we have a Hackage account set up), we can build the tarball and upload it with the following commands:
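A sketch, with a hypothetical package name and version in the tarball path:

```bash
cabal sdist
cabal upload dist-newstyle/sdist/example-0.1.0.0.tar.gz
```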
The current state of a local build can be frozen with all current package constraints enumerated:
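```bash
cabal freeze
```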
This will create a file cabal.config with the constraint set.
The cabal configuration is stored in $HOME/.cabal/config and contains various options, including credential information for Hackage upload.
A library can also be compiled with runtime profiling information enabled. More on this is discussed in the section on Concurrency and Profiling.
Another common flag to enable is documentation, which forces the local build of Haddock documentation; this can be useful for offline reference. On a Linux filesystem these are built to the /usr/share/doc/ghc-doc/html/libraries/ directory.
Cabal can also be used to install packages globally to the system PATH. For example, to install the parsec package to your system from Hackage, the upstream source of Haskell packages, invoke the install command:
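```bash
cabal install parsec
```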
To download the source for a package, we can use the get command to retrieve the source from Hackage.
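For example, fetching the source of parsec:

```bash
cabal get parsec
```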
Cabal New-Build
The interface for Cabal has seen an overhaul in the last few years and has moved more closely towards the Nix-style of local builds. Under the new system packages are separated into categories:
 Local Packages – Packages are built from a configuration file which specifies a path to a directory with a cabal file. These can be working projects as well as all of its local transitive dependencies.
 External Packages – External packages are packages retrieved from either the public Hackage or a private Hackage repository. These packages are hashed and stored locally in ~/.cabal/store to be reused across builds.
As of Cabal 3.0 the new-build commands are the default build operations. So if you type cabal build using Cabal 3.0 you are already using the new-build system.

Historically these commands were separated into two different command namespaces with the prefixes v1- and v2-, with v1- indicating the old sandbox build system and v2- indicating the new-build system.
The new build commands are listed below:
Cabal also stores all of its build artifacts inside of a dist-newstyle folder in the project working directory. The compilation artifacts fall into several categories:
 .hi – Haskell interface modules which describe the type information, public exports, symbol table, and other module guts of compiled Haskell modules.
 .hie – An extended interface file containing module symbol data.
 .hspp – A Haskell preprocessor file.
 .o – Compiled object files for each module. These are emitted by the native code generator assembler.
 .s – Assembly language source file.
 .bc – Compiled LLVM bytecode file.
 .ll – An LLVM source file.
 cabal_macros.h – Contains all of the preprocessor definitions that are accessible when using the CPP extension.
 cache – Contains all artifacts generated by solving the constraints of packages to set up a build plan. This also contains the hash repository of all external packages.
 packagedb – Database of all of the cabal metadata of all external and local packages needed to build the project. See Package Databases.
These all get stored under the dist-newstyle folder structure, which is organized hierarchically by CPU architecture, then GHC compiler version, and finally package version.
Local Packages
Both Stack and Cabal can handle local packages built from the local filesystem, from remote tarballs, or from remote Git repositories.
Inside of the stack.yaml, simply specify the git repository remote and the hash to pull:
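A sketch, with a hypothetical repository and commit hash:

```yaml
extra-deps:
  - git: https://github.com/example/example-package.git
    commit: 1234567890abcdef1234567890abcdef12345678
```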
In Cabal, to add a remote, create a cabal.project file and add your remote in a source-repository-package section.
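The equivalent sketch for a cabal.project file, with the same hypothetical repository:

```
source-repository-package
  type: git
  location: https://github.com/example/example-package.git
  tag: 1234567890abcdef1234567890abcdef12345678
```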
Version Bounds
All Haskell packages are versioned and the numerical quantities in the version are supposed to follow the Package Versioning Policy.
As packages evolve over time there are three numbers which monotonically increase depending on what has changed in the package.
 Major version number
 Minor version number
 Patch version number
Every library’s cabal file will have a package dependencies section which specifies the external packages the library depends on. It will also contain the allowed versions that it is known to build against in the build-depends section. The convention is to put the upper bound at the next major unreleased version and the lower bound at the currently used version.
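For example (the bounds here are illustrative):

```
build-depends:
  base       >= 4.12 && < 4.14,
  containers >= 0.6  && < 0.7
```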
Individual lines in the version specification can be dependent on other variables in the cabal file.
Version bounds in cabal files can be managed automatically with the tool cabal-bounds, which can automatically generate, update and format cabal files.
See:
Stack
Stack is an alternative approach to Haskell’s package structure that emerged in 2015. Instead of using a rolling build like Cabal, Stack breaks up sets of packages into release blocks that guarantee internal compatibility between sets of packages. The package solver for Stack uses a different strategy for resolving dependencies than cabal-install has historically used, and Stack combines this with a centralised build server called Stackage which continuously tests the set of packages in a resolver to ensure they build against each other.
Install
To install stack on Linux or Mac, run:
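The standard bootstrap script (verify against the official instructions before piping into a shell):

```bash
curl -sSL https://get.haskellstack.org/ | sh
```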
For other operating systems, see the official install directions.
Usage
Once stack is installed, it is possible to set up a build environment on top of your existing project’s cabal file by running:
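```bash
stack init
```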
An example stack.yaml file for GHC 8.8.1 would look like this:
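A sketch; the resolver name is illustrative, so pick one whose snapshot actually ships GHC 8.8.1:

```yaml
resolver: nightly-2019-12-01
packages:
  - .
```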
Most of the common libraries used in everyday development are already in the Stackage repository. The extra-deps field can be used to add Hackage dependencies that are not in the Stackage repository. They are specified by the package name and version. For instance, the zenc package could be added to the stack build in the following way:
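With a hypothetical version number:

```yaml
extra-deps:
  - zenc-0.1.1
```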
The stack command can be used to install packages and executables into either the current build environment or the global environment. For example, the following command installs the executable for hlint, a popular linting tool for Haskell, and places it in the PATH:
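```bash
stack install hlint
```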
To check the set of dependencies, run:
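```bash
stack ls dependencies
```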
Just as with cabal, the build and debug process can be orchestrated using stack commands:
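```bash
stack build   # compile the project
stack test    # run the test suites
stack ghci    # load the project into GHCi
```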
To visualize the dependency graph, use the dot command piped first into graphviz, then piped again into your favorite image viewer:
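A sketch of the pipeline; feh here stands in for any image viewer:

```bash
stack dot --external | dot -Tpng | feh -
```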
Hpack
Hpack is an alternative package description language that uses a structured YAML format to generate Cabal files. Hpack succeeds in DRYing (Don’t Repeat Yourself) several sections of cabal files that are often quite repetitive across large projects. Hpack uses a package.yaml file which is consumed by the command line tool hpack. Hpack can be integrated into Stack and will generate the resulting cabal files whenever stack build is invoked on a project using a package.yaml file. The output cabal file contains a hash of the input yaml file for consistency checking.
A small package.yaml file might look something like the following:
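A sketch with hypothetical package metadata:

```yaml
name: example
version: 0.1.0
dependencies:
  - base >= 4.9 && < 5

library:
  source-dirs: src

executables:
  example:
    main: Main.hs
    source-dirs: app
    dependencies:
      - example
```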
Base
GHC itself ships with a variety of core libraries that are loaded into all Haskell projects. The most foundational of these is base, which forms the foundation for all Haskell code. The base library is split across several modules:
 Prelude – The default namespace imported in every module
 Data – The simple data structures wired into the language
 Control – Control flow functions
 Foreign – Foreign function interface
 Numeric – Numerical tower and arithmetic operations
 System – System operations for Linux/Mac/Windows
 Text – Basic string types
 Type – Type-level operations
 GHC – GHC internals
 Debug – Debug functions
 Unsafe – Unsafe “backdoor” operations
There have been several large changes to base over the years which have resulted in breaking changes, meaning older versions of base are not compatible with newer versions.
 Applicative Monad Proposal (AMP)
 MonadFail Proposal (MFP)
 Semigroup Monoid Proposal (SMP)
Prelude
The Prelude is the default standard module. The Prelude is imported by default into all Haskell modules unless either there is an explicit import statement for it, or the NoImplicitPrelude extension is enabled.
The Prelude exports several hundred symbols that are the default datatypes and functions for libraries that use the GHC-issued prelude. Although the Prelude is the default import, many libraries these days do not use the standard prelude, instead choosing to roll a custom one on a per-project basis or to use an off-the-shelf prelude from Hackage.
The Prelude contains common datatypes and classes such as List, Monad, Maybe and most associated functions for manipulating these structures. These are the most foundational programming constructs in Haskell.
Modern Haskell
There are two official language standards:
 Haskell98
 Haskell2010
And then there is what is colloquially referred to as Modern Haskell which is not an official language standard, but an ambiguous term to denote the emerging way most Haskellers program with new versions of GHC. The language features typically included in modern Haskell are not welldefined and will vary between programmers. For instance, some programmers prefer to stay quite close to the Haskell2010 standard and only include a few extensions while some go all out and attempt to do full dependent types in Haskell.
By contrast, the type of programming described by the phrase Modern Haskell typically utilizes some type-level programming, as well as flexible typeclasses, and a handful of language extensions.
Flags
GHC has a wide variety of flags that can be passed to configure different behavior in the compiler. Enabling GHC compiler flags grants the user more control in detecting common code errors. The most frequently used flags are:
 -fwarn-tabs – Emit warnings of tabs instead of spaces in the source code
 -fwarn-unused-imports – Warn about libraries imported without being used
 -fwarn-name-shadowing – Warn on duplicate names in nested bindings
 -fwarn-incomplete-uni-patterns – Emit warnings for incomplete patterns in lambdas or pattern bindings
 -fwarn-incomplete-patterns – Warn on non-exhaustive patterns
 -fwarn-overlapping-patterns – Warn on pattern matching branches that overlap
 -fwarn-incomplete-record-updates – Warn when records are not instantiated with all fields
 -fdefer-type-errors – Turn type errors into warnings
 -fwarn-missing-signatures – Warn about top-level missing type signatures
 -fwarn-monomorphism-restriction – Warn when the monomorphism restriction is applied implicitly
 -fwarn-orphans – Warn on orphan typeclass instances
 -fforce-recomp – Force recompilation regardless of timestamps
 -fno-code – Omit code generation, just parse and typecheck
 -fobject-code – Generate object code
Like most compilers, GHC takes the -Wall flag to enable all warnings. However, a few of the enabled warnings are highly verbose. For example, -fwarn-unused-do-bind and -fwarn-unused-matches typically would not correspond to errors or failures.
Any of these flags can be added to the ghc-options section of a project’s .cabal file. For example:
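A sketch of such a stanza entry:

```
library
  ghc-options: -Wall -fwarn-tabs -fwarn-incomplete-uni-patterns
```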
The flags described above are simply the most useful. See the official reference for the complete set of GHC’s supported flags.
For information on debugging GHC internals, see the commentary on GHC internals.
Hackage
Hackage is the upstream source of open source Haskell packages. With Haskell’s continuing evolution, Hackage has become many things to developers, but there seem to be two dominant philosophies of uploaded libraries.
A Repository for Production Libraries
In the first philosophy, libraries exist as reliable, communitysupported building blocks for constructing higher level functionality on top of a common, stable edifice. In development communities where this method is the dominant philosophy, the authors of libraries have written them as a means of packaging up their understanding of a problem domain so that others can build on their understanding and expertise.
An Experimental Playground
In contrast to the previous method of packaging, a common philosophy in the Haskell community is that Hackage is a place to upload experimental libraries as a means of getting community feedback and making the code publicly available. Library authors often rationalize putting these kinds of libraries up without documentation, often without indication of what the library actually does or how it works. This unfortunately means a lot of Hackage namespace has become polluted with deadend, bitrotting code. Sometimes packages are also uploaded purely for internal use within an organisation, or to accompany an academic paper. These packages are often left undocumented as well.
For developers coming to Haskell from other language ecosystems that favor the former philosophy (e.g., Python, JavaScript, Ruby), seeing thousands of libraries without the slightest hint of documentation or description of purpose can be unnerving. It is an open question whether the current cultural state of Hackage is sustainable in light of these philosophical differences.
Needless to say, there is a lot of very lowquality Haskell code and documentation out there today, so being conservative in library assessment is a necessary skill. That said, there are also quite a few phenomenal libraries on Hackage that are highly curated by many people.
As a general rule, if the Haddock documentation for the library does not have a minimal working example, it is usually safe to assume that it is an RFCstyle library and probably should be avoided for production code.
There are several heuristics you can use to answer the question “Should I use this Hackage library?”:
 Check the Uploaded field to see if the author has updated it in the last five years.
 Check the Maintainer email address; if the author has an academic email address and has not uploaded a package in two or more years, it is safe to assume that this is a thesis project and probably should not be used industrially.
 Check the Modules to see if the author has included top-level Haddock docstrings. If the author has not included any documentation then the library is likely of low quality and should not be used industrially.
 Check the Dependencies for the bound on the base package. If it doesn’t include the latest base included with the latest version of GHC then the code is likely not actively maintained.
 Check the reverse Hackage search to see if the package is used by other libraries in the ecosystem. For example: https://packdeps.haskellers.com/reverse/QuickCheck
An example of a bitrotted package:
https://hackage.haskell.org/package/numericquest
An example of a well maintained package:
https://hackage.haskell.org/package/QuickCheck
Stackage
Stackage is an alternative opt-in packaging repository which mirrors a subset of Hackage. Packages that are included in Stackage are built in a massive continuous integration process that checks to see that given versions link successfully against each other. This can give a higher degree of assurance that the bounds of a given resolver ensure compatibility.
Stackage releases are built nightly and there are also long-term stable (LTS) releases. Nightly resolvers follow a date convention while LTS releases have a major and minor version. For example:
 lts-14.22
 nightly-2020-01-30
See:
GHCi
GHCi is the interactive shell for the GHC compiler. GHCi is where we will spend most of our time in everyday development. Following is a table of useful GHCi commands.
 :reload / :r – Code reload
 :type / :t – Type inspection
 :kind / :k – Kind inspection
 :info / :i – Information
 :print / :p – Print the expression
 :edit / :e – Load file in system editor
 :load / :l – Set the active Main module in the REPL
 :module / :m – Add modules to imports
 :add / :ad – Load a file into the REPL namespace
 :instances / :in – Show instances of a typeclass
 :browse / :bro – Browse all available symbols in the REPL namespace
The introspection commands are an essential part of debugging and interacting with Haskell code:
Querying the current state of the global environment in the shell is also possible. For example, to view module-level bindings and types in GHCi, run:
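```
:show bindings
```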
To examine module-level imports, execute:
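```
:show imports
```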
Language extensions can be set at the repl.
To see compiler-level flags and pragmas, use:
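```
:set
:show language
```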
Language extensions and compiler pragmas can be set at the prompt in the same way. See the Flag Reference for the vast collection of compiler flag options.
Several commands for the interactive shell have shortcuts:
 +t – Show types of evaluated expressions
 +s – Show timing and memory usage
 +m – Enable multiline expressions delimited by :{ and :}
.ghci.conf
The GHCi shell can be customized globally by defining a configuration file ghci.conf in $HOME/.ghc/ or in the current working directory as ./.ghci.conf.
For example, we can add a command to use the Hoogle type search from within GHCi. First, install hoogle:
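```bash
cabal install hoogle
```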
Then, we can enable the search functionality by adding a command to our ghci.conf:
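A sketch of such a macro; adjust the hoogle invocation to taste:

```
:def hoogle \s -> return $ ":! hoogle --count=15 \"" ++ s ++ "\""
```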
It is common community tradition to set the prompt to a colored λ:
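One common incantation (the escape codes select an orange color):

```
:set prompt "\ESC[38;5;208m\STXλ>\ESC[m\STX "
```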
GHC can also be coerced into giving slightly better error messages:
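For example (these flags exist in recent GHC releases; check your version’s flag reference):

```
:set -ferror-spans -freverse-errors
```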
GHCi can also use a pretty printing library to format all output, which is often much easier to read. For example, if your project is already using the amazing pretty-simple library, simply include the following line in your ghci configuration:
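Assuming pretty-simple is available in the package environment:

```
:set -interactive-print=Text.Pretty.Simple.pPrint
```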
And the default prelude can also be disabled and swapped for something more sensible:
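A sketch; Protolude here is just one example of an alternative prelude:

```
:seti -XNoImplicitPrelude
import Protolude
```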
GHCi Performance
For large projects, GHCi with the default flags can use quite a bit of memory and take a long time to compile. To speed compilation by keeping artifacts for compiled modules around, we can enable object code compilation instead of bytecode.
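```
:set -fobject-code
```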
Enabling object code compilation may complicate type inference, since type information provided to the shell can sometimes be less informative than source-loaded code. This underspecificity can result in breakage with some language extensions. In that case, you can temporarily re-enable bytecode compilation on a per-module basis with the -fbyte-code flag.
If all you need is to typecheck your code in the interactive shell, then disabling code generation entirely makes reloading code almost instantaneous:
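```
:set -fno-code
```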
Editor Integration
Haskell has a variety of editor tools that can be used to provide interactive development feedback and functionality such as querying types of subexpressions, linting, type checking, and code completion. These are largely provided by the haskell-ide-engine, which serves as an editor-agnostic backend that interfaces with GHC and Cabal to query code.
Vim
Emacs
VSCode
Linux Packages
There are several upstream Linux packages which are released by GHC development. The key ones of note for Linux are:
For scripts and operations tools, it is common to include commands to add the following apt repositories, and then use these to install the signed GHC and cabal-install binaries (if using Cabal as the primary build system).
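A sketch using the hvr PPA; the repository and version numbers are illustrative, so verify them against current instructions:

```bash
sudo add-apt-repository -y ppa:hvr/ghc
sudo apt-get update
sudo apt-get install -y ghc-8.8.1 cabal-install-3.0
```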
It is not advisable to use a Linux system package manager to manage Haskell dependencies. Although this can be done, in practice it is better to use Cabal or Stack to create locally isolated builds to avoid incompatibilities.
Names
Names in Haskell exist within a specific namespace. Names are either unqualified of the form:
Or qualified by the module where they come from, such as:
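For example, the same function referred to both ways, using nub from Data.List:

```haskell
import Data.List (nub)  -- unqualified name
import qualified Data.List

main :: IO ()
main = print (nub [1, 1, 2 :: Int], Data.List.nub [1, 1, 2 :: Int])
```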
The major namespaces are described below with their naming conventions:
 Modules – Uppercase
 Functions – Lowercase
 Variables – Lowercase
 Type Variables – Lowercase
 Datatypes – Uppercase
 Constructors – Uppercase
 Typeclasses – Uppercase
 Synonyms – Uppercase
 Type Families – Uppercase
Modules
A module consists of a set of imports and exports and when compiled generates an interface which is linked against other Haskell modules. A module may reexport symbols from other modules.
Module dependency graphs may optionally be cyclic (i.e. modules may import symbols from each other) through the use of a boot file, but this is often best avoided if at all possible.
Various module import strategies exist. For instance, we may:
Import all symbols into the local namespace.
Import select symbols into the local namespace:
Import into the global namespace masking a symbol:
Import symbols qualified under the Data.Map namespace into the local namespace.
Import symbols qualified and reassigned to a custom namespace (M, in the example below):
You may also dump multiple modules into the same namespace so long as the symbols do not clash:
A main module is a special module which reserves the name Main and has a mandatory export of type IO () which is invoked when the executable is run. This is the entry point from the system into a Haskell program.
Functions
Functions are the central construction in Haskell. A function f
of two arguments x
and y
can be defined in a single line as the left-hand and right-hand side of an equation:
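A minimal sketch of such a definition:

```haskell
-- A function f of two arguments x and y which adds them.
f x y = x + y
```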
This line defines a function named f
of two arguments, which on the righthand side adds and yields the result. Central to the idea of functional programming is that computational functions should behave like mathematical functions. For instance, consider this mathematical definition of the above Haskell function, which, aside from the parentheses, looks the same:
f(x, y) = x + y
In Haskell, a function of two arguments need not necessarily be applied to two arguments. The result of applying only the first argument is to yield another function to which later the second argument can be applied. For example, we can define an add function and subsequently a single-argument inc function, by merely pre-applying 1 to add:
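A sketch of these definitions:

```haskell
add x y = x + y

-- inc is add partially applied to 1.
inc = add 1
```

Here `inc 10` evaluates to `11`.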
In addition to named functions Haskell also has anonymous lambda functions denoted with a backslash. For example the identity function:
Is identical to:
Functions may also take other functions as arguments, a feature known as higher-order functions. For example, the following function applies a given argument f, which is itself a function, to a value x twice.
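A sketch of such a function (here named applyTwice, as it will be again later):

```haskell
-- Applies the function f to x, and then applies f again to that result.
applyTwice :: (a -> a) -> a -> a
applyTwice f x = f (f x)
```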
Types
Typed functional programming is essential to the modern Haskell paradigm. But what are types precisely?
The syntax of a programming language is described by the constructs that define its types, and its semantics are described by the interactions among those constructs. A type system overlays additional structure on top of the syntax that imposes constraints on the formation of expressions based on the context in which they occur.
Dynamic programming languages associate types with values at evaluation, whereas statically typed languages associate types to expressions before evaluation. Dynamic languages are in a sense as statically typed as static languages, however they have a degenerate type system with only one type.
The dominant philosophy in functional programming is to “make invalid states unrepresentable” at compile-time, rather than performing massive amounts of runtime checks. To this end Haskell has developed a rich type system that is based on a typed lambda calculus known as Girard’s System F (see Rank-N Types) and has incrementally added extensions to support more type-level programming over the years.
The following ground types are quite common:
() – The unit type
Char – A single unicode character (“code point”)
Text – Unicode strings
Bool – Boolean values
Int – Machine integers
Integer – GMP arbitrary precision integers
Float – Machine floating point values
Double – Machine double precision floating point values
Parameterised types consist of a type and several type parameters indicated as lower case type variables. These are associated with common data structures such as lists and tuples.
[a] – Homogeneous lists with elements of type a
(a,b) – Tuple with two elements of types a and b
(a,b,c) – Tuple with three elements of types a, b, and c
The type system grows quite a bit from here, but these are the foundational types you’ll first encounter. See the later chapters for all sorts of advanced features that can be optionally turned on.
This tutorial will only cover a small amount of the theory of type systems. For a more thorough treatment of the subject there are two canonical texts:
 Pierce, B. C., & Benjamin, C. (2002). Types and Programming Languages. MIT Press.
 Harper, R. (2016). Practical Foundations for Programming Languages. Cambridge University Press.
Type Signatures
A top-level Haskell function consists of two lines. The value-level definition is the function name, followed by its arguments, on the left-hand side of the equals sign, and then the function body which computes the value it yields on the right-hand side:
The type-level definition is the function name followed by the types of its arguments separated by arrows, and the final term is the type of the entire function body, meaning the type of value yielded by the function itself.
Here is a simple example of a function which adds two integers.
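A sketch, with the signature fixed to Integer:

```haskell
-- Add two integers; the last term of the signature is the result type.
add :: Integer -> Integer -> Integer
add x y = x + y
```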
Functions are also capable of invoking other functions inside of their function bodies:
The simplest function, called the identity function, is a function which takes a single value and simply returns it back. This is an example of a polymorphic function since it can handle values of any type. The identity function works just as well over strings as over integers.
This can alternatively be written in terms of an anonymous lambda function, which is a backslash followed by a space-separated list of arguments, followed by a function body.
One of the big ideas in functional programming is that functions are themselves first class values which can be passed to other functions as arguments. For example the applyTwice function takes an argument f which is of type (a -> a) and applies that function over a given value x twice, yielding the result. applyTwice is a higher-order function which will transform one function into another function.
Often to the left of a type signature you will see a big arrow =>
which denotes a set of constraints over the type signature. Each of these constraints will be in uppercase and will normally mention at least one of the type variables on the right hand side of the arrow. These constraints can mean many things but in the simplest form they denote that a type variable must have an implementation of a type class. The add
function below operates over any two similar values x
and y
, but these values must have a numerical interface for adding them together.
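A sketch of this constrained add function:

```haskell
-- The Num constraint requires that a has a numerical interface,
-- so (+) is available for values of type a.
add :: Num a => a -> a -> a
add x y = x + y
```

The same definition now works for both integral and floating point arguments.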
Type signatures can also appear at the value level in the form of explicit type signatures which are denoted in parentheses.
These are sometimes needed to provide additional hints to the typechecker when specific terms are ambiguous, or when additional language extensions have been enabled which don’t have precise inference methods for deducing all type variables.
Currying
In other languages functions normally have an arity which prescribes the number of arguments a function can take. Some languages have fixed arity (like Fortran), while others have flexible arity (like Python) where a variable number of arguments can be passed. Haskell follows a very simple rule: all functions in Haskell take a single argument. For multi-argument functions (some of which we’ve already seen), arguments will be individually applied until the function is saturated and the function body is evaluated.
For example, the add function from above can be partially applied to produce an add1 function:
Uncurrying is the process of taking a function which takes two arguments and transforming it into a function which takes a tuple of arguments. The Haskell prelude includes both a curry and an uncurry function for transforming functions into those that take multiple arguments from those that take a tuple of arguments and vice versa:
For example, uncurry applied to the add function creates a function that takes a tuple of integers:
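For instance (a sketch, using a hypothetical addTuple name):

```haskell
add :: Int -> Int -> Int
add x y = x + y

-- uncurry converts the curried add into a function over a tuple.
addTuple :: (Int, Int) -> Int
addTuple = uncurry add
```

Applying `curry` to `addTuple` recovers a function equivalent to the original `add`.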
Algebraic Datatypes
Custom datatypes in Haskell are defined with the data keyword followed by the type name, its parameters, and then a set of constructors. The possible constructors are either sum types or product types. All datatypes in Haskell can be expressed as sums of products. A sum type is a set of options that is delimited by a pipe.
A datatype can only ever be inhabited by a single value from a sum type, and intuitively models a set of “options” a value may take. A product type, by contrast, is a combination of a set of typed values, potentially named by record fields. For example the following are two definitions of a Point product type, the latter with two fields x and y.
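A sketch of these two definitions (the record variant is renamed Point' here so both can coexist in one module):

```haskell
-- Positional product type.
data Point = Point Double Double

-- The same shape with named record fields x and y.
data Point' = Point' { x :: Double, y :: Double }
```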
As another example: A deck of common playing cards could be modeled by the following set of product and sum types:
A record type can use these custom datatypes to define all the parameters that define an individual playing card.
Some example values:
The problem with the definition of this datatype is that it admits several values which are malformed. For instance it is possible to instantiate a Card with the suit Hearts but with the color Black, which is an invalid value. The convention for preventing these kinds of values in Haskell is to limit the export of constructors in a module and only provide a limited set of functions which the module exports, which can enforce these constraints. These are called smart constructors and are an extremely common pattern in Haskell library design. For example we can export functions for building up specific suit cards that enforce the color invariant.
Datatypes may also be recursive, in the sense that they can contain themselves as fields. The most common example is a linked list which can be defined recursively as either an empty list or a value linked to a potentially nested version of itself.
An example value would look like:
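A sketch of such a recursive list type and an example value:

```haskell
data List a
  = Nil              -- the empty list
  | Cons a (List a)  -- a value linked to the rest of the list

-- The list [1,2,3] written with explicit constructors.
example :: List Int
example = Cons 1 (Cons 2 (Cons 3 Nil))
```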
Constructors for datatypes can also be defined as infix symbols. This is somewhat rare, but is sometimes used in more math heavy libraries. For example the constructor for our list type could be defined as the infix operator :+:
. When the value is printed using a Show instance, the operator will be printed in infix form.
Lists
Linked lists or cons lists are a fundamental data structure in functional programming. GHC has built-in syntactic sugar in the form of list syntax which allows us to write lists that expand into explicit invocations of the cons operator (:). The operator is right associative and an example is shown below:
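For instance (a sketch):

```haskell
-- List syntax desugars into right-associated applications of (:)
-- terminated by the empty list [].
xs :: [Int]
xs = [1, 2, 3]

ys :: [Int]
ys = 1 : 2 : 3 : []
```

Both definitions denote the same list.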
This syntax also extends to the type-level, where lists are represented as brackets around the type of values they contain.
The cons operator itself has a type signature which takes a head element as its first argument and a tail as its second.
The Data.List
module from the standard Prelude defines a variety of utility functions for operations over linked lists. For example the length
function returns the integral length of the number of elements in the linked list.
While the take
function extracts a fixed number of elements from the list.
Both of these functions are pure and return a new list without modifying the underlying list passed as an argument.
Another function iterate
is an example of a function which returns an infinite list. It takes as its first argument a function and then repeatedly applies that function to produce a new element of the linked list.
Consuming these infinite lists can be used as a control flow construct to build loops. For example, instead of writing an explicit loop as we would in other programming languages, we construct a function which generates list elements. For example, producing a list of subsequent powers of two:
We can then use the take
function to evaluate this lazy stream to a desired depth.
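A sketch of this pattern (the name powersOfTwo is hypothetical):

```haskell
-- An infinite list of successive powers of two, built with iterate.
powersOfTwo :: [Integer]
powersOfTwo = iterate (2 *) 1
```

Laziness means only the demanded prefix is ever computed, e.g. `take 6 powersOfTwo` yields `[1,2,4,8,16,32]`.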
An equivalent loop in an imperative language would look like the following.
Pattern Matching
To unpack an algebraic datatype and extract its fields we’ll use a built-in language construct known as pattern matching. This is denoted by the case syntax and scrutinizes a specific value. A case expression will then be followed by a sequence of matches which consist of a pattern on the left and an arbitrary expression on the right. The left patterns will all consist of constructors for the type of the scrutinized value and should enumerate all possible constructors. For product type patterns that are scrutinized, a sequence of variables binds the fields associated with their positional locations in the constructor. The types of all expressions on the right hand side of the matches must be identical.
Pattern matches can be written in explicit case statements or in top-level function declarations. The latter simply expands into the former in the desugaring phase of the compiler.
Following on the playing card example in the previous section, we could use a pattern to produce a function which scores the face value of a playing card.
And we can use a double pattern match to produce a function which determines which suit trumps another suit. For example we can introduce an order of suits of cards where the ranking of cards proceeds (Clubs, Diamonds, Hearts, Spades). A _ underscore used inside a pattern indicates a wildcard pattern and matches against any constructor given. This should be the last pattern used in a match list.
And finally we can write a function which determines whether one card beats another in terms of the two pattern matching functions above. The following pattern match brings the values of the record into the scope of the function body, assigning them to the names specified in the pattern syntax.
Functions may also invoke themselves. This is known as recursion. This is quite common in pattern matching definitions which recursively tear down or build up data structures. This kind of pattern is one of the defining modes of programming functionally.
The following two recursive pattern matches are desugared forms of each other:
Pattern matching on lists is also an extremely common pattern. It has special pattern syntax and the tail variable is typically pluralized. In the following x
denotes the head variable and xs
denotes the tail. For example the following function traverses a list of integers and adds (+1)
to each value.
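A sketch of such a function:

```haskell
-- Recursively walk the list, adding one to each head element,
-- until the empty list pattern terminates the recursion.
addOne :: [Int] -> [Int]
addOne (x:xs) = (x + 1) : addOne xs
addOne []     = []
```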
Guards
Guard statements are expressions that evaluate to boolean values and can be used to restrict pattern matches. These occur in pattern match statements at the top-level, with pipe syntax on the left indicating the guard condition. The special otherwise condition is just a renaming of the boolean value True exported from the Prelude.
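For example (a sketch with a hypothetical classify function):

```haskell
classify :: Int -> String
classify n
  | n < 0     = "negative"   -- guard condition on the left of =
  | n == 0    = "zero"
  | otherwise = "positive"   -- otherwise is just True
```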
Guards can also occur in pattern case expressions.
Operators and Sections
An operator is a function that can be applied using infix syntax or partially applied using a section. Operators can be defined to use any combination of the special ASCII symbols or any unicode symbol.
! # $ % &amp; * + . / < = > ? @ \ ^ | - ~ :
The following are reserved syntax and cannot be overloaded:

.. : :: = \ | <- -> @ ~ =>
Operators belong to one of three fixity classes:
 Infix – Placed between expressions
 Prefix – Placed before expressions
 Postfix – Placed after expressions. See Postfix Operators.
Expressions involving infix operators are disambiguated by the operator’s fixity and precedence. Infix operators are either left or right associative. Associativity determines how operators of the same precedence are grouped in the absence of parentheses.
Precedence and associativity are denoted by fixity declarations for the operator using infixr, infixl, or infix. The standard operators defined in the Prelude have the following precedence table.
infixr 9 .
infixr 8 ^, ^^, **
infixl 7 *, /, `quot`, `rem`, `div`, `mod`
infixl 6 +, -
infixr 5 ++
infix 4 ==, /=, <, <=, >=, >
infixr 3 &&
infixr 2 ||
infixl 1 >>, >>=
infixr 0 $, `seq`
Sections are written as ( op e )
or ( e op )
. For example:
Operators written within enclosed parens are applied like traditional functions. For example the following are equivalent:
Tuples
Tuples are heterogeneous structures which contain a fixed number of values. Some simple examples are shown below:
For two-tuples there are two functions, fst and snd, which extract the left and right values respectively.
GHC supports tuples to size 62.
Where & Let Clauses
Haskell syntax contains two different types of declaration syntax: let
and where
. A let binding is an expression and binds anywhere in its body. For example the following let binding declares x
and y
in the expression x+y
.
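A sketch of such a let binding:

```haskell
f :: Int
f =
  let x = 1
      y = 2
  in x + y
```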
A where binding is a top-level syntax construct (i.e. not an expression) that binds variables at the end of a function. For example the following binds x
and y
in the function body of f
which is x+y
.
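A sketch of such a where binding:

```haskell
f :: Int
f = x + y
  where
    x = 1
    y = 2
```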
Where clauses follow the Haskell layout rule, in which definitions can be listed on new lines so long as the definitions have greater indentation than the first top-level definition they are bound to.
Conditionals
Haskell has built-in syntax for scrutinizing boolean expressions. These are first class expressions known as if statements. An if statement is of the form if cond then trueValue else falseValue. Both the then and the else branches must be present.
If statements are just syntactic sugar for case
expressions over boolean values. The following example is equivalent to the above example.
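A sketch of this equivalence (using a hypothetical absolute function):

```haskell
absolute :: Int -> Int
absolute n = if n > 0 then n else negate n

-- The same function written as a case expression over the boolean.
absolute' :: Int -> Int
absolute' n = case n > 0 of
  True  -> n
  False -> negate n
```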
Function Composition
Functions are obviously at the heart of functional programming. In mathematics, function composition is an operation which takes two functions and produces a new function that applies the second function to its argument and then applies the first function to that result. This is written in mathematical notation as:
g ∘ f
The two functions operate over a domain. For example X, Y and Z.
f : X → Y
g : Y → Z
Or in Haskell notation:
Composition operation results in a new function:
g ∘ f : X → Z
In Haskell this operation is given a special infix operator to appear similar to the mathematical notation. Intuitively it takes two functions, of types b -> c and a -> b, and composes them together to produce a new function. This is the canonical example of a higher-order function.
Haskell code will liberally use this operator to compose chains of functions. For example the following composes a chain of list processing functions sort
, filter
and map
:
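A sketch of such a pipeline (the name process is hypothetical):

```haskell
import Data.List (sort)

-- Reading right to left: sort the list, keep only the even
-- values, then add one to each remaining element.
process :: [Int] -> [Int]
process = map (+1) . filter even . sort
```

For example `process [3,2,5,4]` yields `[3,5]`.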
Another common higherorder function is the flip
function which takes as its first argument a function of two arguments, and reverses the order of these two arguments returning a new function.
The most common operator in all of Haskell is function application operator $
. This function is right associative and takes the entire expression on the right hand side of the operator and applies it to function on the left.
This is quite often used in the pattern where the left hand side is a composition of other functions applied to a single argument. This is common in the point-free style of programming, which attempts to minimize the number of named input arguments in favour of pure higher order function composition. The flipped form of this function does the opposite: it is left associative, and applies the entire left hand side expression to a function given as the second argument.
For comparison consider the use of $
, &
and explicit parentheses.
The on function takes a binary function b and a unary function u, and yields the result of applying b to u x and u y for two arguments x and y. This is a higher order function that transforms two inputs and combines the outputs.
This is used quite often in sort functions. For example we can write a custom sort function which sorts a list of lists based on length.
λ: import Data.List
λ: sortSize = sortBy (compare `on` length)
λ: sortSize [[1,2], [1,2,3], [1]]
[[1],[1,2],[1,2,3]]
List Comprehensions
List comprehensions are a syntactic construct, popularized by the Haskell language, that has since spread to other programming languages. List comprehensions provide a simple way of working with lists and sequences of values that follow patterns. List comprehension syntax consists of three components:
 Generators – Expressions which evaluate a list of values which are iteratively added to the result.
 Let bindings – Expressions which generate a constant value which is scoped on each iteration.
 Guards – Expressions which generate a boolean expression which determines whether an iteration is added to the result.
The simplest generator is simply a list itself. The following example produces a list of integral values, each element multiplied by two.
We can extend this by adding a let statement which generalizes the multiplier on each step and binds it to a variable n
.
And we can also restrict the set of resulting values to only the subset of values of x
that meet a condition. In this case we restrict to only values of x
which are odd.
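A sketch combining a generator and a guard (the name doubledOdds is hypothetical):

```haskell
-- Doubles of the odd elements drawn from [1..10]; the guard
-- (odd x) filters which generator values reach the result.
doubledOdds :: [Int]
doubledOdds = [2 * x | x <- [1 .. 10], odd x]
```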
Comprehensions with multiple generators will combine each generator pairwise to produce the cartesian product of all results.
λ: [(x,y) | x <- [1,2,3], y <- [10,20,30]]
[(1,10),(1,20),(1,30),(2,10),(2,20),(2,30),(3,10),(3,20),(3,30)]
λ: [(x,y,z) | x <- [1,2], y <- [10,20], z <- [100,200]]
[(1,10,100),(1,10,200),(1,20,100),(1,20,200),(2,10,100),(2,10,200),(2,20,100),(2,20,200)]
Haskell has builtin comprehension syntax which is syntactic sugar for specific methods of the Enum
typeclass.
[ e1.. ]        enumFrom e1
[ e1,e2.. ]     enumFromThen e1 e2
[ e1..e3 ]      enumFromTo e1 e3
[ e1,e2..e3 ]   enumFromThenTo e1 e2 e3
There is an Enum
instance for Integer
and Char
types and so we can write list comprehensions for both, which generate ranges of values.
λ: [1 .. 15]
[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]
λ: ['a' .. 'z']
"abcdefghijklmnopqrstuvwxyz"
λ: [1,3 .. 15]
[1,3,5,7,9,11,13,15]
λ: [0,50..500]
[0,50,100,150,200,250,300,350,400,450,500]
These comprehensions can be used inside of function definitions and reference locally bound variables. For example the factorial
function (written as n!) is defined as the product of all positive integers up to a given value.
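One possible definition:

```haskell
-- n! as the product of the enumerated range [1..n].
factorial :: Integer -> Integer
factorial n = product [1 .. n]
```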
As a more complex example consider a naive prime number sieve:
As an even more complex example, consider the classic FizzBuzz interview question. This makes use of iteration and guard statements.
Comments
Single line comments begin with double dashes --:
Multiline comments begin with {- and end with -}.
Comments may also add additional structure in the form of Haddock docstrings. These comments will begin with a pipe.
Modules may also have a comment convention which describes the individual authors, copyright and stability information in the following form:
Typeclasses
Typeclasses are one of the core abstractions in Haskell. Just as we wrote polymorphic functions above which operate over all given types (the id
function is one example), we can use typeclasses to provide a form of bounded polymorphism which constrains type variables to a subset of those types that implement a given class.
For example we can define an equality class which allows us to define an overloaded notion of equality depending on the data structure provided.
Then we can define this typeclass over several different types. These definitions are called typeclass instances. For example for the Bool
type the equality typeclass would be defined as:
Over the unit type, where only a single value exists, the instance is trivial:
For the Ordering type, defined as:
We would have the following Equal instance:
An Equal instance for a more complex data structure like the list type relies upon the fact that the type of the elements in the list must also have a notion of equality, so we include this as a constraint in the typeclass context, which is written to the left of the fat arrow =>
. With this constraint in place, we can write this instance recursively by pattern matching on the list elements and checking for equality all the way down the spine of the list:
In the above definition, we know that we can check for equality between individual list elements if those list elements satisfy the Equal constraint. Knowing that they do, we can then check for equality between two complete lists.
For tuples, we will also include the Equal constraint for their elements, and we can then check each element for equality respectively. Note that this instance includes two constraints in the context of the typeclass, requiring that both type variables a
and b
must also have an Equal instance.
The default prelude comes with a variety of typeclasses that are used frequently and defined over many prelude types:
 Num  Provides a basic numerical interface for values with addition, multiplication, subtraction, and negation.
 Eq  Provides an interface for values that can be tested for equality.
 Ord  Provides an interface for values that have a total ordering.
 Read  Provides an interface for values that can be read from a string.
 Show  Provides an interface for values that can be printed to a string.
 Enum  Provides an interface for values that are enumerable to integers.
 Semigroup  Provides an algebraic semigroup interface.
 Functor  Provides an algebraic functor interface. See Functors.
 Monad  Provides an algebraic monad interface. See Monads.
 Category  Provides an algebraic category interface. See Categories.
 Bounded  Provides an interface for enumerable values with bounds.
 Integral  Provides an interface for integrallike quantities.
 Real  Provides an interface for reallike quantities.
 Fractional  Provides an interface for rationallike quantities.
 Floating  Provides an interface for defining transcendental functions over real values.
 RealFrac  Provides an interface for rounding real values.
 RealFloat  Provides an interface for working with IEEE 754 operations.
To see the implementation for any of these typeclasses you can run the GHCi info command to see the methods and all instances in scope. For example:
Many of the default classes have instances that can be derived automatically. After the definition of a datatype you can add a deriving
clause which will generate the instances for this datatype automatically. This does not work universally but for many instances which have boilerplate definitions, GHC is quite clever and can save you from writing quite a bit of code by hand.
For example for a custom list type.
Side Effects
Contrary to a common misconception, side effects are an integral part of Haskell programming. Probably the most interesting thing about Haskell’s approach to side effects is that they are encoded in the type system. This is certainly a different approach to effectful programming, and the language has various models for modeling these effects within the type system. These models range from using Monads to building algebraic models of effects that draw clear lines between effectful code and pure code. The idea of reasoning about where effects can and cannot exist is one of the key ideas of Haskell, but this certainly does not mean trying to avoid side effects altogether!
Indeed, a Hello World program in Haskell is quite simple:
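The whole program is a single main definition:

```haskell
-- The main entry point performs the side effect of printing.
main :: IO ()
main = putStrLn "Hello World"
```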
Other side effects can include reading from the terminal and prompting the user for input, such as in the complete program below:
Records
Records in Haskell are fundamentally broken for several reasons:
 The syntax is unconventional.
Most programming languages use dot or arrow syntax for field accessors like the following:
Haskell however uses function application syntax, since record accessors are simply functions. Instead of creating a privileged class of names and syntax for field accessors, Haskell chose to implement the simplest model and expands accessors to functions during compilation.
 Incomplete pattern matches are implicitly generated for sums of products.
The functions generated for a
or b
in both of these cases are partial. See Exhaustiveness checking.
 Lack of Namespacing
Given two records defined in the same module (or imported), GHC is unable (by default) to disambiguate which field accessor to use at a callsite that uses a.
This can be routed around with the language extension DisambiguateRecordFields
but only to a certain extent. If we want to write maximally polymorphic functions which operate over arbitrary records which have a field a
, then the GHC type system is not able to express this without some much higher-level magic.
Pragmas
At the beginning of a module there is special syntax for pragmas which direct the compiler to compile the current module in a specific way. The most common is a language extension pragma denoted like the following:
These flags alter the semantics and syntax of the module in a variety of ways. See Language Extensions for more details on all of these options.
Additionally we can pass specific GHC flags which alter the compilation behavior, enabling or disabling specific bespoke features based on our needs. These include compiler warnings, optimisation flags and extension flags.
Warning flags allow you to inform users at compiletime with a custom error message. Additionally you can mark a module as deprecated with a specific replacement message.
Newtypes
Newtypes are a form of zero-cost abstraction that allows developers to specify compile-time names for types for which the developer wishes to expose a more restrictive interface. They’re zero-cost because these newtypes end up with the same underlying runtime representation as the things they differentiate. This allows the compiler to distinguish between types which are representationally identical but semantically different.
For instance velocity can be represented as a scalar quantity represented as a double but the user may not want to mix doubles with other vector quantities. Newtypes allow us to distinguish between scalars and vectors at compile time so that no accidental calculations can occur.
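A sketch of such a newtype (the Velocity name and double function are illustrative):

```haskell
-- A Velocity wraps a Double but is a distinct type to the compiler,
-- so a bare Double cannot be passed where a Velocity is expected.
newtype Velocity = Velocity Double

double :: Velocity -> Velocity
double (Velocity v) = Velocity (2 * v)
```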
Most importantly these newtypes disappear during compilation and the velocity type will be represented as simply just a machine double with no overhead.
See also the section on Newtype Deriving for a further discussion of tricks involved with handling newtypes.
Bottoms
The bottom is a singular value that inhabits every type. When this value is evaluated, the semantics of Haskell no longer yield a meaningful value. In other words, further operations on the value cannot be defined in Haskell. A bottom value is usually written as the symbol ⊥, ( i.e. the compiler flipping you off ). Several ways exist to express bottoms in Haskell code.
For instance, undefined
is an easily called example of a bottom value. This function has type a
but lacks any type constraints in its type signature. Thus, undefined
is able to stand in for any type in a function body, allowing type checking to succeed, even if the function is incomplete or lacking a definition entirely. The undefined
function is extremely practical for debugging or to accommodate writing incomplete programs.
Another example of a bottom value comes from the evaluation of the error
function, which takes a String
and returns something that can be of any type. This property is quite similar to undefined
, which also can also stand in for any type.
Calling error
in a function causes the compiler to throw an exception, halt the program, and print the specified error message.
In the divByY function below, passing 0 as the divisor results in this function throwing such an exception.
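One way divByY might be written:

```haskell
-- Dividing by zero hits the first clause and invokes error,
-- a bottom which throws an exception at runtime.
divByY :: Double -> Double -> Double
divByY _ 0 = error "divide by zero"
divByY x y = x / y
```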
A third way to express a bottom is with an infinitely looping term:
Examples of actual Haskell code that use this looping syntax live in the source code of the GHC.Prim module. These bottoms exist because the operations cannot be defined in native Haskell. Such operations are baked into the compiler at a very low level. However, this module exists so that Haddock can generate documentation for these primitive operations, while the looping syntax serves as a placeholder for the actual implementation of the primops.
Perhaps the most common introduction to bottoms is writing a partial function that does not have exhaustive pattern matching defined. For example, the following code has non-exhaustive pattern matching because the case expression lacks a definition of what to do with a B:
The code snippet above is translated into the following GHC Core output where the compiler will insert an exception to account for the nonexhaustive patterns:
GHC can be made more vocal about incomplete patterns using the -fwarn-incomplete-patterns and -fwarn-incomplete-uni-patterns flags.
A similar situation can arise with records. Although constructing a record with missing fields is rarely useful, it is still possible.
When the developer omits a field’s definition, the compiler inserts an exception in the GHC Core representation:
Fortunately, GHC will warn us by default about missing record fields.
Bottoms are used extensively throughout the Prelude, although this fact may not be immediately apparent. The reasons for including bottoms are either practical or historical.
The canonical example is the head function, which has type [a] -> a. This function could not be well-typed without the bottom.
Some further examples of bottoms:
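Some familiar partial functions from the Prelude and Data.Maybe (the annotations indicate where each one bottoms):

```haskell
head     :: [a] -> a         -- fails on the empty list
tail     :: [a] -> [a]       -- fails on the empty list
fromJust :: Maybe a -> a     -- fails on Nothing
(!!)     :: [a] -> Int -> a  -- fails on an out-of-range index
```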
It is rare to see these partial functions thrown around carelessly in production code because they cause the program to halt. The preferred method for handling exceptions is to combine the use of safe variants provided in Data.Maybe with the functions maybe and either.
Another method is to use pattern matching, as shown in listToMaybe, a safer version of head described below:
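```haskell
listToMaybe :: [a] -> Maybe a
listToMaybe []    = Nothing
listToMaybe (x:_) = Just x
```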
Invoking a bottom defined in terms of error typically will not generate any position information. However, assert, which is used to provide assertions, can be short-circuited to generate position information in place of either undefined or error calls.
See: Avoiding Partial Functions
Exhaustiveness
Pattern matching in Haskell allows for the possibility of non-exhaustive patterns. For example, passing Nothing to unsafe will cause the program to crash at runtime. However, this function is an otherwise valid, type-checked program.
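A minimal version of such a function might look like:

```haskell
unsafe :: Num a => Maybe a -> a
unsafe (Just x) = x + 1
-- unsafe Nothing raises an incomplete-pattern exception at runtime
```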
Since unsafe takes a Maybe a value as its argument, two possible values are valid input: Nothing and Just a. Since the case of a Nothing was not defined in unsafe, we say that the pattern matching within that function is non-exhaustive. In other words, the function does not implement appropriate handling of all valid inputs. Instead of yielding a value, such a function will halt from an incomplete match.
Partial functions from non-exhaustivity are a controversial subject, and frequent use of non-exhaustive patterns is considered a dangerous code smell. However, the complete removal of non-exhaustive patterns from the language would itself be too restrictive and forbid too many valid programs.
Several flags exist that we can pass to the compiler to warn us about such patterns or forbid them entirely, either locally or globally.
The -Wall or -fwarn-incomplete-patterns flag can also be added on a per-module basis by using the OPTIONS_GHC pragma.
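For example, at the top of a module:

```haskell
{-# OPTIONS_GHC -fwarn-incomplete-patterns #-}
```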
A more subtle case of non-exhaustivity is the use of implicit pattern matching with a single uni-pattern in a lambda expression. In a manner similar to the unsafe function above, a uni-pattern cannot handle all types of valid input. For instance, the function boom will fail when given a Nothing, even though the type of the lambda expression's argument is a Maybe a.
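A sketch of such a function (the body is illustrative):

```haskell
boom :: Maybe Int -> Int
boom = \(Just a) -> a + 1
-- boom Nothing fails at runtime with an incomplete-pattern exception
```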
Non-exhaustivity arising from uni-patterns in lambda expressions occurs frequently in let or do blocks after desugaring, because such code is translated into lambda expressions similar to boom.
GHC can warn about these cases of non-exhaustivity with the -fwarn-incomplete-uni-patterns flag.
Generally speaking, any nontrivial program will use some measure of partial functions. It is simply a fact. Thus, there exist obligations for the programmer that cannot be manifested in the Haskell type system.
Debugger
Since GHC version 6.8.1, a built-in debugger has been available, although its use is somewhat rare. Debugging uncaught exceptions is done in a similar style to debugging segfaults with gdb. Breakpoints can be set with :break, and the call stack stepped through with :forward and :back.
Stack Traces
With runtime profiling enabled, GHC can also print a stack trace when a diverging bottom term (error, undefined) is hit. This action, though, requires a special flag and profiling to be enabled, both of which are disabled by default. So, for example:
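A sketch of such a program (the function names are illustrative); compiling with ghc -prof -fprof-auto and running the binary with +RTS -xc prints the call stack when the bottom in g is hit:

```haskell
f :: Int
f = g

g :: Int
g = error "fail"

main :: IO ()
main = print f
```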
And indeed, the runtime tells us that the exception occurred in the function g
and enumerates the call stack.
It is best to run this code without optimizations applied (-O0) so as to preserve the original call stack as represented in the source. With optimizations applied, GHC will rearrange the program in rather drastic ways, resulting in what may be an entirely different call stack.
Printf Tracing
Since Haskell is a pure language it has the unique property that most code is introspectable on its own. As such, using printf to display the state of the program at critical times throughout execution is often unnecessary because we can simply open GHCi and test the function. Nevertheless, Haskell does come with an unsafe trace function which can be used to perform arbitrary print statements outside of the IO monad. You can place these statements wherever you like in your code without IO restrictions.
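A small sketch of its use:

```haskell
import Debug.Trace

example :: Int -> Int
example n = trace ("n is " ++ show n) (n + 1)
-- evaluating (example 5) prints "n is 5" to stderr and yields 6
```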
Trace uses unsafePerformIO under the hood and should not be used in production code.
In addition to the trace function, several monadic trace variants are quite common.
Type Inference
While inference in Haskell is usually complete, there are cases where the principal type cannot be inferred. Three common cases are:
 Reduced polymorphism due to mutually recursive binding groups
 Undecidability due to polymorphic recursion
 Reduced polymorphism due to the monomorphism restriction
In each of these cases, Haskell needs a hint from the programmer, which may be provided by adding explicit type signatures.
Mutually Recursive Binding Groups
In this case, the inferred type signatures are correct in their usage, but they don’t represent the most general signatures. When GHC analyzes the module it analyzes the dependencies of expressions on each other, groups them together, and applies substitutions from unification across mutually defined groups. As such the inferred types may not be the most general types possible, and an explicit signature may be desired.
Polymorphic recursion
In the second case, recursion is polymorphic because the inferred type variable a in size spans two possible types (a and (a,a)). These two types won't pass the occurs-check of the typechecker, and it yields an incorrect inferred type:
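The classic example is a function over a nested datatype; without the signature, inference fails with an occurs-check error (cannot construct the infinite type: a ~ (a, a)), while the explicit signature below makes it compile:

```haskell
data Tree a = Leaf | Bin a (Tree (a, a))

size :: Tree a -> Int
size Leaf      = 0
size (Bin _ t) = 1 + size t
```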
Simply adding an explicit type signature corrects this. Type inference using polymorphic recursion is undecidable in the general case.
See: Static Semantics of Function and Pattern Bindings
Monomorphism Restriction
Finally, the monomorphism restriction is a built-in typing rule. By default, it is turned on when compiling and off in GHCi. The practical effect of this rule is that types inferred for functions without explicit type signatures may be more specific than expected. This is because GHC will sometimes reduce a general type, such as Num, to a default type, such as Double. This can be seen in the following example in GHCi:
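A sketch of a GHCi session toggling the restriction (the binding is illustrative; the exact defaulted type depends on the defaulting rules in scope):

```haskell
-- λ> :set -XMonomorphismRestriction
-- λ> example = 3.14 + 1
-- λ> :type example
-- example :: Double
-- λ> :set -XNoMonomorphismRestriction
-- λ> example2 = 3.14 + 1
-- λ> :type example2
-- example2 :: Fractional a => a
```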
This rule may be deactivated with the NoMonomorphismRestriction extension; see below.
See:
Type Holes
Since the release of GHC 7.8, type holes allow underscores as standins for actual values. They may be used either in declarations or in type signatures.
Type holes are useful in debugging incomplete programs. By placing an underscore on any value on the right handside of a declaration, GHC will throw an error during typechecking. The error message describes which values may legally fill the type hole.
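For example (a sketch, with an illustrative function):

```haskell
example :: [a] -> [a]
example xs = _
-- GHC reports:
--   Found hole: _ :: [a]
--   Relevant bindings include xs :: [a]
```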
GHC has rightly suggested that the expression needed to finish the program is xs :: [a].
The same hole technique can be applied at the toplevel for signatures:
Pattern wildcards can also be given explicit names so that GHC will use the names when reporting the inferred type in the resulting message.
The same wildcards can be used in type contexts to dump out inferred type class constraints:
When the flag -XPartialTypeSignatures is passed to GHC and the inferred type is unambiguous, GHC will let us leave the holes in place, and the compilation will proceed with a warning instead of an error.
Deferred Type Errors
Since the release of version 7.8, GHC supports the option of treating type errors as runtime errors. With this option enabled, programs will run, but they will fail when a mistyped expression is evaluated. This feature is enabled with the -fdefer-type-errors flag in three ways: at the module level, when compiled from the command line, or inside of a GHCi interactive session.
For instance, the program below will compile:
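A sketch of such a module:

```haskell
{-# OPTIONS_GHC -fdefer-type-errors #-}

x :: ()
x = print 3   -- ill-typed, but the error is deferred to runtime

main :: IO ()
main = return ()   -- runs fine as long as x is never evaluated
```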
However, when a pathological term is evaluated at runtime, we’ll see a message like this:
This error tells us that while x has a declared type of (), the body of the function, print 3, has a type of IO (). However, if the term is never evaluated, GHC will not throw an exception.
Name Conventions
Haskell uses short variable names as a convention. This is off-putting at first, but after you read enough Haskell it ceases to be a problem. In addition, there are several ad-hoc conventions that are typically adopted:
a,b,c..   Type-level variables
x,y,z..   Value variables
f,g,h..   Higher-order function values
x,y       List head values
xs,ys     List tail values
m         Monadic type variable
t         Monad transformer variable
e         Exception value
s         Monad state value
r         Monad reader value
t         Foldable or Traversable type variable
f         Functor or applicative type variable
mX        Maybe variable
Functions that end with a tick (like fold') are typically strict variants of a default lazy function.
Functions that end with an underscore (like map_) are typically variants of a function which discards the output and returns void.
Variables that are pluralized (xs, ys) typically refer to list tails.
Records that do not export their accessors will sometimes prefix them with underscores. These are sometimes interpreted by Template Haskell logic to produce derived field accessors.
Predicates will often prefix their function names with is
, as in isPositive
.
Functions which result in an Applicative or Monad type will often suffix their name with an A for Applicative or an M for Monad. For example:
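A representative pair from the base library (in recent versions of base the zipWithM constraint is relaxed to Applicative):

```haskell
zipWith  ::            (a -> b ->   c) -> [a] -> [b] ->   [c]
zipWithM :: Monad m => (a -> b -> m c) -> [a] -> [b] -> m [c]
```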
Functions which have chirality in how they traverse a data structure (i.e. left-to-right or right-to-left) will often suffix the name with L or R for their iteration pattern. This is useful because often these type signatures are identical.
Functions working with mutable structures or monadic state will often adopt the following naming conventions:
Functions that are prefixed with with typically take a value as their first argument and a function as their second argument, returning the value with the function applied over some substructure as the result.
ghcid
ghcid is a lightweight IDE hook that allows continuous feedback whenever code is updated. It can be run from the command line in the root of the cabal project directory by specifying a command to run (e.g. ghci, cabal repl, or stack repl).
When a Haskell module is loaded into ghcid, the code is evaluated in order to provide the user with any errors or warnings that would happen at compile time. When the developer edits and saves code loaded into ghcid, the program automatically reloads and evaluates the code for errors and warnings.
HLint
HLint is a source linter for Haskell that provides a variety of hints on code improvements. It can be customised and configured with custom rules, on a per-project basis. HLint is configured through a .hlint.yaml file placed in the root of a project. To generate the default configuration run:
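```shell
hlint --default > .hlint.yaml
```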
Custom errors can be added to this file in order to match and suggest custom changes of code from the left hand side match to the right hand side replacement:
HLint’s default is to warn on all possible failures. These can be disabled globally by adding ignore pragmas.
Or within specific modules by specifying the within option.
See:
Docker Images
Haskell has stable Docker images that are widely used for deployments across Kubernetes and Docker environments. The two Dockerhub repositories of note are:
To import the official Haskell images with ghc and cabal-install, include the following preamble in your Dockerfile with your desired GHC version.
FROM haskell:8.8.1
To import the stack images include the following preamble in your Dockerfile with your desired Stack resolver replaced.
FROM fpco/stack-build:lts-14.0
Continuous Integration
These days it is quite common to use cloud hosted continuous integration systems to test code from version control systems. There are many community contributed build scripts for different service providers, including the following:
 Travis CI for Cabal
 Travis CI for Stack
 Circle CI for Cabal & Stack
 Github Actions for Cabal & Stack
See also the official CI repository:
Ormolu
Ormolu is an opinionated Haskell source formatter that produces a canonical way of rendering the Haskell abstract syntax tree to text. This ensures that code shared amongst teams and checked into version control conforms to a single universal standard for whitespace and lexeme placing. This is similar to tools in other languages such as go fmt.
For example, running ormolu --mode inplace example.hs on the following module:
Will rerender the file as:
Ormolu can be installed via a variety of mechanisms.
See:
Haddock
Haddock is the automatic documentation generation tool for Haskell source code, and it integrates with the usual cabal
toolchain. In this section, we will explore how to document code so that Haddock can generate documentation successfully.
Several frequent comment patterns are used to document code for Haddock. The first of these methods uses -- | to delineate the beginning of a comment:
Multiline comments are also possible:
-- ^ is used to comment constructors or record fields:
Elements within a module (i.e. values, types, classes) can be hyperlinked by enclosing the identifier in single quotes:
Modules themselves can be referenced by enclosing them in double quotes:
haddock also allows the user to include blocks of code within the generated documentation. Two methods of demarcating the code blocks exist in haddock. For example, enclosing a code snippet in @ symbols marks it as a code block:
Similarly, it is possible to use bird tracks (>) in a comment line to set off a code block.
Snippets of interactive shell sessions can also be included in haddock documentation. In order to denote the beginning of code intended to be run in a REPL, the >>> symbol is used:
Headers for specific blocks can be added by prefacing the comment in the module block with a *:
Sections can also be delineated by $ blocks that pertain to references in the body of the module:
Links can be added with the following syntax:
Images can also be included, so long as the path is either absolute or relative to the directory in which haddock is run.
haddock options can also be specified with pragmas in the source, either at the module or project level.
ignore-exports    Ignores the export list and includes all signatures in scope.
not-home          Module will not be considered in the root documentation.
show-extensions   Annotates the documentation with the language extensions used.
hide              Forces the module to be hidden from Haddock.
prune             Omits definitions with no annotations.
Unsafe Functions
As everyone eventually finds out, there are several functions within the implementation of GHC (not the Haskell language) that can be used to subvert the type-system; these functions are marked with the prefix unsafe. Unsafe functions exist only for when one can manually prove the soundness of an expression but can't express this property in the type-system, or when the property arises from externalities to Haskell.
Using these functions to subvert the Haskell type-system will cause all measure of undefined behavior with unimaginable pain and suffering, so they are strongly discouraged. When initially starting out with Haskell there are no legitimate reasons to use these functions at all.
Monads
Monads form one of the core components for constructing Haskell programs. In their most general form monads are an algebraic building block that can give rise to ways of structuring control flow, handling data structures and orchestrating logic. Monads are a very general algebraic way of structuring code and have a certain reputation for being confusing. However their power and flexibility have become foundational to the way modern Haskell programs are structured.
There is a singular truth to keep in mind when learning monads.
A monad is just its algebraic laws. Nothing more, nothing less.
Eightfold Path to Monad Satori
Much ink has been spilled waxing lyrical about the supposed mystique of monads. Instead, I suggest a path to enlightenment:
 Don’t read the monad tutorials.
 No really, don’t read the monad tutorials.
 Learn about the Haskell typesystem.
 Learn what a typeclass is.
 Read the Typeclassopedia.
 Read the monad definitions.
 Use monads in real code.
 Don’t write monadanalogy tutorials.
In other words, the only path to understanding monads is to read the fine source, fire up GHC, and write some code. Analogies and metaphors will not lead to understanding.
Monad Myths
The following are all false:
 Monads are impure.
 Monads are about effects.
 Monads are about state.
 Monads are about imperative sequencing.
 Monads are about IO.
 Monads are dependent on laziness.
 Monads are a “backdoor” in the language to perform sideeffects.
 Monads are an embedded imperative language inside Haskell.
 Monads require knowing abstract mathematics.
 Monads are unique to Haskell.
Monad Methods
Monads are not complicated. They are implemented as a typeclass with two methods, return and (>>=) (pronounced "bind"). In order to implement a Monad instance, these two functions must be defined:
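A schematic version of the class (eliding the Applicative superclass constraint that modern GHC requires):

```haskell
class Monad m where
  (>>=)  :: m a -> (a -> m b) -> m b
  return :: a -> m a
```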
The first type signature in the Monad class definition is for return. Any preconceptions one might have for the word "return" should be discarded. It has an entirely different meaning in the context of Haskell and acts very differently than in languages such as C, Python, or Java. Instead of being the final arbiter of what value a function produces, return in Haskell injects a value of type a into a monadic context (e.g., Maybe, Either, etc.), which is denoted as m a.
The other function essential to implementing a Monad instance is (>>=). This infix function takes two arguments. On its left side is a value with type m a, while on the right side is a function with type (a -> m b). The bind operation results in a final value of type m b.
A third, auxiliary function, (>>), is defined in terms of the bind operation that discards its argument.
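```haskell
(>>) :: Monad m => m a -> m b -> m b
m >> k = m >>= \_ -> k
```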
This definition says that (>>) has a left and right argument which are monadic, with types m a and m b respectively, while the infix function yields a value of type m b. The actual implementation of (>>) says that when m is passed to (>>) with k on the right, the value k will always be yielded.
Monad Laws
In addition to specific implementations of (>>=) and return, all monad instances must satisfy three laws.
Law 1
The first law says that when return a is passed through (>>=) into a function f, this expression is exactly equivalent to f a.
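Written out, the first law is:

```haskell
return a >>= f ≡ f a
```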
In discussing the next two laws, we'll refer to a value m. This notation is shorthand for a value wrapped in a monadic context. Such a value has type m a, and could be represented more concretely by values like Nothing, Just x, or Right x. It is important to note that some of these concrete instantiations of the value m have multiple components. In discussing the second and third monad laws, we'll see some examples of how this plays out.
Law 2
The second law states that a monadic value m passed through (>>=) into return is exactly equivalent to itself. In other words, using bind to pass a monadic value to return does not change the initial value.
A more explicit way to write the second Monad law exists. In the following example code, the first expression shows how the second law applies to values represented by non-nullary type constructors. The second snippet shows how a value represented by a nullary type constructor works within the context of the second law.
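The law, with the more explicit forms beneath it (SomeMonad is a hypothetical constructor standing in for any monad):

```haskell
m >>= return ≡ m

-- non-nullary constructor:
(SomeMonad val) >>= return ≡ SomeMonad val

-- nullary constructor:
Nothing >>= return ≡ Nothing
```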
Law 3
While the first two laws are relatively clear, the third law may be more difficult to understand. This law states that when a monadic value m is passed through (>>=) to the function f and then the result of that expression is passed to >>= g, the entire expression is exactly equivalent to passing m to a lambda expression that takes one parameter x and outputs the function f applied to x. By the definition of bind, f x must return a value wrapped in the same monad. Because of this property, the resultant value of that expression can be passed through (>>=) to the function g, which also returns a monadic value.
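Written out, the third law is:

```haskell
(m >>= f) >>= g ≡ m >>= (\x -> f x >>= g)
```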
Again, it is possible to write this law with more explicit code. As in the explicit examples for law 2, m has been replaced by SomeMonad val to make it clear that there can be multiple components to a monadic value. Although little has changed in the code, it is easier to see that the value (namely, val) corresponds to the x in the lambda expression. After SomeMonad val is passed through (>>=) to f, the function f operates on val and returns a result still wrapped in the SomeMonad type constructor. We can call this new value SomeMonad newVal. Since it is still wrapped in the monadic context, SomeMonad newVal can thus be passed through the bind operation into the function g.
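The explicit form (SomeMonad is again a hypothetical constructor):

```haskell
((SomeMonad val) >>= f) >>= g ≡ (SomeMonad val) >>= (\x -> f x >>= g)
```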
Monad law summary: Law 1 and 2 are identity laws (left and right identity respectively) and law 3 is the associativity law. Together they ensure that Monads can be composed and ‘do the right thing’.
See:
Do Notation
Monadic syntax in Haskell is written in a sugared form, known as do notation. The advantages of this special syntax are that it is easier to write and often easier to read, and it is entirely equivalent to simply applying the monad operations. The desugaring is defined recursively by the rules:
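```haskell
do { a <- f ; m } ≡ f >>= \a -> do { m }
do { f ; m }      ≡ f >> do { m }
do { m }          ≡ m
```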
Thus, through the application of the desugaring rules, the following expressions are equivalent:
```haskell
do
  a <- f            -- f, g, and h are bound to the names a,
  b <- g            -- b, and c. These names are then passed
  c <- h            -- to 'return' to ensure that all values
  return (a, b, c)  -- are wrapped in the appropriate monadic
                    -- context

do {                -- N.B. '{}' and ';' characters are
  a <- f;           -- rarely used in do-notation
  b <- g;
  c <- h;
  return (a, b, c)
}

f >>= \a ->
  g >>= \b ->
    h >>= \c ->
      return (a, b, c)
```
If one were to write the bind operator as an uncurried function (which is not how Haskell uses it) the same desugaring might look something like the following chain of nested binds with lambdas.
In the donotation, the monad laws from above are equivalently written:
Law 1
Law 2
Law 3
See:
Maybe Monad
The Maybe monad is the simplest first example of a monad instance. The Maybe monad models a computation which may fail to yield a value at any point during computation.
The Maybe type has two value constructors. The first, Just, is a unary constructor representing a successful computation, while the second, Nothing, is a nullary constructor that represents failure.
The monad instance describes the implementation of (>>=) for Maybe by pattern matching on the possible inputs that could be passed to the bind operation (i.e., Nothing or Just x). The instance declaration also provides an implementation of return, which in this case is simply Just.
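Schematically (eliding the Functor and Applicative instances that modern GHC requires):

```haskell
data Maybe a = Just a | Nothing

instance Monad Maybe where
  (Just x) >>= k = k x
  Nothing  >>= k = Nothing
  return = Just
```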
The following code shows some simple operations to do within the Maybe monad.
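```haskell
(Just 3) >>= (\x -> return (x + 1))
-- Just 4

Nothing >>= (\x -> return (x + 1))
-- Nothing

return 4 :: Maybe Int
-- Just 4
```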
In the above example, the value Just 3 is passed via (>>=) to the lambda function \x -> return (x + 1). x refers to the Int portion of Just 3, and we can use x in the second half of the lambda expression, return (x + 1), which evaluates to Just 4, indicating a successful computation.
In the second example, the value Nothing is passed via (>>=) to the same lambda function as in the previous example. However, according to the Maybe Monad instance, whenever Nothing is bound to a function, the expression's result will be Nothing.
In the third example, return is applied to 4 and results in Just 4.
The next code examples show the use of do notation within the Maybe monad to do addition that might fail. Desugared examples are provided as well.
List Monad
The List monad is the second simplest example of a monad instance. As always, this monad implements both (>>=) and return.
The definition of bind says that when the list m is bound to a function f, the result is a concatenation of map f over the list m. The return method simply takes a single value x and injects it into a singleton list [x].
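Schematically (again eliding the Functor and Applicative instances that modern GHC requires):

```haskell
instance Monad [] where
  m >>= f  = concat (map f m)
  return x = [x]
```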
In order to demonstrate the List
monad’s methods, we will define two values: m
and f
. m
is a simple list, while f
is a function that takes a single Int
and returns a two element list [1, 0]
.
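```haskell
m :: [Int]
m = [1,2,3,4]

f :: Int -> [Int]
f = \x -> [1, 0]
```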
When applied to bind, evaluation proceeds as follows:
```haskell
m >>= f
==> [1,2,3,4] >>= \x -> [1,0]
==> concat (map (\x -> [1,0]) [1,2,3,4])
==> concat [[1,0],[1,0],[1,0],[1,0]]
==> [1,0,1,0,1,0,1,0]
```
The list comprehension syntax in Haskell can be implemented in terms of the list monad. List comprehensions can be considered syntactic sugar for more obviously monadic implementations. Examples a and b illustrate these use cases.
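Example a, as a comprehension (the bounds are illustrative):

```haskell
a :: [(Int, Int)]
a = [(x, y) | x <- [1..4], y <- [5..8]]
```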
The first example (a) illustrates how to write a list comprehension. Although the syntax looks strange at first, there are elements of it that may look familiar. For instance, the use of <- is just like bind in do notation: it binds an element of a list to a name. However, one major difference is apparent: a seems to lack a call to return. Not to worry, though: the [] fills this role. This syntax can be easily desugared by the compiler to an explicit invocation of return. Furthermore, it serves to remind the user that the computation takes place in the List monad.
The second example (b) shows the list comprehension above rewritten with do notation:
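```haskell
b :: [(Int, Int)]
b = do
  x <- [1..4]
  y <- [5..8]
  return (x, y)
```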
The final examples are further illustrations of the List monad. The functions below each return a list of 3-tuples which contain the possible combinations of the three lists that get bound to the names a, b, and c. N.B.: Only values in the list bound to a can be used in the a position of the tuple; the same holds true for the lists bound to b and c.
```haskell
example :: [(Int, Int, Int)]
example = do
  a <- [1,2]
  b <- [10,20]
  c <- [100,200]
  return (a,b,c)
-- [(1,10,100),(1,10,200),(1,20,100),(1,20,200),(2,10,100),(2,10,200),(2,20,100),(2,20,200)]

desugared :: [(Int, Int, Int)]
desugared = [1, 2] >>= \a ->
            [10, 20] >>= \b ->
            [100, 200] >>= \c ->
            return (a, b, c)
-- [(1,10,100),(1,10,200),(1,20,100),(1,20,200),(2,10,100),(2,10,200),(2,20,100),(2,20,200)]
```
IO Monad
Perhaps the most (in)famous example in Haskell of a type that forms a monad is IO. A value of type IO a is a computation which, when performed, does some I/O before returning a value of type a. These computations are called actions. IO actions executed in main are the means by which a program can operate on or access information from the external world. IO actions allow the program to do many things, including, but not limited to:
 Print a String to the terminal
 Read and parse input from the terminal
 Read from or write to a file on the system
 Establish an ssh connection to a remote computer
 Take input from a radio antenna for signal processing
 Launch the missiles.
Conceptualizing I/O as a monad enables the developer to access information from outside the program, but also to use pure functions to operate on that information as data. The following examples will show how we can use IO actions and IO values to receive input from stdin and print to stdout.
Perhaps the most immediately useful function for doing I/O in Haskell is putStrLn. This function takes a String and returns an IO (). Calling it from main will result in the String being printed to stdout followed by a newline character.
Here is some code that prints a couple of lines to the terminal. The first invocation of putStrLn is executed, causing the String to be printed to stdout. The result is bound to a lambda expression that discards its argument, and then the next putStrLn is executed.
Another useful function is getLine, which has type IO String. This function gets a line of input from stdin. The developer can then bind this line to a name in order to operate on the value within the program.
The code below demonstrates a simple combination of these two functions as well as desugaring IO code. First, putStrLn prints a String to stdout to ask the user to supply their name, with the result being bound to a lambda that discards its argument. Then, getLine is executed, supplying a prompt to the user for entering their name. Next, the resultant IO String is bound to name and passed to putStrLn. Finally, the program prints the name to the terminal.
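A sketch of such a program (the prompt text is illustrative):

```haskell
main :: IO ()
main = do
  putStrLn "Please enter your name:"
  name <- getLine
  putStrLn name
```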
The next code block is the desugared equivalent of the previous example, where the uses of (>>=) are made explicit.
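```haskell
main :: IO ()
main =
  putStrLn "Please enter your name:" >>= \_ ->
    getLine >>= \name ->
      putStrLn name
```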
Our final example executes in the same way as the previous two examples. This example, though, uses the special (>>) operator to take the place of binding a result to the lambda that discards its argument.
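```haskell
main :: IO ()
main =
  putStrLn "Please enter your name:" >>
    (getLine >>= \name -> putStrLn name)
```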
See:
What’s the point?
Although it is difficult, if not impossible, to touch, see, or otherwise physically interact with a monad, this construct has some very interesting implications for programmers. For instance, consider the nonintuitive fact that we now have a uniform interface for talking about three very different, but foundational ideas for programming: Failure, Collections and Effects.
Let's write down a new function called sequence which folds a function mcons over a list of monadic computations. We can think of mcons as analogous to the list constructor (i.e. (a : b : [])) except it pulls the two list elements out of two monadic values (p, q) by means of bind. The bound values are then joined with the list constructor :, before finally being rewrapped in the appropriate monadic context with return.
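```haskell
sequence :: Monad m => [m a] -> m [a]
sequence = foldr mcons (return [])

mcons :: Monad m => m a -> m [a] -> m [a]
mcons p q = do
  x <- p
  y <- q
  return (x : y)
```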
What does this function mean in terms of each of the monads discussed above?
Maybe
For the Maybe monad, sequencing a list of values within the Maybe context allows us to collect the results of a series of computations which can possibly fail. However, sequence yields the aggregated values only if each computation succeeds. In other words, if even one of the Maybe values in the initial list passed to sequence is a Nothing, the result of evaluating sequence for the whole list will also be Nothing.
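```haskell
sequence [Just 3, Just 4]
-- Just [3,4]

sequence [Just 3, Just 4, Nothing]
-- Nothing
```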
List
The bind operation for the list monad forms the pairwise list of elements from the two operands. Thus, folding the binds contained in mcons over a list of lists with sequence implements the general Cartesian product for an arbitrary number of lists.
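```haskell
sequence [[1,2,3],[10,20,30]]
-- [[1,10],[1,20],[1,30],[2,10],[2,20],[2,30],[3,10],[3,20],[3,30]]
```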
IO
Applying sequence within the IO context results in still a different behavior. The function takes a list of IO actions, performs them sequentially, and then gives back the list of resulting values in the order sequenced.
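```haskell
sequence [getLine, getLine]
-- reads two lines from stdin and returns both as a list of Strings
```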
So there we have it, three fundamental concepts of computation that are normally defined independently of each other actually all share this similar structure. This unifying pattern can be abstracted out and reused to build higher abstractions that work for all current and future implementations. If you want a motivating reason for understanding monads, this is it! These insights are the essence of what I wish I knew about monads looking back.
See:
Reader Monad
The reader monad lets us access shared immutable state within a monadic context.
A simple implementation of the Reader monad:
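A minimal sketch, omitting the Functor and Applicative instances that modern GHC requires:

```haskell
newtype Reader r a = Reader { runReader :: r -> a }

instance Monad (Reader r) where
  return a = Reader $ \_ -> a
  m >>= k  = Reader $ \r -> runReader (k (runReader m r)) r

ask :: Reader r r
ask = Reader id

local :: (r -> r) -> Reader r a -> Reader r a
local f m = Reader (runReader m . f)
```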
Writer Monad
The writer monad lets us emit a lazy stream of values from within a monadic context.
A simple implementation of the Writer monad:
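A minimal sketch, again omitting the Functor and Applicative instances that modern GHC requires:

```haskell
newtype Writer w a = Writer { runWriter :: (a, w) }

instance Monoid w => Monad (Writer w) where
  return a = Writer (a, mempty)
  m >>= k  = Writer $
    let (a, w)  = runWriter m
        (b, w') = runWriter (k a)
    in (b, w <> w')

tell :: w -> Writer w ()
tell w = Writer ((), w)
```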
This implementation is lazy, so some care must be taken if one actually wants to generate only a stream of thunks. Most often the lazy Writer is not suitable for use; instead, implement the equivalent structure by embedding some monomial object inside a StateT monad, or use the strict version.
State Monad
The state monad allows functions within a stateful monadic context to access and modify shared state.
The state monad is often mistakenly described as being impure, but it is in fact entirely pure and the same effect could be achieved by explicitly passing state. A simple implementation of the State monad takes only a few lines:
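A minimal sketch, again omitting the Functor and Applicative instances that modern GHC requires:

```haskell
newtype State s a = State { runState :: s -> (a, s) }

instance Monad (State s) where
  return a = State $ \s -> (a, s)
  State act >>= k = State $ \s ->
    let (a, s') = act s
    in runState (k a) s'

get :: State s s
get = State $ \s -> (s, s)

put :: s -> State s ()
put s = State $ \_ -> ((), s)

evalState :: State s a -> s -> a
evalState act = fst . runState act
```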
Why are monads confusing?
So many monad tutorials have been written that it raises the question: what makes monads so difficult when first learning Haskell? I hypothesize there are three reasons:
1. There are several levels of indirection with desugaring.
A lot of the Haskell we write is radically rearranged and transformed into an entirely new form under the hood.
Most monad tutorials will not manually expand out the do-sugar. This leaves the beginner thinking that monads are a way of dropping into a pseudo-imperative language inside of pure code, and further fuels the misconception that specific instances like IO describe monads in their full generality, when in fact the IO monad is only one instance among many.
Being able to manually desugar is crucial to understanding.
2. Infix operators for higher order functions are not common in other languages.
On the left-hand side of the operator we have an m a and on the right we have an a -> m b. Thus, this operator is asymmetric, taking a monadic value on the left and a higher order function on the right. Although some languages do have infix operators that are themselves higher order functions, it is still a rather rare occurrence.
Thus, once a function is desugared, it can be confusing that the (>>=) operator is in fact building up a much larger function by composing functions together.
Written in prefix form, it becomes a little bit more digestible.
Perhaps even removing the operator entirely might be more intuitive coming from other languages.
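For instance, the same Maybe computation written three ways (bind is a hypothetical prefix alias for (>>=), introduced here for illustration):

```haskell
-- A plain prefix name for the bind operator.
bind :: Monad m => m a -> (a -> m b) -> m b
bind = (>>=)

sugared :: Maybe Int
sugared = do
  x <- Just 1
  y <- Just 2
  pure (x + y)

infixed :: Maybe Int
infixed = Just 1 >>= \x -> Just 2 >>= \y -> pure (x + y)

prefixed :: Maybe Int
prefixed = bind (Just 1) (\x -> bind (Just 2) (\y -> pure (x + y)))
```

All three evaluate to Just 3.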
3. Ad-hoc polymorphism is not commonplace in other languages.
Haskell’s implementation of overloading can be unintuitive if one is not familiar with type inference. Indeed, newcomers to Haskell often believe they can gain an intuition for monads in a way that will unify their understanding of all monads. This is a fallacy, however, because any particular monad instance is merely an instantiation of the monad typeclass functions implemented for that particular type.
This is all abstracted away from the user, but the (>>=)
or bind
function is really a function of 3 arguments with the extra typeclass dictionary argument ($dMonad
) implicitly threaded around.
In general, this is true for all typeclasses in Haskell and it’s true here as well, except in the case where the parameter of the monad class is unified (through inference) with a concrete class instance.
Now, all of these transformations are trivial once we understand them, they’re just typically not discussed. In my opinion the fundamental fallacy of monad tutorials is not that intuition for monads is hard to convey (nor are metaphors required!), but that novices often come to monads with an incomplete understanding of points (1), (2), and (3) and then trip on the simple fact that monads are the first example of a Haskell construct that is the confluence of all three.
Thus we make monads more difficult than they need to be. At the end of the day they are simple algebraic critters.
mtl / transformers
The descriptions of Monads in the previous chapter are a bit of a white lie. Modern Haskell monad libraries typically use a more general form of these, written in terms of monad transformers which allow us to compose monads together to form composite monads.
Imagine if you had an application that wanted to deal with a Maybe monad wrapped inside a State Monad, all wrapped inside the IO monad. This is the problem that monad transformers solve, a problem of composing different monads. At their core, monad transformers allow us to nest monadic computations in a stack with an interface to exchange values between the levels, called lift:
In production code, the monads mentioned previously may actually be their more general transformer form composed with the Identity
monad.
The following table shows the relationships between these forms:
Monad   Transformer  Original Type  Combined Type
Maybe   MaybeT       Maybe a        m (Maybe a)
Reader  ReaderT      r -> a         r -> m a
Writer  WriterT      (a,w)          m (a,w)
State   StateT       s -> (a,s)     s -> m (a,s)
Just as the base monad class has laws, monad transformers also have several laws:
Law #1
Law #2
Or equivalently:
Law #1
Law #2
It’s useful to remember that transformers compose outsidein but are unrolled inside out.
Transformers
The lift definition provided above comes from the transformers
library along with an IOspecialized form called liftIO
:
These definitions rely on the following typeclass definitions, which describe composing one monad with another monad (the “t” is the transformed second monad):
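These classes can be sketched as follows (matching the definitions in transformers, modulo strictness annotations), together with a minimal MaybeT transformer as an example instance:

```haskell
-- Lift a computation one layer through the transformer t.
class MonadTrans t where
  lift :: Monad m => m a -> t m a

-- Lift an IO action through an arbitrarily deep stack.
class Monad m => MonadIO m where
  liftIO :: IO a -> m a

-- A minimal MaybeT, to show what an instance looks like:
newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }

instance MonadTrans MaybeT where
  lift = MaybeT . fmap Just
```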
Basics
The most basic use requires us to use the T-variants for each of the monad transformers in the outer layers and to explicitly lift
and return
values between the layers. Monads have kind (* -> *), so monad transformers, which take monads to monads, have kind ((* -> *) -> * -> *):
For example, if we wanted to form a composite computation using both the Reader and Maybe monads, using MonadTrans
we could use Maybe inside of a ReaderT
to form ReaderT t Maybe a
.
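A sketch of such a stack using the transformers API (safeDiv is our own example name):

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Reader

-- Reads an Int environment; fails in the underlying Maybe monad on zero.
safeDiv :: ReaderT Int Maybe Int
safeDiv = do
  d <- ask
  if d == 0
    then lift Nothing        -- failure in the base monad
    else pure (100 `div` d)
```

Here runReaderT safeDiv 5 yields Just 20, while runReaderT safeDiv 0 yields Nothing.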
The fundamental limitation of this approach is that we find ourselves lift.lift.lift
ing and return.return.return
ing a lot.
mtl
The mtl library is the most commonly used interface for these monad transformers; mtl depends on the transformers library, from which it generalizes the "basic" monads described above into more general typeclasses, such as the following:
This solves the “lift.lift.lifting” problem introduced by transformers.
ReaderT
By way of an example there exist three possible forms of the Reader monad. The first is the primitive version which no longer exists, but which is useful for understanding the underlying ideas. The other two are the transformers and mtl variants.
Reader
ReaderT
MonadReader
So, hypothetically the three variants of ask would be:
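Sketched side by side (signatures as comments, since the three names would clash in one module):

```haskell
import Control.Monad.Reader

-- primitive:    ask :: Reader r r
-- transformers: ask :: Monad m => ReaderT r m r
-- mtl:          ask :: MonadReader r m => m r

-- The mtl variant is polymorphic in the monad it runs in:
env :: MonadReader Int m => m Int
env = ask
```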
In practice the mtl
variant is the one commonly used in Modern Haskell.
Newtype Deriving
Newtype deriving is a common technique used in combination with the mtl
library and as such we will discuss its use for transformers in this section.
As discussed in the newtypes section, newtypes let us reference a data type with a single constructor as a new distinct type, with no runtime overhead from boxing, unlike an algebraic datatype with a single constructor. Newtype wrappers around strings and numeric types can often drastically reduce accidental errors.
Consider the case of using a newtype to distinguish between two different text blobs with different semantics. Both have the same runtime representation as a text object, but are distinguished statically, so that plaintext can not be accidentally interchanged with encrypted text.
This is a surprisingly powerful tool as the Haskell compiler will refuse to compile any function which treats Cryptotext as Plaintext or vice versa!
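A sketch of such a pair of wrappers (the names and the stub cipher are ours, using String for simplicity):

```haskell
newtype Plaintext  = Plaintext String deriving (Show)
newtype Cryptotext = Cryptotext String deriving (Show)

-- A stub standing in for a real encryption routine.
encrypt :: Plaintext -> Cryptotext
encrypt (Plaintext s) = Cryptotext (reverse s)

-- encrypt (Cryptotext "...") would now be rejected by the typechecker.
```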
The other common use case is using newtype deriving to build custom monad transformers for our business logic. Using -XGeneralizedNewtypeDeriving we can recover the functionality of instances of the underlying types composed in our transformer stack.
Using newtype deriving with the mtl library typeclasses we can produce flattened transformer types that don’t require explicit lifting in the transform stack. For example, here is a little stack machine involving the Reader, Writer and State monads.
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.Reader
import Control.Monad.Writer
import Control.Monad.State

type Stack   = [Int]
type Output  = [Int]
type Program = [Instr]

type VM a = ReaderT Program (WriterT Output (State Stack)) a

newtype Comp a = Comp { unComp :: VM a }
  deriving (Functor, Applicative, Monad, MonadReader Program, MonadWriter Output, MonadState Stack)

data Instr = Push Int | Pop | Puts

evalInstr :: Instr -> Comp ()
evalInstr instr = case instr of
  Pop    -> modify tail
  Push n -> modify (n :)
  Puts   -> do
    tos <- gets head
    tell [tos]

eval :: Comp ()
eval = do
  instr <- ask
  case instr of
    []       -> return ()
    (i : is) -> evalInstr i >> local (const is) eval

execVM :: Program -> Output
execVM = flip evalState [] . execWriterT . runReaderT (unComp eval)

program :: Program
program = [
    Push 42,
    Push 27,
    Puts,
    Pop,
    Puts,
    Pop
  ]

main :: IO ()
main = mapM_ print $ execVM program
Pattern matching on a newtype constructor compiles into nothing. For example, the extractB function below does not scrutinize the MkB constructor like extractA does, because MkB does not exist at runtime; it is purely a compile-time construct.
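A reconstruction of the example (the original code block is missing here, so the definitions below are our best guess at its shape):

```haskell
data A = MkA Int
newtype B = MkB Int

-- Must inspect the MkA constructor at runtime.
extractA :: A -> Int
extractA (MkA x) = x

-- Compiles to the identity function; MkB leaves no trace at runtime.
extractB :: B -> Int
extractB (MkB x) = x
```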
Efficiency
The second monad transformer law guarantees that sequencing consecutive lift operations is semantically equivalent to lifting the results into the outer monad.
Although they are guaranteed to yield the same result, the operation of lifting the results between the monad levels is not without cost and crops up frequently when working with the monad traversal and looping functions. For example, all three of the functions on the left below are less efficient than the right hand side which performs the bind in the base monad instead of lifting on each iteration.
Monad Morphisms
The base transformers package provides a MonadTrans class for lifting into another monad. But oftentimes we need to work with and manipulate our monad transformer stack: to produce new transformers, modify existing ones, or extend an upstream library with new layers. The mmorph library provides the capacity to compose monad morphism transformations directly on transformer stacks. This is achieved primarily by use of the hoist function, which maps a function over a base monad into a function over a transformed monad.
Hoist takes a monad morphism (a mapping from m a to n a) and applies it to the inner monad of a transformer stack, transforming the value under the outer layer.
The monad morphism generalize takes an Identity monad into any other monad m.
For example, it generalizes State s a
(which is StateT s Identity a
) to StateT s m a
.
So we can generalize an existing transformer to lift an IO layer onto it.
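Sketching generalize and its use with transformers (we reimplement it here rather than depending on mmorph; hoist specialized to StateT coincides with transformers' mapStateT):

```haskell
import Control.Monad.Trans.State
import Data.Functor.Identity (Identity, runIdentity)

-- mmorph's generalize, reimplemented for illustration.
generalize :: Monad m => Identity a -> m a
generalize = pure . runIdentity

-- Generalize a pure State computation so an IO (or any) layer fits beneath it.
generalizeState :: Monad m => State s a -> StateT s m a
generalizeState = mapStateT generalize
```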
See:
Effect Systems
The mtl model has several properties which make it suboptimal from a theoretical perspective. Although it is used widely in production Haskell we will discuss its shortcomings and some future models called effect systems.
Extensibility
When you add a new custom transformer inside your business logic, you will typically have to derive a large number of boilerplate instances to compose it with an existing mtl transformer stack. For example, adding a MonadReader instance requires writing n undecidable instances that do nothing but lift. You can see this massive boilerplate all over the design of the mtl library and its transitive dependencies.
This is called the n^{2} instance problem, or the instance boilerplate problem, and it remains an open problem of mtl.
Composing Transformers
Effects don't generally commute from a theoretical perspective, and as such monad transformer composition is not in general commutative. For example, stacking State and Except is not commutative:
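A sketch of the asymmetry using mtl (inner and outer are our names): the ordering of the layers decides whether the final state survives an error.

```haskell
import Control.Monad.Except
import Control.Monad.State

-- StateT outermost, Except innermost: the state is discarded on error.
inner :: StateT Int (Except String) ()
inner = put 1 >> throwError "boom"

-- ExceptT outermost, State innermost: the state survives the error.
outer :: ExceptT String (State Int) ()
outer = put 1 >> throwError "boom"

-- runExcept (runStateT inner 0)  ==  Left "boom"
-- runState (runExceptT outer) 0  ==  (Left "boom", 1)
```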
In addition, the standard method of deriving mtl classes for a transformer stack breaks down when using transformer stacks with the same monad at different layers of the stack. For example stacking multiple State
transformers is a pattern that shows up quite frequently.
In order to get around this you would have to handwrite the instances for this transformer stack and manually lift anytime you perform a State action. This is a suboptimal design and difficult to route around without massive boilerplate.
While these problems exist, most users of mtl don't implement new transformers at all and can get by. However, in recent years many other libraries have been written that explore the design space of alternative effect modeling systems. These systems are still quite young compared to mtl, but some are able to avoid its shortcomings in favour of newer algebraic models of effects. The two most commonly used libraries are:
polysemy
fused-effects
Polysemy
Polysemy is a new effect system library based on the free-monad approach to modeling effects. The library uses modern type system features to model effects on top of a Sem monad. The monad has a member constraint type which constrains a parameter r by a type-level list of effects for the given unit of computation.
For example, we can seamlessly mix and match error handling, tracing, and stateful updates inside of one computation without the need to create a layered monad. This would look something like the following:
These effects can then be evaluated using an interpreter function, which unrolls and potentially evaluates the effects of the Sem free monad. Some of these interpreters for tracing, state, and error are similar to the evaluators for monad transformers, but evaluate one layer of the type-level effect list at a time.
The resulting Sem monad, once its effects are interpreted, can then be lowered into a single resulting monad such as IO or Either.
The library provides a rich set of effects that can replace many uses of monad transformers.
Polysemy.Async  Asynchronous computations
Polysemy.AtomicState  Atomic operations
Polysemy.Error  Error handling
Polysemy.Fail  Computations that fail
Polysemy.IO  Monadic IO
Polysemy.Input  Input effects
Polysemy.Output  Output effects
Polysemy.NonDet  Nondeterminism effect
Polysemy.Reader  Contextual state a la the Reader monad
Polysemy.Resource  Resources with finalizers
Polysemy.State  Stateful effects
Polysemy.Trace  Tracing effect
Polysemy.Writer  Accumulation effect a la the Writer monad
For example, here is a simple stateful computation using only a single effect.
And a more complex example which combines multiple effects:
Polysemy will require the following language extensions to operate:
The use of free monads is not entirely without cost, and there are experimental GHC plugins which can abstract away some of the overhead of the effect stack. Code that makes use of polysemy should enable the following GHC flags for aggressive typeclass specialisation:
-flate-specialise
-fspecialise-aggressively
Fused Effects
Fused-effects is an alternative approach to effect systems based on an algebraic effects model. Unlike polysemy, fused-effects does not use a free monad as an intermediate form. Fused-effects has competitive performance compared with mtl and doesn't require additional GHC plugins or compiler fusion rules to optimise away the abstraction overhead.
The fused-effects library exposes a constraint kind called Has which annotates a type signature containing effectful logic. In this signature, m is called the carrier for the sig effect signature containing the eff effect.
For example, the traditional State effect is modeled by the following datatype with three parameters: the s parameter is the state object, m is the effect parameter, and the third parameter is the continuation. This exposes the same interface as Control.Monad.State except for the Has constraint.
The Carrier
for the State effect is defined as StateC
and the evaluators for the state carrier are defined in the same interface as mtl
except they evaluate into a result containing the effect parameter m
.
The evaluators for the effect lift monadic actions from an effectful computation.
Fused-effects requires the following language extensions to operate.
Minimal Example
A minimal example using the State
effect to track stateful updates to a single integral value.
The evaluation of this monadic state block results in an m Integer with the Algebra and Effect context. This can then be evaluated into either Identity or IO using run.
Composite Effects
Consider a more complex example which combines exceptions via the Throw effect with State. Importantly, note that the functions runThrow and evalState cannot infer the state type from the signature alone and thus require additional annotations. This differs from mtl, which typically has better inference.
Philosophy
Haskell takes a drastically different approach to language design than most other languages as a result of being the synthesis of input from industrial and academic users. GHC allows the core language itself to be extended with a vast range of optin flags which change the semantics of the language on a permodule or perproject basis. While this does add a lot of complexity at first, it also adds a level of power and flexibility for the language to evolve at a pace that is unrivaled in the broader space of programming language design.
Classes
It’s important to distinguish between different classes of GHC language extensions: general and specialized.
The inherent problem with classifying extensions into general and specialized categories is that it is a subjective classification. Haskellers who do theorem proving research will have a very different interpretation of Haskell than people who do web programming. Thus, we will use the following classifications:
 Benign implies both that importing the extension won’t change the semantics of the module if not used and that enabling it makes it no easier to shoot yourself in the foot.
 Historical implies that one shouldn’t use this extension, it is in GHC purely for backwards compatibility. Sometimes these are dangerous to enable.
 Steals syntax means that enabling this extension causes certain code that is valid in vanilla Haskell to no longer be accepted. For example, f $(a) is the same as f $ (a) in Haskell98, but TemplateHaskell will interpret $(a) as a splice.
The golden source of truth for language extensions is the official GHC user’s guide which contains a plethora of information on the details of these extensions.
Extension Dependencies
Some language extensions will implicitly enable other language extensions for their operation. The table below shows the dependencies between various extensions and which sets are implied.
Extension  Implies
TypeFamilyDependencies  TypeFamilies
TypeInType  PolyKinds, DataKinds, KindSignatures
PolyKinds  KindSignatures
ScopedTypeVariables  ExplicitForAll
RankNTypes  ExplicitForAll
ImpredicativeTypes  RankNTypes
TemplateHaskell  TemplateHaskellQuotes
Strict  StrictData
RebindableSyntax  NoImplicitPrelude
TypeOperators  ExplicitNamespaces
LiberalTypeSynonyms  ExplicitForAll
ExistentialQuantification  ExplicitForAll
GADTs  MonoLocalBinds, GADTSyntax
DuplicateRecordFields  DisambiguateRecordFields
RecordWildCards  DisambiguateRecordFields
DeriveTraversable  DeriveFoldable, DeriveFunctor
MultiParamTypeClasses  ConstrainedClassMethods
DerivingVia  DerivingStrategies
FunctionalDependencies  MultiParamTypeClasses
FlexibleInstances  TypeSynonymInstances
TypeFamilies  MonoLocalBinds, KindSignatures, ExplicitNamespaces
IncoherentInstances  OverlappingInstances
The Benign
It’s not obvious which extensions are the most common but it’s fairly safe to say that these extensions are benign and are safely used extensively:
 NoImplicitPrelude
 OverloadedStrings
 LambdaCase
 FlexibleContexts
 FlexibleInstances
 GeneralizedNewtypeDeriving
 TypeSynonymInstances
 MultiParamTypeClasses
 FunctionalDependencies
 NoMonomorphismRestriction
 GADTs
 BangPatterns
 DeriveGeneric
 DeriveAnyClass
 DerivingStrategies
 ScopedTypeVariables
The Advanced
These extensions are typically used by advanced projects that push the limits of what is possible with Haskell to enforce complex invariants and very typesafe APIs.
 PolyKinds
 DataKinds
 DerivingVia
 GADTs
 RankNTypes
 ExistentialQuantification
 TypeFamilies
 TypeOperators
 TypeApplications
 UndecidableInstances
The Lowlevel
These extensions are typically used by lowlevel libraries that are striving for optimal performance or need to integrate with foreign functions and native code. Most of these are used to manipulate base machine types and interface directly with the lowlevel byte representations of data structures.
 CPP
 BangPatterns
 CApiFFI
 Strict
 StrictData
 RoleAnnotations
 ForeignFunctionInterface
 InterruptibleFFI
 UnliftedFFITypes
 MagicHash
 UnboxedSums
 UnboxedTuples
The Dangerous
GHC’s typechecker sometimes casually tells us to enable language extensions when it can’t solve certain problems. Unless you know what you’re doing, these extensions almost always indicate a design flaw and shouldn’t be turned on to remedy the error at hand, as much as GHC might suggest otherwise!
 AllowAmbiguousTypes
 DatatypeContexts
 OverlappingInstances
 IncoherentInstances
 ImpredicativeTypes
NoMonomorphismRestriction
The NoMonomorphismRestriction extension allows us to disable the monomorphism restriction typing rule GHC uses by default. See monomorphism restriction.
For example, if we load the following module into GHCi
And then we attempt to call the function bar
with a Double, we get a type error:
The problem is that GHC has inferred an overly specific type:
We can prevent GHC from specializing the type with this extension:
Now everything will work as expected:
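A sketch of the whole interaction in one module (plus1 is our example binding):

```haskell
{-# LANGUAGE NoMonomorphismRestriction #-}

-- Without the extension (and without a type signature), the monomorphism
-- restriction plus defaulting would pin this binding to Integer -> Integer.
-- With it, GHC keeps the general type: Num a => a -> a.
plus1 = (+ 1)
```

With the extension enabled, plus1 (1 :: Int) and plus1 (1.5 :: Double) both typecheck.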
ExtendedDefaultRules
In the absence of explicit type signatures, Haskell normally resolves ambiguous literals using several defaulting rules. When an ambiguous literal is typechecked, if at least one of its typeclass constraints is numeric and all of its classes are standard library classes, the module’s default list is consulted, and the first type from the list that will satisfy the context of the type variable is instantiated. For instance, given the following default rules
The following set of heuristics is used to determine what to instantiate the ambiguous type variable to.
 The type variable a appears in no other constraints.
 All the classes Ci are standard.
 At least one of the classes Ci is numerical.
The standard default definition is implicitly defined as default (Integer, Double).
This is normally fine, but sometimes we'd like more granular control over defaulting. The -XExtendedDefaultRules flag loosens the restriction that we are limited to numeric typeclasses and to standard library classes. For example, if we'd like our string literals (using -XOverloadedStrings) to automatically default to the more efficient Text implementation instead of String, we can twiddle the flag and GHC will perform the right substitution without the need for an explicit annotation on every string literal.
For code typed at the GHCi prompt, the -XExtendedDefaultRules flag is always on and cannot be switched off.
Safe Haskell
The Safe Haskell language extensions allow us to restrict the use of unsafe language features. -XSafe restricts the import of modules which are themselves marked as Safe. It also forbids the use of certain language extensions (such as -XTemplateHaskell) which can be used to produce unsafe code. The primary use case of these extensions is security auditing of codebases for compliance purposes.
See: Safe Haskell
PartialTypeSignatures
Normally a function is either given a full explicit type signature or none at all. The partial type signature extension allows something in between.
Partial types may be used to avoid writing uninteresting pieces of the signature, which can be convenient in development:
If the -Wpartial-type-signatures GHC option is set, partial types will still trigger warnings.
See:
RecursiveDo
Recursive do notation allows for the use of selfreference expressions on both sides of a monadic bind. For instance the following example uses lazy evaluation to generate an infinite list. This is sometimes used to instantiate a cyclic datatype inside a monadic context where the datatype needs to hold a reference to itself.
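A small sketch (justOnes is our example name): xs appears on both sides of its own bind, and laziness ties the knot.

```haskell
{-# LANGUAGE RecursiveDo #-}

justOnes :: Maybe [Int]
justOnes = mdo
  xs <- Just (1 : xs)   -- xs refers to itself on the right-hand side
  pure (map negate xs)  -- an infinite list of -1s
```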
ApplicativeDo
By default GHC desugars do-notation to use implicit invocations of bind and return. With normal monad sugar, the following…
… desugars into:
With ApplicativeDo
this instead desugars into use of applicative combinators and a laxer Applicative constraint.
Which is equivalent to the traditional notation.
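A sketch (addF is our name): because the two statements are independent and the block ends in pure, GHC needs only an Applicative constraint here.

```haskell
{-# LANGUAGE ApplicativeDo #-}

-- Desugars to (\a b -> a + b) <$> fa <*> fb rather than nested binds.
addF :: Applicative f => f Int -> f Int -> f Int
addF fa fb = do
  a <- fa
  b <- fb
  pure (a + b)
```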
PatternGuards
Pattern guards are an extension to the pattern matching syntax. Given a <- pattern qualifier, the right-hand side is evaluated and matched against the pattern on the left. If the match fails, then the whole guard fails and the next equation is tried. If it succeeds, then the appropriate binding takes place, and the next qualifier is matched.
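A sketch (combine is our name): each <- qualifier must match for the first equation's guard to succeed.

```haskell
{-# LANGUAGE PatternGuards #-}

combine :: [(String, Int)] -> String -> String -> Maybe Int
combine env x y
  | Just a <- lookup x env
  , Just b <- lookup y env = Just (a + b)
  | otherwise              = Nothing
```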
ViewPatterns
View patterns are like pattern guards that can be nested inside of other patterns. They are a convenient way of pattern-matching against values of algebraic data types.
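A sketch (sizeName is ours): the function to the left of -> is applied to the argument, and its result is matched against the pattern on the right.

```haskell
{-# LANGUAGE ViewPatterns #-}

sizeName :: [a] -> String
sizeName (length -> 0) = "empty"
sizeName (length -> 1) = "singleton"
sizeName _             = "many"
```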
TupleSections
The TupleSections syntax extension allows tuples to be constructed similarly to operator sections. With this extension enabled, tuples of arbitrary size can be "partially" specified, with commas marking the missing positions. For example, for a 2-tuple:
An example for a 7tuple where three values are specified in the section.
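Both arities sketched together (the names are ours):

```haskell
{-# LANGUAGE TupleSections #-}

-- A 2-tuple with the first position filled.
pairWithOne :: a -> (Int, a)
pairWithOne = (1,)

-- A 7-tuple section with three positions filled; the four holes become
-- arguments, in order.
seven :: b -> d -> e -> g -> (Int, b, Int, d, e, Int, g)
seven = (1,,3,,,6,)
```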
Postfix Operators
The postfix operators extension allows user-defined operators that are placed after expressions. For example, using this extension, we could define a postfix factorial function.
MultiWayIf
Multi-way if expands traditional if statements to allow pattern match conditions that are equivalent to a chain of if-then-else statements. This allows us to write "pattern matching predicates" on a value, and alters the syntax of the Haskell language.
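A sketch (bmiLabel is our example):

```haskell
{-# LANGUAGE MultiWayIf #-}

bmiLabel :: Double -> String
bmiLabel bmi = if
  | bmi < 18.5 -> "underweight"
  | bmi < 25.0 -> "normal"
  | otherwise  -> "overweight"
```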
EmptyCase
GHC normally requires at least one pattern branch in a case statement; this restriction can be relaxed with the EmptyCase language extension. The case statement then immediately yields a Non-exhaustive patterns in case error if evaluated. For example, the following will compile using this language pragma:
LambdaCase
For case statements, the language extension LambdaCase allows the elimination of redundant free variables introduced purely to be scrutinized in a pattern match.
Without LambdaCase:
With LambdaCase:
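Since the original code blocks are missing here, a sketch of both versions (the describe names are ours):

```haskell
{-# LANGUAGE LambdaCase #-}

-- Without LambdaCase a throwaway variable is needed:
describe1 :: Maybe Int -> String
describe1 = \x -> case x of
  Nothing -> "none"
  Just n  -> "got " ++ show n

-- With LambdaCase the scrutinee variable disappears:
describe2 :: Maybe Int -> String
describe2 = \case
  Nothing -> "none"
  Just n  -> "got " ++ show n
```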
NumDecimals
The extension NumDecimals
allows the use of exponential notation for integral literals that are not necessarily floats. Without it, any use of exponential notation induces a Fractional class constraint.
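A sketch:

```haskell
{-# LANGUAGE NumDecimals #-}

-- Without NumDecimals, 1e9 would demand a Fractional type.
billion :: Integer
billion = 1e9
```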
PackageImports
The syntax language extension PackageImports
allows us to disambiguate hierarchical package names by their respective package key. This is useful in the case where you have two imported packages that expose the same module. In practice most of the common libraries have taken care to avoid conflicts in the namespace and this is not usually a problem in most modern Haskell.
For example, we could explicitly ask GHC that the Control.Monad.Error module be drawn from the mtl library.
RecordWildCards
Record wildcards allow us to bring the fields of a record into scope as variables named after the record's labels. The extension can be used both to extract fields into scope and to assign record fields from variables of the same names already in scope. The syntax introduced is the {..} pattern selector, as in the following example:
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE OverloadedStrings #-}

import Data.Text

data Example = Example
  { e1 :: Int
  , e2 :: Text
  , e3 :: Text
  } deriving (Show)

-- Extracting from a record using wildcards.
scope :: Example -> (Int, Text, Text)
scope Example {..} = (e1, e2, e3)

-- Assigning to a record using wildcards.
assign :: Example
assign = Example {..}
  where
    (e1, e2, e3) = (1, "Kirk", "Picard")
NamedFieldPuns
NamedFieldPuns
provides alternative syntax for accessing record fields in a pattern match.
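A sketch (Vec and norm2 are ours): the pun x binds a variable named after the field.

```haskell
{-# LANGUAGE NamedFieldPuns #-}

data Vec = Vec { x :: Int, y :: Int }

norm2 :: Vec -> Int
norm2 Vec {x, y} = x * x + y * y
```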
PatternSynonyms
Suppose we were writing a typechecker and needed to parse type signatures. One common solution would be to include a TArr constructor to pattern match on type function signatures, even though technically it could be written in terms of more basic applications of the (->) constructor.
With pattern synonyms we can eliminate the extraneous constructor without losing the convenience of pattern matching on arrow types. We introduce a new pattern using the pattern
keyword.
So now we can write a deconstructor and constructor for the arrow type very naturally.
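A sketch of the whole setup (the Type datatype below is our guess at the original's shape):

```haskell
{-# LANGUAGE PatternSynonyms #-}

data Type
  = TVar String
  | TCon String
  | TApp Type Type
  deriving (Show, Eq)

-- A bidirectional synonym: usable both to construct and to match arrows.
pattern TArr :: Type -> Type -> Type
pattern TArr a b = TApp (TApp (TCon "->") a) b

-- Walk to the final return type of a function signature.
returnType :: Type -> Type
returnType (TArr _ b) = returnType b
returnType t          = t
```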
Pattern synonyms can be exported from a module like any other definition by prefixing them with the keyword pattern.
DeriveFunctor
Many instances of functors over datatypes with parameters and trivial constructors are the result of trivially applying a function over the single constructor’s argument. GHC can derive this boilerplate automatically in deriving clauses if DeriveFunctor
is enabled.
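A minimal sketch:

```haskell
{-# LANGUAGE DeriveFunctor #-}

data Pair a = Pair a a
  deriving (Show, Eq, Functor)

-- fmap (+ 1) (Pair 1 2)  ==  Pair 2 3
```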
DeriveFoldable
Similar to how Functors can be automatically derived, many instances of Foldable for types of kind * -> * have instances that derive the functions:
foldMap
foldr
null
For instance, if we have custom rose tree and binary tree implementations, we can have the fold functions for these datatypes derived automatically for us.
These will generate the following instances:
DeriveTraversable
Just as with Functor and Foldable, many Traversable instances for single-parameter datatypes of kind * -> * have trivial implementations of the traverse function, which can also be derived automatically. By enabling DeriveTraversable we can use stock deriving to derive these instances for us.
DeriveGeneric
Generic instances for data types in Haskell can be derived by GHC with the DeriveGeneric extension, which is able to define the entire structure of the Generic instance and associated type families. See Generics for more details on what these types mean.
For example the simple custom List type deriving Generic:
Will generate the following Generic
instance:
DeriveAnyClass
With -XDeriveAnyClass we can derive any class. The deriving logic generates an instance declaration for the type with no explicitly-defined methods, or with every method given a specific default implementation. These are used extensively with Generics, when classes provide empty minimal annotations and all methods are derived from generic logic.
A contrived example of a class with an empty minimal set might be the following:
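For instance (Greet and Person are our names):

```haskell
{-# LANGUAGE DeriveAnyClass #-}

-- A class whose single method has a default, so its minimal set is empty.
class Greet a where
  greet :: a -> String
  greet _ = "hello"

-- The derived instance is empty and falls back to the default method.
data Person = Person
  deriving (Greet)
```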
DuplicateRecordFields
GHC 8.0 introduced the DuplicateRecordFields extension, which loosens GHC's restriction on records in the same module with identical accessors. The precise type that is being projected is now deferred to the call site.
Using just DuplicateRecordFields
, projection is still not supported so the following will not work.
OverloadedLabels
GHC 8.0 also introduced the OverloadedLabels
extension which allows a limited form of polymorphism over labels that share the same name.
To work with overloaded label types we also need to enable several language extensions that allow us to use the promoted strings and multi-parameter typeclasses that underlie its implementation.
This is used in more advanced libraries like Selda which do object relational mapping between Haskell datatype fields and database columns.
See:
CPP
The C preprocessor is the fallback whenever we really need to separate out logic that has to span multiple versions of GHC and language changes while maintaining backwards compatibility. It can dispatch on the version of GHC being used to compile a module.
It can also demarcate code based on the operating system compiled on.
For another example, it can distinguish the version of the base library used.
One can also use the CPP extension to emit Haskell source at compiletime. This is used in some libraries which have massive boilerplate obligations. Of course, this can be abused quite easily and doing this sort of compiletime stringmunging should be a last resort.
TypeApplications
The type system extension TypeApplications
allows you to use explicit type annotations for subexpressions. For example, if you have a subexpression with the inferred type a -> b -> a
you can name the types of a
and b
by explicitly stating @Int @Bool
to assign a
to Int
and b
to Bool
. This is particularly useful when working with typeclasses where type inference cannot deduce the types of all subexpressions from the toplevel signature and results in an overly specific default. This is quite common when working with roundtrips of read
and show
. For example:
DerivingVia
DerivingVia
is an extension of GeneralizedNewtypeDeriving
. Just as newtype deriving allows us to derive instances in terms of instances for the underlying representation of the newtype, DerivingVia allows deriving instances by specifying a custom type which has a runtime representation equal to the desired behavior we’re deriving the instance for. The derived instance can then be coerced
to behave as if it were operating over the given type. This is a powerful new mechanism that allows us to derive many typeclasses in terms of other typeclasses.
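A sketch (Score is our example): the newtype borrows its Semigroup and Monoid behaviour from Sum Int, to which it is representationally equal.

```haskell
{-# LANGUAGE DerivingVia #-}

import Data.Monoid (Sum (..))

newtype Score = Score Int
  deriving stock (Show, Eq)
  -- Coerce the Semigroup/Monoid instances of Sum Int onto Score:
  deriving (Semigroup, Monoid) via Sum Int
```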
DerivingStrategies
Deriving has proven a powerful mechanism to add typeclass instances and as such there have been a variety of bifurcations in its use. Since GHC 8.2 there are now four different algorithms that can be used to derive typeclass instances. These are enabled by different extensions and now have specific syntax for invoking each algorithm specifically. Turning on DerivingStrategies
allows you to disambiguate which algorithm GHC should use for individual class derivations.
- `stock` - Standard GHC builtin deriving (i.e. `Eq`, `Ord`, `Show`)
- `anyclass` - Deriving via minimal annotations with DeriveAnyClass.
- `newtype` - Deriving with GeneralizedNewtypeDeriving.
- `via` - Deriving with DerivingVia.
These can be stacked and combined on top of a data or newtype declaration.
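A sketch of two strategies stacked on one hypothetical newtype:

```haskell
{-# LANGUAGE DerivingStrategies #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- stock derives structural Show/Eq instances; newtype reuses the
-- underlying Num instance for Double.
newtype Meters = Meters Double
  deriving stock (Show, Eq)
  deriving newtype (Num)
```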
Historical Extensions
Several language extensions have either been absorbed into the core language or become deprecated in favor of others. Others are just considered misfeatures.
- Rank2Types - Has been subsumed by RankNTypes.
- XPolymorphicComponents - Was an implementation detail of higher-rank polymorphism that no longer exists.
- NPlusKPatterns - These were largely considered an ugly edge-case of the pattern matching language that was best removed.
- TraditionalRecordSyntax - Traditional record syntax was an extension to the Haskell 98 specification for what we now consider standard record syntax.
- OverlappingInstances - Subsumed by explicit OVERLAPPING pragmas.
- IncoherentInstances - Subsumed by explicit INCOHERENT pragmas.
- NullaryTypeClasses - Subsumed by explicit MultiParamTypeClasses with no parameters.
- TypeInType - Deprecated in favour of the combination of PolyKinds and DataKinds and extensions to the GHC typesystem after GHC 8.0.
Typeclasses are the bread and butter of abstractions in Haskell, and even out of the box in Haskell 98 they are quite powerful. However classes have grown quite a few extensions, additional syntax and enhancements over the years to augment their utility.
Standard Hierarchy
In the course of writing Haskell there are seven core instances you will use and derive most frequently. Each of them are lawful classes with several equations associated with their methods.
Semigroup
Monoid
Functor
Applicative
Monad
Foldable
Traversable
Instance Search
Whenever a typeclass method is invoked at a callsite, GHC will perform an instance search over all available instances defined for the given typeclass associated with the method. This instance search is quite similar to backward chaining in logic programming languages. The search is performed during compilation after all types in all modules are known and is performed globally across all modules and all packages available to be linked. The instance search can either result in no instances, a single instance or multiple instances which satisfy the conditions of the call site.
Orphan Instances
Normally typeclass instances are restricted to be defined in one of two places:
 In the same module as the declaration of the datatype in the instance head.
 In the same module as the class declaration.
These two restrictions constrain the instance search space to a system where a solution (if it exists) can always be found. If we allowed instances to be defined in any module, we could potentially have multiple class instances defined in multiple modules and the search would be ambiguous.
The warning GHC emits for such instances can however be silenced with the `-fno-warn-orphans` flag.
This will allow you to define orphan instances in the current module. But beware this will make the instance search contingent on your import list and may result in clashes in your codebase where the linker will fail because there are multiple modules which define the same instance head.
When used appropriately this can be the way to route around the fact that upstream modules may define datatypes that you use, but they have not defined the instances for other downstream libraries that you also use. You can then write these instances for your codebase without modifying either upstream library.
Minimal Annotations
In the presence of default implementations for typeclass methods, there may be several ways to implement a typeclass. For instance, Eq is entirely defined by either equality or non-equality, since each can be defined as the negation of the other. We can define equality in terms of non-equality and vice-versa.

Before GHC 7.6.1 there was no way to specify which "minimal" definition was required to implement a typeclass. The MINIMAL pragma now makes this explicit.

Minimal pragmas are boolean expressions. For instance, with `|` as logical OR, either definition of the above functions must be defined. A comma indicates logical AND, where both definitions must be defined.

Compiling with the -Wmissing-methods flag will warn when an instance is defined that does not meet the minimal criterion.
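A sketch using a hypothetical `Equal` class, where the MINIMAL pragma records that defining either method (with `|` as logical OR) yields a complete instance:

```haskell
-- eq and neq each default to the negation of the other, so a
-- lawful instance needs only one of them.
class Equal a where
  eq  :: a -> a -> Bool
  neq :: a -> a -> Bool
  eq  x y = not (neq x y)
  neq x y = not (eq x y)
  {-# MINIMAL eq | neq #-}

-- Only eq is given; neq falls back to its default.
instance Equal Int where
  eq = (==)
```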
TypeSynonymInstances
Normally typeclass instances are restricted to being defined only over fully expanded types with all type synonym indirections removed. Type synonyms introduce a "naming indirection" that can be included in the instance search, allowing you to write instances for synonyms which expand to concrete types.
This is used quite often in modern Haskell.
FlexibleInstances
Normally the head of a typeclass instance must contain only a type constructor applied to any number of type variables. There can be no nesting of other constructors or nontype variables in the head. The FlexibleInstances
extension loosens this restriction to allow arbitrary nesting and nontype variables to be mentioned in the head definition. This extension also implicitly enables TypeSynonymInstances
.
FlexibleContexts
Just as with instances, contexts are normally constrained to consist entirely of constraints where a class is applied to just type variables. The FlexibleContexts extension lifts this restriction and allows any type of type variable and nesting to occur in the class constraint head. There is however still a global restriction that all class hierarchies must not contain cycles.
OverlappingInstances
Typeclasses are normally globally coherent: there is only ever one instance that can be resolved for a type unambiguously at any call site in the program. There are however extensions to loosen this restriction and direct the instance search more manually.

Overlapping instances loosen the coherence condition (there can be multiple instances) but introduce a criterion: the search will resolve to the most specific one.
Historically, enabling overlap at the module level was not the best idea, since we generally define multiple classes in a module, only a subset of which may be incoherent. As of GHC 7.10 we can instead annotate individual instances with the OVERLAPPING
and INCOHERENT
inline pragmas.
IncoherentInstances
Incoherent instances loosen the restriction that there be only one most specific instance. The instance is chosen based on a more complex search procedure which tries to identify a prime instance based on information incorporated from OVERLAPPING pragmas on instances in the search tree. Unless one is doing very advanced typelevel programming with class constraints, this is usually a poor design decision and a sign to rethink the class hierarchy.
An example with INCOHERENT
annotations:
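A sketch with a hypothetical `Label` class, where an INCOHERENT catch-all instance is discarded whenever a non-incoherent candidate matches:

```haskell
{-# LANGUAGE FlexibleInstances #-}

class Label a where
  label :: a -> String

-- The catch-all instance is marked INCOHERENT: GHC may drop it
-- from consideration when any other candidate (like Label Int) matches.
instance {-# INCOHERENT #-} Label a where
  label _ = "something"

instance Label Int where
  label _ = "an Int"
```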
Haskell is a unique language that explores an alternative evaluation model called lazy evaluation. Lazy evaluation implies that expressions will be evaluated only when needed. In truth, this evaluation may even be indefinitely deferred. Consider the example in Haskell of defining an infinite list:
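A sketch of such an infinite list; under lazy evaluation only the demanded prefix is ever constructed:

```haskell
-- An infinite list of ones. Forcing the whole list would diverge,
-- but take only demands a finite prefix.
ones :: [Int]
ones = 1 : ones

firstFive :: [Int]
firstFive = take 5 ones
```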
The primary advantage of lazy evaluation in the large is that algorithms that operate over both unbounded and bounded data structures can inhabit the same type signatures and be composed without any additional need to restructure their logic or force intermediate computations.
Still, it’s important to recognize that this is another subject on which much ink has been spilled. In fact, there is an ongoing discussion in the land of Haskell about the compromises between lazy and strict evaluation, and there are nuanced arguments for having either paradigm be the default.
Haskell takes a hybrid approach where it allows strict evaluation when needed while it uses laziness by default. Needless to say, we can always find examples where strict evaluation exhibits worse behavior than lazy evaluation and vice versa. These days Haskell can be both as lazy or as strict as you like, giving you options for however you prefer to program.
Languages that attempt to bolt laziness on to a strict evaluation model often bifurcate classes of algorithms into ones that are handadjusted to consume unbounded structures and those which operate over bounded structures. In strict languages, mixing and matching between lazy vs. strict processing often necessitates manifesting large intermediate structures in memory when such composition would “just work” in a lazy language.
By virtue of Haskell being the only language to actually explore this point in the design space, knowledge about lazy evaluation is not widely absorbed into the collective programmer consciousness and can often be non-intuitive to the novice. Some time is often needed to fully grok how lazy evaluation works.
Strictness
For a more precise definition of strictness, consider that there are several evaluation models for the lambda calculus:

- Strict - Evaluation is said to be strict if all arguments are evaluated before the body of a function.
- Non-strict - Evaluation is non-strict if the arguments are not necessarily evaluated before entering the body of a function.

These ideas give rise to several models. Haskell itself uses the call-by-need model.

| Model         | Strictness | Description                                                           |
| ------------- | ---------- | --------------------------------------------------------------------- |
| Call-by-value | Strict     | Arguments evaluated before function entered                            |
| Call-by-name  | Non-strict | Arguments passed unevaluated                                           |
| Call-by-need  | Non-strict | Arguments passed unevaluated but an expression is only evaluated once  |
Seq and WHNF
On the subject of laziness and evaluation, we have names for how fully evaluated an expression is. A term is said to be in weak head normal form if the outermost constructor or lambda expression cannot be reduced further. A term is said to be in normal form if it is fully evaluated and all subexpressions and thunks contained within are evaluated.
In Haskell, normal evaluation only occurs at the outer constructor of casestatements in Core. If we pattern match on a list, we don’t implicitly force all values in the list. An element in a data structure is only evaluated up to the outermost constructor. For example, to evaluate the length of a list we need only scrutinize the outer Cons constructors without regard for their inner values:
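A sketch demonstrating this: the bottoms below are never forced because `length` only walks the spine.

```haskell
-- length scrutinizes only the outer Cons constructors; the
-- undefined elements inside are never evaluated.
spineOnly :: Int
spineOnly = length [undefined, undefined, undefined]
```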
For example, in a lazy language the following program terminates even though it contains diverging terms.
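One such program, sketched minimally:

```haskell
-- const discards its second argument, so the diverging term
-- (error "diverges") is never evaluated and the program terminates.
terminates :: Int
terminates = const 42 (error "diverges")
```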
In a strict language like OCaml (ignoring its suspensions for the moment), the same program diverges.
Thunks
In Haskell a thunk is created to stand for an unevaluated computation. Evaluation of a thunk is called forcing the thunk. The result is an update, a referentially transparent effect, which replaces the memory representation of the thunk with the computed value. The fundamental idea is that a thunk is only updated once (although it may be forced simultaneously in a multithreaded environment) and its resulting value is shared when referenced subsequently.
The GHCi command :sprint
can be used to introspect the state of unevaluated thunks inside an expression without forcing evaluation. For instance:
While a thunk is being computed its memory representation is replaced with a special form known as blackhole which indicates that computation is ongoing and allows for a short circuit when a computation might depend on itself to complete.
The seq function introduces an artificial dependence on the evaluation order of two terms by requiring that the first argument be evaluated to WHNF before the second is evaluated. How seq achieves this is an implementation detail of GHC.
For one example where laziness can bite you, the infamous foldl is well-known to leak space when used carelessly and without several compiler optimizations applied. The strict foldl' variant uses seq to overcome this.
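A sketch of the strict variant in use:

```haskell
import Data.List (foldl')

-- foldl would defer every (+) in a growing chain of thunks;
-- foldl' forces the accumulator at each step and runs in
-- constant space.
total :: Int
total = foldl' (+) 0 [1 .. 100000]
```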
In practice, a combination between the strictness analyzer and the inliner on O2
will ensure that the strict variant of foldl
is used whenever the function is inlinable at call site so manually using foldl'
is most often not required.
Of important note is that GHCi runs without any optimizations applied so the same program that performs poorly in GHCi may not have the same performance characteristics when compiled with GHC.
BangPatterns
The extension BangPatterns
allows an alternative syntax to force arguments to functions to be wrapped in seq. A bang operator on an argument forces its evaluation to weak head normal form before performing the pattern match. This can be used to keep specific arguments evaluated throughout recursion instead of creating a giant chain of thunks.
This is desugared into code effectively equivalent to the following:
Function application to seq’d arguments is common enough that it has a special operator.
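A sketch with a hypothetical strict accumulator; the same forcing at an application site can be spelled with the `$!` operator (`f $! x`):

```haskell
{-# LANGUAGE BangPatterns #-}

-- The bang forces acc to WHNF before each recursive call, so no
-- chain of (+) thunks accumulates during the recursion.
sumTo :: Int -> Int -> Int
sumTo !acc 0 = acc
sumTo !acc n = sumTo (acc + n) (n - 1)
```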
StrictData
As of GHC 8.0 strictness annotations can be applied to all definitions in a module automatically. In previous versions to make definitions strict it was necessary to use explicit syntactic annotations at call sites.
Enabling StrictData makes constructor fields strict by default on any module where the pragma is enabled:
Is equivalent to:
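A sketch of the equivalence, using a hypothetical `Point` type:

```haskell
{-# LANGUAGE StrictData #-}

-- With StrictData enabled, this declaration:
data Point = Point Int Int

-- behaves as if it were written with explicit bangs:
-- data Point = Point !Int !Int
```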
Strict
Strict implies -XStrictData and extends strictness annotations to all arguments of functions.
Is equivalent to the following function declaration with explicit bang patterns:
On a modulelevel this effectively makes Haskell a callbyvalue language with some caveats. All arguments to functions are now explicitly evaluated and all data in constructors within this module are in head normal form by construction.
Deepseq
There are often times when for performance reasons we need to deeply evaluate a data structure to normal form leaving no terms unevaluated. The deepseq
library performs this task.
The typeclass NFData
(Normal Form Data) allows us to seq all elements of a structure across any subtypes which themselves implement NFData.
To force a data structure itself to be fully evaluated we share the same argument in both positions of deepseq.
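A sketch using the deepseq library (a GHC boot package), which fully evaluates its first argument before returning the second; `force x` is simply ``x `deepseq` x``:

```haskell
import Control.DeepSeq (deepseq)

-- xs is evaluated to normal form (every element forced) before
-- sum xs is returned.
fullyEvaluated :: Int
fullyEvaluated = xs `deepseq` sum xs
  where
    xs = [1, 2, 3] :: [Int]
```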
Irrefutable Patterns
A lazy pattern doesn’t require a match on the outer constructor, instead it lazily calls the accessors of the values as needed. In the presence of a bottom, we fail at the usage site instead of the outer pattern match.
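A minimal sketch: the `~` pattern never forces the tuple constructor, so even a bottom argument is accepted as long as no field is demanded.

```haskell
-- Without the ~, lazyConst undefined would crash at the pattern
-- match; with it, the match always succeeds.
lazyConst :: (Int, Int) -> Int
lazyConst ~(_, _) = 42
```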
The Debate
Laziness is a controversial design decision in Haskell. It is difficult to write production Haskell code that operates in constant memory without some insight into the evaluation model and the runtime. A lot of industrial codebases have a policy of marking all constructors as strict by default or enabling StrictData to prevent space leaks. If Haskell were being designed from scratch it probably would not choose laziness as the default model. Future implementations of Haskell compilers would not choose this point in the design space if given the option of breaking with the language specification.
There is a lot of fear, uncertainty and doubt spread about lazy evaluation that unfortunately loses the forest for the trees and ignores 30 years of advanced research on the type system. In industrial programming a lot of software is sold on the meme of being of fast instead of being correct, and lazy evaluation is an intellectually easy talking point about these upsidedown priorities. Nevertheless the colloquial perception of laziness being “evil” is a meme that will continue to persist regardless of any underlying reality because software is intrinsically a social process.
What to Avoid?
Haskell being a 30 year old language has witnessed several revolutions in the way we structure and compose functional programs. Yet as a result several portions of the Prelude still reflect old schools of thought that simply can’t be removed without breaking significant parts of the ecosystem.
Currently, which parts to use and which to avoid exists mostly in folklore, a topic that almost all introductory books don't mention; instead they make extensive use of the Prelude for simplicity's sake.
The short version of the advice on the Prelude is:
- Avoid String.
- Use `fmap` instead of `map`.
- Use Foldable and Traversable instead of the Control.Monad and Data.List versions of traversals.
- Avoid partial functions like `head` and `read`, or use their total variants.
- Avoid exceptions; use ExceptT or Either instead.
- Avoid boolean blind functions.
The instances of Foldable for the list type often conflict with the monomorphic versions in the Prelude which are left in for historical reasons. So oftentimes it is desirable to explicitly mask these functions from implicit import and force the use of Foldable and Traversable instead.
Of course oftentimes one wishes to only use the Prelude explicitly and one can explicitly import it qualified and use the pieces as desired without the implicit import of the whole namespace.
What Should be in Prelude
To get work done on industrial projects you probably need the following libraries:
text
containers
unorderedcontainers
mtl
transformers
vector
filepath
directory
process
bytestring
optparseapplicative
unix
aeson
Custom Preludes
The default Prelude can be disabled in its entirety by enabling the -XNoImplicitPrelude flag, which allows us to replace the default import entirely with a custom prelude. Many industrial projects will roll their own Prologue.hs module which replaces the legacy prelude.
For example if we wanted to build up a custom project prelude we could construct a Prologue module and dump the relevant namespaces we want from base
into our custom export list. Using the module reexport feature allows us to create an Exports
namespace which contains our Prelude’s symbols. Every subsequent module in our project will then have import Prologue
as the first import.
Preludes
There are many approaches to custom preludes. The most widely used ones are all available on Hackage.
Different preludes take different approaches to defining what the Haskell standard library should be. Some are interoperable with existing code and others require an “allin” approach that creates an ecosystem around it. Some projects are more community efforts and others are developed by consulting companies or industrial users wishing to standardise their commercial code.
In Modern Haskell there are many different perspectives on Prelude design and the degree to which more advanced ideas should be used. Which one is right for you is a matter of personal preference and constraints in your company.
Protolude
Protolude is a minimalist Prelude which provides many sensible defaults for writing modern Haskell and is compatible with existing code.
Protolude is one of the more conservative preludes and is developed by the author of this document.
See:
Partial Functions
A partial function is a function which is not defined, or does not terminate with a value, for all possible inputs. Conversely, a total function terminates and is defined for all inputs. As mentioned previously, certain historical parts of the Prelude are full of partial functions.

The difference between partial and total functions is that the compiler can't reason about the runtime safety of partial functions purely from the information specified in the language, so the proof of safety is left to the user to guarantee. They are safe to use in the case where the user can guarantee that invalid inputs cannot occur, but like any unchecked property their safety depends on the diligence of the programmer. This very much goes against the overall philosophy of Haskell and as such they are discouraged when not necessary.
A list of partial functions in the default prelude:
Partial for all inputs:

- error
- undefined
- fail (for Monad IO)

Partial for empty lists:

- head
- init
- tail
- last
- foldr1
- foldl1
- cycle
- maximum
- minimum
Partial for Nothing:

- fromJust

Partial for invalid strings:

- read

Partial for infinite lists:

- sum
- product
- reverse

Partial for negative or out-of-range numbers:

- (!)
- (!!)
- toEnum
- genericIndex
Replacing Partiality
The Prelude has total variants of the historical partial functions (e.g. Text.Read.readMaybe) in some cases, but often these are found in the various replacement preludes.
The total versions provided fall into three cases:
- May - return Nothing when the function is not defined for the inputs
- Def - provide a default value when the function is not defined for the inputs
- Note - call `error` with a custom error message when the function is not defined for the inputs. This is not safe, but slightly easier to debug!
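A sketch of the May style: `readMaybe` is the real total variant from base, while `headMay` mirrors the helper several replacement preludes provide.

```haskell
import Text.Read (readMaybe)

-- Total read: Nothing instead of a runtime crash on bad input.
parseInt :: String -> Maybe Int
parseInt = readMaybe

-- Total head: the empty-list case becomes an explicit Nothing.
headMay :: [a] -> Maybe a
headMay []      = Nothing
headMay (x : _) = Just x
```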
Boolean Blindness
Boolean blindness is a common problem found in many programming languages. Consider the following two definitions which deconstruct a Maybe value into a boolean. Is there anything wrong with the definitions below, and why is this not caught in the type system?
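A sketch of two such predicates (hypothetical names): their types are identical, so nothing stops a caller from using one where the other was intended.

```haskell
-- Both functions have type Maybe a -> Bool; at the type level
-- the proposition each Bool witnesses is indistinguishable.
isNotNull :: Maybe a -> Bool
isNotNull (Just _) = True
isNotNull Nothing  = False

isNull :: Maybe a -> Bool
isNull (Just _) = False
isNull Nothing  = True
```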
The problem with the Bool
type is that there is effectively no difference between True and False at the type level. A proposition taking a value to a Bool takes any information given and destroys it. To reason about the behavior we have to trace the provenance of the proposition we’re getting the boolean answer from, and this introduces a whole slew of possibilities for misinterpretation. In the worst case, the only way to reason about safe and unsafe use of a function is by trusting that a predicate’s lexical name reflects its provenance!
For instance, testing some proposition over a Bool value representing whether the branch can perform the computation safely in the presence of a null is subject to accidental interchange. Consider that in a language like C or Python testing whether a value is null is indistinguishable to the language from testing whether the value is not null. Which of these programs encodes safe usage and which segfaults?
From inspection we can’t tell without knowing how p is defined, the compiler can’t distinguish the two either and thus the language won’t save us if we happen to mix them up. Instead of making invalid states unrepresentable we’ve made the invalid state indistinguishable from the valid one!
The more desirable practice is to match on terms which explicitly witness the proposition as a type (often in a sum type) and won’t typecheck otherwise.
To be fair though, many popular languages completely lack the notion of sum types (the source of many woes in my opinion) and only have product types, so this type of reasoning sometimes has no direct equivalence for those not familiar with ML family languages.
In Haskell, the Prelude provides functions like isJust
and fromJust
both of which can be used to subvert this kind of reasoning and make it easy to introduce bugs and should often be avoided.
Foldable / Traversable
If coming from an imperative background retraining oneself to think about iteration over lists in terms of maps, folds, and scans can be challenging.
For a concrete example consider the simple arithmetic sequence over the binary operator (+)
:
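A sketch of that sequence as a fold:

```haskell
-- 1 + 2 + ... + 10 expressed as a right fold over (+).
sumSeq :: Int
sumSeq = foldr (+) 0 [1 .. 10]
```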
Foldable and Traversable are the general interface for all traversals and folds of any data structure which is parameterized over its element type ( List, Map, Set, Maybe, …). These two classes are used everywhere in modern Haskell and are extremely important.
A foldable instance allows us to apply functions to data types of monoidal values that collapse the structure using some logic over mappend
.
A traversable instance allows us to apply functions to data types that walk the structure lefttoright within an applicative context.
The foldMap
function is extremely general and nonintuitively many of the monomorphic list folds can themselves be written in terms of this single polymorphic function.
foldMap
takes a function of values to a monoidal quantity, a functor over the values and collapses the functor into the monoid. For instance for the trivial Sum monoid:
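A minimal sketch with the Sum monoid from base:

```haskell
import Data.Monoid (Sum (..))

-- foldMap injects every element into the Sum monoid and collapses
-- the structure with its mappend, which for Sum is addition.
totalSum :: Int
totalSum = getSum (foldMap Sum [1, 2, 3, 4])
```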
For instance if we wanted to map a list of some abstract element types into a hashtable of elements based on pattern matching we could use it.
The full Foldable class (with all default implementations) contains a variety of derived functions which themselves can be written in terms of foldMap
and Endo
.
For example:
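A sketch of recovering foldr from foldMap and Endo (the `foldrViaFoldMap` name is made up; the real default in base follows the same shape):

```haskell
import Data.Monoid (Endo (..))

-- Every element becomes the endomorphism (f x); their composition
-- is then applied to the initial value z.
foldrViaFoldMap :: Foldable t => (a -> b -> b) -> b -> t a -> b
foldrViaFoldMap f z t = appEndo (foldMap (Endo . f) t) z
```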
Most of the operations over lists can be generalized in terms of combinations of Foldable and Traversable to derive more general functions that work over all data structures implementing Foldable.
Unfortunately for historical reasons the names exported by Foldable quite often conflict with ones defined in the Prelude, either import them qualified or just disable the Prelude. The operations in the Foldable class all specialize to the same and behave the same as the ones in Prelude for List types.
The instances we defined above can also be automatically derived by GHC using several language extensions. The automatic instances are identical to the handwritten versions above.
The string situation in Haskell is a sad affair. The default String type is defined as a linked list of pointers to characters, which is an extremely pathological and inefficient way of representing textual data. Unfortunately, for historical reasons, large portions of GHC and Base depend on String.
The String problem is intrinsically linked to the fact that the default GHC Prelude provides a set of broken defaults that are difficult to change because GHC and the entire ecosystem historically depend on it. There are however high performance string libraries that can be swapped in for the broken String type, and we will discuss some ways of working with high-performance and memory-efficient replacements.
String
The default Haskell string type is implemented as a naive linked list of characters, this is hilariously terrible for most purposes but no one knows how to fix it without rewriting large portions of all code that exists, and simply nobody wants to commit the time to fix it. So it remains broken, likely forever.
However, fear not, as there are two replacement libraries for processing textual data: text and bytestring.
- text - Used for handling unicode data.
- bytestring - Used for handling ASCII data that needs to interchange with C code or network protocols.
For both text and bytestring there are two variants:

- lazy - Lazy text objects are encoded as lazy lists of strict chunks of bytes.
- strict - Byte vectors are encoded as strict Word8 arrays of bytes or code points.
Giving rise to the Cartesian product of the four common string types:
| Variant           | Module                 |
| ----------------- | ---------------------- |
| strict text       | `Data.Text`            |
| lazy text         | `Data.Text.Lazy`       |
| strict bytestring | `Data.ByteString`      |
| lazy bytestring   | `Data.ByteString.Lazy` |
String Conversions
Conversions between strings types are done with several functions across the bytestring and text libraries. The mapping between text and bytestring is inherently lossy so there is some degree of freedom in choosing the encoding. We’ll just consider utf8 for simplicity.
(From: left column, To: top row)

| From / To            | Data.Text  | Data.Text.Lazy | Data.ByteString | Data.ByteString.Lazy |
| -------------------- | ---------- | -------------- | --------------- | -------------------- |
| Data.Text            | id         | fromStrict     | encodeUtf8      | encodeUtf8           |
| Data.Text.Lazy       | toStrict   | id             | encodeUtf8      | encodeUtf8           |
| Data.ByteString      | decodeUtf8 | decodeUtf8     | id              | fromStrict           |
| Data.ByteString.Lazy | decodeUtf8 | decodeUtf8     | toStrict        | id                   |
Be careful with the functions (decodeUtf8
, decodeUtf16LE
, etc.) as they are partial and will throw errors if the byte array given does not contain unicode code points. Instead use one of the following functions which will allow you to explicitly handle the error case:
OverloadedStrings
With the XOverloadedStrings
extension string literals can be overloaded without the need for explicit packing and can be written as string literals in the Haskell source and overloaded via the typeclass IsString
. Sometimes this is desirable.
For instance:
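A sketch with a hypothetical `Name` newtype: given an IsString instance, plain string literals are converted automatically.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.String (IsString (..))

newtype Name = Name String
  deriving (Show, Eq)

-- The literal "Hello" below elaborates to fromString "Hello".
instance IsString Name where
  fromString = Name

greeting :: Name
greeting = "Hello"
```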
We can also derive IsString for newtypes using GeneralizedNewtypeDeriving
, although much of the safety of the newtype is then lost if it is used interchangeable with other strings.
Import Conventions
Since there are so many modules that provide string datatypes, and these modules are used ubiquitously, some conventions are often adopted to import them under specific agreed-upon qualified names. In many Haskell projects you will see the following social conventions used for distinguishing text types.
For datatypes:
For IO operations:
For encoding operations:
In addition many libraries and alternative preludes will define the following type synonyms:
Text
The Text
type is a packed blob of Unicode characters.
See: Text
Text.Builder
The Text.Builder allows the efficient monoidal construction of lazy Text types without having to go through inefficient forms like String or List types as intermediates.
ByteString
ByteStrings are arrays of unboxed characters with either strict or lazy evaluation.
Printf
Haskell also has a variadic printf
function in the style of C.
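A minimal sketch; printf is variadic, and the instantiated return type decides whether it prints to stdout or yields a String:

```haskell
import Text.Printf (printf)

-- Annotating the result as String makes printf return the
-- formatted text instead of printing it.
message :: String
message = printf "%s scored %d points" "Alice" (42 :: Int)
```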
Overloaded Lists
It is ubiquitous for data structure libraries to expose toList
and fromList
functions to construct various structures out of lists. As of GHC 7.8 we now have the ability to overload the list syntax in the surface language with the typeclass IsList
.
For example we could write an overloaded list instance for hash tables that simply converts to the hash table using fromList
. Some math libraries that use vectorlike structures will use overloaded lists in this fashion.
Regex
regextdfa
implements POSIX extended regular expressions. These can operate over any of the major string types and with OverloadedStrings enabled allows you to write welltyped regex expressions as strings.
Escaping Text
Haskell uses C-style single-character escape codes:

| Escape | Unicode | Character       |
| ------ | ------- | --------------- |
| `\n`   | U+000A  | newline         |
| `\0`   | U+0000  | null character  |
| `\&`   | n/a     | empty string    |
| `\'`   | U+0027  | single quote    |
| `\\`   | U+005C  | backslash       |
| `\a`   | U+0007  | alert           |
| `\b`   | U+0008  | backspace       |
| `\f`   | U+000C  | form feed       |
| `\r`   | U+000D  | carriage return |
| `\t`   | U+0009  | horizontal tab  |
| `\v`   | U+000B  | vertical tab    |
| `\"`   | U+0022  | double quote    |
String Splitting
The split package provides a variety of missing functions for splitting list and string types.
Like monads Applicatives are an abstract structure for a wide class of computations that sit between functors and monads in terms of generality.
As of GHC 7.6, Applicative is defined as:
With the following laws:
As an example, consider the instance for Maybe:
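The real instance lives in base; this standalone sketch mirrors how `<*>` behaves for Maybe:

```haskell
-- Apply the function when both sides are Just; any Nothing
-- short-circuits the whole computation.
apMaybe :: Maybe (a -> b) -> Maybe a -> Maybe b
apMaybe (Just f) (Just x) = Just (f x)
apMaybe _        _        = Nothing
```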
As a rule of thumb, whenever we would use m >>= return . f
what we probably want is an applicative functor, and not a monad.
```haskell
import Control.Applicative ((<$>), (<*>))
import Network.HTTP

example1 :: Maybe Integer
example1 = (+) <$> m1 <*> m2
  where
    m1 = Just 3
    m2 = Nothing
-- Nothing

example2 :: [(Int, Int, Int)]
example2 = (,,) <$> m1 <*> m2 <*> m3
  where
    m1 = [1, 2]
    m2 = [10, 20]
    m3 = [100, 200]
-- [(1,10,100),(1,10,200),(1,20,100),(1,20,200),(2,10,100),(2,10,200),(2,20,100),(2,20,200)]

example3 :: IO String
example3 = (++) <$> fetch1 <*> fetch2
  where
    fetch1 = simpleHTTP (getRequest "http://www.python.org/") >>= getResponseBody
    fetch2 = simpleHTTP (getRequest "http://www.haskell.org/") >>= getResponseBody
```
The pattern `f <$> a <*> b ...` shows up so frequently that there is a family of functions to lift applicatives of a fixed number of arguments. This pattern also shows up frequently with monads (liftM, liftM2, liftM3).
Applicative also has functions *>
and <*
that sequence applicative actions while discarding the value of one of the arguments. The operator *>
discards the left while <*
discards the right. For example in a monadic parser combinator library the *>
operator would run the first parser, discard its result, and return the result of the second.
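A minimal sketch of both operators over Maybe (Nothing on either side still propagates):

```haskell
keepRight, keepLeft, shortCircuit :: Maybe Int
keepRight    = Just 1 *> Just 2   -- keeps the right result
keepLeft     = Just 1 <* Just 2   -- keeps the left result
shortCircuit = Nothing *> Just 2  -- the failure still wins
```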
The Applicative functions <$>
and <*>
are generalized by liftM
and ap
for monads.
See: Applicative Programming with Effects
Alternative
Alternative is an extension of the Applicative class with a zero element and an associative binary operation respecting the zero.
These instances show up very frequently in parsers where the alternative operator can model alternative parse branches.
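A sketch of the Maybe instance, where `<|>` tries branches left to right and `empty` is the failing zero:

```haskell
import Control.Applicative (empty, (<|>))

-- The first Just encountered wins; empty and Nothing both fail.
firstSuccess :: Maybe Int
firstSuccess = empty <|> Nothing <|> Just 3 <|> Just 4
```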
Arrows
A category is an algebraic structure that includes a notion of an identity and a composition operation that is associative and preserves identities. In practice arrows are not often used in modern Haskell and are often considered a code smell.
Arrows are an extension of categories with the notion of products.
The canonical example is for functions.
In this form, functions of multiple arguments can be threaded around using the arrow combinators in a much more point-free form. For instance a histogram function has a nice one-liner.
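One plausible rendering of that one-liner, using the fan-out combinator (&&&) from Control.Arrow:

```haskell
import Control.Arrow ((&&&))
import Data.List (group, sort)

-- (&&&) fans a single input into a pair: each run of equal
-- elements is paired with its length
histogram :: Ord a => [a] -> [(a, Int)]
histogram = map (head &&& length) . group . sort

main :: IO ()
main = print (histogram "aabccc")
```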
Arrow notation
GHC has built-in syntax for composing arrows using proc notation. The following are equivalent after desugaring:
In practice this notation is not often used and may become deprecated in the future.
See: Arrow Notation
Bifunctors
Bifunctors are a generalization of functors to types parameterized by two type parameters, and include a map function for each parameter.
The bifunctor laws are a natural generalization of the usual functor laws. Namely they respect identities and composition in the usual way:
The canonical example is for 2-tuples.
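A minimal sketch of the 2-tuple and Either instances using Data.Bifunctor from base (the value names are illustrative):

```haskell
import Data.Bifunctor (bimap, second)

-- bimap maps over both slots of the tuple at once
pairBoth :: (Int, String)
pairBoth = bimap (+ 1) reverse (1, "abc")

-- first/second map only one slot; for Either, second maps the Right case
rightOnly :: Either String Int
rightOnly = second (* 2) (Right 21)

main :: IO ()
main = print (pairBoth, rightOnly)
```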
Polyvariadic Functions
One surprising application of typeclasses is the ability to construct functions which take an arbitrary number of arguments by defining instances over function types. The arguments may be of arbitrary type, but the resulting collected arguments must either be converted into a single type or unpacked into a sum type.
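A minimal sketch of the technique, collecting an arbitrary number of Int arguments into a list (the class and function names are hypothetical):

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- collect an arbitrary number of Int arguments into a single list
class Variadic r where
  collect' :: [Int] -> r

-- base case: no more arguments, return the accumulated list
instance Variadic [Int] where
  collect' = reverse

-- inductive case: absorb one more Int argument
instance Variadic r => Variadic (Int -> r) where
  collect' acc n = collect' (n : acc)

collect :: Variadic r => r
collect = collect' []

main :: IO ()
main = print (collect (1 :: Int) (2 :: Int) (3 :: Int) :: [Int])
```

Note the argument annotations: instance resolution needs the argument type to be known as Int, which is the usual caveat with this style.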
Error Handling
There are a plethora of ways of handling errors in Haskell. While Haskell's runtime supports throwing and handling exceptions, it is important to use the right method in the right context.
Either Monad
In keeping with the Haskell tradition, it is always preferable to use pure logic when possible. In many simple cases error handling can be done quite simply by using the Monad instance of Either. Monadic bind threads a Right value through the monad and “short-circuits” evaluation when a Left is introduced. This is simple error handling which privileges the Left constructor to hold the error. Many simple functions which can fail can use Either Error a in the result type to encode simple error handling.
The downside to this is that it forces every consumer of the function to pattern match on the result to handle the error case. It also assumes that all Error types can be encoded inside of the sum type holding the possible failures.
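A minimal sketch of the short-circuiting behavior (the function names are illustrative):

```haskell
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv n d = Right (n `div` d)

-- bind threads Right values through and stops at the first Left
chained :: Either String Int
chained = do
  a <- safeDiv 64 2   -- Right 32
  b <- safeDiv a 0    -- short-circuits here
  safeDiv b 4         -- never reached

main :: IO ()
main = print chained
```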
ExceptT
When using transformers-style effect stacks it is quite common to need a layer of the stack which can fail. When composing effects, a monad transformer (which is a wrapper around the Either monad) can be added which lifts the error handling into an ExceptT effect layer.
As of mtl 2.2 or higher, the ErrorT class has been replaced by ExceptT at the transformers level.
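A minimal sketch, assuming the transformers package is available; safeDiv here is an illustrative name, polymorphic in the base monad m:

```haskell
import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)

-- division that can fail, layered over any base monad m
safeDiv :: Monad m => Int -> Int -> ExceptT String m Int
safeDiv _ 0 = throwE "division by zero"
safeDiv n d = return (n `div` d)

main :: IO ()
main = do
  -- instantiated at m ~ IO here; runExceptT unwraps to Either
  ok  <- runExceptT (safeDiv 10 2)
  bad <- runExceptT (safeDiv 10 2 >>= \x -> safeDiv x 0)
  print (ok, bad)
```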
This can also be extended to the mtl MonadError class, for which we can write instances for IO and Either themselves:
See:
Control.Exception
GHC has a built-in system for propagating errors up at the runtime level, below the business logic level. These are used internally for all sorts of concurrency and system interfaces. The runtime provides built-in throw and catch functions which allow us to throw exceptions in pure code and catch the resulting exception within IO. Note that the return value of throw inhabits all types.
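A minimal sketch of this behavior (the name divZero is illustrative): the thrown value masquerades as a pure Int, and only surfaces when forced inside IO where try can catch it.

```haskell
import Control.Exception (ArithException (..), evaluate, throw, try)

-- throw :: Exception e => e -> a, so this "Int" is really a bomb
divZero :: Int
divZero = throw DivideByZero

main :: IO ()
main = do
  -- evaluate forces the thunk within IO, where the exception is catchable
  r <- try (evaluate divZero) :: IO (Either ArithException Int)
  print r
```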
Because a value will not be evaluated unless needed, if one desires to know for sure that an exception is either caught or not, it can be deeply forced into head normal form before invoking catch. The strictCatch function is not provided by the standard library but has a simple implementation in terms of deepseq.
Exceptions
The problem with the previous approach is having to rely on GHC’s asynchronous exception handling inside of IO to handle basic operations, and the bifurcation of APIs which need to expose different interfaces for any monad that has failure (IO, STM, ExceptT, etc.).
The exceptions package provides the same API as Control.Exception but loosens the dependency on IO. It instead provides a granular set of typeclasses which can operate over different monads requiring a precise subset of error handling methods.
MonadThrow - Monads which expose an interface for throwing exceptions.
MonadCatch - Monads which expose an interface for handling exceptions.
MonadMask - Monads which expose an interface for masking asynchronous exceptions.
There are three core primitives that are used in handling runtime exceptions:
finally - For handling guaranteed finalisation of code in the presence of exceptions.
onException - For handling the exception case only if an exception is thrown.
bracket - For implementing resource handling with custom acquisition and finalizer logic, in the presence of exceptions.
finally takes an IO action to run as a computation and a secondary function to run after the evaluation of the first.
onException has a similar signature, but the second function is run only if an exception is raised.
The bracket function takes two functions, an acquisition function and a finalizer function, which “bracket” the evaluation of the third. The finaliser will be run if the computation throws an exception and unwinds.
A simple example of usage is bracket logic that handles file descriptors which need to be explicitly closed after evaluation is done. The initialiser in this case will return a file descriptor to the body and then run hClose on the file descriptor after the body is done with evaluation.
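A minimal sketch of that file-handle pattern (the function name and file path are illustrative):

```haskell
import Control.Exception (bracket)
import System.IO

-- hClose runs whether or not hGetLine throws
readFirstLine :: FilePath -> IO String
readFirstLine path =
  bracket
    (openFile path ReadMode)  -- acquire: hands the handle to the body
    hClose                    -- finalizer: always runs
    hGetLine                  -- body: uses the handle

main :: IO ()
main = do
  writeFile "bracket-demo.txt" "hello\nworld\n"
  line <- readFirstLine "bracket-demo.txt"
  putStrLn line
```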
In addition, the exceptions library exposes several functions for explicitly handling a variety of exceptions of various forms. Top-level handlers that need to “catch ’em all” should use catchAny for wildcard error handling.
A simple example of usage:
See: exceptions
Spoon
Sometimes you’ll be forced to deal with seemingly pure functions that can throw up at any point. There are many functions in the standard library like this, and many more on Hackage. You’d like to handle this logic purely, as if it were returning a proper Maybe a, but to catch the exception you’d need to install a handler inside IO. Spoon allows us to safely (and “purely”, although it uses a referentially transparent invocation of unsafePerformIO) catch these exceptions and put them in Maybe where they belong.
The spoon function evaluates its argument to head normal form, while teaspoon evaluates to weak head normal form.
Advanced Monads
When working with the wider library ecosystem you will find a variety of “advanced monads”: higher-level constructions on top of the monadic interface which enrich the structure with additional rules or build APIs for combining different types of monads. Some of the most-used cases are mentioned in this section.
Function Monad
If one writes Haskell long enough one might eventually encounter the curious beast that is the ((->) r) monad instance. It generally tends to be non-intuitive to work with, but is quite simple when one considers it as an unwrapped Reader monad.
This just uses a prefix form of the arrow type operator.
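A minimal sketch (the function name mean is illustrative): in ((->) r), every bound action is a function of the same argument, and bind applies them all to the shared environment.

```haskell
-- do-notation in the ((->) [Double]) monad: each statement is a
-- function of the same input list
mean :: [Double] -> Double
mean = do
  total <- sum
  count <- length
  return (total / fromIntegral count)

main :: IO ()
main = print (mean [1, 2, 3, 4])
```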
RWS Monad
The RWS monad combines the functionality of the three monads discussed above: Reader, Writer, and State. There is also an RWST transformer.
These three eval functions are now combined into the following functions:
The usual caveat about Writer laziness also applies to RWS.
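A minimal sketch combining the three effects, assuming the mtl package (the step function is an illustrative name):

```haskell
import Control.Monad.RWS

-- reader: a multiplier; writer: a log of operations; state: a running total
step :: Int -> RWS Int [String] Int ()
step x = do
  k <- ask                       -- read the environment
  let v = k * x
  modify (+ v)                   -- update the state
  tell ["added " ++ show v]      -- append to the log

main :: IO ()
main = do
  let ((), total, logs) = runRWS (mapM_ step [1, 2, 3]) 10 0
  print total
  print logs
```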
Cont
In continuation passing style, composite computations are built up from sequences of nested computations which are terminated by a final continuation which yields the result of the full computation by passing a function into the continuation chain.
MonadPlus
Choice and failure.
MonadPlus forms a monoid with
MonadFail
Before the great awakening, Monads used to be defined as the following class.
This was eventually deemed not to be a great design; in particular, the fail function was a misplaced, lawless entity that would generate bottoms. It was also necessary to define fail for all monads, even those without a notion of failure. This was considered quite ugly, and eventually a breaking change to base (landed in 4.9) split MonadFail out into a separate class where it belonged.
Some of the common instances of MonadFail are shown below:
MonadFix
The fixed point of a monadic computation. mfix f executes the action f only once, with the eventual output fed back as the input.
The regular do-notation can also be extended with -XRecursiveDo to accommodate recursive monadic bindings.
ST Monad
The ST monad models “threads” of stateful computations which can manipulate mutable references, but are restricted to only return pure values when evaluated and are statically confined to the ST monad of an s thread.
Using the ST monad we can create a class of efficient purely functional data structures that use mutable references in a referentially transparent way.
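A minimal sketch of the idea (sumST is an illustrative name): a pure function backed by a mutable accumulator, where the reference cannot escape runST.

```haskell
import Control.Monad.ST
import Data.STRef

-- mutable state inside, pure interface outside
sumST :: Num a => [a] -> a
sumST xs = runST $ do
  ref <- newSTRef 0
  mapM_ (\x -> modifySTRef' ref (+ x)) xs
  readSTRef ref

main :: IO ()
main = print (sumST [1 .. 100 :: Int])
```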
Free Monads
Free monads are monads which, instead of having a join operation that combines computations, form composite computations from application of a functor.
One of the best examples is the Partiality monad, which models computations that can diverge. Haskell allows unbounded recursion, but we can create a free monad from the Maybe functor which can be used to fix the call-depth of, for example, the Ackermann function.
The other common use for free monads is to build embedded domain-specific languages to describe computations. We can model a subset of the IO monad by building up a pure description of the computation inside of the IOFree monad and then using the free monad to encode the translation to an effectful IO computation.
An implementation such as the one found in free might look like the following:
Indexed Monads
Indexed monads are a generalisation of monads that adds an additional type parameter to the class that carries information about the computation or structure of the monadic implementation.
The canonical use case is a variant of the vanilla State monad which allows type-changing on the state for intermediate steps inside of the monad. This indeed turns out to be very useful for handling a class of problems involving resource management, since the extra index parameter gives us space to statically enforce the sequence of monadic actions by allowing and restricting certain state transitions on the index parameter at compile time.
To make this more usable we’ll use the somewhat esoteric -XRebindableSyntax, allowing us to overload the do-notation and if-then-else syntax by providing alternative definitions local to the module.
{-# LANGUAGE RebindableSyntax #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE NoMonomorphismRestriction #-}

import Data.IORef
import Data.Char
import Prelude hiding (fmap, (>>=), (>>), return)
import Control.Applicative

newtype IState i o a = IState { runIState :: i -> (a, o) }

evalIState :: IState i o a -> i -> a
evalIState st i = fst $ runIState st i

execIState :: IState i o a -> i -> o
execIState st i = snd $ runIState st i

ifThenElse :: Bool -> a -> a -> a
ifThenElse b i j = case b of
  True  -> i
  False -> j

return :: a -> IState s s a
return a = IState $ \s -> (a, s)

fmap :: (a -> b) -> IState i o a -> IState i o b
fmap f v = IState $ \i -> let (a, o) = runIState v i
                          in (f a, o)

join :: IState i m (IState m o a) -> IState i o a
join v = IState $ \i -> let (w, m) = runIState v i
                        in runIState w m

(>>=) :: IState i m a -> (a -> IState m o b) -> IState i o b
v >>= f = IState $ \i -> let (a, m) = runIState v i
                         in runIState (f a) m

(>>) :: IState i m a -> IState m o b -> IState i o b
v >> w = v >>= \_ -> w

get :: IState s s s
get = IState $ \s -> (s, s)

gets :: (a -> o) -> IState a o a
gets f = IState $ \s -> (s, f s)

put :: o -> IState i o ()
put o = IState $ \_ -> ((), o)

modify :: (i -> o) -> IState i o ()
modify f = IState $ \i -> ((), f i)

data Locked = Locked
data Unlocked = Unlocked

type Stateful a = IState a Unlocked a

acquire :: IState i Locked ()
acquire = put Locked

-- Can only release the lock if it's held; trying to release a lock
-- that's not held is now a type error.
release :: IState Locked Unlocked ()
release = put Unlocked

-- Statically forbids improper handling of resources.
lockExample :: Stateful a
lockExample = do
  ptr <- get :: IState a a a
  acquire    :: IState a Locked ()
  -- ...
  release    :: IState Locked Unlocked ()
  return ptr

-- Couldn't match type `Locked' with `Unlocked'
-- In a stmt of a 'do' block: return ptr
failure1 :: Stateful a
failure1 = do
  ptr <- get
  acquire
  return ptr -- didn't release

-- Couldn't match type `a' with `Locked'
-- In a stmt of a 'do' block: release
failure2 :: Stateful a
failure2 = do
  ptr <- get
  release -- didn't acquire
  return ptr

-- Evaluate the resulting state, statically ensuring that the
-- lock is released when finished.
evalReleased :: IState i Unlocked a -> i -> a
evalReleased f st = evalIState f st

example :: IO (IORef Integer)
example = evalReleased <$> pure lockExample <*> newIORef 0
Lifted Base
The default prelude predates a lot of the work on monad transformers, and as such many of the common functions for handling errors and interacting with IO are bound strictly to the IO monad and not to functions implementing stacks on top of IO or ST. The lifted-base package provides generic control operations such as catch that can be lifted from IO or any other base monad.
monad-base
monad-base provides an abstraction over liftIO and other functions to explicitly lift into a “privileged” layer of the transformer stack. It’s implemented as a multi-parameter typeclass with the “base” monad as the parameter b.
monad-control
monad-control builds on top of monad-base to extend lifting to control operations like catch and bracket, which can be written generically in terms of any transformer with a base layer supporting these operations. Generic operations can then be expressed in terms of a MonadBaseControl instance and written in terms of the combinator control, which handles the bracketing and automatic handler lifting.
For example the function catch provided by Control.Exception is normally locked into IO.
catch :: Exception e => IO a -> (e -> IO a) -> IO a
By composing it in terms of control we can construct a generic version which automatically lifts inside any combination of the usual transformer stacks that have a MonadBaseControl instance.
Quantification
In logic a predicate is a statement about a subject. For instance the statement “Socrates is a man” can be written as:
Man(Socrates)
A predicate applied to a variable, Man(x), has a truth value if the predicate holds for the subject. The domain of a variable is the set of all values that may be assigned to the variable. A quantifier turns predicates into propositions by assigning values to all variables. For example, the statement “All men are mortal” is an example of a universal quantifier, which describes a predicate that holds for all inhabitants of the domain of variables.
Forall x. If Man(x) then Mortal(x)
The truth value that Socrates is mortal can be derived from the above relation. Programming with quantifiers in Haskell follows this same kind of logical convention, except we will be working with types and constraints on types.
Universal Quantification
Universal quantification is the primary mechanism of encoding polymorphism in Haskell. The essence of universal quantification is that we can express functions which operate the same way for a set of types and whose behavior is entirely determined only by the behavior of all types in this span. These are represented at the type level by the introduction of a universal quantifier (forall or ∀) over a set of the type variables in the signature.
Normally quantifiers are omitted in type signatures since in Haskell’s vanilla surface language it is unambiguous to assume that free type variables are universally quantified. So the following two are equivalent:
Free Theorems
A universally quantified type variable actually implies quite a few rather deep properties about the implementation of a function that can be deduced from its type signature. For instance, the identity function in Haskell is guaranteed to have only one implementation, since the only information that can be present in the body is the argument itself:
These so-called free theorems are properties that hold for any well-typed inhabitant of a universally quantified signature.
For example, a free theorem of fmap is that every implementation of Functor can only ever have the property that composition of maps of functions is the same as the map of the functions composed together.
Type Systems
Hindley-Milner type system
The Hindley-Milner type system is historically important as one of the first typed lambda calculi that admitted both polymorphism and a variety of inference techniques that could always decide principal types.
In a type checker implementation, a generalize function converts all type variables within the type into polymorphic type variables, yielding a type scheme. An instantiate function maps a scheme to a type, with any polymorphic variables converted into unbound type variables.
Rank-N Types
System F is the type system that underlies Haskell. System F subsumes the HM type system in the sense that every type expressible in HM can be expressed within System F. System F is sometimes referred to in texts as the Girard-Reynolds polymorphic lambda calculus or second-order lambda calculus.
An example with equivalents of GHC Core in comments:
Normally when Haskell’s typechecker infers a type signature, it places all quantifiers of type variables at the outermost position such that no quantifiers appear within the body of the type expression; this is called the prenex restriction. This restricts an entire class of type signatures that would otherwise be expressible within System F, but has the benefit of making inference much easier.
-XRankNTypes loosens the prenex restriction such that we may explicitly place quantifiers within the body of the type. The bad news is that inference in this relaxed system is undecidable in general, so we’re required to explicitly annotate functions which use RankNTypes, or they are otherwise inferred as rank 1 and may not typecheck at all.
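A minimal sketch of a rank-2 signature (applyBoth is an illustrative name): the argument itself must be polymorphic, since it is applied at both Int and Char.

```haskell
{-# LANGUAGE RankNTypes #-}

-- a rank-1 signature ((a -> a) -> ...) could not apply f at two
-- different types; the inner forall makes the argument polymorphic
applyBoth :: (forall a. a -> a) -> (Int, Char) -> (Int, Char)
applyBoth f (n, c) = (f n, f c)

main :: IO ()
main = print (applyBoth id (1, 'x'))
```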
Importantly, the type variables bound by an explicit quantifier in a higher-ranked type may not escape their enclosing scope. The typechecker enforces this by ensuring that variables bound inside of rank-n types (called skolem constants) will not unify with free meta type variables inferred by the inference engine.
In this example, in order for the expression to be well typed, f would necessarily have the type (Int -> Int), which implies a ~ Int over the whole type; but since a is bound under the quantifier it must not be unified with Int, and so the typechecker must fail with a skolem capture error.
This can actually be used to our advantage to enforce several types of invariants about scope and use of specific type variables. For example, the ST monad uses a rank-2 type to prevent the capture of references between ST monads with separate state threads: the s type variable is bound within the rank-2 type and cannot escape, statically guaranteeing that the implementation details of the ST internals can’t leak out, thus ensuring referential transparency.
Existential Quantification
An existential type is a pair of a type and a term with a special set of packing and unpacking semantics. The type of the value encoded in the existential is known by the producer but not by the consumer of the existential value.
The existential over SBox gathers a collection of values defined purely in terms of their Show interface and an opaque pointer; no other information is available about the values and they can’t be accessed or unpacked in any other way.
Passing around existential types allows us to hide information from consumers of data types and restrict the behavior that functions can use. Passing records around with existential variables allows a type to be “bundled” with a fixed set of functions that operate over its hidden internals.
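One plausible definition matching the SBox description above (a sketch; the exact declaration in the original may differ):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- the concrete type of the contents is hidden; only Show escapes
data SBox = forall a. Show a => SBox a

boxes :: [SBox]
boxes = [SBox (1 :: Int), SBox "hello", SBox True]

-- the only thing we can do with an unpacked SBox is show it
showBox :: SBox -> String
showBox (SBox x) = show x

main :: IO ()
main = mapM_ (putStrLn . showBox) boxes
```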
Impredicative Types
Although extremely brittle, GHC also has limited support for impredicative polymorphism, which allows instantiating a type variable with a polymorphic type. Implied is that this loosens the restriction that quantifiers must precede arrow types: now they may be placed inside type constructors.
{-# LANGUAGE ImpredicativeTypes #-}

-- Uses higher-ranked polymorphism.
f :: (forall a. [a] -> a) -> (Int, Char)
f get = (get [1,2], get ['a', 'b', 'c'])

-- Uses impredicative polymorphism.
g :: Maybe (forall a. [a] -> a) -> (Int, Char)
g Nothing = (0, '0')
g (Just get) = (get [1,2], get ['a','b','c'])
Use of this extension is very rare, and there is some consideration that -XImpredicativeTypes is fundamentally broken. GHC is, however, very liberal about telling us to enable it when one accidentally makes a typo in a type signature!
Some notable trivia: the ($) operator is wired into GHC in a very special way, so as to allow impredicative instantiation of runST to be applied via ($), by special-casing the ($) operator only when used for the ST monad.
For example, if we define a function apply which should behave identically to ($), we’ll get an error about polymorphic instantiation even though they are defined identically!
See:
Scoped Type Variables
Normally the type variables used within the top-level signature for a function are scoped only to the type signature, and not to the body of the function and its rigid signatures over terms and let/where clauses. Enabling -XScopedTypeVariables loosens this restriction, allowing the type variables mentioned in the top-level signature to be scoped within the value-level body of a function and all signatures contained therein.
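A minimal sketch (rotate is an illustrative name): the explicit forall brings a into scope in the body, so the inner signature refers to the same a.

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

-- without the extension (and the explicit forall), the `a` in the
-- signature on `ys` would be a fresh variable and fail to typecheck
rotate :: forall a. [a] -> [a]
rotate xs = ys ++ ys
  where
    ys :: [a]
    ys = reverse xs

main :: IO ()
main = print (rotate [1, 2, 3 :: Int])
```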
GADTs
Generalized Algebraic Data Types (GADTs) are an extension to algebraic datatypes that allows us to qualify the constructors of datatypes with type equality constraints, allowing a class of types that are not expressible using vanilla ADTs.
-XGADTs implicitly enables an alternative syntax for datatype declarations (-XGADTSyntax) such that the following declarations are equivalent:
For an example use, consider the data type Term: we have a constructor Succ which takes a Term parameterized by a, which spans all types. Problems arise from the clash between whether (a ~ Bool) or (a ~ Int) when trying to write the evaluator.
And we admit the construction of meaningless terms which forces more error handling cases.
Using a GADT we can express the type invariants for our language (i.e. only typesafe expressions are representable). Pattern matching on this GADT then carries type equality constraints without the need for explicit tags.
This time around:
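A sketch of the well-typed version (the constructor names follow the Term example above; the exact original declaration may differ):

```haskell
{-# LANGUAGE GADTs #-}

-- constructors refine the index, so ill-typed terms like
-- Succ (IsZero (Lit 0)) are rejected at compile time
data Term a where
  Lit    :: Int -> Term Int
  Succ   :: Term Int -> Term Int
  IsZero :: Term Int -> Term Bool
  If     :: Term Bool -> Term a -> Term a -> Term a

-- pattern matching introduces the equality constraints, so every
-- branch returns the right type with no tags or error cases
eval :: Term a -> a
eval (Lit n)    = n
eval (Succ t)   = 1 + eval t
eval (IsZero t) = eval t == 0
eval (If c t e) = if eval c then eval t else eval e

main :: IO ()
main = print (eval (If (IsZero (Lit 0)) (Lit 1) (Succ (Lit 1))))
```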
Explicit equality constraints (a ~ b) can be added to a function’s context. For example, the following expand out to the same types.
This is effectively the implementation detail of what GHC is doing behind the scenes to implement GADTs (implicitly passing and threading equality terms around). If we wanted, we could do the same setup that GHC does using just equality constraints and existential quantification. Indeed, the internal representation of GADTs is as regular algebraic datatypes that carry coercion evidence as arguments.
In the presence of GADTs, inference becomes intractable in many cases, often requiring an explicit annotation. For example, f can have either the type T a -> [a] or T a -> [Int], and neither is principal.
Kind Signatures
Haskell’s kind system (i.e. the “type of the types”) is a system consisting of the single kind * and an arrow kind ->.
There are in fact some extensions to this system that will be covered later (see: PolyKinds and Unboxed types in later sections) but most kinds in everyday code are simply either stars or arrows.
With the KindSignatures extension enabled we can now annotate top level type signatures with their explicit kinds, bypassing the normal kind inference procedures.
On top of the default GADT declaration we can also constrain the parameters of the GADT to specific kinds. For basic usage Haskell’s kind inference can deduce this reasonably well, but combined with some other type system extensions that extend the kind system this becomes essential.
Void
The Void type is the type with no inhabitants. It unifies only with itself.
Using a newtype wrapper we can create a type where recursion makes it impossible to construct an inhabitant.
Or using -XEmptyDataDecls we can construct the uninhabited type equivalently as a data declaration with no constructors.
The only inhabitant of both of these types is a diverging term like (undefined).
Phantom Types
Phantom types are parameters that appear on the left hand side of a type declaration but which are not constrained by the values of the type’s inhabitants. They are effectively slots for us to encode additional information at the type level.
Notice the type variable tag does not appear in the right hand side of the declaration. Using this allows us to express invariants at the type level that need not manifest at the value level. We’re effectively programming by adding extra information at the type level.
Consider the case of using newtypes to statically distinguish between plaintext and cryptotext.
Using phantom types we use an extra parameter.
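A minimal sketch of the plaintext/cryptotext discipline (the names and the stand-in "cipher" are illustrative; a real implementation would do actual encryption):

```haskell
{-# LANGUAGE EmptyDataDecls #-}

data Plaintext
data Cryptotext

-- `tag` is phantom: it appears only at the type level
newtype Message tag = Message String deriving Show

-- reverse is a stand-in cipher; the point is the type discipline
encrypt :: Message Plaintext -> Message Cryptotext
encrypt (Message s) = Message (reverse s)

decrypt :: Message Cryptotext -> Message Plaintext
decrypt (Message s) = Message (reverse s)

main :: IO ()
main = do
  let msg = Message "attack at dawn" :: Message Plaintext
  print (decrypt (encrypt msg))
  -- decrypt msg would be a compile-time error: wrong tag
```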
-XEmptyDataDecls can be a powerful combination with phantom types: the tag types contain no value inhabitants and are “anonymous types”.
The tagged library defines a similar Tagged newtype wrapper.
Type-level Operations
With a richer language for datatypes we can express terms that witness the relationship between terms in the constructors; for example, we can now express a term which expresses propositional equality between two types.
The type Eql a b is a proof that types a and b are equal; by pattern matching on the single Refl constructor we introduce the equality constraint into the body of the pattern match.
As of GHC 7.8 these constructors and functions are included in the Prelude in the Data.Type.Equality module.
Interpreters
The lambda calculus forms the theoretical and practical foundation for many languages. At the heart of every calculus are three components:
Var - A variable
Lam - A lambda abstraction
App - An application
There are many different ways of modeling these constructions and data structure representations, but they all more or less contain these three elements. For example, a lambda calculus that uses String names on lambda binders and variables might be written like the following:
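A sketch of such a representation (the constructor names follow the list above; the example terms are illustrative):

```haskell
type Name = String

data Expr
  = Var Name        -- a variable
  | Lam Name Expr   -- a lambda abstraction
  | App Expr Expr   -- an application
  deriving Show

-- \f -> \x -> f x : a closed term, every variable is bound
applyTerm :: Expr
applyTerm = Lam "f" (Lam "x" (App (Var "f") (Var "x")))

-- \x -> y : an open term, y is free
openTerm :: Expr
openTerm = Lam "x" (Var "y")

main :: IO ()
main = print applyTerm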
A lambda expression in which all variables that appear in the body of the expression are referenced in an outer lambda binder is said to be closed, while an expression with unbound free variables is open.
HOAS
Higher Order Abstract Syntax (HOAS) is a technique for implementing the lambda calculus in a language where the binders of the lambda expression map directly onto lambda binders of the host language (i.e. Haskell), giving us substitution machinery in our custom language by exploiting Haskell’s implementation.
Pretty printing HOAS terms can also be quite complicated since the body of the function is under a Haskell lambda binder.
PHOAS
A slightly different form of HOAS, called PHOAS, uses a lambda datatype parameterized over the binder type. In this form, evaluation requires unpacking into a separate Value type to wrap the lambda expression.
See:
Final Interpreters
Using typeclasses we can implement a final interpreter which models a set of extensible terms using functions bound to typeclasses rather than data constructors. Instances of the typeclass form interpreters over these terms.
For example we can write a small language that includes basic arithmetic, and then retroactively extend our expression language with a multiplication operator without changing the base. At the same time our interpreter logic remains invariant under extension with new expressions.
Finally Tagless
Writing an evaluator for the lambda calculus can likewise also be modeled with a final interpreter and an Identity functor.
See: Typed Tagless Interpretations and Typed Compilation
Datatypes
The usual handwavy way of describing algebraic datatypes is to indicate how the natural correspondence between sum types, product types, and polynomial expressions arises.
Intuitively it follows the notion that the cardinality of the set of inhabitants of a type can always be given as a function of the number of its holes. A product type admits a number of inhabitants as a function of the product (i.e. cardinality of the Cartesian product), a sum type as the sum of its holes, and a function type as the exponential of the span of the domain and codomain.
Recursive types correspond to infinite series of these terms.
F-Algebras
The initial algebra approach differs from the final interpreter approach in that we now represent our terms as algebraic datatypes and the interpreter implements recursion and evaluation occurs through pattern matching.
In Haskell an F-algebra is a functor f a together with a function f a -> a. A coalgebra reverses the function. For a functor f we can form its recursive unrolling using the recursive Fix newtype wrapper.
In this form we can write down generalized fold/unfold functions that are datatype generic and written purely in terms of recursing under the functor.
We call these functions catamorphisms and anamorphisms. Notice especially that the types of these two functions simply reverse the direction of arrows. Interpreted in another way, they transform an algebra/coalgebra, which defines a flat structure-preserving mapping between Fix f and f, into a function which either rolls or unrolls the fixpoint. What is particularly nice about this approach is that the recursion is abstracted away inside the functor definition and we are free to just implement the flat transformation logic!
For example a construction of the natural numbers in this form:
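A minimal sketch of this construction (the Fix, cata, and NatF definitions follow the conventions described above; the exact original code may differ):

```haskell
{-# LANGUAGE DeriveFunctor #-}

newtype Fix f = Fix { unFix :: f (Fix f) }

-- the generic fold: evaluate children first, then apply the algebra
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix

-- naturals as the fixpoint of the base functor NatF
data NatF a = Zero | Succ a deriving Functor

toInt :: Fix NatF -> Int
toInt = cata alg
  where
    alg Zero     = 0
    alg (Succ n) = 1 + n

two :: Fix NatF
two = Fix (Succ (Fix (Succ (Fix Zero))))

main :: IO ()
main = print (toInt two)
```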
Or for example an interpreter for a small expression language that depends on a scoping dictionary.
What is especially elegant about this approach is how naturally catamorphisms compose into efficient composite transformations.
Recursion Schemes & The Morphism Zoo
Recursion schemes are a general way of classifying families of traversal algorithms that modify data structures recursively. Recursion schemes give rise to a rich set of algebraic structures which can be composed to devise all sorts of elaborate term rewrite systems. Most applications of recursion schemes occur in the context of graph rewriting or abstract syntax tree manipulation.
Several basic recursion schemes form the foundation of these rules. Grossly, an anamorphism is an unfolding of a data structure into a list of terms, while a catamorphism is a refolding of a data structure from a list of terms.
Catamorphism: cata :: (a -> b -> b) -> b -> [a] -> b
Anamorphism: ana :: (b -> Maybe (a, b)) -> b -> [a]
Paramorphism: para :: (a -> ([a], b) -> b) -> b -> [a] -> b
Apomorphism: apo :: (b -> (a, Either [a] b)) -> b -> [a]
Hylomorphism: hylo :: Functor f => (f b -> b) -> (a -> f a) -> a -> b
For a Fix point type over a type with a Functor instance for the parameter f, we can write down the recursion schemes as the following definitions:
One can also construct monadic versions of these functions which have a result type inside of a monad. Instead of using function composition we use Kleisli composition.
The recursion-schemes library implements these basic recursion schemes as well as a whole family of higher-order combinators off the shelf. These are implemented in terms of two typeclasses, Recursive and Corecursive, which extend an instance of Functor with default methods for catamorphisms and anamorphisms. For the Fix type above these functions expand into the following definitions:
The canonical example is the factorial function, which is a hylomorphism: the composition of a coalgebra which creates a list from n to 1 and an algebra which multiplies the resulting list down to a single result:
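A minimal sketch of that composition, with hylo defined directly over a list base functor (the ListF and factorial names are illustrative):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- hylomorphism: unfold with the coalgebra, fold with the algebra
hylo :: Functor f => (f b -> b) -> (a -> f a) -> a -> b
hylo alg coalg = alg . fmap (hylo alg coalg) . coalg

data ListF e a = NilF | ConsF e a deriving Functor

factorial :: Int -> Int
factorial = hylo alg coalg
  where
    coalg 0 = NilF                -- stop at zero
    coalg n = ConsF n (n - 1)     -- unfold n, n-1, ..., 1
    alg NilF        = 1
    alg (ConsF x r) = x * r       -- fold by multiplying

main :: IO ()
main = print (factorial 5)
```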
Another example is traversing the lambda calculus AST to perform substitution over a variable. We can define a catamorphism for traversing over the AST.
Another use case would be to collect the free variables inside of the AST. This example uses the recursion-schemes library.
See:
Hint and Mueval
GHC itself can actually interpret arbitrary Haskell source on the fly by hooking into GHC's bytecode interpreter (the same one used for GHCi). The hint package allows us to parse, typecheck, and evaluate arbitrary strings into Haskell programs and evaluate them.
This is generally not a wise thing to build a library around, unless of course the purpose of the program is itself to evaluate arbitrary Haskell code (something like an online Haskell shell or the like).
Both hint and mueval effectively perform the same task, designed around slightly different internals of the GHC API.
See:
Unit testing frameworks are an important component of the Haskell ecosystem. Program correctness is a central philosophical concern, and unit testing forms the third pillar of the ecosystem alongside the strong type system and property testing. Generally speaking, unit tests tend to be of less importance in Haskell since the type system makes an enormous number of invalid programs completely inexpressible by construction. Unit tests tend to be written later in the development lifecycle and generally tend to be about the core logic of the program and not the intermediate plumbing.
A prominent school of thought on Haskell library design tends to favor constructing programs built around strong equational laws which guarantee strong invariants about program behavior under composition. Many of the testing tools are built around this style of design.
QuickCheck
Probably the most famous Haskell library, QuickCheck is a testing framework for automatically generating large random tests for arbitrary functions based on the types of their arguments.
The test data generator can be extended with custom types and refined with predicates that restrict the domain of cases to test.
import Test.QuickCheck

data Color = Red | Green | Blue deriving Show

instance Arbitrary Color where
  arbitrary = do
    n <- choose (0, 2) :: Gen Int
    return $ case n of
      0 -> Red
      1 -> Green
      2 -> Blue

example1 :: IO [Color]
example1 = sample' arbitrary
-- [Red,Green,Red,Blue,Red,Red,Red,Blue,Green,Red,Red]
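A property is just a Bool-valued (or Property-valued) function; QuickCheck generates the inputs. A minimal sketch checking that reversing a list twice is the identity (the property name is illustrative):

```haskell
import Test.QuickCheck

-- the property under test: reverse is its own inverse
prop_reverse :: [Int] -> Bool
prop_reverse xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverse
-- +++ OK, passed 100 tests.
```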
See: QuickCheck: An Automatic Testing Tool for Haskell
SmallCheck
Like QuickCheck, SmallCheck is a property testing system but instead of producing random arbitrary test data it instead enumerates a deterministic series of test data to a fixed depth.
λ: list 3 series :: [Int]
[0,1,-1,2,-2,3,-3]
λ: list 3 series :: [Double]
[0.0,1.0,-1.0,2.0,0.5,-2.0,4.0,0.25,-0.5,-4.0,-0.25]
λ: list 3 series :: [(Int, String)]
[(0,""),(1,""),(0,"a"),(-1,""),(0,"b"),(1,"a"),(-2,""),(1,"b"),(-1,"a"),(2,""),(-1,"b"),(2,"a"),(-2,"a"),(2,"b"),(-2,"b")]
It is useful to generate test cases over all possible inputs of a program up to some depth.
Just like for QuickCheck we can implement series instances for our custom datatypes. For example there is no default instance for Vector, so let’s implement one:
SmallCheck can also use Generics to derive Serial instances, for example to enumerate all trees of a certain depth we might use:
QuickSpec
Using the QuickCheck Arbitrary machinery, we can also rather remarkably enumerate a large number of combinations of functions to try to deduce algebraic laws by trying out inputs for small cases. Of course, the fundamental limitation of this approach is that a function may not exhibit any interesting properties for small cases or for simple function compositions. So in the general case this approach won't work, but practically it is still quite useful.
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeOperators #-}

import Data.List
import Data.Typeable
import QuickSpec hiding (arith, bools, lists)
import Test.QuickCheck.Arbitrary

type Var k a = (Typeable a, Arbitrary a, CoArbitrary a, k a)

listCons :: forall a. Var Ord a => a -> Sig
listCons a =
  background
    [ "[]" `fun0` ([] :: [a]),
      ":" `fun2` ((:) :: a -> [a] -> [a])
    ]

lists :: forall a. Var Ord a => a -> [Sig]
lists a =
  [ -- Names to print arbitrary variables
    funs',
    funvars',
    vars',
    -- Ambient definitions
    listCons a,
    -- Expressions to deduce properties of
    "sort" `fun1` (sort :: [a] -> [a]),
    "map" `fun2` (map :: (a -> a) -> [a] -> [a]),
    "id" `fun1` (id :: [a] -> [a]),
    "reverse" `fun1` (reverse :: [a] -> [a]),
    "minimum" `fun1` (minimum :: [a] -> a),
    "length" `fun1` (length :: [a] -> Int),
    "++" `fun2` ((++) :: [a] -> [a] -> [a])
  ]
  where
    funs' = funs (undefined :: a)
    funvars' = vars ["f", "g", "h"] (undefined :: a -> a)
    vars' = ["xs", "ys", "zs"] `vars` (undefined :: [a])

tvar :: A
tvar = undefined

main :: IO ()
main = quickSpec (lists tvar)
Running this, we see it is able to deduce most of the laws for list functions.
Keep in mind the rather remarkable fact that this is all deduced automatically from the types alone!
Tasty
Tasty is the commonly used unit testing framework. It combines all of the testing frameworks (QuickCheck, SmallCheck, HUnit) into a common API for forming runnable batches of tests and collecting the results.
Silently
Often in the process of testing IO-heavy code we'll need to redirect stdout to compare it to some known quantity. The silently package allows us to capture anything done to stdout across any library inside of an IO block and return the result to the test runner.
Type families are a powerful extension to the Haskell type system, developed in 2005, that provide type-indexed data types and named functions on types. This allows a whole new level of computation to occur at compile-time and opens an entire arena of type-level abstractions that were previously impossible to express. Type families proved to be nearly as fruitful as typeclasses, and indeed many previous approaches to type-level programming using classes are achieved much more simply with type families.
MultiParam Typeclasses
Resolution of vanilla Haskell 98 typeclasses proceeds via very simple context reduction that minimizes interdependency between predicates, resolves superclasses, and reduces the types to head normal form. For example:
If a single parameter typeclass expresses a property of a type (i.e. whether it's in a class or not), then a multiparameter typeclass expresses relationships between types. For example, if we wanted to express the relation that a type can be converted to another type we might use a class like:
Of course now our instances for Convertible Int are not unique anymore, so there no longer exists a nice procedure for determining the inferred type of b from just a. To remedy this let's add a functional dependency a -> b, which tells GHC that an instance for a uniquely determines the instance for b. So we'll see that our two instances relating Int to both Integer and Char conflict.
Now there's a simpler procedure for determining instances uniquely, and multiparameter typeclasses become more usable and inferable again. Effectively a functional dependency a -> b says that we can't define multiple multiparameter typeclass instances with the same a but different b.
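A minimal sketch of such a class with the functional dependency (the class and instance here are illustrative):

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FunctionalDependencies #-}

-- the dependency (a -> b) says: the source type uniquely
-- determines the target type, so inference can pick b from a
class Convertible a b | a -> b where
  convert :: a -> b

instance Convertible Int Integer where
  convert = toInteger

main :: IO ()
main = print (convert (42 :: Int))
-- 42
```

Without the dependency, `print (convert (42 :: Int))` would be ambiguous; with it, GHC infers the result type Integer from the argument type alone.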
Now let's make things not so simple. Turning on UndecidableInstances loosens the constraint on context reduction that requires constraints of the class to be structurally smaller than its head. As a result, implicit computation can now occur within the type class instance search. Combined with a type-level representation of Peano numbers, we find that we can encode basic arithmetic at the type level.
If the typeclass contexts look similar to Prolog, you're not wrong. If one reads the context qualifier (=>) backwards as a turnstile (:-), then they are precisely the same equations.
This is kind of abusing typeclasses, and if used carelessly it can fail to terminate or overflow at compile-time. UndecidableInstances shouldn't be turned on without careful forethought about what it implies.
Type Families
Type families allow us to write functions in the type domain which take types as arguments and can yield either types or values indexed on their arguments, evaluated at compile-time during typechecking. Type families come in two varieties: data families and type synonym families.
- type families are named functions on types
- data families are type-indexed data types
First let's look at type synonym families. There are two equivalent syntactic ways of constructing them: either as associated type families declared within a typeclass, or as standalone declarations at the toplevel. The following forms are semantically equivalent, although the unassociated form is strictly more general:
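A sketch of the two forms side by side (the Container class and Elem families here are illustrative):

```haskell
{-# LANGUAGE TypeFamilies #-}

-- associated form: the family lives inside the class
class Container c where
  type Elem c
  first :: c -> Elem c

instance Container [a] where
  type Elem [a] = a
  first = head

-- standalone (unassociated) form: a top-level open type family
type family Elem' c
type instance Elem' [a] = a

first' :: [a] -> Elem' [a]
first' = head

main :: IO ()
main = do
  print (first [1 :: Int, 2, 3])
  print (first' "abc")
-- 1
-- 'a'
```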
Using the same example we used for multiparameter + functional dependencies illustration we see that there is a direct translation between the type family approach and functional dependencies. These two approaches have the same expressive power.
An associated type family can be queried using the :kind! command in GHCi.
Data families, on the other hand, allow us to create new type-parameterized data constructors. Normally we can only define typeclass functions whose behavior results in a uniform result which is purely a function of the typeclass arguments. With data families we can allow specialized behavior indexed on the type.
For example, if we wanted to create more complicated vector structures (bitmasked vectors, vectors of tuples, …) that expose a uniform API but internally handle the differences in their data layout, we can use data families to accomplish this:
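A small sketch of the idea (the Vector representations chosen here are illustrative):

```haskell
{-# LANGUAGE TypeFamilies #-}

-- a data family: each element type picks its own representation
data family Vector a

-- unit elements carry no information, so store only a length
data instance Vector ()     = VUnit Int
-- pairs are stored unzipped into two lists
data instance Vector (a, b) = VPair [a] [b]

vlen :: Vector () -> Int
vlen (VUnit n) = n

-- partial accessor, for illustration only
vfirst :: Vector (a, b) -> (a, b)
vfirst (VPair (a : _) (b : _)) = (a, b)

main :: IO ()
main = do
  print (vlen (VUnit 3))
  print (vfirst (VPair [1 :: Int] ["x"]))
-- 3
-- (1,"x")
```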
Injectivity
The type-level functions defined by type families are not necessarily injective; the function may map two distinct input types to the same output type. This differs from the behavior of type constructors (which are also type-level functions), which are injective. For example, for the constructor Maybe, Maybe t1 = Maybe t2 implies that t1 = t2.
Roles
Roles are a further level of specification for the type variable parameters of datatypes:

- nominal
- representational
- phantom
They were added to the language to address a rather nasty and long-standing bug around the correspondence between a newtype and its runtime representation. The fundamental distinction that roles introduce is that there are two notions of type equality. Two types are nominally equal when they have the same name. This is the usual equality in Haskell or Core. Two types are representationally equal when they have the same representation. (If a type is higher-kinded, all nominally equal instantiations lead to representationally equal types.)
- nominal: the two types are the same.
- representational: the two types have the same runtime representation.
Roles are normally inferred automatically, but with the RoleAnnotations extension they can be manually annotated. Except in rare cases this should not be necessary, although it is helpful to know what is going on under the hood.
With:
See:
NonEmpty
Rather than having degenerate (and often partial) cases of many of the Prelude functions to accommodate the null case of lists, it is sometimes preferable to statically prevent empty lists from even being constructed as an inhabitant of a type.
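The NonEmpty type from Data.List.NonEmpty (shipped in base) does exactly this; its head is total:

```haskell
import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.List.NonEmpty as NE

-- (:|) pairs a guaranteed first element with the (possibly empty) rest
list :: NonEmpty Int
list = 1 :| [2, 3]

main :: IO ()
main = do
  print (NE.head list)    -- total: no empty case can exist
  print (NE.toList list)
-- 1
-- [1,2,3]
```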
Manual Proofs
One of the deepest results in computer science, the Curry–Howard correspondence, is the relation that logical propositions can be modeled by types, and that instantiating those types constitutes proofs of these propositions. Programs are proofs and proofs are programs.
Types      Logic
A          proposition
a : A      proof
B(x)       predicate
Void       ⊥
Unit       ⊤
A + B      A ∨ B
A × B      A ∧ B
A -> B     A ⇒ B
In dependently typed languages we can exploit this result to its full extent; in Haskell we don't have the strength that dependent types provide, but we can still prove trivial results. For example, we can model a type-level function for addition and provide a small proof that zero is an additive identity.
Translated into Haskell our axioms are simply type definitions and recursing over the inductive datatype constitutes the inductive step of our proof.
Using the TypeOperators extension we can also use infix notation at the type level.
Constraint Kinds
GHC's implementation also exposes the predicates that bound quantifiers in Haskell as types themselves, with the -XConstraintKinds extension enabled. Using this extension we can work with constraints as first-class types.
The empty constraint set is indicated by () :: Constraint.
For a contrived example: if we wanted to create a generic Sized class that carried with it constraints on the elements of the container in question, we could achieve this quite simply using type families.
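A sketch of such a class, using an associated family of kind Constraint (the names are illustrative):

```haskell
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE TypeFamilies #-}

import Data.Kind (Constraint)

-- the associated family computes a Constraint rather than a type
class Sized f where
  type SizeConstraint f a :: Constraint
  size :: SizeConstraint f a => f a -> Int

instance Sized [] where
  type SizeConstraint [] a = ()   -- no constraint needed for lists
  size = length

main :: IO ()
main = print (size "hello")
-- 5
```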
One use case of this is to capture the typeclass dictionary constrained by a function and reify it as a value.
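A sketch of this dictionary reification via a Dict GADT (the same shape as the Dict type in the constraints package, recreated here inline):

```haskell
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE GADTs #-}

-- Dict packages up a constraint as a first-class value;
-- pattern matching on Dict brings the dictionary back into scope
data Dict c where
  Dict :: c => Dict c

showWith :: Dict (Show a) -> a -> String
showWith Dict x = show x

main :: IO ()
main = putStrLn (showWith Dict (42 :: Int))
-- 42
```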
TypeFamilyDependencies
Type families historically have not been injective, i.e. they are not guaranteed to map distinct arguments to distinct results. The TypeFamilyDependencies extension allows us to annotate a type family with an injectivity condition. The syntax is similar to multiparameter typeclass functional dependencies, writing a dependency from the family's result back to its parameters.
See:
Higher Kinded Types
What are higher kinded types?
The kind system in Haskell is unique by contrast with most other languages in that it allows datatypes to be constructed which take types and type constructors to other types. Such a system is said to support higher kinded types.
All kind annotations in Haskell necessarily result in a kind *, although any terms to the left may be higher-kinded (* -> *).
The common example is the Monad, which has kind * -> *. But we have also seen this higher-kindedness in free monads. For instance, Cofree Maybe a for some monokinded type a models a non-empty list with Maybe :: * -> *.
Kind Polymorphism
The regular value-level function which takes a function and applies it to an argument is universally generalized over in the usual Hindley-Milner way.
But when we do the same thing at the typelevel we see we lose information about the polymorphism of the constructor applied.
Turning on -XPolyKinds allows polymorphic variables at the kind level as well.

Using the polykinded Proxy type allows us to write down type class functions over constructors of arbitrary kind arity.
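For instance, a hypothetical class that names a constructor regardless of its kind:

```haskell
{-# LANGUAGE PolyKinds #-}

import Data.Proxy

-- 'a' may have any kind k, so instances can be given for
-- Int (kind *) and Maybe (kind * -> *) alike
class Named (a :: k) where
  name :: Proxy a -> String

instance Named Int where
  name _ = "Int"

instance Named Maybe where
  name _ = "Maybe"

main :: IO ()
main = do
  putStrLn (name (Proxy :: Proxy Int))
  putStrLn (name (Proxy :: Proxy Maybe))
-- Int
-- Maybe
```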
For example, we can now write down the polymorphic S and K combinators at the type level.
Data Kinds
The -XDataKinds extension allows us to refer to constructors at the value level and at the type level. Consider a simple sum type:

With the extension enabled we see that our type constructors are now automatically promoted, so that L or R can be viewed either as a data constructor of the type S or as the promoted types 'L and 'R with kind S.
Promoted data constructors can be referred to in type signatures by prefixing them with a single quote. Also of importance is that these promoted constructors are not exported with a module by default, but type synonyms can be created for the ticked promoted types and exported directly.
Combining this with type families, we can write meaningful type-level functions by lifting types to the kind level.
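For example, a sketch of type-level addition over a promoted Nat kind, with a hypothetical ReifyNat class to reflect the result back to an Int:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE ScopedTypeVariables #-}

import Data.Proxy

data Nat = Z | S Nat

-- type-level addition, defined by recursion on the first argument
type family Add (n :: Nat) (m :: Nat) :: Nat where
  Add 'Z     m = m
  Add ('S n) m = 'S (Add n m)

-- reflect a promoted Nat back down to a runtime Int
class ReifyNat (n :: Nat) where
  reify :: Proxy n -> Int

instance ReifyNat 'Z where
  reify _ = 0

instance ReifyNat n => ReifyNat ('S n) where
  reify _ = 1 + reify (Proxy :: Proxy n)

main :: IO ()
main = print (reify (Proxy :: Proxy (Add ('S 'Z) ('S ('S 'Z)))))
-- 3
```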
Size-Indexed Vectors

Using this new structure we can create a Vec type which is parameterized by its length as well as its element type, now that we have a kind language rich enough to encode the successor type in the kind signature of the generalized algebraic datatype.
{-# LANGUAGE GADTs #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FlexibleContexts #-}

data Nat = Z | S Nat deriving (Eq, Show)

type Zero  = Z
type One   = S Zero
type Two   = S One
type Three = S Two
type Four  = S Three
type Five  = S Four

data Vec :: Nat -> * -> * where
  Nil  :: Vec Z a
  Cons :: a -> Vec n a -> Vec (S n) a

instance Show a => Show (Vec n a) where
  show Nil = "Nil"
  show (Cons x xs) = "Cons " ++ show x ++ " (" ++ show xs ++ ")"

class FromList n where
  fromList :: [a] -> Vec n a

instance FromList Z where
  fromList [] = Nil

instance FromList n => FromList (S n) where
  fromList (x:xs) = Cons x $ fromList xs

lengthVec :: Vec n a -> Nat
lengthVec Nil = Z
lengthVec (Cons x xs) = S (lengthVec xs)

zipVec :: Vec n a -> Vec n b -> Vec n (a,b)
zipVec Nil Nil = Nil
zipVec (Cons x xs) (Cons y ys) = Cons (x,y) (zipVec xs ys)

vec4 :: Vec Four Int
vec4 = fromList [0, 1, 2, 3]

vec5 :: Vec Five Int
vec5 = fromList [0, 1, 2, 3, 4]

example1 :: Nat
example1 = lengthVec vec4
-- S (S (S (S Z)))

example2 :: Vec Four (Int, Int)
example2 = zipVec vec4 vec4
-- Cons (0,0) (Cons (1,1) (Cons (2,2) (Cons (3,3) (Nil))))
So now if we try to zip two Vec types with the wrong shape, we get a compile-time error about the off-by-one error.
We can use the same technique to create a container which is statically indexed by an empty or non-empty flag, such that if we try to take the head of an empty list we'll get a compile-time error; or, stated equivalently, we have an obligation to prove to the compiler that the argument we hand to the head function is non-empty.
See:
Type-level Numbers
GHC’s type literals can also be used in place of explicit Peano arithmetic.
GHC 7.6 is very conservative about performing reduction; GHC 7.8 is much less so and can solve many type-level constraints involving natural numbers, but it sometimes still needs a little coaxing.
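For instance, reflecting GHC's built-in Nat literals (and arithmetic over them) down to Integer values with GHC.TypeLits:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeOperators #-}

import GHC.TypeLits
import Data.Proxy

main :: IO ()
main = do
  -- natVal reflects a type-level literal down to an Integer
  print (natVal (Proxy :: Proxy 42))
  -- GHC's solver reduces arithmetic over concrete literals
  print (natVal (Proxy :: Proxy (2 + 3)))
-- 42
-- 5
```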
See: TypeLevel Literals
Type-level Strings

Since GHC 8.0 we have been able to work with type-level strings, represented at the type level as literals of kind Symbol. The GHC.TypeLits module defines a set of typeclasses for lifting these values to and from the value level and for comparing and computing over them at the type level.
These can be used to tag specific data at the typelevel with compiletime information encoded in the strings. For example we can construct a simple unit system which allows us to attach units to numerical quantities and perform basic dimensional analysis.
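A sketch of such a unit system using a phantom Symbol parameter (the Quantity type and helpers are illustrative):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}

import GHC.TypeLits

-- a numeric quantity tagged with a type-level unit string
newtype Quantity (unit :: Symbol) = Quantity Double
  deriving Show

-- addition only typechecks when the unit tags match
addQ :: Quantity u -> Quantity u -> Quantity u
addQ (Quantity a) (Quantity b) = Quantity (a + b)

meters :: Double -> Quantity "m"
meters = Quantity

seconds :: Double -> Quantity "s"
seconds = Quantity

main :: IO ()
main = print (addQ (meters 1.5) (meters 2.5))
-- addQ (meters 1) (seconds 1) is rejected: "m" does not match "s"
-- Quantity 4.0
```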
Custom Errors
As of GHC 8.0 we have the capacity to provide custom type errors using type families. The messages themselves hook into GHC and are expressed using the small datatype found in GHC.TypeLits.
If one of these expressions is found in the signature of an expression GHC reports an error message of the form:
A less contrived example would be creating a type-safe embedded DSL that enforces invariants about the semantics at the type level. We've been able to do this sort of thing using GADTs and type families for a while, but the error reporting has been horrible. With 8.0 we can have type families that emit useful type errors reflecting what actually goes wrong, integrated inside of GHC.
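For example, a sketch that replaces the default "no instance" error for equality on functions with a domain-specific message (the instance and message are illustrative; the error only fires when someone actually compares functions):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE UndecidableInstances #-}

import GHC.TypeLits (TypeError, ErrorMessage (..))

-- any attempt to use (==) on functions now reports our message
-- instead of the generic "No instance for Eq (a -> b)"
instance TypeError ('Text "Equality is not defined for functions")
  => Eq (a -> b) where
  _ == _ = error "unreachable"

main :: IO ()
main = putStrLn "compiles until someone compares functions"
```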
Type Equality
Continuing with the theme of building more elaborate proofs in Haskell, GHC 7.8 shipped with the Data.Type.Equality module, which provides us with an extended set of type-level operations for expressing the equality of types as values, constraints, and promoted booleans.
With this we have a much stronger language for writing restrictions that can be checked at compile-time, and a mechanism that will later allow us to write more advanced proofs.
Proxies
Using kind polymorphism with phantom types allows us to express the Proxy type which is inhabited by a single constructor with no arguments but with a polykinded phantom type variable which carries an arbitrary type.
In cases where we'd normally pass around an undefined as a witness of a typeclass dictionary, we can instead pass a Proxy object which carries the phantom type without the need for the bottom. Using scoped type variables we can then operate with the phantom parameter and manipulate it wherever needed.
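A small sketch of the pattern, using a Proxy plus scoped type variables instead of passing undefined:

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

import Data.Proxy

-- the Proxy carries only the type; no value of 'a' is ever needed
boundsOf :: forall a. (Show a, Bounded a) => Proxy a -> String
boundsOf _ = show (minBound :: a, maxBound :: a)

main :: IO ()
main = putStrLn (boundsOf (Proxy :: Proxy Bool))
-- (False,True)
```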
We've seen constructors promoted using DataKinds, but just like at the value level, GHC also allows us some syntactic sugar for lists and tuples instead of explicit cons'ing and pair'ing. This is enabled with the -XTypeOperators extension, which introduces list syntax and tuples of arbitrary arity at the type level.
Using this we can construct all variety of composite typelevel objects.
λ: :kind 1
1 :: Nat
λ: :kind "foo"
"foo" :: Symbol
λ: :kind [1,2,3]
[1,2,3] :: [Nat]
λ: :kind [Int, Bool, Char]
[Int, Bool, Char] :: [*]
λ: :kind Just [Int, Bool, Char]
Just [Int, Bool, Char] :: Maybe [*]
λ: :kind '("a", Int)
'("a", Int) :: (,) Symbol *
λ: :kind '[ '("a", Int), '("b", Bool) ]
'[ '("a", Int), '("b", Bool) ] :: [(,) Symbol *]
Singleton Types
A singleton type is a type with a single value inhabitant. Singleton types can be constructed in a variety of ways using GADTs or with data families.
Promoted Naturals

Value         Type                   Models
SZ            Sing 'Z                0
SS SZ         Sing ('S 'Z)           1
SS (SS SZ)    Sing ('S ('S 'Z))      2

Promoted Booleans

Value         Type                   Models
SFalse        Sing 'False            False
STrue         Sing 'True             True

Promoted Maybe

Value         Type                   Models
SJust a       Sing ('Just a)         Just a
SNothing      Sing 'Nothing          Nothing
Singleton types are an integral part of the small cottage industry of faking dependent types in Haskell, i.e. constructing types with terms predicated upon values. Singleton types are a way of “cheating” by modeling the map between types and values as a structural property of the type.
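A hand-rolled sketch of a natural number singleton:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}

data Nat = Z | S Nat

-- each type-level Nat n has exactly one inhabitant of SNat n,
-- so a value of SNat n is a faithful runtime witness of n
data SNat (n :: Nat) where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

-- reflect the type-level number down to the value level
toInt :: SNat n -> Int
toInt SZ     = 0
toInt (SS n) = 1 + toInt n

main :: IO ()
main = print (toInt (SS (SS SZ)))
-- 2
```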
The built-in singleton types provided in GHC.TypeLits have the useful property that type-level values can be reflected to the value level and back up to the type level, albeit under an existential.
Closed Type Families
In the type families we've used so far (called open type families) there is no notion of ordering among the equations involved in the type-level function. The type family can be extended at any point in the code, and resolution simply proceeds sequentially through the available definitions. Closed type families allow an alternative declaration with a base case for the resolution, allowing us to actually write recursive functions over types.
For example, consider writing a function which counts the arguments in the type of a function and reifies the count at the value level.
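A sketch of such a function using a closed family (the name CountArgs is illustrative):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE UndecidableInstances #-}

import GHC.TypeLits
import Data.Proxy

-- equations are tried in order: the catch-all base case is
-- only reached once the arrow pattern no longer matches
type family CountArgs (f :: *) :: Nat where
  CountArgs (a -> b) = 1 + CountArgs b
  CountArgs x        = 0

main :: IO ()
main = print (natVal (Proxy :: Proxy (CountArgs (Int -> Int -> Bool))))
-- 2
```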
The variety of functions we can now write down are rather remarkable, allowing us to write meaningful logic at the type level.
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE UndecidableInstances #-}

import GHC.TypeLits
import Data.Proxy
import Data.Type.Equality

-- Type-level functions over type-level lists.

type family Reverse (xs :: [k]) :: [k] where
  Reverse '[] = '[]
  Reverse xs = Rev xs '[]

type family Rev (xs :: [k]) (ys :: [k]) :: [k] where
  Rev '[] i = i
  Rev (x ': xs) i = Rev xs (x ': i)

type family Length (as :: [k]) :: Nat where
  Length '[] = 0
  Length (x ': xs) = 1 + Length xs

type family If (p :: Bool) (a :: k) (b :: k) :: k where
  If True a b = a
  If False a b = b

type family Concat (as :: [k]) (bs :: [k]) :: [k] where
  Concat a '[] = a
  Concat '[] b = b
  Concat (a ': as) bs = a ': Concat as bs

type family Map (f :: a -> b) (as :: [a]) :: [b] where
  Map f '[] = '[]
  Map f (x ': xs) = f x ': Map f xs

type family Sum (xs :: [Nat]) :: Nat where
  Sum '[] = 0
  Sum (x ': xs) = x + Sum xs

ex1 :: Reverse [1,2,3] ~ [3,2,1] => Proxy a
ex1 = Proxy

ex2 :: Length [1,2,3] ~ 3 => Proxy a
ex2 = Proxy

ex3 :: (Length [1,2,3]) ~ (Length (Reverse [1,2,3])) => Proxy a
ex3 = Proxy

-- Reflecting type-level computations back to the value level.

ex4 :: Integer
ex4 = natVal (Proxy :: Proxy (Length (Concat [1,2,3] [4,5,6])))
-- 6

ex5 :: Integer
ex5 = natVal (Proxy :: Proxy (Sum [1,2,3]))
-- 6

-- Couldn't match type ‘2’ with ‘1’
ex6 :: Reverse [1,2,3] ~ [3,1,2] => Proxy a
ex6 = Proxy
The results of type family functions need not necessarily be kinded as (*) either. For example, using Nat or Constraint is permitted.
Kind Indexed Type Families
Just as typeclasses are normally indexed on types, type families can also be indexed on kinds with the kinds given as explicit kind signatures on type variables.
HLists
A heterogeneous list is a cons list whose type statically encodes the ordered types of its values.
Of course this immediately begs the question of how to print such a list out to a string in the presence of type-heterogeneity. In this case we can use type families combined with constraint kinds to apply Show over the HList's parameters, generating the aggregate constraint that all types in the HList are showable, and then derive the Show instance.
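A sketch of the HList type with a Show instance defined by induction over the type-level index (the (:::) operator is an illustrative choice of constructor name):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FlexibleContexts #-}

-- the index (ts :: [*]) records the type of every element in order
data HList (ts :: [*]) where
  HNil  :: HList '[]
  (:::) :: t -> HList ts -> HList (t ': ts)

infixr 5 :::

-- Show is defined by recursion on the index
instance Show (HList '[]) where
  show HNil = "HNil"

instance (Show t, Show (HList ts)) => Show (HList (t ': ts)) where
  show (x ::: xs) = show x ++ " ::: " ++ show xs

example :: HList '[Int, Bool, String]
example = 1 ::: True ::: "foo" ::: HNil

main :: IO ()
main = print example
```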
Type-level Dictionaries

Much of this discussion of promotion begs the question whether we can create data structures at the type level to store information at compile-time. For example, a type-level association list can be used to model a map between type-level symbols and any other promotable types. Together with type families we can write down type-level traversal and lookup functions.
If we ask GHC to expand out the type signature we can view the explicit implementation of the typelevel map lookup function.
Advanced Proofs
Now that we have the length-indexed vector, let's go write the reverse function. How hard could it be?
So we go and write down something like this:
Running this we find that GHC is unhappy about two lines in the code:
As we unfold elements out of the vector we'll end up doing a lot of type-level arithmetic over indices as we combine the subparts of the vector backwards, but as a consequence we find that GHC will run into some unification errors, because it doesn't know about basic arithmetic properties of the natural numbers: namely that forall n. n + 0 = n and forall n m. n + (1 + m) = 1 + (n + m). And of course it shouldn't be expected to; we've constructed a system at the type level which intuitively models arithmetic, but GHC is just a dumb compiler and can't automatically deduce the isomorphism between natural numbers and Peano numbers.
So at each of these call sites we now have a proof obligation to construct proof terms. Recall from our discussion of propositional equality from GADTs that we actually have such machinery to construct this now.
One might consider whether we could avoid using the singleton trick and just use type-level natural numbers; technically this approach should be feasible, although it seems that the natural number solver in GHC 7.8 can decide some properties but not the ones needed to complete the natural number proofs for the reverse function.
The caveat is that there might be a way to do this in GHC 7.6 that I'm not aware of. In GHC 7.10 there are some planned changes to the solver that should be able to resolve these issues. In particular, there are plans to allow pluggable type system extensions that could outsource these kinds of problems to third-party SMT solvers, which can solve such numeric relations and return the information back to GHC's typechecker.
As an aside this is a direct transliteration of the equivalent proof in Agda, which is accomplished via the same method but without the song and dance to get around the lack of dependent types.
Liquid Haskell
LiquidHaskell is an extension to GHC's typesystem that adds the capacity for refinement types using annotation syntax. The type signatures of functions can be checked by the external checker for richer type semantics than GHC provides by default, including non-exhaustive patterns and complex arithmetic properties that require external SMT solvers to verify. For instance LiquidHaskell can statically verify that a function that operates over a Maybe a is always given a Just, or that an arithmetic function always yields an Int that is an even positive number.
LiquidHaskell analyses the modules and discharges proof obligations to an SMT solver to see if the conditions are satisfiable. This allows us to prove the absence of a family of errors around memory safety, arithmetic exceptions and information flow.
You will need either the Microsoft Research Z3 SMT solver or Stanford CVC4 SMT solver.
For Linux:
For Mac:
Then install LiquidHaskell either with Cabal or Stack:
Then, with the LiquidHaskell framework installed, you can annotate your Haskell modules with refinement types. The module can be run through the solver using the liquid command line tool.
To run LiquidHaskell over a Cabal project you can include the cabal directory by passing the --cabaldir flag and then including the source directory which contains your application code. You can specify additional specifications for external modules by including a spec folder containing special LH modules with definitions.
An example specification module.
To run the checker over your project:
For more extensive documentation and further use cases see the official documentation:
Haskell has several techniques for automatic generation of type classes for a variety of tasks that consist largely of boilerplate code generation such as:
- Pretty Printing
- Equality
- Serialization
- Ordering
- Traversals
Generic
The most modern method of doing generic programming uses type families to achieve a better way of deriving the structural properties of arbitrary type classes. Generic implements a typeclass with an associated type Rep (Representation), together with a pair of functions that form a two-sided inverse (isomorphism) converting to and from the associated type and the derived type in question.
GHC.Generics defines a set of named types for modeling the various structural properties of types available in Haskell.
Using the deriving mechanics GHC can generate this Generic instance for us mechanically, if we were to write it by hand for a simple type it might look like this:
Using the :kind! command in GHCi we can look at the type family Rep associated with a Generic instance.

Now the clever bit: instead of writing our generic function over the datatype, we write it over the Rep and then reify the result using from. So for an equivalent version of Haskell's default Eq that instead uses generic deriving we could write:
To accommodate the two methods of writing classes (generic deriving or custom implementations) we can use the DefaultSignatures extension to allow the user to leave typeclass functions blank and defer to Generic, or to define their own.

Now anyone using our library need only derive Generic and create an empty instance of our typeclass, without writing any boilerplate for GEq.
Here is a complete example for deriving equality generics:
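A sketch of such a generic equality class, defined by induction over the GHC.Generics representation types (U1, K1, M1, :+:, :*:):

```haskell
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE DefaultSignatures #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE FlexibleContexts #-}

import GHC.Generics

-- equality over the generic representation
class GEq' f where
  geq' :: f a -> f a -> Bool

instance GEq' U1 where                       -- nullary constructors
  geq' U1 U1 = True

instance GEq c => GEq' (K1 i c) where        -- constructor fields
  geq' (K1 a) (K1 b) = geq a b

instance GEq' f => GEq' (M1 i t f) where     -- metadata wrappers
  geq' (M1 a) (M1 b) = geq' a b

instance (GEq' f, GEq' g) => GEq' (f :+: g) where  -- sums
  geq' (L1 a) (L1 b) = geq' a b
  geq' (R1 a) (R1 b) = geq' a b
  geq' _ _ = False

instance (GEq' f, GEq' g) => GEq' (f :*: g) where  -- products
  geq' (a1 :*: b1) (a2 :*: b2) = geq' a1 a2 && geq' b1 b2

-- the user-facing class: the default method defers to the Rep
class GEq a where
  geq :: a -> a -> Bool
  default geq :: (Generic a, GEq' (Rep a)) => a -> a -> Bool
  geq x y = geq' (from x) (from y)

-- base cases for primitive field types
instance GEq Int  where geq = (==)
instance GEq Char where geq = (==)

data Animal = Dog | Cat deriving (Generic, Show)
instance GEq Animal   -- no boilerplate needed

main :: IO ()
main = do
  print (geq Dog Dog)
  print (geq Dog Cat)
-- True
-- False
```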
See:
- Cooking Classes with Datatype Generic Programming
- Datatype-generic Programming in Haskell
- generic-deriving
Generic Deriving
Using Generics, many common libraries provide a mechanism to derive common typeclass instances. Some real world examples:
The hashable library allows us to derive hashing functions.
The cereal library allows us to automatically derive a binary representation.
{-# LANGUAGE DeriveGeneric #-}

import Data.Word
import Data.ByteString
import Data.Serialize
import GHC.Generics

data Val = A [Val] | B [(Val, Val)] | C
  deriving (Generic, Show)

instance Serialize Val

encoded :: ByteString
encoded = encode (A [B [(C, C)]])
-- "\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\SOH\SOH\NUL\NUL\NUL\NUL\NUL\NUL\NUL\SOH\STX\STX"

bytes :: [Word8]
bytes = unpack encoded
-- [0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,1,2,2]

decoded :: Either String Val
decoded = decode encoded
The aeson library allows us to derive JSON representations for our datatypes.
See: A Generic Deriving Mechanism for Haskell
Higher Kinded Generics
Using the same interface, GHC.Generics provides a separate typeclass for higher-kinded generics. So for instance Maybe has Rep1 of the form:
Typeable
The Typeable class can be used to create runtime type information for arbitrary types.

Using the Typeable instance allows us to write down a type-safe cast function which can safely use unsafeCoerce internally while providing a proof that the resulting type matches the input.
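For example, typeOf and the safe cast from Data.Typeable:

```haskell
import Data.Typeable

main :: IO ()
main = do
  -- typeOf gives a runtime representation of a monomorphic type
  print (typeOf (3 :: Int))
  print (typeOf "foo")
  -- cast succeeds only when the runtime types agree
  print (cast (3 :: Int) :: Maybe Int)
  print (cast (3 :: Int) :: Maybe Double)
-- Int
-- [Char]
-- Just 3
-- Nothing
```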
Of historical note: writing our own Typeable instances was possible up to GHC 7.6, but this allowed us to introduce dangerous behavior that could cause crashes and shouldn't be done except by GHC itself. As of 7.8 GHC forbids hand-written Typeable instances. As of 7.10, -XAutoDeriveTypeable is enabled by default.
See: Typeable and Data in Haskell
Dynamic Types
Since we have a way of querying runtime type information, we can use this machinery to implement a Dynamic type. This allows us to box up any monotype into a uniform type that can be passed to any function taking a Dynamic, which can then unpack the underlying value in a type-safe way.
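A small sketch using Data.Dynamic from base:

```haskell
import Data.Dynamic

-- a heterogeneous list of boxed-up monotypes
dyns :: [Dynamic]
dyns = [toDyn (1 :: Int), toDyn "foo", toDyn (pi :: Double)]

main :: IO ()
main = do
  -- unpacking succeeds only at the correct type
  print (fromDynamic (head dyns) :: Maybe Int)
  print (fromDynamic (head dyns) :: Maybe String)
-- Just 1
-- Nothing
```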
In GHC 7.8 the Typeable class is polykinded so polymorphic functions can be applied over functions and higher kinded types.
Use of Dynamic is somewhat rare, except in odd cases that have to deal with foreign memory and FFI interfaces. Using it for business logic is considered a code smell. Consider a more idiomatic solution.
Data
Just as Typeable lets us create runtime type information, the Data class allows us to reflect information about the structure of datatypes to runtime as needed.
The types for gfoldl and gunfold are a little intimidating (and depend on RankNTypes), so the best way to understand them is to look at some examples. First the most trivial case: a simple sum type Animal would produce the following code:
For a type with nonempty containers we get something a little more interesting. Consider the list type:
Looking at gfoldl we see that Data has an implementation of a function for us to walk an applicative over the elements of the constructor by applying a function k over each element and applying z at the spine. For example, look at the instance for a 2-tuple as well:
This is pretty neat: now within the same typeclass we have a generic way to introspect any Data instance and write logic that depends on the structure and types of its subterms. We can now write a function which allows us to traverse an arbitrary instance of Data and twiddle values based on pattern matching on the runtime types. So let's write down a function over which increments a Val type for both n-tuples and lists.
{-# LANGUAGE DeriveDataTypeable #-}

import Data.Data
import Control.Monad.Identity
import Control.Applicative

data Animal = Cat | Dog deriving (Data, Typeable)

newtype Val = Val Int deriving (Show, Data, Typeable)

incr :: Typeable a => a -> a
incr = maybe id id (cast f)
  where f (Val x) = Val (x * 100)

over :: Data a => a -> a
over x = runIdentity $ gfoldl cont base (incr x)
  where
    cont k d = k <*> (pure $ over d)
    base = pure

example1 :: Constr
example1 = toConstr Dog
-- Dog

example2 :: DataType
example2 = dataTypeOf Cat
-- DataType {tycon = "Main.Animal", datarep = AlgRep [Cat,Dog]}

example3 :: [Val]
example3 = over [Val 1, Val 2, Val 3]
-- [Val 100,Val 200,Val 300]

example4 :: (Val, Val, Val)
example4 = over (Val 1, Val 2, Val 3)
-- (Val 100,Val 200,Val 300)
We can also write generic operations, for example to count the number of parameters in a data type.
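One such operation can be sketched with gmapQ from Data.Data (the function name here is our own):

```haskell
import Data.Data

-- Count the number of immediate subterms (constructor arguments)
-- of any Data instance by querying each with a constant function.
numHoles :: Data a => a -> Int
numHoles = length . gmapQ (const ())
```

For instance `numHoles ((1,2,3) :: (Int,Int,Int))` is 3 and `numHoles (Nothing :: Maybe Int)` is 0.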
Uniplate
Uniplate is a generics library for writing traversals and transformations over arbitrary data structures. It is extremely useful for writing AST transformations and rewriting systems.
The descend function will apply a function to each immediate descendant of an expression and then combine the results into the parent expression.
The transform function will perform a single-pass bottom-up transformation of all terms in the expression.
The rewrite function will perform an exhaustive transformation of all terms in the expression to a fixed point, using Maybe to signify termination.
Alternatively, Uniplate instances can be derived automatically from instances of Data without the need to explicitly write a Uniplate instance. This approach carries a slight amount of overhead compared to an explicit hand-written instance.
Biplate
Biplates generalize plates: the target type isn't necessarily the same as the source, and multi-parameter typeclasses are used to indicate the type of the sub-target. The Uniplate functions all have an equivalent generalized biplate form.
Numeric Tower
Haskell’s numeric tower is unusual and the source of some confusion for novices. Haskell is one of the few languages to incorporate statically typed overloaded literals without a mechanism for “coercions” often found in other languages.
To add to the confusion, numeric literals in Haskell are desugared into a function from a numeric typeclass which yields a polymorphic value that can be instantiated to any instance of the Num or Fractional typeclass at the call-site, depending on the inferred type.
To use a blunt metaphor, we're effectively placing an object in a hole and the size and shape of the hole defines the object we place there. This is very different than in other languages, where a numeric literal like 2.718 is hard coded in the compiler to be a specific type (double or something) and you cast the value at runtime to be something smaller or larger as needed.
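A brief sketch of this behavior: the very same literal can inhabit different numeric types, determined entirely by the type the context demands (the names below are ours).

```haskell
-- The literal 2.718 desugars to (fromRational 2.718), so it can be
-- instantiated at any Fractional type the signature demands.
asDouble :: Double
asDouble = 2.718

asFloat :: Float
asFloat = 2.718

-- Integer literals desugar to (fromInteger 42) and work at any Num type.
asInt :: Int
asInt = 42

asInteger :: Integer
asInteger = 42
```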
The numeric typeclass hierarchy is defined as such:
Conversions between concrete numeric types (from: left column, to: top row) are accomplished with several generic functions.

          Double         Float          Int            Word           Integer        Rational
Double    id             fromRational   truncate       truncate       truncate       toRational
Float     fromRational   id             truncate       truncate       truncate       toRational
Int       fromIntegral   fromIntegral   id             fromIntegral   fromIntegral   fromIntegral
Word      fromIntegral   fromIntegral   fromIntegral   id             fromIntegral   fromIntegral
Integer   fromIntegral   fromIntegral   fromIntegral   fromIntegral   id             fromIntegral
Rational  fromRational   fromRational   truncate       truncate       truncate       id
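A quick sketch of these conversion functions in use:

```haskell
import Data.Ratio ((%))

-- Int to Double: fromIntegral generalizes from any Integral type.
d :: Double
d = fromIntegral (42 :: Int)

-- Double to Rational: toRational gives an exact ratio for the float.
r :: Rational
r = toRational (3.5 :: Double)

-- Double to Int: truncate drops the fractional part toward zero.
n :: Int
n = truncate (3.9 :: Double)
```

Here `d` is `42.0`, `r` is `7 % 2`, and `n` is `3`.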
GMP Integers
The Integer type in GHC is implemented by the GMP (libgmp) arbitrary precision arithmetic library. Unlike the Int type, the size of Integer values is bounded only by the available memory.
Most notably, libgmp is one of the few libraries that compiled Haskell binaries are dynamically linked against. An alternative library, integer-simple, can be linked in place of libgmp.
Complex Numbers
Haskell supports arithmetic with complex numbers via a Complex datatype from the Data.Complex module. The first argument is the real part, while the second is the imaginary part. The type has a single parameter and inherits its numerical typeclass components (Num, Fractional, Floating) from the type of this parameter.
The Num instance for Complex is only defined if the parameter of Complex is an instance of RealFloat.
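A short sketch with Data.Complex (the values are ours):

```haskell
import Data.Complex

-- 3 + 4i, built with the (:+) constructor: real part on the left,
-- imaginary part on the right.
z :: Complex Double
z = 3 :+ 4

-- The modulus |z| = sqrt (3^2 + 4^2) = 5.
m :: Double
m = magnitude z

-- Arithmetic works through the Num instance, e.g. i * i = -1.
i2 :: Complex Double
i2 = (0 :+ 1) * (0 :+ 1)
```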
Decimal & Scientific Types
Scientific provides arbitrary-precision numbers represented using scientific notation. The constructor takes an arbitrarily sized Integer argument for the digits and an Int for the exponent. Alternatively the value can be parsed from a String or converted from a Double or Float.
Polynomial Arithmetic
The standard library for working with symbolic polynomials is the poly library. It exposes an interface for working with univariate polynomials which are backed by an efficient vector library. This allows us to efficiently manipulate and perform arithmetic operations over univariate polynomials.

For example we can instantiate symbolic polynomials, write recurrence rules and generators over them, and factor them.
See: poly
Combinatorics
Combinat is the standard Haskell library for doing combinatorial calculations. It provides a variety of functions for computing:
See: combinat
Number Theory
Arithmoi is the standard number theory library for Haskell. It provides functions for calculating common number-theoretic operations used in combinatorics and cryptography applications, including:

- Modular square roots
- Möbius inversions
- Primality testing
- Riemann zeta functions
- Pollard's rho algorithm
- Jacobi symbols
- Meijer G-functions
See: arithmoi
Stochastic Calculus
HQuantLib provides a variety of functions for working with stochastic processes. This primarily applies to stochastic calculus applied to pricing financial products, such as the Black-Scholes pricing engine and routines for calculating volatility smiles of options products.
See: HQuantLib
Differential Equations
There are several Haskell libraries for finding numerical solutions to systems of differential equations. These kind of problems show up quite frequently in scientific computing problems.
For example a simple differential equation is the Van der Pol oscillator, which occurs frequently in physics. This is a second order differential equation which relates the position x of an oscillator to time t, the acceleration ${d^{2}x \over dt^{2}}$, the velocity ${dx \over dt}$, and a scalar parameter μ. It is given by the equation:
$$
{d^{2}x \over dt^{2}} - \mu (1 - x^{2}) {dx \over dt} + x = 0
$$
For example this equation can be solved for a fixed μ and a set of boundary conditions for the time parameter t. The solution is returned as an HMatrix vector.
Statistics & Probability
Haskell has a basic statistics library for calculating descriptive statistics, generating and sampling probability distributions and performing statistical tests.
Constructive Reals
Instead of modeling the real numbers with finite-precision floating point numbers, we can alternatively work with a Num type which internally manipulates the power series expansions for the expressions when performing operations like arithmetic or transcendental functions, without losing precision in intermediate computations. Then we simply slice off a fixed number of terms and approximate the resulting number to a desired precision. This approach is not without its limitations and caveats (notably that it may diverge).
SAT Solvers
A collection of constraint problems known as satisfiability problems show up in a number of different disciplines from type checking to package management. Simply put a satisfiability problem attempts to find solutions to a statement of conjoined conjunctions and disjunctions in terms of a series of variables. For example:
(A ∨ ¬B ∨ C) ∧ (B ∨ D ∨ E) ∧ (D ∨ F)
To use the picosat library to solve this, the formula can be written as lists of integers and fed to the solver according to a number-to-variable relation:
import Picosat

main :: IO ()
main = do
  solution <- solve [[1, -2, 3], [2, 4, 5], [4, 6]]
  print solution
-- Solution [1,2,3,4,5,6]
The SAT solver itself can be used to solve satisfiability problems with millions of variables in this form and is finely tuned.
See:
SMT Solvers
A generalization of the SAT problem to include predicates over other theories gives rise to the very sophisticated domain of "Satisfiability Modulo Theories" problems. The existing SMT solvers are very sophisticated projects (usually bankrolled by large institutions) and usually have to be called via a foreign function interface or a common interface called SMT-lib. The two most commonly used in Haskell are cvc4 from Stanford and z3 from Microsoft Research.
The SBV library can abstract over different SMT solvers to allow us to express the problem in an embedded domain language in Haskell and then offload the solving work to the third party library.
As an example, here's how you can solve a simple cryptarithm

    M O N A D
+ B U R R I T O
= B A N D A I D

using the SBV library:
import Data.Foldable
import Data.SBV

-- val [4,2] == 42
val :: [SInteger] -> SInteger
val = foldr1 (\d r -> d + 10*r) . reverse

puzzle :: Symbolic SBool
puzzle = do
  ds@[b,u,r,i,t,o,m,n,a,d] <- sequenceA [ sInteger [v] | v <- "buritomnad" ]
  constrain $ distinct ds
  for_ ds $ \d -> constrain $ inRange d (0, 9)
  pure $ val [b,u,r,r,i,t,o]
       + val [m,o,n,a,d]
     .== val [b,a,n,d,a,i,d]
Let's look at all possible solutions:
Map
A map is an associative array mapping keys of any type with an Ord instance to values of any type.
Initialization   empty      O(1)
Size             size       O(1)
Lookup           lookup     O(log n)
Insertion        insert     O(log n)
Traversal        traverse   O(n)
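A short sketch using Data.Map from containers (the keys and values are ours):

```haskell
import qualified Data.Map as Map

-- A map from String keys to Int values.
inventory :: Map.Map String Int
inventory = Map.fromList [("apples", 3), ("oranges", 5)]

-- Lookup returns a Maybe; a missing key yields Nothing.
apples :: Maybe Int
apples = Map.lookup "apples" inventory

-- Insertion is persistent: it returns a new map and leaves the old one intact.
updated :: Map.Map String Int
updated = Map.insert "pears" 7 inventory
```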
Tree
A tree is a directed graph with a single root.
Initialization   empty      O(1)
Size             size       O(1)
Lookup           lookup     O(log n)
Insertion        insert     O(log n)
Traversal        traverse   O(n)
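A brief sketch with the rose tree from Data.Tree in containers (the example tree is ours):

```haskell
import Data.Tree

-- A rose tree: each node holds a value and a list of subtrees.
t :: Tree Int
t = Node 1 [Node 2 [], Node 3 [Node 4 []]]

-- Pre-order flattening of the tree into a list.
flat :: [Int]
flat = flatten t

-- levels groups values by their depth from the root.
byLevel :: [[Int]]
byLevel = levels t
```

Here `flat` is `[1,2,3,4]` and `byLevel` is `[[1],[2,3],[4]]`.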
Set
Sets are unordered data structures containing Ord values of any type and guaranteeing uniqueness within the structure. They are not identical to the mathematical notion of a set even though they share the same name.
Initialization    empty      O(1)
Size              size       O(1)
Insertion         insert     O(log n)
Deletion          delete     O(log n)
Traversal         traverse   O(n)
Membership Test   member     O(log n)
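A minimal sketch with Data.Set from containers (the values are ours):

```haskell
import qualified Data.Set as Set

-- Duplicates are collapsed on construction, so this set has three elements.
s :: Set.Set Int
s = Set.fromList [3, 1, 2, 3, 1]

hasTwo :: Bool
hasTwo = Set.member 2 s
```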
Vector
Vectors are high performance single-dimensional arrays that come in six variants: a mutable and an immutable version for each of the following modules.
Initialization   empty      O(1)
Size             length     O(1)
Indexing         (!)        O(1)
Append           append     O(n)
Traversal        traverse   O(n)
- Data.Vector
- Data.Vector.Storable
- Data.Vector.Unboxed
The most notable feature of vectors is constant time memory access with (!), as well as a variety of efficient map, fold and scan operations on top of a fusion framework that generates surprisingly optimal code.
Mutable Vectors
Mutable vectors are variants of vectors which allow in-place updates.
Initialization   empty      O(1)
Size             length     O(1)
Indexing         (!)        O(1)
Append           append     O(n)
Traversal        traverse   O(n)
Update           modify     O(1)
Read             read       O(1)
Write            write      O(1)
Within the IO monad we can perform arbitrary read and writes on the mutable vector with constant time reads and writes. When needed a static Vector can be created to/from the MVector
using the freeze/thaw functions.
The vector library itself normally does bounds checks on index operations to protect against memory corruption. This can be enabled or disabled at the library level by compiling with the boundschecks cabal flag.
Unordered Containers
Both HashMap and HashSet are purely functional data structures that are drop-in replacements for their containers equivalents but with more efficient space and time performance. Additionally all stored elements must have a Hashable instance. These structures have different time complexities for insertions and lookups.
Initialization   empty      O(1)
Size             size       O(1)
Lookup           lookup     O(log n)
Insertion        insert     O(log n)
Traversal        traverse   O(n)
See: Announcing Unordered Containers
Hashtables
Hashtables provides hashtables with efficient lookup within the ST or IO monad. These have constant time lookup like most languages:
Initialization   empty      O(1)
Size             size       O(1)
Lookup           lookup     O(1)
Insertion        insert     O(1) amortized
Traversal        traverse   O(n)
Graphs
The Graph module in the containers library is a somewhat antiquated API for working with directed graphs. A little bit of data wrapping makes it more straightforward to use. The library is not necessarily well-suited for large graph-theoretic operations but is perfectly fine, for example, for use in a typechecker which needs to resolve strongly connected components of the module definition graph.
So for example we can construct a simple graph:
ex1 :: [(String, String, [String])]
ex1 = [
    ("a", "a", ["b"]),
    ("b", "b", ["c"]),
    ("c", "c", ["a"])
  ]

ts1 :: [String]
ts1 = topo' (fromList ex1)
-- ["a","b","c"]

sc1 :: [[String]]
sc1 = scc' (fromList ex1)
-- [["a","b","c"]]
Or with two strongly connected subgraphs:
ex2 :: [(String, String, [String])]
ex2 = [
    ("a", "a", ["b"]),
    ("b", "b", ["c"]),
    ("c", "c", ["a"]),
    ("d", "d", ["e"]),
    ("e", "e", ["f", "e"]),
    ("f", "f", ["d", "e"])
  ]

ts2 :: [String]
ts2 = topo' (fromList ex2)
-- ["d","e","f","a","b","c"]

sc2 :: [[String]]
sc2 = scc' (fromList ex2)
-- [["d","e","f"],["a","b","c"]]
See: GraphSCC
Graph Theory
The fgl library provides a more efficient graph structure and a wide variety of common graph-theoretic operations. For example, calculating the dominance frontier of a graph shows up quite frequently in control flow analysis for compiler design.
import qualified Data.Graph.Inductive as G

cyc3 :: G.Gr Char String
cyc3 = G.buildGr
  [([("ca", 3)], 1, 'a', [("ab", 2)]),
   ([],          2, 'b', [("bc", 3)]),
   ([],          3, 'c', [])]

-- Loop query
ex1 :: Bool
ex1 = G.hasLoop x

-- Dominators
ex2 :: [(G.Node, [G.Node])]
ex2 = G.dom x 0

x :: G.Gr Int ()
x = G.insEdges edges gr
  where
    gr = G.insNodes nodes G.empty
    edges = [(0,1,()), (0,2,()), (2,1,()), (2,3,())]
    nodes = zip [0, 1 ..] [2, 3, 4, 1]
DList
Initialization   empty      O(1)
Size             size       O(1)
Lookup           lookup     O(log n)
Insertion        insert     O(log n)
Traversal        traverse   O(n)
Append           (|>)       O(1)
Prepend          (<|)       O(1)
A dlist is a list-like structure that is optimized for O(1) append operations. Internally it uses a Church encoding of the list structure. It is specifically suited for operations which are append-only and need access only when manifesting the entire structure. It is particularly well-suited for use in the Writer monad.
Sequence
The sequence data structure behaves structurally like a list but is optimized for append/prepend operations and traversal.
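A short sketch with Data.Sequence from containers (the values are ours):

```haskell
import qualified Data.Sequence as Seq
import Data.Sequence (Seq, (|>), (<|))
import Data.Foldable (toList)

-- O(1) append (|>) and prepend (<|) on either end of the sequence.
s :: Seq Int
s = 0 <| (Seq.fromList [1, 2, 3] |> 4)

asList :: [Int]
asList = toList s
```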
Haskell does not exist in a vacuum and will quite often need to interact with or offload computation to another programming language. Since GHC itself is built on the GCC ecosystem, interfacing with libraries that can be linked via a C ABI is quite natural. Indeed many high performance libraries will call out to Fortran, C, or C++ code to perform numerical computations that can be linked seamlessly into the Haskell runtime. There are several approaches to combining Haskell with other languages via the Foreign Function Interface, or FFI.
Pure Functions
Wrapping pure C functions with primitive types is trivial.
Storable Arrays
There exists a Storable typeclass that can be used to provide low-level access to the memory underlying Haskell values. Ptr objects in Haskell behave much like C pointers, although arithmetic with them is in terms of bytes only, not the size of the type associated with the pointer (this differs from C).

The Prelude defines Storable interfaces for most of the basic types, as do the types in the Foreign.Storable module.
To pass arrays from Haskell to C we can again use Storable Vector and several unsafe operations to grab a foreign pointer to the underlying data that can be handed off to C. Once we’re in C land, nothing will protect us from doing evil things to memory!
/* $(CC) -c qsort.c -o qsort.o */

void swap(int *a, int *b)
{
  int t = *a;
  *a = *b;
  *b = t;
}

void sort(int *xs, int beg, int end)
{
  if (end > beg + 1) {
    int piv = xs[beg], l = beg + 1, r = end;
    while (l < r) {
      if (xs[l] <= piv) {
        l++;
      } else {
        swap(&xs[l], &xs[--r]);
      }
    }
    swap(&xs[--l], &xs[beg]);
    sort(xs, beg, l);
    sort(xs, r, end);
  }
}
The names of foreign functions from a C specific header file can be qualified.
Prepending the function name with a &
allows us to create a reference to the function pointer itself.
Function Pointers
Using the above FFI functionality, it's trivial to pass C function pointers into Haskell, but what about the inverse: passing a pointer to a Haskell function into C? We can do this using foreign import ccall "wrapper".
Will yield the following output:
hsc2hs
When doing socket-level programming handling UDP packets, there is a packed C struct with a set of fields defined by the Linux kernel. These fields are defined in the following C pseudocode.
If we want to marshall packets to and from Haskell datatypes we need to be able to take a pointer to memory holding the packet message header and scan the memory into native Haskell types. This involves knowing some information about the memory offsets of the packet structure. GHC ships with a tool known as hsc2hs which can be used to read information from C header files to automatically generate the boilerplate instances of Storable to perform this marshalling. The hsc2hs tool acts as a preprocessor over .hsc files and can fill in information as specified by several macros to generate Haskell source.
For example the following module from the network library must introspect the msghdr struct from sys/socket.h.
#include <sys/types.h>
#include <sys/socket.h>

import Network.Socket.Imports
import Network.Socket.Internal (zeroMemory)
import Network.Socket.Types (SockAddr)
import Network.Socket.ByteString.IOVec (IOVec)

data MsgHdr = MsgHdr
  { msgName    :: !(Ptr SockAddr)
  , msgNameLen :: !CUInt
  , msgIov     :: !(Ptr IOVec)
  , msgIovLen  :: !CSize
  }

instance Storable MsgHdr where
  sizeOf _ = (#const sizeof(struct msghdr))
  alignment _ = alignment (undefined :: CInt)

  peek p = do
    name    <- (#peek struct msghdr, msg_name) p
    nameLen <- (#peek struct msghdr, msg_namelen) p
    iov     <- (#peek struct msghdr, msg_iov) p
    iovLen  <- (#peek struct msghdr, msg_iovlen) p
    return $ MsgHdr name nameLen iov iovLen

  poke p mh = do
    zeroMemory p (#const sizeof(struct msghdr))
    (#poke struct msghdr, msg_name)    p (msgName mh)
    (#poke struct msghdr, msg_namelen) p (msgNameLen mh)
    (#poke struct msghdr, msg_iov)     p (msgIov mh)
    (#poke struct msghdr, msg_iovlen)  p (msgIovLen mh)
Running the command line tool over this module we get the Haskell output Example.hs. This can also be run as part of a Cabal build step by including hsc2hs in your build-tools.
GHC Haskell has an extremely advanced parallel runtime that embraces several different models of concurrency to adapt to the needs of different domains. Unlike other languages, Haskell does not have a Global Interpreter Lock or equivalent. Haskell code can be executed in a multithreaded context with shared mutable state and communication channels between threads.
A thread in Haskell is created by forking off from the main process using the forkIO
command. This is performed within the IO monad and yields a ThreadId which can be used to communicate with the new thread.
Haskell threads are extremely cheap to spawn, using only around 1.5KB of RAM depending on the platform, and are much cheaper than a pthread in C. Calling forkIO 10^6 times completes just short of a second. Additionally, functional purity in Haskell guarantees that a thread can almost always be terminated even in the middle of a computation without concern.
See:
Sparks
The most basic “atom” of parallelism in Haskell is a spark. It is a hint to the GHC runtime that a computation can be evaluated to weak head normal form in parallel.
rpar a spins off a separate spark that evaluates a to weak head normal form and places the computation in the spark pool. When the runtime determines that there is an available CPU to evaluate the computation it will evaluate (convert) the spark. If the main thread of the program is the evaluator for the spark, the spark is said to have fizzled. Fizzling is generally bad and indicates that the logic or parallelism strategy is not well suited to the work that is being evaluated.
The spark pool is also limited (but user-adjustable) to a default of 8000 entries (as of GHC 7.8.3). Sparks created beyond that limit are said to overflow.
An argument to rseq forces the evaluation of a spark before evaluation continues.
Fizzled      The resulting value has already been evaluated by the main thread so the spark need not be converted.
Dud          The expression has already been evaluated, the computed value is returned and the spark is not converted.
GC'd         The spark is added to the spark pool but the result is not referenced, so it is garbage collected.
Overflowed   Insufficient space in the spark pool when spawning.
The parallel runtime is necessary to use sparks, and the resulting program must be compiled with -threaded. Additionally the program can be compiled with -rtsopts so that it accepts runtime options, such as the number of cores to use.

The runtime can be asked to dump information about spark evaluation by passing the -s runtime flag.
The parallel computations themselves are sequenced in the Eval monad, whose evaluation with runEval is itself a pure computation.
Threads
For fine-grained concurrency and parallelism, Haskell has a lightweight thread system that schedules logical threads on the available operating system threads. These lightweight threads are called unbound threads, while threads tied to a single operating system thread are called bound threads. The functions to spawn and run tasks inside these threads all live in the IO monad. The number of possible simultaneous threads is given by the getNumCapabilities function based on the system environment.
Managed threads work with the runtime system's IO manager, which schedules and manages cooperative multitasking and polling. When an individual unbound thread is blocked polling on a file descriptor or lock it will yield to another runnable thread managed by the runtime. This yield action can also be explicitly invoked with the yield function. A thread can also schedule a wait using threadDelay to yield to the scheduler for a fixed interval given in microseconds.
Once a thread is forked, the fork action will give back a ThreadId which can be used to call actions and kill the thread from another context. Inside a running thread the current ThreadId can be queried with myThreadId.
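A minimal sketch of forking a thread and collecting its result (the MVar-based handshake here is our own convention, not the only way to do this):

```haskell
import Control.Concurrent

-- Fork a worker thread and wait for it to report back. The done MVar
-- is a hypothetical handshake so the main thread knows the worker ran.
example :: IO String
example = do
  done <- newEmptyMVar
  _tid <- forkIO $ do
    me <- myThreadId
    putMVar done ("finished in " ++ show me)
  takeMVar done  -- blocks until the worker writes its message
```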
An exception can also be raised in a given ThreadId, given an instance of the Exception typeclass.

When individually polling on file descriptors there are several functions that can schedule the thread to wake up again when the given file receives a wake event from the kernel. The following functions will yield the current thread waiting on either a read or write event on the given file descriptor Fd.
IORef
An IORef is a mutable reference that can be read and written to within the IO monad. It is the simplest, most low-level mutable reference provided by the base library.

For example we could construct two IORefs which mutably hold the balances for two imaginary bank accounts. These references can be passed to another IO function which can update the values in place.
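That bank-account scenario might be sketched as follows (the names and amounts are ours):

```haskell
import Data.IORef

-- Move an amount from one mutable balance to another, in place.
transfer :: Int -> IORef Int -> IORef Int -> IO ()
transfer amount from to = do
  modifyIORef' from (subtract amount)
  modifyIORef' to   (+ amount)

example :: IO (Int, Int)
example = do
  alice <- newIORef 100
  bob   <- newIORef 50
  transfer 25 alice bob
  (,) <$> readIORef alice <*> readIORef bob
```

After the transfer, `example` yields `(75, 75)`.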
There are also several atomic functions for updating an IORef when working with the threaded runtime.

The atomic modify function atomicModifyIORef reads the value of r and applies the function f to it, giving back (a', b). The value in r is then updated with the new value a' and b is the return value. Both the read and the write are done atomically, so it is not possible for any other thread to alter the underlying IORef between the read and the write.
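A small sketch of this atomic read-modify-write pattern (the counter is ours):

```haskell
import Data.IORef

-- Atomically increment a counter and return its previous value.
-- The pair is (new value to store, value to return).
bump :: IORef Int -> IO Int
bump r = atomicModifyIORef' r (\a -> (a + 1, a))

example :: IO (Int, Int)
example = do
  r   <- newIORef 41
  old <- bump r
  new <- readIORef r
  pure (old, new)
```

Here `example` yields `(41, 42)`.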
Normally an IORef is garbage collected like any other value. Once it is out of scope and the runtime has no more references to it, the runtime will collect the thunk holding the IORef as well as the value the underlying pointer points at. Sometimes working with these references requires adding additional finalisation logic.

The mkWeakIORef function attaches a finalizer, given in its second argument, which is run when the value is garbage collected.
MVars
MVars are mutable references like IORefs that can be used to share mutable state between threads. An MVar has two states: empty and full. Reading from an empty MVar will block the current thread, and writing to a full MVar will also block the current thread. Thus only one value can be held inside the MVar, allowing us to synchronize a value across threads. MVars are the building blocks for many higher-level concurrency primitives which use them under the hood.

An MVar can either be initialised in an empty state or with a supplied value.

The function takeMVar operates like a read, returning the value, but once the value is read the underlying MVar is left empty. The read is performed by the first thread to wake up polling for it.
As an example, consider a multithreaded scenario where a second thread is created which polls atomically on an MVar update.
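Such a scenario might be sketched like this (the message contents are ours):

```haskell
import Control.Concurrent

-- The main thread blocks in takeMVar until the forked thread
-- fills the MVar with a value.
example :: IO String
example = do
  box <- newEmptyMVar
  _   <- forkIO $ putMVar box "ping"
  takeMVar box
```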
If a thread is left sleeping waiting on an MVar and the runtime no longer has any references to code which can write to that MVar (i.e. all references to the MVar are garbage collected), the thread will be thrown the exception BlockedIndefinitelyOnMVar, since no value can subsequently be written to it.
TVar
TVars are transactional mutable variables which can be read and written within the STM monad. The STM monad provides support for Software Transactional Memory, a higher level abstraction for concurrent communication that doesn't require explicit thread maintenance and has a lovely compositional nature.

The STM monad hooks into the runtime system and provides two key operations, atomically and retry, which allow monadic blocks of STM actions to be performed atomically and passed around symbolically. In the event that the runtime fails to commit a transaction, the retry function can rerun the logic contained in an STM a.

TVars can be created just like IORefs, but in addition to IO they can also be created within the STM monad. Reads, writes and updates proceed exactly like IORef updates but inside of STM.
As an example, consider the IORef account transfers from above, but where the two modifyTVar actions are performed atomically inside of the transfer function.
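A sketch of that atomic transfer (the account names and amounts are ours):

```haskell
import Control.Concurrent.STM

-- Both balance updates commit together or not at all: no other
-- thread can observe a state where the money has left one account
-- but not arrived in the other.
transfer :: Int -> TVar Int -> TVar Int -> STM ()
transfer amount from to = do
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)

example :: IO (Int, Int)
example = do
  alice <- newTVarIO 100
  bob   <- newTVarIO 50
  atomically $ transfer 25 alice bob
  (,) <$> readTVarIO alice <*> readTVarIO bob
```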
There is an additional TMVar which behaves precisely like the traditional MVar (i.e. it has an empty and a full state) but which is embedded in STM. It has precisely the same semantics as MVar but emits its values within STM.
Chans
Channels are unbounded queues to which an unbounded number of values can be written. Channels are implemented using MVars and can be consumed by any number of other threads which read data off of the Chan. Channels are created, read from, and written to using a simple new, read and write interface, just as we've seen with other concurrency primitives.

An example in which a channel is created between producer and consumer threads is shown below. This can be used to share data between threads and create work-queue background processing systems.
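Such a producer/consumer pair might be sketched as follows (the values sent are ours):

```haskell
import Control.Concurrent

-- The producer writes values onto the channel from a forked thread;
-- the consumer blocks in readChan until each value arrives, in order.
example :: IO [Int]
example = do
  chan <- newChan
  _ <- forkIO $ mapM_ (writeChan chan) [1, 2, 3]
  sequence (replicate 3 (readChan chan))
```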
There is also an STM variant of Chan called TChan
.
Semaphores
Semaphores are a concurrency primitive used to control access to a common resource shared by multiple threads. A semaphore is a variable containing an integral value that can be incremented or decremented by concurrent processes. A semaphore restricts concurrency to an integral count of consumers called the limit. The QSem module provides an interface for a simple lock semaphore that can be created in IO and polled on using waitQSem.

A simple example of usage:
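One possible sketch of such usage (the worker logic and bookkeeping are ours):

```haskell
import Control.Concurrent
import Control.Concurrent.QSem
import Control.Exception (bracket_)

-- Each worker must acquire the semaphore before doing its work and
-- releases it afterwards; bracket_ guarantees the release runs.
worker :: QSem -> MVar [Int] -> Int -> IO ()
worker sem results n =
  bracket_ (waitQSem sem) (signalQSem sem) $
    modifyMVar_ results (pure . (n :))

example :: IO Int
example = do
  sem     <- newQSem 2        -- at most 2 workers hold the semaphore at once
  results <- newMVar []
  mapM_ (worker sem results) [1 .. 5]
  length <$> readMVar results
```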
QSem also has a variant QSemN which allows a resource to be acquired and released in a fixed quantity other than one. The waitQSemN function takes an integral quantity to wait for.

There is also an STM variant of QSem called TSem which has the same semantics.
Threadscope
Passing the -l runtime flag generates the eventlog, which can be rendered with the threadscope library.
See:
Strategies
Sparks themselves form the foundation for higher level parallelism constructs known as strategies, which adapt spark creation to fit the computation or data structure being evaluated. For instance, if we wanted to evaluate both elements of a tuple in parallel, we could create a strategy which uses sparks to evaluate both sides of the tuple.
This pattern occurs so frequently that the combinator using can be used to write it equivalently in an operator-like form that may be more visually appealing to some.
For a less contrived example, consider a parallel parmap which maps a pure function over a list of values in parallel.
The functions above are quite useful, but will break down if evaluation of the arguments needs to be parallelized beyond simply weak head normal form. For instance, if the argument to rpar is a nested constructor, we'd like to parallelize the entire section of work in evaluating the expression to normal form instead of just the outer layer. As such, we'd like to generalize our strategies so that the evaluation strategy for the arguments can be passed as an argument to the strategy.
Control.Parallel.Strategies contains a generalized version of rpar which embeds additional evaluation logic inside the rpar computation in the Eval monad.
Using the deepseq library we can now construct a Strategy variant of rseq that evaluates to full normal form.
We now can create a “higher order” strategy that takes two strategies and itself yields a computation which when evaluated uses the passed strategies in its scheduling.
These patterns are implemented in the Strategies library along with several other general forms and combinators for combining strategies to fit many different parallel computations.
See:
STM
Software transactional memory is a technique for demarcating blocks of atomic transactions that are guaranteed by the runtime to have several properties:
- No parallel processes can read from the atomic block until the transaction commits.
- The current process is isolated and cannot see any changes made by other parallel processes.
This is similar to the atomicity that databases guarantee. The stm library provides a lovely compositional interface for building up higher level primitives that can be composed in atomic blocks, letting us build safe concurrent logic without worrying about the deadlocks and memory corruption that threaded, mutable-reference approaches to parallel algorithms invite.

The strength of Haskell's purity guarantees that transactions within STM are pure and can always be rolled back if a commit fails. An example of usage is shown below.
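One such example can be sketched with a shared counter incremented by many threads (the details are ours):

```haskell
import Control.Concurrent
import Control.Concurrent.STM
import Control.Monad (replicateM_, forM_)

-- Ten threads each atomically increment a shared counter 100 times;
-- STM guarantees that no increments are lost to interleaving.
example :: IO Int
example = do
  counter <- newTVarIO 0
  done    <- newEmptyMVar
  forM_ [1 .. 10 :: Int] $ \_ -> forkIO $ do
    replicateM_ 100 (atomically (modifyTVar' counter (+ 1)))
    putMVar done ()
  replicateM_ 10 (takeMVar done)  -- wait for all workers to finish
  readTVarIO counter
```

The final count is exactly 1000, regardless of how the threads interleave.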
Monad Par
Using the Par monad we express our computation as a data flow graph which is scheduled in order of the connections between forked computations, which exchange resulting computations with IVar.
Async
Async is a higher level set of functions that work on top of Control.Concurrent and STM.
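The core async/wait pattern can be sketched with base primitives alone (forkIO and MVar) so the example stays self-contained; the real async library additionally propagates exceptions from the child thread and supports cancellation, which this toy helper does not.

```haskell
import Control.Concurrent

-- A toy stand-in for async's `async`/`wait` pair: fork a thread and
-- hand back an MVar that will eventually hold its result.
asyncIO :: IO a -> IO (MVar a)
asyncIO action = do
  var <- newEmptyMVar
  _ <- forkIO (action >>= putMVar var)
  return var

main :: IO ()
main = do
  a <- asyncIO (return (sum [1 .. 100 :: Int]))
  b <- asyncIO (return (product [1 .. 5 :: Int]))
  x <- takeMVar a   -- analogous to `wait`
  y <- takeMVar b
  print (x + y)
```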
Parser combinators were originally developed in the Haskell programming language and the last 10 years have seen a massive amount of refinement and improvements on parser combinator libraries. Today Haskell has an amazing parser ecosystem.
Parsec
For parsing in Haskell it is quite common to use a family of libraries known as Parser Combinators which let us write code to generate parsers which construct themselves from an abstract description of the grammar described with combinators.
- <|>: The choice operator tries to parse the first argument before proceeding to the second.
- many: Consumes an arbitrary number of expressions matching the given pattern and returns them as a list.
- many1: Like many but requires at least one match.
- optional: Optionally parses a given pattern returning its value as a Maybe.
- try: Backtracking operator that lets us parse ambiguous matching expressions and restart with a different pattern.

<|> can be chained sequentially to generate a sequence of options.
There are two styles of writing Parsec: one can choose to write with monads or with applicatives.
The same code written with applicatives uses the applicative combinators:
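A small runnable sketch of both styles side by side, using Parsec's String interface; the grammar (a parenthesized pair of letters) and the names pairM/pairA are invented for this illustration.

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)

-- Monadic style: bind each intermediate result with do-notation.
pairM :: Parser (Char, Char)
pairM = do
  _ <- char '('
  a <- letter
  _ <- char ','
  b <- letter
  _ <- char ')'
  return (a, b)

-- Applicative style: the same grammar with <$>, <*, and *>.
pairA :: Parser (Char, Char)
pairA = (,) <$> (char '(' *> letter <* char ',') <*> (letter <* char ')')

main :: IO ()
main = do
  print (parse pairM "" "(a,b)")
  print (parse pairA "" "(a,b)")
```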
Now, for instance, if we want to parse simple lambda expressions we can encode the parser logic as compositions of these combinators which yield the string parser when evaluated with parse.
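A minimal sketch of such a lambda expression parser over String (the Expr datatype and whitespace handling here are illustrative; a fuller Text-based version with a proper lexer follows in the next section):

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)

data Expr = Var String | Lam String Expr | App Expr Expr
  deriving (Show)

name :: Parser String
name = many1 letter <* spaces

lam :: Parser Expr
lam = do
  _ <- char '\\' <* spaces
  x <- name
  _ <- char '.' <* spaces
  Lam x <$> expr

term :: Parser Expr
term = lam
   <|> (Var <$> name)
   <|> between (char '(' <* spaces) (char ')' <* spaces) expr

-- Juxtaposition is left-associative application.
expr :: Parser Expr
expr = foldl1 App <$> many1 term

main :: IO ()
main = print (parse (expr <* eof) "" "\\x. x y")
```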
Custom Lexer
In our previous example a lexing pass was not necessary because each lexeme mapped to a sequential collection of characters in the stream type. If we wanted to extend this parser with a nontrivial set of tokens, then Parsec provides us with a set of functions for defining lexers and integrating these with the parser combinators. The simplest example builds on top of the builtin Parsec language definitions which define the most common lexical schemes.
For instance we'll build on top of the empty language grammar (Parsec also ships a haskellDef grammar), using the Text stream type instead of String.
```haskell
{-# LANGUAGE OverloadedStrings #-}

import Text.Parsec
import Text.Parsec.Text
import qualified Text.Parsec.Token as Tok
import qualified Text.Parsec.Language as Lang
import Data.Functor.Identity (Identity)
import qualified Data.Text as T
import qualified Data.Text.IO as TIO

data Expr
  = Var T.Text
  | App Expr Expr
  | Lam T.Text Expr
  deriving (Show)

lexer :: Tok.GenTokenParser T.Text () Identity
lexer = Tok.makeTokenParser style

style :: Tok.GenLanguageDef T.Text () Identity
style = Lang.emptyDef
  { Tok.commentStart    = "{-"
  , Tok.commentEnd      = "-}"
  , Tok.commentLine     = "--"
  , Tok.nestedComments  = True
  , Tok.identStart      = letter
  , Tok.identLetter     = alphaNum <|> oneOf "_'"
  , Tok.opStart         = Tok.opLetter style
  , Tok.opLetter        = oneOf ":!#$%&*+./<=>?@\\^|-~"
  , Tok.reservedOpNames = []
  , Tok.reservedNames   = []
  , Tok.caseSensitive   = True
  }

parens :: Parser a -> Parser a
parens = Tok.parens lexer

reservedOp :: T.Text -> Parser ()
reservedOp op = Tok.reservedOp lexer (T.unpack op)

ident :: Parser T.Text
ident = T.pack <$> Tok.identifier lexer

contents :: Parser a -> Parser a
contents p = do
  Tok.whiteSpace lexer
  r <- p
  eof
  return r

var :: Parser Expr
var = do
  x <- ident
  return (Var x)

app :: Parser Expr
app = do
  e1 <- expr
  e2 <- expr
  return (App e1 e2)

fun :: Parser Expr
fun = do
  reservedOp "\\"
  binder <- ident
  reservedOp "."
  rhs <- expr
  return (Lam binder rhs)

expr :: Parser Expr
expr = do
  es <- many1 aexp
  return (foldl1 App es)

aexp :: Parser Expr
aexp = fun <|> var <|> parens expr

test :: T.Text -> Either ParseError Expr
test = parse (contents expr) "<stdin>"

repl :: IO ()
repl = do
  str <- TIO.getLine
  print (test str)
  repl

main :: IO ()
main = repl
```
See: Text.Parsec.Language
Simple Parsing
Putting our lexer and parser together we can write down a more robust parser for our little lambda calculus syntax.
Trying it out:
Megaparsec
Megaparsec is a generalisation of Parsec which can work with several input stream types:
- Text (strict and lazy)
- ByteString (strict and lazy)
- String = [Char]
Megaparsec is an expanded and optimised form of Parsec which can be used to write much larger, more complex parsers with custom lexers and Clang-style error message handling.
An example below for the lambda calculus is quite concise:
Attoparsec
Attoparsec is a parser combinator library like Parsec, but more suited for bulk parsing of large text and binary files than for parsing language syntax to ASTs. When written properly, Attoparsec parsers can be extremely efficient. One notable distinction between Parsec and Attoparsec is how backtracking is handled: attoparsec parsers backtrack by default, which reflects attoparsec's different underlying parser model.
For a simple little lambda calculus language we can use attoparsec much in the same way we used Parsec:
For an example try the above parser with the following simple lambda expression.
Attoparsec adapts very well to binary and network protocol style parsing as well; this is extracted from a small implementation of a distributed consensus network protocol:
Configurator
Configurator is a library for configuring Haskell daemons and programs. It uses a simple, but flexible, configuration language, supporting several of the most commonly needed types of data, along with interpolation of strings from the configuration or the system environment.
An example configuration file:
Configurator also includes an import directive that allows the configuration of a complex application to be split across several smaller files, or configuration data to be shared across several applications.
Optparse Applicative
optparse-applicative is a combinator library for building command line interfaces that take in various user flags, commands and switches and map them into Haskell data structures. The main interface is through the applicative functor Parser and various combinators such as strArgument and flag which populate the option parsing table with a monadic action that returns a Haskell value. The resulting sequence of values can be combined applicatively into a larger Config data structure that holds all the given options. The help header is also automatically generated from the combinators.
Optparse Generic
Many optparse-applicative command line parsers can also be generated using Generics from descriptions of records. This approach is not foolproof but works well enough for simple command line applications with a few options. For more complex interfaces with subcommands and help information you'll need to go back to the optparse-applicative level. For example:
Happy & Alex
Happy is a parser generator system for Haskell, similar to the tool yacc for C. It works as a preprocessor with its own syntax that generates a parse table from two specifications, a lexer file and a parser file. Happy does not have the same underlying parser implementation as parser combinators and can effectively work with left-recursive grammars without explicit factorization. It can also easily be modified to track position information for tokens and handle offside parsing rules for indentation-sensitive grammars. Happy is used in GHC itself for Haskell's grammar.
- Lexer.x
- Parser.y
Running the standalone commands will take Alex/Happy source files from stdin and generate and output Haskell modules. Alex and Happy files can contain arbitrary Haskell code that can be escaped to the output.
The generated modules are generally not human readable and, unfortunately, error messages are reported against the generated Haskell source, not the Happy source. Anything enclosed in braces is interpreted as literal Haskell, while the code outside the braces is interpreted as parser grammar.
Happy and Alex can be integrated into a cabal file simply by including the Parser.y and Lexer.x files inside of the exposed modules and adding them to the build-tools pragma.
Lexer
For instance we could define a little toy lexer with a custom set of tokens.
Parser
The associated parser is a list of production rules and a monad to run the parser in. Production rules consist of a set of options on the left and generating Haskell expressions on the right, with indexed metavariables ($1, $2, …) mapping to the ordered terms on the left (e.g. in a production `term term`, the first term is bound to $1 and the second to $2).
An example parser module:
As a simple input consider the following simple program.
Lazy IO
The problem with using the usual monadic approach to processing data accumulated through IO is that the Prelude tools require us to manifest large amounts of data in memory all at once before we can even begin computation. Reading from the file creates a thunk for the string which, when forced, will then read the file. The problem is that this method ties the ordering of IO effects to evaluation order, which is difficult to reason about in the large.
Consider that normally the monad laws (in the absence of seq) guarantee that these computations should be identical. But using lazy IO we can construct a degenerate case.
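One such degenerate case can be sketched with base alone: hGetContents returns a lazy string, and closing the handle before the string is demanded silently truncates it, so the observed result depends on evaluation order rather than on the order the IO actions were written. The file path here is illustrative.

```haskell
import System.IO

main :: IO ()
main = do
  writeFile "/tmp/lazyio-demo.txt" "hello world"
  h <- openFile "/tmp/lazyio-demo.txt" ReadMode
  s <- hGetContents h   -- returns a thunk; nothing is read yet
  hClose h              -- closing before forcing discards the unread contents
  print (length s)      -- the file contents were never demanded
```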
So what we need is a system to guarantee deterministic resource handling with constant memory usage. To that end both the Conduits and Pipes libraries solved this problem using different ( though largely equivalent ) approaches.
Pipes
Pipes is a stream processing library with a strong emphasis on the static semantics of composition. The simplest usage is to connect “pipe” functions with the (>->) composition operator, where each component can await and yield to push and pull values along the stream.
For example we could construct a “FizzBuzz” pipe.
To continue with the degenerate case we constructed with Lazy IO, consider that we can now compose and sequence deterministic actions over files without having to worry about effect order.
This is a simple sampling of the functionality of pipes. The documentation for pipes is extensive and a great deal of care has been taken to make the library extremely thorough. pipes is a shining example of an accessible yet category-theory-driven design.
See: Pipes Tutorial
ZeroMQ
As a motivating example, ZeroMQ is a network messaging library that abstracts over traditional Unix sockets to a variety of network topologies. Most notably it isn't designed to provide transactional guarantees for delivery or recovery in case of errors, so it's necessary to design a layer on top of it to provide the desired behavior at the application layer.
In Haskell we'd like to guarantee that if we're polling on a socket we get messages delivered in a timely fashion or consider the resource in an error state and recover from it. Using pipes-safe we can manage the life cycle of lazy IO resources and can safely handle failures, resource termination and finalization gracefully. In other languages this kind of logic would be smeared across several places, or put in some global context and prone to introduce errors and subtle race conditions. Using pipes we instead get a nice tight abstraction designed exactly to fit this kind of use case.
For instance now we can bracket the ZeroMQ socket creation and finalization within the SafeT
monad transformer which guarantees that after successful message delivery we execute the pipes function as expected, or on failure we halt the execution and finalize the socket.
Conduits
Conduits are a conceptually similar, though philosophically different, approach to the same problem of constant-space deterministic resource handling for IO resources. The first difference is that the await function now returns a Maybe, which allows different handling of termination. Since 1.2.8 the separate connecting and fusing operators are deprecated in favor of a single fusing operator (.|).
Recently Haskell has seen quite a bit of development of cryptography libraries, as it serves as an excellent language for working with and manipulating the algebraic structures found in cryptographic primitives. In addition to most of the basic hashing, elliptic curve and cipher suite libraries, Haskell has an excellent standard cryptography library called cryptonite which provides the standard kitchen sink of most modern primitives. These include hash functions, elliptic curve cryptography, digital signature algorithms, ciphers, one time passwords, entropy generation and safe memory handling.
SHA Hashing
A cryptographic hash function is a special class of hash function that has certain properties which make it suitable for use in cryptography. It is a mathematical algorithm that maps data of arbitrary size to a bit string of a fixed size (a hash function) which is designed to also be a oneway function, that is, a function which is infeasible to invert.
SHA-256 is a cryptographic hash function from the SHA-2 family and is standardized by NIST. It produces a 256-bit message digest.
Password Hashing
Modern applications should use either the Blake2 or Argon2 hashing algorithms for storing passwords in a database as part of an authentication workflow.
To use Argon2:
To use Blake2:
Curve25519 Diffie-Hellman
Curve25519 is a widely used Diffie-Hellman function suitable for a wide variety of applications. Private and public keys using Curve25519 are 32 bytes each. Elliptic curve Diffie-Hellman is a protocol in which two parties can exchange their public keys in the clear and generate a shared secret which can be used to share information across a secure channel.
A private key is a large integral value which is multiplied by the base point of the curve to generate the public key. Going backwards from a public key requires one to solve the elliptic curve discrete logarithm problem, which is believed to be computationally infeasible.
Diffie-Hellman key exchange can be performed by executing the function dh over the private and public keys for Alice and Bob.
An example is shown below:
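The cryptonite API is not reproduced here; the toy sketch below illustrates only the underlying shared-secret property, using classic finite-field Diffie-Hellman over integers modulo a prime. The parameters and secrets are illustrative and in no way secure; real code should use Crypto.PubKey.Curve25519 from cryptonite.

```haskell
-- Square-and-multiply modular exponentiation.
modexp :: Integer -> Integer -> Integer -> Integer
modexp _ 0 _ = 1
modexp b e m
  | even e    = half * half `mod` m
  | otherwise = b * modexp b (e - 1) m `mod` m
  where half = modexp b (e `div` 2) m

main :: IO ()
main = do
  let p = 2147483647            -- a Mersenne prime (illustrative only)
      g = 7                     -- generator (illustrative only)
      aPriv = 1234567           -- Alice's secret
      bPriv = 7654321           -- Bob's secret
      aPub = modexp g aPriv p   -- exchanged in the clear
      bPub = modexp g bPriv p
  -- Both sides derive the same secret: (g^b)^a = (g^a)^b (mod p)
  print (modexp bPub aPriv p == modexp aPub bPriv p)
```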
See:
Ed25519 EdDSA
EdDSA is a digital signature scheme based on Schnorr signatures using the twisted Edwards curve Ed25519 and SHA-512 (SHA-2). It generates succinct (64 byte) signatures and has fast verification times.
See Also:
Merkle Trees
Merkle trees are a type of authenticated data structure that consists of a sequence of data divided into an even number of partitions which are incrementally hashed in a binary tree, with each level of the tree hashed to produce the hash of the next level until the root of the tree is reached. The root hash is called the Merkle root and uniquely identifies the data included under it. Any change to the leaves, or any reordering of the nodes, will produce a different hash.
A Merkle tree admits an efficient “proof of inclusion”: evidence that a single node is included in the set can be produced by simply tracing the hashes from that node up the binary tree to the root. This is a logarithmic-order set of hashes and is quite efficient.
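To make the tree construction concrete, here is a toy sketch using a deliberately non-cryptographic stand-in hash so the example runs with base alone; real code would use SHA-256 from cryptonite. Only the level-by-level pairing structure is the point.

```haskell
import Data.Bits (shiftL, xor)
import Data.Char (ord)

-- Toy (non-cryptographic!) hash standing in for SHA-256.
hash :: String -> Integer
hash = foldl (\acc c -> (acc `shiftL` 5) `xor` acc `xor` fromIntegral (ord c)) 5381

-- Hash adjacent pairs level by level until a single Merkle root remains.
merkleRoot :: [Integer] -> Integer
merkleRoot [h] = h
merkleRoot hs  = merkleRoot (pairUp hs)
  where
    pairUp (a : b : rest) = hash (show a ++ show b) : pairUp rest
    pairUp [a]            = [hash (show a ++ show a)]  -- duplicate an odd leaf
    pairUp []             = []

main :: IO ()
main = do
  let root  = merkleRoot (map hash ["tx1", "tx2", "tx3", "tx4"])
      root' = merkleRoot (map hash ["tx1", "tx2", "tx3", "txX"])
  -- Any change to a leaf changes the root:
  print (root /= root')
```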
Secure Memory Handling
When using Haskell for cryptography work, and even inside web services, some care must be taken to ensure that the primitives you are using don't accidentally expose secrets or user data. This can occur in many ways: through the mishandling of keys, timing attacks against interactive protocols, and the insecure wiping of memory.
When using Haskell integers be aware that arithmetic operations are not constant time and are simply backed by GMP integers. This may or may not be appropriate for your code if you expect arithmetic operations to be branch-free or have constant time addition or multiplication. If you need constant-time arithmetic you will likely have to drop down to C or assembly and link the resulting code into your Haskell logic. Many Haskell cryptography libraries do just this.
With regards to timing attacks, take note of which functions are marked as vulnerable to timing attacks as most of these are marked in public API documentation.
When comparing hashes and unencrypted data for equality, also make sure to use an equality test which is constant time. The default derived instance for Eq does not have this property. The securemem library provides a SecureMem datatype which can hold an arbitrarily sized ByteString and can only be compared against other SecureMem ByteStrings by a constant time algorithm.
This data structure will also automatically scrub its bytes with a runtime integrated finalizer on the pointer to the underlying memory. This ensures that as soon as the value is garbage collected, its underlying memory is wiped to zero values and does not linger on the process’s memory.
AES Encryption
AES (Advanced Encryption Standard) is a symmetric block cipher standardized by NIST. The cipher block size is fixed at 16 bytes and data is encrypted using a key of 128, 192 or 256 bits. AES is a common cipher standard for symmetric encryption and is used heavily in internet protocols.
An example of encrypting and decrypting data using the cryptonite
library is shown below:
Galois Fields
Many modern cryptographic protocols require the use of finite field arithmetic. Finite fields are algebraic structures that have a field structure (addition, multiplication, division) and closure over a finite set of elements.
See:
Elliptic Curves
Elliptic curves are a type of algebraic structure used heavily in cryptography. Most generally, elliptic curves are families of second order plane curves in two variables defined over finite fields. These elliptic curves admit a group construction over the curve points which has multiplication and addition. For finite fields of large order, computing inversions is quite computationally difficult; this gives rise to a trapdoor function which is easy to compute in one direction but difficult in reverse.
There are many types of plane curves with different coefficients that can be defined. The widely studied curves fall into one of four classes. These are defined in the elliptic-curve library as lifted datatypes which are used at the type level to distinguish curve operations.
- Binary
- Edwards
- Montgomery
- Weierstrass
On top of these curves there is an additional degree of freedom in the choice of coordinate system used. There are many ways to interpret the Cartesian plane in terms of coordinates and some of these coordinate systems admit more efficient operations for multiplication and addition of points.
- Affine
- Jacobian
- Projective
For example the common Ed25519 curve can be defined as the following group structure defined as a series of typelevel constructions:
Operations on these can be executed through several type class functions.
See: ellipticcurve
Pairing Cryptography
Cryptographic pairings are a novel technique that allows us to construct bilinear mappings of the form:
e : 𝔾_{1} × 𝔾_{2} → 𝔾_{T}
These are bilinear over group addition and multiplication.
e(g_{1} + g_{2}, h) = e(g_{1}, h)e(g_{2}, h)
e(g, h_{1} + h_{2}) = e(g, h_{1})e(g, h_{2})
There are many types of pairings that can be computed. The pairing library implements the Ate pairing over several elliptic curve groups, including the Barreto-Naehrig family and the BLS12-381 curve. These types of pairings are used quite frequently in modern cryptographic protocols such as the construction of zkSNARKs.
See
zkSNARKs
zkSNARKs (zero-knowledge succinct non-interactive arguments of knowledge) are a modern cryptographic construction that enables two parties, called the Prover and the Verifier, to convince the verifier that a general computational statement is true without revealing anything else.
Haskell has a variety of libraries for building zkSNARK protocols including libraries to build circuit representations of embedded domain specific languages and produce succinct pairing based zero knowledge proofs.
- zkp: Implementation of the Groth16 protocol based on bilinear pairings.
- bulletproofs: Implementation of the Bulletproofs protocol.
- arithmetic-circuits: Generic data structures for constructing arithmetic circuits and Rank-1 constraint systems (R1CS) in Haskell.
time
Haskell's datetime library is unambiguously called time. It exposes six core data structures which hold temporal quantities of various precisions.
- Day: A calendar date (day, month, year) in the Gregorian calendar system
- TimeOfDay: A clock time measured in hours, minutes and seconds
- UTCTime: An absolute point in time in Coordinated Universal Time
- TimeZone: An ISO8601 timezone
- LocalTime: A Day and TimeOfDay combined into an aggregate type
- ZonedTime: A LocalTime combined with a TimeZone
There are several delta types that correspond to changes in time measured in various units of days or seconds.
- NominalDiffTime: Time delta measured in seconds, with picosecond resolution
- CalendarDiffDays: Calendar delta measured in months and a days offset
- CalendarDiffTime: Time difference measured in months and seconds, with picosecond resolution
ISO8601
The ISO standard for rendering and parsing datetimes works with the default temporal datatypes, bidirectionally for both parsing and pretty printing. A simple use case is shown below:
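A sketch of that use case with the time library's Data.Time.Format.ISO8601 module (available in time >= 1.9, which ships with recent GHCs); the date used is illustrative.

```haskell
import Data.Time
import Data.Time.Format.ISO8601

main :: IO ()
main = do
  -- Render a UTCTime as an ISO8601 string:
  let t = UTCTime (fromGregorian 2020 1 15) 0
  putStrLn (iso8601Show t)
  -- Parse it back; iso8601ParseM works in any MonadFail:
  parsed <- iso8601ParseM "2020-01-15T00:00:00Z" :: IO UTCTime
  print (parsed == t)
```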
JSON
Aeson is a library for efficiently parsing and generating JSON. It is the canonical Haskell library for handling JSON.
A point of some subtlety for beginners is that Aeson functions are polymorphic in their return types, meaning that the resulting type of decode is specified only in the context of your program's use of the decode function. So if you use decode at a point in your program and bind it to a value x, and then use x as if it were an integer throughout the rest of your program, Aeson will select the typeclass instance which parses the given input string into a Haskell integer.
Value
Aeson uses several high performance data structures (Vector, Text, HashMap) by default instead of the naive versions so typically using Aeson will require that we import them and use OverloadedStrings
when indexing into objects.
The underlying Aeson structure is called Value
and encodes a recursive tree structure that models the semantics of untyped JSON objects by mapping them onto a large sum type which embodies all possible JSON values.
For instance the Value expansion of the following JSON blob:
Is represented in Aeson as the Value
:
Let’s consider some larger examples, we’ll work with this contrived example JSON:
Unstructured or Dynamic JSON
In dynamic scripting languages it’s common to parse amorphous blobs of JSON without any a priori structure and then handle validation problems by throwing exceptions while traversing it. We can do the same using Aeson and the Maybe monad.
Structured JSON
This isn't ideal since we've just smeared all the validation logic across our traversal logic instead of separating concerns and handling validation in separate logic. We'd like to describe the structure beforehand and handle the invalid case separately. Using Generic also allows Haskell to automatically write the serializer and deserializer between our datatype and the JSON string based on the record field names.
Now we get our validated JSON wrapped up into a nicely typed Haskell ADT.
The functions fromJSON
and toJSON
can be used to convert between this sum type and regular Haskell types with.
As of GHC 7.10.2 we can use the -XDeriveAnyClass extension to automatically derive instances of FromJSON and ToJSON without the need for standalone instance declarations. These are implemented entirely in terms of the default methods which use Generics under the hood.
```haskell
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DeriveGeneric #-}

import Data.Aeson
import Data.ByteString.Lazy.Char8 as BL
import Data.Text
import GHC.Generics

data Refs = Refs
  { a :: Text,
    b :: Text
  }
  deriving (Show, Generic, FromJSON, ToJSON)

data Data = Data
  { id :: Int,
    name :: Text,
    price :: Int,
    tags :: [Text],
    refs :: Refs
  }
  deriving (Show, Generic, FromJSON, ToJSON)

main :: IO ()
main = do
  contents <- BL.readFile "example.json"
  let Just dat = decode contents
  print $ name dat
  print $ a (refs dat)
  BL.putStrLn $ encode dat
```
Hand Written Instances
While it's useful to use generics to derive instances, sometimes you actually want more fine-grained control over serialization and deserialization, so we fall back on writing ToJSON and FromJSON instances manually. Using FromJSON we can project into the hashmap using the (.:) operator to extract keys. If the key fails to exist the parser will abort with a key failure message. The ToJSON instances can never fail and simply require us to pattern match on our custom datatype and generate an appropriate value.
The law that the FromJSON and ToJSON classes should maintain is that encode . decode and decode . encode should map to the same object. Although in practice there are many times when we break this rule, especially when serialization or deserialization is one-way.
See: Aeson Documentation
Yaml
Yaml is a textual serialization format similar to JSON. It uses an indentation sensitive structure to encode nested maps of keys and values. The Yaml interface for Haskell is a precise copy of Data.Aeson's.
YAML Input:
YAML Output:
```haskell
Object
  (fromList
     [ ( "invoice" , Number 34843.0 )
     , ( "date" , String "2001-01-23" )
     , ( "bill-to"
       , Object
           (fromList
              [ ( "address"
                , Object
                    (fromList
                       [ ( "state" , String "MI" )
                       , ( "lines" , String "458 Walkman Dr.\nSuite #292\n" )
                       , ( "city" , String "Royal Oak" )
                       , ( "postal" , Number 48046.0 )
                       ])
                )
              , ( "family" , String "Dumars" )
              , ( "given" , String "Chris" )
              ])
       )
     ])
```
To parse this file we use the following datatypes and functions:
```haskell
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE ScopedTypeVariables #-}

import qualified Data.ByteString as BL
import Data.Text (Text)
import Data.Yaml
import GHC.Generics

data Invoice = Invoice
  { invoice :: Int,
    date :: Text,
    bill :: Billing
  }
  deriving (Show, Generic, FromJSON)

data Billing = Billing
  { address :: Address,
    family :: Text,
    given :: Text
  }
  deriving (Show, Generic, FromJSON)

data Address = Address
  { lines :: Text,
    city :: Text,
    state :: Text,
    postal :: Int
  }
  deriving (Show, Generic, FromJSON)

main :: IO ()
main = do
  contents <- BL.readFile "example.yaml"
  let (res :: Either ParseException Invoice) = decodeEither' contents
  case res of
    Left err -> print err
    Right val -> print val
```
Which generates:
CSV
Cassava is an efficient CSV parser library. We’ll work with this tiny snippet from the iris dataset:
sepal_length,sepal_width,petal_length,petal_width,plant_class
5.1,3.5,1.4,0.2,Iris-setosa
5.0,2.0,3.5,1.0,Iris-versicolor
6.3,3.3,6.0,2.5,Iris-virginica
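Before reaching for cassava, the shape of the decoding can be sketched by hand with base alone; the Plant record and splitOn helper below are illustrative, and cassava replaces all of this boilerplate (plus proper quoting and escaping) with a Generic-derived instance.

```haskell
data Plant = Plant
  { sepalLength :: Double
  , sepalWidth  :: Double
  , petalLength :: Double
  , petalWidth  :: Double
  , plantClass  :: String
  } deriving (Show)

-- Naive field splitter (no quoting support, unlike cassava).
splitOn :: Char -> String -> [String]
splitOn c s = case break (== c) s of
  (chunk, [])       -> [chunk]
  (chunk, _ : rest) -> chunk : splitOn c rest

parseRow :: String -> Maybe Plant
parseRow row = case splitOn ',' row of
  [a, b, c, d, cls] -> Just (Plant (read a) (read b) (read c) (read d) cls)
  _                 -> Nothing

main :: IO ()
main = mapM_ (print . parseRow)
  [ "5.1,3.5,1.4,0.2,Iris-setosa"
  , "6.3,3.3,6.0,2.5,Iris-virginica"
  ]
```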
Unstructured CSV
Just like with Aeson if we really want to work with unstructured data the library accommodates this.
We see we get the nested set of stringy vectors:
[ [ "sepal_length"
, "sepal_width"
, "petal_length"
, "petal_width"
, "plant_class"
]
, [ "5.1" , "3.5" , "1.4" , "0.2" , "Iris-setosa" ]
, [ "5.0" , "2.0" , "3.5" , "1.0" , "Iris-versicolor" ]
, [ "6.3" , "3.3" , "6.0" , "2.5" , "Iris-virginica" ]
]
Structured CSV
Just like with Aeson we can use Generic to automatically write the deserializer between our CSV data and our custom datatype.
And again we get a nice typed ADT as a result.
[ Plant
{ sepal_length = 5.1
, sepal_width = 3.5
, petal_length = 1.4
, petal_width = 0.2
, plant_class = "Iris-setosa"
}
, Plant
{ sepal_length = 5.0
, sepal_width = 2.0
, petal_length = 3.5
, petal_width = 1.0
, plant_class = "Iris-versicolor"
}
, Plant
{ sepal_length = 6.3
, sepal_width = 3.3
, petal_length = 6.0
, petal_width = 2.5
, plant_class = "Iris-virginica"
}
]
There is a common meme that it is impossible to build web CRUD applications in Haskell. This is absolutely false: the ecosystem provides a wide variety of tools and frameworks for building modern web services. That said, although Haskell has web frameworks, the userbase of these libraries is several orders of magnitude smaller than that of common tools like PHP and WordPress, and as such they are not close to the same level of polish, documentation, or community size. Put simply, you won't be able to drunkenly muddle your way through building a Haskell web application by copying and pasting code from Stack Overflow.
Building web applications in Haskell is always a balance between the power and flexibility of the typedriven way of building software versus the network effects of ecosystems based on dynamically typed languages with lower barriers to entry.
Web packages can mostly be broken down into several categories:
- Web servers: Services that handle the TCP level of content delivery and protocol servicing.
- Request libraries: Libraries for issuing HTTP requests to other servers.
- Templating libraries: Libraries to generate HTML by interpolating strings.
- HTML generation: Libraries to generate HTML from Haskell datatypes.
- Form handling & validation: Libraries for handling form input and serialisation, and for validating data against a given schema and constraint sets.
- Web frameworks: Frameworks for constructing RESTful services and handling the lifecycle of HTTP requests within a business logic framework.
- Database mapping: ORM and database libraries to work with database models and serialise data to web services. See Databases.
Frameworks
There are several large Haskell web frameworks:
Servant
Servant is the newest of the standard Haskell web frameworks. It emerged after GHC 8.0 and incorporates many modern language extensions. It is based around the key idea of a type-safe routing system in which many aspects of the request/response cycle of the server are expressed at the type level. This prevents many common errors found in web applications. Servant also has very advanced documentation generation capability and can automatically generate API endpoint documentation from the type signatures of an application. Servant has a reputation for being a bit more challenging to learn but is quite powerful and has a wide userbase in the industrial Haskell community.
See: Servant
Scotty
Scotty is a minimal web framework that builds on top of the Warp web server. It is based on a simple routing model that makes standing up REST API services quite easy. Its design is modeled after Flask in Python and Sinatra in Ruby.
See: Scotty
Yesod
Yesod is a large featureful ecosystem built on lots of metaprogramming using Template Haskell. There is excellent documentation and a book on building real world applications. This style of metaprogramming appeals to some types of programmers who can work with the code generation style.
Snap
Snap is a small Haskell web framework which was developed heavily in the early 2010s. It is based on a very well-tested core and has a modular framework in which “snaplets” can extend the base server. Much of the Haskell.org infrastructure of packages and development runs on top of Snap web applications.
HTTP Requests
Haskell has a variety of HTTP request and processing libraries. The simplest and most flexible is the HTTP library.
Req
Req is a modern HTTP request library that provides a simple monad for executing batches of HTTP requests to servers. It integrates closely with the Aeson library for JSON handling and exposes a type safe API to prevent the mixing of invalid requests and payload types.
The two toplevel functions of note are req and runReq, which run inside of a Req monad that holds the socket state. An end-to-end example can include serialising and deserialising requests to and from JSON for RESTful services.
Blaze
Blaze is an HTML combinator library that provides the capacity to build composable bits of HTML programmatically. It isn't a string templating library like Hastache; instead it provides an API for building up HTML documents from logic, where the format of the output is generated procedurally.
For sequencing HTML elements the elements can either be sequenced in a monad or with monoid operations.
For custom datatypes we can implement the ToMarkup
class to convert between Haskell data structures and HTML representation.
Lucid
Lucid is another HTML generation library. It takes a different namespacing approach than Blaze and doesn't use names which clash with the default Prelude exports. So elements like div, id, and head are replaced with underscore-suffixed functions: div_, id_, and head_.
The base interface is defined through a ToHtml typeclass which renders an element into a text builder interface wrapped in the HtmlT transformer.
New elements and attributes can be created by the smart constructors for Attribute
and Element
types.
A simple example of usage is shown below:
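A minimal sketch of such usage (the page content here is invented for illustration) might be:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Lucid

-- Underscore-suffixed combinators avoid clashing with Prelude names.
page :: Html ()
page = html_ $ do
  head_ (title_ "Example")
  body_ $ do
    div_ [id_ "header"] "Lucid"
    p_ "A simple page."

main :: IO ()
main = print (renderText page)
```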
Hastache
Hastache is a string templating library based on the “Mustache” style of encoding metavariables with double braces {{ x }}
. Hastache supports automatically converting many Haskell types into strings and uses the efficient Text functions for formatting.
The variables loaded into the template are specified either as a function mapping variable names to printable MuType values, for instance:
Or, using a record deriving Data and Typeable with mkGenericContext
, the Haskell field names are converted into variable names.
The MuType and MuContext types can be parameterized by any monad or transformer that implements MonadIO
, not just IO.
Warp
Warp is an efficient, massively concurrent web server; it is the backend server behind several popular Haskell web frameworks. The internals have been finely tuned to utilize Haskell’s concurrent runtime and it is capable of handling a great number of concurrent requests. For example, we can construct a simple web service which simply returns a 200 status code with a ByteString which is flushed to the socket.
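A minimal sketch of such a service (port 8000 is an arbitrary choice):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Network.HTTP.Types (status200)
import Network.Wai (Application, responseLBS)
import Network.Wai.Handler.Warp (run)

-- Respond to every request with a 200 and a plain-text body.
app :: Application
app _request respond =
  respond (responseLBS status200 [("Content-Type", "text/plain")] "Hello")

main :: IO ()
main = run 8000 app
```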
See: Warp
Scotty
Continuing with our trek through web libraries, Scotty is a web microframework similar in principle to Flask in Python or Sinatra in Ruby.
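A hello-world sketch (the route and port are illustrative) which renders a Blaze HTML fragment as the response:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Text.Blaze.Html.Renderer.Text (renderHtml)
import qualified Text.Blaze.Html5 as H
import Web.Scotty

main :: IO ()
main = scotty 3000 $ do
  get "/" $
    -- Render a Blaze HTML fragment as the response body.
    html $ renderHtml $ H.h1 "Hello World!"
```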
It is important to note that the Blaze library used here overloads do-notation but is not itself a proper monad, so the various laws and invariants that normally apply to monads may break down or fail with error terms.
A collection of useful related resources can be found on the Scotty wiki: Scotty Tutorials & Examples
Servant
Servant is a modern Haskell web framework heavily based on type-level programming patterns. Servant’s novel invention is a type-safe way of specifying URL routes. This consists of two type-level infix combinators :>
and :<|>
which combine URL fragments into routes that are run by the web server. The two datatypes are defined as follows:
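Roughly (eliding some kind-level detail from the actual servant source), both combinators are datatypes used primarily at the type level:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE TypeOperators #-}

-- Path composition: "api" :> "hello" :> ...
data (path :: k) :> (a :: *)
infixr 4 :>

-- Alternation between routes, with a value-level pairing of handlers.
data a :<|> b = a :<|> b
infixr 3 :<|>
```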
For example the URL endpoint for a GET route that returns JSON.
GET /api/hello
"api" :> "hello" :> Get '[JSON] String
The HTTP methods are lifted to the type level as DataKinds from the following definition.
And the common type synonyms are given for successful requests:
For requests that receive a payload from the client a ReqBody
is attached to the route which contains the content type of the requested payload. This takes a type-level list of options and the Haskell value type to serialize into.
POST /api/hello 
"api" :> "hello" :> ReqBody '[JSON] MyData :> Post '[JSON] MyData 
The application itself is expressed simply as a function which takes a Request
containing the headers and payload and handles it by evaluating to a Response
inside of IO. The underlying server used in servant-server
is Warp.
Middleware is then simply a higher order function which takes an Application
to another Application
.
Handlers are defined in servant-server
and are IO computations with failures handled by ServerError
. The toplevel functions run
and serve
can be used to instantiate the application inside of a server.
For error handling the throwError
function can be used, with an error response code attached.
Minimal Example
The simplest end-to-end example is a router with a single endpoint mapping to a server handler which returns the String “Hello World” as an application/json
content type.
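A hedged sketch of such a router (port 8000 is an arbitrary choice):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeOperators #-}

import Network.Wai.Handler.Warp (run)
import Servant

-- GET /api/hello returning a JSON-encoded String
type API = "api" :> "hello" :> Get '[JSON] String

server :: Server API
server = pure "Hello World"

main :: IO ()
main = run 8000 (serve (Proxy :: Proxy API) server)
```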
Full Example
As a second case, we consider a larger application with a user interface that sends and receives data from the client to the REST API.
First we define a custom User
datatype and using generic deriving we can derive the serializer from URI form data automatically.
The URL routes are specified in an API type which maps the REST verbs to response handlers.
The handler is an inhabitant of the API
type and defines the value-level handlers corresponding to the routes at the type-level :<|>
terms.
The page rendering itself is mostly Blaze boilerplate that generates the markup programmatically using combinators. One could just as easily plug in any of the templating languages (Mustache, …) here instead.
The page will include the HTML body and a header containing the source files. In this case we’ll simply load the Bootstrap library from a CDN.
And then the POST handler for the single endpoint will simply deserialize the User datatype from the POST data and render it into a page with the fields extracted.
Putting it all together we can invoke run on a given port and serve the application. Point your browser at localhost:8000
to see it run.
From here you could add all manner of additional logic, like adding in the Selda object-relational mapper, adding in servant-auth
for authentication or using swagger2
for building Open API specifications.
Haskell has bindings for most major databases and persistence engines. Generally the libraries consist of two different layers. The raw bindings, which wrap the C library or wire protocol, are usually suffixed with -simple
. So for example postgresql-simple
is the Haskell library for interfacing with the C library libpq
. Higher level libraries depend on this library for the bindings and provide higher level interfaces for building queries, managing transactions, and connection pooling.
Postgres
Postgres is an objectrelational database management system with a rich extension of the SQL standard. Consider the following tables specified in DDL.
The postgresql-simple bindings provide a thin wrapper over various libpq commands to interact with a Postgres server. These functions all take a Connection
object to the database instance and allow various bytestring queries to be sent and result sets mapped into Haskell datatypes. There are four primary functions for these interactions:
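The four functions are presumably query, query_, execute and execute_ (the underscore variants take no interpolated parameters). A hedged sketch against the example schema (the database name and connection defaults are assumptions):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Database.PostgreSQL.Simple

main :: IO ()
main = do
  -- Connection details are placeholders for illustration.
  conn <- connect defaultConnectInfo { connectDatabase = "books" }

  -- query_ : a fixed query with no parameters
  books <- query_ conn "SELECT id, title, author_id FROM books"
  mapM_ print (books :: [(Int, String, Int)])

  -- query : a parameterised query; arguments are substituted for '?'
  titles <- query conn "SELECT title FROM books WHERE author_id = ?"
                       (Only (7805 :: Int))
  mapM_ print (titles :: [Only String])
```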
The result of the query
function is a list of elements which implement the FromRow typeclass. This can be many things including a single element (Only), a list of tuples where each element implements FromField
or a custom datatype that itself implements FromRow
. Under the hood the database bindings inspect the Postgres oid
objects and then attempt to convert them into the Haskell datatype of the field being scrutinised. This can fail at runtime if the types in the database don’t align with the expected types in the logic executing the SQL query.
Tuples
This yields the result set:
[ ( 7808 , "The Shining" , 4156 )
, ( 4513 , "Dune" , 1866 )
, ( 4267 , "2001: A Space Odyssey" , 2001 )
, ( 1608 , "The Cat in the Hat" , 1809 )
, ( 1590 , "Bartholomew and the Oobleck" , 1809 )
, ( 25908 , "Franklin in the Dark" , 15990 )
, ( 1501 , "Goodnight Moon" , 2031 )
, ( 190 , "Little Women" , 16 )
, ( 1234 , "The Velveteen Rabbit" , 25041 )
, ( 2038 , "Dynamic Anatomy" , 1644 )
, ( 156 , "The Tell-Tale Heart" , 115 )
, ( 41473 , "Programming Python" , 7805 )
, ( 41477 , "Learning Python" , 7805 )
, ( 41478 , "Perl Cookbook" , 7806 )
, ( 41472 , "Practical PostgreSQL" , 1212 )
]
Custom Types
This yields the result set:
[ Book { id_ = 7808 , title = "The Shining" , author_id = 4156 }
, Book { id_ = 4513 , title = "Dune" , author_id = 1866 }
, Book { id_ = 4267 , title = "2001: A Space Odyssey" , author_id = 2001 }
, Book { id_ = 1608 , title = "The Cat in the Hat" , author_id = 1809 }
]
Quasiquoter
As SQL expressions grow in complexity they often span multiple lines and sometimes it’s useful to just drop down to a quasiquoter to embed the whole query. The quoter here is pure, and just generates the Query
object behind the scenes as a ByteString.
This yields the result set:
[ Book
{ id_ = 41472
, title = "Practical PostgreSQL"
, first_name = "John"
, last_name = "Worsley"
}
, Book
{ id_ = 25908
, title = "Franklin in the Dark"
, first_name = "Paulette"
, last_name = "Bourgeois"
}
, Book
{ id_ = 1234
, title = "The Velveteen Rabbit"
, first_name = "Margery Williams"
, last_name = "Bianco"
}
, Book
{ id_ = 190
, title = "Little Women"
, first_name = "Louisa May"
, last_name = "Alcott"
}
]
Sqlite
The sqlite-simple
library provides a binding to libsqlite3
which can interact with and query SQLite databases. It provides precisely the same interface as its Postgres namesake.
All datatypes can be serialised to and from result sets by defining FromRow
and ToRow
instances which map your custom datatypes to a RowParser that converts result sets, or a serialiser that maps custom fields to one of the following primitive SQLite types:
SQLInteger
SQLFloat
SQLText
SQLBlob
SQLNull
For examples of serialising datatypes, see the previous Postgres section, as it has an identical interface.
Redis
Redis is an in-memory key-value store with support for a variety of datastructures. The Haskell bindings are exposed through a Redis
monad which sequences a set of redis commands taking ByteString arguments and then executes them against a connection object.
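A sketch using the hedis package (connection defaults and keys here are illustrative):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Monad.IO.Class (liftIO)
import Database.Redis

main :: IO ()
main = do
  conn <- checkedConnect defaultConnectInfo
  runRedis conn $ do
    -- Commands take ByteString arguments and run in the Redis monad.
    _ <- set "hello" "haskell"
    reply <- get "hello"
    liftIO (print reply)
```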
Redis is quite often used as a lightweight pub/sub server, and the bindings integrate with the Haskell concurrency primitives so that listeners can be forked across threads without blocking the main thread.
Acid State
Acid-state allows us to build a “database” around our existing Haskell datatypes that guarantees atomic transactions. For example, we can build a simple key-value store wrapped around the Map type.
Selda
Selda is an object-relational mapper and database abstraction which provides a higher level interface for creating database schemas for multiple database backends, as well as a type-safe query interface which makes use of advanced type system features to ensure the integrity of queries.
Selda is unique in that it uses the OverloadedLabels
extension to refer to database fields that map directly to fields of records. By deriving Generic
and instantiating SqlRow
via DeriveAnyClass
we can create database schemas automatically with generic deriving.
The tables themselves can be named, annotated with metadata about constraints and foreign keys and assigned to a Haskell value.
This table can then be generated and populated.
This will generate the following SQLite DDL to instantiate the tables directly from the types of the Haskell data structures.
Selda also provides an embedded query language for specifying type-safe queries, which allows you to use overloaded labels to work with these values directly as SQL selectors.
An example SELECT
SQL query:
Compiler Design
The flow of code through GHC is a process of translation between several intermediate languages and optimizations and transformations thereof. A common pattern for many of these AST types is that they are parametrized over a binder type, and at various stages the binders will be transformed. For example, the Renamer pass effectively translates the HsSyn
datatype from an AST parametrized over the literal strings the user enters into a HsSyn
parameterized over qualified names that include module and package names, a higher level Name type.
GHC Compiler Passes
 Parser/Frontend: An enormous AST translated from human syntax that makes explicit all possible expressible syntax (declarations, do-notation, where clauses, syntax extensions, Template Haskell, …). This is unfiltered Haskell and it is enormous.
 Renamer takes syntax from the frontend and transforms all names to be qualified (base:Prelude.map
instead of map
) and transforms any shadowed names in lambda binders into unique names.
 Typechecker is a large pass that serves two purposes: first, it is the core bidirectional type inference engine where most of the work happens; second, it handles the translation between the frontend syntax and Core
syntax.
 Desugarer translates several higher level syntactic constructs:
 where
statements are turned into (possibly recursive) nested let
statements.
 Nested pattern matches are expanded out into splitting trees of case statements.
 do-notation is expanded into explicit bind statements.
 Lots of others.
 Simplifier transforms many Core constructs into forms that are more adaptable to compilation. For example let statements will be floated or raised, pattern matches will be simplified, inner loops will be pulled out and transformed into more optimal forms. Non-intuitively, the resulting code may actually be much more complex (for humans) after going through the simplifier!
 Stg pass translates the resulting Core into STG (Spineless Tagless G-Machine) which effectively makes all laziness explicit and encodes the thunks and update frames that will be handled during evaluation.
 Codegen/Cmm pass will then translate STG into Cmm, a simple imperative language that manifests the low-level implementation details of runtime types. The runtime closure types and stack frames are made explicit and low-level information about the data and code (arity, updatability, free variables, pointer layout) is made manifest in the info tables present on most constructs.
 Native Code The final pass will then translate the resulting code into either assembly via GHC’s home built native code generator (NCG) or LLVM IR via the LLVM backend.
Information for each pass can be dumped out via a rather large collection of flags. The GHC internals are very accessible although some passes are somewhat easier to understand than others. Most of the time -ddump-simpl
and -ddump-stg
are sufficient to get an understanding of how the code will compile, unless of course you’re dealing with very specialized optimizations or hacking on GHC itself.
-ddump-parsed
Frontend AST.
-ddump-rn
Output of the renamer pass.
-ddump-tc
Output of the typechecker.
-ddump-splices
Output of Template Haskell splices.
-ddump-types
Typed AST representation.
-ddump-deriv
Output of deriving instances.
-ddump-ds
Output of the desugar pass.
-ddump-spec
Output of specialisation pass.
-ddump-rules
Output of applying rewrite rules.
-ddump-vect
Output results of vectorize pass.
-ddump-simpl
Output of the SimplCore pass.
-ddump-inlinings
Output of the inliner.
-ddump-cse
Output of the common subexpression elimination pass.
-ddump-prep
The CorePrep pass.
-ddump-stg
The resulting STG.
-ddump-cmm
The resulting Cmm.
-ddump-opt-cmm
The resulting Cmm optimization pass.
-ddump-asm
The final assembly generated.
-ddump-llvm
The final LLVM IR generated.
GHC API
GHC can be used as a library to manipulate and transform Haskell source code into executable code. It consists of many functions, the primary drivers in the pipeline are outlined below.
The output of these functions consists of four main data structures:
 ParsedModule
 TypecheckedModule
 DesugaredModule
 CoreModule
GHC itself can be used as a library just as any other library. The example below compiles a simple source module “B” that contains no code.
DynFlags
The internal compiler state of GHC is largely driven from a set of many configuration flags known as DynFlags. These flags are largely divided into four categories:
 Dump Flags
 Warning Flags
 Extension Flags
 General Flags
These flags are set via the following modifier functions:
See:
Package Databases
A package is a library of Haskell modules known to the compiler. Compilation of a Haskell module through Cabal uses a directory structure known as a package database. This directory is named package.conf.d
, and contains a file for each package used for compiling a module, combined with a binary cache of each package’s cabal data in package.cache
.
When Cabal operates it stores the active package database in the environment variable: GHC_PACKAGE_PATH
To see which packages are currently available, use the ghc-pkg list command:
The package database can be queried for specific metadata of the cabal files associated with each package. For example, to query the version of the base library currently used for compilation we can use the ghc-pkg
command:
HIE Bios
A session is fully specified by the set of GHC DynFlags needed to compile a module. Typically when the compiler is invoked by Cabal these are all generated at compile time. These flags contain the entire transitive dependency graph of the module, the language extensions and the file system locations of all paths. Given the bifurcation of many of these tools, setting up the GHC environment from inside of libraries has been nontrivial in the past. hie-bios is a new library which can read package metadata from Cabal and Stack files and dynamically set up the appropriate session for a project.
hie-bios will read a cradle file (hie.yaml
) in the root of the workspace which describes how to set up the environment. For example, when using Stack this file would contain:
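For instance (any per-component configuration would be project specific):

```yaml
cradle:
  stack:
```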
While using Cabal the file would contain:
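For instance (the component name "lib:example" is a placeholder for your own target):

```yaml
cradle:
  cabal:
    component: "lib:example"
```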
This is particularly useful for projects that require access to the internal compiler artifacts or do static analysis on top of Haskell code. An example of setting a compiler session from a cradle is shown below:
Abstract Syntax Tree
GHC uses several syntax trees during its compilation. These are defined in the following modules:
HsExpr: Syntax tree for the frontend of the GHC compiler.
StgSyn: Syntax tree of the STG intermediate representation.
Cmm: Syntax tree for the Cmm intermediate representation.
GHC’s frontend source tree is grouped into datatypes for the following language constructs and uses the naming convention:
Binds: Declarations of functions. For example the body of a class declaration or class instance.
Decl: Declarations of datatypes, types, newtypes, etc.
Expr: Expressions. For example, let statements, lambdas, if-blocks, do-blocks, etc.
Lit: Literals. For example, integers, characters, strings, etc.
Module: Modules including import declarations, exports and pragmas.
Name: Names that occur in other constructs, such as module names, constructors and variables.
Pat: Patterns that occur in case statements and binders.
Type: Type syntax that occurs in toplevel signatures and explicit annotations.
Generally all AST types in the frontend of the compiler are annotated with position information that is kept around to give better error reporting about the provenance of the problematic piece of the syntax tree. This is done through a datatype GenLocated
which attaches the position information l
to an element e
.
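A simplified, standalone model of this datatype (the real definitions live in GHC's sources; this miniature is for illustration only):

```haskell
-- A toy version of GHC's GenLocated: a value `e` paired with a location `l`.
data GenLocated l e = L l e
  deriving (Eq, Show)

-- GHC specialises the location parameter to source spans:
data SrcSpan = SrcSpan
  { spanLine :: Int
  , spanCol  :: Int
  } deriving (Eq, Show)

type Located e = GenLocated SrcSpan e

-- Strip the location annotation, as GHC's unLoc does.
unLoc :: GenLocated l e -> e
unLoc (L _ e) = e

main :: IO ()
main = print (unLoc (L (SrcSpan 1 1) "x"))
```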
For example, the type of located source expressions is defined by the type:
The HsSyn
AST is reused across multiple compiler passes.
Individual elements of the syntax are defined by type families which take a single parameter for the pass.
The type of HsExpr
used in the parser pass can then be defined simply as LHsExpr GhcPs
and from the typechecker pass LHsExpr GhcTc
.
Names
GHC has an interesting zoo of names it uses internally for identifiers in the syntax tree. There are more than the following but these are the primary ones you will see most often:
RdrName: Names that come directly from the parser without metadata.
OccName: Names with metadata about the namespace the variable is in.
Name: A unique name introduced during the renamer pass with metadata about its provenance.
Var: A typed variable name with metadata about its use sites.
Id: A term-level identifier. Type synonym for Var.
TyVar: A type-level identifier. Type synonym for Var.
TcTyVar: A type variable used in the typechecker. Type synonym for Var.
See: Trees That Grow
Parser
The GHC parser is itself written in Happy. It defines its Parser monad with the following definition, which emits a sequence of Located
tokens with the lexemes’ position information. The parser is embedded inside the P
monad.
Since there are many flavours of Haskell syntax enabled by language syntax extensions, the parser monad itself is passed a specific set of DynFlags
which specify the language specific Haskell syntax to parse. An example parser invocation would look like:
The parser
argument above can be one of the following Happy entry point functions which parse different fragments of the Haskell grammar.
parseModule
parseSignature
parseStatement
parseDeclaration
parseExpression
parseTypeSignature
parseStmt
parseIdentifier
parseType
See:
Outputable
GHC internally uses a pretty printer class for rendering its core structures out to text. This is based on the Wadler-Leijen style and uses an Outputable
class as its interface:
The primary renderer for SDoc types is showSDoc
which takes as an argument a set of DynFlags which determine how the structures are printed.
We can also cheat and use an unsafe show which uses a dummy set of DynFlags.
See:
Datatypes
GHC has many datatypes, but several of them are central data structures that are manipulated during compilation. These are divided into seven core categories.
Monads
The GHC monads which encapsulate the compiler driver pipeline and statefully hold the interactions between the user and the internal compiler phases.
GHC: The toplevel GHC monad that contains the compiler driver.
P: The parser monad.
Hsc: The compiler monad for a single module.
TcRn: The monad holding state for the typechecker and renamer passes.
DsM: The monad holding state for the desugaring pass.
SimplM: The monad holding state of the simplification pass.
MonadUnique: A monad for generating unique identifiers.
Names
ModuleName: A qualified module name.
Name: A unique name generated after the renaming pass with provenance information for the symbol.
Var: A typed Name.
Type: The representation of a type in the GHC type system.
RdrName: A name generated from the parser without scoping or type information.
Token: Alex lexer tokens.
SrcLoc: The position information of a lexeme within the source code.
SrcSpan: The span information of a lexeme within the source code.
Located: Source code location newtype wrapper for AST containing position and span information.
Session
DynFlags: A mutable state holding all compiler flags and options for compiling a project.
HscEnv: An immutable monad state holding the flags and session for compiling a single module.
Settings: Immutable datatype holding system settings, architecture and paths for compilation.
Target: A compilation target.
TargetId: Name of a compilation target, either module or file.
HscTarget: Target code output. Either LLVM, ASM or interpreted.
GhcMode: Operation mode of GHC, either multi-module compilation or single shot.
ModSummary: An element in a project’s module graph containing file information and graph location.
InteractiveContext: Context for the GHCi interactive shell when using the interpreter target.
TypeEnv: A symbol table mapping from Names to TyThings.
GlobalRdrEnv: A symbol table mapping RdrName to GlobalRdrElt.
GlobalRdrElt: A symbol emitted by the parser with provenance about where it was defined and brought into scope.
TcGblEnv: A symbol table generated after a module has completed typechecking.
FixityEnv: A symbol table mapping infix operators to fixity declarations.
Module: A module name and identifier.
ModGuts: The total state of all passes accumulated by compiling a module. After compilation ModIface and ModDetails are kept.
ModuleInfo: Container for information about a Module.
ModDetails: Data structure summarising all metadata about a compiled module.
AvailInfo: Symbol table of what objects are in scope.
Class: Data structure holding all metadata about a typeclass definition.
ClsInst: Data structure holding all metadata about a typeclass instance.
FamInst: Data structure holding all metadata about a type/data family instance declaration.
TyCon: Data structure holding all metadata about a type constructor.
DataCon: Data structure holding all metadata about a data constructor.
InstEnv: A mapping of known instances for a family.
TyThing: A global name with a type attached. Classified by namespace.
DataConRep: Data constructor representation generated from the parser.
GhcException: Exceptions thrown by GHC inside the Hsc monad for aberrant compiler behavior. Panics or internal errors.
HsSyn
HsModule: Haskell source module containing all toplevel definitions, pragmas and imports.
HsBind: Universal type for any Haskell binding mapping names to scope.
HsDecl: Toplevel declaration in a module.
HsGroup: A classifier type of toplevel declarations.
HsExpr: An expression used in a declaration.
HsLit: A literal expression (number, character, string, etc.) used in a declaration.
Pat: A pattern match occurring in a function declaration or on the left of a pattern binding.
HsType: Haskell source representation of a type-level expression.
Literal: Haskell source representation of a literal mapping to either a literal numeric type or a machine type.
CoreSyn
The core syntax is a very small set of constructors for the Core intermediate language. Most of the datatypes are contained in the Expr
datatype. All core expressions consist of toplevel Bind
bindings of expression objects.
Expr: Core expression.
Bind: Core binder, either recursive or non-recursive.
Arg: Expressions that occur in function arguments.
Alt: A pattern match case split alternative.
AltCon: A case alternative constructor.
StgSyn
Spineless Tagless G-Machine, or STG, is the intermediate representation GHC uses before generating native code. It is an even simpler language than Core and models a virtual machine which maps to the native compilation target.
StgTopBinding: A toplevel module STG binding.
StgBinding: An STG binding, either recursive or non-recursive.
StgExpr: An STG expression over Id names.
StgApp: Application of a function to a fixed set of arguments.
StgLit: An expression literal.
StgConApp: An application of a data constructor to a fixed set of values.
StgOpApp: An application of a primop to a fixed set of arguments.
StgLam: An STG lambda binding.
StgCase: An STG case expansion.
StgLet: An STG let binding.
Core
Core is the explicitly typed SystemF family syntax through which all Haskell constructs can be expressed.
To inspect the core from GHCi we can invoke it using the following flags and the following shell alias. We have explicitly disabled the printing of certain metadata and longform names to make the representation easier to read.
At the interactive prompt we can then explore the core representation interactively:
The ghc-core tool is also very useful for looking at GHC’s compilation artifacts.
Alternatively the major stages of the compiler (parse tree, core, stg, cmm, asm) can be manually dumped and inspected by passing several flags to the compiler:
Reading Core
Core from GHC is roughly human readable, but it’s helpful to look at simple human written examples to get the hang of what’s going on.
Machine generated names are created for many transformations of Core. Generally they consist of a prefix and a unique identifier. The prefix is often pass specific (e.g. ds
for desugar generated names) and sometimes specific names are generated for specific automatically generated code. A list of the common prefixes and their meaning is shown below.
$f... 
Dict-fun identifiers (from instance declarations)
$dmop 
Default method for ‘op’ 
$wf 
Worker for function ‘f’ 
$sf 
Specialised version of f 
$gdm 
Generated class method 
$d 
Dictionary names 
$s 
Specialized function name 
$f 
Foreign export 
$pnC 
n’th superclass selector for class C 
T:C 
Tycon for dictionary for class C 
D:C 
Data constructor for dictionary for class C 
NTCo:T 
Coercion for newtype T to its underlying runtime representation 
Of important note is that the Λ and λ for type-level and value-level lambda abstraction are represented by the same symbol (\) in Core, which is a simplifying detail of GHC’s implementation but a source of some confusion when starting.
The seq
function has an intuitive implementation in the Core language.
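A sketch of that Core (shown in GHC's Core notation, not compilable Haskell): the case expression forces the first argument to weak head normal form and then returns the second.

```haskell
-- Core sketch: `case` evaluates x to WHNF, the wildcard alternative returns y.
seq :: forall a b. a -> b -> b
seq = \ (@ a) (@ b) (x :: a) (y :: b) -> case x of _ -> y
```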
One particularly notable case of the Core desugaring process is that pattern matching on overloaded numbers implicitly translates into an equality test (i.e. Eq
).
Of course, adding a concrete type signature changes the desugaring to just match on the unboxed values.
See:
Inliner
Having to enter a secondary closure every time we used ($)
would introduce an enormous overhead. Fortunately GHC has a pass to eliminate small functions like this by simply replacing the function call with the body of its definition at appropriate callsites. The compiler contains a variety of heuristics for determining when this kind of substitution is appropriate and the potential costs involved.
In addition to the automatic inliner, manual pragmas are provided for more granular control over inlining. It’s important to note that naive inlining quite often results in significantly worse performance and longer compilation times.
For example, consider the contrived case where we apply a binary function to two arguments. The function body is small, and instead of entering another closure just to apply the given function, we could in fact just inline the function application at the call site.
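A minimal sketch of such a function (the names apply and test1 are invented for illustration; the INLINE pragma requests the expansion):

```haskell
-- A tiny wrapper whose body is cheaper to inline than to call.
{-# INLINE apply #-}
apply :: (Int -> Int -> Int) -> Int -> Int -> Int
apply f x y = f x y

test1 :: Int
test1 = apply (+) 10 20

main :: IO ()
main = print test1  -- prints 30
```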
Looking at the core, we can see that in test1
the function has indeed been expanded at the call site and simply performs the addition there instead of another indirection.
Cases marked with NOINLINE
generally indicate that the logic in the function is using something like unsafePerformIO
or some other unholy function. In these cases naive inlining might duplicate effects at multiple callsites throughout the program which would be undesirable.
See:
Primops
GHC has many primitive operations that are intrinsics built into the compiler. You can manually invoke these functions inside of optimised code, which allows you to drop down to the same level of performance you can achieve in C or by handwriting inline assembly. These intrinsics operate over unboxed machine types.
Depending on the choice of code generator and CPU architecture these operations will map to single CPU instructions.
See ghc-prim
SIMD Intrinsics
GHC has procedures for generating code that uses SIMD vector instructions when using the LLVM backend (-fllvm
). For example the LLVM vector types <8 x float>
and <8 x double>
are used internally by the following datatypes exposed by ghc-prim
.
FloatX8#
DoubleX8#
And operations over these map to single CPU instructions that work with the bulk values instead of single values. For instance, adding two vectors:
For example:
{-# LANGUAGE BangPatterns #-}
{-# LANGUAGE MagicHash #-}
{-# LANGUAGE UnboxedTuples #-}
{-# OPTIONS_GHC -mavx #-}
{-# OPTIONS_GHC -msse #-}
{-# OPTIONS_GHC -msse2 #-}
{-# OPTIONS_GHC -msse4 #-}

import GHC.Exts
import GHC.Prim

data ByteArray = BA (MutableByteArray# RealWorld)

data FloatX4 = FX4# FloatX4#

instance Show FloatX4 where
  show (FX4# f) = case unpackFloatX4# f of
    (# a, b, c, d #) -> show (F# a, F# b, F# c, F# d)

main :: IO ()
main = do
  let a = packFloatX4# (# 4.5#, 7.8#, 2.3#, 6.5# #)
  let b = packFloatX4# (# 8.2#, 6.3#, 4.7#, 9.2# #)
  let c = FX4# (broadcastFloatX4# 1.5#)
  print (FX4# a)
  print (FX4# (plusFloatX4# a b))
  print c
When you compile this code with the LLVM backend you will see that GHC is indeed allocating the values as vector types if you browse the assembly output.
Using the native SIMD instructions you can perform lowlevel vectorised operations over the unboxed memory, typically found in numerical computing problems.
See: SIMD Operations
Rewrite Rules
Consider the composition of two fmaps. This operation maps a function g
over a list xs
and then maps a function f
over the resulting list. This results in two full traversals of a list of length n.
This is equivalent to the following more efficient form which applies the composition of f and g over the list elementwise resulting in a single iteration of the list instead. For large lists this will be vastly more efficient.
GHC is a clever compiler and allows us to write custom rules to transform the AST of our programs at compile time in order to do these kind of optimisations. These are called fusion rules and many highperformance libraries make use of them to generate more optimal code.
By adding a RULES
pragma to a module where map
is defined, we can tell GHC to rewrite all cases of a double map into their more optimal form across all modules that use this definition. Rules are applied during the optimiser pass of GHC compilation.
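A self-contained sketch of such a rule (using a local myMap rather than the Prelude's map, so the RULES and NOINLINE pragmas are visible in one place):

```haskell
-- Rewrite composed maps into a single traversal.
{-# RULES
"myMap/myMap" forall f g xs. myMap f (myMap g xs) = myMap (\x -> f (g x)) xs
#-}

myMap :: (a -> b) -> [a] -> [b]
myMap f = foldr (\x acc -> f x : acc) []
{-# NOINLINE myMap #-}  -- keep the definition intact so the rule can fire

main :: IO ()
main = print (myMap (+ 1) (myMap (* 2) [1, 2, 3 :: Int]))  -- prints [3,5,7]
```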
It is important to note that these rewrite rules must be syntactically valid Haskell, but GHC makes no guarantees that they are semantically valid. One could very easily introduce a rewrite rule that introduces subtle bugs by redefining functions nonsensically, and GHC will happily rewrite away. Be careful when doing these kinds of optimisations.
Boot Libraries
GHC itself ships with a variety of libraries that are necessary to bootstrap the compiler and compile itself.
- array: Mutable and immutable array data structures.
- base: The base library. See Base.
- binary: Binary serialisation to ByteStrings.
- bytestring: Unboxed arrays of bytes.
- Cabal: The Cabal build system.
- containers: The default data structures.
- deepseq: Deeply evaluate nested data structures.
- directory: Directory and file traversal.
- dist-haddock: Haddock build utilities.
- filepath: File path manipulation.
- ghc-boot: Shared datatypes for GHC package databases.
- ghc-boot-th: Shared datatypes for GHC and Template Haskell iserv.
- ghc-compact: GHC support for compact memory regions.
- ghc-heap: C library for Haskell GC types.
- ghci: GHCi interactive shell.
- ghc-prim: GHC builtin primitive operations.
- haskeline: Readline library.
- hpc: Code coverage reporting.
- integer-gmp: GMP integer datatypes for GHC.
- libiserv: External interpreter for Template Haskell.
- mtl: Monad transformers library.
- parsec: Parser combinators.
- pretty: Pretty printer.
- process: Operating system process utilities.
- stm: Software transactional memory.
- template-haskell: Metaprogramming for GHC.
- terminfo: System terminal information.
- text: Unboxed arrays of Unicode characters.
- time: System time.
- transformers: Monad transformers library.
- unix: Interactions with the Unix operating system.
- xhtml: HTML generation utilities.
Dictionaries
The Haskell language defines the notion of Typeclasses but is agnostic to how they are implemented in a Haskell compiler. GHC's particular implementation uses a pass called the dictionary-passing translation, part of the elaboration phase of the typechecker, which translates Core functions with typeclass constraints into functions taking implicit parameters through which record-like structures containing the function implementations are passed.
A typeclass can be thought of as the implementation equivalent of a parameterized record of functions.
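As a sketch, using a cut-down hypothetical MyNum class rather than the real Num, the elaboration looks roughly like this:

```haskell
-- A cut-down typeclass (hypothetical, for illustration):
class MyNum a where
  myAdd :: a -> a -> a
  myNeg :: a -> a

-- ...is elaborated into a "dictionary": a record of functions
-- parameterized over the instance type.
data DNum a = DNum
  { dAdd :: a -> a -> a
  , dNeg :: a -> a
  }

-- The Int instance becomes a concrete dictionary value.
dNumInt :: DNum Int
dNumInt = DNum (+) negate

-- A constrained function f :: MyNum a => a -> a becomes a
-- function taking the dictionary as an explicit argument.
f :: DNum a -> a -> a
f d x = dAdd d x (dNeg d x)
```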
Num and Ord have simple translations, but for monads with existential type variables in their signatures the only way to represent the equivalent dictionary is using RankNTypes. In addition, a typeclass may also include superclasses, which would be included in the typeclass dictionary and parameterized over the same arguments; an implicit superclass constructor function is created to pull the superclass functions out of the dictionary for the current monad.
Indeed this is not far from how GHC actually implements typeclasses. It elaborates into projection functions and data constructors nearly identical to this, expanding out a dictionary argument for each typeclass constraint of every polymorphic function.
Specialization
Overloading in Haskell is normally not entirely free, although with an optimization called specialization it can be made to have zero cost at specific points in the code where performance is crucial. This is not enabled by default by virtue of the fact that GHC is not a whole-program optimizing compiler and most optimizations (though not all) stop at module boundaries.
GHC's method of implementing typeclasses means that dictionaries are threaded around implicitly throughout the call sites. This is normally the most natural way to implement this functionality since it preserves separate compilation: a function can be compiled independently of where it is declared, rather than recompiled at every point in the program where it is called. Dictionary passing allows the caller to thread the implementation logic for the types to the call site, where it can then be used throughout the body of the function.
Of course this means that in order to get at a specific typeclass function we need to project (possibly multiple times) into the dictionary structure to pluck out the function reference. The runtime makes this very cheap but not entirely free.
Many C++ compilers and whole-program optimizing compilers do the opposite: they explicitly specialize each and every function at the call site, replacing the overloaded function with its type-specific implementation. We can selectively enable this kind of behavior using class specialization.
Non-specialized
In the specialized version the typeclass operations are placed directly at the call site and are simply unboxed arithmetic. This will map to a tight set of sequential CPU instructions and is very likely the same code generated by C. The non-specialized version has to project into the typeclass dictionary ($fFloatingFloat) 6 times and likely goes through around 25 branches to perform the same operation.
For a tight loop over numeric types, specializing at the call site can result in an orders-of-magnitude performance increase, although the cost in compile time can be nontrivial, and when used at many function call sites this can slow GHC's simplifier pass to a crawl. The best advice is to profile, look for heavy uses of dictionary projection in tight loops, and specialize and inline in those places.
Using the SPECIALISE INLINE pragma can unintentionally cause GHC to diverge if applied to a recursive function: it will try to specialize itself infinitely.
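A sketch of the plain pragma in use (norm is a hypothetical example function):

```haskell
module Main where

-- A generically overloaded function. Without specialization every
-- call threads a Num dictionary; the pragma asks GHC to also emit
-- a dictionary-free copy for Double.
norm :: Num a => a -> a -> a
norm x y = x * x + y * y
{-# SPECIALIZE norm :: Double -> Double -> Double #-}

main :: IO ()
main = print (norm (3 :: Double) 4)
```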
Static Compilation
On Linux, Haskell programs can be compiled into a standalone statically linked binary that includes the runtime statically linked into it.
In addition the file size of the resulting binary can be reduced by stripping unneeded symbols.
upx can additionally be used to compress the size of the executable down further.
Unboxed Types
The usual numeric types in Haskell can be considered to be regular algebraic datatypes with special constructor arguments for their underlying unboxed values. Normally unboxed types and explicit unboxing are not used in everyday code; they are wired in to the compiler.
| Literal | Type |
| --- | --- |
| 3# | GHC.Prim.Int# |
| 3## | GHC.Prim.Word# |
| 3.14# | GHC.Prim.Float# |
| 3.14## | GHC.Prim.Double# |
| 'c'# | GHC.Prim.Char# |
| "Haskell"# | GHC.Prim.Addr# |
An unboxed type has kind # and will never unify with a type variable of kind *. Intuitively, a type with kind * indicates a type with a uniform runtime representation that can be used polymorphically.
- Lifted: Can contain a bottom term; represented by a pointer. (Int, Any, (,))
- Unlifted: Cannot contain a bottom term; represented by a value on the stack. (Int#, (#, #))
The function for integer arithmetic used in the Num typeclass for Int simply pattern matches on this type to reveal the underlying unboxed value, performs the builtin arithmetic, and then packs the result back up into an Int.
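Concretely, the addition looks roughly like this (plusInt is the name GHC uses internally in its base library; reproduced approximately):

```haskell
{-# LANGUAGE MagicHash #-}
import GHC.Exts (Int (I#), (+#))

-- Unpack the two boxed Ints, add the raw machine integers,
-- and box the result back up.
plusInt :: Int -> Int -> Int
plusInt (I# x) (I# y) = I# (x +# y)
```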
Where (+#) is a low-level function built into GHC that maps to the intrinsic integer addition instruction for the CPU.
Runtime values in Haskell are by default represented uniformly by a boxed StgClosure* struct which itself contains several payload values, which can themselves be either pointers to other boxed values or unboxed literal values that fit within the system word size and are stored directly within the closure in memory. The layout of the box is described by a bitmap in the closure's header which records which values in the payload are pointers and which are non-pointers.
The unpackClosure# primop can be used to extract this information at runtime by reading off the bitmap on the closure.
For example, a datatype with the UNPACK pragma contains 1 non-pointer and 0 pointers, while the default packed datatype contains 1 pointer and 0 non-pointers.
The closure representations for data constructors are also "tagged" at runtime with the tag of the specific constructor. This is not a runtime type tag, however, since there is no way to recover the type from the tag: all constructors simply use the sequence (0, 1, 2, ...). The tag is used to discriminate cases in pattern matching. The builtin dataToTag# can be used to pluck off the tag of an arbitrary datatype. This is used in some cases when desugaring pattern matches.
For example:
-- data Bool = False | True
-- False ~ 0
-- True  ~ 1

a :: (Int, Int)
a = (I# (dataToTag# False), I# (dataToTag# True))
-- (0, 1)

-- data Ordering = LT | EQ | GT
-- LT ~ 0
-- EQ ~ 1
-- GT ~ 2

b :: (Int, Int, Int)
b = (I# (dataToTag# LT), I# (dataToTag# EQ), I# (dataToTag# GT))
-- (0, 1, 2)

-- data Either a b = Left a | Right b
-- Left ~ 0
-- Right ~ 1

c :: (Int, Int)
c = (I# (dataToTag# (Left 0)), I# (dataToTag# (Right 1)))
-- (0, 1)
String literals included in the source code are also translated into several primop operations. The Addr# type in Haskell stands for a static contiguous buffer pre-allocated on the Haskell heap that can hold a char* sequence. The operation unpackCString# can scan this buffer and fold it up into a list of Chars from inside Haskell.
This is done in the early frontend desugarer phase, where literals are translated into Addr# inline instead of a giant chain of Cons'd characters. So our "Hello World" translates into the following Core:
See:
IO/ST
Both the IO and the ST monad have special state in the GHC runtime and share a very similar implementation. Both ST a and IO a pass around an unboxed tuple of the form:
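That is, roughly (as defined in GHC's base library, modulo details that vary between GHC versions):

```haskell
newtype IO a   = IO (State# RealWorld -> (# State# RealWorld, a #))
newtype ST s a = ST (State# s         -> (# State# s, a #))
```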
The RealWorld# token is "deeply magical" and doesn't actually expand into any code when compiled; it is simply threaded through every bind of the IO or ST monad, and its properties of being unique and impossible to duplicate ensure that sequential IO actions are actually sequential. unsafePerformIO can be thought of as the unique operation which discards the world token and plucks the a out, and, as the name implies, is not normally safe.
The PrimMonad typeclass abstracts over both these monads with an associated type family for the world token or ST thread, and can be used to write operations that are generic over both ST and IO. This is used extensively inside the vector package to allow vector algorithms to be written generically in either IO or ST.
ghc-heap-view
Through some dark runtime magic we can actually inspect the StgClosure structures at runtime, using various C and Cmm hacks to probe at the fields of the structure's runtime representation. The library ghc-heap-view can be used to introspect such things. Although there is really no use for this in everyday code, it is very helpful when studying the GHC internals to be able to inspect the runtime implementation details and get at the raw bits underlying all Haskell types.
A constructor (in this case the cons constructor of the list type) is represented by a CONSTR closure that holds two pointers, to the head and the tail. The integer in the head argument is a static reference to a pre-allocated number, and we see a single static reference in the SRT (static reference table).
We can also observe the evaluation and update of a thunk in process (id (1+1)). The initial thunk is simply a thunk type with a pointer to the code to evaluate it to a value.
When forced it is then evaluated and replaced with an Indirection closure which points at the computed value.
When the copying garbage collector passes over the indirection, it simply replaces the indirection with a reference to the actual value computed by indirectee, so that future accesses do not need to chase a pointer through the indirection to get the result.
STG
After being compiled into Core, a program is translated into a very similar intermediate form known as STG (Spineless Tagless G-Machine), an abstract machine model that makes all laziness explicit. Spineless indicates that function applications in the language do not have a spine of applications of functions; they are collapsed into a sequence of arguments. Currying is still present in the semantics since arity information is stored, and partially applied functions will evaluate differently than saturated functions.
All let statements in STG bind a name to a lambda form. A lambda form with no arguments is a thunk, while a lambda form with arguments indicates that a closure is to be allocated that captures the variables explicitly mentioned.
Thunks themselves are either reentrant (r) or updatable (u), indicating whether the thunk yields a value to the stack or is allocated on the heap after the update frame is evaluated. All subsequent entries of an updatable thunk will yield the already-computed value without needing to redo the same work.
A lambda form also indicates the static reference table, a collection of references to statically heap-allocated values referred to by the body of the function.
For example, turning on -ddump-stg we can see the expansion of the following compose function.
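The compose function in question is presumably the standard one:

```haskell
compose :: (b -> c) -> (a -> b) -> a -> c
compose f g x = f (g x)
```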
For a more sophisticated example, let’s trace the compilation of the factorial function.
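An accumulator-style factorial of the sort discussed (an assumed definition, consistent with the description below):

```haskell
-- Tail-recursive factorial with an explicit accumulator.
fac :: Int -> Int -> Int
fac a 0 = a
fac a n = fac (n * a) (n - 1)
```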
Notice that the factorial function allocates two thunks (look for u) inside the loop, which are updated when computed. It also includes static references both to itself (for recursion) and to the dictionary for the instance of the Num typeclass over the type Int.
The type system of STG consists of the following types. The sizes of these types depend on the size of a void* pointer on the architecture.
- StgWord: An unsigned system integer type of word size
- StgPtr: Basic pointer type
- StgBool: Boolean int bit flag
- StgInt: Int#
- StgChar: Char#
- StgFloat: Float#
- StgDouble: Double#
- StgAddr: Addr# (void* pointer)
- StgStablePtr: StablePtr#
- StgOffset: Byte offset within a closure
- StgFunPtr: Pointer to a C function
- StgVolatilePtr: Pointer to a volatile word
Worker/Wrapper
With -O2 turned on, GHC will perform a special optimization known as the worker/wrapper transformation which splits the logic of the factorial function across two definitions: the worker operates over unboxed, stack-allocated machine integers and compiles into a tight inner loop, while the wrapper calls into the worker and packages the end result of the loop back up into a boxed heap value. This can often be an order of magnitude faster than the naive implementation, which needs to pack and unpack the boxed integers on every iteration.
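Sketched back in source notation, the split looks roughly like this (names are illustrative; GHC performs this transformation on Core, not on source):

```haskell
{-# LANGUAGE MagicHash #-}
import GHC.Exts (Int (I#), Int#, (*#), (-#))

-- Worker: a tight loop over raw machine integers.
wfac :: Int# -> Int# -> Int#
wfac a 0# = a
wfac a n  = wfac (n *# a) (n -# 1#)

-- Wrapper: unpacks the boxed arguments, calls the worker,
-- and boxes the final result.
fac :: Int -> Int -> Int
fac (I# a) (I# n) = I# (wfac a n)
```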
See:
Z-Encoding
The Z-encoding is Haskell's convention for generating names that can be safely represented in the compiler's target language. Simply put, the Z-encoding renames many symbolic characters into special sequences of the z character.
| Original | Z-encoded |
| --- | --- |
| foo | foo |
| z | zz |
| Z | ZZ |
| () | Z0T |
| (,) | Z2T |
| (,,) | Z3T |
| _ | zu |
| ( | ZL |
| ) | ZR |
| : | ZC |
| # | zh |
| . | zi |
| (#,#) | Z2H |
| (->) | ZLzmzgZR |
In this way we don't have to generate unique but unidentifiable names for character-rich names; we have a straightforward way to translate them into something unique yet still identifiable.
So for some example names from GHC generated code:
| Z-encoded | Decoded |
| --- | --- |
| ZCMain_main_closure | :Main_main_closure |
| base_GHCziBase_map_closure | base_GHC.Base_map_closure |
| base_GHCziInt_I32zh_con_info | base_GHC.Int_I32#_con_info |
| ghczmprim_GHCziTuple_Z3T_con_info | ghc-prim_GHC.Tuple_(,,)_con_info |
| ghczmprim_GHCziTypes_ZC_con_info | ghc-prim_GHC.Types_:_con_info |
Cmm
Cmm is GHC’s complex internal intermediate representation that maps directly onto the generated code for the compiler target. Cmm code generated from Haskell is CPSconverted, all functions never return a value, they simply call the next frame in the continuation stack. All evaluation of functions proceed by indirectly jumping to a code object with its arguments placed on the stack by the caller.
This is drastically different than C’s evaluation model, where are placed on the stack and a function yields a value to the stack after it returns.
There are several common suffixes you’ll see used in all closures and function names:
| Suffix | Description |
| --- | --- |
| 0 | No argument |
| p | Garbage collected pointer |
| n | Word-sized non-pointer |
| l | 64-bit non-pointer (long) |
| v | Void |
| f | Float |
| d | Double |
| v16 | 16-byte vector |
| v32 | 32-byte vector |
| v64 | 64-byte vector |
Cmm Registers
There are 10 registers described in the machine model. Sp points to the top of the stack and SpLim to the last element of the stack. Hp is the heap pointer, used for allocation and garbage collection, with HpLim the current heap limit. The R1 register always holds the active closure, and subsequent registers are arguments passed in registers. Calls with more arguments than available registers spill into memory.
Sp
SpLim
Hp
HpLim
HpAlloc
R1
R2
R3
R4
R5
R6
R7
R8
R9
R10
Examples
To understand Cmm it is useful to look at the code generated from equivalent Haskell and slowly work out how the mechanical translation maps one to the other. There are generally two parts to every Cmm definition: the info table and the entry code. The info table maps directly to the StgInfoTable struct and contains various fields related to the type of the closure, its payload, and references. The code objects are basic blocks of generated code that correspond to the logic of the Haskell function/constructor.
For the simplest example, consider a static constant constructor: simply a function which yields the Unit value. In this case the function is a constructor with no payload and is statically allocated. Let's consider a few examples to develop some intuition about the Cmm layout for simple Haskell programs.
Haskell:
Cmm:
Consider a static constructor with an argument.
Haskell:
Cmm:
Consider a literal constant. This is a static value.
Haskell:
Cmm:
Consider the identity function.
Haskell:
Cmm:
Consider the constant function.
Haskell:
Cmm:
Consider a function where application of a function ( of unknown arity ) occurs.
Haskell:
Cmm:
Consider a function which branches using pattern matching:
Haskell:
Cmm:
Macros
Cmm itself uses many macros to stand for various constructs, many of which are defined in an external C header file. A short reference for the common types:
| Macro | Type |
| --- | --- |
| C_ | char |
| D_ | double |
| F_ | float |
| W_ | word |
| P_ | garbage collected pointer |
| I_ | int |
| L_ | long |
| FN_ | function pointer (no arguments) |
| EF_ | extern function pointer |
| I8 | 8-bit integer |
| I16 | 16-bit integer |
| I32 | 32-bit integer |
| I64 | 64-bit integer |
Inside of Cmm logic there are several functions which are commonly invoked:

- Sp_adj: Adjusts the stack pointer
- GET_ENTRY: Accesses the entry code of a closure
- ENTER: Enters a closure, evaluating it
- jump: Transfers control to a code object
Many of the predefined closures (stg_ap_p_fast, etc.) are themselves mechanically generated and more or less share the same form (a giant switch statement on closure type, update frame, stack adjustment). Inside GHC is a file named GenApply.hs that generates most of these functions. For example, the output for stg_ap_p_fast:
stg_ap_p_fast
{   W_ info;
    W_ arity;
    if (GETTAG(R1)==1) {
        Sp_adj(0);
        jump %GET_ENTRY(R1-1) [R1,R2];
    }
    if (Sp - WDS(2) < SpLim) {
        Sp_adj(-2);
        W_[Sp+WDS(1)] = R2;
        Sp(0) = stg_ap_p_info;
        jump __stg_gc_enter_1 [R1];
    }
    R1 = UNTAG(R1);
    info = %GET_STD_INFO(R1);
    switch [INVALID_OBJECT .. N_CLOSURE_TYPES] (TO_W_(%INFO_TYPE(info))) {
        case FUN,
             FUN_1_0,
             FUN_0_1,
             FUN_2_0,
             FUN_1_1,
             FUN_0_2,
             FUN_STATIC: {
            arity = TO_W_(StgFunInfoExtra_arity(%GET_FUN_INFO(R1)));
            ASSERT(arity > 0);
            if (arity == 1) {
                Sp_adj(0);
                R1 = R1 + 1;
                jump %GET_ENTRY(UNTAG(R1)) [R1,R2];
            } else {
                Sp_adj(-2);
                W_[Sp+WDS(1)] = R2;
                if (arity < 8) {
                    R1 = R1 + arity;
                }
                BUILD_PAP(1,1,stg_ap_p_info,FUN);
            }
        }
        default: {
            Sp_adj(-2);
            W_[Sp+WDS(1)] = R2;
            jump RET_LBL(stg_ap_p) [];
        }
    }
}
Inline CMM
Handwritten Cmm can be included in a module manually by first compiling it through GHC into an object file and then using a special FFI invocation.
Optimisation
GHC uses a suite of assembly optimisations to generate more optimal code.
Tables Next to Code
GHC will place the info table for a toplevel closure directly next to the entry code for the object in memory, such that the fields of the info table can be accessed by pointer arithmetic on the function pointer to the code itself. Not performing this optimization would involve chasing through one more pointer to get to the info table. Given how often info tables are accessed, the tables-next-to-code optimization results in a sizable speedup.
Pointer Tagging
Depending on the type of the closure involved, GHC will utilize the last few bits of a pointer to the closure to store information that can be read off from the bits of the pointer itself, before jumping into or accessing the info tables. For thunks this can be information like whether the thunk has been evaluated to WHNF or not; for constructors it contains the constructor tag (if it fits), to avoid an info table lookup. Depending on the architecture, the tag bits are either the last 2 or 3 bits of a pointer.
These occur in Cmm most frequently via the following macro definitions:
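Roughly, the macros look like the following (reproduced approximately from GHC's ClosureMacros.h; TAG_BITS is 2 or 3 depending on word size):

```c
#define TAG_MASK   ((1 << TAG_BITS) - 1)
#define UNTAG(p)   ((p) & ~TAG_MASK)
#define GETTAG(p)  ((p) & TAG_MASK)
```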
So for instance, in many of the precompiled functions there will be a test for whether the active closure R1 is already evaluated.
Interface Files
During compilation GHC will produce interface files for each module. These are the binary encoding of the specific symbols (functions, typeclasses, etc.) exported by that module, as well as any package dependencies it has. An interface file is effectively the serialized form of the ModGuts structure used internally in the compiler. The internal structure of this file can be dumped using the --show-iface flag. The precise structure changes between versions of GHC.
Runtime System
The GHC runtime system is a massive part of the compiler. It comes in at around 70,000 lines of C and Cmm. There is simply no way to explain most of what occurs in the runtime succinctly; more than three decades of work has gone into making this system and it is quite advanced. Instead, let's look at the basic structure and some core modules.
The golden source of truth for all GHC internals is the GHC Wiki Commentary written by the compiler maintainers:
https://gitlab.haskell.org/ghc/ghc/wikis/commentary
Inside the GHC source tree the runtime system spans multiple modules. The bulk of the runtime logic is stored across the includes, utils and rts folders.
The toplevel runtime interface is exposed through six key header files found in the /includes folder.
The stg folder contains many of the macros used in the evaluation of STG, as well as the memory layout and the mappings from STG to machine types.
The storage folder contains the definitions that define the memory layout of closures, info tables, sparks, etc. as they are represented on the heap.
Inside the utils folder of the GHC source tree are several utilities that generate the Cmm modules that GHC is compiled against. These are boilerplate modules that define the Cmm macros in terms of the Haskell datatypes defined in the Stg definitions in the compiler.
- genprimop: Generates the builtin primop definitions.
- genapply: Generates the entry logic for manipulating the stack when entering functions of various arities.
- deriveConstants: Generates the header files containing constant values (pointer size, word sizes, etc.) of the target platform.
For genprimop, the primops are generated from a custom domain-specific language specified in primops.txt.pp, which defines the primops, their arity, their commutativity and associativity properties, and the machine types they operate over. An example for integer addition ((+#)) looks like:
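The entry is reproduced approximately below (the exact fields vary between GHC versions):

```
primop   IntAddOp    "+#"    Dyadic
   Int# -> Int# -> Int#
   with commutable = True
        fixity = infixl 6
```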
For genapply, this generates all the Cmm definitions in Apply.cmm for manipulating the stack when evaluating a closure. For example, when a function of arity 2 (ap) is applied to 2 pointer arguments (pp), we would jump to the stg_ap_stk_pp definition.
The conventions for these single letters are described by the following datatype in Main.hs of genapply:
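The datatype looks roughly like this (paraphrased from genapply's Main.hs):

```haskell
data ArgRep
  = P    -- GC pointer
  | N    -- word-sized non-pointer
  | L    -- 64-bit non-pointer (long)
  | V    -- void
  | F    -- float
  | D    -- double
  | V16  -- 16-byte vector
  | V32  -- 32-byte vector
  | V64  -- 64-byte vector
```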
The includes/rts folder itself contains all the public header files for all aspects of the runtime. Most of these are included in the Rts.h toplevel import.
The runtime system folder itself contains several modules which are written in Cmm.
The core library for the garbage collector used in the runtime is stored in the sm subfolder of rts and contains several implementations of the garbage collectors that Haskell programs can be compiled with.
The source for the whole runtime in rts contains 50 or so modules. The core units of logic are described briefly below.
The runtime system itself also has three different modes/ways of operation:

- Vanilla: Runtime without additional settings. Single threaded.
- Threaded: Runtime linked using the -threaded option.
- Profiling: Runtime linked using the -prof option.
The specific flags can be checked by passing +RTS --info to a compiled binary.
[("GHC RTS", "YES")
,("GHC version", "8.6.5")
,("RTS way", "rts_v")
,("Build platform", "x86_64-unknown-linux")
,("Build architecture", "x86_64")
,("Build OS", "linux")
,("Build vendor", "unknown")
,("Host platform", "x86_64-unknown-linux")
,("Host architecture", "x86_64")
,("Host OS", "linux")
,("Host vendor", "unknown")
,("Target platform", "x86_64-unknown-linux")
,("Target architecture", "x86_64")
,("Target OS", "linux")
,("Target vendor", "unknown")
,("Word size", "64")
,("Compiler unregisterised", "NO")
,("Tables next to code", "YES")
]
The state of the runtime can also be queried at runtime for statistics about the heap, garbage collector and wall time. The getRTSStats function produces two datatypes with all the queryable information, contained in RTSStats and GCDetails.
Criterion
Criterion is a statistically aware benchmarking tool. It exposes a library which allows us to benchmark individual functions over and over and test the distribution of timings for aberrant behavior and stability. These kinds of tests are quite common to include in libraries which need to ensure that the introduction of new logic doesn't result in performance regressions.
Criterion operates largely with the following four functions.
The whnf function evaluates a function applied to an argument a to weak head normal form, while nf evaluates a function applied to an argument a deeply to normal form. See Laziness.
The bench function samples a function over and over according to a configuration to develop a statistical distribution of its runtime.
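A minimal benchmark suite might look like this (assumes the criterion package; fib is a hypothetical function under test):

```haskell
import Criterion.Main

-- A deliberately slow function to benchmark.
fib :: Int -> Int
fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

main :: IO ()
main = defaultMain
  [ bench "fib/10" (whnf fib 10)
  , bench "fib/20" (whnf fib 20)
  ]
```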
These criterion reports can be generated out to either CSV or to an HTML file output with plots of the data.
To generate an HTML page containing the benchmark results plotted
EKG
EKG is a monitoring tool that can monitor various aspects of GHC's runtime alongside an active process. The output is viewable within a browser interface. The monitoring server is forked off (in a system thread) from the main process.
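Usage is typically a one-liner at program startup (assumes the ekg package; the dashboard then appears at localhost:8000):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import System.Remote.Monitoring (forkServer)
import Control.Concurrent (threadDelay)
import Control.Monad (forever)

main :: IO ()
main = do
  -- Fork the EKG monitoring server alongside the program.
  _ <- forkServer "localhost" 8000
  -- The program's real work would go here.
  forever (threadDelay 1000000)
```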
RTS Profiling
The GHC runtime system can be asked to dump information about allocations and percentage of wall time spent in various portions of the runtime system.
$ ./program +RTS -s
1,939,784 bytes allocated in the heap
11,160 bytes copied during GC
44,416 bytes maximum residency (2 sample(s))
21,120 bytes maximum slop
1 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 2 colls, 0 par 0.00s 0.00s 0.0000s 0.0000s
Gen 1 2 colls, 0 par 0.00s 0.00s 0.0002s 0.0003s
INIT time 0.00s ( 0.00s elapsed)
MUT time 0.00s ( 0.01s elapsed)
GC time 0.00s ( 0.00s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 0.01s ( 0.01s elapsed)
%GC time 5.0% (7.1% elapsed)
Alloc rate 398,112,898 bytes per MUT second
Productivity 91.4% of total user, 128.8% of total elapsed
Productivity indicates the amount of time spent during execution compared to the time spent garbage collecting. Well-tuned CPU-bound programs are often in the 90-99% productivity range.
In addition, individual function profiling information can be generated by compiling the program with the -prof flag. The resulting information is output to a .prof file with the same name as the module. This is useful for tracking down hotspots in the program.
Haskell is widely regarded as best in class for the construction of compilers, and there are many examples of programming languages that were bootstrapped on Haskell. Compiler development largely consists of a process of transforming one graph or abstract syntax tree representation of a program into simpler representations while preserving the semantics of the language. Many of these operations can be written quite concisely using Haskell's pattern matching machinery.
Haskell itself also has a rich academic tradition and an enormous number of academic papers will use Haskell as the implementation language used to describe a typechecker, parser or other novel compiler idea.
In addition the Hackage ecosystem has a wide variety of modules that many individuals have abstracted out of their own compilers into reusable components. These are broadly divided into several categories:
- Binder libraries: Libraries for manipulating lambda calculus terms and performing capture-avoiding substitution, alpha renaming and beta reduction.
- Name generation: Generation of fresh names for use in compiler passes which need to generate names that don't clash with each other.
- Code Generators: Libraries for emitting LLVM or other assembly representations at the end of the compiler.
- Source Generators: Libraries for emitting the textual syntax of another language, used for doing source-to-source translations.
- Graph Analysis: Libraries for doing control flow analysis.
- Pretty Printers: Libraries for turning abstract syntax trees into textual forms.
- Parser Generators: Libraries for generating parsers and lexers from higher-level syntax descriptions.
- Traversal Utilities: Libraries for writing traversal and rewrite systems across AST types.
- REPL Generators: Libraries for building command line interfaces for Read-Eval-Print loops.
Unbound
Several libraries exist to mechanize the process of writing name capture and substitution, since it is largely mechanical. Probably the most robust is the unbound library. For example, we can implement the infer function for a small Hindley-Milner system over a simply typed lambda calculus without having to write the name capture and substitution mechanics ourselves.
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE UndecidableInstances #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE OverloadedStrings #-}

module Infer where

import Data.String
import Data.Map (Map)
import Control.Monad.Error
import qualified Data.Map as Map

import qualified Unbound.LocallyNameless as NL
import Unbound.LocallyNameless hiding (Subst, compose)

data Type
  = TVar (Name Type)
  | TArr Type Type
  deriving (Show)

data Expr
  = Var (Name Expr)
  | Lam (Bind (Name Expr) Expr)
  | App Expr Expr
  | Let (Bind (Name Expr) Expr)
  deriving (Show)

$(derive [''Type, ''Expr])

instance IsString Expr where
  fromString = Var . fromString

instance IsString Type where
  fromString = TVar . fromString

instance IsString (Name Expr) where
  fromString = string2Name

instance IsString (Name Type) where
  fromString = string2Name

instance Eq Type where
  (==) = eqType

eqType :: Type -> Type -> Bool
eqType (TVar v1) (TVar v2) = v1 == v2
eqType _ _ = False

uvar :: String -> Expr
uvar x = Var (s2n x)

tvar :: String -> Type
tvar x = TVar (s2n x)

instance Alpha Type
instance Alpha Expr

instance NL.Subst Type Type where
  isvar (TVar v) = Just (SubstName v)
  isvar _ = Nothing

instance NL.Subst Expr Expr where
  isvar (Var v) = Just (SubstName v)
  isvar _ = Nothing

instance NL.Subst Expr Type where

data TypeError
  = UnboundVariable (Name Expr)
  | GenericTypeError
  deriving (Show)

instance Error TypeError where
  noMsg = GenericTypeError

type Env = Map (Name Expr) Type
type Constraint = (Type, Type)

type Infer = ErrorT TypeError FreshM

empty :: Env
empty = Map.empty

freshtv :: Infer Type
freshtv = do
  x <- fresh "_t"
  return $ TVar x

infer :: Env -> Expr -> Infer (Type, [Constraint])
infer env expr = case expr of

  Lam b -> do
    (n, e) <- unbind b
    tv <- freshtv
    let env' = Map.insert n tv env
    (t, cs) <- infer env' e
    return (TArr tv t, cs)

  App e1 e2 -> do
    (t1, cs1) <- infer env e1
    (t2, cs2) <- infer env e2
    tv <- freshtv
    return (tv, (t1, TArr t2 tv) : cs1 ++ cs2)

  Var n -> do
    case Map.lookup n env of
      Nothing -> throwError $ UnboundVariable n
      Just t  -> return (t, [])

  Let b -> do
    (n, e) <- unbind b
    (tBody, csBody) <- infer env e
    let env' = Map.insert n tBody env
    (t, cs) <- infer env' e
    return (t, cs ++ csBody)
Unbound Generics
Recently unbound was ported to use GHC.Generics instead of Template Haskell. The API is effectively the same, so for example a simple lambda calculus could be written as:
See:
Pretty Printers
Pretty is the first Wadler-Leijen style combinator library. It exposes a simple set of primitives to print Haskell datatypes to legacy strings programmatically. You probably don't want to use this library, but it inspired most of the ones that followed. There are many, many pretty printing libraries for Haskell.
Wadler-Leijen Style

* pretty
* wl-pprint
* wl-pprint-text
* wl-pprint-ansiterm
* wl-pprint-terminfo
* wl-pprint-annotated
* wl-pprint-console
* ansi-pretty
* ansi-terminal
* ansi-wl-pprint

Modern

* prettyprinter
* prettyprinter-ansi-terminal
* prettyprinter-compat-annotated-wl-pprint
* prettyprinter-compat-ansi-wl-pprint
* prettyprinter-compat-wl-pprint
* prettyprinter-convert-ansi-wl-pprint

Specialised

* layout
* aeson-pretty
These days it is best to avoid the pretty library and use the standard prettyprinter library, which subsumes most of the features of these previous libraries under one modern uniform API.
prettyprinter
prettyprinter is a printer combinator library which allows us to write typeclasses over datatypes to render them to strings with arbitrary formatting. These kinds of libraries show up everywhere the default Show instance is insufficient for rendering.

The base interface to these libraries is exposed as a Pretty class which monoidally composes a variety of documents together. The Monoid append operation simply concatenates two documents, while a variety of higher level combinators add additional string elements into the language. The Pretty class maps an arbitrary value into a Doc type which is annotated with the renderer. The Doc type can then be rendered to any number of string types by means of a layout algorithm. The builtin layout methods are Compact, Smart and Pretty.
The common combinators are shown below:

| Combinator | Description |
|------------|-------------|
| `<>` | Concatenation |
| `<+>` | Spaced concatenation |
| `nest` | Nests a document with whitespace |
| `group` | Lays out on a line by removing line breaks |
| `align` | Lays out with the nesting level at the current column |
| `hang` | Lays out with the nesting level relative to the first line |
| `indent` | Increases indentation by a given count |
| `list` | Lays out a given list with braces and commas |
| `tupled` | Lays out a given list with parens and commas |
| `hsep` | Concatenates list of docs horizontally with a separator |
| `hcat` | Concatenates list of docs horizontally |
| `vcat` | Concatenates list of docs vertically |
| `punctuate` | Appends a given doc to all elements of a list of docs |
| `parens` | Surrounds with parentheses |
| `dquotes` | Surrounds with double quotes |
For example the common pretty printed form of the lambda calculus k combinator is:
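As a sketch, such a document can be assembled from prettyprinter’s primitive character documents (the lam helper here is hypothetical; the module is named Prettyprinter in recent releases and Data.Text.Prettyprint.Doc in older ones):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Prettyprinter

-- A hypothetical helper building a lambda abstraction document.
lam :: Doc ann -> Doc ann -> Doc ann
lam v body = backslash <> v <> dot <> body

-- \x.\y.x, the K combinator
k :: Doc ann
k = lam "x" (lam "y" "x")

main :: IO ()
main = print k  -- Doc's Show instance runs the default layout algorithm
```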
The pretty-show library can be used to pretty print nested data structures in a more human readable form for any type that implements Show. For example, a dump of the structure for the AST of the SK combinator with ppShow.
A full example of pretty printing the lambda calculus is shown below. This uses a custom Pretty class to pass an integral value which indicates the depth of the lambda expression. Alternatively the builtin Pretty class could be used for simpler datatypes.
pretty-simple

pretty-simple is a Haskell library that renders Show instances in a prettier way. It exposes functions which are drop-in replacements for show and print.
A simple example is shown below.
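A sketch of the drop-in usage, with an illustrative Expr datatype:

```haskell
-- pPrint is pretty-simple's drop-in replacement for print.
import Text.Pretty.Simple (pPrint)

data Expr = Var String | App Expr Expr
  deriving (Show)

main :: IO ()
main = pPrint (App (Var "f") (App (Var "g") (Var "x")))
```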
pretty-simple can be used as the default GHCi printer as shown in the .ghci.conf section.
Haskeline
Haskeline is a Haskell library exposing cross-platform readline functionality. It provides a monad which can take user input from the command line and allows the user to edit and move back and forth on a line of input, as well as perform simple tab completion.
A simple example of usage is shown below:
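A minimal echoing loop, in the style of the canonical haskeline example:

```haskell
import System.Console.Haskeline

main :: IO ()
main = runInputT defaultSettings loop
  where
    loop :: InputT IO ()
    loop = do
      minput <- getInputLine "> "
      case minput of
        Nothing    -> outputStrLn "Goodbye."   -- Ctrl-D / EOF
        Just input -> do
          outputStrLn ("Echo: " ++ input)
          loop
```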
Repline
Certain sets of tasks in building command line REPL interfaces are so common that it becomes useful to abstract them out into a library. While haskeline provides a sensible lower-level API for interfacing with GNU readline, it is somewhat tedious to implement tab completion logic and common command logic over and over. To that end Repline assists in building interactive shells that resemble GHCi’s default behavior.
Trying it out. (
indicates a user keypress )
See:
LLVM
Haskell has a rich set of LLVM bindings that can generate LLVM IR and JIT dynamic code from inside of the Haskell runtime. This is especially useful for building custom programming languages and compilers which need native performance. The llvm-hs library is the de facto standard for compiler construction in Haskell.

We can link effectively to the LLVM bindings, which provide an efficient JIT that can generate fast code at runtime. These can serve as the backend to an interpreter, generating fast SIMD operations for linear algebra, or compiling dataflow representations of neural networks into code as fast as C from dynamic descriptions of logic in Haskell.
The llvm-hs library is split across two packages:

* llvm-hs-pure: Pure Haskell datatypes
* llvm-hs: Bindings to the C++ framework for optimisation and JIT

The llvm-hs bindings allow us to construct the LLVM abstract syntax tree by manipulating a variety of Haskell datatypes. These datatypes can all be serialised to the C++ bindings to construct the LLVM module’s syntax tree.
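As a rough sketch, following the shape of the llvm-hs-pure LLVM.AST API (exact field and constructor names may differ slightly between versions), a module containing a single add function can be built as:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import LLVM.AST
import LLVM.AST.Global

int :: Type
int = IntegerType 32

-- define i32 @add(i32 %a, i32 %b) { entry: ... }
defAdd :: Definition
defAdd = GlobalDefinition functionDefaults
  { name = Name "add"
  , parameters =
      ( [ Parameter int (Name "a") []
        , Parameter int (Name "b") [] ]
      , False )
  , returnType = int
  , basicBlocks = [body]
  }
  where
    body = BasicBlock
      (Name "entry")
      [ Name "result" :=
          Add False False
              (LocalReference int (Name "a"))
              (LocalReference int (Name "b"))
              [] ]
      (Do (Ret (Just (LocalReference int (Name "result"))) []))

astModule :: Module
astModule = defaultModule
  { moduleName = "basic"
  , moduleDefinitions = [defAdd]
  }
```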
This will generate the following LLVM module which can be pretty printed out:
An alternative interface uses an IRBuilder monad which interactively constructs the LLVM AST using monadic combinators.
See:
Template Haskell is a very powerful set of abstractions, some might say too powerful. It effectively allows us to run arbitrary code at compile-time to generate other Haskell code. You can do some absolutely crazy things, like reading from the filesystem or making network calls that inform how your code compiles, leading to non-deterministic builds.

While in some extreme cases TH is useful, some discretion is required when using it in a production setting. Template Haskell can cause your build times to grow without bound, force you to manually sort all definitions in your modules, and generally produce unmaintainable code. If you find yourself falling back on metaprogramming, ask yourself: what in my abstractions has failed me such that my only option is to write code that writes code?

Consideration should be used before enabling TemplateHaskell. Consider an idiomatic solution first.
Quasiquotation
Quasiquotation allows us to express “quoted” blocks of syntax that need not necessarily be the syntax of the host language; unlike just writing a giant string, the block is instead parsed into some AST datatype in the host language. Notably, values from the host language can be injected into the custom language via user-definable logic, allowing information to flow between the two languages.
In practice quasiquotation can be used to implement custom domain specific languages or integrate with other general languages entirely via codegeneration.
We’ve already seen how to write a Parsec parser, now let’s write a quasiquoter for it.
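A hedged sketch of the shape of such a quasiquoter: the Parsec parser from earlier is replaced here by a trivial placeholder parseExpr, and the Expr type is illustrative. dataToExpQ lifts any value with a Data instance into a Template Haskell expression:

```haskell
{-# LANGUAGE DeriveDataTypeable #-}

import Data.Data (Data)
import Language.Haskell.TH (Exp, Q)
import Language.Haskell.TH.Quote (QuasiQuoter (..), dataToExpQ)

data Expr = Lit Int | Add Expr Expr
  deriving (Show, Data)

-- Placeholder parser: accepts only an integer literal. A real
-- quasiquoter would call the Parsec parser here instead.
parseExpr :: String -> Either String Expr
parseExpr s = case reads s :: [(Int, String)] of
  [(n, "")] -> Right (Lit n)
  _         -> Left ("parse error: " ++ s)

quoteExprExp :: String -> Q Exp
quoteExprExp s =
  case parseExpr s of
    Left err   -> fail err
    Right expr -> dataToExpQ (const Nothing) expr

calc :: QuasiQuoter
calc = QuasiQuoter
  { quoteExp  = quoteExprExp
  , quotePat  = error "no pattern quoter"
  , quoteType = error "no type quoter"
  , quoteDec  = error "no declaration quoter"
  }
```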
Testing it out:
One extremely important feature is the ability to preserve position information so that errors in the embedded language can be traced back to the line of the host syntax.
language-c-quote

Of course since we can provide an arbitrary parser for the quoted expression, one might consider embedding the AST of another language entirely, for example C or CUDA C.

Evaluating this we get back an AST representation of the quoted C program, which we can manipulate or print back out to textual C code using the ppr function.
In this example we just spliced in the antiquoted Haskell string in the printf statement, but we can pass many other values to and from the quoted expressions including identifiers, numbers, and other quoted expressions which implement the Lift
type class.
GPU Kernels
For example, if we wanted to programmatically generate the source for a CUDA kernel to run on a GPU, we can switch over to the CUDA C dialect to emit the C code.
```haskell
{-# LANGUAGE QuasiQuotes     #-}
{-# LANGUAGE TemplateHaskell #-}

import qualified Language.C.Quote.CUDA as Cuda
import qualified Language.C.Syntax as C
import Text.PrettyPrint.Mainland
import Text.PrettyPrint.Mainland.Class (Pretty (..))

cuda_fun :: String -> Int -> Float -> C.Func
cuda_fun fn n a =
  [Cuda.cfun|
    __global__ void $id:fn (float *x, float *y) {
      int i = blockIdx.x*blockDim.x + threadIdx.x;
      if ( i<$n ) { y[i] = $a*x[i] + y[i]; }
    }
  |]

cuda_driver :: String -> Int -> C.Func
cuda_driver fn n =
  [Cuda.cfun|
    int driver (float *x, float *y) {
      float *d_x, *d_y;
      cudaMalloc(&d_x, $n*sizeof(float));
      cudaMalloc(&d_y, $n*sizeof(float));

      cudaMemcpy(d_x, x, $n, cudaMemcpyHostToDevice);
      cudaMemcpy(d_y, y, $n, cudaMemcpyHostToDevice);

      $id:fn<<<($n+255)/256, 256>>>(d_x, d_y);

      cudaFree(d_x);
      cudaFree(d_y);
      return 0;
    }
  |]

makeKernel :: String -> Float -> Int -> [C.Func]
makeKernel fn a n =
  [ cuda_fun fn n a
  , cuda_driver fn n
  ]

main :: IO ()
main = do
  let ker = makeKernel "saxpy" 2 65536
  mapM_ (putDocLn . ppr) ker
```
Running this we generate:
```
__global__ void saxpy(float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    if (i < 65536) {
        y[i] = 2.0 * x[i] + y[i];
    }
}
int driver(float* x, float* y)
{
    float* d_x, * d_y;

    cudaMalloc(&d_x, 65536 * sizeof(float));
    cudaMalloc(&d_y, 65536 * sizeof(float));
    cudaMemcpy(d_x, x, 65536, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y, 65536, cudaMemcpyHostToDevice);
    saxpy<<<(65536 + 255) / 256, 256>>>(d_x, d_y);
    cudaFree(d_x);
    cudaFree(d_y);
    return 0;
}
```
Pipe the resulting output through the NVidia CUDA compiler with nvcc -ptx -c to get the PTX associated with the generated code.
Template Haskell
Of course the most useful case of quasiquotation is the ability to procedurally generate Haskell code itself from inside of Haskell. The template-haskell framework provides four entry points for the quotation to generate various types of Haskell declarations and expressions.
| Type | Quasiquoter | Produces |
|-----------|-----------------|-------------|
| `Q Exp` | `[e\| ... \|]` | expression |
| `Q Pat` | `[p\| ... \|]` | pattern |
| `Q Type` | `[t\| ... \|]` | type |
| `Q [Dec]` | `[d\| ... \|]` | declaration |
The logic for evaluating, splicing, and introspecting compile-time values is embedded within the Q monad, which has a runQ function that can be used to evaluate its context. The functions of this monad are deeply embedded in the implementation of GHC.

Just as before, Template Haskell provides the ability to lift Haskell values into their AST equivalents within the quoted expression using the Lift type class.
In many cases Template Haskell can be used interactively to explore the AST form of various Haskell syntax.
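For instance, quotations can be evaluated with runQ directly in a GHCi session; a sketch of such a session (the exact variable numbering, e.g. x_0, varies between runs):

```
λ> :set -XTemplateHaskell
λ> import Language.Haskell.TH
λ> runQ [e| \x -> x |]
LamE [VarP x_0] (VarE x_0)
```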
Using Language.Haskell.TH we can piece together the Haskell AST element by element, subject to our own custom logic to generate the code. This can be somewhat painful, though, as the source language of Haskell (called HsSyn) is enormous, consisting of around 100 nodes in its AST, many of which depend on the state of language pragmas.
As a debugging tool it is useful to be able to dump the reified information out for a given symbol interactively; to do so there is a simple little hack.
Splices are indicated by $(f) syntax at the expression level, and at the toplevel simply by invocation of the Template Haskell function. Running GHC with -ddump-splices shows our code being spliced in at the specific location in the AST at compile-time.
At the point of the splice all variables and types used must be in scope, so it must appear after their declarations in the module. As a result we often have to mentally topologically sort our code when using TemplateHaskell such that declarations are defined in order.
See: Template Haskell AST
Antiquotation
Extending our quasiquotation from above, now that we have the Template Haskell machinery we can implement the same class of logic it uses: passing Haskell values in and pulling Haskell values out via pattern matching on templated expressions.
Templated Type Families
Just like at the value-level, we can build type-level constructions by piecing together their AST.

For example, consider that type-level arithmetic is still somewhat incomplete in GHC 7.6, but there are often cases where the span of type-level numbers is not the full set of integers but instead some bounded set of numbers. We can define operations with a type family instead of using an inductive definition (which often requires manual proofs) and simply enumerate the entire domain of arguments to the type family, mapping them to some result computed at compile-time.

For example, the modulus operator would be nontrivial to implement at the type-level, but instead we can use the enumFamily function to splice in a type family which simply enumerates all possible pairs of numbers up to a desired depth.

In practice GHC seems fine with enormous type family declarations, although compile times may increase a bit as a result.
The singletons library also provides a way to automate this process by letting us write seemingly value-level declarations inside of a quasiquoter and then promoting the logic to the type-level. For example, if we wanted to write a value-level and type-level map function for our HList, this would normally involve quite a bit of boilerplate; now it can be stated very concisely.
Templated Type Classes
Probably the most common use of Template Haskell is the automatic generation of typeclass instances. Consider if we wanted to write a simple pretty printing class for a flat data structure that derived the ppr method in terms of the names of the constructors in the AST: we could write a simple instance.

In a separate file invoke the pretty instance at the toplevel, and compile with -ddump-splices if we want to view the spliced class instance.
Multiline Strings
Haskell has no language support for multiline string literals, although we can emulate this by using a quasiquoter. The resulting String literal is then converted using toString into whatever result type is desired.
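A minimal sketch of such a quasiquoter (the name s is arbitrary); stringE lifts the raw quoted contents into a string literal expression:

```haskell
module MultilineStr (s) where

import Language.Haskell.TH (stringE)
import Language.Haskell.TH.Quote (QuasiQuoter (..))

-- Usage in another module (with QuasiQuotes enabled):
--   [s|line one
--   line two|]
s :: QuasiQuoter
s = QuasiQuoter
  { quoteExp  = stringE
  , quotePat  = error "multiline strings can only be used as expressions"
  , quoteType = error "multiline strings can only be used as expressions"
  , quoteDec  = error "multiline strings can only be used as expressions"
  }
```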
In a separate module we can then enable Quasiquotes and embed the string.
Path Files
Oftentimes it is necessary to embed the specific Git version hash of a build inside the executable. Using git-embed, the compiler will effectively shell out to the command line to retrieve the version information of the current Git repository and use Template Haskell to embed this information at compile-time. This is often useful for embedding version information in the command line interface to your program or service.

This example also makes use of the Cabal Paths_pkgname module generated at compile time, which contains several functions for querying target paths and included data files for the Cabal project. This can be included in the exposed-modules of a package to be accessed directly by the project; otherwise it is placed automatically in other-modules.

An example of usage, querying the Git metadata into the compiled binary of a project using the git-embed package:
Do I need to Learn Category Theory?
Short answer: No. Most of the ideas of category theory aren’t really applicable to writing Haskell.

The long answer: It is not strictly necessary to learn, but so few things in life are. Learning new topics and ways of thinking about problems only enriches your thinking and gives you new ways of thinking about code and abstractions. Category theory is never going to help you write a web application better, but it may give you insights into problems that are algebraic in nature. A tiny group of Haskellers espouse philosophies about it being an inspiration for certain abstractions, but most do not.
Some understanding of abstract algebra, and conventions for discussing algebraic structures and equational reasoning with laws are essential to modern Haskell and we will discuss these leading up to some basic category theory.
Abstract Algebra
Algebraic theory taught at higher levels generalises notions of arithmetic to operate over more generic structures than simple numbers. These structures are called sets and are a very broad notion of generic ways of describing groups of mathematical objects that can be equated and grouped. Over these sets we can define ways of combining and operating over elements of the set. These generalised notions of arithmetic are described in terms of sets and operations. Operations which take elements of a set to the same set are said to be closed in the set. When discussing operations we use the following conventions:
* Properties: Predicates attached to values and operations over a set.
* Binary Operations: Operations which map two elements.
* Unary Operations: Operations which map a single element.
* Constants: Specific values with specific properties in a set.
* Relations: Pairings of elements in a set.
Binary operations are generalisations of operations like multiplication and addition that map two elements of a set to another element of the set. Unary operations map an element of a set to a single element of a set. Ternary operations map three elements. Higher-arity operations are usually not given specific names.

Constants are specific elements of the set that generalise values like 0 and 1, which have specific laws in relation to the operations defined over the set.

Certain properties show up so frequently that we typically refer to them by an algebraic term. These terms are drawn from an equivalent abstract algebra concept. Several of the common algebraic laws are defined in the table below.
Associativity
Equations:
a × (b × c) = (a × b) × c
Haskell:
Haskell Predicate:
Commutativity
Equations:
a × b = b × a
Haskell:
Haskell Predicates:
Units
Equations:
a × e = a
e × a = a
Haskell:
Haskell Predicates:
Inversion
Equations:
a^{ − 1} × a = e
a × a^{ − 1} = e
Haskell:
Haskell Predicates:
Zeros
Equations:
a × 0 = 0
0 × a = 0
Haskell
Haskell Predicates:
Linearity
Equations:
f(x + y) = f(x) + f(y)
Haskell:
Haskell Predicates:
Idempotency
Equations:
f(f(x)) = f(x)
Haskell Predicates:
Distributivity
Equations:
a × (b + c) = (a × b) + (a × c)
(b + c) × a = (b × a) + (c × a)
Haskell:
Haskell Predicates:
Anticommutativity
Equations:
a × b = (b × a)^{ − 1}
Haskell:
Haskell Predicates:
Homomorphisms
Equations:
f(x × y) = f(x) + f(y)
Haskell:
Haskell Predicates:
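The laws above can be phrased as executable Haskell predicates; a sketch for a few of them (the function names here are illustrative):

```haskell
-- Generic predicates for some of the laws above, over an arbitrary
-- closed binary operation passed in as an argument.

associative :: Eq a => (a -> a -> a) -> a -> a -> a -> Bool
associative (*) a b c = a * (b * c) == (a * b) * c

commutative :: Eq a => (a -> a -> a) -> a -> a -> Bool
commutative (*) a b = a * b == b * a

unit :: Eq a => (a -> a -> a) -> a -> a -> Bool
unit (*) e a = (a * e == a) && (e * a == a)

idempotent :: Eq a => (a -> a) -> a -> Bool
idempotent f x = f (f x) == f x

distributive :: Eq a => (a -> a -> a) -> (a -> a -> a) -> a -> a -> a -> Bool
distributive (*) (+) a b c =
  (a * (b + c) == (a * b) + (a * c)) &&
  ((b + c) * a == (b * a) + (c * a))
```

In practice these predicates would be checked over randomly generated inputs with a property testing library such as QuickCheck.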
Combinations of these properties over multiple functions gives rise to higher order systems of relations that occur over and over again throughout functional programming, and once we recognize them we can abstract over them. For instance a monoid is a combination of a unit and a single associative operation over a set of values.
You will often see this notation in tuple form, where a set S (called the carrier) is enriched with a variety of operations and elements that are closed over that set. For example a semigroup is a set equipped with an associative closed binary operation. If you add an identity element e to the semigroup you get a monoid.

| Structure | Signature |
|-----------|------------|
| Semigroup | (S, •) |
| Monoid | (S, •, e) |
| Monad | (S, μ, η) |
Categories
The most basic structure is a category, which is an algebraic structure of objects (Obj) and morphisms (Hom) such that morphisms compose associatively and an identity morphism exists for each object. A category is defined entirely in terms of its:

* Elements
* Morphisms
* Composition Operation
A morphism f, written f : x → y, is an abstraction of the algebraic notion of homomorphism. It is an arrow between two objects x and y in a category, called the domain and codomain respectively. The set of all morphisms between two given objects x and y is called the hom-set and written Hom(x, y).
In Haskell, with kind polymorphism enabled, we can write down the general category parameterized by a type variable “c” for category. An instance of this is Hask, the category of Haskell types with functions between types as morphisms.
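A sketch of the class, mirroring Control.Category from the base library, together with the Hask instance:

```haskell
{-# LANGUAGE PolyKinds #-}

import Prelude hiding (id, (.))

-- A category: identity morphisms plus associative composition.
class Category c where
  id  :: c x x
  (.) :: c y z -> c x y -> c x z

-- Hask: Haskell functions form a category.
instance Category (->) where
  id x = x
  (g . f) x = g (f x)
```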
Categories are interesting since they exhibit various composition properties and ways in which various elements in the category can be composed and rewritten while preserving several invariants about the program.
Some annoying curmudgeons will sometimes nitpick about this not being a “real category” because all Haskell values are potentially inhabited by a bottom value, which violates several rules of composition. This is mostly silly nitpicking, and for the sake of discussion we’ll consider “ideal Haskell”, which does not have this property.
Isomorphisms
Two objects of a category are said to be isomorphic if we can construct a morphism with a two-sided inverse that takes the structure of an object to another form and back to itself when inverted.
Such that:
For example the types Either () a and Maybe a are isomorphic.
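A sketch of that isomorphism, witnessed by a pair of mutually inverse functions (the names here are illustrative):

```haskell
-- Either () a and Maybe a carry the same information:
-- Left () corresponds to Nothing, Right a to Just a.
toMaybe :: Either () a -> Maybe a
toMaybe (Left ()) = Nothing
toMaybe (Right a) = Just a

fromMaybe' :: Maybe a -> Either () a
fromMaybe' Nothing  = Left ()
fromMaybe' (Just a) = Right a

-- Such that:
--   toMaybe . fromMaybe' = id
--   fromMaybe' . toMaybe = id
```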
Duality
One of the central ideas is the notion of duality: reversing some internal structure yields a new structure with a “mirror” set of theorems. The dual of a category reverses the direction of the morphisms, forming the category C^{Op}.
See:
Functors
Functors are mappings between the objects and morphisms of categories that preserve identities and composition.
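In Haskell this corresponds to the Functor class; a sketch of the class (renamed here to avoid clashing with the Prelude) with the two functor laws stated as comments:

```haskell
-- Instances must preserve identity and composition:
--   fmap' id      = id
--   fmap' (g . f) = fmap' g . fmap' f
class MyFunctor f where
  fmap' :: (a -> b) -> f a -> f b

instance MyFunctor Maybe where
  fmap' _ Nothing  = Nothing
  fmap' f (Just a) = Just (f a)
```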
Natural Transformations
Natural transformations are mappings between functors that are invariant under interchange of morphism composition order.
Such that for a natural transformation h
we have:
The simplest example is between (f = List) and (g = Maybe) types.
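A sketch of that natural transformation, assuming the usual definition of safeHead:

```haskell
-- safeHead is a natural transformation from [] to Maybe:
-- for any f, fmap f . safeHead = safeHead . fmap f.
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x
```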
Regardless of how we chase safeHead
, we end up with the same result.
Or consider the Functor (->).
A lot of the expressive power of Haskell types comes from the interesting fact that, with a few caveats, polymorphic Haskell functions are natural transformations.
See: You Could Have Defined Natural Transformations
Kleisli Category
Kleisli composition (i.e. Kleisli Fish) is defined to be:
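In Haskell this is the (>=>) operator exported by Control.Monad; its definition is reproduced here as a sketch, with the monad laws in Kleisli form as comments:

```haskell
-- Kleisli composition, as defined in Control.Monad.
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
f >=> g = \x -> f x >>= g

-- The monad laws in Kleisli form:
--   (f >=> g) >=> h  =  f >=> (g >=> h)   -- associativity
--   return >=> f     =  f                 -- left identity
--   f >=> return     =  f                 -- right identity
```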
The monad laws stated in terms of the Kleisli category of a monad m
are stated much more symmetrically as one associativity law and two identity laws.
Stated simply that the monad laws above are just the category laws in the Kleisli category.
For example, Just
is just an identity morphism in the Kleisli category of the Maybe
monad.
Monoidal Categories
On top of the basic category structure there are other higherlevel objects that can be constructed that enrich the category with additional operations.
* A bifunctor is a functor whose domain is the product of two categories.
* A monoidal category is a category which has a tensor product and a unit object.
* A braided monoidal category is a monoidal category with an operation braid which swaps elements in the tensor product.
* A cartesian monoidal category is a monoidal category with a terminal object, binary products, and a diagonal.
* A cartesian closed category is a cartesian monoidal category with exponential objects.
An example of this tower is Hask, with (->) as exponential, (,) as product and () as unit object.
```haskell
type Hask = (->)

instance Category (->) where
  id = Prelude.id
  (.) = (Prelude..)

instance Bifunctor (->) (,) where
  bimap f g = \(a, b) -> (f a, g b)

instance Associative (->) (,) where
  associate ((a, b), c) = (a, (b, c))
  coassociate (a, (b, c)) = ((a, b), c)

instance Monoidal (->) (,) () where
  idl ((), a) = a
  idr (a, ()) = a
  coidl a = ((), a)
  coidr a = (a, ())

instance Braided (->) (,) where
  braid (a, b) = (b, a)

instance Cartesian (->) (,) () where
  fst = Prelude.fst
  snd = Prelude.snd
  diag x = (x, x)

instance CCC (->) (,) () (->) where
  apply (f, a) = f a
  curry = Prelude.curry
  uncurry = Prelude.uncurry
```
Further Resources
Category theory is an entire branch of mathematics that should be studied independently of Haskell and programming. The classic text is “Category Theory” by Awodey. This text assumes an undergraduate level mathematics background.
For a programming perspective there are several lectures and functional programming oriented resources:
 Category Theory for Programmers PDF
 Category Theory for Programmers Lectures
 Category Theory Foundations
All code is available from this Github repository. This code is dedicated to the public domain. You can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission.
https://github.com/sdiehl/wiwinwlh
Chapters:
* 01-basics/
* 02-monads/
* 03-monad-transformers/
* 04-extensions/
* 05-laziness/
* 06-prelude/
* 07-text-bytestring/
* 08-applicatives/
* 09-errors/
* 10-advanced-monads/
* 11-quantification/
* 12-gadts/
* 13-lambda-calculus/
* 14-interpreters/
* 15-testing/
* 16-type-families/
* 17-promotion/
* 18-generics/
* 19-numbers/
* 20-data-structures/
* 21-ffi/
* 22-concurrency/
* 23-graphics/
* 24-parsing/
* 25-streaming/
* 26-data-formats/
* 27-web/
* 28-databases/
* 29-ghc/
* 30-languages/
* 31-template-haskell/
* 32-cryptography/
* 33-categories/
* 34-time/