Thursday, October 4, 2012

runtime meta-programming in Haskell

In the spirit of Edward Yang, half-baked ideas lie ahead

There are many reasons programmers may want to use metaprogramming facilities, but one of the most common is greater efficiency.  The premise is simple: if some runtime data becomes available earlier than the rest, a metaprogramming environment may allow the programmer to generate code specialized to that early data, leading to more efficient computations.

A running example in a great deal of the literature is the power function.  Given pow a b = b ^ a, for certain values of a, a much more specialized function can be created, e.g.

pow 1 b = b
pow 2 b = b*b
pow 4 b = let sq = b*b in sq*sq
pow n b = b ^ n

but writing out all possible cases like this is far from ideal.  It's tedious and error-prone, and it still won't deliver the hoped-for performance gains, because it's generally not possible for the programmer to enumerate every case in advance.  A better solution is desired.

Template Haskell is generally disparaged as a possible solution because it's a compile-time facility.  A meta-programming approach to the pow function would be this Template Haskell expression:

mkPow 0 = [| const 1 |]
mkPow n = [| \x -> x * $(unMeta $ mkPow (n-1)) x |]

While this works, Template Haskell splices may only be evaluated at compile time.  For a wide range of interesting metaprogramming problems, Template Haskell simply isn't available.  However, by using the GHC API we can make use of Template Haskell at runtime, and with a package like plugins or hint the code is remarkably simple.

I've placed a proof-of-concept implementation on GitHub.  Meta-expressions are created in the Meta newtype, which wraps Template Haskell's ExpQ type.  The Meta type has an extra phantom type parameter to assist with type safety and inference.  Here's a simple example of Meta in use, with mkPow essentially as defined above:

{-# LANGUAGE TemplateHaskell #-}
{-# OPTIONS_GHC -Wall #-}

module Data.Meta.Example where

import Data.Meta.Meta
import Data.Meta.ExampleExpr

main :: IO ()
main = do
  n1 <- readLn
  let expr1 = mkPow n1 :: Meta (Int -> Int)
  n2 <- readLn
  let expr2 = mkPow n2 :: Meta (Int -> Int)
      expr3 = compose expr1 expr2
  result <- metaCompile expr3
  case result of
    Left err -> putStrLn "failed" >> putStrLn err
    Right fn -> do
      j <- readLn
      print $ fn j
      k <- readLn
      print $ fn k
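
To make the wiring above concrete, here's a minimal sketch of what a Data.Meta.Meta module could look like, using hint for the runtime compilation step.  This is not the actual proof-of-concept code: only unMeta, compose, and metaCompile are taken from the example above, the rest is assumption, and a real implementation would also need to handle the qualified names that pprint produces, module imports, and so on.

{-# LANGUAGE TemplateHaskell #-}
module Data.Meta.Meta where

import Language.Haskell.TH          (ExpQ, runQ, pprint)
import Language.Haskell.Interpreter (runInterpreter, setImports, interpret, as)
import Data.Typeable                (Typeable)

-- A Template Haskell expression tagged with the type we expect it to have.
newtype Meta a = Meta { unMeta :: ExpQ }

-- Compose two meta-expressions by splicing them around (.).
compose :: Meta (b -> c) -> Meta (a -> b) -> Meta (a -> c)
compose (Meta f) (Meta g) = Meta [| $f . $g |]

-- Pretty-print the expression and hand the source to hint for compilation.
metaCompile :: Typeable a => Meta a -> IO (Either String a)
metaCompile (Meta expr) = do
  src <- fmap pprint (runQ expr)
  r   <- runInterpreter $ do
           setImports ["Prelude"]
           interpret src as
  return (either (Left . show) Right r)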


Incidentally, this approach seems remarkably similar to the ideas in Geoffrey Mainland's presentation at ICFP 2012. The big differences are that he defines his own quasi-quoters instead of using Template Haskell, and he also uses a compiler other than GHC. Both of these are necessary if you want to metaprogram in a language other than Haskell, naturally.

Tuesday, June 12, 2012

Language-ObjC released

I have uploaded a new package to Hackage, language-objc.  This package provides an AST, parser, and pretty-printer for Objective-C.  It's a fork of the excellent language-c package, so if you're familiar with that you should be able to pick up language-objc pretty quickly.

The source repository is on GitHub.

If you "cabal unpack language-objc; cd language-objc-0.4.2.0/examples/", there are a number of examples that can get you started.  For the truly impatient:

import Language.ObjC
import qualified Data.ByteString.Char8 as B

-- parsing bytestrings
objcTranslUnit = B.pack "int a=0;"

errOrAST = parseC objcTranslUnit nopos

-- parsing from a file
fileAST sourceFile = do
    result <- parseCFile (newGCC "gcc") Nothing [] sourceFile
    return $ either (error . show) id result
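
Building on the snippet above, here's a hedged sketch of a full parse/pretty-print round trip.  It assumes that, like language-c, the Language.ObjC module re-exports parseCFile, newGCC, and pretty (adjust the imports if your version splits them across modules); the file name "Example.m" is just a placeholder.

module Main where

import Language.ObjC
import Text.PrettyPrint (render)

main :: IO ()
main = do
  -- parseCFile runs the preprocessor (gcc) before parsing
  result <- parseCFile (newGCC "gcc") Nothing [] "Example.m"
  case result of
    Left err  -> error (show err)
    Right ast -> putStrLn (render (pretty ast))
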
A few things to note:
  1. Input files must be preprocessed.  'parseCFile' will do this for you.  Non-preprocessed source can only be parsed in very limited situations (did you know that 'id' is a typedef?  And 'id' is also commonly used as an ordinary identifier in methods and functions, even in system libraries, so it can't be a keyword?).
  2. The Objective-C extensions have only been lightly tested, but I think most of the core functionality works.  I am unaware of a full specification for the language, which makes it difficult to say whether the implementation is complete.
  3. Most of the code in Language.ObjC.Analysis is not yet compatible with Objective-C extensions.
  4. One difference from language-c is that certain GNU attributes can no longer be parsed due to grammar ambiguities.
  5. Attempting to compile at -O2 requires a lot of memory due to the giant Happy parser that's produced.

I am most grateful to Benedikt Huber, Manuel Chakravarty, Duncan Coutts, Bertram Felgenhauer, and all contributors to language-c, as they're responsible for most of the code.  Bugs are mine.  Comments and patches welcome.

Monday, June 11, 2012

Understandings of Iteratees

Although still relatively new, iteratee/enumerator style I/O has achieved a fairly large userbase in the Haskell community.  Like many young technologies, the design space is fairly broad, with several different packages available from Hackage.  Upon surveying the alternatives, I came to realize that there are so many in part because different developers have chosen to focus on different properties of iteratees.  Doubtless this contributes to much of the confusion surrounding iteratees; when a user's mental model is closely aligned with the developer's, the package is likely to be much easier to understand.

This post isn't meant to explain how to use iteratees, but rather to explore what I believe are the factors that influenced various design decisions in different packages.  In some cases I have first- or second-hand knowledge of those decisions, but sometimes I'm extrapolating from the code.  If you find one or two packages sensible and can't understand why anyone would use something different, you may find enlightenment here.  Or not.  For those who've never used an iteratee package, I hope these posts might help point you to a good starting option.  Wizards probably know this already.  Note that just because I think the designers have a particular viewpoint for their code doesn't mean that other elements won't be present; these are all just different ways of looking at iteratees, and all packages will have these properties to some extent.

Glossary

Some definitions, to be perfectly clear (the adventurous can skip this; a minimal type sketch follows the list):
  • iteratee: some sort of resource consumer.  The defining feature of an iteratee is that it provides a mechanism to pump it with data until it returns a result.  I may use "iteratee" generally, referring to code in an iteratee style.
  • enumerator: some sort of resource provider.  It stuffs data into an iteratee until it gets a result.
  • iteratee style, iteratee/enumerator style: writing code that makes use of iteratees, enumerators, and other associated functions
  • iteratee package: a library to provide functions used to write in iteratee style.
  • underlying monad: for iteratees that are monad transformers, the monad which is being transformed.  Often either IO or a monad stack based on IO.
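
To make these definitions concrete, here is a deliberately minimal sketch of the classic types.  It isn't taken from any particular package; real libraries add chunked input, error handling, seeking, and monad transformers on top of this.

data Stream a = EOF | Chunk [a]

data Iteratee a b
  = Done b (Stream a)                  -- finished, possibly with leftover input
  | Cont (Stream a -> Iteratee a b)    -- needs more input

-- an enumerator transforms an iteratee by feeding it data
type Enumerator a b = Iteratee a b -> Iteratee a b

-- feed a list to an iteratee
enumList :: [a] -> Enumerator a b
enumList xs (Cont k) = k (Chunk xs)
enumList _  done     = done

-- signal end-of-input and extract the result, if any
run :: Iteratee a b -> Maybe b
run (Done b _) = Just b
run (Cont k)   = case k EOF of
                   Done b _ -> Just b
                   _        -> Nothing

-- an example iteratee: sum everything it is given
sumIter :: Num a => Iteratee a a
sumIter = go 0
  where
    go acc = Cont $ \s -> case s of
      EOF      -> Done acc EOF
      Chunk xs -> go (acc + sum xs)

With these definitions, run (enumList [1..10] sumIter) evaluates to Just 55.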

1: Iteratees as Resource Managers


One of the design goals of Oleg Kiselyov's original iteratee work was improved resource management.  Lazy I/O can make it difficult to reason about resource management (half-closed?), while strict I/O traditionally does not compose well (compare to network programming in C).  With iteratees, these problems are much closer to being solved.  Input streams are opened on demand, evaluated strictly, and closed immediately, with no possibility of the handle or other resource leaking.(1)  And iteratees are quite composable (perhaps too much so!), solving the other issue as well.

These properties make iteratees extremely useful for network programming, as network sockets are a scarce resource.  However, this usage highlights a particular drawback: iteratees solve the resource management problem for input, but don't address it for output.

I believe all developers currently agree that the best solution for output resource management is to use a monadic region, such as that provided by regions or ResourceT.  This is currently the only approach guaranteed to run cleanup code with complicated monad stacks in the presence of exceptions.(2)  In some cases Control.Exception.bracket can be used; however, it's often not possible to call it at a convenient time, leaving resources open longer than necessary.
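
As a point of comparison, here's a small sketch of output resource management with a monadic region, assuming the resourcet package (the API shown is the current one; early versions differed in the details).  The handle is released when the region exits, even if an exception is thrown first.

import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Resource (runResourceT, allocate, release)
import System.IO

writeOutput :: IO ()
writeOutput = runResourceT $ do
  -- register the handle with the region; hClose runs when the region exits
  (key, h) <- allocate (openFile "out.dat" WriteMode) hClose
  liftIO $ hPutStrLn h "output goes here"
  release key  -- or simply let runResourceT clean up at the end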

Having established that a monadic region is necessary for output resource management, it's reasonable to use the same mechanism for input resource management as well.  Developers taking this view may then write their package so that even input streams can leak resources without the resource management layer, unlike the original design.(3)

2: Iteratees as Stream Processors


Iteratee packages with a stream processing bent focus on composability as a design goal.  The libraries usually provide a multitude of stream processors.  They are frequently designed with a UNIX-pipeline-style model in mind; data can be passed through one processor into the next and so on, until finally the last processor in the chain consumes the resulting data.  There may be clear demarcations between producers, consumers, and processors, or perhaps not.  There are frequently combinator functions to make these connections, sometimes with a syntax inspired by UNIX pipes.  Many of the functions will be similar to traditional Haskell list-processing functions, such as maps, accumulators, and filters.

People with this viewpoint tend to view stream processor composition, which Oleg refers to as an "Enumeratee", as the most important/useful means of composing iteratees.

Although I've presented stream processing and resource management as two different viewpoints, I addressed them together because packages generally can't exclude either element.  One approach is to divide responsibilities, with a dedicated resource manager and combinators that can use those resources.  Other packages may use the iteratee's own handling of input resource management, leaving output resource management to the user's discretion.

Conclusion


For the curious,  I would place conduit and iterIO squarely in the stream processor camp, with enumerator having strong leanings this way as well.  To some extent all iteratee packages provide similar stream-processing capability; I make these judgements based upon what I think the designers think iteratees are for.  If your favorite iteratee package hasn't shown up yet, maybe next time!

Notes

(1)  There is at least one way to force a resource to stay open longer than intended.  Since iteratees are generally implemented as monad transformers, it is possible to craft an iteratee over ContT IO that generates a lazy stream from an input source, available within another iteratee.  The resource would be closed when the outer iteratee was evaluated, making the outer iteratee a type of monadic region.  I consider this an egregious abuse of Cont, however, and extremely unlikely to happen accidentally.

(2) I'm fairly outspoken in my dislike for exceptions.  They mostly shouldn't be used.  The main purpose of exceptions is to make it easy to reason about how to deal with various conditions, and they suck at this.  There are basically three kinds of exceptions:
  1. Stuff you can fix, like a file not existing.  This should be modeled by Either or Maybe or something similar.  It's much easier to reason about a Maybe than having to figure out where in IO some result will be evaluated.  This is by far the most common reason programmers use exceptions, and (in Haskell at least) it's the worst choice because Haskell provides a better mechanism for dealing with this problem.
  2. Stuff you can't fix, like out of memory conditions.  The only sensible thing to do is terminate the program and let the OS clean it up.  You shouldn't try to catch these, and if you do catch one you shouldn't try to do anything, because that might fail too.  I suppose it's okay to model these with an exception, but maybe they shouldn't be exposed to the program logic at all.
  3. Asynchronous exceptions.  I begrudgingly accept that these may need to be used in concurrent code.

(3)  The basic resource provided by an OS is generally a cursor, e.g. a handle.  Oleg Kiselyov argues that cursors and enumerators are isomorphic, although enumerators have certain desirable properties that should make them preferred.  The traditional design of an iteratee package is to provide enumerators that wrap access to a cursor in a closure, preventing the cursor from leaking.  Packages like pipes or conduit that require a monadic region for safe access to input hold the cursor as data and don't provide enumerators-as-closures.  In essence they evolved from cursor to enumerator back to cursor.  Because of this I'm not entirely convinced they should be classed as "iteratee packages" at all, though they certainly occupy the same design space.

Sunday, April 29, 2012

Adding state support to BNFC

The BNF Converter (BNFC) is a pretty cool tool.  The idea is simple: provide a grammar written in Labelled BNF, and it spits out a lexer, parser, pretty printer, and abstract syntax tree.  The very nice part is that the generated code can be in several formats: Alex/Happy/Haskell, flex/bison/C/C++, or JLex/CUP/Java.  It's especially convenient when you're developing a language, as you can make changes as you go and automatically keep your AST, parser, and printers in sync.

BNFC takes as input a file written in Labelled BNF, which is essentially a BNF grammar with a label attached to each production rule.  Brackets "[ ]" indicate lists rather than optional items, and there are few extensions from minimal BNF other than pragmas to define lexer tokens and positions.

Writing one grammar file instead of four separate modules is very attractive.  Unfortunately BNFC has a serious problem: only context-free grammars are supported.

LBNF is sufficient to represent context-free grammars, and within this domain the system works very well. But despite the claims of the LBNF Report, many interesting languages are not context-free, and they still need to be lexed and parsed.  The most notable example is probably C, in which a string can be lexed as an identifier or a type name, depending on previous typedefs and other things.  I present here a small set of extensions to enable support for passing state between the lexing and parsing stages, enabling a wider variety of parsers to be constructed from LBNF.

A darcs repository of BNFC, with state support as described here, is available from http://www.tiresiaspress.us/haskell/bnfc/.  These features are only supported in the Haskell/Alex backend (Alex-3 is required).

An initial examination

The approach I took is founded on two simple ideas.  First, the lexer needs access to a state table to determine the correct token, and second, the parser needs to update the state table in response to certain rules.

Lexing rules

In LBNF, tokenizing rules are specified like this:
token Ident (alpha [alpha|digit|'_']*) ;
where "Ident" is the type of the token produced, and the remainder is a regular expression to determine valid "Ident" strings.

A stateful token rule is similar.  The implementation supports multiple sets of state, so it's necessary to specify which state set the tokenizer should check.
state token TypedefName typedefNameSet [alpha|digit|'_']+ ;
First the lexer matches the regular expression against the input string.  It then takes the match and checks whether it's present in "typedefNameSet".  If so, a token of type "TypedefName" is produced; otherwise an "Ident" is produced.(1)
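
In Haskell terms, the check the generated lexer performs is roughly the following sketch; the names (Token, StateSet, classify) are invented for illustration and are not the code BNFC actually emits.

import qualified Data.Set as Set

data Token = TIdent String | TTypedefName String
  deriving Show

-- one state set; the real implementation keeps a stack of these (see Scoping below)
type StateSet = Set.Set String

classify :: StateSet -> String -> Token
classify typedefNameSet s
  | s `Set.member` typedefNameSet = TTypedefName s
  | otherwise                     = TIdent s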

Parsing Rules

Standard production rules are extended with pragmas that specify state update annotations:
-- after this rule, the final Ident will be lexed as a TypedefName
Etypedef.  Declaration ::= "typedef" TypeName Ident <% state typedefNameSet add 3 %> ;
The number "3" specifies that a token string derived from the 3rd production (Ident) is to be added to the set "typedefNameSet".(2)

Similarly, names can be removed from a state set:
-- after this rule, the second token will be lexed as an Ident
UnTydeclarator. Direct_declarator ::= TypeName TypedefName <% state remove 2 %> ;
When removing a token from a set, it's unnecessary to specify the set the token is being removed from.

Extended support

Initial Values

Initial members of the state sets can be specified in the grammar via the pragma TODO.

Scoping

The implementation maintains a stack of state values in order to support lexical scoping of changes.  Two additional pragmas, "state push" and "state pop", provide access to stack manipulations.  A common usage would be:
PushParen.  PushParen ::= "(" <% state push %>
PopParen.   PopParen  ::= ")" <% state pop %>
It is an error to pop the last frame off the state stack.
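
The stack manipulation itself amounts to something like the following sketch, reusing the StateSet type from the earlier sketch; again the names are illustrative rather than the generated code.  "state push" duplicates the current frame and "state pop" discards it, restoring the enclosing scope.

pushFrame :: [StateSet] -> [StateSet]
pushFrame (top : rest) = top : top : rest
pushFrame []           = error "empty state stack"

popFrame :: [StateSet] -> [StateSet]
popFrame (_ : rest@(_:_)) = rest
popFrame _                = error "cannot pop the last frame off the state stack"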

The implementation only supports one stack for all state sets.  It is not currently possible to represent multiple state sets with differing scoping rules.

A drawback of the current implementation is that production rules such as PushParen and PopParen are no longer terminals.  Therefore the implementation creates abstract syntax representations for them, even though they convey no more meaning than if they were omitted completely.  I hope to address this in the future.

Future Work

Deriving the state string

A major question is how to determine which token is added to the state set for a given production.  The above cases are simple.  In a production rule like:
Etypedef.  Declaration ::= "typedef" TypeName Ident <% state typedefNameSet add 3 %> ;
"Ident" is a token, so it's simple to derive the string that should be added to the set.  However, it's frequently desirable to modify state based upon a non-terminal rule, such as:
Rule.  SomeProd ::= Declaration <% state stateset add 1 %> ;
How should the token be determined from the Declaration?

The current implementation only supports state adjustments based upon tokens and lists of tokens, e.g. [Ident].  Code generated from rules like the above "Rule" currently results in a compile-time error due to missing instances.  The instances can be manually provided in a separate file, although creating this file can be tedious.  I plan to add an additional pragma to the LBNF grammar that would allow for the necessary instances to be automatically generated.

Notes

(1) - The astute reader will notice that the TypedefName regex matches strictly more identifiers than the built-in "Ident" regex.  Therefore this match provides a "backdoor" into Ident, allowing strings that don't match the "Ident" regex to actually be produced as "Ident"s.  I find this behavior useful, and in any case it can be mitigated simply by using the Ident regex for state tokenizing rules.  Another possibility would be to use a user-provided alternative in case the match fails.

(2) - An alternative notation would omit the numbers and place the pragma adjacent to the production.

Sunday, March 25, 2012

Pure delay lines

While working on my dsp-extras project, I needed a pure delay line.  Based on previous work, I used Data.Sequence.Seq, and was somewhat surprised to see a very large performance hit when compared to an undelayed signal.  Obviously there was more to creating pure buffers in Haskell than my previous microbenchmarks revealed.  I resolved to get the performance back.

I created a test script to measure the performance of different delay line implementations in a real-world application.  The application simply reads in a sound file, applies a fractional delay, and writes the output.  I measured the performance with buffer sizes ranging from 1 to 10000 samples.  Due to interpolation (I used cubic interpolation for all my tests), I couldn't actually measure a 1-sample buffer; the program actually creates a 4-sample buffer for the smallest possible delay.  All other buffer sizes are accurate though.

I tested the following buffer types:
  • Data.Sequence.Seq - cons elements onto the front and drop from the tail as each element is pushed (a small sketch of this variant follows the list)
  • Data.Vector.Unboxed.Vector - a naive copying implementation
  • T10 - a 10 element, strict data type (e.g. data T10 a = T10 !a !a ...)
  • T25 - a 25 element, strict data type
  • Evil - based upon Data.Vector.Unboxed.Vector, the Evil buffer thaws the vector, mutates a single element, and returns the updated vector.  This buffer is closest to an imperative circular buffer implementation, although it violates referential transparency.  Hence the Evil moniker.
  • ComboBuffer - the ComboBuffer consists of two unboxed vectors and an update list.  After the update list grows large enough, it's frozen into a vector and the oldest data is dropped.  Writing elements is performed in amortized O(1) time.  The read behavior is more complex.
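
For reference, the Seq-based variant from the first bullet amounts to something like this sketch: new samples are consed onto the front and the oldest sample is dropped from the back, so the buffer length stays fixed.

import Data.Sequence (Seq, ViewR(..), (<|), viewr, index)
import qualified Data.Sequence as S

-- push a new sample, discarding the oldest one
pushSample :: a -> Seq a -> Seq a
pushSample x buf = case viewr (x <| buf) of
  rest :> _ -> rest
  EmptyR    -> S.empty   -- cannot happen: we just added an element

-- read the i'th most recent sample (0 is the newest)
readSample :: Seq a -> Int -> a
readSample = index
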
The results are summarized online here.  A few notes about interpreting the results:
  • Given times are the result of a single run, although they are fairly stable over multiple executions.
  • The chart shows times in a log scale to help differentiate times for small buffers.  Otherwise the O(n) behavior of Vector obliterates these differences.
  • Current is the behavior of an algorithm that adaptively selects a buffer type based upon size.
  • All the results (except one) are for reading off the end of the buffer.  For most buffer types this shouldn't matter. It's optimal for Seq.  For the ComboBuffer, reads from the last half of the buffer can be done in O(1), but reads from the first half are more complex, as a list lookup may be required.  The Current - first quarter data is the result of accessing points 0.25 of the way into the buffer.
So for reads from the end of the buffer, my ComboBuffer is pretty good.  It needs some tuning though, especially for reads from near the front of the buffer.

Faking it with higher rank (Existential Quantification pt. 2)

In my previous post I provided a short background on Existential Quantification.  In this post, I'll provide an alternative for some uses and suggest why it may be preferred.

Suppose we want to create a data store that uses lists for small amounts of data, but otherwise uses vectors.  We can easily use E.Q.

{-# LANGUAGE TypeFamilies, ExistentialQuantification #-}
-- hide Prelude's init so it doesn't clash with the class method below
import Prelude hiding (init)
import qualified Prelude

import Data.Vector (Vector)
import qualified Data.Vector as V

class Buffer c where
  type El c :: *
  init :: Int -> El c -> c
  (!)  :: c -> Int -> El c
  push :: c -> El c -> c

instance Buffer [a] where
  type El [a] = a
  init = replicate
  (!)  = (!!)
  push lst el = el : Prelude.init lst   -- Prelude.init drops the oldest element

instance Buffer (Vector a) where
  type El (Vector a) = a
  init = V.replicate
  (!)  = (V.!)
  push vec el = V.cons el $ V.unsafeInit vec

data EBuffer a = forall c. (Buffer c, El c ~ a) => EBuffer c

initBuffer :: Int -> EBuffer Double
initBuffer sz
  | sz < 10   = EBuffer (init sz 0 :: [Double])
  | otherwise = EBuffer (init sz 0 :: Vector Double)
 
and now we have our existentially-quantified buffer.

Unfortunately this code doesn't offer anywhere near the hoped-for performance (at least not for the problem that led to this post).  The problem is that now that the actual type of buffer is hidden in the EBuffer data, the compiler is no longer able to inline function calls to (!) or push.  Instead it needs to go through the dictionary methods, adding a lot of extra overhead.  Even worse, a lot of further optimization opportunities are lost.

It turns out that there is another approach that doesn't suffer from this problem.  The approach is based on higher-rank types.  Normal Haskell code uses rank-1 type variables, which means that any variables in a type signature are instantiated by the calling code.  However, with the RankNTypes extension, it's possible to write code of any rank desired.  Consider the function
hr1 :: (forall a. a -> a) -> (Char, Int)
hr1 myId = (myId 'a', myId 0)
The first parameter to hr1 is a rank-2 function.  It needs to be a function that can take an input of any type, and return a value with the same type.  The only total function that can do this is id, and calling hr1 id returns exactly what we would expect.

(What happens if you change the type signature to hr1 :: (a -> a) -> (Char, Int)?)

Armed with higher-rank types, we can replace the existentially-quantified type in two steps:

1.  Refactor whichever code uses the data so that it's in the form someFunc :: Buffer buf => buf -> AResult.  This part usually isn't too difficult.

2.  Create another function like the following:
runWithBuffer
  :: Int
  -> (forall buf. (Buffer buf, El buf ~ Double) => buf -> b)
  -> b
runWithBuffer sz fn
  | sz < 10   = let buf = init sz 0 :: [Double]
                in fn buf
  | otherwise = let buf = init sz 0 :: Vector Double
                in fn buf

With our new runWithBuffer function, we've successfully decoupled the buffer selection algorithm from the usage site without using existential quantification.  Plus, in many cases GHC generates better code with more opportunities for further optimization.
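
For example, a consumer written against the Buffer class can be handed to runWithBuffer directly (sumFirstTwo here is just an invented toy consumer):

sumFirstTwo :: (Buffer buf, El buf ~ Double) => buf -> Double
sumFirstTwo buf = (buf ! 0) + (buf ! 1)

example :: Int -> Double
example sz = runWithBuffer (max 2 sz) sumFirstTwo  -- ensure at least two elements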

A slightly more extensive example is available in my dsp-extras codebase.  The class and various buffer types are in the Data.RingBuffer modules.  The buffers are used in Language.AuSL.CompileArr, with the function optimizeBufType performing the duties of runWithBuffer.

Saturday, March 24, 2012

Existential Quantification pt. 1

If you have experience with OOP, you're already an expert on existential quantification.  In brief, existential quantification is the reason why you can write code like this pseudocode:

class MyList: 
  private List myList;
  init(Int sz):
    if sz < 10: 
      self.myList = new LinkedList();
    else:
      self.myList = new ArrayList();

In this example the private member variable myList is existentially quantified.  For any instance of MyList, after init is called the compiler knows that a data structure exists and that it implements the List interface, but it doesn't know the exact type of myList.  Any time you declare a variable of an interface type, the data structure referenced by the variable is existentially quantified.

One of the most common uses of existentially quantified data is in implementing the Strategy design pattern, which allows for an algorithm to be selected at runtime.  The MyList class above hints at this usage; for very small lists it uses a LinkedList but otherwise uses an ArrayList.  The idea of selecting an optimal algorithm at runtime based upon some known quantity is very common in high-performance code (e.g. fftw, Atlas, introsort, many others), and existential quantification is a very common way to implement it.

It's very common for programmers with an OOP background to assume that existential quantification and interfaces always go together, which leads to surprise that the following Haskell code doesn't compile:

getNum :: Num a => Char -> a
getNum 'd' = 0 :: Double
getNum _  = 0 :: Int

In OOP languages, this works.  Both Double and Int are instances of Num, and the only promise the type of getNum makes about its output is that it has a Num instance.  However, type variables aren't existentially quantified in Haskell.  This means that, instead of returning whichever Num instance it likes, getNum must be able to provide any Num that the caller demands.

However, thanks to the ExistentialQuantification language extension, E.Q. is supported in Haskell (GADTs also provide this feature and more).  It can only be used within data type declarations, however, not bare functions.  So we need to make a wrapper to hold existentially quantified data:
{-# LANGUAGE ExistentialQuantification #-}
data ENum = forall a. Num a => ENum a

getNum :: Char -> ENum
getNum 'd' = ENum (0::Double)
getNum _   = ENum (0::Int)
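
A small usage sketch (not from the original post): once a value is wrapped in ENum, the only operations available are the ones the Num class provides, so about all we can do is arithmetic and re-wrap the result.

addOne :: ENum -> ENum
addOne (ENum n) = ENum (n + 1)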

Now we have our existentially-quantified data, and all is right with the world.  For now.