Scala: Stable Identifier Required

This error message seems to be the source of a lot of confusion and pain across the Scala ecosystem. Here, we will attempt to make it less scary.

Take the following:

val b = 2
def simpleMatch(a: Int): String = a match{
  case `b` => "OK"
  case _ => a + " NOK"}
(0 to 5).map(simpleMatch)

The result is:

Vector(0 NOK, 1 NOK, OK, 3 NOK, 4 NOK, 5 NOK)

When we do something as seemingly benign as change the val to a var, our work life is immediately made worse:

var b = 2
def simpleMatch(a: Int): String = a match{
  case `b` => "OK"
  case _ => a + " NOK"}
(0 to 5).map(simpleMatch)

And the compiler attempts to help us by kindly letting us know: error: stable identifier required, but this.b found. What?!?!?!

The Scala Language Specification

When an error message is too terse for understanding, a few great places to start are Stack Overflow, Gitter, Twitter or any other messaging or questioning service where a lot of Scala developers gather. Assuming you've already exhausted these options, we dig into the Scala Language Specification (SLS) itself.

Section 3.1 of the SLS states very clearly:

A stable identifier is a path which ends in an identifier.

Ok. Not super helpful immediately, but it does lead us to further reading; we need to understand both paths and identifiers.

Identifiers

Section 1.1 defines identifiers and chapter 2 describes them. It is very dry, however illuminating, reading. Identifiers are the names of things. The word name here is applied broadly; operators (such as *, +, =, ++=) are also names.

Paths

Section 3.1 of the SLS is on Paths. It gives four cases:

  • The empty path ε
  • C.this, where C references a class. The path this is taken as a shorthand for C.this where C is the name of the class directly enclosing the reference.
  • p.x where p is a path and x is a stable member of p. Stable members are packages or members introduced by object definitions or by value definitions of non-volatile types.
  • C.super.x or C.super[M].x where C references a class and x references a stable member of the super class or designated parent class M of C. The prefix super is taken as a shorthand for C.super where C is the name of the class directly enclosing the reference.

The third case helps us here. Our error states error: stable identifier required, but this.b found (this.b looks like p.x). The difference is our b is not a package, object definition or value definition; it is a var. This makes some semantic sense; the word stable can hardly be used to describe something like a var, which can change at any time.

Basically, a stable identifier is simply a name which is bound statically to a value. They are required for certain tasks (like pattern matching) so the compiler can make sense of the code it is generating and the types it is inferring. Next time you see this error, just check that your types are well defined and you are not shadowing any stable names with unstable ones. A quick workaround: instead of matching on the unstable identifier, assign it to a stable one.

val definitelyStable = b
def simpleMatch(a: Int): String = a match{
  case `definitelyStable` => "OK"
  case _ => a + " NOK"}
(0 to 5).map(simpleMatch)

Introductory TyDD in Scala: Deconstructing complex Types

These types are ugly and cumbersome; they are not at all human readable. How do we get data out of such a type and format it in a way that is useful to the operator? Let's again start with a naive example.

def stringify1[A, B, C](
  fa: A => String, fb: B => String, fc: C => String,
  in: List[(A, (B, C))]): String = {
  in.map{case (a, (b, c)) =>
    fa(a) + ", " + fb(b) + ", " + fc(c)
  }.mkString("(", "; ", ")")
}

Here we take a List of nested pairs and return a string that is hopefully more human readable than the toString method would provide.

Abstract the F

Like before, we will abstract the F from our function so that it may be used with any type constructor of arity 1 rather than being hard-coded to List.

import cats.Functor
implicit val functorList = new Functor[List]{
  override def map[A, B](fa: List[A])(f: A => B): List[B] =
    fa.map(f)
}
def stringify2[F[_]: Functor, A, B, C](
  fa: A => String, fb: B => String, fc: C => String,
  in: F[(A, (B, C))]): String = {
  val F = implicitly[Functor[F]]
  F.map(in){case (a, (b, c)) =>
    fa(a) + ", " + fb(b) + ", " + fc(c)
  }
  ??? // we can map, but we have no way yet to collapse F[String] into a String
}

The cats library provides a nifty Functor typeclass for us, so we can abstract the map call pretty easily. What of the mkString? As it turns out, cats provides a typeclass for this as well! It is called Show; let's see how it works.

import cats.Show
def stringify3[F[_]: Functor, A, B, C](
  fa: A => String, fb: B => String, fc: C => String,
  in: F[(A, (B, C))])(implicit
  FS: Show[F[String]]): String = {
  val F = implicitly[Functor[F]]
  val result = F.map(in){case (a, (b, c)) =>
    fa(a) + ", " + fb(b) + ", " + fc(c)
  }
  FS.show(result)
}

And extending this idea of Show to the Function1 instances we have

def stringify4[F[_]: Functor, A: Show, B: Show, C: Show](
  in: F[(A, (B, C))])(implicit
  FS: Show[F[String]]): String = {
  val F = implicitly[Functor[F]]
  val fa = implicitly[Show[A]].show _
  val fb = implicitly[Show[B]].show _
  val fc = implicitly[Show[C]].show _
  val result = F.map(in){case (a, (b, c)) =>
    fa(a) + ", " + fb(b) + ", " + fc(c)
  }
  FS.show(result)
}

Abstracting over Arity

We'll attempt to build a recursive version of this function using the same principles we've used in previous posts.

def stringify5[F[_]: Functor, A: Show, B: Show](
  in: F[(A, B)])(implicit
  FS: Show[F[String]]): String = {
  val F = implicitly[Functor[F]]
  val fa = implicitly[Show[A]].show _
  val fb = implicitly[Show[B]].show _
  val result = F.map(in){case (a, b) =>
    fa(a) + ", " + fb(b)
  }
  FS.show(result)
}

This is not what we want! We need to recurse on the Show instance inside the Functor. Let's make a recursive function for Show. This will follow the code we read from Shapeless very closely.

implicit def makeShow[A: Show, B: Show]: Show[(A, B)] = {
  val fa = implicitly[Show[A]].show _
  val fb = implicitly[Show[B]].show _
  new Show[(A, B)]{
    override def show(t: (A, B)): String = {
      val (a, b) = t
      "(" + fa(a) + ", " + fb(b) + ")“
    }
  }
}

This says, Given any two Show instances, a Show instance for their pair can be produced. Alternatively, it can be said that given a proof for Show[A] and a proof for Show[B], a proof for Show[(A, B)] follows.
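
A quick check of the recursion, as a sketch (assuming Show[Int] and Show[String] instances like the ones defined later in this post are in scope):

val nested: Show[(Int, (String, Int))] = implicitly
nested.show((1, ("a", 2)))//"(1, (a, 2))"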

So, now we have the following stringify function:

def stringify[F[_]: Functor, A: Show](
  in: F[A])(implicit
  FS: Show[F[String]]): String = {
  val F = implicitly[Functor[F]]
  val fa = implicitly[Show[A]].show _
  val result = F.map(in)(fa)
  FS.show(result)
}

Notice how simple our code has become. All of the specific type information has been absorbed into the recursive implicit functions; stringify is now essentially Show lifted through any Functor. Furthermore, our makeShow function will produce Show instances for any nesting of Tuple2 instances; it generates Show instances for binary trees. The makeShow function is 10 lines of code (even including the Scala boilerplate) and gave us a giant boost in usefulness for our library code.

What's the Point?

Let's see an example of how this can be used. Given our individual Show instances:

implicit val showListString = new Show[List[String]]{
  def show(in: List[String]): String =
    in.mkString("(", "; ", ")")
}
implicit val showInt = new Show[Int]{
  override def show(in: Int): String = in.toString}
implicit val showLong = new Show[Long]{
  override def show(in: Long): String = in.toString}
implicit val showString = new Show[String]{
  override def show(in: String): String = in}
implicit val showDouble = new Show[Double]{
  override def show(in: Double): String = f"$in%.2f"}
implicit val showFloat = new Show[Float]{
  override def show(in: Float): String = f"$in%.2f"}
implicit val showArrayByte = new Show[Array[Byte]]{
  override def show(in: Array[Byte]): String = new String(in)}

Our writer and reader code becomes:

//Write the thing with our writer
val result = implicitly[Result1]
//Read the thing back with our reader
println(stringify(result))

For anyone to create an application which reads and writes data with our library they need only define the specific business logic and types for their application. If they miss writing an instance, the compiler will tell them. If they rearrange the order of the types in IO, the compiler will figure it out for them. Many common pain points of writing data readers and writers have been taken care of by the library implementor giving the rest of the team the ability to focus on business logic.

The goal here is not to have an entire company of developers who write this kind of code. Just like the goal of any business is to have employees who each bring something new to the team, the goal here is to have a few developers who write these libraries that other developers can use to develop business applications. The benefit of this is the libraries help limit the kinds of errors that can make it to production; the entire production cycle gets a confidence boost.

Introductory TyDD in Scala: Writing Type Level Functions

Type level functions consume types and produce types. This allows us to tell the compiler exactly what we want in our code so that errors may be caught at compile time rather than at later stages in our product deployment cycle. The true power of this approach lies in the fact that with Scala no application can be produced without compilation. No matter how undisciplined a developer may be, she cannot skip compilation. Moving error checking into compilation gives us more confidence in the behavior of our binary than confidence gained by any other means. In short, when there is a failure in production we always ask "were the tests run?" but never do we ask "was it compiled?"

A Basic Zipper

Here we have a zipper.

def zipper[A, B, C](
  a: List[A], b: List[B], c: List[C]
): List[(A, (B, C))] = a.zip(b.zip(c))

From previous posts in this series we know this can be generalized with a type class. Let's exercise this muscle immediately.

trait Zip[F[_]]{
  def apply[A, B](a: F[A], b: F[B]): F[(A, B)]
}
def zipper[F[_]: Zip, A, B, C](
  a: F[A], b: F[B], c: F[C]): F[(A, (B, C))] = {
  val F = implicitly[Zip[F]]
  F(a, F(b, c))
}

This is nicer than the first version but it is still super restrictive. It only works for zipping exactly 3 values. If we want to zip 2 or 4 or 70 values, we are out of luck! We saw how the shapeless HList allowed us to compose an arbitrary number of arbitrary types into a single type. Let's try to use the same kinds of methods here to produce a zipper of arbitrary arity.

Step 1 - Simplify as much as is possible

We will simplify a zipper to the purest form we can. The purest zipper takes two instances of a particular type constructor and produces a single instance of that type constructor on a pair. Let's write that.

def zipper[F[_]: Zip, H, T](
  h: F[H], t: F[T]): F[(H, T)] = {
  val F = implicitly[Zip[F]]
  F(h, t)
}

Now, we don't need separate functions for different arity versions of a zipper. We can simply call this function recursively to produce the desired result.

val (list1, list2, list3, list4, list5, list6) = ...
implicit val ZipList = new Zip[List]{
  override def apply[A, B](
    a: List[A], b: List[B]): List[(A, B)] = a.zip(b)
}
val with2 = zipper(list1, list2)
val with3 = zipper(list1,
            zipper(list2, list3))
val with6 = zipper(list1,
            zipper(list2,
            zipper(list3,
            zipper(list4,
            zipper(list5, list6)))))

If we recall the shapeless HList code we learned to read, we see the same pattern here of a recursive type being produced from recursive calls. This can be done in the compiler using implicits.

Step 2 - Replace explicit recursion with implicit

implicit def zipper[F[_]: Zip, H, T](implicit
  h: F[H], t: F[T]): F[(H, T)] = {
  val F = implicitly[Zip[F]]
  F(h, t)
}

This is the same code; the keyword implicit is simply placed in two locations. In Scala we can promote an explicit call to an implicit call simply by adding a keyword. We mark the inputs implicit so the compiler knows to find the head and tail by itself. We mark the function implicit so the compiler knows to call the function implicitly if a value of the necessary type is not found.

We communicate our intent to the compiler with implicits and types. Type aliases help simplify business logic.

type F[A] = List[A]
type Result =
  F[(Int, (Long, (String, (Double, (Float, Array[Byte])))))]

To tell the compiler which values it can use during the application of functions we inform the values as implicit

implicit val list1: List[Int] = ???
implicit val list2: List[Long] = ???
implicit val list3: List[String] = ???
implicit val list4: List[Double] = ???
implicit val list5: List[Float] = ???
implicit val list6: List[Array[Byte]] = ???

The implicitly function asks the compiler to construct the value we need.

implicitly[Result]

And that's it! The compiler assembles the recursive calls for us. No more errors from placing things in the wrong order; refactoring is as simple as rearranging the order of the types in our alias Result. One change on one line propagates throughout the entire code base automatically.

Why stop at Tuple2?

There is nothing here that requires a tuple be used. In fact, the only thing we need is a type constructor of arity 2. We can express this as

trait ZipG[F[_], G[_, _]]{
  def apply[A, B](a: F[A], b: F[B]): F[G[A, B]]
}
implicit def zipper[F[_], G[_, _], H, T](implicit
  F: ZipG[F, G], h: F[H], t: F[T]): F[G[H, T]] = {
  F(h, t)
}

And we can zip Tuple2s or Eithers by creating type class instances

implicit val zipListTuple2 = new ZipG[List, Tuple2]{
  override def apply[A, B](
    a: List[A], b: List[B]): List[(A, B)] = a.zip(b)
}
implicit val zipListEither = new ZipG[List, Either]{
  override def apply[A, B](
    a: List[A], b: List[B]): List[Either[A, B]] =
    for{a <- a; b <- b}yield{
      if(a.toString.size < b.toString.size) Left(a)
      else Right(b)
    }
}
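
To make the Either instance concrete, here is a small sketch (in a fresh scope containing only the declarations above):

implicit val ints: List[Int] = List(1, 23456)
implicit val strings: List[String] = List("abc")
implicitly[List[Either[Int, String]]]
//List(Left(1), Right(abc)): "1" is shorter than "abc"; "23456" is not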

For Tuple2 we have business logic like

type F[A] = List[A]
type Result =
  F[(Int, (Long, (String, (Double, (Float, Array[Byte])))))]

implicit val list1: List[Int] = ???
implicit val list2: List[Long] = ???
implicit val list3: List[String] = ???
implicit val list4: List[Double] = ???
implicit val list5: List[Float] = ???
implicit val list6: List[Array[Byte]] = ???

implicitly[Result]

This is the same as before. Commonly, further abstractions at the type level have little or no effect on code at the value level. Abstractions should allow the code to be more expressive than it was prior to the abstraction exercise, never less expressive.

Changing our business logic to use Either as our Zipping class is simple

type F[A] = List[A]
type Result =
  F[Either[Int, Either[Long, Either[String,
    Either[Double, Either[Float, Array[Byte]]]]]]]

implicit val list1: List[Int] = ???
implicit val list2: List[Long] = ???
implicit val list3: List[String] = ???
implicit val list4: List[Double] = ???
implicit val list5: List[Float] = ???
implicit val list6: List[Array[Byte]] = ???

implicitly[Result]

This is very powerful. By changing our type aliases, we were able to entirely change the meaning of our business logic without complex refactorings. As long as there are the correct type classes in implicit scope, the business logic need not be bothered by those implementation details.

What we have here is a sort of data pipeline and writer. Now that we have formatted data that we can work with in code, how do we present that data to an operator? Next, we'll write a reader for our types.


Introductory TyDD in Scala: Reading Type Level Functions

This is shapeless' HList

sealed trait HList extends Product with Serializable

final case class ::[+H, +T <: HList](head : H, tail : T) extends HList {
  override def toString = head match {
    case _: ::[_, _] => "("+head+") :: "+tail.toString
    case _ => head+" :: "+tail.toString
  }
}

sealed trait HNil extends HList {
  def ::[H](h : H) = shapeless.::(h, this)
  override def toString = "HNil"
}

case object HNil extends HNil

The important thing to note is that it is very similar to a standard Scala List, just at the type level. It has a head and tail as type parameters, and the tail is itself an HList. The implementation details are not important for our purposes here. All we need to take from this is that an HList is a recursive list (head, tail) which is either empty (HNil) or whose elements can be of different types.

A Type Level Operation on HList

This is a function defined in shapeless which helps client code work with HLists

trait Mapped[L <: HList, F[_]] extends Serializable {
  type Out <: HList
}
object Mapped {
  …
  type Aux[L <: HList, F[_], Out0 <: HList] =
    Mapped[L, F] { type Out = Out0 }
  implicit def hnilMapped[F[_]]: Aux[HNil, F, HNil] =
    new Mapped[HNil, F] { type Out = HNil }
  …
  implicit def hlistMapped1[
  H, T <: HList, F[_], OutM <: HList](implicit
  mt: Mapped.Aux[T, F, OutM]): Aux[H :: T, F, F[H] :: OutM] =
    new Mapped[H :: T, F] { type Out = F[H] :: OutM }
}

We can see all the parts we mentioned in the first post of this series. Recall:

  1. Keywords
  2. Input types
  3. Output types
  4. Body

Note there are multiple implicit def declarations. The modifiers implicit def are similar to case statements in value function bodies. Let's take this piece by piece.

The First Case

implicit def hnilMapped[F[_]]: Aux[HNil, F, HNil] =
  new Mapped[HNil, F] { type Out = HNil }

Focusing on the types, the intent becomes clear.

This reads something like: Given a type constructor, yield the empty HList. Simple enough! This stuff is comprehensible after all!

The Second Case

implicit def hlistMapped1[
  H, T <: HList, F[_], OutM <: HList](implicit
  mt: Mapped.Aux[T, F, OutM]): Aux[H :: T, F, F[H] :: OutM] =
    new Mapped[H :: T, F] { type Out = F[H] :: OutM }

This is admittedly more complex. Let's take it in stride, again focusing on the types.

  • Given a head, a tail which is an HList, a type constructor and another HList
  • and given proof that this type class holds for our tail,
  • we can generate an HList where our type constructor is applied to our head and the mapped tail follows
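
As a quick sanity check, we can ask the compiler to materialize an instance and confirm the Out type it computes (a sketch, assuming shapeless is on the classpath):

import shapeless._
import shapeless.ops.hlist.Mapped

//hlistMapped1 applies twice and hnilMapped once, proving that mapping
//Option over Int :: String :: HNil yields Option[Int] :: Option[String] :: HNil
implicitly[Mapped.Aux[
  Int :: String :: HNil, Option,
  Option[Int] :: Option[String] :: HNil]]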

A brief Word on Induction

Many of the type level functions encountered in the wild are of this type, where there is at the very least

  1. a trivial case which produces some terminal element
  2. a more complex case where, given a type of arity n, we can produce a type of arity n+1

In other words, recursive types can be generated inductively assuming you have an instance of the recursive type and all the necessary parts to wrap that instance within the target type.

In the next post we'll create some type level code using these methods.

Introductory TyDD in Scala: Basic Type Class Development

(A more complete treatment of type classes and higher kinds.)

A simple Type Class

Take the following

trait Mapping[A, B]{
  def map(a: A): B
}

and an instance for it

val mapping: Mapping[List[Int], List[String]] =
  new Mapping[List[Int], List[String]]{
    override def map(a: List[Int]): List[String] =
      a.map(_.toString)
  }

This instance is super restrictive. It only works for taking a List of Int into a List of String; we want to map a List of any type. Since we know what our type parameters are, we can achieve our goal by passing in a function

trait ListMapping[A, B]{
  def map(list: List[A])(f: A => B): List[B]
}

So, given a List[A] and a function, A => B, we can get a List[B]. And by taking the type parameters from the trait definition and placing them onto the function definition, we can squeeze out a bit more freedom.

trait ListMapping{
  def map[A, B](list: List[A])(f: A => B): List[B]
}
val mapping: ListMapping =
  new ListMapping{
    override def map[A, B](a: List[A])(f: A => B): List[B] =
      a.map(f)
  }

Now, why would anyone ever do this? The List type provides a map function which does exactly this. With this approach one may provide any number of methods for mapping a list. Take this alternative:

val reverseMapping: ListMapping =
  new ListMapping{
    override def map[A, B](a: List[A])(f: A => B): List[B] =
      a.reverse.map(f)
  }

Through type classes, we can define new functionality for old data structures on the fly. Similar code can be written for sort order, string formatting or just about anything else.

Making things even more general

While the ability to map Lists of any type in any number of ways is fairly abstract, it is not abstract enough for our purposes. What if we want to map a different data structure such as an Option or a Stream or a Spark Dataset?

Luckily, Scala has a language feature which can help us out here.

trait WithMap[F[_]]{
  def map[A, B](m: F[A])(f: A => B): F[B]
}

The type parameter, F[_], itself has a type parameter, _; this tells the compiler that our type parameter itself requires a type parameter. Notice in our definition all mention of List has been replaced by our parameter, F. This just says that given a type, F, which itself takes a type parameter, we can change the inner type of F without changing F. We can do this with any parameterized type of arity 1.

implicit val listWithMap = new WithMap[List]{
  override def map[A, B](m: List[A])(f: A => B): List[B] =
    m.map(f)
}
implicit val optionWithMap = new WithMap[Option]{
  override def map[A, B](m: Option[A])(f: A => B): Option[B] =
    m.map(f)
}
implicit val streamWithMap = new WithMap[Stream]{
  override def map[A, B](m: Stream[A])(f: A => B): Stream[B] =
    m.map(f)
}
val reverseListWithMap = new WithMap[List]{
  override def map[A, B](m: List[A])(f: A => B): List[B] =
    m.reverse.map(f)
}

With these techniques we can define super polymorphic functions. Take this pretty stringify function

def prettyString[F[_]: WithMap, A](m: F[A])(f: A => String): String = {
  implicitly[WithMap[F]].map(m)(f).toString
}

This takes two type parameters, F[_]: WithMap and A. The `:` in the first type parameter is a context bound; it tells the compiler that it needs an implicit instance of WithMap defined for our type F.

And here is a data processor defined in the same way

def processData[F[_]: WithMap, A, B, C, D](
  m1: F[A])(f1: A => B)(f2: B => C)(f3: C => D): F[D] = {
  val F = implicitly[WithMap[F]] 
  val m2 = F.map(m1)(f1)
  val m3 = F.map(m2)(f2)
  F.map(m3)(f3)
}
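
A usage sketch (assuming the listWithMap instance above is in implicit scope):

val lengths: List[Int] =
  processData(List("a", "bb", "ccc"))(_.length)(_ * 2)(_ + 1)
//List(3, 5, 7)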

We have taken an implementation detail (the map function on List, Option, etc...) and brought it outside the type. This has given us the ability to talk about data which has a sensible map function without knowing what that data necessarily looks like.

Next we'll learn how to read some of the Type Level functions that exist in the shapeless library.

Introductory TyDD in Scala: Anatomy of a Type Level Function

Value Level Functions

A value level function typically looks like

def f(a: Int, b: Int): String = {
  a.toString + b.toString
}

The key parts are

  1. A keyword: def
  2. Inputs: a: Int, b: Int
  3. Outputs: String
  4. Body: a.toString + b.toString

When you see these 4 parts, you know you are reading a value level function. Nothing surprising here. Now, let's see what a similar definition looks like at the type level.

Type Level Functions

A type level function looks like

trait MyTrait[A, B]{type Out}
object MyTrait{
  def apply[A, B, C](): MyTrait[A, B]{type Out = C} =
    new MyTrait[A, B]{override type Out = C}
}

In this definition there is a type refinement, MyTrait[A, B]{type Out = C}. These are undesirable artifacts of type level development. To simplify these definitions, we use the Aux alias, a pattern common enough in the shapeless community to have its own name. Aux helps us remove type refinements from our logic.

With the Aux alias, a type level function looks like

trait MyTrait[A, B]{type Out}
object MyTrait{
  type Aux[A, B, C] = MyTrait[A, B]{type Out = C}
  def apply[A, B, C](): Aux[A, B, C] =
    new MyTrait[A, B]{override type Out = C}
}

The type refinement from the previous example is replaced by the nicer (more readable, fewer braces, less code) Aux type.

Type level functions have the same 4 key parts as value level functions

  1. Keywords: trait, object
  2. Inputs: A, B
  3. Outputs: type Out
  4. Body: def apply[A, B, C](): Aux[A, B, C]

Here the inputs are type parameters and outputs are type members. This is so the output types are not erased and can be referenced later in business logic. This is similar to value level functions as the result of a value level function does not expose the inputs required by the function.
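
A sketch of calling this function and checking that the compiler tracked the output type:

val f = MyTrait.apply[Int, String, Double]()
implicitly[f.Out =:= Double]//the Out member survives as Double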

Bodies of type level functions are value level functions. They are typically only a few lines long. Their purpose is to present the compiler with a way to construct a new type from the types provided. This is what these blog posts will focus on.

Whenever you see a set of definitions which have these 4 qualities, you know you are looking at a type level function.

The type class is the fundamental element of this style of type driven development. The next post will give an overview of this concept.

Introductory TyDD in Scala

Here are the slides (pptx, pdf) and code from the PHASE talk on Type Driven Development in Scala. A long-form blog post version is forthcoming.

  1. Anatomy of a Type Level Function
  2. Type Classes & Higher Kinds
  3. Reading Type Level Functions
  4. Writing Type Level Functions
  5. Reading Data that is this strongly typed

Abstraction in F[_]: Lift Implementation Details Outside the Class

We now know that we can lift implementation details outside the class to gain some freedom in the client code; we have gained abstraction through decoupling. Why not apply a similar abstraction to the rest of the code?

We had:

trait Pipeline[F[_], A, B]{
  final def apply(uri: URI)(implicit F: Functor[F]): F[Unit] = {
    val in = read(uri)
    val computed = F.map(in)(computation)
    F.map(computed)(write)
  }
  def read(uri: URI): F[A]
  def computation(in: A): B
  def write(in: B): Unit
}

And we can abstract it into:

trait Read[F[_], A] extends Function1[URI, F[A]]
trait Computation[A, B] extends Function1[A, B]
trait Write[B] extends Function1[B, Unit]

trait Pipeline[F[_], A, B]{
  final def apply(uri: URI)(implicit
    F: Functor[F],
    read: Read[F, A],
    computation: Computation[A, B],
    write: Write[B]): F[Unit] = {
    val in = read(uri)
    val computed = F.map(in)(computation)
    F.map(computed)(write)
  }
}
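
Client code then supplies the pieces as implicit instances. A toy sketch (the read here is a stand-in for real IO):

import java.net.URI

implicit val readInts: Read[List, Int] = new Read[List, Int]{
  def apply(uri: URI): List[Int] = List(1, 2, 3)//stand-in for real IO
}
implicit val doubled: Computation[Int, Int] = new Computation[Int, Int]{
  def apply(in: Int): Int = 2 * in
}
implicit val printed: Write[Int] = new Write[Int]{
  def apply(in: Int): Unit = println(in)
}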

While we're at it, let's abstract out the apply method as well.

sealed trait Pipeline[F[_], A, B]{
  def apply(uri: URI): F[Unit]
}
object Pipeline{
  final def apply[F[_]: Functor, A, B](implicit
    read: Read[F, A],
    computation: Computation[A, B],
    write: Write[B]) = {
    val F: Functor[F] = implicitly
    new Pipeline[F, A, B]{
      override def apply(uri: URI): F[Unit] = {
        val in = read(uri)
        val computed = F.map(in)(computation)
        F.map(computed)(write)
      }
    }
  }
}

Now our trait is a simple expression of inputs to outputs. But why would we ever perform this kind of abstraction? It seems like we are complicating the problem, not making it simpler.


The benefits here are more subtle and only appear in certain use cases. Say you have two pipelines:

val p1: Pipeline[Stream, Int, String] = ???
val p2: Pipeline[Stream, String, Array[Char]] = ???

And you wanted to combine them into a single

val pipeline: Pipeline[Stream, Int, Array[Char]] = ???

Taking two Pipeline instances and composing them is not a readily understood idea. However, taking two Function1 instances and composing them is very well understood. Notice we brought the functions read, computation and write outside the class as simple Function1 instances. Abstracting the Pipeline functions outside the trait provides the developer who is writing the client code with a clear and well understood method for composing multiple Pipelines, as sketched below.
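
For example, since Computation is just a Function1, composing the computation stages of two Pipelines is ordinary function composition. A sketch:

def composeComputations[A, B, C](
  f: Computation[A, B], g: Computation[B, C]): Computation[A, C] =
  new Computation[A, C]{
    def apply(a: A): C = g(f(a))//plain Function1 composition
  }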

This is still an incomplete implementation. We can see a path forward for composing any number of Pipelines whose computations can be composed, but how do we compose Pipelines which accept different inputs?

A simple switching mechanism

Say we have three Pipeline instances which require separate inputs.

val p1: Pipeline[Stream, ...] = ...
val p2: Pipeline[Stream, ...] = ...
val p3: Pipeline[Stream, ...] = ...

Our application would need to accept a URI and choose which pipeline (if any) should run it.

def perform(uri: URI): Stream[Unit] = {
  if(uri.toString.contains("p1")) p1(uri)
  else if(uri.toString.contains("p2")) p2(uri)
  else if(uri.toString.contains("p3")) p3(uri)
  else Stream()
}

This is a lot of boilerplate, especially when you consider the number of Pipelines (for any successful business) is expected to increase. Let's unpack what we have and see if we can't abstract it into our Pipeline definition.

  1. Uniform input: URI is the input to ALL Pipeline instances
  2. Guards checking a URI against some constant
  3. Constant values defining a Pipeline for use in a Guard
  4. Default case in case the input matches no Pipeline

Our uniform input means we don't have to worry about which Pipeline can take what Types of values. This is already abstract enough.

We'll build a typeclass to model Guards and Constants associated with each pipeline.

trait Guard[-T]{
  def name: String
}
sealed trait Pipeline[-T, F[_], A, B]{
  def apply(uri: URI): F[Unit]
}
object Pipeline{
  final def apply[T: Guard, F[_]: Functor, A, B](implicit
    read: Read[F, A],
    computation: Computation[A, B],
    write: Write[B]): Pipeline[T, F, A, B] = {
    val G: Guard[T] = implicitly
    val F: Functor[F] = implicitly
    new Pipeline[T, F, A, B]{
      override def apply(uri: URI): F[Unit] = ???
    }
  }
}

We have an issue here. The last else case of our function returns an empty Stream. In the Pipeline object we don't know what our effect type is, so we cannot return an empty version of it. This problem takes me back to a talk given by Runar Bjarnason last year wherein he describes how when we liberate our types, we constrain our implementation, and when we constrain our types, we liberate our implementation. We have liberated all of our types here (except URI), leaving ourselves no room to implement what we need. So, we need to constrain a type so that we may regain our ability to implement our function. Let's constrain our output type.

trait Guard[-T]{
  def name: String
}
sealed trait Pipeline[-T, F[_], A, B]{
  type Out
  def apply(uri: URI): Out
}
object Pipeline{
  final def apply[T: Guard, F[_]: Functor, A, B](implicit
    read: Read[F, A],
    computation: Computation[A, B],
    write: Write[B]): Pipeline[T, F, A, B]{type Out = Either[Unit, F[Unit]]} = {
    val G: Guard[T] = implicitly
    val F: Functor[F] = implicitly
    new Pipeline[T, F, A, B]{
      type Out = Either[Unit, F[Unit]]
      override def apply(uri: URI): Out = {
        val from = uri.toString
        if(from.contains(G.name)) Right{
          val in = read(uri)
          val computed = F.map(in)(computation)
          F.map(computed)(write)
        } else Left(())
      }
    }
  }
}

So our client code becomes

trait P1
trait P2
trait P3
implicit def guardP1 = new Guard[P1]{
  final override def name: String = "p1"
}
implicit def guardP2 = new Guard[P2]{
  final override def name: String = "p2"
}
implicit def guardP3 = new Guard[P3]{
  final override def name: String = "p3"
}
implicit def p1: Pipeline[P1, Stream, ...]
implicit def p2: Pipeline[P2, Stream, ...]
implicit def p3: Pipeline[P3, Stream, ...]
def perform(uri: URI): Either[Either[Either[Unit, Stream[Unit]], Stream[Unit]], Stream[Unit]] = {
  p1(uri).fold(
    _ => Left(p2(uri).fold(
      _ => Left(p3(uri).fold(
        _ => Left(()),
        a => Right(a)
      )),
      a => Right(a)
    )),
    a => Right(a)
  )
}

This has made things much worse. There is even more boilerplate and the nesting will become unreasonable in short order. But we have something we can easily reason about at the type level:

  • Given a known set of Pipeline instances,
  • we created a computation which runs at most one Pipeline,
  • resulting in a nested data structure.

These characteristics indicate we can take an inductive approach to building our Pipeline library. Enter Shapeless.


Abstraction in F[_]: Abstract Your Functions

We have a reasonably abstract pipeline in

trait Pipeline[F[_], A, B]{
  final def apply(uri: URI): F[Unit] =
    write(computation(read(uri)))
  def read(uri: URI): F[A]
  def computation(in: F[A]): F[B]
  def write(in: F[B]): F[Unit]
}

Recognizing Higher-Kinded Duplication

Taking a close look at the trait, we see the computation and write functions are the same aside from their type variables. In fact, if we rename them to have the same name, the compiler complains

scala> :paste
// Entering paste mode (ctrl-D to finish)
trait Pipeline[F[_], A, B]{
  def perform(in: F[A]): F[B]
  def perform(in: F[B]): F[Unit]
}
// Exiting paste mode, now interpreting.
<console>:9: error: double definition:
method perform:(in: F[B])F[Unit] and
method perform:(in: F[A])F[B] at line 8
have same type after erasure: (in: Object)Object
         def perform(in: F[B]): F[Unit]

Since these are the same, we can build an abstraction to simplify our API even further.

trait Pipeline[F[_], A, B]{
  final def apply(uri: URI): F[Unit] = {
    val in = read(uri)
    val computed = convert(in)(computation)
    convert(computed)(write)
  }
  def read(uri: URI): F[A]
  def computation(in: A): B
  def write(in: B): Unit
  def convert[U, V](in: F[U], f: U => V): F[V]
}

We've removed the need for the developer to understand the effect type in order to reason about a computation or write step. Now, let's focus on this new function

def convert[U, V](in: F[U], f: U => V): F[V]

This is super abstract. Like so abstract it is meaningless without context. I am reminded of this video in which Rob Norris explains how he continued to abstract his database code until some mathematical principles sort of arose from the work. In this talk, he points out that anytime he writes something sufficiently abstract he checks a library for it, as he probably has not himself discovered some new basic principle of mathematics. We do the same here.

Inside the cats library we find the following def within the Functor class

def map[A, B](fa: F[A])(f: A => B): F[B]

This is the same as our convert function written as curried rather than multiple-argument. We replace our custom function with one from a library; the chance is greater that a developer is well-informed on cats than on our internal library. (post on implicits and type classes)

trait Pipeline[F[_], A, B]{
  final def apply(uri: URI)(implicit F: Functor[F]): F[Unit] = {
    val in = read(uri)
    val computed = F.map(in)(computation)
    F.map(computed)(write)
  }
  def read(uri: URI): F[A]
  def computation(in: A): B
  def write(in: B): Unit
}
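
A usage sketch with toy implementations (depending on your cats version, the instances import may be unnecessary):

import java.net.URI
import cats.Functor
import cats.instances.list._

val pipeline = new Pipeline[List, Int, String]{
  def read(uri: URI): List[Int] = List(1, 2, 3)//stand-in for real IO
  def computation(in: Int): String = in.toString
  def write(in: String): Unit = println(in)
}
pipeline(new URI("file:///tmp/in.txt"))//prints 1, 2, 3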

Here we were able to replace an internal (thus constrained) implementation detail with an external (thus liberated) argument. In other words, we have lifted an implementation detail outside our class giving the client code freedom to use the same instantiated Pipeline in a multitude of contexts.

Abstraction in F[_]: Abstract your Types

A Pipeline of Stream from BigInt to String

Say we have a data pipeline:

trait Pipeline{
  final def apply(uri: URI): Stream[Unit] =
    write(computation(read(uri)))
  def read(uri: URI): Stream[BigInt]
  def computation(in: Stream[BigInt]): Stream[String]
  def write(in: Stream[String]): Stream[Unit]
}

The limitations here are many. The most important limitation is that this only works for data pipelines your team can model as streams of an input to a BigInt to a String to an output. This is not very useful. The first step is abstracting over your computation types.

A Pipeline of Stream

Removing the constraint on BigInt and String requires type parameters on our Pipeline trait:

trait Pipeline[A, B]{
  final def apply(uri: URI): Stream[Unit] =
    write(computation(read(uri)))
  def read(uri: URI): Stream[A]
  def computation(in: Stream[A]): Stream[B]
  def write(in: Stream[B]): Stream[Unit]
}

We have gained a bit of freedom in implementation. We can now write Pipelines that can be modeled as streams of an input to an A to a B to an output, given any A and B. Now instead of being constrained to BigInt and String, we have gained some liberty through our abstraction.

Still, we are constrained to the Scala Stream type. This, too, is a nuisance: what if we require Pipelines that effect through fs2 Stream or Spark Dataset or any other suitable effect? Similar to how we abstracted away from BigInt and String by making them type parameters A and B, we can do the same with our Stream.

A Pipeline

Using a higher-kinded type parameter, we can abstract over any effect assuming the effect has a single type parameter.

trait Pipeline[F[_], A, B]{
  final def apply(uri: URI): F[Unit] =
    write(computation(read(uri)))
  def read(uri: URI): F[A]
  def computation(in: F[A]): F[B]
  def write(in: F[B]): F[Unit]
}

Now, we can make a data Pipeline using any such types! We can have our original Pipeline of Stream from BigInt to String

val pipeline: Pipeline[Stream, BigInt, String] = ???
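
Filled in, such an instance might look like the following toy sketch (the read is a stand-in for real IO):

import java.net.URI

val squares: Pipeline[Stream, BigInt, String] =
  new Pipeline[Stream, BigInt, String]{
    def read(uri: URI): Stream[BigInt] =
      Stream.from(1).take(3).map(BigInt(_))//stand-in for real IO
    def computation(in: Stream[BigInt]): Stream[String] =
      in.map(i => (i * i).toString)
    def write(in: Stream[String]): Stream[Unit] =
      in.map(println)
  }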

We can have a Pipeline of fs2 Stream with some type construction:

type MyStream[A] = fs2.Stream[fs2.Task, A]
val pipeline: Pipeline[MyStream, BigInt, String] = ???

We can even do this with Spark

val pipeline: Pipeline[Dataset, BigInt, String] = ???

Any Pipeline your team can model as a read effect, a computation and a write effect can be defined with Pipeline defined this way.

Abstraction in F[_]

I gave a talk at typelevel today about abstracting data pipelines and some ways to ease the use of Spark in purely functional applications. It seemed to have gone pretty well; here you will find the video, deck (pptx, pdf) and code.

In the coming weeks I'll be posting the long-form version of the talk.

  1. Abstract Your Types
  2. Abstract Your Functions
  3. Lift Implementation Details Outside the Class
  4. Implicits and Induction

Simple Generic Derivation with Shapeless

(Code on GitHub)

Shapeless is a library which makes generic programming in Scala easier. Generic programming gives you the ability to build abstractions that the language does not directly support.

First things first:

import shapeless._

For the purposes of this post, this is the only import needed.

HList

The HList type provides a way to maintain a collection of items whose types may be different. For example say you have an Int, a String and a Double that you want inside a list. Using a standard Scala List one has:

scala> List(1, "1", 1.0)
res0: List[Any] = List(1, 1, 1.0)

The problem is the type information is lost, so when we use this data, we need to cast the appropriate index to the appropriate type. HList is a list which holds type information for each element separately. With the shapeless import, HLists are constructed using the :: operator

scala> 1 :: "1" :: 1.0 :: HNil//very similar syntax to standard List
res1: shapeless.::[Int,shapeless.::[String,shapeless.::[Double,shapeless.HNil]]] = 1 :: 1 :: 1.0 :: HNil

We can see the type information for Int, String and Double are kept as part of the overall type. This implies we can produce concrete types for lists of arbitrary elements. This is the cornerstone of Generic Derivation with Shapeless.

Note: Yes, Scala has perfectly reasonable Product types with tuples and case classes and whatnot; however, for reasons that will become clear later in this post, these types are insufficient for our purposes here.

Typeclass Boilerplate

Let's take a typeclass

trait Foo[Type]{
  def bar(t: Type): String
}

Not super exciting, but it will get the point across. Say we create instances for Int, String and Double

implicit def fooInt = new Foo[Int]{
  def bar(t: Int): String = t + ": Int"
}
implicit def fooString = new Foo[String]{
  def bar(t: String): String = t + ": String"
}
implicit def fooDouble = new Foo[Double]{
  def bar(t: Double): String = t + ": Double"
}

Now we have instances we can use for all things that are Int or String or Double, and we are happy with this for a time. Then we get a request for a Foo instance that can be used for both Int and String, so we produce one using our new friend the HList:

implicit def fooIntString = new Foo[Int :: String :: HNil]{
  def bar(t: Int :: String :: HNil): String = {
    val i :: s :: HNil = t //very similar syntax to standard List
    fooInt.bar(i) + ", " + fooString.bar(s)
  }
}

But, this is coming in from user input on a webservice and users are notoriously inconsistent with the ordering of their arguments. We need one for String then Int as well:

implicit def fooStringInt = new Foo[String :: Int :: HNil]{
  def bar(t: String :: Int :: HNil): String = {
    val s :: i :: HNil = t
    fooString.bar(s) + ", " + fooInt.bar(i)
  }
}

Great! But what about combinations with Double? And what about when we need to support Long or List[String] or any other type? The combinations here blow up quite quickly in any reasonable application. Luckily, Scala and Shapeless give us a few tricks to get rid of all the tedious boilerplate present in any typeclass polymorphic system.

Implicits to the Rescue

All of the typeclass definitions thus far have been declared implicit. There is a fantastic reason for this: Implicits get the Scala compiler to produce boilerplate declarations for us at compile time. We can greatly reduce the amount of boilerplate needed to create typeclass instances for combinations of types. All we have to do is tell the compiler what to do.

Recall, all our HList declarations end with HNil. We need an instance for HNil

implicit def fooNil = new Foo[HNil]{def bar(t: HNil): String = ""}

Recall too, what the type of our HList looked like

scala> 1 :: "1" :: 1.0 :: HNil
res1: shapeless.::[Int,shapeless.::[String,shapeless.::[Double,shapeless.HNil]]]

Removing the noise, we can see a pattern of nesting: [Int,[String,[Double,HNil]]]. We know from math that when we have nested syntax, we evaluate from the inside out; this applies to Scala code as well. So logically, we start with an HNil, prepend a Double, prepend a String, then prepend an Int.

We have given the compiler implicit typeclass instances for each of these types; all we need is to give it an implicit operation for prepending an instance onto another instance:

implicit def fooPrepend[Head, Tail<:HList](implicit
  head: Foo[Head],
  tail: Foo[Tail]): Foo[Head :: Tail] = new Foo[Head :: Tail]{
  def bar(t: Head :: Tail): String = {
    val hd :: tl = t
    head.bar(hd) + ", " + tail.bar(tl)
  }
}

This says, Given a Foo instance for some type, Head, and some HList, Tail, we can produce a new Foo instance for the HList, Head :: Tail. We can see this work in the REPL:

scala> :paste
// Entering paste mode (ctrl-D to finish)

val a: Foo[Int :: HNil] = implicitly
val b: Foo[String :: HNil] = implicitly
val c: Foo[String :: Int :: HNil] = implicitly
val d: Foo[Double :: String :: Int :: HNil] = implicitly
val e: Foo[Double :: String :: Int :: String :: HNil] = implicitly

// Exiting paste mode, now interpreting.

a: Foo[shapeless.::[Int,shapeless.HNil]]
b: Foo[shapeless.::[String,shapeless.HNil]]
c: Foo[shapeless.::[String,shapeless.::[Int,shapeless.HNil]]]
d: Foo[shapeless.::[Double,shapeless.::[String,shapeless.::[Int,shapeless.HNil]]]]
e: Foo[shapeless.::[Double,shapeless.::[String,shapeless.::[Int,shapeless.::[String,shapeless.HNil]]]]]

There was no need to define by hand Foo instances for any HList type that is a combination of types for which implicit Foo instances exist in scope.

The Reason for HList

Why not just use any old Product type? HList, unlike Product, has a way to take any HList and combine it with another type to produce another HList. There is no such capability baked into Product.
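
Concretely, prepending an element to an HList yields a new, larger HList type, which is exactly what lets the induction proceed:

val tail: String :: HNil = "1" :: HNil
val larger: Int :: String :: HNil = 1 :: tail//a brand new HList type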

Recall, the last thing we did was build a Foo instance for a larger HList out of a Foo instance for a smaller HList. This process, started with the smallest possible HList, HNil, and built larger and larger HLists until the required type was produced.

This talk of Products leads us to our next step. Most applications have a lot of Product types and need typeclass instances for these Products. Given an HList is just a really fancy Product, can we generically derive instances for tuples and case classes?

Note: There is an excellent talk on how to do this with nested Tuples.

Generic

Shapeless provides a typeclass, Generic, which helps convert between Product and its similar (isomorphic) HList. It is fairly straight forward to produce a Generic instance for a Product and get an HList:

scala> :paste
// Entering paste mode (ctrl-D to finish)

val gen = Generic[(Int, String)]
val asH = gen.to((1, "1"))

// Exiting paste mode, now interpreting.

gen: shapeless.Generic[(Int, String)]
asH: gen.Repr = 1 :: 1 :: HNil

With this, we can take any Product and create a Foo instance for an HList that is similar:

scala> :paste
// Entering paste mode (ctrl-D to finish)

val genIS = Generic[(Int, String)]
val genDS = Generic[(Double, String)]
val a = implicitly[Foo[genIS.Repr]].bar(genIS.to(1, "1"))
val b = implicitly[Foo[genDS.Repr]].bar(genDS.to(1.0, "1"))

// Exiting paste mode, now interpreting.

genIS: shapeless.Generic[(Int, String)]
genDS: shapeless.Generic[(Double, String)]
a: String = 1: Int, 1: String
b: String = 1.0: Double, 1: String

This implies given:

  1. a Product, P
  2. a Generic instance for P, gen
  3. and an implicit Foo instance for the HList, gen.Repr

We can produce a result that would be valid for an instance of Foo[P]. All we need to do is find a way to delegate the work Foo[P] needs to do to an instance of Foo[gen.Repr].

Generic Derivation

Like before, we'll use an implicit def so the compiler helps us along.

implicit def fooProduct[P<:Product, H<:HList](implicit
  gen: Generic[P]{type Repr = H},//compiler needs to know Generic converts P to H
  foo: Foo[H]): Foo[P] = {
  new Foo[P]{
    def bar(p: P): String = foo.bar(gen.to(p))
  }
}

This states, Given a Product and an HList, P and H, a Generic instance, gen, which can convert between P and H, and a Foo instance for H, we can construct a Foo instance for P. There are two things to note:

  1. We do not need to write implicit instances for Generic like we had to for Foo
  2. The implicit Generic needs a structural bound

The shapeless library provides, out of the box, automatic Generic instances for any Product type and brings these into implicit scope. The problem with Generic is the type Repr is a member (dependent) type, not a type parameter. In order to make sure the compiler can prove Repr is indeed our parameter H, we need to give it explicitly. And with this, we have our generic derivation for Foo:

scala> :paste
// Entering paste mode (ctrl-D to finish)

val f: Foo[(String, Int)] = implicitly
val g: Foo[(Double, String, Int)] = implicitly
val h: Foo[(Double, String, Int, String)] = implicitly

case class A(i1: Int, i2: Int, s: String)
case class B(d: Double, s: String)
case class C(i1: Int, d1: Double, s: String, d2: Double, i2: Int)
val i: Foo[A] = implicitly
val j: Foo[B] = implicitly
val k: Foo[C] = implicitly

// Exiting paste mode, now interpreting.

f: Foo[(String, Int)]
g: Foo[(Double, String, Int)]
h: Foo[(Double, String, Int, String)]
defined class A
defined class B
defined class C
i: Foo[A]
j: Foo[B]
k: Foo[C]

And that's that! We can derive instances for our type class for any Product given each individual type within the product has an implicit instance defined.

In Sum

Shapeless provides a framework for Generic Programming which can help us remove boilerplate from our applications and derive instances for types that would be too tedious to write out ourselves. To take advantage of the derivation one needs a few parts:

  1. Implicit instances for basic types like Int, String, Double, etc...
  2. An implicit instance for HNil
  3. An implicit def which when given implicit instances for a type and an HList, Head and Tail, can produce an implicit instance for the HList, Head :: Tail.
  4. An implicit def which for a Product, P, when given implicit instances for an HList, H, and a Generic which can convert between P and H, can produce an implicit instance for P

Asynchronicity

(video, slides, code)

I gave a talk at PHASE (PHiladelphia Area Scala Enthusiasts) about Asynchronous programming. Specifically, we covered how to transform a basic linear and sequential computation (factorial) into one which could be processed in parallel. Moreover, we added an extra layer of complexity onto the computation (n choose k) to show how Actors and Futures in Scala can work together to help a developer build an asynchronous system.

What the Hell is an "Effect Type"?

I was reading through the fs2 documentation and user guide and thought to myself, this is really straightforward! And, to their credit, it is. Anyone who is used to FP and Scalaz or Haskell will take to the documentation with little friction. However, when an FP novice or someone from a language with less powerful FP implementations (F#, C#, Java) encounters this documentation, it's a pain to trudge through.

Through many attempts at explaining fs2, I have found the main topic of concern is the "effect type". On its surface, this term seems rather benign:

  1. Everyone knows FP strives to have no side effects
  2. We all know certain things (IO for example) are fundamentally effectful
  3. In order to encode these effects into FP style, we build abstractions
  4. These abstractions for effects are our effect types

So, taking the canonical example from the fs2 documentation

import fs2.{io, text, Task}
import java.nio.file.Paths

def fahrenheitToCelsius(f: Double): Double =
  (f - 32.0) * (5.0/9.0)

val converter: Task[Unit] =
  io.file.readAll[Task](Paths.get("testdata/fahrenheit.txt"), 4096)
    .through(text.utf8Decode)
    .through(text.lines)
    .filter(s => !s.trim.isEmpty && !s.startsWith("//"))
    .map(line => fahrenheitToCelsius(line.toDouble).toString)
    .intersperse("\n")
    .through(text.utf8Encode)
    .through(io.file.writeAll(Paths.get("testdata/celsius.txt")))
    .run

// at the end of the universe...
val u: Unit = converter.unsafeRun()

We see a file read, a parse, a transformation, then a file write (reading and writing files are side effect heavy). The effect type is Task and the result is Unit. This can be read as:

The converter exists to create a Task which reads a file as bytes, converts those bytes to utf-8 Strings, transforms those Strings and writes them back to disk in a separate file, returning no result. The converter is purely effectful.

One can do this with a standard scala.collection.immutable.Stream as well.

import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Paths}
import scala.util.Try

val path = Paths.get("testdata/fahrenheit.txt")
val out = Paths.get("testdata/celsiusStream.txt")
val readerT =
  Try(Files.newBufferedReader(path, StandardCharsets.UTF_8))
val writerT =
  Try(Files.newBufferedWriter(out, StandardCharsets.UTF_8))
val result = for{
  reader <- readerT
  writer <- writerT
}yield{
  Stream.continually{reader.readLine}
    .takeWhile(null != _)
    .filter(s => !s.trim.isEmpty && !s.startsWith("//"))
    .map(line => fahrenheitToCelsius(line.toDouble).toString)
    .flatMap{Stream(_, "\n")}
    .foreach(writer.write)
}
readerT.foreach(_.close())
writerT.foreach(_.close())

So, other than the obvious fs2 io convenience functions, to most of the FP-unindoctrinated it seems the standard Stream version is about as useful and as safe as the fs2 Stream version. However, the fs2 Stream is much better FP practice.

Function Parameters should be Declared

Upon Stream creation, there is a big difference between fs2 and standard. With fs2, the creation mechanism is a pure function

io.file.readAll[Task](Paths.get("testdata/fahrenheit.txt"), 4096)

It takes all of its necessary data as parameters and returns a Stream with effect type Task. On the other hand, the standard Stream requires a closure to be initialized

Stream.continually{reader.readLine}

It requires a by-name parameter, the by-name returns a different result upon each invocation, and the body of the by-name depends on the closure within which the function is called. This line of code is impossible to understand by itself; taken out of context, it is meaningless. In other words, the function lacks referential transparency.

Function Duties should be Declared

Lines from a file in fs2 are produced with

scala> io.file.readAll[Task](Paths.get("testdata/fahrenheit.txt"), 4096).
     |       through(text.utf8Decode).
     |       through(text.lines)
res3: fs2.Stream[fs2.Task,String] = evalScope(Scope(Free)).flatMap(<function1>)

and standard Stream we have

scala> val path = Paths.get("testdata/fahrenheit.txt")
path: java.nio.file.Path = testdata\fahrenheit.txt

scala> val reader = Files.newBufferedReader(path,
 | java.nio.charset.StandardCharsets.UTF_8)
reader: java.io.BufferedReader = java.io.BufferedReader@3aeed31e

scala> Stream.continually{reader.readLine}
res4: scala.collection.immutable.Stream[String] = Stream(120, ?)

There are two important differences here. The first is evaluation; even though standard Streams are considered lazy, they evaluate the head value eagerly. We can see fs2 gives us a computation whereas the standard Stream gives us a value. The second difference is the type.

In fs2 we have a type of Stream[Task, String]; standard gives us Stream[String]. The fs2 Stream explicitly expresses the intention for effectful computation, whereas the standard Stream hides this implementation detail. Another way of looking at this is: standard Streams hide their effects while fs2 Streams surface their effects. In fact, fs2 Streams (if effectful) only allow the developer to reach their results through the effect type. This gives the developer a sort of heads up about what's going on in the application behind the scenes.

What the Effect Type Represents

The effect type in any such functional library represents the intent to perform an operation outside the scope of the return type. This is very common for IO (as we've seen) and other effectful operations like Logging or showing the user a pop up. The computation usually produces some sort of a result but, before returning the result writes it to disk, logs it or tells the user about a completion state. In FP, we like to express all data in a function through the function definition and effects are just weird data.
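
As a tiny sketch of this idea with the fs2 Task used above (newer libraries play the same role with IO):

val logged: Task[Int] = Task.delay{
  println("computing...")//the effect
  42//the result
}
//nothing has printed yet; the effect runs only "at the end of the universe"
val answer: Int = logged.unsafeRun()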

Implicits and Type Classes for Mortals

Implicits and Type Classes can be fairly hard to nail down for a developer learning Functional Programming. They can seem more like magic than provably-correct code, and clever compiler operations can exacerbate the issue.

For those of us who have been working with these concepts for a while, implicit scope and type class polymorphism become second nature. They are like an extension of the fingers. An expression of this divide appeared in my twitter feed a few months ago.

This post aims to impart some of this natural feeling to those who are just breaking in. Hopefully, type class code will be easier to work with after reading.

A Crash Course in Implicit Resolution

Implicit scope in Scala is insanely complex. Luckily, understanding the subtleties of the compiler's algorithm is not a necessity. We only need to understand a few simple points:

  1. The implicit keyword
  2. Implicit values
  3. Implicit function parameters

The Implicit Keyword

This keyword tells the compiler that a function or value should be searchable in implicit scope. Implicit scope is the set of all implicit declarations in scope.

We'll now build up to using identifiers that are searchable in implicit scope.

Implicit Values

An implicit value is a value defined using the implicit keyword.

implicit val x = 7

Now, the value x is searchable in implicit scope. We'll make use of it later.

Implicit Function Parameters

An implicit function parameter, like an implicit value, is a parameter to a function that is marked with implicit.

def mkList(begin: Int)(implicit end: Int): List[Int] = {
  (begin to end).toList
}
assert(List(1,2,3,4,5,6,7) == mkList(1)(7))

An implicit parameter works just like a parameter without the implicit declaration. So, what's the point?

Implicit parameters do not need to be explicitly written; if a value of the correct type is found in implicit scope, the compiler will automatically supply that value to the function. The following is valid:

implicit val x = 7
def mkList(begin: Int)(implicit end: Int): List[Int] = {
  (begin to end).toList
}
assert(List(1,2,3,4,5,6,7) == mkList(1))
assert(List(5,6,7) == mkList(5))

So What?

Implicit resolution is very handy for applying contexts to an application. A simple example is logging.

import scala.concurrent._
import ExecutionContext.Implicits.global
case class Logger(location: String){
  def log(message: String) =
    println(s"$message\t$location")
}

def monitoredOperation(a: String, b: Int)(implicit logger: Logger) = {
  logger.log(a)
  logger.log(b.toString)
  logger.log(a + (2 * b))
}

val logA = Logger("logDir/a.log")
val logB = Logger("logDir/b.log")

def performA() = Future{
  implicit val log = logA
  monitoredOperation("A1", 5)
  monitoredOperation("A2", 4)
  monitoredOperation("A3", 3)
  monitoredOperation("A4", 6)
}
def performB() = Future{
  implicit val log = logB
  monitoredOperation("B1", 4)
  monitoredOperation("B2", 6)
  monitoredOperation("B3", 5)
  monitoredOperation("B4", 7)
}

implicit val logMain = Logger("logDir/.log")
for{
  a <- performA()
  b <- performB()
}{
  monitoredOperation(".1", 8)
}

There are three different logging contexts, each needed at a separate point in the application. The implicit dependency monitoredOperation has on the logger lets the developer define a logging context at the appropriate scope and then write code without giving thought to logging. This helps in two ways:

  1. The developer is free to code to the business case without being further burdened by logging
  2. The developer is forced to decouple code which logs to separate locations

The second benefit is illustrated by the following:

implicit val logA = Logger("logDir/a.log")
implicit val logB = Logger("logDir/b.log")
monitoredOperation("", 0)//compile error: ambiguous implicit values

Bringing two values of the same type into implicit scope causes implicit resolution to fail; the compiler cannot reason about which one you want.
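
One remedy, sketched here with the Logger from above (runA and runB are hypothetical names), is to keep each implicit in its own scope so only one candidate is visible at any call site:

def runA(): Unit = {
  implicit val log: Logger = Logger("logDir/a.log")
  monitoredOperation("A", 1)//unambiguous: only one Logger in scope
}
def runB(): Unit = {
  implicit val log: Logger = Logger("logDir/b.log")
  monitoredOperation("B", 2)//unambiguous: only one Logger in scope
}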

Type Classes for Mortals

Type Classes are a technique for purely functional polymorphism. They are a powerful tool for using the scala compiler to help the developer group data types with similar use cases.

Building Type Classes

Say you need to extract lengths from GUIDs for validation; however, the GUIDs are presented in multiple formats:

  1. String
  2. List[Char]
  3. List[String]

For example:

val valid = 32
val id1 = "3F2504E0-4F89-41D3-9A0C-0305E82C3301"
val id2 = List('3', 'F', '2', '5', '0', '4', 'E', '0',
  '4', 'F', '8', '9',
  '4', '1', 'D', '3',
  '9', 'A', '0', 'C',
  '0', '3', '0', '5', 'E', '8', '2', 'C', '3', '3', '0', '1')
val id3 = List("3F2504E0", "4F89", "41D3", "9A0C", "0305E82C3301")
println(valid == id1.size)//false
println(valid == id2.size)//true
println(valid == id3.size)//false

In scala there is a built in size member for all three of these classes. The issue is that the three values logically represent the same GUID but their size methods return very different results. Most of the time in OO, we would gain access to the classes, create a trait which validates them as GUIDs and have each of them implement that trait. This is not possible here since we are dealing with standard library classes. Type Classes provide a performant solution to this gap in functionality.

What is a Type Class?

A type class in scala is a trait with a type parameter and at least one abstract method which contracts over the parameterized type. In our example it would look like:

trait Guid[T]{
  def isGuid(item: T): Boolean
}

This interface allows us to write a function for validating a GUID given an implementation of the Guid trait.

def guidIsValid[T](item: T, guid: Guid[T]): Boolean = {
  guid.isGuid(item)
}

Now, we just need to implement this trait for our three classes and use them as required. Altogether we have

val valid = 32
val id1 = "3F2504E0-4F89-41D3-9A0C-0305E82C3301"
val id2 = List('3', 'F', '2', '5', '0', '4', 'E', '0',
  '4', 'F', '8', '9',
  '4', '1', 'D', '3',
  '9', 'A', '0', 'C',
  '0', '3', '0', '5', 'E', '8', '2', 'C', '3', '3', '0', '1')
val id3 = List("3F2504E0", "4F89", "41D3", "9A0C", "0305E82C3301")
//use case

trait Guid[T]{
  def isGuid(item: T): Boolean
}
def guidIsValid[T](item: T, guid: Guid[T]): Boolean = {
  guid.isGuid(item)
}
val stringGuid = new Guid[String]{
  override def isGuid(str: String): Boolean = {
    val stripped = str.filter('-' != _)
    valid == stripped.size
  }
}
val lCharGuid = new Guid[List[Char]]{
  override def isGuid(chars: List[Char]): Boolean = {
    valid == chars.size
  }
}
val lStringGuid = new Guid[List[String]]{
  override def isGuid(strs: List[String]): Boolean = {
    val size = strs.map(_.size).sum
    valid == size
  }
}
guidIsValid(id1, stringGuid)//true
guidIsValid(id2, lCharGuid)//true
guidIsValid(id3, lStringGuid)//true

Here, the compiler makes sure we have a valid implementation of our type class, Guid, for each call. This is nice, but the extra argument to the guidIsValid function makes the syntax more cumbersome than the OO version (the OO version would have a validGuid method on String, List[Char] and List[String]). What we would like is for the last three lines to read

guidIsValid(id1)//true
guidIsValid(id2)//true
guidIsValid(id3)//true

Remember Implicits

Let's redefine our guidIsValid function so we can leverage the implicit functionality baked into the scala compiler to supply our type class argument for us

def guidIsValid[T](item: T)(implicit guid: Guid[T]): Boolean = {
  guid.isGuid(item)
}

Now, if we redefine our type class implementations as implicits we have

trait Guid[T]{
  def isGuid(item: T): Boolean
}
def guidIsValid[T](item: T)(implicit guid: Guid[T]): Boolean = {
  guid.isGuid(item)
}
implicit val stringGuid = new Guid[String]{
  override def isGuid(str: String): Boolean = {
    val stripped = str.filter('-' != _)
    valid == stripped.size
  }
}
implicit val lCharGuid = new Guid[List[Char]]{
  override def isGuid(chars: List[Char]): Boolean = {
    valid == chars.size
  }
}
implicit val lStringGuid = new Guid[List[String]]{
  override def isGuid(strs: List[String]): Boolean = {
    val size = strs.map(_.size).sum
    valid == size
  }
}
guidIsValid(id1)//true
guidIsValid(id2)//true
guidIsValid(id3)//true

Better compiler messages

To make compile errors easier for users to reason about, type class library designers should always put the implicitNotFound annotation on their type classes.

@annotation.implicitNotFound("${T} has no implicit Guid instance defined in scope.")
trait Guid[T]{
  def isGuid(item: T): Boolean
}
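
With the annotation in place, a missing instance surfaces the custom message. For example, assuming the definitions above and no Guid[Double] anywhere in scope:

guidIsValid(3.14)
//error: Double has no implicit Guid instance defined in scope.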

Working With Type Classes

We finally get to the main point of the tweet we referenced at the very beginning of this post: 

how hard this code is ... to work with

First let's pick out just the type class and a single implementation from our code.

val valid = 32
trait Guid[T]{
  def isGuid(item: T): Boolean
}
implicit val stringGuid = new Guid[String]{
  override def isGuid(str: String): Boolean = {
    val stripped = str.filter('-' != _)
    valid == stripped.size
  }
}

val id1 = "3F2504E0-4F89-41D3-9A0C-0305E82C3301"

This is the most basic kind of type class; it's just a type class. The library has no functionality or guidance about how it should be used. Most library designers add a lot of extra implementation to the type class's companion object. The companion object typically holds

  1. convenience functions for performing operations functionally
  2. something called Ops which wraps values in a context for the type class
  3. an implicit def into the Ops class for quickly performing operations in an OO style

trait Guid[T]{
  def isGuid(item: T): Boolean
}
object Guid{
  def apply[T](item: T)(implicit guid: Guid[T]): Boolean = {
    guid.isGuid(item)
  }
  class GuidOps[T](item: T, guid: Guid[T]){
    def isGuid(): Boolean = guid.isGuid(item)
  }
  implicit def lift[T](item: T)(implicit guid: Guid[T]): GuidOps[T] = {
    new GuidOps[T](item, guid)
  }
}

With this definition in the library, the developer can choose which style to use

import Guid._
implicit val stringGuid = new Guid[String]{
  override def isGuid(str: String): Boolean = {
    val stripped = str.filter('-' != _)
    valid == stripped.size
  }
}
val id1 = "3F2504E0-4F89-41D3-9A0C-0305E82C3301"
Guid(id1)//functional
id1.isGuid()//object oriented

Type classes are simply the FP version of the decorator pattern from OO (made clear by the implicit def from T to GuidOps[T]; a direct mapping from the type class for some T to a decorator for T). They take a class and apply new uses to a single instance of that class without affecting other instances of that class. Together with implicits, they seem to augment a class directly (js prototype style) but this appearance is an illusion created by the compiler as it decorates functions and objects for the developer.

Type Variance - Why it Matters

The company I work for has a robust Intern program; as a result, I work with a lot of young engineers and computer scientists. To date:

  1. 100% of their resumes mention they have a tight grasp on Object Oriented Programming
  2. 100% of them fail to understand the finer points of subtyping and, moreover, subclassing

I have given more explanations of variance than of anything else on the job. So, in an effort to practice the DRY principle in all my affairs, I decided to put it into documentation I can point to.

Note: Type Variance has A LOT of math (type theory & category theory) behind it. This post will focus on its usage in the Scala language not on the math.

Sub Classes

class Foo[T]
def check[A, B](a:A, b:B)(implicit ev: A <:< B): Unit = {}

Here the class Foo is parameterized by T and the check function simply checks whether the type of its first argument is a subclass of the type of its second argument. Don't worry about the check function; its implementation details are beyond the scope of this post but it's a nifty trick!

val str = ""
val obj = new Object()
check(str, obj)//compiles

So, this check compiles, which tells us String is a subclass of Object.

val fStr = new Foo[String]
val fObj = new Foo[Object]
check(fStr, fObj)//error: Cannot prove that zzz.simple.Foo[String] <:< zzz.simple.Foo[Object].

This doesn't compile because the type parameter of Foo allows for no variation in its relationship; Foo is invariant in T.

We will begin by briefly describing variance.

Variance

Variance is a huge part of programming with types. It is an intrinsic property of class hierarchies and can be witnessed as such in languages like C++ and Scala, which have compiler errors along the lines of:

  • covariant whatever in contravariant position
  • whatever is in covariant position but is not a subclass of whatever

In short, type variance describes the types that may be substituted in place of another type.

Covariance: We say a substitution is covariant with a type, Foo, if Foo or any other class with a subclass relationship to Foo is valid.
Contravariance: We say a substitution is contravariant with a type, Bar, if Bar or any other class with a superclass relationship to Bar is valid.
Invariance: We say a substitution is invariant with a type, Foo, if only types that are exactly Foo are valid.

Variance in Practice

We already saw invariance above. In Scala covariance and contravariance are denoted by using the + and - symbols respectively.

Covariance

class Foo[+T]//covariant in T

Redeclaring Foo in this way makes it covariant, so our test now passes

val str = ""
val obj = new Object()
check(str, obj)//compiles
val fStr = new Foo[String]
val fObj = new Foo[Object]
check(fStr, fObj)//compiles

Declaring the type variable with a + (as covariant) tells the compiler that the subclass relationship between type parameters gives rise to a direct subclass relationship in Foo. So any def, val or var requiring a Foo[Object] can take a Foo[String] in its place.
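
To illustrate that last point with a hypothetical function:

//Foo[String] may stand in for Foo[Object] because Foo is covariant in T
def useObjectFoo(foo: Foo[Object]): Unit = ()
useObjectFoo(new Foo[String])//compiles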

Contravariance

class Foo[-T]//contravariant in T

This redeclaration makes Foo contravariant and breaks our test again

val str = ""
val obj = new Object()
check(str, obj)//compiles
val fStr = new Foo[String]
val fObj = new Foo[Object]
check(fStr, fObj)//error: Cannot prove that zzz.simple.Foo[String] <:< zzz.simple.Foo[Object].

This is what we expect! Contravariance implies a superclass relationship, not a subclass relationship. We can fix this by reversing our input arguments

check(fObj, fStr)//compiles

This declaration is a hint to the compiler that the subclass relationship between type parameters gives rise to a superclass relationship in Foo. So any def, val or var requiring a Foo[String] can take a Foo[Object] in its place.
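
And the mirror image of the covariant illustration (again with a hypothetical function):

//Foo[Object] may stand in for Foo[String] because Foo is contravariant in T
def useStringFoo(foo: Foo[String]): Unit = ()
useStringFoo(new Foo[Object])//compiles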

How to use Variance

Where covariance preserves the subclass relationship from the type parameter into the type, contravariance reverses this relationship.

Covariance is used a lot in Scala by the collections library. Most of the immutable collections are covariant. This makes working with your data types inside the collection the same as working with them outside the collection when writing interfaces.
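
List is the canonical example; a small sketch:

//List[+A] is covariant, so a List[String] is usable as a List[Object]
val strings: List[String] = List("a", "b")
val objects: List[Object] = strings//compiles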

Contravariance is less prominent. I use contravariance for typeclasses a lot.

trait Bar[-T]{
  def bar(t:T): Unit
}
implicit val bar = new Bar[Object]{def bar(o:Object): Unit = ()}
def procBar[T: Bar](t: T): Unit = {
  implicitly[Bar[T]].bar(t)
}
procBar(obj)//compiles
procBar(str)//pulled the superclass instance in

If a type does not have a typeclass instance in implicit scope, the compiler can fall back to a contravariant instance defined for one of its supertypes, if one is in scope.

Cats (Introducing Typelevel Scala into an OO Environment)

(Examples can be found on GitHub)

Last time, we discovered Type Classes and how they give us more power to build extensible software than subclass polymorphic trees can. Here we introduce the Cats library, a production-quality replacement for the Adder and Chainer traits from the previous post.

We'll need the following imports from the cats library:

import cats.Monoid
import cats.Monad
import cats.implicits._

and to recall our Team definition:

case class Team[Type](members: List[Type])

The Monoid Type Class

Recall that the Adder trait described the process of adding two containers of the same type together to create a new container of that type. This is the basic idea behind a structure in category theory called a Monoid. A full treatment of Monoids is beyond the scope of this post; the important thing here is that they describe semantics for adding members of other types together.

In the cats library, a Monoid for our unstructured Team data is defined thus:

implicit def adder[Arg]: Monoid[Team[Arg]] =
  new Monoid[Team[Arg]]{
    override def empty: Team[Arg] = Team(Nil)
    override def combine(
        left: Team[Arg], right: Team[Arg]): Team[Arg] = {
      val newMembers = left.members ++ right.members
      Team(newMembers)
    }
  }

A Monoid has two operations, empty and combine, where empty is the identity element under the combine operation. In other words, empty is a value, e, that when combined with any other value, v, returns v. Monoid replaces our Adder trait from the previous post. Our structured Team would have this Monoid:

implicit def adder[Arg]: Monoid[Team[Arg]] =
  new Monoid[Team[Arg]]{
    override def empty: Team[Arg] = Team(Nil)
    override def combine(
        left: Team[Arg], right: Team[Arg]): Team[Arg] = {
      val (lead1, indi1) = left.members.splitAt(2)
      val (lead2, indi2) = right.members.splitAt(2)
      val newMembers = lead1 ++ lead2 ++ indi1 ++ indi2
      Team(newMembers)
    }
  }

Note, the empty value is the same for both cases. This is often the case for multiple Monoids over the same data structure; no matter how you combine elements, the identity is trivially applied.
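
A quick usage sketch (the teams here are hypothetical; the |+| syntax comes from the cats.implicits import and works for any type with a Monoid in scope):

val left = Team(List("lead1", "dev1"))
val right = Team(List("lead2", "dev2"))
val both: Team[String] = left |+| right//combine
val same: Team[String] = both |+| Monoid[Team[String]].empty//identity: same == both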

The Monad Type Class

Now, we shift our focus to the Chainer trait from the previous post. This trait described how to flatten a nesting of containers, for instance a Team[Team[_]] into a Team[_]. This is the basic operation behind the Monad. Again, we're not interested in figuring out Monads in detail; we're just trying to use a library in our work.

With Cats we'll have:

implicit def chainer: Monad[Team] = new Monad[Team]{
  override def flatMap[Arg, Ret](
      team: Team[Arg])(f: Arg => Team[Ret]): Team[Ret] = {
    val newMembers = team.members.flatMap(f(_).members)
    Team(newMembers)
  }
  //cats' Monad also requires pure and tailRecM; minimal (non stack-safe) versions:
  override def pure[Arg](a: Arg): Team[Arg] = Team(List(a))
  override def tailRecM[A, B](a: A)(f: A => Team[Either[A, B]]): Team[B] =
    Team(f(a).members.flatMap{
      case Left(next) => tailRecM(next)(f).members
      case Right(b) => List(b)
    })
}

for our unstructured Monad, and our structured version would look like:

implicit def chainer: Monad[Team] = new Monad[Team]{
  override def flatMap[Arg, Ret](
      team: Team[Arg])(f: Arg => Team[Ret]): Team[Ret] = {
    val (leaders, individuals) = team.members.map{member =>
      val mems = f(member).members
      mems.splitAt(2)
    }.unzip
    Team(
        leaders.flatMap {x=>x} ++
        individuals.flatMap{x=>x})
  }
  //pure and tailRecM are the same as in the unstructured instance above
}

Monads add the flatMap (also called bind or >>= in some circles) operation to a data type. flatMap describes how to take a Team and a function which maps a member to a Team, and produce another Team. It is a flattening operation. These are important because they describe data flow and functional composition through a system. To get the individual contributors from their Directors one could:

case class Director(name: String)
case class Manager(name: String)
case class Individual(name: String)
val directors: Team[Director] = ???
def managers(director: Director): Team[Manager] = ???
def individualContributors(manager: Manager): Team[Individual] = ???
val individuals = directors >>= (managers) >>= (individualContributors)

Then swapping out different functionality is simply recombining your function calls around your Monadic chain.
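
Because a Monad[Team] instance is in scope, cats' syntax also lets us write the same chain as a for comprehension (a sketch, assuming the definitions above; individuals2 is a hypothetical name):

val individuals2: Team[Individual] = for{
  director <- directors
  manager <- managers(director)
  individual <- individualContributors(manager)
}yield individual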

Monoids and Monads are simple to use. They describe operations to combine and process data as simple, type safe functional chains.

5. Typeclasses over Subclasses (Introducing Typelevel Scala into an OO Environment)

(Examples can be found on GitHub)

In the previous post, we introduced the Argonaut library to convert between values and JSON strings. The important part of this conversion is that there was no superclass or interface to implement in order to get the benefit of JSON across classes. All we needed to do was define values of type CodecJson for each of the types we wanted to convert. We added the functionality to the class without changing the class itself.

Argonaut allowed us to call toJson on classes with a codec and decodeOption on Strings to produce values of classes with a codec defined. This type of polymorphism, where a function's implementation depends on its inputs, is called ad-hoc polymorphism. Furthermore, when we define a type, T, which defines functionality across classes to be used in ad-hoc polymorphic functions, we call T a Type Class. Type Class polymorphism is a specific flavor of ad-hoc polymorphism.

Type Class polymorphism is a tool for expressing context-based functionality that is far more powerful than subclass polymorphism. As a well-known example, take the Java interfaces Comparable and Comparator. If some data is defined in a class which implements Comparable, it can be sorted one way and needs an entire second class definition to be sorted another way. On the other hand, using Comparator the data is defined in a single class and each sort method gets its own Comparator. Comparator is a Type Class; it allows the developer to determine which sorting method should be used in which context.
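
A small sketch of the Comparator point (Employee and both orderings are hypothetical):

//one data definition, many orderings; the calling context picks the Comparator
case class Employee(name: String, age: Int)
val byName = new java.util.Comparator[Employee]{
  override def compare(l: Employee, r: Employee): Int = l.name.compareTo(r.name)
}
val byAge = new java.util.Comparator[Employee]{
  override def compare(l: Employee, r: Employee): Int = Integer.compare(l.age, r.age)
}
//the same data sorts differently depending on which Comparator is supplied
java.util.Arrays.asList(Employee("b", 30), Employee("a", 25)).sort(byAge)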

Subclass Method

Take the following traits:

trait Adder[Type]{
  def add(other: Type): Type
}
trait Chainer[Arg, Type[Arg]]{
  def chain[Res](f: Arg => Type[Res]): Type[Res]
}

Adder describes how to add two values of some Type together. Chainer describes how to chain operations over a parameterized type.

We'll use the idea of a team to illustrate. For the sake of simplicity we say a team consists of people of a certain profession. So we can have a team of engineers or a team of doctors or a team of cashiers or ...

Teams can (trivially) grow by hiring but, they can also grow by combining with other teams. Teams can be added.

Teams can have members who are themselves team leads. At times, the members of a lead's team must join the team the lead belongs to. This implies an operation which develops teams out of the members of teams. Teams can be chained.

Here is our implementation of Team given this functionality:

case class Team[Type](members: List[Type])
  extends Adder[Team[Type]]
  with Chainer[Type, Team]{
  override def add(other: Team[Type]): Team[Type] = {
    Team(members ++ other.members)
  }
  override def chain[Res](
      f: Type => Team[Res]): Team[Res] = {
    val list = members.flatMap(member => f(member).members)
    Team(list)
  }
}

Simple enough, but this doesn't account for an organization of structured teams. For an organization that develops teams each with one product lead and one technical lead, simple concatenation won't maintain a soft ranking of individuals within the new team. We need a new Team definition which accounts for this.

case class TeamStructured[Type](members: List[Type])
  extends Adder[TeamStructured[Type]]
  with Chainer[Type, TeamStructured]{
  override def add(
      other: TeamStructured[Type]): TeamStructured[Type] = {
    val (lead1, indi1) = members.splitAt(2)
    val (lead2, indi2) = other.members.splitAt(2)
    TeamStructured(lead1 ++ lead2 ++ indi1 ++ indi2)
  }
  override def chain[Res](
      f: Type => TeamStructured[Res]): TeamStructured[Res] = {
    val (leaders, individuals) = members.map{member =>
      val mems = f(member).members
      mems.splitAt(2)
    }.unzip
    TeamStructured(
        leaders.flatMap {x=>x} ++
        individuals.flatMap{x=>x})
  }
}

Now we have two definitions for the same data that differ only by functionality. We have a triple coupling here:

  1. Data Definition
  2. Addition Description
  3. Chaining Description

If the data needs to change (from List to Set is a good place to start) the change needs to be made in two places. Each function which accepts a Team for the purpose of team composition and combination needs to know at development time which style of team it needs. These problems get worse with each new possibility for combining and chaining teams (maybe a round robin or reverse algorithm would fit certain situations). Type Classes solve these issues.

Type Class Method

Our traits become:

//The underscore here implies we need a parameterized type.
trait Adder[Type[_]]{
  def add[Item](
      left: Type[Item], right: Type[Item]): Type[Item]
}
trait Chainer[Type[_]]{
  def chain[Item, Res](
      arg: Type[Item], f: Item => Type[Res]): Type[Res]
}

These have the same uses as their counterparts above. However we have a single definition of the Team type:

case class Team[Type](members: List[Type])

The data is defined in a single place. Each piece of software which requires a Team has a consistent idea about what a Team is and means. The two versions of functionality are defined by:

object unstructured{
  implicit def adder: Adder[Team] = new Adder[Team]{
    override def add[Item](
        left: Team[Item], right: Team[Item]): Team[Item] = {
      Team(left.members ++ right.members)
    }
  }

  implicit def chainer: Chainer[Team] = new Chainer[Team]{
    override def chain[Item, Res](
        arg: Team[Item], f: Item => Team[Res]): Team[Res] = {
      val list = arg.members.flatMap(
          member => f(member).members)
      Team(list)
    }
  }
}

object structured{
  implicit def adder: Adder[Team] = new Adder[Team]{
    override def add[Item](
        left: Team[Item], right: Team[Item]): Team[Item] = {
      val (lead1, indi1) = left.members.splitAt(2)
      val (lead2, indi2) = right.members.splitAt(2)
      Team(lead1 ++ lead2 ++ indi1 ++ indi2)
    }
  }

  implicit def chainer: Chainer[Team] = new Chainer[Team]{
    override def chain[Item, Res](
        arg: Team[Item], f: Item => Team[Res]): Team[Res] = {
      val (leaders, individuals) = arg.members.map{member =>
        val mems = f(member).members
        mems.splitAt(2)
      }.unzip
      Team(
          leaders.flatMap {x=>x} ++
          individuals.flatMap{x=>x})
    }
  }
}

Now, each function which accepts a team will, if needed, also accept an adder or chainer or both (wholly decoupled). The downside is that each call to such a function requires at least one extra argument compared to the subclass versions. Scala has a fix for this limitation.

Implicits

The implicit keyword before a definition is an important part of making Type Class polymorphism beneficial to the developer. The word implicit, according to Oxford Dictionaries, means "implied though not plainly expressed". In Scala it means we can prepend the implicit keyword to an argument list and, assuming a valid value is in scope, omit that argument in code. For example:

def chainTeams[Type, Result](
  team: Team[Type])(
  func: Type => Team[Result])(
  implicit chain: Chainer[Team]): Team[Result] = {
  chain.chain(team, func)
}

This has three arguments: the team to operate on, the operation to perform and the chainer for application. However, since the final argument is implicit, if we bring a valid implicit value into scope there is no need to pass it in directly.

import structured._
val team: Team[Person] = Team(List(???))
val func: Person => Team[Person] = {(p: Person) => ???}
val newTeam: Team[Person] = chainTeams(team)(func)//valid

Since we don't need to explicitly state the Chainer, boilerplate stays clean. A nice effect of implicit resolution is that if you have two separate valid values in scope for an implicit argument, the compiler will complain. If you do have multiple valid implicits in scope, the suggestion is to decouple your code functionally. No single scope should use more than one implicit of the same type; this is a code smell. A corollary is that one should not explicitly provide implicit arguments; let the compiler do its work.

In the final post of this series, we will introduce another library, Cats.

Monocle & Argonaut (Introducing Typelevel Scala into an OO Environment)

(Examples can be found on GitHub)

In the last post, we put to rest our use of mutable objects for good. Here we learn how to make use of our newfound Functional Programming powers in the real world.

Every application with a sufficient user base requires a persistent settings store. The more users one has, the more styles one is responsible for accommodating. In my experience, JSON has been the most useful format for small-scale persistence. JSON is widely understood, works on the web and is plaintext. We'll use Monocle and Argonaut to implement a persistent settings store.

The first question here is "Why not circe?". I found circe to be a bit too ethereal for most Java developers to wrap their heads around. Implicit scope (especially when it's as magical as circe's auto) is an alarming feature for people who come from a language with no developer-defined implicit semantics (C++ developers, by contrast, are quite comfortable with this notion).

Monocle

The Monocle library provides semantics for defining simple accessors and combinators on nested data. Monocle is especially well suited for handling nested Case Classes, which is what we'll focus on. Take the following data definition

case class Color(r: Byte, g: Byte, b: Byte)
case class FishTank(liters: Int, color: Color, fish: List[Fish])

We have nested Case Classes as well as a nested collection. To cover all the changes that can occur here we would need to define:

  • eighteen operations
  • six of which are compositions from a FishTank into a Color
  • one of which is nested within a List structure.

Monocle makes this simple:

import monocle.macros.GenLens//the macro import these snippets assume

val (tankLiters, tankColor, tankFish) = {
  val gen = GenLens[FishTank]
  (gen(_.liters), gen(_.color), gen(_.fish))
}
val (colorR, colorG, colorB) = {
  val gen = GenLens[Color]
  (gen(_.r), gen(_.g), gen(_.b))
}
val (tankColorR, tankColorG, tankColorB) = (
  tankColor.composeLens(colorR),
  tankColor.composeLens(colorG),
  tankColor.composeLens(colorB)
)

Defining your data using Case Classes provides Monocle with the information it needs in order to generate lenses (nested views) into your data structures. Lens composition in Monocle is a single straightforward call. We can get into and out of our data with very little boilerplate.
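
A hedged usage sketch with the lenses defined above (the tank value is hypothetical):

val tank = FishTank(20, Color(0, 0, 0), Nil)
tankLiters.get(tank)//20
tankColorR.set(127.toByte)(tank)//a new FishTank; only the red channel changes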

Now that we can define settings and alter them, we need a way to persist them and communicate them to other parts of the system. We'll use JSON as our data format and Argonaut as our transcoder.

Argonaut

Argonaut provides semantics for converting between classes and JSON strings. Like Monocle, it's easiest to use with Case Classes. Taking the same classes as above we would have:

//ignore the implicit keyword.
//I promise we'll get to it in the next post!
import argonaut._, Argonaut._//the imports these snippets assume

implicit def codecTank: CodecJson[FishTank] =
  casecodec3(
      FishTank.apply, FishTank.unapply
  )("liters", "color", "fish")
implicit def codecColor: CodecJson[Color] =
  casecodec3(
      Color.apply, Color.unapply
  )("r", "g", "b")
implicit def codecFish: CodecJson[Fish] =
  CodecJson(
    (f: Fish) =>
      ("name" := f.name) ->:
      ("color" := f.color) ->:
      jEmptyObject,
    (c: HCursor) => for{
      name <- (c --\ "name").as[String]
      color <- (c --\ "color").as[String]
    }yield{(name, color) match{
      case ("One Fish", "Red Fish") => OneFish
      case ("Two Fish", "Blue Fish") => TwoFish
      case _ => NotFish
    }}
  )

There are quite a few operators here. This could make things tricky for Java developers at first but there are few of them, so it's no big deal. For Case Classes, Argonaut has next to no boilerplate; one passes in the apply and unapply functions and names the fields. Also, composition in Argonaut is implicit. It gets a little trickier with non-Case Classes, but still not much. At most Argonaut requires two functions: one from the class to JSON, the other from JSON to the class.

One more thing to note is that codecs for simple standard collections are implicit. The codec for List[Fish] is implicitly defined by the codec for Fish.
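
For example, a sketch assuming the Fish codec and types above are in scope:

val school: List[Fish] = List(OneFish, TwoFish)
val json: String = school.asJson.nospaces//uses the derived List codec
json.decodeOption[List[Fish]]//Some(List(OneFish, TwoFish))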

Putting it all together

A settings object would look something like:

object settings{
  private val settings: mutable.Map[String, FishTank] =
    mutable.Map()
  
  def apply(key: String): Option[FishTank] = settings.get(key)
  def update(key: String, byte: Byte): Unit = {
    settings(key) = settings.get(key) match{
      case Some(tank) =>
        tankColor.modify { _ => Color(byte, byte, byte) }(tank)
      case None =>
        FishTank(0, Color(byte, byte, byte), Nil)
    }
  }
  def update(key: String, size: Int): Unit = {
    settings(key) = settings.get(key) match{
      case Some(tank) =>
        tankLiters.modify(_ => size)(tank)
      case _ =>
        FishTank(size, Color(0,0,0), Nil)
    }
  }
  def update(key: String, fish:List[Fish]): Unit = {
    settings(key) = settings.get(key) match{
      case Some(tank) =>
        tankFish.modify(_ => fish)(tank)
      case _ =>
        FishTank(1, Color(0,0,0), fish)
    }
  }
  
  def persist(): Unit = {
    val jsonRaw = settings.toList.asJson
    val json = jsonRaw.nospaces
    putOnWire(json)
    writeToDisk(json)
  }
  def recall(): Unit = {
    val str = getFromDisk()
    val opt = str.decodeOption[List[(String, FishTank)]]
    opt.foreach{list =>
      settings ++= list.toMap
    }
  }
}

But, of course, this is not threadsafe and it uses a mutable collection to perform its work. A different, thread-safe implementation could look something like:

val actorSystem: ActorSystem = ???
implicit val timeout: akka.util.Timeout = ???
implicit val ec: ExecutionContext = ???
object asyncSettings{
  private sealed trait Message
  private case class Get(key: String)
      extends Message
  private case class SetGrey(key: String, hue: Byte)
      extends Message
      
  private class Perform extends Actor{
    override val receive: Receive = step(Map())
    def step(map: Map[String, FishTank]): Receive = {
        case Get(key) => sender ! map.get(key)//reply with Option to match the collect in apply
        case SetGrey(key, value) =>
          val newTank: FishTank = ???
          val newMap = map + (key -> newTank)
          context.become(step(newMap))
    }
    override def preStart(): Unit = ???//recall
    override def postStop(): Unit = ???//persist
  }
  
  val actor: ActorRef = actorSystem.actorOf{
    Props(new Perform())
  }
  def apply(key: String): Future[FishTank] =
    (actor ? Get(key)).collect{
      case Some(t: FishTank) => t
    }
  def update(key: String, hue: Byte) = 
    actor ! SetGrey(key, hue)
}

Here we use akka for asynchronous operations. One could also employ scalaz Task or simple Future composition or really any other asynchronous library.

Next we'll cover Type Classes to further decouple our data from functionality.


4. Objects are not Coroutines (Introducing Typelevel Scala into an OO Environment)

In the previous posts, we went over how to introduce immutability, combinators and case classes to move toward functional programming. Together, these three ideas form the basis for the claim this post makes: objects are not coroutines.

If you are unfamiliar with coroutines, wikipedia has a basic description of them.

In Java, the usual application runs a little like this:

  1. Initialize an object
  2. Perform an operation
  3. Mutate the object
  4. Perform an Operation
  5. ...

This habit breaks all of the FP ideas we have developed so far.

When introducing Typelevel Scala, it is important to note we are not simply adding a library to an already existing system (the JVM). We are trying to change how people do their day to day work. Some of them have been writing OO imperative software products for decades, making a change of paradigm difficult. Keeping to simple language is key to our goal of shifting an organization's workflow.

The workflow shift we are suggesting here is:

  1. Define
  2. Apply Combinator
  3. ...

We will be moving from a paradigm based in mutability and state to one built on immutability and functions.

OO Imperative Style

We will be defining a school of fish and how that school of fish grows.

class BadSchool(){
  private var name: String = null
  private var depth: Depth = null
  private var location: Location = null
  private var fish: mutable.Buffer[Fish] = null
  def setName(newName: String): Unit = {
    name = newName
  }
  def getName(): String = name
  def setDepth(newDepth: Depth): Unit = {
    depth = newDepth
  }
  def getDepth(): Depth = depth
  def setLocation(newLocation: Location): Unit = {
    location = newLocation
  }
  def getLocation(): Location = location
  def setFish(newFish: mutable.Buffer[Fish]): Unit = {
    fish = newFish
  }
  def removeFish(aFish: Fish): Unit = {
    fish -= aFish
  }
  def addFish(aFish: Fish): Unit = {
    fish += aFish
  }
  def getFish(): mutable.Buffer[Fish] = fish
  override def toString(): String= {
    s"School(\n\t$name,\n\t$depth,\n\t$location,\n\t$fish)"
  }
}

In my experience, this is the type of code commonly written by people who have just made the jump from an OO language into Scala. Here we initialize the object's members to null and use mutable containers to maintain and augment state. A typical use case would look like:

def asCoroutine(): Unit = {
  val coroutine = new BadSchool()
  val (name, depth, location, fish) = someInit()
  coroutine.setName(name)
  coroutine.setDepth(depth)
  coroutine.setLocation(location)
  coroutine.setFish(fish)
  convertToJsonAndPutOnTheWire(coroutine)
  var newFish: Fish = null
  for(i <- (0 to 10)){
    newFish = nextFish(coroutine)
    coroutine.addFish(newFish)
    convertToJsonAndPutOnTheWire(coroutine)
  }
}

This application is difficult to follow and very cluttered. In order to create a new valid school of fish, one must initialize the object and call four set methods. When growing a school of fish, the developer needs to destroy the previous school of fish, forcing any interaction with the object to be synchronous.

This is like a very messy coroutine. The addFish method is like a yield and there is no way to get back to the previously returned yield state.

Typelevel Style

The above impure code can be fairly simply converted to a more FP style. First we define our school of fish.

case class School(
    name: String,
    depth: Depth,
    location: Location,
    fish: immutable.Queue[Fish])

This is short and to the point. The intent of the code is clear and there are no messy methods defined for maintaining, getting and mutating state. The same use case would be implemented functionally like:

def aBetterWay(): Unit = {
  @annotation.tailrec
  def perform(qty: Int, acc: List[School]): List[School] = {
    if(qty > 0 && acc.nonEmpty){
      val head :: tail = acc
      val currentFish = head.fish.last
      val next = nextFish(currentFish)
      val result = head.copy(fish = head.fish.enqueue(next))
      perform(qty - 1, result :: acc)
    }else acc
  }
  val school = School(
      "Bikini Bottom",
      Deep,
      South,
      immutable.Queue(OneFish))
  val result = perform(10, List(school))
  result.foreach(convertToJsonAndPutOnTheWire)
}

There are two main ideas:

  1. In lieu of initializing state we define data. 
  2. Once data is defined, it cannot be redefined.

The perform function builds each new school of fish from the previous one. State is created, never destroyed, and there is no initialization step. Moreover, the webservice call can be made asynchronous without worry about synchronization or heap issues.

Next, we'll introduce our first libraries: Monocle and Argonaut.