Best Practices

The big picture

There have been many discussions and thoughts on “What does an idiomatic Finch program look like?” (see the Best Practices and Abstractions issue and the Future Finch writeup), but we don’t think Finch should end up with a “one-size-fits-all” style. In fact, none of Finch’s code was written with a particular style in mind, only with reasonable composition and reuse of its building blocks. Finch will always be a library that doesn’t promote any concrete style of organizing a code base (frameworks usually do that), but does promote composability and compile-time guarantees for its abstractions.

Given that Finch’s major abstractions are quite generic, you can shape them however you like, as long as it makes programming fun again: some Finch users write Jersey-style programs; some stick with CRUD-style ones; others use macros to group endpoints into controller-like structures.

Note that all of the Finch examples are written in the “Vanilla Finch” style, where no additional levels of indirection are layered on top of endpoints.

Picking a JSON library

It’s highly recommended to use Circe, whose purely functional nature aligns very well with Finch (see the Cookbook on how to enable Circe support in Finch). In addition to compile-time derived decoders and encoders, Circe has great performance and very useful features such as case class patchers and incomplete decoders.

Case class patchers are extremely useful for PATCH and PUT HTTP endpoints, where a case class instance needs to be updated with new data parsed from a JSON object. In Finch, Circe’s case class patchers are usually represented as Endpoint[A => A], which

  1. parses an HTTP request for a partial JSON object containing the fields that need to be updated, and
  2. represents that partial JSON object as a function A => A that takes a case class and updates it with the values from the JSON object.

import java.util.UUID
import io.finch._
import io.finch.circe._
import io.finch.syntax._
import io.circe.generic.auto._

case class Todo(id: UUID, title: String, completed: Boolean, order: Int)
object Todo {
  def list(): List[Todo] = ???
}

val patchedTodo: Endpoint[Todo => Todo] = jsonBody[Todo => Todo]

val patchTodo: Endpoint[Todo] =
  patch("todos" :: path[UUID] :: patchedTodo) { (id: UUID, pt: Todo => Todo) =>
    val oldTodo: Todo = ??? // get by id
    val newTodo = pt(oldTodo)
    // store newTodo in the DB
    Ok(newTodo)
  }

Incomplete decoders are generated by Circe and Finch to decode JSON objects with some fields missing. This is very useful for POST HTTP endpoints that create new entries from user-supplied JSON plus some internally generated ID. A Circe incomplete decoder is represented in Finch as an Endpoint[(A, B, ..., Z) => Out], where A, B, ..., Z is the set of fields missing from the Out type. This type signature means that the endpoint, instead of a final value (a complete JSON object), gives us a function to which we need to supply the missing bits to get the final instance.

import java.util.UUID
import io.finch._
import io.finch.circe._
import io.finch.syntax._
import io.circe.generic.auto._

// Decode a partial Todo (missing its id) and supply a freshly generated UUID.
val postedTodo: Endpoint[Todo] = jsonBody[UUID => Todo].map(_(UUID.randomUUID()))

val postTodo: Endpoint[Todo] = post("todos" :: postedTodo) { t: Todo =>
  // store t in the DB
  Ok(t)
}

By default, Finch uses Circe’s Printer to serialize JSON values into strings. This is quite convenient, since it can be configured with extra options (e.g., to drop null keys in the output, replace the io.finch.circe._ import with io.finch.circe.dropNullKeys._), but it’s not the most efficient printer Circe offers. Unless you depend on such configuration (i.e., you need to drop null keys), prefer the Jackson-powered printer: replace the io.finch.circe._ import with io.finch.circe.jacksonSerializer._.
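As a sketch, switching printers is just a matter of swapping one import for another (the rest of your endpoints stay unchanged):

```scala
// Default Circe printer:
// import io.finch.circe._

// Circe printer configured to drop null keys from the output:
// import io.finch.circe.dropNullKeys._

// Jackson-powered printer (faster serialization):
import io.finch.circe.jacksonSerializer._
```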

Do not block an endpoint

Finagle is very sensitive to whether or not its worker threads are blocked. In order to make sure your HTTP server always makes progress (accepts new connections/requests), do not block Finch endpoints. Use FuturePools to wrap expensive computations.

import io.finch._
import io.finch.syntax._
import com.twitter.util.FuturePool

val expensive: Endpoint[BigInt] = get(path[Int]) { i: Int =>
  FuturePool.unboundedPool {
    Ok(BigInt(i).pow(i))
  }
}.handle {
  case e: Error.NotPresent => BadRequest(e)
}

Refer to this blog post to find out how much time a Finagle service spends blocking.

Use TwitterServer

Always serve Finch endpoints within TwitterServer, a lightweight server template used in production at Twitter. TwitterServer wraps a Finagle application with a bunch of useful features, such as command-line flags, logging, and, more importantly, an HTTP admin interface that can tell you a lot about what’s happening with your server. One of the most powerful features of the admin interface is Metrics, which captures a snapshot of all the system-wide stats (free memory, CPU usage, request success rate, request latency, and many more) exported in JSON format.

Use the following template to serve your Finch application within TwitterServer.

import io.finch._

import com.twitter.finagle.param.Stats
import com.twitter.server.TwitterServer
import com.twitter.finagle.{Http, Service}
import com.twitter.finagle.http.{Request, Response}
import com.twitter.util.Await

object Main extends TwitterServer {

  val api: Service[Request, Response] = ???

  def main(): Unit = {
    val server = Http.server
      .configured(Stats(statsReceiver))
      .serve(":8081", api)

    onExit { server.close() }

    Await.ready(adminHttpServer)
  }
}

Monitor your application

Use Metrics to record domain-specific metrics in your application and export them automatically with TwitterServer. Metrics are extremely valuable and help you better understand your application under different circumstances.

One of the easiest things to export is a counter that captures the number of times some event occurred.

import io.finch._
import io.finch.circe._
import io.finch.syntax._
import io.circe.generic.auto._

import com.twitter.server.TwitterServer
import com.twitter.finagle.stats.Counter

object Main extends TwitterServer {
  val todos: Counter = statsReceiver.counter("todos")

  val postTodo: Endpoint[Todo] = post("todos" :: postedTodo) { t: Todo =>
    todos.incr()
    // store t in the DB
    Ok(t)
  }
}

It’s also possible to export histograms over sampled values (e.g., latencies, number of active users).

import io.finch._
import io.finch.circe._
import io.finch.syntax._
import io.circe.generic.auto._

import com.twitter.server.TwitterServer
import com.twitter.finagle.stats.Stat

object Main extends TwitterServer {
  val getTodosLatency: Stat = statsReceiver.stat("get_todos_latency")

  val getTodos: Endpoint[List[Todo]] = get("todos") {
    Stat.time(getTodosLatency) { Ok(Todo.list()) }
  }
}

Both Finagle and user-defined stats are available via the TwitterServer’s HTTP admin interface or through the /admin/metrics.json HTTP endpoint.

Picking HTTP statuses for responses

There is no one-size-fits-all answer as to which HTTP status code to use for a particular response, but there are best practices established by the community as well as by well-known APIs. Finch tries to take a neutral stance on this question by providing an Output API abstracted over HTTP statuses, so any of them may be used to construct a payload, a failure, or an empty output. On the other hand, there is an optional and lightweight API (i.e., the methods Ok, BadRequest, NoContent, etc.) providing a reasonable mapping between response types (payload, failure, empty) and status codes. This mapping shouldn’t be followed blindly, since there are always exceptions (specific applications, opinionated design principles) to the general rule. Simply put, you’re more than welcome to use the mapping we believe makes sense, but you don’t have to stick with it and can always drop down to the Output.* API.

Finch’s current mapping is as follows.

Output type  Status codes
-----------  ------------
Payload      200, 201
Empty        202, 204
Failure      4xx, 5xx
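As a sketch of that lightweight API (assuming the standard io.finch._ import), each method corresponds to a row in the table above:

```scala
import io.finch._

// Payload outputs (carry a value):
val ok      = Ok("todo")                    // 200 OK
val created = Created("new todo")           // 201 Created

// Empty outputs (no payload):
val noContent = NoContent[String]           // 204 No Content

// Failure outputs (carry an exception):
val bad = BadRequest(new Exception("oops")) // 400 Bad Request
```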

Configuring Finagle

Finch uses Finagle to serve its endpoints (converted into Finagle Services), so it’s important to know what Finagle can do for you to improve the resiliency of your application. While Finagle servers are quite simple compared to Finagle clients, there are still server-side features that come in handy in most cases.

All Finagle servers are configured via a collection of with-prefixed methods available on their instances (i.e., Http.server.with*). For example, it’s always a good idea to put bounds on the most critical parts of your application. In the case of a Finagle HTTP server, consider overriding its concurrency limit (the maximum number of concurrent requests allowed), which is disabled by default.

import io.finch._
import io.finch.syntax._

import com.twitter.server.TwitterServer
import com.twitter.finagle.{Http, Service}
import com.twitter.finagle.http.{Request, Response}
import com.twitter.util.Await

object Main extends TwitterServer {

  val ping: Endpoint[String] = get("ping") { Ok("Pong") }
  val service: Service[Request, Response] = ping.toServiceAs[Text.Plain]

  def main(): Unit = {
    val server = Http.server
      .withAdmissionControl.concurrencyLimit(
        maxConcurrentRequests = 10,
        maxWaiters = 10)
      .serve(":8080", service)

    onExit { server.close() }

    Await.ready(adminHttpServer)
  }
}

Finagle Filters vs. Finch Endpoints

Finch endpoints are designed to be able to substitute (when it’s reasonable) for both Finagle services and filters. While it’s clear that an Endpoint in Finch is a core abstraction and can be thought of as analogous to a Finagle Service, it’s not always clear when you should use an endpoint over a Finagle Filter for things such as authentication.

Thanks to the Output ADT, a Finch endpoint can easily simulate a Finagle filter and reject an unauthenticated request with an Output.failure. However, this doesn’t mean endpoints should be preferred to filters in all cases.

The following rule of thumb might be used to determine what building block (filter or endpoint) to pick for a given use case.

Use a Filter instead of an Endpoint[HNil] (only an Endpoint[HNil] can be replaced with a filter) when:

  • You want to implement logic that might be shared across all the endpoints in the program (e.g., authorization, CORS).
  • It’s only used for side-effects (e.g., logging, metrics).
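As an illustrative sketch (the header name X-Secret and the hard-coded secret are made up), here are both building blocks side by side: a side-effecting Finagle Filter shared across the whole service, and an authorization guard expressed as an Endpoint[HNil] composed into individual endpoints.

```scala
import io.finch._
import com.twitter.finagle.{Service, SimpleFilter}
import com.twitter.finagle.http.{Request, Response}
import com.twitter.util.Future
import shapeless.HNil

// A Filter for side effects shared across all endpoints (e.g., logging).
object LogRequests extends SimpleFilter[Request, Response] {
  def apply(req: Request, service: Service[Request, Response]): Future[Response] = {
    println(s"${req.method} ${req.uri}")
    service(req)
  }
}

// An authorization guard as an Endpoint[HNil]: it matches (extracting
// nothing) when the expected header is present, and rejects with an
// Output.failure otherwise.
val authorized: Endpoint[HNil] = header("X-Secret").mapOutput { s =>
  if (s == "secret") Ok(HNil: HNil)
  else Unauthorized(new Exception("wrong secret"))
}

// Compose the guard into an endpoint, and wrap the filter around the service:
// val service = LogRequests.andThen((authorized :: ping).toServiceAs[Text.Plain])
```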

Please note that “error handling” is a special case in Finch. While it looks like shared logic that could be placed into a filter, it’s preferable to use Endpoint.handle, which allows converting exceptions into Output.failures in a fine-grained way.

Pulling it all Together

The best practices described here provide a good starting point for building a Finch-based app, but building out real services with Finch involves more requirements than are covered here. There are several community efforts to build a simple service template for getting a fully-fledged service up and running quickly.