Elixir/Phoenix deployments using Distillery

Yuva - November 26, 2016

Of late, we have been porting one of our internal apps from Rails to Phoenix. We are using Capistrano for deploying Rails apps. We have Jenkins CI which listens to features merged into master, and uses Capistrano to deploy to production.

For Elixir/Phoenix, we are looking for something similar: merge features into master, let Jenkins CI run, and package the Phoenix app so it can be run on production. In the Elixir world, there are a couple of release tools for this, most notably Exrm and Distillery.

You can read more about Exrm and Distillery here.

Since Distillery is the latest one and also fits our use case nicely, let’s dig into that tool. Distillery’s documentation is quite good. In this blog post, we are going to explore creating a Phoenix app and initializing Distillery, configuring CI to generate a release package, making successive deployments with hot upgrades, generating versions on the fly, and compiling assets using plugins.

Before getting started, you need a working Erlang and Elixir installation, and Node.js if your app compiles assets with brunch.

Since Elixir has a compilation step where it compiles Elixir code to BEAM, we need to set up a workflow for compiling and deploying our Elixir application. A typical workflow looks like this: push features to master, let the CI server fetch the source and compile it into a release package, copy that package to the production server, and unpack and run it there.

The interesting thing to note here is that the CI server needs the source code of all the dependencies of the Elixir app, but the production server uses only the compiled BEAM code. So there is no bloat on production servers; it’s easy to spin up new servers, unpack the package, and start the app.

Let’s explore Distillery and how it helps in deploying Phoenix apps. The steps to create and run the Distillery-based app are taken from the Distillery documentation.

Create Phoenix app and initialize Distillery

Let’s create a simple Phoenix app which we will use throughout our discussion.

> mix phoenix.new --no-ecto --no-brunch phoenix_app
* creating phoenix_app/config/config.exs
* creating phoenix_app/config/prod.secret.exs
* creating phoenix_app/config/test.exs
* ...

Fetch and install dependencies? [Yn] Y
* running mix deps.get

We are all set! Run your Phoenix application:

> cd phoenix_app
> mix phoenix.server

If it’s all good, you should be able to see the Phoenix app at localhost:4000. Now, open mix.exs and add distillery to the dependencies:

  defp deps do
    [{:phoenix, "~> 1.2.1"},
     {:phoenix_pubsub, "~> 1.0"},
     {:phoenix_html, "~> 2.6"},
     {:phoenix_live_reload, "~> 1.0", only: :dev},
     {:gettext, "~> 0.11"},
     {:cowboy, "~> 1.0"},
     {:distillery, "~> 0.10.1"}]
  end

Now, fetch dependencies using mix, and initialize the release configuration with Distillery:

> mix deps.get
> mix release.init

This should create a rel folder with a config.exs file in it. Please take a moment to go through this file, which is well commented, to understand what it does.
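
If you open it, the generated configuration looks roughly like this (trimmed here; the exact contents depend on your Distillery version):

# rel/config.exs (sketch)
use Mix.Releases.Config,
    default_release: :default,
    default_environment: :dev

environment :dev do
  set dev_mode: true
  set include_erts: false
end

environment :prod do
  set include_erts: true
  set include_src: false
end

release :phoenix_app do
  set version: current_version(:phoenix_app)
end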

Configuring CI and generating a release package

CI does all the heavy lifting in order to cut a release. You can run these commands locally and replicate the same on the CI server. Make sure the CI server has Erlang, Elixir, and (if you compile assets) Node.js installed.

Cutting a release is very easy; just run this command:

> MIX_ENV=prod mix release --env=prod
==> gettext
Compiling 1 file (.erl)
...
==> Assembling release..
==> Building release phoenix_app:0.0.1 using environment prod
==> Including ERTS 8.1 from /usr/local/Cellar/erlang/19.1/lib/erlang/erts-8.1
==> Packaging release..
==> Release successfully built!
    You can run it in one of the following ways:
      Interactive: rel/phoenix_app/bin/phoenix_app console
      Foreground: rel/phoenix_app/bin/phoenix_app foreground
      Daemon: rel/phoenix_app/bin/phoenix_app start

If you look at the output of this command, you’ll notice that mix packages everything along with the app: it compiles the app and its dependencies, assembles the release, bundles ERTS 8.1 so the production server doesn’t need Erlang installed, and packages it all into a tarball under rel/phoenix_app/releases.

Optionally, you can set up Travis/Jenkins to watch for features merged into git, automatically pull the latest source code, and package the app.

Now, take a look at the rel/phoenix_app folder:

rel
└── phoenix_app
    ├── bin
    │   ├── nodetool
    │   ├── phoenix_app
    │   ├── release_utils.escript
    │   └── start_clean.boot
    ├── erts-8.1
    │   ├── bin
    │   │   ├── beam
    │   │   ├── beam.smp
    ....
    │   ├── include
    │   │   ├── driver_int.h
    │   │   ├── erl_nif.h
    │   ├── lib
    │   │   ├── internal
    │   │   ├── liberts.a
    │   │   └── liberts_r.a
    ├── lib
    │   ├── compiler-7.0.2
    │   ├── cowboy-1.0.4
    │   ├── crypto-3.7.1
    ...
    │   ├── phoenix-1.2.1
    │   ├── phoenix_app-0.0.1
    │   ├── ranch-1.2.1
    └── releases
        ├── 0.0.1
        │   ├── commands
        │   ├── hooks
        │   ├── phoenix_app.boot
        │   ├── phoenix_app.rel
        │   ├── phoenix_app.script
        │   ├── phoenix_app.sh
        │   ├── phoenix_app.tar.gz
        │   ├── start_clean.boot
        │   ├── sys.config
        │   └── vm.args
        ├── RELEASES
        └── start_erl.data

Going through the top-level folders: bin contains the boot script and helpers used to manage the release (console, foreground, start, etc.), erts-8.1 is the bundled Erlang runtime, lib holds the compiled BEAM code of the app and all its dependencies, and releases contains one folder per version along with the tarball that gets shipped to production.

Making successive deployments

Erlang has a rich heritage, and Erlang programs are designed to run for years without bringing servers down, making near-100% uptime achievable. Bug fixes, new features, and improvements are rolled out using hot updates to running servers. Erlang provides ways to patch running code on production servers so that there is no need to stop and start the app. Let’s look at ways to deploy phoenix_app.

Hot upgrades

Distillery provides support for creating releases which can be applied as hot upgrades. The process of generating the tar file is the same, but the command is different. For the sake of brevity, change the version number in mix.exs from 0.0.1 to 0.0.2 before proceeding.

> MIX_ENV=prod mix release --env=prod --upgrade
==> Assembling release..
==> Building release phoenix_app:0.0.2 using environment prod
==> Including ERTS 8.1 from /usr/local/Cellar/erlang/19.1/lib/erlang/erts-8.1
==> Generated .appup for phoenix_app 0.0.1 -> 0.0.2
==> Relup successfully created
==> Packaging release..
==> Release successfully built!
    You can run it in one of the following ways:
      Interactive: rel/phoenix_app/bin/phoenix_app console
      Foreground: rel/phoenix_app/bin/phoenix_app foreground
      Daemon: rel/phoenix_app/bin/phoenix_app start

The command to create a new release is the same as the first run, but there is a new argument: --upgrade. The output is also slightly different: Generated .appup for phoenix_app 0.0.1 -> 0.0.2, where the .appup file describes the hot upgrade. There will be a new folder 0.0.2 under releases, and another file called relup which contains instructions for performing the upgrade. It looks like this:

{"0.0.2",
 [{"0.0.1",[],
   [{load_object_code,{phoenix_app,"0.0.2",
                                   ['Elixir.PhoenixApp.Endpoint',
                                    'Elixir.PhoenixApp.Gettext']}},
    point_of_no_return,
    {load,{'Elixir.PhoenixApp.Endpoint',brutal_purge,brutal_purge}},
    {load,{'Elixir.PhoenixApp.Gettext',brutal_purge,brutal_purge}}]}],
 [{"0.0.1",[],
   [{load_object_code,{phoenix_app,"0.0.1",
                                   ['Elixir.PhoenixApp.Endpoint',
                                    'Elixir.PhoenixApp.Gettext']}},
    point_of_no_return,
    {load,{'Elixir.PhoenixApp.Endpoint',brutal_purge,brutal_purge}},
    {load,{'Elixir.PhoenixApp.Gettext',brutal_purge,brutal_purge}}]}]}.

This file contains instructions about how to switch between 0.0.1 and 0.0.2. If the upgrade needs to be rolled back, this file helps in downgrading from 0.0.2 to 0.0.1.

Note about versions

Mix only understands semantic versioning. If you try to use SHA ids as the version, mix will throw errors. Say you change the version from 0.0.2 to 48dcbccd and try to generate a new package; mix throws this error:

> MIX_ENV=prod mix release --env=prod --upgrade
Compiling 11 files (.ex)
** (Mix) Expected :version to be a SemVer version, got: "48dcbccd"

If you are coming from the Rails world and are used to continuous deployments using Capistrano, editing the version for every deployment is a pain. Let’s see how we can get Capistrano-style continuous deployments.

Generating versions on the fly

One way to generate versions on the fly is to read the version from an environment variable. Also, since mix enforces semantic versioning, generating incremental versions makes sense. Capistrano names its release folders in YYYYMMDDHHMMSS format, i.e. year, month, day, hour, minute, second. Change the mix.exs file to derive the version like this:

def project do
  [app: :phoenix_app,
   version: (if Mix.env == :prod, do: System.get_env("APP_VERSION"), else: "0.0.1"),
   elixir: "~> 1.2",
   elixirc_paths: elixirc_paths(Mix.env),
   compilers: [:phoenix, :gettext] ++ Mix.compilers,
   build_embedded: Mix.env == :prod,
   start_permanent: Mix.env == :prod,
   deps: deps()]
end

Now when mix builds the package for production, the app reads the version from the environment variable; otherwise it’s hard coded to 0.0.1. Since the version can be dynamic now, we can let CI specify which version needs to be generated. Following how Capistrano generates versions, we can do:

export APP_VERSION=`date +%Y.%m.%d%H%M`

npm install                     # for brunch
mix deps.get
MIX_ENV=prod mix release --env=prod --upgrade

So, this generates the app version with the year as the major version, the month as the minor version, and the day and time as the patch version. Note that in order for `--upgrade` to work on CI, you should have the previous versions in the releases folder, otherwise the upgrade will fail because there is no reference release. If you are using Travis/Circle CI, make sure that the releases folder is cached.
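
One caveat with the snippet above: if APP_VERSION is not exported, the version ends up as nil and the build fails. A small defensive helper (a hypothetical variant, not part of the original setup) makes the fallback explicit; mix.exs would then use version: app_version():

# Hypothetical helper for mix.exs: use APP_VERSION when it is set,
# otherwise fall back to a default version.
defp app_version do
  case System.get_env("APP_VERSION") do
    version when is_binary(version) and version != "" -> version
    _ -> "0.0.1"
  end
end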

Recovering from failures, doing proper patch updates

Say the code which is pushed is buggy, and CI has made a release. It can also happen that the hot upgrade itself fails. In such cases, CI will have stale releases like this:

rel/
  phoenix_app/
    releases/
      0.0.1             # deploy succeeded
      2016.09.250826    # deploy succeeded
      2016.09.250829    # deploy failed
      2016.09.251007    # new package based on failed deploy package

When CI runs again, it will generate a new package assuming the previous package is a good one. Hot upgrades will start failing from now on!

It’s very important to keep track of which deploys succeeded and which ones failed. There are two ways this can be done: either CI records which releases were deployed successfully, or CI asks the production server which version is actually running.

If you follow the 2nd approach, you can read the currently running version from the file start_erl.data. It contains the runtime (ERTS) version and the app version. CI can read that version from the production server and create a hot upgrade from that version. In addition to the --upgrade option, Distillery also supports an --upfrom option, which takes the app version from which the upgrade package has to be generated. Typical CI code looks like this:

export PREV_APP_VERSION=`ssh mix@10.0.0.5 cat /srv/app/releases/start_erl.data | cut -d' ' -f2`
export APP_VERSION=`date +%Y.%m.%d%H%M`

touch mix.exs                   # so that new app_version is picked
npm install                     # for brunch
mix deps.get
MIX_ENV=prod mix release --env=prod --upgrade --upfrom=$PREV_APP_VERSION

Now hot upgrades always work!
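
As a side note, start_erl.data is a one-line text file containing the ERTS version and the app version separated by a space, so it is easy to read from Elixir as well; a small sketch:

# Sketch: extract the running app version from start_erl.data.
# The file contains something like "8.1 2016.09.250826".
[_erts_version, app_version] =
  "/srv/app/releases/start_erl.data"
  |> File.read!()
  |> String.trim()
  |> String.split(" ", parts: 2)

IO.puts(app_version)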

Immutable infrastructure

Immutable infrastructure means that services in production are treated as immutable entities: an upgrade should not change anything on a running server; instead, the service is redeployed each time. This means no hot upgrades in production. If you are running stateless services like web servers written in Phoenix, sticking with immutable infrastructure is recommended. Just create a package, spin up a new node behind the load balancer, run the app, and nuke the old nodes.

Compiling assets using plugins

NOTE: We haven’t initialized our app to use brunch, so the following section won’t work as-is. Try creating a new app without the --no-brunch option, and then follow this section.

Mostly, people use Phoenix for APIs and use React or another JS framework for the front end. The Distillery docs suggest pushing asset compilation into a shell script. Let’s take a small detour and see how we can do that using plugins instead. A Distillery plugin can be used to hook into the package generation process. It has 5 hooks: before_assembly, after_assembly, before_package, after_package, and after_cleanup.

Names are pretty self-explanatory. If you want to know more about these, you can take a look at them here. You can hook into the before_assembly callback and compile assets there:

defmodule PhoenixApp.PhoenixDigestTask do
  use Mix.Releases.Plugin

  def before_assembly(%Release{} = _release) do
    info "before assembly!"
    case System.cmd("npm", ["run", "deploy"]) do
      {output, 0} ->
        info output
        Mix.Task.run("phoenix.digest")
        nil
      {output, error_code} ->
        {:error, output, error_code}
    end
  end

  def after_assembly(%Release{} = _release) do
    info "after assembly!"
    nil
  end

  def before_package(%Release{} = _release) do
    info "before package!"
    nil
  end

  def after_package(%Release{} = _release) do
    info "after package!"
    nil
  end

  def after_cleanup(%Release{} = _release) do
    info "after cleanup!"
    nil
  end
end


# rel/config.exs
environment :prod do
  set plugins: [PhoenixApp.PhoenixDigestTask]
  set include_erts: true
  set include_src: false
end

So, more Elixir code and less shell scripting. When you run the release command again, the output looks like this:

==> Assembling release..
==> Building release phoenix_app:2016.10.251007 using environment prod
==> before assembly!                 ## <- Plugin print here
==>
> brunch build --production

25 Oct 10:07:47 - info: compiled 3 files into 2 files, copied 3 in 3.1 sec

Check your digested files at "priv/static"
==> Including ERTS 8.0.2 from /usr/lib/erlang/erts-8.0.2
==> Generated .appup for phoenix_app 2016.09.261241 -> 2016.10.251007
==> Relup successfully created
==> after assembly!                  ## <- Plugin print here
==> Packaging release..
==> before package!                  ## <- Plugin print here
==> after package!                   ## <- Plugin print here
==> Release successfully built!
    You can run it in one of the following ways:
      Interactive: rel/phoenix_app/bin/phoenix_app console
      Foreground: rel/phoenix_app/bin/phoenix_app foreground
      Daemon: rel/phoenix_app/bin/phoenix_app start

You can see logs from the plugin throughout the release process. Plugins are an interesting concept; please take a look at them.

You can find the whole source code here.

Thanks for reading! If you would like to get updates about subsequent blog posts from Codemancers, do follow us on Twitter: @codemancershq. Feel free to get in touch if you’d like Codemancers to help you build your Elixir/Phoenix app.

Understanding Exit Signals in Erlang/Elixir

Emil Soman - February 29, 2016

“Process linking and how processes send and handle exit signals” is a very important topic to understand in order to build robust apps in Erlang or Elixir, but it is also a source of a lot of confusion for beginners. In this post, we’ll cover this topic and understand it well, once and for all.

Note: Processes are the same in both Erlang and Elixir, so everything below is equally applicable to both languages.

Processes

Processes in Erlang are like threads that don’t share any data. These processes are VM-level, so don’t confuse them with OS processes. These VM-level processes are scheduled for execution by the Erlang VM, much like how the OS schedules OS-level processes for execution. Since these Erlang processes own their own data, they can be scheduled freely on all available CPUs, and this is how Erlang makes concurrency easy for developers.

[Image: processes]

But a bunch of processes that run in isolation are rarely of any use. To build anything useful, processes need to work together by communicating with each other.

Messages

In Erlang, processes communicate among themselves by message passing. Every process has one mailbox. A process can read messages that appear in its mailbox and can also send messages to mailboxes that belong to other processes. This way, processes can communicate with each other without having to share any data. This frees developers from writing locks around data access when writing code that may run concurrently.
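
To make this concrete, here is a minimal example (not from the original post) of two processes exchanging a message; you can paste it into iex:

# Spawn a child process that waits for a :ping and replies with :pong.
parent = self()

child =
  spawn(fn ->
    receive do
      {:ping, from} -> send(from, :pong)
    end
  end)

send(child, {:ping, parent})

receive do
  :pong -> IO.puts("got :pong back from #{inspect(child)}")
end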

[Image: messaging]

Apart from these regular messages, processes also communicate using “exit signals”, a special type of message.

Exit Signals

Processes have a signalling mechanism by which they can let other processes know that they are exiting. These “exit signals” also contain an exit reason, which helps other processes decide how to react to the signal.

A process can terminate for 3 reasons:

  1. A normal exit - This happens when a process is done with its job and ends execution. Since these exits are normal, usually nothing needs to be done when they happen. So these signals are usually ignored, but they are emitted anyway for the sake of interested processes. The exit reason for this kind of exit is the atom :normal.
  2. Because of unhandled errors - This happens when an exception is raised inside the process and not caught. A pattern matching error is an example; letting such errors crash the process is a technique used by Erlang programmers to “fail fast”. The exit reason for this kind of exit is the exception details - the name of the exception and some stack trace.
  3. Forcefully killed - This happens when another process sends an exit signal with the reason :kill, which forces the receiving process to terminate.

A process can subscribe to another process’s exit signal by establishing a “link” with that process. When a process terminates, all the linked processes receive the exit signal from the terminating process.
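
For example, a link can be established at spawn time with spawn_link/1, or after the fact with Process.link/1 (a small sketch):

# spawn_link/1 spawns a process and links it to the caller in one step.
# If the new process exits abnormally, the exit signal propagates to the
# caller, which terminates too unless it is trapping exits.
spawn_link(fn -> exit(:boom) end)

# An already running process can also be linked to explicitly:
pid = spawn(fn -> :timer.sleep(5000) end)
Process.link(pid)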

[Image: links]

[Image: exit_signals]

The force-kill signals, the ones with exit reason :kill, will terminate the receiving process no matter what. But the other kinds of exit signals - those with reasons :normal or any other reason - can cause different effects on the receiving process depending on whether the receiving process is trapping exits or not. Let’s see how trapping of exit signals works.

Trapping exits

When a process traps exit signals, the exit signals that are received from the links will be converted into messages which are then put inside the mailbox that belongs to the process. Here’s how a process can trap exits in Elixir:

Process.flag(:trap_exit, true)

receive do
  {:EXIT, from, reason} ->
    # Handle the exit as needed; here we just log it
    IO.puts("#{inspect(from)} exited with reason #{inspect(reason)}")
end

When this process receives an exit signal other than a :kill signal, it will be converted into a message that is received inside the receive block. In Erlang/Elixir, this is what makes supervision trees possible.
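
Putting the pieces together, here is a small experiment (runnable in iex) that links to a process which exits abnormally and receives the exit signal as a plain message instead of crashing:

# Trap exits, link to a process that exits with :oops, and handle the
# resulting {:EXIT, pid, reason} message.
Process.flag(:trap_exit, true)

pid = spawn_link(fn -> exit(:oops) end)

receive do
  {:EXIT, ^pid, reason} ->
    IO.puts("linked process exited with reason: #{inspect(reason)}")
end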

A supervisor is a process whose responsibility is to start child processes and keep them alive by restarting them if necessary. Let’s see how a supervisor does that. If you look at the source code of the supervisor module in Erlang/OTP, you can see that the first thing that happens in the init function is trapping exit signals:

init(...) ->
    process_flag(trap_exit, true),
    ...

This means that exit signals from child processes will be converted into messages. The supervisor then handles these messages by restarting the child processes, based on the restart strategy of the supervisor. This is how you write fault tolerant apps in Elixir or Erlang - you let your processes fail fast, and the supervisor that spawned these processes will make sure they are restarted.
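
As a sketch of what this looks like from the Elixir side (MyApp.Worker here is a hypothetical GenServer, not part of this post), a supervisor with a single worker can be started like this:

# If MyApp.Worker crashes, the supervisor receives its exit signal as a
# message and restarts it according to the :one_for_one strategy.
import Supervisor.Spec

children = [
  worker(MyApp.Worker, [])
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)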

So if processes can trap exit signals, how is it possible to kill them? Using the :kill exit signal, of course. The exit reason :kill is used to forcefully terminate the receiving process even if it’s trapping exits.

In Elixir, this is how you kill a process using its pid:

Process.exit(pid, :kill)
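
For instance, even a process that traps exits cannot survive it; a quick iex experiment (a sketch):

# Spawn a process that traps exits and sleeps forever, then kill it.
pid =
  spawn(fn ->
    Process.flag(:trap_exit, true)
    :timer.sleep(:infinity)
  end)

Process.exit(pid, :kill)
:timer.sleep(10)      # give the signal a moment to be delivered
Process.alive?(pid)   # => false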

A process can also send an exit signal to itself using:

Process.exit(self(), <reason>)

The process responds to this signal from itself in a similar manner to how it would respond to an exit signal received from another process, but with one exception: if a process sends itself an exit signal with the reason :normal, the process terminates, and when it does, it sends a :normal exit signal to all linked processes.

Recap

  1. :normal exit signals are harmless. These are ignored by the receiving process unless trapping exits, in which case, these will be received as messages. If it’s sent by self, it will cause the process to terminate with a :normal exit reason.
  2. :kill exit signals always result in the termination of the receiving process.
  3. Exit signals with other reasons will terminate the receiving process unless trapping exits, in which case, these will be received as messages.

Here’s a cheatsheet for your reference:

[Image: Cheatsheet]

Thanks for reading! If you would like to get updates about subsequent blog posts about Elixir from Codemancers, do follow us on Twitter: @codemancershq.

Visualizing Parallel Requests in Elixir

Emil Soman - January 14, 2016

We have been evaluating Elixir at Codemancers, and today I was learning how to spin up a minimal HTTP API endpoint using Elixir. Like Rack in Ruby land, Elixir comes with Plug, a Swiss Army knife for dealing with HTTP connections.

Using Plug to build an HTTP endpoint

First, let’s create a new Elixir project:

$ mix new http_api --sup

This creates a new Elixir OTP app. Let’s add :cowboy and :plug as hex and application dependencies:

# Change the following parts in mix.exs

  def application do
    [applications: [:logger, :cowboy, :plug],
     mod: {HttpApi, []}]
  end

  defp deps do
    [
      {:cowboy, "~>1.0.4"},
      {:plug, "~>1.1.0"}
    ]
  end

Plug comes with a router which we can use to build HTTP endpoints with ease. Let’s create a module to encapsulate the router:

# lib/http_api/router.ex
defmodule HttpApi.Router do
  use Plug.Router

  plug :match
  plug :dispatch

  get "/" do
    send_resp(conn, 200, "Hello Plug!")
  end

  match _ do
    send_resp(conn, 404, "Nothing here")
  end
end

If you have worked with Sinatra-like frameworks, this should look familiar to you. You can read the router docs to understand what everything does if you are curious.

To start the server, we’ll tell the application supervisor to start Plug’s Cowboy adapter:

# lib/http_api.ex

defmodule HttpApi do
  use Application

  def start(_type, _args) do
    import Supervisor.Spec, warn: false

    children = [
      # `start_server` function is used to spawn the worker process
      worker(__MODULE__, [], function: :start_server)
    ]
    opts = [strategy: :one_for_one, name: HttpApi.Supervisor]
    Supervisor.start_link(children, opts)
  end

  # Start Cowboy server and use our router
  def start_server do
    { :ok, _ } = Plug.Adapters.Cowboy.http HttpApi.Router, []
  end
end

The complete code for the above example can be found here. You can run the server using:

$ iex -S mix

This starts the interactive Elixir shell and runs your application on the Erlang VM. Now comes the fun part.

Visualizing processes using :observer

In the iex prompt, start the Erlang :observer tool using this command:

iex> :observer.start

This opens a GUI tool that looks like this:

[Image: observer]

On the left-hand side of the Applications panel, you can see a list of all the applications currently running on the Erlang VM - this includes our app (http_api) and all its dependencies, the important ones being cowboy and ranch.

Cowboy and Ranch

Cowboy is a popular HTTP server in the Erlang world, and it uses Ranch, another Erlang library, to handle TCP connections behind the scenes. When we start the Plug router, we pass the router module to Plug’s Cowboy adapter. Now when Cowboy receives a connection, it hands it over to Plug, and Plug runs it through its plug pipeline and sends back the response.
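
Every stage of that pipeline follows the same contract: a plug is a module with init/1 and call/2, where call/2 takes a %Plug.Conn{} and returns a (possibly transformed) %Plug.Conn{}. As a small illustration (a hypothetical plug, not part of the original app):

# A plug that logs the method and path of every request, then passes the
# connection along unchanged. It could be added to HttpApi.Router with:
#   plug HttpApi.Plugs.RequestLogger
defmodule HttpApi.Plugs.RequestLogger do
  require Logger

  def init(opts), do: opts

  def call(conn, _opts) do
    Logger.info("#{conn.method} #{conn.request_path}")
    conn
  end
end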

Concurrent Requests

By default, Plug asks Cowboy to start 100 TCP connection acceptor processes in Ranch. You can see the 100 acceptor processes for yourself if you look at the application graph of ranch using :observer.

[Image: acceptors]

Does this mean that we can only have 100 concurrent connections? Let’s find out. We’ll change the number of acceptors to 2 by passing it as a parameter to Plug’s Cowboy adapter:

Plug.Adapters.Cowboy.http HttpApi.Router, [], [acceptors: 2]
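
In our app, that option goes into the start_server/0 function from earlier, for example:

# lib/http_api.ex
def start_server do
  {:ok, _} = Plug.Adapters.Cowboy.http(HttpApi.Router, [], acceptors: 2)
end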

Let’s see how the processes look now:

[Image: acceptors]

Okay, so we’ve got only 2 TCP connection acceptor processes running. Let’s try making 5 long running concurrent requests and see what happens.

# lib/http_api/router.ex

# Modify router to add some sleep
defmodule HttpApi.Router do
  use Plug.Router

  plug :match
  plug :dispatch

  # Sleep for 100 seconds before sending the response
  get "/" do
    :timer.sleep(100000)
    send_resp(conn, 200, "Hello Plug!")
  end

  match _ do
    send_resp(conn, 404, "Nothing here")
  end
end

Let’s make 5 requests now by running this in the iex prompt:

for n <- 1..5, do: spawn(fn -> :httpc.request('http://localhost:4000') end)

Start :observer from iex using :observer.start and see the process graph:

[Image: connection processes]

We can see that there are still only 2 acceptor processes, but 5 other processes were spawned somewhere else. These are connection processes, which hold the accepted connections. What this means is that acceptor processes do not dictate how many connections we can serve at a time; they only restrict how many new connections can be accepted at a time. Even if you want to serve 1000 concurrent requests, it’s safe to leave the number of acceptor processes at the default value of 100.

Summary