Wednesday, July 29, 2015

Assigning port numbers to (micro)services in Disnix deployment models

I have been working on many Disnix-related aspects for the last few months. For example, in my last blog post I announced a new Disnix release supporting experimental state management.

Although I am quite happy with the most recent feature addition, another major concern that the basic Disnix toolset does not solve is coping with the dynamism of the services and the environment in which a system has been deployed.

Static modeling of services and the environment has the following consequences:

  • We must write an infrastructure model reflecting all relevant properties of all target machines. Although writing such a configuration file for a new environment is doable, it is quite tedious and error-prone to keep it up to date and in sync with the machines' actual configurations -- whenever a machine's property or the network changes, the infrastructure model must be updated accordingly.

    (As a sidenote: when using the DisnixOS extension, a NixOS network model is used instead of an infrastructure model, from which the machines' configurations can be automatically deployed, making the consistency problem obsolete. However, the problem persists if we need to deploy to a network of non-NixOS machines.)
  • We must manually specify the distribution of services to machines. This problem typically becomes complicated if services have specific technical requirements on the host on which they need to run (e.g. operating system, CPU architecture, infrastructure components such as an application server).

    Moreover, a distribution could also be subject to non-functional requirements. For example, a service providing access to privacy-sensitive data should not be deployed to a machine that is publicly accessible from the internet.

    Because requirements may be complicated, it is typically costly to repeat the deployment planning process whenever the network configuration changes, especially if the process is not automated.

To cope with the above listed issues, I have developed a prototype extension called Dynamic Disnix and wrote a paper about it. The extension toolset provides the following:

  • A discovery service that captures the properties of the machines in the network from which an infrastructure model is generated.
  • A framework allowing someone to automate deployment planning processes using a couple of algorithms described in the literature.

Besides the dynamism of the infrastructure model and distribution models, I also observed that the services model (capturing the components of which a system consists) may be too static in certain kinds of situations.

Microservices


Lately, I have noticed that a new paradigm named microservice architectures is gaining a lot of popularity. In many ways this new trend reminds me of the service-oriented architecture days -- everybody was talking about it and had success stories, but nobody had a full understanding of it, nor an idea what it exactly was supposed to mean.

However, if I restrict myself to some of their practical properties, microservices (like "ordinary" services in a SOA context) are software components, and one important trait (according to Clemens Szyperski's Component Software book) is that a software component:

is a unit of independent deployment

Another important property of microservices is that they interact with each other by sending messages through the HTTP communication protocol. In practice, many people accomplish this by running processes with an embedded HTTP server (as opposed to using application servers or external web servers).

Deploying Microservices with Disnix


Although Disnix was originally developed to deploy a "traditional" service-oriented system case-study (consisting of "real" web services using SOAP as communication protocol), it has been made flexible enough to deploy all kinds of components. Likewise, Disnix can also deploy components that qualify themselves as microservices.

However, when deploying microservices (running embedded HTTP servers) there is one practical issue -- every microservice must listen on its own unique TCP port on a machine. Currently, meeting this requirement is completely the responsibility of the person composing the Disnix deployment models.

In some cases, this problem is more complicated than expected. For example, manually assigning a unique TCP port to every service for the initial deployment is straightforward, but it may also be desired to move a service from one machine to another. It could happen that a previously assigned TCP port conflicts with another service after moving it, breaking the deployment of the system.

The port assignment problem


So far, I take the following aspects into account when assigning ports:

  • Each service must listen on a port that is unique to the machine the service runs on. In some cases, it may also be desirable to assign a port that is unique to the entire network (instead of a single machine) so that it can be uniformly accessed regardless of its location.
  • The assigned ports must be within a certain range so that (for example) they do not collide with system services.
  • Once a port number has been assigned to a service, it must remain reserved until the service gets undeployed.

    The alternative would be to reassign all port numbers to all services for each change in the network, but that can be quite costly in case of an upgrade. For example, if we upgrade a network running 100 microservices, all 100 of them may need to be deactivated and activated to make them listen on their newly assigned ports.

Dynamically configuring ports in Disnix models


Since it is quite tedious and error-prone to maintain port assignments in Disnix models, I have developed a utility to automate the process. To dynamically assign ports to services, they must be annotated with the portAssign property in the services model (which can be changed to any other property through a command-line parameter):

{distribution, system, pkgs}:

let
  portsConfiguration = if builtins.pathExists ./ports.nix
    then import ./ports.nix else {};
  ...
in
rec {
  roomservice = rec {
    name = "roomservice";
    pkg = customPkgs.roomservicewrapper { inherit port; };
    dependsOn = {
      inherit rooms;
    };
    type = "process";
    portAssign = "private";
    port = portsConfiguration.ports.roomservice or 0;
  };

  ...

  stafftracker = rec {
    name = "stafftracker";
    pkg = customPkgs.stafftrackerwrapper { inherit port; };
    dependsOn = {
      inherit roomservice staffservice zipcodeservice;
    };
    type = "process";
    portAssign = "shared";
    port = portsConfiguration.ports.stafftracker or 0;
    baseURL = "/";
  };
}

In the above example, I have annotated the roomservice component with a private port assignment, meaning that we want to assign a TCP port that is unique to the machine, and the stafftracker component with a shared port assignment, meaning that we want to assign a TCP port that is unique to the network.

By running the following command we can assign port numbers:

$ dydisnix-port-assign -s services.nix -i infrastructure.nix \
    -d distribution.nix > ports.nix

The above command generates a port assignment configuration Nix expression (named: ports.nix) that contains port reservations for each service and port assignment configurations for the network and each individual machine:

{
  ports = {
    roomservice = 8001;
    ...
    zipcodeservice = 3003;
  };
  portConfiguration = {
    globalConfig = {
      lastPort = 3003;
      minPort = 3000;
      maxPort = 4000;
      servicesToPorts = {
        stafftracker = 3002;
      };
    };
    targetConfigs = {
      test2 = {
        lastPort = 8001;
        minPort = 8000;
        maxPort = 9000;
        servicesToPorts = {
          roomservice = 8001;
        };
      };
    };
  };
}

The above configuration attribute set contains three properties:

  • The ports attribute contains the actual port numbers that have been assigned to each service. The services defined in the services model (shown earlier) refer to the port values defined here.
  • The portConfiguration attribute contains port configuration settings for the network and each target machine. The globalConfig attribute defines a TCP port range with ports that must be unique to the network. Besides the port range it also stores the last assigned TCP port number and all global port reservations.
  • The targetConfigs attribute contains port configuration settings and reservations for each target machine.

We can also run the port assignment utility again with an existing port assignment configuration as a parameter:

$ dydisnix-port-assign -s services.nix -i infrastructure.nix \
    -d distribution.nix -p ports.nix > ports2.nix

The above command-line invocation reassigns TCP ports, taking the previous port reservations into account so that these will be reused where possible (e.g. only new services get a port number assigned). Furthermore, it also clears all port reservations of the services that have been undeployed. The new port assignment configuration is stored in a file called ports2.nix.
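
Since the services model shown earlier imports ./ports.nix, a typical upgrade workflow could be to regenerate the port assignments first and then redeploy the system. The following sketch illustrates the idea (the temporary file name is arbitrary):

$ dydisnix-port-assign -s services.nix -i infrastructure.nix \
    -d distribution.nix -p ports.nix > ports-new.nix
$ mv ports-new.nix ports.nix
$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix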

Conclusion


In this blog post, I have identified another deployment planning problem that manifests itself when deploying microservices that all have to listen on a unique TCP port. I have developed a utility to automate this process.

Besides assigning port numbers, there are many other kinds of problems that need a solution while deploying microservices. For example, you might also want to restrict their privileges (e.g. by running all of them as separate unprivileged users). It is also possible to take care of that with Dysnomia.

Availability


The dydisnix-port-assign utility is part of the Dynamic Disnix toolset that can be obtained from my GitHub page. Unfortunately, the Dynamic Disnix toolset is still a prototype with no end-user documentation or a release, so you have to be brave to use it.

Moreover, I have created yet another Disnix example package (a Node.js variant of the ridiculous StaffTracker example) to demonstrate how "microservices" can be deployed. This particular variant uses Node.js as implementation platform and exposes the data sets through REST APIs. All components are microservices using Node.js' embedded HTTP server listening on their own unique TCP ports.

I have also modified the TCP proxy example to use port assignment configurations generated by the tool described in this blog post.

Wednesday, July 8, 2015

Deploying state with Disnix


A couple of months ago, I announced a new Disnix release after a long period of little development activity.

As I have explained earlier, Disnix's main purpose is to automatically deploy service-oriented systems into heterogeneous networks of machines running various kinds of operating systems.

In addition to automating deployment, it has a couple of interesting non-functional properties as well. For example, it supports reliable deployment, because components implementing services are stored alongside existing versions and older versions are never automatically removed. As a result, we can always roll back to the previous configuration in case of a failure.
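
For example, if an upgrade turns out to be problematic, disnix-env can switch back to the previously deployed configuration with a single command:

$ disnix-env --rollback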

However, there is one major unaddressed concern when using Disnix to deploy a service-oriented system. Like the Nix package manager -- which serves as the basis of Disnix -- Disnix does not manage state.

The absence of state management has a number of implications. For example, when deploying a database, it gets created on first startup, often with a schema and initial data set. However, the structure and contents of a database typically evolves over time. When updating a deployment configuration that (for example) moves a database from one machine to another, the changes that have been made since its initial deployment are not migrated.

So far, state management in combination with Disnix has always been a problem that must be solved manually or by using an external solution. For a single machine, manual state management is often tedious but still doable. For large networks of machines, however, it may become a problem that is too big to handle.

A few years ago, I rushed out a prototype tool called Dysnomia to address state management problems in conjunction with Disnix and wrote a research paper about it. In the last few months, I have integrated the majority of the concepts of this prototype into the master versions of Dysnomia and Disnix.

Executing state management activities


When deploying a service-oriented system with Disnix, a number of deployment activities are executed. For the build and distribution activities, Disnix consults the Nix package manager.

After all services have been transferred, Disnix activates them and deactivates the ones that have become obsolete. Disnix consults Dysnomia to execute these activities through a plugin system that delegates the execution of these steps to an appropriate module for a given service type, such as a process, source code repository or a database.

Deployment activities carried out by Dysnomia require two mandatory parameters. The first parameter is a container specification capturing the properties of a container that hosts one or more mutable components. For example, a MySQL DBMS instance can be specified as follows:

type=mysql-database
mysqlUsername=root
mysqlPassword=verysecret

The above specification states that we have a container of type mysql-database that can be reached using the above listed credentials. The type attribute allows Dysnomia to invoke the module that executes the required deployment steps for MySQL.

The second parameter refers to a logical representation of the initial state of a mutable component. For example, a MySQL database is represented as a script that generates its schema:

create table author
( AUTHOR_ID  INTEGER       NOT NULL,
  FirstName  VARCHAR(255)  NOT NULL,
  LastName   VARCHAR(255)  NOT NULL,
  PRIMARY KEY(AUTHOR_ID)
);

create table books
( ISBN       VARCHAR(255)  NOT NULL,
  Title      VARCHAR(255)  NOT NULL,
  AUTHOR_ID  VARCHAR(255)  NOT NULL,
  PRIMARY KEY(ISBN),
  FOREIGN KEY(AUTHOR_ID) references author(AUTHOR_ID)
    on update cascade on delete cascade
);

A MySQL database can be activated in a MySQL DBMS by running the following command-line instruction with the two configuration files shown earlier as parameters:

$ dysnomia --operation activate \
  --component ~/testdb \
  --container ~/mysql-production

The above command first checks whether a MySQL database named testdb exists. If it does not exist, it gets created and the initial schema is imported. If the database with the given name already exists, it does nothing.

With the latest Dysnomia, it is also possible to run snapshot operations:

$ dysnomia --operation snapshot \
  --component ~/testdb \
  --container ~/mysql-production

The above command invokes the mysqldump utility to take a snapshot of the testdb in a portable and consistent manner and stores the output in a so-called Dysnomia snapshot store.

When running the following command-line instruction, the contents of the snapshot store is displayed for the MySQL container and testdb component:

$ dysnomia-snapshots --query-all --container mysql-database --component testdb
mysql-production/testdb/9b0c3562b57dafd00e480c6b3a67d29146179775b67dfff5aa7a138b2699b241
mysql-production/testdb/1df326254d596dd31d9d9db30ea178d05eb220ae51d093a2cbffeaa13f45b21c
mysql-production/testdb/330232eda02b77c3629a4623b498855c168986e0a214ec44f38e7e0447a3f7ef

As may be observed, the dysnomia-snapshots utility outputs three relative paths that correspond to three snapshots. The paths reflect a number of properties, such as the container name and component name. The last path component is a SHA256 hash code reflecting the snapshot's contents (that is computed from the actual dump).

Each container type follows its own naming convention to reflect its contents. While MySQL and most of the other Dysnomia modules use output hashes, different naming conventions are used as well. For example, the Subversion module uses the revision id of the repository.

A naming convention using an attribute to reflect a snapshot's contents has all kinds of benefits. For example, if the MySQL database does not change and we run the snapshot operation again, it discovers that a snapshot with the same output hash already exists, preventing it from storing the same snapshot twice and improving storage efficiency.

The absolute versions of the snapshot paths can be retrieved with the following command:

$ dysnomia-snapshots --resolve mysql-production/testdb/330232eda02b77c3629a4623b498855c...
/var/state/dysnomia/snapshots/mysql-production/testdb/330232eda02b77c3629a4623b498855...

Besides snapshotting, it is also possible to restore state with Dysnomia:

$ dysnomia --operation restore \
  --component ~/testdb \
  --container ~/mysql-production

The above command restores the latest snapshot generation. If no snapshots exist in the store, it does nothing.

Finally, it is also possible to clean things up. Similar to the Nix package manager, old components are never deleted automatically, but must be explicitly garbage collected. For example, deactivating the MySQL database can be done as follows:

$ dysnomia --operation deactivate \
  --component ~/testdb \
  --container ~/mysql-production

The above command does not delete the MySQL database. Instead, it simply marks it as garbage, but otherwise keeps it. Actually deleting the database can be done by invoking the garbage collect operation:

$ dysnomia --operation collect-garbage \
  --component ~/testdb \
  --container ~/mysql-production

The above command first checks whether the database has been marked as garbage. If this is the case (because it has been deactivated) it is dropped. Otherwise, this command does nothing (because we do not want to delete stuff that is actually in use).

Besides the physical state of components, all generations of snapshots in the store are also kept by default. They can be removed by running the snapshot garbage collector:

$ dysnomia-snapshots --gc --keep 3

The above command states that all but the last 3 snapshot generations should be removed from the snapshot store.

Managing state of service-oriented systems


With the new snapshotting facilities provided by Dysnomia, we have extended Disnix to support state deployment of service-oriented systems.

By default, the new version of Disnix does not manage state and its behaviour remains exactly the same as the previous version, i.e. it only manages the static parts of the system. To allow Disnix to manage state of services, they must be explicitly annotated as such in the services model:

staff = {
  name = "staff";
  pkg = customPkgs.staff;
  dependsOn = {};
  type = "mysql-database";
  deployState = true;
};

Setting the deployState attribute of a service to true causes Disnix to manage its state as well. For example, when changing the target machine of the database in the distribution model and running the following command:

$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix

Disnix executes the data migration phase after the configuration has been successfully activated. In this phase, Disnix snapshots the state of the annotated services on the target machines, transfers the snapshots to the new targets (through the coordinator machine), and finally restores their state.

In addition to data migration, Disnix can also be used as a backup tool. Running the following command:

$ disnix-snapshot

captures the state of all annotated services in the configuration that has been previously deployed and transfers their snapshots to the coordinator machine's snapshot store.

Likewise, the snapshots can be restored as follows:

$ disnix-restore

By default, the above command only restores the state of the services that are in the last configuration, but not in the configuration before. However, it may also be desirable to force the state of all annotated services in the current configuration to be restored. This can be done as follows:

$ disnix-restore --no-upgrade

Finally, the snapshots that are taken on the target machines are not deleted automatically. Disnix can also automatically clean the snapshot stores of a network of machines:

$ disnix-clean-snapshots --keep 3 infrastructure.nix

The above command deletes all but the last three snapshot generations from all machines defined in the infrastructure model.

Discussion


The extended implementations of Dysnomia and Disnix implement the majority of concepts described in my earlier blog post and the corresponding paper. However, there are a number of things that are different:

  • The prototype implementation stores snapshots in the /dysnomia folder (analogous to the Nix store that resides in /nix/store), which is a non-FHS compliant directory. Nix has a number of very good reasons to deviate from the FHS and requires packages to be addressed by their absolute paths across machines so that they can be uniformly accessed by a dynamic linker.

    However, such a level of strictness is not required for addressing snapshots. In the current implementation, snapshots are stored in /var/state/dysnomia which is FHS-compliant. Furthermore, snapshots are identified by their relative paths to the store folder. The snapshot store's location can be changed by setting the DYSNOMIA_STATEDIR environment variable, allowing someone to have multiple snapshot stores.
  • In the prototype, the semantics of the deactivate operation also imply deleting the state of a mutable component in a container. As this is a dangerous and destructive operation, the current implementation separates the actual delete operation into a garbage collect operation that must be invoked explicitly.
  • In both the prototype and the current implementation, a Dysnomia plugin can choose its own naming convention to identify snapshots. In the prototype, the naming must reflect both the contents and the order in which the snapshots have been taken. As a general fallback, I proposed using timestamps.

    However, timestamps are unreliable in a distributed setting because the machines' clocks may not be in sync. In the current implementation, I use output hashes as a general fallback. As hashes cannot reflect the order in their names, Dysnomia provides a generations folder containing symlinks to snapshots whose names reflect the order in which they have been taken.

The paper also describes two concepts that are still unimplemented in the current master version:

  • The incremental snapshot operation is unimplemented. Although this feature may sound attractive, I could only properly do this with Subversion repositories and MySQL databases with binary logging enabled.
  • To upgrade a service-oriented system (that includes moving state) atomically, access to the system in the transition and data migration phases must be blocked/queued. However, when moving large data sets, this time window could be incredibly big.

    As an optimization, I proposed an improved upgrade process in which incremental snapshots are transferred inside the locking time window, while full snapshots are transferred before the locking starts. Although it may sound conceptually nice, it is difficult to properly apply it in practice. I may still integrate it some day, but currently I don't need it. :)

Finally, there are some practical notes when using Dysnomia's state management facilities. Its biggest drawback is that the way state is managed (by consulting tools that store dumps on the filesystem) is typically expensive in terms of time (because it may take a long time writing a dump to disk) and storage. For very large databases, the costs may actually be too high.

As described in the previous blog post and the corresponding paper, there are alternative ways of doing state management:

  • Filesystem-level snapshotting is typically faster since files only need to be copied. However, its biggest drawback is that physical state may be inconsistent (because of unfinished write operations) and non-portable. Moreover, it may be difficult to manage individual chunks of state. NixOps, for example, supports partition-level state management of EBS volumes.
  • Database replication engines can also typically capture and transfer state much more efficiently.

Because Dysnomia's way of managing state has some expensive drawbacks, it has not been enabled by default in Disnix. Moreover, this was also the main reason why I did not integrate the features of the Dysnomia prototype sooner.

The reason why I have proceeded anyway is that I have to manage a big environment of small databases, whose sizes are only a few megabytes each. For such an environment, Dysnomia's snapshotting facilities work fine.

Availability


The state management facilities described in this blog post are part of Dysnomia and Disnix version 0.4. I also want to announce their immediate availability! Visit the Disnix homepage for more information!

As with the previous release, Disnix still remains a tool that should be considered an advanced prototype, despite the fact that I am using it on a daily basis to eat my own dogfood. :)

Tuesday, June 9, 2015

Developing command-line utilities

Some people around me have noticed that I frequently use the command-line for development and system administration tasks, in particular when I'm working on Linux and other UNIX-like operating systems.

Typically, when you see me working on something behind my computer, you will most likely observe a screen that looks like this:


The above screenshot shows a KDE plasma desktop session in which I have several terminal screens opened running a command-line shell session.

For example, I do most of my file management tasks on the command-line. Moreover, I'm also a happy Midnight Commander user and I use the editor that comes with it (mcedit) quite a lot as well.

As a matter of fact, mcedit is my favorite editor for the majority of my tasks, so you will never see me picking any side in the well-known Emacs vs vi discussion (or maybe I will upset people by saying that I also happen to know Vim and never got used to Emacs at all). :-)

Using the command-line


For Linux users who happen to do development, this way of working makes (sort of) sense. To most outsiders, however, the above screenshot looks quite menacing and they often question me why this is a good (or appealing) way of working. Some people even advised me to change my working style, because they consider it to be confusing and inefficient.

Although I must admit that it takes a bit of practice to learn a relevant set of commands and get used to it, the reasons for me to stick to such a working style are the following:

  • It's a habit. This is obviously not a very strong argument, but I'm quite used to typing commands when executing system administration and development tasks, and they are quite common in the Linux/Unix world -- much of the documentation that you will find online explains how to do things on the command-line.

    Moreover, I have been using a command-line interface since the very first moment I was introduced to a computer. For example, my first computer's operating system shell was a BASIC programming language interpreter, which you even had to use to perform simple tasks, such as loading a program from disk/tape and running it.

  • Speed. This argument may sound counter-intuitive to some people, but I can accomplish many tasks quite quickly on the command-line and probably even faster than using a graphical user interface.

    For example, when I need to do a file management task, such as removing all backup files (files having a ~ suffix), I can simply run the following shell command:

    $ rm *~
    

    If I avoided the shell and used a graphical file manager (e.g. Dolphin, GNOME Files, Windows Explorer, Apple Finder), picking these files and moving them to the trashcan would certainly take much more time and effort.

    Moreover, shells (such as bash) have many more useful goodies to speed things up, such as TAB-completion, allowing someone to only partially type a command or filename and let the shell complete it by pressing the TAB-key.

  • Convenient integration. Besides running a single command-line instruction to do a specific task, I also often combine various instructions to accomplish something more difficult. For example, the following chain of commands:

    $ wc -l $(find . -name \*.h -or -name \*.c) | head -n -1 | sort -n -r
    

    comes in handy when I want to generate an overview of lines of code per source file implemented in the C programming language and sort them in reverse order (that is the biggest source file first).

    As you may notice, I integrate four command-line instructions: the find instruction seeks for all relevant C source files in the current working directory, the wc -l command calculates the lines of code per file, the head command chops off the last line displaying the total values and the sort command does a numeric sort of the output in reverse order.

    Integration is done by pipes (using the | operator) which make the output of one process the input of another and command substitution (using the $(...) construct) that substitutes the invocation by its output.

  • Automation. I often find myself repeating common sets of shell instructions to accomplish certain tasks. I can conveniently turn them into shell scripts to make my life easier.
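
    For example, the cleanup and code-counting instructions shown earlier can be captured in a small script (a sketch; the file name is arbitrary):

    #!/bin/bash -e
    # cleanup-and-count.sh: removes editor backup files and displays a
    # lines-of-code overview of all C source files, biggest file first

    # Remove all backup files (files having a ~ suffix)
    rm -f *~

    # Count the lines of code per C source file, chop off the totals line
    # and sort the result numerically in reverse order
    wc -l $(find . -name \*.h -or -name \*.c) | head -n -1 | sort -n -r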

Recommendations for developing command-line utilities


Something that I consider an appealing property of the automation and integration aspects is that people can develop "their own" commands in nearly any programming language (using any kind of technology) in a straightforward way and easily integrate them with other commands.

I have developed many command-line utilities -- some of them have been developed for private use by my past and current employers (I described one of them in my bachelor's thesis for example), and others are publicly available as free and open source software, such as the tools part of the IFF file format experiments project, Disnix, NiJS and the reengineered npm2nix.

Although developing simple shell scripts as command-line utilities is a straightforward task, implementing good-quality command-line tools is often more complicated and time-consuming. I typically take the following properties into consideration while developing them:

Interface conventions


There are various ways to influence the behaviour of processes invoked from the command-line. The most common ways are through command-line options, environment variables and files.

I expose anything that is configurable as command-line options. I mostly follow the conventions of the underlying OS/platform:

  • On Linux and most other UNIX-like systems (e.g. Mac OS X, FreeBSD) I follow GNU's convention for command line parameters.

    For each parameter, I always define a long option (e.g. --help) and for the most common parameters also a short option (e.g. -h). I only implement a purely short option interface if the target platform does not support long options, such as classic UNIX-like operating systems.
  • On DOS/Windows, I follow the DOS convention. That is: command line options are single character only and prefixed by a slash, e.g. /h.

I always implement two command-line options, namely a help option displaying a help page that summarizes how the command can be used and a version option displaying the name and version of the package, and optionally a copyright + license statement.

I typically reserve non-option parameters to file names/paths only.

If certain options are crosscutting among multiple command line tools, I also make them configurable through environment variables in addition to command-line parameters.

For example, Disnix exposes each deployment activity (e.g. building a system, distributing and activating it) as a separate command-line tool. Disnix has the ability to deploy a system to multiple target environments (called: a profile). For the following chain of command-line invocations that deploys a system, it makes more sense to set the profile once through an environment variable:

$ export DISNIX_PROFILE=testenv
$ manifest=$(disnix-manifest -s services.nix -i infrastructure.nix \
    -d distribution.nix)
$ disnix-distribute $manifest
$ disnix-activate $manifest
$ disnix-set $manifest

than to pass the -p testenv parameter four times (once to each command-line invocation).

I typically use files to process or produce arbitrary sets of data:

  • If a tool takes one single file as input or produces one single output or a combination of both, I also allow it to read from the standard input or write to the standard output so that it can be used as a component in a pipe. In most cases, supporting these special file descriptors is almost as straightforward as opening arbitrary files.

    For example, the ILBM image viewer's primary purpose is just to view an ILBM image file stored on disk. However, because I also allow it to read from the standard input I can also do something like this:

    $ iffjoin picture1.ILBM picture2.ILBM | ilbmviewer
    

    In the above example, I concatenate two ILBM files into a single IFF container file and invoke the viewer to view both of them without storing the intermediate result on disk first. This can be useful for a variety of purposes.

  • A tool that produces output writes to the standard output data that can be parsed/processed by another process in a pipeline. All other output, e.g. errors, debug messages and notifications, goes to the standard error.

Finally, every process should return an exit status when it finishes. By convention, if everything went OK it should return 0, and if some error occurs a non-zero exit status that uniquely identifies the error.
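
The following sketch of a hypothetical filter utility ties these conventions together -- it reads from a file or from the standard input, writes its result to the standard output, reports errors on the standard error and returns a non-zero exit status in case of a failure:

#!/bin/bash
# countlines: hypothetical utility counting the lines of the given file,
# or of the standard input when no file parameter (or -) is provided

input="${1:--}"   # default to "-", denoting the standard input

if [ "$input" != "-" ] && [ ! -f "$input" ]
then
    # Diagnostics go to the standard error, not the standard output
    echo "$0: input file does not exist: $input" >&2
    exit 1
fi

if [ "$input" = "-" ]
then
    wc -l             # read from the standard input
else
    wc -l < "$input"
fi
# The exit status of wc (0 on success, non-zero otherwise) becomes the exit
# status of the script, because it is the last command that gets executed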

Command-line option parsing


As explained in the previous section, I typically follow the command-line option conventions of the platform. However, parsing them is usually not that straightforward. Some things we must take into account are:

  • We must know which parameters are command-line option flags (e.g. starting with -, -- or /) and which are non-option parameters.
  • Some command-line options have a required argument, some take an optional argument and some take none.

Luckily, there are many libraries available for a variety of programming languages/platforms implementing such a parser. So far, I have always managed to find a suitable one for the platforms I have targeted.


I have never implemented a sophisticated command-line parser myself. The only case in which I ended up implementing a custom parser is in the native Windows and AmigaOS ports of the IFF libraries projects -- I could not find any libraries supporting their native command-line option style. Fortunately, the command-line interfaces were quite simple, so it did not take me that much effort.
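
To illustrate what such a parser typically has to do, the following shell sketch uses the enhanced getopt command (shipped with util-linux on most Linux systems) to parse both short and long options for a hypothetical tool:

#!/bin/bash
# Sketch: parsing GNU-style command-line options for a hypothetical tool

params=$(getopt -n example-tool -o m:hv \
    -l max-concurrent-transfers:,help,version -- "$@") || exit 1
eval set -- "$params"

maxConcurrentTransfers=2   # default value

while true
do
    case "$1" in
        -m|--max-concurrent-transfers) maxConcurrentTransfers="$2"; shift 2;;
        -h|--help)    echo "Usage: example-tool [OPTION] MANIFEST"; exit 0;;
        -v|--version) echo "example-tool 0.1"; exit 0;;
        --)           shift; break;;
    esac
done

echo "Maximum concurrent transfers: $maxConcurrentTransfers"
echo "Non-option parameters: $@"

Libraries such as getopt_long in C or argparse in Python implement the same pattern natively.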

Validation and error reporting


Besides parsing the command-line options (and non-options), we must also check whether all mandatory parameters are set, whether they have the right format and set default values for unspecified parameters if needed.

Furthermore, in case of an error regarding the inputs, we must report it to the caller and exit the process with a non-zero exit status.
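
Continuing the option parsing sketch above, such checks could look as follows (the parameter names are hypothetical):

# A mandatory non-option parameter must be provided
if [ -z "$1" ]
then
    echo "ERROR: no manifest file has been specified!" >&2
    exit 1
fi

# The maximum amount of transfers must be numeric
case "$maxConcurrentTransfers" in
    ''|*[!0-9]*)
        echo "ERROR: the maximum amount of transfers must be a number!" >&2
        exit 1;;
esac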

Documentation


Another important concern while developing a command-line utility (that IMO is often overlooked) is providing documentation about its usage.

For every command-line tool, I typically implement a help option that displays a help page on the terminal. This help page contains the following sections:

  • A usage line describing briefly in what ways the command-line tool can be invoked and what their mandatory parameters are.
  • A brief description explaining what the tool does.
  • The command-line options. For each option, I document the following properties:

    • The short and long option identifiers.
    • Whether the option requires no parameter, an optional parameter, or a required parameter.
    • A brief description of the option.
    • What the default value is, if appropriate.

    An example of an option that I have documented for the disnix-distribute utility is:

    -m, --max-concurrent-transfers=NUM  Maximum amount of concurrent closure
                                        transfers. Defaults to: 2
    

    which states that the maximum number of concurrent transfers can be set through the -m short option or the --max-concurrent-transfers long option, that it requires a numeric argument, and that it defaults to 2 if the option is unspecified.

  • An environment section describing which environment variables can be configured and their meanings.
  • I only document the exit statuses if any of them has a special meaning. Zero in case of success and non-zero in case of a failure is too obvious.

Besides concisely writing a help page, there are more practical issues with documentation -- the help page is typically not the only source of information that describes how command-line utilities can be used.

For projects that are a bit more mature, I also want to provide a manpage for every command-line utility that more or less contains the same information as the help page. In large and more complicated projects, such as Disnix, I also provide a Docbook manual that (besides detailed instructions and background information) includes the help pages of the command-line utilities in the appendix.

I used to write these manpages and Docbook help pages by hand, but it's quite tedious to write the same stuff multiple times. Moreover, it is even more tedious to keep them all up-to-date and consistent.

Fortunately, we can also generate the latter two artifacts. GNU's help2man utility comes in quite handy to generate a manual page by invoking the --help and --version options of an existing command-line utility. For example, by following GNU's convention for writing help pages, I was able to generate a reasonably good manual page for the disnix-distribute tool, by simply running:

$ help2man --output=disnix-distribute.1 --no-info --name \
  'Distributes intra-dependency closures of services to target machines' \
  --libtool ./disnix-distribute

If needed, the generated manual page can be augmented with additional information.
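
For example, help2man accepts an include file whose sections are merged into the generated page. The following sketch adds an (entirely made up) examples section:

$ cat > disnix-distribute.h2m <<"EOF"
[EXAMPLES]
Distribute the closures of a deployment manifest to the target machines:

  disnix-distribute manifest.xml
EOF
$ help2man --output=disnix-distribute.1 --no-info --include=disnix-distribute.h2m \
  --name 'Distributes intra-dependency closures of services to target machines' \
  --libtool ./disnix-distribute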

I also discovered a nice tool called doclifter that allows me to generate Docbook help pages from manual pages. I can run the following command to generate a Docbook man page section from the earlier generated manpage:

$ doclifter -x disnix-distribute.1

The above command-line instruction generates a Docbook 5 XML file (disnix-distribute.1.xml) from the earlier manual page and the result looks quite acceptable to me. The only thing I had to do was manually replace some xml:id attributes so that a command's help page can be properly referenced from the other Docbook sections.

Modularization


When implementing a command-line utility many concerns need to be implemented besides the primary tasks that it needs to do. In this blog post I have mentioned the following:

  • Interface conventions
  • Command-line option parsing
  • Validation and error reporting
  • Documentation

I tend to separate the command-line interface implementing the above concerns into a separate module/file (typically called main.c), unless the job that the command-line tool performs is relatively simple (e.g. it can be expressed in 1-10 lines of code). I consider modularization a good practice, and a module should preferably not grow too big or do too many things.

Reuse


When I implement a toolset of command-line utilities, such as Disnix, I often see that these tools have many common parameters, common validation procedures and a common procedure displaying the tool version.

I typically abstract them away in a common module, file or library so that I don't find myself duplicating them, which would make maintenance more difficult.
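
In a shell-based toolset, for instance, this could boil down to a shared include file that every tool sources (a hypothetical sketch):

# common.sh: functions shared by all command-line tools in the toolset

showVersion()
{
    echo "$0 (example-toolset 0.1)"
}

checkProfile()
{
    # Fall back to an environment variable, then to a default value
    profile="${profile:-${DISNIX_PROFILE:-default}}"
}

Each individual tool then simply includes it with: . ./common.sh and calls these functions where needed.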

Discussion


In this blog post, I have explained that I prefer the command-line for many development and system administration tasks. Moreover, I have also explained that I have developed many command-line utilities.

Creating good-quality command-line tools is not straightforward. To make myself and other people that I happen to work with aware of it, I have written down some of the aspects that I take into consideration.

Although the command-line has some appealing properties, it is obviously not perfect. Some command-line tools are weird, may significantly lack quality and cannot be integrated in a straightforward way through pipes. Furthermore, many shells (e.g. bash) implement a programming language having weird features and counter-intuitive traits.

For example, one of my favorite pitfalls in bash is its behaviour when executing shell scripts, such as the following script (for readers who are unfamiliar with the commands: false is a command that always fails):

echo "hello"
false | cat
echo "world"
false
echo "!!!"

When executing the script like this:

$ bash script.bash
hello
world
!!!

bash simply executes everything despite the fact that two commands fail. However, when I add the -e command line parameter, the execution is supposed to be stopped if any command returns a non-zero exit status:

$ bash -e script.bash
hello
world

However, there is still one oddity -- the pipe (false | cat) still succeeds, because the exit status of the pipe corresponds to the exit status of the last component of the pipe only! There is another way to check the status of the other components (through $PIPESTATUS), but this feels counter-intuitive to me! Fortunately, bash 3.0 and onwards have an additional setting that makes the behaviour come closer to what I expect:

$ bash -e -o pipefail script.bash
hello

Despite a number of oddities, the overall idea of the command-line is good IMO: it is a general purpose environment that can be interactive, programmed and extended with custom commands implemented in nearly any programming language that can be integrated with each other in a straight forward manner.

Meanwhile, now that I have made myself aware of some important concerns, I have adapted the development versions of all my free and open source projects to properly reflect them.

Thursday, April 30, 2015

An evaluation and comparison of Snappy Ubuntu

A few months ago, I noticed that somebody was referring to my "On Nix and GNU Guix" blog post from the Ask Ubuntu forum. The person who started the topic wanted to know how Snappy Ubuntu compares to Nix and GNU Guix.

Unfortunately, he did not read my blog post (or possibly one of the three Nix and NixOS explanation recipes) in detail.

Moreover, I was hoping that somebody else involved with Snappy Ubuntu would do a more in depth comparison and write a response, but this still has not happened yet. As a matter of fact, there is still no answer as of today.

Because of these reasons, I have decided to take a look at Snappy Ubuntu Core and do an evaluation myself.

What is Snappy Ubuntu?


Snappy is Ubuntu's new mechanism for delivering applications and system upgrades. It is used as the basis of their upcoming cloud and mobile distributions and is supposed to be offered alongside the Debian package manager that Ubuntu currently uses for installing and upgrading software in their next generation desktop distribution.

Besides the ability to deploy packages, Snappy also has a number of interesting non-functional properties. For example, the website says the following:

The snappy approach is faster, more reliable, and lets us provide stronger security guarantees for apps and users -- that's why we call them "snappy" applications.

Snappy apps and Ubuntu Core itself can be upgraded atomically and rolled back if needed -- a bulletproof approach that is perfect for deployments where predictability and reliability are paramount. It's called "transactional" or "image-based" systems management, and we're delighted to make it available on every Ubuntu certified cloud.

The text listed above contains a number of interesting quality aspects that have a significant overlap with Nix -- reliability, atomic upgrades and rollbacks, predictability, and being "transactional" are features that Nix also implements.

Package organization


The Snappy Ubuntu Core distribution uses a mostly FHS-compliant filesystem layout. One notable deviation is the folder in which applications are installed.

For application deployment the /apps folder is used, in which the files belonging to a specific application version reside in separate folders. Application folders use the following naming convention:

/apps/<name>[.<developer>]/<version>

Each application is identified by its name, version identifier and optionally a developer identifier, as shown below:

$ ls -l /apps
drwxr-xr-x 2 root ubuntu 4096 Apr 25 20:38 bin
drwxr-xr-x 3 root root   4096 Apr 25 15:56 docker
drwxr-xr-x 3 root root   4096 Apr 25 20:34 go-example-webserver.canonical
drwxr-xr-x 3 root root   4096 Apr 25 20:31 hello-world.canonical
drwxr-xr-x 3 root root   4096 Apr 25 20:38 webcam-demo.canonical
drwxr-xr-x 3 root ubuntu 4096 Apr 23 05:24 webdm.sideload

For example, the /apps/webcam-demo.canonical/1.0.1 folder refers to a package named webcam-demo, version 1.0.1, that is delivered by Canonical.

There are almost no requirements on the contents of an application folder. I have observed that the example packages seem to follow some conventions though. For example:

$ cd /apps/webcam-demo.canonical/1.0.1
$ find . -type f ! -iname ".*"
./bin/x86_64-linux-gnu/golang-static-http
./bin/x86_64-linux-gnu/fswebcam
./bin/golang-static-http
./bin/runner
./bin/arm-linux-gnueabihf/golang-static-http
./bin/arm-linux-gnueabihf/fswebcam
./bin/fswebcam
./bin/webcam-webui
./meta/readme.md
./meta/package.yaml
./lib/x86_64-linux-gnu/libz.so.1
./lib/x86_64-linux-gnu/libc.so.6
./lib/x86_64-linux-gnu/libX11.so.6
./lib/x86_64-linux-gnu/libpng12.so.0
./lib/x86_64-linux-gnu/libvpx.so.1
...
./lib/arm-linux-gnueabihf/libz.so.1
./lib/arm-linux-gnueabihf/libc.so.6
./lib/arm-linux-gnueabihf/libX11.so.6
./lib/arm-linux-gnueabihf/libpng12.so.0
./lib/arm-linux-gnueabihf/libvpx.so.1
...

Binaries are typically stored inside the bin/ sub folder, while libraries are stored inside the lib/ sub folder. Moreover, the above example also ships binaries for two kinds of system architectures (x86_64 and ARM) that reside inside the bin/x86_64-linux-gnu, bin/arm-linux-gnueabihf, lib/x86_64-linux-gnu and lib/arm-linux-gnueabihf sub folders.

The only sub folder that has a specific purpose is meta/ that is supposed to contain at least two files -- the readme.md file contains documentation in which the first heading and the first paragraph have a specific meaning, and the package.yaml file contains various meta attributes related to the deployment of the package.

Snappy's package storing convention also makes it possible to store multiple versions of a package next to each other, as shown below:

$ ls -l /apps/webcam-demo.canonical
drwxr-xr-x 7 clickpkg clickpkg 4096 Apr 24 19:38 1.0.0
drwxr-xr-x 7 clickpkg clickpkg 4096 Apr 25 20:38 1.0.1
lrwxrwxrwx 1 root     root        5 Apr 25 20:38 current -> 1.0.1

Moreover, every application folder contains a symlink named: current/ that refers to the version that is currently in use. This approach makes it possible to do atomic upgrades and rollbacks by merely flipping the target of the current/ symlink. As a result, the system always refers to an old or new version of the package, but never to an inconsistent mix of the two.
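
The essence of this symlink flipping technique can be illustrated with a few ordinary shell commands (a sketch -- not necessarily how Snappy implements it internally):

$ cd /apps/webcam-demo.canonical
# Prepare a symlink for the new version and atomically rename it over the
# old one; rename() never leaves the system without a current version
$ ln -s 1.0.1 current.new
$ mv -T current.new current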

Apart from storing packages in isolation, they also must be made accessible to end users. For each binary that is declared in the package.yaml file, e.g.:

# for debugging convenience we also make the binary available as a command
binaries:
 - name: bin/webcam-webui

a wrapper script is placed inside /apps/bin that is globally accessible by the users of a system through the PATH environment variable.

Each wrapper script contains the app name. For example, the webcam-webui binary (shown earlier) must be started as follows:

$ webcam-demo.webcam-webui

Besides binaries, package configurations can also declare services from which systemd jobs are composed. The corresponding configuration files are put into the /etc/systemd/system folder and also use a naming convention containing the package name.

Unprivileged users can also install their own packages. The corresponding application files are placed inside $HOME/apps and are organized in the same way as the global /apps folder.

Snappy's package organization has many similarities with Nix's package organization -- Nix also stores files belonging to a package in isolated folders in a special purpose directory called the Nix store.

However, Nix uses a more powerful way of identifying packages. Whereas Snappy only identifies packages with their names, version numbers and vendor identifiers, Nix package names are prefixed with unique hash codes (such as /nix/store/wan65mpbvx2a04s2s5cv60ci600mw6ng-firefox-with-plugins-27.0.1) that are derived from all build time dependencies involved to build the package, such as compilers, libraries and the build scripts themselves.

The purpose of using hash codes is to make a distinction between any variant of the same package. For example, when a package is compiled with a different version of GCC, linked against a different library dependency, when debugging symbols are enabled or disabled or certain optional features enabled or disabled, or the build procedure has been modified, a package with a different hash is built that is safely stored next to existing variants.

Moreover, Nix also uses symlinking to refer to specific versions of packages, but the corresponding mechanism is more powerful. Nix generates so-called Nix profiles which synthesize the contents of a collection of installed packages in the Nix store into a symlink tree so that their files (such as executables) can be referenced from a single location. A second symlink indirection refers to the Nix profile containing the desired versions of the packages.
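
For example, the nix-env tool manipulates these profile generations, and every operation that modifies a profile can be undone:

$ nix-env -i firefox     # install Firefox into the user's profile
$ nix-env -u firefox     # upgrade it; this creates a new profile generation
$ nix-env --rollback     # atomically switch back to the previous generation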

Nix profiles also allow unprivileged users to manage their own set of private packages that do not conflict with other users' private packages or the system-wide installed packages. However, partly because of Nix's package naming convention, the packages of unprivileged users can also be safely stored in the global Nix store, so that common dependencies can be shared among users in a secure way.

Dependency management


Software packages are rarely self contained -- they typically have dependencies on other packages, such as shared libraries. If a dependency is missing or incorrect, a package may not work properly or not at all.

I observed that in the Snappy example packages, all dependencies are bundled statically. For example, in the webcam-demo package, the lib/ sub folder contains the following files:

./lib/x86_64-linux-gnu/libz.so.1
./lib/x86_64-linux-gnu/libc.so.6
./lib/x86_64-linux-gnu/libX11.so.6
./lib/x86_64-linux-gnu/libpng12.so.0
./lib/x86_64-linux-gnu/libvpx.so.1
...
./lib/arm-linux-gnueabihf/libz.so.1
./lib/arm-linux-gnueabihf/libc.so.6
./lib/arm-linux-gnueabihf/libX11.so.6
./lib/arm-linux-gnueabihf/libpng12.so.0
./lib/arm-linux-gnueabihf/libvpx.so.1
...

As can be seen in the above output, all the library dependencies, including the libraries' dependencies (even libc), are bundled into the package. When running an executable or starting a systemd job, a container (essentially an isolated/restricted environment) is composed in which the process runs (with some restrictions) and in which it can find its dependencies in the "common FHS locations", such as /lib.

Besides static bundling, there seems to be a primitive mechanism that provides some form of sharing. According to the packaging format specification, it also possible to declare dependencies on frameworks in the package.yaml file:

frameworks: docker, foo, bar # list of frameworks required

Frameworks are managed like ordinary packages in /apps, but they specify additional required system privileges and require approval from Canonical to allow them to be redistributed.

Although it is not fully clear to me from the documentation how these dependencies are addressed, I suspect that the contents of the frameworks are made available to packages inside the containers in which they run.

Moreover, I noticed that dependencies are only addressed by their names and that they refer to the current versions of the corresponding frameworks. In the documentation, there seems to be no way (yet) to refer to other versions or variants of frameworks.

The Nix-way of managing dependencies is quite different -- Nix packages are constructed from source and the corresponding build procedures are executed in isolated environments in which only the specified build-time dependencies can be found.

Moreover, when constructing Nix packages, runtime dependencies are bound statically to executables, for example by modifying the RPATH of an ELF executable or wrapping executables in scripts that set environment variables allowing it to find its dependencies (such as CLASSPATH or PERL5LIB). A subset of the buildtime dependencies are identified by Nix as runtime dependencies by scanning for hash occurrences in the build result.
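
This static binding can be observed by inspecting an executable residing in the Nix store with a tool such as patchelf (a sketch; the store paths are abbreviated and merely illustrative):

$ patchelf --print-rpath /nix/store/<hash>-hello-2.10/bin/hello
/nix/store/<hash>-glibc-2.21/lib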

Because dependencies are statically bound to executables, there is no need to compose containers to allow executables to find them. Furthermore, they can also refer to different versions or variants of library dependencies of a package without conflicting with other package's dependencies. Sharing is also supported, because two packages can refer to the same dependency with the same hash prefix in the Nix store.

As a sidenote: with Nix you can also use a containerized approach by composing isolated environments (e.g. a chroot environment or container) in which packages can find their dependencies from common locations. A prominent Nix package that uses this approach is Steam, because it is basically a deployment tool conflicting with Nix's deployment properties. Although such an approach is also possible, it is only used in very exceptional cases.

System organization


Besides applications and frameworks, the base system of the Snappy Ubuntu Core distribution can also be upgraded and downgraded. However, a different mechanism is used to accomplish this.

According to the filesystem layout & updates guide, the Snappy Ubuntu Core distribution follows a specific partition layout:

  • boot partition. This is a very tiny partition used for booting and should be big enough to contain a few kernels.
  • system-a partition. This partition contains a minimal working base system. This partition is mounted read-only.
  • system-b partition. An alternative partition containing a minimal working base system. This partition is mounted read-only as well.
  • writable partition. A writable partition that stores everything else including the applications and frameworks.

Snappy uses an "A/B system partitions mechanism" to allow a base system to be updated as a single unit by applying a new system image. It is also used to roll back to the "other" base system in case of problems with the most recently-installed system by making the bootloader switch root filesystems.

NixOS (the Linux distribution built around the Nix package manager) approaches system-level upgrades in a different way and is much more powerful. In NixOS, a complete system configuration is composed from packages residing in isolation in the Nix store (like ordinary packages) and these are safely stored next to existing versions. As a result, it is possible to roll back to any previous system configuration that has not been garbage collected yet.

Creating packages


According to the packaging guide, creating Snap files is very simple. It is just creating a directory, putting some files in there, creating a meta/ sub folder with a readme.md and package.yaml file, and running:

$ snappy build .

The above command generates a Snap file, which is basically just a tarball file containing the contents of the folder.

In my opinion, creating Snap packages is not that easy -- the above process demonstrates that delivering files from one machine to another is straight forward, but getting a package right is another thing.

Many packages on Linux systems are constructed from source code. To properly do that, you need to have the required development tools and libraries deployed first, a process that is typically easier said than done.

Snappy does not provide facilities to make that process manageable. With Snappy, it is the packager's own responsibility.

In contrast, Nix is a source package manager and provides a DSL that somebody can use to construct isolated environments in which builds are executed, and it automatically deploys all buildtime dependencies that are required to build a package.

The build facilities of Nix are quite accessible. For example, you can easily construct your own private set of Nix packages or a shell session containing all development dependencies.
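
For example, the following expression (a minimal sketch; the name and the development dependencies are made up) describes an environment containing a few build tools and libraries:

with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "my-dev-environment";        # made-up name
  buildInputs = [ gnumake gcc zlib ]; # made-up development dependencies
}

Saving this expression as default.nix and running nix-shell in the same directory drops us into a shell session in which these dependencies are present:

$ nix-shell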

Moreover, Nix also implements transparent binary deployment -- if a particular Nix package with an identical hash exists elsewhere, we can download it from a remote location instead of building it from source ourselves.
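
For example, installing a package with the command below (assuming a binary cache, such as https://cache.nixos.org, has been configured as a substitute source) downloads a prebuilt binary with exactly the same hash if one is available, and only falls back to building from source otherwise:

$ nix-env -i hello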

Isolation


Another thing the Snappy Ubuntu Core distribution does with containers (besides using them to let a package find its dependencies) is restricting the things programs are allowed to do, such as the TCP/UDP ports they are allowed to bind to.

In Nix and NixOS, it is not a common practice to restrict the runtime behaviour of programs by default. However, it is still possible to impose restrictions on running programs, by composing a systemd job for a program yourself in a system's NixOS configuration.
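
For example, the following NixOS configuration fragment (a sketch with made-up names; I use GNU Hello merely as a placeholder for the program to confine) composes a systemd job with a number of sandboxing options enabled:

{ pkgs, ... }:

{
  systemd.services.myservice = {
    description = "Example service with a restricted runtime environment";
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      ExecStart = "${pkgs.hello}/bin/hello"; # placeholder for the program to confine
      PrivateTmp = true;                     # give the service its own private /tmp
      ProtectSystem = "full";                # mount most of the filesystem read-only for the service
      ProtectHome = true;                    # hide the users' home directories
    };
  };
}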

Overview


The following table summarizes the conceptual differences between the Snappy Ubuntu Core and Nix/NixOS covered in this blog post:

                                | Snappy Ubuntu Core            | Nix/NixOS
Concept                         | Binary package manager        | Source package manager (with transparent binary deployment)
Dependency addressing           | By name                       | Exact (using hash codes)
Dependency binding              | Container composition         | Static binding (e.g. by modifying RPATH or wrapping executables)
Systems composition management  | "A/B" partitions              | Package compositions
Construction from source        | Unmanaged                     | Managed
Unprivileged user installations | Supported without sharing     | Supported with sharing
Runtime isolation               | Part of package configuration | Supported optionally, by manually composing a systemd job

Discussion


Snappy shares some interesting traits with Nix that provide a number of huge deployment benefits -- by deviating from the FHS and storing packages in isolated folders, it becomes easier to store multiple versions of packages next to each other and to perform atomic upgrades and rollbacks.

However, something that I consider a huge drawback of Snappy is the way dependencies are managed. In the Snappy examples, all library dependencies are bundled statically consuming more disk space (and RAM at runtime) than needed.

Moreover, packaging shared dependencies as frameworks is not very convenient and requires approval from Canonical if they are to be distributed. As a consequence, I think this discourages modularizing systems, which is generally considered a good practice.

According to the framework guide, the purpose of frameworks is to extend the base system, not to serve as a sharing mechanism. The guide also says:

Frameworks exist primarily to provide mediation of shared resources (eg, device files, sensors, cameras, etc)

So it appears that sharing in general is discouraged. In many common Linux distributions (including Debian and derivatives such as Ubuntu), the degree of reuse is pushed close to the maximum: each library is packaged individually and libraries are sometimes even split into binary, development and documentation sub packages. I am not sure how Snappy is going to cope with such a fine granularity of reuse. Is Snappy going to be improved to support reuse as well, or is packaging huge monolithic blobs now considered a good thing?

Also, Snappy only does binary deployment and does not really help to alleviate the problem of constructing packages from source, which is also quite a challenge in my opinion. I see lots of room for improvement in this area as well.

Another funny observation is the fact that Snappy Ubuntu Core relies on advanced concepts such as containers to make programs work, while there are also simpler solutions available, such as static linking.

Finally, something that Nix/NixOS could learn from the Snappy approach is the runtime isolation of programs out-of-the-box. Currently, doing this with Nix/NixOS is not as convenient as with Snappy.

References


This is not the only comparison I have done between Nix/NixOS and another deployment approach. A few years ago while I was still a PhD student, I also did a comparison between the deployment properties of GoboLinux and NixOS.

Interestingly enough, GoboLinux addresses packages in a similar way as Snappy, supports sharing, does not provide runtime isolation of programs, but does have a very powerful source construction mechanism that Snappy lacks.

Friday, March 13, 2015

Disnix 0.3 release announcement

In the previous blog post, I have explained the differences between Disnix (which does service deployment) and NixOps (which does infrastructure deployment), and shown that both tools can be used together to address both concerns in a deployment process.

Furthermore, I raised a couple of questions and intentionally left one question unmentioned. The question that I was referring to is the following: "Is Disnix still alive or is it dead?".

The answer is that Disnix's development was progressing at a very low pace for some time after I left academia -- I made minor changes once in a while, but nothing really interesting happened.

However, for the last few months I have been using it on a daily basis and have made many big improvements. Moreover, I have reached a stable point and decided that this is a good moment to announce the next release!

New features


So what is new in this release?

Visualization tool


I have added a new tool to the Disnix toolset named disnix-visualize, which generates Graphviz images to visualize a particular deployment scenario. An example image is shown below:


The above picture shows a deployment scenario of the StaffTracker Java example in which services are divided over two machines in a network and have all kinds of complex dependency relationships as denoted by the arrows.

The tool was already included in the development versions for a while, but has never been part of any release.

Dysnomia


I also made a major change in Disnix's architecture. As explained in my old blog post about Disnix, activating and deactivating services cannot be done generically and I have developed a plugin system to take care of that.

This plugin system (formerly known as the Disnix activation scripts) has now become an independent tool named Dysnomia, which can also be applied in other contexts and used standalone.

For example, a running MySQL DBMS instance (called a container in Dysnomia terminology) could be specified in a configuration file (such as ~/mysql-production) as follows:

type=mysql-database
mysqlUsername=root
mysqlPassword=verysecret

A database can be encoded as an SQL file (~/test-database/createdb.sql) creating the schema:

create table author
( AUTHOR_ID  INTEGER       NOT NULL,
  FirstName  VARCHAR(255)  NOT NULL,
  LastName   VARCHAR(255)  NOT NULL,
  PRIMARY KEY(AUTHOR_ID)
);

create table books
( ISBN       VARCHAR(255)  NOT NULL,
  Title      VARCHAR(255)  NOT NULL,
  AUTHOR_ID  INTEGER       NOT NULL,
  PRIMARY KEY(ISBN),
  FOREIGN KEY(AUTHOR_ID) references author(AUTHOR_ID)
    on update cascade on delete cascade
);

We can use the following command-line instruction to let Dysnomia deploy the database to the MySQL DBMS container we have just specified earlier:

$ dysnomia --operation activate --component ~/test-database \
  --container ~/mysql-production
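
Similarly, the database can be deactivated again (a sketch; I assume that the deactivate operation takes the same parameters as the activate operation shown above):

$ dysnomia --operation deactivate --component ~/test-database \
  --container ~/mysql-production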

When Disnix has to execute deployment operations, two external tools are consulted -- Nix takes care of all deployment operations of the static parts of a system, and Dysnomia takes care of performing the dynamic activation and deactivation steps.

Concurrent closure transfers


In the previous versions of Disnix, only one closure (of a collection of services and their intra-dependencies) was transferred to a target machine at a time. If a target machine has more network bandwidth than the coordinator, this is usually fine, but in all other cases it slows the deployment process down.

In the new version, two closures are transferred concurrently by default. The amount of concurrent closure transfers can be adjusted as follows:

$ disnix-env -s services.nix -i infrastructure.nix \
  -d distribution.nix --max-concurrent-transfers 4

The last command-line argument states that 4 closures should be transferred concurrently.

Concurrent service activation and deactivation


In the old Disnix versions, the activation and deactivation steps of the services on a target machine were executed sequentially, i.e. one service on a machine at a time. In all my old test cases these steps were quite cheap/quick, but now that I have encountered much bigger systems, I have noticed that a lot of deployment time can be saved here.

In the new implementation, Disnix activates or deactivates services on the target machines concurrently, with a default of one service per machine at a time. The number of services that can be concurrently activated or deactivated per machine can be raised in the infrastructure model:

{
  test1 = {
    hostname = "test1";
    numOfCores = 2;
  };
}

In the above infrastructure model, the numOfCores attribute states that two services can be concurrently activated/deactivated on machine test1. If this attribute is omitted, it defaults to 1.

Multi connection protocol support


By default, Disnix uses an SSH protocol wrapper to connect to the target machines. There is also an extension available, called DisnixWebService, that uses SOAP + MTOM instead.

In the old version, changing the connection protocol meant that every target machine had to be reached with it. In the new version, you can also specify the target property and client interface per machine in the infrastructure model, supporting deployments that use multiple connection protocols:

{
  test1 = {
    hostname = "test1";
    targetProperty = "hostname";
    clientInterface = "disnix-ssh-client";
  };
  
  test2 = {
    hostname = "test2";
    targetEPR = http://test2:8080/DisnixWebService/services/DisnixWebService;
    targetProperty = "targetEPR";
    clientInterface = "disnix-soap-client";
  };
}

The above infrastructure model states the following:

  • To connect to machine: test1, the hostname attribute contains the address and the disnix-ssh-client tool should be invoked to connect to it.
  • To connect to machine: test2, the targetEPR attribute contains the address and the disnix-soap-client tool should be invoked to connect to it.

NixOps integration


As described in my previous blog post, Disnix does service deployment and can integrate NixOS' infrastructure deployment features with an extension called DisnixOS.

DisnixOS can now also be used in conjunction with NixOps -- NixOps can be used to instantiate and deploy a network of virtual machines:

$ nixops create ./network.nix ./network-ec2.nix -d ec2
$ nixops deploy -d ec2

and DisnixOS can be used to deploy services to them:

$ export NIXOPS_DEPLOYMENT=ec2
$ disnixos-env -s services.nix -n network.nix -d distribution.nix \
    --use-nixops

Omitted features


There are also a couple of features described in some older blog posts, papers, and my PhD thesis, which have not become part of the new Disnix release.

Dynamic Disnix


This is an extended framework built on top of Disnix supporting self-adaptive redeployment. Although I promised to make it part of the new release a long time ago, it did not happen. However, I did update the prototype to work with the current Disnix implementation, but it still needs refinements, documentation and other small things to make it usable.

Brave people who are eager to try it can pull the Dynamic Disnix repository from my GitHub page.

Snapshot/restore features of Dysnomia


In a paper I wrote about Dysnomia I also proposed state snapshotting/restoring facilities. These have not become part of the released versions of Dysnomia and Disnix yet.

The approach I have described is useful in some scenarios, but also has a couple of very big drawbacks. Moreover, it significantly alters the behavior of Disnix. I need to find a way to integrate these features properly, in such a way that they do not break the standard approach. Furthermore, these techniques must be applied selectively as well.

Conclusion


In this blog post, I have announced the availability of the next release of Disnix. Perhaps I should give it the codename: "Disnix Forever!" or something :-). Also, the release date (Friday the 13th) seems to be appropriate.

The previous release was considered an advanced prototype. Although I am using Disnix on a daily basis now to eat my own dogfood, and the toolset has become much more usable, I would not yet classify this release as very mature.

Disnix can be obtained by installing NixOS, through Nixpkgs or from the Disnix release page.

I have also updated the Disnix homepage a bit, which should provide you with more information.

Monday, March 9, 2015

On NixOps, Disnix, service deployment and infrastructure deployment

I have written many software deployment related blog posts covering tools that are part of the Nix project. However, there is one tool that I have not elaborated about so far, namely: NixOps, which has become quite popular these days.

Although NixOps is quite popular, its availability also leads to a bit of confusion with another tool. Some Nix users, in particular newbies, are suffering from this. The purpose of this blog post is to clear up that confusion.

NixOps


NixOps is advertised as the "NixOS-based cloud deployment tool". It basically extends the approach of NixOS (a Linux distribution built around the Nix package manager) of deploying a complete system configuration from a declarative specification to networks of machines, and it automatically instantiates and provisions the required machines (e.g. in an IaaS cloud environment, such as Amazon EC2) if requested.

A NixOps deployment process is driven by one or more network models that encapsulate multiple (partial) NixOS configurations. In a standard NixOps workflow, network models are typically split into a logical network model capturing settings that are machine independent and a physical network model capturing machine specific properties.

For example, the following code fragment is a logical network model consisting of three machines capturing the configuration properties of a Trac deployment, a web-based issue tracker system:

{
  network.description = "Trac deployment";

  storage =
    { pkgs, ... }:

    { services.nfs.server.enable = true;
      services.nfs.server.exports = ''
        /repos 192.168.1.0/255.255.255.0(rw,no_root_squash)
      '';
      services.nfs.server.createMountPoints = true;
    };

  postgresql =
    { pkgs, ... }:

    { services.postgresql.enable = true;
      services.postgresql.package = pkgs.postgresql;
      services.postgresql.enableTCPIP = true;
      services.postgresql.authentication = ''
        local all all                trust
        host  all all 127.0.0.1/32   trust
        host  all all ::1/128        trust
        host  all all 192.168.1.0/24 trust
      '';
    };

  webserver =
    { pkgs, ... }:

    { fileSystems = [
        { mountPoint = "/repos";
          device = "storage:/repos";
          fsType = "nfs";
        }
      ];
      services.httpd.enable = true;
      services.httpd.adminAddr = "root@localhost";
      services.httpd.extraSubservices = [
        { serviceType = "trac"; }
      ];
      environment.systemPackages = [
        pkgs.pythonPackages.trac
        pkgs.subversion
      ];
    };
}

The three machines in the above example have the following purpose:

  • The first machine, named storage, is responsible for storing Subversion source code repositories in the /repos folder and makes this folder available as an NFS mount.
  • The second machine, named postgresql, runs a PostgreSQL database server storing the tickets.
  • The third machine, named webserver, runs the Apache HTTP server hosting the Trac web application front-end. Moreover, it mounts the /repos folder as a network file system connecting to the storage machine so that the Trac web application can view the Subversion repositories stored inside it.

The above specification can be considered a logical network model, because it captures the configuration we want to deploy without any machine-specific characteristics. Regardless of the kind of machines we intend to deploy to, we want these services to be available.

However, a NixOS configuration cannot be deployed without any machine-specific settings. These remaining settings can be provided by writing a second model, the physical network model, which captures them:

{
  storage = 
    { pkgs, ...}:
    
    { boot.loader.grub.version = 2;
      boot.loader.grub.device = "/dev/sda";

      fileSystems = [
        { mountPoint = "/";
          label = "root";
        }
      ];

      swapDevices = [
        { label = "swap"; }
      ];

      networking.hostName = "storage";
    };

  postgresql = ...

  webserver = ...
}

The above partial network model specifies the following physical characteristics for the storage machine:

  • GRUB version 2 should be used as the bootloader and should be installed on the MBR of the hard drive: /dev/sda.
  • The hard drive partition with label: root should be mounted as the root partition.
  • The hard drive partition with label: swap should be mounted as the swap partition.
  • The hostname of the system should be: 'storage'.

By invoking NixOps with the two network models shown earlier as parameters, we can create a NixOps deployment -- an environment containing a set of machines that belong together:

$ nixops create ./network-logical.nix ./network-physical.nix -d test

The above command creates a deployment named: test. We can run the following command to actually deploy the system configurations:

$ nixops deploy -d test

The above command invokes the Nix package manager to build all the machine configurations, then transfers their corresponding package closures to the target machines, and finally activates the NixOS configurations. The end result is a collection of machines running the new configurations, provided all previous steps have succeeded.

If we adapt any of the network models and run the deploy command again, the system is upgraded. In case of an upgrade, only the packages that have changed are built and transferred, making this phase as efficient as possible.
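
For example (a brief sketch), after adapting one of the models we simply run the deploy command again, and we can inspect the resulting state of the machines with the info sub command:

$ nixops deploy -d test
$ nixops info -d test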

We can also replace the physical network model shown earlier with the following model:

{
  storage = {
    deployment.targetEnv = "virtualbox";
    deployment.virtualbox.memorySize = 1024;
  };

  postgresql = ...

  webserver = ...
}

The above physical network configuration states that the storage machine is a VirtualBox Virtual Machine (VM) requiring 1024 MiB of RAM.

When we instantiate a new deployment with the above physical network model and deploy it:

$ nixops create ./network-logical.nix ./network-vbox.nix -d vbox
$ nixops deploy -d vbox

NixOps does an extra step before doing the actual deployment of the system configurations -- it first instantiates the VMs by consulting VirtualBox and populates them with a basic NixOS disk image.

Similarly, we can also create a physical network model like this:

let
  region = "us-east-1";
  accessKeyId = "ABCD..."; # symbolic name looked up in ~/.ec2-keys

  ec2 =
    { resources, ... }:
    
    { deployment.targetEnv = "ec2";
      deployment.ec2.accessKeyId = accessKeyId;
      deployment.ec2.region = region;
      deployment.ec2.instanceType = "m1.medium";
      deployment.ec2.keyPair = resources.ec2KeyPairs.my-key-pair;
      deployment.ec2.securityGroups = [ "my-security-group" ];
    };
in
{
  storage = ec2;

  postgresql = ec2;

  webserver = ec2;

  resources.ec2KeyPairs.my-key-pair = {
    inherit region accessKeyId;
  };
}

The above physical network configuration states that the storage machine is a virtual machine residing in the Amazon EC2 cloud.

Running the following commands:

$ nixops create ./network-logical.nix ./network-ec2.nix -d ec2
$ nixops deploy -d ec2

automatically instantiates the virtual machines in EC2, populates them with basic NixOS AMIs and finally deploys the machines to run our desired Trac deployment.

(In order to make EC2 deployment work, you need to create the security group (e.g. my-security-group) through the Amazon EC2 console first, and you must set the AWS_SECRET_ACCESS_KEY environment variable to the secret access key that is needed to connect to the Amazon services).

Besides physical machines, VirtualBox, and Amazon EC2, NixOps also supports the Google Compute Engine (GCE) and Hetzner. Moreover, preliminary Azure support is also available in the development version of NixOps.

With NixOps you can also do multi-cloud deployment -- it is not required to deploy all VMs in the same IaaS environment. For example, you could also deploy the first machine to Amazon EC2, the second to Hetzner and the third to a physical machine.

In addition to deploying system configurations, NixOps can be used to perform many other kinds of system administration tasks that work on machine level.

Disnix


Readers who happen to know me a bit may notice that many NixOps features are quite similar to things I did in the past -- while I was working for Delft University of Technology as a PhD student, I was investigating distributed software deployment techniques and developed a tool named Disnix that also performs distributed deployment tasks using the Nix package manager as underlying technology.

I have received quite a few questions from people asking me things such as: "What is the difference between Disnix and NixOps?", "Is NixOps the successor of/a replacement for Disnix?", "Are Disnix and NixOps in competition with each other?"

The short answer is: while both tools perform distributed deployment tasks and use the Nix package manager as underlying (local) deployment technology, they are designed for different purposes and address different concerns. Furthermore, they can also be effectively used together to automate deployment processes for certain kinds of systems.

In the next sections I will try to clarify the differences and explain how they can be used together.

Infrastructure and service deployment


NixOps does something that I call infrastructure deployment -- it manages configurations that work on machine level and deploys entire system configurations as a whole.

What Disnix does is service deployment -- Disnix is primarily designed for deploying service-oriented systems. What a "service-oriented system" exactly means has always been open to debate, but a definition I have seen in the literature is "systems composed of platform-independent entities that can be loosely coupled and automatically discovered", etc.

Disnix expects a system to be decomposed into distributable units (called services in Disnix terminology) that can be built and deployed independently to machines in a network. These components can be web services (systems composed of web services typically qualify themselves as service-oriented systems), but this is not a strict requirement. Services in a Disnix-context can be any unit that can be deployed independently, such as web services, UNIX processes, web applications, and databases. Even entire NixOS configurations can be considered a "service" by Disnix, since they are also a unit of deployment, although they are quite big.

Whereas NixOps builds, transfers and activates entire Linux system configurations, Disnix builds, transfers and activates individual services on machines in a network and manages/controls them and their dependencies individually. Moreover, the target machines to which Disnix deploys are neither required to run NixOS nor Linux. They can run any operating system and system distribution capable of running the Nix package manager.

Being able to deploy services to heterogeneous networks of machines is useful for service-oriented systems. Although services might manifest themselves as platform-independent entities (e.g. because of their interfaces), they still have an underlying implementation that might be bound to technology that only works on a certain operating system. Furthermore, you might also want the ability to experiment with the portability of certain services among operating systems, or use a heterogeneous network of operating systems to exploit the unique selling points of each for the appropriate services (e.g. a Linux, Windows, OpenBSD hybrid).

For example, although I have mainly used Disnix to deploy services to Linux machines, I also once did an experiment with deploying services implemented with .NET technology to Windows machines running Windows-specific system services (e.g. IIS and SQL Server), because our industry partner in our research project was interested in this.

To be able to do service deployment with Disnix one important requirement must be met -- Disnix expects preconfigured machines to be present running a so-called "Disnix service" providing remote access to deployment operations and some other system services. For example, to allow Disnix to deploy Java web applications, a predeployed Servlet container, such as Apache Tomcat, must already be present on the target machine. Also other container services, such as a DBMS, may be required.

Disnix does not automate the deployment of such machine configurations (which include the Disnix service and containers); people are required to deploy a network of machines by other means first and to write an infrastructure model that reflects the machine configurations accordingly.
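
For example, an infrastructure model for a machine running the Disnix service, a MySQL DBMS and an Apache Tomcat server may look as follows (a sketch; the exact set of container properties depends on the services that must be deployed to it):

{
  test1 = {
    hostname = "test1";
    tomcatPort = 8080;       # property of the Apache Tomcat container
    mysqlPort = 3306;        # properties of the MySQL DBMS container
    mysqlUsername = "root";
    mysqlPassword = "secret";
  };
}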

Combining service and infrastructure deployment


To be able to deploy a service-oriented system into a network of machines using Disnix, we must first deploy a collection of machines running the required system services. In other words: infrastructure deployment is a prerequisite for doing service deployment.

Currently, there are two Disnix extensions that can be used to integrate service deployment and infrastructure deployment:

  • DisnixOS is an extension that complements Disnix with NixOS' deployment features to do infrastructure deployment. With this extension you can do tasks such as deploying a network of machines with NixOps first and then do service deployment inside the deployed network with Disnix.

    Moreover, with DisnixOS you can also spawn a network of NixOS VMs using the NixOS test driver and run automated tests inside them.

    A major difference from a user perspective between Disnix and DisnixOS is that the latter works with network models (i.e. networked NixOS configurations used by NixOps and the NixOS test driver) instead of infrastructure models and does the conversion between these models automatically.

    A drawback of DisnixOS is that service deployment is effectively tied to NixOS, which is a Linux distribution. DisnixOS is not very helpful if a service-oriented system must be deployed in a heterogeneous network running multiple kinds of operating systems.
  • Dynamic Disnix. With this extension, each machine in the network is supposed to publish its configuration and a discovery service running on the coordinator machine generates an infrastructure model from the supplied settings. For each event in the network, e.g. a crashing machine, a newly added machine or a machine upgrade, a new infrastructure model is generated that can be used to trigger a redeployment.

    The Dynamic Disnix approach is more powerful and not tied to NixOS specifically. Any infrastructure deployment approach (e.g. Norton Ghost for Windows machines) that includes the deployment of the Disnix service and container services can be used. Unfortunately, the Dynamic Disnix framework is still a rough prototype and needs to become more mature.

Is service deployment useful?


Some people have asked me: "Is service deployment really needed?", since it is also possible to deploy services as part of a machine's configuration.

In my opinion it depends on the kinds of systems that you want to deploy and the problems you want to solve. For some kinds of distributed systems, Disnix is not really helpful. For example, if you want to deploy a cluster of DBMSes that are specifically tuned for the underlying hardware, you cannot really make a decomposition into "distributable units" that can be deployed independently. The same holds for filesystem services, as shown in the Trac example -- doing an NFS mount is a deployment operation, but not really an independent unit of deployment.

As a sidenote: that does not imply that such things cannot be done with Disnix. You could still encapsulate an entire (or partial) machine-specific configuration as a service and deploy that, or do a network mount by deploying a script that performs the mount, but that defeats Disnix's purpose.

At the same time, service-oriented systems can also be deployed on infrastructure level, but this sometimes leads to a number of inconveniences and issues. Let me illustrate that by giving an example:


The above picture reflects the architecture of one of the toy systems (Staff Tracker Java version) I have created for Disnix for demonstration purposes. The architecture consists of three layers:

  • Each component in the upper layer is a MySQL database storing certain kinds of information.
  • The middle layer encapsulates web services (implemented as Java web applications) responsible for retrieving and modifying data stored in the databases. An exception is the GeolocationService, which retrieves data by other means.
  • The bottom layer contains a Java web application that is consulted by end-users.

Each component in the above architecture is a distributable service and each arrow denotes a dependency relationship between two of them, which manifests itself as an HTTP or TCP connection. Because the components are distributable, we could, for example, deploy all of them to one single machine, but we can also run each of them on a separate machine.
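
In a Disnix services model, such an architecture is captured by declaring every distributable component and its inter-dependencies. The fragment below is a sketch loosely based on the StaffTracker example (the package set and the attribute values are illustrative):

{distribution, system, pkgs}:

let
  customPkgs = import ./pkgs { inherit pkgs system; }; # illustrative local package set
in
rec {
  staff = {
    name = "staff";
    pkg = customPkgs.staff;
    type = "mysql-database";               # deployed to a MySQL container
  };

  StaffService = {
    name = "StaffService";
    pkg = customPkgs.StaffService;
    dependsOn = { inherit staff; };        # the web service connects to the staff database
    type = "tomcat-webapplication";        # deployed to an Apache Tomcat container
  };

  StaffTracker = {
    name = "StaffTracker";
    pkg = customPkgs.StaffTracker;
    dependsOn = { inherit StaffService; }; # the front-end consults the web service
    type = "tomcat-webapplication";
  };
}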

If we want to deploy the example system on infrastructure level, we may end up composing a networked machine configuration that looks as follows:


The above picture shows a deployment scenario in which the services are divided over two machines in a network:

  • The MySQL databases are hosted inside a MySQL DBMS running on the first machine.
  • The web application front-end and one of the web services granting access to the databases are deployed inside the Apache Tomcat Servlet container running on the first machine.
  • The remaining web services are deployed inside an Apache Tomcat container running on the second machine.

When we use NixOps to deploy the above machine configurations, the entire machine configurations are deployed and activated as a whole, which has a number of implications. For example, the containers have indirectly become dependent on each other, as can be seen in the picture below in which I have translated the dependencies from service level to container level:


In principle, Apache Tomcat does not depend on MySQL, so under normal circumstances these containers can be activated in any order. However, because we host a Java web application that requires a database, the order in which these services are activated suddenly does matter. If we activate them in the wrong order, the web service (and indirectly also the web application front-end) will not work. (In extreme cases: if a system has been poorly developed, it may even crash and need to be manually reactivated!)

Moreover, there is another implication -- the web application front-end also depends on services that are deployed to the second machine, and two of these services require access to databases deployed to the first machine. On container level, you could clearly see that this situation leads to two machines having a cyclic dependency on each other. That means that you cannot solve activation problems by translating service-level dependencies to machine-level dependencies.

As a matter of fact, NixOps allows cyclic dependencies between machines and activates their configurations in arbitrary order, and it is thus incapable of dealing with temporary or permanent inconsistency issues (because of broken dependencies) while deploying a system such as the one shown in the example.

Another practical issue with deploying such a system on infrastructure level is that redeployment is tedious, for example when a requirement changes. You need to adapt machine configurations as a whole -- you cannot easily specify a different distribution of services to machines.
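
With Disnix, by contrast, changing the mapping is a matter of editing the distribution model and running disnix-env again. For the example system, such a model could look like this (a sketch; the service and target names are taken from the example architecture and the earlier infrastructure models):

{infrastructure}:

{
  staff              = [ infrastructure.test1 ]; # database deployed to the first machine
  StaffService       = [ infrastructure.test1 ];
  StaffTracker       = [ infrastructure.test1 ];
  GeolocationService = [ infrastructure.test2 ]; # web service deployed to the second machine
}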

As a final note, in some organisations (including a company that I have worked for in the past) it is common practice that infrastructure and service deployment are separated. For example, one department is responsible for providing machines and system services, and another department (typically a development group) is responsible for building the services and deploying them to the provided machines.

Conclusion


In this blog post, I have described NixOps and elaborated on the differences between NixOps and Disnix -- the former tool does infrastructure deployment, while the latter does service deployment. Infrastructure deployment is a prerequisite of doing service deployment and both tools can actually be combined to automate both concerns.

Service deployment is particularly useful for distributed systems that can be decomposed into "distributable units" (such as service-oriented systems), but not all kinds of distributed systems.

Moreover, NixOps is a tool that has been specifically designed to deploy NixOS configurations, while Disnix can deploy services to machines running any operating system capable of running the Nix package manager.

Finally, I have been trying to answer three questions, which I mentioned somewhere in the middle of this blog post. There is another question I have intentionally avoided that obviously needs an answer as well! I will elaborate more on this in the next blog post.

References


More information about Disnix, service and infrastructure deployment, and in particular the integration of deployment concerns, can be found in my PhD thesis.

Interestingly enough, during my PhD thesis defence there was also a question about the difference between service and infrastructure deployment. This blog post is a more elaborate version of the answer I gave earlier. :-)