Thinking bigger: a free engineering school

“What do you do?” or “What does your startup do?” are tired questions in San Francisco, a city where you are measured by your works. The superficial startup ideas, the digital smugness, and the unnatural culture of tech celebrity are sometimes enough to make you want to throw your smartphone into the ocean and dive in after it. Go away from San Francisco for a while, though, and you’ll start to remember the thing that drew you here in the first place. You remember that, beneath the fads and the clumsy attempts of first-time entrepreneurs, lives a city of dreamers and doers defying cynics and conventional wisdom.

Our dream is to create a modern school for engineers to attend throughout their career. Even more audaciously, we want to offer this school for free. Various experiences have led us down this path: our experience as founders exposed us to the pains of hiring, and our experience as engineers gave us the aspiration to be lifelong learners. We believe we can connect the dots and create a better solution for both.

A founder’s perspective

A founder really only has two critical roles: setting company direction and hiring. A common mantra in the startup world is “execution is everything”, and, ultimately, it is the team that must execute the vision. Founders often share the trait of being control freaks, so letting go of that control is one of the first challenges they will face. That doesn’t mean letting the company run amok, because setting company direction and expectations is the founder’s other role. I don’t have any sympathy for founders who gripe about their team; all I see is a founder who has failed at their own job.

Unfortunately, hiring is brutally hard. Referrals are still the best way to hire, and when that well runs dry, the options are not good. Sometimes it felt like the only thing we could do was watch the months slip by as we tried job boards, contingency recruiters, in-house recruiters, engineering speed dating, engineering auctioning, and any number of ridiculous things.

I find it incredible that large companies use talent acquisition as a major hiring strategy. I understand how it happens, though, when you have executives staring at quarterly hiring targets. Acquisition is probably the only way to expedite the hiring process, but then you’re left to solve the challenge of hammering together engineers from different company cultures into a cohesive team. To me, though, that’s like panning for gold when you could build a gold factory instead.

In fact, there is a vast supply of engineers: engineers who are talented, humble, and eager to learn and contribute. Why, then, do companies drool over Ivy League new graduates whose only experience is school projects when this other pool of talent is available? At the end of the day, it’s about risk. A big part of hiring is mitigating risk, and most experienced engineers don’t have “clean” resumes. Do too many years at the same company mean a lack of initiative? Do too few mean they are flighty? Also, their technical experience almost certainly doesn’t completely intersect with the tech stack your company is using, so how quickly can they ramp up?

Still, I can understand the justification for passing on many of these people. Companies avoid risk in hiring so much that one veto among six interviews is enough to disqualify a candidate. This is troubling, however, as we have all been in that candidate’s shoes. At some point in our careers, a hiring manager or investor took a risk on us, perhaps despite our previous experience, and maybe that was just the opportunity we needed. In the happiest stories, we went on to be very successful for the company, justifying the risk. Unfortunately, the stories often don’t have happy endings, which causes companies to write off a large number of candidates.

We believe we can change that and broaden the candidate pool beyond those who fit a narrow description. Our school is the perfect environment for individuals to demonstrate initiative and the ability to master new skills quickly. Imagine companies bringing interesting projects to the class, or challenges like Google Summer of Code. Opportunities to collaborate are opportunities to build trust, and trust is what makes referral-based hiring so much better than traditional recruiting. I would have gladly exchanged my many hours spent interviewing for time spent mentoring, probably with more productive results. Properly executed, the school could recycle an endless supply of qualified engineers back to grateful companies.

An engineer’s perspective

The only constant is change, and there’s no truer statement in engineering. An engineer’s career is composed of “learning” years and “plateau” years, and it’s deadly to stay too long in the “plateau” years. As a result, professional engineers are accustomed to, and take pride in, frequent self-teaching. It’s a good thing that engineers enjoy self-teaching, because right now it’s the only option available. It’s certainly not very practical or efficient to go back to college or even get a graduate degree, and community college and other continuing education programs can’t keep pace with modern technology. Instead, you’re left sifting through a thousand different eager voices on the internet, most of whom got something working but don’t really know what they’re talking about. You sit there, like some kind of technology archaeologist, painstakingly piecing together which APIs were thoughtfully designed and which were some historical remnant. Imagine learning how to be a blacksmith by walking into an empty smithy with nothing but a handful of StackOverflow posts. Maybe possible, but very slow.

On the flip side, think back to some of your more fruitful “learning” years. They are probably characterized by two things: a substantial challenge and a significant mentor. In engineering, there are trivial challenges and there are interesting challenges, but both are time-consuming for a newcomer. For example, debugging a well-known library issue can take as much time as designing a clever caching strategy. Mentors streamline the trivial challenges by filling in the tribal knowledge. Of course, reference texts are also great resources, but because they are comprehensive, important nuggets get lost in a sea of trivia. Lack of mentorship breeds great inefficiencies, and this is made obvious when one observes technologists today praising “recent” advances that are simply rediscovered concepts pioneered decades ago.

We believe that our school brings the same advantages as structured mentorship, and allows engineers to challenge themselves throughout their career. Interestingly, it also brings about another important advantage. Collaboration with like-minded peers is so rewarding that it can keep a team together long after the product or engineering challenge loses its luster. However, if you’ve ever tried to chase that feeling by meeting random engineers at a beer-fueled engineering meetup, you’ve probably been disappointed. A learning environment is infinitely more effective at building meaningful collaborative relationships. Those relationships may be their own end, may lead to professional collaboration, or may lead to your next cofounder.

CodePath

When we founded CodePath, we wanted to build it on several principles. First and foremost is a standard of excellence: we want to attract the best, and that requires the highest-quality classes. Second, classes are project-based, because engineering is best learned by doing. Third, projects are built in groups, because engineering in industry is not a solo activity. To us, product, design, and collaboration are as much a part of engineering as libraries and frameworks, especially in a startup.

Making the classes free is a risky decision. Engineers know there’s no such thing as a free lunch, so they’ll be wary of low quality or some kind of catch; that’s an impression we’ll have to overcome. When we set out to build this school, we were determined to build a school that we would attend ourselves. As a student, I don’t have thousands to spend on classes every year. As long as we can prove that we can train engineers to a certain standard, there’s money on the table from startups and big companies alike.

We’ve already been using our curriculum in corporate training for the past several months, and we’re excited to bring the classes to other engineers. Are you interested? Apply for our iOS evening bootcamp or our Android evening bootcamp. These initial bootcamps are targeted towards experienced developers interested in iOS and Android. If you would like to mentor a group, email us at help@thecodepath.com.

Follow us @thecodepath or join our mailing list for updates.

Asynchronous Processing in Web Applications, Part 2: Developers Need to Understand Message Queues

In the first part of this series, we explained asynchronous processing, when you might need to use it and why leveraging a database for that purpose is not necessarily the best option. In this post, we will explore a smarter approach to asynchronous processing using “message queues”.

Message Queues

Given the reasons that a traditional database is not the right approach for asynchronous processing, let’s take a look at why a message queue might be a smarter choice. With a message queue, we can efficiently support a significantly higher volume of concurrent messages; messages are pushed in real time instead of periodically polled; messages are automatically cleaned up after being received; and we don’t need to worry about any pesky deadlocks or race conditions. In short, message queues present a fundamentally sound strategy for handling high-volume asynchronous processing.

Before we continue, it is important to mention that I am not claiming a database can never be used as a queue. In fact, nearly any database can be made to work as a low- or medium-volume queue given enough time and effort from a clever developer. Moreover, some databases, such as PostgreSQL, have additional support for queueing, along with solid libraries that make this a manageable solution. If you are processing a low to medium volume of messages, largely as a job queue, and you already use PostgreSQL to store your data, there are undoubtedly cases where avoiding a separate message queue is the best choice, at least until you feel pain. That said, a message queue can be quite a bit more versatile, powerful, and flexible depending on your needs. Let’s dive into the world of message queues.

What is a message queue?

At the simplest level, a message queue is a way for applications and discrete components to send messages between one another in order to communicate reliably. Message queues are typically (but not always) ‘brokers’ that facilitate message passing by providing a protocol or interface which other services can access. This interface connects producers, which create messages, and consumers, which then process them. Within the context of a web application, one common case is that the producer is a client application (e.g. Rails or Sinatra) that creates messages based on interactions from the user (e.g. a user signing up). The consumers in that case are typically daemon processes (e.g. rake tasks) that process the arriving messages. In other cases, various subsystems will need to communicate with each other via these messages.
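
To make this concrete, here is a minimal sketch of a producer and a consumer in Ruby, assuming a recent version of the Bunny AMQP client and a broker running locally; the queue name and message body are purely illustrative:

require "bunny"

# Connect to a broker assumed to be running on localhost
conn = Bunny.new
conn.start
channel = conn.create_channel
queue   = channel.queue("user.signups")

# Producer: publish a message describing work to be done
channel.default_exchange.publish("send welcome email to user 42",
  :routing_key => queue.name)

# Consumer (usually a separate daemon process): handle messages as they arrive
queue.subscribe(:block => true) do |delivery_info, properties, body|
  puts "Received: #{body}"
end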

Features

On top of the fundamental ability to pass messages quickly and reliably, most message queues offer additional complementary features. For instance, most support multiple named queues, so that messages of different types can be routed to and consumed by different services: the consumer processing email-sending requests can be on a totally different queue (and/or server) than the one that resizes uploaded images. Message delivery behavior will also often vary depending on your needs: certain messages go from one producer to a single consumer (direct), while other times a message is sent to multiple listening consumers (fanout).
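
Continuing the hypothetical Bunny sketch above, fanout delivery looks roughly like this; every queue bound to the exchange receives its own copy of each message (exchange and queue names are again illustrative):

# Fanout exchange: broadcasts to all bound queues
exchange = channel.fanout("app.events")

email_queue = channel.queue("emails").bind(exchange)
image_queue = channel.queue("images").bind(exchange)

# Both bound queues receive a copy of this message
exchange.publish("user 42 uploaded a photo")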

Another critical feature of message queues is robustness and reliability through persistence strategies. In order to keep reliability of the messages high, most message queues offer the ability to persist all messages to disk until they have been received and completed by the consumer(s). Even if the applications or the message queue itself happens to crash, the messages are safe and will be accessible to consumers as soon as the system is operational. In contrast, transient tasks performed synchronously in a web app or on a thread in memory will be lost if anything goes awry. This is especially relevant when dealing with deployments and updating code, since restarting components no longer puts your messages or tasks at risk.
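
In the Bunny sketch above, for example, this persistence is opt-in per queue and per message; roughly:

# Durable queue: the queue definition survives a broker restart
jobs = channel.queue("jobs", :durable => true)

# Persistent message: written to disk so it survives a broker crash
channel.default_exchange.publish("resize image 7",
  :routing_key => jobs.name, :persistent => true)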

In addition, message queues can give you greater visibility into the volume of messages. At a minimum, since there’s typically a broker for messages, inspecting the tasks being processed at any given time is much easier than with synchronous processing. Similarly, handling a high volume of tasks is simpler, since you can just horizontally scale your message consumers to handle higher loads with minimal fuss. As an added bonus, the application itself no longer has to deal with these tasks, so higher volumes of tasks won’t slow down the response times of the front-end web app.

Service Oriented Architecture

Arguably, though, the most powerful reason to introduce a message queue is the architectural benefit its use can afford. When message queues enable lightweight communication between any number of disparate services, it becomes much easier to separate your application into many small subsystems, which often improves your architecture in several ways: the individual pieces become easier to maintain, test, debug, and scale. In addition, with a service-oriented approach made possible by message queues, you can easily have multiple teams working along well-defined boundaries, even using different tools or stacks when necessary.

Considerations

All that said, message queues are not a panacea and, like all tools, have their downsides. Setting up and configuring message queues, especially more complicated ones, can add a lot of moving parts to your application. Often in small apps you don’t actually need to introduce that overhead early on, and can instead slowly offload tasks to message queues as the volume of traffic increases over time. Also, with a traditional message queue, error handling is sometimes a very manual effort when a message or task fails, and communicating with message queues adds complexity to your application logic. In addition, for more general queues, you often have to define your own state machine for messages: a message that needs to be passed to three different services for processing requires you to manually architect a queue workflow that supports that. Nonetheless, for asynchronous processing of many types, a message queue is often the best tool for the job.

Still, which message queue should we use then? What options and alternatives exist? Why would we pick one over another? What are the differences?

Comparing Message Queues

Overview

The good news and the bad news is that there are a lot of message queues to choose from. In fact, there are dozens of message queues with all sorts of names, features, and different pros and cons, such as Sparrow, Starling, Kestrel, Kafka, Amazon SQS, and many more that will not be discussed here at length.

The easiest place to start is with the emerging open standard protocols for message queues: AMQP and STOMP. These are the most popular message queue standards around today, and many message queues implement one or both of these protocols. In addition, there is also JMS (Java Message Service), which is widely used on the JVM. For completeness, there is also MSMQ, which is not covered in this post but is the de facto queue for most .NET applications.

Message Queue Options

Certain generic message queue implementations have emerged as well, many of which are built on top of the aforementioned protocols. The most popular and well-supported general-purpose message queues available today are RabbitMQ, Apache ActiveMQ, and ZeroMQ.

How do these options compare? Well, fortunately all three tools are being actively developed and maintained, have a large user base, good documentation and decent client libraries across many languages. So in a way, you can’t really go wrong with any of these options. However, let’s spend a moment understanding how they compare with one another. In the end, the one to use depends on the needs of your application and the required flexibility.

ZeroMQ

The interesting thing to understand is that ZeroMQ is actually not so much a pre-packaged message queue like the others, but rather a framework for building message queues. ZeroMQ focuses mostly on passing messages very efficiently over the wire, while RabbitMQ acts as a full-fledged ‘broker’ which handles persisting, filtering, and monitoring messages. ZeroMQ has no broker built in, which means it has no central dispatcher to manage your messages; it is really not a “full service” message queue.

Think of ZeroMQ as its own toolbox or framework for creating message queues tailored to your own needs; you could use it, for example, to build a basic instant messaging service. RabbitMQ, on the other hand, tries to be a more complete queue implementation. RabbitMQ is much more packaged and as such requires less configuration and setup overhead for typical use cases.
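
As a taste of that do-it-yourself style, here is a minimal push/pull sketch in Ruby using the ffi-rzmq bindings (endpoints and message contents are illustrative); notice that there is no broker anywhere, just two sockets:

require "ffi-rzmq"

context = ZMQ::Context.new

# Producer side: pushes messages onto a socket
push = context.socket(ZMQ::PUSH)
push.bind("tcp://127.0.0.1:5555")

# Consumer side: pulls messages off the socket (often a separate process)
pull = context.socket(ZMQ::PULL)
pull.connect("tcp://127.0.0.1:5555")

push.send_string("alice: hello, bob!")

message = ""
pull.recv_string(message) # blocks until a message arrives
puts message              # => "alice: hello, bob!"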

Of course, with RabbitMQ or ActiveMQ, the built-in broker and persistence add quite a bit of overhead, but those libraries choose to sacrifice raw speed to provide a much richer feature set with less manual tinkering. ZeroMQ is an excellent solution when you want more control or just want to do it yourself. In other cases, where you just want to use a queue for typical use cases and are willing to accept the higher overhead, you should consider RabbitMQ or ActiveMQ.

RabbitMQ and ActiveMQ

Comparing RabbitMQ with ActiveMQ is closer to a head-on comparison because they are solving similar problems. However, the differences here are mostly in the details. ActiveMQ is built in Java on JMS (the Java Message Service) and is very frequently used within applications on the JVM (Java, Scala, Clojure, et al). ActiveMQ also supports STOMP, which provides access from Ruby, PHP, and Python. RabbitMQ is built in Erlang, powered by AMQP, and is used frequently with applications in Erlang, Python, PHP, Ruby, et al.

As you might expect, the developers’ preferences influenced the choices they made throughout each queue. For example, the configuration for ActiveMQ is in XML, and the routing of messages is handled with custom rules defined by ActiveMQ. In contrast, the configuration of RabbitMQ is through an Erlang syntax, and advanced routing and configuration follow standard AMQP specifications. The protocols (AMQP vs JMS) used by each queue have certain underlying differences as well. One key difference is that in AMQP a producer sends a message to the broker without knowing the intended distribution strategy, while in JMS the producer is explicitly aware of the strategy to be used.

The key takeaway here is that for a general-purpose message queue that can handle all your messaging needs and support most advanced requirements, you could use any of the three options listed above. However, my preference and recommendation in most cases for a Ruby or Python web application is RabbitMQ on AMQP. Conversely, if you are building on the JVM stack with Scala or Java, then my recommendation would be ActiveMQ. In either case, if you are looking for a customizable general message queue, you won’t be disappointed. Of course, don’t be too quick to dismiss ZeroMQ if you are looking for raw speed and are interested in a light-weight, do-it-yourself protocol for delivering messages.

Specialized Queues

This is not the end of the story, though. While message queues are incredibly helpful here, the major ones listed above were built to be generic and general-purpose. That is, they don’t solve one particular use case but instead support a wide range of use cases requiring varying levels of configuration. These ‘industrial strength’ message queues can power chat rooms, instant messaging services, inter-service communication, or even multiplayer online games.

For the average web application, though, the requirements are very different. To understand your requirements, ask yourself whether you are sending messages to communicate between different services in your application, or whether you just want to process simple background jobs. In the latter case, we certainly don’t need the most powerful or flexible message queue, nor its associated setup costs.

Intra-service vs Job Processing

Most popular web applications really only need a way to do background job processing and offload tasks to an asynchronous queue. These more specific and constrained requirements open up the possibility for a lighter-weight message queue that is easier to use and focused on doing one thing well.

Takeaways

In this article we have covered the features of a message queue, how they work, popular message queue libraries, and how to determine whether you need a general message queue or a more specialized one. Hopefully you now understand the potential benefits of introducing a message queue into your system architecture, as well as the tradeoffs and overhead of adding one prematurely. In addition, you should be able to see how a message queue complements a traditional database, and how queues used appropriately can help improve the modularity, maintainability, and scalability of your applications.

In the next part of this series, we will explore a specialized message queue built specifically to process background jobs, called a work queue. We will explore how a work queue operates, what features they typically have, how to scale them, and which are the most popular today. I hope this introduction was helpful; please let me know what topics or issues you’d like to see covered in future parts of this series.

Asynchronous Processing in Web Applications, Part 1: A Database Is Not a Queue

When hacking on web applications, you will inevitably find certain actions that are taking too long and as a result must be pulled out of the HTTP request/response cycle. In other cases, applications will need an easy way to reliably communicate with other services in your system architecture.

The specific reasons will vary: perhaps the website has a real-time element, there’s a live chat feature, we need to resize and process images, slice up and transcode video, analyze our logs, or simply send emails at a high volume. In all these cases, though, asynchronous processing becomes important to your operations.

Fortunately, there are a variety of libraries across all platforms intended to provide asynchronous processing. In this series, I want to explore this landscape, understand the solutions available, how they compare and more importantly how to pick the right one for your needs.

Asynchronous Processing

Let’s start by understanding asynchronous processing a bit better. Web applications undoubtedly have a great deal of code that executes as part of the HTTP request/response cycle. This is suitable for faster tasks that can be done within hundreds of milliseconds or less. However, any processing that would take more than a second or two will ultimately be far too slow for synchronous execution. In addition, there is often processing that needs to be scheduled in the future and/or processing that needs to affect an external service.

Synchronous HTTP Illustration

In these cases when we have a task that needs to execute but that is not a candidate for synchronous processing, the best course of action is to move the execution outside the request/response cycle. Specifically, we can have the synchronous web app simply notify another separate program that certain processing needs to be done at a later time.

Asynchronous Processing

Now, instead of the task running as a part of the actual web response, the processing runs separately so that the web application can respond quickly to the request. In order to achieve asynchronous processing, we need a way to allow multiple separate processes to pass information to one another. One naive approach might be to persist these notifications into a traditional database and then retrieve them for processing using another service.

Why not a database?

There are many good reasons why a traditional database is not well-suited for asynchronous processing. Often you might be tempted to use a database because there can be an understandable reluctance to introduce new technologies into a web stack. This leads people to try to use their RDBMS as a way to do background processing or service communication. While this can often appear to ‘get the job done’, there are a number of limitations and concerns that should not be overlooked.

Asynchronous Processing Model

There are two aspects to any asynchronous processing: the service(s) that create processing tasks and the service(s) that consume and process those tasks. When using a database for this, there would typically be a table whose records represent tasks, along with a flag column tracking the state of each task and whether it has completed.
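
As a sketch of what such a table might look like, created here from Ruby with the pg gem (the table and column names are hypothetical):

require "pg"

conn = PG.connect(:dbname => "myapp")

# A tasks table doubling as a queue: one row per task, plus a state flag
conn.exec <<-SQL
  CREATE TABLE IF NOT EXISTS tasks (
    id         serial PRIMARY KEY,
    payload    text NOT NULL,
    state      text NOT NULL DEFAULT 'pending', -- 'pending', 'working' or 'done'
    created_at timestamp NOT NULL DEFAULT now()
  );
SQL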

Polling Can Hurt

The first issue revolves around how best to consume and process the tasks stored within a table. With a traditional database, this typically means a service that is constantly querying for new processing tasks or messages. The service may have a poll interval of every second or every minute, but it is constantly asking the database if there has been an update. Polling the database for processing has several downsides. You might set a short polling interval and hammer your database with constant queries, or you could set a long interval, in which case there will be many unnecessary processing delays. In the event of having multiple processing steps, the fastest route through your system has now been delayed to the sum of all the different polling intervals.

DB Polling

Polling requires very fast and frequent queries to the table to be most effective, which adds a significant load to the database even at a medium volume. Even worse, a given database table is typically tuned to be fast at adding data, updating data, or querying data, but almost never all three on the same table. If you are constantly inserting, updating, and querying, race conditions and deadlocks become inevitable. These ‘locks’ occur because multiple consumers are all “fighting” over the same table while the producers are constantly adding new items. You will see database load increase, performance decrease, and pile-ups become increasingly common as the volume scales up.
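
To illustrate the pattern, a polling consumer against the hypothetical tasks table sketched earlier ends up looking something like this (reusing conn from that sketch; process is a stand-in for your task handler). Note the unavoidable tradeoff buried in the sleep interval:

# Naive polling consumer: claim one pending task, process it, repeat
loop do
  result = conn.exec(<<-SQL)
    UPDATE tasks SET state = 'working'
    WHERE id = (SELECT id FROM tasks
                WHERE state = 'pending'
                ORDER BY id LIMIT 1 FOR UPDATE)
    RETURNING id, payload
  SQL

  if result.ntuples > 0
    process(result[0]) # hypothetical task handler
    conn.exec("UPDATE tasks SET state = 'done' WHERE id = #{result[0]['id']}")
  else
    sleep 5 # a shorter interval hammers the DB; a longer one delays tasks
  end
end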

Manual Cleanup

In addition, clearing old jobs can be troublesome: at even a medium volume, there will be many completed tasks in the table, and they must be removed at regular intervals or the table will grow quite large. Unfortunately, this has to be performed manually, and frequent deletes are often not particularly efficient, especially when they happen alongside the constant updates and queries hitting the same table.

Scaling Won’t Be Easy

In the end, while a database will appear to work at first as a simple way to send background instructions to be processed, this approach will come back to bite you. The many limitations such as constant polling, manual cleanups, deadlocks, race conditions and manual handling of failed processing should make it fairly clear that a database table is not a scalable strategy for this type of processing.

Takeaways

The takeaway here is that asynchronous processing is useful whether you need to send an SMS, validate emails, approve posts, process orders, or just pass information between services. This type of background processing is simply not a task that a traditional RDBMS is best suited to solve. While there are exceptions to this rule, such as PostgreSQL with its excellent NOTIFY support, there is a different class of software that was built from scratch for asynchronous processing use cases.

This type of software is called a message queue, and you should strongly consider using one instead for a far more reliable, efficient, and scalable way to perform asynchronous processing tasks. In the next part of this series, we will explore several different message queues in detail to understand how they work, as well as how to contrast and compare the various options. I hope this introduction was helpful; please let me know what you’d like to have covered in future parts of this series.

Continue to Part 2: Developers Need To Understand Message Queues

Adventures in Scaling, Part 3: PostgreSQL Streaming Replication

In my last post in this series, I covered how to get PostgreSQL set up, running, and properly tuned on your system. That works well for setting up a primary database, but depending on your needs, you may find yourself in a position where you would like to start taking advantage of the streaming replication and high availability built into PostgreSQL 9.

When running a database such as PostgreSQL on a server, a single primary master database that performs all reads and writes for your applications will likely not be able to sustain your traffic forever. At a certain point, your application will generate too many concurrent reads and writes, you’ll want to be able to recover from database failure, or for whatever reason your database will need help scaling.

Hot Standby? High Availability?

At a high level, the challenge of scaling out a database is a problem of synchronization. With a single database, there is a single source of truth for your application: all records are in one place, which acts as the authority for your data. However, once you begin scaling your database, you have multiple servers responsible for your data. As data is created, updated, and destroyed, these changes must be propagated as quickly as possible to all the other servers. If servers are out of sync, data can be misreported or outright lost, so each database must be kept in a consistent state with the others, with little margin for error. Since no single approach solves this problem perfectly for all needs, there are many strategies for scaling a database.

The approach you should take as load increases has a lot to do with the specific characteristics of your application. Is your application read-heavy or write-heavy? If your application has many more reads by volume, then the most common strategy is to set up secondary databases that synchronize data with the primary database. This strategy of setting up read-only databases that track changes to the primary is only part of the scaling story. There will also be cases where you have too many writes for a single box (multi-master) or you need to distribute data across many boxes (sharding), but this post will focus on solving the problem of simply scaling reads. Look for a future post on more advanced scaling and replication strategies.

Fortunately, the strategy for scaling reads is fairly straightforward. The primary database that accepts writes is called the “master” and all other databases that can only help with reads are called “slaves”. Typically, you can set up as many slaves as you’d like to help distribute the querying load on the database. In addition, in some database setups, these slaves can also help in cases of database/server failure: in the event that the master database fails or becomes unavailable, a slave in a good position can be elected to take over as the master so that your application can still accept and retrieve data without downtime. Slaves that can be elected as masters are called hot standbys, and this is one strategy contributing toward high availability, which means that your database is robust against failure.

Understanding Replication

To understand how to achieve high availability and how multiple PostgreSQL databases stay synchronized, one must understand the concepts of “continuous archiving” and “write-ahead logs”. The key idea behind high availability is that the database must be resilient to server failure. Specifically, in the event that one database server goes down, another one should be ready to step up and assume its responsibilities. This secondary server that must be ready to step up at any time is called a “standby”.

In order to achieve this synchronization, the primary and standby servers work together. The primary server is continuously archiving the data coming in or being modified, while each standby server operates in continuous recovery mode, reading the archives from the primary. This “continuous archive” is a comprehensive log of all activity coming into the database (every table created, every row inserted, et al). This comprehensive log is called the “write-ahead log” or WAL and is essential to the replication process.

WAL is described well in the PostgreSQL user’s guide:

Write-Ahead Logging (WAL) is a standard method for ensuring data integrity. WAL’s central concept is that changes to data files (where tables and indexes reside) must be written only after those changes have been logged, that is, after log records describing the changes have been flushed to permanent storage. If we follow this procedure, we do not need to flush data pages to disk on every transaction commit, because we know that in the event of a crash we will be able to recover the database using the log: any changes that have not been applied to the data pages can be redone from the log records.

As logs are written by the master, these log (WAL) records are transferred to the other database servers that act as standbys. Generally the best way for this to happen is to stream WAL records to the standby as they’re generated, without waiting for the WAL file to be completely filled. This process of moving the WAL records in small chunked segments is known as “Streaming Replication”. One important aspect to note is that the transfer happens asynchronously: the WAL records are shipped after a transaction has been committed, so there is a small delay between committing a transaction on the primary and the changes becoming visible on the standby. However, unlike older transfer strategies, streaming replication generally keeps this delay under one second.

Setting This Up

Great, now that we have covered the high-level concepts, let’s jump into how to setup hot standby streaming replication step-by-step. This guide assumes you have a UNIX server provisioned (Debian-based) and have already followed our guide to setup PostgreSQL on that machine.

The first thing to do to set up a standby is to provision another server (preferably with the same specs as the primary) and follow the steps to set up the basic PostgreSQL server again, as similarly to the primary database as possible. Ideally, the standby is essentially a clone of your first server, so if you can duplicate the server, feel free to do that as a time saver. These database servers should ideally be part of the same local network and communicate through their private IPs.

Master Setup

Next, we need to set up the master so that the database can connect to your standby machines in /etc/postgresql/9.1/main/pg_hba.conf:

# /etc/postgresql/9.1/main/pg_hba.conf
host   replication      all             SLAVEIP/32              trust

Be sure to replace SLAVEIP with the correct IP address for the standby. Next, it is a good practice to create the empty WAL directories on both the master and the slaves:

master$ sudo mkdir /var/lib/postgresql/9.1/wals/
master$ sudo chown postgres /var/lib/postgresql/9.1/wals/
slave$ sudo mkdir /var/lib/postgresql/9.1/wals/
slave$ sudo chown postgres /var/lib/postgresql/9.1/wals/

You should also be sure that the master has easy keyless SSH access to the slave:

su - postgres
ssh-keygen -b 2048
ssh-copy-id -i ~/.ssh/id_rsa.pub root@slave.host.com

Now that your master can easily access your slave, you will need to enable streaming replication and WAL logging for your database in /etc/postgresql/9.1/main/postgresql.conf:

# /etc/postgresql/9.1/main/postgresql.conf
# ...
# Listen allows slaves to connect
listen_addresses = 'MASTERIP, localhost'
wal_level = hot_standby
wal_keep_segments = 32 # number of WAL segments to retain for standbys
max_wal_senders = 2    # max number of slaves that can connect
archive_mode    = on
archive_command = 'rsync -a %p root@slave.host.com:/var/lib/postgresql/9.1/wals/%f </dev/null'

Now create a user that the slave can use to connect to master:

> SELECT pg_switch_xlog();
> CREATE USER replicator WITH SUPERUSER ENCRYPTED PASSWORD 'secretpassword12345';

At this point you should probably restart PostgreSQL:

sudo /etc/init.d/postgresql restart

In order for the standby to start replicating, the entire database needs to be archived and then reloaded into the standby. This process is called a “base backup”, and can be performed on the master and then transferred to the slave. Let’s create a snapshot to transfer to the slave by capturing a binary backup of the entire PostgreSQL data directory.

su - postgres
psql -c "SELECT pg_start_backup('backup', true)"
rsync -av --exclude postmaster.pid --exclude pg_xlog /var/lib/postgresql/9.1/main/ root@slave.host.com:/var/lib/postgresql/9.1/main/
psql -c "SELECT pg_stop_backup()"

This will transfer all the binary data from the PostgreSQL master to the slave. Next, time to setup the standby machine to begin replication.

Standby Setup

First, let’s shutdown the database on the standby:

sudo /etc/init.d/postgresql stop

Let’s enable the slave as a hot standby:

# /etc/postgresql/9.1/main/postgresql.conf
# ...
wal_level = hot_standby
hot_standby = on

and modify the recovery settings:

# /var/lib/postgresql/9.1/main/recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=MASTERIP port=5432 user=replicator password=secretpassword12345'
trigger_file = '/tmp/pgsql.trigger'
restore_command = 'cp -f /var/lib/postgresql/9.1/wals/%f %p </dev/null'
archive_cleanup_command = 'pg_archivecleanup /var/lib/postgresql/9.1/wals/ %r'

One interesting option worth noting in the file above is the trigger_file parameter, which tells the standby when to become the master. If that file ever appears (for example, after a failure of the master), the standby will promote itself to master. In case this standby is one day elected master, you need to ensure certain files are in place. Let’s clone the connections listed on the master:

# /etc/postgresql/9.1/main/pg_hba.conf
host   replication      all             MASTERIP/32              trust
host   replication      all             SLAVEIP/32              trust

At this point, the slave should be ready to start accepting WAL records and replicating all data from the primary. Let’s restart the database:

sudo /etc/init.d/postgresql start

Verification

Check the logs to make sure that the connection was properly established on the slave (and/or master):

# /var/log/postgresql/postgresql-9.1-main.log
LOG:  entering standby mode
LOG:  streaming replication successfully connected to primary

If you see those two lines, then this means replication has started successfully. If not, follow the error messages and fix the configuration until you can restart and see these lines.

You can also check certain processes on the machines to ensure replication is established:

master$ ps -ef | grep sender
slave$ ps -ef | grep receiver

If both machines show the expected process, then replication is running along happily.
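
You can also ask the master directly for its view of the connected standbys via the pg_stat_replication view (available since PostgreSQL 9.1); one row in the ‘streaming’ state per standby confirms the WAL stream is flowing:

master$ psql -c "SELECT client_addr, state, sent_location FROM pg_stat_replication;"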

Wrapping Up

At this point, replication should be established on the standby machine and you can repeat this process for any number of additional standbys. Once a standby is established, the WAL records should be continuously streamed to the machine keeping the data synchronized with the primary. In the event of a disconnection, the standby is generally resilient enough to backfill data and catch back up to the primary.

Once the standbys are in place, since they have been configured as ‘hot’, they can easily be used as read slaves. Simply set up your application to read from these slaves but write only to the master. If you are using Rails, one good gem that supports this concept of standbys is Octopus.

Hopefully you have found this post useful as a straightforward guide to understanding and setting up hot standby streaming replication. In a future post, we hope to cover more advanced replication strategies, as well as a detailed guide to a large data migration from MySQL to PostgreSQL. If you run into any problems or spot errors in this guide, please let us know so we can update the steps.

You’re Overthinking It

You comb Hacker News daily, marveling at the neatly packaged startup tales, uber-effective best practices, super clever engineering solutions, and lots and lots of links to websites filled with Helvetica, minimalism, and pastel colors. You’ve attended Lean Startup workshops, read Four Steps to the Epiphany, and subscribe to the Silicon Valley Product Group blog.

Honestly, it’s all very intimidating.

My product advice, from one overthinker to another overthinker – throw it all away. I mean, read the articles, enjoy the stories, and try to form your own opinions, but I wouldn’t take it too seriously.

A year into my first startup, my first major product epiphany was to never, never, ever try to build a product you couldn’t be a user for. That may be obvious, but I still see people discussing strategies for building products that they don’t use. There is no better user study, no more accurate persona, than asking yourself what is good. There are probably product people out there who can do it, but, no offense, it’s probably not you, and it’s certainly not me.

There are many pros to building a product that you would use. Actually using a product (and I mean really using it) gives you access to the powers of intuition, an infinitely more valuable product tool than reason. Your intuition explains in a moment what it takes your reason an hour to break down. Your reason will lead you down a dozen wrong roads.

Another way of saying that is: if you think it’s cool, it’s probably cool. If you think it sucks, it probably sucks.

However, building a product for yourself doesn’t give you a free pass from user research, personas, and all the other things that product gurus tell you to do. Spend a year having the same product discussions with the same group of people, and the discussions will lose all meaning. Talking to five people outside of the company will bring you back to earth real fast.

One last thing: whatever you build, make sure it looks good and is of the highest quality possible.

But wait, you say, look at eBay, Amazon, and Craigslist – they look like crap. Implement an MVP with product/market fit, and it doesn’t matter what it looks like. That’s true sometimes, but it also depends on where you are on Maslow’s hierarchy of needs. The lower on the pyramid your product is, the crappier it can look. If your product is core to helping people make money, pirate movies, or sell a useless couch, you don’t need a designer. But if you’re high on the pyramid, an ugly, clunky UI makes it impossible for people to see your vision.

The Steve Jobs biography talks about one of the three original Apple philosophies: an odd word called impute. It’s basically a philosophy about impressions. On products, they said, “If we present them in a slipshod manner, they will be perceived as slipshod”.

Speaking of the biography, I’ll wrap up with my impression of that book. It’s a story about a product genius, but it’s a story with as many missteps as triumphs. I take the moral of the story to be: forget the experts, the know-it-alls, and the doubters. Trust yourself and your vision, and go build something.

Turn ruby code into simple daemons with Dante

Context

At Miso, the engineering team has recently been working on a major shift in our architecture, namely moving to a much purer service-oriented approach. Our new strategy as a whole will be the subject of many blog posts in the future. This transition has required us to set up a lot of new services. These services range from small API services written in Sinatra and Renee, to front-end-heavy services in Padrino, to internal services communicating with protobuf messages.

Every one of these services has a lot of behaviors in common: each needs to log output, be monitored, be daemonized, be easily started and stopped, have automated deployment, etc. These commonalities, and the sheer quantity of services, led us to begin our journey to create a ‘unified’ services architecture that will manage all of this for us. While that project is still a work in progress, today I would like to introduce one tiny library that we built along the way.

Introduction

One of the common behaviors each ruby service needs is a simple executable that can reliably start/stop a process. Moreover, each executable needs to be able to start the process in non-daemon and daemonized modes, allow a port and a pid file to be specified, and be able to stop daemonized processes it previously started. Ideally, this same interface could be used with God or Monit to monitor the process.

We started by looking at existing daemonizing gems such as daemon-kit, and while these are excellent libraries, they didn’t quite fit our purposes. In the end, we decided to whip up a gem that would provide this in the simplest way that could possibly work. The result is an extremely tiny but well-tested daemonizing gem called Dante.

Usage

First, let’s start with how to integrate Dante into a gem or ruby code base.

Since dante will become a dependency to your gem’s executable, add to your gemspec:

# mygem.gemspec

Gem::Specification.new do |s|
  # ...
  s.add_dependency "dante"
end

or alternatively to your Gemfile:

# Gemfile

gem "dante"

Now you can start using dante for your executables. Dante’s interface is extremely simple: you wrap your code in a Dante block, and the gem handles the rest, passing you the relevant options.

Let’s say we want to daemonize arbitrary ruby code; the simplest way would be:

# Starts the code as a daemonized process
Dante::Runner.new('foo').execute(
  :daemonize => true, :pid_path => "/tmp/path.pid") { do_something! }

You can then also stop based on the pid:

# Stops the daemonized process
Dante::Runner.new('foo').execute(
  :kill => true, :pid_path => "/tmp/path.pid")

Now, let’s say we wanted to boot up a daemonized Thin API service using an executable. First we would create the executable in bin/blog and then wrap the Thin bootup in a Dante.run block:

#!/usr/bin/env ruby

require File.expand_path("../blog.rb", __FILE__)

Dante.run('blog') do |opts|
  # opts available: host, pid_path, port, daemonize, user, group
  Thin::Server.start('0.0.0.0', opts[:port]) do
    use Rack::CommonLogger
    use Rack::ShowExceptions
    run Blog
  end
end

Notice how simple the Dante part is. Dante just accepts a daemon name and yields the options that were passed on the command line to your program. In this example, the thin process uses those options to boot up thin on the correct port. So what does wrapping your executable code in Dante give you?

Functionality

With your executable daemonized with Dante, you get a familiar and standardized interface for controlling your process. The simplest way to start the thin server now is just to run the executable with no options:

./bin/blog

That will boot up a non-daemonized process in the command line. Now, let’s say we want to get help for this process:

./bin/blog --help

will print a nice auto-generated help message showing the daemon’s commands as a reference. Now, let’s start the process as a daemon, setting the port and pid file:

./bin/blog -d -p 8080 -P /var/run/blog.pid

This will start the blog service on port 8080 storing the pid file at /var/run/blog.pid, and with that your process will be running. Now let’s stop the process:

./bin/blog -k -P /var/run/blog.pid

The -k flag will find the process from the pid and then kill the daemonized process for you.

Customization

In many cases, you will need custom flags/options or a custom description for your executable. You can do this by using Dante::Runner explicitly:

#!/usr/bin/env ruby

require File.expand_path("../../myapp.rb", __FILE__)

# Set executable name to 'myapp' and default port to 8080
runner = Dante::Runner.new('myapp', :port => 8080)

# Sets the description in 'help'
runner.description = "This is awesome myapp!"

# Setup custom 'test' option flag
runner.with_options do |opts|
  opts.on("-t", "--test TEST", String, "Test this thing") do |test|
    options[:test] = test
  end
  # ... more opts ...
end

# Parse command-line options and execute the process
runner.execute do |opts|
  # opts: host, pid_path, port, daemonize, user, group
  Thin::Server.start('0.0.0.0', opts[:port]) do
    puts opts[:test] # Referencing my custom option
    use Rack::CommonLogger
    use Rack::ShowExceptions
    run MyApp
  end
end

Now you would be able to do:

./bin/myapp -t custom

and the opts would contain the :test option for use in your script. In addition, the help output will now contain your customized description in the banner.

That’s all you need to know to use Dante, although it also supports starting on alternate hosts and a handful of other options. This is all Dante does: the simplest daemonizer that could possibly work. But it has been extremely handy for controlling our services.

Process Monitoring

One easy way to monitor a daemonized process is to use God to watch it. Thankfully, God works perfectly with Dante, and they cooperate quite nicely. God is supposed to be able to daemonize processes automatically, but in my experience this can be unreliable, and having a consistent interface for the executable is convenient for testing.

Setting up a god file for a dante process is simple:

# /etc/god/blog.rb

God.watch do |w|
  pid_file = "/var/run/blog.pid"
  w.name            = "blog"
  w.interval        = 30.seconds
  w.start           = "ruby /path/to/blog/bin/blog -d -P #{pid_file}"
  w.stop            = "ruby /path/to/blog/bin/blog -k -P #{pid_file}"
  w.start_grace     = 15.seconds
  w.restart_grace   = 15.seconds
  w.pid_file        = pid_file
  w.behavior(:clean_pid_file)
  w.start_if do |start|
    start.condition(:process_running) do |c|
      c.interval = 5.seconds
      c.running = false
    end
  end
end

Conclusion

Our team will have a lot more to talk about soon as we begin open-sourcing various parts of our new services architecture. However, I hope Dante can be useful as a standalone utility; I know that it has been for us. Be sure to check out Dante for more information and to try it out for yourself.

For an example of our usage of Dante in a ruby gem, check out Gitdocs, our open-source dropbox clone using Ruby and Git. In particular, check out the cli.rb class for advanced usage.

Collaborate and track tasks with ease using GitDocs

At Miso, we have tried a lot of task tracking tools of all shapes and sizes. However, since we are a small team, we all end up using those in conjunction with a plain text file where we ‘really’ track our tasks. Nothing beats a simple text file for detailed personal task tracking and project tracking; I have been using text-file-based task management for as long as I can remember, and most of the team shares this sentiment. A separate issue that commonly comes up is having a place to dump code snippets, setup guides, team references, etc. We have used a wiki for this, and are currently using the GitHub wiki for this very purpose. A third common need is a place to dump software, images, mocks, etc., and we currently use Dropbox for this along with a shared network drive.

Recently, a discussion arose around the possibility of using a simple git repository of our own to serve all these disparate purposes. Namely, we needed a place to do personal and project task tracking, store useful shared code snippets, and dump random files, notes, etc. that the development team needs access to. Fundamentally, what prevents a simple git repository shared between all developers from serving this purpose? Well, there are a few obstacles to a purely git-based approach.

The first is that with shared files and docs, you don’t want to have to worry about constantly pushing and pulling. Changes are happening all the time, text is being changed or files are being added, and having to manually commit every line change to a text file doesn’t really make sense for this use case. We really wanted something like Dropbox: when a file is changed, the updates are pushed and pulled automatically. Second, we really needed a way to view files in the browser. Imagine writing a great markdown guide and never being able to see the guide rendered in proper markdown. We needed a way for this shared repository to be browsable in the browser.

So after discussing this, we decided to build our ‘ideal’ solution, and the result is gitdocs: a way to collaborate on documents and files effortlessly using just git and the file system. Think of gitdocs as a combination of Dropbox and Google Docs, but free and open source.

Installation

Install the gem:

gem install gitdocs

If you have Growl installed, you’ll probably want to run brew install growlnotify to enable Growl support.

Usage

Gitdocs is a rather simple library with only a few basic commands: you add paths for gitdocs to monitor, start/stop gitdocs, and check its status. First, you need to add the doc folders to watch:

gitdocs add my/path/to/watch

You can remove and clear paths as well:

gitdocs rm my/path/to/watch
gitdocs clear

Then, you need to startup gitdocs:

gitdocs start

You can also stop and restart gitdocs as needed. Run

gitdocs status

for a helpful listing of the current state. Once gitdocs is started, simply start editing or adding files to your designated git repository. Changes will be automatically pushed and pulled to your local repo.

You can also have gitdocs fetch a remote repository with:

gitdocs create my/path/for/doc git@github.com:user/some_docs.git

This will clone the repo and add the path to your watched docs. Be sure to restart gitdocs to have path changes update:

gitdocs restart

To view the docs in your browser with file formatting:

gitdocs serve

and then visit http://localhost:8888 for access to all your docs in the browser.

Conclusion

Our development team has started using gitdocs extensively for all types of files and documents: primarily task tracking, file sharing, note taking, code snippets, etc. All of this lives in git repos, automatically managed by gitdocs and viewable in your favorite text editor or browser. Be sure to check out the gitdocs repository for more information.

What are subnets?

At Miso, we recently launched an integration with AT&T that allows the Miso iPhone app to connect with your TV. Have you ever thought that browsing hundreds of cable channels on your TV was a painful process? Our vision with Miso is that you can turn on your favorite show, see what your friends are watching, or browse additional information about a show, all from your phone.

To make this happen, Miso connects with the AT&T receiver over wifi, and we’re learning from our early adopters that home networks are more complicated than they used to be. Wifi routers are definitely the norm, and it’s not uncommon to have multiple wifi routers covering the living room, the master bedroom, and the guest bedroom upstairs. If you’re plugging in a wifi router that you bought from Best Buy, you’re probably creating a new subnet, which is something you almost never need to care about.

As it turns out, Miso’s AT&T and DIRECTV integrations both assume that a home has only one subnet, which is not always true. This has caused some support issues and prompted some people within Miso to ask: what is a subnet? Good question! As a programmer, I have some basic knowledge of network engineering, but answering this question prompted me to dig a bit deeper.

What is a subnet?

Subnets and IP addresses go hand in hand. Together, they are the building blocks for how computers find each other on a network. For instance, every time you visit a website, you first have to find the computer on the internet that hosts that website.

Addressing on the internet works a lot like mailing addresses for your apartment. The zip code for my apartment is 94105; if you were to Google that, you could see that I live within a certain region of San Francisco. Subnets are like zipcodes – they divide up the internet world into small regions.

The subnet is actually part of the IP address. You’ve probably seen an IP address for a computer. My IP address at this coffee shop is 192.168.5.199. The first three numbers (192.168.5) are the subnet (or network address), and the last number (199) is my laptop’s address (host address). The guy sitting at the next table probably has an address like 192.168.5.198. Because we’re on the same subnet, we could do things like share playlists with iTunes or exchange files on a shared folder. The three numbers that form the subnet are somewhat arbitrary, in the same way that zipcodes are somewhat arbitrary.

For nerds only: the tricky part is that the portion of the IP address that represents the subnet actually varies, and its size determines how large the subnet is. The mechanisms for dividing up address space have evolved over the years, from classful addressing to classless addressing (CIDR). We’re now at the point where we’ve run out of IPv4 addresses, hence the creation of IPv6, which has many more available addresses.
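To make this concrete, here’s a quick illustration using Ruby’s standard IPAddr library; the /24 prefix (a 255.255.255.0 mask) is just one common choice:

require 'ipaddr'

ip  = IPAddr.new('192.168.5.199')
net = ip.mask(24)              # keep the first 24 bits: the network address
puts net                       # => 192.168.5.0
puts net.include?(IPAddr.new('192.168.5.198'))  # => true, same subnet
puts net.include?(IPAddr.new('74.125.224.114')) # => false, different subnet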

So, great, subnets are like zipcodes. They group computers together in some logical way. Laptops in the same cafe are probably on the same subnet and can do stuff like share iTunes playlists, and laptops that are in different cafes probably can’t. Yet, there must be some way to communicate between subnets. The websites that I visit every day are hosted on computers that are all on different subnets.

How does information travel between subnets?

Sending information across subnets is a lot like sending mail across the country. After writing the address on the envelope, I deliver the letter to my local post office. My local post office looks at the zipcode and begins the process of routing the letter through hubs to bring the letter in the general vicinity of the destination address. As the letter approaches the destination address, it will be given to another local post office, which ultimately puts it on a mail truck that goes to the actual address.

In the same way, let’s say that I want to visit google.com. From my laptop, the address for google.com is 74.125.224.114. My laptop has no idea where that address is, so it does the equivalent of delivering my request to the local post office, which is the wifi router of this coffee shop (you may have also heard this referred to as a gateway). The coffee shop’s wifi router also doesn’t know where that address is, so it forwards the request to the internet provider (ISP) of the coffee shop. The internet provider has a better sense for zip codes, so it starts the process of sending the packet in the right direction.
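You can watch this hop-by-hop relay yourself with the standard traceroute tool; each line of output is one of the “post offices” the packet passes through:

traceroute 74.125.224.114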

For nerds only: this process is called IP routing, and there are several algorithms for doing it, which vary between IPv4 and IPv6. The algorithms differ in how chatty they are and what size of network they work well with, among other things. Many are based on Dijkstra’s algorithm.

So how come I can’t share my iTunes playlist with any computer?

I can access google.com from anywhere, but I can’t access a laptop in this cafe from another coffee shop. The reason is that there are many more computers (and IP addresses) than there are mailing addresses, so they can’t all be public. If you work in a large business, then your department probably has an internal mail stop that’s handled by your company’s mailroom rather than by the post office.

In the same way, internet addresses are divided up into public and private addresses. Public addresses can be reached from anywhere, but private addresses can only be accessed in local networks. Public addresses are handed out by the internet equivalent of the post office called the Internet Assigned Numbers Authority (IANA). If I want a public mailing address, I have to contact the post office. If I want a public IP address, then I have to contact my internet provider. My internet provider is a member of a Regional Internet Registry (RIR), which is a member of the IANA. It’s fairly common that one home will have one public IP address, although you can pay your ISP for more.
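The private ranges are fixed by convention (RFC 1918), so you can check whether an address is private with a few lines of Ruby:

require 'ipaddr'

# The three RFC 1918 private address ranges
PRIVATE_RANGES = %w[10.0.0.0/8 172.16.0.0/12 192.168.0.0/16].map { |r| IPAddr.new(r) }

def private_address?(ip)
  PRIVATE_RANGES.any? { |range| range.include?(IPAddr.new(ip)) }
end

puts private_address?('192.168.5.199')  # => true, only reachable locally
puts private_address?('74.125.224.114') # => false, a public address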

So, why don’t AT&T and DIRECTV work across multiple subnets?

Hopefully, you now have a high-level sense of internet addressing, subnets, and routing, but we still haven’t explained why our AT&T and DIRECTV integrations don’t work across multiple subnets. We need to understand one more concept – multicast.

Most network communication is two computers talking to each other directly. In some cases, you want to communicate with multiple computers at the same time. Broadcast and multicast are the two ways of doing that. In order for Miso to find the cable receiver, it makes an announcement over multicast asking if there are any cable receivers in the network. That’s the same way your computer discovers printers on your local network.

As you can imagine, multicast announcements can be pretty noisy. Imagine if my laptop made a multicast announcement and every computer in the world had to pay attention; things would get pretty chaotic. Therefore, multicast messages only go out to the current subnet. If my cable receiver is on a different subnet, it won’t receive my multicast announcement.

There are a number of ways to overcome this problem, including adjusting multicast TTL values, router port forwarding, or configuring routers as bridges. TTL (time-to-live) tells the multicast message how many hops to take before it should give up. By default, this value is 1, which means the message is delivered only to computers on the local subnet. Router port forwarding and configuring routers as bridges are other mechanisms for allowing multicast traffic to hop across to an adjacent subnet; however, for our AT&T integration they are less than ideal because they require users to understand how to administer their home networks.
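For the curious, here’s a minimal Ruby sketch of a multicast discovery announcement; the address and message are the standard SSDP (UPnP discovery) values, an assumption for illustration rather than Miso’s actual protocol:

require 'socket'

MULTICAST_ADDR = '239.255.255.250'  # standard SSDP discovery address
PORT = 1900

socket = UDPSocket.new
# A TTL of 1 (the default) keeps the announcement on the local subnet only
socket.setsockopt(Socket::IPPROTO_IP, Socket::IP_MULTICAST_TTL, [1].pack('C'))
socket.send("M-SEARCH * HTTP/1.1\r\nHOST: 239.255.255.250:1900\r\n" \
            "MAN: \"ssdp:discover\"\r\nMX: 1\r\nST: ssdp:all\r\n\r\n",
            0, MULTICAST_ADDR, PORT)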

Hopefully, you found this high level guide useful. Feedback welcome!

Forget Chef or Puppet – Automate with Sprinkle

Miso has a relatively standard server architecture for a medium-traffic Rails application. We use Linode to host our application, and we provision a number of VPS instances that make up our infrastructure. We have a load balancer equipped with Nginx and Varnish, an application server that runs our Rails application to serve dynamic requests, a master-slave database setup, and a cache server running memcached and redis.

Early on, we set up these servers by hand: installing packages, compiling libraries, tweaking configurations, and installing gems. We then documented these steps on a company wiki, which allowed us to mechanically follow them to set up a new instance of any of our servers.

Automating our Setup

We decided recently that we wanted a more robust way of provisioning and configuring our different types of servers. Mechanically following steps from a wiki is inefficient, error-prone and likely to become inaccurate. What we needed was a system that would allow us to provision a new application or database server with just a single command. Here were additional requirements for our desired setup:

  • Simple to setup and configure
  • Lightweight system using familiar syntax
  • Preferably utilizing ssh and commands in the vein of Capistrano
  • Modular components that can be assembled to setup each server
  • Locally executable such that I can provision a server from my local machine
  • Doesn’t require the target machine to have anything installed beforehand

The most popular open-source setup and provisioning tools seem to be Puppet and Chef. Both are great tools with a lot of features and flexibility, and they are well-suited when you have a large infrastructure with tens or hundreds of servers for a project. However, Chef and Puppet recipes are not trivial to create or manage, and by default they are intended to be managed from a remote server. We felt that this was unnecessary complexity and overhead for our simple provisioning purposes. (Thanks for the corrections in the comments regarding the remote server requirement for Puppet/Chef.)

For smaller infrastructures like ours, we felt a better tool would be easier to set up and manage – ideally something akin to deploying our code with Capistrano: a tool that can be managed and run locally and maintained easily. After some exploration, we stumbled upon a Ruby tool called Sprinkle, probably best known from an example automated setup script called Passenger Stack.

There are several aspects of Sprinkle that made this our tool of choice. For one, it is locally managed and the setup is done leveraging the rock-solid Capistrano deployment system. Also, even though Sprinkle is written in Ruby, the tool does not require Ruby or anything else to be installed on the target servers since the automated setup is executed on your development machine and communicates with the remote server using only SSH. The best part is that there are only a few concepts required to understand and use this system.
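Getting started is just a matter of installing the Sprinkle gem on your development machine; the target servers need nothing pre-installed:

gem install sprinkle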

Understanding Sprinkle

The rest of this article is intended to be a long and comprehensive overview of Sprinkle. I have read many blog posts about Sprinkle written over several years, but I haven’t seen one that covers every aspect of Sprinkle in full detail. The goal is that by the end of this post, you should be able to understand, write, and execute Sprinkle scripts. Let’s start off by exploring the four major concepts that make up Sprinkle and that will be used to build out your server recipes: Packages, Policies, Installers, and Verifiers.

Packages

A package defines one or more things to provision onto the server. There is a lot of flexibility in the way a package is defined, but fundamentally it represents a “component” that can be installed. Packages are sets of installers, options, and verifications, grouped under a meaningful name. The basic structure of a package is like this:

# packages/some_name.rb
package :some_name do
  description 'Some text'
  version '1.2.3'
  requires :another_package

  # ...installers...

  verify { ...verifiers... }
end

Note that defining a package does not install it by default; a package is only installed when explicitly mentioned in a policy. You can also specify recommended and/or optional package dependencies:

# packages/foo_name.rb
package :foo_name do
  requires :another_package
  recommends :some_package
  optional :other_package

  # ...installers...
  verify { ...verifiers... }
end

You can also create virtual aliased packages that provide alternatives for the same component type. For instance, if I wanted to give people a choice of database when provisioning:

# packages/database.rb
package :sqlite3, :provides => :database do
  # ...installers and verifiers...
end

package :postgresql, :provides => :database do
  # ...installers and verifiers...
end

package :mysql, :provides => :database do
  # ...installers and verifiers...
end

You can now simply require a :database, and the script will ask you which of the providing packages you want to install. For more information on packages, the best place to look is the source file itself.
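Jumping ahead slightly to policies (covered next), requiring the virtual package looks like this; Sprinkle prompts for the concrete choice when the script runs:

policy :myapp, :roles => :app do
  requires :database   # Sprinkle asks: sqlite3, postgresql, or mysql?
end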

Policies

A policy defines the set of packages that are required for a certain type of server (app, database, etc.). All defined policies will be run, and all packages required by a policy will be installed. So whereas defining a “package” merely defines it, defining a “policy” actually causes those packages to be installed. A policy is very simple to define:

# Define target machine used for the "app" role
role :app, "208.28.38.44"
# Installs the packages specified with a 'requires' when the script executes
policy :myapp, :roles => :app do
  requires :some_package
  requires :nginx
  requires :postgresql
  requires :rails
end

A role merely defines which servers the commands are run on. This way, a single Sprinkle script can provision an entire group of servers with different roles, much like using Capistrano directly. You may specify as many policies as you’d like. If the packages you require are properly defined with verification blocks, no software will be installed twice, so multiple packages within the same role can require the same webserver without it being installed repeatedly.
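As a sketch (with made-up IPs), a single script can drive several machines this way:

role :app, '208.28.38.44'
role :db,  '208.28.38.45'

policy :webserver, :roles => :app do
  requires :nginx
end

policy :database_server, :roles => :db do
  requires :postgresql   # or the virtual :database from above
end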

For more information on policies, the best place to look is the source file itself.

Installers

Installers are mechanisms for getting software onto the target machine. There are different installers that let you download and compile software from a given source, use packaging systems (such as apt-get), copy files, install a gem, or even just inject content into existing files. Common examples of installers are detailed below.

To run an arbitrary command on the server:

package :foo do
  # Can be any line executed in the shell
  runner "run_some_command --now"
end

To install using the aptitude package manager in Ubuntu:

package :foo do
  apt 'foo-package'
  # Supports only installing dependencies
  apt('bar-package') { dependencies_only true }
end

That would install ‘foo-package’ when the ‘foo’ package runs. To install a Ruby gem:

package :foo do
  gem 'foo-gem'
  # Supports specifying version, source, repository, build_docs and build_flags
  gem 'bar-gem' do
    version "1.2.3"
    source 'http://gems.github.com'
    build_docs false
  end
end

To upload arbitrary text into a file on the target:

package :foo do
  push_text 'some random text', '/etc/foo/bar.conf'
  # Supports sudo access for a file
  push_text 'some random text', '/etc/foo/bar.conf', :sudo => true
end

You can also replace text in a target file:

# packages/foo.rb
package :foo do
  replace_text 'original foo', 'replacement bar', '/etc/foo/bar.conf'
  # Supports sudo access for a file
  replace_text 'original foo', 'replacement bar', '/etc/foo/bar.conf', :sudo => true
end

To transfer a file to a remote target file:

package :foo do
  # Transfers are recursive by default so whole directories can be moved
  transfer 'file/some_folder', '/etc/some_folder'
  # Supports sudo access for a file
  transfer 'file/foo.file', '/etc/foo.file', :sudo => true
  # Also a file can have "render" passed which runs the template through erb 
  # You can access variables to output the file dynamically, or pass explicit locals
  foo_port = 8080
  transfer 'file/foo.file', '/etc/foo.file', :render => true, 
            :locals => { :bar_port => 80 }
end

To run a rake task as part of a package on target:

package :foo, :rakefile => "/path/to/Rakefile" do
  rake 'foo-task'
end

To install a library from a given source path:

package :foo do
  source 'http://foo.com/latest-1.2.3.tar.gz'
  # Supports prefix, builds, archives, enable, with, and more
  source 'http://magicbeansland.com/latest-1.1.1.tar.gz' do
    prefix    '/usr/local'
    archives  '/tmp'
    builds    '/tmp/builds'
    with      'pgsql'
  end
end

Installers also support installation hooks at various points during the install process which vary depending on the installer used. An example of how to use hooks is as follows:

package :foo do
  apt 'foo-package' do
    pre :install, 'echo "Beginning install..."'
    post :install, 'echo "Completing install!"'
  end
end

Multiple hooks can be specified for any given step, and each installer has multiple steps for which hooks can be configured. For more information on installers, the best place to look is the source folder itself.

Verifiers

Verifiers are convenient helpers that you can use to check whether something was installed correctly. You use them within a package’s verify block. Sprinkle runs the verify block to find out whether something is already installed on the target machine; this way, nothing gets done twice, and if a package is already installed, the task is skipped.

Adding a verify block to every package is extremely important; be diligent about having an appropriate verifier for every installer used in a package. This makes the automated scripts much more robust and reusable on any number of servers, and it also ensures that each installer worked as expected by testing the server after installation.

There are many different types of verifications, and each is particularly useful with certain installers. For instance, if I wanted to see whether an apt package was installed correctly:

package :foo do
  apt 'foo-package'
  apt 'bar-package'
  verify do
    has_apt 'foo-package'
    has_apt 'bar-package'
  end
end

This will install the packages on a target only if they are not already present, and it verifies the installation after the package runs. If we wanted to check that a gem exists:

package :foo do
  gem 'foo-gem', :version => "1.2.3"
  gem 'bar-gem'
  verify do
    has_gem 'foo-gem', '1.2.3'
    # or verify that ruby can require a gem
    ruby_can_load 'bar-gem'
  end
end

If you want to check if a directory, file or executable exists:

package :foo do
  mkdir '/var/some/dir'
  touch '/var/some/file'
  runner 'touch /usr/bin/abinary' do
    post :install, "chmod +x /usr/bin/abinary"
  end

  verify do
    has_directory '/var/some/dir'
    has_file      '/var/some/file'
    has_executable 'abinary'
  end
end

You can also check if a process is running:

package :foo do
  apt 'memcached'

  verify do
    has_process 'memcached'
  end
end

For more information on verifiers, the best place to look is the source folder itself.

Putting Everything Together

Once you understand the aforementioned concepts, building automated recipes for provisioning becomes quite straightforward. Simply define packages (with installers and verifiers) and then group them into ‘policies’ that run on target machines. Generally, you can have a deploy.rb file and an install.rb file that are defined as follows:

# deploy.rb
# SSH in as 'root'. Probably not the best idea.
set :user, 'root'
set :password, 'secret'

# Just run the commands since we are 'root'.
set :run_method, :run

# Be sure to fill in your server host name or IP.
role :app, '83.434.34.234'

default_run_options[:pty] = true

The install file tends to define the various policies for this sprinkle script:

# install.rb
require 'packages/essential'
require 'packages/git'
require 'packages/nginx'
require 'packages/rails'
require 'packages/mongodb'

policy :myapp, :roles => :app do
  requires :essential
  requires :git
  requires :nginx
  requires :rails
  requires :mongodb
end

deployment do
  delivery :capistrano

  source do
    prefix   '/usr/local'
    archives '/usr/local/sources'
    builds   '/usr/local/build'
  end
end

Then you should store your packages in a subfolder aptly named ‘packages’:

# packages/git.rb
package :git, :provides => :scm do
  description 'Git version control client'
  apt 'git-core'

  verify do
    has_executable 'git'
  end
end

You can store your assets (configuration files, etc.) in an “assets” folder and access them from your packages to upload. Once all the packages and policies have been defined appropriately, you can execute the sprinkle script on the command line with:

sprinkle -c -s install.rb

And sprinkle is off to the races, setting up all the policies on the target machines.
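For reference, we keep our scripts in a layout like the following (a convention, not a requirement):

install.rb
deploy.rb
packages/
  essential.rb
  git.rb
  nginx.rb
assets/
  nginx.conf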


Let me know what you think of all this in the comments, along with any related questions. What do you use to automate and manage your servers?

Vendor – Bringing Bundler to iOS

Using and sharing iOS libraries is tragically difficult given the maturity of the framework, the number of active developers, and the size of the open source community.  Compare with Rails, which has RubyGems and Bundler: it’s no surprise that Rails/Ruby has a resource like The Ruby Toolbox, while iOS development has… nothing?

Are iOS developers fundamentally less collaborative than Rails developers?  When developing on Rails, if I ever find myself developing anything remotely reusable, I can almost always be certain that there is a gem for it (probably with an obnoxiously clever name).

I don’t think the spirit of collaboration is lacking in the iOS developer community; rather, there are a few fundamental challenges with iOS library development:

  • Libraries are hard to integrate into a project.  Granted, it’s not *that* hard to follow a brief set of instructions, but why can’t this process be more streamlined?
  • No versioning standard.  Once a library is integrated into a project, there is no standard way to capture which version of the library was used.
  • No dependency specification standard (this is a big problem).  Why does facebook-ios-sdk embed its own JSON library when there are better ones available?  So many libraries come embedded with common libraries that we all use – and worse than that, who knows what version they’re using!  Not only can this lead to duplicate symbols, but library developers essentially have to start from scratch instead of leveraging other existing libraries.

Of course, coming up with a naming, versioning, and dependency standard for iOS libraries and convincing everyone to adopt it is a daunting task.  One possible approach is to follow the example of Homebrew, a popular package manager for OS X.  Homebrew turned installing and updating packages on OS X into a simple process.  Instead of convincing everyone to comply with some standard, Homebrew maintains a set of formulas that describe commonly used packages.  These formulas allow Homebrew to automate the installation process as well as enforce dependencies.  This works well for Homebrew, although it puts the burden of maintaining package specifications in one place, rather than distributing it as with Ruby gems.

There seems to be a need for some type of solution here.  When we write Rails libraries, we write the readme first to help us understand what problem we’re trying to solve (Readme Driven Development).  Below is the readme for an iOS packaging system called Vendor.

Vendor – an iOS library management system

Vendor makes the process of using and managing libraries in iOS easy.  Vendor leverages the XCode Workspaces feature introduced with XCode 4 and is modeled after Bundler. Vendor streamlines the installation and update process for dependent libraries.  It also tracks versions and manages dependencies between libraries.

Step 1) Specify dependencies

Specify your dependencies in a Vendors file in your project’s root.

source "https://github.com/bazaarlabs/vendor"
lib "facebook-ios-sdk"  # Formula specified at source above
lib "three20"
lib "asi-http-request", :git => "https://github.com/pokeb/asi-http-request.git"
lib "JSONKit", :git => "https://github.com/johnezang/JSONKit.git"

Step 2) Install dependencies

vendor install
git add Vendors.lock

Installing a vendor library gets the latest version of the code, and adds the XCode project to the workspace.  As part of the installation process, the library is set up as a dependency of the main project, header search paths are modified, and required frameworks are added.  The installed version of the library is captured in the Vendors.lock file.

After a fresh check out of a project from source control, the XCode workspace may contain links to projects that don’t exist in the file system because vendor projects are not checked into source control. Run `vendor install` to restore the vendor projects.

Other commands

# Updating all dependencies will update all libraries to their latest versions.
vendor update
# Specifying the dependency will cause only the single library to be updated.
vendor update facebook-ios-sdk

Adding a library formula

If a library has no framework dependencies, has no required additional compiler/linker flags, and has an XCode project, it doesn’t require a Vendor formula. An example is JSONKit, which may be specified as below. However, if another Vendor library requires JSONKit, JSONKit must have a Vendor formula.

lib "JSONKit", :git => "https://github.com/johnezang/JSONKit.git"

However, if the library requires frameworks or has dependencies on other Vendor libraries, it must have a Vendor formula.  As with Homebrew, a Vendor formula is declarative Ruby code that is open source and centrally managed.

An example Vendor formula might look like:

require 'formula'

class Three20 < Formula
  url "https://github.com/facebook/three20"
  libraries "libThree20.a"
  frameworks "CoreAnimation"
  header_path "three20/Build/Products/three20"
  linker_flags "ObjC", "all_load"
  vendors "JSONKit"
end

Conclusion

Using iOS libraries is way harder than it should be, which has negatively impacted the growth of the open source community for iOS.  Even if I were only developing libraries for myself, I would still want some kind of packaging system to help me manage the code.  Vendor essentially streamlines the flow described by Jonas Williams here.  Unfortunately, programmatically managing XCode projects isn’t supported natively, but people have implemented various solutions, such as Three20, Victor Costan, and XCS.

Open Questions

  • Is there an existing solution for this?
  • Would this be a useful gem for your iOS development?
  • Why hasn’t anyone built something like this already? Impossible to build?