This article is an introduction to RestMS, the RESTful Messaging Service. RestMS is a new standard for real web messaging that offers simple, scalable, and secure data delivery over ordinary HTTP.
Why Do We Need Messaging?
Messaging – also called “messaging middleware” or “message-oriented middleware” – connects software applications running on a variety of platforms, and written in a variety of languages, across a network. There are three main interconnected reasons why messaging is a hot topic:
- It’s about scale. Applications create more value when they can reach more users. It used to be hard enough to make a software application that ran all by itself. Today, ‘many users’ means Internet scale. It’s one thing for a web site to handle many users. It can be far more profitable to connect that web site to dozens, even hundreds, of other applications, so that information can be traded, sold, and exchanged rapidly and accurately.
- It’s about change. Used to be, you changed one line of code, you rebuilt the whole system. That stops making sense when applications are distributed across many teams, and when change happens anywhere, any time, to any part of the overall system. Using loose connections between pieces means that change is cheap. No piece knows too much about any other: all they know is the data they exchange: what it means, and how to exchange it. Messaging makes this systematic and reliable.
- It’s about cost. It’s expensive to try to define the behaviour of each of many software applications that work together in a large network. It’s much cheaper, and more flexible, to define the interfaces between them: the APIs. Messaging is an API that can be stretched across a network, where even the location of the different pieces is not fixed, and can change at any moment.
So a good messaging system lets you plan much larger empires, where many of the pieces are made by your suppliers, customers, or even users. It lets you start small and grow, without the big risk of having to change fundamental aspects late in the game. And it saves you money by focusing your investment on those areas where it is most worthwhile.
Most Internet APIs are fairly basic: an application requests some data, the server responds with the data or an error. This is easy to understand and easy to program but it’s the way a person would work: click on a link, refresh email, and so on. It is an ugly way to exchange data between applications because it forces the client application to choose between polling more often (and wasting bandwidth) or polling less often (and getting data late, or not at all).
Most web developers, used to thinking of their application as the centre of the universe, will find it normal that client applications come knocking for data. But it turns out there is a better model for messaging. Instead of building concrete interfaces for services, we define service messages that are carried by generic interfaces. Instead of polling a service for data, we define data flows that can be accessed in a generic manner. Instead of designing a distributed architecture, we create a loosely-coupled abstract one, where services can come and go dynamically.
The common term “feed” is pretty good for these generic data flows. For example, let’s imagine a feed that contains all apartments for rent. If I want to rent my apartment, I publish a message to that feed. If I want to search for an apartment, I subscribe to that feed.
Feeds are dynamic rivers of data that are constantly changing and can carry from a few messages a second to millions. We need new paradigms to efficiently get the data we need. If searching the web is casting a fishing line into the river, then accessing a feed is more like diverting part of the river into an irrigation channel.
A feed-centric view of messaging works better than an application-centric one. We can focus on the semantics of publishing, routing, queuing, and delivering data, and define clear and generic APIs to do this, without any regard to which actual applications are involved.
It’s a much more scalable model, but also a more efficient one. Messages are delivered precisely and rapidly, as soon as they are available. There is no polling, no wasted bandwidth, no wasted latency. For a business that sells time-dependent content, whole new markets open up.
What Messaging Should Offer
There are many products that call themselves “messaging”, so it’s worth being very clear about what a messaging product should be and do:
- It should be easy to connect to from any application, no matter what the programming language used, and no matter what platform it runs on. Messaging that is tightly tied to one operating system or one programming language is worse than useless, it’s a trap.
- It should be based on a generic model of publishing to, and mining, feeds (often called “queues”). No matter what the precise model, it must be asynchronous, like email, so that senders and receivers can each work at their own pace.
- It should be agnostic with respect to data, so that it can be used for any application. Ideally, messages should be blobs of any size from zero to very large, with envelopes that hold addressing and other data used for routing.
- It should do early filtering, which means that subscriptions are passed as close to the publisher as possible, and only the necessary data is pushed downstream. This cuts network traffic, which saves money and improves performance.
- It should be scalable to large numbers of clients and large volumes of traffic, so that a successful application is never limited by the messaging system’s capabilities.
Basic Patterns for Messaging
There are three basic patterns for messaging:
- One-to-one shout, like an email. We call this the Housecat pattern.
- One-to-many shout, like an email list. We call this the Parrot pattern.
- One-to-many spread, like a printer queue. We call this the Wolfpack pattern.
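As a rough illustration (not part of any protocol; all the names here are invented), the three patterns can be sketched as dispatch rules over in-memory queues:

```python
import itertools
from collections import deque

def housecat(message, recipient):
    """One-to-one, like an email: exactly one named recipient."""
    recipient.append(message)

def parrot(message, subscribers):
    """One-to-many shout, like an email list: every subscriber gets a copy."""
    for queue in subscribers:
        queue.append(message)

def wolfpack(message, workers):
    """One-to-many spread, like a printer queue: one worker takes each job."""
    next(workers).append(message)

alice, bob, carol = deque(), deque(), deque()
pack = itertools.cycle([alice, bob])           # round-robin over the workers

housecat("hi bob", bob)                        # only bob sees this
parrot("AAPL below $90", [alice, bob, carol])  # everyone sees a copy
wolfpack("print job 1", pack)                  # alice takes the first job
wolfpack("print job 2", pack)                  # bob takes the next
```

The point of the sketch is that the three patterns differ only in how a message is fanned out, which is why one generic routing model can serve all of them.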
The Parrot pattern is the most widely used. Get an alert if AAPL drops below $90. Get an alert if your website is mentioned in the news or a blog. Get an alert if someone posts a matching apartment for rent in Brooklyn. Any high-volume source of news, especially if getting the news early is valuable, makes a compelling use case for messaging.
The Wolfpack pattern is more subtle: it enables a command relationship between client applications and a set of services. This is fairly rare across the Internet, but it does happen, especially in the growing “cloud computing” scene. Amazon’s Simple Queue Service (SQS) does just this.
The State of the Art
What is available in terms of messaging? I’ve mentioned Amazon’s SQS, which looks okay. Amazon says, “Components of applications …can run independently, and do not need to be on the same network, developed with the same technologies, or running at the same time.”
SQS does not have any kind of upstream filtering. It is designed for Wolfpack and Housecat but cannot sensibly be used for Parrot. If you tried, you would have to send every message in a queue to every client, which would be less than ideal in terms of bandwidth, performance, and cost.
The widely-used RSS protocol is a Parrot solution. AtomPub is a better RSS. Typically, applications poll like mad to try to get data early, while servers groan under the stress and ban over-demanding clients. More sophisticated sites offer “custom RSS feeds” which do upstream filtering but don’t fix RSS’s very poor polled delivery model.
XMPP is a more scalable protocol and is often cited as one that does asynchronous delivery, but it’s not designed for general messaging. It does Housecat and Parrot, but once again without any upstream filtering. Any filtering is either done at the client side (meaning bandwidth is wasted), or is done by segmenting messages into many feeds (which creates other problems, such as how both sides agree on how to segment).
SOAP is another web protocol, which was meant to turn the simple XML-RPC API into something more “enterprise” worthy. SOAP got so complex that it now seems to be used only by businesses with too much money and time to waste. Amazon SQS is based on SOAP. One less-than-happy user reports of its performance: “10,000 100-byte strings takes 200 seconds.”
There are many products for “enterprise messaging” – where the network is one inside a company. These are not suitable for the Internet but they do help understand the scene. There are API standards like the Java Message Service (JMS), which does a very decent Housecat, Parrot, and Wolfpack but is limited to Java. There are older protocols like CORBA, which became too complex for their own good. And there are the legacy messaging products – WebSphere, Tibco, and so on – closed, commercial products that trap their users like butterflies in profitable prehistoric amber.
Making a good messaging system seems to be an unusually difficult problem. People have tried, over many years, and the results – as we just saw – seem to be all round mediocre. In my view, the main reason we have not “solved” this domain (as we’ve solved operating systems, databases, file systems, word processors, spreadsheets, email, data representation, etc.) is that the problem has been too arcane to interest enough engineers.
We can break down the problem of “making a messaging system” into distinct problems, each of which has better and worse solutions:
- At what stage are messages filtered and routed? Is this at the publisher (upstream), in a central broker (midstream), or at the recipient (downstream)? Upstream routing is usually but not always the most efficient in terms of network usage. Midstream routing is often the simplest, technically. Downstream routing is usually the worst solution.
- If routing is done upstream or midstream, how are subscriptions passed up from the recipient? What are the semantics for subscriptions? What is the computational complexity of the routing model? Can messages be matched with subscriptions in one step – O(log n) – or must each message be compared to each subscription – O(n²)?
- What is the queuing model? Are there queues, and if so, can they be created at runtime? Are messages queued before routing, after it, or both? Can queues be addressed explicitly?
- What is the persistence model? Are messages saved to disk and if so, how are transactions managed? What happens if the process crashes and is restarted?
- What is the delivery model? Are messages delivered asynchronously, or must the application poll for them? Are messages held if recipients disappear and return?
- What is the security model? Who can publish, who can subscribe, who can fetch messages? How are applications authenticated? What is the level of encryption used?
- What is the overall architecture? Is it a client-to-server-to-client model, or a peer-to-peer model? Can servers be federated into larger networks? How do messages cross such larger networks?
- What are the APIs for this? How are commands and messages represented on the wire? What data types are supported, and how are they encoded? CORBA died partly because it allowed data type encoding to become much, much too complex.
- Lastly, is the solution a product, or a protocol? If it’s a protocol, is it free and open, or is it restricted in some way? Who is responsible for writing it, changing it, evolving it? Do we need a Foundation, or can it be done with a collaborative model?
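To make the routing-complexity question above concrete, here is a toy comparison in Python (the routing keys and subscription rules are invented for the example): indexing subscriptions by an exact routing key matches each message in a single lookup, while treating each subscription as an arbitrary predicate forces a scan of the whole subscription list for every message.

```python
from collections import defaultdict

messages = [("nyc.rent", "2BR in Brooklyn"), ("sf.rent", "studio in SoMa")]

# Fast style: subscriptions indexed by exact routing key, so each
# message is matched with one hash (or tree) lookup.
index = defaultdict(list)
index["nyc.rent"].append("alice")
delivered = [(sub, body) for key, body in messages for sub in index[key]]

# Slow style: every subscription is an opaque predicate that must be
# tested against every message - a full scan per message.
subscriptions = [("alice", lambda key: key.startswith("nyc."))]
scanned = [(sub, body) for key, body in messages
           for sub, test in subscriptions if test(key)]

assert delivered == scanned == [("alice", "2BR in Brooklyn")]
```

Both styles deliver the same messages; the cost difference only shows once there are thousands of subscriptions per feed.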
In an Ideal World
In an ideal world, messaging would be based on free and open standards, like our Internet infrastructure. There would be a number of standards, tailored to different niches and solving different parts of the puzzle. I call this a “fabric of messaging protocols”. You’d be able to download dozens of excellent open source products that worked together using these standards. Your web browser would have messaging built-in. Messaging would become a standard tool for programmers, as SQL and XML have become.
In an ideal world, messaging would work from any language using standardised APIs that made it easy to replace one product with another. The art of constructing distributed applications – still something practised only inside the largest firms – would become a standard teaching for software engineers, so that messaging became the framework on which large scale collaborations could grow, much as operating-system messaging (Windows COM, Linux D-Bus) did for applications on a single box.
And products would be fast and scalable, handling tens or hundreds of thousands of messages a second, with latencies barely more than the network itself.
We are still a few years away from such a world, but it is happening.
The Advanced Message Queuing Protocol (AMQP)
Late in 2004, a large bank took the unusual step of paying a small firm (mine) to develop a new protocol and product for messaging. The bank was tired of paying massive sums for messaging. We built AMQP, and an implementation called OpenAMQ, and we migrated a huge application onto it. AMQP looks like a remote API: it lets applications talk to messaging servers and thus to each other. The API uses a wire-level protocol that is carefully documented so that anyone can write their own server and clients (and several teams have done this). Today AMQP is in the hands of an industry work group.
AMQP does something new: it gives you a box of messaging building blocks that do routing and queuing, and lets you combine these to make different patterns. This turns out to be very useful because the single AMQP model solves many different messaging problems.
AMQP is a good open protocol with a choice of implementations. I like it and so do most of the people who use it. Compared to CORBA or SOAP, AMQP achieves more, and does it with less effort. It has certainly succeeded in cracking open the messaging market and provoking new debates about the very nature of messaging.
However, the design is clumsy in some profound ways (and I take blame for launching AMQP on that course, and failing to correct it once the problems became clear). AMQP is actually not one protocol but several. At a minimum, there is a control protocol to do the wiring (create exchanges, queues, and bindings) and a data protocol to send and receive messages. The control protocol is naturally synchronous (make a request, get a success/failure response), while the data protocol is naturally asynchronous (push data as fast as possible, handle failures asynchronously). The best way to make a control protocol is like SMTP, using text headers that are cheap to parse, change, and safe to extend. The best way to make a data protocol is a minimal binary encoding of the necessary envelope and data.
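To make that contrast concrete, here is a toy sketch in Python. The verb, the options, and the frame layout are invented for illustration; they are not AMQP’s or SMTP’s actual wire formats:

```python
import struct

# Text-style control frame, SMTP-like: cheap to parse, easy to read,
# and safe to extend by adding new options.
control = "CREATE-QUEUE name=orders durable=yes\r\n"
verb, *args = control.strip().split()
options = dict(arg.split("=", 1) for arg in args)

# Binary-style data frame: a minimal fixed envelope (here: a channel
# number and a payload size) followed by the opaque message body.
payload = b"opaque message body"
frame = struct.pack("!HI", 42, len(payload)) + payload
channel, size = struct.unpack_from("!HI", frame)
body = frame[6:6 + size]    # 6 = 2 bytes (H) + 4 bytes (I) of envelope
```

The text frame costs a few extra bytes but is trivially debuggable and extensible; the binary frame is compact and fast to decode, which matters only on the high-volume data path.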
AMQP does not, however, look like this. Instead it uses a single binary protocol that is sometimes asynchronous and sometimes synchronous. Error handling is complex. Extending the control protocol is painful. Making clients is painful. AMQP hurts in ways that are unnecessary. Its binary framing was meant to be fast, but for applications that log in once a day, we do not care whether that login dialogue uses highly efficient binary or simpler SMTP-like text. AMQP’s control verbs are used only for fractions of seconds at the start of hour-long connections. This is premature optimisation, and it makes AMQP more complex, more fragile, and less interoperable than it could be. It is the main reason AMQP/0.9.1 does not interoperate with AMQP/0.8 or AMQP/0.7, and why AMQP/1.0 will break all existing stacks once again.
I’ve mentioned AtomPub, which is a replacement for RSS, and an IETF standard. It looks good: a simple, well-written open specification, easy to use in any language, portable to any operating system, fast enough, and much simpler than SOAP. However, AtomPub does no routing, and only a basic form of queuing. The name says it: a protocol for publishing but not for subscribing. Without upstream filtering, AtomPub cannot scale to large volumes of data (millions of messages per day being sent to thousands of subscribers).
However, AtomPub is significant nonetheless. It is one of the best snapshots of REST, a pattern for client-server architectures that has been around for many years (it is an aspect of the HTTP protocol) but is today starting to define the leading edge in the design of web protocols and APIs.
There is still discussion about what REST is, and is not. In the AtomPub model, REST is a Create-Retrieve-Update-Delete dialogue that lets a client work with server-side resources using the HTTP POST, GET, PUT, and DELETE verbs. This is the model that seems to work best, because it expresses critical semantics (such as caching) in a way that the network itself (browsers, proxies, caches, firewalls, servers) can understand and work with.
Why invent RestMS?
There is no Internet protocol that does messaging right. We have good solutions for security, for delivery, for queuing and routing, indeed for most or all of the challenges I listed. But no protocol (before RestMS) does them all, even in the village idiot fashion of a first version.
Having said that AMQP is pretty good except that it does not use a text protocol for control commands, and having said that AtomPub is pretty good as a RESTful API (arguably the ideal text protocol for control commands), the next step is obvious and compelling. We must take the best of AMQP and AtomPub and remix that into a new RESTful protocol that does upstream routing, queuing, and gets rid of polling. In the process we can perhaps improve on both AMQP and AtomPub.
So RestMS is this: a generic routing and queuing protocol (a client-server API) that uses simple and predictable RESTful patterns. RestMS works over plain HTTP (or secure HTTPS) so that it’s familiar to developers and network managers, and friendly to proxies, caches and firewalls. RestMS implements abstract routing and queuing models (“profiles”) and one of its profiles is interoperable with AMQP. Most importantly, RestMS is a free and open standard that anyone can use, at no cost. More than that, anyone can extend and improve on any part of RestMS, for instance to make a new profile.
A RestMS server looks, to web clients across the Internet, like a web server that has some extra intelligence built in. That is, it accepts certain URIs that refer to resources like ‘domains’, ‘feeds’, ‘pipes’, and ‘messages’. Clients work with these resources using a RESTful API, which eventually leads them to publishing messages, receiving messages, or both.
A RestMS server can run alone, but more often it will connect to an AMQP network running on a company intranet. Often this will be the most effective way to publish data out of a company, to web-based RestMS clients, and to pass requests from those web clients back to internal AMQP services.
So RestMS can act as an AMQP-to-HTTP proxy, or it can work stand-alone. As a protocol, it looks just like HTTP, because it is just HTTP. Here, for example, is a typical request (to retrieve a domain):
Client request
------------------------------------------------------------
GET /restms/domain/default

Server response
------------------------------------------------------------
HTTP/1.0 200
Content-type: text/xml
Last-modified: Mon, 16 Mar 2009 20:33:51 UTC
Etag: 465425e4e49c1-8a-3
Date: Mon, 16 Mar 2009 20:34:12 UTC
Content-length: 274

<?xml version="1.0"?>
<restms xmlns = "http://www.restms.org/schema/restms">
  <domain name = "default" title = "Default domain">
    <feed type = "" name = "default" title = "Default feed"
          href = "http://host.com/restms/feed/default" />
  </domain>
</restms>
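A client can unpack such a response body with any standard XML library. Here, for instance, is a minimal Python sketch using only the standard library (the payload string below is copied from the response above):

```python
import xml.etree.ElementTree as ET

RESTMS_NS = "{http://www.restms.org/schema/restms}"

payload = """<?xml version="1.0"?>
<restms xmlns="http://www.restms.org/schema/restms">
  <domain name="default" title="Default domain">
    <feed type="" name="default" title="Default feed"
          href="http://host.com/restms/feed/default" />
  </domain>
</restms>"""

root = ET.fromstring(payload)
# Element names are namespace-qualified, so lookups need the full name.
domain = root.find(RESTMS_NS + "domain")
feed = domain.find(RESTMS_NS + "feed")

print(domain.get("name"))    # prints: default
print(feed.get("href"))      # prints: http://host.com/restms/feed/default
```

The `href` attribute is the key: it tells the client where the feed resource lives, so the client discovers URIs rather than constructing them.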
How RestMS works
RestMS uses a system of “feeds” and “pipes” to drive messages from a set of senders to a set of recipients. Messages are opaque blobs with an envelope. A set of subscribers create pipes, and join their pipes to feeds using various subscription criteria. A set of publishers post messages to feeds, causing messages to arrive in pipes. Subscribers then read their pipes, fetching and deleting messages. All this happens at runtime, using a RESTful API.
The feed-join-pipe model supports any kind of routing: one-to-one, one-to-many, one-to-one-of-many, and so on. RestMS delegates the actual routing semantics to “profiles”, which are extensions that define a particular set of feed, join, and pipe types.
In the general sense, both feeds and pipes act as FIFO queues, and joins act as multiplexors or selectors on messages. Individual profiles define more precise queuing and routing semantics. For example, the AMQP9 profile defines semantics that interoperate cleanly with AMQP/0.9.1, so that any RestMS server which implements the AMQP9 profile can act as an AMQP network front-end.
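As a thought experiment, the feed-join-pipe model can be rendered as a few in-memory classes. This is only an illustration of the concepts, not the RestMS API: the class names and the exact-match join rule are invented here, whereas real profiles define their own join semantics.

```python
from collections import deque

class Pipe(deque):
    """Subscribers read here, fetching and deleting messages (FIFO)."""
    def fetch(self):
        return self.popleft() if self else None

class Feed:
    """Publishers post here; joins decide which pipes get a copy."""
    def __init__(self):
        self.joins = []                      # (address, pipe) pairs

    def join(self, pipe, address):
        self.joins.append((address, pipe))

    def post(self, address, body):
        for match, pipe in self.joins:
            if match == address:             # invented exact-match rule
                pipe.append(body)

feed = Feed()
mine = Pipe()
feed.join(mine, address="rent.brooklyn")     # subscribe via a join

feed.post("rent.brooklyn", "2BR, $2000")     # routed into the pipe
feed.post("rent.queens", "1BR, $1500")       # no matching join: dropped

assert mine.fetch() == "2BR, $2000"
assert mine.fetch() is None
```

Note that the publisher never names a pipe and the subscriber never names a publisher: the join is the only coupling between them, which is what makes the model loose.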
REST over HTTP is very simple to understand. Resource operations are expressed as HTTP methods, so they are directly meaningful to all layers. Thus, fetching a resource is an HTTP GET, and deleting a resource is an HTTP DELETE. Updating a resource is an HTTP PUT. These three methods all operate on a resource URI, and can be conditional (using standard HTTP headers like If-Modified-Since).
Creating a resource is the most complex operation. The client POSTs a request to the parent of the resource to be created. The server checks the request and, if it is valid, creates the resource and returns a new URI together with the resource description. All RestMS URIs are either fixed by agreement or server-assigned. The client never specifies the URI for a new resource.
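The create-by-POST rule can be sketched as a tiny in-memory resource store. This is illustrative only: the URI scheme below is invented, and a real RestMS server assigns URIs according to the specification.

```python
import itertools

class ResourceStore:
    """Minimal sketch: the server, not the client, assigns new URIs."""
    def __init__(self):
        self.resources = {}
        self.serial = itertools.count(1)

    def post(self, parent_uri, description):
        """Create a child resource; return its server-assigned URI."""
        uri = f"{parent_uri}/{next(self.serial)}"   # invented URI scheme
        self.resources[uri] = description
        return uri

    def get(self, uri):
        return self.resources.get(uri)

    def put(self, uri, description):
        self.resources[uri] = description

    def delete(self, uri):
        self.resources.pop(uri, None)

store = ResourceStore()
uri = store.post("/restms/feed/default", {"title": "My join"})
assert store.get(uri) == {"title": "My join"}
store.delete(uri)
assert store.get(uri) is None
```

Because the server hands back the URI, clients never collide over names, and the server stays free to organise its URI space however it likes.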
RestMS expresses most resources in XML, or in JSON. Messages actually consist of a message resource – the envelope – (XML or JSON) and a set of content resources, which are MIME-typed blobs. Applications can express small messages fully in the envelope (like a postcard) or send very large contents (parcel post).
Why REST-over-HTTP is compelling
Though REST itself is not HTTP-specific, REST-over-HTTP is compelling because:
- It is simple: REST translates to just four HTTP methods used in a very regular fashion
- It uses existing transport layers: HTTP is a well-understood, human readable protocol
- It uses existing security: TLS/SSL software is widely available, and very robust
- It uses existing libraries: HTTP client and server stacks are cheap, good, and plentiful
- It uses existing metaphors: the Create-Retrieve-Update-Delete model is easy to understand
- It uses existing networks: HTTP uses web proxies, firewalls, caches, and accelerators
One problem: the simple patterns of REST-over-HTTP create a chatty protocol in which we see a lot of “request-response” traffic, with verbose resource descriptions. This is less of a problem than it might seem. In messaging, the only aspect where performance matters is the transfer of message data itself, not the control of wiring resources. Message transfer can be batched and encoded in more efficient forms.
A second problem: the basic REST-over-HTTP patterns are synchronous, but messaging needs event-driven delivery. Polling is not acceptable. We can resolve this by adding “long polling” to the GET method:
- Return the resource only when it has been modified by another application. This requires custom HTTP headers (When-Modified-After and When-None-Match).
- Return the resource only when it has been created. This requires that the server provides a URI before the resource exists. RestMS introduces this concept as an asynclet.
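The long-poll idea can be simulated with a condition variable: the “GET” blocks until another thread modifies the resource. This is a concurrency sketch of the principle, not of the HTTP mechanics; the method name merely echoes the When-Modified-After header proposed above.

```python
import threading

class LongPollResource:
    """Sketch: a GET that blocks until the resource changes."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.changed = threading.Condition()

    def get_when_modified_after(self, known_version, timeout=5.0):
        """Block until the resource is newer than the version the
        client already holds, then return the new version and value."""
        with self.changed:
            self.changed.wait_for(
                lambda: self.version > known_version, timeout=timeout)
            return self.version, self.value

    def put(self, value):
        """Modify the resource and wake all blocked long-polls."""
        with self.changed:
            self.value = value
            self.version += 1
            self.changed.notify_all()

res = LongPollResource("empty")
# Another application modifies the resource a moment later...
threading.Timer(0.1, res.put, args=["full"]).start()
# ...and the blocked GET returns the instant that happens: no polling.
version, value = res.get_when_modified_after(known_version=0)
```

Over HTTP, the blocking happens inside the server while the client’s GET request simply stays open; the timeout plays the role of the usual HTTP request timeout.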
A final problem: REST-over-HTTP is not considered an “Enterprise Technology”, and its appeal to larger businesses may therefore be restricted. Over time, though, the savings that this design offers are compelling, and REST-over-HTTP will become widely used in the enterprise as a basis for web service APIs, for messaging and other tasks.
What is available?
The RestMS specification is developed on www.restms.org, a collaborative wiki that hosts a series of specifications that together form RestMS. The specification process is open to all contributors who agree to license their work under a share-alike agreement.
You can get the specifications either from www.restms.org, or from a Git repository along with ancillary texts and a sample Perl client stack.
iMatix Corporation publishes a RestMS server called Zyre as part of their open source OpenAMQ/1.4 messaging product. Zyre runs on Linux, Solaris, AIX, Windows, and OpenVMS.
Thilo Fromm is the main author of a second open source project, Ahkera, which is building a RestMS server using the Django framework for Python.
Messaging is an important tool that is finally coming within reach of the normal software developer. However, the good stuff is still too complex, and the simple stuff not good enough for real work. We think it’s time for a simple, capable option, based on a free and open standard, with solid open source implementations.
RestMS is this free and open standard, based on generic routing and queuing models, and easily extensible to new semantics. RestMS was heavily influenced by AMQP, another emerging messaging protocol, and AtomPub, a content publishing protocol that uses a simple and regular HTTP-based API style called “REST”.
RestMS is simple, secure, scalable. It is clean, precise and minimal, but gets the job done. It uses established HTTP security standards. And it uses HTTP scaling: servers, caches, proxies, accelerators, optimised stacks.
You can download RestMS implementations today.