Dec 24: Working at FastMail

This blog post is part of the FastMail 2014 Advent Calendar.

The previous post on 23rd December was the open protocol, JMAP. And this is the end!

Technical level: low

FastMail has been around for 15 years now, with a short detour as part of Opera Software before becoming our own company again.

Some History

I was hired in 2004 as the fourth member of a small technical team in Melbourne. Rob M was living overseas at the time, so I worked with Jeremy (one of the original founders, he’s moved on to other things now) and Richard. We had no office, but I would catch the train and tram to Port Melbourne and work with Jeremy in his lounge room.

After working for a big corporation where (no joke) I couldn’t have a server to do my work for the six months I was seconded to New Jersey, because they needed longer than that to plan things, and where I only managed to wrangle a desktop computer to use as a server because my laptop had been purchased in Australia and wasn’t in their database… it was a breath of fresh air to be asked to specify the laptop that I wanted and have it delivered and waiting for me when I started.

Jeremy also had another company, and we moved in with them when they got some space of their own. We shared a house in Port Melbourne where we set up desks in the bedrooms, and then later a proper office in Melbourne CBD until they were sold in 2008. We moved to a serviced office on the 50th floor of one of the tallest buildings in Melbourne. The view was fantastic, though my ears always popped in the elevator! Jeremy stayed with ODG, so it was just the three of us working together.

After the sale to Opera, we doubled the size of the team and took a larger office on the same floor. I was lucky enough to get a transfer to head office in Norway in 2011-2012, and while I was away the team in Australia grew further and moved to our current office on William St in the Melbourne CBD (interestingly, our datacentre in New York is also on a William St — it hasn’t caused any misdirected mail yet). We have a great office of our own now, with plenty of space.

Office environment

We work in rooms with 2-4 people, with doors that can be closed (though they usually aren’t) and a boardroom that’s big enough for the entire team to get together for our weekly status meeting. If anyone is remote (working from home, travelling, etc) they join via video conference. We’ve been using AppearIn from our friends at Telenor.


We have a huge open breakout area with couches, table tennis table and kitchen.


The nice thing about working on computers on the other side of the world is that it really doesn’t matter where you are. We don’t treat the office network specially; everybody’s laptop makes its own VPN connection anyway, so we can do our work anywhere. Most of the team have children, and many of us work from home one or two days per week.

When we are in the office together, we frequently gather around whiteboards to nut out ideas. The great thing about smartphones is that everyone has a camera, so we all take a photo of the end result and keep it with us as we go back to our individual tasks.

A Small Business

The great thing about FastMail is that it’s a blend of startup and small business. We have the best bits of startup culture — flexible working hours, free coffee, snacks and drinks in the fridge, table tennis table, cake on Fridays (often shared with our friends at ODG, we still stay in touch). This is matched with the best bits of a profitable company — consistent revenue, existing infrastructure, decent salaries, and people who understand the business side of things as well as the tech.

My first question when I interviewed with FastMail was “do you have someone who knows how to run a business”, because I worked for a dotcom that went bankrupt due to poor business planning. I didn’t want to live through that mess again. FastMail has had steady growth every year for the last 15 years, thanks to our fantastic users who appreciate our product and stay with us.

Jobs at FastMail

As with any business, if the right person appears, sometimes you adjust things to create a role for them. Our tasks aren’t that fixed; we split the work between us to get the required jobs done.

Having said that, we have two specific positions opening up in our Melbourne, Australia office for early 2015:

These two people will be working on both our FastMail product and building the reference open-source implementations for JMAP.

If you have the skills we need, and the right to work in Australia (sorry, we can’t help with visas or sponsorships), then drop us a line at


Thank you to everyone who has been following this series and reading what we write. More than anything, people want to know they are bringing value to others. One of the best things about working at FastMail is that the code we write is out there making people’s lives better almost immediately — that’s a great feeling. The positive feedback we’ve been receiving has made all the effort worthwhile, even the last-minute scramble to get posts finished on the weekends!

Extra special thanks to all our customers. It’s your ongoing support that allows us to keep pursuing our passion: building email, calendar and contacts done right.

Wishing everybody a happy and safe holiday season.

Posted in Advent 2014.

Dec 23: JMAP — A better way to email

This blog post is part of the FastMail 2014 Advent Calendar.

The previous post on 22nd December was the long awaited beta of contact syncing. The final post on 24th December is about our work environment.

Technical level: low

You can reproduce the demos by signing up for a trial account at FastMail and trying for yourself!


FastMail started with IMAP in 1999 when it was still quite new. We have been involved in the standards process, and have watched while non-open protocols led the way on features, usability and reliability. We pride ourselves on the standards compliance of our server, but we have been frustrated by the lack of progress in the third-party client experience available to our users.

The fragmentation of server support for newer IMAP features means that clients either have multiple implementations of everything, with the complexity and bugs that involves, or fall back to lowest-common-denominator behaviour. Extending IMAP further just makes this situation worse.

Having a separate protocol (SMTP) for sending email, running on a different TCP port and with potentially different credentials provides opportunities for partial failures, where a client can send but not receive or vice versa. These issues can be caused by firewalls, by temporary maintenance on the server, or just misconfiguration.

Regardless, for the end user the result is confusion. This is a frequent support problem for anybody with non-technical users, and the addition of CalDAV and CardDAV — everybody wants calendar and contact sync these days — means yet another failure mode in the mix.

FastMail’s protocol

When we built our own client and API we had the benefit of years of experience. I had rewritten the Cyrus IMAPd internals with strong locking to allow reliable QRESYNC support. We benefited from the work done by the LEMONADE working group for mobile email.

We also hooked into MBOXEVENTS and FUZZY SEARCH, building on standards in the backend to create a great experience for our users. We added non-standard conversations support to our server, and there is discussion on the protocol lists about standardising that in a way which is compatible with Gmail and other implementations.

Our servers are in New York, and our developers are in Australia. Even on a good day, the ping times are over 230 milliseconds. Anything more than the absolute minimum number of round trips is felt very keenly.

The end result is a very efficient, stateless, easy to use JSON API which already provides a great experience for our customers.

A proliferation of protocols

We are by no means the only people to have this idea. Many companies are building APIs to access email. Here are just a few of them:

None of these APIs are designed with a primary goal of enabling third-party clients to work efficiently, with few round trips and low bandwidth use, and they are all limited to a single vendor. To inter-operate, everything falls back to speaking IMAP and SMTP.

Enter JMAP

JMAP is FastMail’s protocol with the warts removed. We leverage existing standards like HTTP, JSON, and native push channels on platforms which have them – making it easy for developers to work with.

JMAP is friendly to mobile devices. By batching multiple commands in a single HTTP request, and having an efficient update mechanism, the radio works less, and battery life is increased. By using the platform push channels, JMAP avoids having to hold its own connection open.
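As a rough sketch of what batching looks like (a simplified Python illustration of the shape of a request, not the exact JMAP wire format; the method names and argument fields here are placeholders):

```python
import json

def make_jmap_batch(*calls):
    """Bundle several method calls into one request body.

    Each call is (methodName, arguments); a client-chosen tag is
    appended so responses can be matched back to their calls.
    """
    return json.dumps([[name, args, "#%d" % i]
                       for i, (name, args) in enumerate(calls)])

# One HTTP POST instead of several separate round trips:
body = make_jmap_batch(
    ("getMailboxes", {}),
    ("getMessageList", {"filter": {"inMailbox": "inbox"}, "limit": 50}),
)
```

The point is that the radio wakes once, sends one request, and gets every answer back in one response.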

JMAP is friendly to servers. Because it is stateless, there’s no need for the server to maintain a version of the mailbox view that’s out of sync with the current state, as IMAP does, just so that clients can use integer offsets to refer to messages.

JMAP is friendly to the network. Unlike IMAP, which can send an unbounded number of unsolicited updates, in JMAP you explicitly ask for the fields you are interested in, and the update command can contain a limit — if there are too many changes, the server returns an error and lets the client load the view from scratch. If you’re not caching the entire mailbox locally, then re-fetching a few pages of index listing is better than getting 100,000 EXPUNGE responses down the wire.
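That bounded update mechanism can be modelled like this (a toy Python sketch; the field names and error code are assumptions, not the real protocol):

```python
def get_updates(changelog, since_state, max_changes):
    """Return changes made after `since_state`, or an error telling
    the client to refetch from scratch if there are too many to list."""
    changes = changelog[since_state:]
    if len(changes) > max_changes:
        return {"error": "tooManyChanges", "newState": len(changelog)}
    return {"changed": changes, "newState": len(changelog)}

# A changelog of message ids, in the order they changed:
log = ["msg%d" % i for i in range(10)]

ok = get_updates(log, since_state=8, max_changes=5)       # small delta
too_many = get_updates(log, since_state=0, max_changes=5) # refetch instead
```

Either way, the response size is bounded: the server never floods the client with an arbitrarily long list of changes.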

JMAP is friendly to multiple clients. In IMAP, if you rename a folder or move messages, then the client which initiated the action can update their local cache, but every other client has to re-fetch everything – there is no unique message ID which follows the message. JMAP uses globally unique IDs for messages, so immutable data can be cached forever.

JMAP does everything. Instead of separate protocols with different syntax for sending email than for receiving it (and separate protocols again for contacts and calendars) JMAP combines them all under one protocol. Best of all, the push notification includes all tokens for each service, so you can synchronise changes to your entire mail account plus your contacts and your calendar in a single round trip — up to date immediately, the way it should be.

Open everything

The JMAP protocol is totally open and unencumbered.

FastMail commits to provide a reference server and reference client, as well as maintaining the specification document in collaboration with others and keeping an up-to-date test suite to allow verification of implementations.

Finally, we know IMAP, SMTP and the DAVs aren’t going away any time soon. No protocol will succeed unless it provides an upgrade path from where we are now, and a compelling reason to switch. We will provide a proxy which can talk to existing servers and present them over JMAP. If the proxy is run somewhere close (in network terms) to the legacy servers, then JMAP will be a better experience than speaking to the servers natively, particularly if you’re on a slow network or the other side of the world.

As more servers start talking JMAP natively, the baseline for interoperability will be raised beyond IMAP+SMTP.

Come and join us at and on the mailing list. Together we can build a better way to email.

Posted in Advent 2014.

Dec 22: CardDAV beta release

This blog post is part of the FastMail 2014 Advent Calendar.

The previous post on 21st December was about our file storage system. The following post on 23rd December promotes our standard for better email.

Technical level: medium

After more than a year of anticipation we’re very happy to announce today that we’re releasing CardDAV support into public beta test.

CardDAV is a protocol for reading, writing and synchronising contact data. It’s built into iOS devices and available on Android with an inexpensive third-party application. If you’ve ever wanted to have your FastMail contacts available on your mobile device (and vice-versa), then this is exactly what you want.

Obviously, since this is a beta, there are still a few pointy edges and non-working bits. The most notable thing is that the beta is currently only available to personal accounts, not to business or family accounts. This is because support for shared contacts is not ready yet and there’s some potential for data loss and inconsistent behaviour if you try to use shared contacts without proper support. We’re working hard on finishing shared contact support and hope to make the beta available to business and family accounts within the next couple of months.

CardDAV is only available to Full account levels and higher. Member, Guest and Lite accounts will need to upgrade to be able to use CardDAV.

So now that all the disclaimers are out of the way, you can sign up for the CardDAV beta here: Instructions for connecting your client are still under development here:

The contacts story

An address book is a fundamental component of any mail system and FastMail has had one almost since the beginning. It’s always been stored in the MySQL database and available through the web client. For much of its history it’s been confined to the web client. A few years back we did add a read-only LDAP interface, which is useful for desktop mail clients that could support LDAP address books. It works fine, but being read-only severely limits its usefulness. Some time later mobile devices happened, and it became clear that something else was needed.

In 2011 the CardDAV protocol was published, largely developed at Apple to allow device contacts to be synchronised with a server. The protocol is very similar to the earlier CalDAV protocol (which we also use for our calendar) which is good as it allows us to share a lot of code between our calendar and contacts system.

Towards the end of 2012 we started to seriously appreciate the need for both an integrated calendar and device contact syncing. We weren’t the only ones, as the Cyrus project had started to add support for CalDAV and CardDAV to the Cyrus mail server. We looked at a few options for calendar and contacts and decided that we would implement both on top of the support being baked into Cyrus, and work began in earnest. We decided that the calendar was more important because we already had a contacts system and, while it wasn’t perfect, we preferred to focus our engineering effort on a clear gap in our product lineup. That work took the best part of a year, and we finally released the calendar to production in June 2014. At that point we were able to focus on CardDAV-based contacts.

The CardDAV part of this work is actually fairly simple. Unlike CalDAV, the backend server (Cyrus) doesn’t really need much special knowledge; it mostly just saves and loads contact cards as required. Calendar entries are more complicated: the server needs to know about timezones, recurring events, alarms, etc. CardDAV is much easier, and if all we had to do was ship CardDAV support, we probably could have done so months ago.

The thing that made it more difficult came from the fact that we already had a contacts system and plenty of code fairly tightly integrated with it. It’s more than just the two user interfaces. The mail delivery pipeline also makes use of user contacts for spam whitelists and distribution lists, so we needed to teach these systems about a whole new storage system for contacts. Up until this time they had simply hit the database for this information. To make matters worse, we always knew that we’d need to roll out CardDAV to users gradually which meant that both the UI and the delivery code needed to be able to work with either. In short, we needed to abstract away the implementation details of the contacts storage, which took a few months to build, test and deploy. We ended up with a nice abstraction based on the JMAP getContacts/setContacts model, with a database provider behind it.
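The shape of that abstraction looks something like this (a heavily simplified Python sketch; the real code is Perl, and these class and method names are illustrative only):

```python
class ContactStore:
    """Provider-agnostic contacts interface, loosely modelled on a
    getContacts/setContacts style."""
    def get_contacts(self, ids=None):
        raise NotImplementedError
    def set_contacts(self, create=(), update=(), destroy=()):
        raise NotImplementedError

class DatabaseContacts(ContactStore):
    """A provider backed by a plain table (a dict stands in here)."""
    def __init__(self):
        self._rows = {}
        self._next_id = 1
    def get_contacts(self, ids=None):
        if ids is None:
            return list(self._rows.values())
        return [self._rows[i] for i in ids if i in self._rows]
    def set_contacts(self, create=(), update=(), destroy=()):
        created = []
        for contact in create:
            cid = self._next_id
            self._next_id += 1
            self._rows[cid] = dict(contact, id=cid)
            created.append(cid)
        for contact in update:
            self._rows[contact["id"]].update(contact)
        for cid in destroy:
            self._rows.pop(cid, None)
        return created

# Callers (the UI, the delivery pipeline) depend only on ContactStore,
# so a CardDAV-backed provider can be swapped in per user.
store = DatabaseContacts()
(new_id,) = store.set_contacts(create=[{"name": "Ada"}])
```

With both the UI and the delivery code going through the interface, migrating a user is a matter of swapping which provider is behind it.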

The next step was to write a CardDAV provider for our contacts abstraction. That was actually pretty easy because most of the code needed to access DAV resources was already available from our CalDAV work.

The last piece of the puzzle was the actual data conversion layer. The existing contacts system has a data model that doesn’t match up perfectly with the vCard format used by CardDAV, so we had to develop a mapping. Most of the fields have a 1:1 mapping (addresses, email addresses and phone numbers). What we call “online” fields, however, do not. Our “online” field group includes URLs, Twitter handles and chat IDs. vCard doesn’t group those the same way; more annoyingly, it doesn’t have a standard set of fields for representing them. It took a long time to develop and test a mapping that works most of the time. It’s going to need improvement as we go, but it’s not bad for now.
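A toy version of such a mapping (a Python sketch with hypothetical field names; the vCard properties shown are plausible choices, not necessarily the ones we settled on):

```python
# Map each "online" field kind to a vCard property. URL is standard;
# IMPP is the instant-messaging property from RFC 4770; the Twitter
# mapping here uses a vendor-style extension property as an example.
ONLINE_TO_VCARD = {
    "url":     lambda v: ("URL", v),
    "twitter": lambda v: ("X-SOCIALPROFILE;TYPE=twitter",
                          "https://twitter.com/" + v.lstrip("@")),
    "chat":    lambda v: ("IMPP", v),
}

def online_to_vcard_lines(online_fields):
    """Render a list of (kind, value) online fields as vCard lines."""
    lines = []
    for kind, value in online_fields:
        prop, text = ONLINE_TO_VCARD[kind](value)
        lines.append("%s:%s" % (prop, text))
    return lines

lines = online_to_vcard_lines([("url", "https://example.com"),
                               ("twitter", "@fastmail")])
```

The awkward part in practice is the reverse direction: deciding which of several loosely standardised vCard properties should land back in which of our fields.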

What’s next

The next few months will include a lot more testing, polishing and responding to user feedback and obviously completing the business and family support. That will bring us to a full release where everyone will be quietly and transparently migrated to the CardDAV backend. We can then start to clean up a lot of old code, always a nice thing to do.

If you’re trying the CardDAV beta test, we’d love to hear what you think. Let us know on twitter or by emailing

Posted in Advent 2014, News.

Dec 21: File Storage

This blog post is part of the FastMail 2014 Advent Calendar.

The previous post on 20th December saw us open source our core javascript library. The following post on 22nd December is the long awaited beta release of our contact syncing solution.

Technical level: high

Why does an email service need a file storage system?

For many years, well before cloud file storage became an everyday thing, FastMail has had a file storage feature. Like many features, it started as a logical extension of what we were already doing. People were accessing their email anywhere via a web browser, so it would be really nice to be able to access important files anywhere as well. And there are the obvious email integration points too: being able to save attachments from emails to somewhere more structured, or having files that you commonly want to attach to emails (e.g. your resume) stored at FastMail, rather than having to upload them again each time.

Lots of FastMail’s early features exist because they’re the kind of thing that we wanted for ourselves! It also turns out that having a generic file storage system is useful for other features, as we discovered later.

A generic file service

The first implementation of our file storage system used a real filesystem on a large RAID array. To make the data available to our web servers, we used NFS. While in theory a nice and straightforward solution, in practice it all ended up being fairly horrible. We used the NFS server built into the Linux kernel at that time, and although it was supposed to be stable, that was not our experience at all. While all our other servers had many months of uptime, the file storage server running NFS would freeze or crash somewhere between every few days and every week. This was particularly surprising to us because we weren’t actually stressing it much; the workload wasn’t high compared to the IO that some file servers perform.

Having the NFS server go offline and people losing access to their files until it was rebooted was bad enough, but there was a much worse problem. Any process that tried to access a file on the NFS mount would freeze up until the server came back. Since the number of processes handling web requests was limited, all it took was a few hundred requests by users trying to access their file storage, and suddenly there were no processes left to handle other web requests, and ALL web requests would start failing, meaning no one was able to access the FastMail site at all. Not nice. We tried a combination of soft mounts and other flags, but couldn’t find a combination that was both consistently reliable and failure safe.

Apparently I have suppressed the memories — something to do with being woken by both young children AND broken servers, but Rob M remembers, and he says: “In one of those great moments of frustration at being woken up again by a crashed NFS server, Bron wanted to do a complete rewrite, and to use an entirely different way of storing the files. Instead of storing the file system structure in a real filesystem, we decided to use a database. However we didn’t want to store the actual file data in the database, that would result in a massive monolithic database with all user file data in it, not easy to manage. So the approach he came up with is a rather neat hybrid that has worked really well.” So there you go.

One of my first major projects at FastMail was this new file storage service. I was fresh from building data management tools for late-phase clinical trials (drugs, man) for Quintiles in New Jersey (that’s right, I moved from living in New Jersey and working on servers in Melbourne to living in Melbourne and working on servers in New York). I over-engineered our file storage system using many of the same ideas I had used for clinical data.

Interestingly, a lot of the work I’d been doing at Quintiles looked very similar in design to git, though it was years before git came out. Data addressed by digest (sha1), signatures on digests of lists of digests to provide integrity and authenticity checks over large blocks of data. That product doesn’t seem to exist any more though.

The original build of the file service was based on many of the same concepts: file version forking and merging (which proved too complex and got scrapped), with a very clever cache hierarchy and invalidation scheme. Blobs (the file contents themselves) are stored in a key-value pool spread over multiple machines, with a push to many copies before the store is considered successful, and a background cleaning task that ensures they are spread everywhere and garbage collected when no longer referenced.
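In outline, the blob layer works something like this (a toy Python model of the idea; the real system is Perl and considerably more careful):

```python
import hashlib

class BlobPool:
    """Content-addressed blob store: a write succeeds only once the
    blob is on several machines, and unreferenced blobs can be
    garbage-collected later."""
    def __init__(self, machines=3, min_copies=2):
        self.machines = [dict() for _ in range(machines)]
        self.min_copies = min_copies

    def put(self, data):
        # The key is the digest of the content, so identical content
        # is stored once and the key doubles as an integrity check.
        key = hashlib.sha1(data).hexdigest()
        stored = 0
        for m in self.machines:
            m[key] = data
            stored += 1
            if stored >= self.min_copies:
                break  # a background task spreads the remaining copies
        return key

    def get(self, key):
        for m in self.machines:
            if key in m:
                return m[key]
        return None

    def gc(self, referenced):
        """Drop any blob whose key no longer appears in the node table."""
        for m in self.machines:
            for key in list(m):
                if key not in referenced:
                    del m[key]

pool = BlobPool()
key = pool.put(b"hello world\n")
```

The store call only returns once enough copies exist, which is exactly the robustness-over-speed trade-off described below.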

The blob storage system is very simple – we could definitely build or grab off the shelf something a lot faster and better these days, but it’s very robust, and that matters to us more than speed.

Interestingly enough, while the caching system was great when there was a lower volume of changes and slow database servers, it eventually became faster to remove a layer of caching entirely as our needs and technology changed.

Database backed nodes

The same basic architecture still exists today. The file storage is a giant table in our central MySQL database. Every entry is a “Node”, with a primary key called NodeId and a “ParentNodeId”. Node number 1 is treated specially, and is of class ‘RootNode’. It’s the top of the tree.

Because there are hundreds of thousands of top level nodes (ParentNodeId == 1), it would be crazy to read the key 'ND:1' (node directories, parent 1) for normal operations. Instead, we fetch “UA:$UserId” (the ACL entries for the user’s userid), and then walk the tree back up from each ACL which grants the user any access, building a tree that way.

For example:

$ DEBUG_VFS=1 vfs -u ls /
Fetching ND:504452
Fetching UA:485617
Fetching N:20872929
Fetching N:3
Fetching N:1394099
Fetching N:1394098
Fetching N:2
Fetching N:504452
d---   504452 2005-09-14 04:31:35
d---  1394098 2006-01-20 00:44:04
d---        2 2005-09-13 07:46:04

Whereas if we’re inside an ACL path we walk the tree normally from that ACL (we still need to check the other ACLs to see if they also impact the data we’re looking at…):

$ DEBUG_VFS=1 vfs -u ls '~/websites'
Fetching ND:504452
Fetching N:504452
Fetching UA:485617
Fetching N:20872929
Fetching N:3
Fetching N:1394099
Fetching N:1394098
Fetching N:2
Fetching UT:485617
darw  6548549 2007-03-27 06:03:19 cherubfest/
darw 39907741 2008-11-25 10:27:55 custom-ui/
darw 335168869 2014-10-19 23:51:38 talks/
Fetching NFZ:504465
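The ACL-rooted walk can be sketched like this (a toy Python illustration, with node IDs borrowed from the listings above):

```python
# Toy node table: NodeId -> ParentNodeId (node 1 is the root).
PARENT = {1: None, 2: 1, 504452: 1, 1394098: 1, 1394099: 1394098, 3: 1394099}

# The NodeIds at which this user's ACLs grant some access.
USER_ACL_NODES = [2, 504452, 1394098]

def visible_tree(acl_nodes, parent):
    """Walk up from each ACL node to the root, collecting only the
    nodes on those paths -- instead of expanding all the hundreds of
    thousands of children of node 1."""
    seen = set()
    for node in acl_nodes:
        while node is not None and node not in seen:
            seen.add(node)
            node = parent.get(node)
    return seen

tree = visible_tree(USER_ACL_NODES, PARENT)
```

The result contains only the handful of nodes the user can actually see, however many siblings exist under the root.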

This structure also allows us to keep deleted files for a week, because when a node is “deleted”, it’s not actually deleted – it just gets an integer field “IsDeleted” set to the NodeId of the top node being deleted.

Let’s try that too:

[brong@utility2 ~]$ vfs -u mkdir '~/test/blog'
[brong@utility2 ~]$ echo "hello world" > /tmp/hello.txt
[brong@utility2 ~]$ vfs -u cat '~/test/blog' /tmp/hello.txt
Failed to write: Could not open / for writing: Is a directory 
[brong@utility2 ~]$ vfs -u cat '~/test/blog/hello.txt' /tmp/hello.txt
[brong@utility2 ~]$ vfs -u cat '~/test/blog/hello.txt'
hello world
[brong@utility2 ~]$ vfs -u ls '~/test/blog/'
-arw 387950177 2014-12-21 00:31:56 hello.txt (12)
[brong@utility2 ~]$ vfs -u rm '~/test/blog/hello.txt'
[brong@utility2 ~]$ vfs -u ls '~/test/blog/'
[brong@utility2 ~]$ vfs -u lsdel '~/test/blog/'
/ (deleted)
-arw 387950177 2014-12-21 00:31:56 hello.txt (12)
[brong@utility2 ~]$ vfs -u undel 387950177
restored 387950177 (hello.txt) as /
[brong@utility2 ~]$ vfs -u cat '~/test/blog/hello.txt'
hello world

And the versioning:

[brong@utility2 ~]$ echo "hello world v2" > /tmp/hello.txt 
[brong@utility2 ~]$ vfs -u cat '~/test/blog/hello.txt' /tmp/hello.txt
Failed to write: Could not open / for writing: File exists
[brong@utility2 ~]$ vfs -u -f cat '~/test/blog/hello.txt' /tmp/hello.txt
[brong@utility2 ~]$ vfs -u cat '~/test/blog/hello.txt' 
hello world v2
[brong@utility2 ~]$ vfs -u ls '~/test/blog/'
-arw 387950597 2014-12-21 00:35:50 hello.txt (15)
[brong@utility2 ~]$ vfs -u lsdel '~/test/blog/'
/ (deleted)
-arw 387950177 2014-12-21 00:31:56 hello.txt (12)
[brong@utility2 ~]$ vfs -u undel 387950177 oldhello.txt
restored 387950177 (hello.txt) as /
[brong@utility2 ~]$ vfs -u ls '~/test/blog/'
-arw 387950597 2014-12-21 00:35:50 hello.txt (15)
-arw 387950745 2014-12-21 00:36:53 oldhello.txt (12)

This means that if you delete a folder and all its contents, you can undelete it without also undeleting everything ELSE that was deleted in the same action, just by setting nodes with the same IsDeleted field as the top node back to zero again.

[brong@utility2 ~]$ vfs -u rm '~/test/blog/oldhello.txt'
[brong@utility2 ~]$ vfs -u ls '~/test/blog/'
-arw 387950597 2014-12-21 00:35:50 hello.txt (15)
[brong@utility2 ~]$ vfs -u lsdel '~/test/blog/'
/ (deleted)
-arw 387950177 2014-12-21 00:31:56 hello.txt (12)
-arw 387950745 2014-12-21 00:36:53 oldhello.txt (12)
[brong@utility2 ~]$ vfs -u rm '~/test/blog'
[brong@utility2 ~]$ vfs -u lsdel '~/test/'
/ (deleted)
darw 387950021 2014-12-21 00:38:20 blog/
[brong@utility2 ~]$ vfs -u undel 387950021
restored 387950021 (blog) as /
[brong@utility2 ~]$ vfs -u ls '~/test/blog/'
-arw 387950597 2014-12-21 00:35:50 hello.txt (15)

So only the file that was in the folder when it was deleted is now present in the restored copy.

That’s the node tree. Alongside it is “Properties”, which are used not only for storing the MIME types of files (you can edit these in the file storage) and the expansion state of directories in the left view on the Files screen, but also via our WebDAV server to set any generic fields you like. There are ACLs and Locks which can also be used via DAV, and a Websites table which is used to manage our customer static-file web hosting.

The cache module can present any file as a regular Perl filehandle, including an append filehandle which already has the old content pre-loaded. Calling “close” on that filehandle returns a complete filestorage node on which other actions can be taken – and it does quota accounting on the fly. The whole system is object oriented and a joy to work with at the top level, despite all the weird bits underneath.

One thing: if you upload over the same filename again and again (webcams, for example), we will only keep the most recent 30 copies, because we found that it slowed the directory listings too much otherwise.

File storage as a service

A long time ago, if you were in the middle of composing a draft, and you had uploaded a file, we stored that file in a temporary location on the web server’s filesystem. This was fine normally, but if we had to shut down one of our web servers, the session would move to another server, and the email would be sent without the attachment, because it couldn’t find the file.

We now keep uploaded files in a separate VFS directory for each user – and if you’re composing and we switch our web servers, you don’t even notice:

[brong@imap20 hm]$ vfs -u ls /
darw   504453 2014-10-18 16:13:19 files/
d-rw  6825201 2014-12-16 21:36:11 temp/

The home directory that you see in the Files screen is “/username.domain/files/” in the global filesystem. Your temporary files go into “/username.domain/temp/” with a special flag that says when they will be automatically deleted (default, 1 week from upload).

This has been very useful for other things too. We now upload imported calendar/addressbook files into VFS first via our general upload mechanism, and just use a VFS address for everything internally, meaning that the upload and the processing phase can be separate.

Calendar attachments are likewise done with an upload as a temporary file, and then a conversion into a long-lived URL later. Even on Mailr, which doesn’t have a file storage service for users, the file storage engine is used underneath for attachments and calendar.

Multiple ways to access

We use Perl as our backend language. We built both FTP and DAV servers, as well as our Files screen within the app. Another very valuable use of filestorage (indeed, one of the original reasons) is that you can copy attachments from emails into file storage, and attach from file storage into new emails. This is a great way to save having to download and upload giant attachments if you want to send them on to someone else.

As you can see from my example above, where my top level had 3 different directories, it’s possible to share file storage within a business. There’s a very powerful access control system to grant access only to individual folders, and separate read and write roles.

You can create websites, including password protected websites. We even have support for basic photo galleries.

Open Source?

A lot of the tooling on top of the VFS is based on open source modules, but hacked so much that they can’t really be shared easily back to the original authors. I have been more lax with this than I should have been, particularly with the DAV modules, where I took the Net::DAV::Server module and almost entirely rewrote it based on the RFC, including a ton of testing against the clients of the day, and adding things like Apple’s quota support.

I don’t have time during this blog series to release anything nice, but there are no real secrets in the tree, so I’ve put up a tar file at which contains the current version of the files I’ve talked about in this post, and send me an email at the obvious address if you have comments or suggestions. The file itself is hosted on a FastMail static website!

A road not taken

All of this was built in 2004/2005. These days there are tons of “filesystem as a service” products out there. We could have built one – our VFS has some nice properties which would have made syncing with clients very viable – but our focus is on email, and we didn’t make the time to learn how to build good clients.

We actually have an experimental Linux FUSE module which can talk directly to our VFS, and I built some experimental Windows and Mac clients as well, but we never took it anywhere — though we did talk about the potential to pivot the company into the file storage business.

These days we have gone the other way, with Dropbox integration on the compose screen and hooks to add other services too. It makes sense to integrate with services which do what they are good at, and concentrate our efforts on being the best at our field, email.

Posted in Advent 2014, Technical.

Dec 20: Open-sourcing OvertureJS – the JS lib that powers FastMail

This blog post is part of the FastMail 2014 Advent Calendar.

The previous post on 19th December was about Mailr. The following post covers our file storage backend.

Technical level: medium

Every so often we’re asked what library we use to build our awesome webmail. We don’t use Ember, Angular, React, or even jQuery. Instead, we use something we’ve developed internally over the last four years. And today, we’re making it open source under the MIT License. It’s called Overture, and it could be the start of your next great web app.

Overture is a library of most of the general-purpose frontend code that powers FastMail. It’s a powerful basis for building really slick web applications, with performance matching, or surpassing, that of native apps.

At its heart, there’s a really powerful object system (originally inspired by SproutCore many years ago), which adds support for computed properties, observables (methods that trigger when properties change), events (which can pass from object to object) and bindings (to keep the property of one object in sync with another). The implementation is solid and extremely efficient, and allows you to write clear declarative-style code.
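The core idea behind computed properties and observables can be sketched in a few lines of plain JavaScript. This is a conceptual illustration only, not Overture’s actual API — `Observable`, `get`, `set` and `observe` here are hypothetical names:

```javascript
// Minimal observable object: setting a property notifies observers,
// which lets derived ("computed") values stay in sync automatically.
function Observable(initial) {
  const data = { ...initial };
  const observers = {};          // property name -> list of callbacks
  return {
    get(key) { return data[key]; },
    set(key, value) {
      data[key] = value;
      (observers[key] || []).forEach(fn => fn(value));
    },
    observe(key, fn) {
      (observers[key] = observers[key] || []).push(fn);
    },
  };
}

const person = Observable({ firstName: 'Ada', lastName: 'Lovelace' });

// A "computed property": recalculated whenever a dependency changes.
let fullName = person.get('firstName') + ' ' + person.get('lastName');
['firstName', 'lastName'].forEach(key =>
  person.observe(key, () => {
    fullName = person.get('firstName') + ' ' + person.get('lastName');
  })
);

person.set('firstName', 'Augusta');
// fullName is now 'Augusta Lovelace'
```

A binding, in this picture, is just two objects observing each other’s properties and calling `set` to keep them in sync.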

The datastore module provides the tools you need to manage the data in any CRUD-style application. It can keep track of client-side changes and efficiently synchronise with a server. Live-updating queries on the local cache allow your views to immediately update and feel completely responsive. Support for copy-on-write nested stores and a built-in Undo Manager give complete control over mutating data.

The view system uses the sugared DOM method to render views, which has many advantages over a standard template system, such as speed, XSS-invulnerability and access to the full power of JS, rather than a cut-down second programming language. The implementation in Overture lets you define views, which are essentially self-contained UI components, and easily nest other views inside.
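The “sugared DOM” approach — building views with plain function calls instead of a template language — can be sketched as follows. This is a toy illustration that renders to an HTML string; Overture’s real view API differs, and `el` here is a hypothetical helper:

```javascript
// Toy "sugared DOM": plain JS functions build the tree, so loops,
// conditionals and escaping come from the language itself.
const escapeHTML = str =>
  String(str).replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');

// el(tag, attrs, children) -> { html } node. Text children are escaped
// automatically, which is where the XSS-invulnerability comes from.
function el(tag, attrs = {}, children = []) {
  const attrStr = Object.entries(attrs)
    .map(([k, v]) => ` ${k}="${escapeHTML(v)}"`)
    .join('');
  const body = children
    .map(c => (typeof c === 'string' ? escapeHTML(c) : c.html))
    .join('');
  return { html: `<${tag}${attrStr}>${body}</${tag}>` };
}

const todos = ['Write post', 'Ship <Overture>'];
const list = el('ul', { class: 'todos' },
  todos.map(t => el('li', {}, [t]))        // a plain Array.map, no template loop syntax
);
// list.html -> '<ul class="todos"><li>Write post</li><li>Ship &lt;Overture&gt;</li></ul>'
```
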

There’s also one-line support for animating views. You declare the layout property and its dependencies, and Overture will handle animating it between the different states. Full support for drag and drop, localisation, keyboard shortcuts, inter-tab communication, routing and more mean you have everything you need to build an awesome app.

Overture is only 65 KB minified and gzipped, and has no dependencies (except for Squire if you want to use the rich text editing component). It supports all the browsers you’d expect including IE8+ (there’s a special little library you need to include for IE8 support).

There’s fairly complete API documentation available on the Overture website, and I’ve thrown together a classic Todo demo app. It’s a good showcase of what you can build in less than a day with Overture, and it’s heavily commented to help you understand what’s going on; use your browser web dev tools, or check out the source in the Overture repo, which is of course hosted on GitHub.


Dec 19: Mailr

This blog post is part of the FastMail 2014 Advent Calendar.

The previous post on 18th December was about how we take payments. The following post on 20th December saw us open source our core javascript library.

Technical level: low

We don’t often mention individual customers, for obvious privacy reasons.

While the negotiations for the sale of FastMail back to the staff in Australia were concluding, another long term contract was completed.

The end result is called Mailr and will be launching early next year. This was one of our largest projects this year, both in terms of development time, and in terms of interesting changes to the FastMail platform.

Co-branding and reselling FastMail

We have always offered great options for partnering with us and reselling our service. At the most basic level, everybody can generate their own affiliate URL and be paid for bringing us customers.

At the business level, value added resellers can create sub-businesses to leverage our systems for their own hosted email solution, and create their own custom landing page.

The deepest standard level of integration was used for the discontinued MyOpera Mail platform. It was a complete co-branding, with custom themes and even custom features. We have always had support for completely white-label co-branding, as evidenced by the annoyingly hard to type domain in all our settings (don’t worry, we have plans to move to easier-to-type names in the domain soon).

A whole new world

Telenor wanted something even more separated: a completely standalone version of the FastMail system, integrated with their Global Backend to give single sign-on and a central management and support channel across all their services. There are also jurisdictional hosting requirements for some of their business units, which mean that in future we will need the ability to run separate instances hosted in particular countries.

To support those requirements, we had to add another layer of abstraction to the FastMail platform. We call each fully standalone copy an “island”, which may consist of machines in multiple datacentres.

Mailr’s island is built with physical and virtual machines at Softlayer. Softlayer is a great fit for email, because email is very IO heavy, and they build real hardware to spec. This allowed us to have machines which are very similar to our regular email servers, with their great IO capabilities. We can also spin up virtual machines quickly for the services with no local storage requirements, allowing fast ramp-up of extra capability.

With Mailr now available as a fully integrated service through Telenor’s global backend, Telenor business units all around the world can very easily integrate a high quality email experience as part of their service offering.

An email service

When I say standalone, Mailr is completely isolated. It is updated separately from our FastMail datacentres in New York and Iceland. The source code is still stored in a single git repository, but the list of staff with access is different, and updates are applied independently.

It also has its very own larsbot in a separate IRC channel. It’s called something different, but still answers to lars because that’s hard-coded into our fingers (OK, I have typed “last ack” more than once…). We don’t offer FastMail as a boxed piece of software – set and forget. Email services just don’t work that way – they have to interact with the rest of the world in real time, and the 24 hour on-call roster and monitoring are a key part of our service.

If your company wants to move away from running your own service, FastMail is a great option – we offer a fantastic user experience, are competitive on price and provide exceptional reliability. Hosting your email with us releases your best technical staff from the day-to-day grind of interacting with the rest of the world’s email servers and allows them time to add value to the things which are core to your business. Drop us a line at


Dec 18: Billing and Payments — a potted history

This blog post is part of the FastMail 2014 Advent Calendar.

The previous post on 17th December was about how we test our software. The following post on 19th December is about one of our biggest projects this year.

Technical level: low

Billing is not very glamorous — but it is important.

If you’ve ever run a business, you know that it doesn’t matter how great your service or product is, you have to be able to bill your customers, or eventually you’ll no longer be able to provide that service or product.

The financial side of FastMail is broadly divided into two parts: billing — which is keeping track of which services are being used and how much people have paid, and payments — which is about actually collecting money from customers.


When FastMail started, our billing system was fairly ad-hoc, and pretty much the bare minimum we needed to keep track of user balances. When a user signed up, or paid, or renewed, we would

  • create a record with the effect on their balance and a description of the event, and
  • update any attributes affected, such as the service level, or subscription expiry date

This did the job for a few years, but as we slowly grew it became apparent that it was not sufficient. Manual adjustments caused problems, and it was difficult to extract information that our accountants needed.

So, we redid things in a kind of engineery-accounting way.

The basic record keeping part of the system has these properties:

  • data in the billing system is never changed or deleted, only added
  • every event is modelled as a bunch of square pulses with a start time, end time, resource type, and height of the pulse
  • a pulse may have no end time for non-expiring resources — in this case the pulse becomes a step function
  • if the resource type is “money” then all steps (money is non-expiring) are paired in such a way that this is equivalent to double-entry bookkeeping
  • each event has a separate time that the event occurred
  • the authoritative billing information is calculated by adding together all the pulses and steps.
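The pulse model above can be sketched in a few lines. This is a simplified illustration — the field names and the integer timestamps are hypothetical, not our actual schema:

```javascript
// Each event is a square pulse: `height` units of a resource, active
// from `start` until `end`. A null end is a non-expiring step function.
const events = [
  { resource: 'subscription', start: 100, end: 200,  height: 1 },
  { resource: 'subscription', start: 150, end: 250,  height: 2 },
  { resource: 'money',        start: 120, end: null, height: -40 }, // charge raised
  { resource: 'money',        start: 120, end: null, height: 40 },  // payment received
];

// The authoritative answer to "how much of this resource at time t?"
// is simply the sum of all pulses active at t. Nothing is ever
// updated or deleted; corrections are just more pulses.
function levelAt(events, resource, t) {
  return events
    .filter(e => e.resource === resource)
    .filter(e => e.start <= t && (e.end === null || t < e.end))
    .reduce((sum, e) => sum + e.height, 0);
}

levelAt(events, 'subscription', 175); // -> 3 (both pulses active)
levelAt(events, 'subscription', 225); // -> 2 (the first pulse has ended)
levelAt(events, 'money', 300);        // -> 0 (paired money steps sum to zero,
                                      //       as in double-entry bookkeeping)
```

Because the history is append-only, the same function answers the question for any time in the past, which is what makes the audits and reports described below possible.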

On top of that record keeping are a bunch of simple views and convenience methods for commonly used queries and actions, such as resources which are always linked to the duration of the current subscription (this was the case for Extra Aliases and Extra Domains, when we offered those “a la carte”).

This allowed us to do a number of things that were previously difficult or impossible. We can:

  • See exactly how many resources (e.g. subscriptions) were being used for any time in the past,
  • Reconstruct all the transactions and purchases that led to the current state — this is important to show customers a statement of account,
  • Easily and accurately calculate pro-rata changes, even when prices have changed since the original purchase,
  • Audit our user attributes to make sure that they match the information in our billing system,
  • Reconcile our billing information with our payment gateway, and
  • Report on things that are important to our accountant, like “deferred revenue” and “unrealised FX gain / loss”

This billing system has now been in use for the past 10 years, with only minor changes needed in that time. Part of the reason for the small number of changes is that over time our pricing and product offerings have become progressively simpler.


Some of our users don’t like this, because they like to optimise their subscription so that they are paying for exactly what they need, and no more. Most of our users, though, find the all-inclusive model to be much more appealing.

Some things that are still lacking in our billing system are automated “discount coupon” and “special prices for charities” functionality. I’d like to add these one day.


As a credit card merchant, an important thing is the chargeback rate — that is, the percentage of your payments that the cardholders dispute with their bank. If this rate gets too high, then the payment gateways deem you to be risky, and they will charge much higher fees, or cancel your account completely.

When we first started out we used integrated billing only for credit cards. This was done via a payment gateway — we chose Worldpay. In those days, the only payment gateways were banks, and Worldpay was then part of the Royal Bank of Scotland. Banks are pretty conservative by nature and we had to go to some lengths to reassure them that we were a good risk.

All other payment methods were manual — someone would send us a cheque or a PayPal payment or a telegraphic transfer, and we would try to find the FastMail user and apply the credit to that account.

We were fairly successful at keeping the chargeback rate low — we have a number of fraud checks, and we try to refund disputed payments promptly. So, we got along just fine with Worldpay.

Over time, Worldpay’s APIs improved and we were able to improve the user interface, do automated reconciliation, and even perform delayed capture.

However, when FastMail was bought by Opera in 2010, Worldpay would not let us keep our merchant account under the new structure. We would need to reapply for a new merchant account, and Worldpay’s policies had changed, so we would now need to keep a much larger deposit — so large that it wasn’t feasible.

Finding a payment gateway that had acceptable deposit terms turned out to be difficult. At the time, it seemed to be common practice to require merchants to provide a continual deposit of between three and twelve months of revenue! The mindset among many of the payment gateways appeared to be that the risk of total bankruptcy of their merchants was so high that they could not tolerate any possible exposure to chargebacks, ever.

We did a lot of searching around and found that even as recently as 2010, it was hard to find a payment gateway that met our requirements, which were:

  • take payments in USD
  • take payments from anywhere in the world
  • pay into a USD bank account in Australia
  • support delayed capture
  • have acceptable deposit terms
  • not require us to see the most PCI-DSS sensitive data

This last requirement was important, because we were not a huge company, and the cost of maintaining (and certifying) compliance with the higher levels of PCI-DSS is significant.

But we still wanted to be able to conduct recurring billing to make it easy for users to renew their subscriptions, and also so that our customers could purchase “a la carte” additions to their account.

The solution is for us to redirect to the payment gateway (in 2010) or use javascript (now) to send card information directly from the user’s web browser to the payment gateway. In both cases, the communication is directly between the user’s web browser and the payment gateway, without passing through our servers. The payment gateway would then give us a token that we could use to process additional payments when necessary.

This means that even if our servers were compromised, the attacker would not be able to steal the card details of our users from our system. The best way to ensure confidentiality is not actually having the data in the first place!
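The flow can be sketched as follows. This is a simplified illustration with a stubbed-out gateway — `gatewayTokenize` stands in for the HTTPS exchange that really happens between the user’s browser and the gateway’s servers, and all the names are hypothetical:

```javascript
// Stand-in for the gateway's browser-side tokenize call: card details
// go in, an opaque token comes out. In reality this is an HTTPS request
// from the user's browser directly to the payment gateway.
function gatewayTokenize(card) {
  return 'tok_' + Math.random().toString(36).slice(2);
}

// What the merchant's server stores and uses: only the token.
function chargeViaToken(serverSideStore, token, amountCents) {
  serverSideStore.push({ token, amountCents }); // no card number here
  return { status: 'captured', token };
}

const card = { number: '4111111111111111', cvv: '123', expiry: '12/16' };

// 1. The browser sends card details straight to the gateway
//    and receives a token in exchange.
const token = gatewayTokenize(card);

// 2. Only the token ever reaches the merchant's servers, so it can be
//    reused for recurring billing without the merchant holding card data.
const serverSideStore = [];
const result = chargeViaToken(serverSideStore, token, 4995);
```
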

We eventually found a payment gateway that met all of our requirements, and signed up with Global Collect.

An advantage of Global Collect was that it was able to process payments of many different methods, so in theory it was able to act as an abstraction layer, and allow us to easily take payments using almost any scheme, including local bank transfers in many different countries. In practice, there was a fair amount of work needed, and in the end the only additional payment method that we used was automatic payments via PayPal.

A substantial amount of effort was required to make things work with Global Collect — all the integration points were slightly different from Worldpay, and the failure modes were different too. There was substantially more work involved in the “non-payment” parts of the integration, such as reconciliation with the payment gateway (to make sure that FastMail has credited all the payments to the right users, even if we didn’t initiate them), and dealing with payments that unexpectedly change status (from Succeeded to Failed or vice versa) days or months after the actual payment.

This refactoring was not a lot of fun, so to reduce the pain of any future migration, a lot of groundwork was laid for our own payment abstraction layer.

We attempted to move all the Worldpay card data into Global Collect, so that customers would not have to re-enter their billing details. This generally worked technically but had some giant wrinkles. As a standing authorisation was created in Global Collect for each user, an “auth” of $1.00 was processed, which never appeared on customers’ card statements. For most customers this was fine. A small number of customers had their bank contact them about the $1.00 charge, which they didn’t know about, and an even smaller number of customers had their cards summarily cancelled because their bank deemed that the charges were probably fraudulent. None of this showed up in our testing with our own credit cards.

The whole point of this effort was to make things as convenient as possible for our customers, but the only people who noticed were those who were so severely inconvenienced that we wished we’d never tried. It was a good lesson for future migrations though.

The “delayed capture” feature worked very well with Global Collect. We would process an “auth” which would reserve the funds, and then a few days later we would “capture” the payment, which would actually take the funds from the user’s card.

Spammers and scammers are often trying to use our systems. Many of these are kept away by the requirement to pay, but a few determined spammers make use of stolen payment credentials to sign up accounts. Often we are alerted to abuse of our system within a few days, and if we find out before the payment is “captured”, then we can cancel the payment. In this case, the cardholder will never see a transaction on their paper statement.

In this situation we know that the cardholder’s credentials have been stolen, and are being actively used for fraud on the internet (because they have just been used to try to defraud us). However, we are repeatedly told by all the payment gateways and banks we have asked that there is NO WAY for us to tell the card issuing bank or the cardholder that those details are compromised.

Fast-forward a few more years to 2013, when FastMail was split from Opera. Unfortunately the merchant agreement was no longer applicable, so we needed to change to a different payment gateway.

When it seemed likely that we would have to switch payment gateways again, we pushed ahead with realising our payment abstraction layer. This would allow us to easily support multiple payment gateways at the same time, and direct users to the appropriate payment gateway.


It’s possible that a service like Chargify might have been able to provide this for us, but that would have required us to refactor all our billing code as well. Also, I didn’t know about Chargify or similar services then.

By this time, there were a number of the new breed of “disruptive” payment gateways around — such as Pin Payments, Stripe, Braintree, and others. These don’t require large deposits from their merchants.

This means they are exposed to a chargeback risk, but they have evaluated that risk as small enough that they don’t need every merchant to carry a deposit large enough to cover the maximum chargeback exposure.

They effectively keep about a week of revenue as a deposit, by paying out funds a week after they were taken from the merchant’s customers. This is very convenient for a merchant as it means you can just dip a toe into the waters of a payment gateway, without taking a plunge.

After the difficulties encountered when importing data into Global Collect, we decided not to use any data migration schemes this time. This meant our customers who already had a billing agreement with us had to enter their billing details again with the new payment gateway.

We selected Pin Payments to handle the bulk of our credit card payments, and have been pretty happy with them in general.

They had a “delayed capture” feature already, and when we asked if that could be automated (so that payments were automatically captured after 3 days) they were happy to add it. Unfortunately this came back to bite us — it turned out that the payment gateway had added a special case to deal with the automatic delayed capture, and this special case was not fully covered in their internal testing. When they made some internal changes later on, this caused a bug, and a bunch of payments were captured a second time, and a third time and a fourth! This was a giant headache to fix, and made us look pretty bad to the affected customers. As a result, we’ll shortly be doing the delayed capture ourselves — we don’t want to be skipping test cases in our payment gateway.

Speaking of “delayed capture”, it doesn’t work quite as cleanly with Pin Payments as it used to with Global Collect. Depending on the card and the bank, sometimes the “auth” appears as a transaction on the card, and then the “capture” appears as another transaction. Affected customers will see two transactions on an online statement. After a while, the “auth” transaction will completely disappear (and it will never appear on paper statements), but there is a period when customers may be concerned. We have had many reports of this occurring with Pin Payments, and the Stripe documentation also mentions that it may happen. We didn’t get any reports of this when we were using Global Collect.

In addition, when we started using Pin Payments they didn’t support American Express payments. Many of our business customers prefer to pay with American Express, so this was a challenge. We worked around this by initially advising American Express cardholders to pay via PayPal, and later by automatically sending those payments via Stripe.

A benefit of having our payment abstraction layer was that it was easy for us to gradually phase in payments via Pin. We started off with a couple of percent of payments, and slowly increased the percentage as we grew more confident that everything was going according to plan.

The abstraction layer makes it straightforward to integrate with PayPal directly, which we’ve had to do since late 2013.

It also means that using new payment gateways is pretty straightforward — only an implementation class and a reconciliation script need to be written. As an experiment, we’ve processed a small percentage of payments via Stripe, and we’re confident that we could switch if there was ever a persistent problem with Pin Payments.

And, after testing for a few months, today we’ve also added bitcoin payments (via BitPay) in our main web interface.
