Today we updated PythonAnywhere with a simple but effective improvement: filesystem access from your web apps and consoles should now be much faster. Here’s what we did.
All of your consoles and web apps on PythonAnywhere share an identical filesystem: you can write to a file from a web app, then read it from a console and see it updating in real time. This is despite the fact that your web apps and consoles may all be running on different physical machines.
Gluster? Ceph? AFS even? No, my friends: NFS.
Obviously, to share the filesystem between multiple machines, we need to use a network filesystem under the hood. Now, with all of the recent work on distributed filesystems and all of the new cool stuff coming out, you might think that we were using something clever like GlusterFS, Ceph, or even that old stalwart, AFS. But while they’re all great for keeping large amounts of data, if all you want to do is share a few tens of gigabytes per user between machines, the best bet is actually still NFS. So that’s what we use.
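For the curious, an NFS share of this kind boils down to a couple of lines of configuration: an export on the server, and a matching mount on each client. The server names, paths, network range and options below are purely illustrative, not our actual setup:

```
# /etc/exports on the file server (names and network are made up)
/srv/user-homes  10.0.0.0/24(rw,sync,no_subtree_check)

# matching /etc/fstab line on each console/web-app machine
fileserver:/srv/user-homes  /home  nfs  defaults  0 0
```

Every machine that mounts the export sees the same files, which is all the “identical filesystem” trick requires.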
Unfortunately, we recently discovered that filesystem performance on PythonAnywhere had become unacceptably slow. Investigation revealed that the fault actually lay with the way we handle Dropbox shares.
Dropbox is a great product, and we love it. But we use it at an unusually large scale (tens of thousands of shares), and the clients that keep everything up-to-date on our side run on Linux machines. Linux is a pretty small market for Dropbox, and presumably they don’t put as much effort into optimising their Linux client as they do the Windows and Mac versions.
Added to this is the fact that a Dropbox client can’t usefully run on an NFS client. If an NFS server shares a directory, you mount it on an NFS client, and you run Dropbox on that client, then changes made to the files elsewhere won’t be detected by Dropbox, and so they won’t be synchronised. (For people who eat this kind of stuff for breakfast: it turns out Dropbox uses inotify, which doesn’t work over NFS.)
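To make that concrete, here’s a minimal, Linux-only sketch of the inotify API driven through ctypes (Python’s standard library has no inotify wrapper, so the constant and function names come straight from the C API; the function itself and its name are ours, not anything from Dropbox’s code). The key point is in the docstring: events come from the local kernel, so writes arriving over NFS from another machine never show up.

```python
import ctypes
import os
import struct

IN_MODIFY = 0x00000002  # from <sys/inotify.h>

libc = ctypes.CDLL("libc.so.6", use_errno=True)


def watch_for_modify(directory: bytes):
    """Block until a file in `directory` is modified, return (name, mask).

    inotify events are generated by the *local* kernel. A write that
    arrives over the network from another NFS client goes through the
    server's kernel, not this one's, so it produces no event here --
    which is exactly why Dropbox has to run on the NFS server itself.
    """
    fd = libc.inotify_init()
    if fd < 0:
        raise OSError(ctypes.get_errno(), "inotify_init failed")
    wd = libc.inotify_add_watch(fd, directory, IN_MODIFY)
    if wd < 0:
        raise OSError(ctypes.get_errno(), "inotify_add_watch failed")
    data = os.read(fd, 4096)  # blocks until at least one event arrives
    # struct inotify_event: int wd; u32 mask; u32 cookie; u32 len; char name[]
    _wd, mask, _cookie, name_len = struct.unpack_from("iIII", data)
    name = data[16:16 + name_len].rstrip(b"\0")
    os.close(fd)
    return name, mask
```

Run this against a local directory and touch a file in it from another shell, and you’ll get an event immediately; do the same write from another machine via an NFS mount of that directory, and the watcher sits there forever.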
So, historically, each NFS file server that was serving up your home directory was also running Dropbox so that it could keep everything in sync. But Dropbox was sucking up more and more machine resources as more and more people joined us on PythonAnywhere. Something had to change, and the change was pretty simple.
Dropbox files are now on a separate NFS server
The important thing for Dropbox is that it runs on the NFS server that is serving up the Dropbox directory. But there’s no reason that the NFS server serving up the Dropbox directory should also be the one serving up the rest of your files. So now we have two classes of file server: Dropbox file servers, which handle just the Dropbox subdirectory of each user’s filesystem, and regular file servers, which handle all of the other space you can write to. (Files you can’t write to are stored locally on each machine in our cluster.)
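In mount terms, the split looks roughly like this on each console or web-app machine – the server names and paths here are invented for illustration, and one mount simply sits on top of the other:

```
# hypothetical /etc/fstab entries on a console/web-app machine
files-17:/homes/alice      /home/alice           nfs  defaults  0 0
dropbox-3:/dropbox/alice   /home/alice/Dropbox   nfs  defaults  0 0
```

Anything under `~/Dropbox` goes to a Dropbox file server (which also runs the Dropbox client), and everything else in your home directory goes to a regular one.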
The net result is that accessing files inside your Dropbox subdirectory will be slightly faster than before (because the server handling it no longer has to handle your other files too), but more importantly, access to your other files will be much faster, because their server no longer has to run a heavyweight Dropbox client.
To give some numbers: before this change, the CPU load on a combined file server would frequently spike to 100%, and things would often get backed up. Now that we’ve made this change, a typical Dropbox server runs at around 50% CPU, while a regular file server is down at less than 10%.
We’ve got some other ideas planned for making filesystem access faster, but we’re hoping that this one alone will fix most of the problems people have seen.