PythonAnywhere now supports Postgres

Finally!

tl;dr: upgrade to a Custom account and you can now add Postgres

Say no to multi-tenancy (aka the Whale vs the Elephant vs the Dolphin)

Postgres has been the top-requested feature for as long as we've supported web apps (3 years? who's counting!). We did have a brief beta based on the idea of a single Postgres server with multi-tenanted, low-privileged accounts for each user, but it turned out Postgres really doesn't work too well that way (unlike MySQL).

Our new solution uses Linux containers to provide an isolated server for each user, so everyone can have full superuser access. And, yes, it uses the ubiquitous Docker under the hood.

How to get it

  • You'll need to upgrade to a Custom account and enable Postgres, as well as choose how much storage you need for your database.

  • Then, head on over to the Databases tab, and click the big button that says "Start a Postgres server".

  • Once it's ready, take a note of its hostname and port; you'll need them to connect (there's a connection sketch just after this list).

  • Set your superuser password. We'll save it to ~/.pgpass for convenience.

  • And now hit "Start a Postgres console" and take a look around!
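
Once the console's up, you can also connect from your own code. Here's a minimal sketch using psycopg2 -- the hostname, port, user and password below are placeholders, so substitute the values from your Databases tab (or leave the password out and let it be picked up from ~/.pgpass):

import psycopg2

# Placeholders: use the hostname and port shown on your Databases tab,
# plus the superuser you created and its password (or rely on ~/.pgpass).
connection = psycopg2.connect(
    database="postgres",
    user="mysuperuser",
    password="mypassword",
    host="your-postgres-hostname",
    port=10000,
)

with connection.cursor() as cursor:
    cursor.execute("SELECT version()")
    print(cursor.fetchone()[0])

connection.close()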

How it works under the hood

We run several different machines to host Postgres containers for our users. When you hit "start my server", we scan through to find a machine with spare capacity, and ask it to build a container for you.

It's a Docker container based on our Postgres image, but each individual user's is customised slightly. For example, we create a superuser account called "pythonanywhere_helper" with a unique, random password, which lets us perform some admin functions. (Once you have your own superuser account you could theoretically delete this guy, but we'd rather you didn't...)

We also set up a special permanent storage area for your database, which gives us the ability to migrate you to a different server if we need to.
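
If you're curious what that looks like in code terms, here's a rough sketch using the Docker SDK for Python -- not our actual implementation (there's more on that in the TDD write-up linked just below), just the general idea of a disposable container with its data directory bind-mounted from permanent storage:

import docker

client = docker.from_env()

# Illustrative only: the per-user storage directory is mounted in as the
# Postgres data directory, so the container itself can be thrown away and
# recreated on another machine without losing any data.
container = client.containers.run(
    "postgres",
    detach=True,
    environment={"POSTGRES_PASSWORD": "a-random-admin-password"},
    volumes={
        "/storage/some-user": {"bind": "/var/lib/postgresql/data", "mode": "rw"},
    },
    ports={"5432/tcp": 10000},  # expose Postgres on a per-user port
)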

You can read a bit more about our TDD process for developing this part of the codebase here.

What it costs

Aha, the dreaded question. We've set the price for the basic service at $15/month, which includes 1GB of storage; subsequent gigs are 20 cents/gig. We think that's pretty fair... Feel free to have a moan if you think it isn't!

What to do next

That's up to you! We've tried to supply you with everything you could need, including the bleeding-edge 9.4 version of Postgres, PostGIS, PL/Python and PL/Python3... Let us know how you get on!


New release -- new console with 256 colours, some fixes to task logging, and the P-thing.

Exciting new deploy today!

A new console

Obviously, the most important thing we did was to switch out our JavaScript console for a new one that supports 256 colours! And slightly saner copy + paste. And it works on Android, or at least it does on Lollipop. Giles recommends the Hacker's Keyboard. Still doesn't work on my BlackBerry though.

For the curious, it's based on hterm, which is a part of Chromium...

Some new packages

Of secondary importance, we added a few new packages, including TA-lib, pytesseract, and a thing called ruffus.

Improved logging of scheduled tasks

Scheduled tasks now log directly to files in /var/log, rather than storing their output in our database. That means they'll get log-rotated like everything else in there, and if you call flush on your sys.stdout, you may even be able to see live updates while tasks are still running. I think.
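
If you want to try that, the pattern is just the standard print-then-flush dance -- nothing PythonAnywhere-specific; something like:

import sys
import time

for i in range(10):
    print("processed batch {}".format(i))
    sys.stdout.flush()  # push the output to the log file straight away
    time.sleep(60)      # stand-in for real work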

New database type supported.

Oh, and we also released a new database type, it's called Postgres, I'm told it's quite popular. Skip on over to the accounts page and get yourself a Custom account if you want to check it out.

Happy coding everyone!


Outage report: 1st November 2014

We had an outage this morning that lasted about an hour. We've established the cause, fixed the problem, and all sites are now back up. Apologies to all those affected. More detail follows.

We were alerted to an initial problem via our monitoring systems around 9:50AM. The symptom was that one of the web servers was seeing intermittent outages, due to a memory leak in one of our users' web applications causing a lot of swapping.

Our failover procedure involves taking the affected server out of rotation on the load balancer, redistributing its workload across the other servers, and rebooting it. We saw the same user using a lot of memory on a second server, so we were able to confirm that it was a repeatable issue. We disabled his web app and rebooted this second server.

At this point the larger issue kicked in: the rebooted servers seemed to be non-functional when they came back, which left the remaining servers struggling to keep up with the load and caused outages for more customers. By this point two of us were working on the issue, and it took us a while to identify the root cause. It turned out to be a change in our logging configuration that was causing nginx to hang on startup. Specifically, it only affected users with custom SSL configuration. The reason this was particularly baffling to us is that our deploy procedure involves a manual check on a sample of custom SSL users, and we confirmed they were functional when we did that deploy two days ago. Our working theory is that nginx will reload happily with a broken logging config, but won't restart happily:

On deploy:

  1. Start nginx
  2. Add custom SSL webapp configs
  3. Reload nginx

--> not a problem, despite broken SSL webapp logging

On reboot:

  1. Custom SSL configs with broken logging are already present on disk
  2. Nginx refuses to start

We'll be confirming this theory in development environments over the next few days.

In the meantime, we've fixed the offending configuration file template, and confirmed that both regular users' and custom-SSL users' sites are back up. We're also adding some safeguards to prevent any other users' web apps from using up too much memory.

Once again, we apologise to all those affected.


Maintenance release: trusty + process listings

Hi All,

A maintenance release today, so nothing too exciting. Still, a couple of things you may care about:

  • We've updated to Ubuntu Trusty. Although we weren't vulnerable to Shellshock, it's nice to have the updated Bash, and to be on an LTS release.

  • We've added an oft-requested feature: the ability to view all your running console processes. You'll find it at the bottom of the Consoles page. The UI probably needs a bit of work (you need to hit refresh to update the list), but it's a solution for when you think you have some detached processes chewing up your CPU quota! Let us know what you think.

Other than that, we've updated the client-side tools for our Postgres beta to 9.4 and added some of the PostGIS utilities (email us if you want to check out the beta). We also fixed an issue where a redirect loop would break the "reload" button on web apps, and we've added weasyprint and python-svn to the batteries included.


Try PythonAnywhere for free for a month!

[Image: Angkor Wat steps]
Step Right Up Folks, Step Right Up...

OK, so we already have a free account, but we'd like to give out a free trial of our most popular paid plan, the "Web Developer" plan. Here's what you have to do to claim your free schtuff:

  1. Sign up for PythonAnywhere, if you haven't already
  2. Upgrade to a Web Developer account using Paypal or a credit card (we use this as a basic way of verifying your identity, to mitigate the likelihood of being used as a DDoS platform)
  3. Build a Thing using PythonAnywhere (even a really silly trivial Thing. Say a double-ROT13 encrypter)
  4. Get in touch with us, quoting the code UNGUESSABLEUNIQUECODE1, and show us your Thing
  5. And we'll refund your first month's payment!

Then, play around with the shiny paid features for the rest of the month! Build up to 3 web apps on whatever domains you like, splurge your 3,000 second CPU allowance or your 5GB of storage, use SSH and SSL to your heart's content, whatever it'll be. Then when your month is up, you can cancel if you don't like it (no need to quote any more codes or anything, just hit cancel, no questions asked).

PS -- this isn't some scheme to catch you out and tax you for a month if you forget to cancel. We'll send you a reminder a couple of days before the renewal date, reminding you that your trial is almost over and that you'll be auto-renewed if you don't cancel. And if you happen to miss that, and you get in touch all sore about how we charged you for the second month, we'll totally refund you that too. IT IS MATHEMATICALLY IMPOSSIBLE TO SAY FAIRER THAN THAT.

PPS -- offer expires at the end of this week! You have until 23:59 UTC on the 2nd of Nov.


Site updates on 1 October 2014

We've just updated PythonAnywhere, and there's some great news: Postgres is now in beta! We've switched it on for a select list of beta testers; if you'd like to join, drop us a line at support@pythonanywhere.com.

There have also been some minor tweaks and updates:

  • New installed packages: OpenCV, the libproj Ubuntu package (useful for some Python GIS packages), and WeasyPrint.
  • General website speedup (improved minifying of CSS and JavaScript).
  • The "Web" and "Databases" tabs remember which sub-tab you're on between visits.
  • Better validation on the web tab.
  • The page displayed for domains whose DNS routes to PythonAnywhere but whose web apps haven't been set up yet now gives a better explanation of what's going on.

Test-Driving a Docker-based Postgres service using py.test

[cross-posted at Obey The Testing Goat!]

We've been working on incorporating a Postgres database service into PythonAnywhere, and we decided to make it into a bit of a standalone project. The shiny is that we're using Docker to containerise Postgres servers for our users, and while we were at it we thought we'd try a bit of a different approach to testing. I'd be interested in feedback -- what do you like, what might you do differently?

Context: A Docker-based Postgres service

The objective is to build a service that, on demand, will spin up a Docker container with Postgres running on it, and listening on a particular port. The service is going to be controlled by a web API. We've got Flask to run the web service, docker-py to control containers, and Ansible to provision servers.
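
To give you a feel for the shape of it, here's roughly what the Flask layer looks like -- the names are illustrative rather than our actual code, and start_postgres_container is a hypothetical stand-in for the library functions discussed further down:

from flask import Flask, jsonify, request

from containers import start_postgres_container  # hypothetical helper

app = Flask(__name__)

@app.route("/api/create", methods=["POST"])
def create():
    # Spin up a Postgres container using the requested admin password,
    # and report back the port it ended up listening on.
    port = start_postgres_container(request.form["admin_password"])
    return jsonify({"status": "OK", "port": port})

if __name__ == "__main__":
    app.run(port=5000)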

A single loop of integrated tests

Normally we use a "double-loop" TDD process, with an outside loop of functional tests that use selenium to interact with our web app, and an inner loop of more isolated unit tests. For our development of the Postgres service, we still have the outer loop of functional tests -- selenium tests that log into the site via a browser, and test the service from the perspective of the user -- clicking through the right buttons on our UI and seeing if they can get a console that connects to a new Postgres service.

But for the inner loop we were in a green field -- this wasn't going to be another app in our monolithic Django project, we wanted it to be a standalone service, one that you could package up and use in another context. It would provide all its services via an API, and need no knowledge of the rest of PythonAnywhere. So how should we write the self-contained tests for this app? Should it, in turn, have a double loop? Relying on isolated unit tests only felt like a waste of time -- after all, the whole app was basically a thin wrapper that hooks up a web service to a series of Docker commands. All boundaries. Isolated unit tests would end up being all mocks. And from a TDD-process point of view, because we'd never actually used docker-py before, we didn't know its API, so we wouldn't know what mocks to write before we'd actually decided what the code was going to look like, and tried it out. And trying it out would involve either running one of the PythonAnywhere FTs (super-slow, so a tedious and onerous feedback loop), or doing manual tests, with all the uncertainty that implies.

So instead, it felt like starting with an intermediate-level layer of integrated tests might be best: we've already got our top-level UI layer full-stack tests in the form of functional tests. The next level down was the API level -- does calling this particular URL on the API actually give us a working container?

An example test

def test_create_starts_container_with_postgres_connectable(docker_cleanup):
    response = post_to_api_create()

    port = response.json()["port"]
    assert port > 1024

    connection = psycopg2.connect(
        database="postgres",
        user="pythonanywhere_helper", password="papwd",
        host="localhost", port=port,
    )
    connection.close()

Where

def post_to_api_create():
    response = requests.post(
        "http://localhost:5000/api/create",
        {"admin_password": "papwd"}
    )
    assert response.status_code == 200
    assert response.json()["status"] == "OK"
    return response

So you can see that's a very integration-ey, end-to-end test -- it does a real POST request, to a place where it expects to see an actual webapp running, and it expects to see a real, connectable database spun up and ready for it.

Now this test runs in about 10 seconds - not super-fast, like the milliseconds you might want a unit test to run in, but much faster than our FT, which takes 5 or 6 minutes. And, meanwhile, we can actually write this test first. To write an isolated, mocky test, we'd need to know the docker-py API already, and be sure that it was going to work, which we weren't.

To illustrate this point, take a look at the difference between an early implementation and a later one:

A first implementation

import os
import re
import tempfile
from hashlib import md5

from docker import Client
docker = Client(base_url='unix://var/run/docker.sock')

USER_IMAGE_DOCKERFILE = '''
FROM postgres
USER postgres
RUN /etc/init.d/postgresql start && \\
    psql -c "CREATE USER pythonanywhere_helper WITH SUPERUSER PASSWORD '{hashed}';"
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]
'''

def get_user_dockerfile(admin_password):
    hashed = 'md5' + md5(admin_password + 'pythonanywhere_helper').hexdigest()
    return USER_IMAGE_DOCKERFILE.format(
        hashed=hashed,
    )

def create_container_with_password(password):
    tempdir = tempfile.mkdtemp()
    with open(os.path.join(tempdir, 'Dockerfile'), 'w') as f:
        f.write(get_user_dockerfile(password))

    response = docker.build(path=tempdir)
    response_lines = list(response)

    image_finder = r'Successfully built ([0-9a-f]+)'
    match = re.search(image_finder, response_lines[-1])
    if match:
        image_id = match.group(1)
    else:
        raise Exception('Image failed to build:\n{}'.format(
            '\n'.join(response_lines)
        ))

    container = docker.create_container(
        image=image_id,
    )
    return container

(These are some library functions we wrote -- I won't show you the trivial Flask app that calls them.)

This was one of our first attempts -- we needed to be able to customise the Postgres superuser password for each user, and our initial solution involved building a new image for each user, by generating and running a custom Dockerfile for them.

We were never quite sure whether the Dockerfile voodoo was going to work, and we weren't really Postgres experts either, so having the high-level integration test, which actually tried to spin up a container and connect to the Postgres database that should be running inside it, was a really good way of getting to a solution that worked.

Imagine what a more isolated test for this code might look like:

@patch('containers.docker')
def test_uses_dockerfile_to_build_new_image(mock_docker):
    expected_dockerfile = USER_IMAGE_DOCKERFILE.format(
        hashed='md5' + md5('sekrit' + 'pythonanywhere_helper').hexdigest()
    )

    def check_dockerfile_contents(path):
        with open(os.path.join(path, 'Dockerfile')) as f:
            assert f.read() == expected_dockerfile

    mock_docker.build.side_effect = check_dockerfile_contents

    create_container_with_password('sekrit')

    assert mock_docker.build.called is True


@patch('containers.docker')
def test_creates_container_from_docker_image(mock_docker):
    create_container_with_password('sekrit')
    mock_docker.create_container.assert_called_once_with(
        image=mock_docker.build.return_value
    )

There's no way we could have written that test until we actually had a working solution. And, on top of that, the test would have been totally useless when it came to evolving our requirements and our solution.

A later implementation -- but minimal change to the main test

To give you an idea, here's what our current implementation looks like:

def start_new_container(storage_dirname, password, requested_port):
    prep_storage_dir(storage_dirname)
    run_command_on_temporary_container_with_mounts(
        command=['chown', '-R', 'postgres:postgres', POSTGRES_DIR],
        storage_dirname=storage_dirname,
        user='root',
    )
    run_command_on_temporary_container_with_mounts(
        command=[
            'bash', '-c', 
            INITIALISE_POSTGRES_AND_SET_PASSWORD.format(password)
        ],
        storage_dirname=storage_dirname
    )
    user_container = create_postgres_container(name=storage_dirname)
    start_container_with_storage(
        user_container, storage_dirname, 
        ports={POSTGRES_PORT: requested_port},
    )
    with open(port_file_path(storage_dirname), 'w') as f:
        f.write(str(requested_port))
    return requested_port

I won't bore you with the details of run_command_on_temporary_container_with_mounts, but one way or another we realised that building separate images for each user wasn't going to work, and that instead we were going to want to have some permanent storage mounted in from outside of Docker, which would contain the Postgres data directory, and which would effectively "save" customisations like the user's password.

So a radically different implementation, but look how little the main test changed:

def post_to_api_create(storage_dir=None, port=None):
    if storage_dir is None:
        storage_dir = uuid.uuid4()
    if port is None:
        port = random.randint(6000, 9999)
    response = requests.post(
        "https://localhost/api/containers/",
        {
            "storage_dir": storage_dir,
            "admin_password": OUR_PASSWORD,
            "port": port,
        },
        verify=False,
    )
    return response

def test_create_starts_container_with_postgres_connectable(docker_cleanup):
    response = post_to_api_create(port=6123)
    # rest of test as before!

And now imagine all the time we'd have had to spend rewriting mocks, if we'd decided to have isolated tests as well.

Aside: py.test observations

One py.test selling point is "less boilerplate". Notice that none of these tests are methods in a class, and there's no self variable. On top of that, we just use plain assert statements -- no complicated remembering of self.assertIn, self.assertIsNotNone, and so on. Absolutely loving that.
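
For anyone who hasn't seen py.test before, a quick made-up illustration of the difference:

import unittest

def get_port():
    return 5432  # stand-in for whatever the real code would return

# unittest style: a test class, self, and the assert* method zoo
class PortTest(unittest.TestCase):
    def test_port_is_not_privileged(self):
        self.assertGreater(get_port(), 1024)

# py.test style: a plain function and a plain assert
def test_port_is_not_privileged():
    assert get_port() > 1024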

py.test fixtures

Another thing you may be interested in is the docker_cleanup argument to the test. py.test will magically look for a special fixture function named the same as that argument, and use it in the test. Here's how it looks:

import pytest
from docker import Client

docker = Client(base_url='unix://var/run/docker.sock')

@pytest.fixture()
def docker_cleanup(request):
    containers_before = docker.containers()

    def kill_new_containers():
        current_containers = docker.containers()
        for container in current_containers:
            if container not in containers_before:
                print('killing {}'.format(container['Names'][0]))
                docker.kill(container)

    request.addfinalizer(kill_new_containers)
    return kill_new_containers

The fixture function has a couple of jobs:

  • It adds a "finalizer" (the equivalent of unittest's addCleanup or tearDown), which will run at the end of the test to kill any containers that the test has started

  • It provides that same finalizer, along with a helper to identify new containers, to the tests that use the fixture (I haven't shown any examples of that here though)

As illustrated here, there are no obvious advantages over the unittest setUp/tearDown approach, although you can see it would make it a little easier to share setup and cleanup code between tests in different files. There's a lot more to fixtures than this, though, and if you really want to get #mindblown, go check out pytest yield fixtures.
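
For the curious, here's roughly what the same cleanup looks like as a yield fixture -- a sketch rather than code we actually run (on older py.test versions the decorator was pytest.yield_fixture):

import pytest
from docker import Client

docker = Client(base_url='unix://var/run/docker.sock')

@pytest.fixture
def docker_cleanup():
    containers_before = docker.containers()

    yield  # the test itself runs at this point

    # everything after the yield is teardown
    for container in docker.containers():
        if container not in containers_before:
            print('killing {}'.format(container['Names'][0]))
            docker.kill(container)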

Incidentally, until I started using py.test I'd always associated "fixtures" with Django fixtures, which basically means serialised versions of model data, but py.test uses the word in a more correct sense, to mean "the state that the world has to be in for the test to run properly".

The pros & cons of the "integrated-tests-only" workflow

Pros:

  • Allowed us to experiment freely with an API that was new to us, and get feedback on whether it was really working
  • Allowed us to refactor code freely, extracting helper functions etc, without needing to rewrite mocky unit tests

Cons:

  • Being end-to-end tests, they ran much slower than unit tests would - on the order of seconds, and later, a minute or two, once we grew from three or four tests to a dozen or two. And, on top of that...

  • Being integrated tests, they're not designed to run on a development machine. Instead, each code change means pushing updated source up to the server using Ansible, restarting the control webapp, and then re-running the tests in an SSH session.

  • Because the tests call across a web API, the code being tested runs in a different process from the test code, meaning tracebacks aren't integrated into your test results. Instead, you have to tail a logfile, and make sure you have logging set up appropriately (a minimal setup is sketched just below).
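
By "set up appropriately" we just mean getting tracebacks into a file you can tail while the tests run against the service; a minimal sketch (the path and handler function are arbitrary):

import logging

logging.basicConfig(
    filename='/tmp/postgres-service.log',   # arbitrary path for the sketch
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(message)s',
)
logger = logging.getLogger(__name__)

def handle_create_request():
    # hypothetical stand-in for a real API handler
    raise RuntimeError('boom')

try:
    handle_create_request()
except Exception:
    logger.exception('request handler blew up')  # writes the full traceback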

Conclusions and next steps

I can potentially imagine a time when we might start to see value in a layer of "real" unit tests... So far, though, there's really no "business logic" that we could extract and write fast unit tests for. Or at least, there's none that I can identify as such, and I'd be very pleased for someone to come along and school me about it.

On the other hand, I can definitely see a time where we might want to split out our tests for the web API from the tests for the Postgres and Docker stuff, and I can see value in a setup where a developer can run these tests locally rather than having to push code up to a dev box. Vagrant and VirtualBox might be one solution, but, honestly, installing Docker and Postgres on a dev box doesn't feel that onerous either, as long as we know we'll be testing on a "real" box in CI. Or at least, it doesn't feel onerous until we start talking about my poor laptop with its paltry 120GB SSD. No room here!

And the bonus of being able to see honest-to-God tracebacks in your test run output feels like it might be worth it.

But, overall, at this stage in development, given the almost total lack of "business logic" in our app, and given the fact that we were working with a new API and a new set of technologies -- I've found that doing without "real" unit tests has actually worked very well.


New release - a few new packages, some steps towards postgres, and forum post previews

Today's upgrade included a few new packages in the standard server image:

  • OpenSCAD
  • FreeCAD
  • inkscape
  • Pillow for 3.3 and 3.4
  • flask-bootstrap
  • gensim
  • textblob

We also improved the default "Unhandled Exception" page, which is shown when a user's web app allows an exception to bubble up to our part of the stack. We now include a slightly friendlier message, explaining to any of the user's users that there's an error, and explaining to the user where they can find their log files and look for debug info.

And in the background, we've deployed a bunch of infrastructure changes related to postgres support. We're getting there, slowly slowly!

Oh yes, and we've enabled dynamic previews in the forums, so you get an idea of how the Markdown syntax will translate. It actually uses the same library as Stack Overflow, PageDown. Hope you find 'em useful!


Slides for Giles Thomas' EuroPython talk now online

Our founder, Giles Thomas, gave a high-level introduction to our load-balancing system as a talk at this summer's EuroPython. There's a video up on PyVideo: An HTTP request's journey through a platform-as-a-service. And here are the slides [PDF].


PythonAnywhere is looking for a new developer

[Image: ancient Greek cult initiation]

This position is now filled

Fancy helping to build the Python world's favourite PaaS (probably)? We're looking for a "junior" programmer with plenty of smarts to come and join the team, learn the stuff we do, and inject some new ideas...

Job spec

Here's some stuff you'd be doing:

  • Working in an Extreme Programming (XP) shop, pair programming and TDD all day, woo*.

  • Coding in Python lots and JavaScript a bit, and maybe other stuff too (OK there's like 5 lines of Lua code somewhere. But you could come along and try and convert us all to ClojureScript or something...)

  • Devops! Or what we take it to mean, which is that you deploy to and administer the servers as well as write code for them. Lots of Linux command-line goodness in SSH sessions, and automated deployment stuff.

  • Sexy CV-padding technologies! Like Docker, nginx, websockets, Django, copy-on-write filesystems, Ansible, GNU/Linux (Ubuntu), virtualbox, vagrant, continuous integration, AWS, redis, Postgres, and even Windows XP! (although we're phasing that last one out, to our great chagrin). Don't worry, you don't have to know any of these before you show up, you'll get to learn them all on the job...

  • Learn vim (if you want to) much faster than you would on your own by being forced to pair program with vim cultists all too happy to explain the abstruse keyboard shortcuts they're using...

  • Get involved in the nonprogramming aspects of the business too (we all do!), like customer support and marketing. Honestly, that can be fun too.

  • Work near enough to Silicon Roundabout that you can walk to the Hacker News meetups, but not so near that you're forced to overhear bad startup ideas being pitched in every coffee shop.

* The pair programming thing is an unbelievably good deal for new developers btw, there's just no faster way of learning than sitting down next to someone that knows and being able to ask them questions all day, and they're not allowed to get annoyed.

Person spec

Here's the kind of person we'd like

  • Smart -- academically or otherwise. A degree (CS or other) won't hurt, but it's not required either.
  • An enthusiastic programmer (but not necessarily Python and not necessarily professional)
  • A bit wacky (which doesn't mean you have to be an extrovert, just inclined to left-field ideas)
  • Willing to come and work here in sunny Clerkenwell
  • Willing to get paid less than you would at Google or a bank, in exchange for working in an exciting but relaxed tech-startup environment

Here's some stuff we don't care about:

  • Age
  • Male/Female/Black/White/Number of functioning limbs/Space alien.
  • Gaps in CVs
  • Current country of residence (as long as you're willing to move here promptly! You do need to be able to speak good English, and unfortunately we're too small to sponsor visas, so you need to already have the right to live + work in the UK)
  • Dress codes

Send an email telling us why you'd like to work here, along with a current CV, to jobs@pythonanywhere.com.

Image source: Wikimedia Commons, Eleusinian Mysteries

