Maintenance release: trusty + process listings

Hi All,

A maintenance release today, so nothing too exciting. Still, a couple of things you may care about:

  • We've updated to Ubuntu Trusty. Although we weren't vulnerable to Shellshock, it's nice to have the updated Bash, and to be on an LTS release.

  • We've added an oft-requested feature: you can now view all your running console processes. You'll find it at the bottom of the consoles page. The UI probably needs a bit of work (you currently need to hit refresh to update the list), but it's a solution for when you think you have some detached processes chewing up your CPU quota! Let us know what you think.

Other than that, we've updated the client-side tools for our Postgres beta to 9.4, and added some of the PostGIS utilities. (Email us if you want to check out the beta.) We also fixed an issue where a redirect loop would break the "reload" button on web apps, and we've added weasyprint and python-svn to the batteries included.


Try PythonAnywhere for free for a month!

[Image: Angkor Wat steps]
Step Right Up Folks, Step Right Up...

OK, so we already have a free account, but we'd like to give out a free trial of our most popular paid plan, the "Web Developer" plan. Here's what you have to do to claim your free schtuff:

  1. Sign up for PythonAnywhere, if you haven't already
  2. Upgrade to a Web Developer account using Paypal or a credit card (we use this as a basic way of verifying your identity, to mitigate the likelihood of being used as a DDoS platform)
  3. Build a Thing using PythonAnywhere (even a really silly trivial Thing. Say a double-ROT13 encrypter)
  4. Get in touch with us, quoting the code UNGUESSABLEUNIQUECODE1, and show us your Thing
  5. And we'll refund your first month's payment!

Then, play around with the shiny paid features for the rest of the month! Build up to 3 web apps on whatever domains you like, splurge your 3,000-second CPU allowance or your 5GB of storage, use SSH and SSL to your heart's content, whatever takes your fancy. Then when your month is up, you can cancel if you don't like it (no need to quote any more codes or anything, just hit cancel, no questions asked).

PS -- this isn't some scheme to catch you out and tax you for a month if you forget to cancel. We'll send you a reminder a couple of days before the renewal date, reminding you that your trial is almost over and that you'll be auto-renewed if you don't cancel. And if you happen to miss that, and you get in touch all sore about how we charged you for the second month, we'll totally refund you that too. IT IS MATHEMATICALLY IMPOSSIBLE TO SAY FAIRER THAN THAT.

PPS -- offer expires at the end of this week! You have until 23:59 UTC on the 2nd of Nov.


Site updates on 1 October 2014

We've just updated PythonAnywhere, and there's some great news: Postgres is now in beta! We've switched it on for a select list of beta testers; if you'd like to join, drop us a line at support@pythonanywhere.com.

There have also been some minor tweaks and updates:

  • New installed packages: OpenCV, the libproj Ubuntu package (useful for some Python GIS packages), and WeasyPrint.
  • General website speedup (improved minifying of CSS and JavaScript).
  • The "Web" and "Databases" tabs remember which sub-tab you're on between visits.
  • Better validation on the web tab.
  • The page displayed for domains that have their DNS pointed at PythonAnywhere, but don't yet have a web app set up, now has a better explanation of what's going on.

Test-Driving a docker-based Postgres service using py.test

[cross-posted at Obey The Testing Goat!]

We've been working on incorporating a Postgres database service into PythonAnywhere, and we decided to make it into a bit of a standalone project. The shiny is that we're using Docker to containerise Postgres servers for our users, and while we were at it we thought we'd try a bit of a different approach to testing. I'd be interested in feedback -- what do you like, what might you do differently?

Context: A Docker-based Postgres service

The objective is to build a service that, on demand, will spin up a Docker container with Postgres running on it, and listening on a particular port. The service is going to be controlled by a web API. We've got Flask to run the web service, docker-py to control containers, and Ansible to provision servers.

A single loop of integrated tests

Normally we use a "double-loop" TDD process, with an outside loop of functional tests that use selenium to interact with our web app, and an inner loop of more isolated unit tests. For our development of the Postgres service, we still have the outer loop of functional tests -- selenium tests that log into the site via a browser, and test the service from the perspective of the user -- clicking through the right buttons on our UI and seeing if they can get a console that connects to a new Postgres service.

But for the inner loop we were in a green field -- this wasn't going to be another app in our monolithic Django project, we wanted it to be a standalone service, one that you could package up and use in another context. It would provide all its services via an API, and need no knowledge of the rest of PythonAnywhere. So how should we write the self-contained tests for this app? Should it, in turn, have a double loop? Relying on isolated unit tests only felt like a waste of time -- after all, the whole app was basically a thin wrapper that hooks up a web service to a series of Docker commands. All boundaries. Isolated unit tests would end up being all mocks. And from a TDD-process point of view, because we'd never actually used docker-py before, we didn't know its API, so we wouldn't know what mocks to write before we'd actually decided what the code was going to look like, and tried it out. And trying it out would involve either running one of the PythonAnywhere FTs (super-slow, so a tedious and onerous feedback loop), or doing manual tests, with all the uncertainty that implies.

So instead, it felt like starting with an intermediate-level layer of integrated tests might be best: we've already got our top-level UI layer full-stack tests in the form of functional tests. The next level down was the API level -- does calling this particular URL on the API actually give us a working container?

An example test

import psycopg2

def test_create_starts_container_with_postgres_connectable(docker_cleanup):
    response = post_to_api_create()

    port = response.json()["port"]
    assert port > 1024

    connection = psycopg2.connect(
        database="postgres",
        user="pythonanywhere_helper", password="papwd",
        host="localhost", port=port,
    )
    connection.close()

Where

import requests

def post_to_api_create():
    response = requests.post(
        "http://localhost:5000/api/create",
        {"admin_password": "papwd"}
    )
    assert response.status_code == 200
    assert response.json()["status"] == "OK"
    return response

So you can see that's a very integration-ey, end-to-end test -- it does a real POST request, to a place where it expects to see an actual webapp running, and it expects to see a real, connectable database spun up and ready for it.

Now this test runs in about 10 seconds - not super-fast, like the milliseconds you might want a unit test to run in, but much faster than our FT, which takes 5 or 6 minutes. And, meanwhile, we can actually write this test first. To write an isolated, mocky test, we'd need to know the docker-py API already, and be sure that it was going to work, which we weren't.

To illustrate this point, take a look at the difference between an early implementation and a later one:

A first implementation

import os
import re
import tempfile
from hashlib import md5

# `docker` here is a docker-py Client instance (see the fixtures section below)

USER_IMAGE_DOCKERFILE = '''
FROM postgres
USER postgres
RUN /etc/init.d/postgresql start && \\
    psql -c "CREATE USER pythonanywhere_helper WITH SUPERUSER PASSWORD '{hashed}';"
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]
'''

def get_user_dockerfile(admin_password):
    hashed = 'md5' + md5(admin_password + 'pythonanywhere_helper').hexdigest()
    return USER_IMAGE_DOCKERFILE.format(
        hashed=hashed,
    )

def create_container_with_password(password):
    tempdir = tempfile.mkdtemp()
    with open(os.path.join(tempdir, 'Dockerfile'), 'w') as f:
        f.write(get_user_dockerfile(password))

    response = docker.build(path=tempdir)
    response_lines = list(response)

    image_finder = r'Successfully built ([0-9a-f]+)'
    match = re.search(image_finder, response_lines[-1])
    if match:
        image_id = match.group(1)
    else:
        raise Exception('Image failed to build:\n{}'.format(
            '\n'.join(response_lines)
        ))

    container = docker.create_container(
        image=image_id,
    )
    return container

(These are some library functions we wrote; I won't show you the trivial Flask app that calls them.)
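That said, to give a rough idea of what that thin Flask wrapper might look like, here's a hypothetical sketch (not our real code), assuming the functions above live in a module called containers:

import flask
from docker import Client

from containers import create_container_with_password

docker = Client(base_url='unix://var/run/docker.sock')
app = flask.Flask(__name__)

@app.route('/api/create', methods=['POST'])
def create():
    password = flask.request.form['admin_password']
    container = create_container_with_password(password)
    # let Docker pick a free host port for the container's 5432,
    # then ask it which port it chose
    docker.start(container, publish_all_ports=True)
    port = int(docker.port(container, 5432)[0]['HostPort'])
    return flask.jsonify(status='OK', port=port)

if __name__ == '__main__':
    app.run(port=5000)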

This was one of our first attempts -- we needed to be able to customise the Postgres superuser password for each user, and our initial solution involved building a new image for each user, by generating and running a custom Dockerfile for them.
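(In case the md5 mangling in get_user_dockerfile looks like voodoo: Postgres accepts pre-hashed passwords in the form of the literal string md5 followed by md5(password + username), which is why the admin password gets hashed together with the pythonanywhere_helper user name.)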

We were never quite sure whether the Dockerfile voodoo was going to work, and we weren't really Postgres experts either, so having the high-level integration test, which actually tried to spin up a container and connect to the Postgres database that should be running inside it, was a really good way of getting to a solution that worked.

Imagine what a more isolated test for this code might look like:

from hashlib import md5
import os

from mock import patch

from containers import USER_IMAGE_DOCKERFILE, create_container_with_password


@patch('containers.docker')
def test_uses_dockerfile_to_build_new_image(mock_docker):
    expected_dockerfile = USER_IMAGE_DOCKERFILE.format(
        hashed='md5' + md5('sekrit' + 'pythonanywhere_helper').hexdigest()
    )

    def check_dockerfile_contents(path):
        with open(os.path.join(path, 'Dockerfile')) as f:
            assert f.read() == expected_dockerfile

    mock_docker.build.side_effect = check_dockerfile_contents

    create_container_with_password('sekrit')

    assert mock_docker.build.called is True


@patch('containers.docker')
def test_creates_container_from_docker_image(mock_docker):
    create_container_with_password('sekrit')
    mock_docker.create_container.assert_called_once_with(
        image=mock_docker.build.return_value
    )

There's no way we could have written that test until we actually had a working solution. And, on top of that, the test would have been totally useless when it came to evolving our requirements and our solution.

A later implementation -- but minimal change to the main test

To give you an idea, here's what our current implementation looks like:

def start_new_container(storage_dirname, password, requested_port):
    prep_storage_dir(storage_dirname)
    run_command_on_temporary_container_with_mounts(
        command=['chown', '-R', 'postgres:postgres', POSTGRES_DIR],
        storage_dirname=storage_dirname,
        user='root',
    )
    run_command_on_temporary_container_with_mounts(
        command=[
            'bash', '-c', 
            INITIALISE_POSTGRES_AND_SET_PASSWORD.format(password)
        ],
        storage_dirname=storage_dirname
    )
    user_container = create_postgres_container(name=storage_dirname)
    start_container_with_storage(
        user_container, storage_dirname, 
        ports={POSTGRES_PORT: requested_port},
    )
    with open(port_file_path(storage_dirname), 'w') as f:
        f.write(str(requested_port))
    return requested_port

I won't bore you with the details of run_command_on_temporary_container_with_mounts, but one way or another we realised that building separate images for each user wasn't going to work, and that instead we were going to want to have some permanent storage mounted in from outside of Docker, which would contain the Postgres data directory, and which would effectively "save" customisations like the user's password.
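For a rough idea of what that mounting looks like at the docker-py level (a simplified sketch with made-up names -- the image name and host storage path are hypothetical, and this isn't our actual helper), you declare the data directory as a volume when you create the container, then pass a binds dictionary when you start it:

POSTGRES_DIR = '/var/lib/postgresql'
POSTGRES_PORT = 5432

def sketch_of_start_with_storage(storage_dirname, requested_port):
    container = docker.create_container(
        image='our-postgres-image',   # hypothetical image name
        volumes=[POSTGRES_DIR],
        ports=[POSTGRES_PORT],
    )
    # bind-mount the per-user storage directory from the host into the
    # container, and map the container's Postgres port to the requested
    # host port
    docker.start(
        container,
        binds={
            '/storage/' + storage_dirname: {'bind': POSTGRES_DIR, 'ro': False},
        },
        port_bindings={POSTGRES_PORT: requested_port},
    )
    return container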

So a radically different implementation, but look how little the main test changed:

import random
import uuid

import requests

def post_to_api_create(storage_dir=None, port=None):
    if storage_dir is None:
        storage_dir = uuid.uuid4()
    if port is None:
        port = random.randint(6000, 9999)
    response = requests.post(
        "https://localhost/api/containers/",
        {
            "storage_dir": storage_dir,
            "admin_password": OUR_PASSWORD,
            "port": port,
        },
        verify=False,
    )
    return response

def test_create_starts_container_with_postgres_connectable(docker_cleanup):
    response = post_to_api_create(port=6123)
    # rest of test as before!

And now imagine all the time we'd have had to spend rewriting mocks, if we'd decided to have isolated tests as well.

Aside: py.test observations

One py.test selling point is "less boilerplate". Notice that none of these tests are methods in a class, and there's no self variable. On top of that, we just use plain assert statements -- no need to remember self.assertIn, self.assertIsNotNone, and so on. Absolutely loving that.
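Just to make that concrete, here's the same trivial assertion written both ways (a throwaway example rather than one of our real tests):

import unittest

class ApiCreateTest(unittest.TestCase):
    # the unittest version: a class, self, and a named assertion method
    def test_create_says_ok(self):
        response = post_to_api_create()
        self.assertEqual(response.json()["status"], "OK")


def test_create_says_ok():
    # the py.test version: a bare function and a plain assert
    response = post_to_api_create()
    assert response.json()["status"] == "OK"

And when a plain assert fails, py.test's assertion introspection still prints the values on either side of the ==, so you don't lose helpful failure messages by ditching the assert* methods.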

py.test fixtures

Another thing you may be interested in is the docker_cleanup argument to the test. py.test will magically look for a special fixture function named the same as that argument, and use it in the test. Here's how it looks:

import pytest

from docker import Client
docker = Client(base_url='unix://var/run/docker.sock')

@pytest.fixture()
def docker_cleanup(request):
    containers_before = docker.containers()

    def kill_new_containers():
        current_containers = docker.containers()
        for container in current_containers:
            if container not in containers_before:
                print('killing {}'.format(container['Names'][0]))
                docker.kill(container)

    request.addfinalizer(kill_new_containers)
    return kill_new_containers

The fixture function has a couple of jobs:

  • It adds a "finalizer" (the equivalent of unittest addCleanup or tearDown) which will run at the end of each test, to kill any containers that the test has started

  • It provides that same finalizer, along with a way of identifying new containers, to any test that uses the fixture, so tests can call it directly as a helper too (I haven't shown any examples of that here though)

As it's illustrated here, there are no obvious advantages over the unittest setUp/tearDown pattern, although you can see it would make it a little easier to share setup and cleanup code between tests in different files. There's a lot more to fixtures than this, though, and if you really want to get #mindblown, go check out pytest yield fixtures.
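To give a flavour (this is a sketch rather than code we actually run, and it drops the part where the fixture hands the helper function back to the test), here's docker_cleanup rewritten as a yield fixture -- the setup goes before the yield, and everything after it runs as cleanup when the test finishes:

@pytest.yield_fixture()
def docker_cleanup():
    containers_before = docker.containers()

    yield  # the test itself runs at this point

    # after the yield: kill anything the test started
    for container in docker.containers():
        if container not in containers_before:
            print('killing {}'.format(container['Names'][0]))
            docker.kill(container)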

Incidentally, until I started using py.test I'd always associated "fixtures" with Django "fixtures", which basically means serialised versions of model data, but py.test uses the word in its more general, and arguably more correct, sense: the state that the world has to be in for the test to run properly.

The pros & cons of the "integrated-tests-only" workflow

Pros:

  • Allowed us to experiment freely with an API that was new to us, and get feedback on whether it was really working
  • Allowed us to refactor code freely, extracting helper functions etc, without needing to rewrite mocky unit tests

Cons:

  • Being end-to-end tests, they ran much slower than unit tests would - on the order of seconds, and later, a minute or two, once we grew from three or four tests to a dozen or two. And, on top of that...

  • Being integrated tests, they're not designed to run on a development machine. Instead, each code change means pushing updated source up to the server using Ansible, restarting the control webapp, and then re-running the tests in an SSH session.

  • Because the tests call across a web API, the code being tested runs in a different process to the test code, meaning tracebacks aren't integrated into your test results. Instead, you have to tail a logfile, and make sure you have logging set up appropriately (there's a minimal sketch of that below).
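For what it's worth, a couple of lines of stdlib logging configuration at the top of the service (the log file path here is hypothetical) at least gives you a predictable logfile to tail while the tests run:

import logging

logging.basicConfig(
    filename='/var/log/postgres_service.log',  # hypothetical path
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(message)s',
)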

Conclusions and next steps

I can potentially imagine a time when we might start to see value in a layer of "real" unit tests... So far though, there's really no "business logic" that we could extract and write fast unit tests for. Or at least, there's no business logic that I can identify as such, and I'd be very pleased for someone to come along and school me about it.

On the other hand, I can definitely see a time where we might want to split out our tests for the web API from the tests for the Postgres and Docker stuff, and I can see value in a setup where a developer can run these tests locally rather than having to push code up to a dev box. Vagrant and VirtualBox might be one solution, but, honestly, installing Docker and Postgres on a dev box doesn't feel that onerous either, as long as we know we'll be testing on a "real" box in CI. Or at least, it doesn't feel onerous until we start talking about my poor laptop with its paltry 120GB SSD. No room here!

And the bonus of being able to see honest-to-God tracebacks in your test run output feels like it might be worth it.

But, overall, at this stage in development, given the almost total lack of "business logic" in our app, and given the fact that we were working with a new API and a new set of technologies -- I've found that doing without "real" unit tests has actually worked very well.


New release - a few new packages, some steps towards postgres, and forum post previews

Today's upgrade included a few new packages in the standard server image:

  • OpenSCAD
  • FreeCAD
  • inkscape
  • Pillow for Python 3.3 and 3.4
  • flask-bootstrap
  • gensim
  • textblob

We also improved the default "Unhandled Exception" page, which is shown when a user's web app allows an exception to bubble up to our part of the stack. We now include a slightly friendlier message, explaining to any of the user's users that there's an error, and explaining to the user where they can find their log files and look for debug info.

And in the background, we've deployed a bunch of infrastructure changes related to postgres support. We're getting there, slowly slowly!

Oh yes, and we've enabled dynamic previews in the forums, so you get an idea of how your Markdown will render as you type. It actually uses the same library as Stack Overflow, which is called pagedown. Hope you find 'em useful!


Slides for Giles Thomas' EuroPython talk now online

Our founder, Giles Thomas, gave a high-level introduction to our load-balancing system as a talk at this summer's EuroPython. There's a video up on PyVideo: An HTTP request's journey through a platform-as-a-service. And here are the slides [PDF].


PythonAnywhere is looking for a new developer

[Image: ancient Greek cult initiation]

Fancy helping to build the Python world's favourite PaaS (probably)? We're looking for a "junior" programmer with plenty of smarts to come and join the team, learn the stuff we do, and inject some new ideas...

Job spec

Here's some stuff you'd be doing:

  • Working in an Extreme Programming (XP) shop, pair programming and TDD all day, woo*.

  • Coding in Python lots and JavaScript a bit, and maybe other stuff too (OK there's like 5 lines of Lua code somewhere. But you could come along and try and convert us all to ClojureScript or something...)

  • Devops! Or what we take it to mean, which is that you deploy to and administer the servers as well as write code for them. Lots of Linux command-line goodness in SSH sessions, and automated deployment stuff.

  • Sexy CV-padding technologies! Like Docker, nginx, websockets, Django, copy-on-write filesystems, Ansible, GNU/Linux (Ubuntu), virtualbox, vagrant, continuous integration, AWS, redis, Postgres, and even Windows XP! (although we're phasing that last one out, to our great chagrin). Don't worry, you don't have to know any of these before you show up, you'll get to learn them all on the job...

  • Learn vim (if you want to) much faster than you would on your own by being forced to pair program with vim cultists all too happy to explain the abstruse keyboard shortcuts they're using...

  • Get involved in the nonprogramming aspects of the business too (we all do!), like customer support and marketing. Honestly, that can be fun too.

  • Work near enough to Silicon Roundabout that you can walk to the Hacker News meetups, but not so near that you're forced to overhear bad startup ideas being pitched in every coffee shop

* The pair programming thing is an unbelievably good deal for new developers btw; there's just no faster way of learning than sitting down next to someone who knows the ropes and being able to ask them questions all day, and they're not allowed to get annoyed.

Person spec

Here's the kind of person we'd like:

  • Smart -- academically or otherwise. A degree (CS or other) won't hurt, but it's not required either.
  • An enthusiastic programmer (but not necessarily Python and not necessarily professional)
  • A bit wacky (which doesn't mean you have to be an extrovert, just inclined to left-field ideas)
  • Willing to come and work here in sunny Clerkenwell
  • Willing to get paid less than you would at Google or a bank, in exchange for working in an exciting but relaxed tech-startup environment

Here's some stuff we don't care about:

  • Age
  • Male/Female/Black/White/Number of functioning limbs/Space alien.
  • Gaps in CVs
  • Current country of residence (as long as you're willing to move here promptly! You do need to be able to speak good English, and unfortunately we're too small to sponsor visas, so you need to already have the right to live + work in the UK)
  • Dress codes

Send us an email telling us why you'd like to work here, and a current CV, to jobs@pythonanywhere.com

Image source: wikimedia commons, Eleusinian Mysteries


New Release

Here's what's new in the latest version of PythonAnywhere that we released this morning:

  • User files are now on SSDs, so we're expecting to see some performance improvements.
  • We've implemented a fix for the issue that we believe has been causing the recent outages and database access issues.
  • We've improved the general security of the PythonAnywhere web site.
  • We've added some minor fixes to the user interface.

Outage Report for 15 July 2014

After a lengthy outage last night, we want to let you know about the events that led up to it and how we can improve our outage responses to reduce or eliminate downtime when things go wrong.

As usual with these things, there is no single cause to be blamed. It was a confluence of a number of things happening together:

  1. An AWS instance that was running a critical piece of infrastructure had a hardware failure
  2. We performed a non-standard deploy earlier in the day
  3. We took too long to realize the severity of the hardware failure

It is a fact of life that our machines run on physical hardware, however much the cloud in general, and AWS in particular, try to insulate us from that fact. Hardware fails, and we need to deal with it when it does. In fact, we believe that a large part of the long-term value of PythonAnywhere is that we deal with it so you don't have to.

Since our early days, we have been finding and eliminating single points of failure to increase the robustness of our service, but there are still a few left and we have plans to eliminate them, too. One of the remaining ones is the file server and that's the machine that suffered the hardware failure last night.

The purpose of the file server is to make your private file storage available to all of the web, console, and scheduled task servers. It does this by owning a set of Elastic Block Storage devices, arranged in a RAID cluster, and sharing them out over NFS. This means that we can easily upgrade the file server hardware, and simply move the data storage volumes over from the old hardware to the new.

Under normal operation, we have a backup server that has a live, constantly updated copy of the data from the file server, so in the event of either a file server outage, or a hardware problem with the file server's attached storage, we can switch across to using that instead. However, yesterday, we upgraded the storage space on the backup server and switched to SSDs. This meant that, instead of starting off with a hot backup, we had a period where the backup server disks were empty and were syncing with the file server. So we had no fallback when the file server died. Just to be completely clear -- the data storage volumes themselves were unaffected. But the file server that was connected to them, and which serves them out via NFS, crashed hard.

With all of that in mind, we decided to try to resurrect the file server. On the advice of AWS support, we tried to stop the machine and restart it so it would spin up on new hardware. The stop ended up taking an enormous amount of time and then restarting it took a long time, too. After trying several times and poring over boot logs, we determined that the boot disk of the instance had been corrupted by the hardware failure. Now the only path we could see to a working cluster was to create an entirely new one which could take over and use the storage disks from the dead file server. So we kicked off the build (which takes about 20min) and waited. After re-attaching the disks, we checked that they were intact and switched over to the new cluster.

Lessons and Responses

Long term

  • A file server hardware failure is a big deal for us; it takes everything down. Even under normal circumstances, switching across to the backup is a manual process and takes several minutes. And, as we saw, rare circumstances can make it significantly worse. We need to remove it as a single point of failure.

Short term

  • A new cluster may be necessary to prevent extended downtime like we had last night. So our first response to a file server failure must be to start spinning up a new cluster, so it's available if we need it. This should mean that, even in the worst case, downtime would be about 40 minutes from when we start working on the problem to having everything back up.
  • We need to ensure that all deploys (even ones where we're changing the storage on either the file or backup servers) start with the backup server pre-seeded with data so the initial sync can be completed quickly.

We have learned important lessons from this outage and we'll be using them to improve our service. We would like to extend a big thank you and a grovelling apology to all our loyal customers who were extremely patient with us.


Outage report: lengthy upgrade this morning

This morning we upgraded PythonAnywhere, and the upgrade process took much longer than expected. Here's what happened.

What we planned to happen

The main visible feature in this new version was Python 3.4. But a secondary reason for doing it was to move from one Amazon "Availability Zone" to another. PythonAnywhere runs on Amazon Web Services, and availability zones are essentially different Amazon data centers.

We needed to move from the zone us-east-1a to us-east-1c. This is because the 1a zone does not support Amazon's latest generation of servers, called m3. m3 instances are faster than the m1 instances we currently have, and also have local SSD storage. We expect that moving from m1 to m3 instances will make PythonAnywhere -- our site, and sites that are hosted with us -- significantly faster for everyone.

Moving from one availability zone to another is difficult for us, because we use EBS -- essentially network-attached storage volumes -- to hold our customers' data. EBS volumes exist inside a specific availability zone, and machines in one zone can't connect to volumes in another. So, we worked out some clever tricks to move the data across beforehand.

We use a tool called DRBD to keep data safe. DRBD basically allows us to associate a backup server with each of our file servers. Every time you write to your private file storage in PythonAnywhere, it's written to a file server, which stores it on an EBS volume and then also sends the data over to its associated backup server, which writes it to another EBS volume. This means that if the EBS volume on the file server goes wrong (possible, but unlikely) we always have a backup to switch over to.

So, in our previous release of PythonAnywhere last month, we moved all of the backup servers to us-east-1c. Over the course of a few hours after that move, all of our customers' data was copied over to 1c, and it was then kept in sync on an ongoing basis over the following weeks.

What actually happened

When we pushed our update today, we essentially started a new PythonAnywhere cluster in 1c, but the EBS volumes that were previously attached to the old cluster's backup servers were attached to the new cluster's file servers (after making sure that all pending updates had synced across). Because all updates had been replicated from the old file servers in 1a to these disks in 1c, this meant that we'd transparently migrated everything from 1a to 1c with minimal downtime.

That was the theory. And as far as it went, it worked flawlessly.

But we'd forgotten one thing. Not all customer data is on file servers that are backed up this way. The Dropbox storage, and storage for web app logs, is stored in a different way. So while we'd migrated everyone's normal files -- the stuff in /home/USERNAME, /tmp, and so on -- we'd not migrated the logs or the Dropbox shares. These were stuck in 1a, and we needed to move them to 1c so that they could be attached to the new servers there.

This was a problem. Without the migration trick using DRBD, the best way to move an EBS volume from one availability zone to another is to take a "snapshot" of it (which creates a copy that isn't associated with any specific zone) and then create a fresh volume from the snapshot in the appropriate zone. This is not a quick process. And the problem only became apparent to us when we were committed enough to the move to us-east-1c that undoing the migration would have been risky and slow.
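For the curious, the snapshot-and-recreate dance looks roughly like this (a sketch using the boto3 library with made-up IDs, not the exact steps we ran):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# take a snapshot of the volume that's stuck in us-east-1a
# (this is the slow part -- the data is copied to S3 behind the scenes)
snapshot = ec2.create_snapshot(
    VolumeId='vol-0123456789abcdef0',  # hypothetical volume ID
    Description='migrate log storage to us-east-1c',
)
ec2.get_waiter('snapshot_completed').wait(SnapshotIds=[snapshot['SnapshotId']])

# snapshots aren't tied to an availability zone, so we can now create a
# brand-new volume from the snapshot in us-east-1c and attach it to the new servers
new_volume = ec2.create_volume(
    SnapshotId=snapshot['SnapshotId'],
    AvailabilityZone='us-east-1c',
)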

So, 25 minutes into our upgrade, we started the snapshot process. We hoped it would be quick.

After the snapshots of the Dropbox and log storage data had been running for 10 minutes, we noticed something worrying. The log storage snapshot was running at a reasonable speed, but the Dropbox storage snapshot hadn't even reached 1% yet. This is when we started talking to Amazon's tech support team. Unfortunately, after much discussion with them, it was determined that there was essentially nothing that could be done to speed up either of the snapshots.

After discussion internally we came to the conclusion that while the system couldn't run safely without the log storage having been migrated, we could run it without the Dropbox storage. We've deprecated Dropbox support recently due to problems with our connection to Dropbox themselves, so we don't think anyone's relying on it. So, we waited until the log storage snapshot completed (which took about 90 minutes), created a new EBS volume in us-east-1c, and brought the system back up.

Where we are now

  • PythonAnywhere is up and running in the us-east-1c availability zone, and we'll be able to start testing higher-speed m3 servers next week.
  • All of our customers' normal file data is in us-east-1c (with the old ones in 1a kept as a third-level backup for the time being)
  • All of the log data is also stored in both 1c and 1a, with the 1c copy live.
  • The Dropbox storage is still in us-east-1a and is still snapshotting (15% done as of this post). Once the snapshot is complete, we'll create a new volume and attach it to the current instance, and start Dropbox synchronisation.

Our apologies for the outage.

