According to Nectar:
A 24-hour outage is required to relocate and upgrade the cloud storage infrastructure at the University of Melbourne. Only instances in the melbourne-qh2 availability zone will be affected. The upgrade will bring further stability and increased performance to this availability zone.
Unfortunately, we’ll have a number of impacted machines: the FAIMS mobile app sandbox, this webserver, our wiki, testrepo, and authentication server will all be down.
Downtime will run from close of business on the 28th until midday on the 30th.
Critically, if we’re developing modules for you, you will not be able to get modules-in-progress from sandbox during this downtime.
Let us know if that’s a problem and we can find a workaround.
A tantalizing hint of what’s to come from our friends at Solutions First. They did some fantastic work on the Debian Installer for the FAIMS server, are huge contributors to the open source community, and are now doing some (super secret!) hardware work for us.
Would you like to take one of these into the field?
A quick public service announcement: use the massive impact of the Heartbleed bug as impetus to:
- change all your passwords (at least those used in the last few weeks) to distinct, very strong, random passwords; and
- use a password manager like LastPass or KeePass to manage them, secured with a very strong (and never reused) master password and two-factor authentication.
Due to the recent Heartbleed bug (good summary of the impact on half a million web sites here), we’ve had to regenerate our SSL keys and patch our servers, just like the rest of the internet. (Quick summary: it’s a very, very no good, just awful bug impacting SSL security.)
Happily, we’ve gone through all the necessary heartache to patch our primary web server, our wiki, and our infrastructure servers, and we’re currently patching our repository. We have also regenerated our SSL private keys and certificates, mitigating the bug’s impact. It’s still a very, very scary bug, though, and thousands of major websites remain unpatched.
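For readers patching their own servers, the remediation we describe boils down to two steps: confirm your OpenSSL build is no longer one of the vulnerable 1.0.1–1.0.1f releases (1.0.1g carries the fix), then regenerate your private keys and certificates, since old keys may have been exposed. A minimal sketch using the standard `openssl` CLI — `example.com` and the output file names are placeholders, not our actual setup:

```shell
# Step 1: check which OpenSSL version is installed.
# Vulnerable: 1.0.1 through 1.0.1f. Fixed: 1.0.1g and later
# (distros also shipped backported fixes to older 1.0.1 packages).
openssl version

# Step 2: generate a fresh 2048-bit RSA private key and a certificate
# signing request (CSR) to send to your certificate authority.
# -nodes leaves the key unencrypted so the web server can read it at boot;
# -subj fills in the subject non-interactively.
openssl req -new -newkey rsa:2048 -nodes \
  -subj "/CN=example.com" \
  -keyout example.key -out example.csr
```

Once the CA issues the new certificate, install it alongside the new key, restart the web server, and revoke the old certificate.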