At approximately 19:55 this evening, our main link to Amsterdam began showing high latency and packet loss. This is considerably more disruptive than a complete outage, as the auto-failover mechanisms don’t kick in.
All data is currently manually routed over the backup link. A ticket has been raised with the link supplier.
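To illustrate why a degraded link can slip past auto-failover: a health check that only tests reachability never trips on a link that is up but lossy, whereas a check with latency and loss thresholds does. This is a minimal hypothetical sketch, not our actual failover logic; the function names and threshold values are illustrative assumptions.

```python
# Hypothetical sketch of two failover checks. A degraded link (high latency,
# packet loss, but still reachable) evades the naive check entirely.

def naive_should_failover(reachable: bool) -> bool:
    # Trips only on a complete outage.
    return not reachable

def threshold_should_failover(reachable: bool, latency_ms: float,
                              loss_pct: float,
                              max_latency_ms: float = 100.0,
                              max_loss_pct: float = 1.0) -> bool:
    # Also trips when the link is up but degraded past the thresholds.
    # Threshold values here are arbitrary examples.
    return (not reachable
            or latency_ms > max_latency_ms
            or loss_pct > max_loss_pct)

# A link showing 450 ms latency and 12% loss, but still "up":
print(naive_should_failover(True))                   # → False (no failover)
print(threshold_should_failover(True, 450.0, 12.0))  # → True (failover)
```

In practice such thresholds need smoothing (e.g. a rolling window) to avoid flapping between links on transient spikes.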
Update at 21:22 – our monitoring is showing the main link to be behaving normally once again. We will leave the manual route override in place until we have heard from the supplier what has been done.
Update at 22:05 – An attempt to put the main link back in service was not successful, suggesting the link behaviour is changing with traffic. We remain on the backup link for now. Further investigations will continue.
We apologise for the outage, and also for the slow posting of the initial update – investigation of the problem took priority.
Update at 10:05 on 25/05/16 – The main link was successfully re-enabled at 09:30 this morning and has been behaving normally since.
The fault is believed to be with the supplier who routes traffic between our datacentre and the science park in Amsterdam. We have requested an explanation and will update this status with anything we receive.