All Messages



We have received some reports of connectivity issues to SaaS environments in EMEA. We are currently investigating.

10:10am GMT - We have identified an issue with the Janet Network (JISC) that may be the cause of the connectivity issues some customers are experiencing.

10:45am GMT - The affected customers are reporting that connectivity has been restored.

SaaS-EU Update 06 Mar 2018
10:02am UTC

Possible Tier-2 Internet provider outage / slowdown

We are investigating reports of connectivity problems and/or slow response time from various customers. At this point, the various internet provider outage detection sites are reporting problems in New York, Boston, Chicago, and Dallas. As details become available, we will post them.

SaaS-Atlanta Info 27 Feb 2018
5:51pm UTC

Brief downtime on BLUEcloud environments.

We had an emergency issue that required us to bring down BLUEcloud environments and bring them back up. Downtime should be about 15 minutes per environment. The environments will go down in a staggered fashion (not all at the same time). If there are any questions please contact us at

Thank you.

SaaS-Atlanta, SaaS-EU, SaaS-Melbourne Info 13 Feb 2018
1:37am UTC

Resolved - Researching alerts for Plasma Enterprise cluster

The following issue is resolved.

Update 1: Our current focus is on potential 3rd party vendor connection issues. Our customer support department is continuing to investigate. 

We are currently investigating monitoring alerts for a single Enterprise cluster in Atlanta.

SaaS-Atlanta Info 08 Feb 2018
7:07pm UTC

RESOLVED | Investigating reports of Horizon Citrix Problems

We believe the problem with users receiving "unable to launch as the application is not currently available" is now resolved. A back-end Citrix connection issue interrupted Citrix's ability to connect internally; a reboot of the affected servers resolved the issue.

If you are continuing to experience problems, please try logging out of and back in to the Citrix Receiver. If problems persist, please let Customer Support know.

We are currently investigating several reports of Horizon connectivity problems in Atlanta.  Updates will be posted here.


SaaS-Atlanta Alert 05 Feb 2018
1:13pm UTC

North America BLUEcloud Analytics

We are seeing reports of sluggish behavior in our North America BLUEcloud Analytics cluster.

One of the servers in the cluster is in the middle of a service restart, which is causing degradation in the remaining servers in the cluster. We will post an update when the services are fully operational.

09:59 MT - Update:  The affected server in the cluster is now fully back online.  Service performance should return to normal.

SaaS-Atlanta Info 01 Feb 2018
4:25pm UTC

Symphony Service Disruption - Galileo / Giotto

For customers on Giotto and Galileo, we have identified a fault on the server that we have been unable to resolve. This will require us to migrate your Symphony system during the day today. We will be sending periodic updates by email and posting to this bulletin. During the outage, access to e-library/iLink/iBistro/WebCat and Symphony will be unavailable. If you have Enterprise / Portfolio, the My Account and item-specific information will also be unavailable. If you have any questions, please contact us or call 1-800-284-3969.


Symphony Service Disruption - Giotto

Services are now up and running for customers on the Giotto Symphony server. An email notification has also been sent out to this effect.


Symphony Service Disruption - Galileo

Services are now up and running for customers on the Galileo Symphony server. An email notification has also been sent out to this effect.

SaaS-Atlanta Info 01 Feb 2018
2:50pm UTC

Firewall Maintenance

We will be updating the SaaS firewall to the current firmware release. During this maintenance period, connections to all UK SaaS systems will be disrupted.

SaaS-EU Maintenance 22 Jan 2018
10:00pm UTC

BLUEcloud Service Disruption

We are currently investigating a problem with the BLUEcloud environment in the APAC region.  We are working to restore service as quickly as possible. Updates to be posted here.



Issues were found while patching the environment that led to unexpectedly extended downtime. The issues were resolved and the patching completed successfully. All services have been restored.

SaaS-Melbourne Update 16 Jan 2018
9:31pm UTC

North America - BLUEcloud Analytics

Update 12:10pm (MST) - The issue has been resolved.

We are currently investigating an issue with BLUEcloud Analytics in North America where it may not be possible to log in or access reports.

SaaS-Atlanta Info 03 Jan 2018
6:09pm UTC

EOS.Web Login, Reporting, and Email Issues

Update - the issue preventing some customers from being able to log into EOS.Web, and other customers from using email or reporting capabilities has been fully resolved.  All customers should now have normal functionality.

We are experiencing issues at the data center preventing some customers from being able to log into their EOS.Web systems. Others are able to log in and use EOS.Web but are not able to use the email capabilities or run certain reports. Our SaaS team is working to resolve all issues as quickly as possible; resolution is expected later today. We will keep you updated here.

SaaS-Atlanta Alert 03 Jan 2018
4:04pm UTC

Widespread outage in Atlanta

(0609 MT) We are currently investigating several critical alerts for many services in the Atlanta facility. This may be impacting access to EOS.Web, Symphony, Horizon, Enterprise and other ancillary products. Updates will be posted here.

Update 1 (0635 MT) We have multiple engineers investigating. From our initial assessment it appears to be SAN / fabric / storage related. We will continue to post updates here.

Update 2 (0722 MT) We are continuing to focus on storage components, specifically the SAN switches. Internal escalation, to include members of management, has taken place.

Update 3 (0752 MT) At this time we are working with our colocation partner to review lights on various devices within our space. We are seeing two switch ports offline, which is impacting the hosts' ability to access their respective storage devices.

Update 4 (0912 MT) We are currently rebooting one of the suspect SAN switches. We believe this will reset the SAN links, allowing host-to-storage communication.

Update 5 (1015 MT) At this time we are testing areas of the infrastructure prior to bringing hosts and services online.

Update 6 (1129 MT) Testing is near completion and preparations for service restoration are underway.

Update 7 (1220 MT) Individual service restorations are underway. Anticipate all services restored by 2PM (MT).

Update 8 (1321 MT) Roughly 50% of systems are restored to service, including all EOS.Web sites.

Update 9 (1425 MT) Restorations continue - about 80% complete.

Update 10 (1530 MT) A handful of restorations remain.

Update 11 (1635 MT) Addressing individual outage-related issues as reported by customers. We continue to have staff working this issue until all services are restored and alerts are cleared.

Update 12 (1755 MT) Final hourly update - Most monitoring is now green. SaaS teams are working through remaining details and some test instances. Anticipate "all clear" by 1900 (MT). Please contact SirsiDynix Customer Support with any issues - use the Critical Care number for down systems. Staff are online through the holidays.

SaaS-Atlanta Alert 23 Dec 2017
1:09pm UTC

North America - BLUEcloud Analytics

Update 11:00am - The issue has been resolved.

We are currently investigating an issue with BLUEcloud Analytics in North America where it may not be possible to log in or access reports.

SaaS-Atlanta Info 19 Dec 2017
3:16pm UTC

COMPLETE - Unscheduled HIP Server Reboot Required


We need to perform an unscheduled reboot of a single HIP server in Atlanta.  We hope to have service returned to normal very shortly.

SaaS-Atlanta Info 03 Dec 2017
5:46pm UTC

North America - BLUEcloud Analytics

A small portion of customers using BLUEcloud Analytics in North America were experiencing issues with connecting to their projects this morning.  The issue was again caused by a deadlock on the database containing the metadata.  

SaaS-Atlanta Info 15 Nov 2017
3:47pm UTC

North America - BLUEcloud Analytics

A small portion of customers using BLUEcloud Analytics in North America are experiencing issues with connecting to their projects.  We are working on the problem and will provide updates as soon as they become available.


UPDATE: Our investigation discovered a database deadlock which was preventing MicroStrategy Intelligence servers from connecting to the metadata. This failure to connect also sent the Intelligence servers into a stop/start loop. We have cleared the lock and the MicroStrategy Intelligence servers are starting up normally.

SaaS-Atlanta Info 14 Nov 2017
2:41pm UTC

RESOLVED - EOS.Web - Investigating Latency

The following event was resolved

  • We are currently proactively investigating latency alarms for select EOS.Web VMs. At this time we do not consider this a widespread problem. Updates to be posted here.
  • We believe we have identified the offending VM. We have decided to quickly shut down the VM and run a back-end VM-level procedure to clear the latency. This is not expected to take more than 5 minutes.
  • The back-end procedure has completed and the VM in question has been booted. Service for this specific set of customers should be restored shortly.
  • Monitoring service checks for this specific set of customers are clearing.
  • We have identified another VM and are taking the same action as described above. The next update will be posted when services for this single VM have been restored.
  • Monitoring service checks for the 2nd specific set of customers are green.
SaaS-Atlanta Info 09 Nov 2017
12:36pm UTC

RESOLVED - Odd Enterprise latency - hit and miss

We received the following update from our colocation provider on November 7, 2017 at 17:23 CDT. This will serve as the final update.

On Monday, November 6th, the QTS Networking Team engaged our Upstream Provider regarding an internet connectivity issue. QTS has been advised that this issue is resolved. At this time, we are awaiting a Reason for Outage (RFO) from our Upstream Provider.

QTS's upstream provider had a configuration issue that impacted IP services. They have reverted a policy change to restore services to a stable state. QTS will monitor for stability and route traffic back to normal during maintenance hours.

We received the following from our colocation partner just a few moments ago.

  • QTS is aware of a connectivity issue affecting internet traffic. At this time, QTS has routed traffic away from the affected internet provider so that QTS customers would not be impacted. QTS is closely monitoring this situation and will provide more information as it becomes available.

Our colocation partner alerted us to a known problem with connections into Atlanta.  They are looking into the situation and contacting upstream providers.  Once we receive word from them we will update this page. 

Our colocation partner has confirmed they are receiving many calls about a potential upstream provider problem. They are currently looking into the situation.

We are opening a ticket with our colocation partner to see if there are any issues upstream causing all the problems with Enterprise across several clusters as well as some Symphony customers.

We are currently investigating oddities in Atlanta where some Enterprise and Symphony connections are going up and down. There are no obvious infrastructure issues that we have been able to identify. We have many people working on this and hope to have more details soon.

SaaS-Atlanta Info 07 Nov 2017
3:14am UTC

RESOLVED - EOS.Web Cluster has extreme performance degradation

The following service event is now resolved and all customer systems should be operating normally.  We will be performing an analysis of the service disruption in the coming days.  Once complete, a root cause summary will be provided to EOS Client Services.  We hope to have the summary finalized by the middle part of next week.  

Our monitoring has alerted us to performance degradation on one of our EOS.Web clusters.  The SaaS team is working to resolve this as quickly as possible.  Service may temporarily be out for multiple clusters during the resolution process.  Updates to be posted here.

SaaS-Atlanta Info 02 Nov 2017
3:29pm UTC

RESOLVED - Investigating Enterprise Performance Issues

The following event has been resolved.

Our monitoring has alerted us to performance degradation on one of our Enterprise / Portfolio clusters. Updates to be posted here.

  • 09:10 CDT - Decision made to shut down the database. Connections to the database are much higher than normal, which is causing massive system load. The hope is that the database shutdown and restart will kill off whatever process is causing site instability.
  • 10:15 CDT - After 1 hour the database finally started. A decision was made to shut down the database and unmount the entire cluster.
  • 11:00 CDT - Continuing to shut down cluster-related application servers. We believe that we have identified the source of latency but want to make sure we take the Enterprise services down cleanly.
  • 11:05 CDT - In short, this event was related to controller performance. Though not critically high, the load was high enough to cause I/O performance problems on the SAN. After we moved the applicable LUN to another controller port, the average request service times at the disk level improved dramatically.
  • 11:08 CDT - We are in the process of final testing and hope to have service restored within 30 minutes.
  • 11:13 CDT - Restarting app servers one at a time. Some customers should see service restoration now.
  • 11:34 CDT - All processes have been started for the Enterprise cluster. It does take some time for each to become fully accessible. The disk service times are much improved over those our staff were seeing when this event initially began this morning.
  • 11:50 CDT - Our monitoring has cleared all customer service checks. Environment stabilized. Email forthcoming to affected customers.
SaaS-Atlanta Info 31 Oct 2017
1:56pm UTC