TSD Operational Log - Page 9

Published Mar. 2, 2020 3:10 PM

The machine exporting the /cluster file system crashed, causing hanging mounts on machines that mount /cluster.

We're working on solving the issue.

--
Best regards,
TSD

Published Feb. 3, 2020 9:58 AM

We are working to fix an issue affecting SSH between project VMs. While the issue persists, you may have trouble accessing your Colossus submit host.

Published Jan. 31, 2020 1:44 PM

TSD users cannot log in. We are investigating the cause and working on a fix.

Published Jan. 20, 2020 8:39 AM

We are having trouble with the Colossus NFS export, and are working to solve it.

Published Jan. 15, 2020 9:20 AM

UPDATE: Maintenance is done, and all exports of /cluster should be back to normal as of 12:58, 15-01-2020.

Because the file system has crashed twice so far this week, we will be taking it down again today at 12:00, 15-01-2020 for quick maintenance.

As a result, /cluster will be unavailable on submit hosts and other project VMs that mount it. HPC jobs running on Colossus itself will not be affected.

Published Jan. 13, 2020 3:31 PM

The machine responsible for making /cluster on Colossus available to the project machines in TSD crashed at 15:15 13-01-2020.

The services are now back up and running as expected.
For most projects this should not impact regular operations; however, it could cause problems for projects that frequently access /cluster from their virtual machines.

We are currently checking all projects and working on getting everything back in order for the projects still affected by the outage.
 

-- 
Best regards,
TSD

Published Jan. 6, 2020 11:30 AM

SPSS is displaying a license-expiry warning. Please ignore this message; the problem will be solved soon.

Published Jan. 3, 2020 1:27 PM

We are working to solve the issue.

Published Dec. 9, 2019 3:19 PM

We had a short network outage due to firewall updates. The change has been reverted, and everything should be operational again.

Published Nov. 27, 2019 12:33 PM

We are experiencing technical issues with our self-service portal; as a result, users currently cannot change their passwords. We are investigating the cause and working to fix the issue as soon as possible.

Published Nov. 7, 2019 12:11 PM

The /cluster file system on Colossus crashed around 11:00 today. It was restarted at 11:30. We are investigating the reason for the crash.

This probably affected running jobs on Colossus, so you should check your jobs.

It also affected the NFS-mounted /cluster file system on the Linux VMs. The mounts should be fine now, but please report any hanging mounts.
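If you want to check for a hanging mount yourself, one possible approach (a minimal sketch; the path and the 5-second timeout are illustrative, and `timeout` and `stat` are the GNU coreutils tools) is to bound a `stat` call with a timeout, since a hung NFS mount makes file operations block indefinitely:

```shell
#!/bin/sh
# Report whether a mount point responds within a few seconds.
# A hanging NFS mount makes stat block, so we bound the call with `timeout`.
check_mount() {
    if timeout 5 stat -t -- "$1" >/dev/null 2>&1; then
        echo "$1 OK"
    else
        echo "$1 not responding"
    fi
}

check_mount /cluster   # replace with the mount point you want to test
```

Note that a nonexistent path also reports "not responding"; the check only distinguishes "responds quickly" from "does not", which is usually enough to decide whether to contact us.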

Published Oct. 29, 2019 9:13 AM

UPDATE: A few remaining projects need help with printing and GPUs, but the rest of the work is completed.

We are busy with Windows maintenance, which will cause interruptions to login sessions throughout the day.

Published Oct. 10, 2019 12:48 PM

We are experiencing some technical issues with tsd-fx03, and the service is currently unavailable as a result.

We are investigating the cause and working on fixing the issue as soon as possible.

Published Oct. 8, 2019 2:55 PM

Difi has notified us that they are experiencing issues with BankID mobile for Telenor customers, which may affect self-service login for some TSD users.

Published Oct. 8, 2019 9:37 AM

Dear TSD User

We will perform some internal network maintenance at 10:00. We do not expect any interruptions to services, but please let us know if you experience any issues.

 

UPDATE:

The problem should be resolved.

 

UPDATE:

We are experiencing problems with access to view.tsd.usit.no. We are working on resolving this.

Published Oct. 3, 2019 9:33 AM

UPDATES ON ONGOING MAINTENANCE:
09:30: Unmounting /cluster on all machines.

11:00: New machine is up, currently running tests.

12:15: Starting up services on project machines to allow access to /cluster again.

12:50: Colossus services and /cluster exports running as normal, now with 10 times the bandwidth.

Published Oct. 2, 2019 4:10 PM

The NFS exporter for Colossus crashed again, just before our planned maintenance and switch to the new machine tomorrow morning.

We have restarted the services and will promptly restart the machines that are now hanging as a result.

Our apologies for the inconvenience. 

-- 
Best regards,
The TSD Team

Published Oct. 1, 2019 1:32 PM

UPDATE, 09:30: We have started unmounting the NFS shares of /cluster on all machines.

We have solved the problems we encountered on Monday and are now ready to replace the NFS exporter.

The work will start on Thursday 3rd October at 09:00 CET. We expect to be finished by the end of the day, possibly earlier.

During the maintenance, we have to unmount /cluster on all virtual machines (VMs) that mount it. This means that the /cluster/projects/pXX areas will be unavailable on the VMs, and it will not be possible to use the module load system for software on the VMs. Some VMs might also require a reboot.

Jobs on Colossus will continue to run as normal, but it will not be possible to submit new jobs during the stop.

Do not run jobs on VMs that need data from /cluster or software modules. If you do so, we will have to kill them to unmount the /cluster area. Also, if the VM needs to be rebooted, all ru...

Published Sep. 27, 2019 7:31 AM

We are currently performing maintenance on the self-service and data portals.

Published Sep. 26, 2019 9:41 AM

UPDATE: Unfortunately, we encountered some unforeseen problems and were not able to switch to the new NFS exporter today. The system is now back in normal production using the old exporter, and you can continue to work as normal. We hope to solve the problems quickly and will announce a new date for replacing the NFS exporter soon.

We are sorry for the inconvenience.

 

We will replace the existing NFS exporter on Colossus starting on Monday, 30th September at 09:00 CET, and continue working throughout the day.

We will stop the NFS export by unmounting it on all virtual machines; some may also require a reboot.

You will not be able to run jobs on VMs that need data from /cluster or software modules. If we have to reboot a VM to unmount /cluster, its running jobs will also be killed.

Please save your data before the maintenance window, and follow our Operational Log for updates.

The...

Published Sep. 17, 2019 12:54 PM

We are experiencing issues with some services, which may leave users unable to log in to TSD through the VMware Horizon Client with the error "all available desktop sources are currently busy". We are investigating the cause and working on a fix.

Published Sep. 16, 2019 11:35 AM

Dear TSD User

Due to an issue with part of the login infrastructure that is preventing some projects from logging in, we need to perform unplanned maintenance on the view-ous login gateway. Login sessions for p22, p149, p191, p192, p321, and p410 will be suspended while we reboot. Apologies for the inconvenience.

Published Aug. 27, 2019 8:41 AM

We are experiencing issues with some services, which may leave users unable to log in to TSD through the VMware Horizon Client with the error "all available desktop sources are currently busy". We are investigating the cause and working on a fix.