TSD Operational Log - Page 6
Colossus will have downtime today, 2021-06-29, from 08:00 to 16:00, to upgrade the Slurm job scheduler software.
We have set a reservation on the cluster so that jobs which request running time during the maintenance window will not be scheduled from now on. These jobs will remain pending until after the downtime, when they will be rescheduled automatically. The submit hosts will be accessible, but cannot be used to submit jobs to Colossus.
During the downtime, we advise you to keep an eye on this operational log for any updates.
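For illustration only: the following reflects standard Slurm reservation behaviour, not a TSD-specific promise, and pNN and myjob.sh are placeholders. A job whose requested walltime ends before the window opens can still be considered for scheduling:

    # Submitted early in the morning, a short job can still start before 08:00:
    sbatch --time=01:00:00 --account=pNN myjob.sh
    # A job whose walltime would overlap 08:00-16:00 remains pending and is
    # rescheduled automatically after the downtime.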
The group management pages in Selfservice have not worked properly since the maintenance earlier this week.
There may be cases where users have not been added to groups.
Users are experiencing random job submission failures with an error message similar to:
"sbatch: error: Batch job submission failed: Socket timed out on send/recv operation."
We're actively working on a solution.
Selfservice is down for planned maintenance on June 15.
Update June 15, 18:28: To ensure that all functions are working as normal, we will keep Selfservice in maintenance mode until tomorrow.
Update June 16, 10:42: We will re-enable Selfservice around 12:00 today.
Update June 17, 11:15: Most parts of selfservice should work as normal. Please contact tsd-drift@usit.uio.no if you experience any problems.
Projects with a project id (pXX) greater than p1575 may be experiencing problems logging in to Windows hosts. Linux hosts are not affected. We're actively working on a solution.
Hosts mounting /cluster may be experiencing NFS hangs at the moment. We're actively working on a solution.
We are investigating some reported login problems with data import and export. We will come back with an update once we have gathered more info.
Update 15:37: The problems have been resolved.
Currently, we are experiencing problems with managing the groups of a TSD-project via TSD Selfservice when logging in with ID-Porten (MinID, BankID, Buypass, Commfides). As a temporary workaround, please log in with TSD Credentials to manage the groups in your TSD-project.
Update May 20: The problems have been resolved.
Some projects experienced /cluster NFS hangs on April 25th between 19:00 and 19:45 and April 26th between 06:30 and 08:00.
We do not expect there to be any interruptions.
As announced earlier this year (around the end of January), we were to introduce license costs for Windows in TSD from May 1st. TSD now reports the use of Microsoft products in TSD to Microsoft on a monthly basis, based on the number of people with actual access. Due to some minor technical challenges, we have chosen to postpone the billing until June 1st.
By June 1st, the project leader in TSD will be able to control, via the self-service portal, who has access to the various services by adding people to and removing them from groups. We will publish the procedure for managing the enrolment of the project's members at this link:
Login to TSD is currently unavailable.
We are working to solve the problem as quickly as possible.
Our apologies for the inconvenience.
--
The TSD Team
All RHEL6 ThinLinc (pxx-tl01-l) machines have now been shut down, as mentioned in the email sent in February, with a few exceptions.
A new RHEL8 machine has also been made available to every project; it can be accessed at https://view.tsd.usit.no
Read: /english/services/it/research/sensitive-data/use-tsd/login/index.html#toc8
If you for any reason need access to your RHEL6 machine for a limited time, please contact us: /english/services/it/research/sensitive-data/contact/index.html
Update 20:00 April 27: a few submit and login hosts that mount /cluster are experiencing new NFS hangs. Some hosts have been rebooted.
There were NFS hangs on submit and login nodes that mount /cluster.
We are performing network maintenance on Thursday 29/4/2021.
We do not expect there to be any interruptions.
The cost command, used to query cpu quota usage on Colossus, is currently not working for projects without Sigma2 quota.
Update: the cost command now displays usage stats for Sigma2 quota, and will display NA and an info message for projects without Sigma2 quota.
Starting from April 1st, we will introduce the following changes in the distribution of Colossus quotas:
- We will reduce the Sigma2 pool of resources to 1536 cpu cores, with no gpu nodes. Only TSD-projects with cpu hour quota from Sigma2 can use this pool.
- We will move the removed resources from the Sigma2 pool to a dedicated resource, called “tsd”, consisting of 288 cpu cores on ordinary compute nodes, plus 128 cpu cores and 4 gpu cards on two gpu nodes.
- All TSD-projects can use the “tsd” resource by submitting jobs with "--account=pNN_tsd" instead of "--account=pNN". Please check this document for the complete procedure (a minimal example follows after this list):
/english/services/it/research/sensitive-data/use-tsd/hpc/dedicated-resources.html
- There will be a limit of 200,000 cpu hours on the “tsd” resource, as it is a limited resource. However, we may increase this limit in the future.
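As an illustration only, not an official template: the job name, time, memory, and workload below are placeholders, and pNN stands for your project number. A minimal Slurm job script targeting the “tsd” resource could look like this:

    #!/bin/bash
    # Use pNN_tsd (your project number plus "_tsd") instead of pNN
    # to run on the dedicated “tsd” resource.
    #SBATCH --account=pNN_tsd
    #SBATCH --job-name=example
    #SBATCH --time=01:00:00
    #SBATCH --mem-per-cpu=2G
    #SBATCH --ntasks=1

    # Placeholder workload:
    ./my_analysis

Submit it with sbatch as usual; the cpu hours should then be drawn from the “tsd” limit rather than from a Sigma2 quota.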
Login through VMware was unavailable for some hours last evening.
Update 21:20: Issue resolved.
The TSD Team
ID-Porten is having technical problems. When they are resolved, everything will continue to work normally.
We have been experiencing NFS hangs on many Linux hosts mounting /cluster since 05:55 this morning.
This is also affecting /cluster on the Colossus compute nodes. The majority of compute nodes have been rebooted, which may have affected running jobs.
Update 12:00: The submit hosts and Colossus are currently unavailable.
Update 14:00: The issue has been resolved, and we're rebooting the submit hosts now.
Due to an outage, login through VMware is currently unavailable.
You should, however, still be able to log in through https://login.tl.tsd.usit.no if your project has a Linux VM.
We are working on getting things back to normal as quickly as possible.
--
The TSD Team
The storage system for project storage (not Colossus) is having performance issues. This is causing instability in file import and export, and some slowness on virtual machines. We are debugging and fixing this.
Form registration in the Consent Portal is temporarily unavailable due to a service modification. This does not mean that consent is no longer acquired; consents will be delivered to your project as normal. Already registered forms will continue to be exposed to consenters on the external portal. We expect to resume form registration in a couple of days.