TSD Operational Log - Page 19
Dear TSD-user,
the update of the VMware security server has been successfully completed. All the Windows VMs are now accessible again via the PCoIP protocol.
Enjoy TSD!
Regards,
Francesca
Dear TSD-user,
today, 30/11-2015, between 13:00 and 15:00 CET we will upgrade the VMware View security server. During the upgrade, login to the Windows machines in TSD via the PCoIP protocol will not be available. Login to the Windows servers will therefore only be possible via an ssh+RDP connection (http://www.uio.no/tjenester/it/forskning/sensitiv/hjelp/brukermanual/ssh-og-rdp/index.html). However, be aware that the ssh+RDP connection will only work if you did "Log off" from your last session opened with PCoIP.
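For reference, a minimal sketch of the ssh+RDP workaround; the hostnames and username below are placeholders, not actual TSD names, so see the manual linked above for the authoritative instructions:

    # Forward the local RDP port through an ssh tunnel to the Windows VM
    # (<tsd-login-host>, <windows-vm> and <username> are placeholders)
    ssh -L 3389:<windows-vm>:3389 <username>@<tsd-login-host>

    # Then point a local RDP client at the forwarded port, e.g. with FreeRDP:
    xfreerdp /u:<username> /v:localhost:3389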
The Windows and Linux VMs will not be affected by the upgrade, and the processes running on the machines will keep running. Jobs on Colossus will not be affected.
Regards,
Francesca
Dear Colossus User
the maintenance has been successfully completed and the cluster is up and running. The hugemem nodes still need to come up and will probably not be available until Monday next week. However, all the jobs that were queued during the downtime are already running.
Happy computing!
Francesca
Today (19/11) from 8:00 am Colossus will be stopped for maintenance. The outage is expected to last two days.
Francesca
Dear TSD-user,
the maintenance stop of Colossus was successfully completed and the cluster is back in production.
As previously announced, there will be one more downtime, on 19 Nov 2015 from 8:00 am. The downtime will last at most two days. This second downtime is needed to complete the work initiated now, namely setting up a new configuration that will significantly improve the I/O in the cluster.
Please note that if you schedule a job with a running time longer than 14 days, the job will not start before the end of the next downtime.
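In practice, the scheduler will only start jobs whose requested wall time fits before the maintenance window. A minimal sketch of a job script that respects this, assuming Colossus accepts standard SLURM directives (the job name and ./my_simulation are hypothetical):

    #!/bin/bash
    # Request 13 days of wall time so the job can finish before the
    # 19 Nov downtime; requests longer than 14 days will be held in
    # the queue until the downtime is over.
    #SBATCH --job-name=long_run
    #SBATCH --time=13-00:00:00
    ./my_simulation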
Happy computing!
Francesca@TSD
Dear TSD-users,
as announced two weeks ago, Colossus will be stopped today (4/11-2015) from 8:00 a.m. Barring setbacks, we expect the downtime to be finished by Thursday afternoon. You will be notified when the service is on again.
Regards,
Francesca@TSD
Dear TSD-users,
there will be a maintenance stop of the TSD infrastructure on Thursday 5/11 from 15:00 to 15:30 CET. During the downtime users will not be able to access TSD. The VMs will probably be rebooted at the end of the downtime, so all running processes will be stopped. The TSD downtime coincides with the Colossus maintenance stop, so there will be no jobs running on the cluster at the time of the downtime. The short notice is due to the fact that we decided to merge two maintenance stops, namely HNAS and Colossus, to minimise the number of outages.
The downtime lasted from 15:00 to 15:10, and everything is back up and working. Performance should be improved.
Sorry for the inconvenience.
Regards,
Francesca
Dear TSD-user,
tomorrow there will be an upgrade of the Cerebrum instance in TSD. The outage will last for the entire day. As a consequence of the maintenance stop, Brukerinfo will not work.
You will receive an informative email when the maintenance is finished.
Sorry for the inconvenience.
Regards,
TSD team
Dear TSD-users,
the issue with the missing communication between Colossus and the Domain Controllers has been solved, and Colossus is now back in production as usual.
We expect that very few (if any) jobs failed during the unplanned outage.
Sorry for the inconvenience.
Happy computing!
Francesca@TSD
Dear TSD user
We have a problem with Colossus: our Domain Controller update caused an unwanted situation. We hope to get things back on track today; we'll keep you posted.
For those of you paying for CPU hours who have had jobs killed, please email us at tsd-drift@usit.uio.no to get this refunded with interest.
Sorry for the inconvenience.
Gard
Dear TSD user,
on the 10th of June 2015 at 12:00 CEST there will be an update of the TSD disk. We expect the upgrade to last for an hour. During this period the system might hang for about one minute at 30-minute intervals (the first time at 12:00, the second at 12:30, etc.).
For the Colossus users: all jobs on the Colossus nodes will keep running as usual during the upgrade. Note, however, that jobs finishing during the upgrade might crash because writing data back to the VMs may fail. We therefore advise you to schedule (when possible) your jobs to finish well after the upgrade period.
Regards,
TSD@USIT
Dear TSD-user,
the Linux VMs will be shut down for maintenance purposes, as announced previously.
You will be informed when we are finished.
Regards,
Francesca@TSD
Dear User,
the login problem we experienced this morning has just been solved.
Regards,
TSD@USIT
Dear Users
We have an issue with the two-factor login. The problem occurs the second time you try to log in today. We are on the case and hope to have it solved quite soon.
Best
Gard@TSD
Dear TSD user,
the maximum wall-time limit for jobs running on Colossus has now been increased to 28 days. This will facilitate the execution of long simulations/calculations. However, we strongly advise you not to run jobs for more than 7 days unless you have enabled checkpointing (...
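As an illustration of the checkpointing advice, here is a minimal self-resubmitting job sketch, assuming SLURM; my_simulation, checkpoint.dat and finished.flag are hypothetical stand-ins for whatever checkpoint/restart mechanism your application provides:

    #!/bin/bash
    #SBATCH --time=7-00:00:00
    # The application is assumed to write periodic checkpoints to
    # checkpoint.dat and to create finished.flag when it is done.
    # Run for at most 6.5 days, leaving time to resubmit before the
    # wall-time limit kills the job.
    timeout 156h ./my_simulation --checkpoint checkpoint.dat
    # If the run is not finished, queue a new job that resumes from
    # the checkpoint (save this script as, e.g., checkpoint_job.sh).
    if [ ! -f finished.flag ]; then
        sbatch checkpoint_job.sh
    fi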
Dear TSD user,
due to maintenance, on the 18th of May 2015 at 13:30 CEST all the Linux machines in TSD will be shut down. We expect the downtime to last for an hour. You will receive a notification by mail when the maintenance stop is over.
For the Colossus users: all jobs on the Colossus nodes will keep running as usual during the downtime. Note, however, that jobs finishing during the downtime might crash because writing data back to the VMs may fail. We therefore advise you to schedule (when possible) your jobs to finish well after the downtime period.
Regards,
TSD@USIT
- This work has been finished and we are back in production (08:50, 6/5-15)
Because USIT is starting to use a new certificate when talking to MinID (ID-porten/Difi), Nettskjema will be restarted on 6/5-15 at 08:30. Estimated downtime is about 20 minutes.
Nettskjema will not work during this downtime. We will try to update here when it is back online again.
Best TSD@USIT
The Spice proxy was accidentally rebooted today at 15:40. This incident caused a 5-minute downtime for remote connections to Linux machines in TSD using Spice. We are sorry for any inconvenience this may have caused.
Spice proxy is now up and running again.
Dear TSD users
As usual, things do not go as planned. The machine has been moved, but we cannot get the SPICE connection working. We will come back to this shortly. As a workaround, logging in first to a Windows machine and then using PuTTY to reach your Linux VM works.
We are moving the SPICE proxy machine to VMware today at 14:00, so there will be a short downtime; the machine should be up again no later than 14:30. All connections using SPICE will be lost. If you log in on Windows and then use SSH to your computer, you will not be affected.
We will update this logpost when done.
Dear TSD users
We have fixed the LDAP issue in TSD. Everything should work, except for a known trouble with p21 in the file sluice.
We are very sorry for the downtime. If there are any more issues, please report to tsd-drift@usit.uio.no.
Best regards
Gard@TSD
Dear TSD-users
NB: The amount of data inside the file sluice was so large that we must extend the file-sluice downtime until 17:00 today.
We are moving one of the file-sluice machines to VMware tomorrow morning; thus, no files can be imported or exported tomorrow morning from about 09:00 to 12:00. Data will not be lost, but all jobs and connections will be cut off at 09:00 tomorrow morning. Nettskjema answers from this period will pop up inside TSD once we are back online with the server.
Nettskjema will be stopped on 19/3-15 from 08:30 to 10:30. No answers can be handed in during this time, and one cannot log in to create or change forms at nettskjema.uio.no. This downtime is due to a major upgrade to Nettskjema version 15.
Best regards
Gard
We had a DNS issue yesterday at about 15:30. It was quick-fixed yesterday afternoon, and a permanent fix was in place today at 10:15.
The cause was that, after migrating machines to VMware, the old machines were left in RHEV, turned off. Some eager users had restarted these machines (we totally understand why, as you believed they were down), and this created duplicate machine names and IP addresses. This in turn caused DNS to panic.
Sorry for the inconvenience.
Gard
We have solved the issue with the import/export.
Sorry for the delay with the fix.
Gard
Dear TSD-users
You may log in to TSD now. Problem solved.
TSD-team
Dear TSD-users
You may experience problems logging in to TSD. We are on the case and working to solve it as soon as possible.
Sorry for the inconvenience.
TSD-Team