Dear all,

Please find some IT updates below. Report any issues or questions to it.img-wien@univie.ac.at

Upcoming Events

  • ☁️☁️☁️wolke.img.univie.ac.at replaces srvx1.img.univie.ac.at as Landing Page 🥳
    • Links on srvx1.img.univie.ac.at/??? are gone by
  • Upcoming Maintenance in Summer:
    • Webserver switch Wolke - SRVX1
    • Network speed upgrade from 10GbE to 25GbE
    • JET Cluster Upgrade
  • SRVX8 gets a GPU; in summer it will become the new student hub server
  • New Server Aurora: aurora.img.univie.ac.at
  • Quotas on SRVX1
  • New user restrictions on JET
  • ECMWF has a very useful new open data access / charts offering
  • WLAN Vouchers (eduroam for guests), request via ServiceDesk
  • GitLab has more runners for CI.

New Landing page

Please use wolke.img.univie.ac.at from now on. Requests are redirected to the servers running each service, transparently to the user. This allows maximum flexibility to move services from server to server.

| new address / subdomain | old address | service | comment | start |
|---|---|---|---|---|
| jupyter.wolke.img.univie.ac.at | jet01.img.univie.ac.at | Research Jupyter Hub on JET | | |
| teaching.wolke.img.univie.ac.at | srvx1.img.univie.ac.at/hub | Teaching Jupyter Hub on SRVX1 | migrates to SRVX8 | |
| webdata.wolke.img.univie.ac.at | srvx1.img.univie.ac.at/webdata | Web file browser on SRVX1 | upgrade to new version on | OPERATIONAL |
| secure.wolke.img.univie.ac.at | srvx1.img.univie.ac.at/secure | Message encryption on SRVX1 | migrates to DEV | |
| filetransfer.wolke.img.univie.ac.at | srvx1.img.univie.ac.at/filetransfer | Commandline file transfer | | |
| uptime.wolke.img.univie.ac.at | srvx1.img.univie.ac.at/status | Status of IT services | migrates to DEV | OPERATIONAL |
| ecaccess.wolke.img.univie.ac.at | srvx1.img.univie.ac.at/ecmwf | ECAccess local gateway | upgrade to containerized version | |
| library.wolke.img.univie.ac.at | srvx1.img.univie.ac.at/library | iLibrarian digital library | upgrade to new version | |

Jet Cluster 🖧

Please be reminded that JET01/02 are not meant for computing; use SRVX1/8/Aurora for interactive computing.

JET01/JET02 now have stricter rules for user processes (max. 20 GB memory, max. 500 processes). In autumn, when the JET upgrade will most likely happen, the two login nodes will become fully available to users again, with no restrictions.
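How the 20 GB / 500-process limits are applied is not stated above; assuming they are set as plain POSIX resource limits (rather than via cgroups), you can inspect what applies to your own session like this:

```python
import resource

# Query the per-process limits in the current session.
# RLIMIT_AS is the address-space (memory) cap, RLIMIT_NPROC the process cap.
def show_limits():
    limits = {
        "memory (RLIMIT_AS)": resource.RLIMIT_AS,
        "processes (RLIMIT_NPROC)": resource.RLIMIT_NPROC,
    }
    for name, key in limits.items():
        soft, hard = resource.getrlimit(key)
        fmt = lambda v: "unlimited" if v == resource.RLIM_INFINITY else str(v)
        print(f"{name}: soft={fmt(soft)}, hard={fmt(hard)}")

show_limits()
```

If the limits are enforced via cgroups instead, look under /sys/fs/cgroup for your session's memory and pids controllers.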

Software 📝

gnu-stack - Combination of multiple packages (cdo, eccodes, nco, ...) built with the GCC 8.5.0 compiler

intel-stack - Combination of multiple packages (cdo, eccodes, nco, ...) built with the intel-oneapi 2021.7.1 compiler

mayavi 4.8.1 - 4D scientific plotting

rttov 12.2 - RTTOV library + python interface. GCC 8.5.0

SRVX 🖳

Will need to be renamed after summer.

Software 📝

dwd-opendata 0.2.0 - Download DWD ICON open data, forecasts, analysis

nwp 2023.1 - NWP Python distribution (enstools, ...)

gnu-stack - Combination of multiple packages (cdo, eccodes, nco, ...) built with the GCC 8.5.0 compiler

intel-stack - Combination of multiple packages (cdo, eccodes, nco, ...) built with the intel-oneapi 2021.7.1 compiler

cuda 11.8.0 - NVIDIA CUDA library and utils

Hardware 🧰

Installation of an Nvidia GTX 1660 GPU (6 GB memory) in SRVX8 to allow for graphical computing using e.g. ParaView or Mayavi.

You can monitor GPU usage with: nvidia-smi -l

Example of running a job on a GPU vs. on CPU, Python example: gpu-stress.py

Python on GPU
# load a conda module, e.g. micromamba
you@srvx8 $ module load micromamba
# setup an environment and install required packages
you@srvx8 $ micromamba create -p ./env/gputest -c conda-forge numba cudatoolkit
# run example on CPU and GPU
you@srvx8 $ ./env/gputest/bin/python3 gpu-stress.py
---------------
Add 1 to a array[1000]
on CPU: 0.0002308s
on GPU: 0.1826s
Add 1 to a array[100000]
on CPU: 0.02366s
on GPU: 6.91e-05s
Add 1 to a array[1000000]
on CPU: 0.2343s
on GPU: 0.0008399s
Add 1 to a array[10000000]
on CPU: 2.299s
on GPU: 0.009334s
Add 1 to a array[100000000]
on CPU: 23.14s
on GPU: 0.09309s
Add 1 to a array[1000000000]
on CPU: 229.9s
on GPU: 0.9586s
---------------

Obviously this is just an example, but it demonstrates how parallel computing might help with some of your problems. Please let me know if you have other examples or need help implementing code changes.
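The script gpu-stress.py itself is not reproduced here; the following is a hypothetical sketch of the same timing pattern, using numba's CUDA target (matching the numba/cudatoolkit packages installed above) when a GPU is present, and falling back to a plain NumPy CPU run otherwise:

```python
import time
import numpy as np

# Use the GPU only if numba is installed and a CUDA device is visible.
try:
    from numba import cuda, vectorize
    HAVE_GPU = cuda.is_available()
except ImportError:
    HAVE_GPU = False

def add_one_cpu(a):
    return a + 1.0          # NumPy elementwise add on the CPU

if HAVE_GPU:
    @vectorize(["float64(float64)"], target="cuda")
    def add_one_gpu(x):     # compiled to a CUDA kernel by numba
        return x + 1.0

def bench(fn, a):
    t0 = time.perf_counter()
    fn(a)
    return time.perf_counter() - t0

for n in (1_000, 1_000_000):
    a = np.ones(n)
    print(f"Add 1 to a array[{n}]")
    print(f"on CPU: {bench(add_one_cpu, a):.4g}s")
    if HAVE_GPU:
        print(f"on GPU: {bench(add_one_gpu, a):.4g}s")
```

Note that the GPU only pays off for large arrays: each call has to move the data to device memory first, which dominates the runtime at small sizes, as the measurements above show.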

Quotas🧮

Please be reminded that strict quotas need to be enforced to ensure that the backup procedure stays much more responsive than it used to be.

As described in Computing (HPC, Servers) there are the following storage space restrictions:

| Name | HOME | SCRATCH | Quota limits enforced? | comment |
|---|---|---|---|---|
| SRVX1 / SRVX8 | 100 GB | 1 TB | 01.09.2023 | Staff; exceptions can be granted |
| SRVX1 | 50 GB | - | 01.09.2023 | Students |
| JET | 100 GB | - | YES | |
| VSC | 100 GB | 100 TB | YES | shared between all users; number of files matters too (limit 2e6 files) |

Note: Students will only be given 50 GB on HOME, no SCRATCH

For Teaching on SRVX1 use: /mnt/students/lehre (request a directory for your course to share with students)
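To check where you stand against these quotas, standard Linux tools are enough; the sketch below assumes GNU coreutils (if the quota(1) command is installed on the server, `quota -s` will show the enforced limits directly):

```shell
# usage_report DIR - print total size and number of files of DIR
# (the file count matters on VSC, where 2e6 files is the limit)
usage_report() {
  du -sh "$1"
  find "$1" -type f | wc -l
}

usage_report .
```

Run `usage_report "$HOME"` (or on SCRATCH) before the enforcement date to see whether you need to clean up.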

VSC🖧

There has been a major software stack change on VSC. The software stacks are now separated by architecture/VSC generation, so VSC4 and VSC5 each have a different stack.

Please check that your paths are correct. Module names have changed once again. More information on the VSC Wiki.

Software 📝

eccodes 2.25 - with fortran extension on VSC5

Please have a look at our new VSC project page and add your project / usage statistics. Thanks.

There are still problems with VSC modules not setting CPATH and LIBRARY_PATH correctly.
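Until the modules are fixed, you can set those variables yourself after loading a module. This is a sketch; the prefix below is illustrative, so substitute the real installation prefix of the module you loaded (`module show <name>` reveals it):

```shell
# append_prefix PREFIX - add PREFIX/include and PREFIX/lib to the
# compiler search paths, preserving any existing entries
append_prefix() {
  export CPATH="$1/include${CPATH:+:$CPATH}"
  export LIBRARY_PATH="$1/lib${LIBRARY_PATH:+:$LIBRARY_PATH}"
}

append_prefix /opt/sw/eccodes-2.25   # hypothetical prefix, adjust to your module
echo "$CPATH"
echo "$LIBRARY_PATH"
```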

VPN

As announced by ZID, MFA (multi-factor authentication) will be mandatory for VPN from 3 July on! More information on our Wiki (MFA - Multifactor Authentication). There are two choices: using a YubiKey or using an authenticator app.

MS365 vs. Office 2021

You now have the opportunity to choose between the cloud version MS365 (known as the "MS Office suite") and MS Office 2021 (a local "on-premises" installation).

In principle it is up to the user which kind of Office is chosen. More information can also be found in our wiki.
