Fysikum runs a cluster providing basic shared resources such as job scheduling, storage and software maintenance, as well as a few computing nodes for general use by all Fysikum researchers. As of March 2020, 11 project-funded nodes with 704 cores, 5.5 TB of RAM and 300 TB of disk space have been added to the cluster and its job scheduling system. The configuration allows Fysikum users to run jobs on any unused node, while giving each group priority on its own project-funded nodes. The modular, inhomogeneous design allows for continuous development of the cluster by adding new nodes and removing outdated ones.
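
To illustrate how jobs are typically run on such a system, the sketch below shows a minimal batch script. It assumes a SLURM-style scheduler and a hypothetical general-access partition named "fysikum"; the announcement does not name the actual scheduler, and real partition names, accounts and limits may differ.

    #!/bin/bash
    # Minimal batch script (sketch; assumes SLURM and a hypothetical
    # partition called "fysikum").
    #SBATCH --job-name=demo
    #SBATCH --partition=fysikum    # general-access partition (hypothetical name)
    #SBATCH --ntasks=8             # number of cores/tasks requested
    #SBATCH --mem=16G              # memory for the whole job
    #SBATCH --time=02:00:00        # wall-clock limit (hh:mm:ss)

    srun ./my_simulation           # replace with your own program

If the cluster indeed runs SLURM, such a script would be submitted with "sbatch job.sh" and monitored with "squeue -u $USER".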

Server room at Fysikum (photo: Holger Motzkau)

As a starting point, nodes on existing hardware (8 1-core Opteron nodes with 64 GB of memory and a 25 Gb network, and 10 8-core Xeon nodes with 24 GB) were set up and are generally available.

Additional contributions are welcome.

For communication about the cluster, the mailing list HPC-at-fysik.su.se has been set up.

Once the cluster becomes operational, a seminar will be scheduled to help users get started.

For ideas, questions or requests, please contact holger.motzkau@fysik.su.se or mikica.kocic@fysik.su.se.

Cluster policy

  • The Fysikum cluster is a common resource, supported and maintained by Fysikum, that is available to all research groups at Fysikum.
  • It is a supplement to, rather than an alternative to, national large-scale infrastructures such as PDC and SNIC, bridging the gap between desktop computers and large clusters.
  • Fysikum provides the basic infrastructure: rack space, power, operating system infrastructure, hardware and software installation & maintenance, basic login nodes, storage and interconnect, as well as some general computing nodes.
  • Available compute time is shared equally among Fysikum users (managed by a queuing system).
  • Excessive use of storage (more than a few TB) requires a user contribution to the cluster (approximately 1 kkr/TB per 5 years for scratch space without backup and 2 kkr/TB per 5 years for storage with backup).
  • Additional compute nodes are funded by projects/research groups. A research group has priority on the resources it funded; if those resources are idle, they are available to other users (see the sketch after this list).
  • External collaborators can get access to the cluster after approval by their host, and use it on the same terms as the host's group members.
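
To make the sharing and priority rules above more concrete, the sketch below shows how submissions might look from the command line, assuming a SLURM-style queuing system and hypothetical partition names ("fysikum" for the general nodes, "projx" for a set of project-funded nodes). The actual names and the exact priority settings are determined by the cluster configuration.

    # Any Fysikum user: run on the general nodes (hypothetical partition name).
    sbatch --partition=fysikum job.sh

    # Member of the group that funded the "projx" nodes: submit to that
    # partition, where the group has priority; other users' jobs may run
    # there only while the nodes are otherwise idle.
    sbatch --partition=projx job.sh

    # Inspect the fair-share information the scheduler uses to balance
    # compute time between users.
    sshare -u $USER

    # List partitions and node availability.
    sinfo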