Fysikum runs a cluster providing basic shared resources such as job scheduling, storage and software maintenance, as well as a few computing nodes for general use by all Fysikum researchers. As of April 2021, the cluster comprises about 18 general and 38 project-funded nodes, of which 24 belong to the Department of Astronomy. About 2.3 million core hours per month, 32 TB of RAM and 500 TB of disk space are available. The configuration allows Fysikum users to run jobs on any unused node while giving users priority on their own project-funded nodes. The modular, inhomogeneous design allows for continuous development of the cluster through the addition of new nodes and the removal of outdated ones.
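As a rough sanity check on the figures above, the quoted core hours imply a plausible core count per node. This is only back-of-the-envelope arithmetic using the numbers from the text, assuming a 30-day month; the actual per-node core counts are not stated.

```python
# Sanity check of the quoted cluster capacity (figures from the text).
total_core_hours_per_month = 2_300_000
hours_per_month = 30 * 24            # assumed 30-day month

cores = total_core_hours_per_month / hours_per_month
nodes = 18 + 38                      # general + project-funded nodes

print(f"~{cores:.0f} cores total, ~{cores / nodes:.0f} cores per node")
```

This works out to roughly 3200 cores, or on the order of 55-60 cores per node, which is consistent with typical dual-socket compute nodes of that era.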

Server room at Fysikum, photo: Holger Motzkau

Additional contributions are welcome.

For communication about the cluster, the mailing list
[HPC-at-fysik.su.se] has been set up.

As soon as the cluster becomes operational, a seminar to help users get started will be scheduled.

For ideas, questions or requests, please contact holger.motzkau@fysik.su.se or mikica.kocic@fysik.su.se.

Cluster policy

  • The Fysikum cluster is a common resource supported and maintained by Fysikum, available to research groups at Fysikum.
  • It is a supplement rather than an alternative to national large-scale infrastructures such as PDC and SNIC, bridging the gap between desktop computers and large clusters.
  • Basic infrastructure such as rack space, power, operating system infrastructure, hardware and software installation & maintenance, basic login nodes, storage, and interconnect as well as some general computing nodes are provided by Fysikum.
  • Available compute time is shared equally between Fysikum users (managed by a queuing system).
  • Excessive use of storage (more than a few TB) requires a user contribution to the cluster (approx. 1 kkr/TB/5 years for scratch space with no backup and 2 kkr/TB/5 years for storage with backup).
  • Additional compute nodes are funded by projects/research groups. Research groups have priority on the resources they funded. If the resources are idle, they are available to other users.
  • External collaborators can get access to the cluster after approval by the host. They get access to the cluster on the same terms as the host's group members.
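The policy above (fair sharing via a queuing system, with priority on project-funded nodes) is typical of clusters managed with a batch scheduler. As a sketch of what job submission could look like, assuming the scheduler is Slurm (the text does not name it) and that project-funded nodes are exposed as partitions, a batch script might read:

```shell
#!/bin/bash
# Hypothetical Slurm batch script; the scheduler, partition name and
# resource limits below are illustrative assumptions, not taken from
# the cluster documentation.
#SBATCH --job-name=demo          # name shown in the queue
#SBATCH --ntasks=8               # number of MPI tasks / cores
#SBATCH --time=02:00:00          # wall-clock limit (HH:MM:SS)
#SBATCH --partition=astro        # hypothetical project partition

# Launch the (hypothetical) application under the scheduler.
srun ./my_simulation
```

Under such a setup, jobs submitted by the funding group to their own partition would start with priority, while idle nodes remain usable by other Fysikum users, matching the policy described above.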