Nodes overview


Sunrise comprises several node types, grouped into partitions.

The HPL performance details are given here: Fysikum HPL.

| Partition       | Nodes | Cores/node | RAM/node | Architecture and instruction set |
|-----------------|-------|------------|----------|----------------------------------|
| c01: solar (*)  | 8     | 12         | 32 GiB   | amd, fma4/avx                    |
| c02: fermi (*)  | 10    | 8          | 24 GiB   | intel, sse4_2                    |
| c03: cops       | 14    | 64         | 512 GiB  | amd, avx2                        |
| c04: rossw      | 30    | 128        | 1 TiB    | amd, avx2                        |
| c05: ampere     | 1     | 48         | 1 TiB    | NVIDIA A100 40GiB                |
| c06: qcmd11 (*) | 1     | 16         | 48 GiB   | amd, fma4                        |
| c07: amper2     | 2     | 48         | 512 GiB  | NVIDIA HGX A100 80GiB            |
| c08: jon        | 1     | 64         | 768 GiB  | amd, avx2, avx512                |
| c08: kta        | 1     | 96         | 768 GiB  | amd, avx2, avx512                |
| c09: qcmd (*)   | 2     | 20         | 96 GiB   | intel, avx2, fma/avx             |
| Total:          | 70    | 5272       | 42 TiB   |                                  |

Some nodes are restricted depending on the source of funding. The partitions marked with (*) are open access, allowing jobs from any user; they can be used for testing, lab and training sessions. The other partitions are restricted to specific groups. Requests for access to these nodes must be confirmed by the node owner before access is granted, which is usually not a problem. Note, however, that the owner always has higher priority when accessing these nodes.
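
As a concrete sketch of how a job is directed to one of the open-access partitions (this assumes the cluster scheduler is Slurm and that the partition names above, e.g. solar, match the scheduler's partition names; neither is stated in this overview):

```python
# Sketch only: inspect an (assumed) open-access Slurm partition and submit a
# one-task test job to it. The partition name "solar" is taken from the table
# above and is assumed to be the Slurm partition name.
import subprocess

# Show the node states in the partition.
subprocess.run(["sinfo", "--partition", "solar"], check=True)

# Submit a minimal test job; --wrap runs a single shell command.
subprocess.run(
    ["sbatch", "--partition", "solar", "--ntasks", "1", "--wrap", "hostname"],
    check=True,
)
```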

Note

Hyperthreading is not enabled on the CPUs; jobs therefore allocate physical cores.
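
As a quick sanity check inside a running job, the number of CPUs the scheduler exposes to the process should therefore equal the number of physical cores that were requested. A minimal, Linux-only sketch:

```python
# Sketch: count the CPUs this process is allowed to run on. With
# hyperthreading disabled, this equals the physical cores allocated to the job.
import os

allocated = len(os.sched_getaffinity(0))  # CPU affinity mask of this process (Linux)
print(f"CPUs available to this job: {allocated}")
```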

CPU instruction sets

The following table lists the CPU instruction sets supported by different partitions. The label EPYC denotes the nodes in the cops, rossw and ampere partitions.

| Instruction set | solar | fermi | EPYC | qcmd |
|-----------------|-------|-------|------|------|
| sse             | X     | X     | X    | X    |
| sse2            | X     | X     | X    | X    |
| ssse3           | X     | X     | X    | X    |
| sse4_1          | X     | X     | X    | X    |
| sse4_2          | X     | X     | X    | X    |
| sse4a           | X     |       | X    |      |
| fma4            | X     |       |      |      |
| fma             |       |       | X    | X    |
| avx             | X     |       | X    | X    |
| avx2            |       |       | X    | X    |
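
To check which of these instruction sets are actually advertised on the node a job lands on, one can read the CPU flags reported by the kernel. A minimal, Linux-only sketch:

```python
# Sketch: report which of the instruction sets from the table above are
# advertised in /proc/cpuinfo on the current node (Linux only).
SETS = ["sse", "sse2", "ssse3", "sse4_1", "sse4_2",
        "sse4a", "fma4", "fma", "avx", "avx2"]

flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

for name in SETS:
    print(f"{name:8s} {'yes' if name in flags else 'no'}")
```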

Solar partition (c01)

We have 8 Dell R415 nodes. These nodes are not restricted to any user group and as a result perform a diverse range of workloads.

c01n[01-08] (Dell R415 II)

Count: 8
Processor: 2 x 6 Core AMD Opteron 4238, 3.3 GHz
Cores/Node: 12
RAM: 32 GiB
TMP Size: 400 GiB HDD
Interconnect: 20 Gbps InfiniBand DDR

Fermi partition (c02)

We have 10 HP DL160 G6 Server nodes. These nodes are not restricted to any user group and as a result perform a diverse range of workloads.

c02n[01-10] (HP DL160 G6)

Count: 10
Processor: 2 x 4 Core Intel Xeon L5520, 2.27 GHz
Cores/Node: 8
RAM: 24 GiB
TMP Size: 140 GiB HDD
Interconnect: 1 Gbps Ethernet

CoPS partition (c03)

We have 14 Dell R6525 nodes with AMD EPYC Rome 7502. These nodes are mainly used for cosmological simulations by the Cosmology, Particle Astrophysics and Strings (CoPS) division and the DESIREE research (Double ElectroStatic Ion Ring ExpEriment).

c03n[01-14] (Dell R6525)

Count: 14
Processor: 2 x 32 Core AMD EPYC 7502, 2.5 GHz
Cores/Node: 64
RAM: 512 GiB, 3200 MT/s
TMP Size: 1.92 TB SSD
Interconnect: 100 Gbps InfiniBand EDR/HDR100

Rossw partition (c04)

We have 6 Dell R6525 nodes with AMD EPYC Rome 7H12, 18 Dell R6525 nodes with AMD EPYC Milan 7763 and 5 Dell R6525 nodes with AMD EPYC Milan-X 7773X. These nodes are mainly used by the computational high-energy astrophysics group at the Department of Astronomy.

c04n[01-06] (Dell R6525)

Count: 6
Processor: 2 x 64 Core AMD EPYC 7H12, 2.6 GHz
Cores/Node: 128
RAM: 1024 GiB, 3200 MT/s
TMP Size: 0.46 TB SSD
Interconnect: 100 Gbps InfiniBand EDR/HDR100

c04n[07-24] (Dell R6525)

Count: 18
Processor: 2 x 64 Core AMD EPYC 7763, 2.45 GHz
Cores/Node: 128
RAM: 1024 GiB, 3200 MT/s
TMP Size: 0.46 TB SSD
Interconnect: 100 Gbps InfiniBand EDR/HDR100

c04n[25-29] (Dell R6525)

Count: 5
Processor: 2 x 64 Core AMD EPYC 7773X, 2.2 GHz
Cores/Node: 128
RAM: 1024 GiB, 3200 MT/s
TMP Size: 0.46 TB SSD
Interconnect: 100 Gbps InfiniBand EDR/HDR100

Ampere GPU partition (c05)

We have one Supermicro 4124GS-TNR node with 8 NVIDIA A100 GPU cards. This partition is oversubscribed.

c05n01 (Supermicro 4124GS-TNR)

Node count: 1
GPU count: 8 x NVIDIA A100 40GiB
Processor: 2 x 24 Core AMD EPYC 7402, 2.8 GHz
Cores/Node: 48
RAM: 1024 GiB
TMP Size: 1.9 TB SSD + 7.6 TB NVMe
Interconnect: 100 Gbps InfiniBand EDR/HDR100
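
Once a job is running on this node, the GPUs visible to it can be listed with nvidia-smi (a sketch; it assumes the NVIDIA driver utilities are on the job's PATH, and the scheduler normally limits visibility to the GPUs granted to the job):

```python
# Sketch: list the GPUs visible to the current job on a GPU node.
import subprocess

subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,memory.total", "--format=csv"],
    check=True,
)
```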

Qcmd11 partition (c06)

This partition contains one HP DL165 G7 node, which originally belonged to the QCMD research group.

c06n01 (HP DL165 G7)

Count: 1
Processor: 2 x 8 Core AMD Opteron 6128, 2.00 GHz
Cores/Node: 16
RAM: 48 GiB
TMP Size: 178 GiB SSD
Interconnect: 1 Gbps Ethernet

Amper2 GPU partition (c07)

We have two Dell PowerEdge XE8545 nodes, each with 4 NVIDIA HGX A100 GPU cards. This partition is oversubscribed. It is funded by an EU project and is not available for general use.

c07n[01-02] (Dell PowerEdge XE8545)

Node count: 2
GPU count: 4 x NVIDIA HGX A100 80GiB
Processor: 2 x 24 Core AMD EPYC 7413, 2.6 GHz
Cores/Node: 48
RAM: 512 GiB
TMP Size: 1.9 TB SSD + 4 x 1.9 TB NVMe
Interconnect: 100 Gbps InfiniBand EDR/HDR100

Jon partition (c08/jon)

We have one Dell R7625 node funded by a CMB/CoPS project.

c08n01 (Dell R7625)

Node count: 1
Processor: 2 x 32 Core AMD EPYC 9374F, 3.85 GHz
Cores/Node: 64
RAM: 768 GiB
TMP Size: 480 GB SSD
Interconnect: 100 Gbps InfiniBand EDR/HDR100

KTA partition (c08/kta)

We have one Dell R7625 node co-funded by Fysikum KTA.

c08n01 (Dell R7625)

Node count: 1
Processor: 2 x 48 Core AMD EPYC 9474F, 3.60 GHz
Cores/Node: 96
RAM: 768 GiB
TMP Size: 480 GB SSD
Interconnect: 100 Gbps InfiniBand EDR/HDR100

Qcmd partition (c09)

We have 2 HP DL360 G7 Server nodes that originally belonged to the QCMD research group; they are currently not restricted to any user group.

c09n[01-02] (HP DL360 G7)

Count: 2
Processor: 2 x 10 Core Intel Xeon E5-2650v3, 2.30 GHz
Cores/Node: 20
RAM: 96 GiB
TMP Size: 178 GiB SSD
Interconnect: 10 Gbps Ethernet

Storage nodes

We have 7 Dell R540 servers, one Dell R740, one Dell R6525 and one SuperMicro server reserved for storage. Nine servers host the Lustre storage, and two servers host the home directories (one supplied by Fysikum).

fs[01-04] (Dell R540)

Count: 4
Processor: 2 x 6 Core Intel Xeon Bronze 3204, 1.9 GHz
Cores/Node: 12
RAM: 96 GiB, 2660 MT/s
Storage: 12 x 14 TB 12 Gbps SAS HDD
Interconnect: 100 Gbps InfiniBand EDR/HDR100

fs[05-07] (Dell R540)

Count: 3
Processor: 2 x 6 Core Intel Xeon Bronze 3206, 1.9 GHz
Cores/Node: 12
RAM: 96 GiB, 2660 MT/s
Storage: 12 x 16 TB 12 Gbps SAS HDD
Interconnect: 100 Gbps InfiniBand EDR/HDR100

fs09 (Dell R740)

Count: 1
Processor: 2 x 6 Core Intel Xeon Silver 4309, 2.9 GHz
Cores/Node: 12
RAM: 96 GiB, 2660 MT/s
Storage: 12 x 18 TB 12 Gbps SAS HDD
Interconnect: 100 Gbps InfiniBand EDR/HDR100

fs00 (Dell R6525)

Count: 1
Processor: 2 x 8 Core AMD EPYC 7252, 3 GHz
Cores/Node: 16
RAM: 256 GiB, 3200 MT/s
Storage: 2 x 1.9 TB NVMe
Interconnect: 100 Gbps InfiniBand EDR/HDR100

Head nodes

We have one Dell R6525 and several Dell R415 nodes reserved for user login, system management and the Nix environment.