
Prof. Dr. Nils Blümer

Attention: these pages are mostly outdated! For current information on Nils Blümer, see web pages at the KU.

Computing

Successful work in Computational Physics obviously requires access to sufficient computing resources and state-of-the-art techniques in computing and IT management. More generally, information management is (or should be) a prime task in leading groups and collaborations, especially in science with its steady fluctuation of team members, and in self-organization. In the following, I discuss some aspects that are particularly relevant for my group or that might be helpful for others.

Group HPC cluster

My group owns and uses a dedicated Linux cluster for High-Performance Computing (hosted by the university data center) with more than 200 CPU cores, comprising the following hardware:
  • main file server: Supermicro 4U chassis with 24 SAS/SATA slots, 2x Intel Xeon Harpertown Quad Core E5430, 16 GB RAM, Areca SAS RAID controller ARC-1680ix with 20x 1 TB SAS hard disks (Seagate ST31000640SS), 2x 900 W redundant power supply, 2x 10 GbE (CX4), IPMI 2 interface
  • 2 nodes, each with 2x Intel Xeon L5410, 16 GB, IPMI (in 1x Supermicro Twin 6015TC-T-10G)
  • 16 nodes, each with 2x Intel Xeon Nehalem E5520, 8 GB, IPMI (in 4x Supermicro Twin2 6026TT-BTRF)
  • 8 nodes, each with 2x Intel Xeon Westmere E5620, 8 GB, IPMI (in 2x Supermicro Twin2 6026TT-HTRF)
  • main network: HP ProCurve Switch 2900-48G (J9050A): 48 GBit-ports, 2x 10 GbE Ports (CX4), 2 optical 10 GbE ports (X2)
  • service network (IPMI): LevelOne Switch FastEthernet Switch 48Port+2PortGBE+1PortSFP (unmanaged)
  • update installed in 11/2013: new 1U file server Supermicro 1018D-73MTF with 1x Intel Xeon E3-1240V3, 16 GB RAM, 8x SAS controller LSI 2308, 8x 960 GB Crucial M500 SSD, 2x 10 GbE (CX4), IPMI 2 interface
Setup: hosted at the university data center (ZDV housing service); node management over the IPMI service network is sketched below.
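The nodes and file servers listed above each expose an IPMI interface on the unmanaged service network. As an illustration only, the following Python sketch shows how such a setup could be monitored with the standard ipmitool CLI; the host naming scheme, user name, and password file are hypothetical placeholders, not our actual configuration.

    #!/usr/bin/env python3
    # Sketch: polling node power states over an IPMI service network.
    # All host names and credentials below are hypothetical placeholders.
    import subprocess

    NODES = [f"node{i:02d}-ipmi" for i in range(1, 27)]  # invented naming scheme

    def power_status(host, user="admin", passfile="/root/.ipmipass"):
        """Ask the node's BMC for its chassis power state via ipmitool."""
        result = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", host, "-U", user,
             "-f", passfile, "chassis", "power", "status"],
            capture_output=True, text=True,
        )
        return (result.stdout or result.stderr).strip()

    if __name__ == "__main__":
        for node in NODES:
            print(f"{node}: {power_status(node)}")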

[Plots: cluster load over the last year and over the last week]
    Previous clusters (with P. van Dongen)
  • 16 nodes with 2x AMD AthlonMP 1200, 2 GB, Tyan S2460 (years 2001-2006)
  • 7 nodes with 2x AMD AthlonMP 2200, 1 GB, Tyan S2466 (years 2002-2006)
  • 4 nodes with 2x AMD Opteron 244, 2 GB, Rioworks HDAMA (year 2003)
  • 10 nodes with 2x AMD Opteron 246, 2 GB, Tyan S2882 (year 2004)
  • 4 nodes with 2x AMD Opteron 270, 2 GB (year 2005)
  • 8 nodes with 2x AMD Opteron 2216, 4 GB (year 2006)
  • file servers (years 2001, 2005)
  • Rembo boot server, PBS Pro / OpenPBS queueing system
[Photos: group HPC cluster in 2003 and around 2008]

Information management

  • TWiki/Foswiki for collaboration, group calendar, documentation of resources, programs, and procedures, research data management, grading, etc.
  • Subversion version control system for code development and collaborative writing of papers (while preserving all needed figures and corresponding data sets) and proposals
  • rsnapshot for (local and remote) snapshots, e.g., of user data; a minimal sketch of the underlying hard-link rotation follows this list
  • Unison file synchronizer (e.g., for two-way synchronization between laptops and NFS, and between web servers)
  • Sympa-based mailing lists (with archives etc.)
  • central department seminar web page (implemented in PHP+MySQL by Markus Himmerich, now administered by Andreas Nussbaumer)
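rsnapshot itself is driven by its configuration file (rsnapshot.conf); the Python sketch below merely illustrates the technique it implements: rotated snapshot directories in which unchanged files are hard-linked against the previous snapshot via rsync --link-dest, so every snapshot looks complete but only changed files cost space. Source and destination paths are assumptions for illustration.

    #!/usr/bin/env python3
    # Sketch of rsnapshot-style rotating backups via rsync --link-dest.
    # SOURCE and DEST are illustrative placeholders.
    import os
    import shutil
    import subprocess

    SOURCE = "/home/"                   # data to back up (assumption)
    DEST = "/backup/snapshots"          # snapshot root (assumption)
    KEEP = 7                            # number of daily snapshots to retain

    def rotate():
        """Shift daily.0 ... daily.KEEP-2 up by one, dropping the oldest."""
        oldest = os.path.join(DEST, f"daily.{KEEP - 1}")
        if os.path.isdir(oldest):
            shutil.rmtree(oldest)
        for i in range(KEEP - 2, -1, -1):
            src = os.path.join(DEST, f"daily.{i}")
            if os.path.isdir(src):
                os.rename(src, os.path.join(DEST, f"daily.{i + 1}"))

    def snapshot():
        os.makedirs(DEST, exist_ok=True)
        rotate()
        newest = os.path.join(DEST, "daily.0")
        previous = os.path.join(DEST, "daily.1")
        cmd = ["rsync", "-a", "--delete"]
        if os.path.isdir(previous):
            # hard-link unchanged files against the previous snapshot
            cmd.append(f"--link-dest={previous}")
        cmd += [SOURCE, newest]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        snapshot()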

Usage of supercomputers and central HPC clusters

My experience with supercomputers and central HPC clusters dates back to 1995 and covers numerous machines. For a better overview of the jobs running at different sites, I had created a portal for supercomputer batch queues; the sketch below illustrates the basic idea.
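The actual portal is not reproduced here; as a minimal sketch of the idea, the following Python script polls the batch-queue status of several sites via ssh and prints a combined summary (a real portal would render this as a web page). The site names, login hosts, and queue commands are hypothetical placeholders.

    #!/usr/bin/env python3
    # Sketch of a batch-queue overview portal (hypothetical host names).
    import subprocess

    # site name -> (login host, queue-status command); all entries are
    # illustrative placeholders, not the actual machines used
    SITES = {
        "site-a": ("login.site-a.example.org", "qstat -u $USER"),
        "site-b": ("login.site-b.example.org", "squeue -u $USER"),
    }

    def poll(host, command, timeout=30):
        """Run the remote queue query and return its output (or the error)."""
        try:
            result = subprocess.run(
                ["ssh", host, command],
                capture_output=True, text=True, timeout=timeout,
            )
            return result.stdout or result.stderr
        except subprocess.TimeoutExpired:
            return f"(no answer within {timeout} s)"

    if __name__ == "__main__":
        for site, (host, command) in SITES.items():
            print(f"=== {site} ===")
            print(poll(host, command))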

Code development

Research codes and tools

Sample codes and templates

See also course pages on computer simulations and numerical methods listed on my lectures page.

Parallelization, tuning, porting, and benchmarking

(to be continued, sample benchmark results shown below)
[Plots: benchmark results on Intel and AMD CPUs and on JUGENE]
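As a generic illustration of the kind of strong-scaling measurement behind such benchmark plots, here is a self-contained Python sketch; the dummy kernel is a stand-in for a real physics code, not one of the research codes mentioned above.

    #!/usr/bin/env python3
    # Minimal strong-scaling benchmark sketch: time a fixed, embarrassingly
    # parallel workload for increasing process counts and report speedup.
    import time
    from multiprocessing import Pool

    def work(chunk):
        """CPU-bound dummy kernel standing in for a real physics code."""
        s = 0.0
        for i in range(chunk):
            s += (i % 7) * 0.5
        return s

    def run(nproc, total=8_000_000):
        """Split the fixed workload over nproc workers and time it."""
        chunks = [total // nproc] * nproc
        t0 = time.perf_counter()
        with Pool(nproc) as pool:
            pool.map(work, chunks)
        return time.perf_counter() - t0

    if __name__ == "__main__":
        t1 = run(1)  # serial reference time
        for nproc in (1, 2, 4, 8):
            t = run(nproc)
            print(f"{nproc:2d} procs: {t:6.2f} s  "
                  f"speedup {t1 / t:4.2f}  efficiency {t1 / (nproc * t):4.2f}")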

Miscellaneous

  • Linux work sample (task set used in May 2009 to select a Linux administrator for the Institute of Physics)


Last changed: 25-Nov-13