Purchasing Computational Resources and Research Storage
The University of Sheffield (through IT Services) provides resources enabling researchers to investigate large data sets and to undertake an extensive range of multidisciplinary computational modelling. The central High Performance Computing (HPC) Service provides a standard level of computational access free of charge for all researchers at the University. A fair-sharing mechanism is used to allocate resources amongst the user community. The investment programme and usage policy are regularly reviewed by ITS with advice from the academically led Research Computing Advisory Group.
Research groups with funded projects can purchase priority access to the Sheffield HPC resources, ensuring that their tasks run at a higher priority. This service is available for projects which have planned their Research-IT resource requirements using the costing and awards tool within MyResearch. There are two routes by which groups can purchase priority access.
a) Purchasing priority access on the HPC facility for an agreed amount of compute time. This priority access is available to the group subject to any overriding restrictions on time or memory allocation for individual jobs set by ITS, and is transferable onto a different machine (at an exchange rate determined by ITS) as required. Priority access enables research projects to achieve their goals within their required time scales.
b) Purchasing hardware which can be integrated with the central HPC systems. Projects partner with ITS and use a campus framework agreement to procure HPC hardware. The group has priority access to computing resources equivalent to those purchased with their funds for a set period, namely the expected lifetime of the purchased resource (as determined by ITS), and can also access additional nodes if needed. The group does not retain ownership of the hardware, but the computing resource can be transferred (at an exchange rate determined by ITS) onto a different machine. The benefit for research groups is that the hardware is supported, administered and maintained through support contracts managed by ITS. Infrastructure integrated with the central facility makes it easier to support multidisciplinary and cross-faculty projects, and unused resources can be shared with the wider research community.
Requests for resources, both priority access and hardware purchases, are made by completing the online form:
As well as enabling ITS to provision your request, this form enables ITS to ensure that the required resources are available to support your Research-IT needs. We will reply with details of the cost, how to pay and how to access your reserved allocation. As it is necessary to reserve resources, the process can take a week (the HPC facility is well utilised), and longer if more specialised and/or a greater number of resources is requested. We will inform you as soon as the reservation is active, and you will receive a warning when the reservation is due to expire. The costs quoted in the tables below are based on the full economic cost and include staffing costs for maintaining the facility. Income generated from these resource allocation requests is reinvested in facility upgrades.
Specific policy for ShARC
ShARC is intended for parallel HPC computing that takes advantage of MPI-based communications over the Omni-Path fabric. Queue priorities will therefore be tuned to favour HPC tasks of around 200-300 cores. The scheduler will be set to penalise attempts to run high-throughput jobs on this machine (note: we will initially continue with Son of Grid Engine).
| Resource | Cost per core per hour | Notes |
| --- | --- | --- |
| Standard | 1.0p | Single Intel E5-2630 v3 core with 4 GB of memory per core |
| Big Memory | 1.1p | Single Intel E5-2630 v3 core with 16 GB of memory per core |
| GPU | 11.3p | 1 × NVIDIA Kepler K80M GPU with up to 4 cores from 2 × E5-2630 v3 CPUs |
| Resource | Cost per node per hour | Notes |
| --- | --- | --- |
| Standard | 15.6p | Dual Intel E5-2630 v3 8-core CPUs with 4 GB/core |
| Big Memory | 19.1p | Dual Intel E5-2630 v3 8-core CPUs with 16 GB/core |
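As an illustration of how the per-core rates above translate into project costs, the following sketch (in Python, using hypothetical job sizes and durations, not ITS-endorsed figures) estimates the cost of a priority allocation:

```python
# Hypothetical cost estimator for priority access on ShARC.
# Rates are the pence-per-core-hour figures from the table above.
RATES_PENCE_PER_CORE_HOUR = {
    "standard": 1.0,
    "big_memory": 1.1,
    "gpu": 11.3,
}

def allocation_cost_gbp(resource: str, cores: int, hours: float) -> float:
    """Cost in pounds for `cores` cores of `resource` running for `hours` hours."""
    pence = RATES_PENCE_PER_CORE_HOUR[resource] * cores * hours
    return pence / 100  # convert pence to pounds

# Example: a 200-core MPI job running for 48 hours at the standard rate.
print(f"£{allocation_cost_gbp('standard', 200, 48):.2f}")  # £96.00
```

The actual quotation you receive from ITS may differ, as it is based on the full economic cost at the time of the request.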
Specific policy for Bessemer
Bessemer is used for general purpose computing and supports:
- Single-node HPC tasks of up to 40 cores (nodes will have 40 cores).
- Some two-node tasks (i.e. MPI tasks up to 80 cores) will also be allowed.
- High throughput tasks
- Data intensive big memory tasks (using big memory nodes)
- Accelerated nodes with GPUs such as the NVIDIA Volta V100 (32 GB). These nodes can be used for general purpose computation and may also take advantage of NVLink within a node to support deep learning
- Interactive work (running interactive visualisation using codes such as Abaqus, ANSYS, MATLAB, ParaView, IDL, Python and Mayavi)
Researchers can purchase priority access to compute resources as set out in the purchasing section above. Researchers can also purchase further hardware to be integrated into this machine through the campus HPC purchase framework, provided that such hardware is consistent with the intended usage of Bessemer set out above. The specification of hardware for Bessemer must be agreed with ITS before purchase. Researchers may either (i) use the purchasing procedures above or (ii) request sole access to the purchased hardware for the lifetime of Bessemer. In both cases they relinquish any rights of ownership, except that in case (ii) researchers may request that the hardware be given to them when Bessemer is replaced.
| Resource | Cost per core per hour | Notes |
| --- | --- | --- |
| Standard | 0.3p | Single Intel Xeon Gold 6138 core (20-core CPU) with 4.8 GB of memory per core |
Purchase Resources for Research Computing
Research groups can work with IT Services to purchase hardware that will sit within the framework of Bessemer while delivering dedicated HPC services for their research. This gives research teams the advantage of access to dedicated resources while continuing to take advantage of the 'free' facilities as well. The University of Sheffield has a framework agreement for procuring such extra HPC hardware at favourable prices, prepared by IT Services working with the Research Computing Advisory Group. Research groups who are interested should initially contact email@example.com to start a dialogue. It is important to note that the framework agreement constrains the choice of new hardware so that it can be integrated with the Bessemer cluster.
Extra File Storage Costs
All research groups are entitled to 10 TB of Standard Research Storage free of charge, which is backed up (cross-site) to another datacentre on campus. IT Services can also provide extensions to Research Storage over and above 10 TB, charged at £100 per TB per copy per annum. For example, an extra 10 TB of backed-up storage (2 copies: main storage and cross-site backup), available for 5 years, would cost 10 TB × £100 × 2 copies × 5 years = £10,000. At the end of the 5 years no further charges are taken and IT Services will continue to host the data and make it available indefinitely. For more information please see the IT Services information on storage.
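The charging model above is simple enough to sketch directly. The snippet below (Python; the function name is our own, not an ITS tool) reproduces the worked example:

```python
# Hypothetical sketch of the extra Research Storage charging model:
# £100 per TB per copy per annum, charged for the agreed number of years.
PRICE_GBP_PER_TB_PER_COPY_PER_YEAR = 100

def storage_cost_gbp(extra_tb: int, copies: int, years: int) -> int:
    """Total one-off charge in pounds for extra Research Storage."""
    return extra_tb * PRICE_GBP_PER_TB_PER_COPY_PER_YEAR * copies * years

# The worked example from the text: 10 TB, backed up (2 copies), for 5 years.
print(storage_cost_gbp(10, 2, 5))  # 10000
```

Note that "copies" counts each replica, so backed-up storage is 2 copies (main plus cross-site backup).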
Funding priority access and hardware purchases
Notes for UKRI funding
- Priority access should be costed using the costs identified in the tables above and justification given as to why priority access is needed
- Purchase of new hardware should use the standard UKRI procedures (including the requirement of a 50% contribution from the University for purchases above £10,000; this contribution will not normally come from the ITS budget). See https://epsrc.ukri.org/research/facilities/equipment/process/researchgrants/ for the EPSRC regulations; other research councils have similar provisions. In the justification of resources, a sentence along the lines of "This equipment will be housed in an existing centrally provided and managed high performance computing facility with associated support costs funded through the indirect cost element of the grant" should be inserted
Notes for Charities funding
- Charities do not fund an overhead (indirect/estates) component of a grant. This should be allowed for in costing of access to computing facilities. Contact Research Office for further guidance. Priority access should be costed using the costs identified in the tables above.
Notes for Industry funding
- Applications to industrial bodies should charge overheads at an appropriate rate. Priority access and/or a contribution to hardware costs should be sought where possible. Contact the Research Office for further guidance
Information on the national Tier 2 facilities available can be found on the EPSRC website at https://epsrc.ukri.org/research/facilities/hpc/tier2/. Information on UK Tier 1 computing can be found at http://www.archer.ac.uk/, which is also connected to the European PRACE initiative (http://www.prace-ri.eu/).