Genomics pipelines are sequences of bioinformatics steps, tools, and algorithms used to analyse genome data. A pipeline may run successfully in the environment in which it was created, yet fail on other platforms because of differences in the execution environment; this is where containers offer an advantage, since they package and distribute an entire computing environment and therefore support portability. Reproducibility is essential for the verification and advancement of scientific research. Genomic analyses are also highly complex and often involve comparing new data against multiple large-scale external datasets, and high performance computing (HPC) helps to solve such compute-intensive problems quickly and at scale. This session covers the evaluation of a containerised genomics pipeline on HPC: a performance benchmark is constructed by comparing the containerised pipeline against the same pipeline running on bare metal. It showcases work conducted at the Hartree Centre, built entirely on open-source technologies, addressing a real-world problem of vital importance, namely genomics workflows. The purpose of this case study is to evaluate different container technologies and assess their feasibility and performance on HPC.
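
To make the benchmarking idea concrete, the following is a minimal sketch of how one pipeline step could be timed on bare metal and again inside a container, then compared. It is illustrative only and is not the harness used in the study: the tool command (a BWA alignment), the image name genomics-pipeline.sif, and the use of Singularity's "exec" command are assumptions chosen for the example.

# Minimal sketch: time the same pipeline step on bare metal and inside a
# container, then report the relative overhead. Command and image path are
# hypothetical placeholders.

import shlex
import subprocess
import time

# Hypothetical pipeline step (placeholder arguments).
STEP = "bwa mem reference.fa reads.fq"

# Hypothetical container image holding the same tool stack.
IMAGE = "genomics-pipeline.sif"


def timed_run(cmd: str) -> float:
    """Run a shell command and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(shlex.split(cmd), check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.perf_counter() - start


if __name__ == "__main__":
    bare_metal = timed_run(STEP)
    containerised = timed_run(f"singularity exec {IMAGE} {STEP}")
    overhead = (containerised - bare_metal) / bare_metal * 100
    print(f"bare metal:    {bare_metal:8.1f} s")
    print(f"containerised: {containerised:8.1f} s")
    print(f"overhead:      {overhead:+.1f} %")

In practice such timings would be repeated over multiple runs and pipeline stages, and on the actual HPC scheduler, before drawing conclusions about container overhead.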