RSEConUK 2019 has ended
The Fourth Conference of Research Software Engineering was held at the University of Birmingham.

Content from all sessions is licensed under a Creative Commons Attribution 2.0 UK: England & Wales License.
Thursday, September 19 • 11:00 - 12:30
#6W1b - Train the Trainer: IBM PowerAI - Hands-On with Accelerated Software and Hardware for AI Workloads


Register here if you plan to attend this session, as availability is limited: http://ibm.biz/RSEConf19

A laptop is required for this workshop.

An in-depth, hands-on session covering some of the major benefits of using GPU-accelerated systems for building Deep Learning (DL) and Machine Learning (ML) models. It is based on the IBM Watson Machine Learning Community Edition software bundle, which is free to use on IBM Power Systems (AC922) and x86-based GPU-accelerated systems. In addition to setting up environments that give data scientists access to Deep Learning frameworks, we will also show you how to use features such as:

Large Model Support (LMS): Available for TensorFlow and PyTorch, LMS allows for more complex models and larger data points than can generally fit into GPU memory. Out-of-memory errors are common for data scientists, and LMS alleviates them by using system memory alongside the GPU memory space. We will show you three techniques for implementing LMS in existing code with minimal effort.
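To give a flavour of how small the code change can be, the sketch below probes for the LMS toggle before using it. The `torch.cuda.set_enabled_lms` call is an assumption based on IBM's WML CE PyTorch distribution; it does not exist in stock PyTorch, so the helper simply reports False there.

```python
def enable_lms_if_available():
    """Enable Large Model Support when running under an LMS-enabled
    PyTorch build (e.g. IBM WML CE); return True if it was enabled.

    Note: torch.cuda.set_enabled_lms is specific to IBM's PyTorch
    distribution (an assumption here) and is absent from stock PyTorch.
    """
    try:
        import torch
    except ImportError:
        return False  # PyTorch is not installed at all
    if hasattr(torch.cuda, "set_enabled_lms"):
        # Allow tensors to spill from GPU memory into system memory
        torch.cuda.set_enabled_lms(True)
        return True
    return False
```

On a machine without the WML CE build this is a harmless no-op, which is why a guard like this is a reasonable pattern for code shared across systems.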

SnapML: Common ML algorithms can be accelerated by using GPU parallelism and the right approach. We will show how to use the GPU-accelerated algorithms in SnapML to offload common functions to accelerated hardware, providing performance boosts of up to 50x. Because SnapML mirrors the widely used scikit-learn Python library, it can be used in its place to simplify the implementation.
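Because the APIs mirror each other, swapping scikit-learn for Snap ML can be as small as changing an import. The sketch below assumes the `snapml` package and its `use_gpu` flag from the PowerAI-era Snap ML documentation, and falls back to scikit-learn (or None) so it runs anywhere.

```python
def make_logistic_regression(prefer_gpu=True):
    """Return a logistic-regression estimator, preferring Snap ML's
    GPU-accelerated implementation when it is installed.

    The snapml import and its use_gpu flag are assumptions based on
    IBM's Snap ML documentation; the scikit-learn fallback keeps the
    sketch runnable on systems without Snap ML.
    """
    try:
        from snapml import LogisticRegression
        return LogisticRegression(use_gpu=prefer_gpu)
    except ImportError:
        pass
    try:
        from sklearn.linear_model import LogisticRegression
        return LogisticRegression()
    except ImportError:
        return None  # neither library is available
```

Either estimator exposes the same `fit`/`predict` interface, which is what makes the drop-in replacement straightforward.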

Distributed Deep Learning (DDL): Scaling beyond a single GPU or a single server can be a challenge for data science workloads, particularly for Deep Learning. DDL allows users to scale their existing model training runs across all available resources, maximising efficiency and leading to faster results. With up to 95% linear scalability across 256 GPUs, we will demonstrate how to make use of this scalability with existing code.
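In WML CE, distribution is typically handled by the `ddlrun` launcher, which wraps an otherwise unmodified training script. The host list and script name below are placeholders, the `-H` flag is an assumption from the WML CE documentation, and the guard keeps the sketch harmless where `ddlrun` is not installed.

```shell
# Launch an existing training script across two hosts with IBM DDL.
# ddlrun and its -H flag follow the WML CE documentation (an assumption
# here); host1,host2 and train.py are placeholder names.
if command -v ddlrun >/dev/null 2>&1; then
    ddlrun -H host1,host2 python train.py
else
    echo "ddlrun not found: install IBM WML CE to use DDL"
fi
```

Because the launcher handles process placement and communication, the training script itself needs little or no change, which is the point made above about scaling existing code.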

PowerAI Vision: For users who do not have the Python skills or Deep Learning knowledge to build their own models from visual data, other tools are available. PowerAI Vision can be used entirely through its web interface to build and label data sets, train deep learning models, and even deploy them as public APIs. This software allows researchers with image data stores to rapidly build models for:
  • Image classification
  • Object detection
  • Object segmentation
  • Action recognition


Tom Farrand

Machine Learning Engineer, IBM

Andrew Laidlaw

Power Systems and Red Hat Specialist, IBM

Thursday September 19, 2019 11:00 - 12:30 BST
Aston Webb Building, Room WG12