Much of my work relies on accelerated computation of some sort, usually applied to images or other 2D/3D data. While CUDA is the de-facto toolkit in my field, OpenCL is a more intriguing option thanks to its portability. Portability matters even more in a cloud environment, where the ability to run the same code and containers on a variety of different node SKUs is very appealing.

This post walks through my experiments with bringing multi-platform OpenCL support to Azure Kubernetes via Docker.
There are tons of existing OpenCL Dockerfiles demonstrating multi-platform support (for example, the one I used as a base, and plenty more). I wanted an image that was:

- Based on Ubuntu 20.04 LTS
- Support for at least Intel and CUDA OpenCL (with AMD GPUs as a bonus)
- Set up for use as a VSCode DevContainer image

The result was a Dockerfile that looks like the following:
```dockerfile
FROM nvidia/cuda:11.6.0-devel-ubuntu20.04

ARG DEBIAN_FRONTEND=noninteractive

################################ begin nvidia opencl driver ################################
RUN apt-get update && apt-get install -y --no-install-recommends \
        ocl-icd-libopencl1 \
        clinfo pkg-config && \
    rm -rf /var/lib/apt/lists/*

RUN mkdir -p /etc/OpenCL/vendors && \
    echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd

RUN echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf && \
    echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf

ENV PATH /usr/local/nvidia/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}

# nvidia-container-runtime
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
################################ end nvidia opencl driver ################################

############### begin Intel OpenCL driver installation ###############
# curl is needed to fetch the Intel runtime below
RUN apt-get update && apt-get install -y alien clinfo curl pkg-config

# Install Intel OpenCL driver
#ENV INTEL_OPENCL_URL=http://registrationcenter-download.intel.com/akdlm/irc_nas/vcp/13793/l_opencl_p_18.1.0.013.tgz
ENV INTEL_OPENCL_URL=http://registrationcenter-download.intel.com/akdlm/irc_nas/9019/opencl_runtime_16.1.1_x64_ubuntu_6.4.0.25.tgz

RUN mkdir -p /tmp/opencl-driver-intel
WORKDIR /tmp/opencl-driver-intel
RUN curl -O $INTEL_OPENCL_URL; \
    tar -xzf $(basename $INTEL_OPENCL_URL); \
    for i in $(basename $INTEL_OPENCL_URL .tgz)/rpm/*.rpm; do alien --to-deb $i; done; \
    dpkg -i *.deb; \
    mkdir -p /etc/OpenCL/vendors; \
    echo /opt/intel/*/lib64/libintelocl.so > /etc/OpenCL/vendors/intel.icd; \
    rm -rf *
############### end Intel OpenCL driver installation ###############

# Update & install development packages
RUN apt-get update && \
    apt-get install -y wget make clinfo build-essential git libcurl4-openssl-dev libssl-dev zlib1g-dev
```
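Before wiring this into VSCode, you can sanity-check the image directly with `clinfo`. The image tag below is arbitrary, and `--gpus=all` assumes the NVIDIA Container Toolkit is installed on the host:

```shell
# Build the image (run from the directory containing the Dockerfile)
docker build -t opencl-multi .

# List OpenCL platforms; each installed ICD (nvidia.icd, intel.icd)
# should appear as its own platform in the output
docker run --rm --gpus=all opencl-multi clinfo
```

On a node without an NVIDIA GPU, drop `--gpus=all`; the Intel platform should still be reported, which is the portability win we're after.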
To use this in VSCode, first create a .devcontainer folder in the root of the project you're going to be working with (for me, a new empty git repository). Inside the .devcontainer folder, place the Dockerfile above as well as a devcontainer.json configuration file that tells VSCode how to configure your development environment. Mine looks like the following:
```json
{
    "name": "${localWorkspaceFolderBasename}",
    "dockerFile": "Dockerfile",

    // Set *default* container-specific settings.json values on container create.
    "settings": {
        "terminal.integrated.shell.linux": "/bin/bash"
    },

    // Expose your local GPUs inside the container (requires the NVIDIA Container Toolkit).
    "runArgs": [ "--gpus=all" ],

    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    "forwardPorts": [9002],

    "extensions": [
        "ms-vscode.cpptools",
        "ms-vscode.cmake-tools",
        "eamodio.gitlens"
    ],

    // Uncomment to run additional setup after the container is created.
    //"postCreateCommand": "apt-get update && apt-get -y install gdb"
}
```
Once both of these files are present and saved, hit "F1" and select "Remote-Containers: Rebuild Container Without Cache" (you'll need the Remote - Containers extension and Docker installed). Once the DevContainer is built, you'll be greeted with a new empty terminal in VSCode, ready to start developing.
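As a first program inside the container, a minimal OpenCL host-side check is a good way to confirm the ICD loader sees every vendor. This is a sketch, not part of the image above; compiling it requires the OpenCL headers and dev stub, which the Dockerfile doesn't install (`apt-get install opencl-headers ocl-icd-opencl-dev` adds them), and the file/binary names are just illustrative:

```c
/* platforms.c — enumerate OpenCL platforms and their devices.
 * Build: gcc platforms.c -o platforms -lOpenCL
 */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    /* First call asks only for the platform count */
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, NULL, &num_platforms);
    printf("Found %u OpenCL platform(s)\n", num_platforms);

    cl_platform_id platforms[16];
    clGetPlatformIDs(num_platforms < 16 ? num_platforms : 16, platforms, NULL);

    for (cl_uint i = 0; i < num_platforms && i < 16; ++i) {
        char name[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);

        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, 0, NULL, &num_devices);
        printf("  [%u] %s: %u device(s)\n", i, name, num_devices);
    }
    return 0;
}
```

With both ICDs registered you should see two platforms (NVIDIA CUDA and the Intel runtime); on a CPU-only node, only the Intel platform will appear.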