How to containerize an FPGA Application?

Done… Almost!

After months of development & testing, your FPGA application is finally ready to be delivered to your customers.

You will now encounter the usual deployment conundrum: user environment fragmentation.

Every user has their own customized execution environment, with a different kernel, OS, and libraries, and your FPGA application needs to be compatible with all of them.

To tackle this unsolvable equation, there is one straightforward solution: Containers!

Containers are the answer to software reliability across varied execution environments. Whether it runs on a developer’s physical computer, in a datacenter production environment, or on virtualized servers in public or hybrid clouds, a container behaves exactly the same.

A container bundles the entire runtime environment (application, libraries, and dependencies) in a single package: the infrastructure is abstracted away.

Check out this IDG.TV video for more information on this topic!

But how does it work?

In this article, we have chosen to use Docker as it is, today, the ubiquitous containerization technology. Please note that alternatives exist.

1. Create the container image

The first requirement is to create a container image containing the software application.

This is achieved using a “Dockerfile”, as explained in the Docker documentation.

2. ​Select the base image

To build a Xilinx FPGA image, you should use one of the base images provided by Xilinx: https://github.com/Xilinx/Xilinx_Base_Runtime

Select the base image that matches both the targeted environment and the bitstream requirements.

For instance, if I want to use Ubuntu 18.04 and my bitstream was generated with Xilinx tools 2019.1, I will use:

FROM xilinx/xilinx_runtime_base:alveo-2019-1-ubuntu-1804  

3. Configure the entry point

XRT requires some environment variables to be set to run the application.

The usual way to set these environment variables is to source the “/opt/xilinx/xrt/setup.sh” script before running the application.

Since only a single entry point can be set in a Dockerfile, the following workarounds are possible:

Use a shell script as the entry point; the script sources “setup.sh” and then runs the application.

Hardcode the variables in the Dockerfile using the “ENV” directive. This allows you to use the application as the entry point directly.
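As a sketch of the first workaround (the script name, its location, and the application path are illustrative assumptions, not part of the Xilinx base image):

```dockerfile
# entrypoint.sh -- a small wrapper script copied into the image:
#
#   #!/bin/sh
#   . /opt/xilinx/xrt/setup.sh
#   exec "$@"
#
# Corresponding Dockerfile directives:
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["/usr/local/share/app/my_app"]
```

The `exec "$@"` line replaces the shell with the application process, so signals sent by Docker (e.g. on `docker stop`) reach the application directly.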

4. Copy the FPGA bitstream

Add your FPGA bitstream in the container image using the “COPY” directive:

COPY alveo_u200_xdma_201830_2.xclbin /usr/local/share/app 

5. Install your application and its dependencies

You can install application dependencies using apt/yum tools and copy binaries and libraries using the “COPY” directive:

 COPY app_lib/ app_lib 
 RUN apt-get update && apt-get install -y --no-install-recommends apt-transport-https curl (...) 
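Putting steps 2 to 5 together, a complete Dockerfile could look like the sketch below. The application name, target paths, and package list are illustrative; the ENV values mirror what “setup.sh” typically exports, but verify them against your own XRT installation:

```dockerfile
FROM xilinx/xilinx_runtime_base:alveo-2019-1-ubuntu-1804

# Step 4: copy the FPGA bitstream
COPY alveo_u200_xdma_201830_2.xclbin /usr/local/share/app/

# Step 5: install dependencies and copy the application
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
COPY app_lib/ /usr/local/share/app/app_lib/
COPY my_app /usr/local/share/app/my_app

# Step 3 (second workaround): hardcode the XRT environment with ENV
ENV XILINX_XRT=/opt/xilinx/xrt
ENV PATH=$XILINX_XRT/bin:$PATH
ENV LD_LIBRARY_PATH=$XILINX_XRT/lib

ENTRYPOINT ["/usr/local/share/app/my_app"]
```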

6. Build the Docker Image

Simply run the following command:

 docker build -t [tag] -f [Dockerfile] [BuildFolder] 

Preparing a host for container deployment

To run the FPGA application container on a host, the Xilinx XRT, driver, and board shell must be installed, as well as Docker and optionally Kubernetes.

Xilinx does provide a convenient script to install all dependencies at once: https://github.com/Xilinx/Xilinx_Base_Runtime

Docker & Kubernetes installation procedures can be found in their documentation:

https://docs.docker.com/engine/install/
https://kubernetes.io/docs/setup/

Running with Docker!

Docker is the easiest way to run containers in a development environment.

Your application running inside the container needs direct access to the FPGA device.

This is done with the “--device” and “--mount” arguments.

On the host environment, run the following commands:

 /opt/xilinx/xrt/bin/xbmgmt scan 
 /opt/xilinx/xrt/bin/xbutil scan 

They will display outputs similar to the following:

 0000:02:00.0 xilinx_u250_xdma_201830_2(ts=0x5d14fbe6) mgmt(inst=512) 
 [0] 0000:02:00.1 xilinx_u250_xdma_201830_2(ts=0x5d14fbe6) user(inst=129) 

The end of each line gives the number to be used with the “--device” argument (512 and 129 in this case):

 docker run --rm --device=/dev/xclmgmt512:/dev/xclmgmt512 \
   --device=/dev/dri/renderD129:/dev/dri/renderD129 [DockerImage] [AppArguments] 
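If you script the deployment, the instance number can be extracted from the scan output. A minimal sketch in shell, using the sample output line shown above (the sed pattern is illustrative, not a Xilinx-provided tool):

```shell
# Extract the user device instance number from an xbutil scan output line.
line='[0] 0000:02:00.1 xilinx_u250_xdma_201830_2(ts=0x5d14fbe6) user(inst=129)'
inst=$(printf '%s\n' "$line" | sed -n 's/.*user(inst=\([0-9]*\)).*/\1/p')
echo "$inst"
```

This prints 129, which maps to the “/dev/dri/renderD129” device passed to “--device”.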

In some environments like AWS, more devices are required, generally from “/sys/bus/pci/devices”.

For devices that are not in “/dev”, you need to use the “--mount” argument as follows:

 docker run --rm --device=/dev/xclmgmt512:/dev/xclmgmt512 \
   --device=/dev/dri/renderD129:/dev/dri/renderD129 \
   --mount type=bind,source=/sys/bus/pci/devices/0000:00:1d.0,target=/sys/bus/pci/devices/0000:00:1d.0 \
   [DockerImage] [AppArguments] 

Running with Kubernetes!

Kubernetes is the recommended way to run containers in production.

See the Docker and Kubernetes documentation for more information on how to use it.

Xilinx provides a device plugin to use FPGA with Kubernetes:

https://github.com/Xilinx/FPGA_as_a_Service/tree/master/k8s-fpga-device-plugin
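Once the plugin is deployed, a pod requests an FPGA through the Kubernetes resource mechanism. As a sketch (the resource name depends on the shell installed on the node, and the image name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fpga-app
spec:
  containers:
  - name: fpga-app
    image: my-fpga-app:1.0
    resources:
      limits:
        xilinx.com/fpga-xilinx_u250_xdma_201830_2-0: 1
```

The scheduler then places the pod only on nodes advertising a matching FPGA resource, and the plugin handles the device mapping that “--device” performed in the Docker case.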

That’s it!

Within a few minutes, you have solved the execution environment fragmentation equation.

Thanks to containers, your application is now future-proof and ready to be deployed everywhere by your customers.

Now, why not tackle the revenue generation topic by transforming into a global SaaS vendor?

By Gaetan Dufourcq on 11 May 2020