Docker build chain

Hello everyone,

I am currently working on making the Docker builds smaller so that I can update/download ODM faster. While modifying the Dockerfiles, I noticed that ODM has a portable version that is targeted at an older architecture.

Is the portable version the one used for the Docker Hub builds? It isn’t immediately obvious to me.

Has anyone done any benchmarking of the performance difference between the older architecture (Nehalem) and newer architectures that use the AVX2/AVX512 etc. extensions?

If there is a performance difference, could we do a multi-build for Docker and tag the builds that use the more advanced/modern extensions with :latest-avx512 etc.? That way everyone’s installs would continue to work correctly when they update, and you would have to explicitly opt into the newer extensions.
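
To make the idea concrete, here is a sketch of one possible per-microarchitecture tag scheme. The `MARCH` build-arg and the tag names are assumptions, not existing ODM conventions, and the `docker` commands are echoed rather than executed (a dry run):

```shell
# Hypothetical tag scheme: one image per target microarchitecture.
tag_for() { printf 'opendronemap/odm:latest-%s' "$1"; }

# Echo the build commands this scheme would run (dry run).
for march in nehalem haswell skylake-avx512; do
  echo "docker build --build-arg MARCH=$march -t $(tag_for "$march") ."
done

# Keep :latest pointing at the conservative baseline so existing
# installs keep working after an update.
echo "docker tag $(tag_for nehalem) opendronemap/odm:latest"
```

The key design point is that :latest stays on the baseline build, so nobody’s deployment silently breaks on older CPUs.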

Also, looking at the pretty long list of packages used as dependencies and prerequisites, are there any that can easily be identified as needed only during the build with CMake, or only when running a production system?

RUN add-apt-repository -y ppa:ubuntugis/ubuntugis-unstable \
  && add-apt-repository -y ppa:george-edison55/cmake-3.x \
  && apt-get update -y \
  && apt-get install --no-install-recommends -y \
  build-essential \
  cmake \
  gdal-bin \
  git \
  libatlas-base-dev \
  libavcodec-dev \
  libavformat-dev \
  libboost-date-time-dev \
  libboost-filesystem-dev \
  libboost-iostreams-dev \
  libboost-log-dev \
  libboost-python-dev \
  libboost-regex-dev \
  libboost-thread-dev \
  libeigen3-dev \
  libflann-dev \
  libgdal-dev \
  libgeotiff-dev \
  libgoogle-glog-dev \
  libgtk2.0-dev \
  libjasper-dev \
  libjpeg-dev \
  libjsoncpp-dev \
  liblapack-dev \
  liblas-bin \
  libpng-dev \
  libproj-dev \
  libsuitesparse-dev \
  libswscale-dev \
  libtbb2 \
  libtbb-dev \
  libtiff-dev \
  libvtk6-dev \
  libxext-dev \
  python-dev \
  python-gdal \
  python-matplotlib \
  python-pip \
  python-software-properties \
  python-wheel \
  software-properties-common \
  swig2.0 \
  grass-core \
  libssl-dev \
  && apt-get remove libdc1394-22-dev \
  && pip install --upgrade pip \
  && pip install setuptools
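
As an illustration of the build/runtime split, here is a minimal multi-stage sketch. The stage layout and the runtime package names are assumptions, not a tested Dockerfile; `configure.sh install` and `run.py` are used as ODM’s entry points but should be checked against the current repo:

```dockerfile
# Stage 1: build with the full -dev tool chain (package list abbreviated)
FROM ubuntu:16.04 AS builder
RUN apt-get update && apt-get install --no-install-recommends -y \
    build-essential cmake git libgdal-dev libboost-all-dev
COPY . /code
RUN cd /code && bash configure.sh install

# Stage 2: lean runtime image -- shared libraries only, no headers/compilers
FROM ubuntu:16.04
RUN apt-get update && apt-get install --no-install-recommends -y \
    python python-gdal gdal-bin libtbb2 grass-core \
    && rm -rf /var/lib/apt/lists/*
COPY --from=builder /code /code
ENTRYPOINT ["python", "/code/run.py"]
```

The `-dev` packages, compilers, and apt cache never reach the final image; only the compiled artifacts and the runtime shared-library packages do.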

Yes. This is necessary, otherwise the images won’t run on older machines.

Would be interesting to explore the option.

That’s a good question; I’m pretty sure some packages are not even needed (older modules that no longer exist, outdated dependencies, etc.), but sorting it out is a bit of a challenge because of the many modules that make up ODM.

If image size is the goal, I’d recommend looking into docker-slim. I haven’t tried it, but it seems like a promising project for reducing the overall image size.

Running docker-slim on the “all” test group should hit most of the code paths.
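
For reference, the invocation might look something like the following. This only composes and prints the command rather than running it; the image name and the `--cmd` value (pointing at a hypothetical test runner) are assumptions, so check docker-slim’s docs for the current flag set:

```shell
# Hypothetical docker-slim invocation: trace the container while the
# "all" test group runs, then emit a minimized image.
IMAGE="opendronemap/odm"
SLIM_CMD="docker-slim build --http-probe=false --cmd '/code/run_tests.sh all' $IMAGE"
echo "$SLIM_CMD"
```

docker-slim works by observing which files the container actually touches, so the more code paths the traced command exercises, the less likely the slimmed image is to be missing something.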

What system is being used to trigger the docker builds? I can see in other repos that you are using travis, but can’t see what is triggering this one.

I have started brute-forcing which packages are required for the production system, and am leaving the build stage with the entire list of packages for the moment.

I had seen docker-slim before; I’ll need to look into it again and into how it might be integrated into the build pipeline.

Since ODM builds time out on Docker Hub (they take too long), there’s currently a machine that checks for changes on the ODM repo using git hooks, rebuilds when new commits are made to master, and tags the Docker images with the appropriate tags.


Alright, I think I have come up with a multi-stage build docker file that should result in smaller images. Is there a set of integration tests that we can throw at it to ensure that it has compiled completely and is working as intended?
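
In the meantime, one cheap smoke check is to run the new image over a small sample dataset and then verify that the key products exist and are non-empty. The output file names below are assumptions based on a typical ODM results layout, so adjust them to the real one:

```shell
# Hedged sketch: verify expected ODM outputs after a test run.
# $1 = the project/results directory produced by the container.
check_outputs() {
  dir="$1"
  status=0
  for f in odm_orthophoto/odm_orthophoto.tif \
           odm_texturing/odm_textured_model.obj; do
    if [ ! -s "$dir/$f" ]; then
      echo "missing or empty: $f"
      status=1
    fi
  done
  return "$status"
}
```

Paired with a small test dataset mounted into the container, this gives a quick pass/fail signal that the multi-stage image compiled completely and runs end to end.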
