Going to Production

This page adapts the upstream libvips production checklist to ol.vips. The original reference is the upstream "Checklist for programmers using libvips".

Use this page as guidance for application design and deployment. It is not a hard rulebook, but these defaults are a good starting point for long-running services, upload pipelines, and image proxy workloads.

Prefer smart thumbnailing over load then resize

If you are starting from a filename and want a thumbnail, prefer ops/thumbnail over loading the full image and then calling ops/resize. ops/thumbnail combines loading and shrinking into one step, which lets libvips use format-specific tricks such as shrink-on-load and can significantly reduce memory use and latency.

This is not only about speed. Thumbnailing at load time can also improve quality, because libvips can premultiply automatically where needed and can render vector inputs at the target size instead of rasterizing them large and shrinking afterwards.

(require '[ol.vips :as v]
         '[ol.vips.operations :as ops])

(with-open [thumb (ops/thumbnail "input.jpg" 300 {:height 300})]
  (v/write-to-file thumb "thumb.jpg"))

If you already have an image handle in hand, there is still a lower-level ops/thumbnail-image wrapper. Treat it as an escape hatch. It wraps libvips thumbnail_image, so it does not get the load-time rendering and shrink-on-load advantages that ops/thumbnail gets from starting at the source.
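As a sketch of that escape hatch, assuming ops/thumbnail-image mirrors the shape of ops/thumbnail but takes an image handle instead of a filename:

```clojure
;; Sketch only: the exact ops/thumbnail-image signature is assumed to
;; mirror libvips thumbnail_image (image handle, target width, options).
(with-open [image (v/from-file "input.jpg")
            thumb (ops/thumbnail-image image 300 {:height 300})]
  (v/write-to-file thumb "thumb.jpg"))
```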

Use sequential access when your pipeline can stream

When you know you will read the source once in loader order, pass {:access :sequential} to your loader. This often reduces memory use and can improve throughput.

For background on why this matters, see the upstream libvips "How it opens files" chapter.

(with-open [image (v/from-file "input.jpg" {:access :sequential})]
  (v/metadata image))

The same applies to v/from-stream, v/from-buffer, and format-specific loader operations such as ops/jpegload and ops/pngload.
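For example, with a format-specific loader the same option might look like this (assuming ops/jpegload accepts the same options map as v/from-file):

```clojure
;; Assumption: ops/jpegload takes a filename and an options map,
;; like v/from-file does in the example above.
(with-open [image (ops/jpegload "input.jpg" {:access :sequential})]
  (v/metadata image))
```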

Prefer longer pipelines to many materialized steps

libvips is demand-driven and uses partial images as intermediates. In ol.vips, that means you can usually build one longer pipeline of operations without materializing every step in memory or on disk. The work is deferred until a sink asks for pixels, such as v/write-to-file, v/write-to-buffer, or v/write-to-stream.

This is one of the main reasons to lean into the libvips model instead of breaking processing into many disconnected phases. Long pipelines usually stay memory-efficient because intermediate results are represented as graph nodes rather than fully realized images.

libvips is also horizontally threaded: threads tend to run along the pipeline you are evaluating, rather than up and down the image. In practice, that means longer pipelines often parallelize better than shorter ones.

(with-open [image   (v/from-file "input.jpg")
            result  (-> image
                        (ops/resize 0.5)
                        (ops/colourspace :b-w)
                        (ops/sharpen))]
  (v/write-to-file result "output.jpg"))

If you can, aim for one coherent processing pipeline per output. That generally works better than repeatedly writing intermediates to disk, reloading them, and running many disconnected mini-pipelines.

Reuse shared intermediates intentionally

If one derived image is reused several times in the same request, bind it once and pass that handle to downstream operations instead of recalculating it.

(with-open [image   (v/from-file "input.jpg")
            base    (ops/resize image 0.5)
            preview (ops/colourspace base :srgb)
            mask    (ops/extract-band base 0)]
  ;; preview and mask both reuse the single resized base
  ...)

If you need to materialize a reused intermediate so libvips does not recalculate it, use v/copy-memory. This forces the current pipeline to render into a private in-memory image and returns another image handle you can fan out to several downstream operations or keep in an application cache for later reuse.

This can trade CPU time for higher memory use, so it is best reserved for intermediates you know are reused often enough to justify the retained pixels.
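As a sketch, materializing a shared intermediate once and fanning it out might look like this (file names and operation arguments are illustrative; the wrappers are assumed to return closeable handles, as in the earlier examples):

```clojure
(with-open [image   (v/from-file "input.jpg")
            resized (ops/resize image 0.5)
            base    (v/copy-memory resized)      ;; rendered once, held in memory
            sharp   (ops/sharpen base)
            gray    (ops/colourspace base :b-w)]
  ;; both outputs read pixels from the same materialized base
  (v/write-to-file sharp "sharp.jpg")
  (v/write-to-file gray "gray.jpg"))
```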

Put large resizes early in the pipeline

If a pipeline includes a large resize, do it near the start. After that, apply area operations such as sharpening, then point operations.

This reduces the amount of pixel data that later stages need to touch.
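For example, a pipeline ordered that way might look like this (the specific operation arguments are illustrative):

```clojure
(with-open [image  (v/from-file "input.jpg")
            result (-> image
                       (ops/resize 0.25)         ;; big shrink first
                       (ops/sharpen)             ;; then area operations
                       (ops/colourspace :srgb))] ;; then point operations
  (v/write-to-file result "output.jpg"))
```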

Restrict loaders and untrusted inputs

ol.vips starts from a secure default: libvips operations marked as untrusted are blocked during initialization. If you handle untrusted data, keep that default unless you have a specific trusted-input path that needs those loaders.

For direct control over that default, use v/set-block-untrusted-operations!. Passing true restores the secure posture, and passing false allows operations libvips has marked as untrusted.

libvips also has a lower-level class-hierarchy blocker, exposed as v/set-operation-block!. This lets you block broad families of operations and selectively re-enable the subset you trust. For example, to allow only JPEG loaders:

(require '[ol.vips :as v])

(v/set-block-untrusted-operations! true)
(v/set-operation-block! "VipsForeignLoad" true)
(v/set-operation-block! "VipsForeignLoadJpeg" false)

After those calls, libvips will only load JPEGs from the foreign-loader hierarchy. This is useful when you need tighter runtime control without building a custom libvips binary.

If you need tighter control over which loaders are present at all, prefer shipping a custom libvips build and loading it with -Dol.vips.native.preload instead of relying on a broader system installation.

See also: Security Policy.

Sanity-check images before expensive processing

Open the image, inspect cheap metadata first, and reject inputs that are too large or unsuitable for your main pipeline.

(with-open [image (v/from-file "upload.jpg" {:access :sequential})]
  (let [width      (v/width image)
        height     (v/height image)
        interlaced (v/field image "interlaced" false)]
    (when (> (* width height) 100000000) ;; ~100 megapixels
      (throw (ex-info "image too large" {:width width :height height})))
    (when interlaced
      (throw (ex-info "progressive images are not allowed" {})))))

This is especially useful for defending against decompression bombs and for keeping progressive or interlaced inputs away from latency-sensitive paths. In practice that usually means checking v/width, v/height, and v/field before you start expensive work.

Tune the Linux allocator for long-running glibc services

On glibc-based Linux systems, long-running multithreaded image workloads can benefit from an alternative allocator such as jemalloc. glibc's default malloc often fragments in long-running, multithreaded processes with frequent small allocations, and switching allocators can reduce the off-heap footprint of the JVM when using libvips.

This is an operating environment concern rather than an ol.vips API setting. On Linux that usually means configuring LD_PRELOAD before launching the JVM, not changing anything inside your image pipeline.
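As a sketch, a launch script on a Debian-style system might look like this. The jemalloc shared-object path varies by distribution and is an assumption here, as is the application jar name:

```shell
# Preload jemalloc before the JVM starts; the .so path is distro-specific.
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
exec java -jar app.jar
```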

The jemalloc project is also in a somewhat unsettled state. See this postmortem for background, and Facebook’s fork for one possible continuation point.

Musl-based Linux systems and non-Linux runtimes are generally less affected by this specific issue.

Disable or tune the libvips operation cache for proxy workloads

For image proxy workloads that process many unrelated images, the libvips operation cache is often not useful. Disable it entirely with v/disable-operation-cache!:

(require '[ol.vips :as v])

(v/init!)
(v/disable-operation-cache!)

The cache is more useful when the same or very similar operation graphs are reused repeatedly in one process, for example a service that applies a small set of common transforms over and over. In that kind of workload, the defaults may be fine and you may not want to touch them.

When you do need to tune it, think about the three limits separately:

  • v/set-operation-cache-max! limits how many recent operations libvips keeps. Lower this when cache churn is high and you are not seeing reuse. Raise it only if you have evidence that repeated operation graphs are being evicted too aggressively.

  • v/set-operation-cache-max-mem! limits how much libvips-tracked memory can accumulate before cached operations start getting dropped. Lower this when you want cache eviction to happen earlier under memory pressure.

  • v/set-operation-cache-max-files! limits how many libvips-tracked file descriptors can accumulate before cached operations start getting dropped. Lower this when file handle pressure matters more than cache hit rate.

Tune the cache limits directly like this:

(v/set-operation-cache-max! 0)
(v/set-operation-cache-max-mem! (* 32 1024 1024))
(v/set-operation-cache-max-files! 32)

(v/operation-cache-settings)
;; => {:max 0, :size 0, :max-mem 33554432, :max-files 32}

(v/tracked-resources)
;; => {:mem 0, :mem-highwater 0, :allocs 0, :files 0}

v/operation-cache-settings reports the current libvips cache limits and the current cache size. v/tracked-resources reports libvips tracked memory, highwater memory, allocation count, and tracked file count.

Those tracked counters are useful for observing trends, but they only include resources libvips tracks itself. Memory or file descriptors used inside external libraries may not be reflected there, so treat them as a lower bound rather than a complete process-wide accounting.

Ship only the native bundle you need

This is an ol.vips-specific deployment concern. The main library is small, but the companion native jars contain the bundled libvips binaries and their support libraries for a specific platform. Those jars are usually the largest part of an ol.vips deployment artifact.

If you know the production target in advance, prefer packaging only the one native companion jar that matches that runtime platform. For example, a Linux x86-64 glibc deployment usually only needs com.outskirtslabs/vips-native-linux-x86-64-gnu on the classpath.
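With tools.deps, one way to keep the native jar out of the base dependency set is a per-platform alias. This is an illustrative sketch only: the alias name, the ol.vips coordinate, and the version placeholders are assumptions; only the native coordinate above comes from this page.

```clojure
;; deps.edn sketch (coordinates and versions are placeholders)
{:deps {com.outskirtslabs/ol.vips {:mvn/version "<version>"}}
 :aliases
 {:linux-x86-64 ;; enable only when building the Linux x86-64 glibc artifact
  {:extra-deps
   {com.outskirtslabs/vips-native-linux-x86-64-gnu {:mvn/version "<version>"}}}}}
```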

Including several platform jars is useful for development, shared tooling, or distributing a generic application bundle. But for a single-platform deploy it usually just makes the image, container layer, or classpath larger without adding runtime value, since ol.vips will only load the bundle that matches the detected target platform.

If possible, build separate deployment artifacts per target platform and keep each one trimmed to the matching native dependency set.