
Adlik: Toolkit for Accelerating Deep Learning Inference


Adlik is an LF AI Foundation incubation project licensed under the Apache 2.0 license. It offers an end-to-end optimizing framework for deep learning models, with the goal of accelerating the deep learning inference process in both cloud and embedded environments.

Adlik consists of two sub-projects: a model compiler and a serving platform. The model compiler supports several optimization techniques, such as pruning, quantization, and structural compression, to optimize models developed in major frameworks like TensorFlow, Keras, and Caffe so that they run with lower latency and higher computing efficiency. The serving platform provides deep learning models with an optimized runtime matched to the deployment environment, such as CPU, GPU, or FPGA. Starting from a trained deep learning model, users of Adlik can optimize it with the model compiler and then deploy it to a target platform with the serving platform.
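To give a feel for one of the optimization techniques mentioned above, here is a minimal, framework-free sketch of symmetric post-training int8 quantization: mapping float weights onto 8-bit integers plus a single scale factor so that inference can run in cheaper integer arithmetic. This illustrates the general idea only; it is not Adlik's actual model compiler implementation, and the function names are our own.

```python
# Sketch of symmetric affine int8 quantization (illustrative only --
# not Adlik's implementation).

def quantize(weights, num_bits=8):
    """Map float weights onto signed num_bits integers with one scale."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / qmax  # symmetric: zero-point is fixed at 0
    q = [max(qmin, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.9, -0.07]
q, scale = quantize(weights)
approx = dequantize(q, scale)
```

The integer codes `q` fit in one byte each instead of four, and the reconstruction error of each weight is bounded by about half the scale, which is why such compression typically costs little accuracy.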

With Adlik, different deep learning models can be deployed to different platforms with high performance and great flexibility.


Please visit us on GitHub, where our development happens, and be part of our community.

We invite you to join our community and contribute to the project, both as a user and as a contributor to its development. We look forward to your contributions!


Install Git and Bazel, then run the following commands:

$ git clone

$ cd Adlik

$ bazel build //adlik_serving --config=tensorflow-cpu --incompatible_no_support_tools_in_action_inputs=false

Join the Conversation

Adlik maintains three mailing lists. You are invited to join the one that best matches your interests.

Adlik-Announce (top-level milestone messages and announcements)

Adlik-TSC (top-level technical governance discussions and decisions)

Adlik-Technical-Discuss (technical discussions and questions)