If the set of builtin ops is deemed too large, a new OpResolver could be code-generated from a given subset of ops, possibly only the ones contained in a given model. This is the equivalent of TensorFlow's selective registration (and a simple version of it is available in the tools directory).

If you want to define your custom operators in Java, you would currently need to build your own custom JNI layer and compile your own AAR from that JNI code. Similarly, if you wish to make these operators available in Python, you can place your registrations in the Python wrapper code.

Note that a process similar to the one above can be followed to support a set of operations instead of a single operator: just add as many AddCustom calls as you need. In addition, BuiltinOpResolver also allows you to override the implementations of builtins by using AddBuiltin.

$ cd ${HOME} && mkdir tflitecustom && cd tflitecustom
$ git clone -b v2.4.1
$ git clone -b 0.8.2

Navigate to tensorflow/lite/kernels from the TensorFlow root directory and perform the following steps:

  1. Copy the custom operation files into the path above.
  2. Make the changes described below in the relevant files.
$ cd mediapipe/mediapipe/util/tflite/operations && \
  cp tensorflow/tensorflow/lite/kernels && \
  cp max_pool_argmax.h tensorflow/tensorflow/lite/kernels && \
  cp tensorflow/tensorflow/lite/kernels && \
  cp max_unpooling.h tensorflow/tensorflow/lite/kernels && \
  cp tensorflow/tensorflow/lite/kernels && \
  cp transpose_conv_bias.h tensorflow/tensorflow/lite/kernels
$ sudo CI_DOCKER_EXTRA_PARAMS="-e CI_BUILD_PYTHON=python3.6 -e CROSSTOOL_PYTHON_INCLUDE_PATH=/usr/include/python3.6" \
  tensorflow/tools/ci_build/ CPU-PY36 \
  tensorflow/lite/tools/pip_package/ native