Mixed precision in TFLite quantization: int16 not supported for input

I am trying mixed-precision TFLite quantization: int16 input, int8 matrix weights, int32 bias weights, and int8 output.
Unfortunately, I am unable to make the input precision int16, even though the matrix weights, bias weights, and output convert as expected.
Is there any option or conversion flag to make the input int16?
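For reference, here is the closest configuration I have found: TFLite's 16x8 quantization mode (`EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8`), which uses int16 activations with int8 weights. Note that in this mode the bias is quantized to int64 rather than int32, as far as I understand, and int16 input/output types may only be accepted by newer TensorFlow versions. A minimal sketch with a toy Dense model (the model and representative dataset below are placeholders):

```python
import numpy as np
import tensorflow as tf

# Toy model just to demonstrate the conversion flags.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, input_shape=(8,)),
])

def representative_data():
    # Calibration data; replace with real samples from your dataset.
    for _ in range(10):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# 16x8 mode: int16 activations, int8 weights (bias becomes int64 here).
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
]
# Requesting int16 I/O; may raise on older TF versions where only
# float32 I/O is supported for 16x8 models.
converter.inference_input_type = tf.int16
converter.inference_output_type = tf.int16
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
print(interpreter.get_input_details()[0]["dtype"])
```

This still does not give the exact int16/int8/int32/int8 combination I am after, since the bias ends up int64, so I would like to know whether finer-grained control is possible.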