When using TensorFlow Lite on Android, why must the input be created with `ByteBuffer.allocateDirect`, and why must `ByteBuffer.order(ByteOrder.nativeOrder())` be called on it?

I am working with the TensorFlow Lite Android demo and want to run my own model in it. My model expects float input, but `FloatBuffer` has no `allocateDirect` method, and its `order()` is only a getter, so I cannot set the byte order. As a result, the model's recognition output is empty. What do I need to do?

This is how I convert an OpenCV `Mat` to a `FloatBuffer`:

private fun convertMatToFloatBuffer(mat: Mat): FloatBuffer {

    // FloatBuffer has no allocateDirect, so this is a heap (non-direct) buffer
    val floatBuffer = FloatBuffer.allocate(BATCH_SIZE * inputSize * inputSize * PIXEL_SIZE)

    val channels: Int = mat.channels()
    val width: Int = mat.cols()
    val height: Int = mat.rows()
    val data = ByteArray(channels)

    for (row in 0 until height) {
        for (col in 0 until width) {

            // Mat.get takes (row, col)
            mat.get(row, col, data)

            // Mask to 0..255 so negative Byte values convert to the right float
            val b = (data[0].toInt() and 0xff).toFloat()
            val g = (data[1].toInt() and 0xff).toFloat()
            val r = (data[2].toInt() and 0xff).toFloat()
            floatBuffer.put(b)
            floatBuffer.put(g)
            floatBuffer.put(r)

        }
    }

    floatBuffer.rewind()
    return floatBuffer
}
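For comparison, here is a minimal sketch of the buffer the interpreter can actually read: a direct `ByteBuffer` in native byte order, filled with `putFloat` (you could equally write through an `asFloatBuffer()` view). The constants `BATCH_SIZE`, `INPUT_SIZE`, and `PIXEL_SIZE` are assumptions standing in for your model's real input shape.

```kotlin
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Assumed input shape; replace with your model's actual dimensions.
const val BATCH_SIZE = 1
const val INPUT_SIZE = 224  // input width/height
const val PIXEL_SIZE = 3    // RGB channels

// Allocate a direct, native-ordered ByteBuffer sized for float input
// (4 bytes per float).
fun createInputBuffer(): ByteBuffer {
    val buffer = ByteBuffer.allocateDirect(
        4 * BATCH_SIZE * INPUT_SIZE * INPUT_SIZE * PIXEL_SIZE
    )
    buffer.order(ByteOrder.nativeOrder())
    return buffer
}

fun fillPixel(buffer: ByteBuffer, b: Int, g: Int, r: Int) {
    // putFloat writes 4 bytes and advances the position.
    buffer.putFloat(b.toFloat())
    buffer.putFloat(g.toFloat())
    buffer.putFloat(r.toFloat())
}
```

A direct buffer's memory lives outside the JVM heap, so the interpreter's native code can read it in place; a heap `FloatBuffer` from `FloatBuffer.allocate` gives the native side nothing it can access, which would explain the empty output. Pass the `ByteBuffer` itself (after `rewind()`) to `Interpreter.run(input, output)`, not a `FloatBuffer`.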