How to Classify Sound for a Short Duration on iOS

I am an iOS developer using TensorFlow Lite, and I am working on sound classification with the YAMNet model.

I have modified the iOS demo code for this purpose.

I have integrated the following two libraries:

  1. pod 'TensorFlowLiteSwift', '~> 2.14.0'
  2. pod 'TensorFlowLiteSelectTfOps', '~> 2.14.0'
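
In the Podfile this looks like the following (the target name `MyApp` is a placeholder for the actual app target):

```ruby
target 'MyApp' do
  use_frameworks!

  # TensorFlow Lite Swift API plus the Select TF Ops pod, as listed above.
  pod 'TensorFlowLiteSwift', '~> 2.14.0'
  pod 'TensorFlowLiteSelectTfOps', '~> 2.14.0'
end
```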

On the Android client, I can pass an array of audio samples to the recognition model. TensorFlow allows me to adjust the sample rate and to use less than one second of audio: for example, I can pass 1,920 samples, which is about 0.12 seconds of data at YAMNet's 16 kHz rate, and then examine the recognition results. The results include both scores and label names.

On the iOS client, TensorFlow Lite only accepts a fixed input size of 15,600 samples, and I must pass all 15,600 points at once, which is roughly one second of audio at 16 kHz. The results contain only an array of scores for all classes, without label names.
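
For context, this is roughly what the inference call looks like with the Swift interpreter. It is a minimal sketch, assuming the published YAMNet Lite model (a float32[15600] waveform input and a 521-element score output); the bundle resource names `yamnet` and `yamnet_label_list` are placeholders. The output tensor carries only scores, so zipping them against a bundled label list is one way to recover names:

```swift
import Foundation
import TensorFlowLite

// Minimal sketch: run YAMNet on one fixed-size window of audio.
// Assumes the published YAMNet Lite model: float32[15600] in, float32[521] out.
func classify(samples: [Float32]) throws -> [(label: String, score: Float32)] {
    let modelPath = Bundle.main.path(forResource: "yamnet", ofType: "tflite")!
    let interpreter = try Interpreter(modelPath: modelPath)
    try interpreter.allocateTensors()

    // The input tensor is fixed at 15,600 samples (~1 s at 16 kHz),
    // so samples.count must be exactly 15,600 here.
    let inputData = samples.withUnsafeBufferPointer { Data(buffer: $0) }
    try interpreter.copy(inputData, toInputAt: 0)
    try interpreter.invoke()

    // The output is a bare score array; label names are not in the tensor.
    let scores = try interpreter.output(at: 0).data.withUnsafeBytes {
        Array($0.bindMemory(to: Float32.self))
    }

    // Recover names by pairing scores with a bundled label list
    // (e.g. the yamnet_label_list.txt shipped in the model's metadata).
    let labelPath = Bundle.main.path(forResource: "yamnet_label_list", ofType: "txt")!
    let labels = try String(contentsOfFile: labelPath, encoding: .utf8)
        .split(separator: "\n")
        .map(String.init)
    return Array(zip(labels, scores))
}
```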

I want to know whether the TensorFlow Lite interface differs between Android and iOS. Is it possible to input less than one second of audio on iOS?
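
For the shorter-input question, here is a sketch of one experiment. The Swift `Interpreter` API exposes `resizeInput(at:to:)`, so it may be possible to request a smaller waveform before allocating tensors. Whether YAMNet accepts this depends on whether the model was converted with a resizable input; the 1,920-sample shape below just mirrors the Android case:

```swift
import Foundation
import TensorFlowLite

// Hypothetical experiment: ask for 1,920 samples (~0.12 s at 16 kHz) instead
// of the fixed 15,600. This succeeds only if the converted model supports a
// dynamic input shape; otherwise resizeInput/allocateTensors throws.
let modelPath = Bundle.main.path(forResource: "yamnet", ofType: "tflite")!
do {
    let interpreter = try Interpreter(modelPath: modelPath)
    try interpreter.resizeInput(at: 0, to: Tensor.Shape([1920]))
    try interpreter.allocateTensors()
    print("Model accepted a 1,920-sample input")
} catch {
    print("Model rejected the smaller input: \(error)")
}
```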

Here are the links I’ve gathered:

  1. Audio Classification Overview
  2. Integration Guide
  3. Complete List of Classification Types
  4. Android Demo (Yamnet)
  5. iOS Demo