`!pip install tflite-model-maker` fails or hangs forever
I am using the TensorFlow Lite Model Maker image classification notebook to test model building for edge devices, but the notebook fails while installing tflite-model-maker.
Either the notebook keeps downloading files forever or it crashes. If anyone has a solution, kindly help me.
Thanks in advance.
Welcome to the TensorFlow Forum!
Thank you for reporting the issue.
Could you please try a temporary solution, as suggested here?
According to Colab Updated to Python 3.10:
Colab’s fallback runtime version: Using the fallback runtime version temporarily allows access to the Python 3.9 runtime, and will be available until mid-May. This is available from the Command Palette via the Use fallback runtime version command when connected to a runtime. Of note, this setting does not persist across sessions — the command will need to be invoked on each new session.
As a temporary workaround, you can use the Colab fallba…
@chunduriv when I select
Use fallback runtime version I get this message:
The fallback runtime version is unavailable at this time.
And the Python version stays at 3.10.11. It looks like the fallback runtime is no longer available. Is there another workaround?
I can confirm this issue; the fallback runtime is no longer available in Google Colab.
I even tried subscribing to the paid version, thinking it might be a Pro feature, but no.
There was a suggestion to train with MediaPipe. It looks interesting, but it doesn't have all the features and models that are available in TFLiteModelMaker.
I suggested the way to use tflite-model-maker in colab here
Guys, I am happy to inform you that you can use a virtual environment for installing tflite-model-maker.
I didn't find a way to use it from Colab notebook cells, but you can use bash commands for installation and for training models (put your code in a .py file and run `python code.py`). Anyway, it's better than nothing.
# create a Python 3.8 virtual environment (either command works)
python3.8 -m virtualenv venv
virtualenv --python=/usr/bin/python3.8 liteenv
# activate the environment so the installs below land inside it
source liteenv/bin/activate
pip install --upgrade pip==20.1.1
pip install tensorflow==2.8.0
git clone https://github.com/te…
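As a sketch of how the environment above might then be used from a Colab cell (the script name `train.py` is a placeholder for your own training code, and this assumes the `liteenv` environment was created as shown):

```shell
%%bash
# activate the Python 3.8 environment created above, then install and train there
source liteenv/bin/activate
pip install tflite-model-maker   # installs into the venv, not Colab's own Python
python train.py                  # put your Model Maker training code in train.py
```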
It's still not working. When trying to install tflite-model-maker I get:
ERROR: Could not find a version that satisfies the requirement tflite-support>=0.4.2 (from tflite-model-maker) (from versions: 0.1.0a0.dev3, 0.1.0a0.dev4, 0.1.0a0.dev5, 0.1.0a0, 0.1.0a1)
ERROR: No matching distribution found for tflite-support>=0.4.2
I'm using a Mac M2.
I've tried all the solutions published so far, and none of them works.
I tried to install tflite-model-maker to build a custom model but cannot install the package. Same issues as above; I tried different Python and pip versions, but it does not help.
Also, is there a way to verify that a YOLOv5 model converted to TFLite works as expected?
Welcome to the TensorFlow Forum!
Please try the workaround recommended in the thread below.
The issue still exists. You can try the workaround suggested in 60431#issuecomment by @tomkuzma:
Even though you can create a conda environment in Colab, it will still always use the Colab runtime's Python version. But you can force it to use the environment with a bash script, activating it every time you need it. Unfortunately this means that you can't use cells for Python code; instead you can just have it run a python file with all th…
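A minimal sketch of that bash-script approach; the conda install path, the environment name `py39`, and the script name `train.py` are all assumptions for illustration and depend on how conda was set up in your runtime:

```shell
%%bash
# run the whole Colab cell as one bash script
source /usr/local/etc/profile.d/conda.sh   # adjust to wherever conda was installed
conda activate py39                        # hypothetical env with an older Python
python train.py                            # your training code lives in a .py file
```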
You can verify the converted model using the TensorFlow Lite Interpreter as shown below:
import numpy as np
import tensorflow as tf

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

# Get input and output tensor details.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Test the model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)

# Run inference.
interpreter.invoke()

# The function `get_tensor()` returns a copy of the tensor data.
# Use `tensor()` in order to get a pointer to the tensor.
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
If people are still struggling with the tflite-model-maker pip install failing, I've created a Git repo that walks through the current workaround people are using.
It just lays out the process for the Android Figurine example, but you can adapt it to your own data and training preferences.
I've posted a thread for this process in General Discussion as well.