I don't see the code for actually exporting the model there?
It seems that after you've got model_final you need to do something like:
import tensorflow as tf

# Convert the model to the TensorFlow Lite format without quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model_final)
tflite_model = converter.convert()

# Save the model to disk
with open("model_final.tflite", "wb") as f:
    f.write(tflite_model)

# Convert the model to the TensorFlow Lite format with quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model_final)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # OPTIMIZE_FOR_SIZE is a deprecated alias for DEFAULT
tflite_model = converter.convert()

# Save the quantized model to disk
with open("model_final_quantized.tflite", "wb") as f:
    f.write(tflite_model)

# Emit the model as a base64 string wrapped in JavaScript for Espruino.
# b64encode returns bytes in Python 3, so decode before concatenating.
import base64
print('var model=atob("' + base64.b64encode(tflite_model).decode("ascii") + '");')
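As a quick sanity check, the base64 embedding can be verified in plain Python without TensorFlow — a minimal sketch using dummy bytes in place of the real `tflite_model` (Espruino's `atob()` is the exact inverse of `b64encode`):

```python
import base64

# Dummy stand-in for the real tflite_model bytes
tflite_model = bytes(range(8))

# Encode as above, then decode again to confirm the round-trip is lossless
encoded = base64.b64encode(tflite_model).decode("ascii")
assert base64.b64decode(encoded) == tflite_model

# This is the line you'd paste into Espruino; atob() reverses the encoding there
print('var model=atob("' + encoded + '");')
```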