• I don't see the code for actually exporting the model there?

    It seems that after you've got model_final you need to do something like:

    # Convert the model to the TensorFlow Lite format without quantization
    converter = tf.lite.TFLiteConverter.from_keras_model(model_final)
    tflite_model = converter.convert()
    
    # Save the model to disk
    open("model_final.tflite", "wb").write(tflite_model)
    
    # Convert the model to the TensorFlow Lite format with quantization
    converter = tf.lite.TFLiteConverter.from_keras_model(model_final)
    converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
    tflite_model = converter.convert()
    
    # Save the model to disk
    open("model_final_quantized.tflite", "wb").write(tflite_model)
    
    # Base64-encode the quantized model so it can be pasted into JavaScript
    import base64
    print("var model=atob(\"" + base64.b64encode(tflite_model).decode("ascii") + "\");")
    