Gesture Recognition?

  • I tried to create my own gestures. Somehow I ended up at the nodeconfeu-gesture-models repository on GitHub.
    There is a file create_gesture_model.ipynb which I got running on Colab.
    The first problem was a circular import around AccelerationReader.
    I got rid of that by changing directory to nodeconfeu-gesture-models before importing the reader, then changing back.
    The next problem was in converting to tflite: the ops SPACE_TO_BATCH_ND and BATCH_TO_SPACE_ND were unknown.
    I added these ops to builtin_operator_version_support in
    At least I got it working this way; training etc. ran fine.
    Next I created .tfnames and .tflite and added a handler for aiGesture.
    Now I get the error: Didn't find op for builtin opcode 'SPACE_TO_BATCH_ND'.
    Any idea what to do, or how to do this better?
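
    In case it helps anyone, the directory-change workaround can be sketched like this (a Python sketch; the repo name is the one from my run, and the module name in the comment is just an assumption):

    ```python
    import os
    import sys

    def import_with_chdir(repo_dir, module_name):
        """Work around a module that assumes it is imported from inside
        its own repository: temporarily chdir into the repo (and put it
        on sys.path), import, then restore the old working directory."""
        prev = os.getcwd()
        os.chdir(repo_dir)
        sys.path.insert(0, repo_dir)
        try:
            return __import__(module_name)
        finally:
            os.chdir(prev)

    # In the notebook this would look something like (module name assumed):
    # reader = import_with_chdir("nodeconfeu-gesture-models", "acceleration_reader")
    ```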

  • Argh, sorry - that's a pain. It looks like, while neither the Colab nor TensorFlow on the Bangle has changed, something in the code the Colab uses must have changed.

    I don't ever recall seeing SPACE_TO_BATCH_ND before, so presumably this is a new op that got put into the exported file which we don't support yet.

    I'd be really interested to see if you have any luck with the Edge Impulse tutorial.

    I feel like it's probably a good deal easier to use than the Colab - I should probably point to that as an example of use rather than NearForm's old one.

  • Looks to me like TensorFlow itself changed.
    There is a new tflite version, and in its sources I found SPACE_TO_BATCH and BATCH_TO_SPACE.
    I will give Edge Impulse a try, but this will take some time.

  • I tried the Edge Impulse route and failed.
    Could this be a newer version?
    The wording and buttons don't match exactly.
    I made it as far as the generated features.
    The next step should be training the neural network, but there is no NN Classifier block.
    Should I select another target? (I tried all the Nordic ones, no success.)

  • I played around; it looks like I had to

    • select the device as Nordic nRF52840 DK at the beginning
    • ignore the NN stuff
    • train without changes
      Now I've got a file:
      TensorFlow Lite (int8 quantized), 22 KB
      22 KB is a big file for a clock.

    Anyway, I copied this to .tfmodel and created .tfnames,
    reset the Bangle.js 2 and checked.
    I have 2 gestures (leftright and updown); leftright is recognized perfectly.
    Updown very often gives leftright instead.
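
    For reference, preparing the two storage files can be scripted (a sketch in Python; as far as I understand it, .tfnames is a comma-separated label list in the model's output order - treat that format, and the file names here, as assumptions to check against the Bangle.js docs):

    ```python
    def write_bangle_gesture_files(tflite_path, gesture_names,
                                   model_out="out.tfmodel",
                                   names_out="out.tfnames"):
        """Copy the raw .tflite bytes unchanged to the model file and
        write the gesture labels, comma-separated, to the names file.
        The label order must match the model's output tensor."""
        with open(tflite_path, "rb") as src, open(model_out, "wb") as dst:
            dst.write(src.read())
        with open(names_out, "w") as f:
            f.write(",".join(gesture_names))

    # e.g. write_bangle_gesture_files("model.tflite", ["leftright", "updown"])
    ```

    The resulting files then still need to be uploaded to the watch's storage.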

    In the end I got something running, but I'm not really satisfied, because

    • the Edge Impulse steps are complicated, and there is no way to automate them
    • the resulting file is very large, at least in my eyes
    • it gives wrong results very often
  • Is the problem that it takes a long time to use the 22kB file?

  • @Ganblejs
    AFAIK, the data is loaded into memory, which takes away a lot of vars. Correct me if I'm wrong.
    And I would like to have some more complex gestures.
    For example: lift the arm and draw a circle to the right, something like a tourist guide lifting an umbrella.
    My expectation is that complex gestures will need a much larger file than the simple updown/leftright in my example.
    Last but not least, comparing sizes: the nodeconfeu file is only about 30% of the Edge Impulse one. Even compared with a JS build, which has 14 kB for the Keras model, it's a lot.
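
    To put rough numbers on the memory point (every constant here is an assumption - bytes per variable block and the total var count depend on the firmware build, so check process.memory() on the watch):

    ```python
    # Back-of-the-envelope estimate of how many Espruino variable blocks
    # a model held in RAM as a flat string would occupy. BOTH constants
    # are assumptions, not authoritative figures for any specific build.
    ASSUMED_BYTES_PER_VAR = 13     # storage per variable block (build-dependent)
    ASSUMED_TOTAL_VARS = 12000     # total vars on a Bangle.js 2 (build-dependent)

    def vars_used(model_bytes, bytes_per_var=ASSUMED_BYTES_PER_VAR):
        # Flat strings store data contiguously, so usage is roughly
        # size / bytes-per-block, rounded up.
        return -(-model_bytes // bytes_per_var)

    used = vars_used(22 * 1024)              # the 22 KB Edge Impulse model
    share = 100.0 * used / ASSUMED_TOTAL_VARS
    print(used, round(share, 1))             # ~1733 vars, ~14.4 %
    ```

    Under these assumptions the 22 KB model would tie up around a seventh of all variables, which is why the file size matters so much on a watch.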

  • It's hard to know what to suggest - I guess the size of the model can maybe be trimmed down by tweaking Edge Impulse - I think under the hood it's running exactly the same tools as the original Colab file.

    Espruino will attempt to be smart with the TensorFlow model - if you save it to the .tfmodel file then it will do the TensorFlow calculations in the idle loop, so the maximum amount of free memory is available.

    I'm afraid that, what with Espruino and Bangle.js, I don't really have time to keep up to date with all the latest happenings in TensorFlow. I could attempt to pull in a new TensorFlow implementation at some point though.


Posted by @JumJum