I am working on a footpod for running, using the Puck.js, which will connect to the Bangle.js 2 and provide speed for one of the data fields of the running app. To see if this will work, I put together a webpage which connects to the Puck and displays some data [1].
Once this whole thing is working, it should be possible to implement this on the Bangle.js 2, similarly to the bthrm app. What the code on the website does is the following:
Send code to the Puck.js telling it to stream data from the accelerometer/gyroscope/magnetometer (a minimal sketch of this appears after this list).
Feed the data into an algorithm devised by Madgwick [2] in order to track orientation. This is visualized with three.js on the page, using code written by @bre (see the small three.js sketch below).
Integrate the acceleration twice to obtain position. The code keeps track of a position, but also analyzes each step. Steps are separated by 20 datapoints of 'stillness' (when the foot is on the ground). Empirically, I have found that each step accumulates error. However, since each step ends with 20 'still' datapoints, I subtract a linear term from the step so that the final 20 points look like they are not moving (sketched below). So far, this looks like it gives a good approximation of each step. Try it yourself at home! :D
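For reference, the streaming side can look roughly like this. It is a minimal sketch of code uploaded to a Puck.js v2 (so Puck.accelOn/Puck.magOn are available); the sample rates and the JSON framing are my own choices, not necessarily what [1] does:

```js
// Runs on the Puck.js (v2): stream raw accel/gyro/mag readings over BLE UART.
var lastMag = { x: 0, y: 0, z: 0 };

Puck.magOn(10);                    // magnetometer at 10 Hz
Puck.on('mag', function (m) { lastMag = m; });

Puck.accelOn(26);                  // accelerometer + gyroscope at 26 Hz
Puck.on('accel', function (d) {
  // One line of JSON per sample: acc + gyro + the most recent mag reading.
  Bluetooth.println(JSON.stringify({ a: d.acc, g: d.gyro, m: lastMag }));
});
```

The webpage then only has to parse one JSON object per received line.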
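The handoff from the filter to the visualization is simple. Something like the following (my own sketch, not @bre's code; `q` is whatever quaternion the Madgwick filter returns, here assumed to be {w, x, y, z}):

```js
// Browser side: apply the filter's orientation quaternion to a three.js object.
import * as THREE from 'three';

const scene = new THREE.Scene();           // camera/renderer setup omitted
const mesh = new THREE.Mesh(
  new THREE.BoxGeometry(1, 2, 0.2),
  new THREE.MeshNormalMaterial()
);
scene.add(mesh);

// Called whenever the filter produces a new orientation quaternion.
function onFilterUpdate(q) {
  mesh.quaternion.set(q.x, q.y, q.z, q.w); // three.js order is (x, y, z, w)
}
```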
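The per-step drift correction can be sketched roughly as follows (one axis shown, apply per axis; this is my interpretation of the idea described above, not the exact code in [1]):

```js
// Remove linear drift from one step's position trace. `pos` is an array of
// positions sampled every `dt` seconds, and the final STILL samples are the
// ones where the foot is assumed to be on the ground.
const STILL = 20;

function removeLinearDrift(pos, dt) {
  const n = pos.length;
  // The 'still' window should not move, so any velocity measured over it
  // is treated as accumulated drift.
  const drift = (pos[n - 1] - pos[n - STILL]) / ((STILL - 1) * dt);
  // Subtract a term that grows linearly with time over the whole step.
  return pos.map(function (p, i) { return p - drift * i * dt; });
}
```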
There is the consideration of calibration: calibrating the gyroscope is easy, just read 100 values and subtract the mean. The error introduced is not enormous anyway.
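A minimal sketch of that bias estimation, applied to the gyro readings as they arrive (the 100-sample window is taken from the description above):

```js
// Average 100 gyro readings while the Puck lies still, then subtract that
// bias from every later sample before it reaches the orientation filter.
var sum = { x: 0, y: 0, z: 0 }, count = 0, bias = null;

function onGyro(g) {                 // g = {x, y, z}, raw gyro reading
  if (bias === null) {
    sum.x += g.x; sum.y += g.y; sum.z += g.z;
    if (++count === 100) bias = { x: sum.x / 100, y: sum.y / 100, z: sum.z / 100 };
    return null;                     // still calibrating
  }
  return { x: g.x - bias.x, y: g.y - bias.y, z: g.z - bias.z };
}
```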
I haven't calibrated the accelerometer; the readings vary slightly from 9.8 m/s^2 depending on which way the Puck is facing, but the error is not huge. I might implement something for this later, if it is deemed important.
Calibrating the magnetometer is trickier. The error is huge; apparently there are "soft iron" and "hard iron" effects, which fortunately can be detected and computed from a good enough set of data. Pressing a button on [1] calibrates it using some linear algebra and least squares. I am still not sure if this is the correct way of doing it; there may be an extra matrix factor needed to orient the values.
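For comparison, here is a much cruder calibration than the least-squares fit in [1]: estimate the hard-iron offset from per-axis min/max while rotating the Puck through all orientations, and approximate the soft-iron effect with a per-axis scale. This is only an illustrative sketch, not the method the page actually uses:

```js
// samples: [{x, y, z}, ...] raw magnetometer readings covering many orientations.
function magCalibration(samples) {
  const min = { x: Infinity, y: Infinity, z: Infinity };
  const max = { x: -Infinity, y: -Infinity, z: -Infinity };
  for (const s of samples) {
    for (const k of ['x', 'y', 'z']) {
      if (s[k] < min[k]) min[k] = s[k];
      if (s[k] > max[k]) max[k] = s[k];
    }
  }
  const offset = {}, radius = {};
  for (const k of ['x', 'y', 'z']) {
    offset[k] = (max[k] + min[k]) / 2;        // hard-iron bias per axis
    radius[k] = (max[k] - min[k]) / 2;        // per-axis "radius"
  }
  const avg = (radius.x + radius.y + radius.z) / 3;
  return function apply(m) {                  // correct one raw reading
    return {
      x: (m.x - offset.x) * (avg / radius.x), // scale ≈ crude soft-iron fix
      y: (m.y - offset.y) * (avg / radius.y),
      z: (m.z - offset.z) * (avg / radius.z)
    };
  };
}
```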
The code chooses between the MARG and IMU algorithms from Madgwick's paper [2], depending on whether the magnetometer values are sensible. Sensible means that the ratio of the norm of the calibrated magnetometer data to the expected norm (the radius) is within a neighborhood of 1.
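The check itself is small; a sketch (the tolerance of 0.2 is an arbitrary choice of mine, and `expectedNorm` is the fitted radius from the calibration):

```js
// Use the MARG update only when the calibrated magnetometer reading has a
// believable magnitude; otherwise fall back to the IMU-only update.
const TOL = 0.2;

function magIsSane(m, expectedNorm) {
  const norm = Math.sqrt(m.x * m.x + m.y * m.y + m.z * m.z);
  return Math.abs(norm / expectedNorm - 1) < TOL;
}

// Inside the filter loop:
//   magIsSane(calibratedMag, radius) ? updateMARG(...) : updateIMU(...);
```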
If it is not calibrated, it chooses IMU, which only uses the accelerometer and gyroscope. From the graphics, two things are clear: the algorithm works as advertised in that the z-axis (the one that points upwards) stays stable, but errors in the rotation around the z-axis accumulate. Try placing the Puck on a table, rotating it quickly in one direction and then slowly in the other. Doing this a few times, you can see the errors accumulate.
If it is calibrated, it chooses MARG, which should keep both the up direction and, let's say, north stable. However, something is clearly wrong, because in the graphics you can see the guy having problems.
Any comments or questions on any of the above topics are very welcome. I apologize if the code in [1] is a bit of a mess; this is a work in progress.
Finally, let me go out on a limb. I don't know how gestures and TensorFlow work, but is it possible to train the Bangle.js 2 to take in accelerometer/gyro/magnetometer data from a single step and to approximate its length, somehow by magic? There is a function in the code which 'corrects' a single step, but it is rudimentary and clearly sometimes has errors. It should be easy to obtain accurate data by running along a 400m track, or along any known route. If an AI can be trained on such data, that would be the dream, no?
[1] https://baldursigurds.github.io/gamrun_dev/
[2] https://x-io.co.uk/open-source-imu-and-ahrs-algorithms/