At threshold 14, the results are still pretty good:
NSAMPLE = 12, X_STEPS = 6, RAW_THRESHOLD = 14

File, Expected, Simulated, Diff, %, (Original)
HughB-walk-6605.csv, 6605, 6135, -470, 92.88 %, (3223)
HughB-walk-2350.csv, 2350, 2188, -162, 93.11 %, (1042)
HughB-walk-a3070-b3046.csv, 3070, 2913, -157, 94.89 %, (1909)
HughB-walk-a10021-b10248.csv, 10021, 10220, 199, 101.99 %, (12222)
HughB-drive-36min-0.csv, 0, 53, 53, 0.00 %, (1199)
HughB-drive-29min-0.csv, 0, 60, 60, 0.00 %, (1153)
HughB-drive-a3-b136.csv, 3, 75, 72, 2500.00 %, (535)
HughB-work-66.csv, 66, 81, 15, 122.73 %, (980)
HughB-work-66.csv, 66, 81, 15, 122.73 %, (980)
HughB-mixed-390.csv, 390, 465, 75, 119.23 %, (1871)
HughB-general-a260-b573.csv, 260, 444, 184, 170.77 %, (3854)
HughB-housework-a958-b2658.csv, 958, 2078, 1120, 216.91 %, (5762)
MrPloppy-stationary-0.csv, 0, 0, 0, 0.00 %, (1)

TOTAL DIFFERENCE 1709
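For reference, this is how the Diff and % columns are derived from each file's expected vs simulated counts. A minimal sketch only; `compareCounts` is my own hypothetical helper, not part of the test harness:

```javascript
// Hypothetical helper: derive the Diff and % columns from one file's
// expected step count vs the count the simulation produced.
function compareCounts(expected, simulated) {
  return {
    diff: simulated - expected,
    // Simulated as a percentage of expected; shown as 0.00 % when expected is 0
    pct: (expected ? 100 * simulated / expected : 0).toFixed(2) + " %"
  };
}

var r = compareCounts(6605, 6135); // HughB-walk-6605.csv at threshold 14
console.log(r.diff, r.pct); // -470 92.88 %
```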
A threshold of 10 still solves the MrPloppy case, but I think it's too sensitive:
NSAMPLE = 12, X_STEPS = 6, RAW_THRESHOLD = 10

File, Expected, Simulated, Diff, %, (Original)
HughB-walk-6605.csv, 6605, 6548, -57, 99.14 %, (3223)
HughB-walk-2350.csv, 2350, 2268, -82, 96.51 %, (1042)
HughB-walk-a3070-b3046.csv, 3070, 3082, 12, 100.39 %, (1909)
HughB-walk-a10021-b10248.csv, 10021, 11388, 1367, 113.64 %, (12222)
HughB-drive-36min-0.csv, 0, 120, 120, 0.00 %, (1199)
HughB-drive-29min-0.csv, 0, 164, 164, 0.00 %, (1153)
HughB-drive-a3-b136.csv, 3, 195, 192, 6500.00 %, (535)
HughB-work-66.csv, 66, 154, 88, 233.33 %, (980)
HughB-work-66.csv, 66, 154, 88, 233.33 %, (980)
HughB-mixed-390.csv, 390, 663, 273, 170.00 %, (1871)
HughB-general-a260-b573.csv, 260, 790, 530, 303.85 %, (3854)
HughB-housework-a958-b2658.csv, 958, 3601, 2643, 375.89 %, (5762)
MrPloppy-stationary-0.csv, 0, 0, 0, 0.00 %, (1)

TOTAL DIFFERENCE 4092
With NSAMPLE=6 I think you lose too much low-frequency content, and step counting suffers:
NSAMPLE = 6, X_STEPS = 6, RAW_THRESHOLD = 15

File, Expected, Simulated, Diff, %, (Original)
HughB-walk-6605.csv, 6605, 5977, -628, 90.49 %, (3223)
HughB-walk-2350.csv, 2350, 2091, -259, 88.98 %, (1042)
HughB-walk-a3070-b3046.csv, 3070, 2775, -295, 90.39 %, (1909)
HughB-walk-a10021-b10248.csv, 10021, 9506, -515, 94.86 %, (12222)
HughB-drive-36min-0.csv, 0, 12, 12, 0.00 %, (1199)
HughB-drive-29min-0.csv, 0, 53, 53, 0.00 %, (1153)
HughB-drive-a3-b136.csv, 3, 26, 23, 866.67 %, (535)
HughB-work-66.csv, 66, 54, -12, 81.82 %, (980)
HughB-work-66.csv, 66, 54, -12, 81.82 %, (980)
HughB-mixed-390.csv, 390, 356, -34, 91.28 %, (1871)
HughB-general-a260-b573.csv, 260, 339, 79, 130.38 %, (3854)
HughB-housework-a958-b2658.csv, 958, 1476, 518, 154.07 %, (5762)
MrPloppy-stationary-0.csv, 0, 0, 0, 0.00 %, (1)

TOTAL DIFFERENCE 1171
So RAW_THRESHOLD=15 with NSAMPLE=12 worked best. However, I agree that more data would be desirable.
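To make it clearer what these parameters are doing, here is a minimal sketch of a threshold-based step counter in the style I've been testing. This is my own illustration, not the actual firmware code: `makeCounter` is hypothetical, and a real counter would also need logic to reset the consecutive-step run after a period of inactivity, which is omitted here.

```javascript
var NSAMPLE = 12;       // moving-average window (samples): bigger = more smoothing
var RAW_THRESHOLD = 15; // minimum swing above the smoothed baseline to accept a step
var X_STEPS = 6;        // consecutive candidate steps required before counting starts

function makeCounter() {
  var buf = [], wasHigh = false, run = 0, steps = 0;
  return function (mag) { // mag = raw accelerometer magnitude sample
    buf.push(mag);
    if (buf.length > NSAMPLE) buf.shift();
    var avg = buf.reduce(function (a, b) { return a + b; }, 0) / buf.length;
    var high = (mag - avg) > RAW_THRESHOLD; // swing above the moving average
    if (high && !wasHigh) {                 // rising edge = candidate step
      run++;
      // once X_STEPS candidates arrive in a row, credit the whole buffered
      // run at once, then count each further step individually
      if (run >= X_STEPS) steps += (run === X_STEPS) ? X_STEPS : 1;
    }
    wasHigh = high;
    return steps;
  };
}
```

Raising RAW_THRESHOLD suppresses small swings (driving, housework) at the cost of missing gentle walking; shrinking NSAMPLE makes the baseline track the signal more closely, which is why NSAMPLE=6 loses low-frequency content.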