You are reading a single comment by @charlie and its replies.
  • I have now spent some time developing my hiking app on the platform, so I have a lot more understanding of how everything fits together. It's worth noting I bought a Bangle.js 2, so any memory references relate to that rather than the first model. I will publish it to the app store once it is more complete.

    See the screenshot for how I structured the project. Everything is TypeScript, packed with webpack into a single minified app.js file. When working with limited memory this has the advantage that tests and mocks, as well as any unused files or functions, are not included in the production build.
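For anyone wanting the same setup, a minimal sketch of such a webpack config might look like this. All file names and paths here are illustrative assumptions, not taken from the actual project:

```javascript
// webpack.config.js — minimal sketch, not the project's actual config.
// `mode: 'production'` turns on minification and tree-shaking, which is
// what drops unused files/functions; the exclude pattern keeps test
// files out of the bundle.
const path = require('path');

module.exports = {
  mode: 'production',
  entry: './src/index.ts',            // hypothetical entry point
  module: {
    rules: [{
      test: /\.ts$/,
      use: 'ts-loader',
      exclude: /node_modules|\.test\.ts$/,  // keep tests/mocks out
    }],
  },
  resolve: { extensions: ['.ts', '.js'] },
  output: { filename: 'app.js', path: path.resolve(__dirname, 'dist') },
};
```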

    The app provides a vector-drawn route relative to current location and bearing, the direction needed to hit the next waypoint (via an arrow pointer), the direction at the next waypoint (relative to current bearing), and distances to the next waypoint/end etc. (The screenshot is from the emulator, which does not show the light-coloured objects.)
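The waypoint maths above boils down to a couple of great-circle helpers. A sketch of what they might look like — function names and structure are my assumptions, not the app's actual code:

```typescript
// Great-circle bearing and distance between two lat/lon points (degrees).
// Illustrative sketch only — not taken from the app's source.
const R = 6371000; // mean Earth radius in metres

const rad = (d: number): number => (d * Math.PI) / 180;
const deg = (r: number): number => (r * 180) / Math.PI;

// Initial bearing from point 1 to point 2, clockwise from north, 0..360.
function bearingTo(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const dLon = rad(lon2 - lon1);
  const y = Math.sin(dLon) * Math.cos(rad(lat2));
  const x = Math.cos(rad(lat1)) * Math.sin(rad(lat2)) -
            Math.sin(rad(lat1)) * Math.cos(rad(lat2)) * Math.cos(dLon);
  return (deg(Math.atan2(y, x)) + 360) % 360;
}

// Haversine distance in metres.
function distanceTo(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const dLat = rad(lat2 - lat1);
  const dLon = rad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(rad(lat1)) * Math.cos(rad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}
```

The arrow pointer is then just `(bearingTo(...) - currentHeading + 360) % 360`.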

    I took some ideas from the gipy project (the best attempt at OO I have seen) and nav compass. I read the GPX files directly from storage. I know gipy creates its own format, but I am not sure what calculations you need to make ahead of time, as vector maths is super quick on most processors. It also lets you use Gadgetbridge to put a GPX file on the Bangle without going through the app loader every time.
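Pulling trackpoints straight out of a GPX file can be done with a plain regex rather than a full XML parser. A sketch of that approach, assuming the `lat` attribute appears before `lon` (as typical GPX exporters emit it) — this is not the app's actual parser:

```typescript
// Minimal GPX trackpoint extraction — sketch only. Uses numbered capture
// groups rather than named ones, since the interpreter does not support
// named groups. Assumes lat="" precedes lon="" on each <trkpt>.
type Trackpoint = { lat: number; lon: number };

function parseGpx(xml: string): Trackpoint[] {
  const pts: Trackpoint[] = [];
  const re = /<trkpt[^>]*lat="([-\d.]+)"[^>]*lon="([-\d.]+)"/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(xml)) !== null) {
    pts.push({ lat: parseFloat(m[1]), lon: parseFloat(m[2]) });
  }
  return pts;
}
```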

    Observations:

    Documentation is definitely a weak point, especially around the built-in modules and which JavaScript spec the interpreter follows ('let' and 'const' are implemented, but regex named capture groups are not). Most of what I now know came from trial and error and the declaration files.
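As a concrete example of the spec gap: named capture groups (ES2018) fail, while the equivalent numbered groups work everywhere. The portable fallback is simply:

```typescript
// Numbered capture groups work on the interpreter; named groups do not.
// Illustrative version string, not from the app.
const m = /(\d+)\.(\d+)/.exec('12.34');        // numbered groups: fine
const major = m ? parseInt(m[1], 10) : 0;      // instead of m.groups.major
// const bad = /(?<major>\d+)/.exec('12.34');  // named groups: SyntaxError there
```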

    I only hit memory issues when I stored a lot of information in long-running variables, such as the entire location history.

    Assuming garbage collection works as I would expect, I would definitely use the documentation to nudge people towards small modular functions over single large chunks of code, always using "let" over "var" so variables take up memory for the shortest time possible, and keeping global variables to a minimum. I think you can do this in the documentation and tutorials in a way that doesn't over-complicate things for beginners and doesn't force it.
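The `let`-over-`var` point in a nutshell — block scoping means a value can be collected as soon as its block exits, whereas `var` keeps it function-scoped. A small illustrative sketch (not from the app):

```typescript
// With `let`, `i` and `n` are scoped to the loop body, so nothing from
// the loop is still reachable once it finishes. With `var` they would
// remain function-scoped (and referenced) until sumLines returns.
function sumLines(lines: string[]): number {
  let total = 0;
  for (let i = 0; i < lines.length; i++) {
    let n = parseInt(lines[i], 10); // dies with each iteration
    if (!isNaN(n)) total += n;
  }
  return total;
}
```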


    2 Attachments

    • nav_s.png
    • struc.png
  • as vector maths is super quick on most processors

    There are no vector instructions in the Cortex-M4 (if you mean something like https://en.wikipedia.org/wiki/Vector_processor ). Even all the floating-point maths is done in software, because JavaScript uses the double type and the Cortex-M4 only has single-precision floats in hardware. There is really nothing that could be called super quick here.

    I would still implement assocs as hashmaps so they are future proofed for larger mem sizes.

    Are you aware there is not even a dynamic heap (the malloc()/free() C API) available? Where would that hashmap and its data be stored?

    I think you still have not fully grasped the CPU and memory limitations, which is, on the other hand, a testament to all the work Gordon did to make it run so well on such limited devices. Your comments would be more fitting if we had ten times the memory and CPU speed; then it would be comparable to, e.g., 486 or Pentium machines from the nineties running Windows 3.11 with 2MB of RAM (did JavaScript run on that?)

    BTW, Espruino also runs on the micro:bit 1, which has only 16KB of RAM, and it works there including the Bluetooth LE stack (!), which had to be disabled in the MicroPython port because MicroPython is really not designed for devices this limited (that's also why Snek exists).

    As for 'future-proofing', it might be a bit more complicated than adding hashmaps ;-) Only recently did we get typed arrays over 64KB, which would have made no sense a few years ago, as a typical Espruino device had less than that in total.
