-
• #2
Not sure if this is it, but process.memory() has a parameter which defaults to true when omitted. The parameter defines whether garbage collection should happen when calling process.memory(): setting it to false means no garbage collection is done, while setting it to true or omitting it makes garbage collection happen. See https://www.espruino.com/ReferenceBANGLEJS#t_l_process_memory
Now, as to where that big chunk of used memory freed up by the GC comes from, I don't know, and I also don't know why it happens when running from RAM but not from a storage file. Maybe it is related to certain code being in RAM, but if that were cleared, how could the program still run? So I'm not sure. It might also be something that gets loaded initially but is no longer required, which the GC frees when process.memory() is called; with a storage file, code is read and executed from storage, so the complete code is never loaded into RAM.
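A quick sketch of that flag (process.memory is Espruino-specific; the stub below only stands in so the sketch runs outside the watch, and its numbers are invented):

```javascript
// On Espruino, process.memory(false) reports usage without forcing a GC
// pass, while process.memory() / process.memory(true) runs GC first.
if (typeof process.memory !== "function") {
  // Stand-in for environments without Espruino's API (invented numbers).
  process.memory = function (gc) {
    return { free: gc === false ? 1000 : 1200 }; // pretend GC frees 200 blocks
  };
}
var noGc = process.memory(false).free;  // snapshot: garbage still counts as used
var withGc = process.memory().free;     // default (true): garbage collected first
console.log("free without GC:", noGc, "/ with GC:", withGc);
```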
-
• #3
Without a complete mini example to look at I can't be sure, but any function that is defined (even arrow functions) will contain a link to the scope it was defined in. Espruino isn't smart enough to figure out, when the function is defined, whether it actually references any local variables.
So likely:
setTimeout(function() { this.printMem(); }, 3000);
setTimeout(this.printMem, 3000);
Will print two very different results. Could that be it?
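A small runnable sketch of the scope-link behaviour described above (plain JavaScript; the function and variable names are made up, and the memory effect itself only shows on Espruino):

```javascript
// Any function value keeps a link to the scope it was defined in.
// Espruino keeps that whole scope alive even if the function never
// touches the local variables, so `big` below stays in memory for as
// long as `cb` is reachable.
function makeCallback() {
  var big = new Array(1000).fill(0); // local data held by the scope
  return function () { return "done"; }; // never uses `big`, still links the scope
}
var cb = makeCallback();
console.log(cb()); // "done" - the closure works as expected
cb = null; // dropping the last reference lets GC reclaim the scope (and `big`)
```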
-
• #4
Setting the garbage collection parameter to true or false seems to make no difference (in any process.memory() tests I have done).
-
• #5
The printMem function was defined within the class instance, which was running for the entire length of the program, so any variables local to that should stay locked in memory; this shouldn't be a factor.
In your example "this" could have changed, so they could print different results if printMem was different. Arrow functions were introduced to guarantee "this" stays the same. So:
const app = new App();
app.setup() // "this" will be instance of the App
app.printMem() // will be same as this.printMem inside setup
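For completeness, the difference can be shown in plain JavaScript (names are illustrative):

```javascript
// Passing a method *reference* detaches it from its receiver, so `this`
// changes; a wrapper function keeps the call as a method call.
var app = {
  name: "app",
  printMem: function () { return this && this.name; }
};
console.log(app.printMem());   // "app" - called as a method on app
var detached = app.printMem;
console.log(detached());       // undefined - `this` is no longer `app`
var wrapped = function () { return app.printMem(); };
console.log(wrapped());        // "app" - the wrapper preserves the receiver
```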
-
• #6
My first thought was that the Bangle was keeping files read from storage (which I do in "loadRoute") in memory and the garbage collection for this is somehow delayed. However, I checked and this memory is released as soon as the scope ends.
-
• #7
My first thought was that the Bangle was keeping files read from storage (which I do in "loadRoute") in memory
Storage.read returns just a pointer to flash. However, RAM is used once you start manipulating the data (like reading short blocks from storage and concatenating them), so sometimes it may be better to 'read' the whole file at once and just iterate over the big string or array than to read in small chunks and possibly join them back together. So it depends on your loadRoute code whether it 'was keeping files read from storage in memory'. If it was just iterating over it, there is no data to garbage collect. Also, the interpreter runs code directly from flash, not using RAM, unless you upload code to RAM, in which case it of course needs more RAM to store the uploaded code.
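A sketch under those assumptions: keep the flash-backed string whole and index into it, rather than reading chunks and joining them (countLines and the filename are made up for illustration):

```javascript
// Indexing into a big string reads it in place; no copies of the data
// are made, so on Espruino a flash-backed string costs almost no RAM.
function countLines(data) {
  var count = 0;
  for (var i = 0; i < data.length; i++) {
    if (data[i] === "\n") count++;
  }
  return count;
}
// Assumed Espruino usage: countLines(require("Storage").read("route.gpx"))
// By contrast, looping `s += Storage.read(f, offset, len)` rebuilds the
// file in RAM chunk by chunk.
console.log(countLines("line1\nline2\n")); // 2
```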
-
• #8
loadRoute is doing two things. It's reading the gpx file with Storage.read, and parsing additional JSON files with jsonRead. In both cases I am parsing the data into local constructs. The memory used by both these functions is correctly released once outside the scope.
I actually think the memory which is released is memory the Bangle is using when loading up the main js file on app load.
-
• #9
If you do
setTimeout(this.printMem, 0);
or
setTimeout(this.printMem, 1);
does the memory also drop by the same amount? In which case the delay required might not be as large as it appears. Also, could you provide a complete example? I am curious to test it myself.
-
• #10
I have narrowed down what is triggering the low memory (when run from RAM). I have the following class, which basically shifts the queue if an item is pushed when it's over the max size:
export class Queue<T> {
  protected itemLimit: number;
  protected internalArray: Array<T>;

  constructor(itemLimit = 3) {
    this.itemLimit = itemLimit;
    this.internalArray = [];
  }
If I initiate 2 or more of these like this:
this.waypoints = new Queue<IWaypoint>(10);
this.waterways = new Queue<ILocalisedFeature>(10);
...the memory bottoms out. Which is curious, as even if I comment out the initialisation of the array it still bottoms out.
The actual non-minified compiled TypeScript looks as follows:
// webpack wraps the module as: './src/constructs/queue.ts':
//   function (__unused_webpack_module, exports) { eval("...") }
// The eval'd string, unescaped:
Object.defineProperty(exports, "__esModule", ({ value: true }));
exports.Queue = void 0;
var Queue = /** @class */ (function () {
    // itemCount = 0;
    function Queue(itemLimit) {
        if (itemLimit === void 0) { itemLimit = 3; }
        var _this = this;
        this.lastN = function (count) {
            if (count === void 0) { count = 2; }
            var newAry = [];
            for (var i = _this.length - 1; i >= Math.max(0, _this.length - count); i--) {
                newAry.push(_this.internalArray[i]);
            }
            return newAry;
        };
        this.any = function () {
            return _this.length > 0;
        };
        this.lastEntry = function () {
            try {
                return _this.internalArray[_this.length - 1];
            }
            catch (err) {
                return null;
            }
        };
        this.firstEntry = function () {
            return _this.internalArray[0];
        };
        this.lastMinus = function (numberFromEnd) {
            if (numberFromEnd === void 0) { numberFromEnd = 0; }
            return _this.internalArray[_this.length - 1 - numberFromEnd];
        };
        this.clear = function () {
            _this.internalArray.length = 0;
        };
        this.asArray = function () {
            return _this.internalArray;
        };
        this.itemLimit = itemLimit;
        this.internalArray = [];
    }
    Object.defineProperty(Queue.prototype, "length", {
        get: function () {
            return this.internalArray.length;
        },
        enumerable: false,
        configurable: true
    });
    Queue.prototype.isEmpty = function () {
        return !this.any();
    };
    Queue.prototype.push = function () {
        var _a;
        var items = [];
        for (var _i = 0; _i < arguments.length; _i++) {
            items[_i] = arguments[_i];
        }
        var n = (_a = this.internalArray).push.apply(_a, items);
        if (this.itemLimit != null && this.itemLimit > 0) {
            this.internalArray.splice(0, this.length - this.itemLimit);
        }
        return this.length;
    };
    return Queue;
}());
exports.Queue = Queue;
//# sourceURL=webpack://ck_nav/./src/constructs/queue.ts
I am wondering if the get overrides (such as below), which result in Class.prototype stuff, are the cause, but I need to test more.
get length() { return this.internalArray.length; }
-
• #11
Yeah, if I have time I will make a simplified example and set the timeout as you suggest.
-
• #12
Ok, thanks for trying to narrow this down. I think really I'd need to see an example of this being used, but nothing looks that bad space-wise, unless what you're pushing into it ends up including a bunch of extra info (maybe via the scope it was defined in).
Honestly though, this is not the way to write code for Espruino. It's going to be so slow! By using Queue you're basically just reimplementing what Array does, but adding a whole extra layer of abstraction which is eating memory and CPU time.
I mean, you look at isEmpty: it's calling this.any(), which calls _this.length, which is a getter which calls this.internalArray.length. It's just a nightmare - it's like someone wrote the code specifically to waste CPU cycles - even on Node.js it's going to suck.
On Espruino, as far as I can see you could just use Array directly, and if you really need to limit the size of the queue you could override .push to just call splice after it to limit the length.
Just a quick example:
var q = new Queue(20);
t=getTime();for (var i=0;i<20;i++)q.push(i);print(getTime()-t); // 0.082 seconds
var q = [];
t=getTime();for (var i=0;i<20;i++)q.push(i);print(getTime()-t); // 0.0102 seconds
So it's 8x slower than just using an Array, and honestly I'm amazed it's that good.
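The suggestion above can be sketched with a plain Array (pushCapped and MAX are made-up names, not from the thread's code):

```javascript
// Plain-Array version of the capped queue: push, then drop the oldest
// entries so the array never grows beyond the limit.
var MAX = 10;
function pushCapped(arr, item) {
  arr.push(item);
  if (arr.length > MAX) arr.splice(0, arr.length - MAX); // trim from the front
  return arr.length;
}

var q = [];
for (var i = 0; i < 25; i++) pushCapped(q, i);
console.log(q.length);            // 10
console.log(q[0], q[q.length-1]); // 15 24 - only the most recent survive
```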
-
• #13
So it's 8x slower than just using an Array, and honestly I'm amazed it's that good.
At first I did just use an array, but I need to limit the size, otherwise the memory is very quickly consumed (think wanting to sample GPS data, but only recent data, as you want to reduce noise). You are speed testing two different things. You are saying pushing to an array is quicker than pushing to an array and then pruning it. Which is obviously true. If you have a quicker way of pushing and then resizing, that would be more constructive.
On Espruino as far as I can see you could just use Array directly, and if you really need to limit the size of the queue you could override .push to just call splice after it to limit the length.
I am literally doing the same thing as splice after push (via shift). Originally I extended Array rather than having an internal array variable, but the espruino compiler isn't releasing memory when splice or shift.
I mean, you look at isEmpty, it's calling this.any() which calls _this.length which is a getter which calls this.internalArray.length. It's just a nightmare - it's like someone wrote the code specifically to waste CPU cycles - even on Node.js it's going to suck
If I want to know if my array is empty I need to do a check in my code. I still need to call array.length. I could repeat this code every time I want to check, and it wouldn't speed up the processing, or I can add an additional layer which can be individually unit tested and optimised further in future. Ultimately you might save some CPU time with large chunks of repeated code, but it will limit you from creating good stable apps which are easy to maintain and extend. Most of the apps in the current app library are not extendable or maintainable. You need to be able to split the code into small segments to unit test. Otherwise you are taking a step back to pre-object-oriented, test-driven development.
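As a sketch of that argument, the shared definition can live in one small, tested helper (the helper and its chosen semantics are illustrative, not from the thread's code):

```javascript
// One helper pins down the team's definition of "empty", so every
// caller and every unit test agrees on a single meaning.
function isEmpty(arr) {
  return arr.length === 0; // chosen definition: no element slots at all
}

// Unit-test style checks documenting the chosen semantics:
console.log(isEmpty([]));           // true
console.log(isEmpty([undefined]));  // false: a slot exists, even if undefined
console.log(isEmpty(new Array(3))); // false under this definition
```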
-
• #14
but the espruino compiler isn't releasing memory when splice or shift.
It is the interpreter; compilers are supposed to compile, not manage runtime memory.
Just tried slice, splice, shift and also pop; at least in simple cases it works:
espruino.com 2v19.182 (c) 2023 G.Williams
Espruino is Open Source. Our work is supported
only by sales of official boards and donations:
http://espruino.com/Donate
>a=[];i=0
=0
>process.memory()
={ free: 13978, usage: 22, total: 14000, history: 9, gc: 0, gctime: 2.469, blocksize: 18 }
>for (var i=0;i<100;i++)a.push(i)
=undefined
>process.memory()
={ free: 13878, usage: 122, total: 14000, history: 13, gc: 0, gctime: 2.435, blocksize: 18 }
>while(a.length)a.shift()
=undefined
>process.memory()
={ free: 13978, usage: 22, total: 14000, history: 16, gc: 0, gctime: 1.872, blocksize: 18 }
>for (var i=0;i<100;i++)a.push(i)
=undefined
>process.memory()
={ free: 13878, usage: 122, total: 14000, history: 16, gc: 0, gctime: 1.664, blocksize: 18 }
>for (i=0;i<100;i++)a.push(i)
=undefined
>process.memory()
={ free: 13778, usage: 222, total: 14000, history: 20, gc: 0, gctime: 0.697, blocksize: 18 }
>while(a.length)a.shift()
=undefined
>process.memory()
={ free: 13978, usage: 22, total: 14000, history: 20, gc: 0, gctime: 2.424, blocksize: 18 }
>for (i=0;i<100;i++)a.push(i)
=undefined
>process.memory()
={ free: 13878, usage: 122, total: 14000, history: 20, gc: 0, gctime: 2.41, blocksize: 18 }
>while(a.length)a.pop()
=undefined
>process.memory()
={ free: 13978, usage: 22, total: 14000, history: 23, gc: 0, gctime: 0.525, blocksize: 18 }
>for (i=0;i<100;i++)a.push(i)
=undefined
>a=a.slice(50)
=[ 50, 51, 52, 53, 54, ... 95, 96, 97, 98, 99 ]
>process.memory()
={ free: 13928, usage: 72, total: 14000, history: 25, gc: 0, gctime: 0.528, blocksize: 18 }
>while(a.length)a.pop()
=undefined
>process.memory()
={ free: 13978, usage: 22, total: 14000, history: 25, gc: 0, gctime: 0.687, blocksize: 18 }
>
>a=[1,2,3,4,5]
=[ 1, 2, 3, 4, 5 ]
>process.memory()
={ free: 13973, usage: 27, total: 14000, history: 30, gc: 0, gctime: 0.812, blocksize: 18 }
>a.splice(3,2,10,11)
=[ 4, 5 ]
>a
=[ 1, 2, 3, 10, 11 ]
>process.memory()
={ free: 13973, usage: 27, total: 14000, history: 30, gc: 0, gctime: 2.526, blocksize: 18 }
>a.splice(3,2)
=[ 10, 11 ]
>process.memory()
={ free: 13975, usage: 25, total: 14000, history: 34, gc: 0, gctime: 2.294, blocksize: 18 }
>a
=[ 1, 2, 3 ]
>
Check usage: it goes up and down as expected.
It will limit you from creating good stable apps
For me, a good and stable app is not a bloated app that runs slowly (= drains more battery) and needs a lot of memory.
which are easy to maintain and extend. Most of the apps in the current app library are not extendable or maintainable.
This is debatable. Smaller code without unneeded abstractions can be easy to maintain and extend too. Someone else may find your perfectly testable and extensible code hard as well. Also, the size of a typical watch app is not that big, and people write them for fun and for free, so extendable and maintainable may not even be the goal here.
-
• #15
I'm not trying to have a go, although I guess what I wrote came out as a bit combative.
It's more frustration at the TypeScript compiler, which even if it's not optimising at all could have had a better standard library. It's a pet hate of mine that there are teams of people trying to make JS engines like V8 super fast, but then seemingly the rest of the world's out there finding ingenious ways to make JS code as large, incomprehensible and slow as possible.
Smaller code without unneeded abstractions can be easy to maintain and extend too.
Exactly.
Most of the apps in the current app library are not extendable or maintainable.
I'd argue that many of the apps in the app library actually have been extended and maintained by others, precisely because they are small and are written in the language that Bangle.js runs natively (and so can be debugged easily).
-
• #16
It's more frustration at the TypeScript compiler
TypeScript is a game changer for large-scale JavaScript applications. Having type-safe code saves so much time on testing and debugging. This is why it's now built in as standard to the default React and Angular boilerplates. It's also why Python is introducing typing.
ingenious ways to make JS code as large, incomprehensible and slow as possible
It's not built for people to read the compiled code. In my setup it's not just TypeScript: webpack is doing the module splitting, minification etc. I also doubt I have it set up optimally.
So while you might be frustrated with TypeScript, JavaScript isn't suited on its own to building large-scale stable applications. This is not just my opinion but an opinion held by the majority of leading tech companies using JavaScript.
...and just on why abstractions such as "isEmpty" are needed/good practice: if you have a large application with many developers working on it, what does it mean for an array to be empty? If it's a preset length and each value is undefined, is it empty? Or is it an array of undefined values? It doesn't matter which is true, as long as it's consistent across your app. The only way to do this is to create a single function everyone calls. This has other advantages: you can have unit test coverage, so that if someone finds a faster way to determine if an array is empty, that function can be rewritten and the unit tests remove any worry or time taken to test the new version. The person using isEmpty doesn't have to see the code or care how the code works, or what it means to be empty.
-
• #17
I'm not against TypeScript at all - I think adding types and compile time checking for large projects is a good idea.
It's not built for people to read the compiled code
Oh that's ok then. If nobody reads the code it can be as slow and inefficient as you like.
-
• #18
Oh that's ok then. If nobody reads the code it can be as slow and inefficient as you like.
I think you know what I meant. The produced code doesn't matter as much as the thing producing it, which depends on which JavaScript version you want the compiled code in and what other compile settings you have.
During some testing using the emulator and pushing the code to RAM, I found the following oddity while logging the output from process.memory(); the same occurs if I run on the watch via RAM. If I push the code to a storage file and it's run from there, it works more as I would expect.
I have a test method inside an instance of an App class which runs for the duration of the program as follows:
Printing out the memory at the comment in the above script gives around 2000 free. It's the same if I do this outside the method after the block has finished.
If I add a timeout to print the memory usage after a few seconds:
Free memory jumps to around 9000.
That code block doesn't contain any async functions. Does anyone have some insight into why a big chunk of memory is being freed a few seconds after execution, rather than at the end of each block scope?
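The setup described boils down to roughly this shape (process.memory is Espruino-specific, so a stand-in stub is included; the numbers in the comments come from the observations above, not from a real run of this sketch):

```javascript
// Shape of the experiment: log memory right after the work runs, then
// again from a timeout a few seconds later.
if (typeof process.memory !== "function") {
  // Stand-in for Espruino's API so the sketch runs anywhere (made-up value).
  process.memory = function () { return { free: 2000 }; };
}
function printMem(label) {
  var free = process.memory().free;
  console.log(label + ": " + free + " free");
  return free;
}
printMem("right after the block");   // ~2000 free when run from RAM
setTimeout(function () {
  printMem("a few seconds later");   // ~9000 free on the watch/emulator
}, 3000);
```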