Garbage collection unexpected results #6322
Replies: 1 comment
-
Posted at 2023-09-20 by joyrider3774

Not sure if this is it, but process.memory() has a parameter that defaults to true when omitted. The parameter defines whether garbage collection should happen when calling process.memory(). Setting it to false prevents garbage collection; setting it to true or omitting it makes garbage collection happen. See https://www.espruino.com/ReferenceBANGLEJS#t_l_process_memory

Now, as to where that big chunk of used memory freed up by the GC comes from, I don't know. I also don't know why it happens when running from RAM but not from a storage file. Maybe it is related to certain code being in RAM, but if that were cleared, how could the program still run? It might also be something that gets loaded initially but is no longer required, and the GC frees it up when calling process.memory(), while with a storage file the code is read and executed from storage and the complete code is never loaded into RAM.

Posted at 2023-09-20 by @gfwilliams

Without a complete mini example to look at I can't be sure, but any function that is defined (even arrow functions) will contain a link to the scope it was defined in. Espruino isn't smart enough to figure out, at the point the function is defined, whether it actually references any local variables. So likely:
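The original snippet was not preserved in this archive; a hedged sketch of the kind of example being described (all names here are illustrative, not from the original):

```javascript
function setup() {
  var bigBuffer = new Array(5000).fill(0); // large local data
  // The returned function never uses bigBuffer, but Espruino still links
  // it to setup()'s scope, keeping bigBuffer alive while cb exists.
  return function () { return "done"; };
}

var cb = setup();
console.log(process.memory ? process.memory().free : "(Espruino only)");
cb = undefined;
// a later GC pass can now collect setup()'s scope, including bigBuffer
console.log(process.memory ? process.memory().free : "(Espruino only)");
```

On Espruino the two `process.memory().free` readings would differ by roughly the size of `bigBuffer`, because the scope link kept the whole local environment alive.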
Will print two very different results. Could that be it?

Posted at 2023-09-21 by charlie

Setting the garbage collection parameter to true or false seems to make no difference (in any process.memory() tests I have done).

Posted at 2023-09-21 by charlie

The print-mem function was defined within the class instance, which was running for the entire length of the program, so any variables local to that should stay locked in memory; this shouldn't be a factor. In your example "this" could have changed, so they could print different results if printMem was different. Arrow functions were introduced to guarantee "this" stays the same. So
Posted at 2023-09-21 by charlie

My first thought was that the Bangle was keeping files read from storage (which I do in "loadRoute") in memory, and that garbage collection for this was somehow delayed. However, I checked and this memory is released as soon as the scope ends.

Posted at 2023-09-21 by @fanoush
Also, the interpreter runs code directly from flash, not using RAM. Unless you upload code to RAM, in which case it of course needs more RAM for storing the uploaded code.

Posted at 2023-09-22 by charlie

loadRoute is doing two things: it reads the gpx file with Storage.read, and it parses additional JSON files with jsonRead. In both cases I am parsing the data into local constructs. Both these functions correctly release their memory once outside the scope. I actually think the memory being released is memory the Bangle uses when loading the main JS file on app load.

Posted at 2023-09-22 by d3nd3-o0

If you do

Also, could you provide a complete example? I am curious to test it myself.

Posted at 2023-09-27 by charlie

I have narrowed down what is triggering the low memory (when run from RAM). I have the following class, which basically shifts the queue if an item is pushed while over the max size:
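The class itself was not included in this archive; a hypothetical sketch of a queue that shifts off the oldest entry once a maximum size is exceeded (names are assumptions, not the original code):

```javascript
class BoundedQueue {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.items = [];
  }
  push(item) {
    this.items.push(item);
    // once over the max size, drop the oldest entry
    if (this.items.length > this.maxSize) this.items.shift();
  }
  get length() {
    return this.items.length;
  }
}
```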
If I instantiate 2 or more of these like this:
...the memory bottoms out. Which is curious, as even if I comment out the initialisation of the array it still bottoms out. The actual non-minified compiled TypeScript looks as follows:
I am wondering if the get overrides (such as below)
Posted at 2023-09-27 by charlie

Yeah, if I have time I will make a simplified example and set a timeout as you suggest.

Posted at 2023-09-27 by @gfwilliams

Ok, thanks for trying to narrow this down. I think really I'd need to see an example of this being used, but nothing looks that bad space-wise, unless what you're pushing into it ends up including a bunch of extra info (maybe via the scope it was defined in).

Honestly though, this is not the way to write code for Espruino. It's going to be so slow!

By using

I mean, you look at

On Espruino, as far as I can see, you could just use

Just a quick example:
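The original benchmark was not preserved; a hedged sketch of the kind of comparison being made (Queue is an illustrative stand-in for the wrapper class, and Date.now() stands in for Espruino's getTime()):

```javascript
// Size-limited wrapper class, as in the discussion above.
class Queue {
  constructor(max) { this.max = max; this.arr = []; }
  push(v) {
    this.arr.push(v);
    if (this.arr.length > this.max) this.arr.shift(); // drop oldest
  }
}

// Crude timer: run a function and report elapsed milliseconds.
function time(fn) {
  var t = Date.now(); // on Espruino you would use getTime()
  fn();
  return Date.now() - t;
}

var N = 50000;
var plain = time(function () {
  var a = [];
  for (var i = 0; i < N; i++) a.push(i);
});
var wrapped = time(function () {
  var q = new Queue(N);
  for (var i = 0; i < N; i++) q.push(i);
});
console.log("plain:", plain, "ms; wrapped:", wrapped, "ms");
```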
So it's 8x slower than just using an Array, and honestly I'm amazed it's that good.

Posted at 2023-09-28 by charlie
At first I did just use an array, but I need to limit the size, otherwise the memory is very quickly consumed (think sampling GPS data, where you only want recent samples to reduce noise). You are speed-testing two different things: you are saying that pushing to an array is quicker than pushing to an array and then pruning it, which is obviously true. If you have a quicker way of pushing and then resizing, that would be more constructive.
I am literally doing the same thing as splice after push (via shift). Originally I extended Array rather than having an internal array variable, but the Espruino compiler isn't releasing memory on splice or shift.
Ultimately you might save some CPU time with large chunks of repeated code, but it will limit you from creating good, stable apps which are easy to maintain and extend. Most of the apps in the current app library are not extendable or maintainable. You need to be able to split the code into small segments to unit test; otherwise you are taking a step back to pre-object-oriented, pre-test-driven development.

Posted at 2023-09-28 by @fanoush
It is an interpreter; compilers are supposed to compile, not manage runtime memory. Just tried slice, splice, shift and also pop; at least in simple cases it works.
check
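For reference (a plain-JS illustration, not the original test): these methods all remove elements in place, after which the removed values become collectable by the GC:

```javascript
var a = [1, 2, 3, 4, 5];
a.shift();          // removes the first element     -> [2, 3, 4, 5]
a.splice(1, 2);     // removes two items at index 1  -> [2, 5]
a.pop();            // removes the last element      -> [2]
var b = a.slice(0); // slice copies; the source array can then be released
```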
For me, a good and stable app is not a bloated app that runs slowly (= drains more battery) and needs a lot of memory.
This is debatable. Smaller code without unneeded abstractions can be easy to maintain and extend too. Someone else may find your perfectly testable and extensible code hard to follow too. Also, the typical watch app is not that big, and people write them for fun and for free, so extendable and maintainable may not even be the goal here.

Posted at 2023-10-02 by @gfwilliams

I'm not trying to have a go, although I guess what I wrote came out as a bit combative. It's more frustration at the TypeScript compiler, which, even if it's not optimising at all, could have had a better standard library. It's a pet hate of mine that there are teams of people trying to make JS engines like V8 super fast, while seemingly the rest of the world is out there finding ingenious ways to make JS code as large, incomprehensible and slow as possible.
Exactly.
I'd argue that many of the apps in the app library actually have been extended and maintained by others, precisely because they are small and are written in the language that Bangle.js runs natively (and so can be debugged easily).

Posted at 2023-10-11 by charlie
It's not built for people to read the compiled code. In my setup it's not just TypeScript: Webpack is doing the module splitting, minification, etc. I also doubt I have it set up optimally. So while you might be frustrated with TypeScript, JavaScript on its own is not well suited to building large-scale, stable applications. This is not just my opinion but one held by the majority of leading tech companies using JavaScript.

And just on why abstractions such as "isEmpty" are needed and good practice: if you have a large application with many developers working on it, what does it mean for an array to be empty? If it's a preset length and each value is undefined, is it empty? Or is it an array of undefined values? It doesn't matter which is true, as long as it's consistent across your app. The only way to do this is to create a single function everyone calls. This has other advantages: you can have unit-test coverage, so that if someone finds a faster way to determine whether an array is empty, that function can be rewritten and the unit tests remove any worry or time taken to test the new version. The person using isEmpty doesn't have to see the code or care how it works, or what it means to be empty.

Posted at 2023-10-12 by @gfwilliams

I'm not against TypeScript at all - I think adding types and compile-time checking for large projects is a good idea.
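As a hedged illustration of the point being made (isEmpty here is a hypothetical helper, not code from the app):

```javascript
// One shared definition of "empty" for the whole codebase: an array is
// empty when no element holds a defined value, so a preset-length array
// of undefined values counts as empty too.
function isEmpty(arr) {
  for (var i = 0; i < arr.length; i++) {
    if (arr[i] !== undefined) return false;
  }
  return true;
}
```

Callers never need to know which convention was chosen; if the implementation changes, the unit tests pin down the agreed meaning of "empty".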
Oh, that's ok then. If nobody reads the code it can be as slow and inefficient as you like.

Posted at 2023-10-23 by charlie
I think you know what I meant. The produced code doesn't matter as much as the thing producing it, which depends on which JavaScript version you want the compiled code in and what other compile settings you have.
-
Posted at 2023-09-20 by charlie
During some testing using the emulator and pushing the code to RAM, I found the following oddity while logging the output from process.memory(); the same occurs if I run on the watch via RAM. If I push the code to a storage file and it's run from there, it works more as I would expect.
I have a test method inside an instance of an App class which runs for the duration of the program as follows:
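The original class was not preserved in this archive; a minimal, hypothetical sketch of the structure being described (all names are assumptions):

```javascript
class App {
  run() {
    const route = this.loadRoute(); // read + parse files from storage
    // free memory printed here reads ~2000 when the app runs from RAM
    return route;
  }
  loadRoute() {
    // in the real app: Storage.read(...) plus jsonRead(...) into locals
    return [];
  }
}
```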
Printing out the memory at the comment in the above script gives around 2000 free. It's the same if I do this outside the method after the block has finished.
If I add a timeout to print the memory usage after a few seconds:
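The original snippet was not preserved; a hedged reconstruction of the kind of check described, where freeMem stands in for Espruino's process.memory().free (the Node fallback is only so the sketch runs anywhere):

```javascript
// On Espruino this would simply be process.memory().free.
function freeMem() {
  return typeof process.memory === "function"
    ? process.memory().free            // Espruino
    : process.memoryUsage().heapUsed;  // Node fallback, for illustration
}

console.log("at end of block:", freeMem()); // ~2000 free on the watch
setTimeout(function () {
  console.log("a few seconds later:", freeMem()); // jumps to ~9000
}, 3000);
```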
Free memory jumps to around 9000.
That code block doesn't contain any async functions. Does anyone have some insight into why a big chunk of memory is being freed a few seconds after execution, rather than at the end of each block scope?