• Wed 2019.01.30 After four hours of number crunching, I thought I had a possible explanation, until I saw the following:

    note: Module minification is whitespace only and code minification is off

    Ripping out code functions and performing only the minimum initialization did allow the flat strings to be created. But what doesn't make sense is the number of free blocks needed above and beyond those used just to create an instance.

    Just connecting and running process.memory():

     2v00.103 (c) 2018 G.Williams
    >process.memory()
    ={ free: 5080, usage: 20, total: 5100, history: 0,
      gc: 0, gctime: 5.41782379150, "stackEndAddress": 536958216, flash_start: 134217728, "flash_binary_end": 376936,
    


    Loading modules only:

    >pm()
    { "free": 3190, "usage": 1910, "total": 5100, "history": 1690,
      "gc": 6, "gctime": 7.11059570312, "stackEndAddress": 536958216, "flash_start": 134217728, "flash_binary_end": 376936,
      "flash_code_start": 134234112, "flash_length": 393216 }
    

    usage 1910 x 16 = 30560 bytes, which seems right considering the non-minified ASCII file size is around 48k for the modules alone
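    The arithmetic above can be wrapped in a small helper. This is a sketch, assuming (per the Espruino internals page) that each JsVar block on this build is 16 bytes; the function name is mine, not part of any API:

```javascript
// Convert a block count from process.memory() into bytes.
// Assumes 16 bytes per JsVar block, as the Espruino internals
// page states for most official builds.
function blocksToBytes(blocks, blockSize) {
  blockSize = blockSize || 16; // assumed block size in bytes
  return blocks * blockSize;
}

var usageAfterModules = 1910;                   // "usage" from pm() above
console.log(blocksToBytes(usageAfterModules));  // 30560 bytes
```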


    From:

    https://www.espruino.com/Reference#t_l_E_getSizeOf
    E.getSizeOf(v, depth)
    Return the number of variable blocks used by the supplied variable

    But why does an instance of a class, with just arrays totalling 1156 bytes and a handful of number vars, create a usage of 19776 bytes?   19776 == 1236 * 16

    This implies the entire class is being loaded yet again, even though the module(s) were already loaded during the initial 'send'



    Then I create an instance of a class

    >n
    ={  }
    >E.getSizeOf(n)
    =1
    

    after instance creation

    >E.getSizeOf(n)
    =1236
    >pm()
    { "free": 3026, "usage": 2074, "total": 5100, "history": 1699,
    

    1236 x 16 = 19776
    Free 3190 - 3026 = 164
    164 x 16 = 2624

    But when I create an instance, in the case above, far less free space is consumed than expected, which doesn't make sense: getSizeOf() returns 1236, yet free space barely drops even though usage goes up a lot.
    2074 - 1910 = 164    164 * 16 = 2624 bytes used  vs
       1236 x 16 = 19776 bytes according to getSizeOf()


    The numbers also don't agree when using the depth argument:

    >E.getSizeOf(n,0)
    =1236
    

    'If depth>0 . . . an array listing all property names and their sizes is returned'

    So is 'size' here in the same unit as the number of blocks returned when depth == 0?

    >E.getSizeOf(n,1)
    =[
      {
        name: "__proto__",
        size: 1096 },
      {
        name: "pinLed",
        size: 3 },
       . . . 
    

    No, it doesn't look like it:

    when I apply a depth of 1, the sum of the returned sizes doesn't equal the number of blocks returned at depth 0, nor does it at depth 2. In fact the sum is roughly ten times greater, which implies the size is in bytes (though the reference indicates blocks).

    But that doesn't add up either, as a number var's size is 2, yet the internals page indicates the smallest allocation is one 16-byte block.
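    My guess (an assumption, not from the reference): summing the per-property sizes of a depth>0 listing double counts, because entries overlap — __proto__ already includes the constructor and methods that also appear as children at depth 2. A sketch summing an abbreviated depth-1 listing (data copied from the output above, remaining entries omitted):

```javascript
// Sum the per-property sizes from a depth-1 E.getSizeOf() listing.
// Entries can overlap (e.g. __proto__ contains the methods listed
// again at depth 2), so this sum need not equal the depth-0 total.
var depth1 = [
  { name: "__proto__", size: 1096 },
  { name: "pinLed",    size: 3 }
  // . . . remaining properties omitted, as in the output above
];

var sum = depth1.reduce(function (total, entry) {
  return total + entry.size;
}, 0);
console.log(sum); // 1099 for just these two entries
```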

    >E.getSizeOf(n,2)
    =[
      {
        name: "__proto__",
        size: 1096,
        more: [
          {
            name: "constructor",
            size: 1094 },
          {
            name: "ledon",
            size: 1094 },
       . . .
    (ten more entries like the children above)
    

    The sum of the sizes is roughly ten times as large, ~10940

    None of these methods agree.



    These observations lead me to conclude that:

    the process.memory() free/usage deltas don't agree with E.getSizeOf(),
    and the size summations at the different depth levels don't agree with one another.


    So what is the correct way to analyze memory, in such a way that one can guarantee the success of requesting that a flat string be created?

@Robin started