• I just did some more tests - this time on a bare nRF52 with a minimal build.

    5.65 softfp
    5.63 hardfp
    

    So there is a repeatable difference, but it's very very small, and I'd definitely err on the side of keeping softfp and not breaking anything if at all possible :)

    I just tried building a minimal C file with hardfp and softfp. There are a few binary differences in the object files, but notably I see:

    // hard
      4 .ARM.attributes 00000034  00000000  00000000  000000b8  2**0
                      CONTENTS, READONLY
    // soft
      4 .ARM.attributes 00000032  00000000  00000000  000000b8  2**0
                      CONTENTS, READONLY
    

    And of course the algo.o is something totally different (5e!) - but I did just hex-edit algo.o to change the 5e to 32, and it does now build. I've just got to do the other files, and we might be OK.
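A less error-prone way to see what that difference in `.ARM.attributes` encodes is to dump the build attributes directly (a sketch; it assumes GNU binutils' `readelf` from the `arm-none-eabi` toolchain is on your path):

```shell
# Dump the ARM build attributes of an object file. The float-ABI
# choice shows up as Tag_ABI_VFP_args: readelf prints
# "VFP registers" for hardfp objects, while softfp objects omit
# the tag (meaning the base AAPCS, i.e. core-register arguments).
arm-none-eabi-readelf -A algo.o

# The linker refuses to mix objects whose tags disagree, which is
# what the hex edit works around. The cleaner fix is to recompile
# with a matching option:
#   -mfloat-abi=softfp   # core registers for args, FPU for math
#   -mfloat-abi=hard     # FPU registers for args and math
```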

  • I just did some more tests - this time on a bare nRF52 with a minimal build.

    And is the hardfp version larger? I.e. are the extra register moves for double-precision math there?

    OTOH I can imagine that if a single-precision float argument/return value is used somewhere internally in Espruino, hardfp could make such code shorter and avoid clobbering integer registers. But that should affect only method calling; VFP single-precision math inside methods should already be there even with the softfp calling convention.

    EDIT:
    Briefly checking https://github.com/espruino/Espruino/search?q=float&type=code, I can see tensorflow uses the float type a lot, so maybe that code could benefit from the hardfp calling convention being the default.
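The calling-convention point above can be sketched with a tiny C function (a made-up example, not from Espruino; the flag names are GCC's):

```c
/* Sketch: the float ABI only changes how values cross function
   boundaries. Compiled with -mfloat-abi=hard, 'x' arrives in VFP
   register s0 and the result leaves in s0. Compiled with
   -mfloat-abi=softfp, 'x' travels in core register r0 instead, so
   the compiler emits extra vmov instructions around the vadd. In
   both cases the addition itself runs on the FPU (assuming
   -mfpu=fpv4-sp-d16, the Cortex-M4F FPU on the nRF52). */
float add_half(float x) {
    return x + 0.5f;
}
```

Comparing the disassembly of the two builds (e.g. with `arm-none-eabi-objdump -d`) makes the extra moves visible; for leaf functions like this the size difference is only a couple of instructions, which matches the tiny benchmark gap above.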

@Gordon started