• Yes, that's correct. Nothing stops you from compressing the data in big blocks and then writing them using the standard Storage.write function. The per-file overhead is currently 16 bytes, so it's not massive.

    You can even use Storage.write to allocate a large file and then fill it in a bit at a time, so you could in fact implement your own module for storing binary files.
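A rough sketch of that allocate-then-append pattern, assuming the documented `Storage.write(name, data, offset, size)` form (pass `size` on the first write to pre-allocate, then later writes supply only an `offset`). The in-memory `Storage` object below is a hypothetical stand-in so the sketch can run off-device; on Espruino you would use `require("Storage")` directly:

```javascript
// Hypothetical in-memory stand-in for Espruino's Storage module,
// mimicking the write(name, data, offset, size) behaviour so the
// sketch runs off-device. On Espruino: var Storage = require("Storage");
var files = {};
var Storage = {
  write: function (name, data, offset, size) {
    if (size !== undefined) files[name] = new Uint8Array(size); // pre-allocate
    var buf = files[name];
    for (var i = 0; i < data.length; i++)
      buf[(offset || 0) + i] =
        data.charCodeAt ? data.charCodeAt(i) : data[i];
    return true;
  },
  read: function (name) {
    return String.fromCharCode.apply(null, files[name]);
  }
};

// Allocate a 12-byte file with the first write, then append at an offset.
Storage.write("log", "Hello, ", 0, 12); // create file, write first chunk
Storage.write("log", "world", 7);       // write next chunk at offset 7
console.log(Storage.read("log"));       // -> "Hello, world"
```

The key point is that the file size is fixed up front by the first write; subsequent writes just fill in later regions of the same allocation.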

    The current solution handles the vast majority of use-cases and seemed like a good balance of speed and sanity. That said, I can imagine it could be modified to write one byte of length plus X bytes of data into each block for every write command, which would avoid the limitation around 0xFF.
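A minimal sketch of that suggested block format, under the assumption that erased flash reads as 0xFF: each write becomes a one-byte length followed by the data, so a length byte of 0xFF unambiguously means "no record here", and the payload itself is then free to contain 0xFF bytes. The block size and helper names here are made up for illustration:

```javascript
// Simulated erased flash: every byte starts as 0xFF.
var BLOCK_SIZE = 16; // hypothetical fixed block size per write
var flash = new Uint8Array(BLOCK_SIZE * 4).fill(0xFF);

// Write one record as [length byte][data...] into a block.
function writeRecord(blockIndex, data) {
  if (data.length > BLOCK_SIZE - 1) throw new Error("record too big");
  var off = blockIndex * BLOCK_SIZE;
  flash[off] = data.length;   // length byte, always < 0xFF
  flash.set(data, off + 1);   // payload may freely contain 0xFF
}

// Scan blocks until we hit an erased (0xFF) length byte.
function readRecords() {
  var out = [];
  for (var off = 0; off < flash.length; off += BLOCK_SIZE) {
    var len = flash[off];
    if (len === 0xFF) break;  // erased block: end of written data
    out.push(flash.slice(off + 1, off + 1 + len));
  }
  return out;
}

writeRecord(0, new Uint8Array([1, 2, 0xFF])); // 0xFF allowed in data
writeRecord(1, new Uint8Array([42]));
console.log(readRecords()); // two records, 0xFF preserved in payload
```

The trade-off is one byte of overhead per write command, in exchange for removing the restriction on storing 0xFF in the data itself.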


@Gordon started