• @sp33c, I assume that the data you published in this forum is test data, and therefore my conclusion may not be the answer to the payload capacity issues... but I still want to mention it.

    JSON is already much leaner than XML, but it is still 'fat', especially for repetitive data... not least because of all the double quotes it requires. JSON is very easy and fast to process, but maybe you can afford some CPU cycles to get over the bump. If you 'know your data' / 'have control over the data', a CSV string is still the most effective way to get there... and, furthermore, you can even keep a catalog with ids for values that repeat.
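    A minimal sketch of the size difference, using made-up sensor readings (the field names and values are only examples):

```ts
// Compare JSON and CSV encodings of the same repetitive data.
type Reading = { sensor: string; temp: number; hum: number };

const readings: Reading[] = [
  { sensor: "kitchen", temp: 21.5, hum: 40 },
  { sensor: "kitchen", temp: 21.7, hum: 41 },
  { sensor: "garage",  temp: 18.2, hum: 55 },
];

// JSON repeats every key name and quotes every string.
const asJson = JSON.stringify(readings);

// CSV sends the "schema" once (header row) and then only values.
const header = "sensor,temp,hum";
const asCsv = [header, ...readings.map(r => `${r.sensor},${r.temp},${r.hum}`)].join("\n");

console.log(asJson.length, asCsv.length); // CSV is noticeably shorter
```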

    The challenge then is to provide what XML does best and JSON does about equally well: the metadata (a reference) for the individual data/information elements, and the structure...

    The first one is easy and applicable if you have a number of predefined data structures: the first or last data item includes the id of the data structure's definition.
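    A sketch of that first approach, assuming both sides share a small registry of predefined structures keyed by an id, and the id travels as the first field of each record (the structure ids and field lists below are invented for illustration):

```ts
// The first field of each record names the structure it follows.
const structures: Record<string, string[]> = {
  S1: ["sensor", "temp", "hum"],
  S2: ["device", "voltage"],
};

function decode(record: string): Record<string, string> {
  const [structId, ...values] = record.split(",");
  const fields = structures[structId];
  if (!fields) throw new Error(`unknown structure ${structId}`);
  return Object.fromEntries(fields.map((name, i) => [name, values[i]]));
}

console.log(decode("S1,kitchen,21.5,40")); // { sensor: "kitchen", temp: "21.5", hum: "40" }
console.log(decode("S2,pump,11.9"));       // { device: "pump", voltage: "11.9" }
```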

    The second one applies when the values are sparse and variably distributed across the overall (complete) structure: use the first approach - if useful - to identify the main structure, and a second - or second-to-last - value holding a bitmap that identifies which attributes (always in the same sequence) carry a value... (this saves you from sending runs of empty commas).
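    A sketch of the bitmap approach, assuming a fixed field order agreed on by both sides and the bitmap sent as a plain integer in front of the present values (the field list is a made-up example):

```ts
// Encode/decode a sparse record as "<bitmap>,<present values...>".
// Field order is fixed; bit i of the bitmap says whether field i has a value.
const FIELDS = ["sensor", "temp", "hum", "pressure", "battery"];

function encodeSparse(obj: Record<string, unknown>): string {
  let bitmap = 0;
  const values: unknown[] = [];
  FIELDS.forEach((name, i) => {
    if (obj[name] !== undefined) {
      bitmap |= 1 << i;
      values.push(obj[name]);
    }
  });
  return [bitmap, ...values].join(",");
}

function decodeSparse(record: string): Record<string, string> {
  const [bitmapStr, ...values] = record.split(",");
  const bitmap = Number(bitmapStr);
  const out: Record<string, string> = {};
  let v = 0;
  FIELDS.forEach((name, i) => {
    if (bitmap & (1 << i)) out[name] = values[v++];
  });
  return out;
}

const wire = encodeSparse({ sensor: "attic", battery: 87 }); // "17,attic,87"
console.log(wire, decodeSparse(wire));
```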

    When values are repeated, a dictionary is built and passed as the first thing in the payload, and then payload-local references describe the content (some databases do compression this way so they do not have to run decompression algorithms).
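    A sketch of the dictionary approach, assuming a payload layout (invented for illustration) of dictionary entries first, then records that reference them by index; it only pays off when values actually repeat often:

```ts
// Put a dictionary of values first, then reference entries by index.
// Payload layout: "<dict entries>|<records with #index refs>"
function encodeWithDictionary(rows: string[][]): string {
  const dict: string[] = [];
  const indexOf = new Map<string, number>();
  const body = rows
    .map(row =>
      row
        .map(value => {
          if (!indexOf.has(value)) {
            indexOf.set(value, dict.length);
            dict.push(value);
          }
          return `#${indexOf.get(value)}`;
        })
        .join(",")
    )
    .join(";");
  return `${dict.join(",")}|${body}`;
}

const payload = encodeWithDictionary([
  ["kitchen", "on", "21.5"],
  ["kitchen", "off", "21.5"],
  ["garage", "on", "18.2"],
]);
console.log(payload);
// "kitchen,on,21.5,off,garage,18.2|#0,#1,#2;#0,#3,#2;#4,#1,#5"
```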

    A further option is to encode the values themselves if the possible values are known beforehand.
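    A sketch of value coding, assuming both ends agree on a table of the possible values beforehand (the status table below is a made-up example):

```ts
// Both ends share the code table, so only a short code travels on the wire.
const STATUS_CODES = { idle: 0, heating: 1, cooling: 2, fault: 3 } as const;
type Status = keyof typeof STATUS_CODES;

const CODE_TO_STATUS: Status[] = Object.keys(STATUS_CODES) as Status[];

function encodeStatus(s: Status): number {
  return STATUS_CODES[s];
}
function decodeStatus(code: number): Status {
  return CODE_TO_STATUS[code];
}

console.log(encodeStatus("heating")); // 1  (one digit instead of "heating")
console.log(decodeStatus(2));         // "cooling"
```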
