-
-
Yes, breakage is to be expected in cutting edge builds. I think most people understand that and are happy to help squash some bugs by testing.
I have posted an example in the GitHub discussion; it seems to be a somewhat strange problem that only surfaces in specific combinations of let/const/var/require/exports. -
I have set up 2v13.123 with the default apps installed from the stable app loader. Then I installed Android and Messages; notifications are working fine.
After installing Weather the console shows the following error and there is no notification on the Bangle:
>GB({"t":"notify","id":1575479849,"src":"Hangouts","title":"A Name","body":"message contents"}) Uncaught ReferenceError: "_GB" is not defined at line 7 col 59 in weather ...eather")update(event);if(_GB)setTimeout(_GB,0,event); ^ in function "GB" called from line 1 col 94 ...,"body":"message contents"})
After removing Weather app notifications work again.
-
-
I think I have identified the problem and found a workaround:
https://github.com/espruino/Espruino/issues/2215
With the described workaround in the weather lib it seems to work fine for me. Removing the weather app would probably work the same way.
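In case someone wants to patch it locally before the fix lands: I won't claim this is exactly the workaround from the issue, but the gist is to not reference the possibly-undeclared _GB directly, for example:
// Sketch only. The failing line references _GB directly, which throws a
// ReferenceError when the variable was never declared. Checking via typeof avoids that:
if (typeof _GB !== "undefined" && _GB) setTimeout(_GB, 0, event);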
-
The mentioned app is at https://banglejs.com/apps/?id=hrmaccevents .
The format is simple CSV. Recording a reference HRM via Bluetooth requires https://banglejs.com/apps/?id=bthrm to be installed. -
Maybe you can find some helpful stuff here: https://dontkillmyapp.com/
Most vendors restrict apps pretty aggressively for battery conservation. -
There surely are some things that could be moved out of boot or at least made optional during execution.
How about moving the discovery and device caching code completely into the settings? The scanning currently only sets a filter that is then used by the boot code.
Maybe splitting the bthrm app into a relatively small generic part for using BT sensors (reconnects, caching, error handling) and more specialized modules for actually parsing the data and emitting events (HRM, cadence, battery) would help.

@metallisto can you try to activate debug logging in the settings and check in the IDE if there is some kind of error while trying to start the app? It can take a while on the first connect; subsequent tries should be faster. The initial connect with my two different sensors takes around 10s. Some sensors seem to be more problematic than others. You can try to set the grace periods in the settings to help those sensors along. Those just add some waiting time at different stages of the connection process, so there is nothing to lose but a few seconds.
-
-
Had the same problem on backup; the reset seems to fix it. The resulting zip file, however, is empty. Changing line 48 in backup.js to
return zip.generateAsync({type:"binarystring"});
fixes that as
Espruino.Core.Utils.fileSaveDialog
seems to expect data in (binary) string form.

Edit: Browser is Chromium 98.0.4758.102 (Official Build) Arch Linux (64-bit)
-
Thanks, I have experimented using a palette. In this commit the palette is created to match what the image converter does for 3bpp RGB and 4bpp RGBA images. That seems to work fine for my purposes. I assume the image converter implicitly always uses the same "palette" when doing 3bpp or 4bpp images?
Only 8 colors of this palette are actually used, and a lookup table provides the matching entry so I can draw on the buffer directly with the correct color.
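Roughly what that looks like (a simplified sketch, not the exact code from the commit; the RGB565 values and the index order are assumptions that would need checking against the converter output):
// Simplified sketch. A 4bpp offscreen buffer plus a 16-entry RGB565 palette;
// only the first 8 entries are used, matching the 8 colors of 3bpp RGB images.
var palette = new Uint16Array([
  0x0000, 0x001F, 0x07E0, 0x07FF, // black, blue, green, cyan
  0xF800, 0xF81F, 0xFFE0, 0xFFFF, // red, magenta, yellow, white
  0, 0, 0, 0, 0, 0, 0, 0          // unused entries of the 4bpp palette
]);
// lookup table for drawing directly into the buffer with the right index
var col = { black:0, blue:1, green:2, cyan:3, red:4, magenta:5, yellow:6, white:7 };
var buf = Graphics.createArrayBuffer(176, 176, 4, {msb:true});
buf.setColor(col.red).fillRect(0, 0, 87, 175);
// the palette is applied when pushing the buffer to the screen
g.drawImage({width:176, height:176, bpp:4, buffer:buf.buffer, palette:palette});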
Does the buffer bit depth impact the performance of drawing operations? Perhaps more/less bit shifting at certain depths? -
-
I have managed to add some features and squeeze some additional performance out of the watchface. Using it daily is quite possible now. To do that, I had to render part of the watchface to a buffer, to be able to overlay it with analog hands without having to redraw the whole thing on every refresh.
I did, however, not manage to create a working solution with a buffer bit depth other than 16 bit. I expected 4 bit to be enough for the Bangle 2, but that garbled all the colors.
I have also tried using 8 bit color, but had the same problem as with 4 bit, just with different wrong colors.
In essence I am trying to do something like this:

var img16 = { width : 16, height : 16, bpp : 16, transparent : 1, buffer : require("heatshrink").decompress(atob("AA//AA34gABFgEfAIwf/D70H/4BG8ABF/EfAIv/8ABFD/4ffgEQAIsEgABFwABGwgBGD/4ffwkfAIuH8ABF/kQAIv/+ABFD/4ffA"))};
var img3 = { width : 16, height : 16, bpp : 3, buffer : require("heatshrink").decompress(atob("gEP/+SoVJAtNt2mS23d2wFpA"))};
var b = Graphics.createArrayBuffer(176,176,8);
b.drawImage(img16,0,0,{scale:5}); //top left in buffer
b.drawImage(img3,0,96,{scale:5}); //bottom left in buffer
g.drawImage(b.asImage());
g.drawImage(img16,96,0,{scale:5}); //top right direct
g.drawImage(img3,96,96,{scale:5}); //bottom right direct
Changing the bit depth of the b buffer to 16 makes this example work just fine. I could not get it to work correctly for 8 bit.
4 bit color with a transparent color defined would use the minimum possible amount of RAM, but that probably needs a palette and matching conversion of the images used in the watchface?
If a 4 bit palette for drawing the contents of b to g gives me correct colors, does dithering of images with bigger bit depths still work when drawing to the 4 bit buffer? -
Edit: Spoke too soon... I had inadvertently compared execution from RAM via the IDE with running from flash... Still, a few ms can be gained.
Moving up the scopes when using global variables seems extremely expensive. I have changed to passing the resource definition as a function parameter instead of using the globally defined variable, and the digitalretro watchface went from 1360ms to about 970ms, so about 30% faster 😱
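A stripped-down illustration of the change (made-up names, not the actual watchface code):
var resources = { count : 48 }; // stands in for the big resource definition
// slow: every access to "resources" has to walk up to the global scope
function drawAllGlobal() {
  var sum = 0;
  for (var i = 0; i < 1000; i++) sum += resources.count;
  return sum;
}
// faster: the same object handed over once as a parameter, found in the local scope
function drawAll(res) {
  var sum = 0;
  for (var i = 0; i < 1000; i++) sum += res.count;
  return sum;
}
drawAll(resources);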
The performance page in the documentation says that globals are slower to find, but finding out by how much surprised me. -
Thanks, the watchface is a port and runs on the currently developed imageclock. The discussion on that is here: Thread
-
I have tried to eval some code inside a function and it seems to be running in the global scope and not that of the function.
var global = "a"; function test(){ var local = "b"; eval('print("global", global)'); eval('print("local", local)'); } test();
prints:
global a
Uncaught ReferenceError: "local" is not defined
Should eval() pick up the local variable? -
I have actually removed all possibilities for image resources but uncompressed binary, since the other types are not really beneficial for anything. That shaves another couple of percent off the draw time.
I had not seen setClipRect yet, that could be really useful to draw partially without having to split the background into several parts. Splitting the background would be possible for native watchfaces, but for the automatic Amazfit conversion it is probably not that easy since there are no bounds on image sizes.
Is it correct that I could set a clip rect and then just call drawImage with the full screen size background and have it only touch the pixels inside the clip rect, and be faster than drawing without the clip rect? That would be an awesome alternative to drawing into ArrayBuffers and compositing them together. -
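What I have in mind would be roughly this (just a sketch, assuming setClipRect really does limit which pixels drawImage touches; background and drawTime are made-up placeholders):
// Redraw only the region around the digital time instead of the whole screen
g.setClipRect(60, 80, 109, 129);          // x1,y1,x2,y2 of the area to refresh
g.drawImage(background, 0, 0);            // full screen background image
drawTime();                               // whatever sits on top in that area
g.setClipRect(0, 0, g.getWidth()-1, g.getHeight()-1); // reset the clip area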
-
I have bought the cheapest and biggest "Hydrogel" protector on ebay. I think it was for Samsung Galaxy Note for 2,99€ shipped or something like that. Enough material for 8 Bangles :)
Cut it roughly (a few mm extra all around) with scissors. Then apply it to the Bangle and wrap it as well as possible. The stuff is relatively flexible. Then I cut around the glass using a scalpel (an X-Acto knife or sharp box cutter should work). The hydrogel does not create a perfectly flat surface, but that is only visible in some light conditions. Small (1mm radius) bubbles vanish on their own in about 2 days.
!PATIENCE necessary!
Do multiple light passes. Too much pressure will make you slip and cut your fingers, the Bangle case, or the protector over the glass. On my pictures you can see a little slip on the bottom right. It goes a little bit further onto the case, but it is not easy to show in a picture.
I had tried protectors made for my BIP S, but the screen curvature and outer corner radius are too different to work well.
-
Thanks Gordon. I had expected that implementing the code generation would be complicated, but actually it was relatively easy, since the flattening of the tree was already there. Some results with different watchfaces:
simpleanalog: No data file, tree: 420ms | Tree: 440ms | Collapsed: 190ms | Precompiled: 170ms
digitalretro: No data file, tree: 2830ms | Tree: 2830ms | Collapsed: 1490ms | Precompiled: 1366ms
gtgear: No data file, tree: LOW_MEMORY,MEMORY | Tree: 2020ms | Collapsed: 1150ms | Precompiled: 1060ms
Precompiling the watchface to draw calls gets about 10% reduction in drawing time after reducing the tree down to an array. Storing all images directly as binary string does not do a lot, on the order of single digit milliseconds. But every little thing counts.
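To make "precompiled" a bit more concrete, the difference is roughly this (made-up illustration, not the real generated code):
// Interpreted: walk a flat array and dispatch on the element type for every draw.
function drawFromArray(elements) {
  for (var e of elements) {
    if (e.t === "img") g.drawImage(e.img, e.x, e.y);
    else if (e.t === "str") g.drawString(e.text, e.x, e.y);
  }
}
// Precompiled: the browser side emits the calls directly, nothing left to dispatch.
function drawPrecompiled(r) {
  g.drawImage(r.background, 0, 0);
  g.drawString("12:34", 88, 40);
}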
Would you say replacing a switch with something like this generally would be faster?
function doA(p){}
function doB(p){}
for (var c of items){
  switch(c.name){
    case "A": doA(c.param); break;
    case "B": doB(c.param); break;
  }
}
// faster than the switch?
for (var c of items){
  eval('do' + c.name + '(' + c.param + ')');
}
Absolutely catastrophic security-wise, but probably not really problematic for the Bangle. If people start using watchfaces from wherever, maybe input sanitization would be in order.
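A third option that comes to mind, not shown in the snippet above: a plain lookup object of functions, which avoids both the switch and the string building of eval:
// Lookup object instead of switch or eval
var handlers = {
  "A" : function (p) { /* doA */ },
  "B" : function (p) { /* doB */ }
};
for (var c of items) {
  var h = handlers[c.name];
  if (h) h(c.param);
}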
Maybe drawing parts of the clock (digital time, weather, status icons) into ArrayBuffers in an event driven way and only compositing those together on every draw could be faster? At least the digital time would only be refreshed once a minute instead of on every call like now. Parts could individually change at their own speed without actually being redrawn on every refresh. Instead there would be the compositing overhead on every draw...
-
I have found 2 huge improvements:
- Collapsing the tree into a flat array on the browser side saves about 50% rendering time on the watch. That might complicate further savings using partial redraws, but that's currently just an idea.
- Using g.transformVertices for drawing the rotated analog vector hands. About 30% faster while drawing those (rough sketch below).
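A rough sketch of what the hand drawing looks like with transformVertices (simplified, the real hand polygons and lengths differ):
var hand = [0,-60, 3,0, -3,0]; // thin triangle pointing up, centered on 0,0
function drawHand(angle) {
  // rotate and translate the polygon in one call instead of doing the math per point
  g.fillPoly(g.transformVertices(hand, { x:88, y:88, rotate:angle }));
}
drawHand(2 * Math.PI * (new Date()).getMinutes() / 60);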
I have added expensive code for tracking time, so the absolute times are inflated a lot by that. The overview of the tracked times can be printed with printPerfLog(). With time tracking deactivated it is still a bit too slow for Amazfit watchfaces with seconds. It is, however, a lot closer to sub-second rendering than before.
...
drawImage                      last: 88  average: 86  count: 2 total: 173
drawIteratively                last: 282 average: 282 count: 1 total: 282
drawIteratively_handling_Image last: 97  average: 99  count: 2 total: 198
drawIteratively_handling_Poly  last: 62  average: 62  count: 1 total: 62
In this example the two drawIteratively_handling_Image take 25ms longer than the two drawImage calls they wrap. The logging uses about 8ms per stored element (startPerfLog and endPerfLog combined).
It seems the remaining 4.5 ms per call have been used by an if checking an object property, a switch statement and the function call to my drawImage function.
Is that expected? 4.5 ms at 64MHz would be about 280k instructions, that seems somewhat excessive to me 😉. -
-
I have just pushed changes to convert a good part of the Amazfit features. There is still some fine tuning to do, but there are watchfaces that convert pretty nicely. The screenshots were made by taking watchfaces from https://amazfitwatchfaces.com/ and the decompiler from the Help page on https://v1ack.github.io/watchfaceEditor/.
Reading all image data from a file using length and offset keeps RAM usage manageable, but now rendering speed remains a big problem. The more complex watchfaces take up to 1.6 seconds for one rendering pass. That's way too slow to show, for example, seconds while unlocked, and it eats battery like there's no tomorrow.
@Gordon: Any ideas how to get some extra speed from the rendering?
I have tried rendering everything to a buffer and then writing the result to the display, but that takes even longer since I need to write the full resolution not only for the background, but also an additional time when everything else has been rendered to the buffer. It seems rendering to the off-screen-buffer is comparable in speed to directly rendering to the display. Drawing a full screen at 3bpp alone takes about 95ms. -
While that's fine, that is still quite a lot - 10 times as much as some watchfaces.
The simpleanalog one has more like 15%, which would be about the bare minimum for something useful. The digitalretro watchface is aimed at being a feature complete demo and has nearly all possible weather codes covered which are currently 48 images at 63*64 pixels.
if the images are stored as binary files in Storage,
Since I don't want to clutter the file system with 50-100 files, maybe writing offsets and lengths to the json resources file and only the actual pixel data into one big concatenated "image" file would be a reasonable way? Reading directly should still be possible using offset and length parameters.
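Roughly what I mean (a sketch with made-up file names and json layout, assuming Storage.read accepts offset and length as described in the reference):
// resources.json could look like:
// { "sunny" : { "offset":0, "length":1234, "width":63, "height":64, "bpp":3 }, ... }
var res = require("Storage").readJSON("resources.json");
function getImage(name) {
  var r = res[name];
  return {
    width : r.width, height : r.height, bpp : r.bpp,
    // read only this image's bytes out of the concatenated blob
    buffer : require("Storage").read("images.bin", r.offset, r.length)
  };
}
g.drawImage(getImage("sunny"), 0, 0);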
-
The Amazfit decompiler creates a very similar json, but it numbers all extracted images in series and references them with a starting image and a count. That's bad for making changes before recompiling, since inserting an image means renaming all files after it and changing all references. But it is probably easy enough to either automatically convert the referenced "blocks" of images to directories for the new format or support this naming scheme optionally. Both ways should lead to at least somewhat working automatic ports. Changes would still need manual modifications. At that point renaming the files and using the hierarchical structure would be better. The current feature set is probably enough for some of the watch faces, but there are still features left.

It seems there was development on the Amazfit side of things since I last tried it. The last version of this editor that I tried was completely unusable and broken. That seems to be a lot better now. Actually the format seems to have had some changes too, I will have to take a closer look.
https://v1ack.github.io/watchfaceEditor/ -
@HughB
Yes, a tutorial would be useful. Since the file formats are probably going to change, I would hold off on that a bit though.

@Gordon
I pushed some commits which help with "perceived" performance. There were some bugs in setting intervals and timers, so the watchfaces were being drawn much more often than necessary. The conversion of compressed or b64 image strings to buffers for drawing is now cached, which saves some expensive operations on every refresh. There is still lots of room for improvement. At least figuring out what actually has to be redrawn would help.
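The caching is conceptually just memoizing the decompressed buffer per resource, something along these lines (simplified sketch, not the exact imageclock code):
var imageCache = {};
function getCachedImage(name, compressedBase64) {
  var img = imageCache[name];
  if (!img) {
    // decompress once and keep the resulting buffer for later refreshes
    img = {
      width : 63, height : 64, bpp : 3, // sizes just for illustration
      buffer : require("heatshrink").decompress(atob(compressedBase64))
    };
    imageCache[name] = img;
  }
  return img;
}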
The digitalretro watchface needs about half of the available RAM on the Bangle 2, so my focus for now is drawing speed, more because of the energy saved by doing fewer calculations than anything else, since watchfaces are not that dynamic when drawn only every minute or even every second.
Would creating the script in the browser help with memory? The reading of the resources file and getting it into an object is fast enough, on the order of 80ms. Is the script code accessed directly from flash, or does it get loaded into memory before it is executed? Can I actually avoid getting the image data into RAM until it is needed there for drawing?

The idea comes from the format that is created by the unofficial de/compiler for Amazfit BIP watchfaces. I have a watch face I like for my BIP S that is now easy to port. Since there seem to be thousands of these unofficial watchfaces for the BIP S, maybe there are some that are worth porting over to a similar format, but not worth creating a dedicated app for every single one. Porting would for the most part be slight structural changes to the decompiled json. Probably most of that can be automated.
You can connect to the watch with the IDE and see the log printed in the console. Can the H10 have multiple connections? Maybe it was still connected to something else during your tries? My Wahoo Tickr X2 indicates if it is connected, but not if one or both possible connections are currently in use.