Bangle.js + React.js

  • Hi all!

    I have been experimenting with the Bangle.js since getting one, and the first question that popped to mind was whether or not it could run react. Turns out, it doesn’t seem likely: 64kB of RAM! However, that doesn’t stop a hacker, and hackers are who own these, right?

    I’ve managed to figure out an interesting way to use react with Bangle.js. This post will assume you have used react (or, preferably, react-native) in some capacity.

    Here is a short demo of what it looks like: https://twitter.com/ericlewis/status/1203556182973198336

    Before we go much further, the code for this isn’t quite ready for public consumption yet. But I will describe how it works, what I did, and what’s next with possibly other things sprinkled in. Once I get things cleaned up and streamlined (read: easier to use) I will publish all of the code necessary for what is described. Let’s begin!

    Why?
    It’s always important to start with why and there is a why beyond intellectual curiosity in this case.

    Upon receiving the Bangle and reading the reference materials, I came to the sobering conclusion that creating anything complicated would require a lot of thought and planning about pixels. Less than ideal.

    As many web developers are aware, we don’t really need to do that. We don’t like doing that. We have flexbox.

    So the first why is: I wanted to make UI for the Bangle without worrying about pixels.

    Of course, react doesn’t have flexbox built into it; that’s the DOM’s job. And since we have used react for the web before, we know that there is a library called react-dom which gets used to render our app. React is an expressive way to think about and reconcile trees of data. React has a thing called react-reconciler which does exactly what you might suspect. We could think of ReactDOM as a react-reconciler!

    But what does that mean for us? Well, it means we can write our own reconciler. Various projects exist which do exactly this; they are often thought of as custom react renderers. React-native does this in the context of native mobile apps. Ink does it to build CLI apps with react. We are going to do it in order to create a program which can run on the Bangle. That’s right: we are going to create a custom react-reconciler implementation which can generate commands like g.drawString()!
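
    To make that concrete, here’s a heavily abbreviated sketch of a host config for the react-reconciler package. This isn’t my actual code (that’s not published yet): a real host config needs many more methods, and emitCommands is a placeholder for the tree walker.

      const Reconciler = require('react-reconciler');

      const hostConfig = {
        supportsMutation: true,
        // Our "instances" are plain objects describing what to draw
        createInstance: (type, props) => ({ type, props, children: [] }),
        createTextInstance: (text) => ({ type: 'text', text }),
        appendInitialChild: (parent, child) => parent.children.push(child),
        appendChildToContainer: (container, child) => container.children.push(child),
        prepareForCommit: () => null,
        resetAfterCommit: (container) => {
          // Walk the finished tree and emit an Espruino graphics string,
          // e.g. 'g.clear().drawString("hi",0,0)'
          console.log(emitCommands(container)); // emitCommands: placeholder
        },
        // ...plus the other methods the reconciler requires, omitted here
      };

      const renderer = Reconciler(hostConfig);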

    Things haven’t gotten too weird yet. It’s reasonable to want to create a static generator using react. What’s weird is flexbox: how on earth do you do that when we don’t have a DOM to lean on?

    The answer is Yoga from Facebook, a cross-platform library that implements the flexbox layout algorithm. We use the JavaScript flavor. It’s a match made in heaven for react: its API lines up nicely with the react-reconciler API, which means we can use it to create a shadow tree of our potential layout, then step through it to assign x/y values to our generated output. This sounds drastic, but the code isn’t as scary as it sounds. It’s a fancy way of saying we know all the x & y coordinates without having to do the math or think hard about trees ourselves. Styling is much simpler since it cascades: we apply it before rendering the children, and children apply theirs before their content.
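
    To give a feel for it, here’s roughly how the yoga-layout JS API hands us coordinates on the Bangle’s 240x240 screen (simplified; the real reconciler builds this tree from react elements):

      const yoga = require('yoga-layout');

      const root = yoga.Node.create();
      root.setWidth(240);
      root.setHeight(240);
      root.setJustifyContent(yoga.JUSTIFY_CENTER);

      const child = yoga.Node.create();
      child.setWidth(100);
      child.setHeight(20);
      root.insertChild(child, 0);

      root.calculateLayout(240, 240, yoga.DIRECTION_LTR);
      const x = child.getComputedLeft(); // Yoga did the math for us;
      const y = child.getComputedTop();  // these feed g.drawString(text, x, y)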

    Okay, wow. That was a lot of stuff. Let’s recap where we are now: we have created a way to use react to generate Espruino graphics commands that render a static view on the watch. Nice.
    Aside here: you’re probably wondering how we are writing react code without, you know, DOM elements. What are our components? The answer: there are two primitives currently, View & Text. They do what you imagine they would, and map roughly to div & span. This is a similar concept to react-native, where a handful of primitive components correspond to underlying native views. In our case, they correspond to draw commands and are used by Yoga for layout. More components could be added to support other graphics features like polygons, images, etc.
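
    A sketch of what using the two primitives looks like (style names are illustrative; the final API may differ):

      function Counter({ count }) {
        return (
          <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
            <Text>Count:</Text>
            <Text>{count}</Text>
          </View>
        );
      }
      // Each Text becomes (roughly) a drawString in the emitted chain,
      // with x/y positions computed by Yoga from the flexbox styles.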

    Static views are woefully boring, though. And we aren’t using much react since state can’t really change. This next part is where things get kind of complicated, mixed up, really weird, and uncomfortable.

    To understand what we are going to do next, we will need to take a small journey into how two things work: the Apple Watch and react-native. Not like... how they work together, but we will get to that...

    The original Apple Watch and react-native have a very odd similarity in their functionality: they’re asynchronous by nature. The original Apple Watch didn’t have the battery to run full-blown applications on it, so apps were written by creating a host extension, essentially a process living on the iPhone that could communicate with the Apple Watch via Wi-Fi or Bluetooth. The views were created statically with storyboards; iOS would run your “watch app” code and transmit the view data to the watch, and the watch would send events back to your watch app extension. To be continued..

  • We can assume it’s obvious now that the Apple Watch works asynchronously, taking advantage of its bigger sibling’s faster processor, networking abilities, and battery. Interesting.. :)

    React-native is also asynchronous by nature! React-native lets you write react code which then runs as native Android and iOS applications. It does this in a similar way to us: generating a shadow tree which is then used to tell the native system what to draw, similar to us generating graphics commands. But unlike Espruino, you can’t just load up some JavaScript in a native app and talk to the native view system. You need a bridge to communicate from a JavaScript context or VM to native and vice versa. In react-native’s case, a queue of commands builds up on the JS side describing things like layout (similar to our drawString!), native methods to call, etc., which occasionally gets “flushed” to the native side, where native code processes the commands and actually draws things on the screen. This is asynchronous, platform agnostic, and sounds familiar. Oh, and react-native also uses Yoga to implement flexbox.
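
    A toy version of that queue-and-flush idea, just to show the shape of it (this is not react-native’s actual bridge code; sendToNative is a made-up hook):

      const queue = [];
      const enqueue = (cmd) => queue.push(cmd);

      function flush(sendToNative) {
        if (!queue.length) return;
        // Batch everything queued since the last flush into one payload
        sendToNative(queue.splice(0).join(';'));
      }

      enqueue('g.clear()');
      enqueue('g.drawString("hi",10,10)');
      flush((batch) => console.log(batch)); // g.clear();g.drawString("hi",10,10)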

    Now that we have some color on these two systems, let’s bring it all together.

    From the Apple Watch, we draw inspiration from its host / extension model. We will take advantage of the phone for running our react code.

    We have Bluetooth on both our phone and the Bangle, so we can easily communicate back and forth via Bluetooth.println and sending raw commands.

    Using our react-native inspired bridge, we can execute our react code on the phone: the react code which creates commands for drawing on the Espruino. Flush that to the native side, then use the BLE connection to transmit the drawing commands to the Bangle. And voila, we now have a way to do exactly the same thing we did earlier: statically render content to the watch. Or did we?
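
    Put together, the flush path for that static render looks roughly like this, assuming a hypothetical BangleBridge.write hook exposed by the host app to its JS engine:

      // Called after each reconciler commit
      function flushToWatch(commandString) {
        // e.g. commandString = 'g.clear().drawString("hi",10,10)'
        // The native side relays the string over BLE, and the Bangle's
        // interpreter evaluates it as if it were typed at the console.
        BangleBridge.write(commandString + '\n');
      }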

    Now that the react code is constantly running on our phone, any change to state will cause it (we are going to call it react-bangle now) to flush fresh drawing commands for transmitting to the watch. This means things like timers will work! Which brings us to another super power we may have overlooked:

    We have access to the real internet, at high fidelity (signal included)! Since our app is actually running on the phone in a full-blown JS engine, we have access to things like fetch! We can use our fancy timers and fetch and flexbox in react to create a complex data view on our watch which updates automatically! Neat!
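
    For example, something like this becomes possible (the URL and field names are made up; fetch and timers come from the phone-side JS environment):

      function Weather() {
        const [temp, setTemp] = React.useState('...');
        React.useEffect(() => {
          const tick = () =>
            fetch('https://example.com/weather.json') // made-up endpoint
              .then((r) => r.json())
              .then((d) => setTemp(d.temp));
          tick();
          const id = setInterval(tick, 60000); // refresh every minute;
          return () => clearInterval(id);      // each update re-flushes
        }, []);                                // commands to the watch
        return (
          <View style={{ flex: 1, justifyContent: 'center' }}>
            <Text>{temp}</Text>
          </View>
        );
      }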

    You’ll probably have noticed a couple of things by now, but I’m going to point out the less obvious one first: in our case we use a phone as a host, but more specifically it’s an app which uses JavaScriptCore as its JS engine for executing our watch app code. I drew parallels to react-native earlier because the phone app is architecturally similar, as is the react-reconciler. The biggest difference is we have a longer bridge: ours traverses JS -> Native -> BLE -> Watch. It’s basically the same thing, just a longer journey. That’s the less obvious and interesting fact!

    Now the more obvious thing: we like interacting. The Bangle has setWatch for listening to button presses. We need to link that back across our bridge. This is done as follows: before sending the react-generated command over Bluetooth, we wrap it in a few boilerplate commands which standardize two things: capturing all button presses (using Bluetooth.println to emit a chunk of JSON identifying the button), and redrawing the originally sent command (so we don’t accidentally lose focus). On our host’s native side we then link these events directly to an event emitter on the host’s JavaScript engine. You can subscribe to this event emitter as you normally would in your react watch app code and update state, which... causes the entire process to be repeated, and your watch will display the newly minted program with its fresh data!
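
    A sketch of what that boilerplate could look like on the Bangle side (not the exact emitted code; the JSON shape here is simplified):

      function draw() {
        g.clear().drawString("hi", 10, 10); // the react-generated chain goes here
      }
      [BTN1, BTN2, BTN3].forEach((btn, i) => {
        setWatch(() => {
          // Tell the host which button fired...
          Bluetooth.println(JSON.stringify({ button: i + 1 }));
          draw(); // ...and redraw so we don't lose what's on screen
        }, btn, { repeat: true, edge: 'rising' });
      });
      draw();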

    Wow, we’ve done it. We’ve figured out a way to run a react app which feels as if it were a native Bangle app (given some lag, but you’d be surprised). So what’s next?

    Well, a couple of things. For one: all we’ve managed to do is run a react app in a cobbled-together, slow manner. Getting the JS files to execute is a pain, you must start the process from your phone, etc. etc. So how do we make it easier? Well, we extend our native app’s abilities and do a little bootstrapping.

  • There is a lot of gritty detail to how my own app works which is better saved for when I can share code. But we are going to do 2.5 basic things: create a (normal) Bangle app, create a new react-bangle app, and change our host and JS engine a bit.

    First: why.
    Our goal is to create a Bangle app which launches and can tell our host app to fire up the react-bangle app we will soon create. This lets us launch a react app on our host from the watch. It’s super simple: we emulate a button press, but use a button index that doesn’t exist. This gets intercepted uniquely, and bam: we have our react-bangle app on the screen.
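
    In other words, the watch-side launcher just emits an event no real button could produce, and the host intercepts it. Something like this (the index and JSON shape are simplified stand-ins):

      // Runs on the Bangle as a tiny, ordinary app. There is no button 99,
      // so the host treats this event as "boot the react-bangle app".
      Bluetooth.println(JSON.stringify({ button: 99 }));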

    That’s cool: we could create launcher Bangle apps for each react-bangle app we make. But that’s still a pain. We have all this power, and the internet.

    Instead, we will “bootstrap” ourselves by creating an app launcher which launches our hard-coded react-bangle app. And that react-bangle app is going to be... an app launcher!

    In the current iteration it is a simple react app which pulls a JSON file from GitHub listing different app names. These map to folders, each containing two files: a main and a vendor JavaScript file. Those contain the react & react-bangle code (and whatever other packages) needed for executing on the host. This is simply create-react-app with our custom react-reconciler, using yarn build to grab the two production JS files. Our runtime JS is hardcoded into the host right now. This means we have a repo and we can display its contents on the watch. What happens when you select an app to launch?
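
    A sketch of the manifest’s shape (field names are illustrative only; each folder holds the two build outputs, main.js and vendor.js):

      {
        "apps": [
          { "name": "Weather", "folder": "weather" },
          { "name": "Counter", "folder": "counter" }
        ]
      }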

    Our bootstrapped react app has a super power in its global JS environment: a function which asks the host app to go and download the JS files for the selected app (passed as a param), spin up a new JS engine, pause the one the bootstrapped app is running in, and then execute it, just like we do normally! The new engine gets all the watch events, and it’s as though we are running an entirely different application on the Bangle. Because we are. Sorta.
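
    Conceptually, that super power is just a host-provided global, something like this (launchApp is a placeholder name):

      // Asks the host to download weather/main.js + weather/vendor.js,
      // pause this JS engine, spin up a fresh one, and run the app there.
      launchApp('weather');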

    Some extra, neat things before I sign off and get back to hacking:
    This all probably sounds crazy inefficient to you, especially transmitting those drawing commands. Not to fret, this was heavily considered in order to make the apps feel native to the watch. One such thing was hijacking all button presses. Another thing that was done: taking advantage of the double-buffered LCD mode. This gets rid of that annoying flicker, and since we know exactly when and how our draws happen (and we only call them once anyway), it’s easy to implement: you just call flip after executing the generated command. There is a subtle downside to this: for stuff that doesn’t change too often, it almost doesn’t feel like the app is functioning! But for things like counters, it’s great!

    Another bottleneck to overcome is our bandwidth: you can currently only send 16 chars at a time over the Bluetooth connection. This means we really want to reduce boilerplate on each render. This is achieved in two ways. The first way is taking advantage of chaining. All graphics commands can be chained, so in our emitted code you don’t see: g.drawString()g.drawString()g.setColor()

    You see: g.drawString().drawString().setColor()
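
    Since the link only carries 16 chars per write, the host also splits whatever it sends into chunks; a minimal sketch:

      function chunked(str, size = 16) {
        const chunks = [];
        for (let i = 0; i < str.length; i += size) {
          chunks.push(str.slice(i, i + size)); // one BLE write per chunk
        }
        return chunks;
      }
      // chunked('g.drawString().drawString().setColor()\n')
      //   -> ['g.drawString().d', 'rawString().setC', 'olor()\n']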

    Every char counts! We just saved 2. The more complex the render command, the more savings. Which leads us to another optimization: less verbose commands. While useful when coding as a human being, verbose names are non-optimal for transmitting in 16-byte chunks. You’ll remember earlier we created a bootstrap app. We can use it to create aliases for commonly used commands, then adjust our reconciler. Now, the commands emitted from earlier would look like this:
    g.s().s().c().
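
    The alias setup itself can be as simple as sending something like this once from the bootstrap (the alias names here are just examples):

      // Runs once on the Bangle; afterwards the host emits the short forms
      g.s = g.drawString; // g.s("hi",10,10) instead of g.drawString("hi",10,10)
      g.c = g.setColor;   // chaining still works: both return the Graphics object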

    That’s some pretty massive savings. And if your output is this simple, you will notice that it is almost indistinguishable from a native Bangle app (actually, it usually feels better, because there is no screen flash). This principle is applied wherever it can be, including with colors. Another help is the cascading styles, which, if used more cleverly, could result in tremendous savings for complex styles.

    What’s next?
    The reconciler still needs some work. The event emission system is janky. Work needs to be done to be able to use the sensors (which means the event system needs to not be janky). Everything is tied to a Swift app (runs on macOS / iPad / iPhone / Apple Watch 😂) and needs to be refactored. Side note: the whole Swift part of this is no more than 500 lines of code. Bluetooth + JS engine. 500 lines! The code’s not badly architected, at least. And you could totally emulate the architecture from a browser, which is why I laid out the description as I did.

    Sending entire commands over the wire isn’t super nice, and we are missing critical components like images! Adding more native components is high on the list, as well as figuring out some way to only send graphics commands to the Bangle for the pieces of the UI which actually changed (thankfully, Yoga should make this pretty trivial).

    That’s all I have for now! Going to get back to hacking. Thanks so much for reading, and follow me on Twitter if you would like to see videos and pics of the progress!

    https://twitter.com/ericlewis

  • /me clicks the "Follow" button :)

    Wasn't aware that the original Apple Watch worked this way, interesting.

  • @user106457: Simply awesome. Just noticed that you shared a Gist on Twitter about a Bluetooth Manager for the Bangle.js device. What is this? Is it something like Gadgetbridge but for iOS?

  • Sun 2019.12.08

    Thank you @user106457 for that brief post. ;-)

    Wow, that took some time to compose, I'm sure.

    I like your writing style, with the chronological entry highlights. It also makes following along a joy. Planning on (many) more tutorial articles? I see that on the horizon, peering into the crystal ball.

    Pardon me while I get a second cup of coffee to enable me to read the sequel . . . .

  • This looks really interesting - thanks! Also, interesting about the first Apple Watch - it explains a lot :)

    Since you're doing a bunch of stuff on the phone, I wonder whether you could take advantage of something like: https://github.com/espruino/EspruinoWebTools/blob/master/examples/imageconverter-html.html

    So you're basically doing all the rendering on the phone, compressing, then pushing it to Bangle.js.

    Normally that'd be a bit slow, but I reckon if you went with the 120x120 graphics mode and maybe compressed to the 4 bit Mac palette - even better if you could render only what changed - you could end up with something relatively performant.

  • Interesting. It could be just like some remote desktop protocol - sending images to the watch, sending events back to the phone, all logic on the phone - then it could be made more efficient than evaluating "g.s().s().c()". Or it could be some interesting mix of JavaScript code running on both sides, so the part of the app that lives on the watch still works somehow when the phone is disconnected.

  • Yes, I have considered this too! For longer programs it could be useful. But one big reason it works this way (minimizing the involvement of native code) also takes a page from the Apple Watch: if the device running Espruino ever becomes reasonably powerful, we can simply move our code from being executed on the host to being executed on the more powerful Espruino. The other reason for avoiding that is that I plan to figure out delta updates, so I can send far shorter commands.

    Edit: I see why you suggested this, it’s almost constant in execution time.

  • That’s basically how it works, but I send strings instead of images.

  • Thank you! In the future I will open the code :)

  • It’s like Puck.js for Swift

  • ...Bangle.js becomes the remote (X11) display server for the phone... (or other BLE-enabled 'computing' center). As long as it is not full images - but predefined sprites (w/o parms and with parms for predefined variations/sizes) - it is a very efficient way to do things. Some ideas just come and go, float back and forth - between servers and clients - partially and fully... If it is full images, one full display image in 16 colors eats up - uncompressed - almost all the RAM there is... not much left for communication - so streaming would be the only answer...

  • Indeed, that is sort of what I would like to go for: symbols for common ideas and whatnot. But again, everyone would be surprised how performant the string method can be, especially with optimizations that prevent things like sending graphics commands which would result in draws off screen.

  • Starting to open pieces; this is used to generate commands from react: https://github.com/ericlewis/react-bangle
