`yield` keyword in Espruino

Posted by @ceremcem
  • Hi all,

    Is there a plan to implement the yield keyword in Espruino?

    Motivation

    The main purpose is to be able to translate simple Python code like this:

    from time import sleep
    do_something()
    while True:
        x = do_something_else()
        if x == 0: 
            break
        sleep(0.1)   # wait 100ms 
    do_another_thing()
    

    into this form (in LiveScript):

    do-something!
    while true
      break if do-something-else! is 0
      yield new Promise -> set-timeout it, 100
    do-another-thing!
    

    or in JavaScript:

    doSomething();
    for (;;) {
      if (doSomethingElse() === 0) {
        break;
      }
      (yield new Promise(fn$));
    }
    doAnotherThing();
    function fn$(it){
      return setTimeout(it, 100);
    }
    

    as we discussed in this LiveScript thread. There are two things that come into play: Promises, which you seem to have discussed earlier, and the yield keyword.

    ...or is there any external library/technique to achieve something similar?

  • There isn't really a way to implement yield in Espruino at the moment. Due to the way the parser works, stopping and resuming execution is tricky.

    However, if the LiveScript compiler is smart, it might be able to split the code up into basic blocks and call them via callbacks.
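
    For illustration, here is roughly what that splitting could look like if done by hand, using the placeholder functions from the question (a sketch, not output from any compiler):

    doSomething();
    function loop() {
      if (doSomethingElse() === 0) {
        doAnotherThing();     // the code after the loop becomes part of the callback
        return;
      }
      setTimeout(loop, 100);  // 'sleep(0.1)' becomes re-scheduling the next block
    }
    loop();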

  • There are ways to do blocking delays in Espruino, but that's not encouraged.

    What you put up above seems to call for a

    doSomething();
    var inter = setInterval(function () {
      // ... function here; instead of break, call clearInterval(inter)
    }, 100);
    doOtherThing();

    Or am I missing something?

  • The main issue here is that Python is multi-threaded... JavaScript is not at all. A different approach has to be taken in JavaScript... something, for example, along the lines @DrAzzy suggests, or a sequencer, an 'async' pub-sub mechanism, a producer/consumer setup.

    You will find more about this sequencer, which even allows appending to or prepending to the queue of actions. Prepended actions will happen next, even though follow-ups of the current action are already in the queue. Prepending is needed for post-processing of values that are needed by scheduled follow-up actions. The post-processing could happen in the scheduled follow-up action(s), but that would duplicate and convolute code, because post-processing for consumption may vary with the result and is really not the consumers'/recipients' job but the producers'/deliverers' (see application of separation of concerns and CRC - Class-Responsibility-Collaborator design).
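
    A minimal sketch of what such a sequencer could look like - this is not @allObjects' actual code; the API (append/prepend plus a done callback per action) is an assumption:

    var sequencer = {
      queue: [],
      busy: false,
      append: function (action) { this.queue.push(action); this.kick(); },
      prepend: function (action) { this.queue.unshift(action); this.kick(); },
      kick: function () {
        if (this.busy || !this.queue.length) return;
        this.busy = true;
        var action = this.queue.shift(), self = this;
        action(function () { self.busy = false; self.kick(); }); // each action calls done() when finished
      }
    };

    // usage: producers append work; post-processing can be prepended so it runs
    // before actions that are already queued
    sequencer.append(function (done) { console.log("produce"); setTimeout(done, 100); });
    sequencer.append(function (done) { console.log("consume"); done(); });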

    I have seen an approach in a water-timing application that has functions which call themselves on timeouts (intervals, similar to @DrAzzy's suggestion). On timeout they check particular values of particular shared (common, root-level) state variables. Depending on the condition, the function does something, including updating (some of) the state variables, and then sets a new timeout for self-invocation, or it sets a new timeout for self-invocation right away. This works for 'slow', infrequently happening, slowly reacting things where timely execution is not a big deal... if a watering happens -0/+5 minutes late, no big deal... Polling at a given time or on a longer interval and doing retries on a shorter interval can help improve the overall system throughput.
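
    A minimal sketch of that self-rescheduling pattern - the sensor read on A0, the threshold and the intervals are all made-up placeholders:

    var state = { moistureLow: false };

    function waterTask() {
      state.moistureLow = analogRead(A0) < 0.3;  // update shared state from a sensor
      if (state.moistureLow) {
        console.log("watering...");
        setTimeout(waterTask, 5000);             // retry on the short interval
      } else {
        setTimeout(waterTask, 5 * 60 * 1000);    // otherwise poll on the slow interval
      }
    }
    waterTask();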

    For timely, tightly sequenced execution there is just no way around callbacks - which can lead to callback hell - or pub-sub. With both callbacks and pub-sub one has to be aware of possible race conditions and call-stack exhaustion (circular reference / event/result dependencies).

    For producer/consumer, see Producer.Consumer.js, which talks about the use of Generators. Unfortunately, it did not make it into ES6. It did, though, make me think of an ES5/ES6 shim/backfill/... or whatever you want to call it that could facilitate it, but there is no way to avoid code inversion.
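
    For reference, in JavaScript engines that support generators (which Espruino does not, at the time of this thread), that code inversion is exactly what a small driver like the following hides - a rough sketch, not Espruino-ready:

    function run(genFn) {
      var it = genFn();
      (function step() {
        var r = it.next();
        if (r.done) return;
        setTimeout(step, r.value);   // a yielded number is treated as a delay in ms
      })();
    }

    run(function* () {
      console.log("start");
      yield 100;                     // the 'sleep(0.1)' from the original example
      console.log("done");
    });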

    Btw, you just 'earned' my 1000th post... (not to mention the countless others in personal messages)... therefore, ping me again and I will give you more details when I have a bit more time at hand.

  • @Gordon

    Umm... the LiveScript compiler is smart, but I don't think it goes that far :)

    @DrAzzy

    For now, it's best to handle these waits via callbacks; but as @allObjects mentioned, this easily leads to callback hell... LiveScript makes this problem a bit easier to deal with, but that's all that can be done without a serious hack.

    @allObjects

    Well, it seems I won the big bounty :)

    Yes, Python can be multi-threaded, but many coroutine libraries exist (including gevent, which is what we use heavily in our DCS library). gevent is a kind of "hack" in Python and works great. Guido van Rossum (the author of Python) does not like gevent, though; he is trying to achieve something of similar practicality with a library called asyncio, which is mostly influenced by the Twisted project (a very popular networking library in Python), which is in turn mostly influenced by JavaScript, which is... yeah, what Espruino has already implemented! :) So I thought maybe this is possible in JavaScript.

    I tried to read your "sequencer" code but I wasn't able to follow it, probably because I don't know JavaScript that well. But I'll try to read it again; it might be the key to the solution in my case.

  • This is the best I can do with LiveScript:

    sleep = (ms, f) !-> set-timeout f, ms 
    
    do-something!
    <- :lo(op) -> 
      x = do-something-else!
      if x is 0
        op!; return
      <- sleep 100ms 
      lo(op)
    do-another-thing!
    

    More generally, sequential-style, "multi-threaded" code could be written in LiveScript as follows:

    st = new Date! .get-time!
    td = -> (new Date! .get-time! - st) + "ms :"
    sleep = (ms, f) !-> set-timeout f, ms 
    
    do
      i = 3
      console.log td!, "start"
      <- :lo(op) ->
        console.log td!, "hi #{i}"
        i := i - 1
        if i is 0
          op!;return
        <- sleep 1000ms
        lo(op)
      <- sleep 1500ms
      <- :lo(op) -> 
        console.log td!, "hello #{i}"
        i := i + 1
        if i is 3
          op!;return
        <- sleep 1000ms
        lo(op)
      console.log td!, "heyy"
    
    do 
      a = 5
      <- :lo(op) -> 
        console.log td!, "this runs in parallel!", a
        a := a - 1 
        if a is 0
          op!;return 
        <- sleep 500ms
        lo(op)
    
    

    Output:

    0ms : start
    2ms : hi 3
    4ms : this runs in parallel! 5
    506ms : this runs in parallel! 4
    1004ms : hi 2
    1009ms : this runs in parallel! 3
    1512ms : this runs in parallel! 2
    2007ms : hi 1
    2015ms : this runs in parallel! 1
    3510ms : hello 0
    4514ms : hello 1
    5517ms : hello 2
    5520ms : heyy
    

    Compiled JS:

    var st, td, sleep, i, a;
    st = new Date().getTime();
    td = function(){
      return (new Date().getTime() - st) + "ms :";
    };
    sleep = function(ms, f){
      setTimeout(f, ms);
    };
    i = 3;
    console.log(td(), "start");
    (function lo(op){
      console.log(td(), "hi " + i);
      i = i - 1;
      if (i === 0) {
        op();
        return;
      }
      return sleep(1000, function(){
        return lo(op);
      });
    })(function(){
      return sleep(1500, function(){
        return function lo(op){
          console.log(td(), "hello " + i);
          i = i + 1;
          if (i === 3) {
            op();
            return;
          }
          return sleep(1000, function(){
            return lo(op);
          });
        }(function(){
          return console.log(td(), "heyy");
        });
      });
    });
    a = 5;
    (function lo(op){
      console.log(td(), "this runs in parallel!", a);
      a = a - 1;
      if (a === 0) {
        op();
        return;
      }
      return sleep(500, function(){
        return lo(op);
      });
    })(function(){});
    
  • Because of the distinct architectural feature of the Espruino hardware and software combo - it is driven solely by events - a sleep implemented as a loop for killing time in the application/JavaScript code is the worst thing one can do... (as it is in a browser, too...). Speaking in sloppy, non-event-oriented terms, Espruino runs mainly two code 'loops' that (can) go on at the 'same' time:

    1. the hardware/firmware code 'loop', which handles the 'low-level' things - pins, timers, hardware events, hardware-implemented I/O and communication subsystems
    2. the firmware/JavaScript code 'loop', which handles the JavaScript thread and execution (interpretation) of the JavaScript application/user code

    If something relevant for loop 2 happens in loop 1 - for example, an input pin has changed state from 0 to 1 (3.3V or 5V) and a watch is set on that pin with rising or both-edge detection - the code in loop 1 talks to loop 2 and makes 'sure' that the callback registered with setWatch(...) is called with the respective argument(s)...
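
    For illustration, registering such a callback looks roughly like this (the pin B3, the debounce value and the edge choice are just placeholders):

    setWatch(function (e) {
      console.log("pin changed at", e.time, "new state:", e.state);
    }, B3, { repeat: true, edge: "rising", debounce: 10 });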

    Sounds simple, but it is not that simple... Why? Something in loop 2 - application/user JavaScript code - may already be in progress, and a direct, immediate invocation is not possible. Therefore, the loop 1 firmware puts the pertinent information about the pin event into the JavaScript execution queue, like a work order. When the currently running user/JavaScript code - which loop 2 hosts - is done, loop 2 checks the queue and picks up the (next) ready work order...

    You may ask now: what happens if more than one thing happens in loop 1 - on the system side, so to speak - while one and the same thing is still going on in loop 2 - on the application side, so to speak? The Espruino firmware manages that with buffers:

    • buffers for incoming bytes over communication (sub)systems
    • buffers for events like pin state changes, recordings, etc.

    When the timing is right - no pressing system/hardware interrupt service is active - the loops communicate with each other. With the help of the buffers, Espruino makes sure - to a certain extent - that no system/hardware interrupts get lost.

    I used the term loop because it is a simple concept to understand: do something and then sleep until something else has to be done. But it is not really a closed loop: the sleep is like the clasp in a necklace of code. In Espruino, the clasp that makes firmware loop 1 a loop is the setup of hardware interrupt service routines. A hardware interrupt - a pin change, a subsystem having received or sent a byte, a timer overflow - makes the processor invoke the corresponding routine, and when the routine is done, the processor resumes work on loop 2, which strictly speaking isn't a loop either: the clasp that makes it a loop is the queue of JavaScript invocation contexts. When all work orders are done, the processor really goes to sleep... light or deep... depending on your setDeepSleep(...) setting. Luckily, you have setBusyIndicator(pinOrOnBoardLED) to give you feedback about Espruino's busy/sleep state (...send Espruino to the sleep clinic for observation...).
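
    As a concrete example (LED1 is the onboard LED on many boards; adjust for your hardware):

    setBusyIndicator(LED1);  // LED is lit while the interpreter is executing JavaScript
    setDeepSleep(1);         // allow the chip to enter deep sleep when idle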

  • @allObjects

    Regarding your explanations: I thought I could write a simple function to build event-like functionality that uses setWatch(...) under the hood, and I planned to ask "can we define a virtual pin that we can setWatch?" later on, but I couldn't get a pin triggered via software, because it seems setWatch is cancelled on a pin that is configured as output, and Pin.write() sets the pin's mode to "output".

    So I ended up defining some simple functions to handle some simple events:

    # application
    do
        i = 3
        console.log td!, "start"
        <- :lo(op) ->
            console.log td!, "hi #{i}"
            i--
            <- wait-for \something
            if i is 0
                op!;return # break
            lo(op)
        <- sleep 1500ms
        <- :lo(op) ->
            console.log td!, "hello #{i}"
            i++
            if i is 3
                op!;return # break
            <- sleep 1000ms
            lo(op)
        <- sleep 0
        console.log td!, "heyy"
    
    do
        a = 5
        <- :lo(op) ->
            console.log td!, "this runs in parallel!", a
            a--
            go \something
            if a is 0
                op!;return # break
            <- sleep 500ms
            lo(op)
    

    The output is:

    0ms : start
    2ms : hi 3
    3ms : this runs in parallel! 5
    4ms : hi 2
    505ms : this runs in parallel! 4
    505ms : hi 1
    1007ms : this runs in parallel! 3
    1508ms : this runs in parallel! 2
    2009ms : this runs in parallel! 1
    2508ms : hello 0
    3510ms : hello 1
    4510ms : hello 2
    4512ms : heyy
    

    My current problem seems to be solved now. I can write the code the way I wanted to, without using ES6-specific features of JavaScript.
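
    The wait-for and go helpers used above aren't shown; a minimal JavaScript sketch of such a one-shot event mechanism could look like this (the names and behaviour are assumptions, not the exact code used):

    var waiters = {};

    function waitFor(name, cb) {                 // register a one-shot callback for 'name'
      (waiters[name] = waiters[name] || []).push(cb);
    }

    function go(name) {                          // fire and clear all waiters for 'name'
      var cbs = waiters[name] || [];
      waiters[name] = [];
      cbs.forEach(function (cb) { setTimeout(cb, 0); });
    }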

    Hint: You can compile the whole code into JavaScript online at Livescript.net

  • Thanks for sharing. Will take a look at it. I expect that your additional tooling and constructs invert a loop, so you can go halfway through, start or resume another thing or loop that also goes halfway through, which creates the condition for the initial loop to continue. It is a state-related problem: a routine has to know whether it is being entered for the first time or a subsequent one, and also when it has to end.

    Regarding the use of a virtual pin to 'mess with the system': sacrificing two connected pins would get you there... not the most efficient way, but sometimes brute force is the only thing to get something going NOW...
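
    A rough sketch of that brute-force trick, assuming A0 is physically wired to A1 (both pins are placeholders):

    pinMode(A1, "input");
    setWatch(function () {
      console.log("software-triggered 'event'");
    }, A1, { repeat: true, edge: "rising" });

    digitalWrite(A0, 0);     // make sure the output starts low
    // anywhere in the code, raise the 'event' by pulsing the output pin:
    digitalPulse(A0, 1, 1);  // a 1ms high pulse on A0 triggers the watch on A1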

    I'm glad that you can keep structuring code the way you are used to, and that the tooling and a bit of code help you with it...

  • On second thought, the interval callback will run at the same time as doOtherThing(). We don't want that. Instead, we need to call doOtherThing() when (or if?) the interval loop breaks.
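
    One way to arrange that, as a sketch using the placeholder functions from earlier in the thread - pass the continuation in explicitly so it only runs after the loop ends:

    function loopThenContinue(done) {
      var inter = setInterval(function () {
        if (doSomethingElse() === 0) {
          clearInterval(inter);
          done();              // the 'break' - now hand control to what comes next
        }
      }, 100);
    }

    doSomething();
    loopThenContinue(doOtherThing);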
