@Gordon Thanks for the tip, but it did not help.
I still get this at server startup:
Inactivity detected restarting webserver...
Webserver restarted.
>Uncaught Error: CIPSERVER failed (Timeout)
 at line 1 col 53
throw Error("CIPSERVER failed ("+(a?a:"Timeout")+")");
                                                    ^
in function called from system
Here is the inactivity checker, with a 2-second delay before the server is started again:
function checkInactivity() {
  if (Date.now() - lastPageHandled > 60000) {
    clearInterval(checkInactivityIntervalId);
    lastPageHandled = Date.now();
    // Close the server and reopen it after a short delay.
    console.log("Inactivity detected restarting webserver...");
    if (server) { server.close(); }
    server = undefined;
    setTimeout(function() {
      server = http.createServer(pageHandler);
      if (server) {
        console.log("Webserver restarted.");
        checkInactivityIntervalId = setInterval(checkInactivity, 2000);
        server.listen(80);
      } else {
        console.log("ERROR createServer() returned false.");
        setTimeout(function() { reboot(); }, 5000);
      }
    }, 2000);
  }
}
-
@Gordon : Closing and recreating the server does not seem to resolve the jam.
I made the inactivity checker like this:

function checkInactivity() {
  if (Date.now() - lastPageHandled > 20000) {
    clearInterval(checkInactivityIntervalId);
    lastPageHandled = Date.now();
    console.log("Inactivity detected restarting webserver...");
    if (server) { server.close(); }
    server = http.createServer(pageHandler);
    if (server) {
      console.log("Webserver restarted.");
      checkInactivityIntervalId = setInterval(checkInactivity, 2000);
      server.listen(80);
    } else {
      console.log("ERROR createServer() returned false.");
      setTimeout(function() { reboot(); }, 5000);
    }
  }
}
Should I do some deeper cleanup/reset/disconnection than just recreating the http server?
Also I get these errors during the restart:
>Uncaught Error: CIPSERVER failed (Timeout)
 at line 1 col 53
throw Error("CIPSERVER failed ("+(a?a:"Timeout")+")");
                                                    ^
in function called from system
-
@Gordon A faster timeout on the client side makes the situation worse: a request that has already gone out from the client cannot be aborted in a way that removes it from the buffers on the network or server side, so issuing a new request after each timeout just adds to the pile. The request can be aborted in the client-side AJAX code, but I think that has no effect on the server side.
Of course, after a timeout I could wait longer before issuing the next request, but lengthening the timeout itself already achieves much the same thing.
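To illustrate what I mean, here is a sketch of a client-side poll that backs off after each failure, built on the httpGetAsync helper from my client code (pollWithBackoff, retryDelay and the "mycb" callback name are hypothetical, not from my actual code):

var retryDelay = 1000; // start with a 1-second pause between attempts

function pollWithBackoff(theUrl) {
  httpGetAsync(theUrl, "mycb",
    function ok(data) {
      retryDelay = 1000; // success: reset the back-off
      setTimeout(function() { pollWithBackoff(theUrl); }, retryDelay);
    },
    function err() {
      // Failure or timeout: double the pause, capped at 60 seconds,
      // so retries don't pile more requests onto a jammed server.
      retryDelay = Math.min(retryDelay * 2, 60000);
      setTimeout(function() { pollWithBackoff(theUrl); }, retryDelay);
    });
}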
Using WebSockets is a bit more complicated (I have not done that before, and I like the idea of a stateless, client-agnostic server), so I would not like to go there now that the requests actually work :-)
How should I close and reopen the server in a sound way (code example)?
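For reference, here is the kind of thing I imagine, based on the Node.js pattern of tracking open sockets and ending them before close(). This is only a sketch: I do not know whether Espruino's http subset emits the 'connection' event at all, so that part is an assumption.

var sockets = [];

function startServer() {
  server = http.createServer(pageHandler);
  // Assumption: a 'connection' event as in Node.js; Espruino's http
  // subset may not provide this, in which case another mechanism is needed.
  server.on('connection', function(sock) {
    sockets.push(sock);
    sock.on('close', function() {
      var i = sockets.indexOf(sock);
      if (i >= 0) sockets.splice(i, 1);
    });
  });
  server.listen(80);
}

function restartServer() {
  // End lingering sockets first, so close() can actually finish.
  sockets.forEach(function(sock) { sock.end(); });
  sockets = [];
  if (server) { server.close(); }
  server = undefined;
  setTimeout(startServer, 2000); // give the stack time to free the port
}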
By the way, thank you so much for helping me out!
-
@Gordon Unfortunately no help or different behavior in this case.
The 60-second timeout helps give Espruino time to answer when several requests are buffered to be handled. The server already responds with a Connection: close header, but that does not help when the request never completes properly. Yes indeed, the situation is like a DOS attack, but it should recover after the spam is over, and that is not the case at the moment.
Next I will try the wifi disconnect trick. But how does the server know when to do that, since everything seems fine on its side?
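For the record, this is roughly the disconnect/reconnect I plan to try, triggered from the same inactivity checker. A sketch only: I am assuming the Wifi module's disconnect()/connect() API here, which may differ if the ESP8266 is driven over AT commands, and the SSID/password are placeholders.

var wifi = require("Wifi");

function resetWifiAndServer() {
  if (server) { server.close(); server = undefined; }
  wifi.disconnect();
  setTimeout(function() {
    // Placeholder credentials; reboot() is the same last-resort
    // fallback as in my inactivity checker above.
    wifi.connect("MY_SSID", { password: "MY_PASSWORD" }, function(err) {
      if (err) {
        console.log("Wifi reconnect failed: " + err);
        setTimeout(function() { reboot(); }, 5000);
        return;
      }
      server = http.createServer(pageHandler);
      server.listen(80);
      console.log("Wifi and webserver restarted.");
    });
  }, 2000); // small pause before reconnecting
}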
-
One difficulty is that in the client-side AJAX I cannot make the HTTP request with a "Connection: close" header, since it is forced to keep-alive.
Here is my client side request function:
var xhr;

function httpGetAsync(theUrl, jsonpcb, ok, err) {
  if (xhr && xhr.readyState != 4) {
    xhr.abort(); // cancel any request still in flight
  }
  xhr = jQuery.ajax({
    type: 'GET',
    headers: { Connection: 'close' }, // No effect
    url: theUrl,
    cache: false,
    timeout: 60000,
    jsonpCallback: jsonpcb,
    crossDomain: true,
    dataType: 'jsonp',
    error: err,
    success: ok,
    beforeSend: function(xhr, settings) {
      xhr.setRequestHeader("Connection", "close"); // No effect
      settings.headers = { Connection: 'close' };  // No effect
    }
  });
}
But in Wireshark I can still see "Connection: keep-alive" in the request's HTTP headers. (In hindsight that makes sense: with dataType: 'jsonp' jQuery issues the request via a script tag, so none of the header settings apply, and browsers treat Connection as a forbidden header that scripts cannot set even on plain XHR.)
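Since the client cannot control the Connection header, the closest workaround seems to be enforcing it from the server. A sketch of a JSONP pageHandler doing that (the payload and callback handling are hypothetical, not my actual handler):

// Always answer with Connection: close so the socket is not kept
// alive. Espruino's url.parse() reads the JSONP callback name from
// the query string.
function pageHandler(req, res) {
  lastPageHandled = Date.now();
  var q = url.parse(req.url, true).query;
  var cb = (q && q.callback) ? q.callback : "callback";
  res.writeHead(200, {
    "Content-Type": "application/javascript",
    "Connection": "close"
  });
  res.end(cb + "(" + JSON.stringify({ ok: true }) + ");");
}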
@Gordon Now it recovers! Great! Thanks!
Of course that does not resolve the root cause...