
Nope, no magic corruption :) Only different decimal precision.
Look at the code at lines L57-L60:
/*57*/ var newDiv = 0xE4C000;
/*58*/ var msband = newDiv & 0xFFFF0000;
/*59*/ var midand = newDiv & 0x0000FF00;
/*60*/ var lsband = newDiv & 0x000000FF;
Going back from lines L60, L59, L58: these assign 3 variables using the value of a variable called newDiv and some bitwise operations. What's the value of newDiv at that moment in time? It's exactly 0xE4C000, because you just assigned that value at L57.
Anything that happened before is irrelevant. To demonstrate it (start with correct.js):

Add var bla = newDiv to L45.
Add print('bla == 14991360:', bla == 14991360) to L66.

It will print
bla == 14991360: false
so the bla variable still holds that almost-14991360-but-not-exactly value. Just type bla - 14991360 and you get 0. Or type (bla - 14991360)*1e9, and you get the same 1.86264514923.
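The same almost-but-not-exactly value can be reproduced in plain Node. A minimal sketch, assuming the stored value sits one ULP above 14991360 (the spacing between adjacent doubles near 14991360 is exactly 2^-29, which is where the 1.86264514923 comes from: 2^-29 * 1e9 ≈ 1.8626):

```javascript
// Hypothetical reconstruction: a double one ULP above 14991360.
// (2^-29 is the spacing between adjacent doubles in that range.)
var bla = 14991360 + Math.pow(2, -29);

console.log(bla == 14991360);         // false: they are different doubles
console.log(bla - 14991360);          // ≈ 1.86e-9, displays as 0 at low print precision
console.log((bla - 14991360) * 1e9);  // ≈ 1.86264514923
```

An interpreter that prints fewer significant digits shows bla and 14991360 as the same number, even though == can still tell them apart.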
No corruption here, as far as I can tell, only different fraction handling.

'It's exactly 0xE4C000, because you just assigned that value at L57'
Yes, and now you have proved my point. We are masking bits with identical hex values. In either case the answer should be identical, as we are dealing with bits. That's the reason for performing a mask operation: to detect which bits are set.
'as far as I can tell, only different fraction handling'
That case is using a floating point value for input. The example provided used the same integer equivalent of the hex notation. I left out the parseInt() conversion as the browser tests didn't need it. We have the node.js test, the Firefox browser test and my Chrome browser test, which produce correct, identical results in these three examples. In this instance, on one device using a different mechanism, we get different, conflicting results. We shouldn't.
I'm reading up at mozilla . . .
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Bitwise_Operators
"Bitwise operators treat their operands as a sequence of 32 bits (zeroes and ones), rather than as decimal, hexadecimal, or octal numbers."
"The operands of all bitwise operators are converted to signed 32-bit integers in two's complement format, except for zero-fill right shift which results in an unsigned 32-bit integer."
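That conversion is the key detail: before any mask is applied, the operand is truncated to a 32-bit integer, so two doubles that differ only in the fraction mask to identical results. A small sketch (the one-ULP offset is my assumption, for illustration):

```javascript
var exact = 14991360;                     // 0xE4C000
var offBy = 14991360 + Math.pow(2, -29);  // one ULP higher: a slightly "off" double

console.log(offBy === exact);                               // false: different doubles
console.log((offBy & 0x0000FF00) === (exact & 0x0000FF00)); // true: both truncate to 14991360 first
```

So identical mask results do not prove the two inputs were identical Numbers; they only prove the inputs truncate to the same 32-bit integer.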
As these are numbers, it shouldn't matter whether they are written in decimal or hex notation. The browser tests show this. (A thought: aren't integers converted to floating point under the hood? Hmmmm)
I did notice that the Mozilla site uses integer constants for their flags, rather than hex notation as I did. Could this be a difference between C/C++ and JavaScript?
Testing that . . .
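A quick check (my sketch, not the original test files) shows that hex and decimal literals produce the very same Number, so the notation cannot be the culprit:

```javascript
console.log(0xE4C000 === 14991360);  // true: one Number, two spellings
console.log((0xE4C000 & 0xFFFF0000) === (14991360 & 0xFFFF0000)); // true
console.log((0xE4C000 & 0x0000FF00) === (14991360 & 0x0000FF00)); // true
```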
Thu 2019.07.25
Thank you @AkosLukacs for the speedy reply, and thanks for testing on a different OS and browser. At least I can be assured I'm not going insane. ;) (yet)
It always amazes me how each of us tackles problems. I'll take time to study the effort you put in. Still not solved though.
After I posted and gave it some thought, I wondered if a floating point issue might be at hand.
But if that were true, then lines L43-L44 in correct.js would mean that the assignment of the same value is being corrupted. This would imply that the floating point value of an identical value changes after the first assignment!!!
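Assignment can be ruled out directly: copying a Number copies its exact 64-bit pattern, even for values with a messy decimal expansion. A minimal sketch:

```javascript
var a = 0.1 + 0.2;      // 0.30000000000000004: inexact in decimal, but a fixed bit pattern
var b = a;              // plain assignment: copies that exact pattern

console.log(b === a);   // true, always: assignment never alters the value
console.log(a === 0.3); // false: the inexactness came from the arithmetic, not the assignment
```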