Saturday, August 4, 2012

On Machine Emulation Slowdown


The worst-case optimal slowdown for a virtual machine A being emulated on a machine B can be found by optimally emulating B in A, running that emulation on B, and comparing the speed of this emulated B to the speed of B itself. (If the language of A is less expressive than the language of B, then emulating B in A is impossible, and the worst-case slowdown doesn't make sense, or can be said to be infinite.)
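In symbols (the notation here is mine, not the post's): write T_M(p) for the time machine M takes to run program p, and emu_B for an emulator of B written in the language of A. Since running p through emu_B is always one way to carry out p's task on A, the worst-case slowdown of A on B satisfies

\[
\mathrm{slowdown}(A \text{ on } B) \;\le\; \sup_{p} \frac{T_{A \text{ on } B}\big(\mathrm{emu}_B(p)\big)}{T_B(p)}.
\]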

So any emulator of B written in the language of A gives us an upper bound on the worst-case slowdown necessarily incurred when using the virtual machine A instead of B.

Similarly, when a high-level language A compiles to a low-level language B, an interpreter for B written in A gives us an upper bound on the worst-case slowdown induced by programming in A instead of B. (It should be noted, however, that this factor will tend to be much worse than the average slowdown.)
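To make this concrete, here is a toy measurement of that bound (not from the post): C plays the role of the high-level language A, and an invented two-register bytecode machine plays the role of B. The machine, its opcodes, and the workload are all made up for this sketch; timing the same computation natively and through the interpreter yields an empirical upper bound on the slowdown.

#include <stdio.h>
#include <time.h>

/* Opcodes of the invented machine B: acc += i; i++; branch if i != n; stop. */
enum { OP_ADD, OP_INC, OP_BNE, OP_HALT };

typedef struct { int op, target; } Insn;

/* The interpreter of B, written in "A" (here, C): returns the sum 0..n-1. */
static long interp(const Insn *prog, long n) {
    long acc = 0, i = 0;
    int pc = 0;
    for (;;) {
        switch (prog[pc].op) {
        case OP_ADD:  acc += i; pc++; break;
        case OP_INC:  i++; pc++; break;
        case OP_BNE:  pc = (i != n) ? prog[pc].target : pc + 1; break;
        case OP_HALT: return acc;
        }
    }
}

/* The same computation written directly: the "native B" baseline. */
static long native(long n) {
    long acc = 0;
    for (long i = 0; i < n; i++) acc += i;
    return acc;
}

static double secs(clock_t a, clock_t b) { return (double)(b - a) / CLOCKS_PER_SEC; }

int main(void) {
    const long n = 100000000L;
    const Insn prog[] = { {OP_ADD, 0}, {OP_INC, 0}, {OP_BNE, 0}, {OP_HALT, 0} };

    clock_t t0 = clock();
    long r1 = native(n);
    clock_t t1 = clock();
    long r2 = interp(prog, n);
    clock_t t2 = clock();

    printf("native: %ld in %.3fs; interpreted: %ld in %.3fs; slowdown <= %.1fx\n",
           r1, secs(t0, t1), r2, secs(t1, t2), secs(t1, t2) / secs(t0, t1));
    return 0;
}

(If the compiler optimizes the native loop into a closed form, the measured ratio grows, but it remains a valid, if pessimistic, upper bound.)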

Note: When designing a high-level language A to be compiled to B, one could always add the primitives available in B to A, making the worst-case slowdown equal to 1 (C may be an example of a language that does this, since it is possible to include assembly in a C program). However, the inclusion of B in A should be done very carefully, as it could break the nice properties of A.
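A minimal sketch of that escape hatch, assuming GCC or Clang on x86-64 (the function name is invented): the inline assembly below is emitted as a single native instruction, so for this fragment the slowdown factor is exactly 1.

#include <stdio.h>

static long add_native(long a, long b) {
    long result;
    /* GCC extended inline assembly: one x86-64 instruction, no emulation overhead. */
    __asm__("leaq (%1,%2), %0" : "=r"(result) : "r"(a), "r"(b));
    return result;
}

int main(void) {
    printf("%ld\n", add_native(40, 2));   /* prints 42 */
    return 0;
}

The cost is the one just noted: the compiler cannot analyze what the assembly does, so whatever guarantees A normally provides (type safety, portability, optimizability) stop applying to that fragment.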
