I know this isn't part of the joke, but it is technically possible to support Windows software on OS X - take a look at the old WINE. I went through most of the old WINE source code in my early days of interest in Windows Internals, so I know quite a bit about how it works. It was a hefty amount of work, because it basically required re-implementing the Windows system-call interface and translating it to what is supported on Linux, but it did work at the time.
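As a crude illustration of what such a translation layer does (this is a toy sketch with a couple of real Win32 function names borrowed purely for flavour - WINE's actual implementation re-creates the Win32 API on top of the host's libraries and is vastly more involved):

```python
# Toy "translation layer": expose Windows-style API names, but delegate
# to whatever the host OS natively supports. Illustrative only - nothing
# like WINE's real implementation.
import os

def GetCurrentDirectoryW():
    # A Windows-flavoured entry point, translated straight to the
    # host-side (POSIX) equivalent.
    return os.getcwd()

def MessageBeep():
    # "Translate" a beep request into something the host understands.
    print("\a", end="")

# A program written against the "Windows" names runs unmodified on the host:
cwd = GetCurrentDirectoryW()
print(cwd == os.getcwd())  # → True
```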
For programs built on a byte-code mechanism (e.g. .NET software relies on Microsoft Intermediate Language (MSIL), which is byte-code), you can make the "virtual machine" cross-platform, and then the software will work on the supported platforms as well. This also means that, in theory, the .NET Framework can be compatible with OS X and Linux. This is exactly how Java accomplishes its cross-platform support: the source code isn't compiled down to machine code, it's compiled down to "byte-code", which is just instructions that the Virtual Machine comprehends and translates into instructions the OS can understand.
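To make the idea concrete, here is a toy byte-code interpreter (the opcode names and numbers are made up for this sketch - real VMs like the JVM or CLR are far more complex, but the core dispatch loop is the same idea):

```python
# Toy stack-based byte-code interpreter with invented opcodes.
# The same byte-code runs anywhere an interpreter exists for it -
# that is the essence of the JVM/CLR portability model.
PUSH, ADD, MUL, HALT = 0x01, 0x02, 0x03, 0xFF

def run(bytecode):
    stack = []
    pc = 0  # program counter
    while pc < len(bytecode):
        op = bytecode[pc]
        pc += 1
        if op == PUSH:                 # next byte is an immediate operand
            stack.append(bytecode[pc])
            pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == HALT:
            break
    return stack[-1]

# Byte-code for (2 + 3) * 4, independent of the host's machine code
program = bytes([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT])
print(run(program))  # → 20
```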
It would be very interesting to see something like WINE come back to life for OS X and Linux, or even for ReactOS to be developed again... but I guess interest is quite low nowadays. It requires a lot of reverse-engineering, and I guess at the time of ReactOS there was also the leaked Win2k source code (a part of it, not much), which probably helped them - a cat has been through that as well on my behalf, and according to her, the rumours of the Internet Explorer team having some knowledge of Windows Internals are indeed true.
Just to clarify, my use of "virtual machine" is in a different context from "virtual machine" in the VMware/VirtualBox sense. The former is basically a layer which translates instructions into the correct ones; the latter is true "virtualisation".
For example, take the following instructions from x86 assembly: MOV, JMP, CALL, PUSH, POP. The byte representation of the opcodes (operation codes) can be translated to something else, and then, when the virtualisation layer is used, the byte-code is fixed up and the opcodes are translated back to what they should be. Another example would be executing different versions of an operation depending on the host environment (e.g. do one thing on Windows, something else on Linux). The aim is for the virtual layer to handle it all, so a program can be written in a language compiled down to byte-code without having to be "aware" of the target environment; it is automatically translated in-memory by the virtual layer for you. Some malicious software/packers use such byte-code techniques to "obfuscate" hard-coded shell-code, where the deobfuscation routine is essentially "decryption" of the shell-code in-memory, before or after copying the buffer to executable memory. The technique has a lot of potential, but attackers managed to abuse it as well, and virtual layers of this sort in malicious software commonly fool novice analysts without breaking a sweat.
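As a toy sketch of the obfuscation side (an invented single-byte XOR scheme, not any specific packer - real ones layer far more tricks on top): the stored buffer looks like garbage to static analysis, and the "decryption" routine restores it in memory only at run time.

```python
# Toy in-memory deobfuscation: the on-disk buffer is XOR-"encrypted"
# shell-code/byte-code, restored only at run time. Invented scheme for
# illustration - real packers are considerably more elaborate.
KEY = 0x5A

def obfuscate(buf: bytes) -> bytes:
    return bytes(b ^ KEY for b in buf)

def deobfuscate(buf: bytes) -> bytes:
    return bytes(b ^ KEY for b in buf)  # XOR is its own inverse

payload = b"\x90\x90\xc3"              # stand-in for real shell-code bytes
stored = obfuscate(payload)            # what a static analyst sees on disk
print(stored != payload)               # → True (hard-coded bytes are hidden)
print(deobfuscate(stored) == payload)  # → True (restored before execution)
```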