It's certainly possible to develop on a Windows machine; in fact, my first application was exclusively developed on the old Dell Precision I had at the time :)
There are three routes:
- Install OSx86 (aka iATKOS / Kalyway) on a second partition/disk and dual boot.
- Run Mac OS X Server under VMWare (Mac OS X 10.7 (Lion) onwards, read the update below).
- Use Delphi XE4 and the macincloud service. This is a commercial toolset, but the component and lib support is growing.
The first route requires modifying (or using a pre-modified) image of Leopard that can be installed on a regular PC. This is not as hard as you might think, although your success/effort ratio will depend on how closely the hardware in your PC matches Mac hardware - e.g. if you're running a Core 2 Duo on an Intel motherboard with an NVidia graphics card, you're laughing. If you're running an AMD machine or something without SSE3, it gets a little more involved.
If you purchase (or already own) a version of Leopard then this is a gray area since the Leopard EULA states you may only run it on an "Apple Labeled" machine. As many point out if you stick an Apple sticker on your PC you're probably covered.
The second option is more costly. The EULA for the workstation version of Leopard prevents it from being run under emulation, and as a result there's no support in VMWare for this. Leopard Server, however, CAN be run under emulation and used for desktop purposes. Leopard Server and VMWare are both expensive, though.
If you're interested in option 1) I would suggest starting at Insanelymac and reading the OSx86 sections.
I do think you should consider whether the time you will invest is going to be worth the money you will save though. It was for me because I enjoy tinkering with this type of stuff and I started during the early iPhone betas, months before their App Store became available.
Alternatively, you could pick up a low-spec Mac Mini from eBay. You don't need much horsepower to run the SDK and you can always sell it on later if you decide to stop development or buy a better Mac.
Update: You cannot create a Mac OS X Client virtual machine for OS X 10.6 and earlier. Apple does not allow these Client OSes to be virtualized. With Mac OS X 10.7 (Lion) onwards, Apple has changed its licensing agreement with regard to virtualization. Source: VMWare KnowledgeBase
When you have some binary data that you want to ship across a network, you generally don't do it by just streaming the bits and bytes over the wire in a raw format. Why? Because some media are made for streaming text. You never know -- some protocols may interpret your binary data as control characters (like a modem), or your binary data could be screwed up because the underlying protocol might think that you've entered a special character combination (like how FTP translates line endings).
So to get around this, people encode the binary data into characters. Base64 is one of these types of encodings.
Why 64?
Because you can generally rely on the same 64 characters being present in many character sets, and you can be reasonably confident that your data's going to end up on the other side of the wire uncorrupted.
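To make the idea concrete, here is a minimal sketch of an encoder that maps each group of 3 raw bytes onto 4 characters from that 64-character alphabet (a hypothetical standalone helper, not taken from any particular library):

```c
#include <stdio.h>
#include <string.h>

/* Encode a byte buffer into Base64 (no line wrapping). 'out' must hold
 * at least 4 * ((len + 2) / 3) + 1 bytes. */
static void base64_encode(const unsigned char *in, size_t len, char *out)
{
    static const char alphabet[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    size_t i, o = 0;

    for (i = 0; i + 2 < len; i += 3) {
        unsigned v = (in[i] << 16) | (in[i + 1] << 8) | in[i + 2];
        out[o++] = alphabet[(v >> 18) & 0x3F];
        out[o++] = alphabet[(v >> 12) & 0x3F];
        out[o++] = alphabet[(v >> 6) & 0x3F];
        out[o++] = alphabet[v & 0x3F];
    }
    if (len - i == 1) {                   /* one trailing byte  -> "XX==" */
        unsigned v = in[i] << 16;
        out[o++] = alphabet[(v >> 18) & 0x3F];
        out[o++] = alphabet[(v >> 12) & 0x3F];
        out[o++] = '=';
        out[o++] = '=';
    } else if (len - i == 2) {            /* two trailing bytes -> "XXX=" */
        unsigned v = (in[i] << 16) | (in[i + 1] << 8);
        out[o++] = alphabet[(v >> 18) & 0x3F];
        out[o++] = alphabet[(v >> 12) & 0x3F];
        out[o++] = alphabet[(v >> 6) & 0x3F];
        out[o++] = '=';
    }
    out[o] = '\0';
}

int main(void)
{
    const unsigned char data[] = { 0x00, 0xFF, 0x10, 0x80, 0x7F };
    char encoded[16];

    base64_encode(data, sizeof data, encoded);
    printf("%s\n", encoded);   /* prints "AP8QgH8=" -- plain ASCII, safe to ship */
    return 0;
}
```

Every character in the output comes from that 64-character set, which is why the encoded form survives text-only channels intact; the receiver simply reverses the mapping to recover the original bytes.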
Yes, it's frustrating: sometimes `type` and other programs print gibberish, and sometimes they do not.

First of all, Unicode characters will only display if the current console font contains the characters. So use a TrueType font like Lucida Console instead of the default Raster Font.

But if the console font doesn't contain the character you're trying to display, you'll see question marks instead of gibberish. When you get gibberish, there's more going on than just font settings.
When programs use standard C-library I/O functions like `printf`, the program's output encoding must match the console's output encoding, or you will get gibberish. `chcp` shows and sets the current codepage. All output using standard C-library I/O functions is treated as if it is in the codepage displayed by `chcp`.

Matching the program's output encoding with the console's output encoding can be accomplished in two different ways:
- A program can get the console's current codepage using `chcp` or `GetConsoleOutputCP`, and configure itself to output in that encoding, or
- You or a program can set the console's current codepage using `chcp` or `SetConsoleOutputCP` to match the default output encoding of the program.
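As a rough illustration of those two approaches (a sketch for Windows only; `GetConsoleOutputCP` and `SetConsoleOutputCP` are the Win32 calls mentioned above, the rest is just scaffolding):

```c
#include <stdio.h>
#include <windows.h>

int main(void)
{
    /* Approach 1: ask the console which codepage it is using, then adapt.
     * A real program would convert its text to this codepage (e.g. with
     * WideCharToMultiByte) before calling printf. */
    UINT cp = GetConsoleOutputCP();
    printf("Console is using codepage %u\n", cp);

    /* Approach 2: tell the console to use the encoding the program already
     * emits -- Windows-1252 here, chosen purely as an example. */
    if (!SetConsoleOutputCP(1252)) {
        fprintf(stderr, "SetConsoleOutputCP failed\n");
        return 1;
    }
    printf("Console switched to codepage %u\n", GetConsoleOutputCP());
    return 0;
}
```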
However, programs that use Win32 APIs can write UTF-16LE strings directly to the console with `WriteConsoleW`. This is the only way to get correct output without setting codepages. And even when using that function, if a string is not in the UTF-16LE encoding to begin with, a Win32 program must pass the correct codepage to `MultiByteToWideChar`. Also, `WriteConsoleW` will not work if the program's output is redirected; more fiddling is needed in that case.
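For instance, a sketch of that path for a program that holds UTF-8 text (the string and buffer size here are made up for illustration):

```c
#include <windows.h>

int main(void)
{
    /* A UTF-8 string ("héllo ☺"), written as escaped bytes so the source
     * file's own encoding doesn't matter. */
    const char utf8[] = "h\xC3\xA9llo \xE2\x98\xBA\n";
    wchar_t wide[64];

    /* Convert UTF-8 -> UTF-16LE, telling the API which codepage the input
     * bytes are in (CP_UTF8 here). Returns the length including the null. */
    int wlen = MultiByteToWideChar(CP_UTF8, 0, utf8, -1, wide, 64);
    if (wlen == 0)
        return 1;

    /* Write the UTF-16LE text straight to the console, bypassing the
     * console codepage entirely. Fails if stdout is redirected. */
    DWORD written;
    WriteConsoleW(GetStdHandle(STD_OUTPUT_HANDLE), wide, wlen - 1, &written, NULL);
    return 0;
}
```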
`type` works some of the time because it checks the start of each file for a UTF-16LE Byte Order Mark (BOM), i.e. the bytes `0xFF 0xFE`. If it finds such a mark, it displays the Unicode characters in the file using `WriteConsoleW` regardless of the current codepage. But when `type`ing any file without a UTF-16LE BOM, or for using non-ASCII characters with any command that doesn't call `WriteConsoleW`, you will need to set the console codepage and program output encoding to match each other.

How can we find this out?
Here’s a test file containing Unicode characters:
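(The original test file isn't reproduced here; any small file containing a mix of non-ASCII characters works as a stand-in, for example:)

```
àéîöü ßþ αβγδ ☺
```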
Here's a Java program to print out the test file in a bunch of different Unicode encodings. It could be in any programming language; it only prints ASCII characters or encoded bytes to `stdout`.

The output in the default codepage? Total garbage!
However, what if we `type` the files that got saved? They contain the exact same bytes that were printed to the console.

The only thing that works is the UTF-16LE file, with a BOM, printed to the console via `type`.

If we use anything other than `type` to print the file, we get garbage.

From the fact that `copy CON` does not display Unicode correctly, we can conclude that the `type` command has logic to detect a UTF-16LE BOM at the start of the file and use special Windows APIs to print it. We can see this by opening `cmd.exe` in a debugger when it goes to `type` out a file:

After `type` opens a file, it checks for a BOM of `0xFEFF` (i.e., the bytes `0xFF 0xFE` in little-endian), and if there is such a BOM, `type` sets an internal `fOutputUnicode` flag. This flag is checked later to decide whether to call `WriteConsoleW`.

But that's the only way to get `type` to output Unicode, and only for files that have BOMs and are in UTF-16LE. For all other files, and for programs that don't have special code to handle console output, your files will be interpreted according to the current codepage and will likely show up as gibberish.

You can emulate how `type` outputs Unicode to the console in your own programs like so:
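The original listing isn't included above, but the idea can be sketched like this: check for the UTF-16LE BOM, then hand the rest of the file straight to `WriteConsoleW` (an illustrative reconstruction, not the author's exact code):

```c
#include <stdio.h>
#include <windows.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <utf-16le-file-with-bom>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    /* Look for the UTF-16LE BOM: the bytes 0xFF 0xFE. */
    unsigned char bom[2];
    if (fread(bom, 1, 2, f) != 2 || bom[0] != 0xFF || bom[1] != 0xFE) {
        fprintf(stderr, "no UTF-16LE BOM found\n");
        fclose(f);
        return 1;
    }

    /* Write the rest of the file to the console as UTF-16LE, regardless of
     * the current codepage -- roughly what 'type' does once its internal
     * fOutputUnicode flag is set. */
    HANDLE con = GetStdHandle(STD_OUTPUT_HANDLE);
    wchar_t buf[512];
    size_t n;
    while ((n = fread(buf, sizeof(wchar_t), 512, f)) > 0) {
        DWORD written;
        WriteConsoleW(con, buf, (DWORD)n, &written, NULL);
    }

    fclose(f);
    return 0;
}
```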
This program works for printing Unicode on the Windows console using the default codepage.

For the sample Java program, we can get a little bit of correct output by setting the codepage manually, though the output gets messed up in weird ways.
However, a C program that sets a Unicode UTF-8 codepage:
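(The source isn't shown above; the gist is presumably along these lines, with `CP_UTF8` being codepage 65001 and the text written out as raw UTF-8 bytes:)

```c
#include <stdio.h>
#include <windows.h>

int main(void)
{
    /* Switch the console's output codepage to UTF-8 (65001). */
    SetConsoleOutputCP(CP_UTF8);

    /* Print UTF-8 encoded bytes with plain printf; with the codepage set,
     * the console decodes them correctly (behaviour can vary across CRT
     * and Windows versions). */
    printf("h\xC3\xA9llo \xE2\x98\xBA\n");
    return 0;
}
```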
does have correct output.
The moral of the story?
- `type` can print UTF-16LE files with a BOM regardless of your current codepage.
- Programs can print Unicode to the console directly by calling `WriteConsoleW`.
- Everything else has to mess around with `chcp`, and will probably still get weird output.