It's certainly possible to develop on a Windows machine; in fact, my first application was developed exclusively on the old Dell Precision I had at the time :)
There are three routes:
- Install OSx86 (aka iATKOS / Kalyway) on a second partition/disk and dual boot.
- Run Mac OS X Server under VMware (Mac OS X 10.7 (Lion) onwards; read the update below).
- Use Delphi XE4 with the MacinCloud service. This is a commercial toolset, but its component and library support is growing.
The first route requires modifying (or using a pre-modified) image of Leopard that can be installed on a regular PC. This is not as hard as you might think, although your success/effort ratio will depend on how closely the hardware in your PC matches Mac hardware. For example, if you're running a Core 2 Duo on an Intel motherboard with an NVIDIA graphics card, you're laughing; if you're running an AMD machine or something without SSE3, it gets a little more involved.
If you purchase (or already own) a copy of Leopard, then this is a gray area, since the Leopard EULA states you may only run it on an "Apple Labeled" machine. As many point out, if you stick an Apple sticker on your PC, you're probably covered.
The second option is more costly. The EULA for the workstation version of Leopard prevents it from being run under emulation, and as a result there's no support for it in VMware. Leopard Server, however, CAN be run under emulation and used for desktop purposes. Leopard Server and VMware are both expensive, though.
If you're interested in the first option, I'd suggest starting at InsanelyMac and reading the OSx86 sections.
I do think you should consider whether the time you'll invest will be worth the money you'll save, though. It was for me, because I enjoy tinkering with this kind of thing, and I started during the early iPhone betas, months before the App Store became available.
Alternatively, you could pick up a low-spec Mac Mini on eBay. You don't need much horsepower to run the SDK, and you can always sell it on later if you stop development or buy a better Mac.
Update: You cannot create a Mac OS X Client virtual machine for OS X 10.6 and earlier; Apple does not allow these client OSes to be virtualized. From Mac OS X 10.7 (Lion) onwards, Apple has changed its licensing agreement with regard to virtualization. Source: VMware Knowledge Base
I agree, SpeakHere is not a very good starting point to learn iPhone audio.
iPhone audio is built on two concepts: audio queues and audio sessions. If you want to record to a file, you will need to create an audio session, activate it, and create an input audio queue (and an output queue if you also want playback).
The reference for AudioQueues (by far the part you will deal with most) is:
http://developer.apple.com/iphone/library/documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/Introduction/Introduction.html
As for AudioSessions:
http://developer.apple.com/iphone/library/documentation/Audio/Conceptual/AudioSessionProgrammingGuide/Introduction/Introduction.html
You can ignore most of the AudioSession material, though, since you won't be doing anything that complicated. So basically, here are the steps:
- Initialize your audio session (AudioSessionInitialize) with an interruption-listener callback. This callback handles the case of an incoming call interrupting your program.
- Set up the data formats for both your incoming audio and the file-bound audio. These are stored in a struct called AudioStreamBasicDescription.
- Create an input AudioQueue with AudioQueueNewInput. You will have to specify a callback to handle incoming audio; this is where you save the audio to a file. Beware, though: the callback runs on a real-time thread, so do your best not to block it for too long.
- Decide how many AudioQueueBuffers your recording system will use and how large each should be. The buffers fill at a rate determined by the data format you specified in step 2, so size them to give yourself enough time to finish processing one buffer before the next arrives.
- Activate the session with AudioSessionSetActive(true).
- Call AudioQueueStart on your AudioQueue.
I didn't include all parameters here, but that's what the API is for.
Hope that helps.
[EDIT]
Sorry, I forgot to include the output side, though it is fairly straightforward: create another AudioQueue, initialize it with AudioQueueNewOutput, and the API should guide you the rest of the way.
Cheers.
Actually, there are no examples at all. Here is my working code. Recording is triggered by the user pressing a button on the navBar. The recording uses CD quality (44,100 samples per second), stereo (2 channels), linear PCM.

Beware: if you want to use a different format, especially an encoded one, make sure you fully understand how to set the AVAudioRecorder settings (read the audio types documentation carefully); otherwise you will never be able to initialize it correctly.

One more thing: in the code I am not showing how to handle metering data, but you can figure that out easily. Finally, note that the AVAudioRecorder method deleteRecording, as of this writing, crashes your application; this is why I am removing the recorded file through the file manager instead. When recording is done, I save the recorded audio as NSData in the currently edited object using KVC.