Update: The original question was for Windows Server 2008, but the solution is easier on Windows Server 2008 R2 and Windows Server 2012 (and Windows 7 and 8). You can add the user directly through the NTFS UI by typing in its name, which takes the form IIS APPPOOL\{app pool name}. For example:
IIS APPPOOL\DefaultAppPool
Note: Per comments below, there are two things to be aware of:
- Enter the string directly into the "Select User or Group" dialog's object name field, not in the search field.
- In a domain environment you need to set the Location to your local computer first.
Reference: Microsoft Docs article "Application Pool Identities", section "Securing Resources".
Original response (for Windows Server 2008): This is a great feature, but as you mentioned it isn't fully implemented yet. You can add the app pool identity from the command prompt with a tool like icacls, and then manage it from the GUI. For example, run something like this from the command prompt:
icacls c:\inetpub\wwwroot /grant "IIS APPPOOL\DefaultAppPool":(OI)(CI)(RX)
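The (OI) and (CI) flags make the grant inherit to files (object inherit) and subfolders (container inherit) under wwwroot, and RX grants read and execute; adjust the rights to whatever your application actually needs.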
Then, in Windows Explorer, go to the wwwroot folder and edit the security permissions. You will see what looks like a group (the group icon) called DefaultAppPool. You can now edit the permissions.
However, you don't need to use app pool identities at all; they're a bonus you can use if you want. You can stick with the old way of creating a custom user per app pool and granting that user permissions on disk. That approach has full UI support.
This SID injection method is nice because it allows you to use a single user yet fully isolate each site from the others, without having to create a unique user for each app pool. Pretty impressive, and it will be even better with UI support.
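As a sketch of that isolation (the pool names SiteA and SiteB and the D:\sites layout are assumptions for illustration only), a batch file like this would grant each app pool identity modify rights on its own site folder and nothing else:

rem Hypothetical layout: one app pool per site, folder name matches the pool name
for %%P in (SiteA SiteB) do icacls "D:\sites\%%P" /grant "IIS APPPOOL\%%P":(OI)(CI)(M)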
Note: If you are unable to find the application pool user, check to see if the Windows service called Application Host Helper Service is running. It's the service that maps application pool users to Windows accounts.
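If you want to check that from the command prompt, something like this should work (AppHostSvc is the service's short name):

sc query AppHostSvc
sc start AppHostSvc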
You need more than 512 MB of RAM to boot a WIM. A booted WIM runs from a RAM disk, so it needs at least as much RAM as the file is large. Look at the size of your WIM file and use that as the baseline amount of RAM for the machine. You should always add buffer space to the VM for processing at boot time: the OS makes some configuration changes at boot, and in a RAM-based OS the only place to store them is RAM.
I add roughly 100 MB as a buffer, but you should experiment to get a good feel for the buffer your image needs. You might need more, but most likely you will need less.
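To check the file size you'll be loading into RAM, and what's inside the image, something like this works (the path is just an example):

dir C:\images\boot.wim
dism /Get-WimInfo /WimFile:C:\images\boot.wim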
Best Answer
"I am in the process of de-virtualising everything" - really? o_O why?
FWIW, of your two suggested approaches I'd migrate it, but personally I'd be more than a little wary of that approach.
Your question is a little unclear: does the VM run now, as it is? Unless you're having an actual problem beyond being wary of virtualisation, my real suggestion is to leave it where it is: virtualised.
Update to address comments
Ok, to address your comments: if the server is critical and currently running, then I'd suggest borrowing "first, do no harm" from the medical community. What I mean is that if you wish to change how this server is hosted at all, you should place the result of any migration onto a new server, so that the current server remains available as much as possible while you work on the new one, and so that nothing you do can damage the service your users currently get. This approach will also allow you to take the time to do things right.
If you can't get the budget to do this with a critical system then you may have just found the reason why your predecessor made what appear to be a few very questionable choices...
As for the suitability of virtualisation, I'd say that your predecessor was barking mad to run a mission-critical system in a VirtualBox install on a workstation OS, but that doesn't mean there's anything wrong with virtualisation per se. It's really no worse than running critical servers on old workstations "because that's all we had around at the time", and I think we've all seen that happen.
I'm running most of our production servers (about 60) on eight VMware ESXi hosts, and our development/testing environments on three Microsoft Hyper-V boxes. Both are free 'server quality' virtualisation products (though you do pay for the fancy tools to manage a datacentre full of them), and I've never had unplanned downtime from either. Both also come with tools that let you migrate/import currently running servers, which could make a migration very simple.
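For instance, since the current host is VirtualBox, one option is to convert the existing disk to VMDK with VBoxManage and attach it to a new VM on an ESXi host. The file names here are placeholders, and on older VirtualBox versions the subcommand is clonehd rather than clonemedium:

VBoxManage clonemedium disk "C:\VMs\critical-server.vdi" "C:\VMs\critical-server.vmdk" --format VMDK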
So given what you've described, I'd suggest: