Well, after much frustration and research, I have come up with a working script. For those who want to do the same, the script below works well to restore any given file.
<#
Parameters: Folder Path, File to Restore, Deletion Date
Example Usage:
.\RecoverFile.ps1 "ClientName\Folder\2010\02\03\" "mydoc.pdf" "2010-08-04 09:54:24.117"
#>
$filePath = [IO.Path]::Combine("D:\ClientData\", $args[0] )
$fileName = $args[1]
$dateDeleted = Get-Date $args[2]
Write-Host "Restoring '$filePath$fileName' which was deleted on '$dateDeleted'"
# Truncate to midnight of the day before the deletion
$recoveryDate = Get-Date $dateDeleted.AddDays(-1).ToShortDateString()
$pg = Get-ProtectionGroup -DPMServerName DPMSERVER01 | Where-Object {$_.FriendlyName -eq "Document Repository Data"}
$ds = Get-Datasource $pg
$so = New-SearchOption -FromRecoveryPoint $recoveryDate.AddDays(-1).ToShortDateString() -ToRecoveryPoint $recoveryDate.ToShortDateString() -SearchDetail FilesFolders -SearchType exactMatch -Location $filePath -SearchString $fileName
$ri = Get-RecoverableItem -Datasource $ds -SearchOption $so
$ro = New-RecoveryOption -TargetServer CLIENTDATASERVER01 -RecoveryLocation OriginalServer -FileSystem -OverwriteType overwrite -RecoveryType Recover
$recoveryJob = Recover-RecoverableItem -RecoverableItem $ri -RecoveryOption $ro
# Wait until the recovery job completes
while (-not $recoveryJob.HasCompleted)
{
    # Show a simple progress indicator
    Write-Host "." -NoNewLine
    Start-Sleep 1
}
if ($recoveryJob.Status -ne "Succeeded")
{
    Write-Host "Recovery failed" -ForegroundColor Red
}
else
{
    Write-Host "Recovery successful" -ForegroundColor Green
}
I'm not a distributed file system ninja, but after consolidating as many drives as I can into as few machines as I can, I would try using iSCSI to connect the bulk of the machines to one main machine. There I could consolidate everything into, hopefully, fault-tolerant storage. Preferably, fault tolerant within a machine (if a drive goes out) and among machines (if a whole machine is powered off).
Personally, I like ZFS. In this case, the built-in compression, dedup, and fault tolerance would be helpful. However, I'm sure there are many other ways to compress the data while keeping it fault tolerant.
Wish I had a real turnkey distributed file system solution to recommend. I know this is really kludgey, but I hope it points you in the right direction.
Edit: I am still new to ZFS and setting up iSCSI, but I recalled seeing a video from Sun in Germany where they were showing off the fault tolerance of ZFS. They connected three USB hubs to a computer and put four flash drives in each hub. Then, to prevent any one hub from taking the storage pool down, they made each RAIDZ vdev out of one flash drive from each hub, and striped the four RAIDZ vdevs together into one pool. That way only four flash drives were used for parity. Then, of course, they unplugged one hub, which degraded every RAIDZ group, but all the data was still available. In this configuration up to four drives could be lost, but only as long as no two of them were in the same RAIDZ group.
If this configuration were used with the raw drives of each box, that would preserve more drives for data rather than parity. I heard FreeNAS can (or was going to be able to) share drives in a "raw" manner via iSCSI, so I presume Linux can do the same. As I said, I'm still learning, but this alternate method would be less wasteful, from a parity standpoint, than my previous suggestion. Of course, it would rely on using ZFS, which I don't know would be acceptable. I know it is usually best to stick with what you know if you are going to have to build/maintain/repair something, unless this is a learning experience.
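As a rough sketch of the layout from the video (the pool name and device names are hypothetical; three hubs of four drives each, grouped so no RAIDZ vdev has two drives on the same hub):

```sh
# Hub A: da0-da3, Hub B: da4-da7, Hub C: da8-da11 (hypothetical names).
# Each RAIDZ vdev takes one drive from each hub; listing four vdevs in
# one pool stripes them together, spending four drives' worth on parity.
zpool create tank \
    raidz da0 da4 da8  \
    raidz da1 da5 da9  \
    raidz da2 da6 da10 \
    raidz da3 da7 da11

# Pull one hub and the pool stays online, each vdev merely DEGRADED.
zpool status tank
```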
Hope this is better.
Edit: Did some digging and found the video I spoke about. The part where they explain spreading the USB flash drives over the hubs starts at 2m10s. The video demos their storage server "Thumper" (X4500) and how to spread the disks across controllers so that if a hard disk controller fails, your data will still be good. (Personally, I think this is just a video of geeks having fun. I wish I had a Thumper box myself, but my wife wouldn't like me running a pallet jack through the house. :D That is one big box.)
Edit: I remembered coming across a distributed file system called OpenAFS. I haven't tried it; I've only read a bit about it. Perhaps others know how it handles in the real world.
Best Answer
I realize this is a 4-year-old question at this point, but I figured I'd add a new answer since there are currently no other up-voted ones.
The way I'd go about this is parsing the date into an actual DateTime object from the file name. Then you can use normal date comparisons rather than string comparisons. The most common way to do this is using a calculated property as part of a Select-Object statement, adding a calculated property (say, ParsedDate) to the existing output.

The regular expression actually makes things more complicated than they need to be, though. If your files all have the exact same naming pattern of app_<DATE>.log, you can skip the regex and simplify by letting
ParseExact do all the work; you just need to escape the literal characters in the format string by surrounding them with single quotes.

Technically speaking, you could also do the date parsing inside the Where clause and skip the intermediate Select clause, but I'll leave that as an exercise for the reader.
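A minimal sketch of both versions, assuming hypothetical file names like app_2014-09-17.log under C:\Logs (adjust the path, pattern, and format string to your actual names):

```powershell
# Version 1: regex capture feeding a calculated property. The path,
# filter, and date format are assumptions for illustration.
Get-ChildItem -Path 'C:\Logs' -Filter 'app_*.log' |
    Select-Object -Property Name, @{
        Name       = 'ParsedDate'
        Expression = {
            if ($_.Name -match 'app_(\d{4}-\d{2}-\d{2})\.log') {
                [datetime]::ParseExact($Matches[1], 'yyyy-MM-dd', $null)
            }
        }
    }

# Version 2: no regex; ParseExact parses the whole name. The literal
# "app_" and ".log" pieces are escaped with single quotes in the format.
Get-ChildItem -Path 'C:\Logs' -Filter 'app_*.log' |
    Select-Object -Property Name, @{
        Name       = 'ParsedDate'
        Expression = {
            [datetime]::ParseExact($_.Name, "'app_'yyyy-MM-dd'.log'", $null)
        }
    }
```

With ParsedDate as a real DateTime, filtering becomes an ordinary comparison, e.g. Where-Object { $_.ParsedDate -gt (Get-Date).AddDays(-7) }.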