The two answers about it being marked failed and rebuilding are correct, and hopefully this is what will happen. That's the best-case scenario.
The other possibility is that the software does not notice, and then it will still think the drives are in sync. (For example, this could happen if you pulled this stunt with the power off.) The end result will most likely be corruption, and the only fix will be to format and restore from backup.
Remember, RAID works at the disk level; it doesn't know anything about the filesystem on top, just a bunch of sectors. When the filesystem requests block 10, the RAID layer knows that block 10 is stored on block 10 of both disk1 and disk2. It picks one disk or the other (the choice is essentially arbitrary) and reads block 10 from it. Except, because you modified the disks behind its back, block 10 on disk1 and block 10 on disk2 are now different. Oops. You can expect to read a mix of disk1 and disk2 on a per-block basis, including the blocks used to store filesystem metadata.
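If you want to see the divergence directly, you can compare the two mirror members byte-for-byte with the array stopped. This is just an illustration; the member device names /dev/sda1 and /dev/sdb1 are assumptions, so substitute your actual devices:
# Compare the two mirror members directly (array stopped, read-only access).
# cmp -l prints the offset and the differing byte values for every mismatch;
# pipe through head, since a diverged mirror can produce a lot of output.
cmp -l /dev/sda1 /dev/sdb1 | head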
Fixing the mess
Given that format-and-restore-from-backup is not an option, here's what I suggest as your best bet to recover (a concrete command sketch follows this list):
(a) Immediately image both drives. Backups are important. Optionally, you may want to only work on the copies.
(b) If the array has not been in read/write mode after this mistake, just pull the modified drive. Rebuild with a new, blank drive.
(c) If the array has been in read/write mode, pick a drive and drop it out of the array. Rebuild onto a new drive.
(d) If you completely don't care which drive wins, just run the following (replacing X with your array number, of course) to force a resync:
echo repair > /sys/block/mdX/md/sync_action
(e) Force an fsck on the now-rebuilt array.
(f) Do whatever you can to verify your data. For example, run debsums to check OS integrity, supplying all needed package files for things that don't have MD5 sums.
Note that the replacement drive needs to be blank, or at least have all RAID metadata wiped from it; otherwise the rebuild won't work right.
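Here is a minimal sketch of steps (a), (b)/(c), (e) and (f) on Linux md. The device names (/dev/sda and /dev/sdb as the existing members of /dev/md0, /dev/sdc1 as the partition on the replacement drive) and the ext filesystem are assumptions; adjust them to your setup:
# (a) Image both drives first; noerror,sync keeps going past bad sectors.
dd if=/dev/sda of=/mnt/backup/sda.img bs=1M conv=noerror,sync
dd if=/dev/sdb of=/mnt/backup/sdb.img bs=1M conv=noerror,sync

# (b)/(c) Drop the chosen drive out of the array...
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

# ...wipe any stale RAID metadata from the replacement (see the note above),
# then rebuild onto it.
mdadm --zero-superblock /dev/sdc1
mdadm /dev/md0 --add /dev/sdc1
cat /proc/mdstat                  # watch the rebuild progress

# (e) Force an fsck once the rebuild has finished (array unmounted).
fsck -f /dev/md0

# (f) On a Debian-based system, debsums can spot-check OS file integrity.
debsums -s                        # -s: report only files that fail their checksum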
Biggest 'Doh!' of the week I reckon - sorry dude.
The drives themselves won't be physically broken; you've simply killed the array by removing the second disk before the first one had rebuilt. I'm >90% sure your array is toast. Basically, you shouldn't have removed them at all while live; if you absolutely had to, you should have waited for the array to rebuild before pulling the second disk.
It's reinstall/restore time I'm afraid - your data is gone.
Best Answer
You still need a good back-up!
Ideally you have separate RAID volumes for the operating system and for your application data, so you'll only need to move the data disks. Even if you move the disks carrying the operating system volume to an identical server, things like the MAC addresses of the NICs will be different, and that may cause some (minor) problems when you try to boot after moving the disks.
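As a concrete Linux example (an assumption; other operating systems have their own equivalents): on distributions that pin NIC names to MAC addresses via udev persistent-net rules, the interface may come up as eth1 instead of eth0 after the move. Deleting the generated rules file lets udev rebuild it for the new hardware on the next boot:
# The old server's MAC addresses are recorded here; remove the file and
# reboot so udev regenerates it for the new NICs (path varies by distro).
rm /etc/udev/rules.d/70-persistent-net.rules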
Typically, if you use software RAID, you have a bit more leeway in how different the servers can be and still end up with a working, intact RAID set.
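For example, Linux mdadm stores its metadata on the member disks themselves, so on the new server the array can usually be detected and assembled regardless of which controller the disks hang off. A minimal sketch, with /dev/sdb1 and /dev/sdc1 standing in for your actual members:
mdadm --examine /dev/sdb1 /dev/sdc1   # inspect the RAID superblocks on the moved disks
mdadm --assemble --scan               # assemble any arrays found in the on-disk metadata
cat /proc/mdstat                      # confirm the array came up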
When you use hardware RAID, I expect you'll need exactly the same RAID controller, although you might have some leeway and a controller from the same series may work too.
I think most hardware RAID controllers use the first few sectors of the drive to store their array metadata, so the disk will appear uninitialised to other RAID controllers, or to the OS if you plug the disk into a system without a RAID controller.
Depending on the brand of RAID controller, you may still need to import the moved disks from the RAID controller's BIOS menu, as the moved disks can be considered a foreign configuration created by a different controller.