The two answers about it being marked failed and rebuilding are correct, and hopefully this is what will happen. That's the best-case scenario.
The other possibility is that the software does not notice, and then it'll still think the drives are in sync. (For example, this could happen if you pulled this stunt with the power off.) The end result will most likely be corruption, and the only fix will be to format and restore from backup.
Remember, RAID works at the disk level; it doesn't know anything about the filesystem on top. To it, the disks are just a bunch of sectors. When the filesystem requests block 10, the RAID layer knows that block 10 is stored on block 10 of both disk1 and disk2. Somehow, it picks one disk or the other and reads block 10. Except that, because you modified one disk behind its back, block 10 on disk1 and disk2 now differ. Oops. You can expect a mix of disk1 and disk2 on a per-block basis, including blocks used to store filesystem metadata.
Fixing the mess
Given that a format and restore from backup is not an option, I suggest the following as your best bet to recover:
(a) Immediately image both drives. Backups are important. You may even prefer to work only on the copies.
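For example, with plain dd (or ddrescue if the drives have read errors); the device names and backup path here are only examples, so adjust them to your setup:
dd if=/dev/sda of=/mnt/backup/sda.img bs=1M conv=noerror,sync
dd if=/dev/sdb of=/mnt/backup/sdb.img bs=1M conv=noerror,sync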
(b) If the array has not been in read/write mode after this mistake, just pull the modified drive. Rebuild with a new, blank drive.
(c) If the array has been in read/write mode, pick a drive and drop it out of the array. Rebuild onto a new drive.
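For (b) or (c), the drive swap looks roughly like this, assuming the array is /dev/md0, the member you are discarding is /dev/sdb1, and the new blank drive is /dev/sdc1:
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdc1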
(d) If you completely don't care which drive wins, just run the following to force a resync (replacing X with your array number, of course):
echo repair > /sys/block/mdX/md/sync_action
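You can watch the resync progress with:
cat /proc/mdstat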
(e) Force an fsck on the now-rebuilt array.
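For example, if the array holds an ext4 filesystem directly (not LVM), something like this with the filesystem unmounted:
fsck -f /dev/mdX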
(f) Do whatever you can to verify your data. For example, run debsums to check OS integrity, supplying all needed package files for things that don't have MD5 sums.
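A quick first pass on a Debian-based system might be the following, which stays silent except for files whose checksums don't match:
debsums -s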
Note that the drive needs to be blank, or at least all RAID info wiped from it, otherwise the rebuild won't work right.
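If you are reusing a drive that was previously part of an array, you can wipe the old RAID metadata first (assuming the old member is /dev/sdc1):
mdadm --zero-superblock /dev/sdc1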
Best Answer
If you are using RAID1 you won't lose half your swap, only one of the two mirrors; the worst case is that you lose any performance benefit you might otherwise have gained. If you instead have two separate swap areas on the individual drives, the kernel will use both in a fashion similar to RAID0 (if they have the same priority set) or JBOD (if the priorities differ, filling the top-priority area first, then the next), so if one of the drives dies your system is likely to fall over as soon as any access to the swap area(s) is needed. This is why swap spaces usually live on the RAID1 volumes: it is simply safer, and that usually matters more than performance.
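As a sketch, equal-priority swap entries in /etc/fstab are what gives you the RAID0-like striping behaviour (the device names here are only examples):
/dev/sda2  none  swap  sw,pri=5  0  0
/dev/sdb2  none  swap  sw,pri=5  0  0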
Two separate swap areas would be used similarly to RAID0, so you would generally expect a performance increase, though it depends on what other load your drives are under at the time. With modern kernels the RAID1 driver can try to guess which drive is best to read each block from, so you may still get some of the read performance boost, though obviously for writes to swap you won't, as both mirrors must be updated. On most modern setups the performance of swap is less important than its safety: RAM is relatively cheap these days, so unless you are butting against the limit of how much RAM your motherboard can take, you should aim to have enough RAM that swap is used as little as possible anyway.
It will make little difference if you are using the same pair of disks. A common reason for having swap on a separate array is when the main array uses RAID5/6 (which is not true in your case), to avoid paging out to the swap areas being hit by the RAID5/6 write performance issue. You can probably tune performance by trying to ensure that the swap areas are close to the busiest part of the disks (so if you have a 1TB array with a 250GB logical volume used for your busiest database files, put your swap volume next to that) to reduce head movement while swapping. But such tweaks are rarely time well spent: by the time you are swapping heavily, the percent-or-two benefit won't be enough to make the difference between performing OK and not.
I believe you can partition software RAID volumes as far as the kernel is concerned, but that doesn't mean the installer understands such arrangements. In examples not using LVM I've always seen the drives divided into partitions and having a separate RAID array for each partition, rather than one large RAID volume which is partitioned. I recommend the LVM method unless you have specific reason to avoid it as it is more flexible and (in my experience) no less reliable than other arrangements.
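As a rough sketch of the LVM-on-RAID approach, the setup looks something like this (device names, volume group name, and sizes are only examples):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 8G -n swap vg0
mkswap /dev/vg0/swap
swapon /dev/vg0/swap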