Prevent filesystem from entering read-only mode

Tags: failed, filesystems, raid, read-only

My server's filesystem keeps entering read-only mode. There were some issues with the RAID1 array, and I have removed the bad disk from the array. However, the disk is still physically plugged into the system because I haven't had a chance to get over to the datacentre, and I suspect udev and the kernel are still picking up the bad disk and throwing errors. In /var/log/messages, there are errors like this:

Mar  2 06:53:14 nocloud kernel: ata1: exception Emask 0x10 SAct 0x0 SErr 0x4010000 action 0xe frozen
Mar  2 06:53:14 nocloud kernel: ata1: irq_stat 0x00400040, connection status changed
Mar  2 06:53:14 nocloud kernel: ata1: SError: { PHYRdyChg DevExch }
Mar  2 06:53:14 nocloud kernel: ata1: hard resetting link
Mar  2 06:53:20 nocloud kernel: ata1: link is slow to respond, please be patient (ready=0)
Mar  2 06:53:21 nocloud kernel: ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Mar  2 06:53:21 nocloud kernel: ata1.00: configured for UDMA/133
Mar  2 06:53:21 nocloud kernel: ata1: EH complete

This happens fairly randomly throughout the day until eventually the filesystem becomes read-only. When it does, my system becomes non-operational, which kind of defeats the purpose of having RAID1. Note: ata1 is the bad disk (I think ata1 corresponds to /dev/sda, since they are both first in line).

Under mdadm, /dev/sda1 and /dev/sda2 are no longer in use, but I can't prevent the kernel from continuing to query that disk, and from throwing these errors, even though I am no longer using it.
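
For what it's worth, the kernel can usually be told to drop such a device entirely via sysfs, so libata stops resetting the link. A minimal sketch, assuming the bad disk really is /dev/sda; verify the ata1 mapping first, because deleting the wrong device takes the good disk offline:

# The symlink target shows which ATA port sda hangs off (look for "ata1")
ls -l /sys/block/sda

# As root: detach the device so the kernel stops probing it.
# It stays gone until a reboot or a manual SCSI rescan.
echo 1 > /sys/block/sda/device/delete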

Is there a way to prevent my filesystem from automatically going into read-only mode? Furthermore, is it safe to do so?

Thanks in advance.

EDIT: Additional information:
Output from cat /proc/mdstat:

md1 : active raid1 sdb2[1]
      976554876 blocks super 1.1 [2/1] [_U]
      bitmap: 5/8 pages [20KB], 65536KB chunk

md0 : active raid1 sdb1[1]
      204788 blocks super 1.0 [2/1] [_U]

Output from mount:

/dev/mapper/VolGroup-LogVol00 on / type ext4 (rw,noatime)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/md0 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

EDIT2:
Output from pvdisplay:

--- Physical volume ---
PV Name               /dev/md1
VG Name               VolGroup
PV Size               931.32 GiB / not usable 2.87 MiB
Allocatable           yes (but full)
PE Size               16.00 MiB
Total PE              59604
Free PE               0
Allocated PE          59604
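
For reference, the way / maps down onto the arrays can be confirmed with the LVM tools themselves (a sketch; the exact column layout varies by LVM version, and the comments reflect the mount and pvdisplay output above):

df /                       # / lives on /dev/mapper/VolGroup-LogVol00
pvs                        # the only PV is /dev/md1
lvs -o +devices VolGroup   # LogVol00 is carved out of /dev/md1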

Best Answer

Ext3/4 filesystems (not sure about ext2) can be configured to flip to read-only when they detect an error of some kind, and there is usually a message similar to "EXT4-fs (sdb1): Remounting filesystem read-only" in your logs when this happens.
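
A quick way to check whether that is what is happening here (the exact wording of the message varies a little between kernel versions, hence the loose pattern):

grep -iE 'remount.*read.?only' /var/log/messages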

What does tune2fs show you? Run tune2fs -l against the filesystem to list its current settings; the setting you're looking for is "Errors behavior". Note that in your setup the root filesystem sits on an LVM logical volume carved out of /dev/md1, so the device to query is /dev/mapper/VolGroup-LogVol00; pointing tune2fs directly at /dev/md1 will just complain about a bad superblock, since that device holds an LVM physical volume rather than an ext4 filesystem. tune2fs can also be used to change the error behavior, but you really should replace the drive before you have issues with the second one.
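
To illustrate, a sketch using the root device from the mount output above; changing the behavior is shown only for completeness, since continuing on a filesystem that is reporting errors risks data corruption:

# Show the current on-error policy of the root filesystem
tune2fs -l /dev/mapper/VolGroup-LogVol00 | grep -i 'errors behavior'

# Tell the kernel to carry on after an error instead of remounting read-only
tune2fs -e continue /dev/mapper/VolGroup-LogVol00

The same policy can also be set per mount with the errors= option (errors=continue, errors=remount-ro, errors=panic) in /etc/fstab.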