When a Windows computer connects to a Windows server, it first tries the currently logged-in username with a null password. If the server back end does not recognize the username, it falls back and gives you anonymous access.
e.g. if bob tries to connect to a network share, Windows will first try connecting with the username bob before showing a login box.
FreeNAS is based on Samba, though, which behaves a little differently: if the user is unknown to the system, the default behavior is to deny access.
So what needs to be done is to mimic the Windows behavior, so that any connection attempt with an unknown username gets anonymous access.
You will need to modify the Samba config to map unknown users (like bob) to the nobody account, and then create an account called 'nobody' that has no password.
This way, when bob tries to connect, the server does not recognize the username bob and falls back to anonymous access.
To map unknown users to the guest account, add this configuration to the [global] section:
[global]
#...
guest account = nobody
map to guest = bad user
Then create a user called 'nobody' without a password, and you should be able to access the share anonymously.
EDIT
Also change
[global]
security = user
and create the nobody user by running this command from the Shell (via SSH, Telnet, or similar):
$ smbpasswd -an nobody
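Putting the pieces together, the relevant smb.conf sections would look roughly like this. This is a sketch, not the exact FreeNAS config: the share name [storage] and its path are placeholders, and guest ok = yes on the share is what actually permits the mapped guest account to use it:

```ini
[global]
    security = user
    # Treat logins with unknown usernames as guest connections
    map to guest = bad user
    # Unix account that guest connections run as
    guest account = nobody

[storage]
    # Placeholder path -- substitute your actual dataset mountpoint
    path = /mnt/tank/storage
    # Allow connections mapped to the guest account
    guest ok = yes
    read only = no
```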
I'm not a ZFS guru, but I'll take a shot: it sounds like the ZFS subsystem is still trying to access the failed drive, and hanging for some reason. Try setting the pool's failmode property to continue (zpool set failmode=continue <pool>) and see if that makes the hang go away and lets you suss out what's going on.
(Note that this isn't a fix: the system still can't access a drive it thinks it should be able to access; this just tells it to return an error and keep going rather than blocking until an answer is received.)
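As a sketch, assuming the pool is named tank (a placeholder, substitute your pool's name), checking and changing the property looks like:

```shell
# Show the current failmode setting (the default, "wait", blocks I/O until the device responds)
zpool get failmode tank

# Tell ZFS to return an error on failed I/O instead of blocking
zpool set failmode=continue tank

# Verify the change took effect
zpool get failmode tank
```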
Actually, you can rename a zpool, just not while it is live. It is done exactly as you suggested: you export and re-import it using the new name. 'man zpool import' explains it.
It is worth noting that while your mounts will likely survive, they most likely won't be where they used to be. If you were, for example, importing NFS mounts on a client using 192.168.1.5:/<pool>/<dataset>, then the mount would change to 192.168.1.5:/<new-pool-name>/<dataset>. Since you have to export the pool to rename it anyway, renaming a pool IS a downtime event, so if on re-import you have to reset some mounts or change some mountpoints on the server, I don't see that it's that huge a deal.
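For illustration, assuming a pool currently named tank that you want to rename to newtank (both names are placeholders), the export/re-import sequence is:

```shell
# Unmount all datasets and mark the pool exported (downtime starts here)
zpool export tank

# Re-import the pool under its new name; datasets remount under the new path
zpool import tank newtank

# Confirm the pool is healthy under the new name
zpool status newtank
```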