I have a backup server with ZFS (Ubuntu 16.04; 32GB RAM, 4x6TB HDD, raidz2). Recently I've run into a problem with available space.
# zpool list -v
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
pool 21.6T 19.9T 1.76T - 62% 91% 2.30x ONLINE -
raidz2 21.6T 19.9T 1.76T - 62% 91%
sda5 - - - - - -
sdb5 - - - - - -
sdc5 - - - - - -
sdd5 - - - - - -
It looks like almost all the space is allocated, and I have no idea what is consuming it. Take a look at the dataset sizes:
# zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
pool 425G 13.4T 0 140K 0 13.4T
pool/backup 425G 742G 0 140K 0 742G
pool/backup/avol 425G 69.0G 0 198K 0 69.0G
pool/backup/avol/old_dumps 425G 69.0G 0 69.0G 0 0
pool/backup/nnn 425G 517G 0 163K 0 517G
pool/backup/nnn/cdvol 425G 5.00G 0 5.00G 0 0
pool/backup/nnn/himvol 425G 98.3G 0 98.3G 0 0
pool/backup/nnn/irvol 425G 33.8G 0 140K 0 33.8G
pool/backup/nnn/irvol/smavol 425G 33.8G 0 33.8G 0 0
pool/backup/nnn/menvol 425G 931M 0 931M 0 0
pool/backup/nnn/nevvol 425G 77.9G 0 77.9G 0 0
pool/backup/nnn/scovol 425G 27.4G 0 27.4G 0 0
pool/backup/nnn/vm 425G 274G 0 16.5M 0 274G
pool/backup/nnn/vm/123 425G 1.47G 0 1.47G 0 0
pool/backup/nnn/vm/124 425G 9.23G 0 9.23G 0 0
pool/backup/nnn/vm/125 425G 13.5G 0 13.5G 0 0
pool/backup/nnn/vm/126 425G 10.5G 0 10.5G 0 0
pool/backup/nnn/vm/128 425G 16.9G 0 16.9G 0 0
pool/backup/nnn/vm/130 425G 8.96G 0 8.96G 0 0
pool/backup/nnn/vm/131 425G 147G 0 147G 0 0
pool/backup/nnn/vm/132 425G 11.3G 0 11.3G 0 0
pool/backup/nnn/vm/135 425G 39.7G 0 39.7G 0 0
pool/backup/nnn/vm/136 425G 16.0G 0 16.0G 0 0
pool/backup/old 425G 50.5G 0 140K 0 50.5G
pool/backup/old/himvol 425G 50.5G 0 50.5G 0 0
pool/backup/telvol 425G 105G 0 105G 0 0
pool/backup2 425G 2.74T 0 140K 0 2.74T
pool/backup2/nnn 425G 2.74T 0 140K 0 2.74T
pool/backup2/nnn/vm 425G 2.74T 0 151K 0 2.74T
pool/backup2/nnn/vm/101 425G 28.0G 0 28.0G 0 0
pool/backup2/nnn/vm/103 425G 38.0G 0 38.0G 0 0
pool/backup2/nnn/vm/104 425G 333G 0 333G 0 0
pool/backup2/nnn/vm/105 425G 526M 0 526M 0 0
pool/backup2/nnn/vm/106 425G 17.1G 0 17.1G 0 0
pool/backup2/nnn/vm/107 425G 17.0G 0 17.0G 0 0
pool/backup2/nnn/vm/109 425G 235G 0 235G 0 0
pool/backup2/nnn/vm/110 425G 321G 0 321G 0 0
pool/backup2/nnn/vm/111 425G 1.11G 0 1.11G 0 0
pool/backup2/nnn/vm/112 425G 73.6G 0 73.6G 0 0
pool/backup2/nnn/vm/114 425G 1.27T 0 1.27T 0 0
pool/backup2/nnn/vm/116 425G 1.31G 0 1.31G 0 0
pool/backup2/nnn/vm/117 425G 19.9G 0 19.9G 0 0
pool/backup2/nnn/vm/119 425G 7.15G 0 7.15G 0 0
pool/backup2/nnn/vm/121 425G 178G 0 178G 0 0
pool/backup2/nnn/vm/122 425G 237G 0 237G 0 0
Recently I turned off deduplication and rewrote all volumes (zfs send | zfs receive, then zfs destroy) to get rid of the deduplicated data, but it is still present:
# zpool status -D
pool: pool
state: ONLINE
scan: scrub in progress since Wed Jul 12 11:23:27 2017
1 scanned out of 19.9T at 1/s, (scan is slow, no estimated time)
0 repaired, 0.00% done
config:
NAME STATE READ WRITE CKSUM
pool ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
sda5 ONLINE 0 0 0
sdb5 ONLINE 0 0 0
sdc5 ONLINE 0 0 0
sdd5 ONLINE 0 0 0
errors: No known data errors
dedup: DDT entries 41434395, size 978 on disk, 217 in core
bucket allocated referenced
______ ______________________________ ______________________________
refcnt blocks LSIZE PSIZE DSIZE blocks LSIZE PSIZE DSIZE
------ ------ ----- ----- ----- ------ ----- ----- -----
1 25.3M 2.41T 1.95T 1.99T 25.3M 2.41T 1.95T 1.99T
2 5.00M 469G 340G 347G 11.2M 1.03T 762G 779G
4 7.37M 549G 438G 451G 36.9M 2.69T 2.14T 2.21T
8 1.41M 124G 80.7G 83.5G 14.6M 1.26T 833G 862G
16 281K 16.8G 10.7G 11.5G 5.72M 337G 219G 235G
32 73.7K 4.57G 3.79G 3.96G 3.14M 198G 167G 174G
64 40.5K 2.58G 2.32G 2.41G 3.25M 215G 195G 202G
128 8.49K 358M 272M 298M 1.38M 60.2G 45.7G 50.0G
256 3.22K 201M 171M 180M 1.10M 69.8G 59.7G 62.7G
512 1.46K 56.1M 52.2M 56.9M 1.20M 41.1G 38.1G 42.1G
1K 372 12.5M 10.4M 11.7M 501K 19.5G 16.3G 18.0G
2K 169 7.41M 6.14M 6.78M 468K 20.3G 17.0G 18.8G
4K 64 3.40M 2.69M 2.85M 358K 19.1G 15.0G 15.9G
8K 14 316K 172K 238K 151K 3.37G 1.82G 2.52G
16K 10 35.5K 31.5K 75.6K 206K 738M 667M 1.54G
32K 4 102K 85.5K 105K 185K 4.71G 3.93G 4.79G
256K 2 1K 1K 11.6K 704K 352M 352M 4.00G
Total 39.5M 3.55T 2.81T 2.87T 106M 8.36T 6.42T 6.61T
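As a sanity check, the Total line of that DDT histogram is consistent with the 2.30x DEDUP column from zpool list, and the entry counts show why dedup is heavy on this machine. The arithmetic below just reuses the numbers printed above (referenced vs. allocated DSIZE, and the per-entry sizes of 978 B on disk / 217 B in core):

```shell
# Recompute figures from the 'zpool status -D' summary above:
#  - dedup ratio = referenced DSIZE / allocated DSIZE
#  - DDT footprint = number of entries * per-entry size
awk 'BEGIN {
  printf "dedup ratio : %.2fx\n", 6.61 / 2.87
  entries = 41434395
  printf "DDT on disk : %.1f GiB\n", entries * 978 / 2^30
  printf "DDT in core : %.1f GiB\n", entries * 217 / 2^30
}'
```

An in-core table of roughly 8 GiB against 32 GB of RAM may help explain both the crawling scrub and the multi-day cleanup.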
Could this be the reason? Is there any way to check what is still referencing deduplicated data and to remove it? What else could be consuming disk space?
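For reference: setting dedup=off only affects new writes, so blocks written while dedup was on stay in the DDT until every dataset and snapshot referencing them is destroyed. A scripted form of the rewrite pass described above might look like the following sketch; the dataset names are assumptions, and run=echo keeps it a dry run:

```shell
# Dry-run sketch of rewriting one dataset tree without dedup.
# Names (pool/backup, @rewrite, pool/backup_new) are illustrative.
run=echo   # set run= (empty) to actually execute
$run zfs set dedup=off pool
$run zfs snapshot -r pool/backup@rewrite
# When run for real, the next two commands are piped together:
#   zfs send -R pool/backup@rewrite | zfs receive pool/backup_new
$run zfs send -R pool/backup@rewrite
$run zfs receive pool/backup_new
$run zfs destroy -r pool/backup
```

Note that the space only comes back once the destroy, including its background cleanup, has finished.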
There's something strange with zpool scrub. I started it over 6 hours ago (CEST timezone), and the current status is:
scan: scrub in progress since Wed Jul 12 15:48:20 2017
1 scanned out of 20.0T at 1/s, (scan is slow, no estimated time)
0 repaired, 0.00% done
The load on the server is quite high (load average from uptime between 2 and 80), and iostat shows 100% disk utilization, but no processes are running (except the SSH server).
UPDATE: Today I have almost 1 TB of free space. Nothing has been done on the server; maybe ZFS needs some time to clean up old data?
SOLVED: The problem is gone. The deduplication table is now empty and there is 6.75 TB of free space! It took ZFS about 6 days to clean it up.
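In hindsight this looks like OpenZFS's asynchronous destroy: after zfs destroy returns, the blocks and their DDT entries are released in the background, which can take days for a table this large. The read-only pool property freeing reports how many bytes are still queued for release; the sketch below parses a pasted sample line instead of a live pool (the byte count is an illustrative stand-in, not a value from this server):

```shell
# Parse 'zpool get -Hp freeing pool' output (tab-separated:
# name, property, value, source); the value here is illustrative.
sample=$'pool\tfreeing\t913284726784\t-'
bytes=$(printf '%s\n' "$sample" | awk -F'\t' '{print $3}')
echo "still freeing: $((bytes / 1024 / 1024 / 1024)) GiB"
```

Watching freeing fall to zero would likely have shown the six-day cleanup in progress.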
Best Answer
Run a Python script to detect and delete duplicate files:
http://code.activestate.com/recipes/362459-dupinator-detect-and-delete-duplicate-files/