My VDP version is 6.1 (based on EMC Avamar).
The VDP backup schedule is not running because the appliance is in "admin" access state.
The "gsan" service is degraded. From the logs it looks like this was caused by the data disks being full, but even though I deleted all my restore points, the disk space is not being reclaimed because the appliance stays in "admin" status.
Here is the current state of my VDP appliance; can anyone help? Thank you so much.
root@vdp:~/#: dpnctl status
Identity added: /home/dpn/.ssh/dpnid (/home/dpn/.ssh/dpnid)
dpnctl: INFO: gsan status: degraded
dpnctl: INFO: MCS status: up.
dpnctl: INFO: emt status: up.
dpnctl: INFO: Backup scheduler status: up.
dpnctl: INFO: axionfs status: down.
dpnctl: INFO: Maintenance windows scheduler status: enabled.
dpnctl: INFO: Unattended startup status: enabled.
dpnctl: INFO: avinstaller status: up.
dpnctl: INFO: [see log file "/usr/local/avamar/var/log/dpnctl.log"]
root@vdp:~/#: status.dpn
Thu Nov 3 13:01:01 KST 2016 [vdp.kj1.netact.skt.com] Thu Nov 3 04:01:01 2016 UTC (Initialized Mon Feb 15 06:27:42 2016 UTC)
Node IP Address Version State Runlevel Srvr+Root+User Dis Suspend Load UsedMB Errlen %Full Percent Full and Stripe Status by Disk
0.0 38.123.3.89 7.2.80-98 ONLINE fullaccess mhpu+0hpu+0000 1 false 0.30 14872 16371111 32.9% 32%(onl:1166) 32%(onl:1164) 32%(onl:1167)
Srvr+Root+User Modes = migrate + hfswriteable + persistwriteable + useraccntwriteable
System ID: 1455517662@00:50:56:BA:A1:74
All reported states=(ONLINE), runlevels=(fullaccess), modes=(mhpu+0hpu+0000)
System-Status: ok
Access-Status: admin
Checkpoint failed with result MSG_ERR_DISKFULL : cp.20161103033244 started Thu Nov 3 12:33:14 2016 ended Thu Nov 3 12:33:14 2016, completed 0 of 3497 stripes
Last GC: finished Thu Nov 3 12:32:44 2016 after 00m 30s >> recovered 0.00 KB (MSG_ERR_DISKFULL)
No hfscheck yet
Maintenance windows scheduler capacity profile is active.
The maintenance window is currently running.
Next backup window start time: Thu Nov 3 20:00:00 2016 KST
Next maintenance window start time: Fri Nov 4 08:00:00 2016 KST
root@vdp:~/#:
root@vdp:~/#: df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 32G 6.0G 24G 20% /
udev 7.8G 180K 7.8G 1% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
/dev/sda1 128M 37M 85M 31% /boot
/dev/sda7 1.5G 160M 1.3G 12% /var
/dev/sda9 138G 16G 115G 12% /space
/dev/sdg1 512G 499G 14G 98% /data01
/dev/sdh1 512G 497G 16G 98% /data02
/dev/sdi1 512G 499G 14G 98% /data03
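If it helps, I can also run something like this to show what is inside the data partitions (the cur/ directory and the cp.* entries are what I see on my appliance, so the exact paths are from my setup and may differ on yours):
du -sh /data01/cur /data02/cur /data03/cur   # size of the gsan data directory on each partition
ls -d /data01/cur/cp.*                       # the cp.* directories are the checkpoint files I mention below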
I think the cause of the issue is the /data01, /data02 and /data03 partitions, which are almost full.
However, those locations also contain some checkpoint files, and I do not know how to delete those checkpoint files safely.
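From what I have read, checkpoints can be listed with cplist and removed with cprm, but I am not sure whether it is safe to run them while the server is in admin state, so please confirm before I try something like the sketch below (the tag is only a placeholder; I would take the real tag from the cplist output):
cplist                             # list the existing checkpoints and their cp.* tags
cprm --cptag=cp.YYYYMMDDhhmmss     # remove one checkpoint by tag (placeholder tag; syntax as I understand it from the Avamar docs)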
Please help me. Thank you so much.
Best Regards,
Neal.Choi