I never really checked before, but I just noticed that the container running deconz 2.19.1 with about 50 devices is causing 50-90% disk-active time on my Syno NAS (basic 2.5" 4 TB disks in RAID 1 using btrfs). That just sounds too high, and it also seems odd that deconz is continuously writing 0.5-1 MB/s. I expected most I/O to be cached, except for occasional writes to the DB?
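To put a number on that, here is a minimal sketch (assuming a Linux host with /proc available, e.g. inside the container or via SSH on the NAS) that samples `/proc/<pid>/io` twice and reports the process's write rate. The PID lookup and interval are illustrative; reading another user's `/proc/<pid>/io` needs root.

```python
import os
import time


def write_rate(pid: int, interval: float = 1.0) -> float:
    """Return a process's write rate in bytes/sec by sampling
    the write_bytes counter in /proc/<pid>/io twice.
    Requires permission to read that file (root for other users'
    processes)."""
    def write_bytes() -> int:
        with open(f"/proc/{pid}/io") as f:
            for line in f:
                if line.startswith("write_bytes:"):
                    return int(line.split()[1])
        return 0

    before = write_bytes()
    time.sleep(interval)
    return (write_bytes() - before) / interval


if __name__ == "__main__":
    # Demo on this script's own PID; on the real system you would
    # substitute the deCONZ PID (e.g. found with `pidof` or `top`).
    print(f"{write_rate(os.getpid()):.0f} B/s")
```

If the number roughly matches what DSM's resource monitor shows, the writes really do come from the deconz process and not from some other container on the same volume.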
Not sure if it could be related, but I am experiencing delays and lag. In VNC the debug log output just freezes for 5-10 s and none of the devices respond during that time. Selecting anything in the deconz window or dragging the log window also doesn't work. I see nothing obviously wrong in the log (no errors), just missing logging during those 5-10 s. Probably some reporting events also go missing, which would explain the missed motion detection. I even went back to deconz 2.17.1: same issue.
BTW: I also had this lag a few months ago, but after changing the Docker container from bridged to macvlan networking that seemed to be solved, until it popped up again recently.
I have been troubleshooting on and off for the last 2 days, because I also had some wifi interference issues at the same time.
[Update] Went back to the Pi4 for now, using the same restored DB and no issues any more.
One difference I noticed: on the Pi4 the log is almost exclusively the expected “ZCL attribute report…” lines, while in Docker I saw a lot of “Skip idle timer callback, too early: elapsed 660 msec” (almost none of these appear on the Pi4).
Just wondering: looking at iotop on the Pi4, I only see a fraction of the I/O (<1% and just a few kB/s) compared to what I saw for the Docker container in DSM (30-40% and 1 MB/s), while the NAS is obviously much more powerful. How is that possible?
To answer my own question, as far as deconz (2.19.2) on my DS720+ Synology NAS is concerned: the high I/O active times decreased a lot when I added the NVMe SSD cache again. Running deconz is now acceptable, so I have turned off the Pi4 for now. The Backup/Restore option in Phoscon works very well!
I have 2 x 2.5" 4 TB disks in the NAS, and while those are certainly not the fastest, I am not sure why the I/O active time was so extreme on the NAS while the Pi4 did not have any issue at all. I guess it is just something specific to Docker on the Syno NAS…