Delayed and Missing Bulb Updates (deCONZ Struggling to Handle Large Networks?)

I’m seeing errors like the ones below all day long, always for the same bulb. What is this about? I can’t say I’ve seen these before. Should I just replace the bulb? It does turn on and off fine.

11:35:51:029 delay sending request 27 dt 3 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0008 onAir: 1,
11:35:51:129 delay sending request 28 dt 0 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
11:35:50:730 delay sending request 28 dt 3 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
11:35:50:429 delay sending request 27 dt 2 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0008 onAir: 1,
12:05:51:429 delay sending request 225 dt 3 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,

Thanks in advance!

Here’s another chunk of errors/notifications that might be related?

12:29:48:006 failed to add task 377217 type: 6, too many tasks,
12:29:48:006 5 running tasks, wait,
12:29:48:006 failed to add task 377220 type: 11, too many tasks,
12:29:48:006 failed to add task 377221 type: 6, too many tasks,
12:29:48:007 5 running tasks, wait,
12:29:48:028 5 running tasks, wait,
12:29:48:129 5 running tasks, wait,
12:29:48:185 	0x001788010BBD4E8D force poll (2),
12:29:48:229 5 running tasks, wait,
12:29:48:329 5 running tasks, wait,
12:29:48:429 5 running tasks, wait,
12:29:48:529 5 running tasks, wait,
12:29:48:611 delay sending request 168 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:48:622 delay sending request 168 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:48:623 delayed group sending,
12:29:48:625 delay sending request 168 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:48:626 delayed group sending,
12:29:48:626 delayed group sending,
12:29:48:629 5 running tasks, wait,
12:29:48:729 5 running tasks, wait,
12:29:48:778 0x0000000000000000 error APSDE-DATA.confirm: 0xE1 on task,
12:29:48:778 delay sending request 168 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:48:829 5 running tasks, wait,
12:29:48:872 delay sending request 168 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:48:929 5 running tasks, wait,
12:29:48:972 0x0000000000000000 error APSDE-DATA.confirm: 0xE1 on task,
12:29:48:974 delay sending request 168 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:49:029 5 running tasks, wait,
12:29:49:129 5 running tasks, wait,
12:29:49:229 5 running tasks, wait,
12:29:49:329 5 running tasks, wait,
12:29:49:370 delay sending request 168 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:49:429 5 running tasks, wait,
12:29:49:440 delay sending request 168 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:49:529 5 running tasks, wait,
12:29:49:598 delay sending request 168 dt 2 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:49:629 5 running tasks, wait,
12:29:49:658 delay sending request 168 dt 2 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:49:706 delay sending request 168 dt 2 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:49:729 5 running tasks, wait,
12:29:49:787 0x0000000000000000 error APSDE-DATA.confirm: 0xE1 on task,
12:29:49:791 delay sending request 168 dt 2 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:49:829 5 running tasks, wait,
12:29:49:929 5 running tasks, wait,
12:29:49:953 0x0000000000000000 error APSDE-DATA.confirm: 0xE1 on task,
12:29:49:954 delay sending request 168 dt 2 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:50:031 5 running tasks, wait,
12:29:50:070 delay sending request 168 dt 2 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:50:129 5 running tasks, wait,
12:29:50:164 delay sending request 168 dt 2 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:50:230 5 running tasks, wait,
12:29:50:262 delay sending request 168 dt 2 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:50:325 delay sending request 168 dt 2 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:50:329 5 running tasks, wait,
12:29:50:410 delay sending request 168 dt 2 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:50:429 delay sending request 168 dt 2 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:50:529 delay sending request 168 dt 3 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:50:621 delay sending request 168 dt 3 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:50:629 delay sending request 168 dt 3 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:50:672 0x00178801097845F4 error APSDE-DATA.confirm: 0xE9 on task,
12:29:50:673 delay sending request 168 dt 3 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:50:730 delay sending request 168 dt 3 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:50:829 delay sending request 168 dt 3 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:50:929 delay sending request 168 dt 3 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:51:029 delay sending request 168 dt 3 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:51:068 delay sending request 168 dt 3 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:51:105 delay sending request 168 dt 3 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:51:129 delay sending request 168 dt 3 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:51:229 delay sending request 168 dt 3 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:51:329 delay sending request 168 dt 3 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:51:429 delay sending request 168 dt 3 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:51:538 delay sending request 168 dt 4 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:51:629 delay sending request 168 dt 4 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:51:651 0x0017880106968FBD error APSDE-DATA.confirm: 0xE9 on task,
12:29:51:652 delay sending request 168 dt 4 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:51:729 delay sending request 168 dt 4 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:29:51:825 	0x001788010BBD4E8D force poll (2),
12:29:51:969 	0x001788010BBD4E8D force poll (2),
12:29:55:494 delayed group sending,
12:29:55:495 delayed group sending,
12:29:55:495 delayed group sending,
12:29:55:495 delayed group sending,
12:29:55:495 delayed group sending,
12:29:55:495 delayed group sending,
12:29:55:495 delayed group sending,
12:29:55:496 delayed group sending,
12:29:55:496 delayed group sending,
12:29:55:496 delayed group sending,
12:29:55:496 delayed group sending,
12:29:55:496 delayed group sending,
12:29:55:729 5 running tasks, wait,
12:29:55:829 5 running tasks, wait,
12:29:55:929 5 running tasks, wait,
12:29:56:030 5 running tasks, wait,
12:29:56:088 0x0000000000000000 error APSDE-DATA.confirm: 0xE1 on task,
12:29:56:129 5 running tasks, wait,
12:29:56:229 5 running tasks, wait,
12:29:56:339 0x0000000000000000 error APSDE-DATA.confirm: 0xE1 on task,
12:29:57:037 0x00178801060959B4 error APSDE-DATA.confirm: 0xE9 on task,
12:29:59:902 reuse dead link (dead link container size now 394)

And a reminder, in case @manup or others don’t remember: I’m the guy with the big system, 368 nodes (1 ConBee II, 17 FLS-CT strip controllers, and 350 Hue bulbs) plus 106 groups.

Here’s another chunk of logs with some scary things:

12:59:48:346 5 running tasks, wait,
12:59:48:346 failed to add task 389217 type: 11, too many tasks,
12:59:48:347 failed to add task 389218 type: 6, too many tasks,
12:59:48:347 5 running tasks, wait,
12:59:48:347 failed to add task 389221 type: 11, too many tasks,
12:59:48:347 failed to add task 389222 type: 6, too many tasks,
12:59:48:347 5 running tasks, wait,
12:59:48:347 failed to add task 389225 type: 11, too many tasks,
12:59:48:347 failed to add task 389226 type: 6, too many tasks,
12:59:48:348 5 running tasks, wait,
12:59:48:348 5 running tasks, wait,
12:59:48:404 5 running tasks, wait,
12:59:48:769 5 running tasks, wait,
12:59:48:798 delay sending request 213 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0008 onAir: 1,
12:59:48:798 delay sending request 214 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:59:48:822 0x0000000000000000 error APSDE-DATA.confirm: 0xE1 on task,
12:59:48:822 delay sending request 213 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0008 onAir: 1,
12:59:48:822 delay sending request 214 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:59:48:864 5 running tasks, wait,
12:59:48:920 delay sending request 213 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0008 onAir: 1,
12:59:48:921 delay sending request 214 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:59:48:979 5 running tasks, wait,
12:59:49:005 delay sending request 213 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0008 onAir: 1,
12:59:49:005 delay sending request 214 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:59:49:054 5 running tasks, wait,
12:59:49:098 delay sending request 213 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0008 onAir: 1,
12:59:49:099 delay sending request 214 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:59:49:124 delay sending request 213 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0008 onAir: 1,
12:59:49:125 delay sending request 214 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,
12:59:49:149 5 running tasks, wait,
12:59:49:210 delay sending request 213 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0008 onAir: 1,
12:59:49:210 delay sending request 214 dt 1 ms to 0x001788010BBD4E8D, ep: 0x0B cluster: 0x0300 onAir: 1,

What does it mean when it says “too many tasks”? I mean, I get it, but why would that ever happen?

Does it happen with automations?

The message can be normal; deCONZ is able to manage them.
But it looks like you are sending too many requests in too little time, so some of them are put into a queue. It also seems some of them are being discarded…

12:29:56:339 0x0000000000000000 error APSDE-DATA.confirm: 0xE1 on task,
12:29:57:037 0x00178801060959B4 error APSDE-DATA.confirm: 0xE9 on task,

It could also be that the device itself has problems handling the requests.

I do have a VERY large system, and I also have an adaptive lighting scheme running that updates color and brightness every 90 seconds on every light that is on. So it’s a busy system. Why can’t deCONZ handle that?
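
For scale, here’s a quick back-of-envelope (my own numbers, assuming the worst case where every light is on and each update goes out per light as a level command plus a color command, matching the 0x0008/0x0300 pairs in the logs):

lights = 367        # 350 Hue bulbs + 17 FLS-CT strip controllers
interval_s = 90     # adaptive-lighting update period
cmds_per_light = 2  # level (cluster 0x0008) + color (cluster 0x0300)

print(lights * cmds_per_light / interval_s)  # ~8.2 commands per second, sustained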

And still - there’s the original question here about the single bulb that keeps showing up in the logs. Strange, no?

Honestly, I don’t know deCONZ’s limits, or whether they come from the deCONZ application or from the Zigbee network. I think @manup knows that better than me.

And yes, that’s why I mentioned the device connection; it’s strange that this one appears so often in the logs. Can you swap it out to run some tests?

I have found these values, but I don’t know why they were chosen like this:

#define MAX_GROUP_SEND_DELAY 5000 // maximum ms between two requests to the same group
#define GROUP_SEND_DELAY 50 // default ms between two requests to the same group
#define MAX_TASKS_PER_NODE 2 // per-node task cap
#define MAX_BACKGROUND_TASKS 5 // the cap behind the "5 running tasks, wait" lines

Good find, @Smanar

This raises a good question, I think, for @manup - can you help us understand why these values are set the way they are? It seems… rather restrictive, especially for large systems.
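
In the meantime, to make the effect of those caps concrete, here is a toy model I put together. To be clear, this is plain Python of my own, not deCONZ’s actual scheduler, and the burst size and node names are made up; it just shows why a burst bigger than MAX_BACKGROUND_TASKS produces the “failed to add task … too many tasks” lines:

from collections import Counter

MAX_BACKGROUND_TASKS = 5   # global cap, per the defines above
MAX_TASKS_PER_NODE = 2     # per-node cap, per the defines above

def submit_burst(commands):
    """Toy model: try to enqueue a burst of (task_id, node) commands at once."""
    running = []             # tasks currently in flight
    per_node = Counter()     # outstanding tasks per node
    for task_id, node in commands:
        if len(running) >= MAX_BACKGROUND_TASKS:
            print(f"failed to add task {task_id}, too many tasks")
        elif per_node[node] >= MAX_TASKS_PER_NODE:
            print(f"task {task_id} delayed, node {node} busy")
        else:
            running.append(task_id)
            per_node[node] += 1
    return running

# A scene change touching 8 groups at once already exceeds the global cap of 5:
burst = [(i, f"group-{i}") for i in range(8)]
accepted = submit_burst(burst)
print(f"{len(accepted)} accepted, {len(burst) - len(accepted)} rejected")

With 106 groups, even a modest scene change can try to queue far more than 5 tasks at once, which lines up with the bursts of “failed to add task … too many tasks” above.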

I have hundreds of bulbs and dozens of deCONZ groups, all rolled up into additional lighting groups in Home Assistant. I have found, through much trial and error, that the “freezing” of the system (which then requires me to leave and re-join the Zigbee network to correct), as well as lights that deCONZ and HA report as having turned on or off when they have not done so, are almost always due to “overloading” (for lack of a better term!) deCONZ by doing “too much” at once. It can be as simple as turning too many groups on or off at once.

I have had to resort to turning lights off group by group, one deCONZ light group at a time, which seems to give deCONZ time to catch up. Turning more than a few groups on or off at once causes the freezing or the mis-reporting of states. It’s very frustrating. It has made it impossible for me to automate more than a few things at once, and also impossible to use circadian-rhythm color and brightness tools, which of course change many lights at the same time.

Yes, I could create some very complex rate-limiting automations, and I could decide not to use circadian rhythm tools, but I shouldn’t have to make those compromises. deCONZ should be able to handle an API request that changes all 367 lights at once (i.e. turning all my lights on or off, or updating color and brightness everywhere) by doing whatever network-protecting rate limiting is needed itself.
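
For what it’s worth, this is the kind of workaround I mean by rate limiting. A minimal sketch against the deCONZ REST API, where the gateway address, API key, group IDs, and delay are placeholders of mine:

import time
import requests

DECONZ = "http://192.168.1.50/api/YOUR_API_KEY"  # placeholder host and key
GROUP_IDS = ["1", "2", "3"]                      # placeholder deCONZ group IDs
DELAY_S = 1.0                                    # pause between group commands

def set_group_on(group_id: str, on: bool) -> None:
    """Send one group action, then let deCONZ's task queue drain."""
    resp = requests.put(f"{DECONZ}/groups/{group_id}/action",
                        json={"on": on}, timeout=5)
    resp.raise_for_status()

for gid in GROUP_IDS:
    set_group_on(gid, on=False)
    time.sleep(DELAY_S)  # serialize instead of firing all groups at once

It works, but at one second per group an “all off” across my 106 groups takes the better part of two minutes, which is exactly the compromise I’d rather not have to make.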

Any thoughts here, @manup? cc: @Mimiix (maybe you can tag “de_employees” for me?)

Thanks for your help, as always!


As you wish :)!

@de_employees


Hi,

The problem is actually already on the table. As you can see in the logs, there are massive delays and dropouts because too many commands are simply being sent at the same time. We will not be able to do much about this; it is a limitation of the standard. There are, of course, ways to work around it.

The easiest would be to reduce the number of groups.
As an example:
If there are 10 lamps in one group and all of them should be set to 80% warm white, a single command goes out. No problem.
If there are 10 lamps in 10 groups, 10 commands go out, and there can be delays or the commands may not be executed at all.
The more groups are switched simultaneously, the greater the delay.
Note that a lamp can of course belong to several groups, which probably makes the reduction easier.
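
In REST API terms, the difference looks roughly like this (a sketch only; the gateway address, API key, and group/light IDs are placeholders):

import requests

API = "http://deconz.local/api/YOUR_API_KEY"  # placeholder gateway URL and key
BODY = {"on": True, "bri": 203, "ct": 370}    # roughly 80% brightness, warm white

# One group of 10 lamps: a single group command goes out.
requests.put(f"{API}/groups/5/action", json=BODY, timeout=5)

# 10 lamps addressed one by one (or via 10 single-lamp groups): 10 commands
# compete for the same air time and the coordinator's task slots.
for light_id in [str(i) for i in range(1, 11)]:
    requests.put(f"{API}/lights/{light_id}/state", json=BODY, timeout=5)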

Another way would be to bring in a second coordinator in the form of another gateway and divide the devices to be controlled between the two. This of course reduces the load. Controlling both via your existing Home Assistant system is no problem.

Best regards.

I’m not sure how Home Assistant handles it. I can well imagine that you have created groups in Home Assistant, but not in our app. My assumption is that HA then sends each device an individual command, which leads to a complete overload.
If you create groups in the Phoscon app, they are also shown in HA, but only one group command is sent.

Home Assistant exposes both lights and groups from deCONZ. It also retries its action (up to 3 retries) when receiving bridge busy errors from deCONZ. Doesn’t deCONZ retry if the network is saturated?
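
For reference, the retry amounts to something like this. A sketch only: I’m assuming the gateway signals “busy” with an HTTP 503 here, which may not be exactly how deCONZ reports it:

import time
import requests

def put_with_retry(url: str, body: dict, retries: int = 3) -> requests.Response:
    """Retry a state change a few times while the gateway reports it is busy."""
    for attempt in range(retries + 1):
        resp = requests.put(url, json=body, timeout=5)
        if resp.status_code != 503:      # assumed "bridge busy" signal
            return resp
        time.sleep(0.5 * (attempt + 1))  # back off a little more each time
    return resp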
