cluster::destroy() freed the cluster memory and its children (attributes,
commands, events) but never removed the cluster from the parent endpoint's
linked list, leaving a dangling pointer. This caused use-after-free crashes
when creating a new cluster on the same endpoint after destroying one.
Fix: look up the parent endpoint via the endpoint_id stored in the cluster
struct and unlink before freeing, consistent with how attribute::destroy,
command::destroy and event::destroy handle their parent lists.
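A minimal sketch of the unlink-before-free pattern, using hypothetical simplified structs (the real esp-matter types and destroy logic differ; this only models the list surgery):

```cpp
#include <cassert>
#include <cstdlib>

// Simplified stand-ins for the real esp-matter internals.
struct cluster_t { unsigned id; unsigned endpoint_id; cluster_t *next; };
struct endpoint_t { unsigned id; cluster_t *cluster_list; };

// Unlink `cluster` from its parent endpoint's list before freeing it,
// mirroring how attribute/command/event destroy handle their parent lists.
// Without the unlink, endpoint->cluster_list keeps a dangling pointer.
void cluster_destroy(endpoint_t *endpoint, cluster_t *cluster)
{
    cluster_t **cur = &endpoint->cluster_list;
    while (*cur && *cur != cluster) {
        cur = &(*cur)->next;
    }
    if (*cur) {
        *cur = cluster->next;   // parent list no longer points at freed memory
    }
    free(cluster);              // now safe to free the cluster (and its children)
}
```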
When using the esp_matter data model (CONFIG_ESP_MATTER_ENABLE_DATA_MODEL=y),
attribute::get(endpoint_id, cluster_id, attribute_id) is called during endpoint
registration via emberAfExternalAttributeReadCallback. If the cluster doesn't
exist on the endpoint, the lookup returns NULL, which is then passed to the
two-argument get(cluster_t*, attribute_id) overload that logs at error level.
Add a NULL guard in the three-argument overload to return nullptr early,
consistent with how command::get(endpoint_id, cluster_id, command_id) already
handles this case.
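A minimal sketch of the guard, with simplified stand-in types (the real lookup walks the esp-matter node/endpoint tables):

```cpp
#include <cassert>
#include <cstdint>

// Simplified stand-ins for the real esp-matter structs.
struct attribute_t { uint32_t id; attribute_t *next; };
struct cluster_t { uint32_t id; attribute_t *attribute_list; cluster_t *next; };
struct endpoint_t { uint16_t id; cluster_t *cluster_list; };

cluster_t *cluster_get(endpoint_t *endpoint, uint32_t cluster_id)
{
    for (cluster_t *c = endpoint ? endpoint->cluster_list : nullptr; c; c = c->next) {
        if (c->id == cluster_id) return c;
    }
    return nullptr;
}

// Two-argument overload: this is the one that logs at error level in the SDK
// when handed a NULL cluster.
attribute_t *attribute_get(cluster_t *cluster, uint32_t attribute_id)
{
    for (attribute_t *a = cluster ? cluster->attribute_list : nullptr; a; a = a->next) {
        if (a->id == attribute_id) return a;
    }
    return nullptr;
}

// Three-argument overload: return nullptr early when the cluster doesn't
// exist, instead of passing NULL down and triggering the error log.
attribute_t *attribute_get(endpoint_t *endpoint, uint32_t cluster_id, uint32_t attribute_id)
{
    cluster_t *cluster = cluster_get(endpoint, cluster_id);
    if (!cluster) {
        return nullptr;
    }
    return attribute_get(cluster, attribute_id);
}
```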
Fixes #1692
attributes
- re-implemented set_val() to write an attribute's value from a TLV
  buffer using the DataModelProvider::WriteAttribute() API.
- renamed the older set_val() to set_val_internal() and made it private.
- replaced the occurrences of set_val() with set_val_internal() inside
  the component. Since our SDK should not need to go through the data
  model for internally managed attributes, it is safe to use
  set_val_internal() there.
- updated release notes
- re-implemented attribute::get_val() to read an attribute's TLV data
  using DataModelProvider::ReadAttribute() and then decode it into an
  esp_matter_attr_val_t.
- renamed the older get_val() to get_val_internal() and made it a
  private API.
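The read-then-decode flow can be modeled roughly as below. This is an illustrative sketch only: `stored_attr`, `decode_attr`, and the cut-down value type are hypothetical stand-ins for DataModelProvider::ReadAttribute(), the chip TLV decoding, and the real esp_matter_attr_val_t (which is a much larger tagged union):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Cut-down stand-in for esp_matter_attr_val_t.
enum class val_type { invalid, boolean, u32 };
struct attr_val { val_type type; union { bool b; uint32_t u32; } val; };

// Hypothetical "provider" result: raw stored bytes plus a type tag,
// standing in for ReadAttribute() output before decoding.
struct stored_attr { val_type type; uint8_t bytes[4]; };

// Decode the raw bytes into the typed value, branching on the stored type.
attr_val decode_attr(const stored_attr &raw)
{
    attr_val out{val_type::invalid, {}};
    switch (raw.type) {
    case val_type::boolean:
        out.type = val_type::boolean;
        out.val.b = raw.bytes[0] != 0;
        break;
    case val_type::u32:
        out.type = val_type::u32;
        std::memcpy(&out.val.u32, raw.bytes, sizeof(uint32_t));
        break;
    default:
        break;
    }
    return out;
}
```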
We have an attribute base that contains only the id and flags. But having
the data type as well would be beneficial when fetching the value of
internally managed attributes: rather than guessing the type, we can
fetch it directly.
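A sketch of the idea, with hypothetical field and enum names (not the real esp-matter definitions):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative type tag; the real SDK has many more variants.
enum class attr_type : uint8_t { boolean, uint8, uint16, uint32, char_str };

// Attribute base extended with the data type: recorded once at create
// time, then read back when fetching an internally managed attribute's
// value instead of guessing the type.
struct attribute_base {
    uint32_t id;
    uint16_t flags;
    attr_type type;
};

attr_type get_attr_type(const attribute_base &attr) { return attr.type; }
```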
endpoint, cluster, and command ids
the current API to get the command handler is a bit cumbersome to use: it
requires an additional flag parameter, and to figure out that flag the
caller has to look into the cluster create APIs and then jump to the
command create APIs.
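A sketch of a simpler lookup keyed by the command id alone, so callers never have to supply the flag. All names here are illustrative, not the real API:

```cpp
#include <cassert>
#include <cstdint>

using command_callback_t = int (*)(void *context);

// Simplified command record: flags are still stored, but they become an
// internal detail rather than a required lookup parameter.
struct command_t {
    uint32_t id;
    uint8_t flags;
    command_callback_t cb;
    command_t *next;
};

// Resolve the handler by id only; callers no longer need to trace the
// cluster/command create APIs to discover the right flag value.
command_callback_t get_callback(command_t *command_list, uint32_t command_id)
{
    for (command_t *c = command_list; c; c = c->next) {
        if (c->id == command_id) {
            return c->cb;
        }
    }
    return nullptr;
}
```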