Ultimate Picker poor performance
On a moderately large site (1,500+ published nodes), we use Ultimate Pickers to create links between certain nodes. The pickers appear in varying locations and are filtered by document type.
As the site grew, we noticed a distinct (and worsening) drop in performance on these nodes, which could take up to 10 seconds to open. Investigation showed the cause was the Ultimate Pickers that filtered by document type and scanned the grandchild nodes of a node near the top of the content tree.
We have removed the offending controls, and the site now runs at the speed we would expect.
Is this something that could be considered for optimisation in these circumstances? I understand our configuration wasn't optimal, but it seems odd that the node IDs don't seem to get stored as a data key when the node is loaded in the backend. The database also gets hammered on save, even though we're only saving a comma-delimited list of node IDs!
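To illustrate the kind of optimisation being suggested: instead of re-walking the content tree and re-filtering by document type on every editor load or save, the picker could cache the filtered node ID list and invalidate it only when content changes. This is a minimal, language-agnostic sketch of that idea; all names (`get_filtered_node_ids`, `fetch_descendants`, `invalidate`) are hypothetical and are not Umbraco APIs.

```python
# Hypothetical in-memory cache of node IDs per (root_id, doc_type) filter.
# None of these names correspond to actual Umbraco APIs; this is only a
# sketch of caching the expensive tree traversal the picker repeats today.
_picker_cache = {}

def get_filtered_node_ids(root_id, doc_type, fetch_descendants):
    """Return cached node IDs under root_id that match doc_type.

    fetch_descendants(root_id) stands in for the expensive traversal
    (database hits for every descendant) currently done on each load/save.
    It should yield (node_id, node_doc_type) pairs.
    """
    key = (root_id, doc_type)
    if key not in _picker_cache:
        _picker_cache[key] = [
            node_id
            for node_id, node_doc_type in fetch_descendants(root_id)
            if node_doc_type == doc_type
        ]
    return _picker_cache[key]

def invalidate(root_id=None):
    """Drop cached entries when content under root_id is published or moved."""
    if root_id is None:
        _picker_cache.clear()
    else:
        for key in [k for k in _picker_cache if k[0] == root_id]:
            del _picker_cache[key]
```

With a cache like this, opening a node with several pickers would hit the database once per filter rather than once per picker per load, and saving a comma-delimited ID list would not need to re-resolve every node.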