  • Thomas Beckert 193 posts 469 karma points
    Oct 20, 2016 @ 16:11
    Thomas Beckert
    0

    Performance issue - over 20,000 content nodes

    Hi,

    we just created about 20,000 content nodes, and this has led to a critical performance breakdown.

    When I save or delete a node, it takes about 8 seconds on average. We are running our Umbraco instance on a virtual machine with 4 vCPUs and 12 GB RAM. Before creating this large number of nodes we had about 500-600 nodes, and everything worked fine.

    The performance issue only shows up on read/write operations: creating, updating and deleting nodes.

    Is there a way to increase the performance, or is Umbraco not able to handle this number of nodes?

    I could add 2 more vCPUs, maybe that helps. Any other hints / advice?

    Best regards -

    Tom

  • Alex Skrypnyk 6148 posts 24097 karma points MVP 8x admin c-trib
    Oct 20, 2016 @ 16:19
    Alex Skrypnyk
    1

    Hi Tom,

    Try the UnVersion package. It automatically removes previous versions for those times when a version history isn't important and you don't want it taking up database space.

    https://our.umbraco.org/projects/website-utilities/unversion/

    Thanks,

    Alex

  • Alex Skrypnyk 6148 posts 24097 karma points MVP 8x admin c-trib
    Nov 07, 2016 @ 11:03
    Alex Skrypnyk
    0

    Hi Thomas Beckert,

    Did you find a solution?

    Can you share it with the community?

    Thanks,

    Alex

  • Thomas Beckert 193 posts 469 karma points
    Nov 07, 2016 @ 11:09
    Thomas Beckert
    1

    Hi Alex,

    Unfortunately not. The UnVersion tool did not have any effect on the performance. The next thing I'll try is to separate the SQL Server from my current VM web server. Maybe there just isn't enough SQL power. I'll keep you up to date.

  • Alex Skrypnyk 6148 posts 24097 karma points MVP 8x admin c-trib
    Nov 07, 2016 @ 12:44
    Alex Skrypnyk
    0

    Hi Tom,

    It's really strange, because 20,000 nodes are not a problem at all for Umbraco.

    Do you maybe have some event handlers on creating or saving?

    What version of Umbraco are you using?

    Does Examine work fine?

    Please take a look at this topic - https://our.umbraco.org/forum/developers/api-questions/61151-Performance-of-bulk-Content-updates-with-ContentService - there are nice lessons there for a similar problem.
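    The main takeaway there, roughly, is to go through ContentService in batches and suppress the save events. A rough sketch of that idea (parentId and the "updatedValue" property alias are just placeholders for illustration):

        // Namespaces assumed: System, System.Collections.Generic, Umbraco.Core, Umbraco.Core.Models
        var contentService = ApplicationContext.Current.Services.ContentService;

        var toSave = new List<IContent>();
        foreach (var child in contentService.GetChildren(parentId))
        {
            // placeholder property update, just to have something to save
            child.SetValue("updatedValue", DateTime.Now.ToString());
            toSave.Add(child);
        }

        // One Save call for the whole batch, with raiseEvents: false so that
        // save handlers (and the per-node work they trigger) don't fire for each node
        contentService.Save(toSave, 0, raiseEvents: false);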

    Thanks,

    Alex

  • Sotiris Filippidis 286 posts 1501 karma points
    Nov 07, 2016 @ 13:04
    Sotiris Filippidis
    1

    I will +1 Alex on this. 20000 nodes by themselves are no cause for reduced back-end performance, EXCEPT if there are event handlers that do things on creating or saving nodes (maybe if not from you, then from a plugin?)
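    For reference, this is the kind of thing to look for (in your own code or in a package). The typical Umbraco 7 pattern is a handler wired up at startup, and anything expensive inside it runs on every single save:

        // Namespaces assumed: Umbraco.Core, Umbraco.Core.Services
        public class MyEventHandlers : ApplicationEventHandler
        {
            protected override void ApplicationStarted(UmbracoApplicationBase umbracoApplication,
                ApplicationContext applicationContext)
            {
                ContentService.Saved += (sender, e) =>
                {
                    foreach (var node in e.SavedEntities)
                    {
                        // Anything slow in here (extra saves, web requests, re-indexing)
                        // gets multiplied by the number of nodes being saved.
                    }
                };
            }
        }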

  • Thomas Beckert 193 posts 469 karma points
    Nov 07, 2016 @ 13:16
    Thomas Beckert
    0

    We do have some event handlers, but they only use the Umbraco API, e.g. for creating sub-nodes, and only for certain doctypes. I also commented the event code out. The read/write operations of the ContentService are the ones that lag a lot. At the moment, when I save and publish a node, it takes about 2-3 seconds. When I had about 80,000 nodes, it was about 8-10 seconds. It took a long time to delete the nodes back down to 20,000. I wrote a script for that - to avoid a system crash I only deleted 200 nodes per call.
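    For illustration, the batched delete was roughly along these lines (a simplified sketch, not the exact script; parentId is a placeholder):

        // Namespaces assumed: System.Linq, Umbraco.Core
        var contentService = ApplicationContext.Current.Services.ContentService;

        // Take at most 200 children per call so a single request never has to
        // delete tens of thousands of nodes in one go.
        var batch = contentService.GetChildren(parentId).Take(200).ToList();
        foreach (var item in batch)
        {
            contentService.Delete(item);
        }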

  • Sotiris Filippidis 286 posts 1501 karma points
    Nov 07, 2016 @ 13:20
    Sotiris Filippidis
    0

    Are there any Examine-related event handlers that do strange stuff with the Lucene index when publishing / saving / deleting? I'm only asking because I've been there :)
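    For example, something along these lines (a made-up sketch, assuming the default "ExternalIndexer") would add work to every save / publish / delete:

        // Namespace assumed: Examine
        var indexer = ExamineManager.Instance.IndexProviderCollection["ExternalIndexer"];
        indexer.GatheringNodeData += (sender, e) =>
        {
            // e.Fields holds the values about to be written to the Lucene index;
            // expensive lookups or extra service calls in here add up quickly.
            e.Fields["searchableText"] = string.Join(" ", e.Fields.Values);
        };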

  • Thomas Beckert 193 posts 469 karma points
    Nov 07, 2016 @ 13:24
    Thomas Beckert
    0

    Nope. We only use the event handlers to create sub-nodes or to delete database-related content.

  • Sotiris Filippidis 286 posts 1501 karma points
    Nov 07, 2016 @ 13:32
    Sotiris Filippidis
    0

    Okay, then since you have commented out the event code for those events you're essentially clear of any time-consuming event code, right? And, as you are saying, it still takes a lot of time to do any database related operation.

    If this is the case, my next thought would be the SQL server itself. Have you noticed increased memory usage or heavy disk I/O there when you do such operations? I've seen that a database-wide index rebuild can help in such cases, depending on index fragmentation. I usually use this guy's scripts: https://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html

    I'm not saying that this definitely is your problem, but if I were you I'd try to eliminate all possible causes of delay to see what's left.

  • Alex Skrypnyk 6148 posts 24097 karma points MVP 8x admin c-trib
    Nov 07, 2016 @ 13:42
    Alex Skrypnyk
    0

    Also you can try to turn off the XML cache on disk, and test the solution with only the database storing the data.

    Try in config/umbracoSettings.config (the setting sits in the <content> section):

     <XmlCacheEnabled>False</XmlCacheEnabled>
    
  • Thomas Beckert 193 posts 469 karma points
    Nov 07, 2016 @ 14:28
    Thomas Beckert
    0

    Thanks for the hints. I'll let you know the result. The index optimizer seems very promising @Sotiris - is there a best-practice parameter set for an Umbraco database? There is indeed higher I/O traffic and also CPU usage when I perform a lot of read/write operations in the Umbraco instance.

  • Sotiris Filippidis 286 posts 1501 karma points
    Nov 07, 2016 @ 14:31
    Sotiris Filippidis
    0

    I usually use one of the default calls, seems to be enough, like this (replace MY-DATABASE-NAME with your database):

    EXECUTE dbo.IndexOptimize
    @Databases = 'MY-DATABASE-NAME',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30
    
  • Thomas Beckert 193 posts 469 karma points
    Nov 07, 2016 @ 15:20
    Thomas Beckert
    0

    OK, I tried the index optimization, but it brought no improvement.

    How do I test: I create a node "adventskalender" (it represents an advent calendar and has 24 sub-nodes that are created in the Saved event of the adventskalender node). The code looks like this:

        if (node.ContentType.Alias == "adventskalender")
        {
            // Start --> AutoSort (only runs when a new node has been saved for the
            // first time; "Id" is only dirty on the very first save)
            var dirty = (IRememberBeingDirty)node;
            var isNew = dirty.WasPropertyDirty("Id");

            if (isNew)
            {
                // Create and publish the 24 child nodes, one per day of December
                for (int i = 1; i < 25; i++)
                {
                    var adventtag = contentService.CreateContent(i.ToString() + ". Dezember", node.Id, "adventskalenderTag", 0);
                    adventtag.SetValue("tagImDezember", i);
                    contentService.SaveAndPublishWithStatus(adventtag, 0, false);
                }
            }
        }

    This operation takes 1 minute and 47 seconds (before and after the index optimization). Creating 24 nodes shouldn't be such a great effort that it results in such a long creation time.

    What I do know is that as soon as I delete nodes in my instance, performance increases. I guess if I deleted half of the current nodes, the action would take under one minute.

    Disable XML cache: I did not find this tag in umbracoSettings.config or web.config.

  • Sotiris Filippidis 286 posts 1501 karma points
    Nov 07, 2016 @ 15:39
    Sotiris Filippidis
    0

    It shouldn't take that long.

    Kind of a long shot, but have you tried SaveAndPublish() instead of SaveAndPublishWithStatus()?

    Do you also have any custom Examine indexers, other than the default ones?

  • Sotiris Filippidis 286 posts 1501 karma points
    Nov 07, 2016 @ 15:47
    Sotiris Filippidis
    0

    Also, I noticed an "autosort" comment - do you perform some sorting elsewhere in the code on publish?

  • Thomas Beckert 193 posts 469 karma points
    Nov 07, 2016 @ 15:57
    Thomas Beckert
    0

    SaveAndPublish is obsolete and it is advised to use the WithStatus function instead. Or do you have any experience of it being faster with the old function?

    I do have a custom Examine provider, but the doctype "adventskalender" is not included in the index.

    Sorting does not appear anywhere in my code. That comment was an old one from another project. Sorry for the confusion.

  • Sotiris Filippidis 286 posts 1501 karma points
    Nov 07, 2016 @ 16:09
    Sotiris Filippidis
    0

    No, I don't, hence the "kind of a long shot" statement :)

    I'm out of ideas :) Unless there is something else going on in your event handlers, I can't imagine where the problem lies. Question: Have you tested the whole site (including data) on another machine?

  • Nicholas Westby 2054 posts 7103 karma points c-trib
    Nov 07, 2016 @ 16:21
    Nicholas Westby
    0

    Umbraco just seems to currently have performance issues with large amounts of content nodes. For example, I had an issue where I couldn't publish 11,000 content nodes using the built-in dialog to publish content and all descendants: http://issues.umbraco.org/issue/U4-9113

    Unfortunately, it seems that Shannon misunderstood the issue, so I'm not sure if this is likely to be addressed anytime soon.

    By the way, I also had issues deleting content nodes. I had about 10,000 content nodes a while back that I needed to delete. Like you, I had to delete about 200 at a time (or was it move 200 at a time... can't remember), or else I'd run into an out of memory exception.
