  • Nicholas Westby 2054 posts 7100 karma points c-trib
    Oct 21, 2019 @ 17:00

    Community Opinions: Benchmarking Performance in the Umbraco Core?

    There is this really useful website called uBenchmarks that shows how fast various operations in Umbraco are: https://ubenchmarks.offroadcode.com/

    It shows things like how fast (or slow) it is to get content by its ID:

    [Chart: uBenchmarks results for getting content from the cache by ID]

    This is really useful for a number of reasons:

    • It shows developers which versions of Umbraco should be avoided (e.g., 8.1.0 appears to be the slowest one for this particular metric).
    • It shows the Umbraco community (HQ and others) areas for improvement.
    • It shows progress (including improvements and regressions) over time.

    However, it isn't always updated in a timely manner, and I have reason to question its longevity.

    I'm thinking something like this should be part of the Umbraco core. Rather than create a GitHub ticket, I thought I'd start here first to get ideas from the community about how this might be best implemented.

    For example, maybe this is part of the test suite. But then, that's not all that accessible unless you are running the Umbraco source code yourself.

    Or maybe as part of the build/release process, it can update the documentation website automatically (or some other website that is purely for performance information).

    There is also the matter of ensuring the metric is stable over time, and I'm not sure of the best way to solve that. A few options come to mind (see the sketch after this list):

    • Use a particular machine or hardware configuration (e.g., in AWS) so all tests are on a level playing field.
    • Apply a hardware factor (kind of like the Windows Experience Index) to either adjust the numbers (like is sometimes done with currency and inflation) or just put that factor next to the numbers for reference.
    • Re-run all the numbers for all versions of Umbraco each time a release is made, so the comparison will be valid on any machine.
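
    To illustrate the second option, here is a minimal sketch of how such a hardware factor might be applied. Everything in it (the BenchmarkResult type, the factor, the numbers) is hypothetical and purely illustrative; none of it exists in uBenchmarks or Umbraco.

        using System;

        // Hypothetical shape for a single benchmark measurement.
        public record BenchmarkResult(string Operation, string UmbracoVersion, double MeanMs);

        public static class BenchmarkNormalizer
        {
            // hardwareFactor = the current machine's speed relative to the
            // reference machine (2.0 = twice as fast). Multiplying scales raw
            // timings back up to reference-equivalent values, so results from
            // different machines can sit side by side.
            public static BenchmarkResult Normalize(BenchmarkResult raw, double hardwareFactor)
                => raw with { MeanMs = raw.MeanMs * hardwareFactor };
        }

        public static class Program
        {
            public static void Main()
            {
                var raw = new BenchmarkResult("GetById", "8.1.0", 4.2);
                var normalized = BenchmarkNormalizer.Normalize(raw, 2.0);
                Console.WriteLine($"{normalized.Operation} ({normalized.UmbracoVersion}): {normalized.MeanMs} ms");
            }
        }

    Publishing the factor alongside the raw numbers (rather than silently adjusting them) would keep the data transparent.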

    What are your thoughts on how (or even if) such a thing should be implemented?

  • Peter Duncanson 430 posts 1360 karma points c-trib
    Oct 23, 2019 @ 13:37

    Hi Nick,

    Thanks for starting this post, and for most of your kind words about our little project, uBenchmarks :)

    Let's knock this one on the head first:

    "However, it isn't always updated in a timely manner, and I have reason to question its longevity."

    It's not updated in a timely manner for a reason: it takes an age to run! Well, it does if we want the stats to be valid, and given the heat and attention they can bring, we want them to be as valid as they can be. We don't like having quirks and we don't want to be accused of not being fair.

    All the tests on all the versions currently have to be run on the same hardware, multiple times, which means running all of them in one batch. Currently this takes just over 4 days! It's not a case of just adding on the new release and testing only that; there could be changes between the runs that could skew the data. That said, now that V7 is effectively mothballed, we could stop running those tests and just re-run V8 going forward, now that we have a good baseline of results.

    Longevity-wise, we've been running it for over 2 years now. Stephen, the main developer of it, has now left Offroadcode for pastures new, but we have an agreement for him to keep it up and running for at least another year.

    Could we open source it and have everyone working on it? Yes, we could. However, the code to do the tests is far, far, far from pretty, and due to the changes between versions there are multiple fixes/changes/hacks to keep them working. It's not a nice place to work in, hence Stephen doing most of the magic; he didn't want anyone else having to learn all those quirks.

    Should it be in the core? Maybe something like it, yes, but I personally really like the independent nature of it. If it was in the core, it might be tempting to hide some metrics, tweak things to get better results for the test only, or just stop updating them because "we would rather be spending that time working on new features". I do know that internally HQ do have some tests that they run, but they are very different, private tools used by the developers, i.e., don't expect them to become public (probably for the same reasons ours haven't been open sourced yet either).

    In the future there might be an Azure instance that can just sit there and run multiple tests all day long, 365 days a year, that the community can contribute to across multiple versions, with everything reported on automatically, but it's a hell of an ask to put all that together. If we'd started out making the framework of uBenchmarks with that sort of goal in mind, it would never have seen the light of day. Instead it was an evening/lunchtime/spare-day project which returned results real fast. It does just enough to do the job, and the time saved by not going all out on it is used to make Umbraco better or drink more tea. That is not to say the ease of use (or lack thereof) of the test code should reflect on the validity of the test results: the results are solid and the tests work, they just ain't pretty under the hood :)

    Stephen and I have put plenty of hours into ensuring that the results are valid, correct, fair, accurate, and the best we can produce given the shifting-sand nature of Umbraco releases. HQ have often reached out and enquired about results here and there, and have helped us fine-tune some of our results, especially around how to accurately record memory usage. But the tests so far have remained closed source.

    If you have any ideas on how to make it better or faster, I'm all ears. Maybe open sourcing is the way to go? I'm open to that in theory, but I suspect the practicalities will get in the way. I'm not really ready or capable of handling the additional load on our time to support all the queries, docs, issues, etc. to help others come into something that was never built to be open sourced, especially for something that is ultimately a nice-to-have and is run once every 3-4 months. I think that time would be better spent elsewhere.

    Pete

  • Ronald Barendse 39 posts 217 karma points hq c-trib
    Oct 25, 2019 @ 11:58

    Having benchmarks be part of the release build might indeed be a good idea, so performance regressions can be identified even before a version is released.

    There's already a great open source tool for this: https://github.com/dotnet/BenchmarkDotNet.
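
    For a sense of what that could look like, here is a minimal BenchmarkDotNet sketch. Only the BenchmarkDotNet attributes and runner are real API; the IContentFetcher abstraction and its dictionary-backed stand-in are hypothetical placeholders for however a release build would wire up an actual Umbraco content cache.

        using System.Collections.Generic;
        using BenchmarkDotNet.Attributes;
        using BenchmarkDotNet.Running;

        // Hypothetical stand-in for Umbraco's published content cache.
        public interface IContentFetcher
        {
            object GetById(int id);
        }

        // Trivial in-memory implementation so the sketch compiles and runs;
        // a real benchmark would resolve content from a bootstrapped Umbraco site.
        public class DictionaryContentFetcher : IContentFetcher
        {
            private readonly Dictionary<int, object> _items = new() { [1234] = new object() };
            public object GetById(int id) => _items.TryGetValue(id, out var item) ? item : null;
        }

        [MemoryDiagnoser] // also reports allocations, handy for memory-usage metrics
        public class ContentCacheBenchmarks
        {
            private IContentFetcher _fetcher;

            [GlobalSetup]
            public void Setup() => _fetcher = new DictionaryContentFetcher();

            [Benchmark]
            public object GetContentById() => _fetcher.GetById(1234);
        }

        public static class Program
        {
            public static void Main() => BenchmarkRunner.Run<ContentCacheBenchmarks>();
        }

    A nice property of BenchmarkDotNet is that it handles warm-up, iteration counts, and statistical analysis itself, which speaks directly to the "valid and fair" concerns raised above.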

  • Nicholas Westby 2054 posts 7100 karma points c-trib
    Nov 01, 2019 @ 16:54

    Hi Pete,

    Thanks for your thorough reply. Very insightful, as always.

    I agree with everything you are saying, and it sounds like there isn't necessarily much appetite to have this integrated into the core, so I'll leave the matter alone for now.

    I'm happy to hear that it will keep going for at least another year.
