  • Dean 1 post 21 karma points
    Feb 01, 2024 @ 13:03

    Umbraco 11.4.2 - lots of deadlocks on umbracoLock table?

    We are running Umbraco 11.4.2 in Azure with Azure SQL. We have 18k members, and the site runs on a 50 DTU Azure SQL database and a P2v2 App Service plan.

    However, our Umbraco log is often full of deadlock errors like the one below.

    These seem to occur when members log in and a member save happens. When these events happen, our Umbraco front end and back office have a momentary outage; when it's back, everything is normal until the next deadlock.

    We were wondering if anyone else has seen this behaviour before, and whether anyone has any thoughts on what might be causing it?

    Microsoft.Data.SqlClient.SqlException (0x80131904): Transaction (Process ID 76) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
       at Microsoft.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
       at Microsoft.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
       at Microsoft.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
       at Microsoft.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
       at Microsoft.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString, Boolean isInternal, Boolean forDescribeParameterEncryption, Boolean shouldCacheForAlwaysEncrypted)
       at Microsoft.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean isAsync, Int32 timeout, Task& task, Boolean asyncWrite, Boolean inRetry, SqlDataReader ds, Boolean describeParameterEncryptionRequest)
       at Microsoft.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean& usedCache, Boolean asyncWrite, Boolean inRetry, String method)
       at Microsoft.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, Boolean sendToPipe, Int32 timeout, Boolean& usedCache, Boolean asyncWrite, Boolean inRetry, String methodName)
       at Microsoft.Data.SqlClient.SqlCommand.ExecuteNonQuery()
       at Umbraco.Cms.Infrastructure.Persistence.FaultHandling.RetryPolicy.ExecuteAction[TResult](Func`1 func)
       at NPoco.Database.ExecuteNonQueryHelper(DbCommand cmd)
       at NPoco.Database.Execute(String sql, CommandType commandType, Object[] args)
    ClientConnectionId:3669e8c0-a0d2-44ff-ab83-3bd2ef2af77c
    Error Number:1205,State:51,Class:13
    ClientConnectionId before routing:5a670904-5617-40a6-b0d7-6ec876c5a2a9
    Routing Destination:c2b5ba574579.tr10008.uksouth1-a.worker.database.windows.net,11002
    

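    The error message itself suggests rerunning the transaction. Purely for illustration (this is not code from our site, and the injected service, retry count and delay are all assumptions), a minimal wrapper that reruns a member save when SQL Server reports deadlock error 1205 might look like this:

    // Hypothetical sketch: rerun a member save when SQL Server picks it as the
    // deadlock victim (error 1205). Retry count and back-off are illustrative only.
    using System;
    using System.Threading;
    using Microsoft.Data.SqlClient;
    using Umbraco.Cms.Core.Models;
    using Umbraco.Cms.Core.Services;

    public class MemberSaveRetry
    {
        private readonly IMemberService _memberService;

        public MemberSaveRetry(IMemberService memberService)
            => _memberService = memberService;

        public void SaveWithRetry(IMember member, int maxAttempts = 3)
        {
            for (var attempt = 1; ; attempt++)
            {
                try
                {
                    _memberService.Save(member);
                    return;
                }
                catch (SqlException ex) when (ex.Number == 1205 && attempt < maxAttempts)
                {
                    // Chosen as deadlock victim - back off briefly, then rerun.
                    Thread.Sleep(TimeSpan.FromMilliseconds(200 * attempt));
                }
            }
        }
    }
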

  • Chris Hall 22 posts 94 karma points
    Jun 20, 2024 @ 15:49

    Hi Dean,

    Did you ever get any insight on this? I am in a similar position - we have an upcoming event where we expect around 600 concurrent members to be using the system. (They will be completing a process, and each stage of that process involves reading/writing member properties via the member API - roughly the kind of operation sketched below.)
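
    For context, a rough sketch of the kind of per-stage read/write I mean - the property aliases, the email lookup and the service shape are made up for illustration, not our actual code:

    // Hypothetical sketch of one stage: read a member property, update a couple of
    // values and save. The Save call is the write that contends with concurrent
    // logins/saves. The property aliases here are made up.
    using System;
    using Umbraco.Cms.Core.Services;
    using Umbraco.Extensions;

    public class StageProgressService
    {
        private readonly IMemberService _memberService;

        public StageProgressService(IMemberService memberService)
            => _memberService = memberService;

        public void CompleteStage(string email, int stageNumber)
        {
            var member = _memberService.GetByEmail(email);
            if (member is null)
            {
                return;
            }

            // Read current progress, then record the completed stage.
            var currentStage = member.GetValue<int>("currentStage");
            member.SetValue("currentStage", Math.Max(currentStage, stageNumber));
            member.SetValue("lastStageCompletedUtc", DateTime.UtcNow);

            _memberService.Save(member);
        }
    }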

    From load testing, it doesn't seem to handle much more than about 10 concurrent users before it hits both deadlock and lock-acquisition errors attempting these reads/writes - even after scaling up to a vCore-based Azure SQL DB, which if anything seemed just as bad, if not worse.

    I wonder if you ever found a solution?

    Thanks, Chris

  • Chris Hall 22 posts 94 karma points
    Jul 08, 2024 @ 15:29

    Just to follow up on this, if anyone's interested or facing the same issue...

    The only way we were able to resolve the deadlocks was by moving the app and DB onto a single VM of their own. The network latency between the Azure App Service and the Azure SQL DB seemed to be the cause.

    I could replicate this to some degree by running load tests against a development instance of my app on my local machine with a remote DB; again, this was mitigated by running the DB and app on the same machine.

    Also, raising the Umbraco read/write lock timeout periods to their maximum was necessary to get the results we needed (see the configuration sketch below).
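
    For reference, the timeouts in question are the distributed locking timeouts in the global settings section of appsettings.json - if memory serves, DistributedLockingReadLockDefaultTimeout and DistributedLockingWriteLockDefaultTimeout under Umbraco:CMS:Global (check the GlobalSettings docs for your version). A sketch with illustrative values, not the exact ones we used:

    {
      "Umbraco": {
        "CMS": {
          "Global": {
            "DistributedLockingReadLockDefaultTimeout": "00:02:00",
            "DistributedLockingWriteLockDefaultTimeout": "00:02:00"
          }
        }
      }
    }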

    From the load-testing results, it could handle 600 concurrent users without running into deadlocks, given a meaty enough VM spec.
