Azure app service suddenly started crashing daily
Morning all,
Umbraco 7.12.1 running as an Azure app.
We've had perfectly stable performance since launching the site in October - until the early hours of the morning yesterday, when the app service stopped responding.
A manual restart of the app service fixed the issue, and the site ran fine all day until the early hours of this morning, when exactly the same thing happened again.
So two nights in a row, the site has bombed at about 2:00AM UK time (which is usually a low traffic/quiet time for us). Prior to this, we've had no down time at all for months.
Just wondering if anyone else has experienced any issues with Umbraco running on Azure this week? We've not deployed any code changes to the site since 7 January - so it's not a bug we've introduced.
Scratching my head working out how to debug such a problem. Any pointers welcome :)
Thanks,
Steve.
Steve,
I am guessing content changes have been made? Can you take a look at your Umbraco logs - is there anything funny in there?
Regards
Ismail
Hello Ismail,
Last log entry before the crash (in the Umbraco log folder) is:
2019-01-18 02:45:13,116 [P11680/D2/T36] INFO Umbraco.Core.PluginManager - Resolved Umbraco.Core.Models.PublishedContent.PublishedContentModel (took 0ms)
The next log entry is 4 hours later, when I manually restarted the app service - after getting the phone call we all hate - telling me "hey, the website is down" :(
2019-01-18 06:42:51,619 [P11680/D2/T29] ERROR Umbraco.Core.UmbracoApplicationBase - Unhandled exception in AppDomain (terminating)
System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
Possibly that mention of "out of memory" is a false positive, connected with the app service being manually restarted?
Steve.
Steve,
Anything in the IIS logs, e.g. a page being hit just before it goes down?
Regards
Ismail
We're going to look at this now. Another strange thing - when it goes down, the app appears to become completely unresponsive, i.e. we're not logging lots of ASP.NET 500 errors. Our custom server error page is bypassed and we're seeing a plain-text "503 - The service is unavailable" message. Googling suggests this may mean the app has run out of resources, but on checking there's no evidence of that either - everything looks healthy.
What kind of plan have you got the web app and DB in? This rings a bell - we had SQL in a lower plan and things started going down every now and then; once we upgraded the plan, the error went away.
Thanks - we've just upgraded the plan actually (this morning), because my colleague believes it could have been a memory issue (or lack of memory).
One of the ways we improve the performance of our fairly large site is to cache a lot of stuff (content pulled from other sites/services, and in some cases, the result of expensive Umbraco node queries) using MemoryCache. And we wonder if perhaps the growing content of the site (combined with a recent spike in the number of visitors) had caused the IIS process to run out of RAM.
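For what it's worth, the pattern is roughly the sketch below (assuming System.Runtime.Caching.MemoryCache; the cache name, limits and expiry are made-up values rather than our actual code). The point is that an explicit memory cap plus an absolute expiration stops the cache growing without bound:

using System;
using System.Collections.Specialized;
using System.Runtime.Caching;

public static class SiteCache
{
    // Dedicated cache with an explicit cap, rather than the unbounded default instance.
    // The 200 MB limit and 20-minute lifetime are illustrative values only.
    private static readonly MemoryCache Cache = new MemoryCache("siteCache",
        new NameValueCollection
        {
            { "cacheMemoryLimitMegabytes", "200" },   // hard cap on cache size
            { "physicalMemoryLimitPercentage", "50" } // trim when the box is under memory pressure
        });

    public static T GetOrAdd<T>(string key, Func<T> factory) where T : class
    {
        var cached = Cache.Get(key) as T;
        if (cached != null)
            return cached;

        var value = factory();
        Cache.Set(key, value, new CacheItemPolicy
        {
            // Absolute expiration so long-lived entries can't accumulate forever.
            AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(20)
        });
        return value;
    }
}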
We've just upgraded to P2V2 (with 7GB RAM), to see if that solves the problem. I believe we run the production SQL database (and a development SQL database), plus 3 app services on the same plan - so it is quite a lot of stuff running. Gets quite expensive as you upgrade though.
Also, there seems to be a debate about whether to run the app in 32-bit or 64-bit mode. We were running as 32-bit (because that seemed to be the advice), but we've tried switching to 64-bit at the same time as upgrading to a higher tier.
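In case it helps anyone else, here's a rough sketch (a hypothetical helper, not from our codebase) of the one-liner we could write to the Umbraco log on startup to confirm the worker process really did pick up the 64-bit setting, and to keep an eye on memory over time:

using System;
using System.Diagnostics;

public static class ProcessDiagnostics
{
    // Returns a one-line snapshot that can be logged at startup (and periodically),
    // so it's easy to grep for in the run-up to a crash.
    public static string Snapshot()
    {
        var proc = Process.GetCurrentProcess();
        return string.Format(
            "64-bit process: {0}, private bytes: {1} MB, managed heap: {2} MB",
            Environment.Is64BitProcess,
            proc.PrivateMemorySize64 / (1024 * 1024),
            GC.GetTotalMemory(false) / (1024 * 1024));
    }
}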
Hey Steve, wondering if your problem has gone away?
With a bit of luck, yes.
We think that upgrading to a tier with more memory has solved the problem (along with switching to 64-bit).
If it breaks again, I'll be sure to come back here and post :)
We have something similar going on... we went back through the load balancing and Azure File System provider information and found a few things we missed. It's running better now; we'll know soon if that took care of everything.