On Sun, Mar 11, 2012 at 02:40:19PM -0400, Bruce Taber wrote:
> On 03/11/2012 12:25 PM, David Engel wrote:
> > On Sun, Mar 11, 2012 at 12:13:21PM -0400, Daniel Kristjansson wrote:
> >> On Sun, 2012-03-11 at 10:50 -0400, Tom Lichti wrote:
> >>> On Sat, Mar 10, 2012 at 8:41 PM, Gary Buhrmaster
> >>> <firstname.lastname@example.org> wrote:
> >>> Then may I respectfully suggest that we are going down the wrong path
> >>> on this? I've used the default network settings, the tweaked settings
> >>> on the page Daniel suggested, and Daniel's own settings, and none of
> >>> them has made a difference. The only thing that made any noticeable
> >>> difference was Daniel's commit the other day, but that only made
> >>> recordings go from completely un-watchable to highly annoying (and
> >>> still un-watchable).
> >> Thanks for testing the network settings though it was important
> >> to eliminate that possibility. I've also gotten a report off-line
> >> that logging is not the culprit. So far the only commonality
> >> appears to be high CPU usage. My commit slightly lowered CPU usage,
> >> but increasing that timeout further won't do much.
> >> I plan to run an oprofile on mythbackend with a couple HDHomeRun
> >> recordings going sometime in the coming week. Even though I'm not
> >> seeing the issue on my i5, it should still show where CPU is being
> >> used and hopefully show something useful.
> > I'm running a git bisect today to try to find when the higher CPU
> > usage started. That might not be the real cause of the problem, but I
> > still want to figure that out.
> > David
> I began to see these types of bad recordings about the same time as the
> commit for the code that flags them as bad. Back in November? I don't
> believe that was the culprit but more of a coincidence. I don't really
> have a way to determine that beyond a shadow of a doubt.
Your recollection is spot on except for being just a hair off on the
date.

Daniel, I traced my high CPU usage to the following commit.
Author: Daniel Thor Kristjansson <email@example.com>
Date:   Fri Dec 2 16:19:42 2011 -0500

    Adds recording quality tracking to DTV recorders.
Specifically, it is the following code in DTVRecorder::BufferedWrite().
    if (!timeOfFirstData.isValid() && curRecording)
        timeOfFirstData = mythCurrentDateTime();

    uint64_t now = mythCurrentDateTime().toTime_t();
    if (!timeOfLatestData.isValid() ||
        (now - timeOfLatestData.toTime_t() >= 5))
        timeOfLatestData = mythCurrentDateTime();
With that code commented out and 6 tuners busy, my mythbackend CPU
usage drops from ~150% (1.5 of 2 cores busy) to ~20%. The
QMutexLocker is responsible for some of the higher usage, but the
biggest culprits are the toTime_t() and isValid() time calls.
Strangely, with my debugging reverted and all tuners busy, I didn't
see the same recording corruption I did yesterday. Perhaps my problem
yesterday was transient or I'm right on the edge and a slightly
different mix of content pushes me over.
David

> I've gone round and round in testing everything I could think of to see
> if I could identify the problem. The last step has been to replace the
> slave backend system with one that is more powerful. That has seemed to
> alleviate the problem. The same drives from the old system were moved
> into the new. I think most of the bad recordings that still exist in my
> setup are ones that were recorded on the previous slave. I hadn't
> noticed any incidences of high CPU usage but they may have been missed.
> And I haven't seen any new recordings with the same problem.
> mythtv-dev mailing list