Mailing List Archive

REPAIR/OPTIMIZE in HouseKeeper (was Re: Running optimize_mythdb.pl before mythfilldatabase)
On 02/19/2007 10:27 PM, Michael T. Dean wrote:
> I'm also considering asking Chris Pinkham if he'd be interested in a
> patch that puts the REPAIR and OPTIMIZE in the housekeeper.

Looking deeper into this, I'm starting to think this may not be a good idea...

REPAIR TABLE works only for MyISAM and ARCHIVE storage engines, so it
would fail for any users who have switched to InnoDB, MERGE, BDB or
whatever.

OPTIMIZE TABLE works only for MyISAM, BDB, and InnoDB. For BDB,
OPTIMIZE TABLE is mapped to ANALYZE TABLE.

ANALYZE TABLE works with MyISAM, BDB, and (as of MySQL 4.0.13) InnoDB
tables.

Although it would be possible to ignore failures of these commands, it
doesn't make any sense to run them daily on unsupported storage engines.
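
(If someone does want to script this outside the housekeeper, the script
can check each table's engine first and only issue the statements that
engine supports. A rough, untested sketch--assuming MySQL 4.1+, where
SHOW TABLE STATUS reports the engine in the second column, a mythconverg
database, and credentials in ~/.my.cnf:)

#!/bin/sh
# Sketch only: per-table maintenance that respects the storage engine.
DB=mythconverg

mysql -N -B -e "SHOW TABLE STATUS FROM $DB" | while read -r name engine rest; do
    case "$engine" in
        MyISAM)
            # MyISAM supports all three statements
            mysql -e "REPAIR TABLE $name; OPTIMIZE TABLE $name; ANALYZE TABLE $name;" "$DB"
            ;;
        InnoDB|BDB)
            # no REPAIR TABLE here, but OPTIMIZE/ANALYZE are usable
            mysql -e "OPTIMIZE TABLE $name; ANALYZE TABLE $name;" "$DB"
            ;;
        *)
            echo "skipping $name (engine $engine not handled)"
            ;;
    esac
done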

Also, if we were running a "hidden" REPAIR TABLE and the server crashed,
another REPAIR TABLE should be executed immediately upon server startup
(before reading from the table) or there could be data loss. Because
Myth uses tables in the database to get information it needs before
housekeeping begins, Myth can't guarantee this will happen. However, if
the user /chooses/ to run REPAIR TABLE on her own, she should understand
the importance of "cleaning up the mess" if the server dies.

So, other options would include allowing the user to specify a script
that can be run at particular times (e.g. one on server startup and
another for daily cleanup) and providing sample scripts that do the
REPAIR on startup and OPTIMIZE daily.
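
(For the daily one, the sample probably wouldn't need to be anything
fancier than a few mysqlcheck calls. A sketch, with the database name
and the assumption that credentials live in ~/.my.cnf:)

#!/bin/sh
# Sketch of a daily maintenance script for a (MyISAM) mythconverg database.
DB=mythconverg

mysqlcheck --auto-repair --check "$DB"   # check tables, repair any found broken
mysqlcheck --optimize "$DB"              # defragment data and sort indexes
mysqlcheck --analyze "$DB"               # refresh index statistics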

The server startup script is unnecessary because, generally, mythbackend
itself is started from a startup script--which could be modified to
execute any additional commands/scripts the user desires (e.g.
mysqlcheck, optimize_mythdb.pl, or whatever).
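
(E.g. a tiny wrapper around the normal backend startup, roughly like the
sketch below; the paths are assumptions, and the repair step really only
applies to MyISAM tables:)

#!/bin/sh
# Hypothetical pre-start hook: check/repair the database *before* mythbackend
# comes up, so a crash during an earlier REPAIR can't leave broken tables in use.
mysqlcheck --auto-repair --check mythconverg
# then start the backend with whatever arguments the init script normally passes
exec /usr/bin/mythbackend "$@"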

The daily cleanup script is "somewhat" useful, but because the user can
specify a script name to run for mythfilldatabase, it's probably not
necessary. Granted, mythfilldatabase is only run on the master backend,
but I really can't think of a reason to run a daily script on a slave
backend.

Even though the help text for the "mythfilldatabase Program" field in
frontend settings doesn't explicitly mention the possibility of doing a
REPAIR/OPTIMIZE/ANALYZE, changing it to do so is probably not worthwhile,
as it would make the help text more complex. A user just trying to get
things working shouldn't have to worry about the "fine-tuning"--that can
come later as he is reading the HOWTO/wiki/lists/...

So, although letting Myth worry about maintaining database integrity
sounds good in theory, due to differences in users' configurations, I
think we need to leave things as they are--such that REPAIRs and
OPTIMIZEs and ANALYZEs are the user's responsibility.

Mike
Re: REPAIR/OPTIMIZE in HouseKeeper (was Re: Running optimize_mythdb.pl before mythfilldatabase)
Michael T. Dean <mtdean@thirdcontact.com> says:
> I think we need to leave things as they are--such that REPAIRs and
> OPTIMIZEs and ANALYZEs are the user's responsibility.

Which brings us back to my suggestion of adding the optimize_mythdb.pl
functionality to mythfilldatabase, callable through command-line
flags.

--
Yeechang Lee <ylee@pobox.com> | +1 650 776 7763 | San Francisco CA US
Re: REPAIR/OPTIMIZE in HouseKeeper (was Re: Running optimize_mythdb.pl before mythfilldatabase)
On 02/22/2007 12:18 PM, Yeechang Lee wrote:
> Michael T. Dean <mtdean@thirdcontact.com> says:
>
>> I think we need to leave things as they are--such that REPAIRs and
>> OPTIMIZEs and ANALYZEs are the user's responsibility.
>>
> Which brings us back to my suggestion of adding the optimize_mythdb.pl
> functionality to mythfilldatabase, callable through command-line
> flags.
>
I like the idea, but rather than code the functionality into the C++
program, why not just make a sample script for the contrib directory
(like optimize_mythdb.pl--or even just modify optimize_mythdb.pl) to
call mythfilldatabase? Then, users wanting optimize/repair (or ANALYZE
or ...) in their mfdb script can use it rather than calling
mythfilldatabase directly.

Scripts are much more easily modified by users, and since database
repair/analysis/optimization are very user-/configuration-specific
requirements...
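
(Roughly: a script that does the maintenance and then execs the real
mythfilldatabase with whatever arguments the backend passed it. A sketch
using mysqlcheck in place of the Perl DBI calls optimize_mythdb.pl makes;
paths and credentials are assumptions:)

#!/bin/sh
# Sketch of an "mfdb wrapper" for the mythfilldatabase Program setting:
# do the database maintenance first, then hand off to the real program,
# preserving the original arguments.
mysqlcheck --auto-repair --check --silent mythconverg
mysqlcheck --optimize --silent mythconverg
exec /usr/bin/mythfilldatabase "$@"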

So, something like the attached, perhaps. But, since I'm not a Perl
coder, I'm sure it could be fixed up a bit (e.g. is the arg passing safe?).

Mike
REPAIR/OPTIMIZE in HouseKeeper (was Re: Running optimize_mythdb.pl before mythfilldatabase)
> Date: Thu, 22 Feb 2007 11:52:23 -0500
> From: "Michael T. Dean" <mtdean@thirdcontact.com>

> On 02/19/2007 10:27 PM, Michael T. Dean wrote:
> > I'm also considering asking Chris Pinkham if he'd be interested in a
> > patch that puts the REPAIR and OPTIMIZE in the housekeeper.

> Looking deeper into this, I'm starting to think this may not be a good idea...

I agree with you that it's not a good idea, and for an entirely
different set of reasons as well. Here's what I wrote a couple
of days ago, then held off to see if anyone else had some opinions:

No! Please no!

Unless you can guarantee that the housekeeper will -never- be running
while a recording is in progress (or several minutes beforehand!),
please do NOT add things that lock tables for extended periods of time
and run when the user can't control them.

Otherwise, you WILL trash recordings in progress from anything that
must write to recordedmarkup/recordedseek, at least until the current
issue of interaction between seek inserts & buffer-reading is resolved.
[And remember that, by default in recent releases, mfdb runs at
-random- times chosen by DataDirect, not by the user, hence the user
never knows when a repair/optimize will lock a table and glitch a
recording.]

(I'd argue that Myth should instead use a decent database. When I
started using Myth, I had no direct experience with MySQL, and figured
that all databases were roughly equal---but that's because I know DB
theory, have used other non-MySQL DB's, and it never occurred to me
that MySQL would fall so flat on a variety of things the theorists had
solved decades ago. Unfortunately, between MySQL's whole-table-locking
approach [which is half of the problem wrt the current issue with
locking out DB inserts & thus locking out buffer-emptying from IVTV,
as per the discussion of a few weeks ago], and MySQL's need for
constant maintenance (*), I really wish it was possible to use any
other database that isn't so fragile. I'm quite pleased that usleep
has been trying to add the necessary hooks to actually enable, e.g.,
postgres or something else to be used.)

(*) E.g., because pretty much every time somebody says anything at all
DB-related, you (mtdean) come back with "have you repaired & optimized?"
It'd be nice to use a database that didn't require -daily- optimization
just to perform well, and I've certainly seen enough mail in the last
year and a half on this list from people saying "Myth is acting weird"
only to discover that their tables are broken that I no longer have
much faith in MySQL actually working well without constant attempts
at repair. I mean, just imagine how much less traffic there'd be on
the list if the database Myth was using wasn't a continual source of
buggy behavior.
Re: REPAIR/OPTIMIZE in HouseKeeper (was Re: Running optimize_mythdb.pl before mythfilldatabase)
f-myth-users@media.mit.edu wrote:
> (I'd argue that Myth should instead use a decent database. When I
> started using Myth, I had no direct experience with MySQL, and figured
> that all databases were roughly equal---but that's because I know DB
> theory, have used other non-MySQL DB's, and it never occurred to me
> that MySQL would fall so flat on a variety of things the theorists had
> solved decades ago. Unfortunately, between MySQL's whole-table-locking
> approach [which is half of the problem wrt the current issue with
> locking out DB inserts & thus locking out buffer-emptying from IVTV,
> as per the discussion of a few weeks ago], and MySQL's need for
> constant maintenance (*), I really wish it was possible to use any
> other database that isn't so fragile. I'm quite pleased that usleep
> has been trying to add the necessary hooks to actually enable, e.g.,
> postgres or something else to be used.)
>
> (*) E.g., because pretty much every time somebody says anything at all
> DB-related, you (mtdean) come back with "have you repaired & optimized?"
> It'd be nice to use a database that didn't require -daily- optimization
> just to perform well, and I've certainly seen enough mail in the last
> year and a half on this list from people saying "Myth is acting weird"
> only to discover that their tables are broken that I no longer have
> much faith in MySQL actually working well without constant attempts
> at repair. I mean, just imagine how much less traffic there'd be on
> the list if the database Myth was using wasn't a continual source of
> buggy behavior.

What do you suggest as an alternative - Postgres, SQLite, Firebird?

I have never heard of daily optimisation to get MySQL to perform, and I
look after a few 100 req/sec installs.

Dave
Re: REPAIR/OPTIMIZE in HouseKeeper (was Re: Running optimize_mythdb.pl before mythfilldatabase)
On 02/22/2007 05:31 PM, f-myth-users@media.mit.edu wrote:
> No! Please no!
>
> Unless you can guarantee that the housekeeper will -never- be running
> while a recording is in progress (or several minutes beforehand!),
> please do NOT add things that lock tables for extended periods of time
> and run when the user can't control them.
>

I'm working with Chris Pinkham on that right now. We have an initial
plan, and I'm considering making some adjustments. (Probably should
have mentioned that to Chris before saying so here, though...) It'll
probably be a couple of weeks before I have the patch (travel for work
is getting in the way of my Myth time).

> Otherwise, you WILL trash recordings in progress from anything that
> must write to recordedmarkup/recordedseek, at least until the current
> issue of interaction between seek inserts & buffer-reading is resolved.
> [And remember that, by default in recent releases, mfdb runs at
> -random- times chosen by DataDirect, not by the user, hence the user
> never knows when a repair/optimize will lock a table and glitch a
> recording.]
>

But housekeeping isn't just running mythfilldatabase. DailyCleanup is
much more consistent about its execution time.

> (*) E.g., because pretty much every time somebody says anything at all
> DB-related, you (mtdean) come back with "have you repaired & optimized?"
>

Yes. Which is why I was considering putting it in daily cleanup. But,
I no longer plan to do so.

However, regarding the not-running during recordings, the approach we're
taking is to allow a job to specify whether to run anyway (i.e. if
recordings taking place during the job window will prevent the job from
running that day due to a requested lead-time constraint). The
mythfilldatabase execution will be set to run anyway (because missing it
could cause missed/incorrect recordings, which is probably more annoying
to users than recording glitches). However, daily cleanup will not be
run anyway (we can always "catch up" on it tomorrow).

So, this is the one benefit to adding another "daily" script to run. It
could be set up to run as part of daily cleanup (and /not/ ignore the
specified "lead time"--as doing a daily repair/optimize isn't critical),
while mfdb can be executed during a recording if absolutely necessary.

I don't have any personal reason to choose one approach or the other, so
I'll leave it up to the devs whether an extra script to configure is
worth the benefit (to possibly only a small number of users)--especially
since that benefit can be achieved through careful selection of cron job
execution time or even through some fancy scriptwork.
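
(E.g. a crontab entry aimed at a window when recordings are unlikely;
the time and path below are only placeholders:)

# run optimize_mythdb.pl at 05:30 daily, when recordings are least likely
30 5 * * *    /usr/local/bin/optimize_mythdb.pl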

Mike

[OT]:

/me wonders if you would still have the "trashed recordings" issue with
a more current version of Myth (like 0.20-fixes or even 0.19-fixes)...

Only wondering because I don't have these issues with 4x pcHDTV
HD-3000's (often all four are recording simultaneously), MySQL server
running on the master backend, non-RAID'ed PATA disks; and
mythfilldatabase--and even optimize_mythdb.pl--run during recordings
sometimes. Note that I'm not saying you should upgrade as I don't know
whether there is a difference that would help you. Another system I
manage has 4x PVR-250's, combined frontend/backend, with MySQL server
(and Apache httpd and ProFTPd and TeamSpeak server and a bunch of other
junk), non-RAID'ed PATA disks, and haven't had any recording issues with
it, either. (I used to use LVM on the SDTV system, but don't, anymore.)

Re: REPAIR/OPTIMIZE in HouseKeeper (was Re: Running optimize_mythdb.pl before mythfilldatabase)
> solved decades ago. Unfortunately, between MySQL's whole-table-locking


You can avoid this by using innodb instead of myisam as has been
mentioned before on this list. Though I agree: since most are using myisam,
we definitely want to avoid having the database writes done the way they
are now.
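
(The switch itself is just an ALTER TABLE per table. A sketch for the two
tables that get hammered during recordings--back up mythconverg first, and
note that REPAIR TABLE no longer applies to the converted tables:)

#!/bin/sh
# Sketch only: convert the seek/markup tables to InnoDB to get row-level
# locking instead of whole-table locks (MySQL 4.1+ syntax).
mysql mythconverg -e "ALTER TABLE recordedseek ENGINE=InnoDB;"
mysql mythconverg -e "ALTER TABLE recordedmarkup ENGINE=InnoDB;"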


> as per the discussion of a few weeks ago], and MySQL's need for
> constant maintenance (*), I really wish it was possible to use any


You know, it's funny. I've been using mysql "professionally" in my day to
day work, on literally hundreds of machines including my own on everything
from tiny to massive databases and I have never had to do more than a repair
on a table when a machine lost power mid disk write. The only time I've had
anything "Weird" happen and got "spontaneous" corruption was what turned out
to be an XFS bug causing data corruption all over the partition.

This is on many types of hardware, for many applications on several OS's.

Am I just lucky or what?


Re: REPAIR/OPTIMIZE in HouseKeeper (was Re: Running optimize_mythdb.pl before mythfilldatabase)
On Thu, Feb 22, 2007 at 08:40:09PM -0400, Greg Estabrooks wrote:
> > as per the discussion of a few weeks ago], and MySQL's need for
> > constant maintenance (*), I really wish it was possible to use any
>
>
> You know, it's funny. I've been using mysql "professionally" in my day to
> day work, on literally hundreds of machines including my own on everything
> from tiny to massive databases and I have never had to do more than a repair
> on a table when a machine lost power mid disk write. The only time I've had
> anything "Weird" happen and got "spontaneous" corruption was what turned out
> to be an XFS bug causing data corruption all over the partition.

I can't explain why (not that I've really tried to figure it out), but
I've seen more issues with my MythTV database running on Linux/ext3 than
with all the other MySQL databases I've supported combined, including
some with far more activity on less common platforms.

--
Michael Heironimus
Re: REPAIR/OPTIMIZE in HouseKeeper (was Re: Running optimize_mythdb.pl before mythfilldatabase)
Greg Estabrooks <greg@phaze.org> says:
> > solved decades ago. Unfortunately, between MySQL's
> > whole-table-locking
>
> You can avoid this by using innodb instead of myisam as has been
> mentioned before on this list.

Huh. Would "whole-table locking" explain my issue 1) as I describe at
<URL:http://www.gossamer-threads.com/lists/mythtv/users/247868#247868>?
If so, how would I make the switch?

--
Yeechang Lee <ylee@pobox.com> | +1 650 776 7763 | San Francisco CA US
Re: REPAIR/OPTIMIZE in HouseKeeper (was Re: Running optimize_mythdb.pl before mythfilldatabase)
On 2/23/07, David Campbell <dave@cpfc.org> wrote:
>
> What do you suggest as an alternative - Postgres, SQLite, Firebird?


Personally I've never had a problem with MySQL, but there are enough scary
stories here to make me nervous of it (plus I had some trouble with charsets
upgrading from 4 to 5 that I really shouldn't have had to deal with, though I
think that might have been either my fault or Gentoo's). I'd love to be
able to use Oracle, but that's just because I use it at work.

Steve
REPAIR/OPTIMIZE in HouseKeeper (was Re: Running optimize_mythdb.pl before mythfilldatabase)
> Date: Thu, 22 Feb 2007 18:34:59 -0500
> From: "Michael T. Dean" <mtdean@thirdcontact.com>

> > Unless you can guarantee that the housekeeper will -never- be running
> > while a recording is in progress (or several minutes beforehand!),
> > please do NOT add things that lock tables for extended periods of time
> > and run when the user can't control them.

> I'm working with Chris Pinkham on that right now. We have an initial
> plan, and I'm considering making some adjustments. (Probably should
> have mentioned that to Chris before saying so here, though...) It'll
> probably be a couple of weeks before I have the patch (travel for work
> is getting in the way of my Myth time).

Good news.

> But housekeeping isn't just running mythfilldatabase. DailyCleanup is
> much more consistent about its execution time.

Okay. Still has to avoid stepping on recordings, though.

> > (*) E.g., because pretty much every time somebody says anything at all
> > DB-related, you (mtdean) come back with "have you repaired & optimized?"

> Yes. Which is why I was considering putting it in daily cleanup. But,
> I no longer plan to do so.

Right.

> However, regarding the not-running during recordings, the approach we're
> taking is to allow a job to specify whether to run anyway (i.e. if
> recordings taking place during the job window will prevent the job from
> running that day due to a requested lead-time constraint). The
> mythfilldatabase execution will be set to run anyway (because missing it
> could cause missed/incorrect recordings, which is probably more annoying
> to users than recording glitches). However, daily cleanup will not be
> run anyway (we can always "catch up" on it tomorrow).

I'm presuming that just mfdb doesn't cause glitches 'cause it won't be
locking tables long enough (especially after the seek/buffer/lock
issue gets fixed). If we get operational experience that mfdb -does-
glitch, then it should get run as soon as practical after DD's
suggested time, such that it doesn't also run during a recording.
That's complicated (and will tend to quantize runtimes to start on
half-hours, etc, without additional cleverness/randomization), so I'm
hoping that level of complexity is unnecessary. (I don't have data
'cause I've arranged things such that mfdb never runs when I'm recording.)

> So, this is the one benefit to adding another "daily" script to run. It
> could be set up to run as part of daily cleanup (and /not/ ignore the
> specified "lead time"--as doing a daily repair/optimize isn't critical),
> while mfdb can be executed during a recording if absolutely necessary.

Assuming it won't glitch the recording.

> I don't have any personal reason to choose one approach or the other, so
> I'll leave it up to the devs whether an extra script to configure is
> worth the benefit (to possibly only a small number of users)--especially
> since that benefit can be achieved through careful selection of cron job
> execution time or even through some fancy scriptwork.

> Mike

> [OT]:

> /me wonders if you would still have the "trashed recordings" issue with
> a more current version of Myth (like 0.20-fixes or even 0.19-fixes)...

Yes. Many others have reported exactly the same problems in .19, .20,
and SVN, which is why Chris has been looking into it. The problem is
that the scheduler runs when a recording ends, its queries hold a lock
on the entire recordedmarkup/recordedseek table, and that hangs the
process that's emptying video buffers; see all the prior discussion
from a few weeks ago. As long as that lock is held, nothing can help
besides huge ivtv buffers (which -still- can't be made big enough,
since this lock can be held for 30 seconds and those buffers chew up
RAM and have max size limits---and non-ivtv sources may not even allow
that level of configurability) or a more-efficient scheduler query,
and apparently that query hasn't gotten much more efficient. (And
then you're still playing a game of catchup; the solution is to avoid
the lock completely, not just try to shave a few seconds off a marginal
situation.)

> Only wondering because I don't have these issues with 4x pcHDTV
> HD-3000's (often all four are recording simultaneously),

You're not understanding something here. It's not an I/O load issue.

Do HD3000's not write data into recordedmarkup/recordedseek? Are you
using innoDB? Do you have particularly simple scheduling rules with
very few "all channels" situations? And---very important in the other
direction---do you record 100% of all your recordings with hard
padding appended to their ends? [After all, if you rarely postroll,
that means that one show ends -just- as one begins, which puts the
scheduler query right at the start of the new recording, where it's
likely that any corruption will be overlooked 'cause it's not actually
part of the program you're watching, or is perhaps written off as "the
tuner is weird for the first few seconds and writes bad data", or
whatever---except in the common case of a program that ends on the
half-hour in the middle of another that goes the full hour, etc. But
if you typically postroll -and- typically have a recording still in
progress on another tuner when that postroll ends, that puts the
scheduler query smack in the middle of something you're trying to
watch, where it's really obvious. Yet not doing pre/postrolls
guarantees (in my situation) simply losing beginnings and endings of
recordings, so that's no solution, either, and wouldn't help the
half-hour/full-hour program case, either.]

Also note that it's not just "end of recording"---deletions also cause
scheduler runs and also glitch my recordings. I'm having to be
careful never to delete anything while a tuner is in operation,
unless I feel like waiting until only -one- tuner is running and
doing the deletion during a commercial. This is hardly what a "PVR"
is all about... :)

So it's not a "data rate" situation---it doesn't -matter- how many
instances of the same type of tuner you have, because (unlike what
everyone told me for MONTHS until I tried every possible solution and
started asking some obnoxious questions), what's stalling the buffers
is NOT contention for the disk---it's the database lock. You can lose
data from even a single running tuner if that lock is held. If you've
got 4-6 running in parallel, you'll just lose data from ALL of them.

And since I never even -considered- that (a) a scheduler query had
anything to do with recordedmarkup, and (b) MySQL would lock an
entire -table- so commonly, it never even occurred to me that it
was a database lock and not I/O performance.

> MySQL server
> running on the master backend, non-RAID'ed PATA disks; and
> mythfilldatabase--and even optimize_mythdb.pl--run during recordings
> sometimes. Note that I'm not saying you should upgrade as I don't know
> whether there is a difference that would help you. Another system I
> manage has 4x PVR-250's, combined frontend/backend, with MySQL server
> (and Apache httpd and ProFTPd and TeamSpeak server and a bunch of other
> junk), non-RAID'ed PATA disks, and haven't had any recording issues with
> it, either. (I used to use LVM on the SDTV system, but don't, anymore.)

Right, but again, you're saying, "Look at all this load, and yet it's
working!" But it's not load. I'm not losing on the disk head
thrashing, nor am I losing on total bandwidth to disk. I'm losing
because MySQL is holding a lock for 30 seconds while the scheduler
runs, and that held lock stops the process that's reading ivtv buffers
dead in its tracks. It LOOKED like a disk head/thrashing/contention
issue for a long time, of course, because IF enough of the DB happened
to be in memory and/or otherwise didn't happen to hit the disk too
hard, the scheduler query happened just fast enough that ivtv's
buffers didn't overflow because the lock was only held for 10 seconds
and not 20 (etc). So I spent a few dozen hours trying (and testing,
and wondering at the inconclusiveness of the tests) all the canonical
solutions, e.g., multiple spindles, different filesystems, different
kernel I/O scheduling algorithms, optimizing the DB every four hours,
and a pile of other things that you can find documented in my previous
posts on this problem, some of which -appeared- to help but didn't
actually fix the problem for much longer than the duration of the
test.
Re: REPAIR/OPTIMIZE in HouseKeeper (was Re: Running optimize_mythdb.pl before mythfilldatabase)
> scheduler runs and also glitch my recordings. I'm having to be
> careful never to delete anything while a tuner is in operation,
> unless I feel like waiting until only -one- tuner is running and
> doing the deletion during a commercial. This is hardly what a "PVR"
> is all about... :)


Now that would be annoying, since I delete stuff manually all the time. :)

I imagine even autoexpiry of recordings would trigger a query update,
and those can happen anytime the housekeeper runs... though it does
make me wonder if it happens once after a full expiring round or
once per expired recording. Might have to look into that one to
make sure it's not calling the BFSQ more than once.

Re: REPAIR/OPTIMIZE in HouseKeeper (was Re: Running optimize_mythdb.pl before mythfilldatabase)
On 02/22/2007 10:25 PM, f-myth-users@media.mit.edu wrote:
> > /me wonders if you would still have the "trashed recordings" issue with
> > a more current version of Myth (like 0.20-fixes or even 0.19-fixes)...
>
> Yes. Many others have reported exactly the same problems in .19, .20,
> and SVN, which is why Chris has been looking into it. The problem is
> that the scheduler runs when a recording ends, its queries hold a lock
> on the entire recordedmarkup/recordedseek table, and that hangs the
> process that's emptying video buffers; see all the prior discussion
> from a few weeks ago.

I remember it. I read it closely--and with interest--because I've never
seen any of those issues.

> > Only wondering because I don't have these issues with 4x pcHDTV
> > HD-3000's (often all four are recording simultaneously),
>
> You're not understanding something here. It's not an I/O load issue.
>
> Do HD3000's not write data into recordedmarkup/recordedseek?

They do. It's MPEG-2, just like with ivtv, so it needs a seektable.

> Are you using innoDB?

Nope. MyISAM.

> Do you have particularly simple scheduling rules with
> very few "all channels" situations?

72 recording rules, including 1 power search (new series), 4 Find One
(for movies), and 67 Any Channel/All episodes rules. Oh, and I just
cleaned out about 10 Any/All rules last week.

> And---very important in the other
> direction---do you record 100% of all your recordings with hard
> padding appended to their ends? [After all, if you rarely postroll,
> that means that one show ends -just- as one begins, which puts the
> scheduler query right at the start of the new recording, where it's
> likely that any corruption will be overlooked 'cause it's not actually
> part of the program you're watching, or is perhaps written off as "the
> tuner is weird for the first few seconds and writes bad data", or
> whatever---except in the common case of a program that ends on the
> half-hour in the middle of another that goes the full hour, etc. But
> if you typically postroll -and- typically have a recording still in
> progress on another tuner when that postroll ends, that puts the
> scheduler query smack in the middle of something you're trying to
> watch, where it's really obvious. Yet not doing pre/postrolls
> guarantees (in my situation) simply losing beginnings and endings of
> recordings, so that's no solution, either, and wouldn't help the
> half-hour/full-hour program case, either.]
>

No hard padding, but I probably have 1 show per night ending at :01
after the hour while others are recording. Also, when mfdb finishes, it requests
a complete reschedule, and I know that happens relatively frequently
while I'm recording on multiple cards.

> Also note that it's not just "end of recording"---deletions also cause
> scheduler runs and also glitch my recordings. I'm having to be
> careful never to delete anything while a tuner is in operation,
> unless I feel like waiting until only -one- tuner is running and
> doing the deletion during a commercial. This is hardly what a "PVR"
> is all about... :)
>

Oh, and I watch and delete, and most of my watching time is during
primetime, which is also when I record about 90% of my shows, so I do a
lot of deletes during recording. (I am using slow deletes, but that
shouldn't have any effect on it.)

But, unfortunately, I can't help diagnose the problem (which is why I
didn't participate in the discussion)--because I'm not seeing it and I
don't know why not. So, really, you can just ignore my wondering.

Mike
Re: REPAIR/OPTIMIZE in HouseKeeper
Yeechang Lee <ylee@pobox.com> writes:

> Greg Estabrooks <greg@phaze.org> says:
>> > solved decades ago. Unfortunately, between MySQL's
>> > whole-table-locking
>>
>> You can avoid this by using innodb instead of myisam as has been
>> mentioned before on this list.
>
> Huh. Would "whole-table locking" explain my issue 1) as I describe at
> <URL:http://www.gossamer-threads.com/lists/mythtv/users/247868#247868>?
> If so, how would I make the switch?

Probably. You know, it's not that hard to check with the backend to
see if it's busy and/or how long it is until the next recording. Here are
a couple of scraps from my jumbo myth cron job that handles copying files
around, running mythfilldatabase, optimizing and backing up the databases,
and so on.

# policy 1 - run whenever nothing is recording: ask the backend status
# port (6544) and bail out if any encoder reports that it is recording
/bin/echo -e -n "GET / HTTP 1.0\r\nHost: 127.0.0.1:6544\r\n\r\n" | nc -w 5 localhost 6544 | grep "Encoder .* is local on .* and is recording" > /dev/null 2>&1
if [ $? -eq 0 ]; then
    exit 0
fi

And this checks the time until the next recording:

use Socket;    # for inet_aton, sockaddr_in, PF_INET, SOCK_STREAM

sub TimeToNextShow {
    # Connect to the mythbackend status socket and read the time of the
    # next program to be recorded.  $host and $verbose are set elsewhere
    # in the enclosing script.
    $port = 6544; # mythbackend status port
    $iaddr = inet_aton($host) || die "no host: $host";
    $paddr = sockaddr_in($port, $iaddr);

    $proto = getprotobyname('tcp');
    socket(SOCK, PF_INET, SOCK_STREAM, $proto) || die "socket: $!";
    connect(SOCK, $paddr) || die "connect: $!";
    syswrite SOCK, "GET / HTTP 1.0\r\nHost: 127.0.0.1:6544\r\n\r\n";
    while (<SOCK>) {
        # If we're recording now, go away
        if( /Encoder \d+ is local on \w+ and is recording:/ ) {
            $end_time = time;
            print "Recording now, time to go: $_" if($verbose);
            exit 1;
        }

        # Grab the time of the first scheduled recording.  Assume it is
        # within 24 hours, else there's something very wrong
        $am = "";
        if( ($nhour,$nmin,$am) = /<a href="#">\w+ \d+\/\d+ (\d+):(\d+) ([AP]M )?- / ) {
            # 12 am is 0, 12 pm is 12
            if( $nhour == 12 ) {
                $nhour = 0;
            }
            if( $am =~ /PM / ) {
                $nhour += 12;
            }
            # get rid of warnings
            $isdst = $yday = $sec = $wday = $mon = $year = $mday = 0;
            ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime;
            if( $nhour < $hour ) {
                $nhour += 24;
            }
            $end_time = time + (($nhour - $hour)*60 + ($nmin - $min))*60;
            print "We have until ".scalar(localtime($end_time))."\n" if ( $verbose );
            last;
        }
    }
    close (SOCK) || die "close: $!";
    $end_time;    # return the projected end of the idle window
}
Re: REPAIR/OPTIMIZE in HouseKeeper
Tony Lill <ajlill@ajlc.waterloo.on.ca> says:
> You know, it's not that hard to check with the backend to see if
> it's busy and/or how long to the next recording.

I know. If you'd read the message I referenced, you'd have seen that
a) I already have my own version of your script for checking whether a
mythbackend is recording or not and b) I'm talking about something
different, which is how to prevent mythfrontend from being
unresponsive to button presses during the database backup process. As
MythTV is as yet neither telepathic nor precognitive, no script can
predict at any given moment whether I'll want to fast-forward a
recording or not. (Maybe in 0.30.)

--
Yeechang Lee <ylee@pobox.com> | +1 650 776 7763 | San Francisco CA US
Re: REPAIR/OPTIMIZE in HouseKeeper (was Re: Running optimize_mythdb.pl before mythfilldatabase)
David Campbell wrote:
> I have never heard of daily optimisation to get MySQL to perform, and I
> look after a few 100 req/sec installs.
>

MythTV is the first application I've seen where daily optimization is
recommended, and I've worked with some pretty sizable MySQL databases.
I don't know if its database utilization is really atypical or what the
issue is. If we switch to PostgreSQL, I wonder if we'll see the same
issues, just with people running 'vacuum' instead of 'optimize'?
Re: REPAIR/OPTIMIZE in HouseKeeper
Yeechang Lee <ylee@pobox.com> writes:

> Tony Lill <ajlill@ajlc.waterloo.on.ca> says:
>> You know, it's not that hard to check with the backend to see if
>> it's busy and/or how long to the next recording.
>
> I know. If you'd read the message I referenced, you'd have seen that
> a) I already have my own version of your script for checking whether a
> mythbackend is recording or not and b) I'm talking about something
> different, which is how to prevent mythfrontend from being
> unresponsive to button presses during the database backup process. As
> MythTV is as yet neither telepathic nor precognitive, no script can
> predict at any given moment whether I'll want to fast-forward a
> recording or not. (Maybe in 0.30.)

Must have missed that message. Anyway, the way I predict whether or
not I'm going to press a button is by checking if the time is between
3am and 8am. No ESP required !-)
--
Tony Lill, Tony.Lill@AJLC.Waterloo.ON.CA
President, A. J. Lill Consultants fax/data (519) 650 3571
539 Grand Valley Dr., Cambridge, Ont. N3H 2S2 (519) 241 2461
--------------- http://www.ajlc.waterloo.on.ca/ ----------------

Understatement of the century:
"Hello everybody out there using minix - I'm doing a (free) operating
system (just a hobby, won't be big and professional like gnu) for
386(486) AT clones"

- Linus Torvalds, August 1991
