> RecordFilePrefix is the backend's recording directory so as the same
> thing, what you've said is confusing. What I said is, if a backend
> had been setup for this host, the RecordFilePrefix (backend's recording
> directory) for this hostname would exist and the frontend would look
> for the file in that directory. This has been true since about 0.8 .
I know, it was one of the first few things I committed. :)
http://cvs.mythtv.org/trac/changeset/1350

> If the host has only been used as a frontend and mythtv-setup was
> never run on the host, there would be no RecordFilePrefix for the
> hostname. Two ways to rectify this would be to run mythtv-setup to
> set the "Directory to hold recordings" (which is RecordFilePrefix)
> or to edit the settings table manually to add one.
We're both right, there appears to be an inconsistency in SVN. :(
A while back, the ProgramInfo::GetPlaybackURL() method was added to try to
deal with the issue of whether a file could be played back locally or not.
It is used by mythcommflag and mythtranscode, but as I look over
tv_play.cpp, it does not appear that TV uses GetPlaybackURL() at
all, so it relies on the old code in the RingBuffer to determine whether a
file can be played back locally or not.
ProgramInfo::GetPlaybackURL() first checks if we're playing back on the
same machine that the file was recorded on and if so, the 'URL' returned is
the path to the local file. If we're not on the recording host, the code
checks to see if the file exists in the same location it would be if we
were on the recording host and returns the path to the local file. If the
file does not exist in the same directory as the recording host, it then
checks to see if there is a local RecordFilePrefix and checks that directory.
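The lookup order just described can be sketched as follows. This is an illustrative rewrite, not the actual MythTV implementation: the helper names, the argument list, and the simplified myth:// URL (the real one carries more than just the hostname) are all assumptions.

```cpp
#include <string>
#include <sys/stat.h>

// Hypothetical helper: true if the path exists on the local filesystem.
static bool FileExists(const std::string &path)
{
    struct stat st;
    return stat(path.c_str(), &st) == 0;
}

// Hypothetical stand-in for ProgramInfo::GetPlaybackURL().
//   recHost     - hostname the file was recorded on
//   localHost   - hostname we are playing back on
//   recPrefix   - RecordFilePrefix of the recording host
//   localPrefix - RecordFilePrefix of this host ("" if never set up)
//   basename    - the recording's file name
std::string GetPlaybackURL(const std::string &recHost,
                           const std::string &localHost,
                           const std::string &recPrefix,
                           const std::string &localPrefix,
                           const std::string &basename)
{
    // 1. Playing back on the recording host itself: use the local path.
    if (recHost == localHost)
        return recPrefix + "/" + basename;

    // 2. Not the recording host, but the file is visible at the same
    //    location (e.g. the same NFS mount point): still play it locally.
    if (FileExists(recPrefix + "/" + basename))
        return recPrefix + "/" + basename;

    // 3. Fall back to this host's own RecordFilePrefix, if one is set.
    if (!localPrefix.empty() && FileExists(localPrefix + "/" + basename))
        return localPrefix + "/" + basename;

    // 4. Nothing local: stream over the Myth protocol instead.
    return "myth://" + recHost + "/" + basename;
}
```

Note that a frontend-only host with no RecordFilePrefix still finds the file at step 2 if the recording directory is NFS-mounted at the same path.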
GetPlaybackURL() does a better job of checking for the local file than
the code in RingBuffer does, and tv_play.cpp should probably be modified to use
GetPlaybackURL(). So, with current SVN, for playing back files in
mythfrontend, your statement is correct that you need a RecordFilePrefix
set, but for mythcommflag and mythtranscode you do not since they use
GetPlaybackURL() which searches for the file in a few places.
<snip my (cpinkham's) comments on dedicated NFS scenario>

> True but this is a different scenario. If someone has a backend
> writing files to a remote NFS dir. they sure haven't been taking
> my advice so it doesn't matter what I say ;-).
I think there may be more people out there doing this than you think.
I definitely value your advice, :) but since I know there are people
using dedicated NFS servers, I wanted to speak up for them.

> M is a master and S is a slave. S is recording and being watched
> from a frontend at M. If S writes to a local disk and there is a
> network problem, the problem can be fixed and you can restart the
> playback at M. However, if S is writing to an NFS mount from M and
> there is a network problem, the file will be damaged and you can
> never play it back from any host. Therefore, on my master with
> three slaves, I always write to local disks.
I guess, since I can't remember ever having any NFS-related
recording problems, I see the benefits of having a large
dedicated NFS server as outweighing the risk of losing a recording
or two (which I can usually re-record by just saying "delete and allow
re-recording") if there are network issues. I have three M179 and two
air2pc cards, so I use relatively low-end hardware for backends. My
master is only a P3-700 and my slave is a P3-550. If I wanted to put
a local hard drive in them that was big enough to hold a decent amount
of recordings, it would cost more than the backend itself. I'm not saying
that $100 is going to break the bank, but putting a half terabyte or more
of storage in a dedicated NFS server and expanding that as needs arise
seemed to make sense (to me) when I set up the system initially.

> [I also don't like to write multiple files to the same spindle, and
> this will come up with multiple directories after 0.19. If the disk
> heads are positioned and a video file is being written, it can fill
> the cylinder, track to track seek then fill the next cylinder. Very
> efficient. If more than one file is being written to the same physical
> disk, the disk has to reposition from the end of one file to the end
> of the other. This usually takes 8-15ms while the disk isn't writing
> to either file. The difference in throughput between streaming data
> and writing small chunks to multiple locations can be as much as 100X.
> Therefore, in addition to network issues, I want each slave's file
> written to its own disk.]
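For scale, the seek-cost argument in the quoted text can be sanity-checked with a quick calculation. All of the numbers here are assumed for illustration (60 MB/s sequential rate, 10 ms per seek, one seek per chunk written); none of them come from the original post.

```cpp
// Effective throughput when every chunk written pays a full seek first,
// compared with pure sequential streaming at seqMBps.
double InterleavedMBps(double seqMBps, double seekSec, double chunkMB)
{
    double writeSec = chunkMB / seqMBps;   // time to write one chunk
    return chunkMB / (writeSec + seekSec); // seek + write per chunk
}

// With the assumed numbers, two files interleaved at 64 KB granularity
// drop from ~60 MB/s to roughly 5.7 MB/s (about 10x slower); at 4 KB
// granularity the penalty exceeds 150x, in the ballpark of the "100X"
// figure quoted above.
```

For example, InterleavedMBps(60.0, 0.010, 64.0/1024.0) works out to about 5.66 MB/s.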
Using this logic, you would never want to have more tuners in one machine
than you have hard drives so you wouldn't be seeking around the drive
to write the multiple streams. The thing that saves you from having to do
this is the fact that the kernel will optimize writes to the drive so it
doesn't have to seek around as much. In my mind, and going along with my
comment above about re-recording a failed recording, it's only video so I
accomplish this same thing by turning on asynchronous writing on NFS.
This allows the NFS server to do a better job of interleaving the writes
for multiple recordings and doesn't give the big performance hits like
you're talking about. The backends just stream the data to the NFS server
and they continue recording while the NFS server buffers the data and
writes it out to the hard disk at its leisure allowing Linux to optimize
writes to the drive, the same as if you had 2 tuners writing to a local
disk.

> In your example, you are suggesting that there is an NFS fileserver N
> that M and S have mounted. If so, the frontend on M will find the
> file in the RecordFilePrefix dir and will not request the file from
> S and will not send the data twice. I hope no one is doing this anyway.
> Master Backend Override is based on the assumption that any slaves
> using NFS mounts are mounting the disks local to the master. Now, if
> someone has a slave that is mounting the file directory over NFS to
> a directory that is not visible to the master, please post so we can
> all have a good laugh ;-).
Again, different use cases, I guess. I have dedicated backends and don't
run frontends on them, so it's more like this scenario (both M and
S mounting the directory off of the NFS server N):
* Myth protocol, playing a file recorded on the master:
  N -> M -> F
* Myth protocol, playing a file recorded on a slave with MBE override on:
  N -> M -> F (I think that's what it does, right? It doesn't need the slave?)
* Myth protocol, playing a file recorded on a slave with MBE override off:
  N -> S -> F
* NFS mounted on the frontend, reading the file directly:
  N -> F
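The paths above can be summarized in a tiny decision sketch. This is illustrative only (the function and flag names are made up), but it captures which host ends up serving the file under the shared-NFS layout.

```cpp
#include <string>

// Which path the recording data takes to a remote frontend F, given where
// the file was recorded and whether Master Backend Override is enabled.
// N = NFS server, M = master backend, S = slave backend.
std::string DataPath(bool recordedOnSlave, bool mbeOverride, bool frontendMountsNFS)
{
    if (frontendMountsNFS)
        return "N -> F";        // frontend reads the file directly
    if (!recordedOnSlave || mbeOverride)
        return "N -> M -> F";   // master streams it over the Myth protocol
    return "N -> S -> F";       // slave streams its own recording
}
```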
In the NFS case, where the NFS directory is mounted on the backends and frontends,
MBE override isn't really required if tv_play were using GetPlaybackURL(), because
the playback code could determine whether to play locally or stream.

> > allow reading the file directly (via NFS, CIFS, etc.) if it existed
> > rather than using the Myth protocol, because I have used a dedicated
> > NFS server since I started using Myth back around v0.7.
> Unless you were referring to something else, the "0.18" above must have
> been a typo. I remember this being shortly after the split around 7 or 8.
There are 2 places where this can be handled (ProgramInfo::GetPlaybackURL() and
RingBuffer::OpenFile()), 3 if you count MBE override.
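For contrast with the GetPlaybackURL() lookup above, the older RingBuffer-style check amounts to little more than the following sketch (again illustrative; this is not the actual RingBuffer::OpenFile() code).

```cpp
#include <fstream>
#include <string>

// A URL is playable locally only if it isn't a myth:// URL and we can
// actually open the file; everything else has to be streamed.
bool IsLocalPlayback(const std::string &url)
{
    if (url.rfind("myth://", 0) == 0)  // Myth-protocol URLs are remote
        return false;
    std::ifstream f(url.c_str());
    return f.good();
}
```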
I think I answered this above, hopefully. :) That, and the fact that the
code is inconsistent (tv_play.cpp doesn't use GetPlaybackURL() like the
flagger and transcoder do), may be contributing to the misunderstandings on
users' part.

> It's probably fine either way. I just get a little uncomfortable when
> people try to say that you have to NFS mount or assume you want to NFS
> mount for better performance when that just isn't true.
Yeah, I agree, NFS is 100% optional, and for some people it can cause a
performance hit if you're just sharing out the MBE directory to a frontend.
I think part of this confusion also arises from the fact that if you want to
share mythmusic, mythgallery, or mythvideo content, then NFS/CIFS/etc. is
required on any remote frontends. This is another reason I use a dedicated
NFS server: if I'm watching AVI files or other stuff, it is not impacting my
backends (except for maybe increased write latency because the fileserver
is a little busier, but with asynchronous writes that shouldn't be as much of
an issue).
Anyway, I put it on my TODO list to look at making tv_play.cpp use
GetPlaybackURL() so things will be more consistent after 0.19. As with a lot
of things, there's not one answer that's right for everyone, but Myth without
NFS is the "works everywhere" configuration.

mythtv-users mailing list