
[PATCH] qemu/xendisk: set maximum number of grants to be used
Legacy (non-pvops) gntdev drivers may require this to be done when the
number of grants intended to be used simultaneously exceeds a certain
driver specific default limit.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/hw/xen_disk.c
+++ b/hw/xen_disk.c
@@ -536,6 +536,10 @@ static void blk_alloc(struct XenDevice *
     if (xen_mode != XEN_EMULATE) {
         batch_maps = 1;
     }
+    if (xc_gnttab_set_max_grants(xendev->gnttabdev,
+            max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1) < 0)
+        xen_be_printf(xendev, 0, "xc_gnttab_set_max_grants failed: %s\n",
+                      strerror(errno));
 }

 static int blk_init(struct XenDevice *xendev)
Re: [PATCH] qemu/xendisk: set maximum number of grants to be used
>>> On 11.05.12 at 09:19, "Jan Beulich" <JBeulich@suse.com> wrote:
> Legacy (non-pvops) gntdev drivers may require this to be done when the
> number of grants intended to be used simultaneously exceeds a certain
> driver specific default limit.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/hw/xen_disk.c
> +++ b/hw/xen_disk.c
> @@ -536,6 +536,10 @@ static void blk_alloc(struct XenDevice *
>      if (xen_mode != XEN_EMULATE) {
>          batch_maps = 1;
>      }
> +    if (xc_gnttab_set_max_grants(xendev->gnttabdev,
> +            max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1) < 0)

In more extensive testing it appears that very rarely this value is still
too low:

xen be: qdisk-768: can't map 11 grant refs (Cannot allocate memory, 342 maps)

342 + 11 = 353 > 352 = 32 * 11

Could someone help out here? I first thought this might be due to
use_aio being non-zero, but ioreq_start() doesn't permit more than
max_requests struct ioreq-s to be around.

Additionally, shouldn't the driver be smarter and gracefully handle
grant mapping failures (as the per-domain map track table in the
hypervisor is a finite resource)?

Jan

> +        xen_be_printf(xendev, 0, "xc_gnttab_set_max_grants failed: %s\n",
> +                      strerror(errno));
>  }
>
>  static int blk_init(struct XenDevice *xendev)




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
Re: [PATCH] qemu/xendisk: set maximum number of grants to be used
On Fri, 11 May 2012, Jan Beulich wrote:
> >>> On 11.05.12 at 09:19, "Jan Beulich" <JBeulich@suse.com> wrote:
> > Legacy (non-pvops) gntdev drivers may require this to be done when the
> > number of grants intended to be used simultaneously exceeds a certain
> > driver specific default limit.
> >
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >
> > --- a/hw/xen_disk.c
> > +++ b/hw/xen_disk.c
> > @@ -536,6 +536,10 @@ static void blk_alloc(struct XenDevice *
> >      if (xen_mode != XEN_EMULATE) {
> >          batch_maps = 1;
> >      }
> > +    if (xc_gnttab_set_max_grants(xendev->gnttabdev,
> > +            max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1) < 0)
>
> In more extensive testing it appears that very rarely this value is still
> too low:
>
> xen be: qdisk-768: can't map 11 grant refs (Cannot allocate memory, 342 maps)
>
> 342 + 11 = 353 > 352 = 32 * 11
>
> Could someone help out here? I first thought this might be due to
> use_aio being non-zero, but ioreq_start() doesn't permit more than
> max_requests struct ioreq-s to be around.

Actually, 342 + 11 = 353 should still be OK, because it is equal to
32 * 11 + 1, where the additional 1 is for the ring, right?


> Additionally, shouldn't the driver be smarter and gracefully handle
> grant mapping failures (as the per-domain map track table in the
> hypervisor is a finite resource)?

yes, probably

Re: [PATCH] qemu/xendisk: set maximum number of grants to be used
>>> On 11.05.12 at 19:07, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Fri, 11 May 2012, Jan Beulich wrote:
>> >>> On 11.05.12 at 09:19, "Jan Beulich" <JBeulich@suse.com> wrote:
>> > Legacy (non-pvops) gntdev drivers may require this to be done when the
>> > number of grants intended to be used simultaneously exceeds a certain
>> > driver specific default limit.
>> >
>> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> >
>> > --- a/hw/xen_disk.c
>> > +++ b/hw/xen_disk.c
>> > @@ -536,6 +536,10 @@ static void blk_alloc(struct XenDevice *
>> >      if (xen_mode != XEN_EMULATE) {
>> >          batch_maps = 1;
>> >      }
>> > +    if (xc_gnttab_set_max_grants(xendev->gnttabdev,
>> > +            max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1) < 0)
>>
>> In more extensive testing it appears that very rarely this value is still
>> too low:
>>
>> xen be: qdisk-768: can't map 11 grant refs (Cannot allocate memory, 342 maps)
>>
>> 342 + 11 = 353 > 352 = 32 * 11
>>
>> Could someone help out here? I first thought this might be due to
>> use_aio being non-zero, but ioreq_start() doesn't permit more than
>> max_requests struct ioreq-s to be around.
>
> Actually, 342 + 11 = 353 should still be OK, because it is equal to
> 32 * 11 + 1, where the additional 1 is for the ring, right?

The +1 is for the ring, yes. And the calculation in the driver actually
appears to be fine. It's rather a fragmentation issue afaict - the
driver needs to allocate 11 contiguous slots, and a contiguous run of
that size may not be available. I'll send out a v2 of the patch soon,
taking fragmentation into account.

Jan

