Mailing List Archive

[PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
* This is a xen-devel only post, since we have not reached consensus on
what to add / remove in this new model. This series tries to be
conservative about adding new features compared to V1.

This series implements NAPI + kthread 1:1 model for Xen netback.

This model
- provides better scheduling fairness among vifs
- is a prerequisite for implementing multiqueue for the Xen network driver

The first two patches are groundwork for the third patch. The first
simplifies code in netback; the second reduces the memory footprint once
we switch to the 1:1 model.

The third patch has the real meat:
- make use of NAPI to mitigate interrupts
- kthreads are no longer bound to CPUs, so we can take advantage of the
backend scheduler and trust it to do the right thing (a rough,
illustrative sketch of the resulting per-vif structure follows the
changelog below)

Changes since V1:
- No page pool in this version. Instead, the page tracking facility is
removed.
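
For readers who want to see the shape of the model, here is a rough
sketch of the per-vif structure described above. It is illustrative
only and is not code from the series: struct demo_vif, its fields and
the placeholder demo_tx_action()/demo_rx_action() helpers are made up;
only the kernel APIs (netif_napi_add, napi_schedule, kthread_create and
friends) are real.

#include <linux/netdevice.h>
#include <linux/interrupt.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/wait.h>
#include <linux/atomic.h>
#include <linux/err.h>

/* One of these per virtual interface. */
struct demo_vif {
        struct net_device *dev;
        struct napi_struct napi;   /* guest-TX path, processed in NAPI poll */
        struct task_struct *task;  /* guest-RX path, one kthread per vif */
        wait_queue_head_t wq;
        atomic_t rx_work;
};

/* Placeholder: would grant-copy up to 'budget' guest TX requests into
 * skbs and hand them to the stack; returns the number consumed. */
static int demo_tx_action(struct demo_vif *vif, int budget)
{
        return 0;
}

/* Placeholder: would move queued skbs into the guest's RX ring. */
static void demo_rx_action(struct demo_vif *vif)
{
}

/* Guest posted TX requests: defer the work to NAPI, with IRQs enabled. */
static irqreturn_t demo_tx_interrupt(int irq, void *dev_id)
{
        struct demo_vif *vif = dev_id;

        napi_schedule(&vif->napi);
        return IRQ_HANDLED;
}

/* Guest made RX ring space available: kick this vif's own kthread. */
static irqreturn_t demo_rx_interrupt(int irq, void *dev_id)
{
        struct demo_vif *vif = dev_id;

        atomic_set(&vif->rx_work, 1);
        wake_up(&vif->wq);
        return IRQ_HANDLED;
}

static int demo_poll(struct napi_struct *napi, int budget)
{
        struct demo_vif *vif = container_of(napi, struct demo_vif, napi);
        int done = demo_tx_action(vif, budget);

        if (done < budget)
                napi_complete(napi);   /* ring drained: re-enable events */
        return done;
}

static int demo_kthread(void *data)
{
        struct demo_vif *vif = data;

        while (!kthread_should_stop()) {
                wait_event_interruptible(vif->wq,
                                         atomic_read(&vif->rx_work) ||
                                         kthread_should_stop());
                atomic_set(&vif->rx_work, 0);
                demo_rx_action(vif);
        }
        return 0;
}

static int demo_vif_start(struct demo_vif *vif)
{
        init_waitqueue_head(&vif->wq);
        atomic_set(&vif->rx_work, 0);

        /* A real driver would also bind its event-channel IRQs to
         * demo_tx_interrupt() and demo_rx_interrupt() here. */

        /* Four-argument form used by 3.x-era kernels; weight 64 is typical. */
        netif_napi_add(vif->dev, &vif->napi, demo_poll, 64);
        napi_enable(&vif->napi);

        /* Deliberately not bound to any CPU; the scheduler places it. */
        vif->task = kthread_create(demo_kthread, vif, "%s", vif->dev->name);
        if (IS_ERR(vif->task))
                return PTR_ERR(vif->task);
        wake_up_process(vif->task);
        return 0;
}

The real series obviously does a lot more (grant copies, rate limiting,
error handling), but the 1:1 pairing above (one NAPI instance and one
unbound kthread per vif) is the whole point of the model.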

Wei Liu (3):
xen-netback: remove page tracking facility
xen-netback: switch to per-cpu scratch space
xen-netback: switch to NAPI + kthread 1:1 model

drivers/net/xen-netback/common.h | 92 ++--
drivers/net/xen-netback/interface.c | 122 +++--
drivers/net/xen-netback/netback.c | 959 +++++++++++++++--------------------
3 files changed, 537 insertions(+), 636 deletions(-)

--
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On 2013-5-27 19:29, Wei Liu wrote:
> * This is a xen-devel only post, since we have not reached concesus on
> what to add / remove in this new model. This series tries to be
> conservative about adding in new feature compared to V1.
>
> This series implements NAPI + kthread 1:1 model for Xen netback.
>
> This model
> - provides better scheduling fairness among vifs
> - is prerequisite for implementing multiqueue for Xen network driver
>
> The first two patches are ground work for the third patch. First one
> simplifies code in netback, second one can reduce memory footprint if we
> switch to 1:1 model.
>
> The third patch has the real meat:
> - make use of NAPI to mitigate interrupt
> - kthreads are not bound to CPUs any more, so that we can take
> advantage of backend scheduler and trust it to do the right thing
>
> Change since V1:
> - No page pool in this version. Instead page tracking facility is
> removed.

What are your thoughts about the page pool in V1? Will you re-post it later on?

Thanks
Annie
>
> Wei Liu (3):
> xen-netback: remove page tracking facility
> xen-netback: switch to per-cpu scratch space
> xen-netback: switch to NAPI + kthread 1:1 model
>
> drivers/net/xen-netback/common.h | 92 ++--
> drivers/net/xen-netback/interface.c | 122 +++--
> drivers/net/xen-netback/netback.c | 959 +++++++++++++++--------------------
> 3 files changed, 537 insertions(+), 636 deletions(-)
>


Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On Tue, May 28, 2013 at 10:35:43PM +0800, annie li wrote:
>
> On 2013-5-27 19:29, Wei Liu wrote:
> >* This is a xen-devel only post, since we have not reached concesus on
> > what to add / remove in this new model. This series tries to be
> > conservative about adding in new feature compared to V1.
> >
> >This series implements NAPI + kthread 1:1 model for Xen netback.
> >
> >This model
> > - provides better scheduling fairness among vifs
> > - is prerequisite for implementing multiqueue for Xen network driver
> >
> >The first two patches are ground work for the third patch. First one
> >simplifies code in netback, second one can reduce memory footprint if we
> >switch to 1:1 model.
> >
> >The third patch has the real meat:
> > - make use of NAPI to mitigate interrupt
> > - kthreads are not bound to CPUs any more, so that we can take
> > advantage of backend scheduler and trust it to do the right thing
> >
> >Change since V1:
> > - No page pool in this version. Instead page tracking facility is
> > removed.
>
> What is your thought about page pool in V1? will you re-post it later on?
>

That would be necessary if we introduce mapping in the future. At the
moment, with the copying scheme, it is somewhat redundant.


Wei.

Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On 27/05/13 12:29, Wei Liu wrote:
> * This is a xen-devel only post, since we have not reached concesus on
> what to add / remove in this new model. This series tries to be
> conservative about adding in new feature compared to V1.
>
> This series implements NAPI + kthread 1:1 model for Xen netback.
>
> This model
> - provides better scheduling fairness among vifs
> - is prerequisite for implementing multiqueue for Xen network driver
>
> The first two patches are ground work for the third patch. First one
> simplifies code in netback, second one can reduce memory footprint if we
> switch to 1:1 model.
>
> The third patch has the real meat:
> - make use of NAPI to mitigate interrupt
> - kthreads are not bound to CPUs any more, so that we can take
> advantage of backend scheduler and trust it to do the right thing
>
> Change since V1:
> - No page pool in this version. Instead page tracking facility is
> removed.

Andrew Bennieston has done some performance measurements with (I think)
the V1 series, and they show a significant decrease in from-guest traffic
performance even with only two VIFs.

Andrew will be able to comment more on this.

Andrew, can you also make available your results for others to review?

David

Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On Tue, Jun 11, 2013 at 11:06:43AM +0100, David Vrabel wrote:
> On 27/05/13 12:29, Wei Liu wrote:
> > * This is a xen-devel only post, since we have not reached concesus on
> > what to add / remove in this new model. This series tries to be
> > conservative about adding in new feature compared to V1.
> >
> > This series implements NAPI + kthread 1:1 model for Xen netback.
> >
> > This model
> > - provides better scheduling fairness among vifs
> > - is prerequisite for implementing multiqueue for Xen network driver
> >
> > The first two patches are ground work for the third patch. First one
> > simplifies code in netback, second one can reduce memory footprint if we
> > switch to 1:1 model.
> >
> > The third patch has the real meat:
> > - make use of NAPI to mitigate interrupt
> > - kthreads are not bound to CPUs any more, so that we can take
> > advantage of backend scheduler and trust it to do the right thing
> >
> > Change since V1:
> > - No page pool in this version. Instead page tracking facility is
> > removed.
>
> Andrew Bennieston has done some performance measurements with (I think)
> the V1 series and it shows a significant decrease in performance of
> from-guest traffic even with only two VIFs.
>
> Andrew will be able to comment more on this.
>
> Andrew, can you also make available your results for others to review?
>

My third series also has simple performance figures attached.
Andrew, could you please have a look at those as well?

If you have time, could you try my third series? In that series the
only possible performance impact is the new model itself, which should
narrow the problem down.


Wei.

> David

Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On 11/06/13 11:15, Wei Liu wrote:
> On Tue, Jun 11, 2013 at 11:06:43AM +0100, David Vrabel wrote:
>> On 27/05/13 12:29, Wei Liu wrote:
>>> * This is a xen-devel only post, since we have not reached concesus on
>>> what to add / remove in this new model. This series tries to be
>>> conservative about adding in new feature compared to V1.
>>>
>>> This series implements NAPI + kthread 1:1 model for Xen netback.
>>>
>>> This model
>>> - provides better scheduling fairness among vifs
>>> - is prerequisite for implementing multiqueue for Xen network driver
>>>
>>> The first two patches are ground work for the third patch. First one
>>> simplifies code in netback, second one can reduce memory footprint if we
>>> switch to 1:1 model.
>>>
>>> The third patch has the real meat:
>>> - make use of NAPI to mitigate interrupt
>>> - kthreads are not bound to CPUs any more, so that we can take
>>> advantage of backend scheduler and trust it to do the right thing
>>>
>>> Change since V1:
>>> - No page pool in this version. Instead page tracking facility is
>>> removed.
>>
>> Andrew Bennieston has done some performance measurements with (I think)
>> the V1 series and it shows a significant decrease in performance of
>> from-guest traffic even with only two VIFs.
>>
>> Andrew will be able to comment more on this.
>>
>> Andrew, can you also make available your results for others to review?

Absolutely; there is now a page at
http://wiki.xenproject.org/wiki/Xen-netback_NAPI_%2B_kThread_V1_performance_testing
detailing the tests I performed and the results I saw, along with some
summary text from my analysis.

Note that I also performed these tests without manually distributing
IRQs across cores, and the performance was, as expected, rather poor. I
didn't include those plots on the Wiki page since they don't really
provide any new information.
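
(For reference, and purely as an illustration rather than the exact
procedure I used: distributing the interrupts is just a matter of
writing a CPU mask to /proc/irq/<irq>/smp_affinity for each vif
interrupt listed in /proc/interrupts. The IRQ numbers and masks in the
sketch below are made-up examples, and it needs root to run.)

#include <stdio.h>
#include <stdlib.h>

/* Pin one IRQ to the CPUs in 'cpu_mask' (a hex bitmask, e.g. 0x2 = CPU1). */
static int set_irq_affinity(unsigned int irq, unsigned int cpu_mask)
{
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/irq/%u/smp_affinity", irq);
        f = fopen(path, "w");
        if (!f) {
                perror(path);
                return -1;
        }
        fprintf(f, "%x\n", cpu_mask);
        return fclose(f) ? -1 : 0;
}

int main(void)
{
        /* Hypothetical example: put two vif interrupts on CPU1 and CPU2. */
        if (set_irq_affinity(72, 0x2) || set_irq_affinity(73, 0x4))
                return EXIT_FAILURE;
        return EXIT_SUCCESS;
}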


> In my third series there is also simple performance figures attached.
> Andrew could you please have a look at that as well?

I had a look at those; I think they agree with my tests where there is
overlap. The tests I performed were repeated a number of times, covered
a broader range of scenarios, and have associated error bars, which
provide a measure of the variability between tests (as well as an
indication of the statistical significance of differences between tests).

The error bars can also be interpreted in terms of fairness: smaller
error bars mean that all TCP streams across all VIFs attain similar
throughput to each other, while larger error bars mean that there is
quite a lot of variation from one stream to another, e.g. because a
stream or VIF is being starved of resources.
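
To make that reading a little more concrete (purely as an illustration,
and assuming the error bars represent the sample standard deviation of
the per-stream throughputs x_1, ..., x_N):

\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad
s = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N} (x_i - \bar{x})^2}, \qquad
J = \frac{\left(\sum_{i=1}^{N} x_i\right)^2}{N \sum_{i=1}^{N} x_i^2}

A small relative spread s / \bar{x} means all streams (and hence all
VIFs) are getting roughly equal service; Jain's fairness index J is a
related single-number summary that equals 1 when every stream sees the
same throughput and falls towards 1/N as one stream comes to dominate.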

> If you have time, could you try my third series? In the third series,
> the only possible performance impact is the new model, which should
> narrow the problem down.
> Wei.

I am going to test the V3 patches as soon as I get the time; hopefully
later this week, or early next week. I'll post the results once I have them.

Andrew.


Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On Wed, Jun 12, 2013 at 02:44:17PM +0100, Andrew Bennieston wrote:
> >>Andrew will be able to comment more on this.
> >>
> >>Andrew, can you also make available your results for others to review?
>
> Absolutely; there is now a page at http://wiki.xenproject.org/wiki/Xen-netback_NAPI_%2B_kThread_V1_performance_testing
> detailing the tests I performed and the results I saw, along with
> some summary text from my analysis.
>

Thanks Andrew! Nice plot and nice analysis.

Just one nit: the CPU curves are not very distinguishable. Would you
mind not using dotted lines for your next graph? ;-)


Wei.

Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On 13/06/13 10:01, Wei Liu wrote:
> On Wed, Jun 12, 2013 at 02:44:17PM +0100, Andrew Bennieston wrote:
>>>> Andrew will be able to comment more on this.
>>>>
>>>> Andrew, can you also make available your results for others to review?
>>
>> Absolutely; there is now a page at http://wiki.xenproject.org/wiki/Xen-netback_NAPI_%2B_kThread_V1_performance_testing
>> detailing the tests I performed and the results I saw, along with
>> some summary text from my analysis.
>>
>
> Thanks Andrew! Nice plot and nice analysis.
>
> Just one nit, the CPU curves are not very distinguishable. Would you
> mind not using dotted lines for your next graph? ;-)
>
>
> Wei.
>
The CPU curves are pretty much identical. Using solid lines obscures the
interesting data somewhat... I'll see what I can do, though :)

Andrew

Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On Thu, Jun 13, 2013 at 12:18:01PM +0100, Andrew Bennieston wrote:
> On 13/06/13 10:01, Wei Liu wrote:
> >On Wed, Jun 12, 2013 at 02:44:17PM +0100, Andrew Bennieston wrote:
> >>>>Andrew will be able to comment more on this.
> >>>>
> >>>>Andrew, can you also make available your results for others to review?
> >>
> >>Absolutely; there is now a page at http://wiki.xenproject.org/wiki/Xen-netback_NAPI_%2B_kThread_V1_performance_testing
> >>detailing the tests I performed and the results I saw, along with
> >>some summary text from my analysis.
> >>
> >
> >Thanks Andrew! Nice plot and nice analysis.
> >
> >Just one nit, the CPU curves are not very distinguishable. Would you
> >mind not using dotted lines for your next graph? ;-)
> >
> >
> >Wei.
> >
> The CPU curves are pretty much identical. Using solid lines obscures
> the interesting data somewhat... I'll see what I can do, though :)
>

Oh I see. That's why I can only see one CPU curve. :-)


Wei.

> Andrew

Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On 11/06/13 11:15, Wei Liu wrote:
> On Tue, Jun 11, 2013 at 11:06:43AM +0100, David Vrabel wrote:
>> On 27/05/13 12:29, Wei Liu wrote:
>>> * This is a xen-devel only post, since we have not reached concesus on
>>> what to add / remove in this new model. This series tries to be
>>> conservative about adding in new feature compared to V1.
>>>
>>> This series implements NAPI + kthread 1:1 model for Xen netback.
>>>
>>> This model
>>> - provides better scheduling fairness among vifs
>>> - is prerequisite for implementing multiqueue for Xen network driver
>>>
>>> The first two patches are ground work for the third patch. First one
>>> simplifies code in netback, second one can reduce memory footprint if we
>>> switch to 1:1 model.
>>>
>>> The third patch has the real meat:
>>> - make use of NAPI to mitigate interrupt
>>> - kthreads are not bound to CPUs any more, so that we can take
>>> advantage of backend scheduler and trust it to do the right thing
>>>
>>> Change since V1:
>>> - No page pool in this version. Instead page tracking facility is
>>> removed.
>>
>> Andrew Bennieston has done some performance measurements with (I think)
>> the V1 series and it shows a significant decrease in performance of
>> from-guest traffic even with only two VIFs.
>>
>> Andrew will be able to comment more on this.
>>
>> Andrew, can you also make available your results for others to review?
>>
>
> In my third series there is also simple performance figures attached.
> Andrew could you please have a look at that as well?
>
> If you have time, could you try my third series? In the third series,
> the only possible performance impact is the new model, which should
> narrow the problem down.

Wei, I finally have the results from testing your V3 patches. They are
available at:

http://wiki.xenproject.org/wiki/Xen-netback_NAPI_%2B_kThread_V3_performance_testing

This time the base for the tests was linux-next rather than v3.6.11
(mostly to reduce the effort of backporting patches), so the results
can't be compared directly with the V1 numbers, but I still ran tests
without, then with, your patches, so you should be able to see their
direct effect.

The summary is that there is (as expected) no impact on the dom0 -> VM
measurements, and the VM -> dom0 measurements are identical with and
without the patches up to 4 concurrently transmitting VMs or so, after
which the original version outperforms the patched version. The
difference becomes less pronounced as the number of TCP streams is
increased, though.

My conclusion from these results would be that your V3 patches have
fairly minimal performance impact, although they should improve
_fairness_ (due to the kthread per VIF) on the transmit (into VM)
pathway, and simplify the handling of the receive (out of VM) scenario too.

In other news, it looks like the throughput in general has improved
between 3.6 and -next :)

Cheers,
Andrew

Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On Wed, Jul 03, 2013 at 01:45:13PM +0100, Andrew Bennieston wrote:
[...]
> >If you have time, could you try my third series? In the third series,
> >the only possible performance impact is the new model, which should
> >narrow the problem down.
>
> Wei, I finally have the results from testing your V3 patches. They
> are available at:
>
> http://wiki.xenproject.org/wiki/Xen-netback_NAPI_%2B_kThread_V3_performance_testing
>

Thanks, Andrew.

> This time, the base for the tests was linux-next, rather than
> v3.6.11 (mostly to reduce the effort in backporting patches) so the
> results can't be directly compared to the V1, but I still ran tests
> without, then with, your patches, so you should be able to see the
> direct effect of those patches.
>
> The summary is that there is (as expected) no impact on the dom0 ->
> VM measurements, and the VM -> dom0 measurements are identical with
> and without the patches up to 4 concurrently transmitting VMs or so,
> after which the original version outperforms the patched version.
> The difference becomes less pronounced as the number of TCP streams
> is increased, though.
>
> My conclusion from these results would be that your V3 patches have
> fairly minimal performance impact, although they should improve
> _fairness_ (due to the kthread per VIF) on the transmit (into VM)
> pathway, and simplify the handling of the receive (out of VM)
> scenario too.
>

I'm happy to know that at least my patches don't have a significant
negative impact. :-)

> In other news, it looks like the throughput in general has improved
> between 3.6 and -next :)
>

Agreed.


Wei.

> Cheers,
> Andrew
