Mailing List Archive

Re: Test your connectivity for World IPv6 Day [ In reply to ]
Le 6 juin 2011 à 21:41, Daniel Roesen a écrit :

> On Mon, Jun 06, 2011 at 05:58:43PM +0200, Rémi Després wrote:
>> Unless there are good reasons to know that a longer PMTU applies to all
>> their connections, all servers SHOULD send IPv6 packets to off-link
>> destinations with 1280 octets as default PMTU.
>
> So you advocate making 1280 effectively the maximum MTU, not the minimum?

The "default" MTU, not the "maximum".

If some form of PMTUD detects that a larger PMTU is usable on a specific path, that larger value can then be used.
Also, on-link connections can use the link MTU.
Regards,
RD

>
> I'm with Fred here. Fix broken paths, not work around the problems.
>
> Best regards,
> Daniel
>
> --
> CLUE-RIPE -- Jabber: dr@cluenet.de -- dr@IRCnet -- PGP: 0xA85C8AA0
Re: Test your connectivity for World IPv6 Day [ In reply to ]
Le 6 juin 2011 à 23:34, Tassos Chatzithomaoglou a écrit :

> I also support the idea of using everywhere a 1500 MTU and not only on W6D.

The problem is that the value that MUST work everywhere is 1280, not 1500.

RD
Re: Test your connectivity for World IPv6 Day [ In reply to ]
Dear All,
A possible option would be to develop a new opportunistic PMTUD
standard:
- start from 1280 and increase it for as long as the path allows.

Drawbacks:
- you are less likely to be able to use the available bandwidth;
- you are wasting ~15% of the available frame size.
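For what it's worth, the "start at 1280 and grow" idea can be sketched as a simple upward search. This is only an illustration, not a real implementation: `path_delivers` is a hypothetical stand-in for sending a probe packet of a given size and observing whether it is acknowledged.

```python
# Sketch of opportunistic upward probing (in the spirit of RFC 4821):
# start from the IPv6 minimum of 1280 and binary-search upward.
IPV6_MIN_MTU = 1280

def probe_pmtu(path_delivers, upper=1500):
    """Largest deliverable packet size in [1280, upper], or None if even 1280 fails."""
    if not path_delivers(IPV6_MIN_MTU):
        return None  # even the guaranteed minimum fails: not an MTU problem
    lo, hi = IPV6_MIN_MTU, upper
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if path_delivers(mid):
            lo = mid       # probe got through: the PMTU is at least mid
        else:
            hi = mid - 1   # probe lost: assume the PMTU is smaller
    return lo

# Example: a path constrained to 1480 by an IPv6-in-IPv4 tunnel.
print(probe_pmtu(lambda size: size <= 1480))  # -> 1480
```

The drawbacks quoted above show up here as well: every probe round costs time and potentially lost packets before the full path capacity is found.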


Janos Mohacsi
Head of HBONE+ project
Network Engineer, Deputy Director of Network Planning and Projects
NIIF/HUNGARNET, HUNGARY
Key 70EF9882: DEC2 C685 1ED4 C95A 145F 4300 6F64 7B00 70EF 9882

On Tue, 7 Jun 2011, Rémi Després wrote:

>
> Le 6 juin 2011 à 23:34, Tassos Chatzithomaoglou a écrit :
>
>> I also support the idea of using everywhere a 1500 MTU and not only on W6D.
>
> The problem is that the value that MUST work everywhere is 1280, not 1500.
>
> RD
>
>
>
Re: Test your connectivity for World IPv6 Day [ In reply to ]
Le 11-06-07 07:03, Mohacsi Janos a écrit :
> Dear All,
> A possible option would be to develop a new opportunistic PMTUD standard:
> - start from 1280 and increase it until it is possible....

see RFC4821.

Marc.

>
> Drawback:
> - you are less likely to be able to use the available bandwidth.
> - you are wasting ~15% of the available frame size.
>
>
> Janos Mohacsi
> Head of HBONE+ project
> Network Engineer, Deputy Director of Network Planning and Projects
> NIIF/HUNGARNET, HUNGARY
> Key 70EF9882: DEC2 C685 1ED4 C95A 145F 4300 6F64 7B00 70EF 9882
>
> On Tue, 7 Jun 2011, Rémi Després wrote:
>
>>
>> Le 6 juin 2011 à 23:34, Tassos Chatzithomaoglou a écrit :
>>
>>> I also support the idea of using everywhere a 1500 MTU and not only
>>> on W6D.
>>
>> The problem is that the value that MUST work everywhere is 1280, not
>> 1500.
>>
>> RD
>>
>>
>>


--
=========
IETF81 Quebec city: http://ietf81.ca
IPv6 book: Migrating to IPv6, Wiley. http://www.ipv6book.ca
Stun/Turn server for VoIP NAT-FW traversal: http://numb.viagenie.ca
DTN Implementation: http://postellation.viagenie.ca
NAT64-DNS64 Opensource: http://ecdysis.viagenie.ca
Space Assigned Number Authority: http://sanaregistry.org
Re: Test your connectivity for World IPv6 Day [ In reply to ]
Le 7 juin 2011 à 13:26, Marc Blanchet a écrit :

> Le 11-06-07 07:03, Mohacsi Janos a écrit :
>> Dear All,
>> A possible option would be to develop a new opportunistic PMTUD standard:
>> - start from 1280 and increase it until it is possible....
>
> see RFC4821.

Agreed.

However, RFC 4821 isn't completely clear as to which PMTU to try first.
As detailed in my last answer to Fred, 1280 seems the best choice, at least in IPv6.
Can we agree on this?

Regards,
RD




>
> Marc.
>
>>
>> Drawback:
>> - you are less likely to be able to use the available bandwidth.
>> - you are wasting ~15% of the available frame size.
>>
>>
>> Janos Mohacsi
>> Head of HBONE+ project
>> Network Engineer, Deputy Director of Network Planning and Projects
>> NIIF/HUNGARNET, HUNGARY
>> Key 70EF9882: DEC2 C685 1ED4 C95A 145F 4300 6F64 7B00 70EF 9882
>>
>> On Tue, 7 Jun 2011, Rémi Després wrote:
>>
>>>
>>> Le 6 juin 2011 à 23:34, Tassos Chatzithomaoglou a écrit :
>>>
>>>> I also support the idea of using everywhere a 1500 MTU and not only
>>>> on W6D.
>>>
>>> The problem is that the value that MUST work everywhere is 1280, not
>>> 1500.
>>>
>>> RD
>>>
>>>
>>>
>
>
> --
> =========
> IETF81 Quebec city: http://ietf81.ca
> IPv6 book: Migrating to IPv6, Wiley. http://www.ipv6book.ca
> Stun/Turn server for VoIP NAT-FW traversal: http://numb.viagenie.ca
> DTN Implementation: http://postellation.viagenie.ca
> NAT64-DNS64 Opensource: http://ecdysis.viagenie.ca
> Space Assigned Number Authority: http://sanaregistry.org
Re: Test your connectivity for World IPv6 Day [ In reply to ]
Hi,

On Tue, Jun 07, 2011 at 01:55:35PM +0200, Rémi Després wrote:
> As detailed in my last answer to Fred, 1280 seems the best choice, at least in IPv6.
> Can we agree on this?

I'm not agreeing. Crippling the protocol by restricting ourselves to the
least common denominator all the time, except in those cases where
full packet sizes are not working (there might even be 9000-clear paths),
is not the way *forward*.

We need to assume a working internet and fix the remaining breakage, instead
of assuming a broken everything and shying away from doing anything that
might not work for 0.001% of the cases - number made up ad-hoc, but I'm
sitting behind a less-than-1500 DSL line, and I hardly ever see PMTU
problems so far. There were bugs in some OSes and some virtualization
solutions, but they got *fixed* - and how did they get fixed? By using
larger MTUs, noticing that it breaks, and resolving the root cause!

Gert Doering
-- NetMaster
--
did you enable IPv6 on something today...?

SpaceNet AG Vorstand: Sebastian v. Bomhard
Joseph-Dollinger-Bogen 14 Aufsichtsratsvors.: A. Grundner-Culemann
D-80807 Muenchen HRB: 136055 (AG Muenchen)
Tel: +49 (89) 32356-444 USt-IdNr.: DE813185279
Re: Test your connectivity for World IPv6 Day [ In reply to ]
Le 11-06-07 07:55, Rémi Després a écrit :
>
> Le 7 juin 2011 à 13:26, Marc Blanchet a écrit :
>
>> Le 11-06-07 07:03, Mohacsi Janos a écrit :
>>> Dear All,
>>> A possible option would be to develop a new opportunistic PMTUD standard:
>>> - start from 1280 and increase it until it is possible....
>>
>> see RFC4821.
>
> Agreed.
>
> However, RFC 4821 isn't completely clear as to which PMTU to try first.

I've heard that it is already implemented on Linux. I don't know the
details yet.

> As detailed in my last answer to Fred, 1280 seems the best choice, at least in IPv6.
> Can we agree on this?

nooo! The typical IPv4 MTU is 1500. You are suggesting downgrading IPv6
performance (compared to IPv4) by using an MTU of 1280? That does not make
sense to me. PMTUD issues are the exception, not the rule. And the level of
exceptions is higher now than it will be later, since we are really
deploying IPv6 now, removing old stuff, fixing as we go.

Think also about backbone nodes/high-end servers that are able to do a 9K
MTU. Change to a 1280 MTU? No.

Marc.

--
=========
IETF81 Quebec city: http://ietf81.ca
IPv6 book: Migrating to IPv6, Wiley. http://www.ipv6book.ca
Stun/Turn server for VoIP NAT-FW traversal: http://numb.viagenie.ca
DTN Implementation: http://postellation.viagenie.ca
NAT64-DNS64 Opensource: http://ecdysis.viagenie.ca
Space Assigned Number Authority: http://sanaregistry.org
Re: Test your connectivity for World IPv6 Day [ In reply to ]
On Tue, 7 Jun 2011 12:30:08 +0200
Rémi Després <remi.despres@free.fr> wrote:

>
> Le 6 juin 2011 à 21:41, Daniel Roesen a écrit :
>
> > On Mon, Jun 06, 2011 at 05:58:43PM +0200, Rémi Després wrote:
> >> Unless there are good reasons to know that a longer PMTU applies to all
> >> their connections, all servers SHOULD send IPv6 packets to off-link
> >> destinations with 1280 octets as default PMTU.
> >
> > So you advocate making 1280 effectively the maximum MTU, not the minimum?
>
> The "default" MTU, not the "maximum".
>
> If some PMTUD permits to detect that a larger PMTU is possible on a specific path, it can then be used.

Conventional PMTUD only works downwards, so if you start at a
default MTU 1280, you'll never discover a larger PMTU than 1280. MSS
also works to constrain that, as it tells the other end the maximum
segment size to send.
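To illustrate the downward-only behaviour (a toy sketch, not any real stack's code): the sender's estimate starts at some initial value and only ever shrinks when a Packet Too Big comes back from a constricting link; nothing ever raises it.

```python
# Sketch of conventional (RFC 1981-style) PMTUD: the estimate can only
# shrink, so starting at 1280 can never discover a larger path MTU.
def conventional_pmtud(initial_mtu, link_mtus):
    """Final PMTU estimate after traversing links with the given MTUs."""
    estimate = initial_mtu
    for link_mtu in link_mtus:
        if estimate > link_mtu:   # packet too big for this link...
            estimate = link_mtu   # ...PTB lowers the estimate; it never rises
    return estimate

# Starting at 1280 on a path that could carry 1500: stuck at 1280.
print(conventional_pmtud(1280, [1500, 1500, 9000]))  # -> 1280
# Starting at the interface MTU finds the real constriction.
print(conventional_pmtud(1500, [1500, 1480, 1500]))  # -> 1480
```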

> Also, on-link connections can use the link MTU.
> Regards,
> RD
>
> >
> > I'm with Fred here. Fix broken paths, not work around the problems.
> >
> > Best regards,
> > Daniel
> >
> > --
> > CLUE-RIPE -- Jabber: dr@cluenet.de -- dr@IRCnet -- PGP: 0xA85C8AA0
>
Re: Test your connectivity for World IPv6 Day [ In reply to ]
On Tue, 07 Jun 2011 08:07:13 -0400
Marc Blanchet <marc.blanchet@viagenie.ca> wrote:

> Le 11-06-07 07:55, Rémi Després a écrit :
> >
> > Le 7 juin 2011 à 13:26, Marc Blanchet a écrit :
> >
> >> Le 11-06-07 07:03, Mohacsi Janos a écrit :
> >>> Dear All,
> >>> A possible option would be to develop a new opportunistic PMTUD standard:
> >>> - start from 1280 and increase it until it is possible....
> >>
> >> see RFC4821.
> >
> > Agreed.
> >
> > However, RFC 4821 isn't completely clear as to which PMTU to try first.
>
> I've heard that it is already implemented on Linux. I don't know the
> details yet.
>

tcp_mtu_probing - INTEGER
Controls TCP Packetization-Layer Path MTU Discovery. Takes three
values:
0 - Disabled
1 - Disabled by default, enabled when an ICMP black hole detected
2 - Always enabled, use initial MSS of tcp_base_mss.

under /proc/sys/net/ipv4

> > As detailed in my last answer to Fred, 1280 seems the best choice, at least in IPv6.
> > Can we agree on this?
>
> nooo! The typical IPv4 MTU is 1500. You are suggesting downgrading IPv6
> performance (compared to IPv4) by using an MTU of 1280? That does not make
> sense to me. PMTUD issues are the exception, not the rule. And the level of
> exceptions is higher now than it will be later, since we are really
> deploying IPv6 now, removing old stuff, fixing as we go.
>
> Think also about backbone nodes/high-end servers that are able to do a 9K
> MTU. Change to a 1280 MTU? No.
>
> Marc.
>
> --
> =========
> IETF81 Quebec city: http://ietf81.ca
> IPv6 book: Migrating to IPv6, Wiley. http://www.ipv6book.ca
> Stun/Turn server for VoIP NAT-FW traversal: http://numb.viagenie.ca
> DTN Implementation: http://postellation.viagenie.ca
> NAT64-DNS64 Opensource: http://ecdysis.viagenie.ca
> Space Assigned Number Authority: http://sanaregistry.org
Re: Test your connectivity for World IPv6 Day [ In reply to ]
Le 7 juin 2011 à 14:07, Marc Blanchet a écrit :

> Le 11-06-07 07:55, Rémi Després a écrit :
>>
>> Le 7 juin 2011 à 13:26, Marc Blanchet a écrit :
>>
>>> Le 11-06-07 07:03, Mohacsi Janos a écrit :
>>>> Dear All,
>>>> A possible option would be to develop a new opportunistic PMTUD standard:
>>>> - start from 1280 and increase it until it is possible....
>>>
>>> see RFC4821.
>>
>> Agreed.
>>
>> However, RFC 4821 isn't completely clear as to which PMTU to try first.
>
> I've heard that it is already implemented on Linux. I don't know the details yet.
>
>> As detailed in my last answer to Fred, 1280 seems the best choice, at least in IPv6.
>> Can we agree on this?
>
> nooo!

Too bad for IPv6-connectivity reliability!
Where ICMPv6 messages don't come back (which is wrong, but happens), some connections may be established and later be unable to send full-size packets.

> IPv4 typical MTU is 1500.

Yes.
A direct consequence is that IPv6 paths that include IPv6-in-IPv4 tunnels (e.g., with tunnel brokers or 6rd) may have MTUs < 1500.
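The arithmetic behind that observation is straightforward; the overhead figures below are the commonly cited ones for each encapsulation (and ignore any further per-mechanism conservatism, e.g. Teredo typically uses 1280 anyway).

```python
# Illustrative arithmetic for tunnelled IPv6 paths: the outer IPv4
# headers are subtracted from the usual 1500-byte Ethernet MTU.
ETHERNET_MTU = 1500
OVERHEAD = {
    "6in4 / 6rd (IP proto 41)": 20,  # outer IPv4 header
    "GRE over IPv4": 24,             # IPv4 header + 4-byte GRE header
    "Teredo (IPv4 + UDP)": 28,       # IPv4 header + UDP header
}

for mechanism, overhead in OVERHEAD.items():
    print(f"{mechanism}: inner IPv6 MTU {ETHERNET_MTU - overhead}")
```

Note that all of these stay above the 1280-byte IPv6 minimum, which is why 1280 "just works" across such tunnels.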

> you are suggesting downgrading IPv6 performance (compared to IPv4) by doing MTU 1280? does not make sense to me.

Many users don't care about a minor optimization, but do care about safe connectivity.


> PMTUd issues are the exception, not the problem. But the level of exceptions are higher now than it shall be later, since we are really deploying IPv6 now, removing old stuff, fixing as we go.


> Think also about backbone nodes/high-end servers that are able to do 9K MTU. Change to 1280 MTU. no.

I didn't forget them, but please note that I am not proposing to abandon PMTU discovery, the mechanism that permits discovery of jumbo MTUs.


Let me quote again excerpts of RFC 4821:
" Since protocols that do not implement PLPMTUD are still subject to
problems due to ICMP black holes, it may be desirable to limit
these protocols to "safe" MTUs likely to work on any path (e.g., 1280
bytes)."
and:
" As an optimization, it may be appropriate to probe at certain common
or expected MTU sizes, for example, 1500 bytes for standard Ethernet,
or 1500 bytes minus header sizes for tunnel protocols."
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

(1500 - 40) may be safer than 1500, but 1280 is THE value that is guaranteed to be safe.

RD


>
> Marc.
>
> --
> =========
> IETF81 Quebec city: http://ietf81.ca
> IPv6 book: Migrating to IPv6, Wiley. http://www.ipv6book.ca
> Stun/Turn server for VoIP NAT-FW traversal: http://numb.viagenie.ca
> DTN Implementation: http://postellation.viagenie.ca
> NAT64-DNS64 Opensource: http://ecdysis.viagenie.ca
> Space Assigned Number Authority: http://sanaregistry.org
Re: Test your connectivity for World IPv6 Day [ In reply to ]
* Rémi Després

> Le 7 juin 2011 à 00:20, Tore Anderson a écrit :
>
>> It's quite hard to discover the blackholes (and by extension fix
>> them) if you're defaulting to 1280.
>
> Which blackholes? - With 1280, there aren't any related to MTUs.
> Right?

Wrong. The blackholes are still there. Masking the symptoms does not fix
the underlying problem.

>> In any case, the few users that have HE/SixXS/etc. tunnels can take
>> care of themselves. If it breaks, they get to keep both parts.
>
> Too bad for IPv6!
>
>> For real deployments, on the other hand ... well, I'm hoping no
>> serious ISP or content provider will willingly put their end users
>> or web sites behind MTU-impaired links or tunnels. Recipe for
>> disaster if you ask me.
>
> What do you mean, precisely, by "MTU-impaired links or tunnels"?

If an ISP, a tunnel broker, or a conscious tunnel-using end user is
using a link/tunnel that has an MTU lower than 1500 but where PMTUD does
not work reliably, that is their problem, not mine. I refuse to work
around their defective network by crippling the MTU for all my visitors.

What MTU do you recommend for IPv4 servers, by the way? 576 or 68?

--
Tore Anderson
Redpill Linpro AS - http://www.redpill-linpro.com/
Tel: +47 21 54 41 27
Re: Test your connectivity for World IPv6 Day [ In reply to ]
Le 7 juin 2011 à 15:56, Tore Anderson a écrit :
>> ...
>>> For real deployments, on the other hand ... well, I'm hoping no
>>> serious ISP or content provider will willingly put their end users
>>> or web sites behind MTU-impaired links or tunnels. Recipe for
>>> disaster if you ask me.
>>
>> What do you mean, precisely, by "MTU-impaired links or tunnels"?
>
> If an ISP, a tunnel broker, or a concious tunnel-using end user are
> using a link/tunnel that has a MTU lower than 1500 but where PMTUD does
> not work reliably, that is their problem, not mine.

A tunnel supporting less than 1500 must indeed return ICMP PTB messages, like any tunnel.

But if the source host has a firewall that filters inbound ICMPv6 messages, this becomes that host's problem.
It also becomes a problem for the hosts it communicates with, although those hosts bear no responsibility for it.

This host avoids the problem if it works with an "effective MTU for sending" of 1280 for off-link destinations, except for paths where PMTUD has detected better values.

> I refuse to work
> around their defective network by crippling the MTU for all my visitors.

In my understanding, it isn't a problem of a defective ISP network.
It is a problem of the uncertain effectiveness, so far, of PMTUD (worse with UDP than with TCP, and aggravated where some firewalls unduly filter ICMPv6 messages).

>
> What MTU do you recommend for IPv4 servers, by the way? 576 or 68?

As you of course know, despite this ironic question, the problem comes up in IPv6 because routers can no longer fragment packets.


Regards,
RD


>
> --
> Tore Anderson
> Redpill Linpro AS - http://www.redpill-linpro.com/
> Tel: +47 21 54 41 27
Re: Test your connectivity for World IPv6 Day [ In reply to ]
Le 7 juin 2011 à 14:29, Mark Smith a écrit :

> On Tue, 7 Jun 2011 12:30:08 +0200
> Rémi Després <remi.despres@free.fr> wrote:
>
>>
>> Le 6 juin 2011 à 21:41, Daniel Roesen a écrit :
>>
>>> On Mon, Jun 06, 2011 at 05:58:43PM +0200, Rémi Després wrote:
>>>> Unless there are good reasons to know that a longer PMTU applies to all
>>>> their connections, all servers SHOULD send IPv6 packets to off-link
>>>> destinations with 1280 octets as default PMTU.

I agree that this sentence I wrote is too restrictive.
Longer PMTUs can be discovered on a per-connection basis.
Yet 1280 remains, AFAIK, a good value to start with.


>>>
>>> So you advocate making 1280 effectively the maximum MTU, not the minimum?
>>
>> The "default" MTU, not the "maximum".
>>
>> If some PMTUD permits to detect that a larger PMTU is possible on a specific path, it can then be used.
>
> Conventional PMTUD only works downwards, so if you start at a
> default MTU 1280, you'll never discover a larger PMTU than 1280. MSS
> also works to constrain that, as it tells the other end the maximum
> segment size to send.

This being so, which size do you recommend starting with in IPv6?
- 1500?
- 9000 to get a chance to reach jumbo-frame sizes?

RD




>
>> Also, on-link connections can use the link MTU.
>> Regards,
>> RD
>>
>>>
>>> I'm with Fred here. Fix broken paths, not work around the problems.
>>>
>>> Best regards,
>>> Daniel
>>>
>>> --
>>> CLUE-RIPE -- Jabber: dr@cluenet.de -- dr@IRCnet -- PGP: 0xA85C8AA0
>>
Re: Test your connectivity for World IPv6 Day [ In reply to ]
> > Conventional PMTUD only works downwards, so if you start at a
> > default MTU 1280, you'll never discover a larger PMTU than 1280. MSS
> > also works to constrain that, as it tells the other end the maximum
> > segment size to send.
>
> This being so, which size do you recommend to start with in IPv6?
> - 1500?
> - 9000 to get a chance to reach jumbo-frame sizes?

Jumbo frames need to be negotiated in any case - there is no official
jumbo frame support in the Ethernet standards. Thus 1500 is a good
place to start - and that's where we'll start.

I fully agree with those who see no reason to start at 1280 because
there *may* be PMTUD problems. Much better to get this out into the
open, the sooner the better. *Yes* there is a difference in that
routers aren't supposed to fragment in IPv6 - however, if you look
at IPv4 traffic today you'll see that a significant amount of it is
sent with the DF bit turned on - in which case IPv4 and IPv6 are in
the same situation, no fragmentation in routers, PMTUD needs to work.

Steinar Haug, Nethelp consulting, sthaug@nethelp.no
Re: Test your connectivity for World IPv6 Day [ In reply to ]
* Rémi Després

> A tunnel supporting less than 1500 must indeed return ICMP PTB
> messages like any tunnel.
>
> But if the source host has a firewall that filters inbound ICMPv6
> messages, this becomes this host's problem. It becomes also a problem
> of hosts it communicates with although these hosts have no
> responsibility in the problem.
>
> This host avoids the problem if it works with an "effective MTU for
> sending" of 1280 for off-link destinations, except for paths where
> PMTUD has detected better values.
>
>> I refuse to work around their defective network by crippling the
>> MTU for all my visitors.
>
> In my understanding, it isn't a problem of defective ISP network. It
> is a problem of uncertain effectiveness, so far, of PMTUD (worse in
> UDP than in TCP, and aggravated where some firewalls unduly filter
> ICMPv6 messages).

I have no firewalls that filter ICMPv6 PTB between my servers and my
border routers. So when my servers are the source host, PMTUD will work,
unless something on the end user's [ISP's/tunnel broker's] side is
making it break. In which case - not my problem.

I have no influence on what MTU is chosen by the remote host (i.e. the
end user's computer). If they'd like to use an MTU of 1280 - that is fine
by me. I'll serve them just fine. If they give me a TCP MSS of 1220,
I'll respect that, too.
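The 1220 figure above follows directly from header arithmetic: for TCP over IPv6, the MSS is the MTU minus the 40-byte IPv6 header and the 20-byte base TCP header (extension headers and TCP options aside).

```python
# Why a host using MTU 1280 advertises a TCP MSS of 1220 (illustrative
# arithmetic only, ignoring extension headers and TCP options).
IPV6_HEADER = 40
TCP_HEADER = 20

def ipv6_tcp_mss(mtu):
    """MSS an IPv6 TCP endpoint derives from a given MTU."""
    return mtu - IPV6_HEADER - TCP_HEADER

print(ipv6_tcp_mss(1280))  # -> 1220
print(ipv6_tcp_mss(1500))  # -> 1440
```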

Just don't ask me to lower *my own* MTU for *every* packet my servers
transmit because *some* users *may* have defective
firewalls/networks/ISPs/tunnels/whatever that prevent my servers from
discovering the MTU on the end user's inbound path.

>> What MTU do you recommend for IPv4 servers, by the way? 576 or 68?
>
> As you of course know, despite this ironic question, the problem
> comes up in IPv6 because routers can no longer fragment packets.

And when the DF bit is set?

--
Tore Anderson
Redpill Linpro AS - http://www.redpill-linpro.com/
Tel: +47 21 54 41 27
RE: Test your connectivity for World IPv6 Day [ In reply to ]
+1.

The next 48 hours are a good time to discover the brokenness and recommend
best practices to those with that brokenness. If we make sure that all the
links we manage are optimally configured and report any issues we have with
those participating in W6D, then we're starting off on the right foot.

Frank

-----Original Message-----
From: ipv6-ops-bounces+frnkblk=iname.com@lists.cluenet.de
[mailto:ipv6-ops-bounces+frnkblk=iname.com@lists.cluenet.de] On Behalf Of
Gert Doering
Sent: Tuesday, June 07, 2011 7:00 AM
To: Rémi Després
Cc: Marc Blanchet; ipv6-ops@lists.cluenet.de
Subject: Re: Test your connectivity for World IPv6 Day

Hi,

On Tue, Jun 07, 2011 at 01:55:35PM +0200, Rémi Després wrote:
> As detailed in my last answer to Fred, 1280 seems the best choice, at
least in IPv6.
> Can we agree on this?

I'm not agreeing. Crippling the protocol by restricting ourselves to the
least common denominator all the time, except in those cases where
full packet sizes are not working (there might even be 9000-clear paths),
is not the way *forward*.

We need to assume a working internet and fix the remaining breakage, instead
of assuming a broken everything and shying away from doing anything that
might not work for 0.001% of the cases - number made up ad-hoc, but I'm
sitting behind a less-than-1500 DSL line, and I hardly ever see PMTU
problems so far. There were bugs in some OSes and some virtualization
solutions, but they got *fixed* - and how did they get fixed? By using
larger MTUs, noticing that it breaks, and resolving the root cause!

Gert Doering
-- NetMaster
--
did you enable IPv6 on something today...?

SpaceNet AG Vorstand: Sebastian v. Bomhard
Joseph-Dollinger-Bogen 14 Aufsichtsratsvors.: A. Grundner-Culemann
D-80807 Muenchen HRB: 136055 (AG Muenchen)
Tel: +49 (89) 32356-444 USt-IdNr.: DE813185279
RE: Test your connectivity for World IPv6 Day [ In reply to ]
If a host is behind a firewall that filters ICMPv6 messages and the person
managing the host can't change/fix the firewall, then yes, temporarily
lowering the host's MTU makes sense, while working to get that firewall
reconfigured or fixed.

-----Original Message-----
From: ipv6-ops-bounces+frnkblk=iname.com@lists.cluenet.de
[mailto:ipv6-ops-bounces+frnkblk=iname.com@lists.cluenet.de] On Behalf Of
Rémi Després
Sent: Tuesday, June 07, 2011 10:11 AM
To: Tore Anderson
Cc: IPv6-OPS
Subject: Re: Test your connectivity for World IPv6 Day

<snip>

A tunnel supporting less than 1500 must indeed return ICMP PTB messages like
any tunnel.

But if the source host has a firewall that filters inbound ICMPv6 messages,
this becomes this host's problem.
It also becomes a problem for the hosts it communicates with, although those
hosts bear no responsibility for it.

This host avoids the problem if it works with an "effective MTU for sending"
of 1280 for off-link destinations, except for paths where PMTUD has detected
better values.

> I refuse to work
> around their defective network by crippling the MTU for all my visitors.

In my understanding, it isn't a problem of a defective ISP network.
It is a problem of the uncertain effectiveness, so far, of PMTUD (worse with
UDP than with TCP, and aggravated where some firewalls unduly filter ICMPv6
messages).

>
> What MTU do you recommend for IPv4 servers, by the way? 576 or 68?

As you of course know, despite this ironic question, the problem comes up in
IPv6 because routers can no longer fragment packets.


Regards,
RD
Re: Test your connectivity for World IPv6 Day [ In reply to ]
Le 7 juin 2011 à 18:11, Tore Anderson a écrit :

> * Rémi Després
>
>> A tunnel supporting less than 1500 must indeed return ICMP PTB
>> messages like any tunnel.
>>
>> But if the source host has a firewall that filters inbound ICMPv6
>> messages, this becomes this host's problem. It becomes also a problem
>> of hosts it communicates with although these hosts have no
>> responsibility in the problem.
>>
>> This host avoids the problem if it works with an "effective MTU for
>> sending" of 1280 for off-link destinations, except for paths where
>> PMTUD has detected better values.
>>
>>> I refuse to work around their defective network by crippling the
>>> MTU for all my visitors.
>>
>> In my understanding, it isn't a problem of defective ISP network. It
>> is a problem of uncertain effectiveness, so far, of PMTUD (worse in
>> UDP than in TCP, and aggravated where some firewalls unduly filter
>> ICMPv6 messages).
>
> I have no firewalls that filter ICMPv6 PTB between my servers and my
> border routers. So when my servers are the source host, PMTUD will work,
> unless something in on the end users's [ISP's/tunnel broker's] side is
> making it break. In which case - not my problem.
>
> I have no influence on what MTU is chosen by the remote host (i.e. the
> end user's computer). If they'd like to use a MTU of 1280 - that is fine
> by me. I'll serve them just fine. If they give me a TCP MSS of 1220,
> I'll respect that, too.
>
> Just don't ask me to lower *my own* MTU for *every* packet my servers
> transmit because *some* users *may* have defective
> firewalls/networks/ISPs/tunnels/whatever that prevents my servers from
> discovering the MTU in the end user's inbound path.

OK, well-controlled servers like yours can be less conservative than, for example, servers that people run behind unknown firewalls.
Consciously setting their default PMTU to 1500 makes sense, I agree. (My initial statement was too general.)

OTOH, I am still convinced that an OS intended for plug-and-play operation, including behind all typical firewalls, should default to 1280 as a safeguard against potential connectivity problems.



>>> What MTU do you recommend for IPv4 servers, by the way? 576 or 68?
>>
>> As you of course know, despite this ironic question, the problem
>> comes up in IPv6 because routers can no longer fragment packets.
>
> And when the DF bit is set?

OK, I agree that TCP packets sent with PMTUD normally have their DF bit set, so that in practice all IPv4 paths have to support 1500.


The question that remains, IMHO, is how much efficiency is lost by starting with 1500 in the short term, i.e., while a large proportion of IPv6 traffic crosses tunnels.
Even neglecting 6to4 and Teredo, a large part of the native-address traffic still uses tunnel brokers or 6rd.
Yet I agree that this is secondary compared to connectivity issues, and I wouldn't recommend 1280 for this reason alone.

Thanks for the points you made.

Regards,
RD


>
> --
> Tore Anderson
> Redpill Linpro AS - http://www.redpill-linpro.com/
> Tel: +47 21 54 41 27
Re: Test your connectivity for World IPv6 Day [ In reply to ]
On Tue, 7 Jun 2011 17:31:11 +0200
Rémi Després <remi.despres@free.fr> wrote:

>
> Le 7 juin 2011 à 14:29, Mark Smith a écrit :
>
> > On Tue, 7 Jun 2011 12:30:08 +0200
> > Rémi Després <remi.despres@free.fr> wrote:
> >
> >>
> >> Le 6 juin 2011 à 21:41, Daniel Roesen a écrit :
> >>
> >>> On Mon, Jun 06, 2011 at 05:58:43PM +0200, Rémi Després wrote:
> >>>> Unless there are good reasons to know that a longer PMTU applies to all
> >>>> their connections, all servers SHOULD send IPv6 packets to off-link
> >>>> destinations with 1280 octets as default PMTU.
>
> I agree that this sentence I wrote is too restrictive.
> Longer PMTU's can be discovered on a per connection basis.
> Yet 1280 remains AFAIK a good value to start with.
>
>
> >>>
> >>> So you advocate making 1280 effectively the maximum MTU, not the minimum?
> >>
> >> The "default" MTU, not the "maximum".
> >>
> >> If some PMTUD permits to detect that a larger PMTU is possible on a specific path, it can then be used.
> >
> > Conventional PMTUD only works downwards, so if you start at a
> > default MTU 1280, you'll never discover a larger PMTU than 1280. MSS
> > also works to constrain that, as it tells the other end the maximum
> > segment size to send.
>
> This being so, which size do you recommend to start with in IPv6?
> - 1500?
> - 9000 to get a chance to reach jumbo-frame sizes?
>

The host's outbound interface MTU. Not so much a recommendation, though, as
this is IPv4's and IPv6's default PMTUD behaviour.

> RD
>
>
>
>
> >
> >> Also, on-link connections can use the link MTU.
> >> Regards,
> >> RD
> >>
> >>>
> >>> I'm with Fred here. Fix broken paths, not work around the problems.
> >>>
> >>> Best regards,
> >>> Daniel
> >>>
> >>> --
> >>> CLUE-RIPE -- Jabber: dr@cluenet.de -- dr@IRCnet -- PGP: 0xA85C8AA0
> >>
>
Re: Test your connectivity for World IPv6 Day [ In reply to ]
Hi,

On Tue, Jun 07, 2011 at 05:31:11PM +0200, Rémi Després wrote:
> This being so, which size do you recommend to start with in IPv6?
> - 1500?
> - 9000 to get a chance to reach jumbo-frame sizes?

Local LAN MTU. Whatever that is.

Gert Doering
-- NetMaster
--
did you enable IPv6 on something today...?

SpaceNet AG Vorstand: Sebastian v. Bomhard
Joseph-Dollinger-Bogen 14 Aufsichtsratsvors.: A. Grundner-Culemann
D-80807 Muenchen HRB: 136055 (AG Muenchen)
Tel: +49 (89) 32356-444 USt-IdNr.: DE813185279
RE: Test your connectivity for World IPv6 Day [ In reply to ]
So far everything is going smoothly with our site; no complaints to date. Also note the Twitter hashtag #WorldIPv6Day.

http://www.seven.com

Let me know if you have any issues.
Re: Test your connectivity for World IPv6 Day [ In reply to ]
Seeing some ping-pong issues with www.ipv6matrix.org on NewNet.

>tracert www.ipv6matrix.org

Tracing route to elephant.ipv6matrix.org [2a00:19e8:20:1::aa]
over a maximum of 30 hops:

1 <1 ms <1 ms <1 ms 2a02:2148:100:902::1
2 163 ms 64 ms 33 ms 2a02:2148:77:77:2::6
3 94 ms 34 ms 34 ms 2a02:2148:2:2000::1
4 88 ms 170 ms 90 ms 20gigabitethernet4-3.core1.fra1.he.net
[2001:7f8::1b1b:0:1]
5 140 ms 97 ms 97 ms 10gigabitethernet5-3.core1.lon1.he.net
[2001:470:0:1d2::1]
6 99 ms 159 ms 98 ms 2001:7f8:4::23e7:1
7 99 ms 115 ms 201 ms 2a01:178:9191:3::200
8 115 ms 98 ms 97 ms 2a01:178:9191:3::1
9 98 ms 98 ms 98 ms 2a01:178:9191:3::200
10 102 ms 97 ms 98 ms 2a01:178:9191:3::1
11 122 ms 97 ms 192 ms 2a01:178:9191:3::200
12 98 ms 132 ms 98 ms 2a01:178:9191:3::1
13 99 ms 97 ms 97 ms 2a01:178:9191:3::200
14 195 ms 98 ms 98 ms 2a01:178:9191:3::1
15 98 ms 97 ms 98 ms 2a01:178:9191:3::200
16 97 ms 98 ms 98 ms 2a01:178:9191:3::1
17 99 ms 98 ms 99 ms 2a01:178:9191:3::200
18 129 ms 98 ms 98 ms 2a01:178:9191:3::1
19 99 ms 98 ms 99 ms 2a01:178:9191:3::200
20 98 ms 109 ms 99 ms 2a01:178:9191:3::1
21 99 ms 97 ms 98 ms 2a01:178:9191:3::200
22 98 ms 139 ms 98 ms 2a01:178:9191:3::1
23 99 ms 99 ms 99 ms 2a01:178:9191:3::200
24 99 ms 99 ms 99 ms 2a01:178:9191:3::1
25 99 ms 98 ms 155 ms 2a01:178:9191:3::200
26 100 ms 99 ms 99 ms 2a01:178:9191:3::1
27 100 ms 123 ms 99 ms 2a01:178:9191:3::200
28 99 ms 98 ms 132 ms 2a01:178:9191:3::1
29 99 ms 99 ms 99 ms 2a01:178:9191:3::200
30 102 ms 222 ms 213 ms 2a01:178:9191:3::1

Trace complete.
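Incidentally, the two-address "ping-pong" visible in hops 7-30 above is easy to spot mechanically. A small sketch (hypothetical helper, not part of any standard traceroute tool):

```python
# Detect a routing loop of the kind shown above: the same pair of hop
# addresses alternating repeatedly until the hop limit is reached.
def has_ping_pong(hops, min_repeats=3):
    """True if some adjacent pair of addresses repeats min_repeats times in a row."""
    for i in range(len(hops) - 2 * min_repeats + 1):
        a, b = hops[i], hops[i + 1]
        if a == b:
            continue
        if hops[i:i + 2 * min_repeats] == [a, b] * min_repeats:
            return True
    return False

# Example shaped like the trace above (addresses taken from it).
loopy = ["2a02:2148:2:2000::1"] + \
        ["2a01:178:9191:3::200", "2a01:178:9191:3::1"] * 5
print(has_ping_pong(loopy))  # -> True
```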

--
Tassos
RE: Test your connectivity for World IPv6 Day [ In reply to ]
Perhaps not anymore?

nagios:/tmp# traceroute6 www.ipv6matrix.org
traceroute to www.ipv6matrix.org (2a00:19e8:20:1::aa), 30 hops max, 80 byte
packets
1 2607:fe28:0:1003::2 (2607:fe28:0:1003::2) 3.923 ms 4.353 ms 5.383 ms
2 router-core.mtcnet.net (2607:fe28:0:1000::1) 6.074 ms 6.372 ms 6.713
ms
3 sxct.movl.mtcnet.net (2607:fe28:11:1002::194) 8.271 ms 8.516 ms 9.341
ms
4 v6-siouxcenter.movl.153.netins.net (2001:5f8:7f06::9) 16.877 ms 17.124
ms 17.332 ms
5 v6-ins-kb1-et-11-6-307.kmrr.netins.net (2001:5f8:2:2::1) 20.513 ms
20.781 ms 21.116 ms
6 v6-ins-kc1-te-9-2.kmrr.netins.net (2001:5f8::10:1) 21.508 ms 17.599 ms
17.863 ms
7 sl-crs4-chi-te0-2-0-1.v6.sprintlink.net (2600:4::6) 26.898 ms 26.054
ms 26.249 ms
8 sl-crs3-chi-po0-1-2-0.v6.sprintlink.net (2600:0:2:1239:144:232:3:143)
27.093 ms 26.052 ms 26.103 ms
9 sl-st30-chi-po0-4-0-0.v6.sprintlink.net (2600:0:2:1239:144:232:8:184)
26.206 ms 20.901 ms 23.879 ms
10 * * *
11 10gigabitethernet7-2.core1.nyc4.he.net (2001:470:0:1c6::1) 63.056 ms
64.132 ms 63.923 ms
12 10gigabitethernet3-3.core1.lon1.he.net (2001:470:0:128::2) 131.639 ms
131.642 ms 132.250 ms
13 * * *
14 2a01:178:9191:3::17 (2a01:178:9191:3::17) 133.774 ms 134.849 ms
134.514 ms
15 2a01:178:500::6 (2a01:178:500::6) 128.390 ms 128.413 ms 128.602 ms
16 2a00:19e8::1:d (2a00:19e8::1:d) 128.893 ms 127.646 ms 127.683 ms
17 2a00:19e8::1:5 (2a00:19e8::1:5) 127.557 ms 126.850 ms *
18 elephant.ipv6matrix.org (2a00:19e8:20:1::aa) 128.325 ms 127.747 ms
127.941 ms
nagios:/tmp#

-----Original Message-----
From: ipv6-ops-bounces+frnkblk=iname.com@lists.cluenet.de
[mailto:ipv6-ops-bounces+frnkblk=iname.com@lists.cluenet.de] On Behalf Of
Tassos Chatzithomaoglou
Sent: Wednesday, June 08, 2011 12:15 AM
To: IPv6-OPS
Subject: Re: Test your connectivity for World IPv6 Day

Seeing some ping-pong issues with www.ipv6matrix.org on NewNet.


--
Tassos
Re: Test your connectivity for World IPv6 Day [ In reply to ]
ping-pong is still on...

Judging from your output, hop 13 must be LINX (hop 6 in my output).
Traffic then goes through 2a01:178:9191:3::17 in your output vs
2a01:178:9191:3::200 in mine.

A trace from the LINX LG via the 226.x peers returns your path.
A trace via the 224.x peers returns my path (or "Command timed out").

Looking Glass Query results

Type escape sequence to abort.
Tracing the route to elephant.ipv6matrix.org (2A00:19E8:20:1::AA)

1 2001:7F8:4::23E7:1 4 msec 0 msec 0 msec
2 2A01:178:9191:3::200 4 msec 0 msec 0 msec
3 2A01:178:9191:3::1 0 msec 4 msec 0 msec
4 2A01:178:9191:3::200 0 msec 0 msec 0 msec
5 2A01:178:9191:3::1 8 msec 4 msec 4 msec
6 2A01:178:9191:3::200 0 msec 0 msec 0 msec
7 2A01:178:9191:3::1 4 msec 4 msec 0 msec
8 2A01:178:9191:3::200 0 msec 0 msec 0 msec
9 2A01:178:9191:3::1 0 msec 4 msec *
10 2A01:178:9191:3::200 0 msec 4 msec 0 msec
11 2A01:178:9191:3::1 0 msec 4 msec 0 msec
12 2A01:178:9191:3::200 0 msec 0 msec 0 msec
13 2A01:178:9191:3::1 4 msec 0 msec 0 msec
14 2A01:178:9191:3::200 4 msec 0 msec 4 msec
15 2A01:178:9191:3::1 0 msec 4 msec *
16 2A01:178:9191:3::200 0 msec 0 msec 0 msec
17 2A01:178:9191:3::1 4 msec 0 msec 0 msec
18 2A01:178:9191:3::200 4 msec 4 msec 4 msec
19 2A01:178:9191:3::1 0 msec 20 msec 0 msec
20 2A01:178:9191:3::200 0 msec 0 msec 4 msec
21 *
2A01:178:9191:3::1 0 msec 4 msec
22 2A01:178:9191:3::200 4 msec 0 msec 4 msec
23 2A01:178:9191:3::1 8 msec 4 msec 4 msec
24 2A01:178:9191:3::200 4 msec 4 msec 4 msec
25 2A01:178:9191:3::1 8 msec 4 msec 4 msec
26 2A01:178:9191:3::200 4 msec 0 msec 8 msec
27 2A01:178:9191:3::1 0 msec * 4 msec
28 2A01:178:9191:3::200 4 msec 0 msec 0 msec
29 2A01:178:9191:3::1 0 msec 4 msec 4 msec
30 2A01:178:9191:3::200 4 msec 0 msec 0 msec
Destination not found inside max hopcount diameter.
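The two looking-glass paths diverge immediately after LINX. One quick way to pin down a split like this is to align two traces on their first shared hop and report where they differ; a small sketch (align_and_diverge is a hypothetical helper; the addresses are pulled from the traces quoted in this thread):

```python
def align_and_diverge(trace_a, trace_b):
    """Align two hop lists on their first shared address, then return
    the pair of hops where the aligned paths first differ (or None)."""
    shared = next((h for h in trace_a if h in trace_b), None)
    if shared is None:
        return None  # no common hop; the paths cannot be aligned
    i, j = trace_a.index(shared), trace_b.index(shared)
    for a, b in zip(trace_a[i:], trace_b[j:]):
        if a != b:
            return (a, b)
    return None

# Hops at and after LINX in the two paths (226.x vs 224.x peers):
path_226 = ["2001:7f8:4::23e7:1", "2a01:178:9191:3::17", "2a01:178:500::6"]
path_224 = ["2001:7f8:4::23e7:1", "2a01:178:9191:3::200", "2a01:178:9191:3::1"]
print(align_and_diverge(path_226, path_224))
# -> ('2a01:178:9191:3::17', '2a01:178:9191:3::200')
```

Here the split right after the shared LINX hop points at an inconsistent next-hop choice on the far side of the exchange, consistent with only one of the two LINX route servers handing out a working path.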


--
Tassos


Frank Bulk wrote on 08/06/2011 08:22:
> Perhaps not anymore?
>
> -----Original Message-----
> From: ipv6-ops-bounces+frnkblk=iname.com@lists.cluenet.de
> [mailto:ipv6-ops-bounces+frnkblk=iname.com@lists.cluenet.de] On Behalf Of
> Tassos Chatzithomaoglou
> Sent: Wednesday, June 08, 2011 12:15 AM
> To: IPv6-OPS
> Subject: Re: Test your connectivity for World IPv6 Day
>
> Seeing some ping-pong issues with www.ipv6matrix.org on NewNet.
>
>
> --
> Tassos
Re: Test your connectivity for World IPv6 Day [ In reply to ]

Hi,

Traceroute6 from AS2614 looks like this:

bwt:~/tmp/# date
Wed Jun 8 09:13:34 EEST 2011
bwt:~/tmp/# traceroute6 www.ipv6matrix.org
traceroute to www.ipv6matrix.org (2a00:19e8:20:1::aa), 30 hops max, 40
byte packets
1 2001:b30:1::1 (2001:b30:1::1) 0.549 ms 0.604 ms 0.710 ms
2 2001:b30:0:5::1 (2001:b30:0:5::1) 0.379 ms 0.506 ms 0.579 ms
3 te-4-3.br1.nat.roedu.net (2001:b30:0:b::1) 1.760 ms * *
4 buca-b1-link.telia.net (2001:2000:3080:14b::1) 0.412 ms 0.484 ms
0.470 ms
5 s-fre-i28-v6.telia.net (2001:2000:3018:29::1) 58.488 ms 58.571 ms
58.695 ms
6 s-fre-i28-v6.telia.net (2001:2000:3018:29::1) 58.778 ms 58.697 ms
58.074 ms
7 2001:2000:3080::2 (2001:2000:3080::2) 82.394 ms * *
8 ten1-0-0.t40-cr1.ipv6.router.uk.clara.net (2001:a88:0:1::119)
82.530 ms * *
9 g4-1-t40-br2.ipv6.router.uk.clara.net (2001:a88:0:1::d9) 82.557 ms
82.495 ms 106.984 ms
10 2001:a88:0:2::3a (2001:a88:0:2::3a) 83.676 ms 83.729 ms 83.778 ms
11 elephant.ipv6matrix.org (2a00:19e8:20:1::aa) 83.080 ms 83.079 ms
83.031 ms

On 6/8/11 8:57 AM, Tassos Chatzithomaoglou wrote:
> ping-pong is still on...
>
> Judging from your output, hop 13 must be LINX (hop 6 in my output).
> Then going through 2a01:178:9191:3::17 in your output vs
> 2a01:178:9191:3::200 in my output.
>
> Trying a trace on LINX LG (226.x peers), it returns your path.
> Trying a trace on LINX LG (224.x peers), it returns my path (or "Command
> timed out").
>


--
=================================
Valeriu VRACIU
Network Engineer at RoEduNet
tel: +40 (232) 201003
fax: +40 (232) 201200
GSM: +40 (744) 615251
e-mail: valeriu.vraciu@roedu.net
=================================
