Mailing List Archive

Re: Seamless MPLS interacting with flat LDP domains
On Tue, 30 Apr 2019 at 15:04, <adamv0025@netconsultings.com> wrote:
> > So for the ASR920, you get about 20,000 FIB entries. That's what you want
> > to keep your eye on to determine whether you're at a point where you need
> > to do this.
> >
> > Ideally, you would be carrying IGP and LDP in FIB. With BGP-SD, you can
> > control the number of routes you want in FIB.
> >
> Also with OSPF prefix-suppression you can reduce the OSPF footprint to mere
> loopbacks (i.e. excluding all p2p links).
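
For reference, BGP-SD (selective route download) mentioned above is, if
I remember correctly, the "table-map ... filter" knob on IOS-XE; a
minimal sketch with placeholder names:

router bgp 64500
 address-family ipv4
  ! Routes denied by the route-map stay in BGP but are not installed in RIB/FIB
  table-map ONLY-LOOPBACKS filter
!
ip prefix-list PE-LOOPBACKS permit 192.0.2.0/24 ge 32
!
route-map ONLY-LOOPBACKS permit 10
 match ip address prefix-list PE-LOOPBACKS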

Originally, I was in support of prefix-suppression; however, Cisco
implemented it on IOS and IOS-XE devices for OSPF but not IS-IS, and
for neither OSPF nor IS-IS on IOS-XR. Later they implemented it on
IOS-XR devices, but for IS-IS only and not OSPF. Another Cisco fail at
aligning their own features across their own products. So, since it
can't even be deployed in an all-Cisco network, let alone a typical
multi-vendor network, I just don't use it anymore.

Having said that, it does work. This is an example on IOS using
prefix-suppression for OSPF:
https://null.53bits.co.uk/index.php?page=ospf-inter-area-filtering
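
The knob itself is roughly this on IOS/IOS-XE (a minimal sketch from
memory; process ID and interface are placeholders):

router ospf 1
 ! Stop advertising transit-link prefixes; loopbacks are unaffected
 prefix-suppression
!
interface GigabitEthernet0/0
 ! Alternatively, enable it per interface
 ip ospf prefix-suppression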

IS-IS on Cisco does have a method to advertise only the loopback
interface in the LSDB on both IOS/IOS-XE and IOS-XR; however, they are
two different methods.

IOS:
router isis
 ! Advertise only the prefixes of passive interfaces
 advertise passive-only
 passive-interface Loopback0

IOS-XR:
router isis <instance>
 ! IOS-XR requires an instance name
 interface x/y
  ! Exclude this interface's prefix from the LSDB
  suppressed
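
In both cases you can verify the result by checking that the
transit-link prefixes are gone from the IP reachability TLVs, e.g. with
something like:

show isis database detail

on IOS, or "show isis database verbose" on IOS-XR.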

I've got lots of notes on OSPF and IS-IS scaling in a mixed
Juniper/Cisco environment, but they're very much in draft format. I can
try and make them presentable if I get some free time and there is
demand.

Cheers,
James.
_______________________________________________
cisco-nsp mailing list cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: Seamless MPLS interacting with flat LDP domains
On Wed, May 1, 2019, at 00:15, adamv0025@netconsultings.com wrote:

> Convert the DC to a standard MPLS network and all your problems are solved.

Just talk about converting a DC to MPLS and you will start having other kinds of problems.
IMHO, if DCs didn't massively adopt MPLS, it's because of a lack of training and what I call "end-user mentality". In lots of cases you are dealing with people who, when you say "MPLS", understand "site-to-site L3VPN using some magic technology" (that they don't understand). Others do understand some tiny bits but see it as a "carrier technology". And there are enough of them that manufacturers take their opinion into consideration. Result => VXLAN.
That being said, there are DCs that managed to go the MPLS way, but it looks more like the exception rather than the rule. Unfortunately.

--
R.-A. Feurdean
Re: Seamless MPLS interacting with flat LDP domains
On 2/May/19 11:33, Radu-Adrian FEURDEAN wrote:

>
> Just talk about converting a DC to MPLS and you will start having other kinds of problems.
> IMHO, if DCs didn't massively adopt MPLS, it's because of a lack of training and what I call "end-user mentality". In lots of cases you are dealing with people who, when you say "MPLS", understand "site-to-site L3VPN using some magic technology" (that they don't understand). Others do understand some tiny bits but see it as a "carrier technology". And there are enough of them that manufacturers take their opinion into consideration. Result => VXLAN.
> That being said, there are DCs that managed to go the MPLS way, but it looks more like the exception rather than the rule. Unfortunately.

When exchange points started using it (VPLS) to operate the member
fabric, you knew it was downhill from there :-).

And that was way before all this Cloud/DCI/VXLAN/SDN/SD-WAN monstrosity
our industry finds itself in :-).

Mark.
Re: Seamless MPLS interacting with flat LDP domains
> Radu-Adrian FEURDEAN
> Sent: Thursday, May 2, 2019 10:34 AM
>
> On Wed, May 1, 2019, at 00:15, adamv0025@netconsultings.com wrote:
>
> > Convert the DC to a standard MPLS network and all your problems are solved.
>
> Just talk about converting a DC to MPLS and you will start having other
> kinds of problems.
> IMHO, if DCs didn't massively adopt MPLS, it's because of a lack of
> training and what I call "end-user mentality". In lots of cases you are
> dealing with people who, when you say "MPLS", understand "site-to-site
> L3VPN using some magic technology" (that they don't understand).
>
Yeah, that's exactly it. I'm up against this "end-user mentality" on a
daily basis when talking about other "magical" stuff like MD-SAL or YANG.

> Others do understand some tiny bits but see it as a "carrier
> technology". And there are enough of them that manufacturers take their
> opinion into consideration. Result => VXLAN.
>
Though it's interesting that the same people who are afraid of the
simple bit, which is the hop-by-hop transport, are fine with the complex
EVPN bit on top :)
But I get it, vendors nowadays assume the crowd's networking knowledge
is subpar, hence these nicely pre-packaged, click-a-button solutions for
DC deployments.

> That being said, there are DCs that managed to go the MPLS way, but it
> looks more like the exception rather than the rule. Unfortunately.
>
We'll see what time brings.

adam

Re: Seamless MPLS interacting with flat LDP domains
On 2/May/19 14:14, adamv0025@netconsultings.com wrote:

> Though it's interesting that the same people who are afraid of the
> simple bit, which is the hop-by-hop transport, are fine with the complex
> EVPN bit on top :)

Which is exactly why the market I was in, which was heavily using VPLS
at the time, was more about bragging than actual function.

Anyone who came to me asking for VPLS, I asked what L3VPN didn't do
well enough for them. We never deployed a VPLS customer, even though the
network itself was running VPLS for PPPoE backhaul. I mean, I would sort
of understand if your payload was IPX/SPX, DECnet or AppleTalk... but
anyone running such payloads is probably more comfortable building and
operating an X.25, Frame Relay or ATM network!

The constant need to re-invent yourself so that you can appeal to your
customers or employers is what has seen a number of inappropriate
technologies deployed in Service Provider and Enterprise networks.
In 2014, a customer wanted to nail a 10Gbps EoMPLS service to specific
paths on our IP/MPLS backbone, which meant we'd have had to use RSVP. I
directed them to our EoDWDM service instead. Not every knob needs to be
touched, unless it's the one that rules them all :-).

> But I get it, vendors nowadays assume the crowd's networking knowledge
> is subpar, hence these nicely pre-packaged, click-a-button solutions for
> DC deployments.

I was speaking with a friend in the community a few months ago...
perhaps good old-fashioned routing workshops at NOGs will make a
glorious come-back, since the focus is less on what you can do by hand
and more on what you can click with a mouse (in the process, losing the
basics). But alas, there are no teachers anymore, as we've found. They
are in short supply, busy with day jobs, and the new blood that's coming
up is too far disconnected from the basics of deploying and running an
IP network; they'd rather be fooling around with an app somewhere.

There I go, rambling about the good old days, already :-)...

Mark.

Re: Seamless MPLS interacting with flat LDP domains
Radu,

MPLS in the modern DC is a non-starter purely from a technology PoV.

In modern DCs the compute nodes are your tenant PEs, all talking to the
rest of the fabric at L3. So if you want to roll out MPLS, you would
need to do that down to the compute nodes. That means that with
exact-match forwarding you will see, in MSDCs, millions of FECs and
millions of underlay routes which you cannot summarize. Plus, on top of
that, an overlay, say L3VPNs, for tenant/pod reachability.

Good luck operating that scale with MPLS forwarding. Besides, while
some host NIC vendors claim support for MPLS, they do so only in
PowerPoint. In real life, take a very popular NIC vendor and you will
find that MPLS packets do not get round-robin queueing to the kernel
like IPv4 or IPv6, but all line up in a single buffer.

Only by hacking the firmware of the NIC, with some other NIC vendor
which was also far from decent out of the box, was I able to spread
those flows around so that the performance of MPLS streams arriving at
the compute node was acceptable.

Best,
R.


On Thu, May 2, 2019 at 5:35 AM Radu-Adrian FEURDEAN
<cisco-nsp@radu-adrian.feurdean.net> wrote:

> On Wed, May 1, 2019, at 00:15, adamv0025@netconsultings.com wrote:
>
> > Convert the DC to a standard MPLS network and all your problems are solved.
>
> Just talk about converting a DC to MPLS and you will start having other
> kinds of problems.
> IMHO, if DCs didn't massively adopt MPLS, it's because of a lack of
> training and what I call "end-user mentality". In lots of cases you are
> dealing with people who, when you say "MPLS", understand "site-to-site
> L3VPN using some magic technology" (that they don't understand). Others
> do understand some tiny bits but see it as a "carrier technology". And
> there are enough of them that manufacturers take their opinion into
> consideration. Result => VXLAN.
> That being said, there are DCs that managed to go the MPLS way, but it
> looks more like the exception rather than the rule. Unfortunately.
>
> --
> R.-A. Feurdean
Re: Seamless MPLS interacting with flat LDP domains
On Fri, May 3, 2019, at 04:15, Robert Raszuk wrote:
> Radu,
>
> MPLS in the modern DC is a non-starter purely from a technology PoV.
>
> In modern DCs the compute nodes are your tenant PEs, all talking to
> the rest of the fabric at L3. So if you want to roll out MPLS, you
> would need to do that down to the compute nodes. That means that with
> exact-match forwarding you will see, in MSDCs, millions of FECs and
> millions of underlay routes which you cannot summarize. Plus, on top
> of that, an overlay, say L3VPNs, for tenant/pod reachability.

Hi,

That is a specific design for a specific DC size. And even so, I see a problem with the compute nodes being the PE rather than the CE.
But you have a point for that case - MPLS is not for networks that grow to a certain high number of routers (and in your case endpoints become routers).

> IPv6, but all line up in a single buffer.

Talking about IPv6, do you see many DCs deploying IPv6 in a meaningful way? Just my curiosity...

That being said, I'm curious how many datacenters you see with millions of hosts AND routing down to host level.
From what I have visibility into, DCs go up to thousands of hosts (tens of thousands is already big, hundreds of thousands is huge), and the network usually stops at a network device in front of them. You may not call them a "modern DC", but that's the most common occurrence. And even for the case of big and huge DCs, there is a question to be posed about where you put the boundary for the aggregation level. Always at the DC border? Really? At that size?

--
R.-A. Feurdean
Re: Seamless MPLS interacting with flat LDP domains
On 3/May/19 15:33, Radu-Adrian FEURDEAN wrote:

> Talking about IPv6, do you see many DCs deploying IPv6 in a meaningful way? Just my curiosity...

Didn't Facebook say that all their M2M communication is exclusively over
IPv6?

Mark.
Re: Seamless MPLS interacting with flat LDP domains
On 03/05/2019 15:33, Radu-Adrian FEURDEAN wrote:
> Talking about IPv6, do you see many DCs deploying IPv6 in a meaningful way? Just my curiosity...

May be of interest in that regard: "Simplicity is key to network
redesign for LINE" -
https://blog.apnic.net/2019/05/03/simplicity-is-key-to-network-redesign-for-line/
Re: Seamless MPLS interacting with flat LDP domains
On Sat, May 4, 2019, at 10:22, Mark Tinka wrote:
> Didn't Facebook say that all their M2M communication is exclusively over IPv6?

Meaningful, yes, but unfortunately 100% not representative (they also do U2M communication over IPv6 whenever possible).

What I meant was more "realistic" deployments. Facebook is one well-known example (and if my memory is good, LinkedIn is another). For each of them you can easily find (tens of?) thousands of much more "classic" deployments, with fewer than 100 racks (often down to 10 or even fewer) and no IPv6 on the roadmap, not even in last place. I was thinking of those guys - do any of them think of changing the status quo?

--
R.-A. Feurdean
Re: Seamless MPLS interacting with flat LDP domains
On Sat, May 4, 2019, at 11:48, Hansen, Christoffer wrote:

> May be of interest in that regard: "Simplicity is key to network
> redesign for LINE" -
> https://blog.apnic.net/2019/05/03/simplicity-is-key-to-network-redesign-for-line/

While still not very representative, it's much better than a "GAFAM"
(the "F" having been given as an example in a previous e-mail).
Thanks.

--
R.-A. Feurdean
Re: Seamless MPLS interacting with flat LDP domains
On 4/May/19 17:17, Radu-Adrian FEURDEAN wrote:

> Meaningful, yes, but unfortunately 100% not representative (they also do U2M communication over IPv6 whenever possible).
>
> What I meant was more "realistic" deployments. Facebook is one well-known example (and if my memory is good, LinkedIn is another). For each of them you can easily find (tens of?) thousands of much more "classic" deployments, with fewer than 100 racks (often down to 10 or even fewer) and no IPv6 on the roadmap, not even in last place. I was thinking of those guys - do any of them think of changing the status quo?

Well, that's not unlike asking my Enterprise customers what their IPv6
plans are.

The major CDNs, I know, have gone IPv6 (U2M). There are a bunch of
other not-so-well-known CDNs out there, and I don't know what their
IPv6 plans are, or if they've done anything in that regard.

For regular folk just running a couple of racks in a data centre, no
clue, really. But for some reason or other, for that kind, based on
experience I've seen globally, I'd err (with a bit of conjecturbation)
on the side of them being less interested in IPv6 at present.

Mark.
Re: Seamless MPLS interacting with flat LDP domains
> Robert Raszuk
> Sent: Friday, May 3, 2019 3:16 AM
>
> Radu,
>
> MPLS in the modern DC is a non-starter purely from a technology PoV.
>
> In modern DCs the compute nodes are your tenant PEs, all talking to the
> rest of the fabric at L3. So if you want to roll out MPLS, you would
> need to do that down to the compute nodes. That means that with
> exact-match forwarding you will see, in MSDCs, millions of FECs and
> millions of underlay routes which you cannot summarize. Plus, on top of
> that, an overlay, say L3VPNs, for tenant/pod reachability.
>
Well, I guess wherever summarization is used in a pure IP underlay, a
Seamless MPLS boundary would be used in an MPLS underlay, so the
underlay routes/FECs visible to the compute nodes would be contained.
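
On an IOS-XR boundary node that would look roughly like this (a rough
sketch; ASN and neighbor address are placeholders):

router bgp 64500
 address-family ipv4 unicast
  ! Carry PE loopbacks as BGP labeled-unicast (BGP-LU)
  allocate-label all
 !
 neighbor 192.0.2.1
  remote-as 64500
  address-family ipv4 labeled-unicast
   route-reflector-client
   ! The boundary node inserts itself into the label switched path
   next-hop-self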


> Good luck operating that scale with MPLS forwarding. Besides, while
> some host NIC vendors claim support for MPLS, they do so only in
> PowerPoint. In real life, take a very popular NIC vendor and you will
> find that MPLS packets do not get round-robin queueing to the kernel
> like IPv4 or IPv6, but all line up in a single buffer.
>
> Only by hacking the firmware of the NIC, with some other NIC vendor
> which was also far from decent out of the box, was I able to spread
> those flows around so that the performance of MPLS streams arriving at
> the compute node was acceptable.
>
Hmm, good to know, I wasn't aware of this.
I guess this was specific to a certain setup, right?
https://www.netronome.com/blog/ovs-offload-models-used-nics-and-smartnics-pros-and-cons/


adam


Re: Seamless MPLS interacting with flat LDP domains
On 6 May 2019 10:12:25 BST, adamv0025@netconsultings.com wrote:
>> Robert Raszuk
>> Sent: Friday, May 3, 2019 3:16 AM
>>
>> Radu,
>>
>> MPLS in the modern DC is a non-starter purely from a technology PoV.
>>
>> In modern DCs the compute nodes are your tenant PEs, all talking to
>> the rest of the fabric at L3. So if you want to roll out MPLS, you
>> would need to do that down to the compute nodes. That means that with
>> exact-match forwarding you will see, in MSDCs, millions of FECs and
>> millions of underlay routes which you cannot summarize. Plus, on top
>> of that, an overlay, say L3VPNs, for tenant/pod reachability.
>>
> Well, I guess wherever summarization is used in a pure IP underlay, a
> Seamless MPLS boundary would be used in an MPLS underlay, so the
> underlay routes/FECs visible to the compute nodes would be contained.

DoD label allocation and longest-prefix matching for FECs mean that summarization can be used in a Seamless MPLS architecture:

https://tools.ietf.org/html/rfc5283
https://tools.ietf.org/html/rfc7032
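
If memory serves, on Junos the RFC 5283 behaviour is a single knob (a
minimal sketch; whether you want it depends on your summarization
design):

set protocols ldp longest-match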

Cheers,
James.
