> Saku Ytti
> Sent: Friday, July 19, 2019 7:46 AM
> On Fri, 19 Jul 2019 at 04:27, Jared Mauch <email@example.com> wrote:
> > Is there a reason to not do 4x10G or 1x100G? It’s cheap enough these
> > If they’re in-datacenter I can maybe understand 40G but outside the DC it’s
> unclear to me why someone would do this.
> Agreed. 40GE future looks extremely bad. This gen is 25G lanes, next gen is
> 50G lanes. QSFP56 will support 8 or 4 lanes at 25G or 50G. So you can get
> perfect break-out, without wasting any capacity. Commonly today 40GE port
> density is identical to 100GE density, wasting 60% of your investment, just to
> avoid using gearboxes and retimers.
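The "wasting 60%" figure quoted above follows from simple arithmetic: if 40GE and 100GE port density are identical, every 40GE optic sits in hardware capable of 100G and uses only 40% of it. A minimal sketch of that calculation:

```python
# Port hardware capable of 100GE, populated with a 40GE optic.
port_capacity_gbps = 100  # what the port silicon can drive
used_gbps = 40            # what the 40GE optic actually carries

wasted_fraction = 1 - used_gbps / port_capacity_gbps
print(f"wasted: {wasted_fraction:.0%}")  # prints "wasted: 60%"
```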
I agree with the statement above that 40G is dead going forward, but the "100G instead of 40G because it's cheaper" argument is the part I'm not actually getting.
Disclaimer: I'm in a business where, at the customer edge, it's not so much about actual tx/rx rates as about port quantities, so I'd happily run 2:1 front-to-back card oversubscription.
Now, if we're talking about giving customers 100G ports instead of the 40G ports they requested, even though they don't need it (truth be told, they most likely don't even need 40G), then what options do we have on MX?
MPC7s have come down in price significantly over the past year or so, and I can get at most 4x 100G ports out of one, whereas I can get 12x 40G ports.
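To make the MPC7 trade-off concrete, here is a small sketch comparing the two port configurations mentioned above. The card names and port counts come from the post; the point is that if customer port count is what matters, the 40GE configuration yields three times the ports (and even slightly more aggregate capacity):

```python
# MPC7 configurations from the post: 4x100GE vs 12x40GE.
configs = {
    "MPC7 as 100GE": {"ports": 4, "gbps_per_port": 100},
    "MPC7 as 40GE": {"ports": 12, "gbps_per_port": 40},
}

for name, c in configs.items():
    aggregate = c["ports"] * c["gbps_per_port"]
    print(f"{name}: {c['ports']} customer ports, {aggregate}G aggregate")
# MPC7 as 100GE: 4 customer ports, 400G aggregate
# MPC7 as 40GE: 12 customer ports, 480G aggregate
```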
Going for MPC10s instead, just so each of those, say, 12 customers gets a 100G port, even though they did not ask for 100GE and would barely use 40G in reality, doesn't quite add up.
Maybe once there is a new 12x400G card and MPC10 prices plummet, then sure.
I'd say that unless you're in a business where your access pipes are all running hot and you have to bear the "premium" pricing of the latest, fastest hardware, the model of "buy capacity now because you will definitely need it in the future" isn't the right one. Instead, I'd suggest buying capacity when you actually need it; chances are the state of the art will have moved on by then, and you won't need to pay a "premium" any more to fulfil your then-current capacity needs.
juniper-nsp mailing list firstname.lastname@example.org
https://puck.nether.net/mailman/listinfo/juniper-nsp