Discussion:
[j-nsp] MX80 Route table Size
Luca Salvatore
2013-09-23 01:24:48 UTC
Hi,
I can't seem to find how many IPv4/IPv6 routes the MX80 range can support. I know it can do the full BGP table, but the info does not seem to be anywhere on juniper.net.
I'm sure it used to be there. Perhaps I'm blind. I can find it for EX switches but not MX gear.

Does anyone have a link to official Juniper doco that states the max route table for the MX80?
--
Giuliano Medalha
2013-09-23 01:37:47 UTC
Luca,

The information we have here is:

~4M RIB - (1 BGP session test only)
~1M FIB

We have some cases here with 6 full routing tables from 6 different carriers.

Other cases include more than 60 sessions with 4 routes each.

The number of sessions itself can change these numbers too.

Regards,

Giuliano
Giuliano Cardozo Medalha
Systems Engineer
+55 (17) 3011-3811
+55 (17) 8112-5394
JUNIPER J-PARTNER ELITE
giuliano at wztech.com.br
http://www.wztech.com.br/




_______________________________________________
juniper-nsp mailing list juniper-nsp at puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
Saku Ytti
2013-09-23 08:51:54 UTC
On (2013-09-23 11:24 +1000), Luca Salvatore wrote:

Hi Luca,
Post by Luca Salvatore
I can't seem to find how many IPv4/IPv6 routes the MX80 range can support. I know it can do the full BGP table, but the info does not seem to be anywhere on juniper.net.
I'm sure it used to be there. Perhaps I'm blind. I can find it for EX switches but not MX gear.
The HW has exactly the same 256MB RLDRAM as the rest of Juniper high-end gear, including T4k. I.e. the FIB is identical in all MX and new Trio-generation T gear.

You can see how that memory is spread and populated via 'start shell pfe network tfeb0' and 'show jnh 0 pool ...'
Post by Luca Salvatore
Does anyone have a link to official Juniper doco that states the max route table for the MX80?
Any number would be only indicative/marketing. Much the same as if you'd ask how many routes your laptop can handle: it would depend on what else is using the memory.
I'd say 1M is a reasonable figure; maybe in some environments you could push it to 1.5M, but you're certainly not going to see 2M.

I worry about this bit, as I already have boxes with >800k IPv4 prefixes.
--
++ytti
David Miller
2013-09-23 20:42:56 UTC
Post by Saku Ytti
I worry about this bit, as I already have boxes with >800k IPv4 prefixes.
800k prefixes in the RIB or in the FIB?
I have been told repeatedly by Juniper that the "limits" of the MX-80 are:

MX80 FIB Capacity IPv4: 1Mil
MX80 FIB Capacity IPv6: 512k

MX80 RIB Capacity IPv4: 4Mil
MX80 RIB Capacity IPv6: 3Mil

These are soft limits - i.e. there is nothing within Junos that rejects
the 1,000,001st IPv4 route. The boxes will work beyond these
limits (or recommendations), but there is much hand-waving if you ask
how far beyond.

The MX-80 can easily handle a full BGP table today. The DFZ is
currently <470k. Depending on your setup and/or your projections of DFZ
growth, the MX-80 will be able to handle a full BGP table for some time
to come.
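David's projection can be sketched with simple arithmetic. Only the ~470k DFZ size and the 1M FIB figure come from the posts above; the linear growth rate is an illustrative assumption, not a figure from the thread:

```python
# Rough back-of-envelope projection of when the DFZ would reach the
# quoted 1M-route soft FIB limit. The growth rate (~50k routes/year)
# is an assumption for illustration, not a Juniper or list figure.
def years_until_limit(current_routes, limit, growth_per_year):
    """Linear projection: years until current_routes reaches limit."""
    if current_routes >= limit:
        return 0.0
    return (limit - current_routes) / growth_per_year

# DFZ ~470k today (per the thread), 1M soft FIB limit, assumed growth.
print(round(years_until_limit(470_000, 1_000_000, 50_000), 1))  # -> 10.6
```

Under those assumptions the box has roughly a decade of headroom, which matches the "for some time to come" conclusion; a faster growth assumption shortens it proportionally.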

If your setup and/or your projections of DFZ growth will have your FIB
hitting >1Mill routes within the estimated lifetime of the box, then a
bigger MX with RE-S-1800x4(s) is decidedly "bigger" (in performance,
route capacity, and price).

-DMM

Saku Ytti
2013-09-23 20:51:44 UTC
Post by David Miller
If your setup and/or your projections of DFZ growth will have your FIB
hitting >1Mill routes within the estimated lifetime of the box, then a
bigger MX with RE-S-1800x4(s) is decidedly "bigger" (in performance,
route capacity, and price).
Alas, a bigger MX won't do much/anything to help with FIB scaling. You're
inherently limited by the LU chip memory, which does not grow when you upgrade
to a bigger box.
Juniper is somewhat behind the curve in FIB scale compared to the competition.
--
++ytti
Krasimir Avramski
2013-09-24 05:49:23 UTC
Ichip (DPC) has 16-32MB RLDRAM and holds 1M routes in FIB, so 256MB on Trio
is a huge increment - it is in the realm of ~5M routes (since Trio uses
dynamic memory allocation, filling up with routes only) and more than 1M
labeled-prefix routes.


Best Regards,
Krasi
Saku Ytti
2013-09-24 06:40:33 UTC
Post by Krasimir Avramski
Ichip (DPC) has 16-32MB RLDRAM and holds 1M routes in FIB, so 256MB on Trio
is a huge increment - it is in the realm of ~5M routes (since Trio uses
dynamic memory allocation, filling up with routes only) and more than 1M labeled prefixes
I don't think this is apples to apples. The 16MB RLDRAM is just for jtree,
while the 256MB in Trio holds a lot more than just the ktree, and some elements
are sprayed across the 4*64MB devices which make up the 256MB RLDRAM.

I'd be quite comfortable with 2M FIB throughout the lifecycle of the current
generation, but I've never heard JNPR quote anything near this for Trio scale.

I'm also not sure I understand why it matters whether a route is labeled or not:
if each route has a unique label, then you're wasting NH space, but if you
are doing next-hop-self and advertising only loopback labels, then I don't
think a labeled route should be more expensive.
(NH lives in RLDRAM in Trio as well, and I believe it specifically is sprayed
across all four RLDRAM devices.)
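Krasimir's scaling argument and Saku's caveat can be made concrete with some rough arithmetic. This sketch uses only the figures quoted in the thread; the per-route density and the naive scaling are illustrative assumptions, not Juniper numbers:

```python
# Back-of-envelope bytes-per-route arithmetic based on the figures
# quoted in this thread (illustrative only; the real Trio memory layout
# is more complex, since the 256MB also holds next-hops, counters,
# firewall structures, etc.).
MB = 1024 * 1024

# Ichip/DPC: a ~16MB jtree bank is quoted as holding ~1M FIB routes.
ichip_bytes_per_route = 16 * MB / 1_000_000   # roughly 16-17 bytes/route

# Naive scaling: if Trio could spend all 256MB RLDRAM at that density...
naive_trio_routes = 256 * MB / ichip_bytes_per_route

# ...you'd get ~16M routes, far above the thread's 1M-5M estimates,
# which is exactly the caveat above: much of the 256MB is not route storage.
print(round(naive_trio_routes / 1_000_000))  # -> 16 (million)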
--
++ytti
Krasimir Avramski
2013-09-24 07:21:10 UTC
Agreed.. there are other elements like counters, filters, descriptors, etc., but it is
dynamic allocation, which isn't the case with Ichip - a 16MB bank for
firewalls, 16MB for jtree with fixed regions. Although there is a workaround for Ichip
(http://www.juniper.net/techpubs/en_US/junos10.4/topics/task/configuration/junos-software-jtree-memory-repartitioning.html),
I am calculating the worst-case scenario, with unique inner VPN label
usage with composite next-hops.


Best Regards,
Krasi
Nitzan Tzelniker
2013-09-24 14:18:37 UTC
Hi,

The problem with the MX80 is not the FIB size but the slow RE.
The time it takes to receive a full routing table is long, and putting it into
the FIB is even worse.

Nitzan
Krasimir Avramski
2013-09-24 14:40:08 UTC
We are aware the PPC on the MX80 is slower than the Intel REs... but the original
question was about scalability, not performance/convergence.
Take a look at the newer MX104 for more RE performance.

Krasi
Amos Rosenboim
2013-09-24 14:33:31 UTC
To add to Nitzan's comment (we work together):
When everything is stable, all is good.
But bounce a full-table BGP session, and then bounce an IGP adjacency, and you are in a lot of trouble.
This seems to be a combination of the (in)famous Junos software issue described extensively by RAS, and a processor so slow that it makes the software issue appear in much smaller environments than those RAS describes.
Having said all this, run it with a few thousand routes and it's a beast.
I think this box really changed the game for many small ISPs.

Cheers

Amos

Paul Stewart
2013-09-23 11:12:50 UTC
This is the busiest MX80 we have in production - the RE shows 61% memory used,
and the box handles this quite well, with no concerns currently... Traffic-wise,
it's doing about 3Gb/s in its role....

Paul


inet.0: 466698 destinations, 645933 routes (466680 active, 16 holddown, 12
hidden)
Direct: 22 routes, 22 active
Local: 21 routes, 21 active
OSPF: 2490 routes, 2489 active
BGP: 643365 routes, 464138 active
Static: 4 routes, 4 active
IGMP: 1 routes, 1 active
Aggregate: 11 routes, 5 active
RSVP: 19 routes, 0 active

inet.3: 19 destinations, 19 routes (19 active, 0 holddown, 0 hidden)
RSVP: 19 routes, 19 active

mpls.0: 17 destinations, 17 routes (17 active, 0 holddown, 0 hidden)
MPLS: 3 routes, 3 active
RSVP: 6 routes, 6 active
L2VPN: 4 routes, 4 active
VPLS: 4 routes, 4 active

inet6.0: 14397 destinations, 19148 routes (14397 active, 0 holddown, 10
hidden)
Direct: 18 routes, 11 active
Local: 16 routes, 16 active
OSPF3: 79 routes, 79 active
BGP: 19035 routes, 14291 active

bgp.l2vpn.0: 4 destinations, 4 routes (4 active, 0 holddown, 0 hidden)
BGP: 4 routes, 4 active
Paul Stewart
2013-09-24 14:50:41 UTC
Not to hijack this thread, but does anyone know *real-world* numbers yet
on the MX104 RE? I know it has more memory and is supposed to be "faster",
but I have no idea yet how much faster it really is.

We don't have any in our network yet, but we're anxious to deploy one at the end of
the year...

Thanks for any input...

Paul
Luca Salvatore
2013-09-24 22:29:24 UTC
This concerns me a little. I'm about to take a full table on an MX5.
Is it only an issue when the adjacency is lost and we need to receive the
table again, or will performance of the entire box be affected?
--
Luca





Saku Ytti
2013-09-25 06:06:39 UTC
Post by Luca Salvatore
This concerns me a little. I'm about to take a full table on an MX5.
Is it only an issue when the adjacency is lost and we need to receive the
table again, or will performance of the entire box be affected?
For what it's worth, we're running a metric crapton of MX80s which take 540k IPv4
routes, 15k IPv6 routes and some VPNv4 routes, from 1ks to 100ks.

They mostly work in our environment, and RPD or the KRT queue does not get too
angry. However, we recently added MD5 to the IPv4 RR sessions, and while
flapping the RR session we might get an RPD slip and have unrelated customer-facing
VPNv4 peers flap. This issue only triggered on boxes which had more
than a few customer-facing BGP sessions; JTAC is on the case, but I'm not overly
optimistic.

I'd really hope that instead of just putting out fires, Juniper takes a deeper
architectural review of RPD, rips out the protocol code and reimplements it under
a newer, more scalable, more robust design. And maybe does a phased migration, where
customers have the option to run old JunOS or new JunOS, like CSCO has done with
the Catalyst and GSR: allowing the new platform to get exposure before it has
feature parity with the old one.
How many good developers would they need for that? A committed team of 5?
--
++ytti
Amos Rosenboim
2013-09-25 07:58:46 UTC
What I described only happens in convergence scenarios.

Amos
