Discussion:
[j-nsp] MX104 Limitations
Colton Conor
2015-06-24 13:08:49 UTC
Permalink
We are considering upgrading to a Juniper MX104, but another vendor (not
Juniper) pointed out the following limitations about the MX104 in their
comparison. I am wondering how much of it is actually true about the MX104?
And if true, is it really that big of a deal?:

1. No fabric redundancy due to fabric-less design. There is no switch
fabric on the MX104, but there is on the rest of the MX series. Not sure if
this is a bad or good thing?

2. The Chassis fixed ports are not on an FRU. If a fixed port fails,
or if data path fails, entire chassis requires replacement.

3. There is no mention of software support for MACSec on the MX104,
it appears to be a hardware capability only at this point in time with
software support potentially coming at a later time.

4. No IX chipsets for the 10G uplinks (i.e. no packet
pre-classification, the IX chip is responsible for this function as well as
GE to 10GE i/f adaptation)

5. QX Complex supports HQoS on MICs only, not on the integrated 4
10GE ports on the PMC. I.e. no HQoS support on the 10GE uplinks

6. Total amount of traffic that can be handled via HQoS is restricted
to 24Gbps. Not all traffic flows can be shaped/policed via HQoS due to a
throughput restriction between the MQ and the QX. Note that the MQ can
still however perform basic port based policing/shaping on any flows. HQoS
support on the 4 installed MICs can only be enabled via a separate license.
Total of 128k queues on the chassis

7. 1588 TC is not supported across the chassis as the current set of
MICs do not support edge time stamping. Edge timestamping is only
supported on the integrated 10G ports. MX104 does not presently list 1588
TC as being supported.

8. BFD can be supported natively in the TRIO chipset. On the MX104,
it is not supported in hardware today. BFD is run from the single core
P2020 MPC.

9. TRIO based cards do not presently support PBB; thus it is
presently not supported on the MX104. PBB is only supported on older EZChip
based MX hardware. Juniper still needs a business case to push this forward

10. MX104 operating temperature: -40 to 65C, but MX5, MX10, MX40, MX80
and MX80-48T are all 0-40C all are TRIO based. Seems odd that the MX104
would support a different temperature range. There are only 3 temperature
hardened MICs for this chassis on the datasheet: (1) 16 x T1/E1 with CE,
(2) 4 x chOC3/STM1 & 1 x chOC12/STM4 with CE, (3) 20 x 10/100/1000 Base-T.

11. Air-flow side-to-side; there is no option for front-to-back cooling
with this chassis.

12. Routing Engine and MPC lack a built-in Ethernet sync port. If the
chassis is deployed without any GE ports, getting SyncE or 1588 out of the
chassis via an Ethernet port will be a problem. SR-a4/-a8 have a built-in
sync connector on the CPM to serve this purpose explicitly.
Raphael Mazelier
2015-06-24 15:36:25 UTC
Permalink
Hello,

I don't have the full knowledge to discuss all of the points above, but the
real question is where you are coming from (an MX80?) and why you need an
upgrade to (say) an MX104.

And from what I know:

1. The MX104, like the MX80, has no SCB, true. They are integrated routers.
So no redundancy on this point.

2. Yes.

8. If true, it could be a real problem. BFD can be a huge resource consumer.

10. The MX104 is a hardened router, so the chassis can operate over a
larger temperature range than the MX80. But this is not the sole use case
of the MX104, so if you want to use it in a harsh environment you have to
buy the right cards. Seems logical to me.

11. What's the point, if the chassis is correctly cooled?

The other points are really for special use cases. If you need these kinds
of features you have to carefully test any router you want to use.

Regards,
Saku Ytti
2015-06-24 18:22:35 UTC
Permalink
On (2015-06-24 08:08 -0500), Colton Conor wrote:

Hey,
Post by Colton Conor
1. No fabric redundancy due to fabric-less design. There is no switch
fabric on the MX104, but there is on the rest of the MX series. Not sure if
this is a bad or good thing?
I'd say categorically a good thing. Less latency, fewer places for congestion,
fewer things to break.
Post by Colton Conor
4. No IX chipsets for the 10G uplinks (i.e. no packet
pre-classification, the IX chip is responsible for this function as well as
GE to 10GE i/f adaptation)
4x10GE MICs do not have an IX chip, as they are driven directly by the MQ's
built-in PHY; the same goes for MX80 chassis ports.
2x10GE MICs, on the other hand, do not use the MQ PHY and thus need to have an IX chip.

The MX104 is special, because the MQ PHY cannot drive SFP+, so the chassis
ports do not connect directly to the MQ as you'd expect (no one would expect
an IX here, as it never happened before):

4x10GE MIC/MX80 => MQ<->physical-port
2x10GE MIC => MQ<->IX<->physical-port
MX104 chassis => MQ<->IX<->bcm84728<->physical-port
Post by Colton Conor
5. QX Complex supports HQoS on MICs only, not on the integrated 4
10GE ports on the PMC. I.e. no HQoS support on the 10GE uplinks
This is a capacity issue; the QX can't really do much more than maybe 20Gbps
bidir, so it's a marketing decision not to allow QX configuration for the
chassis ports, to avoid running into oversubscription. This is the same as in the MX80.
However, Juniper does support HQoS on MQ ports, just not per-VLAN, so you can
connect a subrate service to a chassis port and run queues inside that subrate
shaper.
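
For the chassis ports that means something like this (rough sketch from
memory, untested; the interface name, rate and scheduler names are just
examples):

class-of-service {
    interfaces {
        xe-0/0/3 {
            shaping-rate 3g;          /* port-level subrate on a chassis port */
            scheduler-map SUBRATE-SM; /* queues live inside that shaper */
        }
    }
    scheduler-maps {
        SUBRATE-SM {
            forwarding-class best-effort scheduler BE-SCHED;
            forwarding-class expedited-forwarding scheduler EF-SCHED;
        }
    }
    schedulers {
        BE-SCHED { transmit-rate percent 80; }
        EF-SCHED { transmit-rate percent 20; priority strict-high; }
    }
}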
Post by Colton Conor
8. BFD can be supported natively in the TRIO chipset. On the MX104,
it is not supported in hardware today. BFD is run from the single core
P2020 MPC.
I'm 99% certain that inline BFD is supported.
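E.g. a plain single-hop session like this (sketch only; the IGP, interface
and timers are just examples) should run inline/distributed rather than on
the RE:

protocols {
    ospf {
        area 0.0.0.0 {
            interface xe-2/0/0.0 {
                bfd-liveness-detection {
                    minimum-interval 150; /* ms; aggressive timers are the whole point of inline */
                    multiplier 3;
                }
            }
        }
    }
}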
Post by Colton Conor
10. MX104 operating temperature: -40 to 65C, but MX5, MX10, MX40, MX80
and MX80-48T are all 0-40C all are TRIO based. Seems odd that the MX104
would support a different temperature range. There are only 3 temperature
hardened MICs for this chassis on the datasheet: (1) 16 x T1/E1 with CE,
(2) 4 x chOC3/STM1 & 1 x chOC12/STM4 with CE, (3) 20 x 10/100/1000 Base-T.
Target market was different. MX80 was going to sit in a DC, which is properly
cooled and has deep racks. MX104 was going to sit in a telco pop, which has
shallow racks and dubious cooling.
--
++ytti
Phil Rosenthal
2015-06-24 13:58:54 UTC
Permalink
Comments inline below.
Post by Colton Conor
We are considering upgrading to a Juniper MX104, but another vendor (not
Juniper) pointed out the following limitations about the MX104 in their
comparison. I am wondering how much of it is actually true about the MX104?
None of these are showstoppers for everyone, but depending on your requirements, some of these might or might not be a problem.

In essentially all of these, there is a question of "Well, what are you comparing it against?", as most things in that size/price range will have compromises as well.

Obviously this list came from someone with a biased viewpoint of nothing but problems with Juniper -- a competitor. Consider that there are also positives.
For example, in software, most people here would rank JunOS > Cisco IOS > Brocade > Arista > Force10.

From question 12, it seems that you are considering Alcatel Lucent 7750 as your alternative -- Unfortunately you won't find nearly as many people with ALU experience, so it will be a bit harder to get fair commentary comparing the two. It might also be harder to find engineers to manage them.
Post by Colton Conor
1. No fabric redundancy due to fabric-less design. There is no switch
fabric on the MX104, but there is on the rest of the MX series. Not sure if
this is a bad or good thing?
The Switch Fabric is itself very reliable, and not the most likely point of failure. In fact, in all of my years, I have not had a switch fabric fail on any switch/router from any vendor.
I consider a redundant switch fabric "nice to have".
For us, the MX480 makes much more sense than the MX104, and the MX480 has a redundant SF.
Post by Colton Conor
2. The Chassis fixed ports are not on an FRU. If a fixed port fails,
or if data path fails, entire chassis requires replacement.
True. That said, I have not had a failure on any Juniper MX 10G ports in production.
The only failures we have had are a few RE SSD failures, and an undetermined MPC failure that was causing occasional resets.

Our experience in the past with Cisco and Brocade was of much higher failure rates on fixed Ethernet ports.
Post by Colton Conor
3. There is no mention of software support for MACSec on the MX104,
it appears to be a hardware capability only at this point in time with
software support potentially coming at a later time.
We do not use this.
Post by Colton Conor
4. No IX chipsets for the 10G uplinks (i.e. no packet
pre-classification, the IX chip is responsible for this function as well as
GE to 10GE i/f adaptation)
The pre-classification may or may not be an issue for you.

As for GE to 10GE adaptation, I think you would be doing something very wrong if your goal was to connect gig-e's to these ports.
Post by Colton Conor
5. QX Complex supports HQoS on MICs only, not on the integrated 4
10GE ports on the PMC. I.e. no HQoS support on the 10GE uplinks
True. May or may not be an issue for you. There is some QoS capability on the built-in ports, but it is very limited. The 16x10G and 32x10G MPC cards on the MX240/480/960 have somewhat more QoS capability than these. HQoS is only on the -Q cards, which are much more expensive, on either the MX104 or the bigger MX chassis.
Post by Colton Conor
6. Total amount of traffic that can be handled via HQoS is restricted
to 24Gbps. Not all traffic flows can be shaped/policed via HQoS due to a
throughput restriction between the MQ and the QX. Note that the MQ can
still however perform basic port based policing/shaping on any flows. HQoS
support on the 4 installed MICs can only be enabled via a separate license.
Total of 128k queues on the chassis
In most environments, there are a limited number of ports where HQoS is needed, so this may or may not be an issue.
Post by Colton Conor
7. 1588 TC is not supported across the chassis as the current set of
MICs do not support edge time stamping. Edge timestamping is only
supported on the integrated 10G ports. MX104 does not presently list 1588
TC as being supported.
We do not use TC, but more comments on 12 at the bottom.
Post by Colton Conor
8. BFD can be supported natively in the TRIO chipset. On the MX104,
it is not supported in hardware today. BFD is run from the single core
P2020 MPC.
9. TRIO based cards do not presently support PBB; thus it is
presently not supported on the MX104. PBB is only supported on older EZChip
based MX hardware. Juniper still needs a business case to push this forward
No comments on these 2.
Post by Colton Conor
10. MX104 operating temperature: -40 to 65C, but MX5, MX10, MX40, MX80
and MX80-48T are all 0-40C all are TRIO based. Seems odd that the MX104
would support a different temperature range. There are only 3 temperature
hardened MICs for this chassis on the datasheet: (1) 16 x T1/E1 with CE,
(2) 4 x chOC3/STM1 & 1 x chOC12/STM4 with CE, (3) 20 x 10/100/1000 Base-T.
The MX104 is essentially a next-generation MX80. One of the major design goals was temperature hardening, enabling it for use in places like cell towers. For this use case, the port options make a lot of sense.

In a datacenter environment, if you are at 40C, you've got other problems.
Post by Colton Conor
11. Air-flow side-to-side; there is no option for front-to-back cooling
with this chassis.
This is a major pet peeve on many platforms. It seems that the only places to have proper airflow are high-end (10G/40G) ToR switches and large chassis.

That said, as long as your datacenter is not running at very high temperatures, this should not be an issue.

Since question 12 brought up SR-A[48] ...
https://www.alcatel-lucent.com/sites/live/files/7750sr_a8_f_right.gif

That looks like side-to-side airflow to me.
Post by Colton Conor
12. Routing Engine and MPC lack a built-in Ethernet sync port. If the
chassis is deployed without any GE ports, getting SyncE or 1588 out of the
chassis via an Ethernet port will be a problem. SR-a4/-a8 have a built-in
sync connector on the CPM to serve this purpose explicitly.
You should probably read this page:
http://www.juniper.net/techpubs/en_US/junos13.3/topics/concept/chassis-external-clock-synchronization-interface-understanding-mx.html

We do not use TC/SyncE, but SCB-E/SCB-E2 on MX240/480/960 have built-in TC ports if you would like something with "dedicated" external clock interfaces.
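
Since we don't run it, treat this as an unverified sketch, but per that doc, recovering clock off a line port (SyncE) is roughly:

chassis {
    synchronization {
        network-option option-1;  /* G.813 option 1; option-2 in the T1 world */
        source {
            interfaces ge-2/0/0 { /* port to recover clock from; name is an example */
                priority 1;
            }
        }
    }
}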
Mark Tinka
2015-06-25 10:59:31 UTC
Permalink
Post by Phil Rosenthal
Obviously this list came from someone with a biased viewpoint of
nothing but problems with Juniper -- a competitor. Consider that there
are also positives. For example, in software, most people here would
rank JunOS > Cisco IOS > Brocade > Arista > Force10. From question 12,
it seems that you are considering Alcatel Lucent 7750 as your
alternative -- Unfortunately you won't find nearly as many people with
ALU experience, so it will be a bit harder to get fair commentary
comparing the two. It might also be harder to find engineers to manage
them.
And to be fair, you are not going to find competitive analysis between
vendors that is unbiased.

I know of some competitive analysts who are fair about their
competitors' implementations, to the extent that they will appreciate
how the competition solved a particular problem, and signal their
Product teams to do the same or better.

Otherwise, asking one vendor to speak to another vendor's solution is
not a useful way to evaluate your options, especially if you take their
word seriously.

Successful vendors have found that developing various use-cases - within
reason - in the market is what gets you share. My ALU-foo is quite
limited, but speaking for Cisco and Juniper, each of these vendors has a
flavor for pretty much everyone:

- Line cards with high-end QoS.
- Line cards with low-end QoS.
- Line cards with fixed ports.
- Line cards with combo ports.
- Line cards that are modular.
- Line cards that draw low power.
- Line cards that draw high power.
- Line cards that are over-subscribed.
- Line cards that run at line rate.
- Line cards with old optics (GBIC, X2, XFP, CFP).
- Line cards with new optics (SFP, SFP+, CFP2).
- Line cards with low memory.
- Line cards with high memory.
- Line cards that are unlimited re: features.
- Line cards that are license-based.
- etc. - you get the picture.

It becomes a case of satisfying your particular use-case.

For example, a lack of H-QoS on the MX80 or MX104 is not a show-stopper
for us if we are using it as a peering/border router. As an edge router
where customers may require complex QoS architectures, you get what you
pay for. And I don't think the vendors should be castigated for that -
you can't blame Mercedes for not delivering fast laps around a grand
prix track when driving one of their 18-wheeler trucks... use-case.

So just focus on what your goals are. Evaluate those requirements with
Juniper and other competitors, and let them quote you the right solution.

Competitive analysis is counter-productive unless you are very intimate
with the workings of all the vendor equipment you're evaluating, enough
to challenge a lot of the bias you will hear. Best to get all the data
and then make your own choice.

Mark.

Saku Ytti
2015-06-25 11:10:16 UTC
Permalink
On (2015-06-25 12:59 +0200), Mark Tinka wrote:

Hey Mark,
Post by Mark Tinka
For example, a lack of H-QoS on the MX80 or MX104 is not a show-stopper
for us if we are using it as a peering/border router. As an edge router
MX80 and MX104 fully support HQoS. The only limitation is that the QX can only
be used for MIC ports, so you cannot do per-VLAN subrate services on chassis ports.
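
On a MIC port it is the usual Trio per-unit config (sketch; the names, rates
and VLAN are examples, and on MX80/MX104 this needs the HQoS license):

interfaces {
    ge-1/0/0 {
        hierarchical-scheduler; /* turn on HQoS for this MIC port */
        vlan-tagging;
        unit 100 {
            vlan-id 100;
        }
    }
}
class-of-service {
    traffic-control-profiles {
        TCP-50M {
            shaping-rate 50m;         /* per-VLAN subrate */
            scheduler-map SUBRATE-SM; /* queues inside the 50M shaper; map/schedulers defined as usual */
        }
    }
    interfaces {
        ge-1/0/0 {
            unit 100 {
                output-traffic-control-profile TCP-50M;
            }
        }
    }
}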

It is absolutely the same hardware as MPC on bigger JNPR, with same features
and limitations.
The only difference is that the MPC 'wastes' 50% of its capacity on fabric, and
the MX104/MX80 spend this capacity on additional ports. (In the MX80, where the
fabric should sit, you have MIC cards.)
--
++ytti
Olivier Benghozi
2015-06-25 11:14:13 UTC
Permalink
You meant: In MX80/104, where fabric should sit, you have 4 integrated 10GE ports.
Post by Saku Ytti
The only difference is that the MPC 'wastes' 50% of its capacity on fabric, and
the MX104/MX80 spend this capacity on additional ports. (In the MX80, where the
fabric should sit, you have MIC cards.)
Saku Ytti
2015-06-25 13:35:48 UTC
Permalink
On (2015-06-25 13:14 +0200), Olivier Benghozi wrote:

Hey Olivier,
Post by Olivier Benghozi
You meant: In MX80/104, where fabric should sit, you have 4 integrated 10GE ports.
This is a common misconception. People think the chassis ports are magical,
because they don't support QX QoS. But the chassis ports are actually on the
WAN side of the MQ, because that is the only side where you have a 4x10GE PHY.

The MICs sit on the fabric side of the MQ.
--
++ytti
Olivier Benghozi
2015-06-25 16:07:45 UTC
Permalink
Hi Saku,

Well, it's what I can read in "Juniper MX Series", O'Reilly, by Harry Reynolds & Douglas Richard Hanks Jr.
Chapter 1, section MX80: "in lieu of a switch fabric, each MX80 comes with four fixed 10GE ports."

Olivier
Saku Ytti
2015-06-25 17:18:26 UTC
Permalink
That comment does not directly state that it's on the fabric side; the
implication can be made, but it's not true. There is no external PHY,
it's exactly like the 4x10GE MIC, hence it must connect on the WAN side.
--
++ytti
Mark Tinka
2015-06-25 11:15:43 UTC
Permalink
Post by Saku Ytti
MX80 and MX104 fully support HQoS. The only limitation is that the QX can only
be used for MIC ports, so you cannot do per-VLAN subrate services on chassis ports.
Sorry I wasn't clear - I meant this as an example, not literally...

Mark.
Adam Vitkovsky
2015-07-09 13:27:49 UTC
Permalink
Interesting facts.
Now the Juniper MX104's win over the Cisco ASR903 (max prefix limit) is not that clear anymore.

Since the chassis is 80Gbps in total I'd assume around 40Gbps towards aggregation and 40Gbps to backbone.

Also if BFD is really not offloaded into HW it would be a bummer on such a slow CPU.

With regards to 1588 I'd like to know if or how anyone deployed this on MPLS backbone if the 4G is in a VRF???
In other words 1588 runs in GRT/inet.0, so how do you then relay the precise per-hop delay/jitter info to a 4G cell which sits in a VRF?
Never mind that the cell doesn't really need this precision and running 1588 with the server in 4G VRF across the 1588-blind MPLS core is enough.

It seems Juniper is still waiting for a big customer that is not willing to wait for BGP to converge millions of MAC addresses if DF PE fails (PBB-EVPN)



adam
Mark Tinka
2015-07-09 13:35:43 UTC
Permalink
Post by Adam Vitkovsky
Interesting facts.
Now the Juniper MX104's win over the Cisco ASR903 (max prefix limit) is not that clear anymore.
Since the chassis is 80Gbps in total I'd assume around 40Gbps towards aggregation and 40Gbps to backbone.
Also if BFD is really not offloaded into HW it would be a bummer on such a slow CPU.
With regards to 1588 I'd like to know if or how anyone deployed this on MPLS backbone if the 4G is in a VRF???
In other words 1588 runs in GRT/inet.0, so how do you then relay the precise per-hop delay/jitter info to a 4G cell which sits in a VRF?
Never mind that the cell doesn't really need this precision and running 1588 with the server in 4G VRF across the 1588-blind MPLS core is enough.
It seems Juniper is still waiting for a big customer that is not willing to wait for BGP to converge millions of MAC addresses if DF PE fails (PBB-EVPN)
When my MX80's and ASR9001's run out of steam (we use these for peering
and transit), I'll look at the MX104, ASR1006 and ASR9904 as potential
replacements.

I think the MX104 is good enough for peering/transit. I also think it's
good enough for low-speed edge routing, e.g., non-Ethernet.

I'd likely never deploy an MX104 in places where the MX480/960 or the
larger ASR9900 routers are better-suited, i.e., major Ethernet aggregation.

Mark.

Adam Vitkovsky
2015-07-09 14:34:24 UTC
Permalink
Hi Mark,
-----Original Message-----
Sent: 09 July 2015 14:36
Subject: Re: [j-nsp] MX104 Limitations
Post by Adam Vitkovsky
Interesting facts.
Now the Juniper MX104 win over Cisco ASR903 (max prefix limit) is not that
clear anymore.
Post by Adam Vitkovsky
Since the chassis is 80Gbps in total I'd assume around 40Gbps towards
aggregation and 40Gbps to backbone.
Post by Adam Vitkovsky
Also if BFD is really not offloaded into HW it would be a bummer on such a
slow CPU.
Post by Adam Vitkovsky
With regards to 1588 I'd like to know if or how anyone deployed this on
MPLS backbone if the 4G is in a VRF???
Post by Adam Vitkovsky
In other words 1588 runs in GRT/inet.0, so how do you then relay the precise
per-hop delay/jitter info to a 4G cell which sits in a VRF?
Post by Adam Vitkovsky
Never mind that the cell doesn't really need this precision and running 1588
with the server in 4G VRF across the 1588-blind MPLS core is enough.
Post by Adam Vitkovsky
It seems Juniper is still waiting for a big customer that is not willing to wait
for BGP to converge millions of MAC addresses if DF PE fails (PBB-EVPN)
When my MX80's and ASR9001's run out of steam (we use these for peering
and transit), I'll look at the MX104, ASR1006 and ASR9904 as potential
replacements.
I'd like to see the ASR9K run out of steam :)
Still haven't seen the preso on 9904 internals but I'm not quite convinced. Do I read it right that it's just basically a 9006-2 with some RU savings?
I think the MX104 is good enough for peering/transit. I also think it's
good enough for low-speed edge routing, e.g., non-Ethernet.
But MX104 can't hold the full internet routing table in forwarding-table so it's good only for peering or can it indeed?
I'd likely never deploy an MX104 in places where the MX480/960 or the
larger ASR9900 routers are better-suited, i.e., major Ethernet aggregation.
Mark.
I think the MX104 can be a nice small-town PE to aggregate the town ring, so no competition for the MX480/960

adam

Mark Tinka
2015-07-09 14:39:23 UTC
Permalink
Post by Adam Vitkovsky
Still haven't seen the preso on 9904 internals but I'm not quite convinced. Do I read it right it's just basically 9006-2 with some RU savings?
The ASR99xx are simply the same architecture as the ASR9000's, but the
difference is the fabric is faster.

So there is a fixed limit to how much traffic the earlier generation can
handle, while the new chassis have more capacity to stick around longer
in time, as line cards get faster. Think of the ASR99xx as ASR9000-E's,
if you may :-).

If it were me, I'd go ASR99xx moving forward. I'd only consider
ASR9000's if I have restrictive power budgets, or think I'll never need
to go beyond what they can do today traffic-wise.
Post by Adam Vitkovsky
But MX104 can't hold the full internet routing table in forwarding-table so it's good only for peering or can it indeed?
Can't it? I've assumed it can. Haven't actually deployed one yet.
Post by Adam Vitkovsky
I think MX104 can be a nice small town PE to aggregate the town ring so no competition to MX480/960
I'll take the ASR920 for that, Thank You Very Much :-).

Mark.
s***@nethelp.no
2015-07-09 14:44:21 UTC
Permalink
Post by Mark Tinka
Post by Adam Vitkovsky
But MX104 can't hold the full internet routing table in forwarding-table so it's good only for peering or can it indeed?
Can't it? I've assumed it can. Haven't actually deployed one yet.
It sure can. Last info I got from Juniper: 1.8M IPv4 or IPv6 prefixes
(or a combination of the two).

We have quite a few MX104s in production with a full table, plus
assorted L3VPNs.

Steinar Haug, Nethelp consulting, ***@nethelp.no
Adam Vitkovsky
2015-07-09 15:05:58 UTC
Permalink
Sent: 09 July 2015 15:44
Post by Mark Tinka
Post by Adam Vitkovsky
But MX104 can't hold the full internet routing table in forwarding-table so
it's good only for peering or can it indeed?
Post by Mark Tinka
Can't it? I've assumed it can. Haven't actually deployed one yet.
It sure can. Last info I got from Juniper: 1.8M IPv4 or IPv6 prefixes
(or a combination of the two).
We have quite a few MX104s in production with a full table, plus
assorted L3VPNs.
Alright I'm gonna remember this from now on :)
With regards to the slow CPU - the ASR903 unfortunately suffers from the same problem

adam

Scott Granados
2015-07-09 15:43:32 UTC
Permalink
I have set up the 104 in overseas POPs and taken several views without issue.
At a minimum 2 full-table feeds + some peering. Shouldn't be a problem.
Post by s***@nethelp.no
Post by Mark Tinka
Post by Adam Vitkovsky
But MX104 can't hold the full internet routing table in forwarding-table so it's good only for peering or can it indeed?
Can't it? I've assumed it can. Haven't actually deployed one yet.
It sure can. Last info I got from Juniper: 1.8M IPv4 or IPv6 prefixes
(or a combination of the two).
We have quite a few MX104s in production with a full table, plus
assorted L3VPNs.
Saku Ytti
2015-07-09 15:52:07 UTC
Permalink
Post by s***@nethelp.no
Post by Mark Tinka
Post by Adam Vitkovsky
But MX104 can't hold the full internet routing table in forwarding-table so it's good only for peering or can it indeed?
Can't it? I've assumed it can. Haven't actually deployed one yet.
It sure can. Last info I got from Juniper: 1.8M IPv4 or IPv6 prefixes
(or a combination of the two).
ACK; MX80, MX104, MPC1, MPC2, 16x10GE etc. all have exactly the same Trio chipset,
the same 256MB RLDRAM, and consequently the same hardware scale.
--
++ytti
Kevin Day
2015-07-09 14:45:38 UTC
Permalink
Post by Mark Tinka
Post by Adam Vitkovsky
But MX104 can't hold the full internet routing table in forwarding-table so it's good only for peering or can it indeed?
Can't it? I've assumed it can. Haven't actually deployed one yet.
We have an MX104 in use taking several full v4 and v6 tables and it’s working fine.
show route summary
Router ID:

inet.0: 541008 destinations, 541019 routes (541007 active, 0 holddown, 1 hidden)
Direct: 23 routes, 22 active
Local: 27 routes, 27 active
BGP: 540942 routes, 540931 active
Static: 27 routes, 27 active

inet6.0: 22676 destinations, 22685 routes (22675 active, 0 holddown, 1 hidden)
Direct: 20 routes, 12 active
Local: 25 routes, 25 active
BGP: 22631 routes, 22629 active
Static: 9 routes, 9 active



My only complaints about the MX104 are:

1) It’s 3.5U high, making rack planning a little weird, and requiring me to buy a hard to find half-U blank panel

2) It uses unusual power connectors on its power supplies, so you have to plan to buy special power cords just for this.

3) The Routing Engine CPU is a little slow for commits


We’re just treating it like an MX240/480/960 that has a pair of MPC’s built in, and a bonus 4x10G MIC.


Saku Ytti
2015-07-09 15:57:21 UTC
Permalink
On (2015-07-09 09:45 -0500), Kevin Day wrote:

Hey,
Post by Kevin Day
1) It’s 3.5U high, making rack planning a little weird, and requiring me to buy a hard to find half-U blank panel
It is targeting metro applications, where racks often are telco racks. job-1
and job-2 were thrilled to get MX104 form-factor, MX80 was very problematic
and led to 'creative' installations.
Post by Kevin Day
2) It uses unusual power connectors on its power supplies, so you have to plan to buy special power cords just for this.
It's standard C15/C16, which is a temperature-enhanced (120C) version of
standard C13/C14. A lot of vendors are doing that these days; I'd like to
understand why. Is there some new recommendation for fire safety, or what has
triggered the change?
Post by Kevin Day
3) The Routing Engine CPU is a little slow for commits
Yes, however still slightly beefier than MX80.
Post by Kevin Day
We’re just treating it like an MX240/480/960 that has a pair of MPC’s built in, and a bonus 4x10G MIC.
The aggregate traffic rates won't exceed 75Gbps/55Mpps, while an MX240 with a pair
of MPC2s would have four times the lookup performance and double the memory
bandwidth. So treating it exactly the same will only work in an environment which
uses capacity sparingly (like metro often does; if your metro legs are
20Gbps, then you usually won't see more traffic)
--
++ytti
Kevin Day
2015-07-09 16:10:22 UTC
Permalink
Post by Saku Ytti
Post by Kevin Day
1) It’s 3.5U high, making rack planning a little weird, and requiring me to buy a hard to find half-U blank panel
It is targeting metro applications, where racks often are telco racks. job-1
and job-2 were thrilled to get MX104 form-factor, MX80 was very problematic
and led to 'creative' installations.
I definitely see the appeal there, but in a typical datacenter environment where everything else is using whole numbers for heights, it’s definitely unusual. Out of a few thousand devices, it’s the only half-U sized thing we own.
Post by Saku Ytti
Post by Kevin Day
2) It uses unusual power connectors on its power supplies, so you have to plan to buy special power cords just for this.
It's standard C15/C16, which is a temperature-enhanced (120C) version of
standard C13/C14. A lot of vendors are doing that these days; I'd like to
understand why. Is there some new recommendation for fire safety, or what has
triggered the change?
The answer I got was that because the MX104 has a much higher temperature
range than the rest of their gear, one of the regulatory agencies required
that the power cables and the power supply's *connectors* also handle the
higher temperatures.

Not a dealbreaker, but it just means one more weird thing we have to stock.
Post by Saku Ytti
Post by Kevin Day
We’re just treating it like an MX240/480/960 that has a pair of MPC’s built in, and a bonus 4x10G MIC.
The aggregate traffic rates won't exceed 75Gbps/55Mpps, while an MX240 with a pair
of MPC2s would have four times the lookup performance and double the memory
bandwidth. So treating it exactly the same will only work in an environment which
uses capacity sparingly (like metro often does; if your metro legs are
20Gbps, then you usually won't see more traffic)
This is true, you definitely can’t treat it like there’s no oversubscription.

s***@nethelp.no
2015-07-09 16:33:41 UTC
Permalink
Post by Saku Ytti
1) It's 3.5U high, making rack planning a little weird, and requiring me to buy a hard to find half-U blank panel
It is targeting metro applications, where racks often are telco racks. job-1
and job-2 were thrilled to get MX104 form-factor, MX80 was very problematic
and led to 'creative' installations.
Same here - *much* easier to fit into telco racks. Our field techs
love them. We haven't bought any MX80s after MX104 became generally
available.

Steinar Haug, Nethelp consulting, ***@nethelp.no
Mark Tinka
2015-07-09 21:40:27 UTC
Permalink
Post by Saku Ytti
It's standard C15/C16, which is a temperature-enhanced (120C) version of
standard C13/C14. A lot of vendors are doing that these days; I'd like to
understand why. Is there some new recommendation for fire safety, or what has
triggered the change?
We're seeing the same on the ME1200 as well. A little annoying, but
manageable.

Mark.
Ross Halliday
2015-07-21 21:52:08 UTC
Permalink
Post by Saku Ytti
Post by Kevin Day
1) It’s 3.5U high, making rack planning a little weird, and requiring me to buy a hard to find half-U blank panel
It is targeting metro applications, where racks often are telco racks. job-1
and job-2 were thrilled to get MX104 form-factor, MX80 was very problematic
and led to 'creative' installations.
Telco here. I love the MX104's format... in general. Most of our installations are DC so the .5U part is really negligible as approx. 1U is required off the bottom of the unit for clearance anyway. What drives me nuts about the height is actually the spacing of holes on the ears. Racking them is clumsy for this reason.

I'd love to see a switch in a similar form factor. We've had to put some EX4200s in one of our COs, and man, what a pain in the cavity those are. Flimsy little ears and way too deep.
Post by Saku Ytti
Post by Kevin Day
2) It uses unusual power connectors on its power supplies, so you have to plan to buy special power cords just for this.
It's standard C15/C16, which is a temperature-enhanced (120C) version of
standard C13/C14. A lot of vendors are doing that these days; I'd like to
understand why. Is there some new recommendation for fire safety, or what has
triggered the change?
Thank you for this explanation!!! We have one AC unit at a tower site and were wondering what the story was. Our company now has *TWO* NEMA 5-15P to IEC 320 C15 cables.

About the only other thing that annoys me about the MX104 is the location of the chassis ground. Right in the corner with the fan tray. Seriously?

In general I love these little routers.

Cheers
Ross

Mark Tinka
2015-07-09 21:34:58 UTC
Permalink
Post by Kevin Day
3) The Routing Engine CPU is a little slow for commits
If they can get the power budgets right, we may get an x86 RE for this
MX104.

Mark.
Adam Vitkovsky
2015-07-09 14:59:15 UTC
Permalink
Hi Mark,
-----Original Message-----
Sent: 09 July 2015 15:39
Post by Adam Vitkovsky
Still haven't seen the preso on 9904 internals but I'm not quite convinced.
Do I read it right that it's just basically a 9006-2 with some RU savings?
The ASR99xx are simply the same architecture as the ASR9000's, but the
difference is the fabric is faster.
Right, but I guess the main advantage of the 9922 and 9912 is that the switch fabric is modular, so one can grow/upgrade until the backplane becomes obsolete.
But on the 9904 the fabric seems to be integrated on the RSP, so it's the same as a 9k10 or 9k6, but yes, the fabric is faster indeed.
So there is a fixed limit to how much traffic the earlier generation can
handle, while the new chassis have more capacity to stick around longer
in time, as line cards get faster. Think of the ASR99xx as ASR9000-E's,
if you may :-).
If it were me, I'd go ASR99xx moving forward. I'd only consider
ASR9000's if I have restrictive power budgets, or think I'll never need
to go beyond what they can do today traffic-wise.
Post by Adam Vitkovsky
But MX104 can't hold the full internet routing table in forwarding-table so
it's good only for peering or can it indeed?
Can't it? I've assumed it can. Haven't actually deployed one yet.
Can't find the number now, but while searching I did find it's actually 55 to 60Gbps switch capacity, so that makes it directly comparable with the ASR920 (the 1.5RU) - but no redundancy (though the MX104 redundancy is somewhat crippled by the common switch fabric, whereas the ASR903 has a separate RP/switch fabric)
Post by Adam Vitkovsky
I think the MX104 can be a nice small-town PE to aggregate the town ring, so no
competition for the MX480/960
I'll take the ASR920 for that, Thank You Very Much :-).
Yeah, that depends on the rack space, the capacity required, and the number of prefixes.
The ASR920 is 1/1.5RU and the MX104 is 3.5RU; they both have the same switch capacity, but the ASR920 can hold only 20k prefixes in FIB, and I guess on the MX104 it was over 100k

adam

Mark Tinka
2015-07-09 21:37:39 UTC
Permalink
Post by Adam Vitkovsky
Right but I guess the main advantage of 9922 and 9912 is that the switch fabric is modular so one can grow/upgrade until the backplane becomes obsolete.
But on the 9904 the fabric seems to be integrated on the RSP, so it's the same as a 9k10 or 9k6, but yes, the fabric is faster indeed.
And the ASR99xx will take more of the upcoming line cards that get released.
Post by Adam Vitkovsky
Can't find the number now, but while searching I did find it's actually 55 to 60Gbps switch capacity, so that makes it directly comparable with the ASR920 (the 1.5RU) - but no redundancy (though the MX104 redundancy is somewhat crippled by the common switch fabric, whereas the ASR903 has a separate RP/switch fabric)
Well, the ASR920 can only hold 20,000 IPv4 entries in FIB. So the MX104,
with the ability to do a full table, beats it there.

But then again, that is why the ASR920 is better-priced for such a role.
Hard to compete with that.

Mark.
Scott Granados
2015-07-09 15:39:16 UTC
Permalink
I’m not sure about the BFD thing.

As I recall (and I would definitely suggest you take this with a few grains of salt and research a second source), BFD on directly connected paths will be distributed to the line card. BFD run, say, loopback to loopback or over a non-direct path will be handled by the RE, although I believe this was being addressed in future releases to have it distributed as well.
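
I.e. the RE-handled case would be a session like this (sketch; the addresses
and timers are made up):

protocols {
    bgp {
        group IBGP {
            type internal;
            local-address 192.0.2.1;      /* local loopback */
            neighbor 192.0.2.2 {          /* remote loopback, multiple hops away */
                bfd-liveness-detection {
                    minimum-interval 300; /* ms; keep conservative if ppmd runs this on the RE */
                    multiplier 3;
                }
            }
        }
    }
}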