Discussion:
[j-nsp] Opinions on fusion provider edge
Eldon Koyle
2018-11-06 01:32:27 UTC
What kind of experiences (good or bad) have people had with Juniper's
Fusion Provider edge? Are there any limitations I should be aware of?

I'm looking at it to simplify management in a campus network environment
and to use features that are only available on the MX currently.

--
Eldon
--
I don't think the universe wants me to send this message
Richard McGovern
2018-11-06 17:20:52 UTC
I might suggest you look at an EVPN-based design instead. This is going to be Juniper's #1 go-to in the future. I believe things like Junos Fusion and MC-LAG, etc. may still be supported, but secondary to EVPN and associated features.

What are your planned SD devices? QFX5???

Richard McGovern
Sr Sales Engineer, Juniper Networks
978-618-3342


Eldon Koyle
2018-11-06 18:30:05 UTC
We are looking at a mix of QFX5100-48S and EX4300-32F (somewhere between 6
and 10 devices total). It looks like the QFX supports EVPN, but Juniper
doesn't seem to have any relatively inexpensive 1GbE devices with EVPN
support.

We are planning on dual-homing most of our buildings (strictly L2, using
active-active EVPN or MC-LAG) to a pair of MXes with QSFP ports and fiber
breakout panels; however, we have some odds and ends that don't make sense
there due to optic requirements (a few BiDi and a few ER) and cost (we just
can't justify upgrading to 10GbE hardware in many locations).
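
Concretely, the MX end of an all-active EVPN attachment like that might look roughly like the sketch below. It is purely illustrative and untested: the ESI, LACP system-id, VLAN, RD/RT and AE values are made up, and the "#" lines are annotations rather than config.

  # On each of the two MXes; ESI and LACP system-id must match on both.
  set interfaces ae0 flexible-vlan-tagging
  set interfaces ae0 encapsulation flexible-ethernet-services
  set interfaces ae0 esi 00:11:11:11:11:11:11:11:11:01
  set interfaces ae0 esi all-active
  set interfaces ae0 aggregated-ether-options lacp active
  set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:11:11:01
  set interfaces ae0 unit 100 encapsulation vlan-bridge
  set interfaces ae0 unit 100 vlan-id 100
  # One VLAN-based EVPN instance per stretched VLAN.
  set routing-instances EVPN-100 instance-type evpn
  set routing-instances EVPN-100 vlan-id 100
  set routing-instances EVPN-100 interface ae0.100
  set routing-instances EVPN-100 route-distinguisher 10.255.1.1:100
  set routing-instances EVPN-100 vrf-target target:65000:100
  set routing-instances EVPN-100 protocols evpn

With the same ESI and LACP system-id on both MXes, the building switch downstream just sees one ordinary LACP bundle.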

One other concern is that licensing costs can add up quickly. In general,
would this end up requiring the AFL?

--
Eldon

Aaron1
2018-11-06 18:38:22 UTC
This is a timely topic for me as I just got off a con-call yesterday with my Juniper SE and an SP specialist...

They also recommended EVPN as the way ahead in place of things like Fusion. They even somewhat shy away from MC-LAG.

This was all while talking about a data center redesign that we are working on currently: replacing ToR VC EX4550’s LAG-connected to an ASR9K with dual QFX5120 leaves connected to a single MX960 with dual MPC7E-MRATE cards.

I think we will connect each QFX to each MPC7E card. Is it best practice to not interconnect directly between the two QFXs? If so, why not?

(please forgive, don’t mean to hijack thread, just some good topics going on here)

Aaron

Hugo Slabbert
2018-11-15 04:51:09 UTC
>This was all while talking about a data center redesign that we are
>working on currently. Replacing ToR VC EX4550’s connected LAG to ASR9K
>with new dual QFX5120 leaf to single MX960, dual MPC7E-MRATE
>
>I think we will connect each QFX to each mpc7e card. Is it best practice to not interconnect directly between the two QFX’s ? If so why not.

Glib answer: because then it's not spine & leaf anymore ;)

Less glib answer:

1. it's not needed and is suboptimal

Going with a basic 3-stage (2 layer) spine & leaf, each leaf is connected
to each spine. Connectivity between any two leafs is via any spine to
which they are both connected. Suppose you have 2 spines, spine1 and
spine2, and, say, 10 leaf switches. If a given leaf loses its connection to
spine1, it would then just reach all other leafs via spine2.

If you add a connection between two spines, you do create an alternate
path, but it's also not an equal cost or optimal path. If we're going
simple least hops / shortest path, provided leaf1's connection to spine1 is
lost, in theory leaf2 could reach leaf1 via:

leaf2 -> spine1 -> spine2 -> leaf1

...but that would be a longer path than just going via the remaining:

leaf2 -> spine2 -> leaf1

...path. You could force it through the longer path, but why?

2. What's your oversub?

The pitch on spine & leaf networks is generally their high bandwidth, high
availability (lots of links), and low oversubscription ratios. For the
sake of illustration let's go away from chassis gear for spines to a
simpler option like, say, 32x100G Tomahawk spines. The spines there have
capacity to connect 32x leaf switches at line rate. Whatever connections
the leaf switches have to the spines do not have any further oversub
imposed within the spine layer.

Now you interconnect your spines. How many of those 32x 100G ports are you
going to dedicate to spine interconnect? 2 links? If so, you've now
dropped the capacity for 2x more leafs in your fabric (and however many
compute nodes they were going to connect), and you're also only providing
200G interconnect between spines for 3 Tbps of leaf connection capacity.
Even if you ignore the less optimal path thing from above and try to
intentionally force a fallback path on spine:leaf link failure to traverse
your spine xconnect, you can impose up to 15:1 oversub in that scenario.

Or you could kill the oversub and carve out 16x of your 32x spine ports for
spine interconnects. But now you've shrunk your fabric significantly (can
only support 16 leaf switches)...and you've done so unnecessarily because
the redundancy model is for leafs to use their uplinks through spines
directly rather than using inter-spine links.

3. >2 spines

What if leaf1 loses its connection to spine2 and leafx loses its
connection to spine1? Have we not created a reachability problem?

  spine1      spine2
    /              \
   /                \
leaf1              leafx

Why, yes we have. The design solution here is either >1 links between each
leaf & spine (cheating; blergh) or a greater number of spines. What's your
redundancy factor? Augment the above to 4x spines and you've significantly
shrunk your risk of creating connectivity islands.

But if you've designed for interconnecting your spines, what do you do for
interconnecting 4x spines? What about if you reach 6x spines? Again: the
model is that resilience is achieved at the leaf:spine interconnectivity
rather than at the "top of the tree" as you would have in a standard
hierarchical, 3-tier-type setup.

--
Hugo Slabbert | email, xmpp/jabber: ***@slabnet.com
pgp key: B178313E | also on Signal

Nikos Leontsinis
2018-11-15 07:24:29 UTC
CoS will not work on the SD ports.

Aaron1
2018-11-15 13:31:30 UTC
Thanks Hugo, what about leaf to leaf connection? Is that good?

What about Layer 2 loop prevention?

Aaron

Gert Doering
2018-11-15 13:33:33 UTC
Hi,

On Thu, Nov 15, 2018 at 07:31:30AM -0600, Aaron1 wrote:
> What about Layer 2 loop prevention?

What is this "Layer 2 loop" thing?

gert
--
"If was one thing all people took for granted, was conviction that if you
feed honest figures into a computer, honest figures come out. Never doubted
it myself till I met a computer with a sense of humor."
Robert A. Heinlein, The Moon is a Harsh Mistress

Gert Doering - Munich, Germany ***@greenie.muc.de
Aaron1
2018-11-15 16:22:51 UTC
Well, I’m a data center rookie, so I appreciate your patience

I do understand that Layer 2 emulation is needed between data centers. If I do it with traditional mechanisms like VPLS or Martini l2circuits, I'm just afraid that if I make too many connections between spines and leaves I might create a loop.

However, I'm beginning to think that EVPN may take care of all that stuff; again, still learning some of the stuff that data centers do.



Aaron

Gert Doering
2018-11-15 19:10:32 UTC
Hi,

On Thu, Nov 15, 2018 at 10:22:51AM -0600, Aaron1 wrote:
> Well, I'm a data center rookie, so I appreciate your patience
>
> I do understand that Layer 2 emulation is needed between data centers. If I do it with traditional mechanisms like VPLS or Martini l2circuits, I'm just afraid that if I make too many connections between spines and leaves I might create a loop

Since these connections are all *routed*, the routing protocol takes care
of loops. There is no redundant L2 anything (unless you do LACP links,
but then LACP takes care of it) that could loop.
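
For illustration, the leaf side of such a routed underlay, in Junos set style with invented interfaces, addresses and AS numbers (the "#" lines are annotations), is roughly:

  # Two routed point-to-point uplinks, one per spine (example addressing).
  set interfaces xe-0/0/48 unit 0 family inet address 10.1.1.1/31
  set interfaces xe-0/0/49 unit 0 family inet address 10.1.2.1/31
  set policy-options policy-statement EXPORT-LO0 term 1 from interface lo0.0
  set policy-options policy-statement EXPORT-LO0 term 1 then accept
  # eBGP to each spine, with multipath for equal-cost load-sharing.
  set routing-options autonomous-system 65101
  set protocols bgp group UNDERLAY type external
  set protocols bgp group UNDERLAY export EXPORT-LO0
  set protocols bgp group UNDERLAY multipath multiple-as
  set protocols bgp group UNDERLAY neighbor 10.1.1.0 peer-as 65001
  set protocols bgp group UNDERLAY neighbor 10.1.2.0 peer-as 65002

Lose an uplink and BGP simply withdraws those paths; there is no blocked port and no spanning tree involved.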

The "user-visible layer2 network" stuff emulated via VXLAN, MPLS, ...
might form loop, so how you attach downstream L2 "infrastructure" will pose
some challenges - but this is totally independent from the leaf/spine
infra.

> However, I'm beginning to think that EVPN may take care of all that stuff; again, still learning some of the stuff that data centers do

EVPN is, basically, just putting a proper control-plane on top of MPLS
or VXLAN for "L2 routing" - put your MAC addresses into BGP, and it will
scale like hell.
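
To make that concrete, a minimal leaf-side sketch in Junos set style on a QFX-type box - loopback addresses, AS number, VLAN and VNI are all placeholders, and the "#" lines are annotations - would be roughly:

  # iBGP overlay session carrying EVPN routes (addresses/AS are examples).
  set routing-options autonomous-system 65000
  set protocols bgp group OVERLAY type internal
  set protocols bgp group OVERLAY local-address 10.255.0.11
  set protocols bgp group OVERLAY family evpn signaling
  set protocols bgp group OVERLAY neighbor 10.255.0.1
  # EVPN-VXLAN: locally learned MACs get advertised as EVPN Type 2 routes.
  set protocols evpn encapsulation vxlan
  set protocols evpn extended-vni-list all
  set switch-options vtep-source-interface lo0.0
  set switch-options route-distinguisher 10.255.0.11:1
  set switch-options vrf-target target:65000:1
  set vlans V100 vlan-id 100
  set vlans V100 vxlan vni 10100

The learned MACs then show up as EVPN Type 2 routes in bgp.evpn.0 (show route table bgp.evpn.0), so the BGP table is only ever as stable as the MAC learning feeding it.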

ISPs I've talked to like EVPN, because "this is BGP, I understand BGP".

Enterprise folks find EVPN scary, because "this is BGP, nobody here knows
about BGP"... :-) (and indeed, if BGP is news to you, there are way too
many things that can be designed poorly, and half the "this is how you do
a DC with EVPN" documents design their BGP in ways that I wouldn't do...)

gert

--
"If was one thing all people took for granted, was conviction that if you
feed honest figures into a computer, honest figures come out. Never doubted
it myself till I met a computer with a sense of humor."
Robert A. Heinlein, The Moon is a Harsh Mistress

Gert Doering - Munich, Germany ***@greenie.muc.de
Pavel Lunin
2018-11-16 00:10:12 UTC
Gert Doering wrote:

>
> EVPN is, basically, just putting a proper control-plane on top of MPLS
> or VXLAN for "L2 routing" - put your MAC addresses into BGP, and it will
> scale like hell.
>

"Like hell" is the right name for it.

Not that I don't like EVPN but... a) EVPN is not necessarily L2 b) Ethernet
is still Ethernet, even over EVPN. In order to announce the MAC over BGP,
you first need to learn it. With all the consequences and prerequisites.
And, of course, mapping dynamically learned stuff to BGP announcements comes
at the cost of making BGP routes only as stable as the learned MACs.

Magic doesn't exist.
a***@netconsultings.com
2018-11-16 10:07:55 UTC
> Magic doesn't exist.
>
It does and it's called PBB-EVPN
No just kidding :)

PBB on top of EVPN just brings back the conversational MAC learning aspect
of it and solves the scalability issues of pure EVPN (makes BGP independent
of customer MAC change rate or MAC scale).
But as you rightly pointed out it's still Ethernet with all its problems.
Though I guess this "simulated" Ethernet is somewhat better than vanilla
Ethernet, since you have all these clever features like split-horizon groups,
designated forwarders, multicast-style distribution of BUM traffic, etc...
which, depending on who's driving, might prevent one from shooting himself in
the foot or provide enough rope to hang oneself with...

adam


a***@netconsultings.com
2018-11-16 11:12:29 UTC
Hey Aaron,

My advice would be: if you're building a new DC, build it as part of your MPLS network (yes, no boundaries).

Rant//
The whole networking industry got it very wrong with the VXLAN technology; that was one of the industry's biggest blunders.
The VXLAN project of the DC folks is a good example of short-sighted goals and the desire to reinvent the wheel (SP folks had VPLS around for years when VXLAN came to be).
SP folks then came up with EVPN as a replacement for VPLS, and DC folks then shoehorned it on top of VXLAN.
Then the micro-segmentation buzzword came along and DC folks quickly realized that there's no field in the VXLAN header to indicate a common access group, nor the ability to stack VXLAN headers on top of each other (though some tried with custom VXLAN spin-offs), so DC folks came up with a brilliant idea -let's maintain access lists! -like it's the 90s again.
As an SP guy I'm just shaking my head thinking: did these guys ever hear of L2-VPNs, which have been around since the inception of MPLS? (So yes, not telling people about MAC addresses they should not be talking to is better than telling everyone and then maintaining ACLs.) In the SP sector we learned that in the 90s.
Oh and then there's the Traffic-Engineering requirement to route mice flows around elephant flows in the DC, not to mention the ability to seamlessly steer traffic flows right from VMs then across the DC and the MPLS core, which is impossible with VXLAN islands in the form of DCs hanging off of an MPLS core.
Rant\\



adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::

Aaron1
2018-11-16 15:13:37 UTC
Geez, sounds horrible, thanks Adam

We are buying QFX5120s for our new DC build. How good is the MPLS services capability of the QFX5120?

Aaron

Gert Doering
2018-11-16 15:18:21 UTC
Hi,

On Fri, Nov 16, 2018 at 09:13:37AM -0600, Aaron1 wrote:
> Geez, sounds horrible, thanks Adam
>
> We are buying QFX5120s for our new DC build. How good is the MPLS services capability of the QFX5120?

Are they shipping already? Any success or horror stories?

25G looks promising for "10G is not enough, 40G is such a hassle", but
it's the usual "new chip, new product, has it matured enough?" discussion.

gert
--
"If was one thing all people took for granted, was conviction that if you
feed honest figures into a computer, honest figures come out. Never doubted
it myself till I met a computer with a sense of humor."
Robert A. Heinlein, The Moon is a Harsh Mistress

Gert Doering - Munich, Germany ***@greenie.muc.de
Giuliano C. Medalha
2018-11-16 15:32:00 UTC
We are testing the new QFX5120 in our lab right now.

We are only waiting for the official software release ...

The box is here already.

The specs only show l2circuit ... but we are waiting to see flexible Ethernet encapsulation (no VPLS, we already know) to use VLAN and MPLS on the same interface.

But the main idea is to use it with an EVPN/VXLAN configuration and try QinQ on the VTEP.

After that we can post the results here.

Regards,

Giuliano C. Medalha
WZTECH NETWORKS
+55 (17) 98112-5394
***@wztech.com.br

Aaron Gould
2018-11-20 19:17:50 UTC
OK good, I just read this.

https://forums.juniper.net/jnet/attachments/jnet/Day1Books/360/1/DO_EVPNSforDCI.pdf

Day One: Using Ethernet VPNs for Data Center Interconnect

page 11, last sentence on that page...

"EVPN also has mechanisms that prevent the looping of BUM traffic in an all-active multi-homed topology."

-Aaron

Hugo Slabbert
2018-11-19 05:31:39 UTC
>Thanks Hugo, what about leaf to leaf connection? Is that good?

It Depends(tm). I would start with asking why you want to interconnect
your leafs. Same question again about scaling out >2 as well as just what
you're trying to accomplish with those links. A use case could be
something like MLAG/VPC/whatever to bring L2 redundancy down to the node
attachment. Personally I'm trying to kill the need for that (well, more
just run L3 straight down to the host and be done with all layers of
protocols and headers just to stretch L2 everywhere), but one battle at a
time.

--
Hugo Slabbert | email, xmpp/jabber: ***@slabnet.com
pgp key: B178313E | also on Signal

Richard McGovern
2018-11-06 18:41:27 UTC
To run EVPN on the QFX5100, yes, you need an extra license – PFL (the less expensive option). NOTE: PFL and AFL are always confusing to me, as to which is the bigger one!!

You could then run the EX4300 connections as an ESI-LAG, versus MC-LAG – it has the advantages of being standards-based, it can scale horizontally in the core, and it gives you anycast GW so no VRRP (vs MC-LAG). Of course with an EVPN design you lose the single point of management, BUT with automation, scripting, use of Ansible/etc. this becomes secondary anyway, at least IMHO.
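
For reference, the ESI-LAG end of that is typically something like the following, configured identically on both EVPN peers; the ESI, LACP system-id, AE number and VLAN name below are placeholders (and the "#" lines are annotations), so treat it as a sketch rather than a recipe:

  # On each of the two EVPN aggregation devices facing the EX4300.
  set interfaces xe-0/0/10 ether-options 802.3ad ae0
  set interfaces ae0 esi 00:22:22:22:22:22:22:22:22:01
  set interfaces ae0 esi all-active
  set interfaces ae0 aggregated-ether-options lacp active
  set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:22:22:01
  set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
  set interfaces ae0 unit 0 family ethernet-switching vlan members V100

The EX4300 side is just a plain LACP bundle with member links to both peers - no ICCP/ICL and no MC-LAG state to keep in sync.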

Good news is you do have multiple choices, which might be bad news as well -😊 For me, EVPN is the way to go, along with as much automation as possible. I think in the end you'll find more EVPN-based deployments vs Fusion (and MC-LAG) for new deployments.

I assume you are staying with an MX core because it is already there, and because no matter what you may need, it has it, . . . which comes at a higher price point.

Good luck.

Richard McGovern
Sr Sales Engineer, Juniper Networks
978-618-3342


Saku Ytti
2018-11-06 18:39:31 UTC
Hey Richard,

I think there are two separate issues here. A lot of people looking at
Fusion aren't choosing it for the technology, they are choosing it for
media, as 1GE L3 ports are not available. So the options are

a) L2 aggregation
b) Fusion
c) Wait for JNPR to release some MX244 with 4xQSFP28 + 40xSFP+ box,
which has attractive pay-as-you-go for 1GE deployments.

I'm sure there is another set of problems where EVPN and Fusion are
competing options, but for service providers Fusion's value proposition
mainly is low-rate ports, ports which Cisco and Juniper do not really
offer anymore on L3 devices.


--
++ytti
Richard McGovern
2018-11-06 19:16:22 UTC
Agree 100%. If you need L3 for the 1GE edge, there is no solution today. That solution (late 1H2019 or 2H2019) will come with EVPN/VXLAN support on the EX4300-MP, from what I hear. If you can get away with just L2 at the 1GE edge, then an ESI-LAG to any L2 access will work. Yes, you can also use a QFX5K Agg running EVPN/VXLAN and then connect a 1GE platform (via ESI-LAG) to it.

For the question of whether it is needed or beneficial to interconnect at the edge: it can be done, but it is not really beneficial, as the EVPN/VXLAN ToR or Agg devices are always 1 hop away anyway. Hopefully this makes sense to you.

Regards

Richard McGovern
Sr Sales Engineer, Juniper Networks
978-618-3342


Antti Ristimäki
2018-11-07 13:02:02 UTC
Hi,

We also went with Fusion, with MX10k routers, just because we need 1GE interfaces and also 10GE interfaces with e.g. colored optics. In my opinion a traditional L2 aggregation style would have been the preferred and probably more robust way, but then, depending on the satellite device, it might be harder to do L2 protocol tunneling, which is fairly trivial with Fusion and CCC connections.
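
For context, the aggregation-device side of a Fusion setup is roughly the following - a from-memory sketch with placeholder FPC slot, interface and group names, and "#" lines as annotations - after which the satellite appears as an extra FPC whose extended ports are configured much like local ones:

  # Hypothetical MX aggregation-device config for one satellite.
  set chassis satellite-management fpc 100 alias sat-site-a
  set chassis satellite-management fpc 100 cascade-ports xe-0/1/0
  set chassis satellite-management upgrade-groups qfx5100-sats satellite 100
  set interfaces xe-0/1/0 cascade-port
  # Extended (satellite) ports then show up as ge-100/0/x, e.g.:
  set interfaces ge-100/0/5 unit 0 family inet address 192.0.2.1/30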

Wrt the original question about possible issues with Fusion, we have faced quite a few. Currently one of the biggest pains is getting CoS configured properly on Fusion ports. We have a case open where any CoS scheduler change stops traffic forwarding out of the cascade port, if one has explicitly configured schedulers for the cascade port's logical control (32769) and data (32770) units. This is pretty irritating, as traffic between the extended customer-facing port and the CE device works just fine, keeping e.g. BGP up and running, but traffic to/from the core does not work.

I'm also somewhat concerned about the fact that the whole Fusion thing is more or less a black box and as such much more difficult to debug than traditional technologies.

From a monitoring point of view it is also a bit of a challenge that not all information related to satellite ports is available via SNMP. E.g. queue-specific counters are not available but have to be queried via a CLI command, and IIRC ifOutDiscards is also not recorded for the extended ports.
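
That CLI query is per extended port, along the lines of the following (the port name is just an example):

  show interfaces queue ge-100/0/5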

Antti

James Bensley
2018-11-08 13:45:37 UTC

My experiences with SP Fusion so far have been to add more ports to
routers in a cost-effective manner. Filling a chassis with 1G/10G
ports isn't super cost-efficient, as line cards are expensive and ports
are rarely run near 100% utilisation for an extended period of time,
so it makes for a poor ROI. At $job-1 we went down the road of using
SP Fusion as a layer 1 extension to PEs, using 40G uplinks to the
router as the aggregation link. Operators have been doing this for
years using dumb layer 2 switches as a layer 2 extension. There is
nothing wrong with layer 2 aggregation switches in my opinion, the
only technical advantage in my opinion to using SP Fusion for a layer
1 extension to a router compared to a layer 2 switch is that SP Fusion
is one device to configure and monitor instead of two. Unless we had
deployed thousands of aggregation + satellite devices it wasn't really
having any major positive impact on my monitoring licensing costs
though. Equally, when using a typical router + layer 2 switch
extension, the config that goes into the layer 2 switch is so basic that
touching two devices instead of one again seems like a negligible
disadvantage to me.

The benefit we had from SP Fusion is that, and I'm guessing here,
Juniper wanted guinea pigs; they sold us QFX5100s as SP Fusion
devices plus line cards for the MXs for cheaper than we could buy
line cards + EXs, and guinea pigs we were. It took quite a bit of
effort to get the QFXs onto the correct code version in standalone
mode. We also had to upgrade our MXs to 17.1 (this was not long after
its release) and use the then-new RE-64s because we needed HQoS over
Fusion and this was the only way it was supported. It was then more
hassle to get the QFXs to migrate into Fusion mode and download their
special firmware blob from the MXs. We had to get JTAC to help us and
even they struggled. Another issue is that we were heavy users of
Inter-AS MPLS Option B and it isn't supported over SP Fusion
links. There is technically no reason why it wouldn't work, as Fusion
is a layer 1 extension; however, Inter-AS Opt B isn't one of the
features they test when releasing new Fusion code versions, so it's
officially unsupported and we still had to deploy EXs for Opt B
links.

A colleague of mine worked on a separate project which was a DC Fusion
deployment and had similar issues; it took him a lot of headache and
JTAC assistance to get that deployment working.

In my current $job we have/had a QFX switch stack in the office (so
nothing to do with Fusion) that has been very troublesome. As per
some of the other threads on this list we've had lots of problems with
QFX switches and certain optics not working, either in stacked mode or
on certain code versions. Again, this went to JTAC, they couldn't fix
it; eventually we fixed it by trying various different code versions
and breaking the stack out.

So overall, not impressed with the QFX5100s at all.

Cheers,
James.
Tarko Tikan
2018-11-08 14:23:02 UTC
hey,

> There is
> nothing wrong with layer 2 aggregation switches in my opinion, the
> only technical advantage in my opinion to using SP Fusion for a layer
> 1 extension to a router compared to a layer 2 switch is that SP Fusion
> is one device to configure and monitor instead of two.

Except that it's not L1. It's still L2 with 802.1BR (or vendor
proprietary version of that).

You highlight the exact reasons why one should stay away from
fusion/fex/satellite - features must explicitly be
ported/accommodated/tested for them. Not all performance data is
available, OAM/CFM is a struggle etc.

--
tarko
James Bensley
2018-11-08 18:31:49 UTC
On 8 November 2018 14:23:02 GMT, Tarko Tikan <***@lanparty.ee> wrote:
>hey,
>
>> There is
>> nothing wrong with layer 2 aggregation switches in my opinion, the
>> only technical advantage in my opinion to using SP Fusion for a layer
>> 1 extension to a router compared to a layer 2 switch is that SP Fusion
>> is one device to configure and monitor instead of two.
>
>Except that it's not L1. It's still L2 with 802.1BR (or vendor
>proprietary version of that).

Yep, Juniper told us at the time that Fusion was based on open standards (802.1BR) and not proprietary in any way. Funny how they don't support the use of any other 802.1BR compliant device, and I doubt it would work. They must have some proprietary gubbins in there like pushing the Fusion firmware blob from the aggregation device to the satellite device. If the Fusion firmware wasn't on the QFX the MX and QFX wouldn't "bond". Not sure how the MX detects that (LLDP?) - I had a (albeit quick) look at the standard back then and couldn't see anything related, so I presume an MX AD would reject a random 802.1BR compatible device.

>You highlight the exact reasons why one should stay away from
>fusion/fex/satellite - features must explicitly be
>ported/accommodated/tested for them. Not all performance data is
>available, OAM/CFM is a struggle etc.

Agreed.

Cheers,
James.
Saku Ytti
2018-11-08 18:53:31 UTC
On Thu, 8 Nov 2018 at 20:33, James Bensley <***@gmail.com> wrote:

> Yep, Juniper told us at the time that Fusion was based on open standards (802.1BR) and not proprietary in any way. [...]

I would be very surprised if they said 'not proprietary in any way',
I'm sure they said 'based on open standards'.

But that's like saying WhatsApp is based on open standards, entirely
true, but it's also hella proprietary; these statements are not
mutually exclusive.

Curiously, Fusion seems to be an 'expensive' feature, as it's one of the
very few features which do not work with 'hyper-mode', which according
to my co-worker's very recent test is ~25% pps upside.
--
++ytti
Tarko Tikan
2018-11-08 20:41:39 UTC
hey,

> Yep, Juniper told us at the time that Fusion was based on open
> standards (802.1BR) and not proprietary in any way. [...]

Well, the 802.1BR part might very well be standard. But it does not
cover functionality like loading the software to the port extender,
discovery etc. This is proprietary and as a result makes the whole thing
vendor-specific.

Funnily enough, they do use standards to implement this functionality (why
reinvent the wheel?). One vendor I know creates an internal VRF with a DHCP
server to boot up and manage port extenders.

--
tarko