Discussion:
[j-nsp] Longest Match for LDP (RFC5283)
James Bensley
2018-07-24 10:17:10 UTC
Hi All,

Like my other post about Egress Protection on Juniper, is anyone using
what Juniper calls "Longest Match for LDP" - their implementation of
RFC 5283, LDP Extension for Inter-Area Label Switched Paths (LSPs)?

The Juniper documentation is available here:

https://www.juniper.net/documentation/en_US/junos/topics/concept/longest-match-support-for-ldp-overview.html

https://www.juniper.net/documentation/en_US/junos/topics/task/configuration/configuring-longest-match-ldp.html
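
For reference, the configuration knob itself appears to be a single
statement under LDP (a minimal sketch based on my reading of the second
link above; the interface name is just a placeholder):

# Hypothetical Junos snippet: allow LDP to install /32 FECs that only
# resolve via a less-specific (e.g. summary/default) IGP route.
set protocols ldp longest-match
set protocols ldp interface ge-0/0/0.0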

As before, as far as I can tell only Juniper have implemented this:
- Is anyone using this?
- Are you using it in a mixed vendor network?
- What is your use case for using it?

I'm looking at IGP/MPLS scaling issues on some smaller access-layer
boxes that run MPLS (e.g. Cisco ME3600X, ASR920, etc.) and have limited
TCAM. We do see TCAM exhaustion issues with these boxes, however the
biggest culprit is Inter-AS MPLS Option B connections. This is because
Inter-AS Option B double-allocates labels, which means label TCAM can
run out before we run out of IPv4/v6 TCAM, due to labels growing at
twice the rate of prefixes.

I'm struggling to see the use case for the feature linked above that
Juniper have implemented. When running LDP, label TCAM usage increments
pretty much linearly with IP prefix TCAM usage. If you're running the
BGP VPNv4/VPNv6 address family with per-prefix labelling (the default
on Cisco IOS/IOS-XE) then, again, label TCAM usage increases pretty
much linearly with IP prefix TCAM usage. If you're using
per-vrf/per-table labels or per-CE labels then label TCAM usage grows
far more slowly than IP prefix usage (roughly with the number of
VRFs/CEs rather than prefixes), and in this scenario we run out of IP
prefix TCAM long before we run out of label TCAM.

My point here is that label TCAM runs out because of BGP/RSVP/SR
usage, not because of LDP usage.

So who is using this feature/RFC on low-end MPLS access boxes (QFX5100,
ACX5048, etc.)?
How is it helping you?
Who's running out of MPLS TCAM space (on a Juniper device) before they
run out of IP prefix space when using LDP (and not RSVP/SR/BGP)?

Cheers,
James.
_______________________________________________
juniper-nsp mailing list juniper-***@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
a***@netconsultings.com
2018-07-24 13:35:37 UTC
Hi James
I certainly was not aware of this one.
Interesting concept - I'm guessing for Opt B in Inter-Area deployments?
(Or a neat alternative to the current options?)

Suppose I have an ABR advertising a default route + label down to a stub
area, and suppose PE-3 in this stub area wants to send packets to PE1 and
PE2 in area 0 or some other area.
Now I guess the whole purpose of "Longest Match for LDP" is to save
resources on PE-3 so that all it has in its RIB/FIB is this default route +
LDP label pointing at the ABR.
So it pushes the VPN labels it learned via BGP from PE1 and PE2, then the
only transport label it has on top, and sends the packets to the ABR.
When the ABR receives these two packets, how is it going to know that they
are not destined to it and that it needs to stitch this LSP further to the
LSPs toward PE1 and PE2? And how would it know which of the two packets it
just received is supposed to be forwarded to PE1 and which to PE2?
This seems to defeat the principle of end-to-end LSPs, where the label
stack has to uniquely identify the label-switched path's end-point (or
group of end-points).
The only way out is if the ABR indeed thinks these packets are destined for
it, and it also happens to host both VRFs and has advertised the VPN
prefixes for these VRFs to our PE-3, so that when PE-3 sends packets to PE1
and PE2 these land on the ABR in their respective VRFs and are sent onwards
by the ABR to PE1 and PE2.

In the old world PE-3 would need to have a route + transport label for
PE1 and PE2.
Options:
a) In the single-area-for-the-whole-core approach, PE-3 would have to hold
these routes + transport labels for all other PEs in the backbone - the
"same LSDB on each host" requirement.
b) In multi-area with BGP-LU (hierarchical MPLS) we could have the ABR
advertise only a subset of routes + labels to PE-3 (or have PE-3 only
accept the routes it actually needs) - this reduction might suffice or not.
Note: no VPN routes at the ABR.
c) I guess this new approach then further reduces the FIB size requirements
on PE-3 by allowing it to have just one prefix and transport label (or two
in the case of redundant ABRs), but it increases the requirements on the
ABRs, as they now need to hold all VPN routes - just like RRs (i.e. they
require much more FIB than a regular PE).

I guess running out of FIB due to the sheer size of the MPLS network can
happen in environments where you have just a few VRFs per PE, with just a
few routes each, but 10s or 100s of thousands of PEs - it's the toll of
running MPLS all the way down to the access layer. That's partly why they
came up with MPLS-TP.

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::

Krasimir Avramski
2018-07-24 16:25:27 UTC
Hi,

It is used in Access Nodes (default route to the AGN) with
LDP-DoD (Downstream-on-Demand) Seamless MPLS architectures - RFC 7032
<https://tools.ietf.org/html/rfc7032>.
A sample with LDP->BGP-LU redistribution on the AGN is here
<https://www.juniper.net/documentation/en_US/junos12.2/topics/example/mpls-ldp-downstream-on-demand.html>.

Best Regards,
Krasi
James Bensley
2018-07-25 07:25:03 UTC
Hi Adam,
Post by a***@netconsultings.com
Suppose I have an ABR advertising a default route + label down to a stub
area, and suppose PE-3 in this stub area wants to send packets to PE1 and
PE2 in area 0 or some other area.
Now I guess the whole purpose of "Longest Match for LDP" is to save
resources on PE-3 so that all it has in its RIB/FIB is this default route +
LDP label pointing at the ABR.
So it pushes the VPN labels it learned via BGP from PE1 and PE2, then the
only transport label it has on top, and sends the packets to the ABR.
When the ABR receives these two packets, how is it going to know that they
are not destined to it and that it needs to stitch this LSP further to the
LSPs toward PE1 and PE2? And how would it know which of the two packets it
just received is supposed to be forwarded to PE1 and which to PE2?
This seems to defeat the principle of end-to-end LSPs, where the label
stack has to uniquely identify the label-switched path's end-point (or
group of end-points).
The only way out is if the ABR indeed thinks these packets are destined for
it, and it also happens to host both VRFs and has advertised the VPN
prefixes for these VRFs to our PE-3, so that when PE-3 sends packets to PE1
and PE2 these land on the ABR in their respective VRFs and are sent onwards
by the ABR to PE1 and PE2.
^ This is exactly my problem with this feature. It only works if the
transport label is the only label, i.e. directly beneath it is the IP
payload (e.g. in your topology, PE-3 is sending traffic inside the global
routing table / inet.0); then we need to store fewer prefixes + labels for
transport of GRT traffic. For MPLS VPN traffic, as you say, the ABR needs
all the routes (for L3 VPNs), must be IPv6 capable in the case of IPv6
VPNs, and must be able to do L2 VPN stitching to support inter-area L2
VPNs. This is quite a lot of extra work for the ABR just to save TCAM/FIB
space on PE-3.
Post by a***@netconsultings.com
In the old world PE-3 would need to have a route + transport label for
PE1 and PE2.
Options:
a) In the single-area-for-the-whole-core approach, PE-3 would have to hold
these routes + transport labels for all other PEs in the backbone - the
"same LSDB on each host" requirement.
b) In multi-area with BGP-LU (hierarchical MPLS) we could have the ABR
advertise only a subset of routes + labels to PE-3 (or have PE-3 only
accept the routes it actually needs) - this reduction might suffice or not.
Note: no VPN routes at the ABR.
c) I guess this new approach then further reduces the FIB size requirements
on PE-3 by allowing it to have just one prefix and transport label (or two
in the case of redundant ABRs), but it increases the requirements on the
ABRs, as they now need to hold all VPN routes - just like RRs (i.e. they
require much more FIB than a regular PE).
^ Agree with all of the above.
Opt a) doesn't scale well.
Opt b) scales better: you could accept only the /32s you need on each PE,
but now you need per-PE loopback filters :(
Opt c) doesn't scale well either. If your topology is AREA_1 -- AREA_0
-- AREA_2, then the ABR on the area 1/0 border must carry all the
service VRFs/prefixes/labels for all PEs inside areas 1 and 2, so that
an LSP can stretch from a PE inside area 1 to a PE inside area 2 and
that ABR (and the area 0/2 ABR) can perform the service label swap. This
goes for any area 0 ABR, and the more areas you have the worse it
gets: those area x/0 ABRs must carry all service prefixes/labels from
all areas. So this obviously isn't a scalable approach.

So what is the use case of this feature?

All I can see is label-switching a default route inside the GRT/inet.0
from an access PE to an access PE in another area. Similar to the Cisco
IOS command "mpls ip default-route", which allocates a label for the
default route in LDP (by default no label is allocated).

Cheers,
James.
Krzysztof Szarkowicz
2018-07-25 08:14:32 UTC
Hi,

The purpose of "Longest Match for LDP" is to be able to distribute /32 LDP FECs when the corresponding /32 routes are not available in the IGP.
So, on the ABR you inject e.g. a default route into the access IGP domain. The ABR has the /32 LDP FECs, and advertises these /32 FECs in LDP (but not in the IGP) downstream into the access domain. In the access domain, LDP readvertises these /32 LDP FECs hop-by-hop, assigning the labels.

It is typically used with LDP DoD. On the other hand, however, nothing prevents you from having an LDP policy on the ABR to inject only specific /32 LDP FECs into the access domain.

The same applies to IPv6 LDP FECs.
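
As a rough illustration only (assuming IS-IS towards the access domain;
the policy name and the static discard default are invented for the
sketch):

# Hypothetical ABR config: originate a default route into the access
# IGP level, while LDP continues to advertise the /32 FECs downstream.
set routing-options static route 0.0.0.0/0 discard
set policy-options policy-statement DEFAULT-TO-ACCESS term static-default from protocol static
set policy-options policy-statement DEFAULT-TO-ACCESS term static-default from route-filter 0.0.0.0/0 exact
set policy-options policy-statement DEFAULT-TO-ACCESS term static-default to level 1
set policy-options policy-statement DEFAULT-TO-ACCESS term static-default then accept
set protocols isis export DEFAULT-TO-ACCESS

# Hypothetical access-PE config: let LDP match the /32 FECs against the
# default/summary route (RFC 5283 longest match).
set protocols ldp longest-match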

Thanks,
Krzysztof
James Bensley
2018-07-30 09:14:43 UTC
Hi Krasimir, Krzysztof,
Post by Krasimir Avramski
It is used in Access Nodes (default route to the AGN) with
LDP-DoD (Downstream-on-Demand) Seamless MPLS architectures - RFC 7032.
A sample with LDP->BGP-LU redistribution on the AGN is here.
Thanks Krasimir. Sorry for the delay, I read
https://tools.ietf.org/html/rfc7032,
https://tools.ietf.org/html/rfc5283 and
https://tools.ietf.org/html/draft-ietf-mpls-seamless-mpls-07 before
responding.
Post by Krzysztof Szarkowicz
The purpose of "Longest Match for LDP" is to be able to distribute /32 LDP
FECs when the corresponding /32 routes are not available in the IGP.
So, on the ABR you inject e.g. a default route into the access IGP domain.
The ABR has the /32 LDP FECs, and advertises these /32 FECs in LDP (but not
in the IGP) downstream into the access domain. In the access domain, LDP
readvertises these /32 LDP FECs hop-by-hop, assigning the labels.
It is typically used with LDP DoD. On the other hand, however, nothing
prevents you from having an LDP policy on the ABR to inject only specific
/32 LDP FECs into the access domain.
Thanks Krzysztof, that was my understanding from the Juniper link I
provided and the RFC, but it's still nice to have my understanding
confirmed by someone else.

After reading the above RFCs I see that the specific use case for this
feature is when using LDP in Downstream-on-Demand mode, although that
isn't actually called out anywhere in RFC 5283 or in the Juniper
documentation. I was thinking in DU mode in my head :)

In DU mode, an agg node will advertise all labels to the access node.
If the access node has, say, a 10.0.0.0/22 summary route (an example range
the loopback IPs are assigned from) and RFC 5283 enabled, and the agg node
advertises 1000 /32 IPv4 FEC labels (one for each loopback, assuming
1000 PEs exist), the access node will keep all 1000 labels even if it
only needs a few of them, matching them against the summary route.
This is the default LDP DU behaviour, unless we create horrible
per-LDP-neighbour policies on the agg node that only allow the labels for
the exact loopbacks that access node needs to reach. So relaxing the LDP
exact-match rules is kind of useless for LDP DU. In LDP DoD mode, the
access nodes only request the label mappings for the labels they need,
so there is no need for per-LDP-neighbour label policies, but we would
still need per-LDP-neighbour IP routing policies to advertise in the IGP
only the /32 loopback IPs that neighbour needs, unless we use RFC 5283 and
advertise a summary route (or install a static summary route).
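
For illustration, the kind of per-neighbour policy I mean might look
something like this on a Junos agg node (a sketch only - the addresses
and names are invented, and I'd want to double-check the exact LDP
policy match conditions):

# Hypothetical agg-node config: only advertise the two loopback FECs
# this particular access node needs, and drop all other FECs to it.
set policy-options policy-statement LDP-OUT-AN1 term wanted from route-filter 10.0.0.11/32 exact
set policy-options policy-statement LDP-OUT-AN1 term wanted from route-filter 10.0.0.12/32 exact
set policy-options policy-statement LDP-OUT-AN1 term wanted to neighbor 192.168.0.2
set policy-options policy-statement LDP-OUT-AN1 term wanted then accept
set policy-options policy-statement LDP-OUT-AN1 term drop-rest to neighbor 192.168.0.2
set policy-options policy-statement LDP-OUT-AN1 term drop-rest then reject
set protocols ldp export LDP-OUT-AN1

Multiply that by every access node and every service change and you can
see why I call these policies horrible.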

Cheers,
James.
Krzysztof Szarkowicz
2018-07-30 14:22:54 UTC
James,

As mentioned in my earlier mail, you can use it even with DU. If the ABR has
10000 /32 LDP FECs, you can configure an LDP export policy on the ABR to send
only a subset (e.g. 20 /32 FECs) to the access.

That said, the typical deployment is with DoD, since typically the access PEs
(and not the ABRs) have better knowledge of which loopbacks are needed. So,
basically, the access PEs tell the ABR which loopbacks are needed, and which
are not, via the LDP DoD machinery.

Sent from handheld device. Sorry for typos.
James Bensley
2018-07-30 15:13:30 UTC
Hi Krzysztof,
Post by James Bensley
unless we create horrible per-LDP
neighbour policies on the agg node that only allow the labels for the
exact loopbacks that access node needs to reach.
Cheers,
James.
Krzysztof Szarkowicz
2018-07-30 19:51:11 UTC
I have seen a deployment where services were deployed via a centralized
provisioning system. Part of the service provisioning was to adjust the LDP
export policy on the ABR, since LDP DU was used (some access PEs didn't
support LDP DoD in that deployment). So, if you have some devices without
LDP DoD support, your only choice is LDP DU + policies :-).
a***@netconsultings.com
2018-07-31 14:29:28 UTC
Post by Krzysztof Szarkowicz
So, if you have some devices without LDP DoD support, your only choice
is LDP DU + policies :-).
Or BGP-LU + policies,

One follow-up question: what about the case where even the minimum set of /32
loopback routes and associated labels is simply beyond the capabilities of an
access node?
Is there a possibility for such an access node to rely on a default route +
label, where the originator of such a labelled default route is the local
ABR(s) in an "Opt B" role, doing a full IP lookup and then repackaging the
packets towards the actual next hop?


adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::


Krzysztof Szarkowicz
2018-07-31 15:46:20 UTC
You can turn the ABR into an inline Route Reflector and change the next hop
to self when reflecting the routes to the access PE. That way, the access PE
will require the loopbacks of the ABRs only.
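
For example, roughly (a sketch - the group name, cluster ID and
addresses are placeholders):

# Hypothetical ABR config: reflect VPN routes to the access PEs with
# next-hop self, so each access PE only needs an LSP to the ABR itself.
set protocols bgp group ACCESS-PES type internal
set protocols bgp group ACCESS-PES cluster 10.255.0.1
set protocols bgp group ACCESS-PES family inet-vpn unicast
set protocols bgp group ACCESS-PES neighbor 10.0.0.11
set policy-options policy-statement NHS term 1 then next-hop self
set protocols bgp group ACCESS-PES export NHS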

Sent from handheld device. Sorry for typos.
a***@netconsultings.com
2018-07-31 16:09:02 UTC
Aah yes of course in-path RRs with NHS would do :)

Cheers,

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::
James Bensley
2018-08-01 07:16:10 UTC
Post by a***@netconsultings.com
One follow-up question: what about the case where even the minimum set of /32
loopback routes and associated labels is simply beyond the capabilities of an
access node?
Is there a possibility for such an access node to rely on a default route +
label, where the originator of such a labelled default route is the local
ABR(s) in an "Opt B" role, doing a full IP lookup and then repackaging the
packets towards the actual next hop?
Hi Adam,

In the Seamless MPLS design the access nodes have a single default
route or a single summary prefix for your loopback range (say
192.0.2.0/24), use LDP Downstream-on-Demand, and request transport
labels from the aggregation nodes only for the remote PEs the access
node actually needs (i.e. where you have configured a pseudowire/L2 VPN
towards, an iBGP neighbour address for L3 VPN, etc.). So the access
node should have *only* exactly the labels it needs, with a single
route (when using RFC 5283).
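
On Junos the access-node side of that might look roughly like this (a
sketch based on my reading of the DoD example Krasimir linked; the
addresses and policy name are placeholders):

# Hypothetical access-node config: request labels on demand instead of
# accepting everything downstream-unsolicited, and let LDP match /32
# FECs against the summary route (RFC 5283).
set protocols ldp longest-match
set protocols ldp session 192.0.2.1 downstream-on-demand
set policy-options policy-statement DOD-REQ term 1 from route-filter 192.0.2.0/24 orlonger
set policy-options policy-statement DOD-REQ term 1 then accept
set protocols ldp dod-request-policy DOD-REQ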

Cheers,
James.