Discussion:
[j-nsp] 6PE without family inet6 labeled-unicast
Andrey Kostin
2018-07-20 19:58:56 UTC
Hello juniper-nsp,

I've accidentally encountered an interesting behavior and am wondering
whether anyone has seen it before, or whether it's documented somewhere.
A pointer to the docs would be appreciated.

The story:
We began activating IPv6 for customers connected via a cable network
after the cable provider eventually added IPv6 support. We receive
prefixes from the cable network via eBGP and then redistribute them
inside our AS with iBGP. Two PEs are connected to the cable network and
receive the same prefixes, so for traffic load-balancing we change the
next-hop to an anycast loopback address shared by those two PEs and use
dedicated LSPs to that IP, with "no-install" for the real PE loopback
addresses.
IPv6 wasn't meant to use MPLS, and the existing plain iBGP sessions
between IPv6 addresses with family inet6 unicast were supposed to be
reused. However, the same export policy, with a term that changes the
next-hop for a specific community, is used for both family inet and
inet6, so it implicitly started to assign an IPv4 next-hop to IPv6
prefixes.

Here is an example with one prefix.

## Here the PE receives the prefix from an eBGP neighbor:

***@re1.agg01.LLL2> show route XXXX:XXXX:e1bc::/46

inet6.0: 52939 destinations, 105912 routes (52920 active, 1 holddown, 24 hidden)
+ = Active Route, - = Last Active, * = Both

XXXX:XXXX:e1bc::/46*[BGP/170] 5d 13:16:26, MED 100, localpref 100
AS path: EEEE I, validation-state: unverified
to XXXX:XXXX:ffff:f200:0:2:2:2 via ae2.202
## Now the PE advertises it to an iBGP neighbor with the next-hop changed to a plain IPv4 address:
***@re1.agg01.LLL2> show route XXXX:XXXX:e1bc::/46 advertising-protocol bgp XXXX:XXXX:1::1:140

inet6.0: 52907 destinations, 105843 routes (52883 active, 6 holddown, 24 hidden)
  Prefix                  Nexthop              MED     Lclpref    AS path
* XXXX:XXXX:e1bc::/46     YYY.YYY.155.141      100     100        EEEE I

## Same output as above, with details
{master}
***@re1.agg01.LLL2> show route XXXX:XXXX:e1bc::/46 advertising-protocol bgp XXXX:XXXX:1::1:140 detail   ## Session is between v6 addresses

inet6.0: 52902 destinations, 105836 routes (52881 active, 3 holddown, 24 hidden)
* XXXX:XXXX:e1bc::/46 (3 entries, 1 announced)
 BGP group internal-v6 type Internal
     Nexthop: YYY.YYY.155.141   ## v6 prefix advertised with plain v4 next-hop
     Flags: Nexthop Change
     MED: 100
     Localpref: 100
     AS path: [IIII] EEEE I
     Communities: IIII:10102 no-export


## The iBGP neighbor receives the prefix with the rewritten next-hop and uses established LSPs to forward traffic:
***@re0.bdr01.LLL> show route XXXX:XXXX:e1bc::/46

inet6.0: 52955 destinations, 323835 routes (52877 active, 10 holddown, 79 hidden)
+ = Active Route, - = Last Active, * = Both


XXXX:XXXX:e1bc::/46*[BGP/170] 5d 13:01:12, MED 100, localpref 100, from XXXX:XXXX:1::1:240
    AS path: EEEE I, validation-state: unverified
    to YYY.YYY.155.14 via ae1.0, label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL2-1
    to YYY.YYY.155.9 via ae12.0, label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL2-2
    to YYY.YYY.155.95 via ae4.0, label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL-1
    to YYY.YYY.155.9 via ae12.0, label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL-2

***@re0.bdr01.LLL> show route XXXX:XXXX:e1bc::/46 detail | match "Protocol|XXXX:XXXX|BE-"
XXXX:XXXX:e1bc::/46 (3 entries, 1 announced)
        Source: XXXX:XXXX:1::1:240
        Label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL2-1
        Label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL2-2
        Label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL-1
        Label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL-2
        Protocol next hop: ::ffff:YYY.YYY.155.141   ### The IPv4 next-hop appears to have been converted to an IPv4-mapped IPv6 address
        Task: BGP_IIII.XXXX:XXXX:1::1:240
        Source: XXXX:XXXX:1::7

## The policy assigning the next-hop is the same for v4 and v6 sessions; only one term is shown:
***@re1.agg01.LLL2> show configuration protocols bgp group internal-v4 export
export [ deny-rfc3330 to-bgp ];

{master}
***@re1.agg01.LLL2> show configuration protocols bgp group internal-v6 export
export [ deny-rfc3330 to-bgp ];


***@re1.agg01.LLL2> show configuration policy-options policy-statement to-bgp | display inheritance no-comments
term vvvv-vvvv {
    from {
        community vvvv-vvvv;
        tag 33;
    }
    then {
        next-hop YYY.YYY.155.141;
        accept;
    }
}
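
For reference, a hedged sketch of one possible fix (untested; the v6
next-hop address below is a placeholder, not our real addressing):
restricting the existing term to family inet and adding a parallel
inet6 term with a native IPv6 next-hop would avoid the implicit
IPv4-mapped next-hop on v6 routes:

```
## Hypothetical variant of the to-bgp policy
term vvvv-vvvv {
    from {
        family inet;              ## match v4 routes only
        community vvvv-vvvv;
        tag 33;
    }
    then {
        next-hop YYY.YYY.155.141;
        accept;
    }
}
term vvvv-vvvv-v6 {
    from {
        family inet6;             ## v6 routes get a native v6 next-hop
        community vvvv-vvvv;
        tag 33;
    }
    then {
        next-hop XXXX:XXXX::141;  ## placeholder anycast v6 loopback
        accept;
    }
}
```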


***@re0.bdr01.LLL> show route forwarding-table destination XXXX:XXXX:e1bc::/46
Routing table: default.inet6
Internet6:
Destination          Type RtRef Next hop          Type    Index NhRef Netif
XXXX:XXXX:e1bc::/46  user     0                   indr  1049181    37
                                                  ulst  1050092     4
                               YYY.YYY.155.14     ucst     1775     1 ae1.0
                               YYY.YYY.155.9    Push 486887 1859     1 ae12.0
                               YYY.YYY.155.95     ucst     2380     1 ae4.0
                               YYY.YYY.155.9    Push 486892 2555     1 ae12.0

The result is that we have IPv6 traffic forwarded via MPLS without 6PE
being configured properly: ipv6-tunneling is configured under
"protocols mpls", but there is no "family inet6 labeled-unicast
explicit-null" under the v4 iBGP session.
It works as long as we have v6 enabled on all MPLS links, so packets
are not dropped because of the implicit-null label.
Looks sketchy, but it works. Has anybody seen/used it before?
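
For comparison, a minimal sketch of what a properly configured 6PE
setup would add on our side (assuming Junos; the group name is taken
from the outputs above, placement per the standard hierarchy):

```
## Already present in our setup:
protocols {
    mpls {
        ipv6-tunneling;
    }
}
## The missing piece for proper 6PE: carry labeled v6 routes over the
## v4 iBGP session and use IPv6 explicit null (label 2) so the
## penultimate hop never exposes a bare v6 packet:
protocols {
    bgp {
        group internal-v4 {
            family inet6 {
                labeled-unicast {
                    explicit-null;
                }
            }
        }
    }
}
```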
--
Kind regards,

Andrey Kostin
_______________________________________________
juniper-nsp mailing list juniper-***@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
Dan Peachey
2018-07-20 20:40:26 UTC
Hi,

Presumably the penultimate LSR has the IPv6 iBGP routes, i.e. it knows
how to route IPv6 traffic to the destination? The last LSR->LER hop
should just be IPv6-routed in that case.

I've noticed this behaviour before whilst playing with 6PE on lab
devices. It would of course break if you were running an IPv4-only core.

Cheers,

Dan
Pedro Marques Antunes via juniper-nsp
2018-07-21 09:36:28 UTC
The IPv6 explicit null label works as an overlay for the IPv6 traffic
carried over the core.

In a PHP scenario, the penultimate LSR still forwards based on the
received MPLS transport label, so I do not think it requires an IPv6
routing table. However, it still needs to forward IPv6 packets. That is
not a problem with recent routers, but it might have been a problem in
the days when you could have devices without any IPv6 capabilities. On
Junos boxes, though, `family inet6` is still required on the egress
interface.

In a UHP scenario, the penultimate LSR is expected to forward the
packet with the IPv4 explicit null label (0), but that cannot be used
with an IPv6 packet. The overlay is mandatory in such a scenario.
--
Pedro Marques Antunes
Andrey Kostin
2018-07-22 19:28:58 UTC
Hi Pedro,

Thanks for your comment. I agree with you that the penultimate LSR
forwards traffic based on the received label without an IPv6 lookup. In
my scenario, default PHP is used and all routers have family inet6
configured, so it just works.
Andrey Kostin
2018-07-22 19:22:51 UTC
Hi Dan,

Thanks for answering. All routers have family inet6 configured on all
participating interfaces, because other v6 traffic is forwarded without
MPLS, so we are safe on that front.


Kind regards,
Andrey
Dan Peachey
2018-07-22 19:34:37 UTC
Hey,

Other posters are correct... IPv6 routes are not required; forwarding
is based on the received label, as the next hop is already known. I
confused myself because I was distributing a full IPv6 table in my lab
testing and assumed it was working because of that, but that's not the
case.

Cheers,

Dan
Pavel Lunin
2018-07-21 10:44:58 UTC
In this setup it's not 6PE but just classic IP over MPLS, where vanilla
inet/inet6 iBGP resolves its protocol next-hop via a labeled LDP/RSVP
forwarding next-hop.

It works much the same way for v6 as for v4, except that the v6 header
is exposed to the last P router when it performs PHP. That router still
relies on MPLS to make the forwarding decision (if we don't take the
hashing story into account), but it "sees" the v6 header when it puts
the packet onto the wire and needs to treat it accordingly, e.g. it
must set the v6 ethertype, or decide what to do if the egress interface
MTU can't accommodate the packet.

So you need family inet6 enabled on the egress interface of the
penultimate LSR to make IPv6 over MPLS work.

6PE was invented to work around this. Technically it's the same IPv6
over MPLS, but with an explicit (as opposed to implicit) null label at
the tail end, which hides the v6 header from the penultimate LSR. Or
you can just disable PHP in the core.
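
The two alternatives described above could be sketched roughly like
this (assuming Junos; untested, placement per the standard config
hierarchy):

```
## Alternative 1: proper 6PE, i.e. labeled v6 routes over the v4 iBGP
## session with IPv6 explicit null at the tail end:
protocols bgp group internal-v4 {
    family inet6 {
        labeled-unicast {
            explicit-null;
        }
    }
}
## Alternative 2: have egress routers advertise explicit null instead
## of implicit null, effectively disabling PHP in the core:
protocols mpls {
    explicit-null;
}
```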



Cheers,
Pavel
Andrey Kostin
2018-07-22 19:45:34 UTC
Hi Pavel,

Thanks for replying. I understand how it works as long as a proper
next-hop is present in the route. What caught my attention was the
implicit next-hop conversion from a pure IPv4 address to an IPv4-mapped
IPv6 next-hop: from "Nexthop: YYY.YYY.155.141" in the advertised route
to "Protocol next hop: ::ffff:YYY.YYY.155.141" in the received route.

Otherwise it all works as expected, considering that family inet6 is
enabled in the core.

I'm also wondering what would happen if no LSP were available, though
that's a rather unrealistic situation because everything would be
broken anyway in that case.

Kind regards,

Andrey

Pavel
Post by Pavel Lunin
In this setup it's not 6PE but just
classic IP over MPLS, where vanilla inet/inet6 iBGP resolves it's
protocol next-hop with a labeled LDP/RSVP forwarding next hop.
Post by Pavel Lunin
It
works much the same way for v6 as for v4, except that the v6 header is
exposed to the last P router, when it performs PHP. It still relies on
MPLS to make the forwarding decision (if we don't take into account the
hashing story), however it "sees" the v6 header when it puts it onto the
wire, and needs to treat it accordingly. E. g. it must set the v6
ethertype or decide what to do if the egress interface MTU can't
accommodate the packet.
Post by Pavel Lunin
So you need family inet6 enabled on the
egress interface of the penultimate LSR to make IPv6 over MPLS work.
6PE was invented to work around this. Technically it's the same IPv6
over MPLS but with an explicit (as opposed to implicit) null label at
the tail end, which hides the v6 header from the penultimate LSR. Or you
can just disable PHP in the core.
Post by Pavel Lunin
Cheers,
Pavel
Post by Andrey Kostin
Hello juniper-nsp,
I've accidentally encountered an interesting behavior and wondering if
anyone already seen it before or may be it's documented. So pointing to
the docs is appreciated.
We began to activate ipv6 for customers connected from cable network
after cable provider eventually added ipv6 support. We receive prefixes
from cable network via eBGP and then redistribute them inside our AS
with iBGP. There are two PE connected to cable network and receiving
same prefixes, so for traffic load-balancing we change next-hop to
anycast loopback address shared by those two PE and use dedicated LSPs
to that IP with "no-install" for real PE loopback addresses.
IPv6 wasn't deemed to use MPLS and existing plain iBGP sessions between
IPv6 addresses with family inet6 unicast were supposed to be reused.
However, the same export policy with term that changes next-hop for
specific community is used for both family inet and inet6, so it
started to assign IPv4 next-hop to IPv6 prefixes implicitly.
Here is the example of one prefix.

## here PE receives prefix from eBGP neighbor:
show route XXXX:XXXX:e1bc::/46

inet6.0: 52939 destinations, 105912 routes (52920 active, 1 holddown, 24 hidden)
+ = Active Route, - = Last Active, * = Both

XXXX:XXXX:e1bc::/46*[BGP/170] 5d 13:16:26, MED 100, localpref 100
        AS path: EEEE I, validation-state: unverified
        to XXXX:XXXX:ffff:f200:0:2:2:2 via ae2.202

## Now PE advertises it to iBGP neighbor with next-hop changed to plain IP:
show route XXXX:XXXX:e1bc::/46 advertising-protocol bgp XXXX:XXXX:1::1:140

inet6.0: 52907 destinations, 105843 routes (52883 active, 6 holddown, 24 hidden)
  Prefix                  Nexthop          MED     Lclpref    AS path
* XXXX:XXXX:e1bc::/46     YYY.YYY.155.141  100     100        EEEE I

## Same output as above with details. Session is between v6 addresses:
{master}
show route XXXX:XXXX:e1bc::/46 advertising-protocol bgp XXXX:XXXX:1::1:140 detail

inet6.0: 52902 destinations, 105836 routes (52881 active, 3 holddown, 24 hidden)
* XXXX:XXXX:e1bc::/46 (3 entries, 1 announced)
 BGP group internal-v6 type Internal
     Nexthop: YYY.YYY.155.141 ## v6 prefix advertised with plain v4 next-hop
     Flags: Nexthop Change
     MED: 100
     Localpref: 100
     AS path: [IIII] EEEE I
     Communities: IIII:10102 no-export

## iBGP neighbor receives prefix with tooled next hop and uses it:
show route XXXX:XXXX:e1bc::/46

inet6.0: 52955 destinations, 323835 routes (52877 active, 10 holddown, 79 hidden)
+ = Active Route, - = Last Active, * = Both

XXXX:XXXX:e1bc::/46*[BGP/170] 5d 13:01:12, MED 100, localpref 100, from XXXX:XXXX:1::1:240
        AS path: EEEE I, validation-state: unverified
        to YYY.YYY.155.14 via ae1.0, label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL2-1
        to YYY.YYY.155.9 via ae12.0, label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL2-2
        to YYY.YYY.155.95 via ae4.0, label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL-1
        to YYY.YYY.155.9 via ae12.0, label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL-2

show route XXXX:XXXX:e1bc::/46 detail | match "Protocol|XXXX:XXXX|BE-"
XXXX:XXXX:e1bc::/46 (3 entries, 1 announced)
                XXXX:XXXX:1::1:240
                Label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL2-1
                Label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL2-2
                Label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL-1
                Label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL-2
                Protocol next hop: ::ffff:YYY.YYY.155.141 ### Seems that IPv4 next hop has been converted to compatible form
                Task: BGP_IIII.XXXX:XXXX:1::1:240
                Source: XXXX:XXXX:1::7

## The policy assigning next-hop is the same for v4 and v6 sessions:
export [ deny-rfc3330 to-bgp ];
{master}
export [ deny-rfc3330 to-bgp ];

show configuration policy-options policy-statement to-bgp | display inheritance no-comments
term vvvv-vvvv {
    from {
        community vvvv-vvvv;
        tag 33;
    }
    then {
        next-hop YYY.YYY.155.141;
        accept;
    }
}

show route forwarding-table destination XXXX:XXXX:e1bc::/46
Routing table: default.inet6
Destination          Type RtRef Next hop        Type  Index    NhRef Netif
XXXX:XXXX:e1bc::/46  user     0                 indr  1049181     37
                                                ulst  1050092      4
                           YYY.YYY.155.14       ucst     1775      1 ae1.0
                           YYY.YYY.155.9  Push 486887    1859      1 ae12.0
                           YYY.YYY.155.95       ucst     2380      1 ae4.0
                           YYY.YYY.155.9  Push 486892    2555      1 ae12.0

The result is that we have IPv6 traffic forwarded via MPLS without 6PE
configured properly. ipv6-tunneling is configured under "protocols mpls"
but no "family inet6 labeled-unicast explicit-null" under v4 iBGP
session. It works as far as we have v6 enabled on all MPLS links, so
packets are not dropped because of implicit-null label. Looks sketchy
but it works. Has anybody seen/used it before?
--
Kind regards,
Andrey Kostin
_______________________________________________
juniper-nsp mailing list juniper-***@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
Pavel Lunin
2018-07-22 21:52:36 UTC
Permalink
Post by Andrey Kostin
Hi Pavel,
Thanks for replying. I understand how it works as soon as proper next-hop
is present in a route. My attention was attracted by implicit next-hop
conversion from pure IPv4 address to IPv4-mapped IPv6 next-hop from
"Nexthop: YYY.YYY.155.141" in the advertised route to
"Protocol next hop: ::ffff:YYY.YYY.155.141" in the received route.
This is normal. In order to announce an AFI/SAFI 2/1 update, you must have
an IPv6 next-hop. This is why it gets automatically converted. If you
enable BGP-LU, nothing will change in this regard; your next-hop address
will still be an IPv4-mapped IPv6 address. It will just be labeled.
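As a side note, the conversion described here produces a standard
IPv4-mapped IPv6 address (the ::ffff:a.b.c.d form from RFC 4291). A
minimal Python sketch of the mapping, using a documentation address in
place of the masked YYY.YYY.155.141 (the helper name is made up; nothing
here is Junos-specific):

```python
import ipaddress

def to_v6_next_hop(v4_next_hop: str) -> ipaddress.IPv6Address:
    """Map an IPv4 next hop to its IPv4-mapped IPv6 form, as needed
    before it can be carried in an AFI/SAFI 2/1 (IPv6 unicast) update."""
    return ipaddress.IPv6Address(f"::ffff:{v4_next_hop}")

nh = to_v6_next_hop("192.0.2.141")
# ipv4_mapped recovers the embedded IPv4 address from the mapped form
assert nh.ipv4_mapped == ipaddress.IPv4Address("192.0.2.141")
```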

The same thing happens when you perform next-hop-self (or when it's an
eBGP session) for an IPv6 route announced via an MP-BGP session over IPv4.

And ipv6-tunneling under the mpls stanza is what makes your LDP/RSVP
routes be leaked from inet.3 to inet6.3 with automatic v4-to-v6 mapping.
It's syntactic sugar; you can do the same with policies, explicitly
leaking inet.3 to inet6.3.

Post by Andrey Kostin
I'm also wondering what could happen is there are no LSP available, which
is rather unreal situation because everything will be broken anyway in
this case.
If no LSP/FEC is available for the v4-mapped IPv6 next-hop, you won't have
an LDP/RSVP route in inet.3, thus it won't be leaked to inet6.3. So your
BGP route will not be inactive because of the unreachable next-hop. And
no, it's not so unusual. You can easily have your IGP up and running, but
someone forgot to add MPLS on one of the core interfaces. So your BGP
session and routes are up, IGP works but there is no labeled next-hop in
inet.3.
Pavel Lunin
2018-07-22 21:55:57 UTC
Permalink
Errata
So your BGP route will not be inactive because of the unreachable
next-hop.

So your BGP route *will be* inactive because of the unreachable next-hop.
--
Pavel
Andrey Kostin
2018-07-24 03:52:46 UTC
Permalink
Hi Pavel,

Thanks for the details. Looks like it's all documented except the
next-hop conversion...

I guess that in "show route advertising-protocol" the address is shown
before conversion, because otherwise it would be invalid and could not
be announced...

Kind regards,

Andrey
Pavel Lunin
2018-07-24 12:55:10 UTC
Permalink
Post by Andrey Kostin
Looks like it's all documented except next-hop conversion...
It's RFC4798 which prescribes to do so. Juniper/Cisco docs mention this
behavior briefly (google://bgp ipv4 ipv6 mapped ffff) but they consider it
part of BGP itself, not specific to implementation / configuration, so not
their job to document ;) But I agree, it's a bit confusing, they could've
done better.
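The RFC 4798 behavior is easy to poke at outside the router, e.g. with
Python's ipaddress module (a standalone illustration of the mapped form
seen in the "Protocol next hop" output, with the masked YYY.YYY.155.141
replaced by a documentation address):

```python
import ipaddress

# The kind of protocol next hop the receiving iBGP neighbor showed
nh = ipaddress.IPv6Address("::ffff:192.0.2.141")

# ipv4_mapped extracts the embedded IPv4 address, i.e. the address the
# route actually resolves over in inet.3 / inet6.3
print(nh.ipv4_mapped)  # 192.0.2.141

# A native v6 next hop carries no embedded v4 address
print(ipaddress.IPv6Address("2001:db8::1").ipv4_mapped)  # None
```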
Aaron Gould
2018-07-30 17:37:30 UTC
Permalink
".... so for traffic load-balancing we change next-hop to anycast loopback
address shared by those two PE and use dedicated LSPs to that IP with
"no-install" for real PE loopback addresses"

Did you have to use this anycast method?... just wondering if BGP
multipathing would've worked in this case also... and if so, why was one
method chosen over the other.

-Aaron


Andrey Kostin
2018-07-31 15:36:43 UTC
Permalink
Hi Aaron,

Possibly it could, but it would definitely need to be checked and tested
for the possibility of unequal load-balancing. Since next-hop tooling is
required anyway to process those prefixes differently from the others
announced from the PEs, and traffic is actually sent via RSVP tunnels,
the outcome would probably just be replacing one kind of complexity with
another. It would be interesting to test, though.

BTW, thanks to all who replied! Looks like one of my previous messages
didn't reach the list.

Kind regards
Andrey
Post by Aaron Gould
".... so for traffic load-balancing we change next-hop to anycast loopback
address shared by those two PE and use dedicated LSPs to that IP with
"no-install" for real PE loopback addresses"
Did you have to use this anycast method?... just wondering if bgp
multipathing would've worked in this case also...and if so, why was one
method chose over the other
-Aaron
