Discussion:
[j-nsp] LSP's with IPV6 on Juniper
craig washington
2018-08-27 16:39:44 UTC
Permalink
Hello all.

Wondering if anyone is using MPLS with IPV6?

I have read on 6PE and the vpn counterpart but these all seem to take into account that the CORE isn't running IPV6?

My question is how can we get the ACTUAL IPV6 loopback addresses into inet6.3 table? Would I need to do a rib import for directly connected?

If you run "ipv6-tunneling" this seems to only work if the next-hop is an IPV4 address. (next-hop self)

I also messed around with changing the next-hop on the v6 export policy to the IPV4 loopback and this works too but figured there should be a different way?

So overall, I am trying to find a way for v6 routes to use the same LSP's as v4 without changing the next hop to a v4 address.

Hope this makes sense 😊


Any feedback is much appreciated.

_______________________________________________
juniper-nsp mailing list juniper-***@puck.nether.net
https://puck.nether
Olivier Benghozi
2018-08-27 16:57:05 UTC
Permalink
In global we have 6PE.
In VRF we have 6VPE.
Just works so far.

And yes, the MPLS control-plane uses only IPv4: the interconnects between routers are IPv4, LDP uses IPv4, the IGP uses IPv4, and IPv6 is announced over specific AFI/SAFIs (labeled-unicast IPv6 for 6PE, VPNv6 for 6VPE) in IPv4 MP-iBGP sessions; but it doesn't matter.

Of course the actual IPv6 loopbacks won't go into inet6.3, since they are not used to resolve the routes (you will see your IPv4 loopbacks mapped into IPv6 addressing). The next-hops are IPv4, but again, it doesn't matter; only the result matters: it works :)

You don't explicitly "change" the next-hop of IPv6 using policies; you just use next-hop self and that's it.
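A minimal sketch of that on each PE (group names, addresses, interface and policy name are hypothetical; adapt to your own iBGP design):

```
protocols {
    bgp {
        group ibgp-v4 {
            type internal;
            local-address 192.0.2.1;    /* IPv4 loopback */
            family inet {
                unicast;
            }
            family inet6 {
                labeled-unicast {
                    explicit-null;
                }
            }
            export nhs;                 /* next-hop self */
            neighbor 192.0.2.2;
        }
    }
    mpls {
        ipv6-tunneling;    /* copies inet.3 into inet6.3 as ::ffff:a.b.c.d entries */
    }
}
policy-options {
    policy-statement nhs {
        then {
            next-hop self;
        }
    }
}
```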

> On 27 August 2018 at 18:39, craig washington <***@hotmail.com> wrote:
>
> Hello all.
>
> Wondering if anyone is using MPLS with IPV6?
>
> I have read on 6PE and the vpn counterpart but these all seem to take into account that the CORE isn't running IPV6?
>
> My question is how can we get the ACTUAL IPV6 loopback addresses into inet6.3 table? Would I need to do a rib import for directly connected?
>
> If you run "ipv6-tunneling" this seems to only work if the next-hop is an IPV4 address. (next-hop self)
>
> I also messed around with changing the next-hop on the v6 export policy to the IPV4 loopback and this works too but figured there should be a different way?
>
> So overall, I am trying to find a way for v6 routes to use the same LSP's as v4 without changing the next hop to a v4 address.
>
> Hope this makes sense 😊
>
>
> Any feedback is much appreciated.
>
Tobias Heister
2018-08-27 21:05:15 UTC
Permalink
Hi,

Am 27.08.2018 um 18:57 schrieb Olivier Benghozi:
> An yes, the MPLS control-plane uses only IPv4: (the intercos between routers are in IPv4, LDP uses IPv4, IGP uses IPv4, and IPv6 is really announced over specific AFI/SAFI (labeled unicast IPv6 for 6PE, VPNv6 for 6VPE) in IPv4 MP-iBGP sessions ; but it doesn't matter.

There has been LDPv6 on Juniper for a couple of releases (I believe since 16.x):
https://www.juniper.net/documentation/en_US/junos/topics/task/configuration/configuring-ldp-native-ipv6-support.html

So in theory you could run an IPv6-only control plane (IGP, LDP, BGP) with an MPLS data plane. As there is no 4PE, you either need to do v4 in an MPLS VPN/VRF or run v4 and v6 control planes in parallel to get v4 across.
There is no v6 RSVP yet, but you might get your TE needs from SR/SPRING.
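A hedged sketch only (stanza names as per that doc page; LSR-IDs and interface are hypothetical) of what enabling LDP over IPv6 transport looks like:

```
protocols {
    ldp {
        interface ge-0/0/0.0;
        dual-transport {                /* speak LDP over both v4 and v6 transport */
            inet-lsr-id 192.0.2.1;
            inet6-lsr-id 192.0.2.2;
        }
        transport-preference ipv6;      /* prefer the IPv6 LDP session */
    }
}
```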

--
Kind Regards
Tobias Heister
craig washington
2018-08-28 15:25:25 UTC
Permalink
Thanks everyone for the feedback.

We are running RSVP so LDPv6 won't currently be an option.

I'll keep digging around, again, thank you everyone for the feedback.



Mark Tinka
2018-08-29 08:55:14 UTC
Permalink
On 28/Aug/18 17:25, craig washington wrote:

> Thanks everyone for the feedback.
>
> We are running RSVP so LDPv6 won't currently be an option.
>
> I'll keep digging around, again, thank you everyone for the feedback.
>

You can run multiple label distribution protocols.

Mark.
Mark Tinka
2018-08-27 21:20:00 UTC
Permalink
On 27/Aug/18 18:39, craig washington wrote:

> Hello all.
>
> Wondering if anyone is using MPLS with IPV6?
>
> I have read on 6PE and the vpn counterpart but these all seem to take into account that the CORE isn't running IPV6?
>
> My question is how can we get the ACTUAL IPV6 loopback addresses into inet6.3 table? Would I need to do a rib import for directly connected?
>
> If you run "ipv6-tunneling" this seems to only work if the next-hop is an IPV4 address. (next-hop self)
>
> I also messed around with changing the next-hop on the v6 export policy to the IPV4 loopback and this works too but figured there should be a different way?
>
> So overall, I am trying to find a way for v6 routes to use the same LSP's as v4 without changing the next hop to a v4 address.

LDPv6 is your friend.

We have a dual-vendor network with varying levels of LDPv6 support, so
we haven't tested this in the real wild.

Mark.
Minto Mascarenhas
2018-08-28 00:05:12 UTC
Permalink
IGP shortcuts are another option for RSVP LSPs:
https://www.juniper.net/documentation/en_US/junos/topics/reference/configuration-statement/shortcuts-edit-protocols-isis.html
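As a sketch (per that page), installing RSVP LSPs as IS-IS shortcuts for IPv6, so the LSPs land in inet6.3 and v6 routes can resolve over them, would look something like:

```
protocols {
    isis {
        traffic-engineering {
            family inet6 {
                shortcuts;    /* resolve IPv6 routes over RSVP LSPs via inet6.3 */
            }
        }
    }
}
```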


-minto

Mark Tinka
2018-08-28 05:42:43 UTC
Permalink
On 28/Aug/18 02:05, Minto Mascarenhas wrote:

>
> shortcut is another option for rsvp lsps. 
> https://www.juniper.net/documentation/en_US/junos/topics/reference/configuration-statement/shortcuts-edit-protocols-isis.html
>

I believe this would still rely on an IPv4 underlay.

The OP is looking for a native control and data plane for MPLS in IPv6.

Mark.
a***@netconsultings.com
2018-08-28 15:58:22 UTC
Permalink
> Of craig washington
> Sent: Monday, August 27, 2018 5:40 PM
>
> Hello all.
>
> Wondering if anyone is using MPLS with IPV6?
>
> I have read on 6PE and the vpn counterpart but these all seem to take into
> account that the CORE isn't running IPV6?
>
> My question is how can we get the ACTUAL IPV6 loopback addresses into
> inet6.3 table? Would I need to do a rib import for directly connected?
>
> If you run "ipv6-tunneling" this seems to only work if the next-hop is an IPV4
> address. (next-hop self)
>
> I also messed around with changing the next-hop on the v6 export policy to
> the IPV4 loopback and this works too but figured there should be a different
> way?
>
> So overall, I am trying to find a way for v6 routes to use the same LSP's as v4
> without changing the next hop to a v4 address.
>
I'm not aware of any v6 extensions for RSVP.

Just out of curiosity, is there a business problem/requirement/limitation you're trying to solve by not changing the next hop to a v6-mapped v4 address and using native v6 next-hops instead?
Please be aware that whatever solution you find will likely put you on the long tail of the usual deployment graph, with all its drawbacks.
By contrast, 6PE/6VPE is a well-trodden path.

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::


Rob Foehl
2018-08-29 05:14:12 UTC
Permalink
On Tue, 28 Aug 2018, ***@netconsultings.com wrote:

> Just out of curiosity is there a business problem/requirement/limitation you're trying to solve by not changing the next hop to v6 mapped v4 address and using native v6 NHs instead please?

I'd asked a similar question as the OP two weeks ago in the thread about
mixing v4 and v6 in the same BGP peer groups, after several responses
extolling the virtues of avoiding any conflation between the two. If
that's the case for routing, but forwarding v6 in an entirely v4-dependent
manner on a 100% dual stack network is tolerable, then this inconsistency
is... inconsistent.

By all outward appearances, v6 is still a second class citizen when it
comes to TE, and it doesn't seem unreasonable to ask why this is the way
it is in 2018. There are plenty of valid reasons for wanting parity.

> On contrary 6PE/6VPE is such a well-trodden path.

The world is covered with well-trodden paths that have fallen into disuse
with the arrival of newer, better, more convenient infrastructure.

-Rob
Jared Mauch
2018-08-29 05:17:48 UTC
Permalink
> On Aug 29, 2018, at 1:14 AM, Rob Foehl <***@loonybin.net> wrote:
>
> On Tue, 28 Aug 2018, ***@netconsultings.com wrote:
>
>> Just out of curiosity is there a business problem/requirement/limitation you're trying to solve by not changing the next hop to v6 mapped v4 address and using native v6 NHs instead please?
>
> I'd asked a similar question as the OP two weeks ago in the thread about mixing v4 and v6 in the same BGP peer groups, after several responses extolling the virtues of avoiding any conflation between the two. If that's the case for routing, but forwarding v6 in an entirely v4-dependent manner on a 100% dual stack network is tolerable, then this inconsistency is... inconsistent.
>
> By all outward appearances, v6 is still a second class citizen when it comes to TE, and it doesn't seem unreasonable to ask why this is the way it is in 2018. There are plenty of valid reasons for wanting parity.
>
>> On contrary 6PE/6VPE is such a well-trodden path.
>
> The world is covered with well-trodden paths that have fallen into disuse with the arrival of newer, better, more convenient infrastructure.
>

Yes, I’m always reminding folks that router-id may be well known to be the same integer representation of your IP address in the protocol encoding, but often it’s not a requirement.

I would like to see some of the gaps closed that prevent me from having an IPv6 loopback in my BGP OPEN message, but then again, I could just use the integer value of the serial number of my router instead.

- Jared

craig washington
2018-08-29 13:10:15 UTC
Permalink
No, there isn't a particular business problem or requirement.

When I set this up in the lab (logical systems) and followed the Juniper documentation for setting up 6PE, the IPv6 prefixes didn't resolve to LSPs.

The documentation says to add labeled unicast with explicit null and tunneling.

I had 2 groups, one for v4 and one for v6. I added the commands to v4 group and didn't see a change.

I removed it all and tried adding it to v6 group and no change.

The only way I got it to work was with MPLS ipv6-tunneling and, on the export policy for the v6 group, changing the next hop from self to the v4 address of the advertising PE.

A side note, I also got it to work by adding static routes to the inet6.3 table but that's not feasible.

It is entirely possible I did something wrong, but I went back through and only saw those commands that were needed, so I figured it had something to do with the CORE not being just IPv4; that's why I just changed the next hop to the IPv4 address of the PE.
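The export-policy change was along these lines (a sketch; policy and term names and the address are from my lab, i.e. hypothetical):

```
policy-options {
    policy-statement v6-export {
        term set-v4-nh {
            from family inet6;
            then {
                next-hop 192.0.2.1;   /* IPv4 loopback of the advertising PE */
                accept;
            }
        }
    }
}
```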

I am aware I can run multiple protocols but sometimes with my current employer a lot of things are rush jobs so I didn't want to roll out LDP.


Thanks again for taking time to read through all my convoluted babble 😊


Olivier Benghozi
2018-08-29 14:08:11 UTC
Permalink
For 6PE you have to:
- delete the iBGP IPv6 groups
- add family inet6 labeled-unicast explicit-null to the IPv4 iBGP groups
- add ipv6-tunneling to protocols mpls
- make sure your IGP is not advertising IPv6 addresses

This is the way it's configured, with either RSVP-TE or LDP.
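In set form, those steps might look like this (group names hypothetical):

```
delete protocols bgp group ibgp-v6
set protocols bgp group ibgp-v4 family inet6 labeled-unicast explicit-null
set protocols mpls ipv6-tunneling
# and remove the IPv6 interfaces/addresses from the IGP advertisements
```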

> On 29 August 2018 at 15:10, craig washington <***@hotmail.com> wrote:
>
> When I set this up in the lab (logical systems) and followed the Juniper documentation for setting up 6PE the IPV6 prefixes didn't resolve to LSP's.
> The documentation says to add labeled unicast with explicit null and tunneling.
> I had 2 groups, one for v4 and one for v6. I added the commands to v4 group and didn't see a change.
> I removed it all and tried adding it to v6 group and no change.
> The only way I got it to work was with mpls tunneling for v6 and on the export policy for the v6 group I changed the next hop from self to the v4 address of the advertising PE.

craig washington
2018-08-29 14:55:42 UTC
Permalink
Yea, I figured I would need to do something along those lines.

Our CORE is currently dual-stacked and running OSPF for v4 and v6 with prefixes being advertised, so that was a little more work than desired.

So my fix was leaving everything as is and just changing the next-hop from self to the IPv4 address of the advertising PE under the v6 group, which is basically what would be happening anyway if I deleted the groups and added everything to the v4 group.


My overall goal was to try to get IPv6 prefixes to use the same LSP's as their IPv4 counterparts with as little trouble as possible. (not adding new protocols or changing existing protocols if possible)

Simplest way I found was just changing the next hop. Everything worked as expected when that was done.


I just didn't know if there was anything else anyone else was doing or if anyone came across a similar situation.

I now know about LDPv6, which will be looked into and I still have some research to do on SPRING.


Andrey Kostin
2018-08-29 20:40:12 UTC
Permalink
Hi Craig,

Recently I asked exactly the same question in this list: how legit is it
to not use "family inet6 labeled-unicast explicit-null" but instead just
change the next-hop to an IPv4 address for the IPv6 BGP session. After
some discussion I was pointed to RFC 4798, which states:

   The 6PE routers MUST exchange the IPv6 prefixes over MP-BGP sessions
   as per [RFC2545] running over IPv4.  The MP-BGP Address Family
   Identifier (AFI) used MUST be IPv6 (value 2).  In doing so, the 6PE
   routers convey their IPv4 address as the BGP Next Hop for the
   advertised IPv6 prefixes.  The IPv4 address of the egress 6PE router
   MUST be encoded as an IPv4-mapped IPv6 address in the BGP Next Hop
   field.  This encoding is consistent with the definition of an
   IPv4-mapped IPv6 address in [RFC4291] as an "address type used to
   represent the address of IPv4 nodes as IPv6 addresses".

This is not exactly how it works in our case, because the next sentence
states that a label MUST be provided for such prefixes:

   In addition, the 6PE MUST bind a label to the IPv6 prefix as per
   [RFC3107].  The Subsequent Address Family Identifier (SAFI) used in
   MP-BGP MUST be the "label" SAFI (value 4) as defined in [RFC3107].

For an IPv6 BGP session the AFI/SAFI is 2/1 instead of 2/4 as per the
RFC; however, it works.
Just for the record, possible AFI/SAFI combinations can be found here:
https://www.juniper.net/documentation/en_US/junos/topics/usage-guidelines/routing-enabling-multiprotocol-bgp.html

The following example makes me think that if an IPv6 unicast session is
configured between mapped IPv4 addresses, it may work without any
next-hop tooling, and traffic will use MPLS tunnels if they exist:
https://www.juniper.net/documentation/en_US/junos/topics/example/bgp-ipv6.html

You are probably also aware that you have to run IPv6 in the core,
because the explicit-null label is not assigned in this case and you
need family inet6 on the ingress interface of the egress PE. As long as
this condition is met it works; no caveats or issues found so far.
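A quick way to check (illustrative commands; prefixes hypothetical): with ipv6-tunneling enabled, the remote PE loopbacks should appear in inet6.3 as IPv4-mapped entries, and the v6 BGP routes should resolve over them:

```
show route table inet6.3 ::ffff:192.0.2.2/128
show route table inet6.0 2001:db8::/32 protocol bgp detail
```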

craig washington wrote on 29.08.2018 10:55:

> So my fix was leaving everything as is and just changing the next-hop
> from self to the IPv4 address of the advertising PE under the v6
> group, which is basically what would be happening anyway if I deleted
> the groups and added everything to the v4 group.
>
> My overall goal was to try to get IPv6 prefixes to use the same LSP's
> as their IPv4 counterparts with as little trouble as possible. (not
> adding new protocols or changing existing protocols if possible)
>
> Simplest way I found was just changing the next hop. Everything
> worked as expected when that was done.
>
> I just didn't know if there was anything else anyone else was doing
> or if anyone came across a similar situation.
>


--
Kind regards,
Andrey Kostin

Mark Tinka
2018-09-03 14:21:45 UTC
Permalink
I was cruising through the Junos 17.4 release notes, and found this:

*****

IPv6 next-hop support for static egress LSPs (MX Series)—Starting in
Junos OS Release 17.4R1, static LSPs on the egress router can be
configured with IPv6 as the next-hop address for forwarding IPv6
traffic. Previously, only IPv4 static LSPs were supported. The IPv6
static LSPs share the same transit, bypass, and static LSP features of
IPv4 static LSPs.

A commit failure occurs when the next-hop address and destination
address of the static LSP do not belong to the same address family (IPv4
or IPv6).

*****

Could be interesting for the OP.
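A hedged sketch of what that might look like on the egress router (LSP name, label value, and next-hop are hypothetical; check the 17.4 docs for the exact stanza):

```
protocols {
    mpls {
        static-label-switched-path v6-static {
            egress {
                incoming-label 1000010;
                next-hop 2001:db8::2;    /* IPv6 next-hop, new in 17.4R1 */
            }
        }
    }
}
```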

Mark.
heasley
2018-08-29 14:28:50 UTC
Permalink
Wed, Aug 29, 2018 at 01:17:48AM -0400, Jared Mauch:
> Yes, I’m always reminding folks that router-id may be well known to be the same integer representation of your IP address in the protocol encoding, but often it’s not a requirement.

It's not; the reverse, actually. It is just a 32-bit integer, though many
platforms take the value from the loopback address and/or can be configured
from an IPv4-address-formatted integer. So there is nothing to be changed
in the protocol.
Jared Mauch
2018-08-29 14:34:59 UTC
Permalink
> On Aug 29, 2018, at 10:28 AM, heasley <***@shrubbery.net> wrote:
>
> Wed, Aug 29, 2018 at 01:17:48AM -0400, Jared Mauch:
>> Yes, I’m always reminding folks that router-id may be well known to be the same integer representation of your IP address in the protocol encoding, but often it’s not a requirement.
>
> its not; the reverse actually. it is just a 32 bit integer, though many
> platforms take the value from the loopback address and/or can be configured
> from an ipv4 address formatted integer. So, there is nothing to be changed
> in the protocol.


We’re saying the same thing, it’s just if you’re doing inet_ntoa/inet_addr for your presentation or configuration layer.

- Jared
Saku Ytti
2018-08-29 07:39:52 UTC
Permalink
On Wed, 29 Aug 2018 at 08:15, Rob Foehl <***@loonybin.net> wrote:

> The world is covered with well-trodden paths that have fallen into disuse
> with the arrival of newer, better, more convenient infrastructure.

That newer and better is Segment Routing, it does IPv6.

https://www.juniper.net/documentation/en_US/junos/topics/example/example-configuring-spring-srgb.html

Personally I do not care about having a dual control plane; it adds
cost, time, complexity, and scale, and reduces availability. I'm perfectly
happy decoupling sold products from internal technical signalling
and using IPv4-only signalling + 6(V)PE for services, up until I forklift
to IPv6-only signalling and 4(V)PE.
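From the linked SRGB example, the relevant knobs look roughly like this (label range and node indices are hypothetical):

```
protocols {
    isis {
        source-packet-routing {
            srgb start-label 80000 index-range 5000;
            node-segment {
                ipv4-index 101;
                ipv6-index 1101;    /* separate node SID for the IPv6 loopback */
            }
        }
    }
}
```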

--
++ytti
Mark Tinka
2018-08-29 08:58:27 UTC
Permalink
On 29/Aug/18 09:39, Saku Ytti wrote:

> That newer and better is Segment Routing, it does IPv6.
>
> https://www.juniper.net/documentation/en_US/junos/topics/example/example-configuring-spring-srgb.html

And we had this discussion on this and other lists over the past couple
of months, where it's not yet very clear whether MPLSv6 courtesy of SR
actually works in the data plane.


> Personally I do not care about having dual-plane control-plane, adds
> cost, time, complexity, scale and reduces availability. I'm perfectly
> happy with decoupling sold products from internal technical signalling
> and using IPv4 only signalling + 6(V)PE for service up until I fork
> lift to IPv6 only signalling and 4(V)PE.

And I think there is nothing wrong with that if that is your design choice.

I prefer native but independent MPLS forwarding between IPv4 and IPv6,
to reduce the scope of fate-sharing as much as possible. But yes, that's
just my design choice.

Mark.
a***@netconsultings.com
2018-08-29 18:41:25 UTC
Permalink
Hi Rob,
Some interesting points you raised indeed,


> Of Rob Foehl
> Sent: Wednesday, August 29, 2018 6:14 AM
>
> On Tue, 28 Aug 2018, ***@netconsultings.com wrote:
>
> > Just out of curiosity is there a business problem/requirement/limitation
> > you're trying to solve by not changing the next hop to v6 mapped v4
> > address and using native v6 NHs instead please?
>
> I'd asked a similar question as the OP two weeks ago in the thread about
> mixing v4 and v6 in the same BGP peer groups, after several responses
> extolling the virtues of avoiding any conflation between the two. If
> that's the case for routing, but forwarding v6 in an entirely
> v4-dependent manner on a 100% dual stack network is tolerable, then this
> inconsistency is... inconsistent.
>
It's a slippery slope, this separation business: what separation is
sufficient? Separate LDP, or IGP, or even BGP for v6?
I guess the key to striking the balance between separation and convergence
is probability.

Let me explain.
Let's divide the routing information carried by a typical NSP network into
three sets, imagined as three concentric circles of different sizes:
the outermost circle represents the internet routes, the one inside it
represents customer routes, and the centre circle represents
infrastructure routes.
Routes from the outermost layer representing the internet has the highest
probability of screwing your network.
In most cases the outer layer of internet routes is vast in comparison with
the other two but there are few cases where its dwarfed by the customer
routes layer.
But the point is the number of routes doesn't matter it's the number of
routing information sources -the higher the number the higher the
probability that someone somewhere will screw up and that mess ends up in
your as well as everyone else's BGP and when that happens you want as little
collateral damage as possible thus separation is the means to reduce the
fallout.
And that's not only separation of this layer from the other two layers but
also various separations within the layer itself.

The second layer represents customer routes; some NSPs have millions of
customer routes compared to "just" ~700k internet routes.
Usually the majority of these routes are originated from managed CPEs or
LNSes etc., meaning the routing information sources are under the control
of the provider.
(Yes, if you are an ISP the majority of your customer routes fall into the
internet routes layer, and there are also wires-only services where the
customer manages the CPE.)
But the point again is that, irrespective of the number of routes in this
layer, the number of routing information sources is always smaller than in
the internet routes layer.
As a result, the lower probability of something bad happening naturally
results in a lower incentive to invest the time and effort to mitigate
potential fallout.

The inner circle is composed solely of infrastructure routes; this should
be a very sterile environment.
The main point is that there's just one entity responsible for introducing
all routes in this layer.
This is the layer where simplicity means robustness: the probability of,
say, a malformed IPv4 TLV bringing down ISIS for both v6 and v4 is so
extremely low that running a separate IGP/MPLS protocol stack for v6 and
v4 is just added complexity, which is itself asking for trouble.


> By all outward appearances, v6 is still a second class citizen when it
> comes to TE, and it doesn't seem unreasonable to ask why this is the way
> it is in 2018. There are plenty of valid reasons for wanting parity.
>
Personally I'd vote against IPv6 support for the existing RSVP-TE; the
protocol has been around for ages with no new major features added, and
therefore all the implementations are very stable.
I'd vote for a separate protocol altogether that can be enabled alongside
RSVP-TE (SR, for instance).

> > On contrary 6PE/6VPE is such a well-trodden path.
>
> The world is covered with well-trodden paths that have fallen into disuse
> with the arrival of newer, better, more convenient infrastructure.
>
I wish you were right, but look at the DC folks and all that madness
around VXLAN, or now EVPN over VXLAN. There are headers that can actually
be stacked (like the MPLS or IPv6 headers) which could solve all their
problems while keeping my life simpler when creating DCI solutions.


adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::


Mark Tinka
2018-08-30 09:09:17 UTC
Permalink
On 29/Aug/18 20:41, ***@netconsultings.com wrote:

> Personally I'd vote against IPv6 support for existing RSVP-TE, the protocol
> has been around for ages with no new major features added and therefore all
> the implementations are very stable,
> I'd vote for a separate protocol altogether that can be enabled alongside
> the RSVP-TE (SR for instance).
While we don't use RSVP in our network, I don't think adding IPv6
support to it would be a problem. Other protocols do it all the time;
take BFD, for example.

Rather than introduce a new protocol that someone has to learn, get
support for across the board, etc., adding IPv6 to an existing protocol
is low overhead to me. As long as the operator has the on/off switch for
IPv6, there isn't much else he's learning. He already knows that IPv6 is
IPv4 with bigger numbers :-).

Mark.
Mark Tinka
2018-08-29 08:55:52 UTC
Permalink
On 28/Aug/18 17:58, ***@netconsultings.com wrote:

>>
> I'm not aware of any v6 extensions for RSVP.

TTBOMK, no.

Mark.