Multiprotocol Label Switching (MPLS)
January 2, 2018
Introduction
I've been looking at MPLS, Multiprotocol Label Switching, over the last couple of years. Cisco has some pretty good slideware on the topic, but there have been some things I wanted to drill down on and understand better. I can't say for sure that I'm all the way there yet.
The IETF working group information (and list of related draft documents) for MPLS can be found at:
For an overview of MPLS, see also:
I found both of these fairly readable. But then, I've seen enough Cisco slideware that I have in my head some basic pictures and ideas of MPLS.
What is MPLS?
MPLS stands for Multiprotocol Label Switching. Multiprotocol because it might be applied with any Layer 3 network protocol, although almost all of the interest is in using MPLS with IP traffic. But that doesn't actually give us any idea what MPLS does for you (we'll get to that momentarily).
Depending on which vendor you ask, MPLS is the solution to any problem you might conceivably have. So the question "What is MPLS" could have a lot of right answers. The presentations from this Spring's MPLS Forum were all over the place on precisely this.
For me, MPLS is about gluing connectionless IP to connection-oriented networks. The IETF draft documents refer to this as the "shim layer", the idea that MPLS is something between Layer 2 and Layer 3 that makes them fit better (and perhaps carries a small amount of information to help that better fit).
MPLS started out as Tag Switching. Ipsilon (remember them?) was the company that got the buzz started, with its IP Switching; Cisco's answering Tag Switching is what evolved into MPLS. Back then, there were perhaps two key insights. One was that there is no reason an ATM switch can't have a router inside it (or a router have ATM switch functionality inside it). The other was that once you've got a router on top of your ATM switch, you can use dynamic IP routing to trigger virtual circuit (VC) or path setup. In other words, instead of using management software, or human configuration, or (gasp!) even ATM routing (PNNI) to drive circuit setup, dynamic IP routing might actually drive the creation of circuits. You might even have a variety of protocols for different purposes, each driving Label Switch Path establishment.
I've been thinking of this as avoiding hop-by-hop decision making: set up a "Layer 2 fast path" using tags (think ATM or Frame Relay addressing) to move things quickly along a pre-established path, without such "deep analysis". The packet then needs to be examined closely exactly once, at entry to the MPLS network. After that, it is somewhere along the path, and forwarding is based on the simple tagging scheme, not on the more complex and variable IP headers.

The U.S. postal system seems to work like that: forward mail to a regional center, do handwriting recognition once, apply some sort of infrared or ultraviolet bar code to the bottom edge of the envelope, and from there onwards just use the bar code to route the letter.

When you start thinking about fast forwarding with Class of Service (CoS), then incoming interface, source address, port, and application information all might play a role in the forwarding decision. By rolling the results into one label per path the packet might take, subsequent devices do not need to make such complex decisions.
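If it helps to see that in code, here's a toy sketch in Python (my own illustration, with invented table contents, not any vendor's implementation) of the "examine once, then switch on labels" idea:

# Invented tables, for illustration only; a real LSR builds these via
# routing protocols and label distribution (see below).
FEC_TO_LABEL = {("192.1.1.0/24", 0, "e0"): 17}  # deep classification -> label
LABEL_TABLE = {17: ("s0", 5)}                   # label -> (out interface, out label)

def classify(packet):
    """Ingress LSR: the one 'deep' look at the packet. Destination, CoS,
    incoming port, and so on are rolled up into a single label."""
    return FEC_TO_LABEL[(packet["dst_prefix"], packet["cos"], packet["in_port"])]

def core_forward(label):
    """Every subsequent hop: one small table lookup, no IP header parsing."""
    return LABEL_TABLE[label]

label = classify({"dst_prefix": "192.1.1.0/24", "cos": 0, "in_port": "e0"})
print(core_forward(label))  # -> ('s0', 5)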
Fairly soon after the basic idea got publicized, Cisco got visibly involved with Tag Switching, and then so did all the other vendors of course. For a couple of years now, Cisco Tag Switching in the 7000 series has allowed using Tag Switching on high-speed IP networks. This is now migrating to the final standardized Label Switching. Other Cisco platforms now supporting MPLS: LS1010, 3600 (the release notes for 12.1(3)T say 2600), and the 12000 GSR series.
It now looks like optical networking devices will be capable of fast circuit establishment. Lucent has announced an "Optical Router", using 256 very small mirrors on a chip, steered under electrical control. Agilent (HP) and Texas Instruments have announced liquid or gel-based chips where current turns the fluid to a reflective surface, deflecting light from one waveguide into another. For me, all these devices deserve a title like Optical DACS (Cross-Connect Switch), but who asked me? (The Cisco press release for the Monterey Networks acquisition refers to their optical cross-connect technology). These devices are not routers in the sense of looking into packets and determining path dynamically. They are routers in the sense of figuring out and plumbing a path through multiple Layer 1 devices. I prefer not to call that routing.
MPLS ties to optical by using the idea that when a route to a specific destination or group of destinations is propagated, a light path might also be set up. This light path could then be used by packets going to that destination or group of destinations, getting them there faster (one hopes) than if every router or device along the path examined the Layer 3 header. Actually having a program examine the Layer 3 information would involve converting the light to and from electrical signals at each step along the way.
So we have several media where MPLS is being considered:
- high-speed IP backbones
- legacy ATM
- MPLS-capable ATM
- optical
Frame Relay MPLS is also receiving some consideration by vendors other than Cisco.
You might be wondering if anyone is actually doing MPLS, or is this cutting-edge stuff? Well, RoutIT is offering a service called IPVPN, with IP access via DSL/Ethernet to an MPLS network. This uses the Cisco-driven idea of MPLS-based VPN's, discussed very lucidly in RFC 2547. See also:
Looking More Deeply Into MPLS
A router supporting MPLS is a Label Switch Router, or LSR. An edge node is an LSR connecting to a non-LSR. An ingress LSR is the one by which a packet enters the MPLS network, an egress LSR is one by which a packet leaves the MPLS network.
Labels are small identifiers placed in the traffic. They are inserted by the ingress LSR, and ultimately removed by the egress LSR (so nothing will remain to perplex the non-MPLS devices outside the MPLS network). For IP-based MPLS, some bytes are inserted prior to the IP header. For ATM, the VPI/VCI addressing is the label. For Frame Relay, the DLCI is the label. For optical, I imagine the label is the optical fiber and/or wavelength being used (implicit label), perhaps combined with some actual label. To read more about MPLS labels with LAN and PPP, see:
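For LAN and PPP media, the inserted bytes are a 4-byte "shim" entry per label: a 20-bit label, 3 CoS ("EXP") bits, a bottom-of-stack bit, and a TTL. Here's a small Python sketch of that layout (my own illustration of the encoding in the label encapsulation documents):

import struct

def encode_label(label, exp=0, bottom=True, ttl=64):
    """Pack one 4-byte label entry: 20-bit label, 3-bit EXP (CoS),
    1-bit bottom-of-stack flag, 8-bit TTL."""
    word = (label << 12) | (exp << 9) | (int(bottom) << 8) | ttl
    return struct.pack("!I", word)

def decode_label(data):
    (word,) = struct.unpack("!I", data[:4])
    return {"label": word >> 12, "exp": (word >> 9) & 0x7,
            "bottom": bool((word >> 8) & 0x1), "ttl": word & 0xFF}

shim = encode_label(17)           # label 17, inserted ahead of the IP header
assert decode_label(shim)["label"] == 17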
As traffic transits the MPLS network, label tables are consulted in each MPLS device. These are known as the Label Information Base, or LIB.
By looking up the inbound interface and label in the LIB, the outbound interface and label are determined. The LSR can then substitute the outbound label for the incoming one and forward the frame. This is analogous to, if not exactly, the way Frame Relay and ATM behave as they send traffic through a virtual circuit. For that matter, IBM High Performance Routing (HPR) behaves similarly as far as how it actually forwards data. The labels are locally significant only, meaning that a label is only useful and relevant on a single link, between adjacent LSRs. The adjacent LSRs' label tables, however, should end up forming a path through some or all of the MPLS network, a Label Switch Path (LSP), so that when a label is applied, traffic transits multiple LSRs. If traffic is found to have no label (only possible in an IP MPLS network, not Frame Relay or ATM), a routing lookup is done, and possibly a new label applied.
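Here's a minimal Python sketch of that lookup-and-swap step (table contents invented, loosely echoing the label numbers in the example below):

# (in interface, in label) -> (out interface, out label)
LIB = {
    ("serial1", 17): ("serial0", 5),
    ("serial2", 123): ("serial0", 17),
}

def forward(in_interface, frame):
    out_interface, out_label = LIB[(in_interface, frame["label"])]
    frame["label"] = out_label    # swap: labels are only link-local
    return out_interface, frame

print(forward("serial1", {"label": 17}))  # -> ('serial0', {'label': 5})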
As you read about MPLS you'll encounter "Forwarding Equivalency Class", or FEC. This just refers to the idea that all sorts of different packets might need to be forwarded to the same next hop or along the same MPLS path. The FEC is all the packets to which a specific label is being applied. This might be all packets bound for the same egress LSR. For a Service Provider, all packets with a given Class of Service (CoS) bound for a certain AS boundary router, or matching certain CIDR prefixes. For a large company, all packets matching certain route summaries.
Traffic may actually bear multiple labels, a stack of labels. Only the outermost (last) label is used for forwarding. The label table in a LSR may cause the outermost label to be removed. This is called a "label POP". This is useful for MPLS Tunneling, which is useful for Traffic Engineering.
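A tiny Python sketch of stack behavior (illustration only):

def push(stack, label):
    return [label] + stack        # new outermost label

def pop(stack):
    return stack[0], stack[1:]    # (exposed label, remaining stack)

stack = push([5], 42)             # e.g. tunnel label 42 over original label 5
top, stack = pop(stack)           # tunnel tail pops 42; label 5 is used again
print(top, stack)                 # -> 42 [5]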
Binding is the process of assigning labels to FECs. A Label Distribution Protocol (LDP) is how MPLS nodes communicate and bind labels. Think of an LDP as being an official way for one LSR to say to another "let's use this label to get stuff to this destination really fast". More than one LDP is being contemplated, each specifically designed for a purpose. However, LDP without further qualification refers to the standard LDP for setting up Label Switched Paths in response to IP routing. After LDP has run hop-by-hop, the MPLS network should have paths from ingress to egress LSR, along which only labels are used. Such paths are called Label Switch Paths (LSPs).
The draft documents mention some of the ways that other LDPs can operate. For example, explicit routing (with a source or controller setting up a Traffic Engineering path) might be driven by either of two mechanisms: CR-LDP (constraint-based routing LDP) and RSVP-TE (extended RSVP driving label distribution).
A Concrete Example
I don't know how you learn, but I always feel better when I see some of the details of how things work. For MPLS, the labels are the mysterious part. So let's look at how they are established and used in a concrete example. The discussion here will refer to the following diagram. I know it's a busy diagram. The numbers in circles refer to steps in the following explanation.
Step 1 (at the bottom). The bottom non-MPLS (customer) router has Class C networks 192.1.1.0 /24, 192.1.2.0 /24 somewhere out the Ethernet 0 interface. They are either directly connected (with a secondary address on the interface) or learned from another router. The table to the left of the bottom router attempts to suggest the routing table, which tracks the routing prefix, the outgoing interface, next hop router, and perhaps other information. The light blue arrow suggests that an ordinary routing update (you pick the protocol) advertises the routes to the Edge LSR above.
Step 2: The routes are advertised to the LSR above and to the left of the Edge LSR. Using LDP, the router selects a free (unused) label, 5, and advertises it to the upstream neighbor. The hyphen in the Out column is intended to note that all labels are to be popped (removed) in forwarding to the non-LSR below. Thus, a frame received on Serial 1 with label 5 is to be forwarded out Serial 0 with no label. The red arrow is intended to suggest LDP communicating the use of label 5 to the upstream LSR.
Step 3. The LSR has learned routes to the two prefixes we're tracking. It advertises the routes upstream. When LDP information is received, it records use of label 5 on outgoing interface Serial 0 for the two prefixes we're tracking. It then allocates label 17 on Serial 1 for this FEC, and uses LDP to communicate this to the upstream LSR. Thus, when label 17 is received on Serial 1, it is replaced with label 5 and the frame sent out Serial 0.
Steps 4 and 5: Proceed similarly. Note that there will be no labels received at the top Edge LSR, since the top router is not an MPLS participant, as we can see from its routing table (no labels!) in Step 6. The dark blue arrow shows the Label Switch Path (LSP) that has now been established. The table for Step 4 is bigger since this LSR has sent routing and LDP information to the LSR to its right.
Step 7: A routing advertisement might also be sent out interface Serial 2 from the Edge LSR at the bottom of the picture. It too can use LDP to tell the upstream LSR to use label 31 to deliver packets rapidly to the destinations we're tracking here.
Step 8: This LSR has perhaps not yet had time to propagate the routing information and label bindings upstream (or your author was getting fatigued).
Step 9: Here we have bindings that have passed from the left LSR to the right one. The right one uses label 123 for our two prefixes. Note that multiple flows can end up merging: frames bearing label 94 on Serial 1 or label 123 on Serial 2 all get relabelled with label 17 and sent out Serial 0. This indicates the multipoint-to-point behavior of IP MPLS.
Please note that the above example is incomplete, in that we have not yet propagated routes and bindings to all neighbors. You could see the same results after routing convergence if the metrics favored the links on the left side of the drawing. So don't attempt to read too much into this story. I'm just trying to show how routing and labels get propagated, and how the hop-by-hop behavior of LDP can still result in a Label Switch Path being established.
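If you'd rather see the binding process as code, here's a toy Python model (mine, loosely echoing the label numbers in the figure) of each LSR picking a free local label and advertising it upstream, so the hop-by-hop bindings chain into an LSP:

class LSR:
    def __init__(self, name, first_free_label):
        self.name, self.next_label, self.lib = name, first_free_label, {}

    def bind(self, prefix, downstream_label):
        local = self.next_label               # free label upstream should use
        self.next_label += 1
        self.lib[(prefix, local)] = downstream_label
        return local                          # advertised upstream via LDP

# None means "pop": the edge LSR forwards unlabelled to the plain router.
path = [LSR("edge", 5), LSR("middle", 17), LSR("ingress", 31)]
label = None
for lsr in path:                              # bindings propagate upstream
    label = lsr.bind("192.1.1.0/24", label)
    print(lsr.name, "advertises label", label)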
More Details and VC Merge
On ATM, the above behavior isn't quite what happens. The issue is that if you forward cells from two frames along the merged path shown in the figure above, you might intermingle the cells. Recall that data transmission uses ATM AAL5 encapsulation, and that there is no way to separate out intermingled cells from different frames in AAL5.
One solution is for the ATM switches and LSRs to do VC merge: know enough to delay the cells from one frame while cells from a different frame are transiting through the switch. This does create some interesting buffering and store-and-forward issues. Another approach is VP merge, where a common VPI but different VCI are used, to allow the edge LSR to sort the cells out. Yet another approach is to change the behavior of LDP over ATM, and have the upstream LSR drive the creation of the LSP. This results in separate VCs from every ingress LSR to the egress LSR, which may not scale well in terms of number of VCs.
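Here's a Python sketch of the VC merge idea (illustration only; it waves away the real buffering and timing issues just mentioned):

from collections import defaultdict

buffers = defaultdict(list)   # per incoming VC, cells of the frame in progress

def on_cell(in_vc, cell, send):
    """Hold each frame's cells until its AAL5 end-of-frame cell arrives,
    then emit the whole frame contiguously on the merged outgoing VC."""
    buffers[in_vc].append(cell)
    if cell["aal5_last"]:
        for c in buffers.pop(in_vc):
            send(c)                # frames never interleave on the merged VC

on_cell("vc1", {"aal5_last": False, "data": b"..."}, print)
on_cell("vc1", {"aal5_last": True, "data": b"..."}, print)   # flushes the frame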
If you're wondering how to configure MPLS, well, turning on basic MPLS isn't that complicated. (Understanding it, well that's all clear now, isn't it?)
Configuring Basic MPLS
Turning on basic MPLS is pretty simple:
! if you're on a platform supporting dCEF (otherwise, plain "ip cef"):
ip cef distributed
! turn on MPLS tag distribution:
tag-switching advertise-tags
! enable MPLS on appropriate interface(s):
interface e0/1
 tag-switching ip
You should look at
for details on how to enable MPLS incrementally. You can use an access list with the advertise-tags command, to specify which networks to advertise labels for, presumably those that can be reached via MPLS. You can even control which peer LSRs to advertise which prefixes to.
MPLS Class of Service
Since the marketing and interest in MPLS is tied up with ATM coexisting with IP, the question of providing Quality of Service (QoS) always comes up. The focus in MPLS is more on differentiated Classes of Service than on ATM-like QoS, although with Traffic Engineering features, MPLS seems to come a long way towards ATM-style QoS.
Right now, the Cisco CoS features used for MPLS are CAR or CBWFQ, WRED, and WFQ.
We start by using CAR or CBWFQ (or a couple of other techniques, see the QoS articles) to classify or recognize traffic at the edge of the network. These techniques also let us mark the traffic, setting the 3 IP Precedence or 6 DSCP bits in the IP Type of Service field. Recall that marking allows downstream (core) devices to provide appropriate service to the packet without having to delve as closely into headers to figure out what service the packet deserves.
We also configure WRED or WFQ to provide differentiated service based on IP Precedence (or DSCP) in the downstream (core) routers. These queue management and scheduling techniques can be applied whether or not MPLS is in effect, if we're operating MPLS over IP. The IP Precedence information determines the weights to be used (the 'W' in WRED or WFQ). Higher IP Precedence gets preferential treatment.
MPLS comes into the picture in two possible ways. One is by copying IP Precedence bits to the MPLS header (if desired). This MPLS header is used for MPLS over IP and has a field for such CoS information, the EXP field (3 bits). The second way MPLS can deal with CoS is by storing Precedence information as part of the Label Information Base (LIB). Each level of precedence is assigned a different Label Switch Path, so the label can be thought of as implicitly specifying the precedence. (If a Label Switch Router needs to know the precedence, it can look it up in the LIB.)
So when a frame arrives at a LSR, the label is used to determine outbound interface and new label, but the precedence or EXP field is then used to determine queuing treatment.
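Here's a Python sketch of that division of labor (the weight mapping is invented; real WFQ weights are more involved):

def ip_precedence(tos_byte):
    return tos_byte >> 5           # top 3 bits of the IP ToS byte

def apply_label(packet, label):
    """Ingress LSR: copy IP Precedence into the 3-bit EXP field."""
    return {"label": label, "exp": ip_precedence(packet["tos"]), "packet": packet}

def wfq_weight(exp):
    """Invented mapping; the point is only that higher EXP gets
    preferential treatment without touching the IP header."""
    return 1 + exp

frame = apply_label({"tos": 0xA0, "dst": "192.1.1.1"}, 17)
print(frame["exp"], wfq_weight(frame["exp"]))   # precedence 5 -> weight 6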
On ATM LSRs, the same thing happens. We're dealing with a Label Virtual Circuit (LVC) for our Label Switch Path. The LIB determines outgoing interface, which happens to be an ATM interface. WFQ and WRED can then be applied on the outgoing ATM interface, along with WEPD (Weighted Early Packet Discard).
With a non-MPLS ATM core, the edge LSRs are interconnected by ATM PVCs through the core ATM switches. WFQ and WRED can be applied on a per-VC basis. The BPX 8650 also allows you to use different PVCs for different classes of service.
Configuring MPLS CoS
To use multiple VCs for MPLS CoS on an ATM interface, configure:
! multi-vc is configured on a tag-switching ATM (sub)interface:
interface atm 1/0.1 tag-switching
 tag-switching atm multi-vc
 tag-switching ip
This creates four VCs for each MPLS destination. An alternative is to use fewer label VCs by configuring CoS mapping. See the documentation (basically, the above URL) for details and alternatives.
MPLS Traffic Engineering
The idea of MPLS Traffic Engineering is to use unidirectional tunnels to shift traffic off one path and onto another. The tunnels can be statically or automatically determined by the LSRs. Multiple tunnels can be used for load sharing when a traffic flow is too large for a single path.
Although the figure shows edge to edge tunnels, TE tunnels can be shorter. They can be used by a Service Provider to shift traffic off an overloaded trunk, until more capacity can be added.
The tunnel mechanism works because we can stack up the labels applied to IP packets. That is, additional labels are applied temporarily, to the outside of the packet and existing label, to shunt traffic into the tunnel. The tunnel LSP is followed until the end of the tunnel, where the outermost label is popped off. At that point the packet resumes following the original LSP to its destination.
A link state protocol (IS-IS or OSPF) is used with enhanced link state advertisements to track network capacity and to ensure that the tunnel does not create a routing loop. The actual signaling for dynamic tunnel establishment is based on RSVP, which acts to reserve bandwidth on a link.
The following example shows all of these factors at work. It sets up an explicit tunnel (where we statically specify the path) with a dynamic backup tunnel. This is a configuration snippet from the LSR at the entrance to the tunnel (top of the picture).
! enable TE tunnels globally:
mpls traffic-eng tunnels
interface fast 0/0
 ip address 10.1.1.1 255.255.255.0
 mpls traffic-eng tunnels
 ip rsvp bandwidth 10000
interface tunnel 1
 tunnel destination 10.3.3.3
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng path-option 1 explicit name mytunnel
 tunnel mpls traffic-eng bandwidth 1000
 tunnel mpls traffic-eng path-option 2 dynamic
ip explicit-path name mytunnel
 next-address 10.1.2.1
 next-address 10.1.10.1
 next-address 10.3.3.3
You also would have to enable tunnels on routers and interfaces the tunnel might traverse:
! enable TE tunnels globally:
mpls traffic-eng tunnels
interface fast 0/0
 mpls traffic-eng tunnels
interface fast 1/0
 mpls traffic-eng tunnels
For the dynamic path establishment to work, we would also need to configure IS-IS for MPLS Traffic Engineering, and specify which traffic is to use the tunnel. The traffic to go through this tunnel is that exiting the BGP Autonomous System at router 10.5.5.5.
ip router isis                 ! interface subcommand, on each IS-IS interface
router isis
 net 47.0000.0012.3456.00
 is-type level-1
 mpls traffic-eng router-id loopback0
 mpls traffic-eng level-1
 metric-style wide
router traffic-engineering
 traffic-engineering filter 60 egress 10.5.5.5 255.255.255.255
 traffic-engineering route 60 tunnel 1
The metric-style wide command allows ISIS to track the additional routing metric information needed for Traffic Engineering. There is a routing protocol migration issue here and you should read all the relevant documentation before attempting this in a production network! See:
A caution: this is all fairly new stuff, I do not have equipment available to test it with (nor time), and am piecing together information from various sources. Thus the configurations are my best effort but are not guaranteed accurate.
What is a VPN?
A Virtual Private Network or VPN is a network implemented using a shared network infrastructure but so as to provide the security and privacy of a private leased-line network. Older examples would be Frame Relay and ATM. Lately VPN has come to more often refer to IPSec tunnels over the Internet, or perhaps PPTP or L2TP dial VPN connectivity across a shared internetwork.
For our purposes in this article, the VPNs will be IP networks where the WAN core of a corporate network has been outsourced to a Service Provider. The IP VPN connectivity is provided across a shared IP network belonging to the Service Provider. It will turn out that the BGP and MPLS-based VPNs we will talk about are powerful enough to provide secure connectivity (and relatively simple configuration) for both intranets and extranets.
Terminology:
- Intranet -- VPN interconnecting corporate sites
- Extranet -- VPN connecting corporate site or sites to external business partners or suppliers. The Internet is the ultimate insecure Extranet VPN.
- Customer Edge (CE) router -- a router at a customer site that connects to the Service Provider (via one or more Provider Edge routers)
- Provider Edge (PE) router -- a router in the Service Provider network to which Customer Edge Routers connect
- Provider Core (Core) router -- a router in the Service Provider network interconnecting Provider Edge routers but, generally, not itself a Provider Edge Router
- Entry and Exit PE routers -- the PE routers by which a packet enters and exits the Service Provider network
In the figure, imagine the red routers are connected with one VPN, and the blue ones with another. (I tried to draw in some lines to suggest connectivity, but things rapidly got rather cluttered). An extranet is where some red routers connect to some blue routers. The red path with arrow shows traffic from the bottom red CE router to the top one. The first (bottom) gray provider router is the entry PE router, and the final gray provider router is the exit PE router (terms used below).
Understanding MPLS-Based VPNs
I've been thinking of MPLS-based VPNs as basically using long IP addresses. That isn't exactly what's going on, but it is a key part of it.
Each site belongs to a VPN, which has a number. In the Cisco implementation, this number is configured as the 8 byte Route Distinguisher (RD). The route distinguisher number is used to prefix the IP addresses for the site. It is configured on the interface (or subinterface) connecting to the site. This gives us a way to tell duplicate private addresses apart, to distinguish them. For example, subnet 10.1.1.0 for VPN 23 is different than subnet 10.1.1.0 for VPN 109: from the MPLS VPN provider's point of view they are really 23:10.1.1.0 and 109:10.1.1.0, which are quite different. Putting the 8 byte route distinguisher in front of a 4 byte IP address gives us a 12 byte routing prefix. We regard these as the VPN-IPv4 family of addresses.
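In sketch form (Python, illustration only):

def vpn_prefix(rd, prefix):
    """A VPN-IPv4 route key: 8-byte RD + 4-byte IPv4 prefix = 12 bytes."""
    return (rd, prefix)

a = vpn_prefix("23", "10.1.1.0/24")    # subnet 10.1.1.0 in VPN 23
b = vpn_prefix("109", "10.1.1.0/24")   # the "same" subnet in VPN 109
assert a != b                          # duplicate private addresses, distinct routes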
The multiprotocol extension to BGP4, MBGP, was invented to carry such routing information between peer routers. So once we think in terms of routing 12 byte prefixes, there is a natural way to propagate the information. For security and scalability, MBGP only propagates information about a VPN to other routers that have interfaces with the same route distinguisher value. That reduces the chance of accidentally leaking information about Customer A to Customer B (quite easily done with routing distribute lists in a tunneling approach, or with route maps or distribute lists or prefix lists and ordinary BGP). It also means that each PE router only tracks routes for the customers connected to that one PE router, not for the entire set of long prefixes for all sites and customers connected to the Service Provider. Scalability!
Another aspect of this is that core routers, not being connected to CE routers, don't learn VPN-IPv4 routes. We'll come back to this idea in a moment. This is desirable: it turns out we only need to run an IGP (Interior Gateway Protocol), so that core routers have routes to all PE routers. And from our prior discussions about MPLS, we suspect the IGP might be OSPF or IS-IS, to allow implementation of MPLS Traffic Engineering. Only tracking routes to PE routers keeps the core extremely scalable, and greatly reduces the size of routing tables for core routers. This too enhances scalability!
So what we've got so far is long addresses, and routing that builds the VPN ID or route distinguisher into the routing prefix. The PE routers that share the long prefix routing information are all speaking MBGP, all within the same AS -- hence internal MBGP, or iMBGP. This behaves very much like ordinary iBGP. When iBGP-speaking routers propagate routes, they also propagate attributes. One key attribute for Service Providers is the next hop attribute. For iBGP-speaking routers, the next hop is generally the exit point from the Service Provider network, the exit point used to reach the advertised destination prefix.
If we were to actually route based on the long addresses, we'd have to forward the packets hop by hop and do a routing lookup at each PE or core router between the entry PE router and the exit PE router. The problem with that is, we would then have to convert our IP header to use our longer addresses at the entry PE router, we'd have to have internal core routers that knew how to forward this new network-layer protocol, and then we'd have to strip out the longer addressing information at the exit PE router. This probably sounds sort of like what MPLS already does with labels -- but now we'd be doing it with actual network layer headers. Some readers might be thinking "aha! IPv6! Tunneling IPv4!". Nice thoughts, but ... WRONG!
I suppose the network layer code could have been written to support this, or IPv6 could have been used for a form of tunneling. But all of that would have cost time and work and money. Instead, the Cisco engineers who came up with this had a very clever idea. MPLS!
All that the entry PE routers need to do to packets is somehow deliver them to the appropriate exit PE router, the next hop known via the mandatory MBGP next hop attribute. But with MPLS and any IGP carrying routes to the PE routers, we will already have an MPLS Label Switch Path (LSP) from the entry PE to each possible exit PE! And that does it.
When a packet comes in, we look up the long (VPN) destination prefix in the MBGP routing information base (RIB). That tells us the next hop router, the exit PE router. We would normally look up how to get to that router in the IGP, and determine the IP next hop. But this gets short-circuited by MPLS: we find we have a label available for an LSP that delivers packets very efficiently to the MBGP next hop router, the exit PE router. And (here's the clever part) if we use the LSP, the routers in the core never have to examine IP addresses or headers; they just use the labels to forward the packet!
So MPLS LSPs act as tunnels through the Service Provider core, meaning we can get away with an IGP in the SP core, and thus the SP core routers can remain ignorant of the many, many possible destinations for all subnets in all VPNs.
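Pulling the pieces together, here's a Python sketch of the entry PE's decision, under the simplifications in this article (table contents invented):

# The MBGP RIB maps long prefixes to the BGP next hop (the exit PE);
# the IGP and label distribution already gave us an LSP to each PE.
MBGP_RIB = {("23", "10.1.1.0/24"): "exit-pe-3"}
LSP_LABEL = {"exit-pe-3": 42}          # label that starts the LSP to that PE

def entry_pe_forward(rd, dst_prefix):
    exit_pe = MBGP_RIB[(rd, dst_prefix)]   # one routing lookup, at the edge
    return LSP_LABEL[exit_pe]              # core forwards on the label alone

print(entry_pe_forward("23", "10.1.1.0/24"))   # -> 42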
Route distinguisher 0 and VPN 0 can be regarded as the current Internet.
Note that smart Service Providers might build their AS number into the VPN route distinguisher, as a way to provide uniqueness and allow cooperation in providing MPLS-based VPN services to their customers.
Extended Communities and VRFs
The techniques described so far are enough to build VPNs for a particular SP customer, say Customer A. Suppose the SP is providing VPN services to Customers A and B, and A and B decide they need connectivity between certain sites? The approach above is a little limited. So there is one more piece to this MPLS BGP VPN puzzle. That piece is Extended Communities: a long, 8 byte version of the 4 byte community attribute already known in BGP.
When the Service Provider connects up a CE router, the route distinguisher is specified on the connecting interface. Routes from the site can be learned by static routing, or by dynamic routing exchange with the CE router. (MPLS-speaking CE routers are a special case.) When such IPv4 routes are learned, they are extended using the route distinguisher, so they can be distinguished from the routes of another customer, and so they can be propagated to the other VPN (intranet) sites. This is done by associating the same number with those routes as an extended community. The extended community is also called, and best thought of as, a target community: it identifies the community of other sites needing routes to this long destination prefix.
To maximize flexibility, a per-site or per-interface routing table is used, the VRF (VPN routing/forwarding instance). This is configured by creating it, describing it to the router, and then associating it with one or more interfaces (since the VRF might be shared between corporate sites that connect to the same PE router). We'll see how to do this below.
For an intranet, the VRF contains just the routes from that VPN.
Say we've done all this for Customer A. To connect a Company A site to a business partner B, we import routes for the VPN from B (possibly filtering them, so that we can only route to specified sites within B). So that business partner B can reach Customer A, we also export routes to target community B (or the extended community number for B). We can do this per-location within Customer B's network, providing very fine-grained control over which Customer B sites can reach Customer A. Alternatively, we can use a different VPN ID (route distinguisher and extended community) for the A-B extranet, and then export routes to and import routes from this extranet VPN to the VRF's at the sites that have to communicate with the business partner(s). Note how scalable and extensible this is!
Subinterfaces can be used so that extranet traffic can be forced through a CE firewall or so the CE can filter routes to control what internal sites the extranet partners can get to.
Since the Internet is just RD and extended community 0, the Service Provider can also selectively connect customer sites to the Internet.
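Here's a Python sketch of that route target screening (illustration only):

def import_routes(vrf_imports, advertised):
    """Import a route only if its export targets intersect the VRF's imports."""
    return [route for route, targets in advertised if vrf_imports & targets]

advertised = [("10.1.0.0/16", {"888:1"}),    # exported by a Customer A site
              ("10.9.0.0/16", {"888:2"})]    # exported by a Customer B site
print(import_routes({"888:1"}, advertised))            # Customer A intranet VRF
print(import_routes({"888:1", "888:2"}, advertised))   # an A-B extranet VRF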
The above figure shows some sample VRFs associated with the interfaces on the PE router at the left of the picture. These are suggestive of the situation for the configuration that follows in the next section. The VRF named VRF00001 contains routes to other blue VPN sites (subnets). The VRF named VRF00002 contains red VPN subnets, along with an imported blue VPN subnet. A route map might have been used to provide the fine-grained control over what blue subnets are imported into VRF00002. See below for configuration details.
Configuring MPLS VPNs
Suppose as an ISP our AS number is 888. For Customer A, we will create a VRF named vrf00001 and associate it with Route Distinguisher 888:1 (shorthand for two bytes that are 888 in decimal, followed by six bytes ending in 1). We will also import and export routes to extended community 888:1, namely, other sites in this intranet VPN. For another customer, Customer B, we'll create a VRF named vrf00002 with RD 888:2. This second VRF will import and export extended community 888:2, other sites in Customer B's intranet. However, we'll also import routes from extended community 888:1 according to a route map named vrf00002-import-map, so that the site using VRF vrf00002 can reach selected Customer A sites, as an extranet partner.
To do all this, configure:
ip vrf vrf00001
 rd 888:1
 route-target both 888:1
ip vrf vrf00002
 rd 888:2
 route-target both 888:2
 route-target import 888:1
 import map vrf00002-import-map
route-map vrf00002-import-map permit 10
 match ...
It is important to note that the route map is only needed for fine tuning. Normal import/export with VRFs can be done with extended communities alone. The thought of security depending on getting route maps built right rather scares me. Luckily, basic security is provided at the extended community level, making route hiding the normal situation. Route maps can then be used to limit connectivity to extranet partner sites, if the customers don't wish to do that for themselves by speaking BGP to the PE routers.
These VRFs would typically then be associated with interfaces:
interface Fastethernet 0/2
 ip vrf forwarding vrf00001
 ip address ...
interface Fastethernet 0/3
 ip vrf forwarding vrf00002
 ip address ...
interface Fastethernet 0/4
 ip vrf forwarding vrf00002
 ip address ...
VRF vrf00002 is associated with two interfaces that connect to two sites for Customer B. I'm deliberately showing FastEthernet, since some people now think that's how we'll be connecting to SPs in metropolitan settings. (Think BLEC: Building Local Exchange Carrier, providing VPN, Internet, and Voice connectivity).
We need to be speaking MBGP to carry VPN-IPv4 routes and attributes to peer PE routers. We don't need ordinary BGP routes to PE peers however. (On a larger scale, we might use route reflectors vice iMBGP full-mesh peering):
router bgp 888
 no synchronization                 ! don't do IGP synchronization (the IGP
                                    ! won't carry the right routes anyway)
 no bgp default ipv4-unicast        ! don't do ordinary BGP route exchange
 neighbor 10.60.0.5 remote-as 888   ! identify an iBGP neighbor and AS
 neighbor 10.61.0.1 remote-as 888   ! identify another
 address-family vpnv4 unicast
  neighbor 10.60.0.5 activate       ! activate session to some MBGP peer
  neighbor 10.61.0.1 activate       ! some other MBGP peer
  exit-address-family
Our design might use eBGP to communicate routes to CE routers in a controlled way, to get routes into each VRF. Or it might use static routing, or some other mix. We can also define per-VRF static routes as shown below.
address-family ipv4 vrf vrf00001
 redistribute static
 redistribute connected
 neighbor 10.20.1.1 remote-as 65534   ! private AS number
 neighbor 10.20.1.1 activate
 no auto-summary
 exit-address-family
address-family ipv4 vrf vrf00002
 redistribute static
 redistribute connected
 neighbor 10.20.2.2 remote-as 65534
 neighbor 10.20.2.2 activate
 no auto-summary
 exit-address-family
ip route vrf vrf00001 15.0.0.0 255.0.0.0 e0/2 10.20.1.1
That's it, a basic MPLS BGP VPN configuration!