Sunday 4 November 2018

BGP Series 1: eBGP vs iBGP in Datacenters

Why BGP in the Datacenter, and iBGP vs eBGP

  • One of the main requirements of a Leaf-Spine topology is a strong IP fabric, i.e., we should be able to reach any device from any other device via its IP address.
  • The biggest question when building an IP Fabric is which control plane protocol to use. The options are the usual suspects: OSPF and IS-IS. But what about BGP? Isn't BGP a WAN control plane protocol? Not necessarily.

IP Fabric:

  • When creating an IP Fabric, there are a few services that we need: prefix distribution, prefix filtering, traffic engineering, traffic tagging, and multi-vendor stability. Perhaps the most surprising requirements are traffic engineering and multi-vendor stability.
  • When creating a large IP Fabric, it's desirable to be able to shift traffic across different links and perhaps steer traffic around a particular switch that's in maintenance mode.
  • Creating an IP Fabric is an incremental process; not many people build out the entire network to its maximum scale from day one. Depending on politics, budgets, and feature sets, companies may source switches from different vendors over a long period of time.
  • It's critical that the IP Fabric architecture not change over time and that the protocols used remain stable across a set of different vendors.

Protocols for IP Fabric - Comparison:

  • Let's take the requirements of an IP Fabric and map them to the control plane options: OSPF, IS-IS, and BGP.
    Requirement              OSPF      IS-IS     BGP
    Prefix distribution      Yes       Yes       Yes
    Prefix filtering         Limited   Limited   Extensive
    Traffic engineering      Limited   Limited   Extensive
    Traffic tagging          Basic     Basic     Extensive
    Multi-vendor stability   Yes       Yes       Even more so (think about the Internet)
  • What is interesting is that BGP pulls ahead as the best protocol choice for creating an IP Fabric. It excels in prefix filtering, traffic engineering, and traffic tagging. BGP can match on any attribute or prefix and prune prefixes both outbound and inbound between switches. Traffic engineering is accomplished through standard BGP attributes such as Local Preference, MED, and AS-path prepending (padding). BGP has extensive traffic tagging abilities with communities and extended communities; each prefix can be associated with multiple communities to convey any sort of technical or business information (a rough configuration sketch follows this list). The best showcase in the world for multi-vendor stability is the Internet; the backbone of the Internet is BGP.
  • BGP makes the most sense in the data center when building out an IP Fabric. Maybe it isn't so crazy after all. The benefits include prefix filtering, traffic engineering, traffic tagging, and stability across a set of various vendors.
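To make the tagging and traffic-engineering point concrete, here is a rough sketch in Junos-style syntax (the policy names, prefix, and community value below are made up for illustration, and the # lines are annotations, not CLI): one policy tags server routes with a community on export, and another matches that community and raises local preference to steer traffic.

    # Hypothetical community for routes originated by rack 1
    set policy-options community RACK1-ROUTES members 65000:100
    # Tag the rack's server prefixes with the community on export
    set policy-options policy-statement TAG-RACK1 term servers from route-filter 192.0.2.0/24 orlonger
    set policy-options policy-statement TAG-RACK1 term servers then community add RACK1-ROUTES
    set policy-options policy-statement TAG-RACK1 term servers then accept
    # Elsewhere in the fabric, prefer paths carrying that community on import
    set policy-options policy-statement PREFER-RACK1 term tagged from community RACK1-ROUTES
    set policy-options policy-statement PREFER-RACK1 term tagged then local-preference 200
    set policy-options policy-statement PREFER-RACK1 term tagged then accept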

eBGP vs iBGP Summary:

    Requirement           iBGP                                           eBGP
    ECMP                  Requires BGP AddPath                           Requires multi-AS pathing
    Peering               Requires route reflectors to avoid full mesh   BGP session only between each spine and leaf
    Traffic engineering   Not supported                                  Extensive

eBGP:

From the point of view of a 3-stage Clos (spine-and-leaf) network in the datacenter, eBGP makes the most sense. It supports traffic engineering and doesn't require you to configure and maintain route reflectors and AddPath.
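A minimal sketch of the eBGP flavor on a leaf, again in Junos-style syntax (the AS numbers and addresses are hypothetical): each leaf gets its own private AS and peers externally with every spine, and multipath across the differing spine ASes gives ECMP.

    # This leaf's own private AS
    set routing-options autonomous-system 65001
    set protocols bgp group spines type external
    # ECMP across paths learned from different neighbor ASes
    set protocols bgp group spines multipath multiple-as
    # One eBGP session per spine, each spine in its own AS
    set protocols bgp group spines neighbor 10.1.1.1 peer-as 65100
    set protocols bgp group spines neighbor 10.1.2.1 peer-as 65101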

iBGP:

Without Route Reflectors:

In a 3-stage Clos, if we do not use route reflectors:

[Figure: 3-stage Clos running iBGP without route reflectors]

We can see from the image above that reachability to the servers connected to a leaf extends only up to 2 stages. This is the iBGP split-horizon rule at work: a route learned from one iBGP peer is not re-advertised to another iBGP peer, so the spines never pass the leaf routes up to the super-spines. (Note: on the leaf switches we have configured a default route pointing to the spines, so any traffic from Leaf1 to an unknown destination 'y' goes to a spine, and the spine knows the path to y. But if we have multiple pods as shown above, we have no connectivity between devices in different pods.)
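For reference, that default route could simply be a static route on each leaf with the spine uplinks as next hops (Junos-style, hypothetical addresses):

    # Default route toward both spine uplinks
    set routing-options static route 0.0.0.0/0 next-hop 10.1.1.1
    set routing-options static route 0.0.0.0/0 next-hop 10.1.2.1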

With Route Reflectors:

In a 3-stage Clos, if we use route reflectors on the spine switches:

[Figure: 3-stage Clos running iBGP with route reflectors on the spines]

The spines, acting as route reflectors, advertise their client routes (x, y, z, w) to their non-client peers, so the super-spine also learns these routes. Now, if Leaf1 has to send traffic to 'a', it can: the super-spine advertises its routes to the route reflectors, which in turn reflect routes learned from non-clients down to all of their clients.
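A sketch of what this might look like on a spine acting as route reflector (Junos-style; the AS number, cluster ID, and addresses are made up): the leaf-facing group gets a cluster ID, which marks those neighbors as clients, while the super-spine group stays a plain iBGP group.

    set routing-options autonomous-system 65000
    # Leaf-facing iBGP sessions; 'cluster' makes these neighbors RR clients
    set protocols bgp group leafs type internal
    set protocols bgp group leafs local-address 10.0.0.1
    set protocols bgp group leafs cluster 10.0.0.1
    set protocols bgp group leafs neighbor 10.0.0.11
    set protocols bgp group leafs neighbor 10.0.0.12
    # Ordinary (non-client) iBGP session toward the super-spine
    set protocols bgp group super-spines type internal
    set protocols bgp group super-spines local-address 10.0.0.1
    set protocols bgp group super-spines neighbor 10.0.0.101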

Now that we understand the need for route reflectors in the iBGP scenario, we also need to understand why BGP AddPath is required:

In the second image, Leaf1 knows that two paths to 'y' exist, one via each spine, but Leaf1 will advertise only its single best path on its downstream links. So, to get ECMP, we need to enable BGP AddPath on all the leafs so that they advertise all available paths downstream.

Similarly, we need to enable AddPath on all the spines as well: if we don't, each spine will advertise only one path, via a single super-spine, to the leaf, and we won't get ECMP.
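Enabling AddPath on the spine's groups from the sketch above might look like this (Junos-style; path-count controls how many paths are advertised per prefix):

    # Advertise up to 2 paths per prefix toward the leafs
    set protocols bgp group leafs family inet unicast add-path send path-count 2
    # Accept multiple paths per prefix from the super-spines
    set protocols bgp group super-spines family inet unicast add-path receive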

For more details, see:

https://forums.juniper.net/t5/Data-Center-Technologists/BGP-in-the-Data-Center-Why-you-need-to-deploy-it-now/ba-p/227547
