Route Redistribution – Part 3

CCIE R/S, CCNP R/S Nov 06, 2018

This post is the third in a series of posts on Route Redistribution. If you didn’t yet read the first two, here are the links:

  • Route Redistribution – Part 1
  • Route Redistribution – Part 2

So far in this series, the route redistribution examples we’ve worked through used a single router to do all of the redistribution between our autonomous systems. However, from a design perspective, we might look at that one router and realize that it’s a potential single point of failure.

For redundancy, let’s think about adding a second router to redistribute between a couple of autonomous systems. What we probably don’t want is for a route to be advertised from, let’s say, AS1 into AS2, and then have AS2 advertise that same route back into AS1, as shown in the figure. 

The good news is, with default settings, that probably won’t be an issue. For example, in the above graphic, router BB2 would learn two ways to get to Network A. One way would be via the OSPF AS to which it’s attached. The other way would be through the EIGRP AS, through router BB1, and back into the OSPF AS. Normally, when a router knows how to get to a network via two routing protocols, it compares the Administrative Distance (AD) values of the routing protocols and trusts the routing protocol with the lower AD. In this example, although EIGRP’s AD is normally 90, which is more believable than OSPF’s AD of 110, the AD of an EIGRP External route (i.e. a route that originated in a different AS) is 170. As a result, BB2’s OSPF-learned route to Network A has a lower AD (i.e. 110) than the AD (i.e. 170) of the EIGRP-learned route to Network A. The result? BB2 sends traffic to Network A by sending that traffic into the OSPF AS, with no need to transit the EIGRP AS.

From time to time, though, we might have some non-default AD settings configured, or we might have some creative metrics applied to redistributed routes. In such cases, we run the risk of the scenario depicted in the previous figure.

Let’s discuss how to combat such an issue. Consider the following topology.

In this topology, we have two autonomous systems, one running OSPF and one running EIGRP. Routers BB1 and BB2 are currently configured to do mutual route redistribution between OSPF and EIGRP. Let’s take a look at the IP routing tables of these backbone routers.

Notice, as just one example, that from the perspective of router BB2, the best way to get to the /30 network is via a next-hop IP address on router R1. That means, if router BB2 wanted to send traffic to the /30 network, that traffic would stay within the OSPF AS.

Interestingly, the EIGRP routing process running on router BB2 also knows how to get to the /30 network due to router BB1 redistributing that route into the EIGRP AS, but that route is considered to be an EIGRP External route. Since an EIGRP External route’s AD of 170 is greater than OSPF’s AD of 110, the OSPF-learned route is injected into router BB2’s IP routing table.

This is how route redistribution typically works when we have more than one router performing route redistribution between two autonomous systems. However, what can we do if things aren’t behaving as expected (or desired)? How can we prevent a route redistributed into an AS from being redistributed out of that AS and back into the original AS, such as in the example shown in the following figure?

In the above example, router R1 advertises the /24 network to router BB1, which redistributes that route from AS1 into AS2. Router R2 receives the route advertisement from router BB1 and sends an advertisement for that route down to router BB2. Router BB2 then takes that newly learned route and redistributes it from AS2 into AS1, whence it came. We probably don’t want that to happen, because it might create a suboptimal route.

A common approach to correcting such an issue is to use a route map in conjunction with a tag. Specifically, when a route is being redistributed from one AS into another, we can set a tag on that route. Then, we can configure all of the routers performing redistribution to block a route with that tag from being redistributed back into its original AS, as depicted in the following figure.

Notice in the above topology, when a route is redistributed from AS1 into AS2, it receives a tag of 10. Also, router BB2 has an instruction (configured in a route map) to not redistribute any routes from AS2 into AS1 that have a tag of 10. As a result, the route originally advertised by router R1 in AS1 never gets redistributed back into AS1, thereby potentially avoiding a suboptimal route.

Next, let’s take a look at how we can configure this tagging approach using the following topology once again. Specifically, on routers BB1 and BB2, let’s set a tag of 10 on any route being redistributed from OSPF into EIGRP. Then, on those same routers, we’ll prevent any route with a tag of 10 from being redistributed from EIGRP back into OSPF.

To begin, on router BB1 we create a route map, whose purpose is to assign a tag value of 10.
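The configuration listing hasn’t survived in this copy of the post, but based on the description that follows, a minimal sketch of the TAG10 route map would be:

```
route-map TAG10
 set tag 10
```

IOS supplies permit and sequence number 10 by default, which is why neither needs to be typed here.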

Notice that we didn’t say permit as part of the route-map statement, and we didn’t specify a sequence number. The reason is that permit is the default action, and the TAG10 route map has only a single entry.

In just a moment, we’ll go to router BB2 and create a route map that prevents any routes with a tag of 10 from being redistributed into OSPF. Also, we’ll want router BB2 to be marking routes it’s redistributing from OSPF into EIGRP with a tag value of 10. That means, we’ll want router BB1 to prevent those routes (with a tag value of 10) from being redistributed back into OSPF. So, while we’re here on router BB1, let’s set up a route map that will accomplish that (i.e. preventing the redistribution of routes with a tag value of 10 into OSPF).
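Again, the original listing is missing from this copy, but a sketch of the DENYTAG10 route map, matching the description below, might be:

```
route-map DENYTAG10 deny 10
 match tag 10
!
route-map DENYTAG10 permit 20
```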

This newly created route map (DENYTAG10) does use the permit and deny keywords, and it does have sequence numbers. Sequence number 10 is used to deny routes with a tag of 10 (it’s just coincidental that those numbers match). Then, we have to have a subsequent sequence number (that we’ve numbered 20) to permit the redistribution of all other routes.

Now that we have our two route maps created, let’s apply the TAG10 route map to EIGRP’s redistribute command (to tag routes being redistributed into EIGRP with a value of 10). Also, we’ll want to apply the DENYTAG10 route map to OSPF’s redistribute command (to prevent the redistribution of routes tagged with a value of 10 back into the OSPF AS).
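The original commands aren’t shown in this copy, but a sketch of how the two route maps might be attached follows; the process/AS numbers (ospf 1, eigrp 100) and the EIGRP seed metric values are assumptions, not taken from the original post:

```
router eigrp 100
 redistribute ospf 1 metric 1000000 1 255 1 1500 route-map TAG10
!
router ospf 1
 redistribute eigrp 100 subnets route-map DENYTAG10
```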

Now, we need to enter a mirrored configuration on router BB2.

Just to make sure our routes are being tagged, let’s check out router R2’s EIGRP topology table.

Notice that all routes redistributed into EIGRP from OSPF now have a tag of 10, and we’ve told routers BB1 and BB2 not to redistribute those routes back into OSPF. That’s how we can solve some of the potential problems that arise with route redistribution.

At this point in our route redistribution series, we’ve discussed the need for and the operation of route redistribution. We configured basic route redistribution, and then saw how we could filter specific routes from being redistributed. Then, in this post, we saw how to prevent a route redistributed from one AS into another from being redistributed back into the original AS. That might be necessary if we find ourselves in a suboptimal routing situation.

We have one more post coming up in our route redistribution series. It’s all about how we can perform route redistribution for IPv6 networks.

Enjoy your studies,

VLAN Security Concepts

Security Nov 13, 2018

A Virtual Local Area Network (VLAN) is a logical grouping of devices on one or more LANs, configured to communicate as if they were on the same segment. In order to communicate with devices in another VLAN, a Layer 3 device must be present for routing.

Private VLAN (PVLAN)

One way to simplify a multi-VLAN deployment is by use of the Private VLAN (PVLAN) feature. PVLANs achieve isolation at Layer 2 between ports in the same VLAN. This is done by designating the ports as one of three types: promiscuous, isolated, or community. Each designation has its own unique set of rules which regulate the ability to communicate with other devices in the same VLAN.

Promiscuous Ports: These ports have the ability to communicate with all other ports within the PVLAN. The default gateway for the network segment would likely be a promiscuous port, since all devices need to be able to communicate with the gateway.

Isolated Ports: These ports have Layer 2 separation from all other ports within the PVLAN, except for promiscuous ports. A PVLAN will block all traffic to an isolated port, except the traffic originating from a promiscuous port. A common example is a hotel or university network, where end users would have Internet access but no access to other clients.

Community Ports: These ports are able to communicate among all other community ports, as well as promiscuous ports. Many enterprise networks will contain community ports, allowing clients to communicate directly with other internal devices such as database or email servers.

Each of these port types is also associated with specific VLAN types, which work together with the port designations to create a PVLAN structure.

  • Primary VLAN: The primary VLAN carries traffic from the promiscuous ports to all other ports in the same VLAN.
  • Isolated VLAN: The isolated VLAN is a secondary VLAN that carries traffic from isolated ports to promiscuous ports.
  • Community VLAN: The community VLAN is also a secondary VLAN and carries traffic among community port members. It’s also responsible for traffic going from community ports to promiscuous ports.
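To make the port and VLAN types concrete, here is a hedged sketch of a PVLAN configuration on Cisco IOS. The VLAN IDs (100 as primary, 101 isolated, 102 community) and the interface names are hypothetical, and the switch would typically need to be in VTP transparent mode for the private-vlan commands to be accepted:

```
vlan 101
 private-vlan isolated
vlan 102
 private-vlan community
vlan 100
 private-vlan primary
 private-vlan association 101,102
!
interface GigabitEthernet0/1
 switchport mode private-vlan host
 switchport private-vlan host-association 100 101
!
interface GigabitEthernet0/24
 switchport mode private-vlan promiscuous
 switchport private-vlan mapping 100 101,102
```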

Native VLAN

The Native VLAN is simply the untagged VLAN on an 802.1Q trunked switchport. The 802.1Q protocol provides a way for Ethernet frames to be tagged with specific VLAN identifiers. Any untagged frames arriving on a trunk port are assumed to be members of the Native VLAN.

When configuring a trunk port, the Native VLAN should be set to the same value on each end in order to avoid Spanning Tree Protocol (STP) loops. By default, the native VLAN is set to VLAN 1. A recommended best practice is to change the Native VLAN to another unused VLAN where no hosts or other devices reside. This is done in order to avoid VLAN hopping attacks such as double-tagging.
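As a sketch (the interface name and VLAN 999 are hypothetical), moving the Native VLAN off VLAN 1 on an 802.1Q trunk might look like:

```
interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk native vlan 999
```

The same native VLAN value would be configured on the switch at the other end of the trunk.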

It’s easy to confuse the ideas of the Native VLAN and the Default VLAN. Just to provide clarity about these terms, the following can be stated:

  • The default VLAN will always be VLAN 1. This is determined by Cisco and cannot be changed. The default VLAN assignment for an access port will always be VLAN 1, unless otherwise specified.
  • In the same way, the default Native VLAN value is VLAN 1, as determined by Cisco.
  • The Native VLAN can be changed to any value, even though by default it is set to VLAN 1. As already stated, it is a recommended best practice to change the Native VLAN to another, non-default value for security reasons.

A few other recommended best practices in regard to VLAN security include the following:

  1. Shut down unused interfaces and place them in a so-called “parking lot” VLAN. This is essentially an unused VLAN where no other clients reside.
  2. Restrict the VLANs allowed on trunk ports to only those that are necessary.
  3. Manually configure access ports with the switchport mode access command.
  4. Disable Cisco Dynamic Trunking Protocol (DTP) in order to prevent unauthorized trunk link negotiation.
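A hedged sketch of practices 1 through 4 on Cisco IOS; the interface range, the “parking lot” VLAN number (999), and the allowed-VLAN list are all hypothetical:

```
! Unused ports: access mode, parking lot VLAN, DTP disabled, shut down
interface range GigabitEthernet0/10 - 20
 switchport mode access
 switchport access vlan 999
 switchport nonegotiate
 shutdown
!
! Trunk port: prune the allowed VLAN list to only what is needed
interface GigabitEthernet0/1
 switchport trunk allowed vlan 10,20,30
```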

3 Tips to Ace the Technical Interview

Career Success May 27, 2019

At some point in your networking career, you are almost certainly going to face the dreaded technical interview. In this blog posting, let’s answer the question, “How do you go in and come across as the candidate that the company wants to hire?”
Certainly, there are lots of books, and there is lots of advice on how to answer different interviewing questions, but in this blog posting I want to give you three specific tips that have worked for me in different interviews.

Tip #1

Tip #1 is to establish rapport. I remember many years ago, I was interviewing with one of the big ten accounting firms for an IT position. On my drive to the office to be interviewed, I was listening to one of my many Anthony Robbins tapes – that shows you how long ago it was. I was actually listening to a tape, not a CD or an mp3.
On that tape, Anthony Robbins was talking about the importance of rapport. He made the statement that “We tend to like people who are like us,” and he talked about how we could establish rapport through matching and mirroring.
In other words, when we are talking with someone, if they make a certain gesture, we could subtly make a similar gesture. If they lean forward, we lean forward. If they are talking really, really fast at a high volume, then we talk really, really fast at a high volume. But if they talk more slowly, then we slow down our pace as well. I decided to try it in my interview.
During my interview, the main person interviewing me had a very distinctive habit that I could easily mirror: when he would say something, he would nod his head up and down very rapidly. I was actually a little concerned about mirroring him, thinking he might notice, but I started mirroring that nod anyway.
As I answered a question, I would nod my head fairly rapidly. I honestly believe that mirroring technique had a lot to do with them offering me the job.
In the end, the company and I did not agree on the salary, so I turned down the job offer. But a few weeks later, he called me and asked, “Do you know somebody else that is like you? We are looking for somebody like you.” I don’t think that was because I had great technical skill at that point, but I do think it had a lot to do with me nodding my head up and down, mirroring his actions. You might want to try to establish rapport by mirroring the actions of the person interviewing you the next time you find yourself in an interview.

Tip #2

The second tip is in regard to how you answer the question, “What is one of your biggest mistakes?” That is one of the classic interviewing questions that many, many people get asked.
One of the standard recommendations from the interviewing books is to answer the question in such a way that you are actually answering with one of your positive characteristics. However, I think that comes across as being disingenuous.
If somebody asks, “What is one of your weaknesses?” or “What is a mistake you’ve made?” some of the advice out there would have you say, “Well, one of my greatest weaknesses is that I’m a perfectionist, and it tends to frustrate me when I see my co-workers not giving the same level of effort that I’m giving.”
The theory is that you have now positioned yourself as somebody who is a perfectionist – you are going to get things done right. You have actually communicated a strength rather than a weakness.
However, personally, I don’t like that approach. As I mentioned, I don’t think that sounds very genuine. What I would prefer you do instead, is to tell a true story of regret where you have the ability to show emotion, and you can show that you have learned from that mistake.
For example, I remember during an interview, I was asked about one of my biggest mistakes. I told them the story that I told you in a previous blog posting, where I had coached one of my direct reports in a location where this employee’s peers could hear what was going on. I expressed in the interview how that was a mistake, how I regretted it, how I went and apologized, and how I never made that mistake again.
I think showing that level of honesty, that level of vulnerability, where you are exposing a true weakness, but showing that it’s no longer a weakness – you’ve learned from it, is a lot more impactful than saying something like, “My greatest strength is that I am a perfectionist.”

Tip #3

Finally, Tip #3 is more focused on the technical aspect of the interview. You see, when you go into a technical interview with a company, they are probably going to be asking you technical questions about their network, about their equipment, about their protocols, and maybe you don’t work with those specific technologies on a day-to-day basis.
I remember when I interviewed with Walt Disney World, they asked me a couple of questions that I did not get completely correct. They asked me a Spanning Tree Protocol (STP) question, and they asked me an EIGRP question. Both of these technologies were big design considerations for them. However, at that time, I hadn’t been regularly working with those technologies. So, I was able to partially answer the questions, but not fully.
Coming out of that technical interview, I thought I may have blown it. I missed two technical questions they asked, but actually, they did offer me the job.
I believe the reason was that in that technical interview, even though I did not perfectly answer the questions they asked, when we were talking about different networking scenarios, I brought up a very complex scenario that I had worked on.
I told them a story about how I configured BGP to do dual-homing to two different Internet Service Providers (ISPs), and how I used autonomous system path prepending to make the path through the higher-bandwidth ISP appear more attractive. I sketched the topology on their whiteboard, and even though BGP was not something they were particularly concerned with, it showed that I had a level of technical expertise, even though I did not give a perfect response to the STP or EIGRP questions.
That’s what I recommend you do. Before going into a technical interview, think about something really complex that you’ve configured. Or, if you cannot think of anything, dig into some CCIE Routing and Switching books, and lab up a scenario.
Do something complex. Do mutual route redistribution between two different autonomous systems, with multiple redistribution points, and prevent a routing loop by using route tags. That is just the first example that came to mind, but come up with some scenario demonstrating you are technically competent, even though you may not know everything about their particular network technologies.
I remember conducting technical interviews for a company that was using a lot of ATM, and I would ask candidates about different ATM concepts. ATM was not that widely known. Somebody might come in knowing about routing and switching, but when it came to ATM and setting up LAN Emulation (LANE), most candidates didn’t really know much about those topics. However, I judged them based on the level of technical expertise they did demonstrate. So, I encourage you: in your technical interview, be ready to display your technical competence by telling a really great story about an advanced technology that you did configure.

The Takeaway

Those are my 3 tips for you:

  • Tip #1: Establish a rapport with the person interviewing you.
  • Tip #2: When you are asked about a weakness, give a genuine story of regret about a failure you’ve had, and discuss how you learned from it and how you will never make that mistake again.
  • Tip #3: Even though you may not know everything about a company’s technical environment, you can come into a technical interview armed with a cool story about something really technical you set up, which demonstrates your technical competence.

OSPF Advanced Concepts – Part 4

CCIE R/S, CCNA R/S, CCNP R/S Sep 24, 2019

In the previous part of our OSPF series, we examined options for manually filtering routes. As we wrap up our look at advanced OSPF topics, we’ll discuss default routes, and compare OSPFv2 with OSPFv3.

Default Routes

We have seen that OSPF can automatically generate a default route when needed. This occurs with some of our special area types. For example, if you configure a totally stubby area, of course a default route is required, and OSPF generates this route automatically from the ABR.

In order to increase flexibility with your designs, default routes injected into a normal area can be originated by any OSPF router. To generate a default route, you use the default-information originate command.

This command presents two options:

  • You can advertise a default route into the OSPF domain, provided the advertising router already has a default route of its own.
  • You can advertise a default route regardless of whether the advertising router already has one. This second method is accomplished by adding the keyword always to the default-information originate command.

Figure 1 – OSPF Topology

Using our simple topology from Figure 1 once again, let’s configure ATL2 to inject a default route into the normal, non-backbone Area 1.
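The original command listing is not reproduced in this copy, but the configuration described would amount to something like the following on ATL2 (the OSPF process ID of 1 is an assumption):

```
router ospf 1
 default-information originate always
```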

Note that in this example, we use the always keyword in order to make sure that ATL2 generates the default route regardless of whether the device already has a default route present in its routing table.

Here is the verification on ORL:
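The output itself hasn’t survived in this copy, but the checks would be along these lines; on ORL we would expect to see a candidate default route learned as an OSPF external route:

```
show ip route 0.0.0.0
show ip ospf database external 0.0.0.0
```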

Comparing OSPFv2 and OSPFv3

As amazing as OSPFv2 is, it cannot route IPv6 prefixes for us. That is the job of OSPFv3. The great news is that you can leverage almost everything you have learned about OSPFv2 when transitioning to the OSPFv3 protocol. There was not a complete redesign of the protocol; as much of the existing functionality and configuration as possible was maintained.

As you will learn in this section, OSPFv3 offers the use of address families in the configuration, making this protocol suited for carrying IPv6 prefixes, or even IPv4 prefixes with the appropriate address family.

For the sake of completion, this section demonstrates the “standard” configuration of OSPFv3, as well as the address family configuration.

It is fun, and important, to distinguish the key similarities and differences between v2 and v3 of the OSPF protocols. Here are the similarities that jump off the page:

  • In OSPFv3, a routing process does not need to be explicitly created. Enabling OSPFv3 on an interface will cause a routing process and its associated configuration to be created.
  • The router ID is still a 32-bit value in OSPFv3 and the process of router ID selection is the same. OSPF automatically prefers a loopback interface over any other kind, and it chooses the highest IP address among all loopback interfaces. If no loopback interfaces are present, the highest IP address in the device is chosen.

Here are some key differences:

  • In OSPFv3, each interface must be enabled using commands in interface configuration mode. This feature is different from OSPF version 2, in which interfaces are indirectly enabled using the device configuration mode.
  • When using a nonbroadcast multiaccess interface in OSPFv3, you must manually configure the device with the list of neighbors. Neighboring devices are identified by their device ID.
  • In IPv6, you can configure many address prefixes on an interface. In OSPFv3, all address prefixes on an interface are included by default. You cannot select some address prefixes to be imported into OSPFv3; either all address prefixes on an interface are imported, or no address prefixes on an interface are imported.
  • Unlike OSPF version 2, multiple instances of OSPFv3 can be run on a link.

Traditional OSPFv3 Configuration

In order to demonstrate (and practice) the OSPFv3 configuration, let’s drop such a topology in right alongside the existing IPv4 topology that we have.

Here is the configuration of our backbone area (Area 0) and non-backbone area (Area 1) using the “traditional” OSPFv3 approach.
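The configuration listing is missing from this copy, but on the ABR (ATL2) it would look roughly like this; the interface names and IPv6 addresses are hypothetical:

```
ipv6 unicast-routing
!
interface GigabitEthernet0/0
 ipv6 address 2001:DB8:0:12::2/64
 ipv6 ospf 1 area 0
!
interface GigabitEthernet0/1
 ipv6 address 2001:DB8:0:1::1/64
 ipv6 ospf 1 area 1
```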

Note how familiar this configuration approach seems, especially if you are a fan of the interface-level approach to configuring OSPFv2. Notice also that we must globally enable the IPv6 unicast routing capability on the device. This is not a default behavior. You should also note that this is not required in order to run IPv6 on interfaces; it is simply a requirement for routing IPv6 traffic on the router.

Here is the configuration of our two other devices:

It is now time for some verification. Note that I will perform all of these on the ORL device for brevity. Notice once again all the wonderful similarities to OSPFv2:

OSPFv3 Address Family Configuration

Let us conclude this chapter with a look at the address family configuration style of OSPFv3. Remember, this capability would permit us to use this single protocol for carrying both IPv4 and IPv6 prefixes.

Here is an example of the OSPFv3 address family configuration approach:
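The original example is missing from this copy, but a hedged sketch of the address family style, with both IPv4 and IPv6 address families enabled, might look like this (the router ID and interface name are hypothetical):

```
router ospfv3 1
 router-id 2.2.2.2
 address-family ipv6 unicast
 exit-address-family
 address-family ipv4 unicast
 exit-address-family
!
interface GigabitEthernet0/0
 ospfv3 1 ipv6 area 0
 ospfv3 1 ipv4 area 0
```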

Notice that if you are already familiar with address families from another protocol (such as BGP), this is a very simple configuration. Also note that your approach to configuring OSPFv3 under interfaces does not change.

That wraps up our look at some advanced OSPF concepts. Until next time, take good care.

OSPF Advanced Concepts – Part 1

CCIE R/S, CCNA R/S, CCNP R/S Aug 13, 2019

The time has arrived to tackle some of the more advanced (and interesting) features of the Open Shortest Path First routing protocol. We begin by examining the configuration and verification of the different OSPF areas. This is an exercise that is not only fun, but one that can really cement the knowledge of how these areas function and why they exist.


Areas are a fundamental concept of OSPF. They are what make the routing protocol hierarchical, as we like to say.

There is a core backbone area (Area 0) that connects to normal, non-backbone areas. The backbone might also connect to special area types we will examine in detail in this chapter. This hierarchical nature of the design helps ensure the protocol is very scalable. We can easily reduce or eliminate unnecessary routing traffic flows and communications between areas if needed. Database sizes are also contained using this approach.

The Backbone and the Non-Backbone Areas

To review a bit from our previous blog posts, Figure 1 shows a simple multi-area network. Here I will configure this network using my personal favorite configuration approach, the interface-level configuration command ip ospf. Example 1 shows the configuration of all three devices.

Figure 1: A backbone and non-backbone area

Example 1: Configuring the Backbone and Non-Backbone Areas
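The example listing hasn’t survived in this copy; a minimal sketch of the three devices, with hypothetical interface names and an assumed process ID of 1, would be:

```
! ATL
interface GigabitEthernet0/0
 ip ospf 1 area 0
!
! ATL2 (the ABR)
interface GigabitEthernet0/0
 ip ospf 1 area 0
interface GigabitEthernet0/1
 ip ospf 1 area 1
!
! ORL
interface GigabitEthernet0/0
 ip ospf 1 area 1
```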

Notice the simplicity of this configuration even though we are running a fairly complex routing protocol. The Area Border Router (ABR) is ATL2 with one interface in the backbone and one in the non-backbone area. 

Notice also how we get some “bonus” verification free of charge. As we are configuring the interfaces, we can see OSPF adjacencies forming between the devices. This saves us from needing to verify these “manually” with the following command:
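The command referenced here is presumably the standard OSPF adjacency check:

```
show ip ospf neighbor
```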

An interesting verification for us here is to check for the loopback prefix from the ATL device (as well as the remote link between ATL and ATL2). We check for this on ORL to verify the multi-area configuration of OSPF. Since this is a “normal” area, all LSAs should be permitted in the area, and we should see the prefix appear as an inter-area OSPF route.

While not often required in troubleshooting, we can examine the OSPF database in order to see the different types of LSAs in place.

The router link state entries are the Type 1 LSAs. These are the endpoints in our local area, Area 1. The net link state entries are the Type 2 LSAs. Here we see the router ID of the Designated Router. Finally, the summary net link states are the Type 3 LSAs. These are the prefixes the ABR sends into our area. Sure enough, they are the ATL loopback network and the remote network.

NOTE: The loopback interface is advertised as a 32-bit host route. To change this, you can just use the command ip ospf network point-to-point under the loopback interface. This changes the OSPF network type from the loopback type and causes the mask to be advertised as it is configured.

Now it is time to add to the story here. Let’s configure some external prefixes and inject them into the OSPF domain. This is simple thanks to loopback interfaces. We will create some on the ATL router, run EIGRP on them, and then redistribute them into OSPF.
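The listing is missing from this copy; a sketch of the idea on ATL, where the loopback addresses and EIGRP AS number are hypothetical:

```
interface Loopback100
 ip address 10.100.0.1 255.255.255.0
interface Loopback101
 ip address 10.101.0.1 255.255.255.0
!
router eigrp 100
 network 10.100.0.0 0.0.0.255
 network 10.101.0.0 0.0.0.255
!
router ospf 1
 redistribute eigrp 100 subnets
```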

Now we have even more interesting verifications on the ORL device. First, the routing table:

Notice how the remote prefixes are listed as E2 routes. This is the default: external OSPF routes are Type 2 unless you specify otherwise. It means that the metric is unchanged as the prefix flows from the ASBR (Autonomous System Boundary Router) to the internal OSPF speaker. You can change the type to Type 1 if you desire when you are performing redistribution.
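As a sketch (process numbers assumed, as elsewhere in this series), changing the redistributed routes to Type 1, so that internal OSPF cost is added to the seed metric as the prefix propagates:

```
router ospf 1
 redistribute eigrp 100 subnets metric-type 1
```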

Perhaps of more interest is the OSPF database:

Notice now we pick up the Type 4 LSA (summary ASB link state), which provides the router ID of the ASBR (ATL). We also get the Type 5 LSAs that are the external prefixes.

That’s going to wrap up the first part of our advanced OSPF blog series. Next time, we’ll take a look at creating stubby areas, totally stubby areas, not-so-stubby areas (NSSA), and totally NSSA areas. Until then, take good care.

Cisco’s New DevNet Certifications

Network Programmability Jun 12, 2019

One of the big announcements this week at Cisco Live was the launch of their new DevNet certification track. Cisco CEO Chuck Robbins reiterated the fact that knowledgeable engineers are always going to be in demand. Contrary to what many believe, network automation and A.I. integration are not designed as a replacement for those skills; rather, these advancements allow numerous network devices and their services to be managed through software. For large-scale networks, the use of APIs for automation is the way of the future.

The launch of this new certification track is aimed at joining the skills of software developers with network professionals, with the goal of accelerating the progress of network automation in organizations throughout the world.

Here’s a breakdown of the current DevNet certification offerings:

DevNet Associate

This entry-level certification is accessible to those who are “early-in-career” developers, and also experienced network engineers. Recommended experience is one or more years of developing and/or maintaining applications built on Cisco platforms. Hands-on programming experience is recommended as well, specifically the Python language.

DevNet Specialist

This is ideal for those who have three to five years of experience with application development, operations, security, or infrastructure. You also have the option to choose a particular focus area under the umbrella of Software Specialist or Automation Specialist, with options listed below.

Software Specialist options:

  • Core
  • DevOps
  • IoT
  • Webex

Automation Specialist Options:

  • Collaboration Automation
  • Data Center Automation
  • Enterprise Automation
  • Security Automation
  • Service Provider Automation

DevNet Professional

This certification requires one core exam, plus one concentration exam. It is recommended for developers who have at least three to five years of experience designing and implementing applications built on Cisco platforms, with experience in Python. This also includes experienced engineers who want to learn more about software and automation.

DevNet Expert

Currently, the expert certification offering is not available. However, this is listed as a “Future Offering” on the DevNet Certification website.

For more in-depth information about a particular DevNet track, check out the Cisco DevNet certification website.

Take care,

OSPF Basic Concepts – Part 3

CCIE R/S, CCNA R/S, CCNP R/S Jul 30, 2019

Before we move on to more advanced topics, we’ll wrap up this OSPF Basics series in Part 3. Here we’ll examine LSA types, area types, and virtual links.


Link State Advertisements (LSA) are the lifeblood of an OSPF network. The flooding of these updates (and the requests for this information) allow the OSPF network to create a map of the network. This occurs with a little help from Dijkstra’s Shortest Path First Algorithm. 

Not all OSPF LSAs are created equal. Here is a look at each:

The Router (Type 1) LSA – We begin with what many call the “fundamental” or “building block” Link State Advertisement. The Type 1 LSA (also known as the Router LSA) is flooded within an area. It describes the interfaces of the local router that are participating in OSPF and the neighbors the local OSPF speaker has established.

The Network (Type 2) LSA – Remember how OSPF functions on an Ethernet (broadcast) segment: it elects a Designated Router (DR) and Backup Designated Router (BDR) in order to reduce the number of adjacencies that must be formed and the chaos that would result from a full mesh of these relationships. The Type 2 LSA is sent by the Designated Router into the local area. This LSA describes all of the routers that are attached to that Ethernet segment.

The Summary (Type 3) LSA – Recall that your Type 1 and Type 2 LSAs are sent within an area. We call these intra-area LSAs. Now it is time for the first of our inter-area LSAs. The Summary (Type 3) LSA is used for advertising prefixes learned from the Type 1 and Type 2 LSAs into a different area. The Area Border Router (ABR) is the OSPF device that separates areas and it is this device that advertises the Type 3 LSA.

Examine the OSPF topology shown in Figure 1 below.

Figure 1: A Sample Multi-Area OSPF Topology

The Area 1 ABR would send the Type 3 LSAs into Area 0. The ABR joining Area 0 and Area 2 would send these Type 3 LSAs into Area 2 to provide full reachability in the OSPF domain. The Type 3 LSAs remain Type 3 LSAs during this journey; only the OSPF costs and advertising-router details change in the advertisements. Notice also that in this example we are describing a multi-area OSPF design that is not using any special area types like Stub or Totally Stubby areas.

The ASBR Summary (Type 4) LSA – There is a special router role in OSPF called the Autonomous System Boundary Router (ASBR). It is the job of this router to bring in external prefix information from another routing domain. In order to inform routers in different areas about the existence of this special router, the Type 4 LSA is used. This Summary LSA provides the router ID of the ASBR. So once again, the Area Border Router is responsible for shooting this information into the next area, and we have another example of an inter-area LSA.

The External (Type 5) LSA – So the ASBR is the device that is bringing in prefixes from other routing domains. The Type 4 LSA describes this device. But what LSA is used for the actual prefixes that are coming in from the other domain? Yes, it is the Type 5 LSA. The OSPF ASBR creates these LSAs and they are sent to the Area Border Routers for dissemination into the other areas. Remember, this might change if we are using special area types.

The NSSA External (Type 7) LSA – Remember that in OSPF there is a VERY special area type called a Not So Stubby Area (NSSA). This area can act like a stub area, but it can also bring in external prefixes from an ASBR. These prefixes are sent as Type 7 LSAs. When an ABR gets these Type 7 LSAs, it sends them along into the other areas as Type 5 LSAs. So, the Type 7 designation is just for that very special NSSA area functionality.

Other LSA Types – Are there other LSA types? The short answer is YES. But we do not often encounter these. For example, a Type 6 LSA is used for Multicast OSPF, and that technology never really caught on, allowing Protocol Independent Multicast to win out. For completeness’ sake, here is a complete listing of all of the possible LSA types:

  • LSA Type 1: Router LSA
  • LSA Type 2: Network LSA
  • LSA Type 3: Summary LSA
  • LSA Type 4: Summary ASBR LSA
  • LSA Type 5: Autonomous system external LSA
  • LSA Type 6: Multicast OSPF LSA
  • LSA Type 7: Not-so-stubby area LSA
  • LSA Type 8: External attribute LSA for BGP
  • LSA Types 9, 10, and 11: “Opaque” LSA types used for application-specific purposes

OSPF LSA Types and Area Types

One of the reasons you should master the different LSA types is that doing so helps you fully understand the potential importance of a multi-area design, especially one that might include special areas. A key to the importance of special area types in OSPF is that they trigger the automatic filtering of certain LSAs from certain areas.

For example, think about Area 1 attached to the backbone area of Area 0. There are Type 1 LSAs flooding in this Area 1. If we have broadcast segments, we also have Type 2 LSAs circulating in the area. The Area Border Router is sending LSA Type 3s into the backbone to summarize the prefix information from Area 1.

This ABR is also taking in this information from the backbone for other areas that might exist. If there is an ASBR out there in the domain somewhere, our Area 1 will receive Type 4 and Type 5 LSAs in order to know the location of this ASBR and the prefixes it is sharing with us. Note that this represents the potential for a lot of information being shared between areas. This is precisely why we have special area types!

OSPF LSAs and the Stub Area

What is it that we want to accomplish with a stub area? We do not want to hear about those prefixes that are external to our OSPF domain. Remember what those were? Sure, they are the Type 5 LSAs. In fact, we do not even want to hear about those Type 4 LSAs that are used to call out the ASBR in the network. So the stub area is full of Type 1, Type 2, and Type 3 LSAs. In fact, how would this area get to one of those external prefixes if it needed to? We typically use a very special Type 3 LSA for this. This LSA represents the default route ( It is this handy route that allows devices in this area to get to all of those externals. In fact, it is this route that is used to get to any prefix not specifically defined in the Routing Information Base (RIB).

OSPF LSAs and the Totally Stubby Area

With this area, we want very little inside it from an LSA perspective. It makes sense that we are blocking those Type 4 and Type 5 LSAs once again, but now we are even blocking the Type 3 LSAs that describe prefix information from other areas WITHIN our OSPF domain. There needs to be one big exception, however. We need a Type 3 LSA for a default route so we can actually get to other prefixes in our domain.

OSPF LSAs and the Not So Stubby Area and the Totally Not So Stubby Area

Remember, the Not So Stubby Area needs to have those Type 7 LSAs. These Type 7 LSAs permit the proliferation of those external prefixes that are entering your OSPF domain thanks to this NSSA area you created. Obviously, this area also has the Type 1, Type 2, and Type 3 inside it. Type 4 and Type 5 will be blocked from entering this area as you would expect. You can also create a Totally Not So Stubby Area by restricting Type 3s from this area.

Virtual Links

You might recall from our earlier discussion of OSPF that all areas in an OSPF autonomous system must be physically connected to the backbone area (Area 0). Where this is not possible, you can use a virtual link to connect to the backbone through a non-backbone area. 

Keep the following facts in mind about your virtual links:

  • They should never be considered a design goal in your networks. They are a temporary “fix” for a violation of the rules of OSPF.
  • You can also use virtual links to connect two parts of a partitioned backbone through a non-backbone area.
  • The area through which you configure the virtual link, known as a transit area, must have full routing information.
  • The transit area cannot be a stub area. 

You create the virtual link with a single area virtual-link command in OSPF router configuration mode.

This command creates a virtual link through area 1 to a remote OSPF device, identified by its Router ID. You configure that remote OSPF device with a matching virtual-link command as well, pointing back at the local device’s Router ID.
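A minimal sketch of both sides of the configuration, using hypothetical Router IDs ( locally, on the remote ABR) since the original example’s values are not shown:

```
! Local ABR (Router ID -- build the virtual link through
! transit area 1 toward the remote device's Router ID
router ospf 1
 area 1 virtual-link
!
! Remote ABR (Router ID -- mirror the command, pointing
! back at the local device's Router ID
router ospf 1
 area 1 virtual-link
```

Both ends must agree on the transit area (area 1 here), and each side points at the other’s OSPF Router ID, not an interface address.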

NOTE: A virtual link is just one patch for a broken OSPF implementation. You could also use a GRE tunnel to patch the OSPF topology.

That wraps up our look at basic OSPF concepts. In the upcoming series of blog posts, we’ll begin examining advanced topics and configuration. Until then, take good care.

OSPF Basic Concepts – Part 2

CCIE R/S, CCNA R/S, CCNP R/S | Jul 23, 2019

In the previous blog post, we looked at a few fundamental OSPF concepts, including neighbor and adjacency formation. As we continue through the basics of OSPF, this post will examine router roles, timers, and metric calculation.

Designated Router (DR) and Backup Designated Router (BDR)

A designated router (DR) is the router interface that wins an election among all routers on a multiaccess network segment such as Ethernet. A backup designated router (BDR) is the router that becomes the designated router if the current designated router has a failure on the network. The BDR is the OSPF router with the second highest priority at the time of the last election. OSPF uses the DR and BDR concept to assist with efficiencies in the operations of OSPF.

Keep in mind that a given OSPF speaker in your network can have some interfaces that are designated, others that are backup designated, and still others that are non-designated. If no router is a DR or a BDR on a given subnet, the BDR is first elected, and then a second election is held for the DR.

What are the criteria for the DR election process? The DR is elected based on the following default criteria:

  • If the priority setting on an OSPF router is set to 0, it can never become a DR or BDR.
  • When a DR fails and the BDR takes over, there is another election to see who becomes the replacement BDR.
  • The router sending the Hello packets with the highest priority wins the election.
  • If two or more routers tie with the highest priority setting, the router sending the hello with the highest Router ID wins.
  • Typically, the router with the second highest priority number becomes the BDR.
  • The priority values range between 0 – 255, with a higher value increasing its chances of becoming DR or BDR.
  • If a higher priority OSPF router comes online after the election has taken place, it will not become DR or BDR until there is a failure with an existing DR or BDR. We call this non-preemptive in networking. 

Remember, all routers in a multiaccess network segment will form an adjacency with the DR and BDR. Every time a router sends an update, it sends it to the DR and BDR on the multicast address (AllDRouters). The DR will then send the update out to all other routers in the area using the multicast address (AllSPFRouters).

Thanks to this process, all routers do not have to constantly update each other and can get all their updates from a single source. Note that the use of multicasting further reduces the network load.

DRs and BDRs are always elected on OSPF broadcast networks as described above. DRs can also be elected on NBMA (Non-Broadcast Multi-Access) networks such as Frame Relay or ATM. DRs or BDRs are not elected on point-to-point links.

Example 1 shows how you can set the priority for the DR election process. This example also shows a sample verification.

Example 1: Setting the Priority of an OSPF Interface
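Since the original example output is not reproduced here, this sketch shows how the priority might be set (the interface name and priority value of 100 are assumptions, not values from the original example):

```
! Raise the priority on this segment to favor this router in the
! DR/BDR election (default priority is 1; 0 means never DR/BDR)
interface GigabitEthernet0/1
 ip ospf priority 100
!
! Verification: check the Priority, State, and DR/BDR fields with:
!   show ip ospf interface GigabitEthernet0/1
```

Remember the election is non-preemptive, so changing the priority on a live segment does not displace the current DR.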

OSPF Timers

While there are many timer values with OSPF, there are two that are critical for you to understand. The hello timer controls how often the router sends routine messages to its neighbors to indicate its continued health. If the neighbors don’t hear any hello messages for a length of time defined by the dead-interval, they assume that the router is no longer reachable and drop it from the adjacency table.

The default values are:

  • 10 seconds for the hello time
  • 40 seconds for the dead time

While you can lower the timer values from their defaults, doing so will generate more traffic on the link. Also realize that if you set the dead time too aggressively, you run the risk of declaring a neighbor down when the only true issue on the link was temporary congestion.

Example 2 shows a setting of the OSPF timer values. Keep in mind you would set the timers to match on your neighboring devices as well.

Example 2: Setting the OSPF Timers
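As a sketch of what this configuration might look like (the interface name and timer values are assumptions):

```
! Send hellos every 5 seconds and declare the neighbor down after
! 20 seconds of silence (keeping the default 4x hello-to-dead ratio)
interface GigabitEthernet0/1
 ip ospf hello-interval 5
 ip ospf dead-interval 20
```

The hello and dead intervals must match between neighbors on a segment, or the adjacency will not form.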

Metric Calculation

Remember, the OSPF metric is the cost, which is based on bandwidth by default. The formula to calculate the cost is reference bandwidth divided by interface bandwidth. By default, OSPF uses a reference bandwidth of 100 Mbps for the cost calculation. For example, in the case of 10 Mbps Ethernet, the cost is 100 Mbps / 10 Mbps = 10.

You can change the reference bandwidth for the cost calculation. Be sure to do this on all of your OSPF speakers should you decide to manipulate it. You do this with the auto-cost reference-bandwidth command under the OSPF routing process. Example 3 shows this command in action.

Example 3: Setting the Reference Bandwidth for OSPF
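A sketch of the command (the value of 10000, i.e. 10 Gbps, is an assumption chosen for illustration):

```
! Treat 10 Gbps as the reference bandwidth: a 10 Gbps link now
! costs 1, a 1 Gbps link costs 10, and a 100 Mbps link costs 100
router ospf 1
 auto-cost reference-bandwidth 10000
```

The value is entered in Mbps, and IOS will remind you to apply the same value on all routers in the domain so that costs stay consistent.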

NOTE: You can override the OSPF cost calculation for interfaces by setting the cost directly on the interface. You do this with the ip ospf cost command under the interface.

That’s going to wrap up Part 2 of our OSPF Basic Concepts series. Before we move on to advanced concepts, next time we’ll cover OSPF LSA Types and Area Types in Part 3. Until then, take good care.

OSPF Basic Concepts – Part 1

CCIE R/S, CCNA R/S, CCNP R/S | Jul 09, 2019

The Open Shortest Path First (OSPF) dynamic routing protocol is one of the most beloved inventions in all of networking, widely adopted as the Interior Gateway Protocol (IGP) of choice for many networks. In this blog series, you’ll be introduced first to the basic concepts of OSPF and learn about its various message types and neighbor formation.

An Overview of OSPF

Where does the interesting name come from when it comes to OSPF? It is from the fact that it uses Dijkstra’s algorithm, also known as the shortest path first (SPF) algorithm. OSPF was developed so that the shortest path through a network was calculated based on the cost of the route. This cost value is derived from bandwidth values in the path. Therefore, OSPF undertakes route cost calculation on the basis of link-cost parameters, which you can control by manipulating the cost calculation formula.

As a link state routing protocol, OSPF maintains a link state database. This is a form of a network topology map. Every OSPF router on the network maintains this link state database and the map to network destinations. Included in this link state database for each prefix is the OSPF cost value. Remember, the OSPF algorithm allows every router to calculate the cost of the routes to any given reachable destination.

A router interface running OSPF will advertise its link cost to its OSPF neighbors. This prefix and cost information is cascaded through the network as OSPF routers advertise the information they receive from one OSPF neighbor to all other OSPF neighbor routers. This process of flooding link state information through the OSPF network is known as synchronization. Based on this information, all routers with OSPF implementation continuously update their link state databases with information about the network topology and adjust their routing tables.

We like to refer to OSPF as a hierarchical routing protocol. This is because an OSPF network can be subdivided into routing areas to simplify administration and optimize traffic and resource utilization. We identify areas by 32-bit numbers, which we can express either as simple decimal numbers or in the same dotted decimal notation used for IPv4 addresses.

By definition, Area 0 (or represents the core or backbone area of an OSPF network. Any other areas you might create (Area 10, Area 20, etc.) must be connected to the backbone Area 0. If your areas are not connected as described, you must engage in a workaround procedure such as a Virtual Link (described later in this blog series). You maintain connections between your areas with an OSPF router known as an area border router (ABR). An ABR maintains separate link-state databases for each area it serves and maintains summarized routes for all areas in the network.

Neighbor and Adjacency Formation

Because OSPF is a complex routing protocol (unlike a much simpler protocol such as RIP), it uses different neighbor states in its operation. You should know and understand these different states not just for the academic exercise of possessing this knowledge. These states can become critical in your OSPF support and troubleshooting efforts. For example, you might have a router “stuck” in one of these states, and that information can prove critical in fixing the problem.

Here are the states and information you should know about each of them:

Down – This is the first (or initial) OSPF neighbor state. It means that no information (hellos) has been received from a neighbor, but hello packets can still be sent.

During the fully adjacent neighbor state (the Full state), if a router doesn’t receive a hello packet from a neighbor within the RouterDeadInterval time, or if the manually configured neighbor is being removed from the configuration, then the neighbor state changes from Full to Down. Remember, the RouterDeadInterval time is 4 times the HelloInterval by default for OSPF.

Attempt – This state is only valid for manually configured neighbors in a Non-Broadcast Multiaccess (NBMA) environment. In this state, the router sends unicast hello packets every poll interval to the neighbor from which hellos have not been received within the dead interval.

Init – This state specifies that the router has received a hello packet from its neighbor, but the receiving router’s ID was not included in the hello packet. When a router receives a hello packet from a neighbor, it should list the sender’s router ID in its hello packet as an acknowledgment that it received a valid hello packet.

2-Way – This state designates that bi-directional communication has been established between two routers. Bi-directional means that each router has seen the other’s hello packet. 

This state is attained when the router receiving the hello packet sees its own Router ID within the received hello packet’s neighbor field. In this state, a router decides whether to become adjacent to this neighbor. On broadcast media and non-broadcast multiaccess networks, a router becomes full only with the designated router (DR) and the backup designated router (BDR); it stays in the 2-way state with all other neighbors. On Point-to-Point and Point-to-Multipoint networks, a router becomes fully adjacent with all connected routers.

NOTE: At the end of this state, the DR and BDR for broadcast and non-broadcast multi-access networks are elected.

Exstart – Once the DR and BDR are elected, the actual process of exchanging link state information can start between the routers and their DR and BDR. In this state, each pair of routers establishes a master/slave relationship and agrees on the initial sequence number for the database descriptor exchange.

Exchange – In the exchange state, OSPF routers exchange database descriptor (DBD) packets. Database descriptors contain link-state advertisement (LSA) headers only and describe the contents of the entire link-state database. Routers also send link-state request packets and link-state update packets (which contain the entire LSA) in this state. The contents of the DBD received are compared to the information contained in the router’s link-state database to check if new or more current link-state information is available with the neighbor.

Loading – In this state, the actual exchange of link state information occurs. Based on the information provided by the DBDs, routers send link-state request packets. The neighbor then provides the requested link-state information in link-state update packets. During the adjacency, if a router receives an outdated or missing LSA, it requests that LSA by sending a link-state request packet. All link-state update packets are acknowledged.

Full – In this state, routers are fully adjacent with each other. All the router and network LSAs are exchanged and the routers’ databases are fully synchronized.

Remember that the full state is the normal state for an OSPF router. If a router is stuck in another state, it is an indication that there are problems in forming adjacencies.

While a much more complex IGP than something like RIP, surprisingly, the configuration is quite simple. In fact, there are currently two options for its configuration at the command line. You can configure OSPF using a network statement under the routing process (much like other routing protocols), or you can configure the interfaces to run OSPF in interface configuration mode. Examples 1 and 2 demonstrate the two configuration approaches using the topology shown in Figure 1.

Figure 1: A Sample OSPF Topology

Example 1: Configuring OSPF Using Network Statements
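Since the figure’s addressing is not reproduced here, this sketch uses assumed interface addresses on the ATL2 ABR to illustrate the network-statement style:

```
! ATL2 (ABR) -- the wildcard masks match each
! interface address exactly, placing one link in each area
router ospf 1
 network area 0
 network area 1
```

Each network statement simply selects which interfaces run OSPF and in which area; it does not, by itself, advertise the prefix.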

Notice in these configurations that we identify the local OSPF process on the router using a locally significant process ID value. In the example, the process ID of 1 is chosen for all the routers. Also notice that the network statement must contain a wildcard mask that indicates the significant bits in the network value that precedes it in the command. On the ATL2 Area Border Router, a wildcard mask of (matching all 32 bits) is used to place one interface in Area 0 and another in Area 1.

Example 2: Configuring OSPF Using Interface-Level Commands
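A sketch of the interface-level style, again with assumed interface names:

```
! Create the process, then enable OSPF per interface
router ospf 1
!
interface GigabitEthernet0/0
 ip ospf 1 area 0
!
interface GigabitEthernet0/1
 ip ospf 1 area 1
```

No network statements are needed here; the ip ospf <process-id> area <area-id> command under each interface does the work.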

Example 3: Verifying OSPF
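The original output is not reproduced here, but typical verification commands would include:

```
! Confirm neighbors and their state (look for FULL)
show ip ospf neighbor
!
! Confirm which interfaces are running OSPF and in which area
show ip ospf interface brief
!
! On ORL, confirm the inter-area (O IA) routes are in the RIB
show ip route ospf
```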

Notice the power of the show ip ospf neighbor command in order to verify the neighbor relationships and their state. On ORL, we examine the routing table to ensure we are learning the OSPF Inter Area route.

That’s going to wrap up Part #1 of our OSPF series. Coming up in Part #2, we’ll take a look at the Designated Router and Backup Designated Router election process. Take good care.

New CCNP Certifications Rundown

CCNP | Jun 11, 2019

By now I’m sure you’ve heard of the sweeping changes Cisco is making to their certification tracks, which was announced at Cisco Live on Monday June 10, 2019. I covered the CCNA exam changes in a previous post, so here I’ll specifically address updates to the CCNP track.

First, if you’ve already started working toward any current CCNP certification – keep going!  You have until February 24, 2020 to complete your certification, and in the new program, you’ll receive credit for work you’ve already completed.

Let’s begin by looking at the current list of CCNP certifications, set to expire next February:

  • CCNP Routing and Switching
  • CCNP Collaboration
  • CCNP Wireless
  • CCNP Data Center
  • CCNP Security
  • CCNP Service Provider
  • CCDP

Now, here are the new CCNP certifications that will be rolling out:

  • CCNP Enterprise
  • CCNP Collaboration
  • CCNP Data Center
  • CCNP Security
  • CCNP Service Provider
  • Cisco Certified DevNet Professional

You may notice the absence of CCNP Routing and Switching, CCNP Wireless, and CCDP. These will all be retired; Cisco will instead offer multiple paths to achieving the new CCNP Enterprise certification, along with new Specialist certifications based on which of the three tracks you choose to go down. To further clarify this, Cisco has provided a migration tool for their professional exam tracks, which you can access here: CCNP Migration Tool.

The important thing to make clear is that if you pass the full exam path for any CCNP track before the February deadline, you will be granted the equivalent certification under the new program. For example, if you pass the current CCNP SWITCH, CCNP ROUTE, and CCNP TSHOOT before February 24, you will receive the new CCNP Enterprise certification, plus the appropriate Specialist certifications (see the migration tool for more info on those) as outlined by Cisco.

Now, if you are starting fresh under the new CCNP program, each CCNP certification requires only two exams: one core exam and one concentration exam of your choice. To see what this looks like for your particular concentration, visit Cisco’s Professional Level certification page here: Cisco Professional Certification Updates.

One last interesting thing of note is that the core exams in each technology track also serve as qualifying exams for CCIE lab exams. This means there will be no more written CCIE exams necessary before the lab attempt. So for example, let’s say you have a current CCNP in Routing and Switching. After the February deadline, you will be granted the CCNP Enterprise certification, along with the Cisco Certified Specialist – Enterprise Core certification. The Enterprise core exam (350-401 ENCOR) is a prerequisite for the new CCIE Enterprise Infrastructure lab, in place of a written exam. For more about the expert level certification updates, check here: Cisco Expert Certification Updates.

It’s a lot to get your head around, but all of these changes look like a step in the right direction. Cisco’s goal was to streamline the certification process and make things more accessible, with fewer exams necessary for completing specific concentrations, and it certainly seems like they’ve done that. Stay tuned for more about this big announcement.

Take care,