Assessing Network Management Challenges

Network management doesn’t have to be overly complex, but a clear understanding of what needs to be accomplished is important. In a previous blog series I talked about the need for a tools team to help in this process; a cross-functional team can be critical in defining these criteria.

  1. Determine What is Important—What is most important to your organization is likely different from what matters most to your peers at other organizations, even if similar in certain regards. Monitoring everything isn’t realistic and may not even be valuable if nothing is done with the data being collected. Zero in on the key metrics that define success and determine how best to monitor them.
  2. Break it Down into Manageable Pieces—Once you’ve determined what is important to the business, break that down into more manageable portions. For example, if blazing-fast website performance is needed for an e-commerce site, consider dividing this into network, server, services, and application monitoring components.
  3. Maintain an Open System—There is nothing worse than being locked into an inflexible solution. Leveraging APIs that can tie disparate systems together is critical in today’s IT environments. Strive for a single source of truth for each of your components and exchange that information via vendor integrations or APIs to make the system better as a whole (a small sketch of this kind of API glue follows this list).
  4. Invest in Understanding the Reporting—Make the tools work for you; a dashboard is simply not enough. Most of the enterprise tools out there today offer robust reporting capabilities, yet these often go unused.
  5. Review, Revise, Repeat—Monitoring is rarely a “set and forget” item; it should be in a constant state of improvement, integration, and evaluation to enable better visibility into the environment and the ability to deliver on key business values.
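
To make the third point a little more concrete, here is a minimal sketch of the kind of API glue being described: pull data out of one monitoring system and feed a trimmed-down version into another system of record. The endpoints, field names, and token below are hypothetical placeholders rather than any specific vendor's API, so treat this as a pattern, not a drop-in script.

```python
# Hypothetical sketch: pull interface utilization from one monitoring system's
# REST API and push a summary into another system of record. All endpoints,
# field names, and credentials are placeholders, not a real vendor API.
import requests

NMS_URL = "https://nms.example.com/api/v1/interfaces"   # placeholder endpoint
CMDB_URL = "https://cmdb.example.com/api/v1/metrics"    # placeholder endpoint
API_TOKEN = "changeme"                                  # placeholder credential
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

def fetch_interface_stats():
    """Read interface utilization records from the monitoring system."""
    resp = requests.get(NMS_URL, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()

def push_summary(records):
    """Forward only the fields the downstream system cares about."""
    summary = [
        {"device": r["device"], "interface": r["name"], "utilization_pct": r["utilization"]}
        for r in records
        if r.get("utilization", 0) >= 80  # only escalate busy links
    ]
    resp = requests.post(CMDB_URL, json=summary, headers=HEADERS, timeout=10)
    resp.raise_for_status()

if __name__ == "__main__":
    push_summary(fetch_interface_stats())
```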

The Art of Simplicity is a Puzzle of Complexity

As network engineers, administrators, architects, and enthusiasts, we are seeing a trend of relatively complicated devices that all strive to provide unparalleled visibility into the inner workings of applications or security. Inherent in these solutions is a level of complexity that challenges network monitoring tools; in many cases vendors are pitching proprietary tools capable of extracting the maximum amount of data out of a specific box. Just this afternoon I sat on a vendor call with a customer in which we did a technical deep dive on a next-generation firewall with a very robust feature set. Inevitably the pitch was made to consider a manager of managers that could consolidate all of this data into one location. While valuable in its own right for visibility, this perpetuates the problem of many “single panes of glass.”
I couldn’t help but think that what we really need is the ability to follow certain threads of information across many boxes, regardless of manufacturer—threads such as application performance or flows, security policies, and so on. Standards-based protocols and vendors that are open to working with others are ideal, as they foster the creation of ecosystems. Automation and orchestration tools offer this promise, but they add additional layers of intricacy: knowing scripting languages, a willingness to work with open source platforms, and so on.
Additionally, any time we abstract or simplify a layer, we lose something in the process—this is known as generation loss. Compounding that loss across many devices or layers of management tends to produce data that is incomplete or, worse, inaccurate, yet this is the data we intend to use to make our decisions.
Is it really too much to ask for simple and accurate? I believe this is where the art of simplicity comes into play. Creating an environment in which the simple is useful and obtainable requires creativity, attention to detail, and an understanding that no two environments are identical. In creating this environment, it is important to address what exactly will be made simple and by what means. With a clear understanding of the goals in mind, I believe they can be achieved, but the decisions on equipment, management systems, vendors, partners, and so on need to be well thought through, and the right amount of time and effort must be dedicated to them.

Experiencing the Connected Mobile Experiences


I had an opportunity to attend a Mobility BU-hosted training at Cisco HQ in Santa Clara. This training covered Hyperlocation, Connected Mobile Experiences (CMX), and the Enterprise Mobility Services Platform (EMSP). I had been looking forward to this ever since I received the invite, having invested time in the solution as early as 2013. These technologies are unified in purpose in that each of them has a role to play in transforming the end-user experience and enabling businesses to engage with their customers in new and interesting ways.

Hyperlocation
As one of the Wireless Field Day 8 delegates, I had an opportunity to see the Hyperlocation module (HALO) up close and personal; however, we never got a chance to actually play with it. For those interested, I wrote a detailed blog post about the technology after the WFD8 event. This time around, however, we not only spent time talking through the technology and its use cases, we actually spent time playing with it in the CMX lab at Cisco HQ. Seeing Hyperlocation in action is impressive, and the accuracy was within one meter as advertised. While the location accuracy is great, what is really intriguing is that the network is aware of where the user is rather than relying on the user to interact with a beacon or something similar. I had the opportunity to walk around the floor space with an iPhone 6 Plus and watch its movement on the screen. The response was impressively crisp for being 100% Wi-Fi based, though not quite as smooth as beacon-based movement tracking. The distinction is important: beacons require a user to be using their app to adequately engage, whereas Hyperlocation is simply the network being inherently aware of the device and its movement.

Detect. Connect. Engage.
Cisco’s CMX software works by detecting the presence of a device on the wireless network. Presence simply means the device is local to a given access point; it does not require location. Location is an option, however, and can be accomplished through standard triangulation or by the addition of the HALO module. Connection is the process of getting the user to opt in through a captive portal, SMS, social media, or a mobile app. Some organizations are challenged with mobile app adoption, so alternatives are a welcome addition. Lastly, once the user is connected, engaging with them in new and innovative ways is the goal of the platform.

My Connected Mobile Experience (CMX)
Playing with CMX at the Cisco lab was fantastic—we walked around with various devices, ranging from phones to Ava, the telepresence robot, who drove herself around the lab. Our movements generated a ton of data for CMX, which we could then use to send notifications, trigger an action, etc. The reports and analytics offered around these actions are simple to navigate and provide powerful insights for organizations.

Enterprise Mobility Services Platform (EMSP)
EMSP is an open, cloud-hosted mobile application platform that provides an intelligent way to deliver customer engagement and is used with CMX to leverage location-based services. Once a customer’s location is acquired, the EMSP Wi-Fi-enabled, browser-based captive portal provides a mobile experience specific to the location of the mobile device user, who they are, and what they’re doing. EMSP then provides event-based, actionable insights that enable improved monetization and the conversion of customers from looking to buying, and from general presence to engaged interaction. In addition, the EMSP solution includes a tool suite for rapidly and dynamically updating content for the context-aware mobile experience. With this in mind, EMSP simplifies and accelerates time to deployment. It has the intelligent hooks to act upon the insights provided by CMX location services to improve the client experience, influence behavior, solicit feedback, and automate workflow.

Bluetooth World - Day One Recap


My Bluetooth World day one started with a great conversation over breakfast as I presented on the need and opportunity for innovation in healthcare using Bluetooth-enabled solutions. Our group opened up and had some fantastic discussion around the barriers currently challenging this industry, such as the limited number of Bluetooth radios being integrated into medical device solutions for connectivity. We progressed to discussing all of the possible use cases, as well as the opportunity for the data from an IoT-enabled world of healthcare to create new use cases as we better understand interactions between machines and humans.

The keynote speeches and individual presentations were full of great information. I was most interested in the direction of Bluetooth and the features coming shortly, especially the improvements to meshing capabilities and range, as these will open the door for great new use cases.

Also of personal interest was Kiyo Kubo’s talk about Bluetooth LE at Levi’s Stadium and the pain of getting to where it is today. Kiyo walked through the challenges around Apple reducing probe rates to almost nil and randomizing MAC addresses in probe request frames, which forced a change over to Bluetooth. The team then had to develop a number of tools to make it a success, both for the initial deployment and for long-term manageability.

The expo floor had a wide variety of use cases, from BLE-managed LED lighting that synced with car audio to IoT-enabled hearing aids that use location and ambient sound to automatically adjust their sound levels and noise filtration via a cloud interface.

It’s WLPC Time Again

The WLAN Pros Conference is truly a unique experience that I look forward to all year long. Throughout the year we are inundated with vendor marketing material and embroiled in competition. WLPC is a few days where we can come together as individuals, educate each other, build the community, and challenge each other to be better at our craft. This year’s conference will be in sunny Phoenix, AZ. Read more about it here. If you’ve never been before and you have an interest in Wi-Fi, I urge you to make plans to attend. It is a great opportunity to network and learn from others in the field.


This environment provides a great opportunity to get up and speak about something you are passionate about. The mix of longer presentations and Ten Talks allows for a lot of variety and depth of topics. This year I’ve selected healthcare wireless as my main presentation topic and will use a Ten Talk slot to provide a sneak peek into the Bluetooth World presentation I will be giving in March at Levi’s Stadium.

Designing Wireless Networks for Clinical Communications

Healthcare presents one of the most challenging wireless environments in today's networking world. The unique blend of critical network applications and the expectation of high-speed, ubiquitous wireless access for everyone is challenge enough, and then numerous devices are layered on top. Clinical communications are critical to providing a high quality of care and have become an especially challenging application to plan for. This post is intended to offer some guidance in designing these networks.

The Emergence of the Smartphone as a Clinical Communications Tool

Smartphones are joining the healthcare scene at increasing rates. Companies such as Voalte, Mobile Heartbeat, PatientSafe, and Vocera are bringing new features and functionality to market and are transforming communications at the point of care. These devices are typically either Apple iPhones or the Motorola MC40, though plenty of other variations exist. Each of these phones differs in how it behaves, from when it roams to how it handles packet loss.

Access Point Transmit Power

When preparing to design a clinical communications network, the desired endpoints should be known. In almost all cases, smartphones have lower transmit power than what most admins are used to. As a result, we are designing wireless networks with access point transmit power of 10-12 dBm rather than the 14-17 dBm many legacy networks were built with. This reduction in access point transmit power drives up the number of access points required to cover a facility by 25-50%, depending on construction.
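
To see why the AP count climbs, here is a back-of-the-envelope sketch using a simple log-distance path loss model. The 3 dB power reduction and the path loss exponent in the example are assumptions standing in for real construction; a predictive design tool and an on-site survey should drive the actual numbers.

```python
# Assumption-laden sketch: estimate how lowering AP transmit power shrinks the
# cell radius and inflates the AP count, using a log-distance path loss model.
def cell_radius_ratio(delta_tx_db: float, path_loss_exponent: float) -> float:
    """How much the cell radius shrinks when Tx power drops by delta_tx_db,
    holding the cell-edge RSSI target constant."""
    return 10 ** (-delta_tx_db / (10 * path_loss_exponent))

def ap_count_multiplier(delta_tx_db: float, path_loss_exponent: float) -> float:
    """Coverage area scales with radius squared, so AP count scales inversely."""
    return 1 / cell_radius_ratio(delta_tx_db, path_loss_exponent) ** 2

if __name__ == "__main__":
    # Example: dropping from 15 dBm to 12 dBm (3 dB) with an assumed exponent of 4
    print(f"Cell radius shrinks to {cell_radius_ratio(3, 4.0):.0%} of original")
    print(f"Roughly {ap_count_multiplier(3, 4.0):.2f}x the APs to cover the same floor")
```

With those assumptions the cell radius drops to roughly 84% of its original size and the AP count grows by about 40%, which lines up with the 25-50% range above.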

Data Rates

Disable the lower data rates to reduce network overhead (beacons and other management frames are sent at the lowest mandatory rate) and to shrink the functional cell size.

Access Point Placement

Fast roaming is critical to the performance of voice over Wi-Fi, and for smartphones this typically means leveraging 802.11r and 802.11k. Understanding how these protocols work and their impact on roaming is essential to the success of any network designed to support clinical communications. As the wireless engineer tasked with this design, the goal is to create small, clearly delineated cells with enough overlap to facilitate the roaming behavior of these mobile devices. If designed poorly, 802.11k can be a detriment to device roaming. Some general guidelines to follow:

  • Access points should be mounted in patient rooms and out of hallways whenever possible
  • Leverage interior service rooms to cover longer hallways: clean storage, food prep, case management offices, etc.
  • If you must place an AP in a hallway:
    • Consider using short cross-unit hallways rather than the long hallways wherever possible
    • Consider using alcoves to your advantage to reduce the spread of the RF signal
  • Leverage known RF obstructions to help create clean roaming conditions that favor 802.11k
  • Overlap may need to be as much as 20% due to the roaming algorithms in the smartphones (see the sketch after this list for a rough way to estimate overlap)
  • Pay attention to the location of patient bathrooms; facilities where these rooms are at the front of the patient room (near the hallway) present far more challenges than those where they are at the back of the room
  • Stagger APs between floors such that they are not vertically stacked on each other
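
As a rough way to sanity-check the overlap guideline above, the sketch below estimates the cell radius at a -67 dBm cell edge and the resulting overlap for a few AP spacings. The reference loss at one meter and the path loss exponent are assumptions; real roaming behavior depends on the client, 802.11k/r support, and the building, so validate with a survey.

```python
# Planning aid only: estimate cell radius and AP-to-AP overlap from a simple
# log-distance path loss model. Reference loss and exponent are assumptions.
def cell_radius_m(tx_power_dbm: float, edge_rssi_dbm: float,
                  ref_loss_db: float = 40.0, exponent: float = 3.5) -> float:
    """Distance (meters) at which RSSI falls to the design cell edge."""
    allowed_loss = tx_power_dbm - edge_rssi_dbm
    return 10 ** ((allowed_loss - ref_loss_db) / (10 * exponent))

def overlap_fraction(ap_spacing_m: float, radius_m: float) -> float:
    """Fraction of the cell diameter covered by both APs (0 means no overlap)."""
    overlap = max(0.0, 2 * radius_m - ap_spacing_m)
    return overlap / (2 * radius_m)

if __name__ == "__main__":
    r = cell_radius_m(tx_power_dbm=12, edge_rssi_dbm=-67)
    print(f"Estimated cell radius: {r:.1f} m")
    for spacing in (18, 20, 22):
        print(f"AP spacing {spacing} m -> overlap {overlap_fraction(spacing, r):.0%}")
```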

Voice SSID

Configure for a single band whenever possible; you'll find that some vendors are still only comfortable with 2.4 GHz. From experience this can work, but it is not without issues. As a general rule, I recommend using AppRF to view the applications using the SSID and prioritize them properly. Smartphones are always talking via multiple apps on multiple ports, and this should be accounted for.

All Apps Are Not Created Equal

Certain mobile communications apps are simply not ready for the demands of a healthcare environment. Take the time to understand exactly how these apps are being used; on multiple occasions I've seen perceived "dropped" calls turn out to be an app issue rather than anything to do with the wireless network itself.

Test, Test, Test

Voice over Wi-Fi for clinical communications is still a relatively new application, and it will require effort to get it right. Extensive testing is typically needed to get these deployments 100% dialed in. Expect some degree of tuning, from AP placements to transmit power tweaks.

One Company's Journey Out of Darkness, Part VI: Looking Forward

I've had the opportunity over the past couple of years to work with a large customer of mine on a refresh of their entire infrastructure. Network management tools were one of the last pieces to be addressed, as emphasis had been on legacy hardware first and the direction for management tools had not been established. This mini-series will highlight this company's journey and the problems solved and insights gained, as well as unresolved issues that still need addressing in the future. Hopefully this helps other companies or individuals going through the process. Topics will include discovery around types of tools, how they are being used, who uses them and for what purpose, their fit within the organization, and lastly what more they leave to be desired.


If you've followed the series this far, you've seen a progression through a series of tools being rolled out. My hope is that this last post in the series spawns some discussion around the tools, features, and functionality still needed in the market. These are the top three things we are looking at next.

Event Correlation
The organization acquired Splunk to correlate events happening at the machine level throughout the organization, but this is far from fully implemented and will likely be the next big focus. The goal is to integrate everything from clients to manufacturing equipment to networking to find information that will help the business run better, experience fewer outages and issues, and increase security. Machine data is being collected to learn about errors in the manufacturing process as early as possible. This error detection allows for on-the-fly identification of faulty machinery and enables quicker response time, which decreases the amount of bad product and waste and improves overall profitability. I still believe there is much more to be gained here in terms of user experience, proactive notifications, etc.
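
As a conceptual illustration of what event correlation means at the machine level, here is a small sketch that clusters events per device into five-minute windows so that related errors surface together. This is not Splunk's API or query language; the field names and window size are assumptions for the example.

```python
# Conceptual sketch (not Splunk): cluster machine events per device into
# time windows so related errors can be correlated and acted on early.
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(minutes=5)  # assumed correlation window

def group_by_device(events):
    grouped = defaultdict(list)
    for event in events:
        grouped[event["device"]].append(event)
    return grouped

def correlate(events):
    """events: dicts with 'device', 'timestamp' (datetime), 'message'.
    Returns device -> list of clusters of events within WINDOW of each other."""
    clusters = defaultdict(list)
    for device, device_events in group_by_device(events).items():
        device_events.sort(key=lambda e: e["timestamp"])
        current = []
        for event in device_events:
            if current and event["timestamp"] - current[-1]["timestamp"] > WINDOW:
                clusters[device].append(current)
                current = []
            current.append(event)
        if current:
            clusters[device].append(current)
    return clusters
```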

Software Defined X
The organization is looking to continue its move into the software-defined world for networking, compute, storage, etc. These offerings vary greatly, and the decision to go down a specific path shouldn't be taken lightly. In our case, we are looking to simplify network management across a very large organization and do so in a way that enables not only IT workflows but those of other business units as well. This will likely be OpenFlow based and start with the R&D use cases. Organizationally, IT has now set standards in place that all future equipment must support OpenFlow as part of the SDN readiness initiative.

Software-defined storage is another area of interest, as it reduces the dependency on any one particular hardware type and allows for ease of provisioning anywhere. The ideal use case again is the R&D teams as they develop new products. The products that will likely lead here are those that are pure software and open; evaluation has not really begun in this area yet.

DevOps on Demand
IT getting a handle on the infrastructure needed to support R&D teams was only the beginning of the desired end state. One of the loftiest goals is to create an on-demand lab environment that provides compute, storage, and network on demand in a secure fashion, along with intelligent request monitoring and departmental bill-back. We've been looking into Puppet Labs, Chef, and others but do not have a firm answer here yet. This is a relatively new space for me personally, and I would be very interested in further discussion around how people have been successful here.


Lastly, I'd just like to thank the Thwack Community for participation throughout this blog series. Your input is what makes this valuable to me and increases learning opportunities for anyone reading.





Aruba Networks Sensors Everything

In case you missed it, Aruba Networks, a Hewlett Packard Enterprise company, announced the availability of their new Aruba Sensor product this week. This was spoken about at Atmosphere back in March, but had seemingly disappeared since. The new sensor enables Aruba Networks' Meridian cloud analytics and wayfinding solution and best-in-class beacon management capabilities to be used on any wireless network. These sensors have a Bluetooth Low Energy (BLE) radio to act as a beacon and manage other beacons within a 25 meter radius, and a wireless radio to provide network connectivity. Power is delivered through either AC power or USB, and both power options offer security locks to ensure the safety of the sensor. Making the same solution available for any wireless network is a huge deal, as it allows for standardization on an engagement solution. ClearPass, Meridian, and the Aruba Sensor/Beacon offer tremendous capabilities for any organization's network. Kudos to the team for embracing the market as a whole!



One Company's Journey Out of Darkness, Part V: Seeing the Light

I've had the opportunity over the past couple of years to work with a large customer of mine on a refresh of their entire infrastructure. Network management tools were one of the last pieces to be addressed, as emphasis had been on legacy hardware first and the direction for management tools had not been established. This mini-series will highlight this company's journey and the problems solved and insights gained, as well as unresolved issues that still need addressing in the future. Hopefully this helps other companies or individuals going through the process. Topics will include discovery around types of tools, how they are being used, who uses them and for what purpose, their fit within the organization, and lastly what more they leave to be desired.


After months of rolling out new tools and provisioning the right levels of access, we started to see positive changes within the organization.

Growing Pains
Some growing pains were to be expected, and this was certainly no exception. Breaking bad habits developed over time is a challenge; however, the team worked to hold each other accountable and began to build the tools into their daily routines. New procedures for rolling out equipment included integration with monitoring tools and testing to ensure data was being logged and reported properly. The team made a concerted effort to ensure that previously deployed devices were populated into the system and spent some time clearing out retired devices. Deployments weren't perfect at first and a few steps were skipped, so the team developed deployment and decommission checklists to help ensure the proper steps were being followed. Some of the deployment checklist items were the things you would expect: IP addressing, SNMP strings, AAA configuration, change control submission, etc., while others were somewhat less obvious, such as placing inventory tags on devices and recording serial numbers. We also noticed that communications between team members started to change, as discussions were now starting from a place where individuals were better informed.
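
As one example of how a checklist item can be automated, here is a minimal post-deployment check that confirms a newly deployed device answers SNMP and reports its hostname. It assumes SNMPv2c and the pysnmp library; the community string and management IP are placeholders, and a real checklist would also cover AAA, change control, inventory tags, and monitoring enrollment.

```python
# Minimal post-deployment verification, assuming SNMPv2c and pysnmp
# (pip install pysnmp). Community string and target IP are placeholders.
from pysnmp.hlapi import (getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity)

SYS_NAME_OID = "1.3.6.1.2.1.1.5.0"  # sysName.0 from SNMPv2-MIB

def verify_snmp(host: str, community: str = "public") -> bool:
    """Return True if the device answers SNMP and reports a hostname."""
    error_indication, error_status, _, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData(community, mpModel=1),              # SNMPv2c
               UdpTransportTarget((host, 161), timeout=2, retries=1),
               ContextData(),
               ObjectType(ObjectIdentity(SYS_NAME_OID)))
    )
    if error_indication or error_status:
        print(f"{host}: SNMP check failed ({error_indication or error_status})")
        return False
    print(f"{host}: responded as {var_binds[0][1]}")
    return True

if __name__ == "__main__":
    verify_snmp("192.0.2.10")  # placeholder management IP
```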

Reducing the Shadow
After the "growing pains" period, we were pleased to see that the tools were becoming part of everyday activities for core teams. The increased knowledge led to some interesting discussions around optimizing locations for specific purposes and helped shed some light on regular pain points within the organization. For this particular customer, the R&D teams have "labs" all over the place, which can place undue stress on the network infrastructure. The "shadow IT" that had been an issue before could now be better understood. In turn, IT made an offer to manage the infrastructure in exchange for giving those teams what they wanted. This became a win-win for both groups and has fundamentally changed the business for the better. In my opinion, this is the single best change the company experienced. Reducing the role of shadow IT and migrating those services to the official IT infrastructure group created far better awareness and supportability. As an added benefit, budgets are being realigned, with additional funding shifted to IT, which has taken on this increased role. There is definitely still some learning to be done here, but the progress thus far has been great.

Training for Adoption
Adoption seemed slow for the help desk and some of the ancillary teams who weren't used to these tools, and we wanted to better understand why. After working with the staff to understand the limited use, it became apparent that although some operational training had been done, training for adoption had not. A well-designed training-for-adoption strategy can make the difference between success and failure of a new workflow or technology change. The process isn't just about providing users with technical knowledge, but rather about building buy-in, ensuring efficiency, and creating business alignment. It is important to evaluate how the technology initiative will help improve your organization. Part of the strategy should include an evaluation plan to measure results against organizational outcomes such as efficiency, collaboration, and customer satisfaction (whether those customers are internal business units or outward facing).

The following are tips that my company lives by to help ensure that users embrace new technology to advance the organization:
Communicate the big-picture goals in relevant terms. To senior management or technology leaders, the need for new technology may be self-evident; to end users, the change can seem arbitrary. All stakeholders share common interests, such as improving efficiency or patient care, yet users may resist a new workflow system unless the project team can illustrate how the system will help them better serve patients and save time.

Invest properly in planning and resources for user adoption. If an organization is making a significant investment in new systems, investing in the end-user experience is imperative to fully realize the value of the technology. However, training for user adoption often is an afterthought in major technology project planning. Furthermore, it is easy to underestimate the hours required for communications, workshops and working sessions.

Anticipate cultural barriers to adoption. Training should be customized to your corporate culture. In some organizations, for instance, time-strapped users may assume that they can learn new technology "on the fly." Others rely on online training as a foundation for in-person instruction. Administrators may face competing mandates from management, while users may have concerns about coverage while they are attending training. A strong project sponsor and operational champions can help anticipate and overcome these barriers, and advise on the training formats that will be most effective.

Provide training timed to technology implementation. Another common mistake is to provide generic training long before users actually experience the new system, or in the midst of go-live, when things become chaotic. Both scenarios pose challenges. Train too early and, by the time you go live, users forget how they are supposed to use the technology and may be inclined to use it as little as possible. Wait for go-live, and staff may be overwhelmed by their fears and anxieties and may have already developed resistance to change. The ideal approach will depend on each facility's context and dependencies. However, staggering training, delivering complex training based on scenarios, addressing fears in advance, and allowing for practice time are all key success factors.

Provide customized training based on real-life scenarios. Bridging the gap between the technology and the user experience is a critical dimension of training, and one that some technology vendors tend to overlook in favor of training around features and functionality. Train with real-life scenarios, incorporating the various technologies integrated into a "day in the life" of an end user or staff member. By focusing on real-world practice, this comprehensive training helps overcome the "fear of the new" as users realize the benefits of the new technology.

Create thoughtful metrics around adoption. Another hiccup in effective adoption occurs when companies do not have realistic metrics, evaluation, and remediation plans. Without these tools, how do you ensure training goals are met—and, perhaps more importantly, correct processes when they are not? Plan for an ongoing evaluation that covers go-live as well as one to six months out.

Don’t ignore post-implementation planning. Contrary to popular perception, training and adoption do not end when the new system goes live. In fact, training professionals find that post-implementation support is an important area for ensuring ongoing user adoption.

One Company's Journey Out of Darkness, Part IV: Who is Going to Use the Tools?

I've had the opportunity over the past couple of years to work with a large customer of mine on a refresh of their entire infrastructure. Network management tools were one of the last pieces to be addressed, as emphasis had been on legacy hardware first and the direction for management tools had not been established. This mini-series will highlight this company's journey and the problems solved and insights gained, as well as unresolved issues that still need addressing in the future. Hopefully this helps other companies or individuals going through the process. Topics will include discovery around types of tools, how they are being used, who uses them and for what purpose, their fit within the organization, and lastly what more they leave to be desired.


Throughout this series I've been advocating the formation of a tools team, whether it is a formalized group of people or just another hat that some of the IT team wears. This team's task is to maximize the impact of the tools the organization has chosen to invest in, and understanding who is using each tool is a critical component of that success. One of the most expensive tools that organizations invest in is their main network monitoring system. This expense may be the CapEx spent obtaining the tool or the sweat equity put in by someone building out an open source offering, but either way these dashboards require significant effort to put in place and demand effective use by the IT organization. Most of IT can benefit from these tools in one way or another, so having role-based access control (RBAC) on these platforms is important so that access can be granted in a secure way. Dashboard screens should also be highly visible so that everyone in the office can see them.

Network Performance Monitoring
NPM aspects of a network management tool should be accessible by most if not all teams, although some may never opt to actually use them. Outside of the typical network team, the server team should be aware of typical throughput, interface utilization, error rates, etc. so that the team can be proactive in remediating issues. Examples where this has come in useful include troubleshooting backup-related WAN congestion issues and usage spikes around antivirus updates in a large network. In both of these cases, the server team was able to provide insights into the configuration of the applications and options to help remedy the issue in unison with the network management team. Specific roles benefiting from this access include server admins, security admins, WAN admins, and desktop support.

Deep Packet Inspection/Quality of Experience Monitoring
One of the newer additions to NMS platforms over the years has been DPI and its use in shedding light on the QoE for end users. Visibility into application response time can benefit the server team and help them be proactive in managing compute loads or improving capacity. Traps based on QoE variances can help teams responsible for specific servers or applications provide better service to business units. Specific roles benefiting from this access include server admins, security admins, and desktop or mobile support.
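
To illustrate the kind of QoE variance that would drive a trap or notification, here is a small sketch that flags application response-time samples sitting well above a rolling baseline. The three-sigma threshold and the sample source are assumptions for the example, not a feature of any particular NMS.

```python
# Illustrative only: flag response-time samples far above the rolling baseline,
# the kind of QoE variance that would feed a trap or alert. Thresholds assumed.
from statistics import mean, stdev

def qoe_alerts(samples_ms, sigma=3.0, min_history=20):
    """Yield (index, value) for samples more than `sigma` standard deviations
    above the baseline built from all earlier samples."""
    history = []
    for i, value in enumerate(samples_ms):
        if len(history) >= min_history:
            baseline, spread = mean(history), stdev(history)
            if spread and value > baseline + sigma * spread:
                yield i, value
        history.append(value)

if __name__ == "__main__":
    samples = [118, 122, 120, 119, 121] * 6 + [480]  # steady baseline, then a spike
    for idx, val in qoe_alerts(samples):
        print(f"sample {idx}: {val} ms is well above baseline")
```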

Wireless Network Monitoring
Wireless has outpaced the wired access layer as the primary means of network connectivity. Multiple teams benefit from monitoring the airspace, ranging from security to the help desk and mobile support teams. In organizations supporting large guest networks (healthcare, universities, hotels, etc.), the performance of the wireless network is critical to the public perception of the brand. Wireless network monitoring now even appeals to customer service or marketing teams, and extending it to these non-IT teams can improve overall communications and satisfaction with the solutions. For teams with wireless voice handsets, telecom will benefit from access to wireless monitoring. In healthcare, there is a trend toward developing a mobile team, as these devices are critical to the quality of care. These mobile teams should be considered advanced users of wireless monitoring.

IP Address Management (IPAM)
IPAM is an amazing tool in organizations that have grown organically over the years. Using my customer as a reference, they had numerous /16 networks in use around the world, but many of these were disjointed. A disjointed IP addressing strategy creates challenges from an IP planning standpoint, especially for any new office, subnet, DMZ, etc. I'd advocate read-only access for the help desk and mobile support teams and expanded access for the server and network teams. Awareness of an IPAM solution can reduce outages due to human error and provides a great visual reference as to the state of organization (or lack thereof) of a company's addressing scheme.
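
As a small illustration of the kind of sanity check a working IPAM practice enables, the sketch below uses only the Python standard library to flag a proposed subnet that overlaps existing allocations. The prefixes are placeholders standing in for an IPAM export.

```python
# Standard-library sketch: flag overlapping prefixes before allocating a new
# subnet. The prefixes below are placeholders for an IPAM export.
import ipaddress

existing = [ipaddress.ip_network(p) for p in
            ("10.10.0.0/16", "10.20.0.0/16", "172.16.40.0/24")]

def conflicts(candidate: str):
    """Return any existing prefixes that overlap the proposed allocation."""
    new_net = ipaddress.ip_network(candidate)
    return [net for net in existing if net.overlaps(new_net)]

if __name__ == "__main__":
    for proposal in ("10.20.64.0/20", "192.168.50.0/24"):
        hits = conflicts(proposal)
        status = "conflicts with " + ", ".join(map(str, hits)) if hits else "clear"
        print(f"{proposal}: {status}")
```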

I personally do not advocate an environment that grants read-only access to anyone who happens to be interested in these tools; the information held within them should be kept secure, as it would provide the seeds for a well-planned attack. Each individual given access to these tools should be made aware that they are a job aid and carry a burden of responsibility. Also, I've worked with some organizations looking for very complex RBAC for their management tools; unless you have an extremely good reason, I'd shy away from this as well, as the added complexity generally offers very little.