API Monitoring: Not One and Done

by David O'Neill 27 Nov, 2015

APIs, or application programming interfaces, are the keys that allow applications to cross-talk and work together, and are what make today's Internet of Things (IoT) world go 'round.  

Those who already know this understand why APIs are important, and are probably already testing APIs as part of the product or website development process. What they may not realize is that API monitoring and testing are no longer one-and-done activities.

Since APIs are what allow developers to tap into Web-based services such as Facebook or Google Maps, and let website designers include key social media links to important communication tools such as Twitter, it's critical to ensure they are working at optimum levels at all times. Additionally, in today's Internet-fueled economy, a tremendous amount of online retail revenue flows through APIs. If API testing stops after a website or mobile app is released, the team will miss potentially disastrous problems that impact the bottom line.

As an example, testing from inside the enterprise firewall can give a completely different picture of API health and potential problems than tests run from data centers in other parts of the world. Developers need both, and they need them on a continuous basis. In short, the goal should be to minimize unexpected API errors and thus avoid the anger of impacted users complaining on social media about their terrible online experiences. That scenario damages a brand's reputation and its overall bottom line.
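The outside-in, continuous checks described above boil down to timing an API call from a given vantage point and recording the result. A minimal Python sketch follows; the URL, region label, and timeout are illustrative assumptions, not APImetrics' actual tooling:

```python
# Minimal latency probe: time one HTTP call and tag it with the
# vantage-point ("region") it was run from. The endpoint URL and
# region names are placeholders for illustration only.
import time
import urllib.request


def probe(url: str, region: str, timeout: float = 5.0) -> dict:
    """Time a single API call and report its status and latency."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except Exception as exc:
        status = repr(exc)  # record the failure instead of crashing
    latency_ms = (time.perf_counter() - start) * 1000.0
    return {"region": region, "status": status, "latency_ms": latency_ms}
```

Run on a schedule from machines inside the firewall and from external data centers, this kind of probe yields the two complementary views the article argues for.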

Here are several reasons why ongoing API monitoring is important:

1) Cross-cloud issues.

In building an API specifically for a certain cloud service that customers are using, it's important to research the best host for the service as well. Latency varies, and calling one cloud provider from an app hosted on the same provider does not always yield the lowest latency. As an example, a recent internal survey on API performance showed that calling APIs hosted on Azure from Amazon Web Services did not necessarily perform the best.

2) API gateway.

Is the enterprise gateway creating lag time? If so, API monitoring can pinpoint it as a problem.

3) Location.

Where servers are located makes a difference in the latency of API calls. If tests are conducted solely inside the office, developers will never know whether any of their site or app users, particularly those overseas, are getting slow service.

4) Third-party API slowness.

Now, here's the rub: if a company relies on any of the thousands of available public APIs, poor performance in those APIs can have a huge impact on the brand's offering. The consumer-facing company will always shoulder the blame, as customers won't know why service is slow or broken; however, API performance monitoring can at least tell developers whether the problem is with Facebook or within their own software.
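That "is it them or is it us?" question can be answered mechanically: when a request through your own stack fails or runs slow, probe the third-party dependency directly and compare. A minimal sketch, with assumed URLs and an assumed latency budget (none of these names come from the article):

```python
# Sketch: isolate a fault to the third-party API or to our own stack.
# The latency budget and both URLs are illustrative assumptions.
import time
import urllib.request

SLOW_MS = 2000.0  # assumed latency budget in milliseconds


def timed_get(url: str, timeout: float = 5.0):
    """Return (success, latency_ms) for one GET request."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 300
    except Exception:
        ok = False
    return ok, (time.perf_counter() - start) * 1000.0


def diagnose(our_url: str, third_party_url: str) -> str:
    """Classify a slowdown as healthy, third-party, or our own stack."""
    ours_ok, ours_ms = timed_get(our_url)
    if ours_ok and ours_ms <= SLOW_MS:
        return "healthy"
    dep_ok, dep_ms = timed_get(third_party_url)
    if not dep_ok or dep_ms > SLOW_MS:
        return "third-party"  # the upstream dependency is at fault
    return "our-stack"        # dependency is fine; look inward
```

Even this crude triage is enough to decide whether to open an incident with the API provider or with the in-house team.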

In conclusion, the biggest argument for regular API performance monitoring is to ensure that developers find problems with their website or app before a user does. API testing should not stop when the product development lifecycle ends. For best results, it needs to be conducted from an end-user perspective, from inside and outside of the firewall, in a variety of locations and over a number of service providers. It's the only way to get a clear picture that gives an in-house team the data to tackle, solve and even avert problems.

David O'Neill is a serial entrepreneur with a background in mobile and testing technologies dating all the way back to WAP. He is the CEO and co-founder of APImetrics, which offers the first and only intelligent, analytics-driven API performance solution built specifically for the enterprise.