API Management is a hot topic at the moment. Many people have heard of it and have seen the flurry of products introduced into the market in recent years. However, when speaking with our customers, it's clear that many do not know exactly which elements make up API Management. This can lead to a poor understanding of the architecture and, sometimes, of the benefits of API Management for their business. To help address this, we have created this post to break API Management down into its constituent parts and help you see how your organisation might benefit.
A lot of this confusion occurs because API Management is not one single thing but, rather, a host of smaller functions and advantages. Some of those functions have previously gone under other names, or have been hidden from the business side by architects who had no need to expose them at the business level. Let's go through them one by one:
What is an API?
An API is, as it has been for years under many different guises, an 'Application Programming Interface.' In this case we're not talking about, say, a Java class interface, but about an interface into a hosted service that is (probably) in your enterprise as shown below:
You may also see these back-end services discussed under the guise of 'systems of record'. Don't let the word 'service' make you think that this is just for web services, either. Today, APIs are nearly always web services (SOAP) or REST, and when the interface is REST the data format is nearly always JSON. However, this does not mean that the back-end service you are exposing has to output SOAP or JSON; you may have an existing service that outputs SOAP (or anything else) but want to expose the same content to the world as JSON over a REST interface.
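As a sketch of that mediation, the snippet below pulls a record out of a SOAP envelope and re-emits it as JSON. This is a hypothetical example, not any vendor's actual transformation engine; the `CustomerRecord` payload and its fields are invented for illustration:

```python
import json
import xml.etree.ElementTree as ET

def soap_body_to_json(soap_xml: str) -> str:
    """Extract the payload from a SOAP envelope and re-emit it as JSON."""
    ns = {"soap": "http://schemas.xmlsoap.org/soap/envelope/"}
    root = ET.fromstring(soap_xml)
    body = root.find("soap:Body", ns)
    payload = next(iter(body))                      # first element inside <Body>
    record = {child.tag: child.text for child in payload}
    return json.dumps(record)

# A made-up SOAP response from a back-end 'system of record':
envelope = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <CustomerRecord>
      <name>Alice</name>
      <accountId>42</accountId>
    </CustomerRecord>
  </soap:Body>
</soap:Envelope>"""

print(soap_body_to_json(envelope))
```

In a real gateway this mapping is configured rather than hand-coded, but the principle is the same: the back end keeps speaking SOAP while the exposed REST interface serves JSON.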
Alternatively, you may want to expose a single 'API' that is actually a conglomeration of different back-end services that is made to look like a single API to the outside world.
Although I'm discussing exposing APIs from a back-end system to external customers (business partners or clients), they can also be internal APIs. For example, services made available from one group in your business to another.
A key role of the API Manager is to secure the API being exposed. After all, APIs are usually exposed through the DMZ and are exposing back-end services to the world. There are several elements to that security, including authentication (who wants to use the API?) and authorisation (is this user allowed to use the API?). When discussing API Management, the normal methods for enabling these elements of security are HTTP Basic auth, OAuth or OpenID…and I'm not going to go into the difference between OAuth and OpenID here as it appears to be becoming a religious debate ;-).
As internet security practices converge, it is becoming more and more normal to use OAuth/OpenID rather than plain HTTP Basic auth when securing APIs. However, Basic auth does allow you to get up and running a little more easily in dev and test environments.
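For illustration, here is roughly what the two credential styles look like on the wire. The usernames and tokens are made up, and a real OAuth access token would come from a prior exchange with the provider's token endpoint rather than being a literal string:

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    """HTTP Basic auth: the header carries base64('user:pass')."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def bearer_auth_header(access_token: str) -> dict:
    """OAuth 2.0 bearer style: the header carries an opaque access token."""
    return {"Authorization": f"Bearer {access_token}"}

# Handy in dev/test: one line and you're authenticated.
print(basic_auth_header("dev", "s3cret"))
# In production, the token below would have been issued by the gateway's
# token endpoint after an OAuth flow.
print(bearer_auth_header("hypothetical-access-token"))
```

Either header can be attached to any HTTP client request against the gateway; the gateway, not the back-end service, is what validates it.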
With this security comes key management, which is integral to the OAuth specifications (and, of course, username/password management with Basic auth). API Managers handle this responsibility by standardising the creation and refresh of keys.
Denial of Service
Denial of Service (DoS) attacks are clearly a risk in this sort of environment. You are not only exposing your crown jewels to the world but also advertising the fact that they are exposed. Most API Managers have some kind of DoS defence mechanism built in, and those that don't appear to be adding one as customers begin to demand it.
A general function that many gateway servers provide is caching of the content they serve, and API Managers are no exception. However, given that the data flowing through may change quite rapidly, the usefulness of caching may be limited by how volatile the data being served is.
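To make the idea concrete, here is a minimal sketch of the time-to-live caching a gateway might apply to API responses. The class name and the cached path are invented for illustration, and real gateways use far more sophisticated invalidation than a simple TTL:

```python
import time

class TTLCache:
    """Minimal time-to-live cache: serve repeated requests for a short
    window without hitting the back-end service."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry_time, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.monotonic() > expiry:
            del self._store[key]   # stale: evict and force a back-end fetch
            return None
        return value

    def put(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

# A short TTL suits rapidly changing data; static reference data can
# tolerate a much longer one.
cache = TTLCache(ttl_seconds=30.0)
cache.put("/customers/42", '{"name": "Alice"}')
print(cache.get("/customers/42"))
```

The TTL is the knob that reflects the point above: the faster the data changes, the shorter the window in which a cached copy is worth serving.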
As with any system, the business would like to know who's actually making use of the APIs. This, too, is the job of the API Manager. Again, common function is evolving here: some vendors not only track usage in the API Manager but also let you view those usage statistics from within the API Manager interface; others just keep track of the usage and farm the pretty graphical views out to third-party tooling. I suspect that, over time, we'll see most API Managers gain built-in tooling for viewing usage statistics.
Complementary to usage tracking and Denial of Service defence is usage throttling. API Managers have built-in function that allows the API administrator to specify levels of usage of the API.
The usual analogy here is of Gold, Silver and Bronze users, where the Gold users can, say, get unlimited access to an API while the Bronze users can only access it, say, once every minute. Once those usage limits have been reached, it is the job of the API Manager to block or warn the user until the time window has elapsed. There are subtleties to these algorithms: for instance, only allowing Bronze users in during certain times of the day, or, if the system is looking overloaded, letting Gold users through before Silver and Bronze users.
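The tier idea can be sketched as a simple fixed-window throttle. The quotas, class name and tier names below are assumptions for illustration, not any particular vendor's algorithm (most real gateways use sliding windows or token buckets):

```python
import time

# Hypothetical requests-per-minute quotas for the tiers described above.
TIER_LIMITS = {"gold": None, "silver": 60, "bronze": 1}  # None = unlimited

class TierThrottle:
    """Fixed one-minute-window throttle keyed by user: a sketch, not
    production code."""

    def __init__(self, window_seconds: float = 60.0):
        self.window = window_seconds
        self._counts = {}  # user -> (window_start, count)

    def allow(self, user: str, tier: str, now: float = None) -> bool:
        limit = TIER_LIMITS[tier]
        if limit is None:
            return True                       # Gold: unlimited access
        now = time.monotonic() if now is None else now
        start, count = self._counts.get(user, (now, 0))
        if now - start >= self.window:        # window rolled over: reset
            start, count = now, 0
        if count >= limit:
            return False                      # over quota: block until reset
        self._counts[user] = (start, count + 1)
        return True
```

The subtleties mentioned above (time-of-day rules, load-based priority for Gold users) would slot in as extra checks inside `allow`.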
All of these parameters are configurable to different extents on different vendors' platforms. Although not all vendors offer the same range of function at the moment, they are all trying to achieve the same overall goal: controlling how much an API can be used.
You may well have heard of the 'API Economy'. This encompasses the idea that APIs can also be charged for.
This goes hand-in-hand with usage tracking and throttling. Once we have the ability to track and throttle usage, it is not a large step to add the ability to charge and invoice users. At this point in time, most vendors have at the very least some kind of ability to output the gathered usage statistics to an external customer-invoicing process. As API Managers evolve, I expect this function to be built into the API Manager itself more and more.
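In its simplest form, the billing step is just arithmetic over the gathered usage counts. The flat fees and per-call prices below are entirely made up for illustration; a real platform would export usage records to a billing system rather than price them inline like this:

```python
# Hypothetical tariff: a flat monthly subscription plus a metered
# per-call charge, varying by tier.
FLAT_FEE = {"gold": 500.0, "silver": 50.0, "bronze": 0.0}
PRICE_PER_CALL = {"gold": 0.0, "silver": 0.002, "bronze": 0.005}

def invoice(tier: str, calls_made: int) -> float:
    """Monthly charge = flat subscription fee + metered per-call charge."""
    return FLAT_FEE[tier] + PRICE_PER_CALL[tier] * calls_made

# A Bronze user pays nothing up front but is metered on every call;
# a Gold user pays a flat fee for unlimited calls.
print(invoice("bronze", 1000))
print(invoice("gold", 1000000))
```

The point is that once the gateway already counts calls per user per tier, the API Economy's "charge for your API" step is a small extension of data the platform holds anyway.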
Once an API is defined, it's an obvious thing to want to advertise it, and most vendors supply an integral API store in their solution. This store functions as a single shop-front that allows developers to view and sign up for APIs. Documentation for the API can also be exposed in the store. The store often also supports social interactions: rating APIs, built-in forums for a specific API, commenting, and so on.
One of the key things that stores enable is what's called self-registration. Imagine that your API is the most popular API in the company and hundreds of users want to sign up and use it. You don't want to sit there all day accepting requests for access. So stores allow you to build rules that automatically grant certain classes of users access to your API (whether that is 'everyone' or just some specific group).
APIs are often evolving beasts, and to this end you may have one version in production and another in development. This brings with it the age-old problem of version control.
API Managers often have the ability to host different versions of an API in different areas of the system that are, for example, only accessible to certain groups or networks. This is often in the form of simple sandbox and production areas; however, as vendors evolve their solutions, it is becoming easier to define many environments (e.g. dev, test, prod) and move APIs seamlessly through each of them.
I'll revert to a diagram to get us back on track and break all those functions down into manageable units. Although I've said that API Management has this or that capability, API Management is not, in fact, a single server.
The diagram above shows that there are usually, at least, three servers involved.
Starting at the beginning, design of the API happens within its own separate server. The configuration of the actual APIs is stored in a registry, ready to be pushed out to the gateway server. This registry is often built into the manager server itself, so I haven't shown it as a separate entity.
The rest of the API life-cycle, e.g. versioning and deprecation, is also managed through the API Manager console.
It is not uncommon to find that the manager server is switched off when APIs are not being designed or managed as it may not serve any other purpose (but this is vendor/customer specific).
These servers always expose a GUI to the user to allow easy creation of APIs.
This server will also often double up as the monitoring and usage tracking server later on in the life-cycle of the API.
The gateway is the real 'runtime' server in this architecture. Its role is to do all that security, throttling and usage statistics monitoring that we've discussed above.
The API store does what it says: it's where developers come to find out what the APIs are and where they can find them. The store must be accessible to the writers of the applications that use the API, but it needn't be available to them at runtime, so it may be that, for security reasons perhaps, the API store is not switched on all the time. I've put it in the DMZ in the diagram above so that it can be exposed to external users, but this may not be necessary depending on who's writing the application that uses the API.
Of course, all these servers are linked in some way, and there may well be a separate registry that contains the configuration information for the platform as a whole, depending on your vendor and configuration.
I hope that I've shown you that API Management encompasses a whole host of different functionality. At the moment, different vendors have different capabilities in each of the areas we've discussed but they are converging into some common functions that they all support. The runtime architecture helps to distinguish the different roles and responsibilities that go with those functions. Watch this blog for future posts about API Management technology, and view the resources on this page and in the API Management section of our resource library.
About the Author
John Hawkins is the EMEA Integration practice lead based in the Lightwell UK offices. John previously worked for IBM as an MQ strategist and architect on next-generation cloud messaging solutions and messaging appliances. Since leaving IBM, John has continued to focus on both integration and cloud technologies such as IBM PureApplication Systems, MQ and, more recently, API Management.