API Management promises a nirvana of exposing and securing data using well-known and simple techniques. Vendors focus on how easy it is to create the APIs and nearly always mention security as part of their API lifecycle story.
Yet, we've all seen the headlines screaming the latest security breach. So, what does “Security” really mean when it comes to API Management?
In this post I’ll try to differentiate the basic policies that all vendors discuss from the many other attack vectors that we need to be aware of.
API Management Security Terminology
Most API Management solution vendors discuss security in terms of OAuth, key management, and TLS. These are all relevant parts of the overall security picture. Most vendors also provide easy-to-use policies that throttle requests based on client usage. These throttling policies typically stop an individual client from accessing a specific API more than a set number of times over a period (e.g. allow them 20 calls per minute).
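To make the throttling idea concrete, here is a minimal sketch of a per-client sliding-window throttle of the kind such policies implement. The class and parameter names are my own for illustration, not any vendor's API:

```python
import time
from collections import defaultdict, deque

class PerClientThrottle:
    """Allow at most `limit` calls per `window` seconds, per client key."""

    def __init__(self, limit=20, window=60.0):
        self.limit = limit
        self.window = window
        self.calls = defaultdict(deque)  # client key -> timestamps of recent calls

    def allow(self, client_key, now=None):
        now = time.monotonic() if now is None else now
        q = self.calls[client_key]
        # Drop timestamps that have slid out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False
```

A gateway would call `allow()` for each incoming request and reject with HTTP 429 when it returns False. Note that this tracks each client independently, which is exactly the limitation discussed below.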
Foundational API Security
The above figure shows these "foundational" security elements. They are what most API Management solution vendors highlight, and some offer no security policies beyond them. But effective API management requires more than this.
Throttling is Not Enough
If we hark back to "the good ol' days" of SOA and XML attacks, we saw attacks such as SQL injection (execution of malicious SQL statements) and XML "bombs" (payloads of huge or deeply nested XML structures).
API Management is little different in this regard: there will still be attacks that exploit simplistic APIs designed only from an implementation viewpoint, not a security one.
Consider the case where an API allows a range of values to be retrieved; let's say min=n and max=p. A request with an extraordinarily high value for "max" could overload the back-end system.
Another simple attack targets a string in the payload. Unless policies ensure the string stays within expected bounds (such as a maximum length), the back-end could, again, get overwhelmed. These are known as range attacks.
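A gateway policy to defend against these range attacks can be sketched as a small validation step run before the request reaches the back end. The limits and parameter names here are hypothetical, chosen purely for illustration:

```python
# Hypothetical limits an API gateway policy might enforce.
MAX_RANGE_SPAN = 1000     # largest min..max window a single request may ask for
MAX_STRING_LENGTH = 256   # longest string payload field we accept

def validate_request(params):
    """Reject range and string payloads that could overload the back end."""
    lo, hi = params.get("min", 0), params.get("max", 0)
    if hi < lo or hi - lo > MAX_RANGE_SPAN:
        return False, "range too large"
    text = params.get("q", "")
    if len(text) > MAX_STRING_LENGTH:
        return False, "string too long"
    return True, "ok"
```

The point is not the specific numbers but that some explicit bound exists at all; without one, the back end inherits whatever the client sends.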
Putting this in more technical terms:
- Distributed Attacks: Distributed Denial of Service (DDoS) attacks are an excellent example of where a simple throttling policy simply won't work. In these attacks, multiple IP addresses, and potentially multiple different clients, attempt to flood an API. Throttling policies usually work on either a single IP address or a single client key; it is a far more complicated problem to detect an attack coming from multiple client addresses or using multiple client keys.
- Microservice Attacks: Microservice architectures introduce a new problem: if many front-end solutions use the same back-end services, an attacker can call entirely different front-end APIs that all funnel to one or two back-end systems and thus overwhelm them.
Apparently normal clients flooding a single back-end service API
Although it is difficult, from the attacker's perspective, to ascertain which services are vulnerable, it is equally difficult, from the provider's perspective, to ascertain that an attack is ongoing, given the complexity of the relationships.
- Network Attacks: Many attacks are still based on the old problems, such as slow posts (applications holding connections open by sending data in slowly) or login attacks (continuously attempting to log in). These are well-known problems and should be handled using the well-known solutions.
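One defence against the microservice funnelling problem above is to throttle on the *back-end* service that requests resolve to, rather than on the client key or front-end path. The following is a minimal sketch of that idea; the routing table, service names, and limits are invented for illustration:

```python
from collections import defaultdict, deque

# Hypothetical routing table: which back-end service each front-end API calls.
ROUTES = {
    "/orders": "inventory-svc",
    "/catalog": "inventory-svc",
    "/profile": "user-svc",
}

class BackendThrottle:
    """Count requests per *back-end* service, across all front-end APIs and clients."""

    def __init__(self, limit=100, window=60.0):
        self.limit, self.window = limit, window
        self.hits = defaultdict(deque)  # back-end service -> request timestamps

    def allow(self, api_path, now):
        backend = ROUTES[api_path]
        q = self.hits[backend]
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False
```

Here requests to `/orders` and `/catalog` count against the same budget because both land on `inventory-svc`, so an attacker spreading traffic across different front-end APIs (or client keys) still hits the shared limit.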
Comprehensive security coverage
The above figure shows the layers of attack that vendors usually don't discuss in depth.
I hope I've shown that the problem of API Management security needs to be considered in much greater depth than just whether the API has a throttling policy in place. Basic network attacks, "old-fashioned" HTTP attacks, content attacks, and multi-level DDoS attacks are all in play.
Given this complexity, it's clear that no single technology or strategy will solve the problem. All the usual network and HTTP firewalls, alongside the newer throttling policies, must be in place. However, API Management also needs good security testing, something I suspect we're all guilty of not considering a prime place to spend money! We need to check that our APIs are not open to SQL injection and that safeguards such as range checking are in place.
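The kind of security test meant here can be illustrated with a classic SQL injection check. The sketch below, using Python's built-in sqlite3 module with invented table and column names, contrasts a vulnerable string-spliced query with a parameterized one; a good test suite would assert that the injection payload fails:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # Vulnerable: user input is spliced straight into the SQL text.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Parameterized: the driver treats the input as data, never as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"  # classic injection string
```

With `payload`, the unsafe version returns every row in the table, while the parameterized version correctly matches nothing. Tests like this belong in the API's build pipeline, not just in a one-off audit.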
Once those checks and balances are in place, the APIs must be actively managed. Traffic anomalies need to be verified so that a spike can be attributed to legitimate use rather than an attack. Monitoring must extend all the way through to the back-end services to ensure that distributed attacks are not taking advantage of the complexity of newer architectures.
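As a very simple illustration of anomaly monitoring, a spike detector can compare the current request rate against a recent baseline. Real monitoring systems are far more sophisticated; this sketch (with an invented threshold factor) only shows the shape of the check:

```python
def is_spike(history, current, factor=3.0):
    """Flag the current period's request count as anomalous if it
    exceeds `factor` times the average of recent periods."""
    if not history:
        return False  # no baseline yet, nothing to compare against
    baseline = sum(history) / len(history)
    return current > factor * baseline
```

A flagged spike is only the starting point: as noted above, someone still has to verify whether it reflects a good use case or an attack in progress.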
In addition, there are a number of companies who can now help with DDOS mitigation, such as Cloudflare or Amazon’s “Shield” service. These remove most of the burden from you, allowing you to concentrate on your business.
Security in the API Management era is just as complicated as it ever was with SOA, and it is far more than a simple throttling policy attached to the external-facing API.
The interaction of the wealth of external-facing APIs with back-end systems needs to be understood to cater for distributed attacks. Active API management will help find anomalies that may need to be addressed. These methods, combined with good HTTP and network firewalls, should stop most attacks, or at least contain them when they do arrive.
DDoS attacks can be mitigated either by your own security measures or with the help of some of the new external services that are springing up.
Never scrimp on good up-front testing of the API content logic. API Management focuses on the ability to expose data quickly, but with that speed come just as many security problems as we have always had. Make sure your APIs are designed and tested with all the rigour you would expect from any other system you work with, and build good security testing and monitoring solutions around them.
About John Hawkins
John Hawkins is the CTO for Lightwell.
He has extensive experience in the middleware and integration stack having worked at IBM, IBM Business Partners and WSO2.
During his time as an IBM MQ architect he led IBM's Cloud strategy and architecture for next generation, Cloud Centric, Messaging.
John is an innovator who naturally "sees the big picture", which he has demonstrated not only through his work at Lightwell and previous roles, but also through his large patent portfolio.