Background
Adam DuVander, a Portland raconteur and all-around good fellow, reached out several months ago. He was writing an article and hoped I could answer a few questions on enterprise API management. A few of my quotes, along with insight from a handful of others, have since been posted in a piece entitled “API Experts Share How They Organize Thousands of APIs and Microservices”.
The choicest bits made it into the article. Behind the scenes, however, I bloviated enough to fill a standalone post. Here is the entirety of my replies, tweaked and updated for clarity.
The questions have to do with how companies manage their internal API portfolios. These are collections that, particularly in the age of microservices, grow at exponential rates.
Q1: How does an engineer discover what internal APIs already exist?
The Problem
API architectures are useful for several reasons. Benefits include organizing the complexity of an enterprise environment, allowing for independent deployment, dividing labor, and achieving faster time to market. Despite these advantages, however, moving away from monolithic software deployment does create discovery problems. Where there was once a single mass to inspect, there are now many homunculi to track down and interview. This is a problem regardless of the size of the development shop.
A centralized API registry for service discovery should be the source of truth. This single catalog not only benefits integrators looking to kickstart new experiences; it is also vital for data and regulatory appeasement (GDPR, CCPA).
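To make that concrete, here is a minimal sketch, in Python, of what a single catalog entry might capture. The field names are my own illustrative choices, not from any particular registry product; note the data-classification field that makes the GDPR/CCPA reporting possible.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of a registry entry; field names are hypothetical,
# not drawn from any specific product.
@dataclass
class ApiCatalogEntry:
    name: str             # e.g., "customer-profile-v2"
    owning_team: str      # who to contact for changes or incidents
    spec_url: str         # where the API description lives
    lifecycle_stage: str  # "design" | "production" | "deprecated"
    data_classifications: List[str] = field(default_factory=list)
    # e.g., ["PII", "payment"] -- the hook for GDPR/CCPA reporting

entry = ApiCatalogEntry(
    name="customer-profile-v2",
    owning_team="identity-platform",
    spec_url="https://registry.internal.example/specs/customer-profile-v2.yaml",
    lifecycle_stage="production",
    data_classifications=["PII"],
)
```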
As Luis Weir points out in his book, “Enterprise API Management”, many companies, unfortunately, don’t have this single portal discovery experience. The discovery process, in these orgs, relies on tribal knowledge and shared spreadsheets.
One needs to design the discovery experience not just for external APIs but for internal ones as well. There are many examples, often retold, of excellent external experiences. What isn’t covered nearly as much is the internal discovery experience. Whether this is a dedicated portal meticulously maintained by a dedicated platform team or a list (or list of lists) of APIs in a spreadsheet, this discovery experience should be managed. Knowledge management is an undervalued discipline; if you have access to talent in this area, use it in creating this experience!
Automation Can Take Multiple Forms
Whether it is one portal or many, relying on “good faith” efforts isn’t scalable. Pleading with harried developers to “do the right thing” and “create some docs” doesn’t work. At best, it results in the same stale documentation that plagues so many company content management systems. The process for data entry into the registry must be as automated as possible. Thankfully, automation is possible through standardized lifecycle management and process enforcement built on uniform API descriptions, like OpenAPI.
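As a sketch of what that automation could look like, the following hypothetical pipeline step parses an OpenAPI description and pushes its metadata to the registry on every merge. The registry URL and payload shape are assumptions of mine, and the script presumes PyYAML and requests are available.

```python
# Sketch of a pipeline step that registers an API from its OpenAPI description.
# The registry endpoint and payload shape are assumptions.
import sys
import yaml
import requests

REGISTRY_URL = "https://registry.internal.example/apis"  # hypothetical

def register(spec_path: str) -> None:
    with open(spec_path) as f:
        spec = yaml.safe_load(f)

    payload = {
        "name": spec["info"]["title"],
        "version": spec["info"]["version"],
        "description": spec["info"].get("description", ""),
        "paths": sorted(spec.get("paths", {})),
    }
    # Runs on every merge, so the catalog entry cannot go stale
    # without the spec itself going stale.
    requests.post(REGISTRY_URL, json=payload, timeout=10).raise_for_status()

if __name__ == "__main__":
    register(sys.argv[1])
```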
Another approach is to have the runtime gateway environments “phone home” to a centralized registry. The resulting entries describe the business intent only at a more abstract level. However, identifying highly trafficked or critical infrastructure is a good place to start building out more robust management rigor.
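A rough sketch of the “phone home” idea, assuming a hypothetical registry endpoint and a simplified access-log shape, might look like this:

```python
# Sketch of a gateway-side job that reports observed traffic to the registry.
# The registry endpoint and the access-log shape are assumptions.
from collections import Counter
from typing import Dict, List
import requests

REGISTRY_URL = "https://registry.internal.example/observed-apis"  # hypothetical

def report(access_log: List[Dict]) -> None:
    # Count requests per upstream service so the registry can flag highly
    # trafficked interfaces that deserve closer management attention.
    volume = Counter(entry["upstream_service"] for entry in access_log)
    payload = [
        {"service": service, "requests_last_hour": count}
        for service, count in volume.most_common()
    ]
    requests.post(REGISTRY_URL, json=payload, timeout=10).raise_for_status()
```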
Duplication of Work Will Happen; The Degree to Which this is a Problem is Debatable
The Continuous API Management (CAM) book, by the dearly departed API Academy, blames internal API duplication on poor discoverability. However, in my experience, this is an oversimplification. Even with a comprehensive catalog of APIs, development teams may seek to minimize their dependencies. This includes dependencies on other people’s APIs (see point #2 in my talk, “Three Ways Conway’s Law Affects API Governance”). Internal APIs are not subject to the same market-force natural selection that picks winners and losers. Other sociotechnical systems-engineering approaches need to be brought to bear.
Q2: How do you ensure new APIs are consistent and meet guidelines?
Consistency of Design Starts with Consistency of Language
Before consistency can happen, we have to have a clear medium of exchange. The consistency of design starts with the consistency of language. Are the words “policy”, “guidance”, and “standards” all used interchangeably? What about “resources” and “endpoints”? Are “methods”, “actions”, and “verbs” all the same thing? Or do they have different meanings? Observe the language the business uses and create consensus where there is none. This is a foundational activity before guidelines can be built. It is also ongoing.
Pair Design Requirements to Organizational Outcomes
The next step is to understand the organization’s goals. Guidelines are a means of driving teams in a particular direction, but to where? Guidelines without a destination, ones that don’t drive an organization to a better place, will be ignored. Are new integrations taking too long because of incomplete (or even contradictory) affordances in the interface? If so, we now have an outcome (or perhaps even KPIs) to shape guidance.
Enforcement is Not an All or Nothing Matter
With guidelines matched to outcomes, we can start enforcement. There is a spectrum. One extreme is gatekeeping. This is where an API is not allowed into an environment (development, QA, production, etc.) without ‘approval’ by a company-sanctioned group. This group is responsible for ensuring that guidelines are met.
Automation can help here, as well. Tools like Spectral can enforce deterministic rules: for example, flagging a design that attempts to submit a body as part of a GET operation. However, more subjective criteria, like whether “chAccPwChangeId” is self-descriptive and easily understood by the target audience, still require human judgment.
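This isn’t Spectral itself, but a small Python sketch of the kind of deterministic rule such a linter automates, flagging any GET operation in an OpenAPI document that declares a request body:

```python
# Sketch of a deterministic design rule: GET operations should not declare
# a request body. Operates on a parsed OpenAPI document (a plain dict here).
from typing import Dict, List

def find_get_with_body(spec: Dict) -> List[str]:
    violations = []
    for path, operations in spec.get("paths", {}).items():
        get_op = operations.get("get")
        if isinstance(get_op, dict) and "requestBody" in get_op:
            violations.append(f"GET {path} declares a request body")
    return violations

spec = {
    "paths": {
        "/accounts/search": {
            "get": {"requestBody": {"content": {"application/json": {}}}}
        }
    }
}
print(find_get_with_body(spec))  # ["GET /accounts/search declares a request body"]
```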
The gatekeeping approach is the simplest to implement and the easiest to understand, particularly in highly regulated industries (to prevent bad outcomes from happening, a sanctioned party reviews all the things). However, this approach introduces an organizational dependency into software delivery. It can also be discouraging, particularly for those who self-identify as highly skilled, empowered individuals.
The book “Team Topologies” presents an alternative approach. Rather than mandating a gate, the intent is to create a desire for mastery across all constituents and then help them get there (this pursuit of mastery better aligns with the self-image mentioned above and is therefore more empowering). Skewing to this side means the emphasis shifts from up-front prevention to ongoing internal evangelism and a service-style interaction model offered by a supporting API team. Successfully deploying, maintaining, and demonstrating the effectiveness of this model is incredibly challenging, however. That is why, I believe, I haven’t seen it implemented nearly as often.
Guidelines are a Process, Not a Stable End-State
Finally, any guideline program needs a well-established and continuously practiced feedback mechanism. The market changes, developer maturity changes, and the organization’s goals change. So too should the guidelines change to reflect that reality. The process for continual co-evolution of the guidelines is far more important than the specific guidelines at any one point in time.
Q3: How should “legacy” APIs be treated differently?
It is highly unlikely that any guideline or standards process is starting greenfield. Organizational leadership may be surprised by the number of interfaces, and the amount of data exchanged through them, that already exist at the outset of a cataloging effort.
When it comes to building a catalog, the first priority, even above adherence to standards, is being listed. Once listed, an API governance team can begin a process of ‘eventual consistency’ with the guidelines. This is where the API is discoverable but flagged as out of compliance. When the team begins work on the next breaking version, incompatibilities can be reconciled at the same time, since clients will have to modify their code to accommodate the new version anyway.
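As an illustration of that bookkeeping (the field names are mine, not from any particular registry product), an out-of-compliance entry might look something like this:

```python
# Sketch of "eventual consistency" bookkeeping: the API is listed immediately,
# with its compliance status and remediation target tracked alongside it.
legacy_entry = {
    "name": "acct-mgmt-soap-bridge",
    "listed": True,  # priority one: discoverable today
    "guideline_compliance": "out-of-compliance",
    "known_gaps": ["non-standard error format", "verbs in resource names"],
    "reconciliation_target": "v3",  # fix gaps with the next breaking version
}
```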
I’m not fond of “burn the boats”, all-or-nothing transformation initiatives. Even if these “legacy” APIs are not the way we’d build them now, they are still providing business value to someone (hence their continued existence). Cataloging them, with an understood path forward, enables that quirky part of the corporate ecosystem to be studied and understood. As stewards of these complex systems, we’d do well to learn from the accretions that occurred, rather than ignore a company’s legacy in order to declare a fleeting ideological victory.