Modes of capability distribution in a system.

I’ve been thinking a bit about how complexity emerges in a system, and whether the relative size of its subcomponents can influence the amount of complexity possible. Let’s explore.

Warning: this analysis is more of a stream-of-consciousness exploration to see how a few abstract things relate to each other. It’s not trying to assert anything especially meaningful, nor is it carefully proving or demonstrating any of its points.

Systems are defined by their components engaging in feedback loops to advance the system toward its target state. A change in the state of a subcomponent typically triggers some flow of information to another subcomponent, which may result in some type of action to help the system maintain homeostasis of its overarching state. The primary focus here will be on the ability of components to take those actions.

We want to answer this question:

Does the size of a system’s components influence how large the system is able to grow?

Synthesizing component-oriented system types.

We have a few examples of systems we can use to help us figure out what variables we’re trying to work with:

  • A mammalian body consists of stable subcomponents called organs. Each organ has a highly specific job, its own unique substructures, and cells specialized for that job.
  • A company usually comprises a number of well-defined departments and teams. The skills needed to support each team’s activities usually differ across these functional units.
  • In an ant colony, there tends to be no specialization among individual workers and no organizing structures determining their work. With few exceptions, each member is able to do any job and moves between jobs as needs emerge through point-to-point communication.
  • A baseball team has members who take different roles out on the field with no hierarchy and little communication. Each player applies their specialization independently to support the team.

In Seeing Organizational Patterns: A New Theory and Language of Organizational Design, Robert Keidel proposes three archetypes for how individuals interact in an organization. We can use those same archetypes more abstractly to think about how system components relate to one another. They are:

  • Cooperation: Components are integrated in a way that allows them to operate as equal peers in their decisions and actions.
  • Control: The system has a hierarchical structure that gives members further up the hierarchy power over the members below them.
  • Autonomy: Separation is built into the design such that system components do not have to interact.

Comparing these traits to our examples, we can easily conclude that mammalian bodies and typical companies are patterned on hierarchy and therefore fit the Control archetype. Ant colonies tend more toward the Cooperation archetype, with each member communicating to assess and act on the needs of the colony. A baseball team demonstrates an organization that focuses primarily on Autonomy.

Let’s accept these three archetypes as our system types. We’ll test out the properties of each of these archetypes to form some theories about how their unique constraints influence their ability to scale.

Autonomy: little growth potential.

First up, let’s look at Autonomy. Following the baseball example, we see that this type of system has trained individuals into different roles that require little-to-no communication with the other players to execute. The system is patterned up front—players are assigned to a particular job and trained on how to do it—and then the system is left to do what it can. If the pattern is not working, this system has to change paradigms temporarily to reform itself: either one component of the system needs to analyze the system and apply the Control paradigm to redefine it, or each of the autonomous components needs to apply the Cooperation pattern to reassess and reorganize.

What constrains this pattern’s ability to grow into a complex system?

In order to be successful (and apply the paradigm purely), every component of this system needs some way to observe the global state and adapt to it. Since the only specialization in the system occurs at the level of the smallest components, every component either needs to be able to perform every job, or has to have some ability to grow into other specializations, perhaps through training by other specialists. Since the focus of this pattern is autonomy rather than coordination, every component needs to understand the global goals of the system, its current state, and the activities of any nearby (if not all other) components. Without these per-component feedback loops, the system would not be able to adapt and grow.

It ends up being hard to differentiate the overall system from any lower-level component under this definition. Even the baseball team design is not a purely autonomous system—baseball teams bring control and coordination into their designs to make choices. It’s not all left up to a given component to figure it out.

The growth potential of Autonomy-based systems seems to be nearly zero. There may not be any examples where such a system even truly exists: if a component and the system are indistinguishable, there may not be a system at all, just the component’s interior, and the component’s method of operation would likely need to be a different type of system to meet its theoretical requirements.

Cooperation: quickly constrained by wasted potential.

In many ways, a cooperative system resembles an autonomous system in the way we’re exploring them. Each component needs to possess a meaningfully comprehensive share of the system’s internal capabilities for the system to grow and adapt. The biggest difference is that no component needs to understand the global state of the system under this design: components only need to be able to assess local conditions, make decisions about them, and relay that information to other nearby components to solicit their support.

This creates some efficiencies—units near a problem are fully empowered to self-organize quickly around it without the overhead of administrative layers. It also introduces a constraint in the form of inefficiency. Without any administrative activities, the complexity of the local state trades off against the quality of information moving through the system, and therefore against system performance.

This works okay for ants: if the food stores are running low, any individual can recognize and communicate this to redirect more effort toward food collection. Any ant involved in that activity is capable of noticing when the food stores have reached an appropriate level and ceasing the activity. Information decay is pretty minimal; the worst risk is that too many workers accidentally collect too much food, and the maximum possible overage is that every engaged ant brings back one unit of food beyond the target stock level. The sketch below illustrates that bound.
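As a rough illustration of that bound, here is a minimal, hypothetical Python sketch (the function, numbers, and names are mine, not from the text): because every forager decides from the same stale view of the store, the overshoot can never exceed one unit per engaged forager.

```python
def run_colony(target: int, foragers: int) -> int:
    """Simulate decentralized foraging with stale, point-to-point information."""
    store = 0
    while store < target:
        # Every active forager checks the store at the same moment; none can
        # see the food the others are already carrying back, so all of them
        # depart, and each returns with exactly one unit.
        store += foragers
    return store

for foragers in (7, 23, 101):
    final = run_colony(target=1000, foragers=foragers)
    # The overage is always strictly less than the number of engaged foragers.
    print(f"{foragers:>3} foragers -> stock {final}, overage {final - 1000}")
```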

Consider a much more complex state, however: take for example a supermarket. Per-item stock has to be managed much more closely than this. Coordination can no longer be managed as a point-to-point issue—a worker who notices that more bananas are needed cannot announce that they are correcting that stock to every other worker who might also notice it running low. Past a relatively small amount of complexity, who is doing what has to be coordinated by a meta-system outside of the individual components, thereby breaking the cooperation-only model, since those new components govern the lower-level components. A rough comparison of the message volumes involved follows below.
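To make that scaling problem concrete, here is a hypothetical back-of-the-envelope comparison (the numbers and function names are illustrative, not from the text): broadcasting every stock-correction claim peer-to-peer grows with workers times tasks, while routing claims through a single coordinating meta-component grows with the tasks alone.

```python
def peer_to_peer_messages(workers: int, tasks: int) -> int:
    # Each claim ("I'm restocking the bananas") must reach every other worker.
    return tasks * (workers - 1)

def coordinated_messages(workers: int, tasks: int) -> int:
    # Each claim goes to the coordinator once, and one assignment comes back.
    return tasks * 2

for workers in (5, 50, 500):
    tasks = workers * 10  # assume stock tasks grow with the workforce
    print(f"{workers:>3} workers: peer-to-peer {peer_to_peer_messages(workers, tasks):,}"
          f" msgs vs coordinated {coordinated_messages(workers, tasks):,} msgs")
```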

Another constraint on this system is that every new capability of the system has to enjoy wide adoption amongst the components. Not every participant has to possess the capability, just enough of them that the capability is available wherever the system needs it. This does mean, though, that every component in the system needs to be able to grow proportionally alongside the system. That is quickly wasteful: each component needs to be able to perform tasks for which it is increasingly unlikely ever to encounter a need.

Every new capability additionally consumes more resources. The component either needs to learn the capability or its design needs to be updated to possess the capability innately, and both kinds of change consume resources in one form or another. Thus, resource availability also constrains the complexity potential of this system.

Control: eventually constrained by inefficient resource usage.

Hierarchical systems seem to be the most able to grow in complexity. These systems change the function of their components until they reach a stable state of performance. Each of these components has a specialized function. The hierarchy arranges these specializations in such a way that it can continue building functions, encapsulating them into stable higher-level subcomponents, and it can grow its complexity from there.

This approach runs into inefficiencies when forming stable components. Many components may have similar needs, but once stabilized, these systems often do not abstract shared responsibilities into components that are better optimized to provide the function across the whole system.

In fact, once we reach a certain level of complexity, managing the consequences of that complexity begins to take more and more resources that could otherwise have been employed to build refinements. Subtle failure modes in subcomponents interact negatively with each other, causing larger, unpredictable failures.

Resource usage also begins to explode with growth. Redundancies form throughout the system. The goals of subcomponents waver, falling in and out of alignment with the overarching system’s goals. Complexity can continue growing in these systems so long as they are able to keep growing their resource consumption to make up for the exponential growth in inefficiency. Attempts by the system to resolve an inefficiency require it to un-encapsulate a stable component and change how it works; integrating it more tightly with other components to address the inefficiency necessarily changes the component in a way that produces new failure modes, and therefore quality issues. As complexity grows, so does the propensity for catastrophic failures.

With enough analytical and executive investment, it might be possible to eliminate both inefficiency and catastrophic failure modes. Assuming those internal constraints can be addressed, the only thing constraining how large and complex a system like this can grow is its environment.


So there we have it. It seems that systems composed of smaller, more uniform components will grow wasteful much too quickly to achieve any real complexity. Systems with a hierarchy of components ranging up to very large, system-level stable components become wasteful at a slower rate, allowing them to grow much larger and more complex.