When I started developing software, the operating system handed off control to an application, the application did its thing, and when done, handed back control to the operating system. These early systems were largely single-application and linear. Control flow was simple and direct. The operating system had very little involvement with the running application.
The rise of graphical user interfaces brought the next major paradigm mainstream. Instead of handing off control, the operating system actually owned it, lending it temporarily to the application. The application and operating system worked in cooperation. For instance, if you wanted to print, the application would call the operating system and tell it what to print, and the operating system would handle it from there. This is called event-driven programming.
This was the standard for desktop and mobile operating systems. When web development took off, the client-server model became popular and hit massive scale. In this model, the application would do some work, need some information, and request it from the server. The server would do its thing and send back a response with whatever was requested. Meanwhile, nothing generally stopped the application from doing other things while it waited.
In each of these models, a human did something and the software reacted. The user would click a button, and the application, together with the operating system or a server, would do something on their behalf. This is human-in-the-middle. A human is involved in the process.
This is mostly how things worked until the early 2000s. When I started my first company, I remember investors saying selling to software developers was one of the worst markets you could ever choose. This was a group of people who could write stuff on their own and all acted as individual purchasers. Very hard! And many companies died trying.
In the late 2000s and early 2010s, this started to shift. Part of it was the rise of microservices, where we could break an application into many different services, each doing one specific task. Another part was the rise of mobile and of modern, very complex operating systems. As a developer, there was too much to know, and it became feasible and desirable to outsource some capabilities to other software applications and services. We had mashups, where we’d use, say, Google Maps in our own applications.
In the 2010s, a few new companies were founded to build services explicitly for other software. Stripe (payment processing) and Twilio (messaging and communication) were two such companies. Both are still staples in software development circles. Why build your own payment processing software, with all its compliance and reporting requirements, when you can pay 2.9% plus $0.30 per transaction and work with a company focused on making integration brain-dead simple?
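To make that trade-off concrete, here is the fee math from the standard 2.9% plus $0.30 per-transaction pricing mentioned above, sketched in Python (the function name is mine, for illustration):

```python
def transaction_fee(amount_dollars: float) -> float:
    """Fee under a 2.9% + $0.30 per-transaction pricing model."""
    return round(amount_dollars * 0.029 + 0.30, 2)

# On a $100.00 sale, the processing fee is $3.20,
# leaving $96.80 for the seller.
fee = transaction_fee(100.00)
```

Against that three-dollar cost per hundred, the in-house alternative is engineering time plus ongoing compliance and reporting burden, which is why the buy side of this decision usually wins.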
This is the start of the inflection point. For the first time we have software built primarily for other software to call, with no humans involved. That’s software-in-the-middle. That shift is about to accelerate dramatically with AI, LLMs, and agents.
For decades, software was built primarily for humans. We clicked buttons. We filled out forms. We triggered workflows.
Stripe and Twilio changed that. They weren’t built for humans to use directly. They were built for software to call. Clean APIs, clear contracts, and simple integrations.
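What a "clean API with a clear contract" means in practice is that the whole service surface reduces to typed inputs and a typed, structured result that another program can consume. Here is a minimal sketch of that contract style; the names (`create_charge`, `ChargeResult`, the `ch_123` id) are hypothetical, not Stripe's actual API, and no real processor is called:

```python
from dataclasses import dataclass

# A software-facing contract: no screens or forms,
# just typed parameters in and a structured record out.

@dataclass
class ChargeResult:
    id: str
    amount_cents: int
    currency: str
    status: str

def create_charge(amount_cents: int, currency: str, source: str) -> ChargeResult:
    """Validate inputs and return a structured result, the way a
    payments API would. Illustrative only; nothing is charged."""
    if amount_cents <= 0:
        raise ValueError("amount_cents must be positive")
    return ChargeResult(id="ch_123", amount_cents=amount_cents,
                        currency=currency, status="succeeded")

# The "user" here is another program, not a person at a screen.
result = create_charge(2000, "usd", "tok_visa")
```

The point of the shape is that a caller never needs a UI: every capability is reachable through a function signature, and every outcome comes back as data.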
That pattern is about to dominate.
AI, LLMs, and agents don’t just assist humans; they consume APIs, call services, and orchestrate workflows. The next generation of software development won’t just build applications for users; it will build systems designed to be used by other systems.
Software enabling software. Software-in-the-middle.