1. We Get the Enterprise DevOps We Deserve

    Design thinking expresses traditional design principles in a form that can be used for non-traditional activities. According to Herbert Simon’s definition of design as “changing current situations into preferred ones”, nearly anything can be designed, including DevOps. One could view DevOps as a thing, and argue about the definition of that thing. Alternatively, one could view it as an unfolding discourse, which can continually change from its current situation into a preferred one.

    Empathy is central to design thinking. In practical terms, empathy means testing your assumptions against users’ perspectives. Design thinking projects often start by looking for a new solution, only to discover the need to reframe the problem itself. I believe this approach can help us navigate the current debate regarding the nature of DevOps and the relevancy of Enterprise DevOps.

    Some of us dismiss Enterprise DevOps as, at best, misguided and, at worst, “snake oil”. Perhaps, though, we might be better served by listening to it for possible glints of wisdom. Could there be any truth to the claim that DevOps is “handwavy” when it comes to talking about culture? Might IT organizations gain genuine benefit from efforts to explain more concretely how to address the C in CAMS? Does denigrating people with the “snake oil” moniker miss the fact that salesmen go where the money is, and that money points to a perceived need? If we believe that Enterprise DevOps is “wrong”, might addressing the underlying real need be the best way to “discredit” it?

    Conversely, those of us who dismiss “pure DevOps” as a fantasy for startup unicorns might benefit from testing our own assumptions that underlie that opinion. Ironically, DevOps came into being to solve a legacy problem, not to present a golden utopia to the masses. Just because we can’t get a handle on what feels like hand-waving doesn’t necessarily mean it’s empty of content or value. 

    That being said, if we think the “pure DevOps” discourse around culture is a call for “cultural revolution”, maybe that’s exactly what enterprises need. They are besieged by demands to change their notion of themselves and how they relate to their customers. Is it strange to think they’d need equally profound transformations within IT? If we try to translate the strange and foreign into something more accessible, are we really addressing the right problem?

    Finally, if we want to present Enterprise DevOps as something “different”, we need to do more than just elaborate its ostensibly unique challenges. “Enterprises are different” is a question, not an answer. Assuming it’s the right question, we then need to answer it by describing what Enterprise DevOps practices actually look like, and explaining their unique applicability to large organizations. It doesn’t help if we’re just as handwavy as the “other side”.

    In my humble opinion, DevOps is good, and we all deserve good DevOps. The best DevOps is that which concretely, empathically addresses its customers’ needs. Time will tell whether that results in a single set of practices you can buy from a certification body, or in infinitely many, where every shop does it a little differently, or something in between. I believe that empathy is the essence of DevOps, just as it is the essence of Design Thinking. As such it needs to characterize, not just specific practices in specific IT organizations, but also the ongoing design of DevOps itself.

    The Tibetan Buddhist teacher Dzigar Kongtrul recently tweeted, “When we are hit with suffering we generally focus on the outer causes instead of looking at the inner causes of our suffering.” This comment seems like wise advice for us all. The more willing we are to expose ourselves to feedback, however it comes, and however confusing or distasteful it may seem, the better chance we’ll have of changing current situations into ones that really are preferred. 

  2. How to Fix Agile (and Cloud and DevOps) With This One Neat Trick

    © 2014 Jeff Sussna, Ingineering.IT

    The Agile community seems to be going through a renewed bout of soul-searching. On what seems like a daily basis I see posts questioning the relationship between formal, “capital-A” Agile practices and “small-a” agility. Debaters search for meaning in the Agile Manifesto like constitutional lawyers. 

    I decided to reread the manifesto for myself. It quickly occurred to me that its framers might have made a tactical error. The heart of the manifesto, which makes up page one, is a set of four values. The problem is that these values are really practices. They are things to do: collaborate, respond to change, emphasize software over documentation. They focus on what and how, not why. 

    Page two of the manifesto is a set of twelve principles. Reading the principles, I suddenly saw it, as if glowing in golden light: the holy grail, the key to the kingdom, the answer to all questions. “Agile processes harness change for the customer’s competitive advantage.” 

    If I were asked to rewrite the manifesto, I would make that sentence the entirety of page one. It answers all the questions about why we should use Agile, how we know if we’re doing it right, how we communicate its ultimate value, and so on. In fact, this principle goes beyond just agile software development. It captures the underlying purpose and benefit of all of 21st-century IT, from Agile to Cloud to DevOps to PaaS to Microservices to Continuous Delivery. When we say that any of these practices or technologies makes a business more agile, or helps it tighten its OODA Loop, what we’re really saying is that it helps the business harness change for competitive advantage. 

    We can use this principle to address objections to particular practices. To those who complain that Agile or DevOps tosses discipline and quality out with the rigidity bathwater, we can respond that the goal is to “harness” change, as one would a horse, to make it work for us, not to surrender to it. Conversely, we can use this principle as our own guide and conscience. We can use 2nd-order Agile practices such as retrospectives to ask ourselves whether we’re harnessing change or just unleashing it.

    Many development organizations still struggle to communicate Agile’s business value. If you don’t express yourself in terms of business benefit, business people likely won’t understand you. “Harnessing change for competitive advantage” is all about business benefit. It enables straightforward discussions of value and relevance. If, for example, a company doesn’t want to harness change, or doesn’t believe doing so will confer competitive advantage, then it shouldn’t use Agile (or, for that matter, Cloud or DevOps or …). Plain and simple.

    I believe that all of the 21st-century IT domains suffer from a certain amount of arguing at the wrong level. What’s the difference between SOA and microservices? Does DevOps work for enterprises? Is PaaS really just orchestration + containers? Agile seems to suffer from this problem even more egregiously than other domains. The centrality of the Agile Manifesto’s Four Values seems to exacerbate the problem. How much documentation is enough/too much? Are ticketing systems evil because they value process over individuals? 

    In general, we get mired in arguments about details when we lose sight of the purpose of those details. Rewriting the Agile Manifesto to just say “harnessing change for competitive advantage” won’t make all the arguments go away. There will still be plenty of room to debate the extent to which any given practice is harnessing change, or resisting it, or just mindlessly unleashing it. I think, though, that this is precisely the level of argument we want to be having.

  3. Brands As Promise-Marks

    Yesterday I read a tweet that said something about “engaging with brands”. That statement struck me as odd. I hypothesized that a brand isn’t something you can do anything to or with; nor can it do anything to or with you. Instead, I thought, a brand is the result of what you do with and to a company, and what it does with and to you.

    Peter Laudenslager made the comment that “a brand is a package of assumptions and expectations…what people assume and expect, true or no, intentional or not”. Something struck me as right about that statement. But where do the particular assumptions/expectations associated with a brand come from? Customers don’t make them up out of thin air. You don’t expect an airline to help you lose weight. You do expect it to get you to your destination on time, with a minimum of hassle. The package of assumptions and expectations you have about the airline evolves over time based on your experience. You may come to assume, for example, that United won’t get you to your destination on time, and that they’ll do it with maximum rather than minimum hassle.

    This line of inquiry led me to think we might be able to use Promise Theory to model the concept of “brand”. Promise Theory treats systems as consisting of autonomous agents that voluntarily cooperate by making, and sometimes keeping, promises to each other. Promise recipients are responsible for evaluating the trustworthiness of the promises they receive. Based on their evaluations, they can plan contingencies that help them improve certainty and reliability in the face of uncertain, unreliable relationships. Agents’ ability to keep their promises changes over time. Evaluation of trustworthiness and contingency planning thus must be dynamic activities.

    In the language of Promise Theory, a brand starts as a set of promises. These promises constrain customers’ initial assumptions and expectations. United does not promise to help passengers lose weight. It does promise to transport them safely from one city to another. It makes various promises about timeliness, convenience, etc. In the course of doing business, it sometimes keeps some of its promises, and sometimes breaks some of them. The bad news is that United often breaks its promises of timeliness and convenience. The good news is that it seldom if ever breaks its promise of basic safety. A United customer might evaluate the airline’s promises and leave extra time in their travel schedule. They likely wouldn’t, though, feel the need to update their will.

    A “brand” is thus “a package of promises made, kept, and broken”. This definition captures the dynamic nature of brands. They can get better and worse over time. It also captures their individual, experiential quality. Not everyone has a lousy impression of United. Some people are lucky enough to have a good flight nearly every time they use the airline. 

    To return to the metaphor behind the word “brand”, we could say that a brand is the “mark” a company makes on each customer. In the language of Promise Theory, that mark is the trace of the promises it’s made, kept, and broken with that customer. 

    Promise Theory intends to be useful, not just pretty. If my understanding of brand is correct, then it would seem to follow that companies can use Promise Theory to help themselves improve the marks they leave on their customers. They can ask themselves questions such as:

    • What promises are we making?
    • Are we keeping them?
    • What promises are we breaking because we don’t even realize we should be making them?
    • Are we keeping the most important promises to the most important customers?
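    The “package of promises made, kept, and broken” idea can be sketched in code. The following is a toy model, not anything from Promise Theory’s formal literature; all class and promise names are invented for illustration. Each customer accumulates their own trace of promise outcomes, and trust is evaluated per promise, per customer, which captures the individual, experiential quality of a brand:

```python
# Toy sketch of "brand as promise-mark" (all names are illustrative).
# Each customer carries their own trace of promises kept and broken.
from collections import defaultdict

class BrandLedger:
    def __init__(self):
        # customer -> promise -> [True (kept) / False (broken), ...]
        self.trace = defaultdict(lambda: defaultdict(list))

    def record(self, customer, promise, kept):
        """Append one kept/broken outcome to this customer's trace."""
        self.trace[customer][promise].append(kept)

    def trust(self, customer, promise):
        """Fraction of this promise kept, as experienced by this customer."""
        outcomes = self.trace[customer][promise]
        return sum(outcomes) / len(outcomes) if outcomes else None

ledger = BrandLedger()
ledger.record("alice", "on-time arrival", kept=False)
ledger.record("alice", "on-time arrival", kept=False)
ledger.record("alice", "basic safety", kept=True)

# Alice plans contingencies around timeliness, not safety.
assert ledger.trust("alice", "on-time arrival") == 0.0
assert ledger.trust("alice", "basic safety") == 1.0
```

    Because the ledger is per-customer, two customers of the same company can carry entirely different marks, which is exactly why not everyone has a lousy impression of United.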

  4. Designing for Operations

    Service-dominant logic tells us that service providers and customers create value collaboratively. Digital infusion means that every business is a digital business, and every service has a digital component. Value co-creation therefore requires holistic, mutual engagement all the way from the customer back to the IT operations organization. Customer satisfaction depends as much on IT system scalability, resilience, and other non-functional requirements as it does on functionality or UX quality. We can see the growing impact of operability on business in Target’s recent firing of its CEO, in part because of a security breach. 

    Digital system operability depends on human operators’ ability to manage systems. System administrators need to be able to control infrastructure in response to customer needs. In some cases, control implies proactively changing systems: adding servers, tuning databases, etc. In other cases, it means responding to potential and actual problems: a disk is about to fill up, an application has crashed, etc.

    Sysadmins use a variety of tools to help themselves understand and control infrastructure. These tools are generally not designed from a usability perspective, nor with the participation of UX designers. They tend to be somewhat crude and utilitarian. They arrange their interfaces around data rather than tasks, ideas, or processes. As a result, they offer sysadmins clumsy affordances which do less than they could to help maximize operability.

    Ryan Frantz's 'Alert Design' post inspired me to think about design for operations. The opening sentence, “alert design is not a solved problem”, grabbed my attention. I realized that, for the most part, design for operations isn’t even an identified problem, let alone a solved one. It occurred to me that operations could benefit from design thinking. We consider challenges such as minimizing cognitive load all the time in the context of consumer interfaces, whether they be websites, mobile apps, or automobile dashboards. Why not do the same in the context of IT operations? If we truly believe in service-dominant logic, and the reality of digital infusion, tackling the problem of designing for operations would seem to be a crucial component of developing quality services.

  5. Failure == Failure to Empathize

    In my previous post I proposed a ‘non-Newtonian’ definition of success as ‘a useful conversation with one’s environment’. I was trying to capture the notion that ‘success’ is a dynamic process rather than a static state. According to my definition, one is succeeding as long as one is listening to the environment’s responses to one’s actions, and as long as the conversation ‘makes sense to all involved parties’. The environment could be your company’s customers, or its employees, or development if you’re in operations, or operations if you’re in development. 

    But how does one listen well? How do you know if what you’re saying is making sense to the environment with which you’re conversing? You need the ability to see the conversation, and in particular your own actions, from the other party’s point of view. In other words, you need the ability to empathize. 

    This realization leads me to believe we could redefine ‘success’ even more succinctly as ‘the ability to empathize’. Empathy drives both sides of the conversation. When you see things from another’s perspective, you instinctively want to do something useful based on what you see. Empathy naturally drives action in response to listening. Because that action is empathetic, it’s more likely to be useful. Which brings us back to our previous definition of success as useful conversation. And so it goes, round and round…

  6. Rethinking Failure

    Suddenly failure is all the rage. Innumerable blog posts tell us failure is good, failure is necessary, failure should be incentivized, Google developed all of its best applications from failures, and so on. But wait a minute - if failure is good, that would seem to imply that somehow it leads to success. If that’s the case, is it really failure any more? What do we really mean by ‘failure is good’?

    By itself, failure is anything but good. Making the same mistake over and over again doesn’t help anyone. Failure leads to success when I learn from it by changing my behavior or understanding in response to it. Even then, it’s impossible to guarantee that my response will in fact lead to success. The validity of any given response can only be evaluated in hindsight. Even worse, the environment to which I’m trying to adapt doesn’t stop changing just because I’ve declared victory. Yesterday’s success can thus become tomorrow’s failure. 

    We need a new, “non-Newtonian” definition of failure that is less binary or dualistic. We need to shift our focus from momentary events to unfolding processes. This shift is especially important in the context of complex systems that evade traditional control. Component-level failure is inevitable in complex systems, yet the systems themselves can still thrive. Conversely, component-level events can combine to cause systemic breakdowns without themselves being considered failures. Once again, failure is in the eyes of the future beholder.

    We seem to be stuck in a Catch-22. We can’t be sure our actions won’t make things worse. Inaction isn’t an option; the situation arose in the first place because our current state is unsatisfactory. Can we resolve our conundrum by redefining failure and success in non-Newtonian terms? I believe we can. I want to propose a new definition of success as “a useful conversation with one’s environment”. 

    A conversation is “an interchange of information”. A useful conversation follows a direction that makes sense to all involved parties. Imagine the following counter-productive exchange:

    1. Let’s have Indian food for dinner.
    2. I don’t like Indian food.
    3. Do you prefer the Indian place on Grand or the one on Summit?

    The first speaker isn’t really listening to their counterpart. The conversation isn’t leading anywhere that makes sense to both parties. In fact, the first speaker is wandering off into the weeds by way of non sequitur.

    By this definition, success is less about what you do at any given point in time than how you process the environment’s response to it. The following conversation is perfectly constructive:

    1. Let’s have Indian food for dinner.
    2. I don’t like Indian food.
    3. Do you like Italian food?
    4. I love it!
    5. I know a great Italian place on Grand.
    6. That sounds good. But wait, won’t Grand be congested tonight because of the game?
    7. Good point. There’s another good place on Summit.
    8. Let’s go there.

    By redefining success in non-Newtonian terms, we’ve enabled ourselves to evaluate our progress in real-time. As long as we’re a) speaking, that is, acting by trying something new, b) listening to the environment’s response to our action, and c) guiding our future action by the response to our past action, then we are succeeding. When we stop engaging in any of those three steps, then we have failed.

  7. Informing the TDD Debate

    I’m not going to weigh in on whether Test-Driven Development is good, bad, or ugly. I do think, though, that the current debate is failing to consider one of its important characteristics. In my opinion, TDD is a design tool first, and a testing tool second. When we write code, we instinctively jump to thinking about how the code should work. We tend to focus on internal implementation details and forget about external requirements. This tendency holds at the micro-level (classes) as well as the macro-level (user functionality). The “write tests before code” mantra works as a design tool because it forces us to think from the outside-in before we think from the inside-out. It makes us start with questions about how our code is going to be used, and what service or functionality it’s supposed to provide.

    I believe that TDD’s name is somewhat unfortunate, and contributes to confusion. As far as I know, it reflects its spiritual inheritance from “Executable Requirements”. In the bad old days, we created chains of specs: MRD, PRD, design spec, test plan, etc. We then put ourselves through gyrations to connect specs to each other. How could we be sure we were testing the right things? So-called traceability. Traceability created a convoluted, time-consuming, error-prone process that resembled a game of reverse-telephone: making sure the message didn’t get changed as it moved from spec to spec and team to team.  

    The Executable Requirements movement had the insight that collapsing the distance between business and test specifications could improve efficiency, clarity, and quality. They began expressing requirements using testing language. From that perspective, “write your tests before you write your code” simply means “write your requirements before you write your code”. The fact that doing so gives you a regression suite that makes change safer “for free” is a bonus.

    In Agile approaches, we don’t do everything up front in linear fashion. We write requirements and tests, and design implementations, in small, iterative batches. TDD applies that approach to executable requirements. How, though, does it really help with design? It has to do with the concept of “externalized thinking”. When designers draw sketches or create models before trying to build a final product, they are “thinking out loud”. By doing so, they can help themselves better understand, and expose problems with, their design before committing implementation time, resources, or money. TDD offers a similar benefit to developers. By the time you dig into writing code that’s intended for release, you’ve maximized your ability to understand what it is that you’re trying to build.
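    A minimal sketch of the outside-in order TDD imposes (the function and its behavior are invented purely for illustration): the tests come first and act as executable requirements, forcing a decision about how the code should be used before any thought about how it should work.

```python
# Step 1: write the tests first. They pin down the external behavior
# of a slugify() function that does not exist yet.

def test_slugify_lowercases_and_joins_words():
    assert slugify("Hello World") == "hello-world"

def test_slugify_strips_punctuation():
    assert slugify("Ready, Set, Go!") == "ready-set-go"

# Step 2: only now write the implementation the tests demand.
import re

def slugify(title):
    # Pull out runs of letters/digits, lowercased, and join with hyphens.
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify_lowercases_and_joins_words()
test_slugify_strips_punctuation()
```

    The tests double as a regression suite “for free”: any later refactoring of slugify() can be checked against the same executable requirements.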

    I am personally skeptical of any methodology driven by “you must, always”. I feel more comfortable with approaches that say “you should, unless there’s a good reason not to”. In my personal experience, TDD works that way. Most of the time, when I try to convince myself I shouldn’t or don’t need to write tests (aka, concretely express and understand requirements) before I write code, the truth is that I’m making some sort of excuse. 

  8. Why Vagrant is the Best DevOps Tool Ever

    I am a strong proponent of the viewpoint that DevOps is first and foremost about culture. When clients ask me which big, expensive enterprise tool they should use to implement DevOps, I tell them they shouldn’t buy anything until they fully understand why they want it. I’ve previously posted my belief that empathy is the true essence of DevOps.

    I do, however, believe that tools can sometimes help develop culture by influencing behavior. To that end, there is one tool I tell every client they should adopt from the start. That tool is Vagrant, created and maintained by Mitchell Hashimoto.

    Vagrant makes it possible to create desktop clouds by scripting the configuration and control of multi-VM systems. Imagine a multi-tier web application consisting of a web server, a database, and an email server. With Vagrant you can specify and package the entire description of that application: its tiers, their operating systems, and all the system and application configuration actions needed to provision the entire software stack. You can then share that package with your whole team in a controlled manner. Any configuration changes can be managed and disseminated consistently via a version control system.
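    As a sketch of what such a packaged description looks like, here is a minimal multi-machine Vagrantfile. The box name, IP addresses, and provisioning script paths are illustrative assumptions, not a recipe for any particular stack:

```ruby
# Illustrative multi-VM Vagrantfile: a web tier and a database tier,
# each provisioned by a shell script, shareable via version control.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  config.vm.define "web" do |web|
    web.vm.network "private_network", ip: "192.168.50.10"
    web.vm.provision "shell", path: "provision/web.sh"
  end

  config.vm.define "db" do |db|
    db.vm.network "private_network", ip: "192.168.50.11"
    db.vm.provision "shell", path: "provision/db.sh"
  end
end
```

    Committing this file, and the provisioning scripts it references, to version control is what lets the whole team bring up the same system with a single `vagrant up`.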

    Vagrant makes it easy for everyone involved in delivering a software service to think about it in the same way. I tell my clients to create Vagrant boxes for their applications, and put them on everyone’s desktop. By everyone I mean developers, testers, admins, and even product owners. There is no reason product owners should depend on centralized test servers any more than anyone else. They should be able to do acceptance testing right on their own desktops, at their own convenience. Vagrant’s automation capabilities let them do it this way, with confidence they’re testing the same configuration that will run in production.

    Vagrant dissolves differences between perspectives across the Dev/Ops continuum. It makes it possible to treat every environment similarly, from the developer’s desktop all the way to production. It treats layers within the software stack similarly, from operating system patches to application configuration files. It presents everyone with the same view of a system, not just “infrastructure” or “application” or “database” or “app server” or “user behavior”. Most importantly, it treats every member of the software service team similarly, giving testers and admins alike the same environments and tools.

    Empathy involves the ability to see things from others’ perspectives. Vagrant puts complete systems on everyone’s machines. It makes those systems part of their daily lives. Team members can run the entire software stack, architecturally identical to production, on their laptop in a coffee shop. No longer is the full system architecture something that only lives in the cold, humming data center, on the other side of the man trap. In this way, Vagrant helps cross-functional software service teams start down the path towards mutual empathy, and thus towards DevOps culture.

    The title of this post is intended to be tongue-in-cheek. It’s not my intention to set up some kind of competition with other DevOps tools. As people say when asked to choose between Chef, Puppet, CFEngine, and other configuration automation tools, “using any of them is better than using none.” The point is that, when we struggle to understand how to foster something as intangible as “DevOps culture”, Vagrant can be an excellent starting point.

  9. Why We Should Design Software Systems Like We Design Buildings

    Yesterday I stumbled upon an online debate about whether we should build software like we build buildings. I would like to pose a slightly different question: should we design software systems the way we design buildings? To answer my own question: of course we should!

    We consider Apple under the leadership of Steve Jobs to be legendary and unique in the history of the computer industry. Jobs led the creation of systems that unified functionality, engineering, and esthetics. The results satisfied their users on multiple levels. With Jobs’ passing, it appears at least for the moment that Apple has lost its design mojo. No one else seems fully able to take up the mantle.

    By comparison, architects have been designing satisfying buildings for hundreds of years. Just within Spain, for example, you can see magnificent buildings whose design spans several centuries. Architects have always treated functionality, engineering, and esthetics inseparably. The idea that one would need to invent DevOps would seem strange to them. 

    We use the word ‘architecture’ on multiple levels in the context of software systems. None of our uses of the term are as rich as its meaning in the context of buildings. Building designers strive to satisfy human needs by shaping physical space. I believe that software architects need to redefine our jobs in a similar fashion. Our mission should be nothing less than striving to satisfy human needs by shaping digital space. We must pursue this mission inseparably across functional, operational, and esthetic dimensions. Hopefully, by doing so we can usher in an age that is replete with satisfying software systems, and where Steve Jobs is just one among many instead of a lonely beacon in the dark.

  10. Beginner’s Mind and Design

    From The Eames Studio’s Inspiring History and Unknown Dark Side:

    Charles and Ray’s biggest contribution was conceptual: They showed that “design” could be an art of manipulating ideas, not just materials. They were master communicators, not fabricators. “We don’t make art; we solve problems” was a favorite maxim of Charles, which still sounds perfectly contemporary in the 21st century, 50 years after he said it. “Design thinking” and research strategies, de rigueur now thanks to firms like IDEO, owe a debt to the Eameses philosophy of what one interviewee in the film calls “selling ignorance.” IBM and Westinghouse didn’t hire the Eames Office for its expertise, which would necessarily be limited; quite the opposite. They hired the Eameses for their process of discovery, of admitting that they knew little, and taking that “beginner’s mind” approach to finding design solutions.

  11. If You Want More Innovation, You Need More Art

    Last night I went to see a Trisha Brown Dance Company retrospective at the Walker Art Center in Minneapolis. Trisha Brown is a preeminent postmodern choreographer. The works in the show incorporated contributions by three of my artistic heroes: Laurie Anderson, John Cage, and Robert Rauschenberg.

    The show was one of the most exhilarating, creative, original performances I’ve ever seen. Brown and her collaborators have stretched our understanding of how human beings can move, both alone and in relation to each other; what they can wear; how space can be arranged and decorated; and how it all can relate to sound. Afterwards my brain felt like it had been stretched and spun and thrown up in the air like pizza dough,  then doused in a concentrated caffeine bath.  

    We live in the age of competition through innovation. We’ve begun spending more and more time and energy trying to figure out how to make our organizations more innovative. We tend to do it by applying external mechanisms: everything from Agile to Lean Startup to Design Thinking to innovation consultants to Chief Innovation Officers. These mechanisms can all be helpful. By themselves, however, they are likely doomed to fail. Mechanisms can only guide people in expressing their internal abilities.

    We often dismiss art as not being “about anything” or having any “practical” purpose. But you could also say that art is creative problem solving stripped to its barest essence. After last night’s dance performance we couldn’t stop talking about which choreography choices worked, what they meant, what could have been done differently. These are exactly the kinds of questions we want people asking about product strategy or corporate structures and procedures. We need people to be able to stretch and spin and throw their brains up in the air in the workplace. We need them to feel dosed with the caffeine of desire to understand and experiment and learn and solve.

    In this age of innovation, the idea of reducing exposure to or support for the arts seems highly counterproductive. We want people to go to more dance performances and read more novels. We want them to talk about what they read or saw or heard around the water cooler on Monday morning. Just like an athlete improving their strength and stamina through training, people need to train their minds in order to increase their capacity to stretch and reach for new ideas and solutions. Without that training, all our well-intentioned innovation methodologies will just be expensive, frustrating exercises in futility.

  12. Puzzlement-as-a-Service

    I’ve been observing the latest PaaS debate with some interest and more frustration. The relationship between PaaS and IaaS is being questioned: is PaaS becoming just an attribute of IaaS? Which one is more central? Does it make sense for PaaS vendors to continue to exist as independent companies?

    With all due respect to the participants, all of whom I hold in great esteem, I fear that the debate may be missing the point. I’ve long been a fan of PaaS on general principle. At this point, though, it’s hard to tell what it really is. I haven’t seen enough in the way of concrete, detailed, grounded description or analysis. Claims for its benefits are highly unicornish: “PaaS will liberate developers from the thrall of IT”. These claims often dismiss, and risk alienating, the ops side of things. To me this dismissal and alienation is very ironic. Having worked with enterprise IT teams that supported multiple applications on a single set of infrastructure, I believe PaaS has as much potential benefit for ops as it does for development. Unfortunately, I feel a bit like I’m talking to myself. The current discourse doesn’t help me understand whether I should advise my clients to run as fast as they can towards PaaS, or away from it.

    According to its vendors, the PaaS market is maturing. The information about PaaS needs to mature as well. Imagine, if you will, a hard-nosed, skeptical IT ops architect conducting a PaaS evaluation. Now imagine that this architect issues a concluding report along the lines of “here’s why I tried to convince myself we should avoid PaaS, and here’s how I convinced myself we should adopt it instead.” The report would include specific details about which features facilitated which beneficial outcomes, and how they did so. 

    Such a report would be incredibly useful to everyone analyzing, selling, supporting, or considering adopting PaaS. I want to challenge one or more of the PaaS vendors to write such a report, or at least to use it as a conceptual model for their marketing material. I think it would be a great step towards helping PaaS cross the chasm. If its benefits really are as great as those being touted, then we really do want everyone to use it.

  13. Empathy: The Essence of DevOps

    © 2014 Jeff Sussna, Ingineering.IT

    I first encountered empathy as an explicit design principle in the context of design thinking. You can’t design anything truly useful unless you understand the people for whom you’re designing. Customer satisfaction is more than just an intellectual evaluation. Understanding users requires understanding not just their thoughts, but also their emotional and physical needs.

    I was surprised to encounter empathy again in the context of cybernetics. This rediscovery happened thanks to a Twitter exchange with @seungchan. Cybernetics tells us that, in order for any one or any thing to function, it must have a relationship with other people and/or things. That relationship takes place through the exchange of information, in the form of a conversation. The thermostat converses with the air in the room. The brand converses with the customer. The designer converses with the developer. The developer converses with the operations engineer. Information exchange requires (and can contribute to) mutual understanding; i.e., empathy.

    I had another Twitter exchange, this one with @krishnan, on the question of whether Platform-as-a-Service needs DevOps. I think the question actually misses the point. Software-as-service offers customers inseparable functionality and operability. Development delivers functionality and experience; operations ensures the operational integrity of that experience. At some point, the service will inevitably break. Uncertainty and failure are part of the nature of software-as-service. They are, to use @seungchan’s term, part of its “materiality”, just as flexibility or brittleness are part of the materiality of the wood or metal or plexiglass used to make a piece of furniture.

    When a service does break, someone has to figure out where and why it broke, and how to fix it. Did the application code cause the failure? The PaaS? An interaction between them? Or something at a layer below them both? Regardless of how many abstraction layers exist, it’s still necessary both to make things and to run them. It doesn’t matter whether or not different people, or teams, or even companies take responsibility for the quality of the making and the operating. In order for a software service to succeed, both have to happen, in a unified and coherent way.

    The confluence of these two Twitter exchanges led me to reflect on the true essence of DevOps. It occurred to me that it’s not about making developers and sysadmins report to the same VP. It’s not about automating all your configuration procedures. It’s not about spinning up a Jenkins server, or running your applications in the cloud, or releasing your code on GitHub. It’s not even about letting your developers deploy their code to a PaaS. The true essence of DevOps is empathy.

    We say that, at its core, DevOps is about culture. We advise IT organizations to colocate Dev and Ops teams, to have them participate in the same standups, go out to lunch together, and work cheek by jowl. Why? Because it creates an environment that encourages empathy. Empathy allows ops engineers to appreciate the importance of being able to push code quickly and frequently, without a fuss. It allows developers to appreciate the problems caused by writing code that’s fat, or slow, or insecure. Empathy allows software makers and operators to help each other deliver the best possible functionality+operability on behalf of their customers.

    Dev and Ops need to empathize with each other (and with Design and Marketing) because they’re cooperating agents within a larger software-as-service system. More importantly, they all need to empathize, not just with each other, but also with users. Service is defined by co-creation of value. Only when a customer successfully uses a service to satisfy their own goals does its value become fully manifest. Service therefore requires an ongoing conversation between customer and provider. To succeed, that conversation requires empathy.

  14. Designing Holographically (Learning to See)

    © 2013 Jeff Sussna, Ingineering.IT

    In response to my post on Lean Architecture and Holographic Design, the esteemed James Coplien asked me to “share concrete practices that work”. I will try my best to respond to his request, though I’m not sure he’ll find my answer satisfying. I don’t have any formal processes, algorithms, or best practices to offer. In my experience, designing holographically is more about seeing than about doing. 

    Holographic design doesn’t work by creating a purely high-level architecture first, then filling in details later. Instead, it goes all the way down from the beginning. Confidence in the details degrades at lower architectural levels, as does the need for precision. Big Up-Front Design doesn’t work because it doesn’t allow for adaptation. At the same time, though, we can’t just hope that good architecture will magically emerge. Holographic design allows us to create a coherent overall picture while leaving room for specific details within that picture to flex and change in response to feedback. 

    How do you know that any given version of an architecture is good enough to let you move forward? How can you be sure that iterative refinements at lower levels won’t invalidate higher-level decisions? To a large degree, it’s a matter of practice. Having done it enough times, you learn to see weaknesses in the overall design, and in the relationships between components and levels. You learn how to push at the design, almost like you would a spider web, to see what happens if individual strands break at any given level. 

    Architecture is about shape. In order to evaluate a design, you need to be able to see, and contemplate, the entire shape. There’s a reason we use the word ‘spaghetti’ to refer to poorly thought-out designs. They are, quite literally, ‘a mess’. You can’t understand shapes and their implications through linear analysis. It’s not a matter of applying graph theory. Instead, you need a holistic ability to see. How do you learn how to see? How do you practice it in order to improve your seeing ability? The best way I know is to study and practice the arts. Look at great buildings throughout history. Take a film-making or photography class. Read Hemingway and Fitzgerald and Louise Erdrich. Write poems and short stories. Participate in critiques. 

    Picasso’s ‘Guernica’ is my favorite painting in the world. Many years ago I had the great fortune to see a show devoted entirely to that painting. It included a large number of studies Picasso painted in preparation for the final piece. Being able to see his progression from the initial concept to the final masterpiece was remarkable. His process seemed very holographic. The basic premise remained from the beginning to the end. Along the way, his execution grew in richness, depth, detail, and completeness. He added more details, while simultaneously working out the relationships between them. 

    Systems thinking requires subjective choices about where to draw boundaries and how to connect components. Thinking about the meaning of things in the arts requires similar choice-making. What is Hemingway trying to say in “The Old Man and the Sea”? It’s up to you to find patterns within the story, and to decide how those patterns fit together. There is no such thing as “the right” analysis of a piece of literature. Similarly, there is no such thing as “the right” architecture. There is no way to be certain an architecture “works” until after the fact. Holographic design is a continual process of hypothesizing. The best you can do is to learn how to make good hypotheses. Making, studying, and critiquing works of art is about making, studying, and critiquing the process of hypothesis-making. As such, it can help us become better and more lean architects.

    The artistic approach isn’t just about lonely genius. It can also help us design holographically within a collaborative group process. Art has always incorporated group critique. Lean architecture can use a similar technique. Architectural critique can become a regular part of the iterative cadence. For it to work, however, everyone involved needs the ability to understand and push at architectural shapes. The entire team, therefore, not just a lonely ivory-tower architect, needs to learn to see.

  15. Lean Architecture and Holographic Design

    © 2013 Jeff Sussna, Ingineering.IT

    Agile has always struggled with the question of how to approach architecture. Should it be emergent, and just arise as part of iterative development? Or should it take place during a special, distinct Iteration Zero? We tend to think of architecture as something different and non-iterative. We worry about getting it “right” up front. That thought process goes against the very notion of Agile, which questions the possibility of getting anything “right”, forever, in one shot.

    I think the tension between architecture and Agile arises from the fact that architecture necessarily involves coherency. From the beginning, an architecture must represent a context, along with the relationships between the components within that context. We struggle to understand how to iteratively create coherency. We worry that, if we “bite off small pieces at a time”, we’ll end up with a Rube Goldberg device instead of anything coherent, elegant, or manageable.

    In reflecting on my own architectural process, I’ve found that I approach design “holographically”. The name comes by way of analogy with a hologram that, when broken, still shows the whole picture, but with less sharpness of detail. I design by starting with a fuzzy version of the entire architecture, then gradually increase the precision of successively lower levels of detail.

    Holographic design naturally balances coherency and iteration. I think it has the potential to help resolve the Agile/architecture tension, in the form of a Lean Architecture practice. Similar to Lean UX, Lean Architecture would integrate architecture into the Agile development process without sacrificing the integrity of either one.

    Rather than applying iteration to the gluing together of architectural components, holographic design applies it to increasing levels of precision within the overall architecture. My first cut at an architecture generally results in a picture that has more or less the right overall shape. The details of the components that make up that shape, however, are somewhat fuzzy. The more deeply I dig into the structure, the more fuzzy the details get. If someone asks me “how are you going to solve this sub-problem”, the answer is often “I’m not sure”. Sometimes I do have an answer, but one that doesn’t hold up to scrutiny. Peer review often helps me understand what I have to figure out next.
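
    As a small illustration (the pipeline, class names, and stubs here are my own hypothetical example, not drawn from any real project), a first holographic cut might look like a program whose overall shape is complete while its lower levels remain explicit, deliberately fuzzy stubs:

    ```python
    # A toy sketch of a "holographic" first cut: the whole shape exists
    # from the start, but some component internals are still undecided.

    class Ingest:
        def fetch(self):
            # The component's role is clear; the concrete data source is
            # a lower-level detail we haven't resolved yet.
            raise NotImplementedError("data source not yet resolved")

    class Transform:
        def apply(self, records):
            # First cut: identity transform. The precise rules will be
            # sharpened in later iterations.
            return records

    class Pipeline:
        def __init__(self):
            self.ingest = Ingest()
            self.transform = Transform()

        def run(self):
            # The overall flow is fixed and reviewable now, even though
            # one of its strands is still a stub.
            return self.transform.apply(self.ingest.fetch())

    pipeline = Pipeline()  # the whole shape can be constructed and reviewed
    ```

    The point of the sketch is that review can push at the overall shape immediately; the stub marks a fuzzy detail whose resolution increases the sharpness of the design without changing its shape.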

    It isn’t generally the case that detailed review shows my picture to be fundamentally flawed. If my design is just plain wrong, I usually find out fairly quickly. More often, I need to resolve something at a component level. That resolution may uncover structural problems at lower levels of the architecture. It’s seldom the case, though, that structural breakage cascades back up to a higher level. By resolving lower-level details, I increase the “sharpness” of the overall design.

    Holographic design makes it possible to do rapid architecture early in a project. Early stage architecture review only needs to provide enough confidence to move forward with a design. It doesn’t need to freeze lower-level details. The architecture can, and will, flex during the course of iterative development. It can do what it’s supposed to do, which is to adapt to interaction with reality. Project teams can delay decisions about specific, component-level details and solutions until they really need to resolve them. In the meantime, iterative development generates feedback about still-fuzzy components. The lean architecture practice can use this feedback to increase sharpness at the same cadence, and within the same iterative structure, as the rest of the project.