#201 - Building Evolutionary Architectures: Automated Software Governance - Rebecca Parsons

00:56:58
https://www.youtube.com/watch?v=HeZWJ1K3qR0

Summary

TL;DR: Dr. Rebecca Parsons discusses why evolutionary architecture matters more than ever, given the rapid pace of technological change and rising consumer expectations. Traditional rigid architectures are often inadequate; systems that can adapt quickly are critical to organizational success. Key concepts include fitness functions for measuring architectural qualities, governance that emphasizes outcomes over rules, and Conway's Law as a reason to align team structures with architectural goals. The conversation also touches on AI's role in development, stressing that AI tools must be integrated carefully to maintain code quality and to ensure new developers still learn properly.

Takeaways

  • 🚀 Evolutionary architecture is essential for adapting to fast-paced change.
  • 📏 Fitness functions help measure and guide architecture changes.
  • 🛠️ Governance should focus on outcomes, not just implementation rules.
  • 🔄 Conway's Law impacts how systems are structured based on team communication.
  • 🔍 Understanding legacy systems is crucial for effective evolution.

Timeline

  • 00:00:00 - 00:05:00

    The speaker highlights how architectural needs have evolved due to rapid changes in technology and consumer expectations. Organizations must adapt their systems accordingly to remain competitive.

  • 00:05:00 - 00:10:00

    Dr. Rebecca Parsons discusses her personal journey in choosing careers in computer science and economics, emphasizing the importance of following one's instincts and aligning success criteria with organizational goals.

  • 00:10:00 - 00:15:00

    She reflects on her experience as a CTO and the importance of inspiring future generations in tech, specifically women, by speaking on important tech topics and advocating for diversity.

  • 00:15:00 - 00:20:00

    The conversation shifts to evolutionary architecture. Dr. Parsons explains the necessity of evolving architecture due to fast-changing business models and consumer demands. Traditional architectures may not suffice in a rapidly evolving market.

  • 00:20:00 - 00:25:00

    Dr. Parsons weighs microservices against monolithic architectures, emphasizing that each has pros and cons depending on the organization's context and needs.

  • 00:25:00 - 00:30:00

    She defines architecture as dependent on what matters to the organization, stressing the significance of trade-offs in addressing architectural concerns.

  • 00:30:00 - 00:35:00

    The speaker mentions the limited adoption of evolutionary architecture compared to the hype around microservices but notes an increasing interest in fitness functions for better architectural assessments.

  • 00:35:00 - 00:40:00

    Dr. Parsons outlines the definition of evolutionary architecture, highlighting the roles of guided incremental change and fitness functions in aligning architecture with business objectives.

  • 00:40:00 - 00:45:00

    Fitness functions are described as automated tests or assessments to gauge the health of a system, emphasizing the need for concrete definitions to measure aspects like maintainability.

  • 00:45:00 - 00:50:00

    She suggests that UX and databases should also evolve incrementally and discusses the principles and practices that support evolutionary architecture, including governance and engineering practices.

  • 00:50:00 - 00:56:58

    The discussion covers governance, emphasizing that it should focus on outcomes rather than prohibitive rules and incorporate principles from domain-driven design.



Video Q&A

  • Why is evolutionary architecture necessary today?

    Evolutionary architecture is necessary due to rapid changes in technology and consumer expectations, requiring organizations to adapt quickly to remain competitive.

  • What are fitness functions in the context of evolutionary architecture?

    Fitness functions are metrics used to assess how close a system is to achieving its desired goals, helping guide incremental changes in architecture.

  • What role does governance play in evolutionary architecture?

    Governance focuses on outcomes rather than implementations, allowing teams the freedom to create innovative solutions that align with business goals.

  • How does Conway's Law relate to software architecture?

    Conway's Law states that a system will reflect the communication patterns within an organization, highlighting the importance of team structure in architectural design.

  • What engineering practices support evolutionary architecture?

    Key practices include continuous delivery, evolutionary database design, contract testing, and focusing on architecture that aligns with business needs.
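As a sketch of how contract testing keeps services independently evolvable, here is a minimal consumer-driven contract check in Python. The field names, types, and the `satisfies_contract` helper are hypothetical illustrations, not anything described in the episode; real teams typically use a dedicated contract-testing tool.

```python
# Hypothetical consumer-driven contract: the consumer records the fields
# and types it relies on, and the provider's response is checked against
# that contract in the build. All names here are illustrative.

CONSUMER_CONTRACT = {
    "order_id": str,
    "total_cents": int,
    "currency": str,
}

def satisfies_contract(response: dict, contract: dict) -> list[str]:
    """Return a list of violations; an empty list means the contract holds."""
    violations = []
    for field, expected_type in contract.items():
        if field not in response:
            violations.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            violations.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(response[field]).__name__}"
            )
    return violations

# The provider may add fields freely; the contract only pins down
# what the consumer actually reads.
provider_response = {"order_id": "A-1", "total_cents": 1999,
                     "currency": "EUR", "status": "paid"}
print(satisfies_contract(provider_response, CONSUMER_CONTRACT))  # → []
```

Run against every provider build, a check like this fails fast when a change would break a consumer, without coupling the two teams' release schedules.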

Subtitles (en)
  • 00:00:00
    When I started, back decades ago, you probably didn't need an evolutionary
  • 00:00:05
    architecture, because technology was not moving that quickly, expectations
  • 00:00:10
    were not moving that quickly.
  • 00:00:12
    But what we see now is that business model lifetimes are shorter.
  • 00:00:17
    Consumer expectations for businesses are being driven
  • 00:00:21
    by what's happening in TikTok.
  • 00:00:23
    And when you have that level of change, having a system that you can't change to
  • 00:00:30
    reflect what it is that your customers are demanding you give them is not going to
  • 00:00:34
    allow you to succeed as an organization.
  • 00:00:36
    One aspect of architecture that I think is important is it depends on what
  • 00:00:43
    is important to that organization.
  • 00:00:46
    You want your architecture to support the things that matter.
  • 00:00:53
    There is nothing inherently wrong with a monolith.
  • 00:00:57
    Yes, there are some things that you can do with a microservice
  • 00:01:00
    that you can't do with a monolith.
  • 00:01:02
    But then there are problems that you can't have in a monolith that you have to deal
  • 00:01:10
    with in a microservices architecture.
  • 00:01:12
    People try to fight Conway's Law.
  • 00:01:15
    And you just can't do it.
  • 00:01:17
    If the people don't talk effectively to each other, the systems that
  • 00:01:22
    they're responsible for are not going to talk to each other.
  • 00:01:26
    There was one study that was done by the CodeScene people, when the coding
  • 00:01:32
    assistant that they were working with recommended a refactoring, in the best
  • 00:01:37
    case, they were right 37% of the time.
  • 00:01:40
    If you as a developer got things wrong two thirds of the time, you're not
  • 00:01:44
    going to keep your job for very long.
  • 00:01:59
    Hello, everyone.
  • 00:02:00
    Welcome back to another new episode of the Tech Lead Journal podcast.
  • 00:02:03
    Today, I'm very excited here to have an honorary guest, Dr.
  • 00:02:06
    Rebecca Parsons.
  • 00:02:07
    She's an ex-ThoughtWorks CTO for a long time, around 15 years, I guess, until
  • 00:02:12
    very recently she left the position.
  • 00:02:14
    So Dr.
  • 00:02:15
    Rebecca today is here to talk about evolutionary architecture.
  • 00:02:19
    I think this topic has been around for quite some time, although I
  • 00:02:22
    think the adoption has not been really there in the industry.
  • 00:02:27
    So Dr.
  • 00:02:27
    Rebecca, really looking forward to have this conversation with you today.
  • 00:02:30
    I hope to learn a lot about evolutionary architecture.
  • 00:02:34
    Happy to be here, Henry.
  • 00:02:36
    Right.
  • 00:02:36
    Dr.
  • 00:02:36
    Rebecca, I always love to start my conversation by asking my guests to maybe
  • 00:02:40
    tell us a little bit more about you.
  • 00:02:42
    Maybe turning points that you think we all can learn from you.
  • 00:02:46
    Well, I guess the first one would be, as I was just getting out of
  • 00:02:50
    university and taking my first real job.
  • 00:02:53
    And I had offers at the same company, but from two very different departments.
  • 00:02:59
    Because I have actually both a degree in computer science as
  • 00:03:01
    well as a degree in economics.
  • 00:03:03
    And I went through the classic, okay, pros and cons on the
  • 00:03:08
    spreadsheet kind of process.
  • 00:03:11
    And had decided I was going to take the job in economics.
  • 00:03:14
    I mean, I thought, you know, what could be better?
  • 00:03:15
    Somebody is going to pay me to read the Wall Street Journal.
  • 00:03:18
    You know, why wouldn't that be what I did?
  • 00:03:22
    And when I got on the phone talking to the recruiter, I said I'm going
  • 00:03:26
    to take the computer science job.
  • 00:03:27
    And it's like, wait a minute.
  • 00:03:30
    But what I realized is in the back of my head, I'd been playing with, okay, yes,
  • 00:03:36
    I'm going to take the economics job.
  • 00:03:37
    I'm going to take the computer science job.
  • 00:03:40
    And I realized that even though all of the clinical analysis said I should do
  • 00:03:45
    the economics job, what I really wanted to do was the computer science job.
  • 00:03:49
    And what I learned from that is, sure, go ahead and do the analysis, but that
  • 00:03:54
    process of, okay, I'm going to sit with myself for half a day, as if I've
  • 00:04:00
    taken the computer science job, and I'm going to sit with myself for half a
  • 00:04:04
    day as if I've taken the economics job.
  • 00:04:06
    And your gut is going to tell you what's the right thing to do.
  • 00:04:10
    And it's good to listen to your gut.
  • 00:04:13
    And, you know, I've often wondered what would have happened if I
  • 00:04:16
    had taken the economics job.
  • 00:04:18
    I probably would have ended up in law school as a lawyer or something like
  • 00:04:22
    that, as opposed to a computer scientist.
  • 00:04:25
    But, um, I'd say that was the first one.
  • 00:04:28
    And then the second one, I would say, I didn't listen to my gut.
  • 00:04:35
    And this was, I completed my postdoc at Los Alamos and I was choosing between
  • 00:04:42
    staying at Los Alamos as a researcher or going to the university as a professor.
  • 00:04:48
    And when I first started my PhD full time, I told myself I'm never going
  • 00:04:55
    to be an assistant professor of computer science and I'm never going
  • 00:04:58
    to live in the state of Florida.
  • 00:05:00
    And I ended up at the University of Central Florida as an assistant
  • 00:05:04
    professor of computer science.
  • 00:05:06
    And part of the reason I did that, academia, at least in the U.S.
  • 00:05:11
    at that time, if you didn't pretty quickly go into academia, they
  • 00:05:15
    just didn't take you seriously.
  • 00:05:17
    And it was one of those things, okay, if I think I might want
  • 00:05:20
    to do this, I need to do it now.
  • 00:05:23
    But I was right.
  • 00:05:25
    Maybe it's because the amount of time I spent in industry.
  • 00:05:28
    I don't know.
  • 00:05:29
    But what I learned is that to have a success criteria for yourself
  • 00:05:37
    that isn't aligned with the success criteria of your organization is
  • 00:05:41
    a recipe to be very, very unhappy.
  • 00:05:44
    Because either they think you're successful and you're
  • 00:05:47
    personally miserable, or you feel like you're successful and
  • 00:05:51
    they think you're a failure.
  • 00:05:52
    And neither one of those is a good place to be.
  • 00:05:55
    And then I would say that the third came after I'd been CTO for
  • 00:06:00
    a little while, and I was having a conversation with our president and CEO.
  • 00:06:05
    And I'd had an experience.
  • 00:06:08
    I had been on a CTO panel for the Grace Hopper Celebration of Women in Computing.
  • 00:06:13
    And I was the only woman CTO on the panel, which I thought was
  • 00:06:18
    kind of ironic for the Grace Hopper Celebration of Women in Computing,
  • 00:06:22
    but I understood the rationale for it.
  • 00:06:24
    But what I learned later is we had lunch and they had students at the
  • 00:06:31
    tables with the different CTOs.
  • 00:06:33
    And all of the other CTOs ended up just talking about jobs at their company.
  • 00:06:36
    I was talking with the women and students at my table about their
  • 00:06:41
    careers and their aspirations and such.
  • 00:06:43
    And one of them said to me, I think I need to leave graduate school or at
  • 00:06:47
    least change my advisor, because my advisor told me that I was taking the
  • 00:06:53
    spot of a man and I needed to go home and make babies like I was supposed to.
  • 00:06:57
    And I thought, you know, this is the 21st century.
  • 00:07:00
    The fact that anywhere on the planet, some professor would think it was alright to
  • 00:07:06
    say something like that just appalled me.
  • 00:07:09
    And what I decided during that conversation with Trevor, the CEO, was that
  • 00:07:16
    even though I'm actually an introvert,
  • 00:07:20
    I don't really like getting up on stages.
  • 00:07:23
    But it was important for me to be up on those stages, to
  • 00:07:27
    be talking as a technologist.
  • 00:07:30
    Not just talking about diversity, but talking about agile and enterprise
  • 00:07:34
    architecture and evolutionary architecture and domain specific languages and all
  • 00:07:39
    of these different technical topics.
  • 00:07:41
    Because women needed to see someone who looks like me up on a stage
  • 00:07:46
    talking about things like that.
  • 00:07:49
    And that was a real turning point for me.
  • 00:07:52
    Prior to that, I'd been on a couple of stages.
  • 00:07:55
    But it wasn't an important part of what I did.
  • 00:07:59
    And it became a very important part of the job I did for ThoughtWorks.
  • 00:08:04
    Wow.
  • 00:08:05
    Thanks for sharing.
  • 00:08:05
    First of all, I think the stories really are, you know,
  • 00:08:08
    strong and beautiful, right?
  • 00:08:09
    So the first is about listening to your gut, right?
  • 00:08:11
    So I could imagine back then you were kind of like torn between
  • 00:08:14
    the two or, you know, two majors, computer science and economics.
  • 00:08:18
    And then after that is a lesson about not listening to your guts.
  • 00:08:21
    I think sometimes we all did that as well, especially in our career, right?
  • 00:08:25
    We wanted to leave, but we couldn't for whatever reasons, right, and
  • 00:08:29
    ended up being miserable in the job.
  • 00:08:31
    And the third one is about making a stand, right?
  • 00:08:33
    Being there as an inspiration for some women out there.
  • 00:08:36
    So thanks for sharing the story.
  • 00:08:38
    So Dr.
  • 00:08:38
    Rebecca, you are well known about evolutionary architecture.
  • 00:08:41
    In fact, you have written this book Building Evolutionary Architecture,
  • 00:08:44
    which is in the second edition now.
  • 00:08:46
    So maybe first of all, right, tell us a little bit more, why do we
  • 00:08:49
    need evolutionary architecture?
  • 00:08:51
    Because I think when we talk about architecture, typically people talk about
  • 00:08:54
    something that is difficult to change.
  • 00:08:56
    And why do we need an evolution for our architecture?
  • 00:09:00
    Well, when I started, back decades ago, you probably didn't need an
  • 00:09:05
    evolutionary architecture, because technology was not moving that quickly,
  • 00:09:09
    expectations were not moving that quickly.
  • 00:09:12
    But what we see now is that business model lifetimes are shorter.
  • 00:09:19
    Customer expectations for businesses, consumer expectations for businesses
  • 00:09:25
    are being driven not by what's happening in the financial services industry,
  • 00:09:30
    but what's happening in TikTok.
  • 00:09:32
    And you don't have nearly as much control over what it is you
  • 00:09:39
    must build to remain competitive.
  • 00:09:42
    And when you have that level of change, having a system that you can't change to
  • 00:09:49
    reflect what it is that your customers are demanding you give them is not going to
  • 00:09:53
    allow you to succeed as an organization.
  • 00:09:56
    And so, with all of the change that is happening around, to say, but, oh,
  • 00:10:01
    the architecture will never change.
  • 00:10:03
    Well, you know, that's ridiculous.
  • 00:10:06
    And we can't really predict where that change is going to come from.
  • 00:10:09
    You can, in hindsight, look at, for example, virtual machines and
  • 00:10:14
    containerization and Docker and, you know, and see that as a natural progression.
  • 00:10:20
    But the impact on how people think about their technology estate was incredibly
  • 00:10:26
    impacted by Docker and its ilk, in a way really that virtual machines
  • 00:10:33
    didn't have that same level of impact.
  • 00:10:35
    And so even if you weren't going to use Docker right away, you have to be
  • 00:10:41
    thinking about it from an architectural perspective of how is this something
  • 00:10:45
    that I can take advantage of.
  • 00:10:47
    And so it became a necessity, really, not because anybody wanted it to be,
  • 00:10:54
    but because you didn't have a choice.
  • 00:10:56
    You have to be able to change your systems to keep up with changing business
  • 00:11:00
    expectations and consumer expectations.
  • 00:11:03
    Let alone regulatory frameworks and things of that nature.
  • 00:11:07
    Yeah, and also during the pandemic back then, right, so situational impact, right?
  • 00:11:12
    Suddenly everyone has to scramble and find a solution.
  • 00:11:15
    So I think the other aspect about architecture that I have seen,
  • 00:11:18
    typically in the startups, is that instead of doing evolutionary changes,
  • 00:11:22
    they make revolutionary changes.
  • 00:11:24
    You know, things like rewrites, breaking monolith into microservice.
  • 00:11:27
    How about this kind of case?
  • 00:11:29
    Do you see it as also something that is doable or more advisable to do
  • 00:11:34
    for, you know, industries or companies that are changing very, very rapidly?
  • 00:11:40
    Well, one of the goals of having an evolutionary architecture is
  • 00:11:44
    simplifying whatever change it is that you ultimately have to make.
  • 00:11:49
    And if you've got something that's small enough and self contained enough
  • 00:11:53
    that you can just throw it away and rewrite it, that's probably easiest.
  • 00:11:57
    And you don't have to worry about being evolutionary if
  • 00:12:00
    it's only going to run once.
  • 00:12:01
    You know, I saw a talk by a scientist from the Jet Propulsion Laboratory in the U.S.
  • 00:12:07
    And the purpose of the talk was to talk about how they took advantage of
  • 00:12:12
    cloud to do this particular analysis.
  • 00:12:15
    But the point I took away from it is the fact that this data download was
  • 00:12:19
    only ever going to happen once, and you would never have to run it again.
  • 00:12:24
    No, you don't worry about evolving something that will only run once.
  • 00:12:29
    You know, you get it to the point that you need it to let it run
  • 00:12:34
    once and then you throw it away.
  • 00:12:36
    But the problem is when you look at most of the enterprises out there that have
  • 00:12:43
    been around for any length of time, they don't have pieces of their architecture
  • 00:12:48
    that they can just throw away.
  • 00:12:50
    They probably have five or six generations of technology and languages and
  • 00:12:56
    frameworks and all of that kind of stuff.
  • 00:12:59
    And so, you have to start by getting yourself in a position where maybe you
  • 00:13:06
    do have things that you can throw away.
  • 00:13:09
    But for a lot of enterprises, that isn't the case.
  • 00:13:13
    Sure, if you're a startup, if you're a mom and pop, if you've been running
  • 00:13:16
    on an Access database or an Excel spreadsheet, or, you know,
  • 00:13:21
    whatever, maybe you can do that.
  • 00:13:24
    Maybe you haven't customized yourself into a corner or something like that.
  • 00:13:28
    But for a lot of the kinds of clients that we dealt with at ThoughtWorks,
  • 00:13:34
    that just wasn't the reality.
  • 00:13:36
    They didn't have pieces of their architecture they could just throw away.
  • 00:13:41
    So maybe in your definition, what do you define as an architecture?
  • 00:13:45
    Because when I talk to, you know, when I learn about architecture in so many
  • 00:13:48
    different resources and books, right?
  • 00:13:50
    The first thing that we often hear about is about, you know, architecture is
  • 00:13:53
    stuff that is hard to change or you make until the last responsible moment, right?
  • 00:13:58
    Or the other thing, like Neal Ford always say, architecture
  • 00:14:01
    is about trade offs, right?
  • 00:14:02
    It's there's always something that you trade off.
  • 00:14:04
    So maybe in your definition, before we actually go into the
  • 00:14:06
    evolutionary aspect, so what is architecture in your definition?
  • 00:14:11
    I work very hard never to precisely define it.
  • 00:14:16
    Part of the problem is I'm an academic, and I can't call it a definition
  • 00:14:21
    unless it very clearly includes things and very clearly excludes things.
  • 00:14:27
    But one aspect of architecture that I think is important is it depends on
  • 00:14:34
    what is important to that organization.
  • 00:14:38
    You want your architecture to support the things that matter.
  • 00:14:44
    And this is where Neal starts to get into the trade off discussion, because
  • 00:14:48
    you might have two things that are important to you, but you have to decide
  • 00:14:52
    which one is going to take precedence.
  • 00:14:54
    Well, there are also things that are not important, like that Jet
  • 00:14:57
    Propulsion Laboratory program.
  • 00:15:00
    It had absolutely no need to be maintainable or recoverable
  • 00:15:07
    because they had one data set they were going to run once.
  • 00:15:10
    And then they're...
  • 00:15:11
    And so the characteristics that lead to the success or failure
  • 00:15:17
    of your system, those are the architectural characteristics.
  • 00:15:21
    It might be network, security, data, performance, operability, resilience.
  • 00:15:28
    I mean, there are all kinds of different characteristics.
  • 00:15:32
    When Neal and I give this talk, we've got a screenshot.
  • 00:15:35
    We took, actually, when the first edition came out.
  • 00:15:38
    And so it's quite old, it doesn't, for example, have observability on it.
  • 00:15:43
    But there are, you know, dozens of different ilities.
  • 00:15:48
    And for each one of those ilities, there's a system that it matters for.
  • 00:15:53
    But not every system needs to worry about all of them.
  • 00:15:56
    And in fact, you can't because many of them are mutually inconsistent.
  • 00:16:00
    And so what we, what I talk about in terms of architecture is what are the things
  • 00:16:06
    that are of importance in your industry for this particular system and even for
  • 00:16:13
    your particular organization, because organizations have particular challenges.
  • 00:16:18
    You know, Retailer A might not have this, the same perspective on
  • 00:16:23
    everything that Retailer B does.
  • 00:16:25
    So they're in the same industry, they might be in the same geography,
  • 00:16:27
    but that doesn't mean that they worry about the same things.
  • 00:16:31
    Right, definitely makes sense, right?
  • 00:16:33
    So something that is most important to you, right?
  • 00:16:36
    You define that as like the so called attributes of your architecture,
  • 00:16:39
    and from there, we kind of like evolve as and when there's a change,
  • 00:16:43
    as and when there's a need, right?
  • 00:16:45
    So one thing in particular that about evolutionary architecture, even though
  • 00:16:48
    this resource, the book, you know, and the theory has been around for quite some
  • 00:16:52
    time, I actually rarely listen people talking about evolutionary architecture
  • 00:16:55
    in the industry for whatever reasons.
  • 00:16:58
    Maybe it's because I'm not exposed in those kind of conversations.
  • 00:17:02
    But in your view, what is the state of adoption in the industry, actually?
  • 00:17:06
    I would say that it is not widely adopted in the way, say, microservices are.
  • 00:17:12
    Because even if you don't have a microservice, people are talking
  • 00:17:15
    about microservices, pretty broadly.
  • 00:17:19
    But I think if you step one level down, some of the ideas that we talk about,
  • 00:17:26
    particularly around fitness functions and how you can use that to assess the
  • 00:17:30
    state of your architecture, I think those ideas are getting more traction.
  • 00:17:36
    People are looking at what am I really trying to achieve here and
  • 00:17:40
    how can I make this concrete enough that I can actually test for it?
  • 00:17:46
    My favorite example again for that is maintainable.
  • 00:17:50
    What does that mean, you know?
  • 00:17:52
    You and I could disagree on how maintainable a
  • 00:17:56
    particular piece of code is.
  • 00:17:57
    We couldn't disagree on what the cyclomatic complexity was, whether
  • 00:18:01
    or not it followed a particular coding standard or a naming standard.
  • 00:18:07
    Did it respect architectural layering rules?
  • 00:18:10
    Those things we can measure, but maintainable is in
  • 00:18:14
    the eye of the beholder.
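The measurable stand-ins she lists (cyclomatic complexity, coding and naming standards, layering rules) are exactly the kind of thing a static fitness function can check in the build. A minimal Python sketch using only the standard `ast` module; the branch-counting heuristic and the threshold of 10 are illustrative assumptions, not definitions from the conversation.

```python
import ast

# Node types that open a new branch; each adds one to the
# straight-line baseline of 1 (a rough McCabe-style approximation).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                ast.With, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity of a piece of Python source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

def fitness_check(source: str, threshold: int = 10) -> bool:
    """Static fitness function: pass iff complexity stays within budget."""
    return cyclomatic_complexity(source) <= threshold

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(3):
        if x > 10:
            return "big"
    return "small"
"""
print(cyclomatic_complexity(sample))  # → 4 (base 1 + if + for + if)
```

Unlike "maintainable", two people cannot disagree about the number this check produces, which is what makes it usable as a build-breaking guide.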
  • 00:18:16
    So I do think we're making progress in getting people to think more specifically
  • 00:18:22
    about what they're trying to achieve and that the mechanism of fitness functions
  • 00:18:28
    gives them the ability to say, A, here is what I'm trying to achieve, this
  • 00:18:35
    is my target architecture, which is, you know, the theoretical composition
  • 00:18:39
    of all of your fitness functions.
  • 00:18:41
    And this is how I'm doing.
  • 00:18:43
    Maybe something in your system load has changed or maybe a new technology is
  • 00:18:48
    starting to be used and all of a sudden you will need to re-evaluate some of the
  • 00:18:52
    architectural choices that you've made, because the situation is now different.
  • 00:18:56
    And so I think at that level, we are starting to make more progress.
  • 00:19:01
    You do hear more people talking about fitness functions.
  • 00:19:04
    But I would agree with you.
  • 00:19:05
    It's not, it isn't widely adopted.
  • 00:19:07
    And I think that that also goes back to, you know, it's one thing to
  • 00:19:12
    start from a greenfield and build a completely evolutionary architecture.
  • 00:19:16
    It's another thing to take a big ball of mud and turn it
  • 00:19:21
    into something that's evolvable.
  • 00:19:23
    You've got a long journey ahead of you, if you've got lots of balls of mud.
  • 00:19:28
    I saw this great graphic on LinkedIn yesterday.
  • 00:19:31
    There's nothing wrong with building lasagna, a nice layered monolith.
  • 00:19:39
    Well structured.
  • 00:19:41
    You don't want to build a pile of spaghetti.
  • 00:19:44
    But you don't necessarily have to start with raviolis either, which I guess is
  • 00:19:50
    the metaphor for microservices and that.
  • 00:19:52
    But I thought it was a great visual, because, you know, it really gets
  • 00:19:55
    down to the fact that there is nothing inherently wrong with a monolith.
  • 00:20:01
    Yes, there are some things that you can do with a microservice
  • 00:20:03
    that you can't do with a monolith.
  • 00:20:06
    But then there are problems that you can't have in a monolith that you have to deal
  • 00:20:13
    with in a microservices architecture.
  • 00:20:15
    And so, I think too much of a focus on let's be as nimble as possible,
  • 00:20:20
    you can end up with a much more complex system than you need.
  • 00:20:24
    And that's why when we talk about this, we always say you need to
  • 00:20:27
    start by deciding which of these things are most important to you.
  • 00:20:32
    And if evolvability isn't one of your important ilities, don't worry about it.
  • 00:20:38
    Yeah, makes sense.
  • 00:20:39
    It comes back to what we discussed earlier, right?
  • 00:20:41
    It's about what's important to you and pick the right architecture
  • 00:20:44
    based on your context, right?
  • 00:20:45
    My suspicion is also regarding the tools, right?
  • 00:20:48
    Because I don't see many tools focused on solving these kind of things.
  • 00:20:52
    But maybe in the future, we might see, start seeing these kinds of tools.
  • 00:20:56
    So maybe let's go to the definition first.
  • 00:20:59
    Like I liked your definition, evolutionary architecture in the book,
  • 00:21:01
    which kind of like defines the three biggest things from that definition.
  • 00:21:05
    Fitness function is one.
  • 00:21:06
    So maybe if you can help us define first so that the listeners here
  • 00:21:09
    can also understand the big picture of evolutionary architecture.
  • 00:21:14
    Okay, so the three things.
  • 00:21:16
    Evolutionary architecture supports guided, incremental change
  • 00:21:21
    across multiple dimensions.
  • 00:21:24
    And fitness functions come in with the guided, because they are your guide.
  • 00:21:30
    They are your assessment of how close have you gotten to what it
  • 00:21:35
    is that you are trying to achieve.
  • 00:21:37
    Incremental, you know, we want to be able to make these changes incrementally
  • 00:21:44
    as a risk reduction strategy.
  • 00:21:46
    And then across multiple dimensions as we need to think about all of those different
  • 00:21:50
    ilities and we need to think about all of the different characteristics and how they
  • 00:21:55
    interact with each other in determining whether or not we're being successful.
  • 00:22:01
    So just to recap, like, there's a guided thing that happens in
  • 00:22:04
    the evolutionary architecture.
  • 00:22:05
    So this is like the fitness function that kind of like guides you towards
  • 00:22:08
    a certain fitness, I suppose.
  • 00:22:10
    And then incremental change, if you do, if you don't do incremental change,
  • 00:22:14
    I think there's very little reason for you to evolve your architecture.
  • 00:22:17
    Simply just like what you mentioned, the example, right, the jet propulsion thing.
  • 00:22:20
    And the last thing is architecture involves multiple dimensions, right?
  • 00:22:24
    So pick the most important dimensions for you, and use the fitness function and the
  • 00:22:28
    incremental change that you do to actually kind of like evolve your architecture.
  • 00:22:32
    So maybe let's go to the fitness function first.
  • 00:22:35
    I think this is taken from the theory of evolutionary computing.
  • 00:22:40
    And the analogy also is something like a unit test in, you know,
  • 00:22:43
    automated test world, right?
  • 00:22:45
    So tell us how can we implement this fitness function in, you
  • 00:22:48
    know, day to day project or kind of like service that we build?
  • 00:22:52
    Well, first, some fitness functions act like unit tests, but some of
  • 00:22:57
    them are much more system wide.
  • 00:22:59
    We come up with basically two different axes to define fitness functions.
  • 00:23:03
    Static versus dynamic.
  • 00:23:05
    Static, obviously, it's something that is some kind of analysis.
  • 00:23:09
    Maybe you run it in your build, something like cyclomatic complexity, as an example.
  • 00:23:14
    Dynamic is something that happens at runtime.
  • 00:23:17
    So maybe you have some kind of monitor for CPU utilization.
  • 00:23:23
    Well, that's a dynamic fitness function.
  • 00:23:24
    You don't want your CPU utilization to get above X.
  • 00:23:28
    Or maybe it's some kind of tracer, transaction tracer through a
  • 00:23:33
    microservices architecture.
  • 00:23:34
    Those are dynamic fitness functions.
  • 00:23:37
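For illustration (a sketch, not something from the conversation): a static fitness function along these lines could compute a simplified cyclomatic complexity from the Python AST during the build and fail when any function crosses a threshold. The decision-point list and the threshold here are assumptions:

```python
import ast

# Simplified cyclomatic complexity: 1 + the number of decision points.
# (A rough approximation; real tools such as radon count more constructs.)
_DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler,
              ast.BoolOp, ast.IfExp)

def check_complexity(source: str, threshold: int = 10) -> list[str]:
    """Return the names of functions whose complexity exceeds `threshold`.

    Run as a test in the build, a non-empty result would fail the pipeline."""
    offenders = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            score = 1 + sum(isinstance(n, _DECISIONS) for n in ast.walk(node))
            if score > threshold:
                offenders.append(node.name)
    return offenders
```

A dynamic fitness function, by contrast, would live in monitoring rather than in the build, alerting when the running system drifts past its threshold.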
    And then there are fitness functions that test just one specific thing, and then
  • 00:23:44
    there are more holistic fitness functions.
  • 00:23:47
    And if you go to the extreme, the Simian Army is a dynamic
  • 00:23:53
    holistic fitness function, or a collection of them, actually.
  • 00:23:57
    And those happen at runtime.
  • 00:23:59
    They often look at system wide characteristics and they're looking at
  • 00:24:05
    multiple aspects of the architecture and how the running system actually behaves
  • 00:24:10
    under particular kinds of stresses.
  • 00:24:13
    And so fitness functions should be thought of as a unifying term for a lot of the
  • 00:24:19
    different kinds of tests that we've been putting systems through for a long time.
  • 00:24:25
    And the advantage of having that unifying terminology is that you can
  • 00:24:29
    now talk about security requirements and operability requirements and performance
  • 00:24:37
    requirements as the same thing.
  • 00:24:40
    How far are we away from what we said our objective was?
  • 00:24:44
    What is it going to take to get there?
  • 00:24:45
    As opposed to all security requirements are created equal and must all be
  • 00:24:50
    implemented before you can go live.
  • 00:24:52
    Generally, that's not the case.
  • 00:24:55
    And so the thing about fitness functions is you want to have it be automated
  • 00:25:02
    if possible, but some of them you don't actually want to automate.
  • 00:25:05
    But the single most important characteristic is that you
  • 00:25:09
    and I will never disagree on whether it passes or not.
  • 00:25:13
    It has to be defined in that way.
  • 00:25:15
    And that process that you have to go through to go from maintainable to a
  • 00:25:21
    suite of defined functions that represent your definition of maintainable.
  • 00:25:28
    That's a difficult exercise, but it's a quite valuable exercise.
  • 00:25:33
    Yeah, so speaking about quality attributes or the ilities, right?
  • 00:25:36
    Definitely, it's kind of like vague whenever we discuss about it.
  • 00:25:39
    I think you brought a point about maintainability.
  • 00:25:41
    What do you mean by maintainability, right?
  • 00:25:43
    Everyone has their own definition.
  • 00:25:45
    And sometimes people focus on certain aspects like code, maybe
  • 00:25:48
    maintainability of the service itself or maintainability of infrastructure, right?
  • 00:25:51
    There are so many different kinds of maintainability.
  • 00:25:53
    So I think the first exercise is to come up with a kind of like a baseline, right?
  • 00:25:57
    Everyone needs to understand the same thing.
  • 00:25:58
    And as much as possible, we should be able to quantify maybe some kind of metrics.
  • 00:26:02
    Doesn't have to be automated.
  • 00:26:04
    But at least, people have the same understanding, right?
  • 00:26:07
    Maybe in your experience, having done this for quite some time, do you think
  • 00:26:10
    there are some fitness functions that we, all software engineering team,
  • 00:26:13
    right, must have within our system?
  • 00:26:16
    I know that it's hard, because we mentioned that the importance of certain
  • 00:26:20
    characteristics is different for each of us, right?
  • 00:26:21
    But maybe there are some basic ones that you think are the most
  • 00:26:24
    important for everyone to adopt.
  • 00:26:26
    Well, assuming you decide that you want your system to be evolvable,
  • 00:26:31
    to me one of the most universal requirements is you have to be
  • 00:26:37
    able to understand what the code is doing and what the system is doing.
  • 00:26:41
    Because you can't change something you don't understand.
  • 00:26:44
    And so the closest I've got to a universal, with the caveat, yes,
  • 00:26:50
    I know I'm a, you know, I'm one of those, you know, scientists.
  • 00:26:53
    I need to be precise, with the caveat you've decided evolvability is important.
  • 00:26:58
    You want to have guardrails in there for code quality.
  • 00:27:03
    Do you have good separation?
  • 00:27:05
    Do you have low cyclomatic complexity?
  • 00:27:09
    That's probably the closest to a universal that you're going to get.
  • 00:27:14
    But we've actually seen quite a bit of creativity where people are trying
  • 00:27:20
    to solve for a particular problem.
  • 00:27:23
    One of my favorite fitness functions that we heard about is we had a client
  • 00:27:28
    and their legal department had come to the delivery team, and said, now,
  • 00:27:33
    we're using all these open source frameworks and they're going to notify
  • 00:27:36
    us if they change their license, right?
  • 00:27:39
    And, you know, the team just laughed, as you are right now.
  • 00:27:43
    Because, of course, they're not going to notify everybody.
  • 00:27:45
    They'd have no idea who's using this.
  • 00:27:47
    And the lawyer got all worried about that because, oh, well, we could
  • 00:27:51
    inadvertently be using something that has a license that I haven't approved.
  • 00:27:55
    And so they started to, you know, come up with all of these natural language
  • 00:27:58
    processing ideas for, you know.
  • 00:28:01
    And then somebody came up with this very simple, elegant solution.
  • 00:28:06
    Hash all of the license files.
  • 00:28:09
    Put in a unit test that hashes the current license file and compares
  • 00:28:15
    the hash to what's in the test.
  • 00:28:17
    And if it's different, you send an email to the lawyer with a link to the new file.
  • 00:28:23
    And then the lawyer can check.
  • 00:28:25
    You know, it's brilliant in its simplicity.
  • 00:28:29
    And then as soon as the lawyer signs off, they do the rehash, they put it back into
  • 00:28:34
    the build and now they go merrily along.
  • 00:28:37
    And so the system is notifying the lawyer when the license file changes.
  • 00:28:41
    And it was a brilliant solution; you didn't need to do anything fancy to ask,
  • 00:28:46
    okay, did the semantics of this license change, or how's the team going to maintain that?
  • 00:28:51
    All they need to know is that it's changed.
  • 00:28:54
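A rough sketch of that license fitness function (the file name and the return-value convention are assumptions; the real team sent an email with a link to the changed file):

```python
import hashlib
from pathlib import Path

def license_digest(path: Path) -> str:
    """SHA-256 of the current license file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def license_unchanged(path: Path, approved_digest: str) -> bool:
    """Fitness function: passes only while the license text matches the
    digest the lawyer last signed off on. On failure, the build would
    notify the lawyer; once approved, the recorded digest is updated."""
    return license_digest(path) == approved_digest
```

The point the speakers make holds in the code: no semantic analysis of the license is needed, only a cheap "has it changed?" signal.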
    So I think as people get more used to using fitness functions, we'll start
  • 00:28:59
    to see more ideas on, well, here's some fitness functions around X.
  • 00:29:07
    Here's some fitness functions around Y.
  • 00:29:08
    In much the same way that many people use variants of the Simian Army, we're
  • 00:29:13
    going to have similar kinds of things, I think, happening across architectural
  • 00:29:18
    patterns, things of that nature.
  • 00:29:20
    Yeah, I hope to see a lot more patterns or recipe book, you know, out there that
  • 00:29:24
    people use in terms of fitness functions.
  • 00:29:26
    So as far as to get inspiration or maybe the tools also that we can just, you know,
  • 00:29:30
    use in an open source fashion, right?
  • 00:29:33
    So I think another thing that is very fundamental in the evolutionary
  • 00:29:36
    architecture implementation, right, there are two aspects, which is
  • 00:29:39
    kind of like the governance aspect.
  • 00:29:41
    And the other one is the engineering practice aspect.
  • 00:29:43
    Maybe if we can cover each, right, because I find those two are really
  • 00:29:46
    important, because they are kind of like a collection of multiple different
  • 00:29:50
    principles, paradigms, philosophies that I think we all need to get reminded of.
  • 00:29:55
    So maybe let's start with the governance aspect.
  • 00:29:58
    Well, I think, so often governance is this dirty word because you've got these
  • 00:30:04
    architects sitting on high looking down upon the minions doing all of the work,
  • 00:30:10
    and they're going to say, no, you can't.
  • 00:30:14
    Governance, when you get to any kind of scale, you have no choice.
  • 00:30:19
    You have to have some level of governance.
  • 00:30:22
    Now you, as an engineering leader, can decide what level.
  • 00:30:25
    I know of one CTO that was trying to break a pattern of excessive reuse.
  • 00:30:31
    And so he said, no two microservices can be written on the same
  • 00:30:37
    technology stack and language.
  • 00:30:39
    And so it makes it impossible to really reuse anything.
  • 00:30:43
    I thought that was a bit extreme, but he was trying to make an
  • 00:30:46
    organizational change and sometimes the pendulum has to swing like that.
  • 00:30:51
    So what are the principles of evolutionary architecture that you can share with us?
  • 00:30:55
    The value, though, of fitness functions for governance is enormous.
  • 00:31:00
    Because anything that's in a fitness function, particularly
  • 00:31:03
    an automated fitness function,
  • 00:31:04
    you never have to cover in any kind of architectural review.
  • 00:31:08
    No, you have no cyclic dependencies, because you've got a test in there
  • 00:31:13
    that will fail the build if anybody inadvertently adds a cyclic dependency.
  • 00:31:17
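The cyclic-dependency check mentioned here can be sketched as a small graph test. The dependency graph is assumed to be extracted elsewhere (say, by scanning imports), and the function names are illustrative:

```python
def find_cycle(deps):
    """Return one dependency cycle as a list of modules, or None.

    `deps` maps a module to the modules it depends on. Wired into the
    build as a test, any non-None result would fail the pipeline."""
    GRAY, BLACK = 1, 2
    color = {}            # None = unvisited, GRAY = in progress, BLACK = done
    stack = []            # current DFS path

    def visit(module):
        color[module] = GRAY
        stack.append(module)
        for dep in deps.get(module, ()):
            if color.get(dep) == GRAY:
                # Found a back edge: slice the path from the repeated node.
                return stack[stack.index(dep):] + [dep]
            if color.get(dep) is None:
                found = visit(dep)
                if found:
                    return found
        color[module] = BLACK
        stack.pop()
        return None

    for module in deps:
        if color.get(module) is None:
            found = visit(module)
            if found:
                return found
    return None
```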
    And so all of those concerns go away from a governance perspective, and you
  • 00:31:22
    can focus your governance discussions then, on those places where you've
  • 00:31:28
    got two things and you've got to make a trade off and you don't really know
  • 00:31:31
    how to make that work, and so you can put the brainpower of the humans in the
  • 00:31:36
    places where you need that creativity and you can leave the rote stuff.
  • 00:31:41
    One of them, I think it's the Doctor
  • 00:31:43
    Monkey in the Simian Army, checks to make sure all RESTful
  • 00:31:46
    endpoints are properly configured.
  • 00:31:49
    So you never have to worry about it, because it'll get tossed out
  • 00:31:53
    if it's not properly configured.
  • 00:31:54
    So the shift of the governance that we're talking about from the
  • 00:32:00
    perspective of evolutionary architecture really comes down to focusing on the
  • 00:32:06
    outcomes, not the implementations.
  • 00:32:08
    What are the outcomes we are trying to achieve?
  • 00:32:10
    What are the behaviors we want the system to exhibit?
  • 00:32:14
    Not how you're going to get there.
  • 00:32:16
    And that allows the delivery teams to work within the sandbox that the
  • 00:32:22
    governance organization has put into place, but then be creative about
  • 00:32:29
    how they might actually implement something to achieve that behavior.
  • 00:32:34
    And you can also then have a basis of a conversation that says, I know
  • 00:32:40
    our standard tool for this is X.
  • 00:32:44
    But we're trying to achieve the outcome.
  • 00:32:48
    And in our situation, because of P, Q, and R, Y works better to achieve the outcome.
  • 00:32:56
    And you can have those discussions.
  • 00:32:57
    And so the discussions become less about, no, you can't use that because I told you,
  • 00:33:03
    you had to use something else, but this is how we're going to go about achieving
  • 00:33:08
    the outcome, and this is why we think this is a better way to achieve the outcome.
  • 00:33:13
    And so that's one of those kind of underlying philosophies is let's be
  • 00:33:19
    focused on outcomes, not implementations.
  • 00:33:22
    Another aspect that is important here is it has to do with how
  • 00:33:27
    you architect your system.
  • 00:33:30
    And domain driven design has really helped us here, because it's given
  • 00:33:34
    us this language and this idea of a bounded context that makes
  • 00:33:40
    sense within the business domain.
  • 00:33:44
    Because if you think about a system in terms of its implementation,
  • 00:33:49
    you're going to talk about SAP or you're going to talk about Salesforce
  • 00:33:52
    or you're going to talk about, you know, the customer ordering system.
  • 00:33:57
    The people who are redesigning business processes and creating new
  • 00:34:01
    business processes, they don't care if something is stored in Salesforce,
  • 00:34:06
    a CRM, or a shipping system.
  • 00:34:09
    They think about the customer, they think about the product, and they think
  • 00:34:14
    about, you know, the logistics flow.
  • 00:34:18
    The more our systems have their boundaries drawn around
  • 00:34:26
    aspects of functionality that correspond to what the people who are creating the
  • 00:34:31
    business process have as their chunks, the better.
  • 00:34:35
    They're going to design the business process by rearranging their chunks.
  • 00:34:39
    We can much more readily implement that process if our chunks have
  • 00:34:46
    the same ability to move around.
  • 00:34:48
    And I actually think that's where microservices have been successful,
  • 00:34:53
    where SOA version one failed so miserably, is that the boundaries so
  • 00:34:59
    often, in that early implementation of SOA, were drawn around systems.
  • 00:35:04
    Okay, we are going to create exposed services for SAP.
  • 00:35:10
    Why?
  • 00:35:12
    Um, and so, of all of the principles, and you know, there are several others,
  • 00:35:18
    I think those are the most important ones underlying evolutionary architecture.
  • 00:35:24
    Right.
  • 00:35:25
    I think it's really insightful, right?
  • 00:35:26
    First, it's like focus on the outcome, not the actual how or implementation, right?
  • 00:35:31
    And I think DDD is kind of like the fundamentals of, you know,
  • 00:35:34
    coming up with software that is aligned with the business, right?
  • 00:35:38
    And the other thing that is commonly also mentioned when we talk about
  • 00:35:41
    bounded context, you know, microservice and all that is about Conway's law.
  • 00:35:45
    Oh, yes.
  • 00:35:46
    You have Conway's law and Postel's law as part of this
  • 00:35:49
    evolutionary architecture as well.
  • 00:35:50
    Maybe explain to us why these two laws are important in evolutionary architecture?
  • 00:35:55
    Well, we'll start with Postel's Law.
  • 00:35:58
    Um, simply put, Postel's Law says be generous in what you receive
  • 00:36:05
    and stingy in what you produce.
  • 00:36:08
    And so the standard example I use, if you are receiving address information
  • 00:36:16
    and all you need is the zip code, postcode, some kind of geolocator,
  • 00:36:21
    don't validate the whole address.
  • 00:36:23
    You don't need to.
  • 00:36:25
    And that way, if somebody decides, oh, I need to add in that address
  • 00:36:28
    line 2 to this thing, you won't break.
  • 00:36:32
    Now, of course, you've got the great big asterisk, don't open up a security hole.
  • 00:36:36
    But the point is, focus on the information that you really need.
  • 00:36:40
    Because that way, your system will only require change, if
  • 00:36:46
    it actually has to change.
  • 00:36:48
    There's no way that we can prevent any breaking change from ever happening, but
  • 00:36:53
    we want to limit it to where it really has to break, because we are trying to
  • 00:36:58
    do something fundamentally different.
  • 00:37:01
    But you want to be very stingy in what you expose, because, again,
  • 00:37:05
    you have no idea who's actually using what you put out there.
  • 00:37:10
    One of the sadder examples I saw of this, we had one client who did everything
  • 00:37:15
    right in their package selection.
  • 00:37:17
    They said, we're going to change all of our business processes
  • 00:37:21
    to do what the package wants.
  • 00:37:23
    And that way we don't have all of these customizations.
  • 00:37:26
    And so we can upgrade whenever we get a new version.
  • 00:37:31
    Wonderful idea.
  • 00:37:33
    But what they didn't do was keep track of who was actually
  • 00:37:37
    connecting directly to the database.
  • 00:37:39
    And they had 87 reports that were completely tied to the database schema.
  • 00:37:45
    And so they had to rewrite all of those reports before they could upgrade.
  • 00:37:50
    And they hadn't realized that.
  • 00:37:53
    Because people just said, oh, well, you know, I'll just hit
  • 00:37:56
    the database for that report.
  • 00:37:58
    And so even when you don't intend for somebody to use it, people still
  • 00:38:04
    are going to use it if they can.
  • 00:38:06
    And you've made a contract, even though you're in a contract
  • 00:38:11
    that you don't know you're in.
  • 00:38:12
    And so that's Postel's Law.
  • 00:38:14
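A minimal sketch of that tolerant-reader idea, with an assumed payload shape: read only the postcode you need, ignore everything else, and validate only what you depend on:

```python
def extract_postcode(payload: dict) -> str:
    """Tolerant reader (Postel's Law): pull out only the field we need.

    Extra or reshaped fields (a new 'address_line_2', say) are simply
    ignored, so the producer can evolve its address format without
    breaking us. Validation is limited to what we actually depend on."""
    postcode = payload.get("postcode")
    if not isinstance(postcode, str) or not postcode.strip():
        raise ValueError("address payload is missing a usable postcode")
    return postcode.strip()
```

The asterisk from the conversation still applies: be tolerant of shape, not of unsafe content.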
    Conway's Law.
  • 00:38:16
    People try to fight Conway's Law.
  • 00:38:20
    And you just can't do it.
  • 00:38:22
    Conway's Law, my version is a system will reflect the communication dysfunction
  • 00:38:28
    of the organization that builds it.
  • 00:38:30
    If the people don't talk effectively to each other, the systems that
  • 00:38:35
    they're responsible for are not going to talk to each other.
  • 00:38:39
    And, you know, I would sometimes look very clever, because I would come in
  • 00:38:42
    and have lunch with the architects, and then go in and talk to the
  • 00:38:45
    VP and say, okay, the integration between these two systems is broken.
  • 00:38:51
    How in the world do you know that?
  • 00:38:52
    You haven't looked at a piece of code.
  • 00:38:55
    I said, yeah, but I, I saw the tech leads in the lunchroom and they
  • 00:38:59
    walked past each other like this.
  • 00:39:01
    Yeah.
  • 00:39:02
    And so you can use Conway's Law to your advantage: look at what
  • 00:39:08
    you really want your architecture to reflect, then reorganize your
  • 00:39:13
    teams accordingly, and they're going to produce it.
  • 00:39:15
    It's just going to happen.
  • 00:39:16
    We call it the inverse Conway maneuver.
  • 00:39:20
    Yeah.
  • 00:39:21
    So I think Conway's Law is always kind of like brought up in so
  • 00:39:25
    many various discussions, right?
  • 00:39:26
    I think for listeners who are not yet familiar with these two laws, right?
  • 00:39:29
    Make sure you research more because they are fundamentals,
  • 00:39:33
    even though they are like around for so many, many years, right?
  • 00:39:36
    People try to kind of like beat them, but eventually they can't.
  • 00:39:40
    So those are kind of like the governance and principles aspect.
  • 00:39:43
    What are some of the engineering practices that you think software engineering
  • 00:39:46
    teams have to adopt and practice?
  • 00:39:49
    Well, first off, I think an underlying prerequisite is the discipline,
  • 00:39:59
    the infrastructural discipline and the deployment discipline that
  • 00:40:02
    comes from continuous delivery.
  • 00:40:04
    You don't have to go all the way to continuous deployment, although there's
  • 00:40:08
    a new book out that makes a very strong case for why you should try to get there.
  • 00:40:14
    But you at least need to know that your deployments are going to run smoothly.
  • 00:40:20
    And so the risk mitigation aspects of continuous delivery are important.
  • 00:40:25
    When you're talking about the kinds of dramatic changes, you need to
  • 00:40:30
    know what you're deploying into and so that you can more readily debug
  • 00:40:37
    anything that that's happening.
  • 00:40:39
    So I think that's the first.
  • 00:40:41
    The second is this whole idea of evolutionary database design
  • 00:40:47
    and database refactoring.
  • 00:40:50
    I've been in many conversations over the years with, you know, people
  • 00:40:55
    who would say, okay, well, agile and incremental, that's fine for developers.
  • 00:41:00
    But I need to have a holistic vision of my complete user experience, or
  • 00:41:05
    no, I can't test the system until it's done, because you're going to be
  • 00:41:09
    changing it and then I'm going to have to retest it and all of those things.
  • 00:41:13
    The team that I think has always had the strongest argument for, no,
  • 00:41:18
    it can't be incremental, were the DBAs, because data migration is hard.
  • 00:41:24
    It sounds so simple.
  • 00:41:26
    Copy it from here to here.
  • 00:41:29
    No.
  • 00:41:30
    But it's, it's hard.
  • 00:41:31
    And so there's an entire book called Refactoring Databases.
  • 00:41:35
    One of our co-authors in the second edition, Pramod Sadalage,
  • 00:41:39
    is one of the authors of that.
  • 00:41:41
    So that, that is another critical engineering practice.
  • 00:41:46
    I also like to talk about contract testing, because, again, one of the
  • 00:41:52
    things that you're trying to do with an evolutionary architecture is to make
  • 00:41:55
    it as easy to change things as possible.
  • 00:41:58
    And so if I understand the assumptions that you're making of my system and
  • 00:42:04
    you understand the assumptions that I am making of yours, you know,
  • 00:42:08
    so we both know what's happening.
  • 00:42:11
    And then we've got the same with Neal.
  • 00:42:13
    And then we can make whatever changes that we want, paying absolutely no attention to
  • 00:42:20
    each other until one of those tests break.
  • 00:42:24
    And let's say my test with Neal breaks.
  • 00:42:27
    So I have a conversation with Neal, because I'm trying to implement
  • 00:42:31
    something that violates something that he's expecting of me, and so we
  • 00:42:35
    negotiate what change has to happen.
  • 00:42:37
    We're continuing to ignore you.
  • 00:42:40
    Because none of your tests are broken, and as long as your tests don't
  • 00:42:43
    break, we can continue to ignore you.
  • 00:42:45
    And then we get all the tests working again, and then we go
  • 00:42:47
    back to ignoring everybody.
  • 00:42:50
    It maximizes the amount of independent work that can take
  • 00:42:54
    place, and it helps us understand what those boundaries are and why.
  • 00:42:59
    And that is a critical piece to being able to evolve an architecture.
  • 00:43:04
    Because if I don't know what you're expecting of me, I can inadvertently
  • 00:43:09
    break you, and we don't want that.
  • 00:43:12
    And so, that's another important technique.
  • 00:43:16
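A minimal sketch of the contract-testing idea, not a real framework such as Pact: the consumer records the fields and types it assumes, and the provider runs that expectation as a test in its build. All names here are illustrative:

```python
# A consumer records only the assumptions it makes about a provider's
# response; the provider runs this expectation as a test in its build.
CONSUMER_CONTRACT = {
    "order_id": str,
    "status": str,
    "total_cents": int,
}

def contract_violations(response: dict, contract: dict) -> list[str]:
    """Return a list of violations; an empty list means the contract holds.

    Extra fields in the response are fine (Postel's Law again): only the
    fields the consumer depends on are checked, so the provider can change
    anything else without triggering a conversation."""
    problems = []
    for field, expected_type in contract.items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            problems.append(f"wrong type for field: {field}")
    return problems
```

When this test breaks, that is exactly the signal described above: the two teams talk, renegotiate the contract, and then go back to ignoring each other.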
    And then you start to get into things that aren't necessarily as fundamental.
  • 00:43:22
    I believe those things are fundamental.
  • 00:43:25
    You have to have the right kind of test and safety net.
  • 00:43:28
    One of the things that we found is if you think properly about testing,
  • 00:43:32
    you're actually going to end up with a cleaner architecture, because
  • 00:43:35
    you have to have good boundaries to be able to properly test things.
  • 00:43:40
    But then we often, for example, talk about choreography over orchestration.
  • 00:43:46
    And this is where you really start to get into these trade off discussions, much
  • 00:43:50
    like should I go with a well structured monolith or should I go to microservices?
  • 00:43:56
    You have much more flexibility with microservices than you do
  • 00:44:00
    with a well structured monolith.
  • 00:44:02
    Emphasis on the well structured.
  • 00:44:05
    This is not spaghetti monoliths.
  • 00:44:07
    This is nice, nice structured lasagna monoliths.
  • 00:44:12
    And if you don't need that level of flexibility, it's not
  • 00:44:16
    worth paying for the complexity.
  • 00:44:19
    But sometimes you do.
  • 00:44:20
    And it's the same with choreography versus orchestration.
  • 00:44:24
    If you've got an orchestrator, that orchestrator is going to solve
  • 00:44:27
    some of those problems that you have of these independent actors.
  • 00:44:31
    But you're introducing coupling that is not strictly necessary.
  • 00:44:38
    But there are all kinds of errors that you have to take care of
  • 00:44:41
    yourself in a choreographed system.
  • 00:44:44
    And so, again, if you need that flexibility, take it.
  • 00:44:48
    But if you don't need the flexibility, then go with something that's simpler.
  • 00:44:54
    Right.
  • 00:44:55
    Thanks for highlighting all these important practices.
  • 00:44:57
    I think continuous delivery is something that is a must, right?
  • 00:45:00
    Especially when you want to do incremental change, because without
  • 00:45:02
    continuous delivery, there's just no way for you to do incremental change.
  • 00:45:06
    And the other one is like the contract testing, especially in the microservice
  • 00:45:10
    world, right, where you integrate with so many different third parties and services.
  • 00:45:14
    You know, knowing where you break or when you break, I
  • 00:45:16
    think is very important as well.
  • 00:45:18
    You talk about evolutionary database design.
  • 00:45:20
    I don't know whether this has been a common thing, but I think many, many
  • 00:45:24
    languages and frameworks kind of like cover this aspect of evolution, at least
  • 00:45:27
    RDBMS database migration.
  • 00:45:30
    And the last one is about choreography.
  • 00:45:32
    I think it touches a little bit about event driven architecture where
  • 00:45:35
    you kind of like have choreography rather than orchestration.
  • 00:45:39
    So thanks for sharing all this.
  • 00:45:41
    So definitely one aspect that is kind of like trendy these days is about
  • 00:45:44
    introduction of AI, LLM, generative AI, and all those stuff, right?
  • 00:45:48
    So what is your take about the invasion of AI and with evolutionary architecture?
  • 00:45:52
    Because one good aspect about AI is that they can create tests these days.
  • 00:45:58
    People also use a lot of suggestions from AI assistant to actually generate code.
  • 00:46:03
    Is this something that evolutionary architecture needs
  • 00:46:05
    to kind of like govern as well?
  • 00:46:07
    What's your take about AI?
  • 00:46:09
    Well, that's one of the things that, that Neal and I've been
  • 00:46:12
    talking about for a while.
  • 00:46:14
    We can use fitness functions at least to, you know, particularly
  • 00:46:20
    the suite of code quality fitness functions that are brought up.
  • 00:46:25
    We can use that to assess the generated code.
  • 00:46:30
    You know, and there's still anecdotal evidence, I wouldn't call it solid
  • 00:46:36
    evidence yet, that these code generators, they will tend to copy, paste, and
  • 00:46:44
    modify as opposed to trying to abstract.
  • 00:46:48
    And so, running a simple copy paste detector can help you see
  • 00:46:52
    if your code base is starting to get out of control in that way.
  • 00:46:58
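A naive copy-paste detector along the lines mentioned here (hashing fixed-size windows of normalized lines; real detectors such as PMD's CPD work on token streams and catch renamed variables too):

```python
import hashlib

def duplicate_blocks(sources: dict[str, str], window: int = 6):
    """Report stretches of `window` normalized lines that appear in more
    than one place across the given {filename: source} mapping.

    This sketch only catches verbatim duplication, which is still enough
    to flag a code base drifting toward copy-paste-modify."""
    seen: dict[str, list[tuple[str, int]]] = {}
    for name, text in sources.items():
        lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
        for i in range(len(lines) - window + 1):
            chunk = "\n".join(lines[i:i + window])
            digest = hashlib.sha1(chunk.encode()).hexdigest()
            seen.setdefault(digest, []).append((name, i))
    return [locations for locations in seen.values() if len(locations) > 1]
```

Run periodically as a fitness function, a growing list of duplicates is the early-warning signal discussed here.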
    I've been interested and involved in AI for most of my career.
  • 00:47:04
    And so I've seen, you know, the AI winters.
  • 00:47:08
    There's certainly a lot of hype going on right now.
  • 00:47:11
    But these models are qualitatively more powerful than any models
  • 00:47:18
    that we've had in the past.
  • 00:47:20
    And so I do think that we have the potential to use these LLM based
  • 00:47:28
    systems, particularly the more coding focused ones, to help us in development.
  • 00:47:34
    One of the things that ThoughtWorks has been experimenting with, as an
  • 00:47:38
    example, is using these LLMs on a legacy codebase to help understand
  • 00:47:46
    how the information actually flows through that legacy codebase.
  • 00:47:50
    And to help use that information to start to refactor and ultimately
  • 00:47:57
    replace a legacy code base.
  • 00:47:59
    It's still early days, but what we're seeing is really a fundamental
  • 00:48:05
    increase in the ability of a human to understand a code base.
  • 00:48:12
    And it's because the human is relying on information, and the LLM in the
  • 00:48:17
    background is doing a lot of the hard work.
  • 00:48:19
    So I think we're going to see more of that.
  • 00:48:21
    As I said earlier, you cannot evolve a system that you can't understand.
  • 00:48:25
    And that's one of the problems with many of these old legacy systems is people just
  • 00:48:30
    don't understand how they work anymore.
  • 00:48:32
    And so the more we can build tools to help understand these legacy systems,
  • 00:48:38
    that puts us in a much better position to actually be able to modify those systems.
  • 00:48:44
    Right.
  • 00:48:45
    You mentioned something that is, uh, I think quite insightful for me, right?
  • 00:48:48
    So you mentioned that code generation is typically kind of
  • 00:48:51
    like copy, paste, and modify, evolving a little bit here and there, right?
  • 00:48:54
    But it's pretty rare to see AI that can suggest some abstractions
  • 00:48:59
    or, you know, kind of like a domain driven kind of a suggestion as well.
  • 00:49:03
    Is this something that every developer has to be concerned with or kind of like the
  • 00:49:07
    gotcha that we all need to be aware of?
  • 00:49:09
    Because most of the time now, everyone integrates their Copilot,
  • 00:49:13
    and suddenly, you know, you see a lot of code being suggested and
  • 00:49:16
    they just accept, accept, accept.
  • 00:49:18
    I think there's also a lot of studies saying that code churn is
  • 00:49:21
    really high these days and a lot more code is being generated.
  • 00:49:25
    Maybe in terms of lines of code, right, it grows really fast, simply because,
  • 00:49:28
    yeah, we just accept rather than think through, you know, what kind
  • 00:49:31
    of solution that AI is suggesting.
  • 00:49:33
    So what is your view about these, especially in terms of fitness function
  • 00:49:37
    or architectural aspect that maybe one day is going to be like another
  • 00:49:41
    big ball of mud generated by AI, which is kind of like even worse.
  • 00:49:46
    Yeah, that is one of the things I worry about is that we basically
  • 00:49:51
    increased the productive capacity of our industry to create that code.
  • 00:49:57
    And that doesn't help anybody.
  • 00:49:59
    Um, as I said, I do think we can use fitness functions to at least monitor
  • 00:50:05
    what's happening with the code base.
  • 00:50:07
    One of the things that I worry about though is, in many ways, our industry
  • 00:50:14
    is kind of an apprentice model, where you have junior developers who
  • 00:50:19
    are learning from more experienced developers, and it goes on.
  • 00:50:24
    Unless these coding assistants get much better pretty quickly, I would worry
  • 00:50:31
    about in 20 years time, where are our star developers going to have come from?
  • 00:50:37
    The notion that somebody is going to learn how to code from a coding
  • 00:50:41
    assistant, we're not there yet.
  • 00:50:44
    They're too likely to put out things that are wrong.
  • 00:50:47
    There was one study, we actually did a podcast on this, that was done by
  • 00:50:52
    the CodeScene people. They went across a suite of models, and when the coding
  • 00:50:57
    assistant they were working with recommended a refactoring, in the best
  • 00:51:03
    case, it was right 37% of the time.
  • 00:51:07
    So in over 60% of cases, the refactoring that they suggested did not maintain
  • 00:51:15
    the correct behavior of the code.
  • 00:51:17
    If you as a developer got things wrong two thirds of the time, you're not
  • 00:51:21
    going to keep your job for very long.
  • 00:51:24
    As a professor, if two thirds of the stuff that I said was wrong, I am
  • 00:51:29
    not doing a service to my students.
  • 00:51:31
    They're not going to be able to learn if they have to figure out which two thirds
  • 00:51:36
    of the stuff that I've said is nonsense.
  • 00:51:38
    So that's what worries me, is how are we going to train the
  • 00:51:45
    next generation if we're relying so much on coding assistants?
  • 00:51:51
    So yeah, thanks for highlighting this apprenticeship model.
  • 00:51:53
    I kind of like agree with you, right?
  • 00:51:54
    Because in some of my experience using coding assistants, sometimes, yeah, it gave
  • 00:51:58
    the wrong answer in a confident way, you know.
  • 00:52:00
    Like this is it, you know, this is the solution.
  • 00:52:02
    And then when you give it a try, actually, it's kind of like wrong.
  • 00:52:04
    You try again, it's still wrong.
  • 00:52:06
    Until maybe certain times that, okay, you finally got it right.
  • 00:52:09
    So I think maybe people talk about, you know, replacing juniors, or juniors
  • 00:52:13
    being able to upskill themselves really, really fast just by using an AI assistant.
  • 00:52:17
    I think there's a worry there about this apprenticeship aspect
  • 00:52:19
    that you mentioned, right?
  • 00:52:20
    How can someone be trained maybe in terms of abstraction, domain-driven design,
  • 00:52:24
    or even evolutionary architecture, you know, the multiple dimension aspect.
  • 00:52:28
    So I think AI currently is not capable of doing that.
  • 00:52:30
    So thanks again for highlighting that.
  • 00:52:32
    So Dr. Rebecca, it's been such a pleasure.
  • 00:52:35
    I learned a lot about evolutionary architecture and
  • 00:52:36
    all the fundamentals about it.
  • 00:52:38
    So, unfortunately, we reached the end of our conversation.
  • 00:52:41
    I have one last question that I'd like to ask you.
  • 00:52:43
    I call this the three technical leadership wisdom.
  • 00:52:46
    You can think of it just like an advice that you want to give to the listeners.
  • 00:52:49
    Maybe you can share your version of wisdom for us to learn from.
  • 00:52:53
    Okay, well, the first one is, I firmly believe as technologists, it's our
  • 00:52:59
    responsibility to communicate to the rest of the organization in their
  • 00:53:05
    language, the potential consequences of the decisions that they are making.
  • 00:53:11
    We're the ones that know the tech, but we have to do it in their language
  • 00:53:15
    so that they can understand the business risks and/or the
  • 00:53:22
    business opportunities for that matter.
  • 00:53:24
    And so the first thing is we need to understand how our organization makes
  • 00:53:29
    money, what they are doing, what are the pressures on that organization.
  • 00:53:34
    And that's our responsibility.
  • 00:53:37
    The second I would say that as the technology landscape has become so broad,
  • 00:53:44
    questions of generalist versus specialist have taken on a different meaning.
  • 00:53:50
    It used to be, as I said when I started, one person could
  • 00:53:54
    understand the entire stack.
  • 00:53:58
    You can't do that to any level of specificity anymore.
  • 00:54:03
    JavaScript frameworks and other frontend frameworks and, you know, non relational
  • 00:54:08
    databases and these different kinds of network architectures and dot, dot,
  • 00:54:13
    dot, dot, dot, it just keeps going on.
  • 00:54:15
    And so a crucial decision that an individual needs to make is what kind
  • 00:54:21
    of technologist do they want to be?
  • 00:54:23
    Do they want to be somewhat of a generalist?
  • 00:54:26
    Do they want to think more big picture from a technology perspective
  • 00:54:30
    or do they want to become a true specialist in something?
  • 00:54:34
    And that's something to decide relatively early in your career.
  • 00:54:38
    And then the final thing is, with how rapidly our industry is changing,
  • 00:54:44
    you have to think of learning as fun.
  • 00:54:47
    I was one of those silly people who loved school.
  • 00:54:50
    So summer school was perfect.
  • 00:54:52
    We have summer and we have school at the same time.
  • 00:54:54
    Isn't this great?
  • 00:54:55
    And everybody thought I was mad.
  • 00:54:58
    But we have to embrace that because new languages are coming out, new
  • 00:55:03
    frameworks are coming out, new architectural approaches are coming out.
  • 00:55:08
    And we need to be able to, and to enjoy, continuing to learn new things.
  • 00:55:16
    Because you don't want to be that person who is hanging on at the tail
  • 00:55:20
    end of their career because they're the only person left on the planet who
  • 00:55:24
    understands this programming language.
  • 00:55:26
    You know, you don't want to be that person.
  • 00:55:28
    You want to be someone who has continued to evolve your career.
  • 00:55:31
    And to do that, thinking of learning as fun and not a chore is crucial.
  • 00:55:38
    Well, I wasn't expecting the last one, you know.
  • 00:55:40
    Treating learning as something fun, right?
  • 00:55:42
    I'm sure many people would find it insightful as well, right?
  • 00:55:46
    So if we don't take learning as something that we enjoy doing, I think
  • 00:55:49
    it's kind of like a chore, right?
  • 00:55:50
    Especially with all these rapid changes that are happening.
  • 00:55:53
    I think it's going to be difficult to keep up if we don't treat it as a fun thing.
  • 00:55:56
    So I think that's really beautiful.
  • 00:55:58
    So Dr. Rebecca, if people want to talk to you or maybe reach out to ask you
  • 00:56:01
    more questions, is there a place where they can find you online?
  • 00:56:05
    I'm on LinkedIn.
  • 00:56:06
    And my readable handle is Dr. Rebecca Parsons.
  • 00:56:12
    Right.
  • 00:56:12
    I'll put it in the show notes.
  • 00:56:13
    So thank you so much for your time, Dr. Rebecca.
  • 00:56:15
    I really enjoyed this conversation.
  • 00:56:16
    Thank you, Henry.
  • 00:56:17
    I had fun.
Tags
  • Evolutionary Architecture
  • Fitness Functions
  • Conway's Law
  • Governance
  • Continuous Delivery
  • AI in Development
  • Engineering Practices
  • Microservices
  • Software Quality
  • Legacy Systems