Tim Schraepen's Blog
Yet another developer blog.

The Lead Developer Conference - Day 1

The Lead Developer what now?

The Lead Developer conference, or at least the one I went to, was held on the 23rd and 24th of June 2016, in the Queen Elizabeth II Centre in London, during the Brexit vote.

It's a conference mainly about "that other part" of the job as a programmer, namely working with people.

I can highly recommend this conference to anyone. It’s not just for developers; scrum masters, managers, and other roles can all learn from the abundance of information shared at this conference.

All the slides can be found on the conference's website, but I link to the specific slides (and my notes) in the short bits I write about the talks below.

Note: not all slides are online yet, but they're being added as the speakers take the time to share them.

A word of thanks

Thank you to my faithful travel companions and colleagues Jan Sabbe and Bart De Neuter.

Thanks also go out to our employer, Cegeka, who paid for travel, accommodation and the entrance fee.

Thanks to the organizers, White October Events, who did an amazing job at making everyone feel at home and for being super friendly people in general.

And thanks to Meri Williams, with her witty announcements, for being the perfect host and an inspiring example to us all.

Scribblings of a madman

Based on my notes, I'm sharing the most important things I learned in every talk I watched, so you can decide for yourself whether it's interesting.

I'm also turning this into 2 blog posts, because Day 1 alone already turned out to be a lot.

Day 1 

Patrick Kua - What I wish I knew as a first time tech lead 

Using his fantastic art in his slide-deck, Patrick Kua (@patkua) gave everybody some pointers on the path to a wise lead developer.

He mentioned some recognisable things such as: trust your team, delegate to it, provide guidance, don’t write code all the time, don’t make all the technical decisions.

But the coolest thing I learned was that certain skills you have as a leader can be represented by archetypes or personas. Patrick identified the Coach, Shepherd, Shaman, and Champion.

The Coach

The Coach (e.g. in a soccer team) personifies the part of you that tries to keep the team together and motivated.

The Shepherd

The Shepherd is the part of you that guides individuals back to the team, or guides the team to a shared goal.

The Shaman

The Shaman is your storytelling skill. I thought it was so cool that this is considered important enough to warrant its own persona. I'd never thought of it that way, but it sure is refreshing.

The Champion

And finally, The Champion is the part of you that leads by example and lifts your team to a higher level.

We all carry these personas with us; certain situations will call them out at times.

Patrick closed off by saying that our role as Lead gives us greater impact. And instead of becoming the 10x Dev, we can grow the individuals in our team so that the team becomes the 10x Dev.

This keynote really set the tone for the talks that we’d see over the course of the next 2 days.

Slides / Notes

Heidi Waterhouse - The 7 righteous fights 

The 7 righteous fights… you should be fighting.

Heidi’s (@wiredferret) talk was mainly about making sure your application adheres to Localization, Security, Extensibility, Documentation, Affordance, Acceptance and Accessibility. If you don’t take these into account, you’ll build up compound technical debt.

I learned that Affordance is about nudging your users so they use your application correctly.

About Security she had a lovely quote:

Internet is not a series of tubes connected to hackers wearing hoodies.

That got a good laugh. :)

About technical debt in general she said that if it's already in a release, you'll get more resistance when trying to fix it. She made a fitting analogy: trying to fix stuff in a codebase after its release is like trying to pound chocolate chips into an already baked cookie.

Slides / Notes

Mike Gehard - Moving from Monolith to Microservices 

Mike (@mikegehard) went through his presentation so fast that it's going to take me at least a second viewing to really understand everything he was trying to say.

Mike's approach to moving from a monolith to microservices is to deliberately start with the monolith and gradually move towards microservices, instead of trying to guess up front which microservices to distill from your application and doing trial and error from there.

Why Monolith first? Because it’s safer to learn about your domain and it’ll cost less to change Bounded Contexts.

Basically, what he was saying was: use the benefits that immersive design provides.

  1. Write API level tests
  2. Arrange/Organize your application so you can see your domain (and Bounded Contexts)
  3. Break out components
  4. Promote one of your components to a microservice (here you’ll also build the minimum required infrastructure and test it out)
  5. Continue promoting microservices

As you might notice from the lingo, Mike is a big fan of Eric Evans’ Domain Driven Design (he even made the same jokes that go along with it). He also didn’t fail to mention Simon Brown’s component structuring. This made me feel right at home. :)
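To make step 1 of that list a bit more concrete, here's a minimal sketch of what an API-level test could look like. It's my own illustration, not from Mike's slides, and it assumes a hypothetical /orders endpoint plus vitest as the test runner; the point is that the test only talks to the application from the outside, so the internal component boundaries can shift freely underneath it.

  // API-level test sketch (hypothetical endpoint, vitest assumed as test runner)
  import { test, expect } from "vitest";

  const BASE_URL = process.env.ORDER_API_URL ?? "http://localhost:8080";

  test("placing an order returns a trackable order id", async () => {
    // talk to the application only through its public API
    const response = await fetch(`${BASE_URL}/orders`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ productId: "book-42", quantity: 1 }),
    });

    expect(response.status).toBe(201);
    const order = await response.json();
    // the contract stays the same whether /orders is served by the monolith
    // or, later, by a component promoted to its own microservice
    expect(order.id).toBeDefined();
  });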

Slides / Notes

Lorna Mitchell - Wonderful world of webhooks 

Lorna (@lornajane) gave a lightning talk about why WebHooks are cool.

I remember that webhooks are less chatty, and that their payload is usually large (coarse-grained) because you want to push the information that's useful for 80% of the use cases, keeping things as un-chatty as possible.
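As a rough illustration of that coarse-grained idea (my own example, not Lorna's), compare pushing just an id, which forces every consumer to call back for details, with pushing a payload that already carries the fields most consumers need:

  // Hypothetical webhook payload and receiver, sketched with Express.
  import express from "express";

  // Instead of only { issueId: 123 } (chatty: every consumer has to call
  // back for the details), the event carries what the 80% use case needs.
  type IssueClosedEvent = {
    event: "issue.closed";
    issue: {
      id: number;
      title: string;
      closedBy: string;
      closedAt: string; // ISO-8601 timestamp
      labels: string[];
    };
  };

  const app = express();
  app.use(express.json());

  app.post("/webhooks/issues", (req, res) => {
    const payload = req.body as IssueClosedEvent;
    console.log(`"${payload.issue.title}" closed by ${payload.issue.closedBy}`);
    res.sendStatus(204); // acknowledge quickly, do the real work asynchronously
  });

  app.listen(3000);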

Slides / Notes

Dan North - How to make a sandwich 

Dan (@tastapod) North, the man, the Legend, gave a really great talk about Feedback.

He first talked about Feedback in the context of Systems Theory, which he does a way better job at explaining than I ever will, so watch the video.

The most important thing I picked up from this part is that there are different motivations for offering feedback. And when you notice you'd be offering feedback in order to control the person you're giving it to, you should just walk away.

In the next part we learned how to deliver feedback using SBI: Situation, Behaviour, Impact.

We also saw different ways of structuring feedback, e.g. the infamous sandwich model.

For receiving feedback there is only one rule: always say thank you. Even if the feedback you got made you angry, because it shows you're appreciative of getting feedback. Try to map the feedback you got to SBI yourself (how did the other person arrive at giving you this feedback?).

Great talk to start off the lunch. :)

Slides / Notes

Duretti Hirpa - How to get engineering team to eat their vegetables 

To stay with the food analogies after a very nice lunch, Duretti's (@Duretti) talk about how teams operate had some really great one-liners.

My favorites: "(Acceptance of) vulnerability leads to a learning culture", "Coordination IS competitive advantage", and "Productivity is a measure of comfort".

She laid out the qualities a good team shows:

  • A good team wants you to win
  • Has a sense of togetherness
  • Has a place for vulnerability
  • Is psychologically safe
  • Has empathy

Nice talk with a whole lot of truth bombs. Worth watching!

Slides / Notes

Katie Fenn - Writing Modular Stylesheets with CSS Modules 

Katie Fenn (@katie_fenn) took the stage and talked about how modularizing CSS makes sense: along with making components in JavaScript, you also want the same for your CSS.

CSS Modules is the way to achieve this fairly easily. I think it makes the most sense when you work with JSX components like you do when you use ReactJS.

Other than the various JS loaders, you can also use it with both Sass and Less.
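A minimal sketch of what that looks like in practice (my own example, assuming a React + TypeScript setup and a bundler with CSS Modules enabled, e.g. webpack's css-loader): the class names you write stay local to the component.

  /* Button.module.css */
  .primary {
    background: rebeccapurple;
    color: white;
  }

  // Button.tsx
  import React from "react";
  import styles from "./Button.module.css";

  export function Button({ label }: { label: string }) {
    // styles.primary resolves to a generated, locally scoped class name,
    // so this .primary can't clash with another .primary elsewhere in the app
    return <button className={styles.primary}>{label}</button>;
  }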

Slides / Notes

Yasmina Banaszczuk - I know just the right person 

After that short technical talk, in came Yasmina (@lasersushi), ready to blow us all away with refreshing German directness and a delicate yet powerful tone of voice.

Her talk was all about how our (personal) networks can influence hiring and retention in the tech industry.

After laying out studies and papers about how we really hire “People like ourselves”, we delved deeper into the motivation behind it.

She gave a hilarious example of how the name Kevin has (had?) a bad connotation in Germany, which even led to a word: Kevinismus. The entire floor laughed out loud when she visibly felt embarrassed about all the Kevins in the audience, and the laughter only grew bolder after she exclaimed "There's actual research in that!".

At the end of her talk we got a nice summary:

Check your processes, your networks and yourself.

Are they varied? Or do they all have the same educational background, spend their nights gaming, or speak and dress the same way?

Then your network might be too homogeneous.

This might turn your networks into gatekeepers instead of welcoming, open gates.

A very refreshing and interesting talk, from a very smart speaker who is great fun to listen to. Definitely watch her talk.

If you’re interested in more of her research, definitely check out her website. She’s currently working on a series of articles on The Habitus of Tech.

Slides / Notes

Joel Chippindale - Telling stories through your commits 

Joel Chippindale (@joelchippindale) was preaching to the choir (at least in my case) about how the key challenge as a lead dev is managing complexity, and how naming, code design, refactoring and automated tests are all about communicating the intent of our software.

He then went on to say that version control systems (like Git, Mercurial, SVN, …) are underused in this regard.

To have every line of code always documented, he presented us with the 3 principles he adheres to.

  1. Make atomic commits; what's the smallest useful change you can make to your codebase?

  2. Write good commit messages (on his website he links to Chris Beams' How to Write a Git Commit Message; an illustrative example follows after this list)

  3. Revise your development history before sharing. With git rebase.
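For illustration, here's roughly what a commit message following those guidelines might look like: a short imperative subject line of about 50 characters, a blank line, then a body that explains what and why rather than how. This is my own made-up example, not one from the talk.

  Add retry to payment gateway client

  Transient network errors currently surface to users as failed payments.
  Retry idempotent calls up to three times with exponential backoff so a
  single dropped connection no longer aborts the checkout flow.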

The biggest motivator for a team to adopt these principles is that all of them make commits simpler, and also easier to reason about.

Absolutely loved this talk. It amazed me how much information he was able to cram into a relatively short talk.

Check out his website dedicated to this concept.

Slides / Notes

Sam Lambert - The argument for simplicity 

Sam Lambert (@isamlambert) taught us a bit about how they work at GitHub.

They deploy with Hubot in Slack, so they can easily trace back what happened before and after a deploy in the surrounding conversation. Neat!

If there’s one thing he wanted us to take along it was to get a therapist. People laughed awkwardly, but he was dead serious. It was great to see this taboo being taken down.

At the end, instead of mentioning his own internet location, he shared some people he wants us to look up, so here are the ones I managed to note down: @ihavenotea, @jessfraz, @eanakashima.

Slides / Notes

Nickolas Means - How to crash an airplane 

Jesus Christ, how do I even begin summarizing this talk.

Nickolas (@nmeans, glorious beardy man, airplane aficionado, …) left the audience mesmerized and awestruck after channeling his inner Shaman, telling us the story of how a horrible plane crash could've turned out even worse if it weren't for Cockpit Resource Management, and drawing the analogy to what makes up a good team.

If there’s one presentation you should watch of this conference, it’s this one.

A selection of his quotes:

Cooperation over Heroics

Everyone has a voice.

As leaders we need to keep our teams from developing dominant voices, especially our own. Use your authority to ensure every voice is heard.

Software is a team sport.

Slides / Notes

Day 2 

Continue reading Day 2!

Value Stream Mapping Retrospective

TL;DR

Use Lean principles’ Value Stream Mapping if you want to:

  • identify and optimize your process
  • boost team morale by showing you’re making something valuable
  • boost collaboration between analysts and developers
  • take a break from the drudgery that is Mad, Sad, Glad.

Check my lessons learned to make sure you don’t make the same mistakes.

What is Value Stream Mapping

Value Stream Mapping is a collaborative exercise originating from Lean principles that visualizes the process of delivering value to your customers. As a result of the visualisation, you'll be able to analyze the flow of materials and information, identify chokepoints and waste, etc.

Setting the Stage

About 2 to 3 minutes

Make sure you mention Norm Kerth's Prime Directive, especially if this is your first retro, but also as a refresher.

Aside from that, pick a Setting the Stage activity! The internet is a vast ocean of possibilities.

Since we have the tradition of bringing a snack when it’s your turn to facilitate the retrospective, I brought clementines. I told my team to first draw a face on this fruit, expressing how they felt at the time. Being silly is very much allowed.

Gathering Data

30 minutes

I then started off by explaining Value Stream Mapping and told them to divide into groups of 4, with each group having at least 1 analyst. I asked them to create a flow of a story or a bug, not both. All teams ended up using the perspective of a story in our process.

Every group had an ample amount of sticky notes, sharpies, and flipchart paper. I also gave them a pdf printout of the Value Stream Mapping explanation they could refer to when something wasn’t clear.

I kept them up to date on how much time they had left every 10 minutes. At first I didn't plan to spend this much time on this phase, but they all seemed so into mapping out our process, and were having such fun, that I decided to let them continue for a little while longer.

Photos: the Value Stream Maps of Group 1, Group 2, Group 3 and Group 4.

Generating Insights

5 minutes

While the groups were still working on their Value Stream Map, I went by each one and told them they had 5 minutes to answer 3 questions that I also wrote on a flipchart:

  • What did you learn?
  • What still puzzles you?
  • Where is the biggest time-sink/waste?

Sharing results

20 minutes

After those 5 minutes were over, I had every group share their Value Stream Map and their findings with the other groups.

Photos: Group 1, Group 2, Group 3 and Group 4 sharing their results.

There was some discussion as well, mostly about our process. Remarkably, every group had a different style of Value Stream Map.

Our allotted retrospective time was up, so we didn't get a chance to define any S.M.A.R.T. actions. We did agree to have a follow-up meeting with a couple of people who showed interest in tackling some bottlenecks.

Retro of the retro

I lined up sticky notes numbered 1 to 5 on the table closest to the exit of the room and laid out stacks of sticky notes and sharpies.

The idea was for everybody to put a sticky note in the column corresponding to their opinion, 1 meaning "crowd is booing", 5 meaning "crowd is cheering". If they had concrete suggestions or remarks, they could write them down on the post-it as well.

I liked this, because it lowers the threshold to give feedback. People that wanted to give remarks did, and people that didn’t still hung up a sticky anyway.

Photo: the retro of the retro.

I loved how engaged everybody was when creating the Value Stream Maps and I loved the interaction between developers and analysts. This also came out of the feedback I got.

Lessons Learned 

Something I should have done at the beginning of the Gathering Data phase is emphasize that they shouldn't go into too much detail when following the flow of a story, because that detail caused groups to need more time to finish the entire flow. Mentioning that they shouldn't use the current kanban board as a reference would have helped a lot too, I think, mostly because the board makes them start off at a level of detail that is unnecessary for a Value Stream Map. That way I could shorten the Gathering Data phase and leave some room for defining S.M.A.R.T. actions.

Even though I think it's a pretty powerful format, I also think it can only work well if your team is mature enough, so maybe keep in mind not to try this out on teams that are completely new to an Agile way of working. It might be better then to take more time and do a proper Value Stream Mapping session.

Event Storming for realz

After last time's Event Storming exercise I was eager to do it again. And as luck would have it, an opportunity arose soon after.

Friend and colleague Tom Toutenel asked me if I could facilitate an event storming session for one of our newly won clients. I was, of course, happy to oblige.

Context

Tom explained that his team wanted to pick the brains of our client so they would get a shared understanding of what they need to build, preferably in a "story map", and, if possible, define a first "story" to start their sprint in the week after the storming session.

Another noteworthy fact: the client had already put a considerable amount of effort into a thorough analysis of what needed to be done. Even though this effort is worthwhile, it prompted me to make sure the group wouldn't limit themselves too much to the knowledge already obtained.

Preparation is half the battle

As you might imagine, I wanted to prepare properly since I was dealing with an actual client. So I revisited my previous blog post, more specifically all of Alberto's answers, and re-checked the notes I had written down the first time I tried event storming.

Pointers I wrote down

I’ve been using a mini todo-board in my “journal” with mini post-its and the tasks I made for myself were the following:

  • Have Stickies, Sheets and Sharpies for every participant
  • Prepare room (I had no paper roll, so I put 15 sheets of flipchart paper in “landscape mode” up on the wall using painters tape)
  • Introduce yourself
  • Check the group's expectations
  • Give context: EventStorming = What?, Tackle specific problem
  • Put up Events; past tense, start in the middle, extend space if necessary
  • Is Domain Expert interested in Event x? (To solve issues with granularity)
  • Put up Commands + external systems (note to self: look for hotspots)
  • Event to Event is ok: “Person had a birthday –> Person reached pensionable age”
  • Take note of gestures (“moving”, “pointing”, “cutting”)
  • Make UI cards with screens if helpful
  • Ubiquitous Language!

Silence before the storm

As I was preparing the room with flipchart sheets and "superstickies" that would serve as the color-coding legend, I got a little nervous. I managed to get a little mental checklist loop going to gain some confidence:

  • Are they stuck in the same discussion?
  • How are we doing on time?
  • Is everyone still interested?

Introducing: the participants

In the blue corner we present to you our client, a big contender in Belgium's electricity market, bringing in a total of 3 persons.

I picked different names to preserve their anonymity.

  • Lommy, the functional analyst.
  • Leonard, the architect.
  • Lionel, the project lead, aka the product owner.

And in the red corner, hailing from camp Cegeka.

I did not make these up, these are their actual names. A trio of Toms. A TomTrio. A Tromio?

  • Tom T., the team lead.
  • Tom C., the scrum master/customer proxy.
  • Tom B., the developer.

All these people are going to work together in the hours ahead, to create a shared understanding of what problem they want to solve together.

No hiccups

After explaining who I was, and what my role was going to be in the next hour(s), I explained Event/Model Storming in short and what the rules of the game were. Everyone then proceeded to post Events on the sheets of paper. No buts, no arguments, everyone was in the same boat and understood the value of having a shared understanding. Yay, open-minded humans!

There were only a few instances in which Events were not in past tense, and those were quickly fixed once I pointed it out.

Lommy, the functional analyst, went back and forth between the stickies and his laptop, which told me he was posting events that were a little biased. Most of the time, though, other participants had already posted similar events, which confirmed a shared understanding and told me this bias was negligible. He IS the Domain Expert, after all.

Ubiquitous Language

Especially after they started posting External Systems, it became apparent that there were some issues with word choice. Naming External Systems and their actors went particularly well and was clear to everybody. However, once events moved into the problem domain, the wording started to get a little fuzzy. Lommy, Leonard and Lionel were all using different words to describe the same events, which caused a lot of confusion for our Tom Trio.

This is where I noticed an immediate benefit of Event Storming: stuff like this usually only surfaces once you actually get to building that bit of the solution.

Hotspots

As the story started to unfold before their eyes, some things were still unclear and literally ended up outside the timeline (off the paper). I added big exclamation marks on differently colored post-its to mark them clearly, and every so often we came back to these hotspots and discussed them some more. Furthermore, Lommy knew about these problems, but the rest of the team did not. Excellent learning happened right there, and you could feel the group getting excited about that fact.

One hotspot was about an "edge case" that appeared to be more important than expected and really couldn't be considered an edge case anymore. Marking it as a hotspot also made that clearer.

Talking about an external system also led to some confusion in the ubiquitous language, in that different terms were being thrown around in relation to the external system (and external actors). They came back to these terms often enough that it warranted a hotspot. I like to think that doing this kept the issue in the back of the group's minds.

Facilitating “aha-moments”

Aside from the hotspots building confidence in the session, there were some other moments that elevated the group to a higher level.

I noticed they were clustering around one particular part of the timeline, pointed that out, and asked "Is this the core of the problem domain?". I could see them reflect on that for a bit, and then they unanimously went "Yes! Cool! :D".

It really felt like the more positive experiences the group had, the better they got at talking to each other and working together at building their shared understanding. It was supercool as a facilitator to see them grow like this.

Adding Contexts

So, I don't know if this is a good thing to do, but I thought it was worth mentioning at least.

At some point, someone noticed that natural groupings were showing up in the timeline that had been drawn out. We decided to name them contexts and hang their names above their location in the timeline. It made some ideas more logical to reason about, and caused some events to be shifted into their context. They then also tried to map the contexts to the analysis that Lommy had prepared, but for Tom C. that was getting too fine-grained. Another thing this facilitated was more discussion on Ubiquitous Language, leading them to declare "This will help us to make the story map more clear, later".

Lastly, delineating the contexts also gave a good impression of how big each context was. This showed something interesting: one context contrasted a lot with another in sheer number of stickies. When I pointed that out, they explained that even though the other context was much smaller, it was actually more complex, and the reason it didn't have a lot of cards was simply that the right people (those with knowledge of that context) weren't there.

After that, the road to story mapping was set, and we added some "Shared Language" (the pink post-its in the photo below) and things that could become "Stories" (the purple post-its). They didn't go very far with this, because it could be done afterwards, and they felt they had already gained all the extra insight they could from this additional step.

Conclusion

After about an hour, every time the discussion seemed to die down or get stuck, I started asking whether they wanted to continue the exercise. They naturally dove back into the discussion when it was interesting enough to continue; other times they jumped to a different topic.

I consider this exercise a successful one, even though we didn't get an initial story out of it. In the end I got positive feedback that it was very useful for them. The three L's were glad they did this and confirmed that we were the better choice of partner to do this project with.

It's been a month since they successfully went into production, and I'm proud to have contributed, albeit in a very small way, to this feat. Well done guys!

Results

Photos: clustering around a hotspot, and the end result.

How to run a The Responsibility Process™ Retrospective

What is The Responsibility Process™?

Christopher Avery did a bunch of research and eventually came up with The Responsibility Process. Watch this YouTube video too.

In short, it explains that responsibility is not to be considered a trait, something you inherit, but rather something like a state machine: something that evolves through phases. This process, as laid out by Christopher, provides you with a framework that eventually allows you to become aware of which state/phase you're in, and to act on it so as to reach the goal of Responsibility.

Getting there means you will be able to answer the inner question "What do I really want?".

Through practice, I've found this process to be extremely powerful, even though its simplicity would make you think otherwise. If there's one resolution you should fulfill in 2015, it's this one: use The Responsibility Process and become a better, happier person.

What does it have to do with Retrospectives?

The Responsibility Process is most applicable on a personal level. But make no mistake, it's very powerful on a team level as well. Providing the answer to the team's question of "What do WE really want?" is generally a difficult endeavor. However, with the knowledge of The Responsibility Process in the back of everyone's mind, it becomes realistically achievable.

How do we get this into the back of everyones mind? You build a Retrospective around it!

The goal

To do better in the next iteration by creating awareness in your team about the Responsibility phases, and by getting everyone on the team to understand The Responsibility Process framework, thereby creating some sort of hive mind that allows your team to align itself and make decisions more easily.

What you’ll need

  • A meeting room
  • Post-its
  • Markers for everyone
  • Sticky (or non-sticky) Flip-charts, Paper-roll, … A clean surface area
  • Tape

Draw out the phases of The Responsibility Process on the clean surface area. Leave some room in between the lanes.

Setting the stage

Make sure you mention Norm Kerth's Prime Directive, especially if this is your first retro, but also as a refresher.

Aside from that, pick a Setting the Stage activity! The internet is a vast ocean of possibilities.

Gathering Data

  1. Briefly explain The Responsibility Process. Give short examples for all the phases.
  2. Have everybody write down on a post-it, in silence, at least one event or experience in the past iteration that they can remember, whether it be a good or bad one.
  3. Then have everybody try and hang each of their post-its on The Responsibility Process.

If you have a big team, or you're doing a project retro, chances are you'll get a WHOLE lot of post-its. If that's likely to be the case, you might want to limit the number of post-its people can write. You can just lay down a hard limit of 2-3 per person, or constrain them in another way (add to the comment section if you have a nice example).

Generate Insights

This is an important part of your retro, because here we want the people on the team to become aware of what phase they were in when they experienced their chosen event.

Start with Denial or Blame, and read the post-its out loud. Move up the phases as you go along.

Try to put every post-it you pick up in the phase where the team thinks it actually belongs. If it isn't clear what the writer meant by it, have them explain it.

Group same post-its ONLY when they are both about the same subject AND in the same phase!

Analysis

Here’s what we came up with:

Photo: the result of the Gathering Data phase.

I’m pretty sure anyone can draw the phases out better than I did. :)

After you've gone through all the post-its, note how many are in each phase. The phase with the most post-its is the one you, as a team, are most likely to become aware of and move on from towards Responsibility. The usual practice of noting clusters of subjects applies as well, though it might be less apparent here, because post-its on the same subject can be spread out over different phases. That's OK: it's more important to learn where in the phases you are than which subjects pop out. By reading them out loud, everybody will notice the subjects anyway.

Sometimes people will yell "I added that same subject in a different lane!". If you feel like the current phase hasn't been explored enough yet, continue with other cards in that phase. If you feel like it has, ask them which phase they put it in, so you can compare and ask why the same subject ended up in a different lane.

You’ll notice that it’s often a description of the cause, and not the problem itself that fits best in a phase. Help your group notice this too.

You'll also notice that some people have difficulty placing a post-it in one lane and will sometimes hang it on the border. In the photo, see the two leftmost post-its that seem to drape over the border between Obligation and Shame. This is the typical programmer's need to reduce duplication. Tell them that in this case, it's alright :)

A participant mentioned that you could never hang anything in Denial, because doing so would mean you ARE aware of the problem. True, but the post-its that go in there are experiences that, looking back on them, you realize you were in Denial about at the time.

The post-its in Responsibility were always good ones, and these deserve our attention as well. When stuff is going well in an iteration, it's definitely also worth mentioning!

Decide what to do

Explain that aside from the personal empowerment the framework provides, it also empowers a team with Shared Responsibility. Then ask the team the big question: "What do WE really want?". You can then proceed to dot-vote, or merge, or make SMART goals.

Suggest organizing a contest where team members publicly keep track of their "Hits & Misses" for the phase that had the largest number of post-its.

Closing the retro

Explain the concept of Team Responsibility, which you already sort of hinted at in the previous section. This time make it really explicit that you, as a team, can also move through the different phases.

Again, pick a closing activity. If you're like me, you're going to want to choose one that gives you, the facilitator, feedback on how you ran your retro and how the team felt during it.

Closing thoughts

I didn’t put up Quit, because I didn’t want to confuse them too much. Quit can be tricky like that. But I did explain it to them afterwards.

Print out the posters and hang them across the team room for easy reference and reminders.

The reason I think this retro format is great is that you gather data and learn about The Responsibility Process at the same time. Furthermore, I find that creating awareness about these phases just improves a team in general, for a loooooooooooong time.

GL HF out there!

Oh, and I welcome questions, remarks, or any other kind of feedback in the comment section.

Hits and Misses game

On a whiteboard or whatever, list the team members, a hits column and a misses column. When a person notices that they are in a certain phase as they say/do something, they can mark it as a "hit". If they only notice it after they said/did something, or someone else notices it for them afterwards, it counts as a "miss".

This might work against you if some team members are likely to game the system. It's fun to then ask them, while pointing at the posters you hung up, from which phase in The Responsibility Process they're acting the way they are. :)

Event Storming Exercise

As I'm currently in between projects at Cegeka and just finished working on an RFP, I got to factor in a day of learning along with other developers last Friday. And boy oh boy, was it worthwhile.

TL;DR? Skip to the Struggling part.

Event Storming

Last year, at Vaughn Vernon's (@VaughnVernon) 3-day Implementing DDD course, Alberto Brandolini (@ziobrando) gave a workshop on what he called "Event Storming". Over the past year the term has gotten some attention here and there. There's even a Twitter account: @EventStorming.

Most recently though, Alberto gave a presentation about it at the DDD Exchange conference in London.

Log in to SkillsMatter to see the presentation video.

Preface

The idea to get together and practice EventStorming ourselves came from Jo Vanthournout, one of our more avid DDD practitioners. The project he's on at the moment had a mandatory holiday, and what better way to fill it with learning than to organize an EventStorming practice session with colleagues?

The idea was to try and storm the tabletop game “Friday”.

In this blog post I'll list all of the stuff we noticed during our day of learning, so it might be a bit TMI. :)

Questions, so many of them!

We had some discussion when we got together in the morning that brought us to a couple of questions:

  • Reactive Programming is not the same as an Event Driven Architecture, but they’re alike. But what is the relation between these two?
  • To start using Event Driven Architecture one would normally start with Domain Events, can Event Storming help facilitate with that?
  • Is EventStorming used to model a problem? Or just to model a flow? Or is it maybe a way to explore a process or a problem space?
  • When multiple events need to be gathered before something else can happen, what do you call that? Someone suggested a “coordinator”, which seemed like a very specific name, where does that word come from? Enterprise Integration Patterns maybe? Does EIP also align with Reactive Programming?
  • What should granularity of our events look like? Is it sufficient to write down “Car was started”? Or do we write “Engine started”, “Fuel injected”, “Spark fired”, … ?

Alberto’s Event Storming Recipes

Right after our back-and-forth, the first thing we did was watch Alberto’s presentation that he gave at the DDDX conference.

Pausing and discussing is awesome!

We were lucky to be able to pause it a good couple of times. Whenever we didn’t understand what he was trying to explain, or if we had some additional thoughts we paused and discussed.

I think this worked pretty great. You get the benefit of other views on Alberto’s explanation and you get them instantly. Often this gave extra context to something he was talking about or just made something understandable. I can recommend this way of learning to everyone!

Process Managers

A major point of discussion was the concept of Process Managers. In Alberto's words: during the session, you'll sometimes have one event transition straight into another one, and in between he annotates "something magic happens here". The magic that happens can be contained within a Process Manager: a new concept, but a confusing one for sure. Does it wait until multiple events have arrived before doing something, or is it simply something that contains some "magic" and produces an event whenever another event enters it?
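For what it's worth, a common reading in CQRS and Enterprise Integration Patterns circles is the first one: a process manager listens to events, keeps a bit of state until everything it's waiting for has arrived, and then issues a command. A tiny sketch of that idea, entirely my own and not from Alberto's talk:

  // Sketch of a process manager: it collects events, holds some state,
  // and only sends a command once the events it is waiting for are all in.
  type DomainEvent =
    | { type: "PaymentReceived"; orderId: string }
    | { type: "StockReserved"; orderId: string };

  type Command = { type: "ShipOrder"; orderId: string };

  class OrderFulfilmentProcess {
    private paid = new Set<string>();
    private reserved = new Set<string>();

    constructor(private sendCommand: (cmd: Command) => void) {}

    handle(event: DomainEvent): void {
      if (event.type === "PaymentReceived") this.paid.add(event.orderId);
      if (event.type === "StockReserved") this.reserved.add(event.orderId);

      // "something magic happens here": once both events are in for an
      // order, the process manager issues the follow-up command
      if (this.paid.has(event.orderId) && this.reserved.has(event.orderId)) {
        this.sendCommand({ type: "ShipOrder", orderId: event.orderId });
        this.paid.delete(event.orderId);
        this.reserved.delete(event.orderId);
      }
    }
  }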

Barriers

Another major pause was when he started talking about how Barriers can be detrimental to your domain. But it was really all right there in the slide.

Strict validation might protect some portions of your process, but it might make a large portion “unobservable”.

What he means is that as a side effect of your strict validation, new processes might arise which you can neither control nor observe. The example he gave was about a very strict validation that caused a company to start an interdivisional Excel sheet, used to maintain some data necessary to satisfy this strict validation… :o

Our problem space: Friday

When Jo first initiated the call for an Event Storming exercise on our corporate social tool, he suggested trying it out on "Friday". Real short: it's a "deck-building" game, played solo, that consists of beating 3 progression levels and a boss fight.

We got supplies to start our event storming session: a fat stack of post-its, sharpies (at least one per person), and some scotch tape should the sticky notes not be sticky enough.

We agreed to start our "model" from a state where the game has been set up and is ready to be played. Jo then gave an explanation, backed by the game rules we projected on the wall. We then agreed on the color coding of our post-its:

  • orange = event
  • blue = command
  • yellow = aggregate
  • green = actor/user
  • pink = external system
  • wide green = process manager

In the presentation we learned that we should start by putting events, as orange post-its, on the wall. And so we did… until someone asked "OK, but how does that event occur? Let's add a command that initiates it.". And thus, command post-its found their way onto our wall quite early on. We carried on anyway, trying our best to think about the next event that needed to happen and a command that would initiate it.

A question that arose after adding commands was of course “Who dishes out these commands? How does a conversation with the system work? Does every command need an actor or are commands by the system just implicit?”.

Back to just events

We had already noted that we seemed biased by our daily jobs (coding). Somehow this brought to our attention that we should stick to the game plan more strictly. So we got rid of all the commands we had already put on the wall, started adding more events again, and decided commands would come later; they're less important. And it's true: events sketch out the broader context, which sometimes clarifies earlier events, and you can more easily rearrange your timeline. Another benefit is that when you already have commands in place, you're less inclined to change up your wall, both because of the "work" you've already put into it and because you're too lazy to rearrange double the number of post-its. Kind of like the sunk cost fallacy.

We also noticed that our discussions were taking longer and longer. The action we took was to just keep multiple cards and replace them when we iterated over our process again at a later stage. Some "duplicate" notes we could remove or replace once we were a little further down the chain, which confirmed our hunch.

A little breaky break

Photo: breaky break.

We had just finished putting up one "iteration" of the game; our process on the wall at this stage ended with "Danger Level Raised". We took a coffee break - away from the wall - and talked about this and that. However, when we got back in the room and tried to restart our think-engines and get back into the flow, we found it wasn't all that easy. Did this have to do with having too-specific events? Or maybe it didn't mean anything at all?

Optional commands and gesturing

We were now at the stage where we were tackling the multi-stage concept of the game. The discussion was about which parts of our process were going to be iterated over; there was a clear need to split one straight timeline into two or more separate ones. I noticed we were cutting up parts of the wall with our hand gestures, both round and straight. Alberto talks about this in his presentation as well. It just helps to easily visualize what you want to do: it's clear to everyone in the room which parts you're separating.

Using the “UI”

At some point we went back to the tabletop that had our complete game set-up, to continue our thinking about the process and to use proper names. (Photo: annotated tabletop of Friday.) Doing this gave us some extra insight that helped us along. We compared it with how Alberto talks about making little "UI" cards that explain what decisions a user can take based on the information they see, which further clarifies some events and is obviously helpful in that way. It did make us reconsider our earlier strategy around events based on UI decisions, because you might actually miss some important ones.

End times

By the time we had to stop for the day, we think we had managed to capture the short process of Friday as a visual chain of events on our wall. We all agreed that this was time well spent and that we learned a lot, but it's nevertheless something we need to practice more if we want to do this exercise with one of our real customers at some point.

The Result

Struggling 

While trying to add more events, we noticed that we were struggling with the granularity of the events. Because we are all programmers, we sometimes had the impulse to add events that said something like "card turned", which might be too fine-grained to see our process clearly. When we eventually noticed those, we decided to tone it down a little and replaced some events that were already on the wall, or simply got rid of them. Our general rule was: "decisions that are taken based on the UI, we don't put up as events".

This struggle did lead us to a small discussion on the language we used for some events. We knew we had to use verbs in the past tense, but there was one specific case that I found interesting, because we discovered it so early. We had an event that read "1 hazard card to fight chosen", which got turned into "Challenge (with chosen card) started". Can you see how different that sounds? It's basically the same step in the game, except the first notation seems too fine-grained, whereas the second one is more coarse-grained but also clearer and says more about the game and its next phase. Ubiquitous language!

Another thing we were struggling with was the concept of iterations of commands/events. For example, when you’ve gone through a deck of cards the next level starts (it becomes more difficult) and a big chain of events starts all over again. We didn’t know how to make that visible on our wall, or even if we should make it visible at all. After some discussion we decided to simply not model it, because the events should speak for themselves. You don’t really need an arrow pointing from somewhere in the middle of your “timeline” back to the front to indicate starting over again. Another indication that we’re not exactly used to thinking in events or streams.

Yet another thing we found difficult to model was conditionals, which might again be an indication of our struggle with granularity and an event driven model. Should conditions also be represented as mere events maybe?

At this point we all agreed that we were missing someone who could provide us with some guidance on this event storming stuff.

Take-aways

  • When you recognize you've got "Event leads to Process Manager leads to Command", you've basically gotten stuck thinking imperatively again.
  • Don’t go too much into detail, try to stay high level as long as you can. Especially in the beginning of your session.
  • It's OK when one event leads to another event. You'll run into this when you want to translate an event from one bounded context into an event that makes more sense in another bounded context. A good example: "Person x has had a birthday" leads to "Person x has reached pensionable age". So in cases where one event is reinterpreted and/or filtered, it's fine to just generate a new event instead of going through a command (see the small sketch after this list).
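A small sketch of that last take-away (my own, reusing the birthday example): one event is simply reinterpreted and/or filtered into another bounded context's event, with no command in between.

  // Reinterpreting an event from one bounded context into another,
  // without going through a command (pure event-to-event translation).
  type PersonHadBirthday = {
    type: "PersonHadBirthday";
    personId: string;
    newAge: number;
  };

  type PersonReachedPensionableAge = {
    type: "PersonReachedPensionableAge";
    personId: string;
  };

  const PENSIONABLE_AGE = 65; // assumption for the example

  function reinterpret(event: PersonHadBirthday): PersonReachedPensionableAge | null {
    // only some birthdays matter to the pensions context; those come out
    // as a new event, the rest are filtered away
    return event.newAge >= PENSIONABLE_AGE
      ? { type: "PersonReachedPensionableAge", personId: event.personId }
      : null;
  }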

Questions unanswered

  • How do you properly use the result of an Event Storming session to produce stories?
  • Can you write your implementation completely based on the events you put up during the session?
  • Can you write a “walking skeleton” from the global event flow and do you use this to hook your story implementations on? Or is that considered to be too much of an upfront design?
  • We know of some projects that already used Domain Events and/or Event Sourcing. How did they end up with their events? Did they also model this first somehow?
  • Did they ever model an event that ended up being 4 separate events? And how much of a problem was it to modify their code at that point? Or did they ever model events that ended up being irrelevant? Or vice versa, did they miss events that ended up being super important?
  • For me personally, there's still some lack of clarity about commands and about how to properly position your events when your straight timeline splits up into multiple ones. Can you just put those separate flows anywhere, because the event storming session only serves to build a mental picture?

What’s next?

The plan is to implement that game based on our event storming session outcome and see how far we can get. So I guess when that happens you guys will be able to read about it. :)