Stories can make or break a design strategy. Streamlining the mechanics of storytelling makes it easy to focus on the art and craft of narrative.
In the past, I’ve discussed why stories are fundamental to shaping, communicating, and evangelizing for a design strategy. The stories we tell are grounded in qualitative ethnographic research, and they bring our research participants to life: we can champion their stories, their wants and needs, and bring their voice to the design process. Their stories provoke emotional reactions that often lead to dissonance. That dissonance needs to be resolved, and so narrative acts as a prompt for driving difficult conversations through the creative process.
It’s hard to craft stories to support design strategy. The “telling of the story” is an art in itself, as are the curation and the delivery. Yet I’ve seen that one of the biggest hurdles is not in the creative part of storytelling, but in the logistics of the process itself. Research generates lots of data, and it can be intimidating trying to work through the mess to find the gems. Over the years, my team has refined an operational method for organizing ourselves as we prepare to tell stories from the field. This article is a highly technical tutorial, to the point of discussing what might be considered minutiae. I want to communicate what we do, step by step, so others can minimize the tedium and focus on what matters the most – the actual stories themselves.
For the sake of this article, I’m not going to describe how to build a research plan and conduct research; instead, I’ll focus primarily on what to do with the research once you’ve completed it. There are a variety of other resources available describing how to conduct qualitative research, including some from my own (free) book, How I Teach.
Let’s dive in.
Setting up our files, folders, and naming
One of the most important parts of this process is also the most mundane: setting up the folder structure that will hold the content that’s developed. We’re militant about strong file management related to large research data. Our folder structure looks like this:
First, we create a folder for the project as a whole. Within that, we create a folder for the research participants. In there, we create a unique folder for each participant. The folder format is:
[participant number].[participant name]
And inside of that folder are folders for our raw audio, our consent forms, our photos, and our transcription materials.
The reason this structure is so important is simple: when you start to do creative work and weave together stories, you need to find the content, quickly. As with any creative process, creativity comes and goes. When you find a thread and need to follow it, the material needs to be at your fingertips. Even with a small data set, organization is critical.
An additional benefit of a rigid structure like this is that any member of our team can dive into a project blindly and know where to find the materials. They can apply their knowledge of project organization from one project to another.
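Because the structure is rigid, it’s also easy to script, so every project starts out identical. Here’s a minimal sketch in Python; the subfolder names (`audio`, `consent`, `photos`, `transcripts`) and the sample participant are my own placeholders, not a prescribed convention:

```python
from pathlib import Path

# Subfolder names are illustrative; match them to your raw audio,
# consent forms, photos, and transcription materials.
SUBFOLDERS = ["audio", "consent", "photos", "transcripts"]

def make_participant_folder(project_root, number, name):
    """Create the [participant number].[participant name] folder
    and its standard subfolders; return the participant path."""
    participant = Path(project_root) / "participants" / f"{number}.{name}"
    for sub in SUBFOLDERS:
        (participant / sub).mkdir(parents=True, exist_ok=True)
    return participant

# Example: scaffold the folder for a (hypothetical) first participant.
p1 = make_participant_folder("student-loans-study", 1, "Alice")
```

Run once per participant as they’re scheduled, and any team member can find any material without asking.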
Recording the interview
When we conduct research, we audio-record the session. We describe to the participant that they are going to be recorded, and explain exactly what we’ll be doing with the recording (if it will be private, shared anonymously, or shared publicly). Participants sign a consent form prior to recording. The recording becomes the source material for the remainder of synthesis, so it’s important to get this right.
We use Sony Digital Voice Recorders. They are small and inconspicuous, and it’s easy to tell when they are recording: a red light on top is lit only while the device is active. The battery indicator clearly shows how much battery life is left. The most important factor in selecting a voice recorder, though, is integrated storage and a built-in USB connector.
When we’re in the field, traveling quickly between locations, we don’t need to be struggling with cables and connectors – we just plug it into our laptop, and it works.
Immediately after the research session, we copy the audio file to our Dropbox project folder; we often do this in the car on the way from one research session to another.
Full verbose transcription
As soon as possible after the research has concluded, we produce a complete transcription of the session. The only things removed are obvious verbal tics, such as “uh” and “um”; the rest of the content is presented exactly as spoken. We only transcribe what the participant said, not what the interviewer said or asked. As we type, we press Enter to indicate a complete thought or idea. This is often delineated by a pause, an intake of breath, a topic change, or a question asked by another participant.
It’s tempting at this point to find a third party to do the transcription for you – to take some of the tedium out of the process and to give you time to do other things. There are lots of companies that will do transcription, charging $1 a minute and turning transcripts around in a few days.
But we don’t do that; we transcribe the work ourselves, divided up among the research team. It takes a long time, but it’s worth it. When we’re done, each of us holds a part of the research intimately, and can “channel” the participant. When we start to craft stories, we can rely on that person to know the material inside and out. After transcribing a participant, I can actually hear their voice in my head, and that voice lasts; days or even weeks later, I can recall their comments and the tone and inflection of their statements. And when a team member asks me a question about them, I can answer it quickly and completely.
One of the best tools we’ve found to aid our transcription is called oTranscribe. It’s a free tool that simplifies the process. You can press simple key combinations to pause or stop the recording, or to slow it down, speed it up, or jump several seconds backwards. Fluency in the tool means that we rarely actually have to stop transcribing, and can plow through the entire recording in just a few passes.
Coding in Excel
Now that we’ve transferred the audio to a manageable text format, we can move into the data itself by coding it – by giving it meta-data so we can find it easily. Later in the process, we’ll start to move content around, and it loses context quickly. We need a way to find our way back to a source, similar to how citations work in writing.
Here’s our step by step process for coding in Excel:
- Copy the entire text transcription, and paste it into a new Excel document. You’ll see the content flow into the spreadsheet: each line break in the transcript becomes a new row in Excel.
- Insert a blank row at the top, and three columns to the left of the text.
- The first column will be for the participant’s first name; label the column Participant Name.
- The second column will be for the participant’s unique number; label the column Participant Number. We start at 1, and work our way down.
- The third column will be for the utterance number; label the column Utterance Number. This is a unique number, starting at 1.
- Finally, label the utterance column Utterance.
Enter three rows of data, giving Excel enough material to recognize the pattern; then drag the fill handle in the bottom-right corner of the selection down to the last row to fill in the remaining data.
Now you have meta-data about the transcript: each utterance is uniquely referenceable. Each participant will have a unique Excel document, and each thing they said has a unique number. Later in the process, when someone asks a question about your work, you can always find the source material: you can identify exactly what a person said, and the context in which it was said.
Write down the last Utterance number, so you can pick up where you left off on the next participant.
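The coding steps above can also be scripted, which helps when there are many participants. A minimal Python sketch that produces the same four columns as a CSV file Excel opens directly (the participant name and sample transcript lines are hypothetical):

```python
import csv

def code_transcript(lines, participant_name, participant_number, start_utterance=1):
    """One coded row per transcript line: name, number, utterance number, utterance."""
    return [
        {
            "Participant Name": participant_name,
            "Participant Number": participant_number,
            "Utterance Number": start_utterance + i,
            "Utterance": utterance,
        }
        for i, utterance in enumerate(lines)
    ]

# Illustrative transcript lines for a hypothetical participant.
lines = ["I worry about my loans.", "My advisor helped me pick classes."]
rows = code_transcript(lines, "Alice", 1, start_utterance=1)

# Write a CSV with the same four columns described above.
with open("1.Alice-coded.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
```

Pass the next participant’s `start_utterance` as the last number plus one, and the numbering stays consecutive across the whole study.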
Create individual utterance cards
We’re ready to move from our raw data to a format appropriate for meaning-making. We make meaning out of the data on the wall in our studio – by physically immersing ourselves in the data. To synthesize our data, we’re going to print every single utterance on a small piece of paper. The utterances can then be manipulated, grouped, and arranged. Most importantly, they can be intermingled across participants, so we can start to make observations and inferences about the research as a whole.
To do this, we’re going to re-purpose a feature in Microsoft Word: Mail Merge. Mail Merge is typically used to create mailing labels. If you were going to send letters to 100 people, you could enter their addresses in an Excel file, and use Mail Merge to print them on Avery mailing labels. It works by merging a layout (typically two columns with 10 rows of labels) with a list of mailing addresses (in Microsoft Excel). We’re going to trick it: instead of printing mailing addresses, we’re going to print “labels” of each of our utterances, but on plain paper, not on sticky labels.
These instructions are for Windows, but the process is essentially the same on a Mac.
Set up your document
First, create a new document, go to the Mailings tab in the ribbon, and click Start Mail Merge. Use the Mail Merge Wizard to start; select Labels from the panel on the right.
Select your labels
Click Next: Starting document, and click on Label options on the panel on the right. If you were printing mailing labels, this is where you would select the specific labels you had purchased. We’re going to select a label size that works well with our research utterances. I like to use Avery’s 8923 Shipping Labels as a template; this size is 4″x2″, giving us enough room for rich content, but still small enough to manipulate.
Click OK, and you’ll see the document change – now, it’s acting like lots of small 4″x2″ documents instead of one large one.
Attach your Excel document
Click Next: Select recipients, and select Browse… from the panel on the right. We’re going to connect our Excel document to the mailing labels. Navigate to the Excel document you created and select it. When prompted to Select a Table, click OK. And, when prompted with a Mail Merge Recipients window, click OK.
You’ll see each label area now display the text Next Record.
Insert your utterances
Click Next: Arrange your labels. Select More items… from the panel on the right, and you’ll see a window called Insert Merge Field. This lists all of the columns in your Excel document – Participant Name, Participant Number, Utterance Number, and Utterance.
Click each one and click Insert, and you’ll see them appear in the first label.
On the right panel, click Update all labels, and you’ll see the content duplicate on each label.
Then, click Next: Preview your labels. There’s all of our content, but it looks like garbage. We need to add some formatting so we can read it.
Arrange your content
Click Previous: Arrange your labels, and return to the Arrange screen.
You can edit this just like any other document. We’ll change the font size to 8 point so we can fit more on each note. Then, we’ll add some spacing. Click between «Participant_Name» and «Participant_Number», and add a space. We’ll add the letter P before «Participant_Number», and a dash before «Utterance_Number». And, because «Utterance» is the most important part, we’ll add a full line break there.
Click Update all labels again, and then Next: Preview your labels.
Print the document
Now it looks the way we want. Click Next: Complete the merge, and print the material. We typically print both to the printer and to a PDF file. We don’t actually print to labels here – just to standard 8.5×11 paper.
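If Word isn’t available, the same cards can be generated programmatically. Here’s a hedged sketch that renders each utterance as a 4″×2″ HTML “card” for printing; the layout is my own approximation of the label size, not the exact Avery template:

```python
import html

def cards_html(rows):
    """Render utterance cards (header: name P<participant>-<utterance>,
    then the utterance) as one printable HTML page."""
    style = ("<style>.card{width:4in;height:2in;float:left;"
             "border:1px dashed #999;font-size:8pt;padding:4pt;"
             "box-sizing:border-box;}</style>")
    cards = []
    for r in rows:
        header = f"{r['Participant Name']} P{r['Participant Number']}-{r['Utterance Number']}"
        cards.append(f"<div class='card'><b>{html.escape(header)}</b><br>"
                     f"{html.escape(r['Utterance'])}</div>")
    return "<html><head>" + style + "</head><body>" + "".join(cards) + "</body></html>"

# Illustrative data for a hypothetical participant.
rows = [{"Participant Name": "Alice", "Participant Number": 1,
         "Utterance Number": 1, "Utterance": "I worry about my loans."}]
page = cards_html(rows)
```

Save the string to a file, open it in a browser, and print; the dashed borders double as cut lines.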
Put utterances on the wall
Print your utterances. We try to have someone at the studio get ahead of this process while we are still in the field conducting research, because it takes a long time.
Use a paper cutter to cut the utterances, and put them in a stack. Don’t worry about keeping them organized in any way; their sequence doesn’t matter.
Now, we have an exact duplicate of all of our research content, but in a malleable, movable format. Our next step is to get all of the content on the wall so we can use it.
We use 8 foot by 4 foot black foamcore boards to hold our content. The boards fit easily in a standard office room, where the ceiling is 9 feet or higher. They can be stacked, and transported with some ease between rooms. You can buy them at ULine; they aren’t cheap, but they last forever. These boards come in 1/4″ and 1/2″ thicknesses, and can hold sticky notes and push pins. Use the 1/2″ – pins stick through the back of the 1/4″. Each board can hold approximately 300 notes comfortably. A two-hour research interview will generate approximately 200 notes.
We use thumbtacks to attach the notes to the boards. Permanent tape pulls the paper coating on the boards off, and reusable tapes tend to fall off overnight. To attach the notes to the board, we lay the board flat on the ground, and literally crawl on it. This is much easier than putting the notes on the board when it is vertical, because you can minimize the time spent reaching for tacks. Every step counts when working with 4000 data points!
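Those rough numbers make planning straightforward; a quick back-of-the-envelope calculation shows how a 20-interview study arrives at roughly 4,000 notes and how many boards it will need:

```python
import math

notes_per_interview = 200   # a two-hour interview generates ~200 notes
notes_per_board = 300       # each 8'x4' board holds ~300 notes comfortably
interviews = 20

total_notes = interviews * notes_per_interview       # 4000 data points
boards_needed = math.ceil(total_notes / notes_per_board)  # 14 boards
```

In other words, plan wall space (and studio space) before the research starts, not after the notes are printed.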
As we attach the notes to the board, we aren’t worrying about keeping them in order or keeping them grouped by participant. Since the notes include attribution and citation, we can always find our way back to the context of the specific utterance. Instead, our focus is on getting all of the notes up, quickly.
Pictures on the wall
At this stage, we’ll also make sure that photos of all of our participants are displayed on the wall. We print an 8.5×11 image of the participant’s face, write their first name in big bold letters under their picture, and add two or three bullet points that describe the interview (“junior, majoring in math, works at Starbucks”) to trigger our memory about who is who.
Tracking our work
Throughout the process, we’ve been tracking our progress in a spreadsheet. We track completion of each of these activities, to make sure we’re not missing anything. The research process is chaotic, and often requires travel logistics, scheduling and rescheduling, and as many as three research sessions per day. Without strong organization, things spiral into a mess.
Our spreadsheet tracks each step: participant meta-data, participant scheduling, transfer of the raw materials to the Dropbox, creation of transcripts, and the utterance-development process.
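A spreadsheet works well for this; the same checklist can also be sketched in a few lines of code. The step names below are illustrative, not our exact columns:

```python
# Steps tracked per participant; names are illustrative placeholders.
STEPS = ["scheduled", "audio uploaded", "transcribed", "coded", "cards printed"]

def new_tracker(participants):
    """One checklist per participant, every step initially incomplete."""
    return {p: {step: False for step in STEPS} for p in participants}

def remaining(tracker, participant):
    """Steps still outstanding for one participant."""
    return [s for s, done in tracker[participant].items() if not done]

# Example with two hypothetical participants.
tracker = new_tracker(["1.Alice", "2.Ben"])
tracker["1.Alice"]["audio uploaded"] = True
```

However it’s kept, the point is the same: every participant has every step explicitly marked done or not done, so nothing falls through the cracks mid-fieldwork.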
Working through the mess
Now we can begin the process of synthesizing the data. There are lots of ways to do this, but we follow a path from data to observations, and from observations to inferences.
From data to observations
First, I’ll look at a single note. I’ll read it, highlight parts that are salient or interesting to me, and place it on a new, blank black board. Then, I’ll find another note, and do the same thing. When I find two notes that feel similar, I put them near one another. There’s no method to how I select notes to look at. I just start in the middle, and grab a note at random.
Over time, groupings will start to emerge.
When a grouping has about 5 or 6 notes in it, I’ll add a note at the top that describes the group, in the first person. For example, “I feel overwhelmed by my student debt.”
We try not to do what we call “red truck” matching. There are many ways to find similarities in notes like these:
One might be to match the notes based on the fact that they both contain a red truck: The notes are about driving red trucks. Another is to match them based on the behavior, feeling, or attitudes that are present: The notes are about how vehicles carry sentimental value. The second method is much more useful, because it starts to move a little past the raw data and into the world of inference. It’s just a slight jump, but a useful one. Our entire process of meaning making is to provoke new considerations, not simply to describe what’s currently there.
Over time, a group will grow to an unmanageable size, around 10 or 11 notes. At this point, we tear it down, and rebuild it.
The wall will feel overwhelming at first, and it’s difficult to start. I find a part of my leadership is to dive in and start moving things around, even knowing that they will be “wrong” or moved around again. Even moving just 15 or 20 notes gives the wall a sense of progress, and that makes it easier for my team to get started and feel less intimidated.
Throughout the process, it’s helpful to clean up the boards, moving the notes that have become scattered back into nice, neat columns. This starts to add a sense of finality, which is helpful for minimizing anxiety. But it also starts to indicate that things are decided, and that means people may be less likely to make changes. Clean the boards periodically, but focus only on the notes that haven’t been used. Let the other groupings remain organic.
This process generally starts as a quiet, introspective, and personal activity. Many team members participate, but it’s often a silent process of reading a note, considering it, and finding a place for it to live on the wall. Over time, we’ll start to hear things like “Hey, didn’t we have a group about debt?” or “I thought we had something over in this area about student loans”, and conversation will start to pick up as the walls take shape. Funny, sad, or extreme utterances are often read aloud. Team members get a sense or feeling for the walls – a tacit understanding of where things are geographically, and a semantic understanding of how topics are starting to emerge.
This process takes as long as you give it. We timebox the entire thing based on our project schedule, and it’s often directly related to the amount of research that’s been conducted. It may take 3 weeks to conduct 20 participant interviews; this initial synthesis process then likely will take between 1 and 2 weeks to gain traction. It’s difficult to sustain attention for a full work day, and we encourage our team to work for a few hours and then take a break, work on something else, or go for a walk to clear their heads and come back fresh.
From observations to inferences
Once we start to have a critical mass of observations, we start to identify inferences: combinations of observations around a given topic that are grouped by a larger, more subjective leap. Again, we avoid red truck matching, and instead focus on behavior, feelings, and attitudes. At this stage, entire groupings are moved around and placed in proximity to one another. Their observation headers don’t go away. Instead, a meta-header is created that starts to make a larger inference about the content. For example:
Advisors are supportive, but in an effort to provide value, they often make decisions for students rather than with them. This reinforces a feeling of helplessness.
I have a personal relationship with my advisor.
My advisors know the process.
My advisor acts like my mom and takes control.
This inference header is starting to make a larger leap from raw factual data, towards our interpretation of that data. I still have the real transcript data to support the leap, and there’s a strong likelihood that the leap is correct, but it’s no longer a one-for-one match to what was observed in the field. We’re starting to inject our own perspective into the data, allowing our design expertise in human behavior to influence the content.
This is where “abductive thinking” starts to play a role. That’s an academic idea that describes the ‘logic of what might be’, and I’ve written a very long article about that here.
While the initial data-to-observation process was generally quiet and introspective, the observation-to-inference process is largely a collaborative activity. The inferences are created by the group, and are discussed, debated, and argued about. There are infinite ways to manage the content and make leaps, and group-driven sensemaking is the only way we’ve found to quickly arrive at meaningful, rich inferences.
Inferences are magic, and contentious, and that’s the point. This is the place in our process where risk is being introduced: where we are making statements that aren’t substantiated by fact, but instead, by intuition. These inferences are going to shape the types of stories we are going to tell. Again, this process will take as long as you give it; we try to limit this to about 3 or 4 days. Several of those days will feel like steps backwards. The last few will feel like unlocking a strong understanding of the topic.
Getting organized for stories
We’re almost at the point where stories are coming to life, and at this stage, it has become clear which inferences support the best stories about the participants. To finish getting prepared for storytelling, we’ll work through one last process of organization: bringing all of the material into a single, searchable master index.
First, combine all of the individual participant Excel spreadsheets into a single Excel file. Copy and paste each into a single workbook, or use a VBA script to get part of the way there. Since the numbers run consecutively, each row will have a unique number.
Now, add two columns, one for Inference Grouping and one for Observation Grouping. Using the citation numbers on the wall, add the group information to the spreadsheet. Start with a group in the top left corner of the wall, and note the utterance number on the first card. Find that number in Excel (Ctrl-F), and add the observation statement and inference statement. Put a small black dot on the physical note so you know you’ve processed it, and move to the next note.
Auto-complete is your friend. After you add each theme or inference once, you will easily be able to add them again the next time by selecting the previous entry – Excel will predict your typing.
Work your way through the entire wall of content, until you have a single Excel spreadsheet with everything you’ve done – all of the data, organized as it appears on the wall.
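The merge itself is simple enough to script. Here’s a minimal sketch that concatenates per-participant files and appends the two empty grouping columns; the file names and sample data are hypothetical stand-ins, written as CSV rather than native Excel:

```python
import csv
import glob

# Two tiny stand-in participant files, for illustration only.
for name, num in [("Alice", 1), ("Ben", 2)]:
    with open(f"{num}.{name}-participant.csv", "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["Participant Name", "Participant Number",
                    "Utterance Number", "Utterance"])
        w.writerow([name, num, num, "Sample utterance."])

# Concatenate every per-participant file into one master index,
# adding the empty grouping columns to fill in from the wall.
fieldnames = ["Participant Name", "Participant Number", "Utterance Number",
              "Utterance", "Observation Grouping", "Inference Grouping"]
with open("master-index.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()
    for path in sorted(glob.glob("*-participant.csv")):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                row["Observation Grouping"] = ""
                row["Inference Grouping"] = ""
                writer.writerow(row)

# Read the result back for inspection.
master = list(csv.DictReader(open("master-index.csv", newline="")))
```

The grouping columns then get filled in by working across the wall, exactly as described above.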
Using the data
You’ve now developed a single, sortable, filterable artifact for building stories, and you can use that artifact to build narratives, explain ideas, and help describe strategic goals.
Imagine that you work at a bank and want to make changes to the monthly statements students receive in the mail about their student loans. You can build a story about how debt creates a feeling of looming anxiety, and use the material from your research to tell that story. The story substantiates your argument for change: it explains why the change is necessary, and it drives home your argument emotionally, not just rationally.
Building this story is easy: just search the spreadsheet for “debt” and find every single time a participant mentioned the idea. Or, filter the inference column of the spreadsheet based on specific issues related to anxiety. Once you find content that supports your argument, you can bring the content to life by telling the story of the quote. Tell us about the person who said it, why they said it, and how they felt. You have raw audio that is well organized; you can listen to their statements directly. Extract an audio clip to place into a presentation so the audience can hear the quote directly from the participant’s mouth.
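A keyword search across the master index takes only a few lines. This sketch returns each matching utterance with its citation, so you can trace it back to the wall and the raw audio; the sample rows are hypothetical:

```python
def find_mentions(rows, keyword):
    """Return every utterance mentioning the keyword, with its citation
    (P<participant number>-<utterance number>) for tracing back to the source."""
    hits = []
    for row in rows:
        if keyword.lower() in row["Utterance"].lower():
            cite = f"P{row['Participant Number']}-{row['Utterance Number']}"
            hits.append((cite, row["Utterance"]))
    return hits

# Illustrative stand-in for the master index.
rows = [
    {"Participant Number": "1", "Utterance Number": "12",
     "Utterance": "My student debt keeps me up at night."},
    {"Participant Number": "2", "Utterance Number": "48",
     "Utterance": "My advisor picks my classes for me."},
]
hits = find_mentions(rows, "debt")
```

Each citation points straight back to the participant’s folder, where the audio and transcript live.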
You can move quickly, and your materials won’t get in the way of your creative vision and the story you want to tell. You have all of the raw materials you’ll need to craft strategic design stories, to evangelize a position, help build a strategy, communicate wants and needs, and tug at the heartstrings of your stakeholders.
It’s all about the details
The constant drumbeat in this article has been organization and tedium: doing things with care, and making sure things are well structured. The article has been highly tactical and focused on the details of work process because I’ve found that the creative act of storytelling is often hampered by logistics and organization. Without a well-crafted method, the storytelling process falls apart. Things become too hard, too complicated, and too much of a mess; the work suffers, or worse, is abandoned. That’s not fair to the team, and it’s not fair to the participants in the research. These stories are their voice, and by crafting them, you bring their voice to the conversation and act as their advocate.