Samuel Clemens, Founder and Chief Product Officer at InsightSquared, discusses the purpose and implementation of process in product management and how his approach differs from our past guest, David Cancel.
Here are the highlights:
And here's the transcript:
Mike Fishbein: Hey, I'm your host, Mike Fishbein. This is Product Management, a podcast produced by Alpha. Welcome back.
On this episode, I'll be speaking with Samuel Clemens, the Founder and Chief Product Officer at InsightSquared. Now, follow along here because things get interesting. A few weeks ago, we had David Cancel on the show to talk about customer-driven product teams and implementing a fluid product management environment.
David was the Head of Product at HubSpot. But before him, Samuel Clemens was the Head of Product at HubSpot. The two share many beliefs, but have competing views regarding the purpose and implementation of process in product management. Given that, this interview will be a little bit unorthodox. Samuel will share his position on the key components of an active product management process, from conducting user research to continuous releases.
Samuel Clemens: My topic is that active process is good product management. I'm picking something potentially controversial here, since I know a lot of your previous guests have been advocates against heavy process in product management. And actually, truth be told, I'm a process-light guy. Most of the time I believe that process should actually follow having a bunch of smart, driven people figuring out how to do something for the first N iterations of whatever that thing is. And indeed, you should stay the heck away from them with any kind of process and let them figure it out.
Then the question in my mind always comes: what do you do after those N iterations? If you think back on the last six or 10 years of building things at your company, how many customer visits have you done? How many mockups? How many times have you pushed out a release? How many times have you triaged a bug? The number's probably in the hundreds, if not the thousands.
If you do something that number of times, you will develop a process. The actual question is whether you're an active or passive participant in developing that process for your company. There are some advantages to passive. There are reasons why you would wanna let it develop organically. If you choose to be an active participant, that gives you the opportunity to then reinforce some of the attributes that you're looking for, or bias towards some of the positive outcomes that you know can happen if you have a guiding hand. Active process is good product management.
Mike Fishbein: You know an interview will be good when it begins by noting how controversial the topic will be. As excited as I am to dig further into Samuel's point-of-view though, let's first dig into his past. How did he get into tech in the first place and what has shaped his perspective?
Samuel Clemens: I'm actually the child of two photographers, oddly enough. One was an artistic, creative photographer. The other was very mathematical and logical. The reason I bring this up is, I think having those two sides of thinking about a problem is something that often comes in very useful to any product manager. Whether it's creative and logical, or ideation and test, being able to switch back and forth between creatively thinking about the problem and then logically testing it to see if any of those creative ideas worked, and then cycling back and forth, is often a key component to really any kind of problem-solving.
In undergrad, I was an applied math major, which I think is now called - a much more fashionable term - data science. It's really just a fancy term for: here's a toolbox that you can use for problem-solving. Again, the generic type of problem-solving which can be applied to so many things. And the thing I really liked to apply it to was product management.
The actual background was that after college I got into management consulting and found it really very unsatisfying. I'm a builder. I like to build things. Right away, I went into the first of five startups. The first was a freelance marketplace for services that eventually merged with oDesk. The second was BzzAgent, a word-of-mouth media firm that was acquired by the British retailer Dunnhumby. The third was a science-based 3D modeling company called Models for Mars.
The fourth startup: I ran product at HubSpot for a couple of years, just after Brad Halbin. And then I left to start a company called InsightSquared with two very good friends. At this point, we are five and a half years in. We're a B2B software company that does sales analytics for the non-Fortune 500. If you know business intelligence, this is BI for the non-Fortune 500.
We're what I would characterize as middle-growth stage. We're about 160 total people. The R&D side, including engineering, and product, and design, and all of that, is probably around 40 or 45. Today, my main roles are leading product at InsightSquared. I'm also an entrepreneur in residence at HBS, Harvard Business School. I do frequent guest speaking engagements at HBS and MIT on things like product management, and design, and product marketing.
Mike Fishbein: Samuel has had an impressive career helping to build some of the most renowned B2B products. What have been the key lessons he's learned as they relate to active participation in creating process? I didn't interrupt his epic list so I suggest you take out your notebooks and pens for this answer.
Samuel Clemens: Oftentimes, I'll speak with companies in town about how to implement things like agile development or iterative product management practices. Frequently, they get caught up not on the theory of what they're trying to do - they're very much bought into the theory. What hangs them up is how to actually implement it and get a smooth-running machine.
Over time, I've found that there's a pattern to the answers to those questions. What I've done is I've listed out a half dozen, plus maybe a bonus one at the end - call it a baker's half dozen - of common things that help teams implement good product management practices. I think these will probably appeal either to more mature organizations that are trying to implement a smooth agile product management practice, or even to startups who are looking to start things off on the right foot.
The first one is the core of almost all good product management: you have to know your customer very well. In particular, it's on-premise customer visits. I stress the on-premise part. The key here is they need to be in person. You need to be at the customer location. You need to see the animal in its native environment. That's because when you're on, say, a phone call or a screen share - if they're looking at the new mockup that you've done on a screen share - there are so many things that you're not seeing which are absolutely essential to what you need to learn in order to build a good product.
For example, the obvious ones. You're not seeing the expression on their face when they're confused about something. If you're very good you can pick up the delay in the voice, but you're not seeing the expression. You're also not seeing the whiteboard that's in their office which is all of the brainstorming. That's the thinking that they're doing about what really bothers them. You're not seeing the way that they relate to their other peers. You're not seeing what the environment looks like.
You're missing all of that context. When it comes to really building an insightful product, it's all the context that matters. That's why it's absolutely critical. You can't just do surveys. You can't just do phone calls. You have to actually go visit the customer on-premise. One of the processes I have for my teams is that mandatory, once a month, every one of my PMs gets out of the building, off the property, and visits customers on-premise.
The second thing on my list is tuning whatever engineering process you've chosen for your teams. Common choices are things like Kanban or Scrum. Whatever your choice, the key is to really tune the levers so that the iterations flow easily and it's an enabler, not a burden, for your teams. For example, I tend to run Scrum with my teams. A common question is: what's the cycle frequency? Is it one week? Is it two weeks? Is it four weeks? These things can actually make a difference.
We ran one-week cycles for the first year of InsightSquared. It gets you a very, very reactive, very high-throughput kind of experience. The problem is that it also tends to burn out PMs. It ends up being a very rushed experience. You don't often have time to plan out some of the more complex types of features that happen in, say, years two going forward. That's why from year two through the current year - now, in year five - we've been running two-week cycles. Those work well. They still have a sense of high throughput, high frequency, high energy, and yet it doesn't have that burnout feel that a one-week cycle has. Another choice, at other companies, is to run a four-week cycle. I find that those are much better suited to a more mature company where the projects are longer, with much more planning per project, that kind of thing. These things make a difference. They will actually change the way that the machine behaves depending on what you select.
Other choices are how you set up estimations so that they're easy, not painful. How do you triage bugs? How do they flow in? How are they ranked? How are they handled? That makes a giant difference, both to the engineering team and to your customer success team. Then there are things like demos. How do you demo the software that you've built? The way we've set it up here, we actually have a Squared demo that we do once a month.
It's very much a rah-rah event where we're displaying the software, the stuff that we've actually built. One of the keys there is to describe the business value, because in the audience you have your customer success teams and your sales teams, and they've taken an hour off of their busy day. You're really focusing on: what can you show them that's impressive? How can they relate to the things that the R&D team has built? We've actually had a number of companies come in from the outside to sit in on our Squared demos and see what kinds of things they could adopt and bring back to their own companies.
To wrap this bullet point up: whatever process you choose, there are a number of levers. The key is to study those levers and really tune them, so that when you're doing 100 or 200 reps of whatever process you've chosen, it's smooth and it's enabling for the teams, not a burden on them.
The third thing on my list is about the spec. Frequently companies will ask me, how do you spec out the things you build? My answer is: I actually have a kill-on-sight order on specs in my building. I won't allow them. I think the most dangerous thing about a spec is that someone might actually build it. The problem with a spec is that it represents a point-in-time snapshot of what someone believed, say three, or six, or nine months ago, was the best thing to build. That's no longer up to date, because you've already visited three, or six, or nine more customers in person since then. You've already had so many more support cases come in, so many more customer deals happen. Your learning has progressed so much, and none of that is reflected in the spec.
Secondly, the product has progressed. New things have gotten built. There are so many more interdependencies that the spec can't account for. Then thirdly, no matter how detailed the spec is ... I remember, of the five companies I've been at, the first two were waterfall and the last two have been agile. I remember writing 14-page Word docs in a waterfall product management process at the first companies. No matter how detailed that spec is, it can never anticipate all of the questions that might come up. To pretend that it does gives a false illusion of completeness. Rather than have that false illusion, I say the opposite: let's kill the spec.
The spec is the conversation between the product manager and the engineer. Period. The way I prefer to have things happen is that when the engineer starts the story, they turn to the product manager and say, "Hey, what are we doing on this feature?" The PM says, "Ah, glad you asked. Here's why we're doing this. Here's what we're trying to do. Here's the problem," so that the engineer has context. The engineer says, "Ah, okay. All right. There are a couple of ways we could think about doing this. We could do approach A or approach B." The PM says, "You know what, out of the pros and cons of each of these, I don't think the type of things that would affect approach B are going to be relevant for us, so let's go with approach A." The engineer comes back the next day and says, "You'll never guess what happened. What happens when the number of X is zero? The whole thing breaks." The PM says, "Wow, I never would've thought of that. Tell you what, let me think about this. Of the couple of ways we can handle this, I don't think that one's going to be a problem for customers, so let's handle it in this way." That's the kind of thing that would never be in a spec, yet it's critical. That's why you want the spec to be the conversation - this ongoing conversation between the PM and the engineer. It's the most reliable way I know of to get quality output.
The fourth thing on my list of how to build a good product management process is release frequency. This is a question that's useful if you're joining a company in a product management role, or even if you're starting one and you're having [conversations 00:12:53] with your product engineering counterpart about how to set things up. Very frequently you'll ask companies, "How often do you release to production?" Particularly three, or four, or five, or six years ago, the answer might be, "Oh, we release once a month," or, "We release every two weeks at the end of a sprint." Essentially, what they're saying is that they do releases in batch mode. There's another way of setting up an engineering release process that doesn't do batch, that basically has a flow, where you're releasing as things are being developed and as they're ready - essentially on a per-feature basis, not a batch basis.
What this means is that you're releasing multiple times per day. It could be dozens, it could be hundreds, whatever it might be, but you're releasing multiple times per day. Benefits of this are immense. To the product management organization, it has the benefits of enabling a much more iterative approach to development. You can develop a piece of something, have it gated and hidden, and you can push it out quickly, get customer feedback on it, and go back and keep iterating to develop the next piece. To the engineering team it also has benefits. It means that you don't have your most senior engineers staying up until two o'clock in the morning every two weeks, or four weeks trying to push out a giant mass of stuff.
Then, to your customer success team it has benefits, because you're not pushing out a giant chunk of code that has bugs that have cropped up simply because of the interdependencies - bugs that, to fix, you might have to do rollbacks or all sorts of complicated maneuvers. Instead, when you're pushing out one release to production per feature, each one of those releases has just a much smaller chunk of code in it. If a bug happens, it's very easy to identify where that bug came from, so your mean time to fix is minutes, which has benefits for the customer success team.
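Samuel's per-feature release flow depends on being able to push code to production while keeping it gated and hidden. Here is a minimal sketch of that gating idea; every flag, account, and report name in it is a made-up illustration, not InsightSquared's actual system.

```python
# Minimal feature-flag sketch: code is deployed to production but stays
# dark until its flag is switched on for a given account.
# All flag and account names here are hypothetical.

FLAGS = {
    "new_pipeline_report": {"enabled_accounts": {"acme", "globex"}},
    "redesigned_dashboard": {"enabled_accounts": set()},  # deployed, still dark
}

def is_enabled(flag: str, account: str) -> bool:
    """A feature can live in production yet be invisible until gated on."""
    cfg = FLAGS.get(flag)
    return cfg is not None and account in cfg["enabled_accounts"]

def visible_reports(account: str) -> list:
    """What this account actually sees; partial work ships safely dark."""
    reports = ["classic_report"]
    if is_enabled("new_pipeline_report", account):
        reports.append("new_pipeline_report")
    return reports
```

Because each release carries one small, gated change, turning a problematic feature off is a flag flip rather than a rollback.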
The fifth item on my list for creating good product management processes is: I like to have everyone on the product team coding. I'll split this into both product managers and designers, because both are important. The reason I'm a strong advocate of having both PMs and designers code ... By the way, this is not just a hiring thing and a coaching thing, it's also an infrastructural thing. There's actually a set of steps that an engineering team needs to do in order to enable their PMs and their designers to code. It takes effort and calories to do that. The reason I believe those calories are worth spending is that one of the highest-impact things you can change in a product is the copy, as an example.
A typical process for tweaking the copy in your product, if you're a product manager, you might be looking at one of your screens and saying, "You know, every time I sit in front of a customer they find this confusing, and I believe the reason is, they're reading the headline and then they really don't get ... They have the wrong frame of mind for everything else on the screen. I want to change that headline to say something else," blah-blah-blah.
Their process of changing it: they might sit down with one of the engineers on their team and say, "Hey, could you change that headline for me to something else?" The engineer will say, "Oh sure, that's no problem," and they change it. Then you hit reload on the screen and you're like, "Oh crap, that's wrapping, and that really shifted the layout around. That might be more clear, but it's overall a much less intuitive page. Could you change it again to such-and-such, which is shorter?" The engineer is like, "Sure, no problem," and they change it to whatever is shorter.
Then you look at it and you're like, "Okay, I made it shorter, but now it's not clear again," and you go back and forth, and back and forth, trying to tweak it. What ends up happening is the engineer feels like a typist, and the PM feels like an ass for making the engineer feel like a typist, and tries to avoid that situation in the first place. What you end up with is a product whose last level of detail never really gets done, because the friction in trying to get that last level of detail is just too high.
You can decrease that friction if you enable your PMs and your designers to code, because then they have direct control over that last layer of detail. What you get are three things. One is you get a productivity gain, because you don't have that added friction. Basically, the PMs, the designers can go in and directly impact whatever they want. You have a motivational gain, because they feel like they're empowered to go and actually control the layer of things that they want to control. You also get a quality gain, because the product has that last layer of polish, in the case of the copy tweak I was just suggesting.
For designers it's even more impactful. When you have designers that can code, instead of doing just pure UX and then handing it off, they're less likely to suggest UX that is unimplementable and, in fact, more likely to suggest UX that is informed by the kinds of cool things you can do in code these days. You're going to get a dramatic increase in the quality of the design that actually gets built. You get a productivity gain, you get a motivational gain, you get a quality gain. All these things are very doable if you spend those calories to unblock and enable the non-engineering members of your R&D team to actually push code to production.
The sixth thing on my list is the notion of testing in layers. I frequently get asked about betas - how to run betas, what's the appropriate way to do it. The way I view a lot of this non-automated testing is that it's kind of like an onion with layers, where as the layers go out you have increasing fidelity of test, but also increasing cost. At the center of the onion you have the cheapest possible test, which is simply asking your internal reference customer - yourself. The cost is 15 milliseconds, however fast your own brain runs. The fidelity is good, but potentially questionable if you don't know the exact customer perspective on whatever this item might be.
Slightly more expensive, if you want to spend more effort to get more fidelity, is asking someone who's next to you, or across the aisle: "Hey, so-and-so, can you just look at this screen and tell me what you think?" The cost of that is 30 seconds, a minute. The benefit is that you get a human reacting to the thing. They may not be a customer-human, but at least they're a human who's not you.
Slightly more expensive still: you might go outside of your immediate area. Say you make software that sells to salespeople, or software that sells to customer success people. You might go to the customer success team and say, "Hey, can I show this to you, because you're a proxy for our kind of customer?" The cost in that case is, say, five minutes. The fidelity is higher, because it's another human and they're the right persona, but they're not exactly your customer, because they know too much - they're inside the building.
Then, of course, you can increase the cost further and go outside the building. The next layer out might be your beta customers - a closed set of, say, a couple dozen customers that you have close relations with. The cost is, say, two hours to set it up and have a phone call. The benefit is that these are actual, real customers this time. The drawback is that they are your beta customers, so they're not truly representative of the entire set; they're too beta-tolerant. You can go out another layer, which is a gated release to production, where it goes to a subset - say, one-tenth - of the base. The cost to set up is now hours, because you have to actually get it robust enough that it can be released to non-beta parts of the base, and you need to set up the gating. But the fidelity is that now you're actually releasing to real customers.
On and on you can go until, basically, it's released to all of your base. Even then, that is still a continuation of testing. You're still watching it being used in the base, and then tweaking and iterating on what you have. Essentially, you have these layers of an onion, and all of these are choices that you can make when you're releasing something. Now, the key is this. When you're releasing something you do have that choice. You want to ask yourself, "What is the type of uncertainty that we have with this thing that I'm releasing right now?" For that kind of uncertainty, what is the appropriate level of testing that I should use?
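One of the outer onion layers Samuel describes, the gated release to a fraction of the base, is commonly implemented by hashing a stable user identifier into a bucket. The sketch below assumes that approach; the function and key names are hypothetical, not a real library's API.

```python
import hashlib

def in_rollout(user_id: str, feature: str, fraction: float) -> bool:
    """Deterministically place a user in [0, 1) and admit them if they
    fall below the rollout fraction. The same user always gets the same
    answer for the same feature, with no user list to store."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return bucket < fraction

# Widening the test to more of the base just means raising `fraction`:
# everyone admitted at 0.1 is still admitted at 0.2, so no user flips
# in and out of the feature between releases.
```

Salting the hash with the feature name means different features sample different tenths of the base, which avoids the same unlucky customers always seeing every experiment first.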
The last item on my list - the bonus in this baker's half-dozen - is creating a roadmap. Questions about a product roadmap are probably the most common I get when speaking to other companies. The way we do it is, I do a four-quarter roadmap. Four-quarter meaning the buckets are quarterly: here are the things we're going to build in each of the next four quarters. Rather, here are the things we're going to release in each of the next four quarters. I iterate on that monthly.
Once a month I put out a new version of that roadmap, and it's fully transparent internally. Meaning, I send it out to the entire company. I ask them to print it; I ask the salespeople to tape it to their desks. I do a lunch conversation, open to anyone in the company, about the items on the roadmap, the trade-offs we made, and what things are included in different projects, so that everyone has at least some level of understanding of what's going to be built over the next four quarters.
The way it's structured is, I've set up levels of certainty as the quarters progress. The quarter we're currently in has a high level of certainty that nothing will change before we exit the quarter. If it's the current quarter, maybe it's 80% - an 80% chance that nothing will change by the time we finish this quarter. If you go four quarters out, it's a 20% chance, because a lot could change. We could change how we're selling things, how we're marketing, what the different outside business needs are, as well as the timelines of things within the R&D team. All of those things will change what happens to the roadmap. As you go further out, there's a lower level of certainty.
I use these so that the consumers of the roadmap within the company get a feel for how to digest that roadmap. Not all of it is locked in stone. They get a feel for how it's fluid - which pieces are more fluid, and which pieces are more reliable.
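The four-quarter roadmap with declining certainty can be captured in a small data structure. This is only an illustrative sketch: every item name and percentage below is invented for the example, not InsightSquared's actual roadmap.

```python
from dataclasses import dataclass, field

@dataclass
class Quarter:
    name: str
    confidence: float  # chance this quarter's contents won't change
    items: list = field(default_factory=list)

# Certainty declines as the quarters go out, per the scheme described above.
roadmap = [
    Quarter("Q1 (current)", 0.8, ["pricing revamp", "bug-triage SLA"]),
    Quarter("Q2", 0.6, ["new reporting module"]),
    Quarter("Q3", 0.4, ["integrations v2"]),
    Quarter("Q4", 0.2, ["exploratory: forecasting"]),
]

def summarize(quarters):
    """One line per quarter, the way a monthly roadmap mail might read."""
    return [
        f"{q.name}: {round(q.confidence * 100)}% firm - {', '.join(q.items)}"
        for q in quarters
    ]
```

Publishing the confidence number alongside each bucket is what lets readers of the monthly mail judge which quarters to treat as commitments and which as direction.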
Mike Fishbein: In Samuel's baker's half-dozen list of product management processes, he repeatedly emphasizes autonomous product teams and a customer-centric perspective. It sounds like he and product leaders like David Cancel see completely eye-to-eye. Is that the case?
Samuel Clemens: I'm not against customer-driven as a concept; I'm against customer-driven as it's usually interpreted and implemented by product management organizations. Usually, what it looks like is this. They say, "Let's be customer-driven. Let's, for example, have a team that is dedicated to improving the engagement of customers so that it reduces customer support tickets. The way we're going to do this is, we're going to look at all the customer support tickets that have come in in the last X months. We're going to group them and force-rank them according to which ones customers have found most urgent. Then we're going to burn down that list. Let's put a metric on this team, so that they have to reduce some number from 20 down to 5, or something like that."
That's a metrics-driven, customer-driven way of doing it. That is product management. I have a major problem with that. My belief is that when you interpret customer-driven like that, you often miss the bigger picture, and you end up with very incremental levels of product management and improvement to your product. For example, fixing the list of individual issues that your customers are surfacing for you will indeed get you an improvement on each of those individual things, if you're doing it right. But what you will often miss is that each individual issue is really part of a bigger problem with whatever that flow in the product might be, and that, really, you shouldn't be fixing the five individual things; you should be rewriting the entire flow. Or, for example, if you have five different problems with that product, maybe you shouldn't have that product at all. You need to delete that product from whatever your company is currently selling and supporting.
This is one of the issues I have with customer-driven as a set of blinders: you will never have a customer come to you and say, hey, that product I'm giving you feedback on - you really need to kill it. That will very rarely happen. I've never had it happen, ever. And yet, that's often something you need to do as a product manager: actually decide to not support something, so you can focus more resources on a narrower set of products, or indeed a newer product that you might be innovating and launching someplace else.
Instead, what I advocate is that product managers take a much more active role in guiding the development of their product. Customer needs are indeed an input. In fact, they're probably the most important input, but they are not the only driver of what you're developing. There are other inputs, like competition, and corporate strategy, and the cost of things, and sequencing. All of these things need to get mixed into what is an active process of roadmapping and planning, instead of just saying, hey, we're being customer-driven, we're gonna let the customer drive this bus. As a product manager, you need to be driving the bus. I believe that's much more effective and will get you much more impactful product management, versus the incremental things you can get just by responding directly one-to-one to individual customer requests - the kind of thing you typically see from a purely customer-driven process.
Mike Fishbein: Samuel believes in a core customer-centric focus that is then amplified by rigorous processes that factor in business and external concerns. What does this core look like before a process is implemented around it?
Samuel Clemens: I think the foundation is having a good understanding of the [inaudible 00:26:22] customer themselves. I like hiring product managers from the internal customer success team, because they come over and they already know, at a very deep level ... They've already spent a year or two working with lots and lots of customers, deep in your product. They know exactly what the customer perspective is. If you have that, it's very hard to go wrong. You can be inefficient, but ultimately, if you really have the customer's true needs as your starting point, you will end up building the right thing.
What I've described here is how to shorten that process - how to shorten the time to building the right thing. How do you jumpstart it? How do you take someone who's come over from the customer success team, or from another team in the company, and put in place a base so that they have a more rapid way to get to building what they know is right? And then also, when you're troubleshooting or working with that person - when you're coaching them - how do you have a shared lexicon, so that when you're talking about product management with them versus somebody else, you can double-click on: okay, you're having challenges ...
Say you go to have a one-on-one with one of your PMs. They're having challenges trying to figure something out. If you had to have that conversation across all of your PMs, across all of your team, and each one was doing product management a different way, it would be very difficult for you to drill into what they're doing and actually be able to help them, because you'd have to come up to speed on what they were doing and figure out whether the issue was what they were doing or how they were doing it. Whereas if you have a common base across all of your PMs for how you do product management, you have a shared lexicon; you have a shared starting point, which is an enabler for your PMs to get better at their jobs.
Mike Fishbein: Once this foundation is in place and the seven or so processes are implemented, what dashboards or feedback loops does Samuel use to know how well things are going?
Samuel Clemens: Yes. I'm a strong believer in metrics. In fact, at InsightSquared - the company that I'm currently head of product for - the product we make is an analytics product that reads in sales data and marketing data and gives you analytics to let you know how your team is doing. What I believe is critical with product management is that the metrics you develop need to be appropriate to the thing that you're building. For example, you might be building a project to help the sales team convert. Perhaps it's a flashy type of product feature, and even though it may not get usage, you think it'll help convert more opportunities and deals. You need to make sure that the metric you pick for that is not a usage metric. It's going to be a sales conversion rate metric.
On the flip side, you might have another feature whose primary goal is to increase retention. Retention is a lagging indicator. You're going to find out at, say, renewal whether or not that thing really worked, so you can't really use retention as the metric, 'cause too many things go into it and it happens too late - a year after you've launched the product. So very often you'll use usage as your engagement metric, as a precursor: a leading indicator for the lagging indicator of retention.
Two thoughts here. One: pick the metric that's appropriate to the project; don't have just global metrics. And secondly, make sure to pick metrics that are actually easily influenceable and measurable by your team. The lagging indicators are usually the ones you're trying to affect, but they're not usually the best ones to use, precisely because they lag. Very often you need to compromise and use a leading indicator that you know correlates with the lagging indicator.
Mike Fishbein: According to Samuel, active participation in the creation of process on top of a customer-driven product team can have a number of benefits. Next up is the benchmark. Let's see how Samuel reflects on the series of questions we ask all interviewees to ask themselves.
Samuel Clemens: How do we eat our own dog food? It's a great question. I'm a big fan of managers or product managers being player-coaches, meaning that they keep their art tuned up by themselves being product managers of things. For example, the most recent project that I was the PM for was a redo of my company's pricing system, and that required me to get outside the building and work with the other teams and think about how we'd implement it. My biggest advice, or how I personally eat dog food, is by still being a PM myself.
Thinking more broadly, at a company level, how do we eat dog food? What's interesting is that the things I've been describing are the ways in which we dog food what you normally think of as product management theory. Everyone's usually on board with "I want to be agile, I want to have a lean product development process, I want to be customer driven." The question often comes not with the theory; where people get stuck is with how to actually put these into practice. A lot of what I've described here is, when we have put these things into practice, these are the ways in which we found we are best able to get them working.
How do I get out of the office? That one's easy. A, because I require all of my PMs, including myself, to get out of the office, and every week when we meet as a team we make a list of who has done an on-premise customer visit. It's very visible. But also, secondly, I encourage the sales team and the customer success team, when they're interacting with clients, to involve me. It's a known practice on the sales floor that if you have an opportunity that you're working and it would really benefit by someone coming and describing ... Perhaps meeting with more of the execs on the other team who want to hear more about the bigger picture of what's happening on the road map, I've said hey, these are times when not only can you involve me, but I want to be involved. I often get drawn in by the sales team or by the customer success team in meetings with customers to help them out.
What am I reading right now? Good question. One of the commitments I've made to my PMs is that after InsightSquared I want them to be able to go on and either lead product organizations at their next start-up or even start companies themselves. A lot of what I've been discussing is how to set them up to be able to do that; how to go and lead a product management organization knowing how to set up the basics of product management wherever they go.
Another part of setting them up for success in their next role is the notion of continuing education and learning more about how to start companies and how to run companies. We have a book club within the product management group where we'll nominate a book, the entire group will read it, and then one of the group will lead a discussion on it. The book we're reading right now is Getting Past No. It's a book on negotiation, written by William Ury. It's very, very strong. Very powerful. The one we did before this was Designing With the Mind in Mind, a book by Jeff Johnson. Phenomenal. Mind-blowing book. I recommend it to people even outside of product management or business. Just some fascinating things to learn about how the human mind works and how we perceive things. The one we did before that was The Innovator's Solution, the sequel to The Innovator's Dilemma. It's a Clay Christensen book, and I think it's required reading for anyone who's in the start-up space.
What's a recurring product management nightmare? What's interesting is that I don't have product management nightmares. I do have nightmares but they tend to be about company growth stage type things like how do we manage the culture as we grow? How do we keep employees motivated and retained? How do we work with investors? There's some big, giant players in our space. Sometimes they're partners, sometimes they're competitors. How do we work with them? Those are the kinds of things I have nightmares about. Perhaps it's because of the active processes that I've been describing here. I don't tend to have nightmares about product management.
Mike Fishbein: Listeners can find out more about InsightSquared and Samuel at InsightSquared.com or on Twitter @SCClemens. That's our show. Until next time, this is Mike Fishbein from Feedback Loop.