Digital Communities Can Learn From “Leading Clever People”

Posted by Saneel Radia

Participation Inequality

Participant Media Model by Arts Alliance

We recently got excited about a 15-year-old chart (pictured) we were shown that effectively encapsulates participation inequality. We love the level of detail it adds beyond the typical 1:9:90 ratio (creators:editors:audience). We can only assume a “1:10:100:1000:10000 rule” is too much of a mouthful, hence the shorthand.

It makes us stop and think about how unbelievably valuable the “catalytic creative contributor” is to any community. A digital community designer should want nothing more than to please this particularly small set of people. Even if most brands primarily monetize the “ninety percent”, there would be nothing for this group to engage with without the catalytic creative contributor. They are the heart and soul of any community.

A quick glance through digital communities revealed that the highly successful ones clearly cater to this elite base. As we examined what these digital communities did for these special users, we noticed parallels to one of our favorite pieces of business literature ever written: “Leading Clever People” published a few years back in the Harvard Business Review (Goffee & Jones, March 2007) about how to lead those whose skills or knowledge in your organization make them disproportionately valuable. If you haven’t read it and manage people, may we politely suggest you leave our blog and Google it immediately.

Some of the article’s “things to know about clever people” are particularly relevant to catalytic creative contributors, who also offer disproportionate value at quite a high “management” cost. Here are three we found striking:

1. They know their worth
As game mechanics have taken over the world, this principle is regularly forgotten. If a certain group knows their worth, shouldn’t they get some form of VIP status others simply can’t earn? Although Stickybits is a favorite app here at BBH Labs, they recently shifted their focus from content creation to promotions. It’s impossible to say the cause, but from an outsider’s perspective, it may be the consequence of failing to acknowledge the VIP base. There was no established benefit for tagging content. Assuming a small percentage of users must be responsible for creating large quantities of content, Stickybits failed to illustrate the reward of such behavior.

Conversely, Yelp continues to astound with their incredible understanding of the catalytic creative contributor. The Yelp Elite Squad is an example of understanding that some creators are more valuable based on quality, and of acknowledging that they know their worth. Getting this recognition can’t happen via persistence. Yelp subjectively evaluates your contribution and lets you know if you fit the bill. It’s counterintuitive to growing a base via “game mechanics,” but the reality is these people require special attention, and Yelp is willing to yield to their high maintenance requests.

2. They have a low boredom threshold
This one is interesting because “boredom” is so difficult to address. That said, there are clear patterns among those that address it successfully. Wikipedia is legendary because of the exceptionally small number of people that edit the community. A famous article once stated that more than 50% of the edits come from 0.7% of the community. Editing alone is different from catalytic creative contribution, but it does illustrate the point that a very, very small group will take on a vastly disproportionate task (we saw this during The Betacup). It might sound boring, but it’s clearly fulfilling to those key people. This is because the system itself alleviates boredom. The reward is in the act of doing, as each entry is unique and has its own audience. It takes quantifiable skill to be one of these 500 people, and they no doubt pride themselves on the fact that the vast majority of us couldn’t successfully do that job even if we were so motivated.

Compare that fulfillment with Foursquare. Foursquare is still in early development, but it currently depends on the system to alleviate boredom. The monotony is broken via badges created by Foursquare or its partners, and awarded for activities any user can do (i.e., “check-in”). In other words, it’s not self-fulfilling. It places an exceptional burden on Foursquare itself, rather than on the community, to validate the catalytic creative contributor. Put another way, Foursquare may have created a barrier to its own success. This is especially interesting in the context of their recent shift toward couponing and rewards.

3. They are well connected
Having a core base of hardcore creators is likely necessary for any digital experience. However, it’s easy to lose sight of the other value those content creators bring: a passionate base of advocates and recruiters. It’s similar to the idea of Propagation Planning (“planning not only for the people you reach, but the people they reach”) and poses an interesting challenge to user experience designers. Digg and other supposedly “democratic” news systems know this well. A review of the Top 100 Digg users shows what few people likely realize. A minuscule group actually controls what makes it onto the homepage. That sounds like the opposite of Digg’s offering, but in fact, those users are sought out by the audience because of their influence and reputation. Regular contributors (“editors” in the 1:9:90 framework) go out of their way to Digg and link to what these people post. Digg gets traffic and self-propagates. They give these users preferential treatment (the front page favors their submissions), and as a result have a high quality product and a built-in extended audience.

A number of the other observations about leading clever people apply to digital content communities, but these three struck us because they can be applied to help community managers and designers build for the catalytic creative contributor.

This group may be an exceptionally small percentage of the internet, but it wouldn’t surprise us to see an increasing amount of digital experience design just for them. Gamification is a popular trend, but those subtly swimming against the current are seeing success. In fact, the best way to win the game with the masses may actually be by catering to the clever few.



by Gerry McGovern


Whenever you get people to vote on a list of tasks, clear trends will quickly emerge.

Over the last eight years we have done well over 100 top task identification surveys in six languages, with more than 70,000 people participating. Repeatable trends have emerged. For example, by surveying 400 customers you will have identified your top 10 tasks with reasonable certainty and your top three tasks with high certainty. Your overall top task will have emerged by about 100 voters, and sometimes by as early as 50.

Typically, in surveying 100 tasks, the top 5 tasks will get an average of 25% of the vote, with the bottom 50 tasks also getting 25%. In other words, the top 5 tasks get as much of the vote as the bottom 50. After 400 customers have voted, the chances of a task in the bottom 50 becoming a top task are infinitesimal. Likewise, the chances of a top task dropping into the bottom 50 are infinitesimal.
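The tallying behind this method is simple to reproduce. As a toy illustration (the task names and ballots below are invented, not drawn from the surveys described here), counting votes and ranking tasks might look like this in Python:

```python
from collections import Counter

# Hypothetical ballots: each voter picks up to five tasks from the long list.
ballots = [
    ["find people", "book meeting room", "holiday request"],
    ["find people", "expenses", "book meeting room"],
    ["holiday request", "find people", "canteen menu"],
    ["find people", "holiday request", "expenses"],
]

# Tally every selection across all ballots.
votes = Counter(task for ballot in ballots for task in ballot)
total = sum(votes.values())

# Rank tasks by vote share; the head of this list stabilises quickly.
for task, count in votes.most_common():
    print(f"{task}: {count / total:.0%}")
```

In a real survey the list would hold a hundred-odd tasks and hundreds of ballots, but the shape of the result is the same: a short head that attracts a disproportionate share of the vote and a long, thin tail.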

This method allows people to vote for what’s really important to them. If it’s a task list for an intranet, for example, then many employees will vote for the task “Find people,” because on a day-to-day basis, finding people is a really important task for them.

It is essential that you restrict the number of choices. We tested situations where people could choose as many tasks as they liked from the list, and the results were not useful at all. It’s important to limit the choices to five or fewer. We tested with 10 choices and found that a significant number of people struggled to select 10 things that really mattered to them. We found that after five choices, people began selecting things that might be of interest but that they didn’t feel very strongly about, or else they got frustrated with the process. We need to find out what people really care about.

We found that it’s important that people vote. Voting does something to people. If they just have to select five tasks, that’s one thing, but if they have to select the most important task to them and vote for it as the most important, their choice takes on a new level of seriousness for them.

Giving people a list of 100 tasks and asking them to choose the five most important to them sounds mad. I’ve had lots of market researchers tell me that it is utterly impossible. I wouldn’t have believed it myself but the results speak for themselves. So, why does it work? My colleague Gord Hopkins suggests one reason: the cocktail party effect. You’re at a party. There are a lot of people in the room, a lot of buzz and noise. Across the crowded room someone mentions your name in conversation and out of the noise there’s clarity; you hear your name being spoken. The words jump out at you.

What I have noticed is that people don’t read the whole list. They just scan it, and out jump the things that really matter to them. It’s not that they discover things on the list that interest them. It’s that inside their brains there are things that really matter to them (their carewords). Seeing these words on the list is a reinforcement of something they already care a lot about.

If you want step-by-step advice on how to implement the task identification method, check out my new book, The Stranger’s Long Neck. You can read the first chapter for free.


User Testing Tools

Every time I see Steve Krug’s book “Rocket Surgery Made Easy” I feel guilty. I know I should do more usability testing than I do, but somehow it never quite works out that way.

Steve is right when he says we should all be doing usability testing every month. He even makes it incredibly easy by reducing the number of participants to only three people per month. Yet even this we struggle to do.

However, I have learnt one valuable lesson from my disastrous DIY experiments: with the right tools, the job is a lot easier. In my experience this applies as much to usability testing as to DIY. Fortunately, these days there are some amazing tools available, and I’ve listed my favourites below. Be sure to check them out.

Flash tests

Flash testing is where you show a user your website for a few seconds and then ask them to recall as many page elements as they can. This is a great way to discover whether you have the correct visual hierarchy for your pages.

Where previously you would have needed to do this kind of exercise face-to-face, there is now an app for that. It takes a screenshot of your website and presents it to the user for five seconds before asking them to recall what they’ve seen. This is a great tool for testing initial sketches, design comps, or non-interactive wireframes.

Face-to-face testing

One of the big problems with face-to-face testing is recording everything that happens. Video cameras can be very intimidating, as can having other people in the room taking notes. Silverback does a great job of making this kind of recording as unobtrusive as possible.

Using your Mac’s built-in webcam and mic, it records everything the user says, as well as their facial expressions and what they do on screen. The next time you do face-to-face user testing, make sure you have a copy of Silverback installed.

Card sorting

Card sorting is an excellent way to ensure your information architecture is as user centric as possible. It involves allowing the user to organise cards that represent different content areas into their own hierarchy. Unfortunately the process can be time-consuming and it can be difficult to find an adequate number of users to make the exercise worthwhile.

Fortunately, there is now an app that allows you to do card sorting exercises online, using real users recruited directly from your website.

A/B testing

By exposing possible design variations to a small segment of site visitors, you get an insight into how a new design is going to work in the live environment. Many argue this is by far the most effective type of usability testing. Although there are free tools such as Google Website Optimizer, my favourite is Visual Website Optimizer.
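As a side note on how such tools split traffic: a common approach is to bucket each visitor deterministically, so the same person always sees the same variant on every visit. The sketch below is a minimal, hypothetical illustration of that idea (the `assign_variant` function and its parameters are invented here, and this is not a description of any particular tool's internals):

```python
import hashlib

def assign_variant(user_id: str, variants=("control", "treatment"), exposure=0.1):
    """Deterministically bucket a user into an experiment variant.

    Only `exposure` (a fraction) of users enter the experiment at all;
    everyone else always sees the control.
    """
    # Hash the id so assignment is stable across visits without storing state.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000
    if bucket >= exposure * 10_000:
        return "control"
    return variants[bucket % len(variants)]
```

Because assignment is derived from a hash of the visitor's id, no server-side state is needed to keep the experience consistent between visits.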

What sets this tool apart from its competition is that it allows clients to carry out their own multivariate testing without needing to understand HTML. Admittedly, this is both a benefit and a curse. However, it encourages users to develop their websites on an ongoing basis.

Live interactive remote testing

It can sometimes be difficult to meet face-to-face with participants who are potentially spread around the globe. One solution is to use screen-sharing software that allows remote testing to take place. My favourite tool for this kind of testing is GoToMeeting. Although it does not currently support video, you can share screens and speak directly to the participant. Best of all, it seems to suffer from very little lag, which is a crucial requirement when testing.

Un-facilitated remote testing

If you simply do not have the time to recruit participants and run the tests yourself, then you may wish to consider an un-facilitated remote testing service. For a low fee per user, you can define a test (such as placing an order) and the service will find participants and record them completing it.

All participants have been trained to talk out loud when completing tasks so you get a good idea of what they are thinking as they complete the test. Although not as good as interactive testing it is certainly better than no testing at all.

Recruiting participants

Finding participants that match your requirements is possibly the hardest part of usability testing. Although a demographic match for your participants is not as important as many people think, it can still be time-consuming to find anybody at all.

If you simply do not have the time to recruit participants, consider a dedicated recruitment service. Such a service will allow you to recruit participants directly from your own website, and also to manage those participants and find exactly the type of people you require.

No excuse

With so many tools available there really is no excuse for not carrying out regular usability testing. Just think, if you start doing monthly user testing you won’t need to feel guilty every time you look at Steve Krug’s book!


By Paul Boag