Designing for performance depends on the audience and the measures of success used. How a design performs determines how it evolves.

(continued from Part 1 – Evolution and Design)

Remember the old metaphysical question: “if a tree falls in the forest, and no one’s around to hear it, does it make a sound?”

Well, here’s an equally unanswerable question: “if someone designs a web site, and no one’s around to measure its performance, was it a success?” If you designed or manage that web site, your job depends on being able to answer that question. However, many user experience designers have historically avoided the question with the enigmatic and frustrating non-answer: “it depends.” Well, the time is overdue for us to take ownership of measuring the performance of our designs. We can start by understanding what performance means.

“Performance” implies that someone is observing the performance – the audience. And there’s an implication of critique, or measurement of the performance. In order to guide the evolution of the design to fulfill some strategy, the design’s performance must be measured.

Performance implies an audience.

The performance of any creative effort is subject to interpretation by the people observing the performance, and those people are called “the audience”. To reiterate the argument I made in Part 1, user experience designers aren’t creating art for self-expression. We’re creating art for use. My design is made to perform a particular function, made useful by having an audience that values the design’s function.

If I design a web application behind a firewall that no one ever finds or uses, what have I accomplished, other than self-expression? By definition, user experience design requires an other (the user) to experience the work. My design must be used in order to fulfill its destiny.

Performance implies measurement.

The audience is neither monolithic nor homogeneous. Each audience member will have their own unique set of values, motivations, cultural filters, and personality traits, which will affect how they view the performance. They will each bring their own criteria by which they will judge the performance, whether they know it or not.

In the movie theater, I might judge a performance based on the quality of the story, the set design, or the acting. A film critic might judge the character development, the narrative structure, or the impact on the world of filmmaking. A marketer might focus on the product placements, the brand opportunities, or the potential for a companion video game and a line of toys. Meanwhile, my mom might judge the performance on how comfortable the seats were, the temperature of the air conditioning, or the cleanliness of the restrooms.

Thankfully, there are some criteria that are common amongst audience members. We can group people together who have common attributes and common ways of judging performance. In the digital marketing business, they are called segments. In user experience design, these groups are called personas (I’ll save the discussion of the difference between marketing segments and personas for another day). For each audience segment or persona, the user experience designer can identify which criteria they will design for.

Who you perform for determines what you measure.

By choosing an audience to please, the designer also takes on the criteria that audience uses to judge performance. The criteria you choose to measure determine how you will design for performance.

For a typical web site, here are just a few examples of the different types of audiences and their different measures of performance:

Approach | Audience | Performance Criteria | Sample Measurements
--- | --- | --- | ---
user-centered | customer | usefulness, value, relevance | customer satisfaction, time on task, path efficiency, etc.
business-centered | client, business stakeholders | overall health of the business | aspects of the business model, such as revenue growth, brand awareness, market reach, competitive conversions, etc.
politics-centered | project managers, program managers | internal stability & promotion | project budget, project profitability, headcount, etc.
technology-centered | developers, testers, IT administrators | technical efficiency & stability | uptime, processing speed, bugs fixed, etc.
ecology-centered | eco-minded individuals | sustainability and environmental health | carbon consumption, trees planted, energy used, waste produced, etc.
social-centered | social values-oriented individuals | conversation and community health | relationships created, events held, houses built, lives saved, etc.
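To make the user-centered row concrete, here’s a rough sketch of how two of those sample measurements, time on task and path efficiency, might be computed from a session event log. The event format, page names, and ideal path length are all invented for illustration; real analytics data will look different.

```python
from datetime import datetime

# Hypothetical session events: (timestamp, page) pairs for one task attempt.
# The schema and the "ideal path" length below are illustrative assumptions,
# not any particular analytics product's format.
session = [
    ("2009-04-01T10:00:00", "home"),
    ("2009-04-01T10:00:20", "search"),
    ("2009-04-01T10:01:05", "results"),
    ("2009-04-01T10:01:40", "product"),
    ("2009-04-01T10:02:30", "checkout"),
]

IDEAL_PATH_LENGTH = 4  # shortest known click path from entry to goal

def time_on_task(events):
    """Seconds from the first to the last event in the session."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    start = datetime.strptime(events[0][0], fmt)
    end = datetime.strptime(events[-1][0], fmt)
    return (end - start).total_seconds()

def path_efficiency(events, ideal_length=IDEAL_PATH_LENGTH):
    """Ratio of the ideal path length to the path actually taken (1.0 = ideal)."""
    steps_taken = len(events) - 1
    return min(1.0, ideal_length / steps_taken) if steps_taken else 0.0

print(time_on_task(session))     # 150.0 seconds
print(path_efficiency(session))  # 1.0 (this visitor took the ideal 4-step path)
```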

You might notice that these last two are more cross-disciplinary and cross-audience. This is because they’re based on values, rather than on roles. Value-centered design (see Jess McMullin’s Boxes & Arrows article) and value-sensitive design (see Batya Friedman’s work at UW) are approaches to design that define success by how well the design aligns to commonly-held definitions of worth or ethics, respectively (I’m oversimplifying).

Some of the above approaches are better suited than others to different contexts. A user experience designer might gravitate toward a completely user-centered approach, but their job might depend on business- or politics-centered measurements. For a company whose entire business is conducted through its web site, depending solely on user-centered measurements of success might have disastrous consequences, especially if the web server is crashing due to some poorly tested code.

As an online user experience designer, I must deliver a design that the users love, but that is also on time, within budget, functional, revenue-generating, and adheres to the values of the company and its customers. I also need to create an experience that will evolve as those values and the market landscape change over time.

How you perform determines how you EVOLVE.

Which brings us back around to evolution. Adapting a design to the changing environment will ensure its survival. Even if you don’t intentionally vary the design, your site or product WILL change over time. HOW it changes depends on how you measure its performance. What do you value? The user? The money? The environment? Design for the audience(s) you must satisfy to survive. Measure what matters to that audience. Keep what works, toss out what doesn’t, explore new approaches, measure again, and repeat.

Next up: Part 3 – Process & Deliverables

If we want to create user experiences that have a better chance of surviving the chaos of the market, it may help to adopt evolutionary principles in our design process.

Fundamental Principles of Evolution

Evolution is really very simple: life begets life, with a little variation in each iteration (due to entropy). Some variants perform better than others, giving them an advantage in the face of predators and environmental factors. Over time and zillions of variations, specialization occurs.

Evolution in a nutshell

  1. Live
  2. Reproduce with variation
  3. What performs well, survives
  4. What survives, reproduces
  5. Repeat

Evolution is not random

It’s important to note that the process of evolution is not random. If it were, we’d have 3-eyed frogs with 6 legs. The reason we don’t is that, if 3-eyed, 6-legged frogs did occur, they didn’t survive any better than the 2-eyed, 4-legged frogs. That’s not random. 2-eyed, 4-legged frogs are simply better at surviving.

What is random about evolution is the mutation and the environment in which this cycle of life and death takes place. In nature, evolution is not intentionally guided, as far as we know. Catastrophic events and chance encounters with predators cannot be perfectly controlled in the wild, so evolution tends to take a wandering course.

Evolution with a purpose = breeding

If you wanted to intelligently guide the evolution of an organism, you’d need to look no further than your local AKC kennel or county fair. Humans have been breeding plants and animals to select for certain characteristics for millennia. Corn, rice, wheat, bananas, apples, chickens, cows, horses, goats, sheep, dogs, cats – practically every plant and animal we eat or live with has been bred intentionally. They didn’t evolve naturally – humans pollinated, inseminated, incubated, hybridized, and culled millions of generations of wild organisms to get the highly specialized cadre of useful, nutritious, cuddly, non-toxic, and benign farmyard animals and plants we eat and love.

What’s different between evolution and breeding is that most of the random variables have been removed. Farms are very controlled environments, with relatively few random predator attacks, competitors, or catastrophic weather events. This is because we humans protect our plants and animals with weeding, fencing, shelter, medicine, and a steady supply of food. The randomness of mutation remains but is minimized by selecting breeding pairs that exhibit the desired characteristics as closely as possible.

These domesticated breeds wouldn’t survive in the wild. Instead, they have helped us survive and become the dominant species on the planet. It was the transition to agriculture – the intentional cultivation of plants and animals – that alleviated the need for humans to wipe out entire herds of mastodons to feed their burgeoning population. In short, guided evolution allowed humans and their companion organisms to survive. It also allowed all the beauty and variety of human culture, knowledge, and art to blossom. And it might – in a return to sustainable agriculture – be what allows us to stop catastrophic climate change and survive for another million years.

Art with a purpose = design

Art is self-expression. It’s an act of creation, or, in some cases, an act of destruction or deconstruction. Whatever art is, it rarely has a purpose beyond making visible/audible/legible/tangible the vision of the individual artist. As with the natural world, the world of art evolves based on predation, competition, and the sociopolitical weather. Like evolution, art is not random, but it isn’t necessarily guided either. At least, not guided sufficiently by factors outside the artist.

Design, on the other hand, is guided art. Design is art that is meant to be used by someone other than the artist. There are varying degrees to this – and probably libraries full of books that discuss this subject ad nauseam – but my operational definition of design is art with a purpose. User experience designers are in the business of creating systems that fulfill a purpose – namely, meeting the needs of users.

Evolutionary Design

The problem with the design agency world, where I work, is that design is often confused with art. We strive to create the One True Design, a work of art so perfect that it will be viewed as useful, usable, elegant, and beautiful to all. And it will make money, further the brand, and pacify business stakeholders and partners. We are so enamored with this idea that we structure our projects to conduct research, create a design concept, test that concept, and deliver the One True Design after a few rounds of revisions. Most designs fail the first time. Sometimes miserably. And if you bet the farm on that one design, you only get that one chance.

Here’s an alternative: forget the One True Design. It is an illusion. There are no perfect designs for all time, there are only appropriate designs for specific contexts. Just as orchids and tree frogs perform well in their niches in the tropical rain forest but not the frozen tundra, certain designs only work well in certain environments. And those environments are subject to change over time. Thus, design must change along with the often catastrophic shifts in markets, customer needs, and business strategy.

At this moment in 2009, we are all facing catastrophic economic change. If we want to create user experiences that have a better chance of surviving the chaos of the market, it may help to adopt the same guided evolutionary approach in our design process.

Evolutionary design process

  1. Strategize
  2. Generate design variants
  3. What performs well, survives
  4. What survives, generates more variants
  5. Repeat

This looks very similar to the guided evolutionary process (i.e. breeding), with one key difference: the randomness can be almost entirely removed. Unlike with breeding dogs or corn, a guided evolutionary approach to design can be nearly free of random factors. Designers can control the mutation in design variants, creating only those variants that have a chance of surviving. No 3-eyed, 6-legged frogs. But maybe a frog with wings. Or a frog with X-ray vision. Or a frog with the ability to digest pesticides. However, in order to understand whether our flying superfrog has left us better off than with a regular frog, we must measure its performance against some goal. Otherwise, we’re just wasting our time and torturing frogs.
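As a thought experiment, the five-step loop above can be sketched in a few lines of Python. Everything here is an illustrative assumption: the “design” is just a vector of parameters, and measure() is a placeholder for whatever real-world performance measurement (an A/B test, satisfaction scores, revenue) you would actually use.

```python
import random

def measure(design):
    # Hypothetical fitness function: peaks when every parameter is 0.7.
    # In practice, this is live measurement against your chosen goal.
    return -sum((x - 0.7) ** 2 for x in design)

def generate_variants(parent, n=5, mutation=0.1):
    # Guided mutation: small, intentional changes only - no 3-eyed frogs.
    return [[x + random.uniform(-mutation, mutation) for x in parent]
            for _ in range(n)]

design = [0.5, 0.5, 0.5]                      # 1. strategize: pick a starting design
for generation in range(20):
    candidates = generate_variants(design) + [design]  # 2. generate variants
    design = max(candidates, key=measure)     # 3-4. what performs well survives and reproduces
print(design)                                 # 5. repeat until performance is good enough
```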

Next: Part 2 – Performance-Driven Design

Information Architecture Summit 2009 presentation by Aaron Louie and Rachel Elkington: “Darwin Does Design: Measuring & Optimizing the User Experience”

At the Information Architecture Summit this last week, I attended “Evolve or Die”, a panel discussion about the future of Information Architecture. The panelists ranged in tone from prophecies of doom to obituaries for sitemaps and wireframes (my take: IA is not doomed, nor are sitemaps or wireframes even close to extinction).

To the panelists, evolution was a thing to fear, as if we are as powerless as dinosaurs in the face of an approaching asteroid. I found myself wondering how we might embrace evolution as a tool for advancing the discipline or, more ambitiously, develop an evolution-inspired approach to advancing the process of how IA is done. How would we re-conceive the process of IA as the guided evolution of information spaces? Might we guide the evolution of the discipline itself, utilizing evolutionary principles to improve how IA is done and ensure the survival of IA as a discipline?

That evening, I realized that the book I’m co-writing (more on this later) on the blending of analytics, optimization, and user experience already described this evolutionary approach. Early the next morning, I signed up for a session slot to co-present the idea with fellow ZAAZ user experience architect (and optimization test designer) Rachel Elkington. Our session title: “Darwin Does Design: Measuring & Optimizing the User Experience.”

You can view the slides of our presentation on SlideShare. Audio coming soon!

Here’s my list of things we in UX will need to do to remain relevant – and alive – in this economy:

Stop thinking about designing the perfect system. Perfection is expensive, illusory, and unattainable. Perfection will get you laid off. Good-enough will allow you to survive until the next round.

Design sustainable systems. Create revenue, efficiency, and value feedback loops. Design iterative workflows and self-sustaining user flows that add value, increase engagement, and make the system smarter.

Just design something that works. If it can’t be built on time and under budget, you have failed as a designer. Every subsequent iteration is a chance to improve the quality and return-on-investment of your user experience. But there must be something in place to iterate on.

Understand how your users will evolve. Look beyond the first site visit or the first use. Look at the entire lifetime relationship you form with your users. Design for their first visit, their 10th visit, and their 100th visit. And know how to measure the value of each visit.

Measure the performance of your design. Measure what will keep you in your job — revenue, customer satisfaction, efficiency. Measure pre-design. Measure post-design. Fix what’s broken and measure the performance of your fixes. If you can point to measurable improvements, you’ll keep your job.

Treat your site like a hyper-local, self-sustainable, fertile permaculture.

In my previous post on Sustainable Garden Design, I explored the parallels between user experience architecture and landscape architecture. The analogy was thought-provoking, but I put it on the shelf for a while as the immediate demands of work took precedence. However, at a recent social media discussion at ZAAZ, I started thinking again about how permaculture concepts can be applied to user experience design for the web — specifically, in social technologies.

Nature + Agriculture = Permaculture

Natural ecosystems evolved over hundreds of millennia, developing sustainable, complex webs of relationships over millions of generations. Humans gathered or hunted whatever edible organisms they could find from this emergent food web. However, supply was limited and unpredictable, which is why humans invented agriculture. Our ancestors replaced forests with fields, which were planted with carefully-selected species that provided the tastiest or most nourishing byproducts. Over time, these species were bred to maximize yield and reduce maintenance costs. Unfortunately, this led to the destruction of natural habitat and the mass extinction of plants and animals that did not conform to the human ideal. Farms became brittle monocultures of one or two crops, denuding the land of nutrients.

There is a better way: permaculture. Permaculture is a human-cultivated, sustainable ecosystem that produces food by mimicking the balance and interactions between plants and animals found in nature. It basically involves planting a forest filled with a wide variety of native edible and beneficial species that enrich the soil and provide each other with nutrients and protection from parasites and disease. A permaculture plantation can yield food for humans and animals while surviving indefinitely — without the addition of artificial fertilizers, irrigation, or pesticides.

Permaculture for social media

What if we structured social media like a permaculture — capable of yielding revenue for a business while supporting a vibrant, self-organized, self-sustaining community indefinitely? What would that online community look like? How would we go about designing and structuring that permaculture for production AND longevity?

Here’s my unscientific and mostly untested back-of-the-paper-napkin wild-ass guess for an approach to applying permaculture practices to social media.

Step 1: Analyze the site

View your social media site as a plot of land. Each plot of land is different, with a unique mix of soils, wildlife, topography, microclimate, precipitation, sun profile, and people who live on or near that plot. The same principles apply to web sites. Each has a different organizational structure, political hierarchy, business model, content domain, audience, competitive landscape, and so on. In order to design a permaculture for your site and choose the appropriate elements for it, you must consider all of these factors.

  1. Business model: Begin by reviewing how resources — money, usually — flow into the business and, as a result, the web site. What are all the sources of funding, staff, political will, and so on?
  2. Content domain: What is your organization’s specific industry, subject matter expertise, or genre?
  3. Audience: Who do you serve? Who do you sell to? Where do you sell? From what cultural point of reference do you speak?
  4. Competition: Who serves/sells to the same audience as you? Who offers the same product or expertise? Who has the same business model?
  5. Seasonal factors: How does the environment for your organization change over time? Based on historical records, what periodic fluctuations can you expect on a monthly, quarterly, and yearly basis?
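As a lightweight way to capture this analysis, you could record it as a simple structure that later steps can refer back to. The field names and every sample answer below are hypothetical, invented purely for illustration.

```python
# A hypothetical Step 1 site assessment. The dimensions mirror the list above.
site_assessment = {
    "business_model": ["ad revenue", "premium subscriptions"],
    "content_domain": "home gardening advice",
    "audience": ["hobbyist gardeners", "landscape professionals"],
    "competition": ["general DIY forums", "plant-nursery blogs"],
    "seasonal_factors": {
        "spring": "traffic spike: planting questions",
        "winter": "lull: planning and indoor topics",
    },
}

# Later steps can query the assessment, e.g. to decide which niches to seed first:
print(site_assessment["audience"])
```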

Step 2: Design the value web

Let’s return to the analogy of the natural ecosystem. In every self-sustaining community of organisms, there is a constant cycle of give and take. Predators eat prey while becoming prey themselves to some other predator. Each organism eats something and is eaten by something. Every organism’s byproducts become food, catalyst, insulation, structure, protection, or poison for another organism. The complex network of dependencies that emerges from an ecosystem is called a food web.

On a full-circle farm, the food web is simplified to be more manageable by humans. The sun feeds the grass which feeds the sheep whose manure fertilizes the grass and attracts flies which lay eggs which hatch into maggots which are eaten by chickens whose manure fertilizes the grass and enriches the compost which nourishes the corn which is fed to pigs… and so on. In a permaculture, the food web is more complex. Fungi on the roots of a legume will enrich the soil with nitrogen, supporting nitrogen-hungry onions whose flowers produce an aroma that draws harmful insects away from the fruit tree whose fallen leaves prevent water from evaporating and block weed seeds from germinating… and so on.

In a social media system, the people and business entities that make up the network of dependencies don’t eat each other. Instead, they form the equivalent of a living food web in which the unit of exchange is value: a value web.

In order to create a highly targeted, self-sustainable, vibrant social system that also makes money, we must identify how each actor in the system gains value from and gives value back to the system. Consider how users, the content they contribute, and the affordances provided by the system act to create a living, vibrant community. For each user, determine their needs, what they produce, and how what they produce meets the needs of other types of users.

For each product offering, feature, function, target user type, or content type, answer:

  • Who will find value in it?
  • Who will use the output of it? For what purpose?
  • What information goes into it? Where does that information come from?
  • What part of it can be used directly? (e.g., revenue generation, brand awareness, data mining)
  • What useful byproducts does it produce? (e.g., metadata, customer demographics, behavioral data)
  • What waste products does it produce? (e.g., irrelevant content artifacts)
  • What does it compete with?
  • What is its lifecycle? How does it change over time?
  • What is its life expectancy? How often does it need to be “re-planted”?

In gardening, a common practice is companion planting, where the gardener places two or three plants that have some sort of simple dependency relationship near each other. Over time, the output of each plant will nourish its companion.

Companion planting can be applied to social media as well. For example, a Twitter-style micro-blogging feed from a product design team could be used to seed topics for a Digg-style feature-voting discussion board about the product. Votes and comments harvested from the discussion group could then be used to provide feedback to the product design team. By designing multiple clusters of such value loops and then linking them together, a nascent value web could be created.

However, it’s not enough to just create a network of dependencies. The value web must be flexible enough to survive, even if one element of that web is removed. Each organism has multiple sources of nourishment and produces multiple byproducts for multiple organisms. Additionally, each organism has individual variations, even amongst members of its species. This is what makes natural ecosystems robust enough to survive — and even thrive on — storms, disease, or seasonal fires.

In the business world, this strategy is called diversification. However, this usually means diversifying investments in multiple markets and multiple products. What needs to be added to this strategy is an understanding of the different kinds of value your customers gain from your business. Consider the value a customer gets from not just the product, but also from friends and family, other customers, society (in the form of social cachet from using your product), their own sense of accomplishment, your customer service, bonuses and rewards, and so on. If there are enough different types of value from a diverse enough set of sources, you could replace one of those vlaue components without causing a collapse of the value web.
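One way to reason about this is to sketch the value web as a directed graph and check what happens when a source of value disappears. The actors and value types below are invented for illustration, but the robustness check mirrors the diversification argument above: every remaining actor should still gain value from somewhere.

```python
# A back-of-the-napkin model of a value web: an edge (a, b) means
# "a delivers some form of value to b". All names are hypothetical.
value_web = {
    ("product_team", "customers"): "product",
    ("customers", "product_team"): "feedback",
    ("customers", "customers"): "community, social cachet",
    ("business", "customers"): "customer service, rewards",
    ("customers", "business"): "revenue, behavioral data",
    ("bloggers", "customers"): "trends, attention",
}

def is_robust(web, removed):
    """After removing one actor, does every remaining actor still gain
    value from at least one other source? (A crude stand-in for diversification.)"""
    actors = {a for edge in web for a in edge} - {removed}
    for actor in actors:
        sources = [a for (a, b) in web if b == actor and a != removed and a != actor]
        if not sources:
            return False
    return True

print(is_robust(value_web, "bloggers"))   # True: customers still have other sources
print(is_robust(value_web, "customers"))  # False: the web collapses without them
```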

Step 3: Fill in the niches

In nature, certain roles must be filled and in balance for a permaculture to form and thrive. Might we construct a social permaculture online by identifying and designing for analogous user roles to those in nature? (Or am I taking the analogy too far?)

Nature | Social Media User Type | Example Feature/Activity
--- | --- | ---
Nitrogen fixers: plant/fungi symbiotes that improve the quality of the soil by pulling nitrogen from the air and converting it into highly useful, nitrogen-rich foliage | Users who bring rare and interesting content and ideas from outside the system and package them in a form that others can use | Provide users with a way to bring new and interesting content into the system.
Dynamic accumulators: plants that draw useful and/or poisonous minerals & metals from the soil | Users who find hidden, high-quality content already within the system and collect it for others to use; also the moderators who police the community by removing toxic elements and cultivating quality content and interactions | Give users a means to promote and demote content already in the system.
Living mulch: plants that crowd out invasive weeds with dense, ground-covering, broad, shady leaves | Users who, in aggregate, generate massive amounts of average-quality content and prevent spam through self-moderation; this is the background noise against which high-quality (and low-quality) signals stand out clearly | Encourage the average user to participate frequently and casually by lowering the bar for participation.
Structural/keystone species: usually large plants that other organisms use as support, habitat, food, or shelter | Users who connect other users together and form the nexus of their social circles | Allow users to form groups amongst themselves and invite others to join.
Pollinator attractors: flowering plants that entice bees, butterflies, and other pollinators into the ecosystem; in exchange for playing a crucial role in reproduction and fruiting, pollinators collect nectar from the flowers | The “cool kids” who set trends, mix up technology and information in interesting ways, and encourage their friends to follow them | Form partnerships with notable industry bloggers; publish APIs to encourage mash-ups; reward content creators with micropayments or rewards points.
Root crops: plants that store carbohydrates in large, nutritious taproots, breaking up the soil in the process | Loyal lurkers who engage conservatively but consistently over a long period of time; they keep a sizable reserve of content private, form limited relationships with other users, and may represent a significant portion of traffic and revenue while rarely engaging the business or other users visibly | Allow anonymous access, casual participation, and gradual engagement.

Step 4: Control Weeds

Some of the niches above will be filled naturally by “weedy” users. Weeds are essentially ANY plant growing in the wrong place. It could even be a very valuable plant, such as a saffron crocus or rare orchid. If it occurs in an improper context, it’s a weed. In a permaculture, weeds are naturally suppressed by having an abundance of the right kind of plant. If a community is filled with active moderators who diligently cull and suppress the irrelevant and harmful content, there will be little need for the business owner to actively weed.

Many social media sites simply start with an empty lot, letting their “plot of land” become overgrown with weedy users and their by-products — irrelevant content, off-topic flame wars, link farms, spam, and so on. Sometimes, by selective weeding and cultivation, these chaotic systems can be coaxed into some semblance of community. But once a permaculture of weeds is established, it is very difficult to steer that community toward relevancy.

It’s far better to plan the social media permaculture, seed it with the right content, and encourage the right users to participate. Identify and constrain the system to the audience you want to reach. Provide them with the right mix of functionality and interactions to encourage conversations and connections. Slowly add new elements until you get the right balance.

Step 5: Harvest, prune, and tend

The idea behind permaculture is to create a self-sustaining system that also produces food. In social media, you want to encourage community AND accomplish some business objective. How do you know the establishment of the community is helping you reach your goals? How do you know it’s making money? How do you know what’s working and what’s not? The answer: measurement.

Farming requires fastidious bookkeeping. What did I plant where? How did that plant react to the addition of the other plant? How did the late onset of summer affect yield? How does compost made of kitchen scraps and lawn clippings perform year-over-year compared to compost made with chicken manure? What’s the optimum distance between fruit trees so enough light reaches the understory to encourage the growth of fruiting shrubs?

The same goes for social media. Analytics must be collected throughout the lifetime of the site to understand the effect of seemingly minute changes to elements of the online community. You won’t be able to predict with 100% accuracy how your permaculture will develop over time. As a result, you’ll need to swap out underperforming technologies, keep an eye on content rot, prune back overgrown categories, re-target audiences, and tune your messaging.

For the permaculture gardener, many of these optimization decisions require trial and error over decades. For a web site or online service, new features, designs, and content can be trialed and refined over a few weeks. Here are some of the most promising methods:

  • Rapid, iterative testing & evaluation: prototype designs are tested with actual users, revised in real time or within a matter of days, and re-tested and revised repeatedly. This results in a relatively well-optimized pre-launch design.
  • Beta launch: this is the pilot test. Users are more forgiving when informed that the site or service may not be stable. They are also more likely to return to see if improvements have been made since their last visit. Just don’t leave it in beta forever.
  • Multivariate testing: in-flight testing of minor changes, run on a random selection of a small percentage of visitors (a rough sketch of variant assignment follows below).
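For that last method, here is a minimal sketch of how visitors might be assigned to variants deterministically (so each visitor sees a consistent experience across visits) while capping the test at a small share of traffic. The variant names, traffic share, and hashing scheme are illustrative assumptions, not any particular testing tool’s API.

```python
import hashlib

VARIANTS = ["control", "headline_b", "button_green"]
TRAFFIC_SHARE = 0.10  # only 10% of visitors enter the test

def assign_variant(visitor_id):
    # Hashing the visitor ID keeps each visitor in the same bucket every visit.
    bucket = int(hashlib.md5(visitor_id.encode()).hexdigest(), 16) % 1000
    if bucket >= TRAFFIC_SHARE * 1000:
        return "control"  # outside the test population
    return VARIANTS[bucket % len(VARIANTS)]

def conversion_rate(conversions, visitors):
    # Comparing variants is then a matter of counting conversions per variant.
    return conversions / visitors if visitors else 0.0

print(assign_variant("visitor-42"))
print(conversion_rate(37, 1000))  # 0.037
```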

The user experience of a permaculture

You may well wonder whether a permaculture feels any better to a user than a simple, straightforward, single-product site or service. I’m going to cop out and say, “it depends”. Any web site or product, no matter how complex, can be made to feel simple, given enough latitude in the design. Sure, a natural ecosystem would just look and feel like an overgrown jungle. But a truly useful, sustainable, and profitable permaculture by definition must have simple and aesthetically-pleasing pathways, fully-accessible harvest patches, and an easily-maintained structure. Likewise, a sustainable social media system must look and feel simple, approachable, and accessible, even though it may be supported by an extremely complex set of business rules and technologies.

I wrote an article about InfoCamp 2007 for the June/July ’08 issue of the ASIS&T Bulletin. It tells the story of how we came up with the idea, some of the lessons we learned, and our plans for the future.

We tried to apply user-centered design principles to our conference planning process — hopefully, that message comes through in the article.

Interactive Strategy

New blog post about strategy and interaction design over at the ZAAZ Blogs.

My wife and I are in the process of re-designing our yard, since we need to take care of some urgent drainage and landscaping issues. Last night, we spent the entire evening poring over garden design books, most notably Ann Lovejoy’s Organic Garden Design School and Jacke & Toensmeier’s Edible Forest Gardens.

I was struck by how similar – and educational – the garden design process is to the user experience design process. While reading these books, I’d often just substitute “garden” and “landscape” with “web site”, transforming the text into a comprehensive guide to creating sustainable and useful information systems.

The garden design process all starts with the vision and goals. Our goals for our garden include:

  • low-maintenance structure
  • self-sustaining, bootstrapping, balanced micro-ecosystem
  • layout optimized for access & use (socializing, entertaining, harvesting)
  • long-term, sustainable source of food

After that comes a site assessment that analyzes the specific space and context from many different perspectives: climate, microclimate, seasonal factors, flow, access, use, aesthetics, materials, soil, organisms, etc. A design can’t simply be copied from one site to another – the topography, wind, water, sunlight, and so on are extremely site- and context-specific. For each perspective, the designer maps out the healthy and high-yield areas, the sick and risky areas, confounding factors, and so on. An overlay of all of these perspectives simultaneously shows which areas will support which types of features and which areas will need to be built up or re-designed.

From there, the designer brainstorms different approaches, blocking out on a conceptual bubble diagram how the new design will address the factors identified in the site assessment while still fitting within the vision. They then choose from a pattern library of well-tested, sustainable feature configurations, using some features to protect and support, while using others to add aesthetic and nutritional value. Again, not every pattern will work for every site. The designer must intelligently select the right pattern for the constraints and factors inherent to the context for which they’re designing.

The design is iteratively refined, considering all of the perspectives from the site assessment, until enough details have been resolved to begin selecting and placing individual plants. At the same time, the site can be prepped for installing the new design. As the new components are inserted into the landscape, the designer must consider how these will change over time. An edible forest garden must be able to sustainably evolve over decades, so the designer needs to take into account short-term factors as well as the long-term plan for their design.

Once the design is reasonably certain, implementation can begin. However, it doesn’t end when the last plant is placed or the last paving stone is installed. The design must be refined and the site tended over the months and years. Iteration and optimization are key to the long-term success of any design.

There are so many complex and competing factors to consider in a garden. As with a social networking web site, the community can’t really be controlled directly. The garden designer must attract beneficial organisms with certain layouts, features, and plants, while considering how the requirements and products of one organism might affect other organisms. If the structure is well-designed and plants well-selected, different organisms will nourish, protect, and police each other.

Sounds a lot like the user-centered design process, right? Well, it’s no mystery. Landscape architecture and garden design have histories that extend to the beginning of human civilization. There is deep wisdom to be gained from examining the process of creating and cultivating healthy, sustainable forest gardens.

In web sites, the pathways in, out, and through the site determine how people use, gain value from, and give value to the site and the other people linked to it. The site will constantly evolve, powered by a constant influx and outflow of information, money, time, and so on. Like a yard gone to seed and overgrown with weeds, an unattended, poorly designed web site will stagnate and collect spammers and trolls. The web site designer must consider how to attract the right audiences, provide them with value, allow them to be productive, and make use of the products of their input to further improve the structure and value of the site.

At the 2008 IA Summit in Miami this last weekend, I and 40 other user experience practitioners showcased our work on the first annual Wall of Deliverables. Conference attendees then voted on the entries. The deliverable I submitted, ZAAZ’s Blended Agenda Matrix, won 2nd place!

I’ve posted a blog entry about the Blended Agenda Matrix and its history to the ZAAZ Blog.

Every time I venture into the wide world beyond my daily haunts, I encounter people that, to put it bluntly, annoy the hell out of me. You know them: the “other” people, the ones who litter, talk on their cellphones while driving, drive atmosphere-polluting behemoths, snore loudly in airplanes on red-eye flights, actually enjoy watching TV commercials, click on the animated monkey in banner ads, listen to smooth jazz, etc, etc.

Yesterday, it struck me that the users I design user interfaces for might just be those same people. This led me to realize that we in the user-centered design industry often tend to idealize our users, creating soft-focus glamorized personas that portray them as honest, hard-working, likeable people. I wonder, though… do we do the world a disservice by glossing over their flaws?

What would happen if we tried to understand our users as real human beings, warts and all? How would our design process change for people who are dishonest, lazy, disagreeable, and – heaven forbid – evil?

As user-centered professionals, it’s our job to promote and defend the needs of all users, right?

No, not really. It’s our job to design experiences that simultaneously accomplish the goals of our employer or client while meeting users’ needs as best we can. Any user needs we meet must be within the subset that is correlated – directly or indirectly – with the business’s or organization’s needs.

In social networking sites and multiplayer online games, it becomes more crucial to understand the personality flaws of users. Every online community suffers from trolls and griefers who intentionally abuse other users and poison social systems with offending or annoying content or actions. They do this to gain attention, feel a sense of empowerment, get revenge on another user, or just entertain themselves. However, even “normal” users will game social systems to maximize their virtual wealth, improve their peer rating, gain attention, feel a sense of empowerment, etc. The design of any good online community will have checks and balances to prevent users from exploiting the system or abusing each other.

So what does this mean for the user-centered designer? I think it means that maybe we should drink our persona kool-aid with a grain of salt. Just as pharmacists include a list of side effects and contraindications for every drug they recommend, perhaps we should be detailing the weaknesses, negative traits, and potential errant behaviors of our personas. Or maybe we should create anti-personas for the trolls, griefers, or any other user we’d like to actively discourage.

What would follow from this would be designs that proactively inhibit or balance out negative behaviors. We would specify who we are designing against in addition to who we are designing for, thereby improving the focus on our true target audiences. We’d discover which audiences are supporting the bottom line of our clients. We’d create online communities that are self-regulating. Or maybe we’d just understand who our users really are: imperfect human beings with foibles and vices. Just like you and me.
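If you wanted to make this concrete in a research repository, a persona record might simply grow fields for flaws and a flag for anti-personas. This is a hypothetical sketch; the field names and the sample troll are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    goals: list = field(default_factory=list)
    strengths: list = field(default_factory=list)
    weaknesses: list = field(default_factory=list)   # the warts
    errant_behaviors: list = field(default_factory=list)
    anti_persona: bool = False  # True = someone we design AGAINST

troll = Persona(
    name="Grief-seeking Gary",
    goals=["attention", "sense of empowerment"],
    errant_behaviors=["flame wars", "gaming the rating system"],
    anti_persona=True,
)

# A design review might then ask, for each feature: which personas does it
# serve, and which anti-personas does it inhibit or balance out?
```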

On strategy

I’ve been realizing more and more that the crucial role user experience architects play in the software and web design process is that of a strategist. So I started reading up on strategy.

What I found is that, contrary to the paper-thin plots in movies, strategic planning does not happen in a subterranean room filled with maps of the world and scale models of ships. It doesn’t happen in the mind of a genius admiral at the helm of a battleship. More likely, strategy happens in a well-facilitated brainstorming session after everyone in the room has gathered and examined as much information as possible. Strategy is what you get after thorough research and analysis and testing and refinement. A strategic plan is a blueprint for achieving a set of goals.

Sounds similar to the user-centered design process, right? I have a sneaking suspicion that we are on the verge of a shift in the IA/UX discipline toward the strategic planning end of the customer engagement.

We cannot simply focus on drawing well-structured diagrams, creating fancy deliverables, and conducting well-designed user studies. These are the tools of our craft, but they are merely tactics for communicating, visualizing, and extracting the information needed to make strategic decisions. When our clients and business decision makers can follow the thread of user objectives and high-level business goals down to the smallest detail of a wireframe, we will have done our job as strategists.

As a result of lessons learned the hard way throughout my career and in my personal life, I’ve learned to stop worrying about perfection. In the business world, perfectionism leads to “analysis paralysis” — the lack of action due to too much information. In everyday tasks, perfectionism is the precursor to procrastination. The fear of doing anything imperfect leads me to do nothing, which is infinitely worse than doing something — anything at all — imperfectly.

As a result of this insight, I’ve been trying to introduce mindful imperfection into everything I do. When cooking, I avoid using measuring spoons. When creating artwork, I sketch as many ideas as possible on scrap bits of paper. When writing, I often type in stream-of-consciousness just to get my ideas out. At work, I start everything on paper, whiteboards, and unsorted lists.

I start from this raw material of apparent chaos and gradually make sense of it all. I combine and iterate and remix and refine. I adjust as I go, letting the patterns emerge and allowing the ingredients to speak for themselves, trusting my instincts. When it’s done, the product is often surprisingly good — far better than I could have achieved through planning and fretting.

In Japanese aesthetics, this principle is called wabi-sabi — nothing lasts, nothing is finished, and nothing is perfect. It’s responsible for some of the most beautiful and poetic (perfect?) works of art in the world.

It’s difficult to see how this principle may be applied in the user experience architecture process. I am constantly faced with a project schedule or budget that is too tight to do anything perfectly. I must choose among awful approaches — do we cut scope and deliver a half-implemented design? Do we scale back user research and requirements gathering to leave more time for design iterations? Or do we invest in understanding the problem in depth while using up all our budget for creating a solution to that problem?

What to do?

I don’t think there’s a simple answer to this problem, even though it’s one we consultants encounter on a daily basis. But here’s what I’ve learned: if all options are equally flawed, go with the one that has the least damaging long-term, ongoing impacts. Choose the approach that will set a precedent for future work. Create a path that defines your role, sets boundaries, and sets you up for ongoing success.

To do this, step back for a moment and consider your priorities and your overall strategy. For a consultant, the highest priority is to set up an interaction that strengthens the client’s trust. A client-consultant relationship based on mutual respect and trust is an extremely powerful and profitable strategy over time. Consider what will gain the client’s trust more: timely execution of short-term objectives or deep-thinking strategic analysis. If your client just wants to know that you are responsible and dependable, go with the former. If your client values your judgment and intelligence, go with the latter.

Whatever you do, don’t just sit there and refine your plan or deliverable over and over. Missing a deadline will jeopardize the client’s trust in you far more than delivering something less than perfect. If you present them with a product that is imperfect, be honest about your reservations and suggest the alternative approaches you’d recommend. Your honesty and openness will cultivate more trust than the most perfect process or deliverable in the world.

At the IA Summit this year, an area of the conference will be devoted to showcasing project artifacts: The Wall of Deliverables. It’s a fantastic idea — I love the concept of hundreds of practitioners sharing their best work, learning from each other, and advancing the craft.

But there’s a dark side to the Wall of Deliverables. For those of us in the private sector, especially we who work for consulting companies, our deliverables are our secret recipe. They are one of the differentiating factors that give our clients a visual indicator of the quality of our work. If our competitors knew what our deliverables looked like, we’d lose our competitive edge, right?

It seems to depend on one’s analysis of what makes us competitive. There are many factors that govern a client’s choice to work with a certain consultant. Customer service, personal connection, visual style, portfolio, service offerings, process, culture, etc. are likely more important than the format of a particular deliverable. To what extent do clients choose to work with us based on our documents? If we invent a new way of communicating design to stakeholders, how much of an advantage does that method give us over time? How much can we milk that Cow of Intellectual Property before a competitor comes up with a similar, more effective idea?

I posit that the Law of Diminishing Returns reigns in the Secret Recipe approach. There may be short-term gain from keeping the special deliverable under wraps, but, just as with restaurants, the final product must be consumed by someone outside the organization. And that someone can easily share with another person, reverse engineer the recipe, and make it their own… and claim that they invented it.

The alternate approach is to publish the recipe to the wider community, plastered with your company’s brand. If your method is truly novel, the idea will spread like wildfire, and practitioners all over the world will be passing around a template that is essentially a marketing tool. Potential clients, competitors, and recruits will see it, comment on it, share it, appropriate it, and improve upon it. And, just as with Jesse James Garrett’s IA Visual Vocabulary and Alan Cooper’s Personas, the creator of the recipe becomes a household name. The new deliverable can spark a revolution in the industry and further establish user-centered design professionals as essential, valuable facilitators of strategic change.

Never underestimate the power of a well-designed, informative, visually appealing document. I’ve learned that if there’s something that needs to be communicated and evangelized across an organization, it helps to use a format that pleases the eye from any distance — across the room, at arm’s length, or up close. Provide a concise, at-a-glance summary, but pack it with detailed information backed by research and analysis. This enables the audience to digest it progressively. If we’ve done our due diligence in creating the document, they’ll be able to gain insight, identify problems, and understand the depth of our work.

However, sexy deliverables can be powerfully destructive as well if they are poorly researched, improperly constructed, presented without guidance, or disseminated out of context. Clients and stakeholders may be lured by the prospect of gaining an incisive communication tool, but that knife can cut the person who wields it. Or, to stretch the metaphor, the knife can be totally ineffectual if not forged and sharpened by a professional.

An example of such a deliverable is the persona. Most personas are easily recognizable — a one-page summary of a member of a user segment, often including:

  • a fake name and job title
  • a stock photo
  • a summary paragraph
  • a few bulleted lists of characteristics and tasks
  • maybe a little graph showing their rating on a couple behavioral or aptitude scales

Sounds easy, doesn’t it? All these elements could be completely fictitious. Just get a copywriter to make up some stuff, hand it to a graphic designer, and you’re done, right? Wrong, wrong, wrong. Personas are like the tip of an iceberg — they are the executive summary of a mountain of documentation, the culmination of weeks of extensive user research. An even cheesier simile: a persona without the attendant background work is like a mannequin dressed up as a doctor. From far away, it looks like it might be able to help you, but good luck getting it to give you CPR.

It would be far safer to create a deliverable that can’t be misused in this manner. Make it a visually pleasing, giant poster with multiple levels of detail, encapsulating in its form factor all the depth of the background work that went into it. Make it challenging to construct and difficult to fake.

At ZAAZ, we have a deliverable called a Blended Agenda Matrix (a.k.a. “BAM”) that puts this into practice. The last one I did was for a large national financial institution with stakeholders across all lines of business; it was based on the work of a team of user researchers and consultants over several weeks. By the time I was finished analyzing and summarizing their work, the poster was 6′ x 4′ with 12pt text. The amount of information that goes into a BAM poster is daunting, but it connects high-level business and user goals down to low-level tactics and metrics in a visual and digestible format. It exposes the patterns that emerge from the apparent chaos of competing business and user objectives and provides a guide to accomplishing those objectives in the most strategic manner. Most importantly, it’s a sexy deliverable, an effective sales tool and communication artifact that cannot be easily misused.

(Unfortunately, I’m not at liberty to post a sample BAM here yet…)

As a user experience architect, it’s always disappointing to see my wireframes implemented as graphic design with little more than a color treatment.

Such a site went live the other day. While I congratulate the client and their army of content editors on implementing the usability guidelines and content strategy that our team recommended, the visual design leaves much to be desired. The boxes and blocks of color are straight from the wireframes: visual cues that were simply meant to call out certain types of functionality were carried over wholesale into the graphic design.

It will be interesting to see whether the site performs well, despite the lack of a sexy design. It’s for a client whose users would actually expect and appreciate a low-budget look & feel.

… is the library’s website.

So why is there only one person running it?

A certain large local public library system has that one poor systems librarian doing the work of 8 people:

  • managing the online public access catalog,
  • handling the integrated library system,
  • managing the web server,
  • managing the database vendors and databases,
  • developing web applications that interface with those databases,
  • coordinating the content authoring workflow,
  • editing and uploading site content,
  • and managing one web editor and a part time graphic artist.

This valiant librarian is working 80+ hours per week, and the redesigned website that I and my fellow consultants have provided her will require even more of her already-scarce time to truly realize the library administration’s vision.

Think about it. Running even the smallest library branch requires the coordinated teamwork of several people. But the library’s website, no matter how good the information architecture, content management system, or design, cannot survive on one person’s effort. It receives many more visitors than even the busiest downtown branch of the library system. It answers far more questions per day than the busiest reference desk. It is seen by more passersby and reaches more potential patrons than the most beautifully designed or most visible library building. Satisfying the needs of all those users requires considerable effort and attention — far more than one person can supply.

Radical Transparency

I’ve been pondering lately the apparent contradiction between collegial collaboration and corporate competition. This issue became all the more relevant when I attended the IA Summit this past week. Surrounded by my fellow user experience designers and information architects, many of whom work for my employer’s direct competitors, I suddenly felt the familiar twinge of cognitive dissonance (which I’ve come to think of as my “Spidey sense”). There we were, trading best practices, tips and tricks, research findings, processes, and design patterns — with people who were representatives of our rivals in the marketplace.

We’re competing for recruits. We’re competing for clients. We’re competing for market share. So why the heck should we be sharing our companies’ secret sauce with each other? Shouldn’t we be hiding our knowledge from each other in order to gain a competitive advantage?

Well, no. Obviously, we aren’t breaking our non-disclosure agreements with our clients. We aren’t showing screenshots of unreleased products or sensitive data. We’re not that stupid. But we share the lessons we’ve learned for the mutual benefit of our colleagues. By making our processes and patterns available to each other, we hone our craft for the benefit of all, with the expectation that others will reciprocate in time. Design patterns, heuristics, and processes that are grounded in research and verified by colleagues across the industry can be reused with confidence, saving everyone time and money. This leads to better (and less expensive) design, since we don’t need to reinvent the wheel for every project. Better design raises the standards of the entire industry, leading customers to demand better user experiences, leading to increased demand for better research and better design. And the virtuous circle (spiral?) begins another turn.

There’s an article in this month’s Wired on the concept of “radical transparency” — gaining a competitive advantage by being completely (or mostly) open. The author posits that doing so is beneficial for a company’s reputation in the marketplace, since customers will trust a company that bares all over a competitor who hides behind a PR smokescreen. A company’s reputation amongst its competitors could serve as a marketing tool as well — for example, get your colleagues talking about your company by having them cite your research, or send a large contingent of designers to a conference to create buzz that your company is a leader in the design community.

Our employers have little to gain from guarding their designers from each other. In fact, they run the risk of falling behind their competitors. Professional development conferences ensure that designers keep abreast of their counterparts in competing companies and provide an opportunity to poach the best talent from competitors. This is the nature of capitalism. In the IA/UX community, this free market also happens to benefit customers and end users.

I’m on Confab!

I throw in my opinions with five fellow mid-30s Seattleites on the Confab podcast about the terrible customer experience of credit reporting companies, business plans that thrive off of bad usability, Vista, the Wii, dying businesses, and downloadable video.

Reviving ASIS&T PNW

I’ve been appointed chair of the Pacific Northwest Chapter of ASIS&T for 2007. The chapter has been mostly inactive over the last two years, at least locally. However, the student chapter at the University of Washington has been the exact opposite.

The majority of the Pacific Northwest’s ASIS&T members are in the Seattle area, thanks to the constant supply of student members and recent grads from the UW Information School, who join primarily for the professional networking opportunities. Unfortunately, like myself, many of them find that the regional chapter is nowhere near as involved as the student-run organization. As a result, many let their memberships lapse, opting for more lively organizations, such as ALA, PNLA, ACRL, SLA, SIG-CHI, etc., etc. Well, that’s going to change, if my fellow chapter officer, Corprew, and I have anything to say about it. Here’s the plan, wo/man:

By far the most active portion of the ASIS&T organization is the Information Architecture special interest group (SIG-IA), and the majority of local ASIS&T members are either IAs professionally or do IA in the course of their jobs. There are three local organizations that cater to IAs and their ilk (interaction designers, user experience designers, whatever): IAI, SIG-CHI, and ASIS&T. And a major annual conference many of us attend is the IA Summit, which is organized jointly by SIG-IA and IAI. So, why the heck don’t we get together socially on a monthly basis? Say, second Thursday of every month at the Elysian in Seattle? Good? See you there. Oh, and bring an interesting topic to chat about. And business cards. Let’s put all those bright-eyed students in touch with local professionals, so they stay involved as they start their careers.

We’re also going to hold some more formal events, such as an annual meeting (held in Seattle, of course). And we’ll invite members from the other orgs to attend.

I’ll also be communicating with the membership more often via a monthly newsletter, which I’ll send out on the first of every month. In that newsletter, I’ll collect all the various announcements about IA and library-related events going on. And I’ll try to make it somewhat entertaining…

[Yes, it’s been almost 2 months. Yes, I know, I’m a terrible blogger. Work gets in the way.]

Pursuant to my previous post “On Systems Librarianship”, I will be moderating a session at the annual meeting of the ASIS&T PNC on the role of Systems Librarians and Information Architects in getting people to work together. You see, the theme of the conference is “Building Bridges: Overcoming the Barriers To Data Interoperability,” and the call for proposals asked for research and reports on how information professionals were enabling the smooth transformation of data from one system to another.

In my opinion, that’s mostly a solved problem, given the wide variety of computer programs and routines that are all tuned specifically for processing data — text, numbers, images, binaries, and so on — but what is NOT solved is how to transmit meaning from one context to another. You could have the best data-sharing protocol or content management system in the world and still fail in the end due to miscommunication between the techies and… well, everyone else. For instance, look at the word “groupware”. To some, it means videoconferencing, shared drawing, collaborative VR, MUDs and MOOs, and other fancy technologies that are “virtual” analogues of real-life interactions. To others (particularly IT departments), “groupware” means shared calendaring, email, and discussion boards. Now, say the CEO or library director passes down an edict to the IT department that they need to support “groupware” ASAP. Imagine the derisive snorts and invective each camp will direct at the other when they finally realize, after weeks of confusion, what the other really means. Think of the irreparable damage to human relations between the administrators and developers that will cripple progress for years to come. Who would care then if MARC records could be seamlessly converted to XML-RDF?

It’s a human problem that requires a human solution. And it’s precisely where people like IAs and IT librarians come in. We need people who have a systems1 view, who can see the big picture, and who are bilingual (or multilingual!) in the jargon of technology AND business. Who better to fill that gap than those who are skilled in discovering user needs and translating those needs into designs and guiding those designs through to development and implementation?

This is the topic of the panel I’ll be moderating at ASIS&T PNC 2005. The abstract follows:


Session Title: Lingua Franca: How do we facilitate human interoperability?
Session Abstract: The barriers to data interoperability between information systems are nothing compared to the barriers people associated with those systems build between themselves. This is especially true between non-technical stakeholders and the technical staff who must implement and maintain the information systems. What is the role of information professionals in bridging this gap?

The session will include a panel of 4-6 systems librarians and information architects. Panelists will be invited to discuss how information professionals (especially systems librarians and information architects) can act as translators between non-technical stakeholders, end users, and technical staff throughout the lifecycle of an information system. Focus will be on practical strategies and tools of the trade: visual language (e.g. concept maps, flowcharts, ERDs, UML, IA diagrams, wireframes, etc.), documents (e.g. prospectuses, business cases, paper prototypes, technical specifications, etc.), and communications technology (e.g. groupware, content management systems, etc.).


[If you’re an IA or Systems Librarian, can be in Seattle on May 14, and would like to be on the panel, let me know. ASIS&T can’t afford to pay travel expenses, and I don’t know if you can get any kind of discount on registration. I’ll post an update when I know for sure, though…]

[UPDATE: Yep, your registration fee for the conference will be covered. Here’s the link to the chapter web site again.]

1 By “systems” I mean it in the sense of holistic “systems science” or “ecosystem” or “systemic”, not “IT Systems”.

RSS & CMS

My new task is to find something that does RSS feed generation. For the lazy, there’s stuff like ListGarden, which has a desktop client and a Perl CGI. Unfortunately, it has no really good way of controlling who can edit what. So a content management system may be in order.
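If we end up rolling our own, the feed generation itself is the easy part. Here’s a minimal sketch using the XML::RSS module from CPAN (the channel and item values are invented for illustration):

use strict;
use warnings;
use XML::RSS;

# Build a minimal RSS 2.0 feed.
my $rss = XML::RSS->new( version => '2.0' );
$rss->channel(
    title       => 'Library News',                      # invented channel info
    link        => 'http://www.example.edu/news/',
    description => 'Announcements from the library',
);

# One item per announcement; in practice these would come from a database or CMS.
$rss->add_item(
    title       => 'New groupware pilot',
    link        => 'http://www.example.edu/news/groupware-pilot',
    description => 'We are evaluating open source groupware systems.',
);

$rss->save('news.xml');    # or print $rss->as_string from a CGI script

The hard part is everything around it — deciding who gets to post what — which is exactly where a CMS earns its keep.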

Thankfully, there are a couple of good CMS comparison web sites out there:
http://www.cmsmatrix.org/
http://opensourcecms.com/

Moodle

I guess this is really part of that whole groupware thread, but I got tired of keeping track of how many parts there were. It’s apparent to me that this thing may go on indefinitely.

Moodle is an open source course management system that works pretty much the same as any other groupware system. It’s PHP-based, and the only major differences from other PHP-based systems like phpGroupWare or PHProjekt, besides being geared toward teachers and students, are that it actually works and has decent documentation.

I installed it and got most parts working in record time with little hacking required.

It’s been nearly 3 months since I started this blog, and I haven’t really talked about Systems Librarianship — or anything librarian-ish, for that matter — nearly enough. This is mostly due to the nature of the work I’ve been doing recently, which has involved installing and testing and debugging web applications. It’s time to take a step back and look at what it all means.

As I sit here in my office staring at the whiteboard, which is covered with diagrams and flowcharts hastily drawn to illustrate the flow of information from user to system to system and back again, I reflect on why this job exists at all. Why do we need librarians in IT departments anyway? Shouldn’t the librarians focus on reference and classification and selection and all those other bookish concerns? And shouldn’t computer support personnel just stay in their fortress of circuit boards and techno-jargon, honing their 1337 h4x0rz sk1ll5 and preventing the script kiddies and cyber terrorists from taking down the network?

Well, in answer to these questions, I offer the following illustration from this Sunday’s Dilbert: http://dilbert.com/comics/dilbert/archive/images/dilbert20050101046179.jpg

Notice that, in this comic, the IT department is presented as sadistic and needlessly legalistic, a stereotypical portrayal of the stereotypical computer support department. The reasoning behind the confiscation of “non-standard” equipment is that, in order to save money and staff time, the organization can only support a limited range of technology. Every boutique piece of hardware or software adds yet another expense, yet another line item, yet another headache on top of the already thin budget and overloaded schedule tech departments must handle. Such a perspective is technology- and cost-centric. And everyone knows the bottom line — especially in libraries — is king. Right?

Well, let’s look at the stereotypical portrayal of librarians: maternal, dour, overly-educated-yet-technologically-challenged protectors of arcane knowledge. I have met several librarians who, at first glance, fit this description perfectly. However, being a librarian, I know that even the most stereotypical librarians care deeply about helping and educating the public. Yes, we are keepers of the sacred word, but we’re here to assist real people in finding real answers to real questions. We are user-centric.

So what happens when you have a library, full of librarians and library staff, that also happens to be full of computers — public terminals, staff workstations, printers, scanners, catalog servers, web servers, application servers, file servers, backup servers, database servers, print servers, routers, switches, wireless access points, etc. etc. etc.? You get an IT department to manage all that technology on a shoestring budget. You get aging reference librarians who are 5 years from retirement being asked all manner of tech support questions by patrons. You get stereotyped user-centric librarians criticizing stereotyped computer geeks for being technology-centric and budget-conscious. You get civil war.

UNLESS… this is the point of this whole spiel… unless you have Systems Librarians there to fill the gap between users and technology, between the IT department and the librarians. They bridge that crucial disconnect, providing the user-centered perspective to the bit checkers and bean counters while keeping the limitations and challenges of technology management in mind.

Systems Librarians are bilingual, translating techno-jargon into natural language and back again; they are cyborgs, chimaeras, the moderators between the binary of circuitry and the poetry of meat. They are catalysts of change. IT Librarians bring a holistic perspective of the operation of the library to every interaction and can see possibilities for improvement where non-technical library staff or non-librarian technical staff would see only barriers and the fog of indecipherable complexity.

Thus, if I often switch abruptly in this blog between apparently disparate topics such as LDAP and library committee formation, know that it’s just part of my job as a Systems Librarian. I’m just exercising my babelfish.

Groupware (part 3)

I’ve now had the opportunity to install and attempt to integrate two open source PHP-based groupware systems (PHProjekt and phpGroupWare), and the verdict is not good. Although they mostly work as advertised within a limited operating environment, both systems have serious problems with stability, reliability, support, and documentation.

It’s looking more likely that we will go with a best-of-breed approach. In this scenario, we would select separate packages from separate vendors and glue them together with some common interface. Microsoft provides products for such a scenario, allowing one to tie Office, Outlook, SharePoint, Content Management Server, SQL Server, and Windows Server together with Exchange. Unfortunately, licensing fees, disdain for MS products, incompatibility with everything else, and security concerns have kept us from selecting the Microsoft route — except as a last resort.

However, a couple open-source alternatives to MS Exchange exist:

Both currently use WebDAV and XML as an interface for communication between components of a groupware system.
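Since WebDAV is just HTTP with XML bodies, it’s easy to poke at such a server by hand. Here’s a rough sketch of a PROPFIND request in Perl with LWP; the server URL below is a placeholder, not a real host:

use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request;

# PROPFIND asks a WebDAV server to list the properties of a resource.
my $req = HTTP::Request->new( PROPFIND => 'http://groupware.example.edu/calendars/' );
$req->header( Depth => 1 );    # the resource itself plus its immediate children
$req->content_type('text/xml');
$req->content('<?xml version="1.0"?><D:propfind xmlns:D="DAV:"><D:allprop/></D:propfind>');

my $ua  = LWP::UserAgent->new;
my $res = $ua->request($req);
print $res->status_line, "\n", $res->decoded_content;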

LDAP

[Yes, I know it’s been more than 2 weeks since my last post…]

Due to the needs of the groupware project I’m working on, I’ve been exploring the world of LDAP.

Unfortunately, I’m trying to query our Windows Active Directory to discover the LDAP attribute names for use in a PHP script — and the Microsoft documentation for the LDAP interface to Active Directory is cryptic and completely unhelpful. I’m sure what I want to know is in there, but I don’t have time to read it all. Thankfully, some nice people have written some more useful guides to the LDAP attribute names in Active Directory.

Especially of help is the ldapsearch command line tool, which allows one to query an LDAP server and get all kinds of useful user, group, and permissions information. Well, at least it allows one to do it if one knows what one is doing. Through trial and error with the syntax, I figured out a Linux command-line query that would allow me to see the attributes available in the LDAP-ified Active Directory:

ldapsearch -h "ldapserver.mydomain.edu" \
    -b "CN=Aaron Louie,OU=Library Systems,OU=Staff,DC=subdomain,DC=mydomain,DC=edu" \
    -D "cn=pubuser,ou=public,dc=subdomain,dc=mydomain,dc=edu" \
    -w "********" \
    "objectclass=*"

Check out the ldapsearch manual pages for an explanation of what all those flags mean. Once I can get our PHProjekt installation to map the resulting LDAP attributes to meaningful username and address book entries, we’ll be in business.
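Once the attribute names are mapped, reproducing the query from a script is straightforward. The PHProjekt integration itself will be in PHP, but here’s the equivalent query sketched in Perl with Net::LDAP, using the same sanitized server and DNs as above:

use strict;
use warnings;
use Net::LDAP;

my $ldap = Net::LDAP->new('ldapserver.mydomain.edu') or die "Can't connect: $@";

# Bind as the public lookup user (password masked here, as above).
my $mesg = $ldap->bind(
    'cn=pubuser,ou=public,dc=subdomain,dc=mydomain,dc=edu',
    password => '********',
);
die 'Bind failed: ' . $mesg->error if $mesg->code;

# Fetch every attribute of a single entry, just like the ldapsearch query.
$mesg = $ldap->search(
    base   => 'CN=Aaron Louie,OU=Library Systems,OU=Staff,DC=subdomain,DC=mydomain,DC=edu',
    scope  => 'base',
    filter => '(objectclass=*)',
);

foreach my $entry ( $mesg->entries ) {
    foreach my $attr ( $entry->attributes ) {
        print "$attr: ", join( ', ', $entry->get_value($attr) ), "\n";
    }
}

$ldap->unbind;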

Groupware (part 2)

Like any good systems librarian, I should probably figure out a few things before suggesting a groupware system. You know, little stuff like…

  • Why do we need groupware anyway?
  • Who are the stakeholders?
  • Who are the end users?
  • What do those users need?
  • How are they meeting those needs now?
  • What constraints (technological, budgetary, etc.) are we working with?

I should also know the who, what, where, when, and how of this project and its possible implementation. In a large library with tens of thousands of users, all this should be figured out by a task force, which should ideally be made up of:

  • representatives of the end users (staff)
  • someone from Administration
  • someone who does computer training/support in the library
  • someone who’s studied the organizational problems in our library
  • someone from Systems (like me)

This process should be typical in any major IT project in a library. At this point in our process, it’s not clear whether we’re really going to commit to a groupware solution. To gauge need and interest, we’ll set up a prototype system as a “straw man” and present it to the administration. If they feel it’s worth spending time and money on, we may form a groupware task force to figure out which vendor meets our needs. The task force would need to write up an RFP, schedule demos by the vendors, collect proposals, evaluate proposals, etc. etc.

But first, let’s just focus on selecting our “straw man” prototype. Like I said back in my first post on this subject, I’d like to create a feature comparison table to track which system does what. Unfortunately, most commercial groupware vendors are less than forthcoming about the specs of their systems in order to protect their intellectual property. Thus, we’ve decided to evaluate open source packages only. An added benefit is that we can really get into the guts of open source systems and figure out how they’re built.

Our development server is running Linux and PostgreSQL, and our university’s authentication service is LDAP-compatible. We also need a system that is being actively developed (i.e. updated in the last year). Any system that does not meet these minimum requirements is ignored.

So, without further ado, here’s the resulting groupware comparison matrix (Excel spreadsheet). You’ll notice that the vendors are sorted according to the number of results in a Google search. It’s a rather arbitrary metric, but it’s useful for seeing how popular (or controversial) a system is [Credit goes to Bill for this idea]. Although there was no clear winner, the system we will probably go with is phpGroupWare. It’s lightweight, supports a good number of the features we need (PDA syncing being the most critical), and has the added bonus of running under Apache, so we don’t have to run another server environment.

[Update 12/21/2004]

Well, after playing with phpGroupWare, I’ve decided it’s too unstable and disorganized to implement. Plus, recently discovered security vulnerabilities make me wary of considering it as an option. I’ve decided to play with PHProjekt instead. There’s a possibility the PDA syncing could be made to work with it. Maybe…

StumbleUpon

I know, this is probably old news by now, but I can’t resist commenting on a meme that I think has a lot of potential for a real impact on the way people find information. StumbleUpon is a toolbar that, with the click of a button, appears to send you to a random page. But it’s not just any random page. After you install StumbleUpon (available as a Firefox/Mozilla extension), you select from a list of about 500 topics you are interested in. When you click the Stumble! button, it matches those topics against the thousands of sites other people have reviewed using the “I like it!” and “Not-for-me” toolbar buttons. So what you get is highly targeted. In fact, after selecting my 30 or so topics, the first three links I got were Slashdot, HowStuffWorks, and New Scientist. In other words, dead on. I read Slashdot every day, subscribe to New Scientist, and am obsessed with finding out how stuff works.

So, as you surf the web, be it for work or play, you review the sites you like (or don’t like) with those toolbar buttons. And, on those days when you just want something new, you can use StumbleUpon to explore new sites in any of the topics you’ve chosen. There are some spyware concerns with StumbleUpon, but no more than with the Google Toolbar. I’m not too worried about it — yet.

The thing that fascinates me about this tool is the great potential for a kind of “readers’ advisory” or Amazon Recommendations for the public Web. It also allows some degree of personal knowledge management, as you can add a review of any site you come across. Those reviews are shared with others, but you can also view the collection of all your reviews in one place. Even without this aspect, the signal-to-noise ratio has the potential to be very high, so information encountering and foraging behaviors can become very efficient.

One gripe, though… they don’t have a “Society > Libraries” topic, but I sent in a suggestion to have it added.

Groupware

My current project is to identify a decent groupware system to replace the ad hoc, messy combination of poorly organized HTML pages and fileshare directories. The most necessary features include shared calendaring, instant messaging, meeting room booking, PDA syncing, and a wiki or CMS. Oh yeah, and it would be nice if it ran on Linux. As far as I can tell, there are few systems that do all of these well. I’m in the process of creating a matrix that compares the best candidates and all their features, similar to Lucane Groupware’s feature comparison.

One major challenge in selecting a groupware suite is the bewildering variety of groupware systems out there. Looks like I have a long way to go…

[Update 11/24/2004]

I just stumbled upon this database of groupware tools courtesy of the Alliance for Community Technology. It’s far from complete, but it’ll definitely make my job easier. Oh, and the database site runs on Zope. There’s also the less helpful — but still informative — Google Directory topic for Groupware.

One of the major issues in maintaining a large web site with many authors is that of keeping everything up to date. In our system, authors edit HTML pages on a Windows file share, and those same files are served up on the web site. Unfortunately, this means that any changes the authors make are immediately reflected online, blemishes and all. While the ideal solution to this problem would be a content management system like Plone, a stop-gap is needed to provide some sort of staging service between the authoring server and the web server. The only problem is that our authors are accustomed to the Windows Explorer interface, and we’re trying to migrate our web servers to Linux. The solution? Synchronization with rsync. It slices, it dices, etc.

Unfortunately, the rsync server only provides two interfaces: command line and SSH. We’re syncing between two mounted file systems on a Linux box, so we don’t really need the SSH one. So that leaves us with the ugly command line. Fortunately, there’s a Perl module called File::Rsync. It basically wraps the ugly exec command in an easy(er)-to-use Perl API. It’s still ugly, and, yes, I know, it’s Perl. But it works. Kind of.

A big complaint about rsync is that, if you sync a gigantic file tree recursively, you risk running out of memory. Why? Because rsync loads the entire tree into RAM before running the sync command, eating up about 100 bytes of memory per node. Bad, bad, bad. Well, fear not, for I have written a Perl script that forces rsync to sync just one directory at a time, holding in memory only as much as is needed to navigate to the directory being synced. It’s not quite complete yet, but I’ll post links to the source when it is…

[Update 11/29/2004]

Here’s the source. Please note that this was written for a very specific context and is only presented as an example of what one can do with rsync. It will not work for any domain outside the University of Washington Libraries.
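Stripped of the UW-specific bits, the core idea looks something like this (the paths below are placeholders):

use strict;
use warnings;
use File::Find;
use File::Rsync;

my $src  = '/mnt/authoring';    # placeholder paths
my $dest = '/var/www/html';

# --dirs transfers a single directory's contents without recursing,
# so rsync never holds the whole tree in RAM at once.
my $rsync = File::Rsync->new( dirs => 1, times => 1, perms => 1, 'delete' => 1 );

# File::Find walks the tree one directory at a time (parents before
# children), and we hand each directory to rsync individually.
find(
    {
        no_chdir => 1,
        wanted   => sub {
            return unless -d $File::Find::name;
            ( my $rel = $File::Find::name ) =~ s/^\Q$src\E//;
            $rsync->exec( src => "$src$rel/", dest => "$dest$rel/" )
                or warn "rsync failed for '$rel': ", join( "\n", $rsync->err );
        },
    },
    $src,
);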

Another interesting feature of this script — and perhaps the centerpiece of its functionality — is its ability to authorize users via a Java web service that goes by the imaginative name, The Authorization Project, or, unofficially, “Authorizator”. With our synchronization script, users are authenticated via the university’s central UW NetID web service, which uses UW’s own pubcookie. This just means that we already know who our users are — we just have to find out what they are allowed to do.

This is where Authorizator comes in. Since our network admin already manages the myriad of Windows users and their permissions on the Windows file share, the authorization information is already there. The Authorizator web service provides the glue that allows us, with the information submitted in an HTML form, to find out if someone is allowed to synchronize a particular directory. Authorizator takes an HTTP query (with username, directory/file path, and domain parameters) and returns permissions data in the form of XML. It’s not pretty, but it works for now.
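The script’s side of that conversation looks roughly like this; the endpoint URL and parameter names below are stand-ins, not the real service’s:

use strict;
use warnings;
use LWP::UserAgent;
use URI;

# Hypothetical endpoint and parameter names -- the real ones differ.
my $uri = URI->new('http://authz.example.edu/authorize');
$uri->query_form(
    username => 'jdoe',
    path     => '/web/staff/systems/',
    domain   => 'LIBRARIES',
);

my $ua  = LWP::UserAgent->new( timeout => 10 );
my $res = $ua->get($uri);
die 'Authorizator request failed: ' . $res->status_line unless $res->is_success;

# The body is XML describing the user's permissions; parse it with your
# XML module of choice and decide whether to allow the sync.
my $xml = $res->decoded_content;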

Secret links

In the past, I’ve maintained a secret page of links for jumping to various resources I use frequently. Many of these links require authentication to view, and a few are broken. This blog may soon take the place of that page. Several people I know use such secret link pages as a way to access their bookmarks from any computer. Unfortunately, these pages are in HTML and can grow unwieldy after a while. Perhaps a blog with a categorization or search system can solve this problem…

Mission:

This blog will record my thoughts about the technologies and issues I encounter during my career as an information technology librarian. Since I’m currently a (temporary) systems librarian at the University of Washington Libraries, I’ll focus on those platforms and applications which most affect my work — Innovative Interfaces Inc., Linux, Apache, PHP, Java, Perl, and the wide variety of open source tools available through SourceForge. I will try to put up at least one post per week.

Essential links:

As do most techies, I get most of my tech news from Slashdot. And, like many librarians, much of my library news comes from LISNews. I get my general news about the nation and the world from Google News, and the more politically skewed news/commentary comes from Drudge Report and Daily Kos. Since this is my professional personal k-blog, I’ll avoid editorial commentary on non-library or non-tech issues. Such things are covered elsewhere.

Personal info:

My personal web site and online portfolio is available here.
