The woman, a test subject, sits at a computer listening to a set of scripted instructions.
"Tell me what you think, not what I want you to think.
"You can leave at any time.
"I'm here to learn about how travelers obtain traffic and road construction information through a Web site."
Conducting the test is a University of Texas master's student in information science, Donna Habersaat. She watches and answers questions as the test subject clicks and scrolls through drivetexas.org, a Texas Department of Transportation Web site for travelers.
In the next room at UT's School of Information eXperience Lab (or IX Lab for short), Habersaat's test partner Tom Reavley is watching the woman's online actions through one-way glass and on a computer monitor that shows what she's doing. He listens in on a set of headphones and checks off items on a list of ways a person might find information on the Web site. A few minutes later, he says, "She's doing pretty good. She actually got through a lot of things people made mistakes on, but now she's hitting a mistake that nobody else has made so far."
Habersaat and Reavley will collect and analyze data from multiple test subjects, turn it into recommendations on how the Web site might be improved and hand that information over to TxDOT, where Habersaat works. Habersaat suggested the project to her supervisors, turning a class project into a real-world usability study.
Margo Richards, director of TxDOT's travel information division, sat in on some of the testing. The site, which launched in May, is still being improved, and testing like this could help get rid of glitches or problems. "It really needs to be reliable for people to continue to use it," she said. "If we are going to put the effort and time we have into the site, it needs to be user-friendly and accurate."
The TxDOT study is just one example of "usability testing," a broad term for a process of improving products based on how people really use them. If you've ever been stymied by a badly designed website, frustrated by a cell phone with Byzantine menus or faced with setting up a DVD player that came with bad instructions, you've been the victim of bad design.
The guru of usability at UT's School of Information is associate professor Randolph Bias, who as director of the lab has been teaching students how to test for the past 10 years. Before that, he worked as a usability expert at Bell Labs, IBM and BMC Software.
When he talks to people about usability, he tends to refer to it as a religion (as in, "When did you get the usability religion?"). While the methodology of studying whether products are easy to use has been in place for decades, it's only been in the past 15 years that companies have gotten the message, Bias says.
"All this technology and all this information, if human beings can't gain access to it, it's of no value. The last time you went to a website and you tried to do something and you couldn't, that's because it's bad usability. It's not because you're stupid. It's because they didn't design for us, the target audience," Bias said.
If that sounds painfully familiar, and if there are proven methods for usability testing, why do we still deal with so many badly designed products and websites?
Bias says that many companies still test as an afterthought, just before a product release instead of throughout the design process. Others don't test well or simply refuse to believe they haven't invented the greatest thing since the Post-it. That's why good usability testers are important.
"Usability people are in the business of telling people their baby's ugly," Bias said. "We don't just say, ‘Your baby's ugly,' we say, ‘Here's how we make this baby pretty.' And it's not just about pretty; it's about useful and usable."
In the mid-'90s, just as usability was starting to get a seat at the tech design table, Bias co-wrote a book, "Cost-Justifying Usability." That book made the argument that companies shouldn't usability test just to make customers happy but to be more profitable. "You do it for business reasons. You'll sell more, you'll have more customers and less customer support. At the same time, you'll get good press," he said.
But how do you teach students to be good product testers? In class, Bias uses humor, Skype calls to usability experts like "Don't Make Me Think" author Steve Krug, and even role-playing.
In one October class session, Bias and master's student Simon McCann demonstrated what not to do in a usability test. McCann played a frustrated test subject ("I'm lost") and Bias portrayed an unethical tester who wouldn't let his subject leave and undermined the subject's confidence ("Well, nobody's ever clicked that. That's not necessarily stupid.")
The lesson was that how you test is as important as what you test and the data you get back. Testing, Bias says, must be done ethically and without the tester getting defensive in the face of feedback or going into "teacher mode," guiding and influencing a test subject. But in the IX lab, students learn by conducting their own tests and turning those observations into suggestions for improvements, often for nonprofits or government agencies that can use the help.
It's easy to see where in the tech world usability can be a problem or an advantage. For the past decade, Apple has made a run in computers, music players and phones with products often considered more intuitive and user-friendly than those of competitors. Meanwhile, Microsoft currently stands at a crossroads with Windows 8, a new version of its operating system that has been slammed by some usability experts for being confusing to use.
While testing thoroughly and properly doesn't necessarily mean a product or site will be improved -- designers can ignore usability data due to time constraints or cost -- designing without it can be a recipe for disaster.
"It's impossible to have good intuitions about what other people's experiences are until you watch them," Bias said.