Tom Reamy

Every time I interview another information architect, I'm blown away yet again by the backgrounds and career paths they've taken. Tom Reamy, an information architect at Charles Schwab, is no exception. He's been a programmer, a consultant, and an entrepreneur. He has degrees in English, Philosophy, and the History of Ideas. He's even designed a couple of commercial computer games. Of course he scuba dives (hey, I don't pretend to understand the connection, but it does seem to follow). And as another who's been lured and jilted by Artificial Intelligence, I wanted to get Tom's thoughts on how AI and IA relate (or don't).
-- Lou Rosenfeld

Lou: Tom, in a nutshell: what sorts of information architecture work are you doing for Schwab?

Tom: I was hired about nine months ago and the first decision was whether to call the position "information architect" or "knowledge architect". As we had no real information architecture on the Schwab Intranet (called the SchWEB), we called me an information architect.

My job essentially consists of introducing, planning, marketing, and implementing an overall information architecture for the SchWEB. It involves creating a taxonomy of content across the 300-600 web sites that make up the intranet, introducing metadata to hundreds of site owners, improving search and navigation, and consulting with existing and new site developers.

The effort revolves around two main themes: organization of content and coordination of content owners. As daunting as organizing content across hundreds of independent web sites appears, coordinating their owners has been by far the more difficult task.

Lou: What sorts of tasks are on tap for the SchWEB during the coming year?

Tom: We have in place the beginnings of a good taxonomy (largely thanks to Yael Schwartz who works for me), which we are about to implement. In addition, we've started some very interesting knowledge management efforts that build on that taxonomy and could turn out to be of very high value for Schwab as a whole.

The foundation for all these efforts is the creation of a knowledge map of Schwab. The map consists of an organization and categorization of content, user populations, and tasks. The map will then create a structure to support the linking of all three: content, users, and tasks.
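The linking idea behind such a knowledge map can be sketched in a few lines of code. This is a toy model of my own, not Schwab's actual system: the three populations (content, users, tasks) are just labeled items, and typed links connect them so you can ask, for any item, what it relates to on the other axes.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeMap:
    """Toy knowledge map: items of kind 'content', 'user', or 'task',
    connected by symmetric links. All names below are illustrative."""
    links: set = field(default_factory=set)  # tuples: (kind_a, id_a, kind_b, id_b)

    def link(self, kind_a, id_a, kind_b, id_b):
        # Store each link in both directions so lookups work from either side.
        self.links.add((kind_a, id_a, kind_b, id_b))
        self.links.add((kind_b, id_b, kind_a, id_a))

    def related(self, kind, item_id, target_kind):
        # Everything of target_kind linked to the given item.
        return sorted(b_id for a_kind, a_id, b_kind, b_id in self.links
                      if a_kind == kind and a_id == item_id and b_kind == target_kind)

km = KnowledgeMap()
km.link("task", "open-account", "content", "account-forms")
km.link("task", "open-account", "user", "branch-reps")
km.link("content", "account-forms", "user", "branch-reps")

print(km.related("task", "open-account", "content"))  # ['account-forms']
```

The point of the structure is that any one axis (say, a task) becomes an entry point into the other two, which is what lets a map like this support both navigation and audience-targeted content.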

Lou: Tom, you admitted to me that at one time you'd been "seduced" by artificial intelligence (AI). I won't hold it against you; after all, who hasn't succumbed at one point or another? But what's really more interesting to me is how we eventually are deprogrammed. What exactly happened to sour you on AI?

Tom: What happened to me is what happened to a lot of people: I tried to develop a real world, practical application and found out how far away we really are from the promise of "we're just around the corner from being able to model human intelligence".

Specifically, I was working on my dissertation in the History of Ideas at the University of Maryland. I was, however, doing my research at Stanford. I had finished all my course work, taken the exams, and gotten a prospectus approved for a fairly traditional dissertation on Ernst Cassirer. Cassirer was a historian of ideas and a German philosopher (a Neo-Kantian) who had a very interesting approach to epistemology. He took experimental results from psychology (what would now be called cognitive science) and used those results in his attempt to determine how humans think and know.

Soon, I too was researching both cognitive science and artificial intelligence for clues as to how we think and know. I found that I was spending more time reading about Mycin, the early medical diagnosis program, and KRL-0, an AI approach developed by Terry Winograd at the Stanford AI lab, than I was spending on understanding Cassirer's rather modest place in history.

I also found that I was more interested in developing my own approach to epistemology than evaluating Cassirer's. I guess I had never really reconciled philosophy and history. And then I had the breakthrough idea - why not take the concepts and methods that I was studying in AI and apply them to developing a new approach to understanding historical knowledge?

As you may remember, AI was "very close" to being able to model common sense knowledge. Of course, they've been very close for the last 15 years, but I didn't realize that at the time and so I couldn't avoid the seduction. I decided to take an AI approach to modeling historical knowledge.

My experience trying to do just that led me to the same conclusion that many have arrived at: human cognition is a lot more complicated than we thought, and logic and reasoning have only a little to do with the way humans solve problems. The last piece of the anti-puzzle is that the way humans solve problems is a whole lot more successful than the way formal programs solve problems, even with faster and faster computers.

Lou: So where and when do you think AI might actually work?

Tom: In some ways it's beginning to work already - but only in limited domains. A good example is how Deep Blue defeated Garry Kasparov, the world chess champion. Deep Blue was a hybrid approach - it combined more and more rules about good chess and a learning capability built into the system with a super-big, super-fast computer. Without the super-fast computer, Deep Blue could beat most humans, but not the world champion. Without the rules and learning, Deep Blue couldn't beat any grandmaster.

Of course, chess represents a very limited domain, both in terms of the rules (but not outcomes) and the kind of reasoning. In fact, Deep Blue disappointed a lot of AI people, because its "reasoning" was so dependent on simple calculations, which is not how humans reason about chess.

So what I would look for from AI is the slow evolution of more and more smart subsystems, programs that do one or two things well. The trick is going to be to start combining these subsystems into something that can either solve or help us solve real world problems.

After all, when you look at the growing (and fascinating) body of research in cognitive science, what you find is that there are thousands of specialized subsystems in the brain, and it's the evolutionary interaction of those parts that leads to what we call intelligence and knowledge. There's no one general intelligence system in the brain.

Lou: Aside from being valuable primarily in specific domains, are there any other lessons from your fling with AI that might help us understand IA any better?

Tom: First, that things like automatic categorization are not the answer. By no means will they allow us to retire our librarians and IAs and replace them with an automatic, machine intelligence that will categorize, organize, evaluate, and create a self-referential, self-generating help system.

We've been looking at a number of vendors, like Autonomy, Semio, Verity, and Inxight, that are offering some kind of automatic categorization and the result was the same in every case - they were interesting, of some value, but only if they were combined with human intelligence in the form of an information architect.

The same is true for the new Natural Language Processing (NLP) search engines - you need an enormous amount of up front work from information architects and subject matter experts to build large specialized dictionaries and thesauri and business rules to make it work.

Another lesson I would take away from the whole experience with AI is somewhat counter-intuitive to what you might expect from a cautionary tale of being seduced by AI. The lesson is that not all new catchy acronym-ized developments are going to turn out to be the AI of the 90's and 00's. What I'm thinking about is Knowledge Management (KM).

The reason I think that KM is different is the field's focus on how humans actually interact, whether it's a speech act or collaboration on a document. KM takes advantage of both the amount of research into cognition and knowledge-based interaction, and the cautionary tales from AI. In other words, KM can deliver as long as people understand that there are no simple, magic, "buy this software" solutions to KM problems, and that good KM is based on good IA.

Lou: Your career path, like those of many of us, has zigged and zagged in some interesting directions. How has being a game designer and an addicted entrepreneur helped lead you to information architecture?

Tom: I can't really say that either of them led me to IA; they were more like parts or phases of an overall plan to learn and do as much as I could. In other words, I'm a confirmed generalist and they were two pieces of that.

I once thought of stealing the family crest of Niels Bohr, which I can't remember in the original Latin, but in English is something like "Depth is achieved through Breadth". In less pretentious moments, I figure I should take the line from the Grateful Dead: "What a long, strange trip it's been".

Although, when I think about it, there really is a bit of a common thread. First, I have to say that I'm no longer addicted to being an entrepreneur. Starting a training software company in the middle of California's worst recession of the century cured me of that particular addiction.

But the common thread is a desire to create large, dynamic organizations of information whether about a real world company or a fictional universe. For example, designing and creating computer games is a lot different from playing them and I quickly realized that there is a lot of categorization work in creating a game. The two games that I created, Galactic Gladiators and Galactic Adventures, were fairly complex for their time, especially Galactic Adventures with 15 different species, each with their own characteristics and history, weapons, different planets, and different adventures. It was a great combination of creative and organizational activity.

Lou: What about your perspective as a futurist? Did you land on IA because you see it as a career for the new millennium? Or simply as a pastime until your predictions come true and you can retire as a modern-day Nostradamus?

Tom: One of the things I learned as a futurist is that it's not so important whether or not your predictions come true. It's more important to convince people that your predictions came true. Or, alternatively, to present your predictions in such a clever or pompous fashion that, even when they don't come true, people will think they failed in interesting ways. It's the latter that often constitutes success for futurists, because actual hits on predictions are abysmally low.

IA is definitely not simply a pastime. And I will make my own prediction: it's a field that will grow and see some extremely powerful breakthroughs in the next few years. With developments in cognitive science, knowledge management, organizational theory and practice, coupled with increasingly smart support software, all driving new, deeper understandings of human forms of business, I predict that IA will change the way corporations work. How's that for a prediction that's almost impossible to either prove or disprove?

Lou: Well, such a prediction certainly won't get an argument from the likes of me! It's already coming true; companies that see the site map as having equal importance to the org chart are where I'll put my money, at least what's left of it post-NASDAQ meltdown... Thanks Tom!
