Should information architects be code-monkeys?

08.04.2011 | Author: Eric Reiss

At the recent IA Summit in Denver, CO, the inimitable Jared Spool suggested that information architects could do their jobs better if they knew how to code. This provocative statement did indeed provoke a lot of comment. So in answer to Jared, and as a little bit of Friday fun from the team here at FatDUX Copenhagen, let me offer my own bit of code (as opposed to cipher).

0041701        1071510        0391309        0791505        0050808        1120614

2021501        1901014        0890405        0881501        1712310        1071506

1600911        1040809        0670103        1600911        0042509        0450413

0041701        0820306        2021501        0570104        0040811        0391309

1140107        1890811        1162003        1591801        1050401        1962901

0920702        0680407        1151302        1901014        0671903        1081303

0670103        0990805        0750204        0031301        0572512        0052814

0671903        0960309        0391309        1762707

BTW, Jared, we love your most recent book, Web Anatomy. Cheers from the DUXlings.

Dopamine and the mind – why good designs go wrong

14.02.2011 | Author: Eric Reiss

Over the years, I have noticed a strange pattern: when executives (site owners) are asked to comment on design layouts, they often say there is too much text and demand larger pictures/graphics – whether these are relevant or not. These executives are disappointed and frustrated with the design proposals they see. On the other hand, if you listen to users (during usability testing, for example), they complain that these same pictures/graphics are getting in their way. Like the executives, they also exhibit frustration, but in a diametrically different way – “Why are you making me scroll past this crap to get to the information I really need?” 

My question was simple: was there a scientific reason for these dramatically different reactions to essentially the same designs? And I think the answer is "yes". I've included a few salient footnotes for those of you who are scientifically inclined.

Thesis in brief (1)

Why do two groups of people seem to consistently disagree regarding the “attractiveness” of a website design? Could it be that there was a physiological reason for these reactions? In short, was our brain playing tricks on us or misleading us? Were our development and presentation techniques actually encouraging inappropriate client reactions?

Early research

I have known about the functions of neurophysiological “reward chemicals” since my pre-med studies at Washington University in St. Louis 1972-1976. In late 2007, having spotted the curious reaction pattern described above, I started to do some more serious research, focusing on the limbic system (2) and the nature of reward chemicals (3).

I made the assumption that if the pattern I had identified was universal, voluntary intake of recreational reward chemicals (e.g. nicotine, caffeine, cocaine) was probably not at the heart of these reactions. So I looked for chemical rewards produced by the body itself. Soon, my inquiry zeroed in on dopamine, a chemical messenger similar to adrenaline. (4)

Dopamine – friend or foe?

Dopaminergic neurons appear to code environmental stimuli rather than specific movements. (5) This, in layman’s terms, means that pretty pictures stimulate dopamine release, which perhaps explains why executives favour graphics over blocks of text in dummy design layouts.

Although this reaction seems obvious (pictures are more attractive than text), it was reassuring to know that there was a scientific reason for this.

Task-solving activities

The second part of my question dealt with why test subjects so often reacted badly to eye-candy (i.e. gratuitous pictures/graphics).

There are various viewpoints as to the role of dopamine in the task-completion process. For example, Pennartz et al. (6) asked in 2009:

“Given the parallel organization of corticostriatal circuits, the question arises how coherent behavior, requiring integration of sensorimotor, cognitive, and motivational information, is achieved.”

Perhaps part of the answer to this critical question can be found in Taizo Nakazato’s research, published back in 2005 (7):

“During the task performance, dopamine concentration started to increase just after the cue, peaked near the time of the lever press, and returned to basal levels 1–2 s after the lever press.”

By way of background, this study deals with rats pressing a lever to receive a food reward. In internet terms, I equate this behavior with humans pushing a button/clicking a link to receive an informational reward. In other words, task accomplishment produces a reward – in this case chemical.

Actually, though, it appears that the anticipation of task-completion triggers dopamine release (8). And it could be that executives about to see a proposed design for the first time may be anticipating the presence of pretty pictures.

Yet the essence of the problem seems to be that if something delays/hinders task completion, dopamine release actually causes post-action frustration. Dr. J.G. Fleischer describes this phenomenon quite succinctly: (9, 10)

“If the [subject] does not receive the reward when it expects to receive it, then there is a depression of dopamine release, which is consistent with the negative prediction error that would occur in that situation.”

In other words, if something gets in the way of task completion, dopamine doesn’t get where it’s needed (“depression of dopamine release”). I suggest that perhaps the pretty pictures and eye-candy that were anticipated and appreciated during the presentation phase are actually getting in the way of test subjects who expect a more relevant response to their query (i.e. clicking on a promising link). If we make people scroll to get to the stuff they want (and expect to receive), they experience dopamine depression.

That said, a more recent study by Wanat et al. (11) suggests that further research is needed:

“[The] enhancement of reward-evoked dopamine signaling was also observed in sessions in which the response requirement was fixed but the delay to reward delivery increased, yoked to corresponding trials in PR sessions. These findings suggest that delay, and not effort, was principally responsible for the increased reward-evoked dopamine release in PR sessions. Together, these data demonstrate that NAcc dopamine release to rewards and their predictors are dissociable and differentially regulated by the delays conferred under escalating costs.”

In other words, the tougher it is to achieve a result, the greater the dopamine reward. This somewhat contradicts my thesis – and yet these findings also indicate that the response is situational. Hence, I feel certain that Wanat & Co. are actually looking at a different side of the problem, unrelated to task-based frustration, but rather related to task completion in a triumphal “I just made it to the summit of Mt. Everest” kind of way.

Drawing on my network

In late 2009, my online research led me to my grade-school best friend, Jon Kassel. (12) Jon is now Professor of Psychology at the University of Illinois, where his speciality is addiction. Naturally, the effect of drugs on emotions represents a key part of his own research.

Jon and I chatted informally about the problem with which I was wrestling. And without putting too many words in Jon’s mouth, it seems my thesis holds water – certainly from a cognitive point of view, and more and more from a clinical-psychology point of view, too. I hope that Jon and I can work on this in more detail sometime.

Please note: my conversations with Jon served merely as litmus tests and should not be construed as formal endorsement of my theories on the part of Dr. Kassel or the University of Illinois.

Community research

Of course, it could be that the pattern I thought I had detected was merely a fata morgana. Maybe my community wasn’t seeing the same things I was. So in January 2010, I published a simple survey on SurveyMonkey, which I broadcast to the interactive-design community via social media and listservs. (13) All of my questions could be answered with a simple yes/no. Here they are, along with the results from the 144 people who responded within the first week:

1. Have you ever been at a client meeting where you or your company have presented detailed page mockups for a proposed website (a “comp” complete with graphics and “greeked” text)?

Note: This may or may not represent the culmination of a longer discovery/strategic/IA process, but exactly where this presentation occurs in the overall process is not particularly important in terms of this survey.

Yes: 97.9%
No: 2.1%

2. If you have been to a website design presentation meeting as described above, have you ever heard the client say, “Very pretty, but there’s too much text. We need more/better/prettier graphics”? (This is when clients start talking about including pictures of their pet cat.)

I see this mostly when senior officials have not participated in an earlier discovery/IA/wireframing process.

Yes: 70.5%
No: 29.5%

3. Having been present at the original design presentation, have you later observed (probably through a one-way mirror during a usability session) that respondents say, “Don’t make me scroll through the damned eye-candy to get to the substance. Get rid of the picture of that dumb cat!”?

Yes: 58%
No: 42%

4. So in short, do you see any correlation between requests for more eye-candy during the layout approvals, and irritation with the same eye-candy during task-based usability testing?

Yes: 59.9%
No: 41.1%

About 62% of the respondents were from North America, 30% were from Europe, 8% were from the rest of the world.

Even though this is a primitive survey, the statistical results are significant; the pattern I hypothesised is recognized by others by a factor approaching 2 to 1.
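As a back-of-the-envelope check (not part of the original survey analysis), one can ask how unlikely that roughly 60/40 split on question 4 would be if respondents were really answering at random. Assuming about 86 of the 144 respondents answered “yes” (59.9%), an exact one-sided binomial test against a 50/50 null can be sketched with nothing but the standard library:

```python
from math import comb

n, yes = 144, 86  # assumed count: 59.9% of 144 respondents saying "yes"

# One-sided exact binomial test: the probability of observing `yes`
# or more "yes" answers if the true split were really 50/50.
p_value = sum(comb(n, k) for k in range(yes, n + 1)) / 2 ** n

print(f"P(>= {yes} of {n} | p = 0.5) = {p_value:.4f}")
```

Under these assumptions the p-value lands comfortably below 0.05, which is consistent with the author’s informal claim; a proper analysis would, of course, also have to worry about the sampling bias of a self-selected online survey.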

Today, “dopamine” seems to have become “flavor of the month”

I first mentioned this research en passant in a blog post I published in January 2009. (14) I talked about it again briefly at the IA Summit in Phoenix, AZ in April 2010. Today, the subject seems to be finally taking hold – most recently at the IxDA’s conference, Interactions 11, in Boulder, CO last week (February 2011). Here, Charles Hannon presented the subject formally (i.e. as the main subject of a talk) for the first time in our community. (15) Although the subject has also been broached tangentially at EuroIA 2010 and elsewhere, I look forward to speaking with Prof. Hannon at some point; alas, I was not able to attend the Boulder conference.

A second empirical observation

When I first suspected that comprehensive design mock-ups might be creating problems, we tweaked the development/presentation process in my own company, FatDUX. Subsequently, we spent much more effort in guiding senior management through our decision-making process prior to showing actual color design mockups. Although we had always involved our clients in the earlier stages of the development process, we had never previously insisted on top-management participation.

My empirical observation is that if C-level administrators are made part of the comprehensive design process, there is less chance they will insist on bigger pictures or cuter kittens on the website. In situations where we have not been able to obtain face-time with senior officials, our designs are more often open to challenge. Only expensive rounds of usability testing have enabled us to reinstate the graphic-design best-practices we normally espouse.

Some background

Both of my parents were scientists and the value of the scientific method and controlled studies was something I learned in parallel with my ABCs. As a pre-med student at Washington University in St. Louis, I continued my scientific studies, although I did wind up in a so-called “unrelated field” (encouraged by my father, who helped me send my first e-mail back in 1982 (no typo) to his secretary at the University of Miami). I have since been involved in the creation and/or critique of over 1500 websites and online apps.

So in closing, I encourage you to do your own research to prove or disprove my contention. And if you’d like to share your own empirical observations and/or research, I hope you’ll leave a comment here or write me directly at

(1) Here, I use “thesis” in the literal Greek fashion: as an “intellectual proposition” (θέσις), not a “dissertation” (dissertātiō).

(13) Twitter, plus the SIGIA list maintained by the American Society for Information Science and Technology, and the discussion list of the Interaction Design Association. The survey was published on 10 January 2010.

Busby Berkeley invents the gesticular interface

05.09.2010 | Author: Eric Reiss
Contrary to popular belief, Apple Computer didn't invent gesticular interfaces. Take a look at this short clip from the Warner Bros. Vitaphone production Gold Diggers of 1935 (at the 27 minute mark of the movie). Choreographer Busby Berkeley seems to have figured out some key movements back in 1935.

In this scene, tenor Dick Powell is taking poor-little-rich-girl Gloria Stuart shopping in the basement arcade of a swanky new hotel. I apologize in advance for the quality; I simply used my camera to record my iPad in a decidedly analog fashion. (Don't even ask why this movie is in my iPad to begin with).

Notice, too, the graphic incorporation of metadata. Each department is coupled with the name of the woman in charge. For example, in "Lingerie", we find "Annette". Pretty sophisticated "menu" considering that this footage predates the birth of the web by more than half a century.

If you want to see the entire number, here's a link:

Movies on your desert island iPad

13.08.2010 | Author: Eric Reiss
OK. Here’s the deal. You’re shipwrecked on some desert island. Lots of coconuts, fish, and other food - plus a magic spring that spouts water, beer, wine, cocktails, and Coca-Cola. There is also a power outlet for your iPad.

Alas, your iPad has very limited memory and there is no wireless. So which 10 movies would you want to view over and over again until you’re rescued? Here’s my list:

Footlight Parade (1933)

Casablanca (1942)

The Big Sleep (1946)

Singin’ in the Rain (1952)

Some Like it Hot (1959)

Lawrence of Arabia (1962)

The Good, the Bad, and the Ugly (1966)

The Godfather (1972)

The Right Stuff (1983)

Good Night, and Good Luck (2005)

Believe me, I have a zillion movies I’d like on this list. But honestly, if you really had to narrow it to 10, what would they be?

Geeky relics from the past

05.08.2010 | Author: Eric Reiss
I'm a pack rat. I admit it. My wife, coworkers, casual acquaintances, and even strangers on the street tell me to throw stuff out. But I never do.

So, here I am cleaning up in the FatDUX Copenhagen server room. Loads of artifacts from my previous lives.


Basically, what you see here is every mobile phone and every laptop I've owned since the early '90s. We'll take the laptops first, starting in the back row, moving to the front, left to right:

PowerBook 160. My first Apple laptop. Good machine. I wrote two books on it. This was one of the very first PowerBook 160s in Denmark, purchased in the fall of 1992 in the U.S. Keyboard converted to Danish about a year later.

PowerBook G3. The so-called "Wall Street" model without a USB port. Very inconvenient, but the machine did serve me well for a couple of years. About 1997.

Acer TravelMate 350. Fantastic machine: fast, lightweight, but a crappy keyboard for touch-typists. This is what happens when hunt-and-peck engineers try to squeeze the three Danish letters (æ, ø, å) onto a small piece of keyboard real estate. Note the optional wireless card sticking out the left-hand side. About 2001.

Fujitsu Siemens Lifebook P7010. The best computer I've had. Bar none. But the hard disk died, and my support technician cost me EUR 600 before concluding that the machine could not be fixed. About 2005. So that led to...

Fujitsu Siemens Lifebook P7230. The upgraded version (2007) of the previous machine. But not without some quirks. In the meantime, I did manage to get the old hard disk replaced in the P7010, so I'll probably go back to the older machine.

Apple iPad 64GB 3G. Wonderful for sharing photos, listening to music, and surfing the net. I do like it, but not for serious work that requires typing. Also seriously lousy presentation capability. The FS P7230 is still the workhorse that follows me to conferences. Summer 2010.

And now to the phones:

Motorola "brick" - about 1990. Very clunky, but a real "gee-wow" piece of kit back when everything else in the world was wired. Very Gordon Gekko. Actually, the correct name for this is a "CommNet 2000, ultra Classic by Motorola". Today, it really IS an ultra classic. I can't remember, but I think this might have been an NMT telephone rather than a GSM one.

Motorola MicroTAC 5200. World's first flip-phone. The antenna is actually a placebo - it does nothing at all! About 1994. This was the first dual-band phone. "TAC" stood for "Total Area Coverage".

Ericsson GH 174. Really heavy piece of crap. Never liked this much - but it was a company phone so it wasn't my decision. About 1994. I can't remember why we got this phone, which was actually an out-of-date model by the time I got it.

Nokia 2110. Absolutely one of the best phones I've ever owned. And a true classic in terms of keyboard layout. This phone set the standard for much that followed. About 1994-5. I switched to a Nokia 3210 in 1999, but I forgot to include it when I took the photo.

Motorola Timeport. My first tri-band telephone that enabled me to work in the USA. Very sexy blue screen, but an unfathomable menu structure. Summer 2000.

Sony Ericsson T68i. Notice the natty clip-on camera. This was my first telephone with a color display. Very poor resolution (101x80 with 256 colours), but hey, color was incredibly neat back around 2002. And it had Bluetooth! I also owned the earlier Ericsson T68 (prior to the merger with Sony).

Nokia 6670. Still one of my favorite phones, despite the early S60 operating system, which qualifies it as one of the very first smartphones. Never got caught in your pocket thanks to the rounded corners. And the 1.0 megapixel camera was pretty good, too. Good MS Office integration. About 2004.

Nokia E70. Another great phone. With the advent of SMS, this phone was great as the keyboard unfolds like two wings on either side of the screen for really fast QWERTY input. Summer 2006.

Apple iPhone 1st generation. We bought a bunch of these in the U.S. and jailbroke them. Fantastic bragging rights back when no one else in Europe had them. I gave this one away to one of our art directors because I was constantly looking for somewhere to charge it, which drove me crazy. My friends at Apple told me, "Eric, you know better than to buy the first generation of any of our products..." Even so, three years later, the unit is still in service. Summer 2007.

Nokia E71. Although the Symbian 60 operating system is still difficult to work with, this phone basically did all of those great phone things that I needed - like making phone calls. And it almost never needed to be recharged. Spring 2009.

HTC Desire. This is an Android 2.1 smartphone. Devours power like I devour marshmallows. I'm constantly looking for a power outlet. But it can do a lot of stuff when it feels like it. (FatDUXling Andrea Resmini tells me to turn off the Wi-Fi to conserve energy). Unfortunately, European data-transfer rates are crazy, so I'm forced to turn off pretty much everything most of the time. For example, if I just leave the phone on for a day, it will download about 93 MB of data. I don't know where this data comes from or where it goes, but it's a lot. And when I go to the United States, 1 MB costs about USD 10. So, at a potential cost of USD 930 a day, this thing scares me to death each time it beeps. So much for smartphones. Spring, 2010.

Now that I've shown it to you, I've really got to get rid of this crap...

SEO & IA: A Roadmap for Discoverability Success

29.04.2010 | Author: Marianne Sweeny
Frank Lloyd Wright said that the two most important tools for an architect were the drafting pencil and the sledgehammer. Of the two, the pencil is the easier to use as well as the more effective. As it is with building design, so it is with designing websites and their discoverability by search engines, the tool used by a majority of users.  The Web has become so vast and the search systems have become so sophisticated that retroactive optimization can be only marginally effective.

My mantra of late has been that search engine optimization must be part of the strategy at the beginning of a site design or redesign project.  I believe that user experience is as much about how users find the site as with their experience once they get there.  At the 2010 IA Summit in Phoenix, I presented a poster session on an SEO/UX design framework that sees search optimization as part of the UX engagement throughout the project lifecycle.

- Discovery comes before experience. Including search optimization in discovery sessions with the client provides opportunities to illuminate the state of the competitive landscape and the current search visibility state of the existing site. During this stage, I give the client a brief education in how search engines work. Despite the sophistication behind how results are presented, the core functionality of search technology is still based on information retrieval methods from the early days of electronic data storage. In order to appear in the results, the search terms used must appear in the content.

- Planning reduces the signal-to-noise ratio for the search engine spider. Search engine spiders do not have eyes, ears, thumbs or fingers. They cannot read the messaging in sticky Flash and Silverlight applications. They cannot hear instructions or compelling evidence contained in videos. They cannot “click” anything to move forward. Provide annotations on the page or in the code for all rich media to make sure their messaging makes its way into search results.
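A minimal sketch of why this matters (the page markup, product wording, and file names below are invented for illustration): a plain text-extraction pass, which is roughly what a spider indexes, sees only the text fallback, never the Flash object itself.

```python
from html.parser import HTMLParser

# Hypothetical page: the sales pitch lives inside a Flash object,
# with a plain-text fallback provided next to it (the good practice
# recommended above).
PAGE = """
<html><body>
  <object data="pitch.swf" type="application/x-shockwave-flash"></object>
  <noscript>Acme rotary widgets: precision-machined, shipped worldwide.</noscript>
</body></html>
"""

class TextOnly(HTMLParser):
    """Crude stand-in for a spider's view: text nodes, not plugins."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

parser = TextOnly()
parser.feed(PAGE)
indexable = " ".join(parser.chunks)
print(indexable)  # only the fallback text survives; pitch.swf is invisible
```

Without the fallback line, this hypothetical page would contribute nothing at all to the search index.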

- Build a relevant site structure. Something that you keep in the attic of your garage is likely less important to you than something kept in the cabinet over your coffee machine. Search engine spiders interpret your site structure as an indicator of relevance. Content buried deep in the structure is seen as less relevant than content found closer to the home page. Design site and link structures that reveal context and importance.

- Create a flexible design to ensure ongoing visibility. There is no “set it and forget it” in search engine optimization.  Post-launch optimization continues with analysis and measurement. Analyze search terms driving traffic to the site, bounce rate, time on site and other analytics to discern patterns and anticipate customer needs or interests. What were they looking for? Did they find it? Benchmark positioning for key metadata phrases prior to redesign. Run regular placement reports to chart progress and provide quantitative evidence of effectiveness.
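One of the metrics named above, bounce rate, is simple enough to sketch directly (the session data here are invented for illustration): it is the share of sessions that viewed only a single page before leaving.

```python
# Each session: (entry search term, number of pages viewed).
# Hypothetical analytics data for illustration only.
sessions = [
    ("rotary widgets", 1),
    ("widget pricing", 4),
    ("acme widgets", 1),
    ("widget specs", 3),
]

# A "bounce" is a single-page session.
bounces = sum(1 for _term, pages in sessions if pages == 1)
bounce_rate = bounces / len(sessions)
print(f"bounce rate: {bounce_rate:.0%}")
```

Segmenting the same data by entry search term is what lets you see which queries draw visitors who fail to find what they came for.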

Following a roadmap of optimization through the stages of a website project is a step toward extending the user experience from start to finish.

Download the Search Engine Optimization and User Experience Design Framework Poster

Calculating the length of an internet year

22.09.2009 | Author: Eric Reiss
We know technology moves fast these days. But how fast? And which technology?

Most folks know a “dog year” equates to about seven human years. Although this is not a particularly accurate actuarial device (little dogs live longer than big dogs), it does give us a rough idea as to when Bowser is going to be chasing that chariot in the sky.

We have other measurements, too. For example, Moore’s Law suggested back in 1965 that the number of transistors in a chip would double about every two years. Again, a generalized barometer that has proven to be more accurate and useful than one would have thought.

So where are we at with the internet? With the web? How can we measure the maturity of our apps? Or predict business cycles in the online world?

I think I've found a useful answer. Here’s how I worked it out.

Establishing a baseline
My first task was to find a suitable industrial-era baseline. I needed to find a well-established, highly industrial segment that had demonstrated:

-  a period of invention, followed by
-  a period of adoption, followed by
-  a definitive end to an era of pioneering, followed by
-  a long period of slow, incremental innovation
-  a long-term, on-going global presence

Finding such a segment is easier said than done. The obvious-in-hindsight choice was the automobile industry – a sudden inspirational flash in the middle of a presentation on innovation I was giving in Italy.

Autos and the web have a lot in common
The first commercial vehicle was a Daimler, produced in the United Kingdom in 1896. Interestingly, the first commercial websites started to appear about 100 years later. In my calculations, I will use 1993, which marked the introduction of the first true graphic browser, Mosaic. Most experts agree that the introduction of Mosaic represented a turning point in the history of the Web – much as the original Daimler represented a turning point for those experimenting with horseless carriages.

Certainly, no one would contend that websites in 2009 represent the final phase of online evolution. If we compare ourselves to automobiles anno 1909, it would seem we haven't come very far at all. If nothing else, this strongly suggests that a "calendar year" is significantly longer than an "internet year".

End of the pioneering period
The era of pioneers, when most of us working in the online arena were pretty much making things up as we went along, has long since passed. Today, we have pretty good sets of best practices. But when did the age of pioneering actually end? We need a date for our calculations. Although the current economic crisis has caused unemployment in our industry, it has done so in all industries. We are not seeing the great "weeding out" of questionable practices that we saw back in the early years of this decade. From a development point of view, we need to go back to the burst of the dot-com bubble of 2001.

Even given the rise of social media and other innovations during the past decade, the market reaction to the events of 1998-2001 equates, from a business point of view, to that of many other technology-inspired booms, including autos in the 1920s (similar booms include railroads in the 1840s, radios in the 1920s, and transistor electronics in the 1950s).

So which year represents the end of pioneering for the automobile industry? The introduction of mass production by Ford in 1908? The U.S. entry into World War I in 1917? I pondered this for over a year before I accidentally came across a footnote in a book on antique cars that stated “The stock market crash of 1929 marked the end of the pioneering period for car manufacturers.” Conveniently, a crisis again seems to have marked the end of an era.

The aftermaths of 1929 and 2001 share some eerie commonalities. For example, a study of the car industry suggests that there were more makes and generally better cars in the 1920s than in the 1930s. After the market crash of 1929, bad ideas (and poorly built cars) became more prevalent. And I would argue that in many semi-developed online markets (Scandinavia, for instance), websites did deteriorate in quality during most of the decade following the dot-bomb; while pretty, these applications did not successfully build the shared frames of reference needed to establish credibility, trust, and a willingness on the part of site visitors to deal with these business entities. Indeed, there is still far too much "brochure-ware" polluting the ether.

But back to 1929. If the Wall Street Crash marked the end of the pioneering era in automobiles, it should be possible to work some numbers.

Doing the math
If 1896 and 1929 mark the age of pioneering for the auto industry, we have a period of about 33 years. And if we accept that 1993 and 2001 represent watershed years for the Web, that works out to seven years (late 1993 to early 2001). The months of introduction are critical when calculating the length of the pioneering period for the Web as the difference between seven and eight years has far too significant an effect on the calculations.

Setting up a simple ratio, we find:

33 / 7 = X / 1

And that means X = 4.7 years.
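The arithmetic above can be sketched in a couple of lines, using nothing beyond the two period boundaries argued for in the text:

```python
auto_pioneering = 1929 - 1896  # 33 years: first commercial Daimler to the Crash
web_pioneering = 7             # late 1993 (Mosaic) to early 2001 (dot-com bust)

# Calendar years of industrial-era development per web calendar year
internet_year = auto_pioneering / web_pioneering
print(f"ratio: {internet_year:.1f} to 1")
```

Rounded to one decimal, the ratio comes out at 4.7 to 1, as stated above.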

Proof of concept
So is 4.7 years a viable figure? Just like Moore's Law, Reiss' Law will be proven or disproven by history - it is impossible to provide hard proof. But the anecdotal evidence is already compelling. For example, economists put the average business cycle (as defined by Burns and Mitchell) somewhere between 3.5 and 7 years. This appears pretty much the same for cycles triggered/ended by exo- and endogenous causes. The average time needed for a traditional business to establish a business model, gain goodwill, and prove its worth is about eight years – longer than a single business cycle, but usually shorter than two.

So if my number works, one calendar year should roughly represent slightly less than one complete business cycle in the online environment.

Curiously, if you look at the online ventures that have succeeded this past decade, you’ll find that an incredible number of them have been sold or expanded their ownership base within the first two years of operation – which fits surprisingly well with the timing of the offline experience, using my 4.7-to-1 adjustment; if we look beyond the get-rich-quick IPO mania of the late 1990s, many successful offline ventures typically seek alternative financing early into a second business cycle.

If you look at the online ventures that have failed this past decade, you’ll find that the same cyclic pattern repeats – ventures have roughly two calendar years to make it or break it. We, of course, knew this from our empirical observations over the years. But using the numbers I’ve laid out here, it’s easier to see why this is so from a business-economic standpoint.

Of course, I haven't yet identified the triggers that mark the beginning or end of an online business cycle. But I'm working on that, too.

So where are we now?
If we compare, for example, websites to cars, we’re 15 x 4.7 – roughly 70 – years into our development. With a calendar starting point in 1896, that puts current web development on par with the car model year 1966.

If we continue to use cars as our barometer, we can see that a number of things have been invented and standardized – the number of wheels, shift patterns, basic controls (pedals, steering wheel), the placement of heating and ventilation controls, etc.

In web terms, perhaps this suggests that many of the basic navigational devices we use today will be around for some time. But it also leaves us wide open for innovation. For example, web servers account for an incredibly high proportion of CO2 emissions – almost as much as the aviation industry according to the UK’s Health Protection Agency task force.

Should we be using gray backgrounds rather than white to reduce electrical consumption? Maybe AJAX isn’t a good idea seen from a sustainability point of view.

If we look at the development of the automobile these past 50 years, two issues really stand out: safety and fuel economy.

So my question to you is, what are OUR safety and fuel economy issues? And how long do we have to make these improvements? Can we use my magic number to predict the future of our industry by examining the past of other industries?

Datageeking for hope

20.10.2008 | Author: Lynn Boyden
***blatant political overtones and overtures forthwith.  don't say you weren't warned.***

I've been trained by the Obama campaign as a data manager for hope.  I volunteer in pod 33.  We're a bunch of believers using weekend-empty offices and restaurants, our own laptops, cell phones and chargers, and this awesome database.  As in a database that inspires awe.  Along with pod 30 the canvassers are making about 50,000 calls every weekend to registered voters in swing states, and we talk with about 20% of those 50,000.  And whenever they talk to you, they hammer you with questions.  And when you answer those questions (Who ya votin' for?  Which way ya leaning?  Gonna vote absentee?  Need a lift on the 4th?  Know where to vote?  Could ya put in a couple hours for us down at headquarters?) the canvassers record your answers and put them into a database.

And it makes us stronger.  Because come November 4?  We're gonna know where to send vans.  And who can drive vans on election day.  And who not to bother because they've already voted early, or voted absentee, or because they've told us they're voting McCain.  Yesterday I had the privilege of being lead data manager for hope.  And when one of my crack datageek team was lamenting that she had a full page of 5Ms (committed McCain -- datageeks don't mince words) we talked about how actually knowing that all those folks didn't need to be called ever again between now and election day was a feature, not a bug.  And then we got busy and had all 7500 calls entered into the system by 7pm.

Here's to the re-election campaign in '12!


Smackdown in America

16.10.2008 | Author: Lynn Boyden
Since I can't shoot and drive anymore, guess I'll just shoot. Here's a little fun from Google trends: Joe the plumber is off to a good start, but will he have the lasting power of Joe six-pack?

But more to the point, what does this Google trends graph indicate? The searchers, if not the voters, have spoken. Or does it mean that searchers who are looking for information about Obama are more savvy about their use of the Internets to find information about the candidate?  Or are searches related to support?

Update 11/2/08: Over at FiveThirtyEight, where the datageeks make me look like a real piker, they've put up this post on the same topic.  We've got our fingers on the pulse here in the duckpond!
