VoteSpry: Real-time Voting

I built VoteSpry, a real-time SMS polling system, to fulfill my goals for this project:

  • Incorporate multiple interfaces
  • Respond dynamically to the user
  • Solve a real problem

I’ve run into many situations where exactly this sort of polling system would have been useful, such as when judging hackathons or giving presentations. In my proposal, I outlined my assumptions:

From an interface perspective, in order to be seamless, the app needs to work with the tools that people already use. That’s why SMS is the baseline for creating, voting, and even monitoring—every phone supports it.

Before building my voting system, I had to test that assumption. Then I moved on to interface sketches and user flow diagrams. I built an initial working version of the app, then continued to tweak it while testing it myself. Finally, I conducted a user study of the app, gathering data about pain points in the interface.

These results are presented in more detail in the project presentation.

Interface Evaluation

In the second part of this assignment, I proposed an interactive menu: a tablet-based interface for ordering in place of the traditional menu system. But how much better is it? To put real data behind this design, I will design an evaluation using the DECIDE framework, then gather data:

  1. Determine the goals
  2. Explore the questions
  3. Choose the evaluation methods
  4. Identify the practical issues
  5. Decide how to deal with the ethical issues
  6. Evaluate, interpret, and present the data

As a reminder, the front page looks like this:

Here are the steps of the evaluation process:

Determine the goals

The goals of this evaluation are:

  1. Determine whether this interface is more usable than the traditional menu

Explore the questions

  1. How much more efficient is this digital system than a traditional menu?
  2. Does the user experience improve?

Choose the evaluation methods

  1. Controlled study of ordering process

    • Compare time to make a selection
    • Find points of confusion in either interface
    • Can also get qualitative data (sentiments, frustrations, etc.)
  2. Survey of experience

    • Ask about the person’s frequency of eating out
    • Rate parts of this experience on a Likert scale
    • Can be split into before and after parts to avoid bias

Identify the practical issues

A research environment will never fully recreate a restaurant experience. There are simply too many variables that affect the experience, including the time of day, day of week, other customers, your server, the chef, and more. Most of these cannot be replicated in a controlled setting.

A survey used to judge user experience, which may include quantitative and open-ended questions, may also overlook the same factors as above: they will be present in a restaurant but likely not in a controlled setting.

It’s also unlikely that I’ll get a large or representative sample of people; a larger sample would give the results more statistical weight. User experiences will vary with background, including age, culture, and familiarity with technology, so I will probably not be able to draw comprehensive conclusions from this data.

Decide how to deal with the ethical issues

I don’t foresee any ethical issues that will arise as a result of this evaluation.

Evaluate, interpret, and present the data

Participants were split into two groups, one performing tasks with a traditional menu and the other with printed mockups of the digital design shown earlier. Both groups answered identical survey questions to allow for quantitative comparison.

  1. Using either traditional menu or mockups of digital tablet menu, perform a task (“select drink, appetizer and main course”) while being timed.

  2. Post-survey asking about subjective experience with their ordering process (example question: “Compared to the average menu at a restaurant, rate the difficulty of using your menu to select items” on a Likert scale)

I was able to conduct this study on a group of 12 of my peers, randomly assigning each person to either the old-style menu or the paper mockup of the new one. Though my original use case was Betty’s Wok and Noodle, I substituted University House of Pizza because it was easier to acquire a menu for the study. I used photos from a web search as examples of each dish for the digital menu.

Data

After conducting a controlled study of the ordering process and a post-survey, here are my results in chart form.

First, let’s look at task time. I asked people to select three items (a drink, appetizer, and main course) from each type of menu, and tell me when they had made their decision. I wanted to test efficiency, to see whether my alternative interface would actually make it easier to pick items.

According to this initial data, it’s relatively clear that this interface works better than a traditional paper menu. The median time with the paper menu was 122.8 seconds, versus 84.0 seconds with the digital menu, a difference of 38.8 seconds. However, I can only speculate as to why this difference occurred. It’s possible that certain people make decisions more quickly than others regardless of the presentation, and it’s also possible that the photos on the digital copy were a deciding factor. My impression, based on the subjects’ statements, is that both factors were at work: when I informally asked “What did you choose?” and “Why?”, 3 people mentioned that they found a particularly appealing photo.

The question about frequency of eating out was meant as a variable to correlate with the other data points, but as you can see when correlating the survey results, no meaningful trends appear. As frequency of dining out increases, there is no obvious increase or decrease in satisfaction on the other metrics in the survey:

However, a more interesting correlation appears between task completion time and dining frequency (with frequency scaled to the same range for comparison):

It appears that the people who eat out most frequently either take very little time to decide or a long time, but rarely something in between. I did not expect this kind of bimodal distribution to apply here. From personal experience I can guess why: sometimes I know exactly what I want, but other times I read the whole menu looking for something interesting. I just didn’t realize other people may have the same behavior.

The questions about difficulty and user experience are, as the first correlation attempt shows, hard to analyze. Overall, people found the digital prototype easier to use and compared it more favorably, but it’s possible that they’re so familiar with paper menus that the newness of the digital mockup is what they’re responding to. You can see this in the question about their overall experience, where people with paper menus said their experience was basically the same as usual.

Even though I didn’t intend to collect open-ended data, a couple of participants talked to me about my prototype and gave me feedback that was regrettably not written down. One suggestion was that not every menu item really needs a photo, which was an assumption I had made. Another came from a friend who said he didn’t like the idea of scrolling through a menu; he wanted pagination. There was also some general confusion about the process, including whether participants should “tap” the paper prototype to access menus and select items.

Conclusion

There are two improvements I would have liked to have had when conducting this study:

  1. More testers to get more meaningful data
  2. Open-ended questions about the user experience

I think that both of those, in hindsight, would have allowed me to draw richer conclusions from the data. Overall, people seemed to respond favorably to the new prototype, both in their faster completion times and in their survey answers. But drilling deeper into the particular features I would like to improve would require the open-ended questions I was only asking informally. Based on the feedback I have, I think the best path forward with this design would be to test options for displaying photos, categorization, and further simplification of the design to improve choice times.

Project Proposal

As an organizer, I want to be able to quickly gauge a crowd’s opinion in real time. For my term project, I will build an app that can be used on any phone or browser to create, vote on, and monitor these poll threads. Organizers will have full control, such as limiting the time period or the votes needed, and adding features like group chat, depending on their needs.

The focus of this app will be to make it so easy to set up a real-time poll that I can use it anywhere I am. I will create the entire platform and related services:

  • Backend API for threads and users
  • Web client
  • Apps for phones and tablets
  • SMS integration
  • Email integration
  • Facebook integration
  • Google Calendar integration

From an interface perspective, in order to be seamless, the app needs to work with the tools that people already use. That’s why SMS is the baseline for creating, voting, and even monitoring—every phone supports it. From there each component is a progressive addition, with the interface changing to match the type of device.
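
As a concrete sketch of that baseline, here is roughly what an SMS entry point for voting could look like. This is a hypothetical Flask handler, assuming a Twilio-style gateway that POSTs the sender’s number as “From” and the message text as “Body” and relays the plain-text response back as the reply; the vote format is a placeholder of my own, not the final design.

    # sms_votes.py -- hypothetical SMS webhook for casting votes.
    from flask import Flask, request

    app = Flask(__name__)
    votes = {}  # phone number -> choice; a real app would use the database

    @app.route("/sms", methods=["POST"])
    def receive_sms():
        phone = request.form["From"]                   # sender's phone number
        choice = request.form["Body"].strip().upper()  # e.g. "A", "B", "C"
        votes[phone] = choice  # one vote per phone; texting again changes it
        return "Vote recorded: %s" % choice            # sent back as the reply

    if __name__ == "__main__":
        app.run(port=5000)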

While this is challenging due to the number of independent parts, I’m confident that I can build this entire system. I have built apps on top of all of these platforms before; now I just need to integrate them all into one system, which I have already modeled through my API.

This is how the system would be organized:

[Architecture diagram]

One of the key challenges I anticipate is letting users go through the entire system with no login, just a phone number. I think minimal set-up is key to a seamless user experience, and I will design the entire app around this base case. If users only used mobile phones, this would be easy. But because users may also log in over the web, I need a way to link a person’s email and phone number at the database level. For security, I also need to make sure that people can only participate in threads they’re meant to have access to, which requires some cryptography.
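
A minimal sketch of what that cryptography could mean in practice: per-invite HMAC tokens that sign the (thread, phone) pair with a server-side key, so a token can’t be forged or reused on another thread. The helper names here are mine, not from the actual implementation.

    # thread_tokens.py -- hypothetical unguessable invite tokens.
    import hashlib, hmac, os

    SECRET = os.urandom(32)  # in practice, a persistent server-side key

    def invite_token(thread_id, phone):
        # Sign the (thread, phone) pair so the token only works for
        # this participant in this thread.
        msg = ("%s:%s" % (thread_id, phone)).encode("utf-8")
        return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:12]

    def is_allowed(thread_id, phone, token):
        # Constant-time comparison avoids leaking the token by timing.
        return hmac.compare_digest(invite_token(thread_id, phone), token)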

I will use several methods to gauge the success of this app. First, I will use a survey to gauge broadly how people are currently gathering opinions. Once I have a prototype, I will conduct usability studies with a small group of users to pinpoint problems in the interface. I will also gather metrics on usage and participation from within the web and mobile client. Using A/B testing would also allow me to judge elements that people respond to best.

I’m excited about the possibilities that building this system offers. If all goes well, I know that this will solve a real problem for me, and be awesome doing so.

Interactive Menu Design

I considered two possible options for an interactive menu system, one using a typical tablet and another using a touchscreen table system. The interaction models create very different experiences for the user, and both are being used in practice today.

Design considerations

While designing these interactive components, I started with my previous ethnographic study and task analysis and tried to imagine a system that would be intuitive, fast, and accommodating of unusual cases. I also looked at existing solutions, particularly E la Carte.

Tablet

Many restaurant visitors in our key demographics will already be familiar with touchscreen phones and tablets. According to this article, iPad users have a very similar age distribution to people in Boston, with most users being 25-34. Therefore it makes sense to borrow conventions established for touchscreen phones and tablets, such as a navigation bar, a sidebar, and large touch targets.

Let's take a look at the flow through the app:

We start at a scrollable menu of all the items the restaurant offers.

If you hit "add" on an item, you may get prompted for options.

You can also see a detailed view and description of each item, and swipe between them with side-to-side gestures.

The options menu is similar.

There's a persistent link to the shopping cart in the menu bar of the app. It brings you to this page to review your order.

The tablet would have a built-in card reader to accept payment by credit card.

And finally, we confirm that the order has been sent.

Table

Designing a system for ordering directly into the table presents some unusual challenges. For example, how do you distinguish deliberate input from accidental? How do you prevent the table from getting dirty? How do you design around the items usually on the table while ordering, such as silverware and drinks?

One interesting solution has been implemented at inamo restaurant in London by E-Table Interactive using a projector above a waterproof touchscreen table. Using that setup, they actually detect plates and project a preview of your food onto your plate as you order.

When I designed this interactive table system, I thought that it would be best to integrate everything into one unit. In this case, it would mean that the screen, card readers, and base would all work together. However, it may be more practical to split these functions. Either way, the interface offers similar possibilities.

Watch me walk through the paper prototype of the table system:

A Proposal for a Distributed Development System

I started working at Hoot.Me over the summer. It’s currently a four-person team. Our CTO spent the summer working with us remotely, and now that I’m in school, I’m doing the same thing. So our workflow has evolved to accommodate those needs.

We use a combination of communication tools that are convenient: Github, Trello, Google Talk, and our phones. The most noticeable thing about these tools is that they are either asynchronous or synchronous, but not both.

We also use several internal tools to manage our code, files, and infrastructure: git, Dropbox, Fabric and Chef. I’ll go into the role each plays later, but like our communication tools, they each serve one purpose for us.

This workflow works well with such a small team, but a critical question we have to ask ourselves is, “Does it scale?” As a team grows, the level of communication overhead it needs will increase, a core tenet of Fred Brooks’s book on the practice of software development, The Mythical Man-Month.

A well-designed software development system cannot eliminate the additional communication overhead of a larger team, but it can minimize it.

Development needs

Every component of the Hoot workflow addresses a need we have. These needs can be broadly categorized as:

  • Data storage
  • Automation
  • Management

The ideal software development environment for us will address all these needs in a centralized way, but be flexible if the underlying tools change.

Data storage

Code, graphics, plans, and other files need to be shared freely among team members. But no single solution works for every type of file; the type of data dictates the actions that can be performed on it.

Dropbox is a good solution for static files because every member has an identical copy of the data. For example, a logo file does not change often, but when it does, everyone instantly gets the update. However, that model is impractical for code. If I’m working on a feature for an upcoming release, it needs to be separated from the stable version of the application until it’s ready for production. A system to store code needs to handle states better than Dropbox does.

A version control system’s core function is to track changes to files.1 But a good version control system implements branching and merging, transforming a linear history into a tree. Branching enables parallel development of features, much like parallel computing makes code run more efficiently.

While git is a general-purpose version control system, one strategy we use at Hoot keeps us from making mistakes: the master branch is always ready to be deployed. Any major changes to the application are done through feature branches, which have descriptive names like upgrade-libraries. Enforcing a rule like this actually makes the system more usable, and asynchronously communicates the current software status when developers check the remote server.

Automation

Why do something manually if you can do it with code? At Hoot, we don’t use automation as extensively as we could, but using any sort of automation increases scalability. It allows a team to deal more abstractly with tasks like deploying files, starting servers, and testing.

Fabric is a Python library that lets you build custom scripts that automate the building and deployment processes. For example, we use the command fab production deploy to automatically ssh into the production machine, pull the latest revision on the master branch, and restart the server.
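
Our actual fabfile is more involved, but the pattern looks roughly like this (a sketch in the Fabric 1.x style; the host, path, and restart command are placeholders):

    # fabfile.py -- sketch of the "fab production deploy" pattern.
    from fabric.api import cd, env, run

    def production():
        # "fab production deploy" runs this first to pick the target host.
        env.hosts = ["deploy@production.example.com"]  # placeholder host

    def deploy():
        # SSH in, pull the latest master, and restart the app server.
        with cd("/srv/app"):                  # placeholder project path
            run("git pull origin master")
            run("sudo service app restart")   # placeholder restart command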

Chef is a Ruby framework for systems management. It handles machine configuration, making it incredibly easy to launch new servers.

While we don’t use them, git supports a number of hooks that can automate tasks like deployment in much the same way as Fabric. A useful hook might automatically run unit tests before every commit, and then deploy to production if the commit is on the master branch.
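
A pre-commit hook is just an executable file at .git/hooks/pre-commit, and git aborts the commit if it exits non-zero. A sketch, with the test command as a placeholder:

    #!/usr/bin/env python
    # .git/hooks/pre-commit -- abort the commit if the test suite fails.
    import subprocess, sys

    status = subprocess.call(["python", "-m", "unittest", "discover"])
    if status != 0:
        print("Tests failed; commit aborted.")
    sys.exit(status)  # a non-zero exit blocks the commit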

Automation can encompass any internal tool that increases reliability and speed, which is the purpose of building software in general. It’s critical to a software development team’s scalability to have the right tools, and the ideal collaboration system is actually a programmable tool itself.

Management

A good development process will create meta-information, which is used iteratively to control, analyze, and improve the development process.

A small team working in the same location may not ever need to write down the meta-information, but having a remote team forces you to communicate better. One of the realizations I’ve had from working remotely is that synchronous communication tends to be undocumented, while asynchronous communication is by definition available after it’s created.

Both communication modes are critical at Hoot, though in practice we are more likely to call each other or use Google Talk to chat rather than Trello or Github. The benefit of synchronous communication is in decision-making, because it is easier to come to a conclusion after a quick phone call than over a long email thread. In contrast, posting a bug on Github or a card on Trello is information that needs to be accessed later, or that takes time to respond to.

Github in particular has been a great example of a project management tool that makes meta-information easy to create and access. It’s another layer above git that adds bug tracking, code review, and visualizations, which are essential for large software projects. However, it lacks the functionality and connectedness that would make it ideal for integrating everything.

The proposal

So how can we build the perfect collaborative system for Hoot? It turns out that if we design it as a series of plugins to a core system, then it can be extended to fit the needs of any development team.

The core environment needs to support both synchronous and asynchronous communication. The only existing tool that can achieve that is a text chat, because any text can be saved for later.

To extend this chat environment, you could write any script that takes text input (one message per line) and returns text output (or none). To extend the system more deeply, such as to add a feature, a more complete module would have to be built that hooks into a strong event-driven API.
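
The simplest possible extension under that model is a line filter. Here is a hypothetical example (the !count command is made up for illustration):

    # wordcount_plugin.py -- minimal extension: reads one chat message
    # per line on stdin and writes any reply to stdout.
    import sys

    for line in sys.stdin:
        message = line.strip()
        if message.startswith("!count "):
            words = message[len("!count "):].split()
            print("%d words" % len(words))
            sys.stdout.flush()  # reply immediately, don't wait for the buffer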

Group chat would likely be the default, but having one-on-one chats with colleagues can help to focus in on particular problems in exactly the same way a phone call would work now. Working with text allows search, but additionally links, files, and other media could be embedded directly in the chat.

The key extension that has become popular recently is to use a chat bot like Jenkins or hubot. Imagine that instead of running fab production deploy from a command prompt, you run a similar command in the chatroom. Suddenly it becomes trivial to see what the last update was to the production server; it’s just a search away. Even if a team wanted to use completely different methods than we do at Hoot, they could organize their internal tools using extensions and a chat bot.
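
Wiring the earlier Fabric task into such a bot could be as simple as this sketch (the command name and the say callback are placeholders of mine):

    # deploy_command.py -- hypothetical bot handler for chat-driven deploys.
    import subprocess, time

    def handle(message, say):
        # "say" posts a reply back into the chatroom.
        if message.strip() == "!deploy production":
            say("Deploying to production...")
            status = subprocess.call(["fab", "production", "deploy"])
            outcome = "succeeded" if status == 0 else "failed"
            # The timestamp makes "when was the last deploy?" a search away.
            say("[%s] Deploy %s." % (time.strftime("%Y-%m-%d %H:%M"), outcome))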

Data storage

We could continue using the same differentiated data model, and even the same services, on the ideal collaborative platform. git, svn, and other version control systems can be integrated through their command line tools. Dropbox has a web API. Anybody could write an extension that would add support for such data, and allow it to be referenced anywhere in the chat. For example, git already uses unique hash codes to represent data, and that could be used in chat to reference files.
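
For example, an extension could expand any commit hash mentioned in chat into a one-line summary. A sketch, assuming the bot runs inside a checkout of the repository:

    # commit_lookup.py -- hypothetical extension: expand commit hashes
    # mentioned in chat into one-line git summaries.
    import re, subprocess

    HASH = re.compile(r"\b[0-9a-f]{7,40}\b")

    def expand_hashes(message):
        for sha in HASH.findall(message):
            try:
                out = subprocess.check_output(
                    ["git", "log", "-1", "--oneline", sha])
                yield out.decode("utf-8").strip()
            except subprocess.CalledProcessError:
                pass  # not a commit in this repository; ignore it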

Automation

A chat bot can be as simple as a command line program that listens for text and can respond with other text. But even though the input and output are simple, it can handle complex tasks in an automated way. I would imagine that with this system, it would be easier to deploy code to servers from chat than from a terminal, because the scripts the chat bot runs automate the details away.

Management

Would you like bug tracking? There’s an extension for that. Pull requests and code review? That can also be extended. Once you allow scripting, you can make your system add as much meta-information as you want. I think the minimum useful set of management tools is a to-do list and code review, but other people may have differing opinions.

Why doesn’t this exist?

What I’ve just described is an incredibly general system that can be extended to become exactly what a team needs. But in my research for this piece I spent some time comparing project management tools, and have come to the conclusion that Beanstalk would be the best tool to use at Hoot. It already incorporates a number of the ideas I’ve talked about, with built-in data storage, automation, and management functions and the ability to use popular third-party alternatives or write your own. It combines the flexibility of Github with Basecamp’s great project management tools, which is one of the goals I had in mind when designing this system.

But even more than our desire for a better project management system, the system I’ve described sounds like something I want to build. After all, it’s just a chat. But the interesting challenge for me is to build a system that other people will want to contribute to. In fact, some of the concepts I’ve mentioned here are already being implemented in the Hoot Facebook application.


  1. By that definition, Dropbox is also a version control system, because it stores a complete history of your files. But without branching it's not a good tool for code.

User Interface Analysis of the Nikon D200

The Nikon D200 is a DSLR camera originally released in 2005. I got mine in 2008 as my first digital camera, replacing my previous film SLR, the Nikon FE. Having taken over 30,000 photos on this camera, I am intimately familiar with its workings.

The D200 fit into the middle of Nikon’s lineup of DSLRs, for either high-end amateurs or low-end professionals. It closely resembles the professional Nikon D2X, but in a smaller body. It has been superseded by the D300.

The primary purpose of any camera is obviously to take photos. But a professional camera is a tool, and needs to both help the photographer and completely get out of his way. Speed and accuracy are good indicators of that.

[Photos: camera overview, top and rear]

The D200 is both very fast and very accurate. It has physical buttons and dials to control nearly all useful settings without resorting to menus. These physical controls can be manipulated even with gloves on, without looking. It also has three displays: in the viewfinder and the top surface of the camera, it shows current settings, and the rear screen shows photos and menus.

The camera turns on almost instantly and is ready to take photos. It can shoot at 4fps for several seconds on full-quality RAW before the buffer is full, meaning that bursts of activity like in sports can be captured.

Its design gives the photographer direct input and immediate feedback, and feels very responsive.

[Photos: close-up of the top display; view through the viewfinder]

The top display and viewfinder are essential feedback mechanisms for taking photos. Shutter speed, aperture, and ISO, the three elements of a proper exposure, are shown in both displays. They also offer contextual display: for example, while setting the ISO, both isolate that number.

The viewfinder, which is used to properly compose a photo, shows focus points for autofocus and can be set to show gridlines. However, unlike some other cameras it does not indicate if the focus is accurate.

[Photo: camera with lens, from the side]

This is the Nikon 18-200mm VR lens. Lenses are yet another input device for the photographer, and are probably more important than the body in getting good photos. Most have really simple controls: a zoom ring and a focus ring, as well as an aperture ring in some lenses. This particular lens also gives control over focus mode and vibration reduction.

While zooming is very easy and requires only a half-twist of the barrel, focusing manually with the ring feels weak and takes some time to get accurate.

[Photo: custom settings menu]

The D200 avoids the need for complex menus to set the most common settings by using physical inputs. But it supplements that with extensive customizability, which can make it an even better tool for the photographer. While it’s possible to save four named groups of settings, I typically leave it in the default group.

[Photo: timers and autofocus menu]

For example, I like to set a short self timer instead of using a cable release on long exposures, and a long self timer when doing self portraits.

The menus themselves appear to be well-designed given the constraints of the device. The directional pad and dedicated “enter” button make navigating menu choices fairly easy. Menus are color-coded and labeled, and non-default settings are marked with an asterisk. Each menu item also shows its current state, instead of hiding it. The feedback mechanisms here were clearly thought out.

The biggest problem with the menus is depth, and finding certain settings again can be difficult. But there is a pretty decent solution:

[Photo: recent settings menu]

The recent settings menu reduces the effort of finding settings I use frequently, such as timers. But it suffers from the flaw that recent settings are not persistent: sometimes the timer settings are at the top, but sometimes they are on the second page. An alternative would be a favorites-style interface where I choose which menu options appear. The tradeoff the camera makes is that I sometimes have to search the full menu, but it automatically gathers most of my actions together to reduce that effort.


As a tool, the D200 succeeds in getting out of my way while I shoot. It’s customizable, so I have it set up in a way that I understand, and the settings that I have to change frequently are directly manipulated.

The feedback mechanisms are excellent overall, showing me the relevant information I need to make decisions.

The camera fits the needs and desires of its target audience, and that leaves me with a positive experience even four years later.