Open House

Before we were ready to show off our prototypes at the HCDE Open House and poster session, we had a lot of small moving parts that had to sync up. Not only did we have to put the final touches on the gesture prototype, but we also had to prepare for our final presentation and the poster session. From making posters and tickets to shooting video, this week was jam-packed with action.

Gesture Prototype

Last week, we had a long discussion about the interaction design of the gesture prototype and agreed on a workflow using thumbs up to advance to the next page and thumbs down to go back. We divided the interaction into these steps:

  • User walks up to the prototype
  • Drags hand to change movie, thumbs up to confirm
  • Drags hand to change showtime, thumbs up to confirm
  • Does one of the corner postures to choose the type of ticket, repeats until all tickets needed are added, thumbs up to confirm
  • Slides the card for payment

During implementation, we found that with the current Kinect SDK, it is very difficult to reliably detect thumbs up/down, especially when the hand is in front of the body. We looked for alternatives, such as having the user nod or shake their head, but the current SDK does not provide enough features to detect those actions either. Given the SDK is still in tech preview, we understand that problems and missing features are to be expected, but we cannot afford to wait for its final release.

Therefore, we revised the workflow by merging some of the steps into one screen, removing the showtime selection feature, and requiring each user to represent themselves when choosing a ticket type. Our logic was to protect the quality of the whole experience by cutting use cases that happen less frequently. The workflow we will show in the demo is now:

  • User walks up to the prototype
  • Drags hand to change movie
  • Does one of the corner postures to choose the type of ticket
  • Swipes the payment card

We are planning to continue working on the prototype after the quarter ends, and we expect the SDK to stabilize soon, so we’ll be able to add back the cut features.

Card Scanner

We wanted to make a credit card scanner so users can actually slide their card when they use the prototype, making the experience more realistic. However, we don’t really need their credit card information, so we made a fake scanner that just detects the action of sliding a card. This is what the scanner currently looks like. Compared to last week’s model, we rounded the corners on each end of the slit to make it easier to slide a card in.


One idea is to have an LED and a light sensor mounted on opposite sides of the slit, so that when a user slides a card through, the card blocks the light and produces a detectable change in brightness. Another LED will be put behind the hole on the front of the scanner and turned on to indicate the scanner is activated and ready to scan.


Using a circuit like the one above, we convert the resistance of the light sensor into an analog input for the Arduino. We use three 1kΩ resistors instead of one because our photosensor ranges from 16kΩ to 2MΩ, meaning its resistance is 16kΩ in full light and 2MΩ in full darkness; the larger fixed resistance increases the voltage delta between the light and dark cases. Below are the units soldered into a component.

2014-05-26 17.55.07

Then we added two LEDs to the component. Both LEDs will be turned on at the same time so they share the digital output and ground.

2014-05-26 18.29.35

We used a 6-pin header to arrange the wires neatly. This step is optional, but since our engineer is obsessive about such things, it became required. The six pins in the picture are 5V, light reading, LED, unused, unused, and GND, and the header can be easily wired to the Arduino.

2014-05-26 18.48.44

The scanner model was also revised: we added walls inside to make it easier to attach the circuit.

2014-05-26 19.46.21

The Arduino code is as simple as this: get the reading from the light sensor and print it to the serial port; if there is data on the serial port, read one byte indicating whether the LEDs should be on or off, then set the LEDs accordingly.

void loop() {
  // Report the current light level to the laptop.
  int reading = analogRead(PIN_LIGHT);
  Serial.println(reading);
  // If the laptop sent a byte, use it to switch the LEDs on or off.
  if (Serial.available() > 0) {
    int active = Serial.read();
    digitalWrite(PIN_LED, active ? HIGH : LOW);
  }
}

The Arduino is connected to a laptop through USB and the C# code on the laptop reads from a COM port to get the light reading and writes a byte to the port to turn on or off the LEDs.

And here is the final look:

2014-05-26 21.16.20

And a close-up shot to show the inside of the scanner:

2014-05-26 21.18.23

Posters, Presentations, and Paraphernalia

The rest of our focus this week was on making sure we had everything ready for the big events next week: our presentation on Monday and the poster session on Wednesday.

The presentation is a chance for us to show off all the work we have done over the past 10 weeks, and just putting it together was a great way to reflect on the whole process so far. Saturday was the first time we got to set up the prototype we’ll be demoing on Wednesday. To cover that part of the presentation, we shot the video at the top of this post. The rest of Saturday was spent making sure we had the content we needed and designing an overall look for the slide deck.

We met with our adviser, Andy, on Wednesday to go over our poster design, and from there set out to put together the assets needed for it. We debated exactly how to lay out the poster and ultimately committed to an elaborate, innovative concept involving interactive panels and a multilayered design. We’re all excited about it and will share it on the blog soon.

Open House poster landscape

Finally, we designed some more assets we needed for the poster session. For one, there’s the beautiful movie poster above, which you can see in person if you stop by the HUB lyceum on Wednesday. We also finally agreed on a logo we liked, which is just nice to have. And we got caught up on a handful of other tasks too. Like everything else, these materials will be uploaded to the blog as soon as they’re finished.

Milestone Three

Milestone Three documents our low-fi prototyping stage. It shows how we took the basic ideas from our ideation phase and developed them into a prototype complex enough to use in a usability test with real potential users.

Open deck

Prototyping, Part Three

Holiday weekend, shmoliday shmeekend! That’s the Bixcreen motto, as we were busy at work on Friday, Saturday, and Sunday of Memorial Day weekend. Actually, we were busy the rest of the week too… After all, there are only a couple of weeks left in this quarter.


In class on Monday we took advantage of being on campus by finding a spot where we could possibly set up a demo unit during the poster session on June 4, which is basically our prototype deadline. With a space picked out and plans in motion, we began rapidly prototyping the touch model, knowing we needed to usability test by the end of the week.

So far, we have been through several design iterations and are now working on the fifth round of the prototype before moving into the final visual design. With prototype version 4, we performed usability tests with participants and got some great feedback on what we need to improve in our overall design. Prototyping will be done by the end of this week, and then we plan to focus on the final designs.

Once we had agreed on the look for all the screens, we developed them into an interactive prototype. Since we knew we would be using an iPad for the usability test, we decided to use Apple’s Keynote. We put together a deck that allows users to go through our screens in the way we directed them to, complete with flashy animations to make it seem more real. One thing we were missing was support for swipes, as Keynote only supports taps from the user. This came up in our usability testing.

Usability Test

We conducted our first official usability test on Friday, shortly after finishing the interactive prototype.

We conducted six sessions with a total of eight participants (two groups of two) using an iPad mini loaded with our prototype. Demographically, we had four males and four females, with ages ranging from early 20s to mid-50s.


Our findings are separated into three categories: findings about the process, findings about the design, and findings about user opinions.


  • Participants were confused by the “View 3D Showtimes” button and did not notice it quickly.
  • Some participants did not understand the distinction between IMAX and 3D.
  • When customizing, users did not understand what the ticket tabs meant or when a poster had been selected.


  • A majority of participants failed to notice the % of seats sold indicated on the showtime.
  • One participant who did notice misunderstood “60% SOLD” to mean the showing was sold out, likely due to the capital lettering.
  • One participant commented on the size of the ‘+’ and ‘-’ buttons as being too small for the screen size, but suggested a larger screen could preclude that issue.
  • Participants tried to swipe where we expected, but we had not yet implemented that gesture.
  • The animated posters were praised and participants actually would like to see more of them.


  • Average ease-of-use Likert scale score: 1.7 (1 – Very Easy, 7 – Very Difficult)
  • Average satisfaction Likert scale score: 1.5 (1 – Very Satisfied, 7 – Not Satisfied at All)
  • Participants commented positively on the highly visual and interactive nature of our design versus current kiosks.
  • All participants felt the length of the interaction was appropriate and may be even shorter than current methods.
  • All participants greatly enjoyed the ability to customize tickets with a movie poster.

Overall, participants claimed they would use our device to purchase tickets if it were available. Even the two participants who indicated they only buy tickets from the box office and had never used a kiosk before felt they would use our product. Hearing this feedback is a wonderful indication that Bixcreen is on the correct path.

Next Steps

Microsoft’s empty conference rooms hosted our group meetings on Saturday and Sunday, when we committed to a plan for the final couple of weeks as well as assembled Milestone 3. That document will be posted here shortly.

We debated how to move forward, given that we still had two prototypes and were not sure whether a fully featured gesture prototype would be possible in the time we had left. But we agreed that to best demonstrate our vision for the product, we would have to have something working for the poster session. So once again we debated what screens needed to be there, what would be on them, and what gestures would be needed to use them.


We will implement gesture detection based on the skeleton data from the Kinect. There might be technical difficulties in writing algorithms to recognize gestures like thumbs-up, so we might need to revise the design without sacrificing the experience.


In order to bring a complete experience to the demo, we are also working on a fake credit card scanner. The scanner will be 3D-printed and built with an Arduino. An LED and a light sensor will be put on opposite sides of the slit, so when a card is slid through, it blocks the light to the sensor; we can detect that drop and treat it as a card swipe.


Along with the gesture prototype, we agreed to develop the touch prototype a little further, so that we could have a polished version running on an iPad at the poster session. Speaking of which, we still need to finish our printed poster, as well as numerous other important administrative tasks. June 4th is just around the corner, and we’ve got to make sure everything will be ready. Stay tuned.

Prototyping, Part Two

This week we continued working on prototypes and used time with our peers to run a quick pilot usability study.

Pilot Usability Test

We used our last critical friends group meeting as a chance to conduct a pilot version of our usability test, using the prototypes we worked on last week. That meant setting up two separate studies, one with the gesture prototype and one with the touch prototype. We took our users into separate rooms and had them run through the tasks we came up with, recording things like time-on-task and failures/successes. Their feedback:

Gesture Test

  • Gesturing to the four corners proved easier and faster than the set of “unique” gestures to add a type of movie ticket.
  • For the unique gestures, participants found the adult and child poses preferable and easier to perform.
  • For the corner gestures, participants found the lower corners easier to perform.
  • For the corner gestures, a few participants mentioned gesturing for an upper corner made them feel “silly” or “exposed.”
    • To remedy this, we can identify the upper corner gestures as a raised hand 90 degrees from the elbow, a less exaggerated pose.
  • Time on task for the corner gestures was nearly identical across participants. This consistency is nice to aim for.
  • Unique gestures are not off the table, but the particular gestures we tested likely are.

Touch-Based Prototype Walkthrough

  • When presented with the pickup or purchase options, the nav arrows on the sides of the screen are not a clear indication of what action can be taken.
  • We should try 3 main buttons: Pick Up Tickets, Purchase Tickets for MOVIE TITLE, or See Another Movie.
  • When selecting another movie, the interface should proceed with the purchase, not require an additional push of the “Purchase Tickets” button.
  • The time listings have some numbers in blue that are difficult to see on a black background.
  • It is unclear how to get to more times.
  • The overall process was extremely quick. Even with think-out-loud and forcing the participant to change their order, the entire interaction took around a minute.
  • The critical friends would like to see the customize ticket screen implemented as well as the pick-up tickets scenario.

Overall, our critical friends reported they liked both methods of interaction and suggested if we can’t decide on one to try allowing for both interactions in the way we design the prototypes.


Based on the above feedback, we set out to refine our prototypes so that we could conduct real usability tests in the next week or two. We decided that we would use the touch-based prototype as our primary interface and retrofit the gesture interactions onto it once that prototype is further along. As it stands right now, Yongji is hard at work with the Xbox One Kinect SDK, and the rest of us are putting together the interface prototype. We have also started writing up a formal task list and putting together the whole test kit. All of these things will be shared on this blog once they are completed.

Milestone Two

Here is the second of our three major milestones. This artifact showcases our ideation process, as we took the user research and design requirements from the first milestone as a starting point to design an actual product. It was a lot of sketching, brainstorming, and even a little usability testing, but it was all worth it, since we came out of this stage ready to begin prototyping.

Download PDF

Ideation, Part Two

To recap where we were last week: We started by sketching different concepts. The ideas spanned mobile apps, poster-sized screens, and gesture interfaces. These provided the basis for a brainstorming session, where we established a user task flow. We narrowed down to three concepts and created wireframes to better communicate those ideas. This week began by showing those wireframes to our critical friends to get feedback.

Peer Reviews

During our weekly class meeting, our group was paired up with the AwesomeSquare teams to evaluate each other’s progress. We received feedback based on our wireframes of three different ideas. Their feedback was:

Feedback on Prototype Idea (movie poster) 1:

  • “Is there a way that when people approach they know it’s interactive?”
  • “So the poster changes if you change the movie?”
  • “I like this, but it seems like a slight pivot from what exists now; it’s heading in the right direction… it’s much more visual”
  • “The carousel looks superimposed onto the poster; is there a way to make it more integrated?”
  • Maybe what’s inside the box, isn’t a smaller representation of the larger
  • Maybe go with a greyed out treatment
  • “Would the animations persist while you are buying your ticket?”
  • “I really like the look of it, it really seems movie like”
  • “I like that the buttons are really separated, but when it’s big, the buttons will be further apart and seem more separated”

Feedback on Prototype Idea 2 (sidebar):

  • “I like having both of the types, 3D and Regular together and color coded”
  • “It’s not clear how you drill down into the synopsis after picking the movie”
  • “That showtimes thing at the bottom is a little misleading to me because they look like buttons”
  • “How do you go back?”
  • “Since you have only a strip of touch screen on the side, information architecture needs to be more clear to help guide people through the process”
  • “This one for me, visually for me it was too busy… as a user I may be confused, but the features and functionality is great…”
  • “I’m still distracted by the show times… it makes sense to maybe reduce the number that’s visible”
  • Maybe reducing the number of show times that are visible would be better

Feedback on Prototype Idea 3 (gesture):

  • “Placement of arrow to show picking up tickets is near Spiderman’s genitals”
  • “What if one person comes first and the other person comes later… like one person buying tickets for multiple people?”
  • “You should have a photo ticket in this prototype”
  • “It’s really immersive and really brings that movie going experience”
  • “what happens if you’re NOT with your significant other?”
  • Option to not have customization in your tickets
  • “The button doesn’t have to be a box, I like the fact that the poster is being integrated into ticket buying UI itself”
  • “I think there’s a ton of potential here”
  • When asked if they think this would benefit from having a sub navigation, they said, “No I really like the simplicity of this”
  • It seemed like our critical friends really liked this option

Milestone 2

We spent this last week putting together the ideation milestone, which had to document what we have done so far for the ideation part of the project. I will post the document to the blog shortly.

First Milestone 3 Meeting

On Saturday we had a lengthy meeting at Microsoft’s Redmond campus. We used this as an opportunity to discuss the feedback we got and how we wanted to proceed, given that we are already halfway through the quarter. After discussing the three designs, we initially planned to pick one idea to work on. Ultimately, however, we decided to continue working on two prototypes, as we felt a combined touch screen model and a gesture model were both still viable.

Touch Screen Ideas:

Here are some specs we agreed on:

  • The full poster-sized screen is touchable (24″×38″).
  • User taps anywhere on the screen to start.
  • A modal dialog flies in at a height a little below eye level, which is easiest for users to reach.
  • All the interactions afterwards (except sliding card for payment) happen within the modal dialog.
  • Only the sidebar on the right is touchable. The poster-sized screen does not accept user input.
  • All the interactions (except sliding card for payment) happen on the sidebar.

Gesture Ideas:

For the gesture/motion concept, we have decided:

  • We can use the Kinect SDK to program the prototype.
  • To use gesture-style interaction instead of hot-spot style.
  • To play pre-recorded voice prompts with subtitles displayed on the screen.
  • Voice command recognition is out of scope.

Ideation, Part One

After finishing the user research milestone, we began work on the next stage: ideation. Our goal is to dedicate two weeks to designing a compelling solution to the problems we exposed over the last few weeks. To that end, we have so far done a lot of brainstorming and sketching, and have started work on wireframes.



Last weekend, we all agreed to take what we learned from the milestone as inspiration to start sketching ideas for what we want the Bixcreen device to be. We came to class with a variety of ideas, from phone apps all the way to motion-sensing displays. We spent the rest of the four-hour-long class debating what worked best out of everyone’s ideas, and the best way to move forward. By the end of the night, we had developed an early draft of the user flow, and agreed to meet again over the weekend with even more sketches.

We met again in the Design Lab at UW, where we presented our individual takes on the screens from our flow. Once again, there was much debate over what was and was not working. We started bringing in realistic concerns, talking about the size of the device, where it would be placed, ways of interaction that are technologically feasible and not too expensive, until we could agree on three potential ways of going forward. From there, we started work on our wireframes.


With the early sketching stage behind us, we agreed on three possible ways for the Bixcreen device to work: as a movie poster replacement that is either entirely a touch screen, has a touch screen to one side, or uses Microsoft’s Kinect to recognize gestures. It would also need to accept credit card swipes, possibly be able to scan QR codes or register NFC taps, and, practically, it must be able to print tickets. Since we have a critical friends meeting on Monday, we decided to wireframe these three ideas so our peers could assess them in class.

So Saturday and Sunday were spent developing these three ideas into presentable form. We worked within Google Drive presentations so that all five of us could collaborate on each one. We’re still putting the finishing touches on them for AwareSquare tomorrow, but once they’re ready, we’ll be sure to put up archived versions of them on this blog too.

Milestone One

The first of this project’s three major milestones is complete. This document is a writeup of all the user research we conducted during the first weeks of this quarter. It should demonstrate that we have found a need in the space we investigated and used feedback from potential users to establish design requirements for a possible solution for that need. This is the basis we will draw from as we continue ideating solutions, creating prototypes, and conducting usability tests.

Download PDF