Open House

Before we were ready to show off our prototypes at the HCDE Open House and poster session, we had a lot of small moving parts that had to sync up. Not only did we have to put the final touches on the gesture prototype, but we also had to prepare for our final presentation and the poster session. From making posters and tickets to shooting video, this week was jam-packed with action.

Gesture Prototype

Last week, we had a long discussion on the interaction design of the gesture prototype and agreed on a workflow that uses thumbs up to advance and thumbs down to go back. We divided the interaction into these steps:

  • User walks up to the prototype
  • Drags hand to change movie, thumbs up to confirm
  • Drags hand to change showtime, thumbs up to confirm
  • Does one of the corner postures to choose the type of ticket, repeating until all needed tickets are added, thumbs up to confirm
  • Slides the card for payment

During implementation, we found that with the current Kinect SDK it is very difficult to reliably detect a thumbs up or thumbs down, especially when the hand is in front of the body. We looked at alternatives, such as having the user nod or shake their head, but the current SDK does not provide enough features to detect those actions either. Since the SDK is still a tech preview, problems and missing features are to be expected, but we cannot wait for its final release.

Therefore, we revised the workflow by merging some steps into one screen, removing the showtime selection feature, and requiring each user to represent themselves when choosing a ticket type. Our logic was to protect the quality of the whole experience by cutting the less frequent use cases. The workflow we will show in the demo is now:

  • User walks up to the prototype
  • Drags hand to change movie
  • Does one of the corner postures to choose the type of ticket
  • Swipes the payment card

We are planning to continue working on the prototype after the quarter ends, and we expect the SDK to stabilize soon, so we’ll be able to add the cut features back.

Card Scanner

We wanted to make a credit card scanner so users can actually slide their card when using the prototype, making the experience more realistic. However, we don’t really need their credit card information, so we made a fake scanner that just detects the action of sliding a card. This is what the scanner currently looks like. Compared to last week’s model, we rounded the corners on each end of the slit to make it easier to slide a card in.

[Image: scanner model, version 3]

One idea is to have an LED and a light sensor mounted on either side of the slit, so that when a user slides a card, the card blocks the light and we can detect the change in brightness. Another LED will be put behind the hole on the front of the scanner and turned on to indicate that the scanner is activated and ready to scan.

[Image: Arduino light-sensor (LDR) voltage divider schematic]

Using a circuit like the one above, we convert the resistance of the light sensor into an analog input for the Arduino. Our photosensor is rated 16kΩ–2MΩ, meaning its resistance is about 16kΩ in full light and up to 2MΩ in full darkness. We used three 1kΩ resistors in series instead of one to increase the voltage delta between the light and dark cases: assuming a 5V supply with the fixed resistors on the ground side of the divider, as in the schematic, the analog pin sees roughly 5V × 3k/(3k + 16k) ≈ 0.79V in the light and nearly 0V in the dark, whereas a single 1kΩ resistor would give only about 0.29V of swing. Below are the units soldered into a component.

[Photo: the light sensor and resistors soldered into a component]

Then we added two LEDs to the component. Both LEDs are turned on at the same time, so they can share one digital output and ground.

[Photo: the component with the two LEDs added]

We used a 6-pin header to arrange the wires and keep things neat. This step is optional, but since our engineer is obsessive about such things, it became required. The six pins in the picture are 5V, light reading, LED, unused, unused, and GND, and the header can be easily wired to the Arduino.

[Photo: the component wired through the 6-pin header]

The scanner model was also revised: we added walls inside to make it easier to attach the circuit.

[Photo: revised scanner model with interior mounting walls]

The Arduino code is simple: read the light sensor and print the value to the serial port; if data has arrived on the serial port, read one byte indicating whether the LEDs should be on or off, then set the LEDs accordingly. (The pin assignments below are just an example.)

const int PIN_LIGHT = A0;  // analog input from the divider (example pins)
const int PIN_LED = 13;    // digital output shared by both LEDs
int active = LOW;          // whether the LEDs should be on

void setup() {
  Serial.begin(9600);      // same baud rate as the laptop side
  pinMode(PIN_LED, OUTPUT);
}

void loop() {
  int reading = analogRead(PIN_LIGHT);
  Serial.println(reading);                             // report light level
  if (Serial.available() > 0) active = Serial.read();  // one byte: LEDs on/off
  digitalWrite(PIN_LED, active);
}

The Arduino is connected to a laptop through USB, and the C# code on the laptop reads from a COM port to get the light reading and writes a byte to the port to turn the LEDs on or off.
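
For the curious, here is a minimal sketch of what that laptop-side loop could look like. The port name, baud rate, and swipe threshold are placeholders to be matched against the actual setup; the real prototype wires the reading into the rest of the app rather than printing it.

using System;
using System.IO.Ports;

class ScannerBridge
{
    static void Main()
    {
        // "COM3" and 9600 baud are placeholders; match the Arduino sketch.
        using (var port = new SerialPort("COM3", 9600))
        {
            port.Open();
            port.Write(new byte[] { 1 }, 0, 1); // any nonzero byte: LEDs on

            while (true)
            {
                // Each line from the Arduino is one analogRead() value (0-1023).
                int reading = int.Parse(port.ReadLine().Trim());

                // A sharp drop in brightness means a card is blocking the sensor.
                // The threshold is a guess; calibrate against ambient light.
                if (reading < 200)
                    Console.WriteLine("Card swipe detected");
            }
        }
    }
}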

And here is the final look:

[Photo: the finished scanner]

And a close-up shot to show the inside of the scanner:

[Photo: close-up of the inside of the scanner]

Posters, Presentations, and Paraphernalia

The rest of our focus this week was on making sure we had everything ready for the big events next week: our presentation on Monday and the poster session on Wednesday.

The presentation is a chance for us to show off all the work we have done over the past 10 weeks, and just putting it together was a great way to reflect on the whole process so far. Saturday was the first time we got to set up the prototype we’ll be demoing on Wednesday. To make that part of the presentation, we shot the video at the top of this post. The rest of Saturday was spent making sure we had the content we needed and designing an overall look for the slide deck.

We met with our adviser, Andy, on Wednesday to go over our poster design, and from there set out to put together the assets needed for it. We debated exactly how to lay out the poster and ultimately committed to an elaborate, innovative concept involving interactive panels and a multilayered design. We’re all excited about it and will share it on the blog soon.

[Image: movie poster for the Open House, landscape]

Finally, we designed some more assets for the poster session. For one, there’s the beautiful movie poster above, which you can see in person if you stop by the HUB Lyceum on Wednesday. We also finally agreed on a logo we liked, which is just nice to have. And we got caught up on a handful of other tasks too. Like everything else, these materials will be uploaded to the blog as soon as they’re finished.

Milestone Three

Milestone Three documents our low-fi prototyping stage. It shows how we took the basic ideas from our ideation and developed them into a prototype complex enough to use in a usability test with real potential users.

Open deck

Prototyping, Part Three

Holiday weekend, shmoliday shmeekend! That’s the Bixcreen motto, as we were busy at work on Friday, Saturday, and Sunday of Memorial Day weekend. Actually, we were busy the rest of the week too… After all, there are only a couple of weeks left in this quarter.

Prototyping

In class on Monday we took advantage of being on campus by finding a spot where we could possibly set up a demo unit during the poster session on June 4, which is basically our prototype deadline. With a space picked out and plans in motion, we began rapidly prototyping the touch model, knowing we needed to usability test by the end of the week.

So far we have been through a lot of design iterations on our prototypes, and we are now working on the fifth round before moving into the final visual design. With prototype version 4, we performed usability tests with participants and got some great feedback on what we need to improve in our overall design. Prototyping will be done by the end of this week, and then we plan to focus on the final designs.

Once we had agreed on the look for all the screens, we developed them into an interactive prototype. Since we knew we would be using an iPad for the usability test, we decided to use Apple’s Keynote. We put together a deck that allows users to go through our screens in the way we directed them to, complete with flashy animations to make it seem more real. One thing we were missing was support for swipes, as Keynote only supports taps from the user. This came up in our usability testing.

Usability Test

We conducted our first official usability test on Friday, shortly after finishing the interactive prototype.

We conducted six sessions with a total of eight participants (two groups of two) using an iPad mini loaded with our prototype. Demographically, we had four males and four females, with ages ranging from the early 20s to the mid-50s.

Findings

Our findings are separated into three categories: findings about the process, findings about the design, and findings about user opinions.

Process

  • Participants were confused by the “View 3D Showtimes” button and did not notice it quickly.
  • Some participants did not understand the distinction between IMAX and 3D.
  • When customizing, users did not understand what the ticket tabs meant or when a poster had been selected.

Design

  • A majority of participants failed to notice the % of seats sold indicated on the showtime.
  • One participant who did notice misunderstood “60% SOLD” to mean the showing was sold out, likely due to the capital lettering.
  • One participant commented that the ‘+’ and ‘-’ buttons were too small for the screen size, but suggested a larger screen could resolve the issue.
  • Participants tried to swipe where we expected, but we had not yet implemented that gesture.
  • The animated posters were praised, and participants would actually like to see more of them.

Opinion

  • Average ease-of-use Likert scale score: 1.7 (1 – Very Easy, 7 – Very Difficult)
  • Average satisfaction Likert scale score: 1.5 (1 – Very Satisfied, 7 – Not Satisfied at All)
  • Participants commented positively on the highly visual and interactive nature of our design versus current kiosks.
  • All participants felt the length of the interaction was appropriate, and might even be shorter than with current methods.
  • All participants greatly enjoyed the ability to customize tickets with a movie poster.

Overall, participants claimed they would use our device to purchase tickets if it were available. Even the two participants who indicated they only buy tickets from the box office and had never used a kiosk before felt they would use our product. This feedback is a wonderful indication that Bixcreen is on the right path.

Next Steps

Microsoft’s empty conference rooms hosted our group meetings on Saturday and Sunday, where we committed to a plan for the final couple of weeks and assembled Milestone 3. That document will be posted here shortly.

We debated how to move forward, given that we still had two prototypes and were not sure a fully featured gesture prototype could be built in the time we had left. But we agreed that to best demonstrate our vision for the product, we would have to try to have something working for the poster session. So once again we debated what screens needed to be there, what would be on them, and what gestures would be needed to use them.

[Image: Kinect skeleton tracking data]

We will implement gesture detection based on the skeleton data from the Kinect. There may be technical difficulties in writing algorithms to recognize gestures like thumbs-up, so we may need to revise the design without sacrificing the experience.
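
To give a sense of the approach, here is a rough sketch of how a corner posture might be classified from the skeleton joints. The type names come from the preview K4Wv2 SDK and may change, and the thresholds and right-hand-only simplification are our own guesses to be tuned during testing.

using Microsoft.Kinect;

enum Corner { None, UpperLeft, UpperRight, LowerLeft, LowerRight }

static class CornerGestures
{
    // Classify a tracked body into one of the four corner postures,
    // checking only the right hand for brevity.
    public static Corner Detect(Body body)
    {
        var hand  = body.Joints[JointType.HandRight].Position;
        var head  = body.Joints[JointType.Head].Position;
        var spine = body.Joints[JointType.SpineMid].Position;

        bool upper = hand.Y > head.Y;           // hand raised above the head
        bool lower = hand.Y < spine.Y;          // hand dropped below mid-spine
        // 0.3m reach to either side; camera-space X sign may need flipping.
        bool right = hand.X > spine.X + 0.3f;
        bool left  = hand.X < spine.X - 0.3f;

        if (upper && left)  return Corner.UpperLeft;
        if (upper && right) return Corner.UpperRight;
        if (lower && left)  return Corner.LowerLeft;
        if (lower && right) return Corner.LowerRight;
        return Corner.None;
    }
}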

[Photos: the scanner model being 3D-printed, and the printed result]

In order to bring a complete experience to the demo, we are also working on a fake credit card scanner. The scanner will be 3D-printed and built with an Arduino. An LED and a light sensor will be placed on either side of the slit, so when a card is slid through, it blocks the light to the sensor; we can detect the drop and treat it as a card swipe.

[Photo: work-in-progress Arduino scanner prototype]

Along with the gesture prototype, we agreed to develop the touch prototype a little further, so that we can have a polished version running on an iPad at the poster session. Speaking of which, we still need to finish our printed poster, as well as numerous other administrative tasks. June 4th is just around the corner, and we’ve got to make sure everything will be ready. Stay tuned.

Prototyping, Part Two

This week we continued working on prototypes and used a session with our peers to run a quick pilot usability study.

Pilot Usability Test

We used our last critical friends group meeting as a chance to conduct a pilot version of our usability test, using the prototypes we worked on last week. That meant setting up two separate studies: one with the gesture prototype and one with the touch prototype. We took our users into separate rooms and had them run through the tasks we came up with, recording things like time on task and failures/successes. Their feedback:

Gesture Test

  • Gesturing to the four corners proved easier and faster than the set of “unique” gestures to add a type of movie ticket.
  • For the unique gestures, participants found the adult and child poses preferable and easier to perform.
  • For the corner gestures, participants found the lower corners easier to perform.
  • For the corner gestures, a few participants mentioned gesturing for an upper corner made them feel “silly” or “exposed.”
    • To remedy this, we can redefine the upper corner gestures as a hand raised 90 degrees from the elbow, a less exaggerated pose.
  • Time on task for the corner gestures was nearly identical across participants. This consistency is something to aim for.
  • Unique gestures are not off the table, but the particular gestures we tested likely are.

Touch-Based Prototype Walkthrough

  • When presented with the pickup or purchase options, the nav arrows on the sides of the screen are not a clear indication of what action can be taken.
  • We should try 3 main buttons: Pick Up Tickets, Purchase Tickets for MOVIE TITLE, or See Another Movie.
  • When selecting another movie, the interface should proceed with the purchase, not require an additional push of the “Purchase Tickets” button.
  • The time listings have some numbers in blue that are difficult to see on a black background.
  • It is unclear how to get to more times.
  • The overall process was extremely quick. Even with think-aloud and forcing the participant to change their order, the entire interaction took around a minute.
  • The critical friends would like to see the customize ticket screen implemented as well as the pick-up tickets scenario.

Overall, our critical friends reported that they liked both methods of interaction and suggested that if we can’t decide on one, we design the prototypes to allow for both.

Prototyping

Based on the above feedback, we set out to refine our prototypes so that we can conduct real usability tests in the next week or two. We decided that we will use the touch-based prototype as our primary interface and retrofit the gesture interactions onto it once that prototype is further along. As it stands right now, Yongji is hard at work with the Xbox One Kinect SDK, and the rest of us are putting together the interface prototype. We have also started writing up a formal task list and putting together the whole test kit. All of these things will be shared on this blog once they are completed.

Prototyping, Part One

This was an exciting week for the Bixcreen Team!

This week, we greatly advanced our touch and gesture-based prototypes to reflect the feedback given to us by our critical friends, while remembering to remain true to our original research results captured in Milestone 1. Our prototype iteration involved melding two touch screen prototypes into one, producing an end product that reflects the best of both designs, as well as exploring two types of gesture-based interaction modalities.

Touch-Based Interaction Design

[Image: touch design iterations]

This week’s touch-based designing meant iterating with the strengths and weaknesses of touch interaction in mind: making the interface options simpler, the touch buttons bigger, and each screen focused on a particular step in our pathway. The image above reflects our thoughts on redesigning the pathways and flow for our new touch-based experience. With our new direction and melded design, we are poised to focus on and improve the overall experience while, hopefully, decreasing the learning curve required to successfully navigate our UI.

Gesture-Based Interaction Design

[Image: the upgraded Xbox One sensor]

This week our project received tangible support from the Lead Interaction Designer for the Xbox One, Tim Franklin. Tim met up with two of our team members to give feedback on our current prototype ideas, discuss the limitations of the Xbox 360 and Xbox One sensors, and explain the potential of using an Xbox One sensor instead of an Xbox 360 sensor. Tim then helped the Bixcreen team apply for an internal Kinect for Windows v2 (K4Wv2) Software Development Kit (SDK). Since two of our team members already work at Microsoft, we were instantly approved for two Kinect for Windows v2 kits, which we received only two days after our meeting with Tim. This is particularly exciting given that the K4Wv2 SDK won’t be available to the public until the middle of this summer! Big kudos to Tim!

[Image: Unique Gesture Method]

[Image: 4-Corners Method]

With the new sensor available to our team, new functionality can be implemented into our design, so we will need to explore our newly available options. These include upgrades to hand states (particularly grip/release and press functionality), a more robust voice detection system and mic array, access to more gesture options, more stable cursor control, and a longer, more accurate skeletal tracking range (from 3.5m to 4m). This week we explored a Four Corners approach to gesturing, which simply means raising one arm toward a corner of the screen to identify the ticket type: Adult, Child, Senior, or Student. The Unique Gesture model instead incorporates four unique poses to select a particular ticket type. The illustrations above show these gesture models, starting with the Unique Gesture Method followed by the Four Corners Method.
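
As a small example, with the new hand-state API a grip check should be as simple as something like the following (again, type names are from the preview SDK and subject to change):

using Microsoft.Kinect;

static class HandChecks
{
    // True if either hand is closed into a fist (a grip).
    public static bool IsGripping(Body body)
    {
        return body.HandRightState == HandState.Closed
            || body.HandLeftState == HandState.Closed;
    }
}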

We finished the week putting the finishing touches on the two prototypes for our pilot usability test, which will be on Monday. The task lists are written, the technology is ready, the time has come. Stay tuned for next week!

Milestone Two

Here is the second of our three major milestones. This artifact showcases our ideation process, as we took the user research and design requirements from the first milestone as a starting point to design an actual product. It was a lot of sketching, brainstorming, and even a little usability testing, but it was all worth it, since we came out of this stage ready to begin prototyping.

Download PDF

Ideation, Part Two

To recap where we were last week: we started by sketching different concepts, spanning mobile apps, poster-sized screens, and gesture interfaces. These sketches provided the basis for a brainstorming session, where we established a user task flow. We narrowed down to three concepts and created wireframes to better communicate those ideas. This week began by showing those wireframes to our critical friends to get feedback.

Peer Reviews

During our weekly class meeting, our group was paired up with the AwesomeSquare teams to evaluate each other’s progress. Their feedback on our wireframes of the three different ideas:

Feedback on Prototype Idea 1 (movie poster):

  • “Is there a way that when people approach they know it’s interactive?”
  • “So the poster changes if you change the movie?”
  • “I like this but this seems like a slight pivot from what exists now, but it’s heading in the right direction… it’s much more visual”
  • “The carousel looks super imposed onto the poster, is there a way to be more integrated?”
  • Maybe what’s inside the box, isn’t a smaller representation of the larger
  • Maybe go with a greyed out treatment
  • “Would the animations persist while you are buying your ticket?”
  • “I really like the look of it, it really seems movie like”
  • “I like that the buttons are really separated, but when it’s big, the buttons will be further apart and seem more separated”

Feedback on Prototype Idea 2 (sidebar):

  • “I like having both of the types, 3D and Regular together and color coded”
  • “It’s not clear how you drill down into the synopsis after picking the movie”
  • “That showtimes thing at the bottom is a little misleading to me because they look like buttons”
  • “How do you go back?”
  • “Since you have only a strip of touch screen on the side, information architecture needs to be more clear to help guide people through the process”
  • “This one for me, visually for me it was too busy… as a user I may be confused, but the features and functionality is great…”
  • “I’m still distracted by the show times… it makes sense to maybe reduce the number that’s visible”
  • Maybe reducing the number of show times that are visible would be better

Feedback on Prototype Idea 3 (gesture):

  • “Placement of arrow to show picking up tickets is near Spiderman’s genitals”
  • “What if one person comes first and the other person comes later… like one person buying tickets for multiple people?”
  • “You should have a photo ticket in this prototype”
  • “It’s really immersive and really brings that movie going experience”
  • “what happens if you’re NOT with your significant other?”
  • Option to not have customization in your tickets
  • “The button doesn’t have to be a box, I like the fact that the poster is being integrated into ticket buying UI itself”
  • “I think there’s a ton of potential here”
  • When asked if they think this would benefit from having a sub navigation, they said, “No I really like the simplicity of this”
  • It seemed like our critical friends really liked this option

Milestone 2

We spent this last week putting together the ideation milestone, which documents what we have done so far in the ideation part of the project. I will post the document to the blog shortly.

First Milestone 3 Meeting

On Saturday we had a lengthy meeting at Microsoft’s Redmond campus. We used it as an opportunity to discuss the feedback we got and how we wanted to proceed, given that we are already halfway through the quarter. After discussing all three designs, we intended to pick the one idea we would keep working on. Ultimately, however, we decided to continue working on two prototypes, as we felt a combined touch screen model and a gesture model were both still viable.

Touch Screen Ideas:

Here are some specs we agreed on, covering the two touch concepts still in play.

For the full-screen (movie poster) concept:

  • The full poster-sized screen is touchable (24″ × 38″).
  • The user taps anywhere on the screen to start.
  • A modal dialog flies in at a height a little below eye level, which is easiest for users to reach.
  • All interactions afterwards (except sliding the card for payment) happen within the modal dialog.

For the sidebar concept:

  • Only the vertical sidebar on the right is touchable. The poster-sized screen does not accept user input.
  • All interactions (except sliding the card for payment) happen on the sidebar.

Gesture Ideas:

For the gesture/motion concept, we have decided:

  • We can use the Kinect SDK to program the prototype.
  • To use gesture-style interaction instead of hot-spot style.
  • To play pre-recorded voice prompts, with subtitles displayed on the screen.
  • Voice command recognition is out of scope.