Milestone Three documents our low-fi prototyping stage. It shows how we took the basic ideas from our ideation and developed them into a prototype complex enough to use in a usability test with real potential users.
Holiday weekend, shmoliday shmeekend! That’s the Bixcreen motto, as we were busy at work on Friday, Saturday, and Sunday of Memorial Day weekend. Actually, we were busy the rest of the week too… After all, there are only a couple of weeks left in this quarter.
In class on Monday we took advantage of being on campus by finding a spot where we could possibly set up a demo unit during the poster session on June 4, which is basically our prototype deadline. With a space picked out and plans in motion, we began rapidly prototyping the touch model, knowing we needed to usability test by the end of the week.
So far, we have been through many design iterations and are now working on the fifth round of prototypes before moving into the final visual design. With prototype version 4, we performed usability tests with participants and got some great feedback on what we need to improve in our overall design. Prototyping will be done by the end of this week, and we are planning to focus on the final designs.
Once we had agreed on the look for all the screens, we developed them into an interactive prototype. Since we knew we would be using an iPad for the usability test, we decided to use Apple’s Keynote. We put together a deck that lets users step through our screens along the paths we directed, complete with flashy animations to make it feel more real. One thing we were missing was swipe support, since Keynote only registers taps from the user. This came up in our usability testing.
We conducted our first official usability test on Friday, shortly after finishing the interactive prototype.
We conducted six sessions with a total of eight participants (two groups of two) using an iPad mini loaded with our prototype. Demographically, we had four males and four females with an approximate age range from early 20s to mid-50s.
Our findings are separated into three categories: findings about the process, findings about the design, and findings about user opinions.
- Participants were confused by the “View 3D Showtimes” button and did not notice it quickly.
- Some participants do not understand the distinction between IMAX and 3D.
- When customizing, users did not understand what the ticket tabs meant or when a poster had been selected.
- A majority of participants failed to notice the % of seats sold indicated on the showtime.
- One participant who did notice misunderstood “60% SOLD” to mean the showing was sold out, likely due to the capital lettering.
- One participant commented that the ‘+’ and ‘-’ buttons were too small for the screen size, but suggested a larger screen would avoid the issue.
- Participants tried to swipe where we expected, but we had not yet implemented that gesture.
- The animated posters were praised and participants actually would like to see more of them.
- Average ease-of-use Likert score: 1.7 (1 – Very Easy, 7 – Very Difficult)
- Average satisfaction Likert score: 1.5 (1 – Very Satisfied, 7 – Not Satisfied at All)
- Participants commented positively on the highly visual and interactive nature of our design versus current kiosks.
- All participants felt the length of the interaction was appropriate and may be even shorter than current methods.
- All participants greatly enjoyed the ability to customize tickets with a movie poster.
Overall, participants claimed they would use our device to purchase tickets if it were available. Even the two participants who indicated they only buy tickets from the box office and had never used a kiosk before felt they would use our product. Hearing this feedback is a wonderful indication that Bixcreen is on the correct path.
Microsoft’s empty conference rooms hosted our group meetings on Saturday and Sunday, when we committed to a plan for the final couple of weeks as well as assembled Milestone 3. That document will be posted here shortly.
We debated how to move forward, given that we still had two prototypes and were not sure a fully featured gesture prototype would be possible in the time we had left. But we agreed that to best demonstrate our vision for the product, we would need to have something working for the poster session. So once again we debated which screens needed to be there, what would be on them, and which gestures were needed to use them.
We will implement gesture detection based on the skeleton data from the Kinect. There may be technical difficulties in writing algorithms to recognize gestures like a thumbs-up, so we might need to revise the design without sacrificing the experience.
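To make this concrete, here is an illustrative sketch (in Python, not the actual Kinect SDK code) of the kind of pose check we have in mind: classifying a raised arm from skeleton joint positions. The joint tuples and the simple above-the-shoulder rule are our own assumptions, not the SDK’s API.

```python
# Illustrative pose check, assuming each joint is an (x, y) tuple with
# y increasing upward (the real implementation would read joints from
# the Kinect skeleton stream).

def is_arm_raised(hand, elbow, shoulder):
    """True if the hand is above both the shoulder and the elbow,
    i.e. the arm is raised."""
    return hand[1] > shoulder[1] and hand[1] > elbow[1]
```

A check this simple is robust to noisy joint data, which is why we may fall back to poses like this if finer gestures (e.g. thumbs-up) prove unreliable.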
In order to bring a complete experience to the demo, we are also working on a fake credit card scanner. The scanner will be 3D-printed and built with an Arduino. An LED and a light sensor will be placed on either side of the slot, so that when a card is slid through it blocks the light to the sensor, which we can detect and treat as a card-swipe action.
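The detection logic is just edge detection on the sensor reading. Sketched in Python for clarity (the real version will run on the Arduino, polling the sensor), with a threshold value that is purely an assumption for illustration:

```python
# Sketch of the light-gate swipe detector: a swipe is one
# falling-then-rising transition in the sensor readings —
# the card blocks the light, then clears the slot.

THRESHOLD = 300  # assumed sensor value separating "blocked" from "lit"

def count_swipes(readings, threshold=THRESHOLD):
    """Count complete card passes in a stream of light-sensor readings."""
    swipes = 0
    blocked = False
    for value in readings:
        if not blocked and value < threshold:
            blocked = True       # card has entered the slot
        elif blocked and value >= threshold:
            blocked = False      # card has passed through
            swipes += 1
    return swipes
```

On the Arduino side the same loop would read the sensor each tick and fire the “card swiped” event on the rising edge.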
Alongside the gesture prototype, we agreed to develop the touch prototype a little further, so that we can have a polished version running on an iPad at the poster session. We also still need to finish our printed poster, along with numerous other administrative tasks. June 4th is just around the corner, and we’ve got to make sure everything will be ready. Stay tuned.
Here’s a writeup of our usability test, complete with the test kit. We tested the touch-based interface using an iPad, running six sessions in total.
Here is a video walkthrough of the iPad prototype of the touch screen model of our product. We used this prototype for our first usability test.
This week we continued working on prototypes and used our peers as a chance to run a quick pilot usability study.
Pilot Usability Test
We used our last critical friends group meeting as a chance to conduct a pilot version of our usability test, using the prototypes we worked on last week. That meant setting up two separate studies, one with the gesture prototype and one with the touch prototype. We took our users into separate rooms and had them run through the tasks we came up with, recording things like time-on-task and failures/successes. Their feedback:
- Gesturing to the four corners proved easier and faster than the set of “unique” gestures to add a type of movie ticket.
- For the unique gestures, participants found the adult and child poses preferable and easier to perform.
- For the corner gestures, participants found the lower corners easier to perform.
- For the corner gestures, a few participants mentioned gesturing for an upper corner made them feel “silly” or “exposed.”
- To remedy this, we can identify the upper corner gestures as a raised hand 90 degrees from the elbow, a less exaggerated pose.
- Time on task for the corner gestures was nearly identical across participants. That kind of consistency is worth aiming for.
- Unique gestures are not off the table, but the particular gestures we tested likely are.
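The “90 degrees from the elbow” remedy above amounts to checking that the forearm is close to vertical. A hypothetical sketch of that check (our own joint tuples and tolerance, not SDK code):

```python
import math

def is_forearm_raised(elbow, hand, tolerance_deg=20.0):
    """True if the elbow-to-hand vector points roughly straight up,
    within tolerance_deg of vertical. Joints are (x, y) tuples with
    y increasing upward; the 20-degree tolerance is an assumption."""
    dx = hand[0] - elbow[0]
    dy = hand[1] - elbow[1]
    if dy <= 0:
        return False  # hand at or below the elbow: not raised
    # Angle between the forearm vector and straight-up.
    angle = math.degrees(math.atan2(abs(dx), dy))
    return angle <= tolerance_deg
```

Because only the forearm needs to move, this pose is much less exaggerated than reaching toward an upper corner of the screen.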
Touch-Based Prototype Walkthrough
- When presented with the pickup or purchase options, the nav arrows on the sides of the screen are not a clear indication of what action can be taken.
- We should try 3 main buttons: Pick Up Tickets, Purchase Tickets for MOVIE TITLE, or See Another Movie.
- When selecting another movie, the interface should proceed with the purchase, not require an additional push of the “Purchase Tickets” button.
- The time listings have some numbers in blue that are difficult to see on a black background.
- It is unclear how to get to more times.
- The overall process was extremely quick. Even with think-out-loud and forcing the participant to change their order, the entire interaction took around a minute.
- The critical friends would like to see the customize ticket screen implemented as well as the pick-up tickets scenario.
Overall, our critical friends reported that they liked both methods of interaction, and suggested that if we can’t decide on one, we try supporting both interaction styles in the way we design the prototypes.
Based on the above feedback, we set out to refine our prototypes so that we could conduct real usability tests in the next week or two. We decided to use the touch-based prototype as our primary interface and retrofit the gesture interactions onto it once that prototype is further along. As it stands right now, Yongji is hard at work with the Xbox One Kinect SDK, and the rest of us are putting together the interface prototype. We have also started writing a formal task list and assembling the whole test kit. All of these things will be shared on this blog once they are completed.
This was an exciting week for the Bixcreen Team!
This week, we greatly advanced our touch- and gesture-based prototypes to reflect the feedback given to us by our critical friends, while remembering to remain true to our original research results captured in Milestone 1. Our prototype iteration melded the two touch-screen prototypes into one, producing an end product that reflects the best of both designs, and explored two types of gesture-based interaction modalities.
Touch-Based Interaction Design
This week’s touch-based work meant iterating with the strengths and weaknesses of touch interaction in mind: simplifying the interface options, enlarging the touch buttons, and focusing each screen on a single step in our pathway. The image above reflects our thoughts on redesigning the pathways and flow for our new touch-based experience. With our new direction and melded design, we are poised to improve the overall experience while, hopefully, decreasing the learning curve required to successfully navigate our UI.
Gesture-Based Interaction Design
This week our project received tangible support from the Lead Interaction Designer for the Xbox One, Tim Franklin. Tim met with two of our team members to give feedback on our current prototype ideas, discuss the limitations of the Xbox 360 and Xbox One sensors, and explain the potential of using an Xbox One sensor instead of an Xbox 360 sensor. Tim then helped the Bixcreen team apply for an internal Kinect for Windows v2 (K4Wv2) Software Development Kit (SDK). Since two of our team members already work at Microsoft, we were instantly approved for two Kinect for Windows v2 kits, which we received only two days after our meeting with Tim. This is particularly exciting given that the K4Wv2 SDK won’t be available to the public until the middle of this summer! Big kudos to Tim!
With the new sensor available to our team, new functionality can be implemented into our design, so we will need to explore our newly available options. These include upgrades to hand states (particularly grip/release and press functionality), a more robust voice detection system and mic array, access to more gesture options, more stable cursor control, and longer, more accurate skeletal tracking distance (from 3.5 m to 4 m). This week we explored a Four Corners approach to gesturing, which simply means raising one arm toward a corner of the screen to identify the ticket type: Adult, Child, Senior, or Student. The Unique Gesture model instead incorporates four distinct poses to select a particular ticket type. The illustrations above show these gesture models, starting with the Unique Gesture method followed by the Four Corners method.
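The Four Corners classification reduces to a quadrant test on the raised hand relative to a body-center joint. A minimal sketch, assuming (x, y) joint tuples with y increasing upward; the particular corner-to-ticket mapping below is illustrative, not a decided design:

```python
# Assumed, illustrative mapping of screen corners to ticket types.
TICKET_BY_CORNER = {
    ("upper", "left"): "Adult",
    ("upper", "right"): "Child",
    ("lower", "left"): "Senior",
    ("lower", "right"): "Student",
}

def corner_ticket_type(hand, spine):
    """Classify which corner the hand is raised toward, relative to the
    spine joint, and return the corresponding ticket type."""
    row = "upper" if hand[1] > spine[1] else "lower"
    col = "left" if hand[0] < spine[0] else "right"
    return TICKET_BY_CORNER[(row, col)]
```

Since each corner maps to exactly one quadrant, this test is cheap and should be consistent across users, which matches the near-identical time-on-task we saw for corner gestures in the pilot.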
We finished the week putting the finishing touches on the two prototypes for our pilot usability test, which will be on Monday. The task lists are written, the technology is ready, the time has come. Stay tuned for next week!
Here is the second of our three major milestones. This artifact showcases our ideation process, as we took the user research and design requirements from the first milestone as a starting point to design an actual product. It was a lot of sketching, brainstorming, and even a little usability testing, but it was all worth it, since we came out of this stage ready to begin prototyping.
To recap where we were last week: We started by sketching different concepts. The ideas spanned mobile apps, poster-sized screens, and gesture interfaces. This provided the basis for a brainstorming session, where we established a user task flow. We narrowed down to three concepts and created wireframes to better communicate those ideas. This week began by showing those wireframes to our critical friends to get feedback.
During our weekly class meeting, our group was paired up with the AwesomeSquare teams to evaluate each other’s progress. We received feedback based on our wireframes of three different ideas. Their feedback was:
Feedback on Prototype Idea (movie poster) 1:
- “Is there a way that when people approach they know it’s interactive?”
- “So the poster changes if you change the movie?”
- “I like this, but it seems like a slight pivot from what exists now; it’s heading in the right direction… it’s much more visual”
- “The carousel looks superimposed onto the poster; is there a way to be more integrated?”
- Maybe what’s inside the box shouldn’t be a smaller representation of the larger poster
- Maybe go with a greyed out treatment
- “Would the animations persist while you are buying your ticket?”
- “I really like the look of it, it really seems movie like”
- “I like that the buttons are really separated, but when it’s big, the buttons will be further apart and seem more separated”
Feedback on Prototype Idea 2 (sidebar):
- “I like having both of the types, 3D and Regular together and color coded”
- “It’s not clear how you drill down into the synopsis after picking the movie”
- “That showtimes thing at the bottom is a little misleading to me because they look like buttons”
- “How do you go back?”
- “Since you have only a strip of touch screen on the side, information architecture needs to be more clear to help guide people through the process”
- “This one, visually for me, was too busy… as a user I may be confused, but the features and functionality are great…”
- “I’m still distracted by the show times… it makes sense to maybe reduce the number that’s visible”
- Maybe reducing the number of show times that are visible would be better
Feedback on Prototype Idea 3 (gesture):
- “Placement of arrow to show picking up tickets is near Spiderman’s genitals”
- “What if one person comes first and the other person comes later… like one person buying tickets for multiple people?”
- “You should have a photo ticket in this prototype”
- “It’s really immersive and really brings that movie going experience”
- “what happens if you’re NOT with your significant other?”
- Option to not have customization in your tickets
- “The button doesn’t have to be a box, I like the fact that the poster is being integrated into ticket buying UI itself”
- “I think there’s a ton of potential here”
- When asked if they think this would benefit from having a sub navigation, they said, “No I really like the simplicity of this”
- It seemed like our critical friends really liked this option
We spent this last week putting together the ideation milestone, which had to document what we have done so far for the ideation part of the project. I will post the document to the blog shortly.
First Milestone 3 Meeting
On Saturday we had a lengthy meeting at Microsoft’s Redmond campus. We used this as an opportunity to discuss the feedback we got and how we wanted to proceed, given that we are already halfway through the quarter. After discussing the three designs, we initially planned to pick a single idea to work on. Ultimately, however, we decided to continue with two prototypes, as we felt a combined touch screen model and a gesture model were both still viable.
Touch Screen Ideas:
Here are some specs we agreed on:
- The full poster-sized screen is touchable (24″ × 38″).
- User taps anywhere on the screen to start.
- A modal dialog flies in at a height slightly below eye level, which is easiest for users to reach.
- All the interactions afterwards (except sliding card for payment) happen within the modal dialog.
- Only the vertical sidebar on the right is touchable. The poster-sized screen does not accept user input.
- All the interactions (except sliding card for payment) happen on the sidebar.
For the gesture/motion concept, we have decided:
- We can use the Kinect SDK to program the prototype.
- To use gesture-style interaction instead of hot-spot style.
- To play pre-recorded voice prompts with subtitles displayed on the screen.
- Voice command recognition is out of scope.