I attended CHI 2014 in Toronto and had a great time. Here, in no particular order, are some of the papers I found interesting.

Emerging sites of HCI innovation: hackerspaces, hardware startups & incubators

An investigation of how projects from hackerspaces and other maker-culture spaces turn into hardware startups. Some discussion of HCI's role going forward: understanding the relationship between design and manufacturing (e.g. the challenge of going from 3D-printed models to an actual manufactured product); moving beyond “amateur expertise” to actual professionalism; and finding alternatives to the (over-)hyped narrative of the maker movement.
Abstract:

In this paper, we discuss how a flourishing scene of DIY makers is turning visions of tangible and ubiquitous computing into products. Drawing on long-term multi-sited ethnographic research and active participation in DIY making, we provide insights into the social, material, and economic processes that undergird this transition from prototypes to products. The contribution of this paper is three-fold. First, we show how DIY maker practice is illustrative of a broader “return to” and interest in physical materials. This has implications for HCI research that investigates questions of materiality. Second, we shed light on how hackerspaces and hardware start-ups are experimenting with new models of manufacturing and entrepreneurship. We argue that we have to take seriously these maker practices, not just as hobbyist or leisure practice, but as a professionalizing field functioning in parallel to research and industry labs. Finally, we end with reflections on the role of HCI researchers and designers as DIY making emerges as a site of HCI innovation. We argue that HCI is positioned to provide critical reflection, paired with a sensibility for materials, tools and design methods.

Silvia Lindtner, Garnet D Hertz and Paul Dourish. Emerging sites of HCI innovation: hackerspaces, hardware startups & incubators. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Printing Teddy Bears: A Technique for 3D Printing of Soft Interactive Objects

This was a really fun paper. The motivation is that currently everything we can 3D print is hard, so let's investigate “printing” with softer materials. Scott Hudson, the author, modified his home 3D printer to needle-felt yarn into custom shapes, such as teddy bears. It's a great example of how progress in personal fabrication has hit an inflection point that lets this kind of experimentation with new materials and techniques happen at a rapid pace.

Abstract:
This paper considers the design, construction, and example use of a new type of 3D printer which fabricates three-dimensional objects from soft fibers (wool and wool blend yarn). This printer allows the substantial advantages of additive manufacturing techniques (including rapid turn-around prototyping of physical objects and support for high levels of customization and configuration) to be employed with a new class of material. This material is a form of loose felt formed when fibers from an incoming feed of yarn are entangled with the fibers in layers below it. The resulting objects recreate the geometric forms specified in the solid models which specify them, but are soft and flexible — somewhat reminiscent in character to hand knitted materials. This extends 3D printing from typically hard and precise forms into a new set of forms which embody a different aesthetic of soft and imprecise objects, and provides a new capability for researchers to explore the use of this class of materials in interactive devices.

Scott E Hudson. Printing Teddy Bears: A Technique for 3D Printing of Soft Interactive Objects. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Project page

Consumed Endurance: A Metric to Quantify Arm Fatigue of Mid-Air Interactions

The famous gesture-based interface in Minority Report looked super-cool, but has been criticized because waving your arms in the air for a long period of time would get very tiring. This paper quantifies just how bad this can be, looking at how long people can hold out their arms and deriving some guidelines on how to design interfaces for mid-air interaction.

Abstract:
Mid-air interactions are prone to fatigue and lead to a feeling of heaviness in the upper limbs, a condition casually termed as the gorilla-arm effect. Designers have often associated limitations of their mid-air interactions with arm fatigue, but do not possess a quantitative method to assess and therefore mitigate it. In this paper we propose a novel metric, Consumed Endurance (CE), derived from the biomechanical structure of the upper arm and aimed at characterizing the gorilla-arm effect. We present a method to capture CE in a non-intrusive manner using an off-the-shelf camera-based skeleton tracking system, and demonstrate that CE correlates strongly with the Borg CR10 scale of perceived exertion. We show how designers can use CE as a complementary metric for evaluating existing and designing novel mid-air interactions, including tasks with repetitive input such as mid-air text-entry. Finally, we propose a series of guidelines for the design of fatigue-efficient mid-air interfaces.

Juan David Hincapié-Ramos, Xiang Guo, Paymahn Moghadasian and Pourang Irani. Consumed Endurance: A Metric to Quantify Arm Fatigue of Mid-Air Interactions. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.
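The core idea is easy to sketch: CE is the time spent interacting divided by the endurance time the arm can sustain at the required shoulder torque, expressed as a percentage. Below is a minimal Python sketch assuming a simplified one-segment arm model and a Rohmert-style endurance curve; the constants are illustrative stand-ins, not the paper's full skeleton-tracked biomechanical model.

```python
import math

# Illustrative constants (assumptions for this sketch, not the paper's
# exact values): a one-segment arm and a Rohmert-style endurance curve.
ARM_MASS_KG = 3.5              # approximate mass of a whole adult arm
ARM_COM_M = 0.30               # shoulder-to-center-of-mass distance
MAX_SHOULDER_TORQUE_NM = 40.0  # assumed maximum shoulder torque
G = 9.81

def shoulder_torque(elevation_deg):
    """Gravity torque at the shoulder; 90 degrees = arm held out
    horizontally, 0 = hanging at rest."""
    lever = ARM_COM_M * math.sin(math.radians(elevation_deg))
    return ARM_MASS_KG * G * lever

def endurance_seconds(torque_nm):
    """Rohmert-style endurance: how long a contraction at this fraction
    of maximum strength can be held. Constants follow one published
    variant of the model and are illustrative here."""
    pct = 100.0 * torque_nm / MAX_SHOULDER_TORQUE_NM
    if pct <= 15.0:
        return float("inf")  # below ~15% of max, effort is sustainable
    return 1236.5 / (pct - 15.0) ** 0.618 - 72.5

def consumed_endurance(interaction_s, elevation_deg):
    """CE: interaction time as a percentage of endurance time."""
    return 100.0 * interaction_s / endurance_seconds(
        shoulder_torque(elevation_deg))

# e.g. two minutes of mid-air input with the arm held out horizontally:
print(f"CE = {consumed_endurance(120, 90):.1f}%")
```

Even with these toy numbers, two minutes of input at full horizontal extension consumes a large fraction of the modeled endurance, which is the quantified version of the gorilla-arm complaint.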

Vulture: A Mid-Air Word-Gesture Keyboard

This paper looks at how you might build a Swype-like gesture keyboard that works in mid-air. It makes sense for game systems like the Xbox with Kinect, where it could improve the speed and convenience of text input. The system achieves reasonable speed, about 28 WPM after user training. This paper received a Best Paper Award.
Abstract:

Word-gesture keyboards enable fast text entry by letting users draw the shape of a word on the input surface. Such keyboards have been used extensively for touch devices, but not in mid-air, even though their fluent gestural input seems well suited for this modality. We present Vulture, a word-gesture keyboard for mid-air operation. Vulture adapts touch based word-gesture algorithms to work in mid-air, projects users’ movement onto the display, and uses pinch as a word delimiter. A first 10-session study suggests text-entry rates of 20.6 Words Per Minute (WPM) and finds hand-movement speed to be the primary predictor of WPM. A second study shows that with training on a few phrases, participants do 28.1 WPM, 59% of the text-entry rate of direct touch input. Participants’ recall of trained gestures in mid-air was low, suggesting that visual feedback is important but also limits performance. Based on data from the studies, we discuss improvements to Vulture and some alternative designs for mid-air text entry.

Anders Markussen, Mikkel Rønne Jakobsen and Kasper Hornbæk. Vulture: A Mid-Air Word-Gesture Keyboard. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.
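To give a feel for how a word-gesture keyboard decodes input, here is a minimal sketch of the underlying shape matching, in the spirit of the touch-based algorithms Vulture adapts; the toy key layout, resampling scheme, and distance score are my assumptions, not Vulture's actual implementation. Each candidate word defines an ideal path through its key centers, and the word whose path lies closest to the user's trace wins.

```python
import math

# Toy staggered QWERTY layout: key centers in arbitrary keyboard units.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_POS = {c: (x + 0.5 * y, float(y))
           for y, row in enumerate(ROWS) for x, c in enumerate(row)}

def resample(points, n=32):
    """Resample a polyline to n points evenly spaced along its length."""
    if len(points) < 2:
        return [points[0]] * n
    cum = [0.0]
    for a, b in zip(points, points[1:]):
        cum.append(cum[-1] + math.dist(a, b))
    total = cum[-1] or 1e-9
    out, j = [], 0
    for k in range(n):
        target = total * k / (n - 1)
        while j < len(points) - 2 and cum[j + 1] < target:
            j += 1
        t = (target - cum[j]) / ((cum[j + 1] - cum[j]) or 1e-9)
        (ax, ay), (bx, by) = points[j], points[j + 1]
        out.append((ax + t * (bx - ax), ay + t * (by - ay)))
    return out

def word_path(word):
    return [KEY_POS[c] for c in word]

def score(trace, word, n=32):
    """Mean pointwise distance between resampled trace and word path."""
    a, b = resample(trace, n), resample(word_path(word), n)
    return sum(math.dist(p, q) for p, q in zip(a, b)) / n

def recognize(trace, lexicon):
    """Return the lexicon word whose ideal path best matches the trace."""
    return min(lexicon, key=lambda w: score(trace, w))

# A clean trace of "hello" should beat similar-shaped words:
print(recognize(word_path("hello"), ["hello", "help", "jelly", "yellow"]))
```

In Vulture, the same kind of matching runs on hand movement projected onto the display, with a pinch gesture marking the start and end of each word.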

Understanding finger input above desktop devices

There's been a good amount of previous work on interacting in the space above a desktop display. This paper examines the practicality of the idea by studying how well people can stay within, and move between, different height layers above the surface.
Abstract:

Using the space above desktop input devices adds a rich new input channel to desktop interaction. Input in this elevated layer has been previously used to modify the granularity of a 2D slider, navigate layers of a 3D body scan above a multitouch table and access vertically stacked menus. However, designing these interactions is challenging because the lack of haptic and direct visual feedback easily leads to input errors. For bare finger input, the user’s fingers need to reliably enter and stay inside the interactive layer, and engagement techniques such as midair clicking have to be disambiguated from leaving the layer. These issues have been addressed for interactions in which users operate other devices in midair, but there is little guidance for the design of bare finger input in this space.

In this paper, we present the results of two user studies that inform the design of finger input above desktop devices. Our studies show that 2 cm is the minimum thickness of the above-surface volume that users can reliably remain within. We found that when accessing midair layers, users do not automatically move to the same height. To address this, we introduce a technique that dynamically determines the height at which the layer is placed, depending on the velocity profile of the user’s initial finger movement into midair. Finally, we propose a technique that reliably distinguishes clicking from homing movements, based on the user’s hand shape. We structure the presentation of our findings using Buxton’s three-state input model, adding additional states and transitions for above-surface interactions.

Chat Wacharamanotham, Kashyap Todi, Marty Pye and Jan Borchers. Understanding finger input above desktop devices. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.
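The 2 cm finding translates directly into interaction code. Here is a minimal sketch, assuming a tracked fingertip height above the surface, that assigns the finger to a stacked layer of the paper's minimum thickness and adds a hysteresis margin so tracking jitter at a boundary doesn't make the active layer flicker; the margin value is my assumption, and the paper's velocity-based layer placement and hand-shape click detection are not modeled.

```python
LAYER_THICKNESS_CM = 2.0  # minimum thickness users could reliably stay in
HYSTERESIS_CM = 0.3       # assumed dead zone at layer boundaries

def update_layer(height_cm, current=None):
    """Layer index for a fingertip at height_cm; layer i spans
    [i * t, (i + 1) * t). The finger only switches layers once it moves
    HYSTERESIS_CM past a boundary, suppressing jitter-induced flicker."""
    t = LAYER_THICKNESS_CM
    candidate = int(height_cm // t)
    if current is None or candidate == current:
        return candidate
    if candidate > current:            # moved up past the layer's top
        overshoot = height_cm - (current + 1) * t
    else:                              # moved down past the layer's bottom
        overshoot = current * t - height_cm
    return candidate if overshoot >= HYSTERESIS_CM else current

# Jitter around the 2 cm boundary stays in layer 0 until the finger
# clearly commits to layer 1:
layer = None
for h in [1.0, 1.9, 2.1, 1.95, 2.4, 2.5]:
    layer = update_layer(h, layer)
    print(f"{h:.2f} cm -> layer {layer}")
```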