First impressions: Sony DPT-S1

Quick thoughts. More comprehensive post to follow eventually.

  • Highlight button on pen is awesome
  • Wish it had an eraser mode or at least a quick undo button
  • They anticipated some things very intelligently, like the ability to duplicate a tab (I use it to flip quickly to references)
  • Fast page flip is great
  • Zoom out is nice too
  • Pen calibration could use work, especially at edges
  • Manual pen calibration is stupid
  • Annotating with ink in boxes doesn’t work with Preview, crappy with Acrobat
  • I wish landscape viewing of portrait docs left annotation space on the side, but that would be awkward with PDFs
  • Box sync is ok but seems really slow
  • Need to figure out workflow with Papers for transferring back and forth
  • Color pictures obviously suck
  • Higher-res display would be nice
  • Ink shows up surprisingly fast for e-ink, but is still slow
  • Need more margin space somehow for notes
  • Wish it showed page number without bottom bar
  • Smart moving annotation bar is smart
  • Palm rejection is nearly perfect
  • Someone complained about the 10-page limit for notes, but you could try adding a custom many-page PDF to use as a notebook (see the sketch after this list)
  • Can’t use it while it’s charging
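
On that last point: one way around the 10-page notes limit might be to generate a blank, many-page PDF and sync it over to use as a notebook. Here’s a minimal sketch, assuming Ghostscript is installed; the 100-page count, Letter-size dimensions, and file name are placeholders I haven’t tried on the DPT-S1 itself:

# Create a 100-page blank Letter-size PDF (612x792 points) with Ghostscript
gs -q -sDEVICE=pdfwrite -o blank-notes.pdf \
   -dDEVICEWIDTHPOINTS=612 -dDEVICEHEIGHTPOINTS=792 \
   -c "100 {showpage} repeat"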

Vim script to resize and split

In writing things like grant proposals, I find myself wanting to refer to other files or to earlier parts of the same file. So I borrowed Michael Scarpa’s resize script and adapted it to add and remove a window split at the same time. One important note: under iTerm2, the current profile’s “Disable session-initiated window resizing” setting (in the Window tab) must be turned off, or Vim won’t be able to change the terminal width.

" Widen the terminal and open a second window for reference material
function SizeUpFunc()
	if exists("g:oldColumns")
		return
	endif
	" Remember the original width; restore it automatically if Vim exits while split
	let g:oldColumns = &columns
	au VimLeave * SizeDown
	set columns=160
	vsplit
endfunction
command SizeUp call SizeUpFunc()
 
" Close the extra window and restore the original terminal width
function SizeDownFunc()
	if !exists("g:oldColumns")
		return
	endif
	only   "Remove window split
	let &columns = g:oldColumns
	unlet g:oldColumns
endfunction
command SizeDown call SizeDownFunc()
 
"Map control-right-arrow to expand and ctrl-left-arrow to contract
map <C-Right> :SizeUp<CR>
map <C-Left> :SizeDown<CR>

HCIC 2014 Trip Notes

I spent the last week attending the Human-Computer Interaction Consortium, HCIC. HCIC is a membership-based organization that is not very publicly visible (whether by design or not was a topic of discussion). At any rate, I had a great time hanging out with many interesting people, listening to fascinating talks, and participating in great discussions. Here are some of my notes on the talks that were presented (the theme of HCIC 2014 was “mobile”). HCIC is structured differently than many conferences: there are a few hour-long talks, each followed by a 15-minute presentation by a “discussant”, whose job is to comment further on the talk and encourage discussion from the audience. The discussion phase is extended as well, allowing attendees to dig deeper into a particular topic.

Representation Technologies

Tapan “Tap” Parikh, UC Berkeley School of Information

Tap’s research group “studies the design and use of information and communication technologies for sustainable development”. His talk focused on what he calls “representation technologies”—technology used to represent and communicate knowledge—and how to increase diversity and widen access to such technology. He presented two projects: Awaaz.De and LocalGround.

Awaaz.De is something like a phone-based message board service. The initial project, targeted at agriculture, allows farmers in India to call a number and ask or answer questions. For example, “I want to grow cotton. Which weather environment is best?” Interestingly, their research results may indicate that farmers tend to trust their nearby peers more than experts. Here’s a PDF of one of the latest papers on the project.

The second project Tap talked about was LocalGround, a system for helping to document geographically-linked local knowledge. He gave an example from a low-income neighborhood in the East Bay (across the bay from San Francisco), where the tool was used to collect community impressions about locations around a school, as well as to try to influence the planning process for a new city park. The talk ended on a bit of a down note (as Tap laughingly acknowledged), as data collided with politics and lost. The latest paper on LocalGround can be found here.

Mobile Community Apps as an Innovation Infrastructure

John M. Carroll & Jessica Kropczynski, Penn State

John and Jess talked about several mobile apps they’ve developed to help build community, particularly in State College, the town where Penn State is located. Although many non-profit agencies often exist in communities, John and Jess pointed out that they are frequently quite siloed, not interacting or sharing much with each other. Their goal is to enhance “community innovation” by linking agencies and community members to each other. They showed three pieces of research: Lost State College, an historic walking tour of State College that adds the ability for community members to share their own perspectives on history; Future State College, a similar application that allows users to browse in-place the 10-year master plan for the town and comment; and Local News Chatter, which collects locally-relevant news with local tweets.

Beyond the Smartphone: Rich Mobile Shared Experiences

John Tang, Gina Venolia, Sasa Junuzovic, & Kori Inkpen, Microsoft Research

John Tang (properly pronounced, I learned, “Tong”) works for Microsoft Research in Redmond, WA, but lives here in the SF Bay Area. He uses a telepresence robot to interact with his colleagues at MSR, and we got a fun demo of him driving the robot through the halls of the Microsoft campus. His main thread of discussion, however, was on using telepresence-like technology to share experiences with others. He showed several projects (Experiences2Go, PortaProxy, and ProxyWear) that allow remote people to view and, to a limited extent, interact with distant events. One example (of Experiences2Go) was of grandparents remotely attending their grandchild’s birthday party.

Capture and Playback for Designing Context-Aware Interactive Systems

Mark Newman, Mark Ackerman, Stanley Chang, Manchul Han, Perry Hung & Jungwoo Kim, University of Michigan

Mark “Epic Beard” Newman presented some work very much in the vein of part of my own dissertation work. The idea is that for developers of context-aware systems, testing and iterating on the system can be very difficult and time-consuming. The RePlay system that Mark and colleagues built allows developers to record sensor data and then replay it for repeated testing of a context-sensitive system. They studied the system by having developers create an application for location-based content delivery. One issue that came up (which I experienced in my own work) is the difficulty of running studies on complex software like theirs. Another concern I had (and likewise have no answer for) is that their system (like my own) was highly optimized for the kind of data they were using, making it difficult to generalize to other problems.

The discussant for this paper, Gregory Abowd, had some interesting points. He opined that Mark was working on the right problem, and that tools for programmers of Ubicomp systems are still in their infancy. He pointed out that we design and implement 2D GUI interfaces using 2D GUIs, but that hasn’t translated to the new paradigm of multiple devices situated in various places in the world.

Usable Privacy and Security for Mobile Devices: With Great Power Comes Great Responsibility

Serge Egelman, UC Berkeley

Serge talked about the difficult problem of allowing users to understand and control the kind of access that applications (especially mobile applications) have to a system. He posited as an example trying to discover which application on a phone might be responsible for excessive SMS charges, and showed that to even find out which applications have permissions to send SMS takes over a dozen steps on Android! He compared the Android approach of “ask once at install” to the iOS approach of “ask once when necessary” and pointed out the flaws—a major one being that apps want to use so many things in the system that they ask all the time and users become habituated and tend to ignore the security and privacy implications of the questions. One possible solution he put forth is attribution—allowing users to see what application has recently done what. He gave two examples of how this might work: an ongoing notification icon in Android showing which app is currently playing sound, and a note on the wallpaper setting for the phone showing which app most recently changed it. Here is a recent paper discussing some of the issues.

Mobile Support for Face-to-Face Social Interaction

Jaime Teevan, Merrie Morris, & Scott Saponas, Microsoft Research

Jaime’s short talk was about how phones might add value to social interaction rather than simply being a distraction. Her talk was augmented by a system whereby people could give thumbs up or down during the talk, as well as answer questions. There was some interesting discussion—mainly engendered, I think, by her initial photograph of her family at dinner engrossed by mobile phones—about whether such technologies actually increase social interaction or simply provide another layer of distraction. A general consensus was reached that such applications could be quite useful in targeted settings such as the classroom, but it was unclear whether a similar benefit could be realized in less-focused situations such as a family dinner.

Other notes and thoughts

I had an interesting discussion with Steve Johnson, a Ph.D. student at UW Madison, about telepresence robots and how they might communicate “presence” when someone is using them, even if the user is not doing anything active with the robot. I was thinking of how I could avoid my commute to San Francisco by working via robot, except that mostly I program all day. How could the robot, sitting still at my desk, give the impression that I’m there and available to talk? We discussed subtle motions and even (very subtle!) smell-based cues.

Don Norman had some things to say (which were met with varying degrees of appreciation) about the difficulty of research work influencing products. This is one of my own fundamental conflicts, although now that I am in a more product-focused organization, I’ve discovered that it’s nearly as difficult to influence product anyway.

Also, Yo.

CHI 2014 Trip Notes: Papers 2–?

I haven’t gotten together my notes on the other papers yet, but here’s a list of other interesting ones:

Yoshio Ishiguro and Ivan Poupyrev. 3D printed interactive speakers. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Vidya Setlur and Jock D Mackinlay. Automatic generation of semantic icon encodings for visualizations. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Steven J Jackson and Laewoo Kang. Breakdown, obsolescence and reuse: HCI and the art of repair. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

David A Mellis and Leah Buechley. Do-It-Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Xiang ‘Anthony’ Chen, Tovi Grossman, Daniel J Wigdor and George Fitzmaurice. Duet: Exploring Joint Interactions on a Smart Phone and a Smart Watch. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Georgios Marentakis and Rudolfs Liepins. Evaluation of hear-through sound localization. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Robert Xiao, Gierad Laput and Chris Harrison. Expanding the input expressivity of smartwatches with mechanical pan, twist, tilt and click. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Arun Kulshreshth and Joseph J LaViola Jr. Exploring the usefulness of finger-based 3D gesture menu selection. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Esben Warming Pedersen and Kasper Hornbæk. Expressive Touch: Studying Tapping Force on Tabletops. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Tawanna R Dillahunt. Fostering social capital in economically distressed communities. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Jari Kangas, Deepak Akkil, Jussi Rantala, Poika Isokoski, Päivi Majaranta and Roope Raisamo. Gaze gestures and haptic feedback in mobile devices. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Albert Ng, Michelle Annett, Paul Dietz, Anoop Gupta and Walter F Bischof. In the Blink of an Eye: Investigating Latency Perception during Stylus Interaction. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Yi Ren, Yang Li and Edward Lank. InkAnchor: Enhancing Informal Ink-Based Note Taking on Touchscreen Mobile Phones. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Ian Oakley and Doyoung Lee. Interaction on the Edge: Offset Sensing for Small Devices. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Nicholas J Bryan, Gautham J Mysore and Ge Wang. ISSE: An Interactive Source Separation Editor. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Dominik Schmidt, Raf Ramakers, Esben W Pedersen, Johannes Jasper, Sven Köhler, Aileen Pohl, Hannes Rantzsch, Andreas Rau, Patrick Schmidt, Christoph Sterz, Yanina Yurchenko and Patrick Baudisch. Kickables: Tangibles for Feet. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Joseph Jofish Kaye, Mary McCuistion, Rebecca Gulotta and David A Shamma. Money Talks: Tracking Personal Finances. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Martin Weigel, Vikram Mehta and Jürgen Steimle. More Than Touch: Understanding How People Use Skin as an Input Surface for Mobile Computing. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

John Vines, Paul Dunphy and Andrew Monk. Pay or Delay: The Role of Technology When Managing a Low Income. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Graham Wilson, Thomas Carter, Sriram Subramanian and Stephen A Brewster. Perception of Ultrasonic Haptic Feedback on the Hand: Localisation and Apparent Motion. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Thomas Fritz, Elaine M Huang, Gail C Murphy and Thomas Zimmermann. Persuasive Technology in the Real World: A Study of Long-Term Use of Activity Sensing Devices for Fitness. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Martin Spindler, Martin Schuessler, Marcel Martsch and Raimund Dachselt. Pinch-Drag-Flick vs. Spatial Input: Rethinking Zoom & Pan on Mobile Displays. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Christopher Smith-Clarke, Afra Mashhadi and Licia Capra. Poverty on the Cheap: Estimating Poverty Maps Using Aggregated Mobile Communication Networks. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Onur Yürüten, Jiyong Zhang and Pearl H Z Pu. Predictors of life satisfaction based on daily activities from mobile sensor data. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Christian Rendl, Patrick Greindl, Kathrin Probst, Martin Behrens and Michael Haller. Presstures: Exploring Pressure-Sensitive Multi-Touch Gestures on Trackpads. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Jie Qi and Leah Buechley. Sketching in Circuits: Designing and Building Electronics on Paper. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Alexander Travis Adams, Berto Gonzalez and Celine Latulipe. SonicExplorer: Fluid Exploration of Audio Parameters. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Martin Murer, Mattias Jacobsson, Siri Skillgate and Petra Sundström. Taking things apart: reaching common ground and shared material understanding. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Jörg Müller, Matthias Geier, Christina Dicke and Sascha Spors. The BoomRoom: Mid-air Direct Interaction with Virtual Sound Sources. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Joshua Tan, Khanh Nguyen, Michael Theodorides, Heidi Negrón-Arroyo, Christopher Thompson, Serge Egelman and David Wagner. The effect of developer-specified explanations for permission requests on smartphone user behavior. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Jun-Ki Min, Afsaneh Doryab, Jason Wiese, Shahriyar Amini, John Zimmerman and Jason I Hong. Toss ’N’ Turn: Smartphone as Sleep and Sleep Quality Detector. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Da-Yuan Huang, Ming-Chang Tsai, Ying-Chao Tung, Min-Lun Tsai, Yen-Ting Yeh, Liwei Chan, Yi-Ping Hung and Mike Y Chen. TouchSense: Expanding Touchscreen Input Vocabulary Using Different Areas of Users’ Finger Pads. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Oliver Bates, Mike Hazas, Adrian Friday, Janine Morley and Adrian K Clear. Towards an holistic view of the energy and environmental impacts of domestic media and IT. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Ross McLachlan, Daniel Boland and Stephen Brewster. Transient and Transitional States: Pressure as an Auxiliary Input Modality for Bimanual Interaction. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Taku Hachisu and Masaaki Fukumoto. VacuumTouch: Attractive Force Feedback Interface for Haptic Interactive Surface using Air Suction. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

CHI 2014 Trip Notes: Papers 1

I attended CHI 2014 in Toronto and had a great time. Here, in no particular order, are some of the papers I found interesting.

Emerging sites of HCI innovation: hackerspaces, hardware startups & incubators

An investigation of how hackerspaces and other maker-culture places can turn into startups. Some discussion of what HCI’s role is going forward: understanding the relationship between design & manufacturing (e.g. the challenge of going from 3D printed models to actual manufactured product); moving beyond “amateur expertise” to actual professionalism; and finding alternatives to the (over-)hyped narrative of the maker movement.
Abstract:

In this paper, we discuss how a flourishing scene of DIY makers is turning visions of tangible and ubiquitous computing into products. Drawing on long-term multi-sited ethnographic research and active participation in DIY making, we provide insights into the social, material, and economic processes that undergird this transition from prototypes to products. The contribution of this paper is three-fold. First, we show how DIY maker practice is illustrative of a broader “return to” and interest in physical materials. This has implications for HCI research that investigates questions of materiality. Second, we shed light on how hackerspaces and hardware start-ups are experimenting with new models of manufacturing and entrepreneurship. We argue that we have to take seriously these maker practices, not just as hobbyist or leisure practice, but as a professionalizing field functioning in parallel to research and industry labs. Finally, we end with reflections on the role of HCI researchers and designers as DIY making emerges as a site of HCI innovation. We argue that HCI is positioned to provide critical reflection, paired with a sensibility for materials, tools and design methods.

Silvia Lindtner, Garnet D Hertz and Paul Dourish. Emerging sites of HCI innovation: hackerspaces, hardware startups & incubators. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Printing Teddy Bears: A Technique for 3D Printing of Soft Interactive Objects

This was a really fun paper. The motivation is that currently everything we can 3D print is hard, so let’s investigate “printing” with softer materials. Scott Hudson, the author, modified his home 3D printer to needle-felt yarn into custom shapes, such as teddy bears. It’s a great example of how progress in personal fabrication has hit an inflection point, letting this kind of experimentation with new materials and techniques happen at a rapid pace.

Abstract:
This paper considers the design, construction, and example use of a new type of 3D printer which fabricates three-dimensional objects from soft fibers (wool and wool blend yarn). This printer allows the substantial advantages of additive manufacturing techniques (including rapid turn-around prototyping of physical objects and support for high levels of customization and configuration) to be employed with a new class of material. This material is a form of loose felt formed when fibers from an incoming feed of yarn are entangled with the fibers in layers below it. The resulting objects recreate the geometric forms specified in the solid models which specify them, but are soft and flexible — somewhat reminiscent in character to hand knitted materials. This extends 3D printing from typically hard and precise forms into a new set of forms which embody a different aesthetic of soft and imprecise objects, and provides a new capability for researchers to explore the use of this class of materials in interactive devices.

Scott E Hudson. Printing Teddy Bears: A Technique for 3D Printing of Soft Interactive Objects. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Project page

Consumed Endurance: A Metric to Quantify Arm Fatigue of Mid-Air Interactions

The famous gesture-based interface in Minority Report looked super-cool, but has been criticized because waving your arms in the air for a long period of time would get very tiring. This paper quantifies just how bad this can be, looking at how long people can hold out their arms and deriving some guidelines on how to design interfaces for mid-air interaction.

Abstract:
Mid-air interactions are prone to fatigue and lead to a feeling of heaviness in the upper limbs, a condition casually termed as the gorilla-arm effect. Designers have often associated limitations of their mid-air interactions with arm fatigue, but do not possess a quantitative method to assess and therefore mitigate it. In this paper we propose a novel metric, Consumed Endurance (CE), derived from the biomechanical structure of the upper arm and aimed at characterizing the gorilla-arm effect. We present a method to capture CE in a non-intrusive manner using an off-the-shelf camera-based skeleton tracking system, and demonstrate that CE correlates strongly with the Borg CR10 scale of perceived exertion. We show how designers can use CE as a complementary metric for evaluating existing and designing novel mid-air interactions, including tasks with repetitive input such as mid-air text-entry. Finally, we propose a series of guidelines for the design of fatigue-efficient mid-air interfaces.

Juan David Hincapié-Ramos, Xiang Guo, Paymahn Moghadasian and Pourang Irani. Consumed Endurance: A Metric to Quantify Arm Fatigue of Mid-Air Interactions. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Vulture: A Mid-Air Word-Gesture Keyboard


This paper looks at how you might make a Swype-like gesture keyboard in mid-air. It makes sense for game systems like the Xbox with the Kinect, to improve the speed and convenience of text input. The system gets reasonable speed, about 28 WPM after user training. This paper received a Best Paper Award.
Abstract:

Word-gesture keyboards enable fast text entry by letting users draw the shape of a word on the input surface. Such keyboards have been used extensively for touch devices, but not in mid-air, even though their fluent gestural input seems well suited for this modality. We present Vulture, a word-gesture keyboard for mid-air operation. Vulture adapts touch based word-gesture algorithms to work in mid-air, projects users’ movement onto the display, and uses pinch as a word delimiter. A first 10-session study suggests text-entry rates of 20.6 Words Per Minute (WPM) and finds hand-movement speed to be the primary predictor of WPM. A second study shows that with training on a few phrases, participants do 28.1 WPM, 59% of the text-entry rate of direct touch input. Participants’ recall of trained gestures in mid-air was low, suggesting that visual feedback is important but also limits performance. Based on data from the studies, we discuss improvements to Vulture and some alternative designs for mid-air text entry.

Anders Markussen, Mikkel Rønne Jakobsen and Kasper Hornbæk. Vulture: A Mid-Air Word-Gesture Keyboard. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Understanding finger input above desktop devices


There’s been a good amount of previous work on interacting in the space above a desktop display. This paper considers the practical aspects of the idea by studying how well people can stay within and move between different height layers.
Abstract:

Using the space above desktop input devices adds a rich new input channel to desktop interaction. Input in this elevated layer has been previously used to modify the granularity of a 2D slider, navigate layers of a 3D body scan above a multitouch table and access vertically stacked menus. However, designing these interactions is challenging because the lack of haptic and direct visual feedback easily leads to input errors. For bare finger input, the user’s fingers needs to reliably enter and stay inside the interactive layer, and engagement techniques such as midair clicking have to be disambiguated from leaving the layer. These issues have been addressed for interactions in which users operate other devices in midair, but there is little guidance for the design of bare finger input in this space.

In this paper, we present the results of two user studies that inform the design of finger input above desktop devices. Our studies show that 2 cm is the minimum thickness of the above-surface volume that users can reliably remain within. We found that when accessing midair layers, users do not automatically move to the same height. To address this, we introduce a technique that dynamically determines the height at which the layer is placed, depending on the velocity profile of the user’s initial finger movement into midair. Finally, we propose a technique that reliably distinguishes clicking from homing movements, based on the user’s hand shape. We structure the presentation of our findings using Buxton’s three-state input model, adding additional states and transitions for above-surface interactions.

Chat Wacharamanotham, Kashyap Todi, Marty Pye and Jan Borchers. Understanding finger input above desktop devices. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

CHI 2014 Trip Notes: Posters

I attended CHI 2014 in Toronto and had a great time. Here, in no particular order, are some of the things I saw in the posters section. Posters are smaller research results, or works in progress, presented with a poster and a short paper.

Suit Up! Eyes-Free Interactions on Jacket Buttons

Some design thinking and basic prototyping of how you might interact with wearable technology using buttons on your clothing. The prototype suit-jacket buttons are: 1. four-way control; 2. status LEDs; 3. OLED display; 4. piezoelectric speaker.

Kashyap Todi and Kris Luyten. Suit up!: enabling eyes-free interactions on jacket buttons. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 2014.

Project page

ExtendedThumb

An offset cursor technique to help users select items using their thumb on large screens (e.g. Samsung Galaxy Note), by using a virtual thumb. “After initiating it with double tap: 1) aim at a target, 2) move the virtual thumb towards the target, 3) adjust the virtual thumb position and place the red cross located at the tip of the virtual thumb on the target, and 4) lift the real thumb up from the screen to select the target.”

Jianwei Lai and Dongsong Zhang. ExtendedThumb. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 2014.

Extending Interaction for Smart Watches

Some exploration around expanding the interaction space for smart watches by using the hand’s surface. The idea is to use some sort of depth sensor pointing out of the side of the watch (their prototype used IR proximity sensors) to allow touchscreen-type gestures on the hand, kind of a Magic Trackpad for your watch. Various gestures could be supported.

Whirlstools

A design exploration of adaptive architecture: stools that rotate themselves to encourage social interaction. “Every time a person newly sits on a stool, all the other unoccupied stools in its vicinity also rotate, in ways so that when another person comes along and sits on any of the remaining stools, s/he will be more likely to sit face-to-face (and consequently more likely to have spontaneous conversations) with people already seated…”.

Yuichiro Takeuchi and Jean You. Whirlstools: kinetic furniture with adaptive affordance. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 2014.

Project page

CHI 2014 Trip Notes: Interactivity

I attended CHI 2014 in Toronto and had a great time. Here, in no particular order, are some of the fun things I saw at the “Interactivity” exhibits. These exhibits let you try out demos of various systems in person, and they were lots of fun!

Comp*pass

This was probably my favorite thing at the conference! It’s a drawing compass that dynamically opens and closes, allowing you to draw any radial shape—including squares.

Ken Nakagaki and Yasuaki Kakehi. Comp*Pass: a compass-based drawing interface. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 2014.

Project page

GaussBricks


A fun system that uses a high-density grid of Hall-effect sensors underneath an iPad to track in realtime the motion of magnetic building blocks above the surface. It allows a playful, hybrid physical-virtual experience.

Rong-Hao Liang, Liwei Chan, Hung-Yu Tseng, Han-Chih Kuo, Da-Yuan Huang, De-Nian Yang and Bing-Yu Chen. GaussBricks: magnetic building blocks for constructive tangible interactions on portable displays. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Project page

Wrigglo

A wacky interface for emotional communication between friends. Two flexible antennae protrude from the top of the phone and actuate to reflect the emotion of another person. They use spring-form shape memory wire to actuate the antennae.

Joohee Park, Young-Woo Park and Tek-Jin Nam. Wrigglo: shape-changing peripheral for interpersonal mobile communication. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

 Project page

Muzlog

A highly-augmented guitar with capacitive neck stickers, pickups, and a ring that automatically transcribes what’s being played.

Han-Jong Kim and Tek-Jin Nam. Muzlog: instant music transcribing system for acoustic guitarists. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 2014.

Tactonic Touch 2.0

A grid of force-sensitive resistors to do multi-touch and -pressure sensing, with mechanical force distribution for higher resolution with fewer sensors.

Alex M Grau, Charles Hendee, John-Ross Rizzo and Ken Perlin. Mechanical force redistribution. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Product page

MakerVis

Physically visualizing data with personal fabrication technology: 3D print or laser cut an actual, physical bar chart.

Saiganesh Swaminathan, Conglei Shi, Yvonne Jansen, Pierre Dragicevic, Lora A Oehlberg and Jean-Daniel Fekete. Supporting the design and fabrication of physical visualizations. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Project page

InGrid: Interactive Grid Table

A table made of multiple replaceable square tiles. Each tile is augmented with electronics, so a server knows where every tile is, allowing the user to share content between tablets attached to the tiles by swiping.

Mounia Ziat, Josh Fridstrom, Kurt Kilpela, Jonathan Fancher and James J Clark. Ingrid: interactive grid table. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 2014.

Project page

Game of Tones

A piano learning interface using projected graphics to help you know what to play next. Although the video shows simple bars, the CHI Interactivity version was more like Space Invaders, with the next notes falling from the top of the screen.

Linsey Raymaekers, Jo Vermeulen, Kris Luyten and Karin Coninx. Game of tones: learning to play songs on a piano using projected instructions and games. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 2014.

Project page

HamsaTouch

An electro-tactile display: a 32×16 grid of phototransistors on one side is mapped to a 32×16 grid of electrodes on the other side. You put your palm on the electrodes, and detected light is turned into tiny, tickly electric shocks! Paired with a mobile phone running an edge detection algorithm, you can “see” edges by feeling with your hand. Fun to try!

Hiroyuki Kajimoto, Masaki Suzuki and Yonezo Kanno. HamsaTouch. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 2014.

 Research lab page

For some reason my Nokia Lumia 920 and my MacBook don’t get along where internet sharing over wifi is concerned. The MacBook continually drops the connection with the cryptic message “kernel[0]: AirPort: Link Down on en0. Reason 7 (Frame received from nonassociated STA).” After much searching and failing to find a cure, I fashioned a good old-fashioned command-line band-aid:

# Ping the default gateway; if it stops responding, rejoin the wifi network.
while true; do
    gateway=$(route -n get default | awk '/gateway/ {print $2}')
    # Note: OS X's ping takes its -W timeout in milliseconds
    if ping -W2000 -c2 "$gateway" > /dev/null 2>&1; then
        echo "still up"
    else
        echo "reconnect"
        networksetup -setairportnetwork en0 MY_NETWORK_NAME MY_NETWORK_PASSWORD
    fi
    sleep 5   # don't spin when the network is completely down
done
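
To keep the loop running after the terminal window closes, one option (a sketch, not part of the original band-aid; the script path and log file are arbitrary names) is to save it as a script and background it with nohup:

# Save the loop above as ~/bin/wifi-watchdog.sh, then:
chmod +x ~/bin/wifi-watchdog.sh
nohup ~/bin/wifi-watchdog.sh >> ~/Library/Logs/wifi-watchdog.log 2>&1 &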

DEVONthink Pro as a replacement for OmniFocus

I’m trying to be more organized at work. I tried out OmniFocus for its trial period and liked it, but $80 is a bit steep. I also wanted a bit more functionality; specifically, for some tasks I want to include notes and thoughts and references. So I turned to DEVONthink Pro Office, which I already use extensively for organizing documents and notes.

I created a new database, “Todo”. I organized it with groups for areas (e.g., “Life”, “Work”) and sub-areas (e.g., project names under “Work”). For each task, I make a new RTF document whose title is the task; if I want to add more details, I put them in the body of the task.

To replicate OF’s contexts, I simply use tags. What about finishing a task? I add a “Done” tag and move the task to a top-level “Done” group (more about automating this shortly).

So far, so good. But OmniFocus has the nice feature of being able to view all of your tasks at once. DEVONthink can do the same with a Smart Group. Data→New→Smart Group makes one, with the simple search of “[Tag] [is not] [Done]”. I call the smart group “All tasks”. I also made one called “All work tasks” with the same search but limited to the top-level “Work” group. I also turn on the “Location” column in the document list and move it to the leftmost position so I can easily sort tasks by what category they’re in.

OF view in DTP

The last thing (so far) is automation. When I finish a task, I want it to be moved to the top-level “Done” group so it’s out of the way, but I do want to know where it originally came from and when I completed it. This required a little bit of AppleScript, which I bind to ctrl-D; it adds a “Done” tag, a tag for the current date, and a tag for the original location of the task (e.g., “Work/Cool project”), and moves the task to the top-level “Done” group.

-- Mark as done: apply the "Done" tag, a tag for the current date, and a tag for the task's original group, then move the item to the top-level "Done" group.
tell application "DEVONthink Pro"
  try
    set r to the content record
    set t to the tags of r
    if "Done" is not in t then set t to t & {"Done"}
    set t to t & {do shell script "date +%F"}
    set t to t & {characters 2 thru -1 of (the location of the current group & the name of the current group) as string} -- Tag with origin group
    set the tags of r to t
 
    set done_group to create location "/Done"
    move record r to done_group
 
  on error error_message number error_number
    if the error_number is not -128 then display alert "DEVONthink Pro" message error_message as warning
  end try
end tell

Finally, as a bonus, I wanted to experiment with setting priorities on items. I made six nearly identical AppleScripts, which I bound to ctrl-1 through ctrl-5 and ctrl-0 (to remove the priority):

tell application "DEVONthink Pro"
  try
    repeat with r in (the selection as list)
      set old_tags to the tags of r
      set new_tags to {}
      repeat with t in old_tags
        if t is not in {"1", "2", "3", "4", "5"} then set new_tags to new_tags & {t}
      end repeat
 
      -- Change the "1" to the desired priority for each script;
      --   comment out the following line to remove the priority.
      set new_tags to new_tags & "1"
      set the tags of r to new_tags
    end repeat
  on error error_message number error_number
    if the error_number is not -128 then display alert "DEVONthink Pro" message error_message as warning
  end try
end tell