Posts from the ‘Uncategorized’ category

First impressions: Sony DPT-S1

Quick thoughts. More comprehensive post to follow eventually.

  • Highlight button on pen is awesome
  • Wish it had an eraser mode or at least a quick undo button
  • They anticipated some things very intelligently, like the ability to duplicate a tab (I use this to flip quickly to refs)
  • Fast page flip is great
  • Zoom out is nice too
  • Pen calibration could use work, especially at edges
  • Manual pen calibration is stupid
  • Annotating with ink in boxes doesn’t work with Preview, crappy with Acrobat
  • I wish landscape viewing of portrait docs had annotation space on side, but that would be weird with PDFs
  • Box sync is ok but seems really slow
  • Need to figure out workflow with Papers for transferring back and forth
  • Color pictures obviously suck
  • Higher-res display would be nice
  • Ink shows up surprisingly fast for e-ink, but there’s still noticeable lag
  • Need more margin space somehow for notes
  • Wish it showed page number without bottom bar
  • Smart moving annotation bar is smart
  • Hand rejection is nearly perfect
  • Someone complained about the 10-page limit for notes, but one could try adding a custom many-page PDF for notes
  • Can’t use it while it’s charging

Vim script to resize and split

In writing things like grant proposals, I find myself wanting to refer to other files or to earlier parts of the same file. So I borrowed Michael Scarpa’s resize script and adapted it to add and remove a window split at the same time. One important note: under iTerm2, “Disable session-initiated window resizing” must be turned off in the Window tab of the current profile’s settings.

function SizeUpFunc()
	if exists("g:oldColumns")
		return
	endif
	let g:oldColumns = &columns
	au VimLeave * SizeDown
	set columns=160
	vsplit
endfunction
command SizeUp call SizeUpFunc()
 
function SizeDownFunc()
	if !exists("g:oldColumns")
		return
	endif
	only   "Remove window split
	let &columns = g:oldColumns
	unlet g:oldColumns
endfunction
command SizeDown call SizeDownFunc()
 
"Map control-right-arrow to expand and ctrl-left-arrow to contract
map <C-Right> :SizeUp<CR>
map <C-Left> :SizeDown<CR>

CHI 2014 Trip Notes: Papers 2–?

I haven’t gotten together my notes on the other papers yet, but here’s a list of other interesting ones:

Yoshio Ishiguro and Ivan Poupyrev. 3D printed interactive speakers. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Vidya Setlur and Jock D Mackinlay. Automatic generation of semantic icon encodings for visualizations. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Steven J Jackson and Laewoo Kang. Breakdown, obsolescence and reuse: HCI and the art of repair. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

David A Mellis and Leah Buechley. Do-It-Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Xiang ‘Anthony’ Chen, Tovi Grossman, Daniel J Wigdor and George Fitzmaurice. Duet: Exploring Joint Interactions on a Smart Phone and a Smart Watch. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Georgios Marentakis and Rudolfs Liepins. Evaluation of hear-through sound localization. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Robert Xiao, Gierad Laput and Chris Harrison. Expanding the input expressivity of smartwatches with mechanical pan, twist, tilt and click. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Arun Kulshreshth and Joseph J LaViola Jr. Exploring the usefulness of finger-based 3D gesture menu selection. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Esben Warming Pedersen and Kasper Hornbæk. Expressive Touch: Studying Tapping Force on Tabletops. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Tawanna R Dillahunt. Fostering social capital in economically distressed communities. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Jari Kangas, Deepak Akkil, Jussi Rantala, Poika Isokoski, Päivi Majaranta and Roope Raisamo. Gaze gestures and haptic feedback in mobile devices. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Albert Ng, Michelle Annett, Paul Dietz, Anoop Gupta and Walter F Bischof. In the Blink of an Eye: Investigating Latency Perception during Stylus Interaction. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Yi Ren, Yang Li and Edward Lank. InkAnchor: Enhancing Informal Ink-Based Note Taking on Touchscreen Mobile Phones. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Ian Oakley and Doyoung Lee. Interaction on the Edge: Offset Sensing for Small Devices. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Nicholas J Bryan, Gautham J Mysore and Ge Wang. ISSE: An Interactive Source Separation Editor. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Dominik Schmidt, Raf Ramakers, Esben W Pedersen, Johannes Jasper, Sven Köhler, Aileen Pohl, Hannes Rantzsch, Andreas Rau, Patrick Schmidt, Christoph Sterz, Yanina Yurchenko and Patrick Baudisch. Kickables: Tangibles for Feet. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Joseph Jofish Kaye, Mary McCuistion, Rebecca Gulotta and David A Shamma. Money Talks: Tracking Personal Finances. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Martin Weigel, Vikram Mehta and Jürgen Steimle. More Than Touch: Understanding How People Use Skin as an Input Surface for Mobile Computing. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

John Vines, Paul Dunphy and Andrew Monk. Pay or Delay: The Role of Technology When Managing a Low Income. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Graham Wilson, Thomas Carter, Sriram Subramanian and Stephen A Brewster. Perception of Ultrasonic Haptic Feedback on the Hand: Localisation and Apparent Motion. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Thomas Fritz, Elaine M Huang, Gail C Murphy and Thomas Zimmermann. Persuasive Technology in the Real World: A Study of Long-Term Use of Activity Sensing Devices for Fitness. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Martin Spindler, Martin Schuessler, Marcel Martsch and Raimund Dachselt. Pinch-Drag-Flick vs. Spatial Input: Rethinking Zoom & Pan on Mobile Displays. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Christopher Smith-Clarke, Afra Mashhadi and Licia Capra. Poverty on the Cheap: Estimating Poverty Maps Using Aggregated Mobile Communication Networks. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Onur Yürüten, Jiyong Zhang and Pearl H Z Pu. Predictors of life satisfaction based on daily activities from mobile sensor data. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Christian Rendl, Patrick Greindl, Kathrin Probst, Martin Behrens and Michael Haller. Presstures: Exploring Pressure-Sensitive Multi-Touch Gestures on Trackpads. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Jie Qi and Leah Buechley. Sketching in Circuits: Designing and Building Electronics on Paper. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Alexander Travis Adams, Berto Gonzalez and Celine Latulipe. SonicExplorer: Fluid Exploration of Audio Parameters. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Martin Murer, Mattias Jacobsson, Siri Skillgate and Petra Sundström. Taking things apart: reaching common ground and shared material understanding. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Jörg Müller, Matthias Geier, Christina Dicke and Sascha Spors. The BoomRoom: Mid-air Direct Interaction with Virtual Sound Sources. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Joshua Tan, Khanh Nguyen, Michael Theodorides, Heidi Negrón-Arroyo, Christopher Thompson, Serge Egelman and David Wagner. The effect of developer-specified explanations for permission requests on smartphone user behavior. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Jun-Ki Min, Afsaneh Doryab, Jason Wiese, Shahriyar Amini, John Zimmerman and Jason I Hong. Toss ’N’ Turn: Smartphone as Sleep and Sleep Quality Detector. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Da-Yuan Huang, Ming-Chang Tsai, Ying-Chao Tung, Min-Lun Tsai, Yen-Ting Yeh, Liwei Chan, Yi-Ping Hung and Mike Y Chen. TouchSense: Expanding Touchscreen Input Vocabulary Using Different Areas of Users’ Finger Pads. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Oliver Bates, Mike Hazas, Adrian Friday, Janine Morley and Adrian K Clear. Towards an holistic view of the energy and environmental impacts of domestic media and IT. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Ross McLachlan, Daniel Boland and Stephen Brewster. Transient and Transitional States: Pressure as an Auxiliary Input Modality for Bimanual Interaction. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Taku Hachisu and Masaaki Fukumoto. VacuumTouch: Attractive Force Feedback Interface for Haptic Interactive Surface using Air Suction. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

CHI 2014 Trip Notes: Papers 1

I attended CHI 2014 in Toronto and had a great time. Here, in no particular order, are some of the papers I found interesting.

Emerging sites of HCI innovation: hackerspaces, hardware startups & incubators

An investigation of how hackerspaces and other maker-culture places can turn into startups. Some discussion of what HCI’s role is going forward: understanding the relationship between design & manufacturing (e.g. the challenge of going from 3D printed models to actual manufactured product); moving beyond “amateur expertise” to actual professionalism; and finding alternatives to the (over-)hyped narrative of the maker movement.
Abstract >

In this paper, we discuss how a flourishing scene of DIY makers is turning visions of tangible and ubiquitous computing into products. Drawing on long-term multi-sited ethnographic research and active participation in DIY making, we provide insights into the social, material, and economic processes that undergird this transition from prototypes to products. The contribution of this paper is three-fold. First, we show how DIY maker practice is illustrative of a broader “return to” and interest in physical materials. This has implications for HCI research that investigates questions of materiality. Second, we shed light on how hackerspaces and hardware start-ups are experimenting with new models of manufacturing and entrepreneurship. We argue that we have to take seriously these maker practices, not just as hobbyist or leisure practice, but as a professionalizing field functioning in parallel to research and industry labs. Finally, we end with reflections on the role of HCI researchers and designers as DIY making emerges as a site of HCI innovation. We argue that HCI is positioned to provide critical reflection, paired with a sensibility for materials, tools and design methods.

Silvia Lindtner, Garnet D Hertz and Paul Dourish. Emerging sites of HCI innovation: hackerspaces, hardware startups & incubators. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Printing Teddy Bears: A Technique for 3D Printing of Soft Interactive Objects

This was a really fun paper. The motivation is that currently, everything we can 3D print is hard, so let’s investigate “printing” with softer materials. Scott Hudson, the author, modified his home 3D printer to needle-felt yarn into custom shapes, such as teddy bears. It’s a great example of how progress in personal fabrication has hit an inflection point, letting this kind of experimentation with new materials and techniques happen at a rapid pace.

Abstract >
This paper considers the design, construction, and example use of a new type of 3D printer which fabricates three-dimensional objects from soft fibers (wool and wool blend yarn). This printer allows the substantial advantages of additive manufacturing techniques (including rapid turn-around prototyping of physical objects and support for high levels of customization and configuration) to be employed with a new class of material. This material is a form of loose felt formed when fibers from an incoming feed of yarn are entangled with the fibers in layers below it. The resulting objects recreate the geometric forms specified in the solid models which specify them, but are soft and flexible — somewhat reminiscent in character to hand knitted materials. This extends 3D printing from typically hard and precise forms into a new set of forms which embody a different aesthetic of soft and imprecise objects, and provides a new capability for researchers to explore the use of this class of materials in interactive devices.

Scott E Hudson. Printing Teddy Bears: A Technique for 3D Printing of Soft Interactive Objects. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Project page

Consumed Endurance: A Metric to Quantify Arm Fatigue of Mid-Air Interactions

The famous gesture-based interface in Minority Report looked super-cool, but has been criticized because waving your arms in the air for a long period of time would get very tiring. This paper quantifies just how bad this can be, looking at how long people can hold out their arms and deriving some guidelines on how to design interfaces for mid-air interaction.

Abstract >
Mid-air interactions are prone to fatigue and lead to a feeling of heaviness in the upper limbs, a condition casually termed as the gorilla-arm effect. Designers have often associated limitations of their mid-air interactions with arm fatigue, but do not possess a quantitative method to assess and therefore mitigate it. In this paper we propose a novel metric, Consumed Endurance (CE), derived from the biomechanical structure of the upper arm and aimed at characterizing the gorilla-arm effect. We present a method to capture CE in a non-intrusive manner using an off-the-shelf camera-based skeleton tracking system, and demonstrate that CE correlates strongly with the Borg CR10 scale of perceived exertion. We show how designers can use CE as a complementary metric for evaluating existing and designing novel mid-air interactions, including tasks with repetitive input such as mid-air text-entry. Finally, we propose a series of guidelines for the design of fatigue-efficient mid-air interfaces.

Juan David Hincapié-Ramos, Xiang Guo, Paymahn Moghadasian and Pourang Irani. Consumed Endurance: A Metric to Quantify Arm Fatigue of Mid-Air Interactions. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Vulture: A Mid-Air Word-Gesture Keyboard


This paper looks at how you might make a Swype-like gesture keyboard in mid-air. It makes sense for game systems like the Xbox with the Kinect, to improve the speed and convenience of text input. The system gets reasonable speed, about 28 WPM after user training. This paper received a Best Paper Award.
Abstract >

Word-gesture keyboards enable fast text entry by letting users draw the shape of a word on the input surface. Such keyboards have been used extensively for touch devices, but not in mid-air, even though their fluent gestural input seems well suited for this modality. We present Vulture, a word-gesture keyboard for mid-air operation. Vulture adapts touch based word-gesture algorithms to work in mid-air, projects users’ movement onto the display, and uses pinch as a word delimiter. A first 10-session study suggests text-entry rates of 20.6 Words Per Minute (WPM) and finds hand-movement speed to be the primary predictor of WPM. A second study shows that with training on a few phrases, participants do 28.1 WPM, 59% of the text-entry rate of direct touch input. Participants’ recall of trained gestures in mid-air was low, suggesting that visual feedback is important but also limits performance. Based on data from the studies, we discuss improvements to Vulture and some alternative designs for mid-air text entry.

Anders Markussen, Mikkel Rønne Jakobsen and Kasper Hornbæk. Vulture: A Mid-Air Word-Gesture Keyboard. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Understanding finger input above desktop devices


There’s been a good amount of previous work looking at interacting in the space above a desktop display. This paper considers the practicality aspects of the idea by studying how well people can stay within and move between different height layers.
Abstract >

Using the space above desktop input devices adds a rich new input channel to desktop interaction. Input in this elevated layer has been previously used to modify the granularity of a 2D slider, navigate layers of a 3D body scan above a multitouch table and access vertically stacked menus. However, designing these interactions is challenging because the lack of haptic and direct visual feedback easily leads to input errors. For bare finger input, the user’s fingers needs to reliably enter and stay inside the interactive layer, and engagement techniques such as midair clicking have to be disambiguated from leaving the layer. These issues have been addressed for interactions in which users operate other devices in midair, but there is little guidance for the design of bare finger input in this space.

In this paper, we present the results of two user studies that inform the design of finger input above desktop devices. Our studies show that 2 cm is the minimum thickness of the above-surface volume that users can reliably remain within. We found that when accessing midair layers, users do not automatically move to the same height. To address this, we introduce a technique that dynamically determines the height at which the layer is placed, depending on the velocity profile of the user’s initial finger movement into midair. Finally, we propose a technique that reliably distinguishes clicking from homing movements, based on the user’s hand shape. We structure the presentation of our findings using Buxton’s three-state input model, adding additional states and transitions for above-surface interactions.

Chat Wacharamanotham, Kashyap Todi, Marty Pye and Jan Borchers. Understanding finger input above desktop devices. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

CHI 2014 Trip Notes: Posters

I attended CHI 2014 in Toronto and had a great time. Here, in no particular order, are some of the things I saw in the posters section. Posters are smaller research results, or works in progress, presented with a poster and a short paper.

Suit Up! Eyes-Free Interactions on Jacket Buttons

Some design thinking and basic prototyping of how you might interact with wearable technology using buttons on your clothing. Prototype suit jacket buttons above are: 1. Four-way control; 2. Status LEDs; 3. OLED Display; 4. Piezo-electric Speaker.

Kashyap Todi and Kris Luyten. Suit up!: enabling eyes-free interactions on jacket buttons. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 2014.

Project page

ExtendedThumb

An offset cursor technique to help users select items using their thumb on large screens (e.g. Samsung Galaxy Note), by using a virtual thumb. “After initiating it with double tap: 1) aim at a target, 2) move the virtual thumb towards the target, 3) adjust the virtual thumb position and place the red cross located at the tip of the virtual thumb on the target, and 4) lift the real thumb up from the screen to select the target.”

Jianwei Lai and Dongsong Zhang. ExtendedThumb. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 2014.

Extending Interaction for Smart Watches

Some exploration around expanding the interaction space for smart watches by using the hand’s surface. The idea is to use some sort of depth sensor pointing out of the side of the watch (their prototype used IR proximity sensors) to allow touchscreen-type gestures on the hand—kind of a Magic Trackpad for your watch. Various gestures could be supported.

Whirlstools

A design exploration of adaptive architecture: stools that rotate themselves to encourage social interaction. “Every time a person newly sits on a stool, all the other unoccupied stools in its vicinity also rotate, in ways so that when another person comes along and sits on any of the remaining stools, s/he will be more likely to sit face-to-face (and consequently more likely to have spontaneous conversations) with people already seated…”.

Yuichiro Takeuchi and Jean You. Whirlstools: kinetic furniture with adaptive affordance. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 2014.

Project page

CHI 2014 Trip Notes: Interactivity

I attended CHI 2014 in Toronto and had a great time. Here, in no particular order, are some of the fun things I saw at the “Interactivity” exhibits. These exhibits let you try out demos of various systems in person, and were lots of fun!

Comp*pass

This was probably my favorite thing at the conference! It’s a drawing compass that dynamically opens and closes, allowing you to draw any radial shape—including squares.

Ken Nakagaki and Yasuaki Kakehi. Comp*Pass: a compass-based drawing interface. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 2014.

Project page

GaussBricks


A fun system that uses a high-density grid of Hall-effect sensors underneath an iPad to track, in real time, the motion of magnetic building blocks above the surface. It allows a playful, hybrid physical-virtual experience.

Rong-Hao Liang, Liwei Chan, Hung-Yu Tseng, Han-Chih Kuo, Da-Yuan Huang, De-Nian Yang and Bing-Yu Chen. GaussBricks: magnetic building blocks for constructive tangible interactions on portable displays. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Project page

Wrigglo

A wacky interface for emotional communication between friends. Two flexible antennae protrude from the top of the phone and actuate to reflect the emotion of another person. They use spring-form shape memory wire to actuate the antennae.

Joohee Park, Young-Woo Park and Tek-Jin Nam. Wrigglo: shape-changing peripheral for interpersonal mobile communication. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Project page

Muzlog

A highly-augmented guitar with capacitive neck stickers, pickups, and a ring that automatically transcribes what’s being played.

Han-Jong Kim and Tek-Jin Nam. Muzlog: instant music transcribing system for acoustic guitarists. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 2014.

Tactonic Touch 2.0

A grid of force-sensitive resistors to do multi-touch and -pressure sensing, with mechanical force distribution for higher resolution with fewer sensors.

Alex M Grau, Charles Hendee, John-Ross Rizzo and Ken Perlin. Mechanical force redistribution. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Product page

Maker Vis

Physically visualizing data with personal fabrication technology: 3D print or laser-cut an actual, physical bar chart.

Saiganesh Swaminathan, Conglei Shi, Yvonne Jansen, Pierre Dragicevic, Lora A Oehlberg and Jean-Daniel Fekete. Supporting the design and fabrication of physical visualizations. In CHI’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.

Project page

InGrid: Interactive Grid Table

A table made of multiple replaceable squares. Each tile is augmented with electronics so a server knows where it is, allowing the user to share content between tablets attached to tiles by swiping.

Mounia Ziat, Josh Fridstrom, Kurt Kilpela, Jonathan Fancher and James J Clark. Ingrid: interactive grid table. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 2014.

Project page

Game of Tones

A piano learning interface using projected graphics to help you know what to play next. Although the video shows simple bars, the CHI Interactivity version was more like Space Invaders, with the next notes falling from the top of the screen.

Linsey Raymaekers, Jo Vermeulen, Kris Luyten and Karin Coninx. Game of tones: learning to play songs on a piano using projected instructions and games. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 2014.

Project page

HamsaTouch

An electro-tactile display: a 32×16 grid of phototransistors on one side is mapped to a 32×16 grid of electrodes on the other side. You put your palm on the electrodes, and light detected on the phototransistor side is turned into tiny, tickly electric shocks! Paired with a mobile phone running an edge detection algorithm, you can “see” edges by feeling with your hand. Fun to try!
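The light-to-electrode mapping is easy to sketch. This is my own toy illustration, not the authors’ implementation: the 32×16 grid size comes from the description above, but the edge rule, the threshold, and the test image are made up.

```python
# Toy sketch of the HamsaTouch mapping: a 32x16 brightness grid (standing in
# for the phototransistor array) is scanned for sharp brightness changes, and
# the "edge" cells are the ones whose electrodes would be driven.
GRID_W, GRID_H = 32, 16

def edge_cells(image, threshold=0.25):
    """image: GRID_H rows of GRID_W brightness values in [0, 1].
    Returns the set of (x, y) cells whose brightness differs sharply
    from the left or upper neighbour (a crude edge detector)."""
    edges = set()
    for y in range(GRID_H):
        for x in range(GRID_W):
            for nx, ny in ((x - 1, y), (x, y - 1)):
                if nx >= 0 and ny >= 0 and abs(image[y][x] - image[ny][nx]) > threshold:
                    edges.add((x, y))
                    break
    return edges

# A bright vertical stripe: the activated electrodes trace its two borders.
stripe = [[1.0 if 10 <= x < 20 else 0.0 for x in range(GRID_W)]
          for y in range(GRID_H)]
print(sorted({x for x, y in edge_cells(stripe)}))  # → [10, 20]
```

The interior of the stripe produces nothing, which is the point: you feel outlines, not filled shapes.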

Hiroyuki Kajimoto, Masaki Suzuki and Yonezo Kanno. HamsaTouch. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 2014.

Research lab page

For some reason my Nokia Lumia 920 and my MacBook don’t get along where internet sharing over wifi is concerned. The MacBook continually drops the connection with the cryptic message “kernel[0]: AirPort: Link Down on en0. Reason 7 (Frame received from nonassociated STA).” After much searching and failing to find a cure, I fashioned a good old-fashioned command-line band-aid:

while true; do
    if ping -W2 -c2 "$(route -n get default | grep gateway | awk '{print $2}')" > /dev/null; then
        echo "still up"
    else
        echo "reconnect"
        networksetup -setairportnetwork en0 MY_NETWORK_NAME MY_NETWORK_PASSWORD
    fi
done

It turns out that using cheap USB cables with the Raspberry Pi is a bad idea. I was experiencing very odd SSH behavior: anything that sent a chunk of data quickly (e.g., pasting several lines of code into a vi buffer) would kill the SSH session. Replacing the USB cable with a better one solved it. Apparently cheap USB cables use 28 AWG wire for both data and power lines, while better ones use 24 AWG wire for power. Thicker wire means less resistance, which means a better ability to withstand the apparently monstrous power draws behind pasting code!
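Some back-of-the-envelope numbers make the wire-gauge difference concrete. The resistance-per-meter figures below are standard values for copper wire; the one-meter cable length and 0.5 A current draw are my own illustrative assumptions.

```python
# Compare voltage drop across the power wires of a 28 AWG vs 24 AWG USB cable.
# Resistance per meter: standard copper-wire values; the cable length and
# current draw are assumptions for illustration.
OHMS_PER_M = {28: 0.213, 24: 0.084}  # approx. ohms per meter for copper wire

def voltage_drop(awg, length_m=1.0, current_a=0.5):
    """Round-trip drop across the +5 V and ground wires combined."""
    return 2 * length_m * OHMS_PER_M[awg] * current_a

for awg in (28, 24):
    print(f"{awg} AWG: {voltage_drop(awg):.2f} V drop at 0.5 A over 1 m")
```

The 28 AWG cable drops roughly 2.5× more voltage, and during a 1 A spike it loses about 0.43 V, enough to pull a nominal 5 V supply below the 4.75 V USB minimum.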