This is a list of project ideas. Most of them are half-formed and will require a great deal of independent thinking and work from the student, but also represent excellent learning opportunities.
Most of these projects deal with physical computing: computing devices that somehow interact with the real world.
If you're looking for an MSc thesis idea, be sure to also read my advice for thesis students.
This list of projects is organized into rough categories, illustrating the balance of work involved.

A frequent problem for novices in 3D modeling is understanding size. Can we build a database of standardized "familiar objects" (e.g., consumer electronics with known sizes, like phones) to use as in-software reference items? This could help novices get a sense of the true size of their model, and the objects could possibly also serve as reference geometry.
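As a very rough sketch of how such a size-reference lookup might work (the object names and dimensions below are illustrative placeholders, not a vetted database):

```python
# Sketch: suggest a "familiar object" whose size is close to the model's,
# so a novice can sanity-check scale. The entries below are placeholders,
# not a real standardized database.

FAMILIAR_OBJECTS = {            # longest dimension in millimetres
    "credit card": 85.6,
    "smartphone": 150.0,
    "soda can": 122.0,
    "AA battery": 50.5,
    "house key": 60.0,
}

def suggest_reference(model_longest_dim_mm, k=2):
    """Return the k familiar objects closest in size to the model."""
    ranked = sorted(FAMILIAR_OBJECTS.items(),
                    key=lambda item: abs(item[1] - model_longest_dim_mm))
    return [name for name, _ in ranked[:k]]

print(suggest_reference(130.0))   # e.g. ['soda can', 'smartphone']
```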
Lint tools find problems in source code. Can we make something similar for augmented fabrication? This would involve modifying a 3D design environment to check for problems of symmetry, centering, objects almost but not quite touching, and so on, and providing feedback to the user about these problems.
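A minimal sketch of one such lint rule, using axis-aligned bounding boxes as stand-ins for real part geometry (a real tool would hook into the design environment's own scene graph):

```python
import itertools
import numpy as np

# One "design lint" rule: flag pairs of parts that almost, but not quite, touch.
# Parts are represented here only by axis-aligned bounding boxes (min corner, max corner).

def aabb_gap(a_min, a_max, b_min, b_max):
    """Separation between two axis-aligned boxes (0 if they overlap or touch)."""
    per_axis = np.maximum(0.0, np.maximum(b_min - a_max, a_min - b_max))
    return np.linalg.norm(per_axis)

def lint_almost_touching(parts, tolerance=0.5):
    """Report part pairs whose gap is positive but below `tolerance` (mm)."""
    warnings = []
    for (name_a, box_a), (name_b, box_b) in itertools.combinations(parts.items(), 2):
        gap = aabb_gap(box_a[0], box_a[1], box_b[0], box_b[1])
        if 0.0 < gap < tolerance:
            warnings.append(f"{name_a} and {name_b} are {gap:.2f} mm apart; did you mean them to touch?")
    return warnings

parts = {
    "base":   (np.array([0.0, 0.0, 0.0]),  np.array([40.0, 40.0, 5.0])),
    "pillar": (np.array([10.0, 10.0, 5.3]), np.array([20.0, 20.0, 30.0])),  # floats 0.3 mm above the base
}
print(lint_almost_touching(parts))
```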
Figure out how to make voice assistant commands discoverable. Can it be done without using visual assistance?
Two points make a line, but three can make a plane, a triangle, or a circle. The goal in this project is to take a small set of measurements in space and suggest possible shapes those measurements could have come from: was it a cube, a sphere, a light bulb, or an iPhone? This will form part of an interactive system to help people create digital models of physical objects.
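As a sketch of the underlying fitting step, here is a least-squares sphere fit with a residual score; a real system would fit several candidate shapes (plane, box, cylinder, and so on) and rank them by residual:

```python
import numpy as np

# Sketch: given a handful of measured 3D points, score how well a sphere
# explains them. The candidate shape with the lowest residual is suggested first.

def fit_sphere(points):
    """Least-squares sphere fit. Returns (center, radius, rms_residual)."""
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])   # |p - c|^2 = r^2 rearranged into a linear system
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    radius = np.sqrt(d + center @ center)
    residual = np.sqrt(np.mean((np.linalg.norm(P - center, axis=1) - radius) ** 2))
    return center, radius, residual

# Four points roughly on a 40 mm-radius sphere (e.g. a light bulb's globe).
pts = [(40, 0, 0), (0, 40, 0), (0, 0, 40), (-40, 0.5, 0)]
center, radius, rms = fit_sphere(pts)
print(f"best sphere: r = {radius:.1f} mm, rms error = {rms:.2f} mm")
```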
Lots of fabrication activities concentrate on small, handheld objects. How can we design for larger objects, like furniture? Can we do this at a 1:1 scale, situated in the real world? Augmented reality could be a good way to do this. What kinds of tools and visualization abilities are needed?
Can we control objects in the environment solely via eye tracking? By using the vestibulo-ocular reflex, the reflex that keeps our gaze fixed on a target even while the head moves, we might be able to get information about how the head is moving and use this motion as a control mechanism.
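A sketch of the core signal-processing idea, assuming eye-in-head angles from a head-mounted tracker (the gesture labels and thresholds below are placeholders):

```python
import numpy as np

# While the vestibulo-ocular reflex holds gaze on a target, the eye rotates
# (in the head) roughly equal and opposite to the head, so head angular
# velocity can be estimated as the negated derivative of the eye-in-head
# angles reported by the eye tracker.

def head_velocity_from_eye(eye_yaw_deg, eye_pitch_deg, sample_rate_hz):
    """Estimate head yaw/pitch angular velocity (deg/s) during a fixation."""
    eye = np.stack([eye_yaw_deg, eye_pitch_deg])             # (2, n) eye-in-head angles
    eye_velocity = np.gradient(eye, 1.0 / sample_rate_hz, axis=1)
    return -eye_velocity                                      # VOR: head moves opposite to eye

def classify(head_velocity, threshold=20.0):
    """Placeholder gesture rule: call it a shake or nod if velocity peaks."""
    yaw_peak, pitch_peak = np.abs(head_velocity).max(axis=1)
    if yaw_peak > threshold and yaw_peak > pitch_peak:
        return "head shake (yaw)"
    if pitch_peak > threshold:
        return "head nod (pitch)"
    return "no gesture"

# Example: 1 s of samples at 100 Hz while the eyes counter-rotate in yaw.
t = np.linspace(0, 1, 100)
yaw = 5.0 * np.sin(2 * np.pi * 3 * t)
pitch = np.zeros_like(t)
print(classify(head_velocity_from_eye(yaw, pitch, 100)))
```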
In our lab, we’re looking at pneumatic (air-driven) interaction. However, we need a design environment that allows the construction of tubes, holes, and various other features in 3D meshes. This project would involve using graphics libraries to construct or modify a design environment to make this task easier.
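For example, a channel-cutting operation might look like the following sketch, which assumes the trimesh Python library with a boolean backend (e.g. Blender or manifold3d) installed; a bespoke design environment could use any CSG-capable kernel instead:

```python
import numpy as np
import trimesh

# Cut a straight air channel through a solid block using a boolean difference.
block = trimesh.creation.box(extents=[40.0, 40.0, 20.0])           # mm

# A 3 mm-diameter channel running along the x axis through the block's centre.
channel = trimesh.creation.cylinder(radius=1.5, height=60.0)        # created along z
channel.apply_transform(trimesh.transformations.rotation_matrix(
    np.pi / 2, [0, 1, 0]))                                          # rotate z-axis cylinder onto x

part_with_channel = block.difference(channel)
part_with_channel.export("pneumatic_part.stl")
```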
I have many ideas for building a printer that can embed threads into a 3D-printed object. There’s a non-trivial (I think) algorithmic task here in figuring out the right way to route threads while printing to make everything work correctly.
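One constraint I suspect matters (an assumption about the process, not a settled requirement): a continuous thread can only be laid into the layer currently being printed, so its anchor points must be visited in non-decreasing layer order. A sketch of checking and repairing routes under that assumption:

```python
# A thread route is a list of (layer_index, x_mm, y_mm) anchor points.

def is_printable_route(route):
    """Check that a route never asks the printer to go back down a layer."""
    layers = [layer for layer, _, _ in route]
    return all(a <= b for a, b in zip(layers, layers[1:]))

def split_into_printable_segments(route):
    """Greedily cut a route into segments that each satisfy the constraint."""
    segments, current = [], [route[0]]
    for point in route[1:]:
        if point[0] >= current[-1][0]:
            current.append(point)
        else:                         # would require going back down a layer: start a new thread
            segments.append(current)
            current = [point]
    segments.append(current)
    return segments

route = [(2, 0, 0), (5, 10, 0), (3, 10, 10), (7, 0, 10)]
print(is_printable_route(route))                  # False
print(len(split_into_printable_segments(route)))  # 2 separate threads needed
```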
We are working on a large-scale air table—imagine an air hockey table with each jet of air individually controllable. There is a lot of mechanical construction to be done, as well as software work (both Arduino-level and computer control) to create interesting interactions and demos.
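On the computer-control side, the loop might look roughly like this sketch using pyserial; the one-byte-per-jet framing and the port name are made-up placeholders, not the actual firmware protocol:

```python
import time
import serial  # pyserial

N_JETS = 64          # e.g. an 8 x 8 grid of individually controllable jets
SYNC = 0xFF          # hypothetical frame marker understood by the Arduino firmware

def send_frame(port, jet_states):
    """Send one on/off byte per jet, prefixed by a sync byte."""
    assert len(jet_states) == N_JETS
    port.write(bytes([SYNC]) + bytes(1 if s else 0 for s in jet_states))

# Demo: sweep a column of air across the table.
with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:   # placeholder port name
    for step in range(200):
        col = step % 8
        states = [1 if (i % 8) == col else 0 for i in range(N_JETS)]
        send_frame(port, states)
        time.sleep(0.05)
```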
Build and evaluate an interface that lets people control a wearable by wiggling their ears. (Maybe it also teaches them how to wiggle their ears first.)
Previous research at CMU demonstrated acoustic barcodes: patterns of ridges that make a characteristic sound when scratched. Can we print small areas with the same property into 3D-printed objects, so that a user can simply scratch an area of interest with a fingernail? This could be done by varying the layer height or other properties of the 3D printing process.
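A sketch of generating such a pattern; the narrow-gap/wide-gap encoding here is my own placeholder, not the published acoustic-barcode scheme:

```python
# Turn a small tag ID into a scratchable ridge pattern. The output is just a
# list of ridge x-positions that a design tool could turn into raised ridges
# (or layer-height variations) on a surface.

RIDGE_WIDTH = 0.8      # mm, roughly a couple of extrusion widths
GAP_FOR_0   = 1.0      # mm
GAP_FOR_1   = 2.0      # mm

def ridge_positions(tag_id, n_bits=8):
    """Return the x-position of each ridge encoding `tag_id`, LSB first."""
    positions, x = [0.0], 0.0
    for bit in range(n_bits):
        gap = GAP_FOR_1 if (tag_id >> bit) & 1 else GAP_FOR_0
        x += RIDGE_WIDTH + gap
        positions.append(x)
    return positions

print(ridge_positions(0b10110010))
```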
Can we identify characteristics about users by how they grasp [1] a tool while they’re using it? Soldering iron, screwdriver, hammer, saw… maybe we can identify users in order to track tool use [2], or determine experience level with that tool, or automatically figure out where in the process of a project someone is [3].
References:
Fluidics, or fluid logic, is a principle that uses air to perform computation. How can we use this principle to create 3D printed objects that become interactive with only air input? This would allow the quick creation of useful objects without any complex assembly. This project involves a lot of experimentation with fabrication, as well as reading historical literature about design for fluidics.
How could voice and gesture control be used to help in modeling for 3D printing? What kinds of things might people say? How would their commands and gestures change based on what they are modeling? This project would involve asking people to pretend to use a future 3D-modeling system in this way, and determining what kinds of features would be natural to use.