Stumbled on this guy who makes really elaborate jigs and uses shop tools in a high-precision-meets-resourceful-boy-scout fashion. None of it is digital fabrication, but I think it's informative to see how people make do without CNC routers by rethinking everyday hobby-shop tools.
He made a pantorouter, which is a pantograph that moves a router bit instead of a pen or pencil. He designed it for cutting wooden gears, but realized it also makes fantastic mortise-and-tenon joints and integral dowels.
Description from the website: WikiHouse is an open source construction set. Its aim is to allow anyone to design, download, and ‘print’ CNC-milled houses and components, which can be assembled with minimal formal skill or training.
Here’s a video of an impressive Kinect SLAM execution. SLAM, an acronym for Simultaneous Localization And Mapping, is the bread-and-butter technique for getting robots to know what’s going on in their world.
I first wanted to reference one of the more natural forms of 3D scanning: the echolocation system that bats use. Bats emit high-pitched sound waves at specific frequencies and then listen for the returning echo. The interesting part is that the signals bats send and receive are so rapid and complicated that they seem far more complex than the brain should be able to process, but scientists theorize that bats effectively slow down their processing of the returning wavelengths so they can pick out the patterns.
Multi-Kinect Scanning:
Secondly is a project I worked on last semester with Jonathan Armistead. Below is a video of the system we used to do full-body scans. The scanner was developed for “professional” use in a medical facility, but in actuality the scanning system uses multiple Kinect cameras. What makes the system impressive is that all 8 Kinects grab and record point cloud data at the exact same time. The system then outputs 4 separate point clouds.
In terms of process, I was hired by Jonathan to help align the point clouds using MeshLab. The workflow involves taking the multiple pieces of the scan, rotating them into near-perfect alignment by hand, and then having the computer refine the alignment the rest of the way (overall a very tedious process). Afterwards, the aligned point clouds are converted into a mesh for further processing.
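For reference, the step where the computer “guesses and rotates the pieces the rest of the way” is essentially iterative closest point (ICP) registration. Here’s a minimal numpy/scipy sketch of how that refinement can work, assuming two roughly pre-aligned scans stored as N×3 arrays (the function names and iteration count are just illustrative, not what MeshLab literally runs):

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection sneaking in
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source, target, iterations=30):
    """Refine a rough manual alignment: source and target are (N, 3) point clouds."""
    tree = cKDTree(target)
    current = source.copy()
    for _ in range(iterations):
        # pair every source point with its nearest neighbour in the target scan
        _, idx = tree.query(current)
        R, t = best_fit_transform(current, target[idx])
        current = current @ R.T + t
    return current
```

This only converges to the right answer if the manual rough alignment is already close, which is why all that tedious hand-rotation matters in the first place.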
Positives:
~high-resolution scans of full figures
~textures which roughly match the figures
Negatives:
~tedious alignment of scans
~bubbling of skin texture?
~immense time spent scanning, aligning, and smoothing/reworking
Sonar is pretty old technology, but there’s a really cool company doing high-end imaging versions of it that go far beyond detecting torpedoes and submarines in the water.
A sonar imager or scanner essentially just has an emitter, a receiver, and a processor. The emitter emits a sound pulse and the receiver picks up the reflection. The processor measures the attenuation and the delay of the reflected impulse signal to determine how far away the object is. Do this enough times over a large area, from a few directions, and you get coordinates on the object’s surface which you can then turn into surfaces/meshes.
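As a rough sketch of the math involved (the speed-of-sound constant and beam angles below are illustrative placeholders, not any particular scanner’s numbers): the echo delay gives you a range, and the known direction of the beam turns that range into a point on the object’s surface.

```python
import numpy as np

SPEED_OF_SOUND_WATER = 1500.0   # m/s, roughly; varies with temperature and salinity

def echo_range(delay_s):
    """Convert a round-trip echo delay into a one-way distance to the reflector."""
    return SPEED_OF_SOUND_WATER * delay_s / 2.0

def beam_to_point(delay_s, azimuth_rad, elevation_rad):
    """Turn a range plus the known beam direction into an (x, y, z) surface point."""
    r = echo_range(delay_s)
    x = r * np.cos(elevation_rad) * np.cos(azimuth_rad)
    y = r * np.cos(elevation_rad) * np.sin(azimuth_rad)
    z = r * np.sin(elevation_rad)
    return np.array([x, y, z])

# e.g. a 20 ms round trip straight ahead puts the surface about 15 m away
print(beam_to_point(0.020, 0.0, 0.0))
```

Sweep the beam across the scene (and repeat from a few vantage points), and the resulting cloud of points is what gets meshed into images like the bridge scan below.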
Here is a 3D scan of a submerged fallen bridge structure at the bottom of a river, taken with a sonar imager.
Some cool gifs of Lamprey getting scanned from 5 metres away in really really dark water.
They combined it with low-light underwater robotic photography to make a really intense map of a shipwreck.
The good: Using sound to generate visual images is such an engaging concept to my inner printmaker. The synaesthesia of this process involves so much translation of data between different stages (that analyse the attenuation, delay, direction…); it is pretty impressive that we can actually do it without very much technology involved. Whales, bats and dolphins use it, so it has to be pretty useful, especially when light isn’t available. You can also scan really big things.
The bad: Sonar is nasty in terms of noise, movement, and random refractions/reflections that corrupt the actual reflected wave data the receiver is supposed to collect. It’s also expensive, and it doesn’t really work on small objects because sound has a pretty big wavelength (compared to light).
Medical imaging technologies such as CT and MRI create sets of 2D slices. There are various ways to reconstruct these 2D slices into 3D models. Generally each slice was taken at a known distance from the next. Imaging software can create a simple 3D structure by placing each of these images that known distance apart in 3D space, thereby creating a 3D model out of 2D images. This model can be further analyzed by doing a “volume rendering” in which internal objects with different grey-scale values are separated into different 3D components.
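As a rough sketch of that stacking-and-separating idea (the folder path, threshold, and assumption that pydicom and scikit-image are available are all mine, not part of any particular tool’s workflow):

```python
import glob
import numpy as np
import pydicom
from skimage import measure

# Load every slice in the series and sort them by position along the scan axis
# (the folder path is just a placeholder)
slices = [pydicom.dcmread(p) for p in glob.glob("ct_series/*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

# Stack the 2D images into a 3D volume; the known slice spacing is what places
# each image the correct distance apart in 3D space
volume = np.stack([s.pixel_array for s in slices])
dz = float(slices[0].SliceThickness)                  # distance between slices (mm)
dy, dx = (float(v) for v in slices[0].PixelSpacing)   # in-plane pixel size (mm)

# Separate components by grey-scale value: pick a threshold and extract the
# surface of everything above it (e.g. bone versus soft tissue)
verts, faces, normals, values = measure.marching_cubes(
    volume, level=300, spacing=(dz, dy, dx)
)
print(verts.shape, faces.shape)   # a 3D surface built from a stack of 2D slices
```

Tools like OsiriX do this stacking internally, with much fancier interactive rendering on top.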
The following is an image of a 3D model of the brain and eyeballs created in OsiriX, a free piece of software:
I am not sure how to classify this, but I’ll call it a skin printer. Living cells are cultivated and then applied to places where skin was burnt, where new skin then forms. Here is the Wikipedia article on it.