Description from website: WikiHouse is an open source construction set. Its aim is to allow anyone to design, download, and ‘print’ CNC-milled houses and components, which can be assembled with minimal formal skill or training.
I made a small file in Rhino and simply cut it out of Stonehenge printmaking paper. I chose not to score fold lines or make hard cuts because I felt this mini assignment really pushed me to create something by hand and leave some sense of ‘human error’ in the design. I also felt the small, cute form of an F-14 Tomcat, one of the most celebrated fighters in history, was interesting because of its smooth translation from war machine to toy.
I made an attempt at generating a model of my very messy room. The following is a screenshot of the model generated by 123D Catch from 50 stills. The .obj file is too large to post to the blog, but is available for distribution upon request.
Here’s a video of an impressive Kinect SLAM execution. SLAM, an acronym for Simultaneous Localization And Mapping, is the bread-and-butter technique for getting robots to know what’s going on in their world.
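(SLAM is a deep topic, but the core idea fits in a toy example. Below is a minimal 1-D sketch of my own, not the system in the video: a Kalman filter jointly estimates the robot’s position and one landmark’s position from noisy odometry and range measurements. All the noise values are made up.)

```python
import numpy as np

# Toy 1-D SLAM: the state holds the robot position and one landmark
# position; a Kalman filter fuses noisy odometry with noisy range
# measurements so both estimates improve together.

x = np.array([0.0, 5.0])      # initial guess: robot at 0 m, landmark near 5 m
P = np.diag([0.01, 100.0])    # we barely know where the landmark is

Q = np.diag([0.1, 0.0])       # motion noise (only the robot moves)
R = 0.05                      # range-measurement noise

def predict(x, P, u):
    """Robot moves by u metres; the landmark stays put."""
    x = x + np.array([u, 0.0])
    P = P + Q
    return x, P

def update(x, P, z):
    """Fuse a range measurement z = landmark - robot + noise."""
    H = np.array([[-1.0, 1.0]])     # measurement Jacobian
    y = z - (x[1] - x[0])           # innovation: measured minus predicted range
    S = H @ P @ H.T + R
    K = P @ H.T / S                 # Kalman gain
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# One step: drive 1 m, then measure the landmark at ~4.2 m.
x, P = predict(x, P, u=1.0)
x, P = update(x, P, z=4.2)
print(x)   # the uncertain landmark estimate absorbs most of the correction
```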
Excellent article on the Make Blog on CNC joinery, including discussions of laser cutting vs. CNC routing (see the section titled “Laser vs. Rotary Cutters – The Inside Corner Problem”), as well as wood bending with CNC machinery (see “Flexures”).
I tried capturing my roommate’s gorgeous head, but that didn’t work. I tried capturing my 3D-printed sculpture (which I could then re-print, with the obvious results of insanity and profit), but that didn’t work. I tried capturing my kitchen table, and that worked pretty well, if “Dr. Seuss battles the termite queen” is an aesthetic you’re into. Here’s one of the pictures I took:
I first wanted to reference one of the more natural types of 3D scanning: the sonar system that bats use. Bats send out high-pitched ultrasonic pulses at specific frequencies and wait to receive the echoes. The interesting fact is that the echoes bats receive come back so fast and are so complicated that they seem to be more than the brain should be able to process, but scientists theorize that bats are able to slow down their processing of the returning wavelengths to pick the patterns apart.
Multi-Kinect Scanning:
Second is a project I worked on last semester with Jonathan Armistead. Below is a video of the system we used to do full-body scans. The scanner was developed for “professional” use in a medical facility, but in actuality the scanning system uses multiple Kinect cameras. What makes the system impressive is that all 8 Kinects grab and record point-cloud data at exactly the same time. The system then outputs 4 separate point clouds.
In terms of process, Jonathan hired me to align the point clouds in MeshLab. The workflow involves taking the separate pieces of the scan, rotating them into near-perfect alignment by hand, and then having the computer guess and rotate the pieces the rest of the way (overall a very tedious process). Afterwards, the aligned point clouds are converted into a mesh for further processing.
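For the curious, here is a rough sketch of the same two-stage idea in code: a coarse initial guess followed by automatic refinement via ICP, the kind of algorithm behind that “guess and rotate the rest of the way” step. It uses the open3d Python library rather than MeshLab, and the file names and the 5 cm correspondence threshold are placeholders.

```python
import numpy as np
import open3d as o3d

# Two-stage pairwise alignment, mirroring the MeshLab workflow:
# a coarse initial transform, then ICP to refine it automatically.
source = o3d.io.read_point_cloud("kinect_scan_1.ply")   # placeholder files
target = o3d.io.read_point_cloud("kinect_scan_2.ply")

# Stage 1: coarse initial guess. In MeshLab this is the tedious
# hand-rotation step; here it is just a rough starting transform.
init = np.eye(4)

# Stage 2: let ICP rotate the pieces the rest of the way.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,   # metres; tune per scan
    init=init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

source.transform(result.transformation)
o3d.io.write_point_cloud("aligned_pair.ply", source + target)
```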
Positives:
~high-resolution scans of full figures
~textures that roughly match the figures
Negatives:
~tedious alignment of scans
~bubbling of the skin texture?
~immense time spent scanning, aligning, and reworking the mesh to be smooth
Found a site that shows how you can make your own 3D scanner using just a laser pointer, a wine glass, a rotating platform, and a digital video camera, which seems great if you need to get a digital model of something in a pinch.
The stem of the wine glass refracts the laser so that it spreads into a line of light along the surface of the object on the rotating platform. As the object rotates, the camera records the changes in the line the laser makes. An edge-detection algorithm run on the video AVI finds the location of the laser line in each frame, and from that the software reconstructs a 3D model that looks like this:
It’s not the highest-quality scan, but it’s a quick, cheap alternative if nothing else is around and you’re forced to MacGyver a solution.
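For a sense of what that edge-detection step might look like, here is a hedged sketch of my own using OpenCV, not the site’s actual software: it finds the laser line as the brightest red pixel in each row of every frame, then converts the line’s displacement into depth by simple triangulation. The laser angle, reference column, brightness threshold, and degrees-per-frame rotation are all assumed values.

```python
import cv2
import numpy as np

LASER_ANGLE = np.radians(30.0)   # angle between laser sheet and camera axis (assumed)
REF_COLUMN = 320                 # column the line hits on a flat reference surface

cap = cv2.VideoCapture("scan.avi")   # placeholder filename
points = []
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    red = frame[:, :, 2].astype(np.float32)   # laser shows up in the red channel
    for row in range(red.shape[0]):
        col = int(np.argmax(red[row]))
        if red[row, col] < 200:               # skip rows with no visible line
            continue
        # Displacement from the reference column is proportional to depth
        # (in pixel-ish units; a real scanner would calibrate this).
        depth = (col - REF_COLUMN) / np.tan(LASER_ANGLE)
        angle = frame_idx * np.radians(1.0)   # platform rotates ~1 degree per frame
        points.append((depth * np.cos(angle), depth * np.sin(angle), row))
    frame_idx += 1

cap.release()
np.savetxt("cloud.xyz", np.array(points))     # point cloud, one x y z per line
```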
Sonar is pretty old technology, but there’s a really cool company doing high-end imaging versions of it that go far beyond detecting torpedoes and submarines in the water.
A sonar imager or scanner essentially just has an emitter, a receiver, and a processor. The emitter emits sound, and the receiver receives the reflections. The processor measures the attenuation and delay of the reflected impulse to determine how far away the object is. Do this enough times over a large area, from a few directions, and you get coordinates for the object’s surface, which you can then turn into surfaces/meshes.
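The core arithmetic is tiny. Here is a minimal sketch, assuming sound travels at roughly 1500 m/s in water and that the imager records the direction of each ping (the function and constants are my own illustration, not a real sonar API):

```python
import numpy as np

# One echo -> one (x, y, z) point: the delay gives the range
# (sound travels out and back), and the beam direction turns
# that range into a surface coordinate.

SPEED_OF_SOUND_WATER = 1500.0   # m/s, roughly; varies with temperature and salinity

def echo_to_point(delay_s, azimuth_rad, elevation_rad):
    rng = SPEED_OF_SOUND_WATER * delay_s / 2.0   # divide by 2: out and back
    x = rng * np.cos(elevation_rad) * np.cos(azimuth_rad)
    y = rng * np.cos(elevation_rad) * np.sin(azimuth_rad)
    z = rng * np.sin(elevation_rad)
    return x, y, z

# A 40 ms round trip straight ahead puts the surface ~30 m away.
print(echo_to_point(0.040, 0.0, 0.0))   # -> (30.0, 0.0, 0.0)
```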
Here is a 3D scan of a submerged, fallen bridge structure at the bottom of a river, taken with a sonar imager.
Some cool gifs of a lamprey being scanned from 5 metres away in really, really dark water.
They combined it with low-light underwater robotic photography to make a really intense map of the shipwreck.
The good: Using auditory sound to generate visual images is such an engaging concept to my inner printmaker. The synaesthesia of this process involves so much translation of data between different stages (that analyse the attenuation, delay, direction…), and it is pretty impressive that we can actually do it without very much technology involved. Whales, bats, and dolphins use it, so it has to be pretty useful, especially when light isn’t available. You can also scan really big things.
The bad: Sonar is nasty in terms of noise, movement, and random refractions/reflections that mess with the actual reflected-wave data the receiver is supposed to collect. It’s also expensive, and it doesn’t really work on small objects because sound has a pretty big wavelength (compared to light).
3D structured light scanning is a way of taking images of 3D objects and creating meshes or 3D point clouds from them. You can do it at home by projecting alternating patterns of striped light across a person or surface. You take 3 images of the object or person, each with a different pattern projected onto it. Then you run the images through a program that figures out the distances to points on the object and creates a 3D point cloud. However, this is very difficult to get right, and you need rather precise lighting to produce the correct effect. Whenever I have tried it, the result usually just ends up flat.
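For reference, the decoding step of the common three-phase variant is surprisingly short; the lighting, calibration, and phase unwrapping around it are what make it hard to get right. A minimal sketch, assuming three grayscale captures of stripe patterns offset by 120°, with placeholder file names:

```python
import numpy as np
import cv2

# Each pixel sees the stripe pattern at three phase offsets; the
# relative intensities recover the wrapped phase, which relates to
# how far the surface pushes the stripes sideways (i.e., depth).
I1 = cv2.imread("phase1.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
I2 = cv2.imread("phase2.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
I3 = cv2.imread("phase3.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Standard three-step phase-shift formula (patterns 120 degrees apart).
phase = np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

# The phase wraps around every stripe; unwrapping and calibration are
# still needed before this becomes real depth. This just visualizes it.
cv2.imwrite("wrapped_phase.png",
            ((phase + np.pi) / (2 * np.pi) * 255).astype(np.uint8))
```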