Team: Nadya Spice, Kurt Rose, Stefano Prezioso, Nicolette Nugent, and Jon Rowe



My name is Nadya Spice, and I am in charge of the camera system for our PTM. As of now, we have a few requirements that relate to the color of the images as well as software compatibility. A visit from Mark Mudge helped us clarify the importance of color accuracy in the images produced. Carl Salvaggio helped us pick a software language, LabVIEW, that will be easy for both us and our users to work with. Although most of our time at the moment is focused on developing a presentation for our Preliminary Design Review, we are always thinking about our system. Our next steps are finding the right camera to use and calibrating the colors that the camera will capture. Attached is my training plan, which begins with the camera device and then shifts focus to software, because I am the assistant to the software user interface team.


This group has come a long way. With two new additions, Kurt Rose and Stefano Prezioso, we are now working on characterizing the camera system. Nadya has done research and experiments involving the tone transfer function (TTF) of our camera, a Nikon D50. The TA for the class, Dave Kelbe, really helped the group understand the tone transfer function process. There is an equipment survey of the processes at this link. The group was successful in linearizing the TTF of this device and will be handing the process over to the software group. Stefano is working closely with the lens and its characteristics; he will be modeling distortion and other factors this week. Kurt is teaching himself modulation transfer function (MTF) measurement with the help of the 3rd-year imaging science students. This will help characterize the resolution of our system.
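For anyone curious what "linearizing the TTF" means in practice: the camera's digital counts are not proportional to exposure, so you fit the response curve and invert it. Our actual work was done with MATLAB-based tools; the following is just a minimal Python sketch, assuming a simple power-law (gamma) response and made-up calibration numbers.

```python
import numpy as np

# Hypothetical calibration data: relative exposure vs. measured digital count.
# A real TTF measurement would use a step tablet or bracketed exposures.
exposure = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.0])
counts = 255.0 * exposure ** (1 / 2.2)  # simulate a gamma-2.2 camera response

# Fit the exponent in log-log space:
# log(count) = log(255) + (1/gamma) * log(exposure)
slope, intercept = np.polyfit(np.log(exposure), np.log(counts), 1)
gamma = 1.0 / slope

# Linearize: raise the normalized counts to the recovered gamma.
linear = (counts / 255.0) ** gamma

print(round(gamma, 2))  # 2.2 for this simulated response
```

Once the counts are linear in exposure, downstream steps (like the PTM fitting) can treat pixel values as relative radiance.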


Spectral Efficiency of Silicon Sensor


The training plan above proved somewhat irrelevant when the winter quarter began. With nine new students in the class, there was a lot of catching up to do. Two new members were added to the group, and there were obstacles to overcome: for one, Stefano can only attend class on Thursdays because of a scheduling conflict, and we had a lot to figure out as far as our game plan. However, both Kurt and Stefano already know a lot about cameras. Kurt was previously in the Photo Tech program at RIT. Stefano started the term majoring in Biomedical Photography and has now declared a double major in Imaging Science as well. Their experience with cameras helped the class understand the basic principles of how cameras work.

Project list Phase 1:
Choose a camera
Choose exposure settings
Pick a lens
Distortion correction
Document tests

Project list Phase 2:
Choose a camera
Figure out how it works
Choose a lens
Designate liaisons between other groups


Today as a class we decided not to use UV lights. Our camera system produces very noisy images in the UV range, while the IR pictures have looked good because the silicon sensor is more sensitive to IR.

We are working on defining requirements for phase 2. So far we have...
  • IR lights, no UV
  • Strong support
    • heavy camera (7 lbs)
    • expensive
    • not ours
  • LabVIEW (from the software side)
  • Minimum focus distance: 90 mm

This list is still in a draft stage, as most of our plans are.

Jonathan and Dave completed a draft of the lens characterization today. The purpose of this task was to correct the barrel distortion introduced by the 14 mm lens we are using in Phase 1. The correction was done in MATLAB. The process consisted of taking a test photo of a grid with the lens (Sample) and generating a corrected image in Photoshop with the correct geometry. Using ImageJ, coordinates were taken at contrasting intersections in both images, and these coordinates were read into MATLAB as matrices. After applying a geometric manipulation algorithm to the system, a corrected output image was created. However, due to the complexity of the program, the process took longer than anticipated to complete.

To test the program further, we decreased the size of the corrected image (Map) and created a new warped image (Warped). After running the program again, we show the outputs with the two images, our program-corrected image (Nearest), and the errors in the warped and corrected images. The black space represents all of the places where the pixel intensity measurements were the same, and the white space represents the differences between the two images. As you can see, the error in our program-corrected image is far less than in the original warped image. Unfortunately, although the program does produce an accurate output, the time it takes to run (even for the very low resolution test images) was far too long. We therefore decided not to include the lens distortion correction in the final project because of its complexity and processing time.
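For a sense of the control-point approach described above, here is a minimal sketch in Python/NumPy rather than our actual MATLAB code, with made-up control points; a simple affine fit stands in for the full distortion model, and the resampling is nearest-neighbor, as in our program.

```python
import numpy as np

# Hypothetical control points: (x, y) intersections measured in the
# distorted photo (src) and the corresponding points in the corrected
# reference grid (dst).
src = np.array([[10, 10], [90, 12], [12, 88], [88, 90], [50, 50]], float)
dst = np.array([[ 8,  8], [92, 10], [10, 90], [90, 92], [50, 50]], float)

# Fit an affine transform dst -> src by least squares, so that for every
# output pixel we know where to sample in the distorted image.
A = np.column_stack([dst, np.ones(len(dst))])     # rows of [x  y  1]
coeffs, *_ = np.linalg.lstsq(A, src, rcond=None)  # 3x2 coefficient matrix

def correct(image, out_shape):
    """Nearest-neighbor resample: pull each output pixel from the source."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx, sy = (pts @ coeffs).T                     # source coordinates
    sx = np.clip(np.rint(sx).astype(int), 0, image.shape[1] - 1)
    sy = np.clip(np.rint(sy).astype(int), 0, image.shape[0] - 1)
    return image[sy, sx].reshape(h, w)
```

In interpreted environments, doing the per-pixel lookup as one vectorized array operation like this is typically far faster than looping over pixels, which is one common cause of the long run times we saw.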

Images 1 and 2: The sample image taken with the 14 mm lens and the "map" image generated in Photoshop.

Image 3: MATLAB program results.


Today Nikki and Kurt finished measuring the MTF of the multispectral camera. The first MTF plot shows the MTF of the D50, and the second shows the MTF of the multispectral camera.
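At its core, an edge-based MTF measurement comes down to differentiating an edge profile and taking a Fourier transform. This is not our measurement code, just a Python sketch on a simulated edge with made-up blur.

```python
import numpy as np

# Hypothetical 1-D edge profile (edge spread function) sampled across a
# knife-edge target; a real measurement would average many image rows.
x = np.arange(-32, 32)
esf = 0.5 * (1 + np.tanh(x / 2.0))  # simulated blurred edge

lsf = np.gradient(esf)              # line spread function
lsf /= lsf.sum()                    # normalize area to 1
mtf = np.abs(np.fft.rfft(lsf))      # modulation transfer function
mtf /= mtf[0]                       # MTF(0) = 1 by convention

print(round(float(mtf[0]), 3))      # 1.0
```

The resulting curve falls off with spatial frequency; where it drops below a chosen threshold (often 10%) is one common definition of the system's limiting resolution.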


On March 31, 2011, Sam from the illumination group and Nadya went to talk to a senior design team that is constructing a geodesic dome very similar to ours. The main difference between the projects is that their illumination is designed as a pointed light source, while ours must provide uniform illumination within the structure.

Nick Liotta showed us the parts that were available in the shop and explained how they went about planning. Honestly, it's quite simple: the website's dome calculator page does all of the calculations for you. You simply enter the diameter of the dome you wish to build and how many "layers" of struts you wish to make it out of, and it gives you the length of each strut needed to build the dome.
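As an illustration of the arithmetic such a calculator does, here is a Python sketch assuming a standard 2V icosahedral dome (the published chord factors for that frequency; not necessarily the exact layout the other team used). Each strut length is just a chord factor times the dome radius.

```python
# Standard published chord factors for a 2V icosahedral geodesic dome:
# strut length = chord factor * dome radius.
CHORD_FACTORS = {"A": 0.54653, "B": 0.61803}  # B = 1/phi
STRUT_COUNTS = {"A": 30, "B": 35}             # 65 struts in a 2V hemisphere

def strut_lengths(radius_m):
    """Return strut lengths (in meters) for a dome of the given radius."""
    return {name: f * radius_m for name, f in CHORD_FACTORS.items()}

# Example: the 1 m diameter (0.5 m radius) dome described above.
for name, length in strut_lengths(0.5).items():
    print(f"{name}: {length * 100:.1f} cm x {STRUT_COUNTS[name]}")
```

For a 0.5 m radius dome this gives A struts of about 27.3 cm and B struts of about 30.9 cm, which matches the scale of the dome we saw.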

The team chose to build their dome out of steel because it was easy to work with. Cutting all of the pieces and bolting them together took one day. One piece of advice Nick offered was to round the edges of the struts before bolting them together; they didn't do so until after the dome was in one piece, and doing it first would cut the build time down even more. Also, keep in mind that our dome shouldn't be any larger than 1 foot in radius, so it should take us about a third of the time to build a dome like this.

Another plus about this structure is that it's relatively cheap: our Phase 1 dome cost over $400, while this dome, three times the size, cost less than $100.

MATLAB is used to control the lights and all things software for this system.

Nick Liotta has offered a helping hand through any of the processes described.

Images from the excursion are below:

Image 1: The geodesic dome itself. It is 1 m in diameter and made of steel. The production time was 1 day and the cost for the parts was less than $100.

Image 2: Demonstration of the magnetic LED casings.

Image 3: This piece will lay on top of the piece that holds the LEDS in place in order to focus the angle of illumination.

Image 4: The two pieces shown in Images 2 and 3 next to each other.

Image 5: Two casing options. The casing shown above is on the left, but the casing on the right will most likely be the one used; the LEDs fit tightly in its center.

April 12, 2011

Our Phase 2 Multispectral camera (Photometrics Quantix) has been giving me plenty of issues so far, but I was finally able to get some workable multispectral images. Here is what I captured today.

This is a shot of a test subject under standard white fluorescent illumination.
W=White-Out Tape
I=Ballpoint Pen Ink
G=Pencil Graphite
M=Dry Erase Marker
Standard Twine in Lower Right Hand Corner

This is a shot under infrared illumination with an IR-pass filter over the lens (redundant, but safe). Noticeable changes from the white-light image are that the highlighter is no longer visible, and the pen ink and the twine are much lighter.

This is a shot under UV illumination with no filter over the lens, which means the camera is picking up both UV light and visible fluorescence excited by the illumination source. Noticeable changes from the white-light image are that the highlighter is darker, the white-out tape is very dark, and the twine is much darker.

Finally, I merged the files into a color image using the IR image as the Red channel, the White light image as the Green channel, and the UV image as the Blue channel. Looks pretty cool!
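The merge itself is just a channel stack. A quick NumPy sketch with dummy constant-valued frames standing in for the real captures:

```python
import numpy as np

# Hypothetical grayscale captures (same size) from the three illuminations.
h, w = 4, 4
ir = np.full((h, w), 200, np.uint8)     # infrared shot
white = np.full((h, w), 120, np.uint8)  # white-light shot
uv = np.full((h, w), 40, np.uint8)      # UV shot

# Stack IR -> Red, white light -> Green, UV -> Blue into one
# false-color image.
false_color = np.dstack([ir, white, uv])

print(false_color.shape)  # (4, 4, 3)
```

Any differences between the three bands then show up as color casts in the merged image, which is exactly why the inks and tape pop out the way they do.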