Tuesday, July 11, 2017

Putting the Sky in Contrast


This image shows a test rig that Eric is using to examine the feasibility of LCD contrast enhancement of images. Such a system might be useful for future spacecraft, which often must acquire scientific photos under challenging lighting conditions.

by Eric Shear

Several months ago, I agreed to take on a hands-on project for my master’s thesis. I had been doing planetary mission design, and it seemed like a nice change to do something experimental that might end up in future spacecraft cameras.

I picked up where a former summer undergraduate had left off. She, with John's help, had built and tested an imager apparatus with a liquid crystal display (LCD) in front of a digital camera. The whole assembly had additional optics to sharpen the image and was bolted onto a black aluminum breadboard (see the image above).

The goal was to make a sky imager that could selectively block out the sun in order to increase the contrast and dynamic range of the image, allowing otherwise hard-to-see details to be picked out easily. An obvious application would be on Mars, where there are high-altitude cirrus clouds that would be hard to see in bright daylight. The same thing could be accomplished with a physical shade, but a shade would be heavier and less flexible.

By the time I came in, she had already succeeded in blocking the bright light from a lamp in the lab. But the blocking image had to be drawn in the LCD’s program and manually uploaded to the LCD - a task requiring too much human intervention to be useful on a robotic spacecraft.

My contribution to this project was mostly code. The camera could send its output to Matlab, so I wrote a script to convert the grayscale image to a black spot in a white field, with the black spot matching the size and location of the original bright light so that the light disappears when the block-image is shown on the LCD. Because the block-image is derived directly from the original image, it can track any number of bright lights without a separate tracking algorithm. In theory, at least.
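The actual script is longer than this, but the core idea fits in a few lines of Matlab. Here is a minimal sketch (my illustration rather than the thesis code itself; the threshold value and file name are placeholders):

```matlab
% Minimal sketch of the block-image idea. Assumes the Image Processing
% Toolbox; the threshold and file name are illustrative placeholders.
frame = imread('camera_frame.png');   % frame captured by the camera
if size(frame, 3) == 3
    frame = rgb2gray(frame);          % make sure we are in grayscale
end

thresh     = 250;                     % near-saturated pixels = the bright source
brightMask = frame >= thresh;         % true wherever the light is

% Invert the mask: white field everywhere, black spot over the light.
blockImage = uint8(~brightMask) * 255;

imshow(blockImage);                   % this is what gets shown on the LCD
```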

In practice, though, another challenge was to adjust the block-image so that it aligned with the original image. That was easier said than done, because the LCD is 320 by 240 pixels and both images were much larger than that. The size difference caused the block-image to be misaligned when shrunk down, so I had to resize, crop, and translate the final block-image to get it to line up with the original image. What came out wasn’t perfect, but John only wanted a proof-of-concept result:



Image results: original image (top) and blocked light (bottom)
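To give a flavour of that crop-shift-resize step, here is a rough Matlab sketch continuing from the block image above. The crop window and pixel offsets are made-up illustrative values; in practice they had to be tuned to our particular optics:

```matlab
% Rough sketch of the alignment step (the numbers are illustrative only).
% The LCD is 320x240 pixels, so the full-resolution block image must be
% cropped, shifted, and shrunk before it lines up with the scene.
cropRect = [120 80 1280 960];                 % hypothetical crop window [x y w h]
aligned  = imcrop(blockImage, cropRect);      % trim to the LCD's field of view
aligned  = imtranslate(aligned, [5 -3], ...   % hypothetical x/y offset in pixels
                       'FillValues', 255);    % pad with white, not black
lcdImage = imresize(aligned, [240 320]);      % match the LCD's 320-by-240 display
```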
  
I think we could do much better if we could somehow access the camera’s array of light detectors and manipulate it directly. That way, the dark spots would map perfectly onto the bright lights without any modification. Unfortunately, most digital camera manufacturers don’t make that possible, so any future work in that direction will have to rely on a custom-built detector.

Over the next few weeks, I will test the apparatus outside on the roof of the Petrie building, in what I hope will be mixed sunny and cloudy conditions, so that the impact of the block can be seen most clearly on the clouds near the sun. These tests will be combined with increased exposure times to determine the magnitude of the contrast change. The results will appear in a future paper in the journal Advances in Space Research. Depending on the results, a case could probably be made to include this capability on a future rover mission.
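One simple way to put a number on that contrast change (my illustration, not necessarily the analysis the paper will use) is Michelson contrast over a patch of sky near the sun, computed with and without the block in place:

```matlab
% Hypothetical sky frames taken with and without the LCD block in place.
frameOff = rgb2gray(imread('sky_no_block.png'));
frameOn  = rgb2gray(imread('sky_with_block.png'));

patch  = [200 150 100 100];     % made-up patch near the sun [x y w h]
roiOff = double(imcrop(frameOff, patch));
roiOn  = double(imcrop(frameOn,  patch));

% Michelson contrast: (Imax - Imin) / (Imax + Imin), ranging from 0 to 1.
michelson = @(I) (max(I(:)) - min(I(:))) / (max(I(:)) + min(I(:)));
fprintf('Contrast without block: %.3f\n', michelson(roiOff));
fprintf('Contrast with block:    %.3f\n', michelson(roiOn));
```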

What I’ve described above is the CliffsNotes version of a story full of little setbacks as I learned to work with software and commands I wasn’t familiar with. Sometimes the devices had the gall to stop working for a while, only to resume as if nothing had happened. That’s the reality of early-stage development, when you have to work with a jury-rigged apparatus built from parts that don’t play well with each other.

Stay tuned for more results!
