Camera companies are starting to do some really cool things with sensor-shift image stabilization technology.

One of the biggest banes of low-light photography is camera shake. Basically, in order to gather enough light for a good photo, the camera has to keep the shutter open longer - and the longer it's open, the greater the chance that camera movement blurs the photo. Sensor-shift stabilization tries to correct for this by mounting the image sensor inside a magnetic frame, where it floats. When the camera moves, the sensor is shifted to compensate. There are limits to how well this can work - the correction fails once the camera moves farther than the sensor can shift - but in general it makes it much easier to take pictures with longer exposures.
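If you want a rough feel for what the stabilization loop is doing, here's a toy sketch in Python. All the names and numbers (the gyro reading, the 700-micron travel limit, and so on) are my own illustrative assumptions, not any manufacturer's actual firmware:

    import math

    MAX_TRAVEL_UM = 700.0   # assumed limit on how far the sensor can shift

    def correction_step(offset_um, gyro_rate_dps, focal_length_mm, dt_s):
        """Nudge the sensor to follow the image as the camera rotates."""
        # A rotation of gyro_rate_dps deg/s over dt_s seconds moves the
        # projected image by roughly focal_length * tan(angle).
        angle_rad = math.radians(gyro_rate_dps * dt_s)
        drift_um = focal_length_mm * 1000.0 * math.tan(angle_rad)
        offset_um += drift_um
        # The correction only works while it fits inside the frame's
        # mechanical travel; past that limit, blur comes back.
        return max(-MAX_TRAVEL_UM, min(MAX_TRAVEL_UM, offset_um))

The clamp at the end is the limit mentioned above: once the shake demands more correction than the sensor has room to travel, the system can no longer keep up.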

This has been around for at least a decade. But recently, some clever engineers have figured out how to use it to do more.

Another important bit of camera tech is moire prevention. In a nutshell: the pixels in most image sensors don't understand color, only greyscale. To get color photos, the camera puts a patterned color filter (usually the Bayer pattern) over the sensor, so that each pixel 'sees' only light of a particular color; then the image processor uses that patterned color data to reconstruct what the original picture looked like, just as our eyes do when looking at a TV screen. However, when photographing something with regular patterns - like the weave of a fabric swatch, or closely-spaced bars in a grill - you can get interference if the pattern is fine enough, which appears as moire. (A false-color rainbow shimmer, for example.) To compensate, camera makers have traditionally put another filter over the sensor; this one adds just enough 'blur' to hide the patterns that cause moire. The catch is that the filter reduces sharpness - the entire point is to add a little blur. Several high-end cameras over the last couple of years have removed the filter, because image processors have gotten better at detecting and correcting moire when they stitch the image together - but moire can still creep in.
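You can watch the false pattern appear in a toy one-dimensional model. This is just a sketch of the sampling math - the stripe frequency, pixel pitch, and blur width are arbitrary numbers I picked, not how any real demosaicer works:

    import numpy as np

    x = np.linspace(0, 1, 2000)             # finely-sampled stand-in for the scene
    scene = np.sin(2 * np.pi * 90 * x)      # 90 stripes: finer than the pixels

    pixels = scene[::25]                    # sample with only 80 'pixels'
    # `pixels` now contains a slow ~10-cycle beat that isn't in the scene
    # at all: that false pattern is the moire.

    blur = np.convolve(scene, np.ones(25) / 25, mode='same')
    pixels_aa = blur[::25]
    # The anti-alias filter's job: pre-blurring over about one pixel pitch
    # wipes out the too-fine stripes, so almost none of the false pattern
    # survives - at the cost of some sharpness.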

Pentax did something different when they released the K-3 at the end of 2013. Sensor-shift tech moves the image sensor very slightly to compensate for camera shake; the Pentax engineers realized 'hey, if we're moving the sensor to prevent blur... we can also shift the sensor to create blur.' So instead of putting on a filter that creates blur all the time, they added a feature that can be switched on to create a little blur only when it's wanted. When shooting subjects likely to cause moire, the sensor-shift blur eliminates it... and for everything else, you turn the shift off and get the sensor's full sharpness. Very nice. (See this Imaging Resource article for a lot more gory mathematical detail on what causes moire and how the Pentax system compensates.)
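In the toy model above, the Pentax trick looks something like this: averaging the exposure over a few sub-pixel sensor positions instead of using a fixed blur filter. The offsets and shot count here are my guesses for illustration, not Pentax's actual motion pattern:

    import numpy as np

    x = np.linspace(0, 1, 2000)
    scene = np.sin(2 * np.pi * 90 * x)      # same too-fine stripe pattern

    shifts = [-8, 0, 8]                     # roughly third-of-a-pixel nudges
    pixels_sim = np.mean([scene[12 + s::25] for s in shifts], axis=0)
    # With the nudges on, the false beat largely cancels, much like the
    # filtered version; with shifts = [0] the blur is 'off' and you get
    # the full (aliased) sharpness back.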

Which brings us to today's announcement from Olympus, another pioneer in sensor-shift IS. Normally, the resolution of an image sensor is fixed - there are a set number of pixels on the sensor, each one capturing one 'dot' of picture information, and the total number of pixels makes up the camera's 'megapixel' rating. Pixel resolution is always kind of a balancing act: smaller pixels let you capture more detail... but the smaller the pixels are, the less light they're able to capture, and the worse the sensor performs in low light. And if a detail is smaller than the pixel size, the pixel doesn't capture it fully; it captures an 'average' of all the light falling on it. So a thin black line running through the center of a pixel comes out as a blurry grey smudge.
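That averaging is easy to see in numbers. A quick toy sketch (the sizes are arbitrary):

    import numpy as np

    scene = np.ones(100)      # 100 fine scene samples, all white
    scene[48:52] = 0.0        # a thin black line, 4 samples wide

    pixels = scene.reshape(10, 10).mean(axis=1)   # 10 big pixels
    # The line straddles pixels 4 and 5; each reads 0.8 - a light grey
    # smudge two pixels wide, not a crisp black line.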

Olympus had the bright idea to use sensor-shift to try to capture a better description of the image. When you take a picture in this new mode, the camera first takes a baseline capture. Then it shifts the sensor a half-pixel in one direction and takes a new capture to see what changes; in our thin-black-line example, the pixel will get brighter because the line will move off toward one edge. Next it shifts the sensor a half-pixel in the other direction and captures again. By repeating this process, and using the image processor to interpolate and stitch everything together, it can create a 40-megapixel image from a 16-megapixel sensor. According to the preliminary testing Imaging Resource did (which also goes into more detail on how it works), the results are pretty impressive. The tech only works when the camera is completely still - which makes sense, since the image stabilization system is busy doing something else - but it's still very cool.
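In the same one-dimensional toy, the core of the idea looks like this. Real cameras shift in two dimensions, take more frames, and interpolate far more cleverly; this just shows why interleaving shifted captures buys you resolution (all numbers are mine):

    import numpy as np

    x = np.linspace(0, 1, 2000)
    scene = np.sin(2 * np.pi * 60 * x)      # too fine for an 80-pixel grid

    base = scene[0::25]                     # first capture: 80 'pixels', aliased
    shifted = scene[12::25]                 # second capture, ~half-pixel shift

    hires = np.empty(160)
    hires[0::2] = base                      # interleave the two grids into
    hires[1::2] = shifted                   # one grid at double the density
    # At 80 samples, the 60-cycle detail aliases into a false 20-cycle
    # pattern; the interleaved 160-sample grid resolves it correctly.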
