There is no doubt that today's smartphones can keep up with, and even surpass, most point-and-shoot cameras when it comes to the detail of the shots they produce. This is why flagship handsets come with all those sometimes-confusing camera specs, but what exactly affects the image output of a phone's camera? We list down five factors to help you better understand smartphone photography.

Sensor Type
They say that the essence of a camera is its sensor, since it dictates the image size, resolution, focal range, compatible lenses, and the overall size of the camera body. It also determines how well the camera reproduces a photo and how far you can enlarge the image before it breaks down.

To begin, one of the most common types of sensors is the CCD, or charge-coupled device. It dates far back and is actually one of the oldest technologies in digital cameras, and it can still be found in a lot of affordable compact shooters today.
Another widely used sensor today is the CMOS, or complementary metal-oxide semiconductor. It didn't quite match the picture quality of CCDs at first, but the CMOS sensors we see today have been refined to match and even exceed CCD performance. They are now more efficient, demand less power, and handle high-speed shooting better.
You might also have heard the term BSI sensor being tossed around in smartphone specs. A back-side illuminated sensor differs from a conventional front-illuminated design by rearranging the layers so that more light reaches the sensor and is collected by the pixels. This makes the camera more effective in low-light situations, as it produces less digital noise.
Sensor Size
Now let's jump to the size of these sensors. The general idea is that the larger the sensor's surface area, the more light it can collect, which in turn helps produce higher-quality images. Basically, bigger is better, and handsets with larger sensors are more effective when shooting in darkness.

A smartphone sensor's size is expressed as a fractional number in inches (like 1/1.5-inch or 1/1.2-inch). Note that a smaller denominator means a larger sensor, so a 1/1.2-inch sensor is bigger than a 1/2.3-inch one. Some of the big image sensors found in smartphones today are around the 1/2.3-inch mark.
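To see what those fractions mean in practice, here is a back-of-the-envelope comparison. This is only a sketch: the nominal fraction actually overstates the true sensor diagonal, but both designations are inflated in roughly the same way, so relative comparisons still hold approximately.

```python
# Rough comparison of two sensor "type" designations, treating the
# fractional number as a nominal diagonal. Area scales with the square
# of the diagonal, which is why small differences in the fraction
# translate to large differences in light-gathering area.
def area_ratio(nominal_a: float, nominal_b: float) -> float:
    """Approximate area ratio of sensor A to sensor B,
    given their nominal diagonals in inches (e.g. 1/1.5 and 1/2.3)."""
    return (nominal_a / nominal_b) ** 2

# A nominally 1/1.5-inch sensor vs a 1/2.3-inch one:
ratio = area_ratio(1 / 1.5, 1 / 2.3)
print(f"~{ratio:.1f}x the light-gathering area")  # ~2.4x
```

So moving from a 1/2.3-inch to a 1/1.5-inch class sensor roughly doubles the area available to collect light, which is why sensor size matters so much in the dark.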
Pixel Size
Having a large sensor but fairly small pixels will not yield the best smartphone images, either. Today, phone manufacturers tend to squeeze in a lot of tiny pixels just to bump up the megapixel count and ride on the idea that more pixels mean a better image.
While there's some truth to that, it's only one ingredient in the entire recipe. In fact, smaller pixels measure light less accurately and tend to produce unclear photos with digital noise.
The unit used to measure these pixels (or photodetectors) is the micrometer, or simply micron, abbreviated µm. In smartphones, pixel sizes typically sit between one and two microns. As an example, HTC's UltraPixel technology uses a 2-micron pixel size, which produces better photos in dimly lit scenarios than smaller pixels do.
Image Stabilization
Image stabilization also plays a part in producing sharp images. These mechanisms work by countering the movement made by handling the smartphone, which could otherwise produce softly focused shots, especially when you need a low shutter speed.

[Image: Hugo Barra explaining the OIS feature of the Mi 5]
For image stabilization in handsets, we generally have two types: EIS and OIS. EIS, or electronic image stabilization, is a digital image-enhancement method done through electronic processing. It basically compensates for pitch and yaw movements.
OIS, on the other hand, stands for optical image stabilization and works by physically moving the lens elements to compensate for movement and reduce blur. Of the two, this is the more effective form of image stabilization, since a dedicated physical mechanism counteracts motion before the image is even captured.
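The core idea behind EIS can be illustrated with a toy sketch: the camera captures a slightly larger frame than it outputs, then slides the crop window opposite to the measured shake. Real EIS uses gyroscope data and far more sophisticated per-frame corrections; everything below, names included, is illustrative only.

```python
# Toy EIS: shift a crop window against the estimated camera shake.
def stabilized_crop(frame, crop_w, crop_h, shake_dx, shake_dy):
    """Return a crop_w x crop_h window of `frame` (a 2D list of pixels),
    shifted opposite to the shake (in pixels) and clamped to the frame."""
    h, w = len(frame), len(frame[0])
    # Start from a centered crop, then move it against the shake.
    x = max(0, min(w - crop_w, (w - crop_w) // 2 - shake_dx))
    y = max(0, min(h - crop_h, (h - crop_h) // 2 - shake_dy))
    return [row[x:x + crop_w] for row in frame[y:y + crop_h]]

# 8x8 frame where each "pixel" records its own coordinates:
frame = [[(r, c) for c in range(8)] for r in range(8)]
# Camera shook 1 px right and 1 px down, so the crop slides up-left:
crop = stabilized_crop(frame, 4, 4, 1, 1)
print(crop[0][0])  # (1, 1) instead of the centered (2, 2)
```

The clamping also shows EIS's main limitation: once the shake exceeds the spare border around the output crop, there is nothing left to shift, which is one reason OIS's physical correction is considered more effective.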
Post-Processing Techniques
Lastly, some handsets apply their own post-processing techniques as a final touch to their images. These are usually embedded in the phone's software; some can be turned off, while others are a permanent 'feature'.
This is why you'll notice that some devices produce highly saturated photos while others boost contrast, for example.
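As a minimal sketch of the kind of tuning involved, here is a simple contrast curve applied to a single channel value in the 0-255 range. Actual phone pipelines run far more elaborate algorithms on dedicated image-signal hardware; this only shows the basic idea of pushing values away from mid-gray.

```python
# A simple contrast adjustment on one 8-bit channel value:
# scale the distance from mid-gray (128) and clamp to the valid range.
def adjust_contrast(value: int, factor: float) -> int:
    """factor > 1 boosts contrast, factor < 1 flattens it."""
    return max(0, min(255, round((value - 128) * factor + 128)))

# A contrast boost pushes bright values brighter and dark values darker:
print(adjust_contrast(200, 1.3))  # 222
print(adjust_contrast(60, 1.3))   # 40
```

A phone that ships with a curve like this permanently enabled is exactly why its photos look punchier out of the box than a rival's.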

