My new (reconditioned) Google Pixel phone is advertised as having a 50mp camera. However, the photos it takes are all displayed as 12.5mp.
My brief googling suggests that the camera is as advertised but the phone downsamples (if that's a word) to a lower number of pixels.
Can anyone explain to me what this means in reality in terms of the photo quality?
Thanks.
50MP shot through a lens the size of half a pea, as on a typical phone; I would guess that most of those pixels are wasted and that 12 is about the sweet spot of useful information the camera can actually glean.
In short, it probably does you good and saves storage space at the same time. More pixels behind a lens that can't resolve them is just noise.
Which pixel phone is it? Looking at this article
https://www.dpreview.com/reviews/google-pixel-8-pixel-8-pro-review-two-top-...
on the Pixel 8 and Pixel 8 Pro, it is only the Pro that allows 50MP images to be saved. I wonder if it is a case of identical hardware but software differences restricting the capability of the lower-priced standard phone, with the full hardware features only available on the premium-priced model.
The trend for higher pixel counts in things like phones is just ridiculous marketing. For a given sensor size, more MP just means smaller pixels and more noise. Unless blowing up to A0 (and who does that with a phone pic?), 12 MP is more than enough.
Are they referring to panoramas or similar, stitching four 12-megapixel images together?
> The trend for higher pixel counts in things like phones is just ridiculous marketing.
This was what I was thinking.
Advertising 50mp and only giving you 12 is a bit misleading.
However, capturing at 50MP and downsampling to 12 will increase image quality, even if a phone lens struggles to resolve 50MP.
For example, one of the four pixels in a group might be mostly random noise, and in Pixel cameras the algorithm should then be able to eliminate it and use data from the other three.
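A minimal sketch of that kind of outlier rejection within a 2×2 quad, with made-up numbers; this illustrates the idea only, not Google's actual algorithm:

```python
import numpy as np

# Hypothetical 2x2 quad of sensor readings: three plausible values and
# one wild "hot pixel". Drop the sample furthest from the median and
# average the remaining three.
quad = np.array([101.0, 98.0, 100.0, 240.0])

outlier = np.argmax(np.abs(quad - np.median(quad)))
kept = np.delete(quad, outlier)
print(kept.mean())  # average of the three plausible sites
```

The median makes the scheme robust: a single bad sample can't drag it far, so the outlier is easy to spot.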
Computational photography like this is the future. More data is good, and the extra data will also help with digital zoom.
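To illustrate why averaging groups of four noisy pixels helps, here's a rough numpy sketch of 2×2 binning on a synthetic flat scene (all numbers are assumptions for illustration, not real sensor data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat grey scene captured on a fine "50 MP"-style grid,
# with independent random noise added to every pixel.
true_level = 100.0
noise_sigma = 10.0
fine = true_level + rng.normal(0.0, noise_sigma, size=(1000, 1000))

# 2x2 binning: average each block of four neighbouring pixels,
# quartering the pixel count (roughly the 50 MP -> 12 MP step).
binned = fine.reshape(500, 2, 500, 2).mean(axis=(1, 3))

# Averaging four independent noisy samples halves the noise std dev.
print(round(fine.std(), 1))    # ~10.0
print(round(binned.std(), 1))  # ~5.0
```

The noise drops by a factor of sqrt(4) = 2, which is the statistical payoff of trading resolution for cleaner pixels.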
Pixel cameras also take multiple shots, with a different exposure for each capture, and use these to ensure shadow and highlight detail is retained (like we used to have to do manually).
Welcome to the future, they are great cameras, and if you take a photo in good light, they print well to medium sizes.
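A toy sketch of the exposure-bracketing idea, assuming a sensor that clips at 255 and two hypothetical exposure settings (real HDR merging is far more sophisticated):

```python
import numpy as np

# Hypothetical scene in linear light units; the sensor clips at 255.
scene = np.array([2.0, 40.0, 900.0, 3000.0])

def capture(scene, exposure, clip=255.0):
    # Scale by exposure, then clip to simulate sensor saturation.
    return np.minimum(scene * exposure, clip)

long_exp = capture(scene, 1.0)    # shadows fine, highlights clipped
short_exp = capture(scene, 0.05)  # highlights fine, shadows dark

# Where the long exposure clipped, substitute the rescaled short one,
# recovering both shadow and highlight detail.
merged = np.where(long_exp < 255.0, long_exp, short_exp / 0.05)
print(merged)
```

In this toy case the merge recovers the original scene values exactly; on a real sensor the short exposure's shadows would be noisier, which is why the weighting is more careful in practice.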
Doesn't the large sensor allow for digital zooming whilst still retaining definition?
> Can anyone explain to me what this means in reality in terms of the photo quality?
It doesn't mean anything. Except they don't respect you as a customer.
Megapixels stopped being a useful metric to interpret camera quality on phones a long time ago.
The more pixels you squeeze onto the sensor, the more they interfere with each other.
The fact that they compress down to 12.5MP probably tells you that there's so much noise between pixels that there's no quality loss in halving the number of pixels in the x and y directions. Meaning every four pixels only gives you enough definition to write one pixel in the final output.
So you don't really have a 50mp camera. You have a poor quality 50mp sensor, but not a camera.
> ..., using 50mp and down sampling to 12 will increase image quality. Even if a phone lens struggles to resolve 50mp
Downsampling 50->12 will be better than 50, but not better than if it had been a 12Mpx sensor in the first place. Unless the sensor implements analogue 'binning' on-chip (some quad-Bayer CMOS sensors do this in hardware), each of the 50Mpx sites will contribute its own read noise to the final image.
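A quick numerical sketch of the read-noise point: summing four separately read-out small pixels accumulates four doses of read noise, while one large pixel pays it only once (all figures are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: the same light falls on one large pixel or on four
# small pixels that are summed digitally afterwards. Every *readout*
# adds independent read noise with the same std dev.
read_noise = 3.0
signal = 400.0      # electrons collected over the whole pixel area
n_trials = 200_000

# One 12 MP-style pixel: a single readout, a single dose of read noise.
big = signal + rng.normal(0.0, read_noise, n_trials)

# Four 50 MP-style pixels summed in software: four readouts, so the
# read noise adds in quadrature (std grows by sqrt(4) = 2).
small = (signal / 4.0 + rng.normal(0.0, read_noise, (4, n_trials))).sum(axis=0)

print(round(big.std(), 1))    # ~3.0
print(round(small.std(), 1))  # ~6.0
```

Analogue binning on-chip avoids this penalty by combining the charge before readout, which is why it matters whether binning happens in hardware or in software.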
> For example, one of the pixels might be random noise, and in Pixel cameras, the algorithm should then be able to eliminate it and use data from the other 3 pixels.
Not really. By definition, you can't tell which pixel's value is random noise just by comparing pixels. You could try this with fixed-pattern noise, but that tends not to be a problem on modern sensors.
> Computational photography like this is the future. More data is good, and the extra data will also help with digital zoom.
In imaging the data is the photons. You can't make more photons or data computationally, only by having a bigger lens or (pedantically) a more efficient sensor (so you convert more photons to electrons).
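The 'data is photons' point can be put as simple shot-noise arithmetic (illustrative photon counts only):

```python
import math

# Photon arrival follows Poisson statistics: a mean of N photons
# carries noise with std sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
snrs = {n: n / math.sqrt(n) for n in (100, 400, 1600)}
print(snrs)  # quadrupling the light only doubles the SNR
```

Collecting four times the photons (say, via a lens with twice the aperture diameter) only doubles the signal-to-noise ratio, and no amount of software can add photons after the fact.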
Computational imaging is mostly like 'fake news' - you're just massaging the raw data you have into something that the consumer will 'like' more.
> Pixel cameras also take multiple shots, with different exposures each capture, and uses these to ensure shadow and highlight detail is retained (like we used to have to do manually).
Yes you can do that, but like with the 50->12 downsampling, its main use is to compensate for the poor dynamic range of the tiny pixel sites in a 50Mpx sensor.
> Welcome to the future,
Indeed.
Thanks for the input folks. I've been out and had a play with the phone camera, and the short story is that it seems pretty good.
Most high end smartphones use very similar if not identical physical camera sensors these days (I believe a Sony unit is the most common at the moment).
The major differentiator in image quality is the processing software and all the tricks it uses to convert the raw sensor data into an image. All the flagships are pretty good these days, but Google Pixels are particularly good, particularly the 'a' range, which beats any other midrange Android phone thanks to superior image processing.