RGBWK Bayer sensors
I already posted some ideas on a few forums, e.g. here, but I've since worked out some additional details that I'll explain in more depth below.
The general idea is that, if we're going to end up subsampling our color information anyway, we might as well do it directly in the sensor, and get better DR out of it.
Given how our eyes work, 4:2:2 chroma subsampling seems to be enough for most applications (it already offers twice the color resolution of the usual 4:2:0 that most cameras deliver nowadays).
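To make the "twice as much" claim concrete, here is a small sketch of the standard J:a:b subsampling notation, counting chroma samples over a 4-pixel-wide, 2-row reference block (the function name and counting convention are mine, not from any library):

```python
def chroma_fraction(j, a, b):
    """Fraction of full chroma resolution kept by J:a:b subsampling.

    j: width of the reference block (always 4 in common notation)
    a: chroma samples in the first row of j pixels
    b: additional chroma samples in the second row
    """
    return (a + b) / (2 * j)

for scheme in [(4, 4, 4), (4, 2, 2), (4, 2, 0)]:
    print(scheme, chroma_fraction(*scheme))
# 4:4:4 keeps 100% of the chroma, 4:2:2 keeps 50%, 4:2:0 keeps 25%
```

So 4:2:2 stores exactly twice the chroma information of 4:2:0, at the same luma resolution.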
Also, the trend now is to have a sensor with 4 times as many photosites as pixels in the final image (e.g. the Canon C300 uses a 4K sensor to deliver a 2K image), so you have one full RGBG sample for each final pixel.
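A toy model of that 4-photosites-per-pixel idea: take each 2x2 quad of a Bayer mosaic as one full-color output pixel, halving the linear resolution. The RGGB layout and the simple green averaging are assumptions for illustration; real debayering is considerably more sophisticated:

```python
import numpy as np

def quad_to_rgb(mosaic):
    """(2H, 2W) raw mosaic in assumed RGGB layout -> (H, W, 3) RGB.

    Each output pixel gets one R, one B, and the mean of the two
    G photosites in its 2x2 quad -- one full sample per pixel.
    """
    r  = mosaic[0::2, 0::2]
    g1 = mosaic[0::2, 1::2]
    g2 = mosaic[1::2, 0::2]
    b  = mosaic[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2.0, b])

raw = np.arange(16.0).reshape(4, 4)   # stand-in for "4K" sensor data
rgb = quad_to_rgb(raw)
print(rgb.shape)                      # (2, 2, 3): a "2K" full-RGB image
```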
Taking that as a given (photosites remain quite large, which leads to very sharp images and, seemingly, nice grain), and knowing we're going to throw away at least half of the color information when we encode the footage, we could replace some of those RGB filters with two new ones: white (already used in some Kodak patents) and black (a deep ND). I would arrange it as follows:
The white photosites increase the sensor's sensitivity (cleaner shadows), whereas the black photosites protect the highlights (only at half resolution, though). The color information is exactly what you need for a codec that uses 4:2:2 color subsampling: one full color signal for every two final pixels (8 photosites).
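The "one color signal for every two final pixels" budget is exactly what a 4:2:2 encoder consumes. A minimal sketch of that encoding step, assuming a BT.601-style RGB-to-YCbCr matrix: luma is kept at full resolution, chroma is averaged over each horizontal pair of pixels:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Approximate BT.601 RGB -> YCbCr conversion (offsets omitted)."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.169, -0.331,  0.500],
                  [ 0.500, -0.419, -0.081]])
    return rgb @ m.T

def subsample_422(ycbcr):
    """Full-resolution Y; Cb/Cr averaged over horizontal pixel pairs."""
    y = ycbcr[..., 0]
    h = ycbcr.shape[0]
    cbcr = ycbcr[..., 1:].reshape(h, -1, 2, 2).mean(axis=2)
    return y, cbcr

img = np.random.rand(2, 4, 3)      # tiny 2x4 RGB frame
y, cbcr = subsample_422(rgb_to_ycbcr(img))
print(y.shape, cbcr.shape)         # (2, 4) luma, (2, 2, 2) chroma
```

Every pair of output pixels thus needs two luma values but only one chroma pair, which is precisely what eight photosites of the proposed mosaic provide.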
Recorded images should look just as good in terms of tonal range (there could be more issues with chroma moire, but good debayering algorithms should be able to avoid that), while the sensor would capture much more dynamic range, seeing deeper into both the shadows and the highlights.
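A back-of-the-envelope estimate of that gain, using assumed numbers rather than measurements: suppose each photosite type alone spans about 11 stops, the white (clear-filter) photosites are about 1.5 stops more sensitive, and the black (strong-ND) photosites clip about 3 stops later. The combined sensor then spans the union of those ranges:

```python
# All figures below are illustrative assumptions, not measured data.
base_stops = 11.0   # DR of a single photosite type
white_gain = 1.5    # extra shadow reach from clear-filter photosites
nd_loss    = 3.0    # extra highlight headroom from ND photosites

combined = base_stops + white_gain + nd_loss
print(combined)     # 15.5 stops vs 11.0 for a plain Bayer mosaic
```

The shadow and highlight extensions come at reduced spatial resolution, since only a fraction of the photosites contribute at each extreme.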