Yes, it's definitely a good idea to use "raw" images for analysis. And remember that the screen itself (and perhaps the computer that drives it) will apply a transfer function ("gamma") to each component.
To separate the effects of the colour response and the transfer function, I think I would do this:
- Test with a single primary colour illuminated at a time, always at the same value. For example (0,0,0), (160,0,0), (0,160,0), (0,0,160). Subtract the black level from the last three measurements, invert the matrix, then normalize each row to sum to 1.0. This should yield something like a colour correction matrix (for that particular display/camera combination).
- Test with all grey levels from (0,0,0) to (255,255,255). This should reveal the display's transfer function (I assume it is the same for all three components!?).
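The first step above can be sketched in numpy. All camera readings below are hypothetical placeholders, not real measurements; only the procedure (black subtraction, inversion, row normalization) is from the description above.

```python
import numpy as np

black = np.array([4000.0, 4000.0, 4000.0])  # camera RGB for the (0,0,0) patch

# Camera RGB measured for each primary patch, e.g. (160,0,0), (0,160,0), (0,0,160)
measured = np.array([
    [12000.0,  3900.0,   700.0],  # red patch
    [ 1900.0, 24000.0,  4500.0],  # green patch
    [  600.0,  2800.0, 14500.0],  # blue patch
])

M = measured - black                        # subtract the black level
Minv = np.linalg.inv(M)                     # invert the 3x3 mixing matrix
C = Minv / Minv.sum(axis=1, keepdims=True)  # normalize each row to sum to 1.0

# Row sums of 1.0 mean C maps equal camera responses (a neutral grey)
# to equal corrected responses, i.e. neutrals stay neutral.
```

The resulting C would then be applied to black-level-subtracted camera values.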
Hi again,
I measured the brightnesses of the calibration images and got the following results:
Code:
(0,0,0) : R: 4119, G: 4129, B: 4118  # Black level

# Calibration images with subtracted black level ([R,G,B])
(110,0,0) : [[11764.6732  3871.794    737.8676]
(0,110,0) :  [ 1856.7948 24347.7208  4492.4592]
(0,0,110) :  [  608.5484  2846.1144 14478.0804]]

The inverted matrix:

Code:
[[ 8.71934941e-05 -1.38484322e-05 -1.46686403e-07]
 [-6.19809627e-06  4.36018169e-05 -1.32134926e-05]
 [-2.44652394e-06 -7.98920260e-06  7.16736162e-05]]

After normalizing each row to sum to 1.0:

Code:
[[ 1.19119439 -0.18919043 -0.00200396]
 [-0.25622314  1.80245581 -0.54623266]
 [-0.03995115 -0.13046176  1.1704129 ]]

What is the idea of this "row-normalization"? Does it require an additional white balancing in order to correct the green overshoot?
Or do I have some errors in my calculations?
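The calculation can be reproduced with numpy like this (a sketch; it should reproduce both matrices above up to rounding):

```python
import numpy as np

# Black-level-subtracted measurements, rows = (110,0,0), (0,110,0), (0,0,110)
M = np.array([
    [11764.6732,  3871.794,    737.8676],
    [ 1856.7948, 24347.7208,  4492.4592],
    [  608.5484,  2846.1144, 14478.0804],
])

Minv = np.linalg.inv(M)                     # the inverted matrix
C = Minv / Minv.sum(axis=1, keepdims=True)  # row-normalized version

print(Minv)           # should match the first matrix above (up to rounding)
print(C)              # should match the second matrix above
print(C.sum(axis=1))  # each row sums to 1.0 by construction
```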
Statistics: Posted by Sepp88 — Tue Aug 05, 2025 10:28 am