The Pillow (PIL) and NumPy libraries can do wonders in Python! I once had the requirement to overlap two images (not watermarking).
I found several alternatives and was curious to see which would work best:
- (x+y)/2 … Mathematically, x/2 + y/2 seems equivalent to the above, but it is not; we'd be losing a ton of info by doing so (see the sketch right after this list)!
- np.minimum((x+y), 256)
- final = (x+y)/2, then clamping with final[final > 256] = 256
- Pillow's Image.blend(x, y, 0.5)
- Pillow's Image.composite(x, y, y)
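To make the first point concrete, here is a minimal sketch with made-up 8-bit pixel values (not the actual images): on integer arrays the two expressions are genuinely different, because uint8 addition wraps around modulo 256 before the division, while halving each operand first avoids the wrap but drops the low bit of every pixel.

```python
import numpy as np

# Made-up 8-bit pixel values, purely to illustrate the arithmetic.
x = np.array([200, 100, 255], dtype=np.uint8)
y = np.array([180,  50, 255], dtype=np.uint8)

# uint8 addition wraps around modulo 256, so the "average" is computed on
# already-corrupted sums (200 + 180 = 380 wraps to 124).
print((x + y) // 2)                    # [ 62  75 127]

# Halving each operand first avoids the wrap, but integer division drops
# the low bit of every pixel before the add.
print(x // 2 + y // 2)                 # [190  75 254]

# Upcasting first gives the true average.
print((x.astype(np.uint16) + y) // 2)  # [190  75 255]
```

The listing below sidesteps the wrap-around, since np.where(...) there returns a plain (wider) integer array rather than uint8.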
Input images used:

1.png | 2.png | C.png (the expected result)
---|---|---
(image) | (image) | (image)
Do you speak Parseltongue? I speak Python.
from itertools import izip

from PIL import Image
import numpy as np

# Python 2 code (note izip and the print statement); requires Pillow and NumPy.

def getImageDifference(x, y):
    diff3 = 1.0
    # percentage difference between two images, 0 = identical
    # http://rosettacode.org/wiki/Percentage_difference_between_images#Python
    if x.size == y.size:
        pairs = izip(x.getdata(), y.getdata())
        if len(x.getbands()) == 1:
            # for gray-scale jpegs
            diff3 = sum(abs(p1 - p2) for p1, p2 in pairs)
        else:
            diff3 = sum(abs(c1 - c2) for p1, p2 in pairs for c1, c2 in zip(p1, p2))
        ncomponents = x.size[0] * x.size[1] * 3
        diff3 = (diff3 / 255.0 * 100) / ncomponents
    return diff3

i1 = Image.open('1.png').convert(mode='L', dither=Image.NONE)
i2 = Image.open('2.png').convert(mode='L', dither=Image.NONE)
cc = Image.open('C.png').convert(mode='L', dither=Image.NONE)

# Variation 1
pixelThreshold = 200
i1 = np.array(i1)
i1 = np.where(i1 > pixelThreshold, 255, 0)
i2 = np.array(i2)
i2 = np.where(i2 > pixelThreshold, 255, 0)
final1 = (i1 + i2) / 2
final1 = np.where(final1 > pixelThreshold, 255, 0)
final1 = Image.fromarray(final1.astype(np.uint8))
final1.show()
print getImageDifference(final1, cc)  # ==> 0.12561853664

# # Variation 2
# pixelThreshold = 200
# i1 = np.array(i1)
# i1 = np.where(i1 > pixelThreshold, 255, 0)
# i2 = np.array(i2)
# i2 = np.where(i2 > pixelThreshold, 255, 0)
# final1 = np.minimum((i1 + i2), 256)
# final1 = Image.fromarray(final1)
# final1.show()
#
# # Variation 3
# pixelThreshold = 200
# i1 = np.array(i1)
# i1 = np.where(i1 > pixelThreshold, 255, 0)
# i2 = np.array(i2)
# i2 = np.where(i2 > pixelThreshold, 255, 0)
# final1 = (i1 + i2) / 2
# final1[final1 > 256] = 256
# final1 = Image.fromarray(final1)
# final1.show()

# # Variation 4 (blend itself is commented out; the images are reloaded as
# # PIL objects here because variation 1 turned i1/i2 into NumPy arrays)
i1 = Image.open('1.png').convert(mode='L', dither=Image.NONE)
i2 = Image.open('2.png').convert(mode='L', dither=Image.NONE)
# final2 = Image.blend(i1, i2, 0.5)
# print getImageDifference(final2, cc)  # ==> 0.661266633307

# Variation 5
final2 = Image.composite(i1, i2, i2)
final2.show()
print getImageDifference(final2, cc)  # ==> 0.0380851773602

# courtesy: https://stackoverflow.com/questions/524930/numpy-pil-adding-an-image/
Here are the results:
I am using the getImageDifference(x, y) function to compare each outcome with the expected image: the smaller the difference the better, and 0 is ideal (a small sanity check of what this number means follows the results table). Note that if you see a few pixels off, that is intentional; chances are the human eye would not catch it.
Variant | Difference |
---|---|
Variant 1 | 0.12561853664 |
Variant 4 | 0.661266633307 |
Variant 5 | 0.0380851773602 |
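For a sense of scale on those numbers, here is a small made-up sanity check (these 2x2 images are not the post's inputs; they just mirror the arithmetic inside getImageDifference): a uniform difference of one grey level across the whole image comes out at roughly 0.13.

```python
from PIL import Image

# Two 2x2 grayscale images that differ by exactly one grey level per pixel.
a = Image.new('L', (2, 2), 100)
b = Image.new('L', (2, 2), 101)

# Same arithmetic as getImageDifference above: sum the absolute differences,
# express the sum as a percentage of 255, and normalise by width * height * 3.
diff = sum(abs(p1 - p2) for p1, p2 in zip(a.getdata(), b.getdata()))
ncomponents = a.size[0] * a.size[1] * 3
print((diff / 255.0 * 100) / ncomponents)  # ~0.1307
```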
As far as my testing for my particular usage is concerned, the 1st variant did a better job than the others, both in terms of getting the job done and in being general enough. As you might have noticed, I am converting the images to black and white based on a predefined threshold value (200).
The 2nd variant gave me a white image, whereas the 3rd was kind of close, giving me gray instead of black.
If the arrays of the input images have values in the 0-255 range, then adding the two arrays together and dividing by 2 will always be less than or equal to the maximum value. I do not quite follow why the clamping step addition[addition > 256] = 256 is needed, as the 3rd variant suggests.
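To see where those outputs come from, here is a small numeric sketch of the three per-pixel cases that exist after thresholding (made-up three-pixel arrays using the same 0/255 values as the code above). The averaged values never exceed 255, so the clamp at 256 in the 3rd variant indeed never fires; and the 2nd variant's minimum with 256 leaves any pixel that was white in either input at essentially full brightness, which would explain the white output.

```python
import numpy as np

# The only per-pixel combinations possible after thresholding:
# black+black, black+white, white+white.
i1 = np.array([0,   0, 255])
i2 = np.array([0, 255, 255])

print((i1 + i2) // 2)                          # [  0 127 255] -> plain average (the gray seen in variant 3)
print(np.minimum(i1 + i2, 256))                # [  0 255 256] -> variant 2: bright wherever either input is white
print(np.where((i1 + i2) // 2 > 200, 255, 0))  # [  0   0 255] -> variant 1 after re-thresholding at 200
```

In other words, after re-thresholding a pixel stays white only when it is white in both inputs, so dark content from either image survives in the variant 1 output.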
The 4th variant does what it says: blending (Pillow 3.3.x Docs). The 5th variant uses Pillow's (PIL's) composite function. Here is what the blend (alpha = 0.5) and composite functions return; a small sketch of what each call computes follows the table.
Blend | Composite
---|---
(blend result image) | (composite result image)
0.661266633307 | 0.0380851773602
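For reference, here is a minimal sketch of what the two calls actually compute, using tiny made-up images rather than the post's inputs: blend takes a per-pixel weighted average of the two images, while composite picks each pixel from one image or the other according to a mask.

```python
from PIL import Image

# Tiny made-up 'L' images, just to contrast the two calls.
a = Image.new('L', (2, 1), 100)
b = Image.new('L', (2, 1), 200)
mask = Image.new('L', (2, 1), 255)  # fully "on" mask

# blend(im1, im2, alpha): per-pixel im1 * (1 - alpha) + im2 * alpha.
print(list(Image.blend(a, b, 0.5).getdata()))       # [150, 150]

# composite(im1, im2, mask): im1 where the mask is 255, im2 where it is 0.
print(list(Image.composite(a, b, mask).getdata()))  # [100, 100] -> the all-white mask selects a everywhere
```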
But there's a catch … the mask parameter of the composite function does matter; see below, and the small example after the table. It does what it says. In this regard, variant 1 is general enough that it does the job without overthinking it.
If you want to tell Unix to deliver a hammer to your foot, you can. And Unix will make sure to deliver the hammer to your foot in the most efficient way possible!
Composite parameters | Image.composite(x, y, x) | Image.composite(x, y, y)
---|---|---
Result | (image) | (image)
Difference | 1.29032071241 | 0.0380851773602
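To make the role of the mask concrete, here is a tiny made-up example (two 2x1 black-and-white images, not the post's inputs). composite(image1, image2, mask) takes image1's pixel wherever the mask is white and image2's pixel wherever it is black, so swapping the mask argument can flip the result completely.

```python
from PIL import Image

x = Image.new('L', (2, 1))
x.putdata([255, 0])   # white, black
y = Image.new('L', (2, 1))
y.putdata([0, 255])   # black, white

# Mask = y: where y is white we take x (black there); elsewhere we take y (black).
print(list(Image.composite(x, y, y).getdata()))  # [0, 0] -> all black

# Mask = x: where x is white we take x (white); elsewhere we take y (white there).
print(list(Image.composite(x, y, x).getdata()))  # [255, 255] -> all white
```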
May the source be with you!