Achieving computational color constancy through deep learning
REINA, MICHELE
2019/2020
Abstract
Digital photos captured by a camera sensor do not always reproduce the colors of the scene as perceived by human observers: the color of an object depends not only on the color of its surface but also on the color temperature of the illumination under which it is placed. Computational color constancy aims to make images appear as if they had been taken under a canonical light source. In this scenario, this thesis addresses the problem of automatically estimating the scene illumination and keeping the perceived colors of objects invariant despite variations in the environmental lighting. In the digital world, this is typically obtained through a two-step process: first, the color of the illuminant in the captured scene is estimated; then the image is adapted with a single transformation matrix according to a chromatic adaptation transform (CAT). After an overview of the many methods that constitute the current state of the art for predicting the illuminant, the notion of a neural network is introduced, together with an account of how it became the preferred tool for computer vision tasks; the implementation of a Convolutional Neural Network (CNN) architecture, potentially the most effective approach, is then examined. Once an architecture has been selected among those trained and tested on a series of different datasets containing real-world images (taken under known illumination) and their corresponding chromaticity values, the learned model can be used to correct the color balance of new images, captured under an unknown illuminant, on a large variety of devices, since it can be implemented directly in the CMOS sensor of today's smartphones and cameras. In the past, by contrast, only very simple, low-level algorithms such as Max-RGB or Gray-World could be implemented inside digital consumer cameras because of their very limited computational capability.
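To make the two-step pipeline concrete, the following is a minimal NumPy sketch of the two classical baselines named above (Gray-World and Max-RGB) followed by the diagonal, von Kries-style correction; function names and the choice of green-channel normalisation are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def estimate_illuminant(img, method="gray_world"):
    """Step one: estimate the scene illuminant (R, G, B) from a linear RGB image.

    Gray-World assumes the average scene reflectance is achromatic, so the
    per-channel mean estimates the illuminant color. Max-RGB assumes the
    brightest per-channel response (e.g. a white or specular patch) does.
    Returns a unit-norm chromaticity vector.
    """
    pixels = img.reshape(-1, 3).astype(np.float64)
    if method == "gray_world":
        e = pixels.mean(axis=0)
    elif method == "max_rgb":
        e = pixels.max(axis=0)
    else:
        raise ValueError(f"unknown method: {method}")
    return e / np.linalg.norm(e)

def correct_image(img, illuminant):
    """Step two: divide each channel by the estimated illuminant gain
    (a diagonal chromatic adaptation transform), so the image appears
    as if taken under a neutral, canonical light."""
    d = illuminant / illuminant[1]     # normalise so the green gain is 1
    return img.astype(np.float64) / d  # per-channel diagonal transform
```

For example, a uniformly gray scene lit by a warm illuminant `(0.8, 0.6, 0.2)` yields that same vector (up to scale) from both estimators, and dividing it out returns the scene to neutral gray.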
Finally, the results obtained will be compared with those achieved by the previously cited methods on the various available datasets.
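Comparisons between illuminant-estimation methods are conventionally reported as the recovery angular error between the estimated and ground-truth illuminant vectors; a minimal sketch of that metric (the function name is illustrative):

```python
import numpy as np

def angular_error(estimated, ground_truth):
    """Angle in degrees between the estimated and ground-truth illuminant
    vectors; 0 means a perfect estimate. Dataset-level means and medians
    of this value are what method comparisons typically report."""
    e = np.asarray(estimated, dtype=np.float64)
    g = np.asarray(ground_truth, dtype=np.float64)
    cos = np.dot(e, g) / (np.linalg.norm(e) * np.linalg.norm(g))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

The metric is scale-invariant, which matters because only the chromaticity of the illuminant is recoverable, not its absolute intensity.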
https://hdl.handle.net/20.500.14239/12262