This book presents several signal processing algorithms for image fusion under noisy multimodal conditions, as encountered in medical, surveillance and satellite imaging. It first introduces a novel image fusion method, Chebyshev polynomial analysis (CPA), which performs well on image sets heavily corrupted by noise. CPA's fast convergence and smooth approximation render it well suited to joint denoising-and-fusion tasks, regardless of the noise characteristics. The concept is then extended by combining the advantages of CPA with those of a state-of-the-art fusion technique, independent component analysis (ICA), to create a hybrid fusion scheme based on region saliency. The book then turns to the development of a new metric for image fusion evaluation that is specifically based on texture. Preserving background textural detail is considered important in many fusion applications, since such detail helps define image depth and structure, which can prove crucial in surveillance and remote sensing. To capture this, the metric is built on the gray-level co-occurrence matrix (GLCM). Tests on established fusion methods verify that the proposed metric is viable, especially in multimodal scenarios.
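The kind of texture statistic the GLCM provides can be illustrated with a minimal sketch. This is not the book's metric, only the standard co-occurrence computation it builds on; the helper names (`glcm`, `contrast`) and the toy 3x3 image are illustrative assumptions.

```python
def glcm(image, levels, dx=1, dy=0):
    """Normalised gray-level co-occurrence matrix for one pixel offset.

    Counts how often gray level i co-occurs with gray level j at the
    offset (dx, dy), then divides by the number of pairs so the entries
    form a probability distribution.
    """
    counts = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    total = 0
    for y in range(rows - dy):
        for x in range(cols - dx):
            i = image[y][x]
            j = image[y + dy][x + dx]
            counts[i][j] += 1
            total += 1
    return [[c / total for c in row] for row in counts]


def contrast(p):
    """Standard GLCM contrast feature: sum of p[i][j] * (i - j)**2.

    High values indicate strong local gray-level variation, i.e. texture.
    """
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))


# Toy 3x3 two-level image with a diagonal edge between dark and bright.
img = [[0, 0, 1],
       [0, 1, 1],
       [1, 1, 1]]
p = glcm(img, levels=2)  # horizontal offset (dx=1, dy=0)
print(contrast(p))       # 2 of the 6 horizontal pairs differ -> 1/3
```

Texture-based fusion metrics of the kind described above typically compare such features (contrast, homogeneity, entropy) between the source images and the fused result, averaged over several offsets and directions.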