Computer vision deals with the problem of manipulating the information contained in large quantities of sensory data, where raw data emerge from the transducing sensors at rates between 10^6 and 10^7 pixels per second. Conventional general-purpose computers are unable to achieve the computation rates required to operate in real time, or even in near real time, so massively parallel systems have been used since their conception in this important practical application area.

The development of massively parallel computers was initially characterized by efforts to reach a speedup factor equal to the number of processing elements (the linear scaling assumption). This behavior can be achieved only when there is a perfect match between the computational structure or data structure and the system architecture. The theory of hierarchical modular systems (HMSs) has shown that even a small number of hierarchical levels can sizably increase the effectiveness of very large systems. In fact, in the last decade several hierarchical architectures have been proposed whose capabilities can exceed the performance predicted by the linear scaling assumption. Of these architectures, the most commonly considered in computer vision is the one based on a very large number of processing elements (PEs) embedded in a pyramidal structure.

Pyramidal architectures supply the same image at different resolution levels, thus ensuring that the most appropriate resolution is used for the operation, task, and image at hand.
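As a minimal sketch of the multiresolution idea (not of any particular pyramid machine), the following Python example builds a simple image pyramid by repeated 2x2 block averaging, so that the same image is held at successively coarser resolution levels; the function name and the use of NumPy are illustrative assumptions, not part of the architectures discussed here.

```python
import numpy as np

def build_pyramid(image, levels):
    """Illustrative sketch: build an image pyramid by repeated 2x2 block averaging.

    Each level halves the resolution of the previous one, mimicking the way a
    pyramidal architecture keeps the same image available at several resolutions.
    """
    pyramid = [image.astype(float)]
    for _ in range(levels):
        prev = pyramid[-1]
        h, w = prev.shape
        # Trim odd rows/columns so the image tiles exactly into 2x2 blocks.
        prev = prev[: h - h % 2, : w - w % 2]
        # Average each 2x2 block to produce the next (coarser) level.
        coarser = prev.reshape(prev.shape[0] // 2, 2,
                               prev.shape[1] // 2, 2).mean(axis=(1, 3))
        pyramid.append(coarser)
    return pyramid

# Example: a hypothetical 512x512 image reduced over 4 levels
# (512, 256, 128, 64, 32 pixels per side).
image = np.random.rand(512, 512)
for level, img in enumerate(build_pyramid(image, 4)):
    print(f"level {level}: {img.shape}")
```

In a pyramid machine each such level would reside in its own layer of processing elements, so an operation can be applied at whichever resolution best matches the task at hand.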