The goal of this work is to develop a novel computational formalization of whole-body affordances suited to the multimodal detection and validation of interaction possibilities in unknown environments. The proposed hierarchical framework enables the consistent fusion of affordance-related evidence and can be used to realize shared autonomous control of humanoid robots. The affordance formalization is evaluated in several experiments, both in simulation and on real humanoid robots.
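To make the idea of fusing affordance-related evidence concrete, the following minimal Python sketch shows one plausible scheme: independent per-modality certainties combined via log-odds (naive Bayes) fusion. All names, the choice of modalities, and the fusion rule itself are illustrative assumptions, not the formalization developed in this work.

```python
import math

def fuse_evidence(probabilities):
    """Combine independent per-modality probabilities that an affordance
    (e.g. 'graspable') exists, by summing log-odds (naive Bayes fusion
    under a uniform prior). Inputs must lie strictly in (0, 1)."""
    log_odds = sum(math.log(p / (1.0 - p)) for p in probabilities)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical evidence for a 'support' affordance: visual surface detection,
# haptic contact validation, and a prior propagated from a parent node in
# the affordance hierarchy. Values are made up for illustration only.
evidence = {"vision": 0.8, "haptics": 0.7, "hierarchy_prior": 0.6}
print(f"fused certainty: {fuse_evidence(evidence.values()):.3f}")
```

A hierarchical framework of the kind described above could apply such a rule at each node, passing the fused certainty upward as the prior for more abstract affordances; the concrete mechanism is defined later in this work.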