This resource captures object affordances in a direct, multimodal, and naturalistic way. Following a "thinking aloud" protocol, spontaneously generated verbal and motoric data on object affordances were elicited from 124 participants in a series of three behavioural experiments. The experiments employed visuo-tactile stimuli (lithic tools), and participant responses were captured audiovisually from two camera views (frontal/profile). This methodology yielded more than 96 hours of video, audio, and speech covering: (a) object-feature-action data (e.g., perceptual features, names, functions), (b) exploratory acts (haptic manipulation for feature acquisition/verification), (c) gestures and demonstrations for object/feature/action description, and (d) reasoning patterns (e.g., justifications, analogies) for attributing a given characterization. The corpus, along with detailed descriptions of its contents, can be accessed at: http://csri.gr/downloads/plt