Touch2Space integrates tactile sensation with digital visualization and auditory feedback. This project embodies an intersection of sensory psychology and cognitive science. At its core, Touch2Space is rooted in the principles of Multisensory Integration and Embodied Cognition. It provides a platform to explore how our perception shapes and is shaped by the interaction of different sensory modalities.
Utilizing machine learning techniques, including convolutional neural networks, the system classifies textures into distinct categories. This classification is further supported by computer vision algorithms, enabling real-time texture analysis and processing.
We trained a custom dataset with two classes, Soft and Rough, each containing over 200 images. The images were preprocessed by cropping them to squares, downscaling them to 2×2 pixels, and converting them to grayscale. This color transformation shifts the emphasis from color information to texture and brightness, allowing the analysis to concentrate on the variations that matter most for differentiating soft from rough materials. Reducing the image size also drastically simplified the data, which reduced the computational load.
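The preprocessing pipeline described above can be sketched as follows; this is a minimal illustration using Pillow, and the function name and details are our own, not the project's actual code:

```python
from PIL import Image

def preprocess(path: str, size: int = 2) -> Image.Image:
    """Crop an image to a centered square, convert it to grayscale,
    and downscale it to size x size pixels."""
    img = Image.open(path)
    w, h = img.size
    side = min(w, h)
    # Center-crop to a square before resizing, so the patch is not distorted.
    left, top = (w - side) // 2, (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img = img.convert("L")          # grayscale: keep brightness, drop color
    return img.resize((size, size))
```

Each image is thus reduced to four brightness values, which is the representation the classifier operates on.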
For machine learning models, especially neural networks, managing the dimensions of the input data is crucial. Standardizing every image to 2×2 pixels ensures uniformity in the input data, which makes training the model more efficient and effective.
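To illustrate how such uniform 2×2 grayscale inputs feed a convolutional network, here is a minimal sketch in PyTorch; the layer sizes and class name are illustrative assumptions, not the project's actual model:

```python
import torch
import torch.nn as nn

class TinyTextureNet(nn.Module):
    """Toy classifier over standardized 2x2 grayscale patches."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        # A 2x2 kernel over a 2x2 input collapses each patch to a 1x1 feature map.
        self.conv = nn.Conv2d(1, 8, kernel_size=2)
        self.fc = nn.Linear(8, n_classes)  # logits for Soft vs. Rough

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.conv(x))   # (N, 8, 1, 1)
        return self.fc(h.flatten(1))   # (N, n_classes)

# A batch of 4 standardized patches: shape (batch, channels, height, width).
batch = torch.rand(4, 1, 2, 2)
logits = TinyTextureNet()(batch)
```

Because every input has the same shape, batches can be stacked directly into a single tensor, which is what makes the training loop simple and fast.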
In a conceptual sense, reducing images to such a minimalistic form can be seen as an abstraction of reality. It aligns with the idea of viewing the world through a 'machine's gaze,' where the complexity of the real world is distilled into simple, yet meaningful, representations.
The transformation of detailed textures into minimalistic representations resulted in distinctive, interpretative visualizations and soundscapes when projected and played. For our installation, we experimented with a variety of soft, rough, and in-between textures: silk, fabric, brush, fishnet, and different ratios of PETG and PLA were part of our experimentation palette.
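One way such a minimal representation can drive a soundscape is sketched below; the specific mapping (mean brightness to pitch, contrast to loudness) is a hypothetical example of ours, not the installation's actual sonification:

```python
def patch_to_sound(pixels):
    """Map a 2x2 grayscale patch (four values in 0..255, row-major)
    to illustrative sound parameters.

    HYPOTHETICAL mapping: brighter patches raise the pitch,
    higher-contrast (rougher-looking) patches raise the volume.
    """
    mean = sum(pixels) / 4
    contrast = max(pixels) - min(pixels)
    freq_hz = 220 + (mean / 255) * 660        # 220 Hz .. 880 Hz
    amplitude = 0.2 + (contrast / 255) * 0.8  # 0.2 .. 1.0
    return freq_hz, amplitude
```

Any synthesis engine can then render these two parameters as a tone, so the same tiny patch produces both a visual and a sonic trace.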
The important aspect of this project is how touch is converted into visual and sonic space-making. At present, the system senses texture through a phone camera, and touch is mediated by a device. Our future goal is to afford real touch through a wearable sensor, which we are developing at the moment.