This research project investigates the intersection between interactive architecture and performative architecture (the latter understood here as architecture that automates processes related to sustainability, environmental comfort, and energy efficiency). It does so by exploring the idea of interactive architecture as spaces open to the continuous reprogramming of their interactivity by their users, where interactivity is understood as the relationship between what is pre-programmed and what is indeterminable. The research proceeds along two complementary fronts. In the theoretical field, it adopts a Feenbergian constructivist approach to technology, in which both technology and technical objects must be open (reconfigurable, reprogrammable) and users must actively participate in this reconfiguration. In the experimental field, it addresses the problem of how to program a technical object so that it remains continuously reprogrammable. This openness of programming will be investigated and concretized in the final development of the HidraH System and the interactive interfaces derived from it. These interfaces will be developed at Lab NEXT (School of Architecture – UFMG) and will form part of an interactive space that reads variations in environmental intensities and communicates this information to users, who in turn will be able to reprogram the behavior of the space. On the theoretical front, this openness will be examined through a review of the concepts of meta-object (Abreu, 2018), non-object (Goulart, 1955), and hybrids (Latour, 2003).
The HidraH System, in its current stage, generates code based on the configuration of sensors and actuators connected to the microcontroller. In the figure above, the sensors and actuators appear at the top of the image; from them extend “wires” that connect to the node representing the microcontroller. This configuration in HidraH must reproduce what is found in the physical circuit: the nodes above represent two encoders (potentiometers), two RGB sensors, one accelerometer, one light sensor, and one ultrasonic sensor. After configuring the assembly of sensors, actuators, and microcontroller, the user names each sensor and actuator, and HidraH automatically generates the corresponding variables for the code. Once this is done, pressing the “write” button makes HidraH generate the code to be installed on the microcontroller. This code makes all connected inputs and outputs visible to the network.
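The variable-generation step can be sketched as follows. This is a minimal, hypothetical illustration of the idea (not HidraH's actual code generator): given user-defined names for each sensor and actuator, a generator emits the corresponding variable declarations for the microcontroller firmware. The device names and the C-style output template are assumptions for the example.

```python
# Hypothetical sketch of HidraH-style code generation: from a list of
# user-named devices, emit one firmware variable declaration per device.
def generate_firmware_vars(devices):
    """devices: list of (name, kind) tuples, kind is 'input' or 'output'."""
    lines = []
    for name, kind in devices:
        role = "sensor (input)" if kind == "input" else "actuator (output)"
        lines.append(f"int {name} = 0;  // {role}")
    return "\n".join(lines)

# Illustrative configuration, loosely mirroring the circuit described above.
config = [
    ("encoder_left", "input"),
    ("light_level", "input"),
    ("distance_cm", "input"),
]
print(generate_firmware_vars(config))
```

The real system would of course emit a complete sketch (setup, loop, network registration), but the principle is the same: the user supplies names, and the code that exposes those names to the network is produced automatically.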
Above we see a website that makes the information generated by the sensors and actuators visible on the network. There we can access the names of the variables generated by the sensors and actuators.
In the figure above we can see the mapping of the variables generated by the sensors. The nodes at the top of the image receive information from the sensors; this is done by entering the name of each sensor’s variable in the “topic” field, so that the node “subscribes” to that sensor’s information. On the right side of the figure we see the nodes responsible for sending this mapped information across the entire network.
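The subscribe-by-topic mechanism described above can be illustrated with a minimal in-memory sketch. The actual system presumably routes messages over a network protocol (the “topic” vocabulary suggests an MQTT-style broker); the topic name and the broker-free `Bus` class here are assumptions made only to show the pattern.

```python
# Minimal in-memory publish/subscribe sketch: nodes subscribe to a topic
# (the name of a sensor's variable) and receive every value published to it.
class Bus:
    def __init__(self):
        self.handlers = {}  # topic -> list of subscriber callbacks

    def subscribe(self, topic, handler):
        self.handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, value):
        for handler in self.handlers.get(topic, []):
            handler(value)

bus = Bus()
readings = []
bus.subscribe("light_level", readings.append)  # node subscribes to a sensor variable
bus.publish("light_level", 512)                # sensor publishes a new reading
```

After the publish, every subscriber of `"light_level"` has received the value 512; any number of mapping nodes can subscribe to the same sensor without the sensor knowing about them.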
Above is an example of how remotely generated information can serve as parameters for video image filters, automated control of a video player, generation of sound effects, etc.
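Using a remote reading as a parameter typically means rescaling it from the sensor's raw range to the range the target effect expects. The sketch below assumes a 10-bit reading (0–1023) mapped to a normalized 0.0–1.0 filter parameter; both ranges are illustrative, since the actual mappings depend on each installation.

```python
# Linearly remap a raw sensor value to a parameter range, e.g. a
# 10-bit light reading (0-1023) driving a video filter's brightness (0.0-1.0).
def scale(value, in_min, in_max, out_min, out_max):
    """Map value from [in_min, in_max] onto [out_min, out_max]."""
    ratio = (value - in_min) / (in_max - in_min)
    return out_min + ratio * (out_max - out_min)

brightness = scale(512, 0, 1023, 0.0, 1.0)  # roughly mid-range brightness
```

The same one-line mapping serves any of the uses mentioned above: a filter intensity, a playback speed, or the amplitude of a sound effect.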