Sensors “Visceralization”

  • Published on 18 November 2017
  • Joseph Paradiso
  • 10 minutes

Urban responses to environmental issues are split between advocates of a return to/of nature and those who promote the technological solutions of the smart city, based on sensors and data. Joseph Paradiso, Director of the Responsive Environments Group at MIT, studies the interactions between individuals and computing technology. He explains how portable electronic sensors known as wearables give access to data that modify our experience of space and profoundly impact the built environment. Electronic interfaces autonomously determine our needs, allowing comfort and energy consumption to be optimized. He sees a world of information taking hold in the real world, linking wearables in real time with the broader digital infrastructure. Carrying this virtual bubble along with us will even change the very notion of the individual. In his view, the roles of the virtual and the real also seem destined to change, accompanying a “visceralization” of sensors and the digital that will expand our sensory capacities.

What is the Responsive Environments Group and what topics are you looking at?

At the Responsive Environments Group of the MIT Media Lab we look at how people connect to the nervous system of sensors that covers the planet. I think one of the real challenges for anybody associated with human interaction and computer science is really figuring out how people are transformed by this. The internet of things is something I’ve been working on for at least 15–20 years, ever since we referred to “ubiquitous computing.” Now we’re looking at what happens to people after we connect to sensor information, in a precognitive and visceral way, not as a heads-up display with text or some simple information. How does that transform the individual? What’s the boundary of the individual, where do “I” stop?

We already see the beginnings of it now. People are all socially connected through electronic media, and they’re connected to information very intimately, but once that becomes up close and personal as part of a wearable, it reaches another level entirely. And what about when it eventually becomes implantable, which is as far as we can see right now in terms of user interface? Where does the cloud stop and the human begin, or the human stop and the cloud begin? How are you going to be connected to this, and how are you going to be augmented by it? These are fascinating questions.

What specific research are you working on, especially with regards to the built environment?

We’re doing projects that are definitely impacting the built environment and that are inspired by the changes in the built environment that technology provides. Beyond that, we’re also really interested in how people act in natural places in different ways. We did a project six or seven years ago controlling heating with comfort estimation, done by my then-student Mark Feldmeier. We built a wrist-worn wearable much like a smartwatch. It would monitor activity using very little power, so you could wear it for years before you had to change the battery. It also measured temperature and humidity every minute, and obtained location from the radio. Indoor location will be one of the next big sensors, so to speak, that’s going to roll out and transform our in-building experience. You’ll know within a few centimeters where people are indoors. That’s going to open up so much in terms of user interaction. In our project, we knew something about your local state because we were measuring these parameters right on the body. So, we essentially learned how to control heating, ventilation, and air conditioning (HVAC) based on the sensors as labeled by your comfort. You’re not controlling the HVAC like on a thermostat; you’re saying whether you’re comfortable or not.
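To make the comfort-labeling idea concrete, here is a minimal sketch (not the group's actual system) of how wearable readings labeled "comfortable", "too hot", or "too cold" could train a model that nudges an HVAC setpoint; the sensor values, labels, and setpoint step are illustrative.

```python
# Minimal sketch: learn a comfort model from wearable samples
# (temperature, humidity, activity) labelled by the occupant, then nudge
# an HVAC setpoint accordingly. Values and the step size are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [air temperature degC, relative humidity %, activity level 0-1]
samples = np.array([
    [21.0, 40.0, 0.1],
    [26.5, 55.0, 0.2],
    [19.0, 35.0, 0.1],
    [24.0, 50.0, 0.6],
])
labels = np.array(["comfortable", "too_hot", "too_cold", "too_hot"])

model = LogisticRegression(max_iter=1000).fit(samples, labels)

def adjust_setpoint(current_setpoint, wearable_reading, step=0.5):
    """Move the setpoint toward comfort based on the predicted label."""
    vote = model.predict([wearable_reading])[0]
    if vote == "too_hot":
        return current_setpoint - step
    if vote == "too_cold":
        return current_setpoint + step
    return current_setpoint  # comfortable: leave the HVAC alone

print(adjust_setpoint(22.0, [26.0, 52.0, 0.3]))  # likely lowers the setpoint
```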

I think that’s basically what the future interface is going to be. We’re not going to tell building systems directly what we want; they’re going to infer our needs. At some point, we’re going to label whether we like something or not and they’re going to infer from that, and be able to bootstrap. This goes back to the pioneering work of Michael Mozer in the 1980s, when he had his house controlled by a neural net and switches were just doing reinforcement, essentially. We can take that to a whole other level now.
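As a toy illustration of that bootstrapping-from-labels idea, the following hypothetical sketch treats candidate settings as bandit arms and reinforces them from like/dislike feedback; the scene names are invented.

```python
# Hypothetical sketch of "label whether we like something and let the system
# bootstrap": a tiny Thompson-sampling bandit over candidate light scenes.
import random

scenes = ["warm_dim", "neutral_bright", "cool_focus"]
# Beta(likes+1, dislikes+1) per scene, updated from the occupant's labels.
stats = {s: [1, 1] for s in scenes}

def pick_scene():
    # Sample a plausible preference for each scene and pick the best.
    return max(scenes, key=lambda s: random.betavariate(*stats[s]))

def give_feedback(scene, liked):
    # Reinforce from the occupant's like/dislike label.
    stats[scene][0 if liked else 1] += 1

scene = pick_scene()
give_feedback(scene, liked=True)   # over time the system converges on taste
```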

Before the smart HVAC project, we did a lot of user interface, wireless sensing, and wearable sensing, not concerned directly with the built environment. More recently, we’ve been focusing on lighting: for us lighting is intriguing because we now have control over any small group of lights or any fixture in a modern building. You can even retrofit a building with Bluetooth-enabled fixtures for lighting. But how do you interface to that? It’s not clear; it’s a bit of a Wild West right now. So, we started projects that would label the light coming off the fixtures by modulation. If you modulate every fixture with a unique code, then you can see how much illumination comes from each fixture with a small, simple sensor. On our lighting controller, I can just dial in the lighting I want, and it will optimally use only the illumination it needs from nearby fixtures. It could be a wearable that I have on my wrist or eyeglasses that become my lighting control anywhere.
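The following sketch illustrates the principle of labeling light by modulation, under assumed numbers (not the group's implementation): each fixture carries its own code, a photodiode sample is correlated against the codes to estimate per-fixture contributions, and dimming then favors the nearest fixtures.

```python
# Sketch of labelling light by modulation: each fixture flickers with its own
# orthogonal code, the sensor signal is correlated against each code to
# recover per-fixture illumination, and dimming favours proximate fixtures.
# Codes, contributions, and the target illuminance are illustrative.
import numpy as np

codes = np.array([            # +/-1 Walsh-Hadamard rows, one per fixture
    [ 1,  1,  1,  1],
    [ 1, -1,  1, -1],
    [ 1,  1, -1, -1],
])
true_contrib = np.array([120.0, 30.0, 5.0])   # lux from each fixture at the sensor

# The photodiode sees the sum of every fixture's modulated output.
signal = true_contrib @ codes

# Demodulate: correlate with each code to recover per-fixture contribution.
estimated = signal @ codes.T / codes.shape[1]
print(estimated)              # ~[120, 30, 5]

# Reach a target illuminance using mostly the brightest (nearest) fixtures.
target = 100.0
dim = np.zeros(len(estimated))
remaining = target
for i in np.argsort(estimated)[::-1]:
    dim[i] = min(1.0, remaining / estimated[i]) if estimated[i] > 0 else 0.0
    remaining -= dim[i] * estimated[i]
    if remaining <= 0:
        break
print(dim)                    # fraction of full output requested per fixture
```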

Can these innovations solve the problem of energy consumption?

In our tests, the smart HVAC had a significant effect on energy consumption, and it optimizes comfort as well as energy. Our current lighting controller runs off context. Knowing more or less what I’m doing, it knows the kind of situation I’m in and adjusts the lighting to be optimal for that. We’ve basically projected complex lighting into control axes optimized for humans. Instead of working with sliders or presets, the system can automatically adjust and converge pretty quickly on the lighting you want. I have a student wearing a Google Glass, and the room illuminates automatically according to what she is doing. The lighting will change if she moves around or if she is in a social situation versus a work situation. It detects this and will smoothly change the lighting to be appropriate. Of course, we can also optimize for energy consumption while satisfying contextual suggestions.
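As a rough illustration of projecting complex lighting into human-facing control axes, this sketch (with invented presets) extracts a principal axis from a handful of per-fixture presets and drives it with a single slider.

```python
# Minimal sketch of "projecting complex lighting into control axes": take a
# few per-fixture presets, find their principal axis, and let one slider move
# along it instead of driving each fixture. Preset values are invented.
import numpy as np

# Rows = lighting presets, columns = individual fixture intensities (0..1).
presets = np.array([
    [0.9, 0.8, 0.2, 0.1],   # e.g. bright work light
    [0.2, 0.3, 0.8, 0.9],   # e.g. warm social light
    [0.5, 0.5, 0.5, 0.5],   # neutral
    [0.1, 0.1, 0.2, 0.2],   # dim
])

mean = presets.mean(axis=0)
# First principal axis of the presets (SVD of the centred matrix).
_, _, vt = np.linalg.svd(presets - mean, full_matrices=False)
axis = vt[0]

def slider_to_fixtures(value):
    """Map one human-facing slider (-1..1) to per-fixture intensities."""
    return np.clip(mean + value * axis, 0.0, 1.0)

print(slider_to_fixtures(0.7))   # one knob, four fixtures
```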

And now, it’s not just lighting: we’re also working with projection. Before too long we will have surfaces covering entire walls that provide dynamic video imagery. We now have large monitors, of course, and eventually we’ll have smart wallpaper. How do you control that to bring the right atmosphere into the room? We look at it responding to the individual, because we can measure affective parameters as well: Are you stressed? Are you relaxed? Are you into flow? What is your internal state? We can start to estimate that and have the room respond accordingly. The precise way we respond is different for everybody and can change—the system has to learn. But we discovered that it can learn sequences of images and lighting and bring you into a certain state that can be better suited to what you’re doing.
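A purely illustrative sketch of that responsiveness, with assumed signals, thresholds, and scene names, might estimate a crude arousal score from wearable readings and pick an ambient scene accordingly.

```python
# Illustrative sketch of a wall display responding to estimated internal
# state: a rough arousal proxy from hypothetical wearable readings picks an
# ambient scene. Thresholds and scene names are assumptions.
def arousal_score(heart_rate_bpm, skin_conductance_uS):
    # Normalise two common affective signals into a rough 0..1 arousal proxy.
    hr = min(max((heart_rate_bpm - 55) / 60, 0.0), 1.0)
    eda = min(max(skin_conductance_uS / 20, 0.0), 1.0)
    return 0.5 * hr + 0.5 * eda

def pick_scene(score):
    if score > 0.7:
        return "slow_nature_imagery"     # calm a stressed occupant
    if score < 0.3:
        return "brighter_dynamic_scene"  # gently raise energy
    return "keep_current_scene"          # occupant in flow: don't disturb

print(pick_scene(arousal_score(88, 12)))
```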

With all these technologies, what is your vision of our daily life in ten years’ time?

I think it’s going to come back to what we already envisioned in the early 90s, when ubiquitous computing first came about at places like Xerox PARC, in Palo Alto, with people like Mark Weiser and all the early pioneers. They had the idea that computational infrastructure would become common—almost a socialistic principle—whereby we would share this infrastructure. In those days you didn’t have mobile phones, and monitors were precious things, usually associated with a particular computer. I think we’re going to get into an era where the information world will be continuously brokered between wearables and infrastructure. Information will reach you in different ways—projected right into your eyes and ears or coming from monitors and speakers nearby. In this world, sensor data from everywhere will flow up, and context will flow down, to guide what you’re doing, or to guide the “digital” things that are happening in your vicinity. It’s not like we’re going to pull a phone out—I think there will be very little of that, where we run an app and then have to do stuff that diverts our attention. The world isn’t an app, and under ubiquitous computing users will always be adding on capabilities instead of downloading software.
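A toy, in-process sketch of that brokering, with invented topic names, could look like this: wearable sensor data is published upward, context is inferred and published back down to whatever outputs happen to be nearby.

```python
# Toy sketch of "sensor data flows up, context flows down" brokering between
# wearables and infrastructure. Topic names and the context rule are invented.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers[topic]:
            cb(message)

broker = Broker()

# Infrastructure listens to wearable sensor streams and pushes context back.
def infer_context(sample):
    context = "meeting" if sample["motion"] < 0.2 and sample["speech"] else "walking"
    broker.publish("context/room12", {"activity": context})

broker.subscribe("sensors/wearable42", infer_context)
# The wearable, and nearby displays or lights, react to the inferred context.
broker.subscribe("context/room12", lambda ctx: print("adapt output to:", ctx))

broker.publish("sensors/wearable42", {"motion": 0.05, "speech": True})
```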

It’s just going to be us doing what we do—the digital world is going to manifest in the right way around us, and we’re going to live under that pooled infrastructure. In a way it ushers in the dream of what early researchers in ubiquitous computing were after.

Machine learning is advancing enormously—we’re only seeing the beginnings of it now. Context has always been hard, because the real world is noisy. Making a reliable decision about what you’re doing is tough. However, it has gotten better because we’ve got more information, more data. We’ve also gotten better at learning algorithms, utilizing deep learning and related approaches and hardware optimized for this kind of thing. This is going to be leveraged far more. We’re going to be moving away from putting our fingers on screens.

And how will this impact the way we design houses, office buildings, etc.?

That’s a great point. I’m not an architect, but I suspect the whole notion of private versus public space is already changing. Look at a contemporary office space: people work in open environments, but I think they naturally want to work in private environments too. There is a tension between the two. It depends on what the team is doing and what the dynamic is. I think there are going to be different ways of isolating yourself in a public environment—wearables are one example. There will also be a revolution in connecting to other people in public and private environments—where other people can virtually be in your presence in many different ways, not just via a video conference. I suspect that this kind of infrastructure will change the nature of perceived space. There will be public displays conveying all kinds of information, both personal and aesthetic, related to the space. Lighting will be totally dynamic and everything will be networked.

Think of a building where you have a wearable computer. What is it going to be like? It’s an intriguing idea that people have explored in science-fiction and fantasy, but it’s not so far off now. Currently, we’re playing with a HoloLens from Microsoft and we’ve got an entire outdoor landscape manifesting on this table. You wear this thing, and suddenly you see this beautiful outdoor setting which you can virtually walk around, see sensor information manifesting on it, point to it and interact. This was in the realm of fantasy, but we’re building it now.

The roles for the virtual and the real are going to change. These constructs will be mobile, and you’re going to bring your virtual bubble with you as you walk around. The future definition of the individual is similarly intriguing. This will all probably affect workspaces and social spaces too. I don’t know exactly how it will roll out, but it will involve very creative architects—they can go a long way with this, I’m certain.

Can you tell us more about your work that involves nature and the living?

One of our projects relates to an outdoor location called Tidmarsh. It’s an old bog that used to grow cranberries. These farms all moved north because of economic change, climate change, and change in the plants themselves. Much of Tidmarsh’s 600 acres has been turned back to nature. Rather than make a shopping mall or development, the owners really want to return it to what it originally was, a wetland. So, it’s been bulldozed, it’s been changed, and we’re interested in capturing this whole process. We built low-power wireless sensors for measuring the parameters of the wetland as it is restored and scattered them over several locations to get fine-grained data, which we’re now manifesting in a virtual world. Just like we do with the building in DoppelLab, you can now virtually go to this outdoor place and float through it, and see the sensor information as it comes up through animations. We’ve got thirty microphones in part of the landscape, so as you’re moving you can hear the natural world. The sensors make music too. We’ve got three or four different compositions that are driven by the sensor data in real time. We could do a city in that way too, in principle. So this becomes a whole new art form, one that is just starting to mature and which intrigues us very much.
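In the spirit of that sensor-driven music (though not the actual Tidmarsh compositions), here is a small sonification sketch that maps a stream of readings onto a pentatonic scale; the scale choice and value ranges are assumptions.

```python
# Illustrative sonification: map a live sensor value onto a pentatonic scale
# so the landscape "plays". Scale and ranges are assumptions for the sketch.
PENTATONIC = [60, 62, 64, 67, 69]   # MIDI note numbers, C major pentatonic

def sensor_to_note(value, lo, hi):
    """Map a sensor reading in [lo, hi] to a note in the scale."""
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    octave = int(t * 2)                             # spread over two octaves
    degree = int(t * len(PENTATONIC)) % len(PENTATONIC)
    return PENTATONIC[degree] + 12 * octave

# A stream of water-temperature readings becomes a melodic line.
for reading in [4.2, 6.8, 9.1, 12.5, 15.0]:
    print(sensor_to_note(reading, lo=0.0, hi=20.0))
```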

Like a virtual city?

Instead of sensor visualization, or sensor virtuality, we call it sensor “visceralization.” It’s all kind of precognitive, hence visceral. We’ve built a framework where any kind of sensor can be used by any application, which is so important for the internet of things. You can also run it with VR headsets like HoloLens, Rift, or Vive. We are very interested in the idea of sensory augmentation. We’re going to start with the audio. If you’re looking across a certain area and we detect that you’re concentrating on something there, we’ll feed up sounds from the microphones that are there; the sensors in the vicinity will be making some sonification or some music that gets blended in, and we’ll track your head, your location, and some idea of your sensory focus. In as natural a way as we can, we’ll see how much we can get away with in terms of expanded perception—what we call a “sensory prosthetic.”
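As a minimal sketch of blending field audio by sensory focus, with invented microphone positions and an assumed falloff, each microphone's gain could rise as it falls closer to where the listener is looking.

```python
# Minimal sketch of focus-weighted audio blending: each microphone's gain
# depends on the angle between the listener's gaze and the mic's direction.
# Positions, pose, and the falloff constant are illustrative.
import math

mics = {                       # microphone id -> (x, y) position in metres
    "brook": (10.0, 2.0),
    "reeds": (3.0, 8.0),
    "pines": (-6.0, 5.0),
}

def focus_gains(listener_xy, heading_rad, sharpness=4.0):
    """Per-microphone gain from the angle between gaze and mic direction."""
    gains = {}
    for name, (x, y) in mics.items():
        angle_to_mic = math.atan2(y - listener_xy[1], x - listener_xy[0])
        diff = abs((angle_to_mic - heading_rad + math.pi) % (2 * math.pi) - math.pi)
        gains[name] = math.exp(-sharpness * diff)   # attend ahead, attenuate behind
    total = sum(gains.values())
    return {k: v / total for k, v in gains.items()}

print(focus_gains((0.0, 0.0), heading_rad=0.3))    # looking roughly toward the brook
```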

Towards a resynthesized reality

In that kind of environment, the boundaries of the living, where the natural starts and ends, become very vague.

This is going to be a major issue in the future. The world of information wants to have a presence in the real world in different ways, and we’re blurring that boundary a bit. But we’re not going completely virtual. Most of the stuff in VR really is VR, where you’re just in a virtual space and that’s it. What we do is to have the real world driving the virtual environment in real time. We call it resynthesized reality, built on top of what we call cross-reality: it’s another idea of a distributed top layer that is resynthesizing perceived reality through the sensor data.

People will be growing up in this world. We’re getting to the point where we don’t need to pose a query to the web—it’s going to be driven by context and what the digital world, or the cloud, or the virtual world thinks is important. We’re basically going to augment humans at first via these techniques and resources. Some of them will be cognitive, some of them will be visceral or sensorial. Eventually, we’re going to transform ourselves and everything via sculpting DNA—who knows what we’ll do if we go far enough? There are other people working on those aspects here. That’s an intriguing future that is highly discontinuous.

Another technological dilemma we found in philosophical literature is how far we go in geo-engineering or eco-engineering. What’s your vision of this issue?

We need all the tricks at our disposal because we need to be ready and looking at all the possible climate forecasts: we’re soon going to be at a point where we’ll be seeing significant warming effects. Eventually, if we spray sulfates or whatever else into the stratosphere, it may relieve some of the symptoms of warming without huge side effects, although we don’t know the climate well enough to say for sure what will happen in detail. We have to make better models, do some limited tests of these ideas, and see what’s possible, effective, and feasible. The danger is that we see it as a panacea—“Oh, we can just spray some stuff and then we can keep on burning fossil fuels.” That’s the worst possible outcome. We’ve got to get off of carbon-based energy, and maybe use techniques like these to control temperatures in the near future. Then, if we knew how to pull carbon effectively out of the air at some point, we could fix this properly and regulate the actual climate. We’re going to have to master the climate anyway someday, because it goes through natural cycles. If humans are around long enough for that to affect us, I think we’ll have to be able to deal with it, unless we get to a point where we don’t care about climate, for example if we have transcended into something else that’s climate-agnostic. We are on the cusp of seeing what that is going to be. It’s an exciting time to be around, because no matter what happens there will be profound changes and we are close to seeing them play out.
