What is the Responsive Environments Group and what topics are you looking at?
At the Responsive Environments Group of the MIT Media Lab we look at how people connect to the nervous system of sensors that covers the planet. I think one of the real challenges for anybody associated with human interaction and computer science is really figuring out how people are transformed by this. The internet of things is something I’ve been working on for at least 15–20 years, ever since we referred to it as “ubiquitous computing.” Now we’re looking at what happens to people after we connect to sensor information in a precognitive and visceral way, not as a heads-up display with text or some simple information. How does that transform the individual? What’s the boundary of the individual; where do “I” stop?
We already see the beginnings of it now. People are all socially connected through electronic media, and they’re connected to information very intimately, but once that becomes up close and personal as part of a wearable, it reaches another level entirely. And what about when it eventually becomes implantable, which is as far as we can see right now in terms of user interface? Where does the cloud stop and the human begin, or the human stop and the cloud begin? How are you going to be connected to this, and how are you going to be augmented by it? These are fascinating questions.
What specific research are you working on, especially with regards to the built environment?
We’re doing projects that directly impact the built environment and that are inspired by the changes technology is bringing to it. Beyond that, we’re also really interested in the different ways people act in natural settings. We did a project six or seven years ago on controlling heating with comfort estimation, done by my then-student Mark Feldmeier. We built a wrist-worn wearable much like a smartwatch. It would monitor activity using very little power, so you could wear it for years before you had to change the battery. It also measured temperature and humidity every minute, and obtained location from the radio. Indoor location will be one of the next big sensors, so to speak, that’s going to roll out and transform our in-building experience. You’ll know within a few centimeters where people are indoors. That’s going to open up so much in terms of user interaction. In our project, we knew something about your local state because we were measuring these parameters right on the body. So, we essentially learned how to control heating, ventilation, and air conditioning (HVAC) based on the sensor data, labeled by your comfort. You’re not controlling the HVAC directly, as you would with a thermostat; you’re just saying whether you’re comfortable or not.
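To make that comfort-labeled loop concrete, here is a minimal sketch, assuming a wrist-worn node that reports temperature, humidity, and activity, plus occasional “comfortable / not comfortable” votes from the wearer. The data, model, and setpoint rule are illustrative, not the actual system Feldmeier built:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical minute-by-minute readings from the wrist-worn node:
# [temperature (C), relative humidity (%), activity level (0-1)]
X = np.array([
    [21.0, 40.0, 0.10],
    [27.5, 55.0, 0.20],
    [19.0, 35.0, 0.05],
    [24.0, 45.0, 0.60],
    [29.0, 60.0, 0.70],
    [22.5, 42.0, 0.30],
])
# Occasional comfort votes from the wearer: 1 = comfortable, 0 = not
y = np.array([1, 0, 0, 1, 0, 1])

# Learn a personal comfort model from the labeled sensor data
comfort_model = LogisticRegression().fit(X, y)

def adjust_setpoint(current_setpoint_c, sample):
    """Nudge the HVAC setpoint only when predicted comfort is low.

    The direction of the nudge is decided from the temperature reading:
    too warm -> cool down, too cool -> warm up. Purely illustrative.
    """
    p_comfort = comfort_model.predict_proba([sample])[0, 1]
    if p_comfort > 0.7:
        return current_setpoint_c          # occupant is likely fine; do nothing
    temp = sample[0]
    return current_setpoint_c - 0.5 if temp > 24.0 else current_setpoint_c + 0.5

# A new reading arrives from the wearable
print(adjust_setpoint(22.0, [28.0, 58.0, 0.2]))
```

The point is that the occupant only ever supplies the comfort label; mapping raw sensor readings to an HVAC action is left to the model.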
I think that’s basically what the future interface is going to be. We’re not going to tell building systems directly what we want; they’re going to infer our needs. At some point, we’ll just label whether we like something or not, and they’ll infer from that and be able to bootstrap. This goes back to the pioneering work of Michael Mozer in the 1990s, when he had his house controlled by a neural network and the light switches were essentially providing reinforcement. We can take that to a whole other level now.
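As a toy illustration of that bootstrapping, here is a sketch in which the only feedback is an occasional thumbs-up or thumbs-down (or a manual override, playing the role of Mozer’s light switches). The candidate settings and the update rule are invented for the example:

```python
import random

# Candidate settings the building could choose among (illustrative)
SETTINGS = ["dim_warm", "bright_cool", "medium_neutral"]

# Running value estimate for each setting, learned from feedback
values = {s: 0.0 for s in SETTINGS}
LEARNING_RATE = 0.2
EXPLORE_PROB = 0.1

def choose_setting():
    """Mostly pick the best-rated setting, occasionally explore."""
    if random.random() < EXPLORE_PROB:
        return random.choice(SETTINGS)
    return max(values, key=values.get)

def give_feedback(setting, liked):
    """A thumbs-up/down (or a manual override) acts as reinforcement."""
    reward = 1.0 if liked else -1.0
    values[setting] += LEARNING_RATE * (reward - values[setting])

# Simulated interaction: the occupant likes dim_warm and rejects the rest
for _ in range(20):
    s = choose_setting()
    give_feedback(s, liked=(s == "dim_warm"))
print(max(values, key=values.get))  # converges toward "dim_warm"
```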
Before the smart HVAC project, we did a lot of work on user interfaces, wireless sensing, and wearable sensing that wasn’t directly concerned with the built environment. More recently, we’ve been focusing on lighting. Lighting is intriguing to us because we now have control over any small group of lights, or any individual fixture, in a modern building. You can even retrofit a building with Bluetooth-enabled lighting fixtures. But how do you interface to that? It’s not clear; it’s a bit of a Wild West right now. So, we started projects that would label the light coming off the fixtures by modulation. If you modulate every fixture with a unique code, then you can see how much illumination comes from each fixture with a small, simple sensor. With our lighting controller, I can just dial in the lighting I want, and it will optimally use only the illumination it needs from nearby fixtures. It could be a wearable on my wrist, or eyeglasses, that becomes my lighting control anywhere.
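One way to picture the fixture-labeling trick is the sketch below, which assumes each fixture carries its own orthogonal ±1 code as a tiny modulation on its output; a photosensor then recovers each fixture’s contribution by correlating against each code. The code set, modulation depth, and signal model are simplified stand-ins, not the group’s actual scheme:

```python
import numpy as np
from scipy.linalg import hadamard

N_FIXTURES = 4
CODE_LENGTH = 64                      # samples per code period

# Give each fixture its own orthogonal +/-1 code (rows of a Hadamard matrix,
# skipping the all-ones row so every code averages to zero)
codes = hadamard(CODE_LENGTH)[1:N_FIXTURES + 1].astype(float)

# Hypothetical ground truth: how much light each fixture puts on the sensor
true_contrib = np.array([0.8, 0.1, 0.4, 0.0])

# The photosensor sees the sum of all fixtures: a steady (DC) level plus a
# small coded modulation riding on each fixture's output, plus noise
depth = 0.05                          # modulation depth, invisible to the eye
signal = true_contrib.sum() + depth * true_contrib @ codes
signal += np.random.normal(0, 0.01, CODE_LENGTH)

# Demodulate: correlating against each code isolates that fixture's share
estimated = (signal @ codes.T) / (depth * CODE_LENGTH)
print(np.round(estimated, 2))         # ~ [0.8, 0.1, 0.4, 0.0]
```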
Can these innovations solve the problem of energy consumption?
In our tests, the smart HVAC had a significant effect on energy consumption, and it optimizes comfort as well as energy. Our current lighting controllers run on context. Knowing more or less what I’m doing, the system knows the kind of situation I’m in and adjusts the lighting to be optimal for that. We’ve basically projected complex lighting into control axes optimized for humans. Instead of working with sliders or presets, the system can automatically adjust and converge pretty quickly on the lighting you want. I have a student wearing Google Glass, and the room illuminates automatically according to what she is doing. The lighting will change if she moves around, or if she is in a social situation versus a work situation. It detects this and will smoothly change the lighting to be appropriate. Of course, we can also optimize for energy consumption while satisfying those contextual suggestions.
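The energy side of this can be framed as a small optimization. The sketch below assumes we already know, say from the coded fixtures above, how much light each fixture delivers to each sensed location, and finds the dimming levels that meet the context’s illumination targets at minimum power. The contribution matrix, targets, and fixture powers are invented, and this stands in for, rather than reproduces, the controller described in the interview:

```python
import numpy as np
from scipy.optimize import linprog

# contrib[i, j]: lux delivered to sensed location i by fixture j at full output
# (numbers invented for the example)
contrib = np.array([
    [250.0,  80.0,  10.0],
    [ 20.0, 180.0, 120.0],
])
target_lux = np.array([300.0, 250.0])   # what the current context calls for
power_w = np.array([35.0, 35.0, 50.0])  # power draw of each fixture at full output

# Minimize total power such that every location gets at least its target,
# with each dimming level between 0 (off) and 1 (full)
result = linprog(
    c=power_w,
    A_ub=-contrib, b_ub=-target_lux,
    bounds=[(0.0, 1.0)] * len(power_w),
    method="highs",
)

if result.success:
    print("dimming levels:", np.round(result.x, 2))
    print("delivered lux:", np.round(contrib @ result.x, 1))
else:
    print("targets not reachable with these fixtures")
```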
And now it’s not just lighting: we’re also working with projection. Before too long we will have surfaces covering entire walls that provide dynamic video imagery. We have large monitors now, of course, and eventually we’ll have smart wallpaper. How do you control that to bring the right atmosphere into the room? We look at it responding to the individual, because we can measure affective parameters as well: Are you stressed? Are you relaxed? Are you in flow? What is your internal state? We can start to estimate that and have the room respond accordingly. The precise way we respond is different for everybody and can change, so the system has to learn. But we found that it can learn sequences of images and lighting that bring you into a state better suited to what you’re doing.
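As a very rough sketch of that affect-driven response, the example below maps two stand-in physiological signals (heart rate and skin conductance) to one of a few preset wall-and-lighting scenes. The thresholds, normalization, and scene names are all invented for illustration:

```python
# Crude mapping from two physiological signals to a wall-and-lighting scene.
# Thresholds and scenes are purely illustrative, not a validated affect model.

SCENES = {
    "calming_forest":  "slow nature imagery, warm dim light",
    "neutral_ambient": "soft abstract imagery, neutral light",
    "energizing_city": "brighter, higher-tempo imagery and cooler light",
}

def estimate_arousal(heart_rate_bpm, skin_conductance_us):
    """Map raw signals to a 0-1 arousal index (illustrative normalization)."""
    hr = min(max((heart_rate_bpm - 50.0) / 70.0, 0.0), 1.0)
    sc = min(max(skin_conductance_us / 20.0, 0.0), 1.0)
    return 0.5 * hr + 0.5 * sc

def choose_scene(heart_rate_bpm, skin_conductance_us, task="focused_work"):
    """Pick a scene intended to nudge the occupant toward the task's state."""
    arousal = estimate_arousal(heart_rate_bpm, skin_conductance_us)
    if task == "focused_work" and arousal > 0.7:
        return "calming_forest"          # wind the person down
    if task == "focused_work" and arousal < 0.3:
        return "energizing_city"         # perk the person up
    return "neutral_ambient"

print(choose_scene(100, 16))   # stressed while working -> calming_forest
```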