Mapping expectations / class impressions #3

November 27, 2013

The various class formats we experimented with over the years were always, in part, about how heuristic design thinking can be underpinned by simulation models as cognitive tools. It is therefore a good exercise to finally be able to contrast performance expectations with actual mappings of simulation model outputs in a spatial format, centered around question/assumption complexes like “I assume it would be a rather lousy thermal situation in [insert building space here]. Let’s see whether that shows up in the model.” As it turns out, those assumptions usually do show up, along with the discovery of other spatial performance pitfalls that are otherwise not easily diagnosed in large(ish) models. The two images accompanying this post are snaps from such an interactive question/answer session; the designer in question already suspected that certain spaces of his design were a tad problematic, something the other participants picked up on pretty quickly.
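
For readers curious what such a question/answer loop looks like in practice, here is a minimal sketch of the post-processing side; all zone names and numbers are invented for illustration, standing in for what would come out of an actual whole-building simulation run:

```python
# Hypothetical per-zone overheating hours (hours above a comfort limit),
# as they might be extracted from a whole-building simulation run.
overheating_hours = {
    "atrium": 412,
    "south_office": 655,
    "north_office": 120,
    "meeting_room": 318,
}

# The designer's prior suspicion, stated before looking at the results.
suspected_problem_zones = {"south_office"}

threshold = 300  # flag zones exceeding this many overheating hours

flagged = {zone for zone, hours in overheating_hours.items() if hours > threshold}

print("Flagged by the model:     ", sorted(flagged))
print("Suspected beforehand:     ", sorted(suspected_problem_zones))
print("Confirmed suspicions:     ", sorted(flagged & suspected_problem_zones))
print("Surprises (not suspected):", sorted(flagged - suspected_problem_zones))
```

The interesting output is usually the last line: the spatial pitfalls nobody suspected before the mapping was put on the table.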

[Image: small_IMAG5448]

At every conference I’ve attended over the last two years, there has been a real glut of genetic/algorithmic performance optimization papers that claim to (semi-)automatically find various “optima” in a given design space, but few that take into account actual design thinking, which is still (and imho always will be) driven by the semi-heuristic knowledge base and experience of the designers involved. It always left me wondering: once an “optimum” reveals itself, how robust is it really, and to what degree do the people driving the machine actually understand why something is now “optimal”? As I wrote in an older paper for BS13, design thinking is (or at least can be..) holistic, expects the unexpected, and accounts for common-sense performance modifiers that are not necessarily encoded in automated optimization procedures. Close the louvers, break performance; can your model close the louvers? The designer can imagine them closed, with the windows open, and disgruntled employees overriding the heating setpoints (maybe not the last bit, but we’ve all been there). What would happen then? The more performance domains interact, the trickier the whole situation becomes.

I’d argue that what needs to be most robust is the mental model of performance interactions, something that can only be reinforced by properly re-representing domain baselines and imagining a plethora of scenarios that might throw them into chaos. For once you have humans in the mix.. not all is linear. (go back in time to the previous class impressions post)
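
To make the robustness question concrete, here is a toy sketch of stress-testing an “optimum” against exactly the occupant behaviours mentioned above. The energy function and all its numbers are entirely invented; in reality that function would be a call into a full simulation engine:

```python
import itertools

# Hypothetical annual-energy model of an "optimized" design variant.
# The formula and coefficients are made up purely for illustration.
def annual_energy(louvers_closed: bool, windows_open: bool,
                  setpoint_override: float) -> float:
    base = 100.0  # kWh/m2a in the baseline scenario the optimizer saw
    if louvers_closed:
        base += 25.0  # lost daylight/solar-control benefit
    if windows_open:
        base += 10.0  # uncontrolled ventilation losses
    base += 4.0 * setpoint_override  # each K of override costs energy
    return base

# Enumerate occupant-driven scenarios the optimizer never considered.
scenarios = itertools.product([False, True],    # louvers closed?
                              [False, True],    # windows open?
                              [0.0, 1.0, 2.0])  # setpoint override in K

baseline = annual_energy(False, False, 0.0)
for louvers, windows, override in scenarios:
    result = annual_energy(louvers, windows, override)
    print(f"louvers_closed={louvers!s:5} windows_open={windows!s:5} "
          f"override={override:+.1f}K -> {result:6.1f} kWh/m2a "
          f"(+{result - baseline:.1f} vs. optimum)")
```

The point is not the numbers but the shape of the exercise: if a handful of perfectly plausible occupant moves wipes out the optimization gains, the “optimum” was a property of the scenario, not of the design.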

[Image: small_IMAG5402]