Provocation 1: Data and Design

… it appears from Trobriand magic that these people continually exhibit a habit of thinking that to act as if a thing were so will make it so. In this sense, we may describe them as semi-Pavlovians who have decided that “salivation” is instrumental to obtaining “meat powder.”

            – Gregory Bateson, “Social Planning and the Concept of Deutero-Learning”

Approximately one year ago, I got into a debate with a classmate of mine about the importance of information (data) for behavioral change (or action). My classmate’s interpretation of information was decidedly instrumental: he proposed that the revelation of specific relationships was sufficient to induce behavioral shifts. At the time, I rebutted that there is a fundamental disconnect between knowledge and experience, and hence between understanding and action, and that this is perhaps most apparent in systems of psychological dissonance. For example, confronted with the factual correlation between smoking and increased chances of contracting certain kinds of cancers, people still smoke – the decision is not intellectual (even though some smokers will construct elaborate intellectual models to justify their decision to smoke); rather it is behavioral, and there is a clear division between the two.

What my classmate was advancing was the same kind of pseudo-Pavlovian belief that Bateson attributes to Trobriand magical practices (knowledge/perception = action/reality). Our debate then extended to the importance of data in general, as a determinant of both understanding and behavior. My classmate argued that data (as an abstraction) was always useful, and that the more data, the better, while I contended that data without context is essentially meaningless, and that an unmanageable volume of data sharply diminishes its value as actionable content; without the means to parse this data, and without a context to frame it, it is simply an opaque matrix of irrelevant measurements.

I would now extend this critique. To quote Bateson again:

“… ‘data’ are not events or objects but always records or descriptions or memories of events or objects.  Always there is a transformation or recoding of the raw event which intervenes between the scientist and his (sic) object.  The weight of an object is measured against the weight of some other object or registered on a meter.  The human voice is transformed into variable magnetizations of tape.  Moreover, always and inevitably, there is a selection of data because the total universe, past and present, is not subject to observation from any given observer’s position.

“In a strict sense, therefore, no data are truly ‘raw,’ and every record has been somehow subjected to editing and transformation either by man or by his instruments.” (Bateson, Introduction to Steps to an Ecology of Mind, The University of Chicago Press, Chicago, 2000, xxv–xxvi)

Of course, in short form, this is a re-interpretation and articulation of the observer effect (popularly, if imprecisely, associated with Heisenberg’s uncertainty principle): through the act of observation, the scientist influences the results of his/her experiment, perhaps most importantly by specifying the research parameters for the experiment in question, and by designing the experiment in such a way as to illuminate only those attributes for which the scientist is looking. In other words, the ends of the experiment are implied by its means.

Thus, the representational method employed by the researcher also implies a level of abstraction that diverts attention away from events and objects, and toward the system of representation used to describe them. This further implies a level of disconnection between the object or event and our understanding of either, as such (as well as a potential forcing of the subject into pre-defined constraints). While this has obvious bearing on the arguments advanced by Andrew Pickering in The Cybernetic Brain with reference to the ontological differences between cybernetics and the traditional sciences, I would like to leave it as a background condition for the provocation that follows.

On May 25, 2011, I participated in a meeting of the Situated Technologies Research Group at the University at Buffalo – a small, intimate affair that included students (past and present) and faculty. During the course of the meeting, the issue of applying data as a means of form generation was raised, and Omar Khan, the current Chair of the Department of Architecture at UB, drew a subtle distinction between what can be called variable interpolation and parametric design. His general comments, I think, are worth repeating here, obviously as paraphrase rather than word-for-word transcription.

Prof. Khan’s argument was that the essential difference between these two modes of data-based form generation lies in the intentional (or conceptual) application of data in the articulation of the final form. In variable interpolation, a pre-defined dataset – say, the amount of light entering a space over time, and its penetration within that space – becomes the essential driver of form in a one-to-one relationship. The strategy here would essentially be to plot a series of points within the room, accumulate lighting data (assume natural light) at these points, and loft a figure between them to generate a series of abstract representations that map directly to the lighting data at each point. Sadly, it is at this point, with a number of relatively “interesting” formal representations of natural lighting over time in hand, that many students stop, contented with the purely formal attributes of their “analysis.”
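To make the distinction concrete, here is a minimal sketch of the interpolative strategy in Python; the grid of sample points, the lux readings, and the lux-to-height scale are all hypothetical placeholders, not data from any actual experiment.

```python
# A minimal sketch of variable interpolation: lighting readings drive form
# in a one-to-one mapping. The grid, readings, and scale factor are all
# invented for illustration.

from typing import List, Tuple

def interpolated_surface(
    points: List[Tuple[float, float]],   # (x, y) sample points in the room
    lux_readings: List[float],           # measured light level at each point
    scale: float = 0.01,                 # arbitrary mapping of lux to height
) -> List[Tuple[float, float, float]]:
    """Map each lighting reading directly to a z-height: data equals form."""
    return [(x, y, lux * scale) for (x, y), lux in zip(points, lux_readings)]

# Hypothetical readings at four corners of a room; the "design" is nothing
# more than a re-plot of the dataset -- the representation stops at analysis.
grid = [(0.0, 0.0), (0.0, 4.0), (6.0, 0.0), (6.0, 4.0)]
lux = [320.0, 180.0, 540.0, 260.0]
print(interpolated_surface(grid, lux))
```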

In opposition to variable interpolation, Prof. Khan advanced systems of parametric design, in which data is given intentionality via limiting factors that constrain the form in meaningful ways. Assuming the same experimental conditions (natural lighting in a room), this design strategy might seek to generate optimal lighting conditions in the room over a period of time; thus the researcher applies various parameters to the experiment – say, maximum, minimum, and mean lighting conditions based on general conditions of comfort for a human subject. Using the same data that drove the formal generation in the interpolative model described above, the researcher/designer then extrapolates an optimal form (based on his/her parameters) that results in the desired lighting effects for the room. While the previous design is a formal interpretation of lighting data, the latter (parametric) design is a performative optimization of lighting conditions in the room, representing the meaningful application of data in reaching specific design conditions. While the constraints of the experiment are well-defined (a priori), the formal consequent is unpredictable – but more effective, given the constraints applied at the beginning of the experiment. A minimal sketch of this constrained search appears below.
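Again as a hedged sketch (the toy aperture model and comfort bounds are invented for illustration), the same data can instead parameterize a constrained search: the form – here reduced to a single design variable – is not specified in advance, only constrained.

```python
# A toy version of the parametric strategy described above: the same
# lighting data, now constrained by (hypothetical) comfort bounds, so that
# the search returns a performative optimum rather than a plot of the data.

from typing import List, Optional

def lighting_for_aperture(base_lux: List[float], aperture: float) -> List[float]:
    """Toy daylight model: interior lighting scales with window aperture (0..1)."""
    return [lux * aperture for lux in base_lux]

def satisfies_comfort(samples: List[float], lo: float = 150.0, hi: float = 500.0) -> bool:
    """Comfort constraints (invented values): every sample within bounds."""
    return all(lo <= s <= hi for s in samples)

def optimal_aperture(base_lux: List[float], steps: int = 100) -> Optional[float]:
    """Grid-search the design parameter; the form is constrained, not drawn."""
    best, best_mean = None, 0.0
    for i in range(1, steps + 1):
        aperture = i / steps
        samples = lighting_for_aperture(base_lux, aperture)
        mean = sum(samples) / len(samples)
        if satisfies_comfort(samples) and mean > best_mean:
            best, best_mean = aperture, mean
    return best

base_lux = [320.0, 180.0, 540.0, 260.0]  # same hypothetical readings as before
print(optimal_aperture(base_lux))        # -> 0.92 under these toy constraints
```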

Needless to say, I find the latter both more interesting and more useful in architectural design, but I would also argue that it is still significantly limited – not because of the design parameters introduced at the beginning of the project, but because of the mediation of the experience of lighting (an ephemeral condition) through systems of representation that are highly circumscribed. A design like this responds to the lighting conditions analyzed during the duration of the experiment, but it does not adapt to emergent conditions – the final design will be sufficient (ideally, most of the time), but not optimal all of the time. This leads to the ultimate provocation of this admittedly long post: the essential difference between responsive design and adaptive design.

If my reading of cybernetics is right (which may be up for debate), then the ultimate orientation of this science was not towards responsive behavioral systems (which are arguably simplistically causal), but rather towards adaptive behavioral models that exhibit emergent qualities which could not be predicted from the initial design constraints. What the formal resultant of parametric design implies is a direct means/ends relationship – data is used instrumentally to derive specific formal conditions that more or less match the design parameters. An ephemeral, shifting condition is translated into a relatively static one via representation and formal articulation. Conversely, in adaptive design, we are dealing with a direct correspondence with the existential conditions of the room in real time, which will (more often than not) result in a-formal systems based on emergent patterns.
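By way of contrast, here is an equally hypothetical sketch of the adaptive case: rather than freezing an optimal form a priori, the system closes the feedback loop at runtime and corrects toward a target as conditions drift. The sensor, target, and gain are invented for illustration, and are not drawn from any actual project.

```python
# A hypothetical sketch of adaptation as feedback: sense, compare, correct.
# The drifting "sensor", target lux, and gain are invented for illustration.

import random

def read_daylight() -> float:
    """Stand-in for a live exterior lux sensor: conditions drift unpredictably."""
    return 400.0 + 150.0 * random.uniform(-1.0, 1.0)

def adaptive_louver(target_lux: float = 350.0, gain: float = 0.001, steps: int = 10) -> None:
    aperture = 0.5  # initial louver position (0 = closed, 1 = open)
    for t in range(steps):
        measured = read_daylight() * aperture       # interior light right now
        error = target_lux - measured               # feedback: compare...
        aperture = min(1.0, max(0.0, aperture + gain * error))  # ...then correct
        print(f"t={t}: measured={measured:.0f} lux, aperture={aperture:.2f}")

adaptive_louver()
```

The point of the contrast is that no final form is ever fixed here: the “design” is the loop itself, and its behavior over time is emergent rather than drawn.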

This is where the architect, embedded as he or she is in a discipline of formal articulation (compounded by the contemporary discourse of monumental, iconic design), runs into some difficulty. He or she is stuck not only in the cross-currents of the profession and of cultural perceptions pertaining to the profession, but also in a relationship with systems engineers in which he/she must either adopt the language of systems engineering (with all of its negative connotations of social engineering), cede ground to such engineers, or abandon the ambition of adaptive design altogether. The most famous (and perhaps most disastrous) attempts at designing a truly adaptive architecture – the Price/Littlewood/Pask Fun Palace, Yona Friedman’s Ville Spatiale, and arguably Stafford Beer’s VSM as it was realized (more or less) in Chile – all suffer (in certain respects) because of their ephemeral, open-ended nature. In fact, as Stanley Mathews argues in From Agit-Prop to Free Space: The Architecture of Cedric Price, it is precisely the open-ended nature of these structures, the inability to define precisely what they are, that has continually frustrated attempts to realize them, even where there has been significant initial public interest and support.

Some of the questions this raises are: In a discipline in which representation is such a fundamental aspect of realization, and in which form is considered the crucial determinant of “popular” perception, how do we advance a more abstract, emergent, and a-formal system of adaptive design? At what point does architecture (which already embodies strong, if purposefully obscured, aspects of both systems and social engineering) cede to either or both, or take a commanding role in the development of these sciences as they apply to human habitation? What role, if any, does parametric modeling play in generating adaptive architectural designs? And how can we move from a data-driven (representational) model of responsiveness to a more open-ended, ontological model of interaction and communication?

Of course, these questions do not exhaust the potential field of investigation that this provocation implies, but they represent a sincere attempt to begin with what I believe are at least some of the right questions. The goal of this article is to begin a debate around these questions, and around others posed by designers attempting to navigate similar fields of enquiry; I hope that more questions will be posted in response, as well as at least some interesting proposals for those already advanced.
