From Intuition to Simulation

by Patrik Schumacher

This essay is concerned with the possibility of a scientific approach to architectural design via the simulation of a design’s social functionality as a key ingredient of optimisation via genuinely architectural, generative design processes. More precisely, what is posited here are design processes using evolutionary algorithms that use agent-based life-process simulations with social interaction frequencies as success measures to optimize social functionality. This methodology is currently being developed here at the University of Applied Arts, under the author’s leadership. The research is conducted within the framework of a three-year funded research project involving a small team of researchers and PhD candidates: Robert Neumayr (team leader), Daniel Bolojan, Josip Bajcer, and Bogdan Zaha. In parallel, a research team at Zaha Hadid Architects in London works towards the same, shared end: Tyson Hosmer (team leader), Soungmin Yu, Sobitha Ravichandran and Mathias Fuchs.
During the 15 years I have been teaching design studio with Zaha Hadid at the University of Applied Arts in Vienna, our focus was on what might be called formal research, i.e. the expansion of the formal repertoire available to architects in their design efforts. This research was conducted by exploring new tools, concepts and formalisms and then applying them to architectural design projects. The results of this design research can be viewed in two book publications: the first, ‘Total Fluidity’ (i), presents the work from 2000–2010, and the second, ‘Fluid Totality’ (ii), presents the work from 2010–2015. In the hands of designers tackling architectural design briefs, formal repertoires become problem-solving repertoires, and their problem-solving capacity and versatility is “tested” via the design output.

It should be a priori clear that an expanded repertoire, by giving designers more degrees of freedom and more problem-solving moves, increases the sheer possibility of finding a better solution, because the search space has increased. If we take structural design via topology optimisation as an example, it is clear that a solution search within a restricted formal repertoire, like a minimalist insistence on rectilinear geometry and seriality, will be inferior to a search within a more expanded repertoire that allows the members of a framework to take any direction and where continuous differentiation supplants serial repetition of elements. Similar evolutionary optimisation approaches have been defined for circulation networks and layout patterns based on schedules of accommodation and adjacency requirements. Here too, a larger search space, e.g. street networks allowing for diagonals rather than remaining confined to grids, is advantageous. All these optimisation tools work via generate-and-test cycles, whereby the testing step requires the application of a success measure that operates as a fitness function. The generate step might be random, or constrained by a heuristic that offers a better chance to generate good and increasingly better results. Even in intuitive design processes, where “success” is less easily defined by a simple measure, it seems plausible to assume that an expanded repertoire offers a greater chance of finding a better solution than is likely to be found with a restrictive repertoire. However, this advantage presupposes that here, too, some guiding functional feedback operates during the design process, or at least a way of appraising the probable social functionality of the completed design proposal. Otherwise design remains mere random play with forms. So what we need is a way to appraise the design’s social performance relative to alternative design options. The difficulty of formalising and operationalising such an appraisal has led to the notion that good architectural design belongs to the realm of the ineffable. This state of affairs is highly unsatisfactory. The research work presented here aims to conquer this realm of the ineffable and bring it into the ambit of rational discourse.
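To make the generate-and-test logic concrete, here is a minimal sketch in Python of such a cycle. The function names and the abstract fitness function are illustrative assumptions, standing in for whatever success measure a given optimisation employs; they are not part of any specific tool discussed in this essay.

```python
import random

def evolve(seed_designs, mutate, fitness, generations=50, population=20):
    """Generic generate-and-test loop: vary candidate designs, score them
    with a fitness function (the success measure), and keep the fittest."""
    pool = list(seed_designs)
    for _ in range(generations):
        # Generate: produce variants of the current candidates.
        offspring = [mutate(random.choice(pool)) for _ in range(population)]
        # Test: score every candidate with the fitness function.
        scored = sorted(pool + offspring, key=fitness, reverse=True)
        # Select: the best performers seed the next cycle.
        pool = scored[:population]
    return pool[0]

# Illustrative use: 'mutate' would perturb a parametric design model, and
# 'fitness' would, in the methodology proposed here, be a simulated
# social-functionality score such as interaction frequency.
```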

Methods of Design Appraisal

We can distinguish four ways to come to such an appraisal: intuitive evaluation, theoretical evaluation, simulation-based evaluation, and measurement. The intuitive evaluation relies on the designer’s overall sense of satisfaction with a given solution over the alternatives he/she has been trying. Alternatives might have “problems” fitting area requirements, adjacencies or legible circulation routes into a given envelope or formal arrangement. On simple counts like solving area requirements, adjacencies and circulation paths, it is prima facie obvious enough that a more expanded design repertoire like parametricism should be superior to a more restricted repertoire like minimalism. Parametricism gives the designer more tools and moves to solve the requirements. That shifting or shaping some volumes outside of a given formal straitjacket might solve a given area problem can often be readily observed and read off the evolving design model. The same might hold true for design moves that improve the circulatory functionality of a plan, e.g. by adding a shortcut. The intuitive appraisal, if it is to home in on a more comprehensive notion of social functionality, involves using the visualisations of the design to imagine approaching, entering and using the building, and further to imagine the life process of the building. The designer performs a mental quasi-simulation. Conscious thought in general can be characterized as a form of simulation. Of course these simulations are very volatile, rely on prior experience of and extrapolation from similar spaces, and deliver no quantitative information. In the case of complex designs, visualisations like renderings, animations and VR experiences allow a good intuitive evaluation of the legibility of the space, although the designer might be deceived here by his/her familiarity with his/her own design.

Theoretical evaluation operates via generalisations that pertain to the comparison between formal repertoires or styles. The above insight, namely that a more expanded design repertoire like parametricism offers a superior problem-solving tool set in comparison to a more restricted repertoire like minimalism, is a valid theoretical generalisation that should lead designers to opt for the expanded repertoire of parametricism as the underlying default condition of all their designs, i.e. it should lead them to adopt this style. However, this insight is a second-order evaluation, a meta-evaluation that depends on experience with prior, concrete evaluations, verified either via intuition, simulation or measurement. Theoretical evaluations can home in on more specific generalisations. An example is the rather unproblematic claim that parametric variation and customisation, for instance in the case of residential developments, offer social functionality advantages over mass repetition, especially in a society where lifestyle and income differentiations proliferate. However, this general insight says nothing about the concrete variants and customisations that a given project would require, i.e. the semblance of enhanced functionality in general might coincide with actual dysfunctionality. Another theoretical generalisation suggests that the complexity and dynamism of contemporary high-value corporate work processes call for open but differentiated work environments that can be congenially articulated by the contemporary architectural concepts of ‘field conditions’, offering continuous differentiation via gradients as well as spatial overlap or interpenetration of zones, suitable to express the intricacies of the social relations at play. The question remains whether the concrete design offers the right overlap and the right differentiation. Another, related generalisation is the claim that the use of curves and blobs allows for the preservation of legibility in the face of this increasing complexity of articulation. These theoretical generalisations can guide the evaluation of a given design project with respect to the pertinence of its general formal or morphological features. However, the theory-based appraisal can only support the judgement that the design employs concepts and morphologies that are in principle congenial to its social functionality task, without being able to certify that the concrete deployment of these concepts and morphologies in the given design is successful. It is precisely this concrete evaluation of a given design (with its unique use of various general features) that we expect from simulations.

Both simulation-based evaluations and measurements seem to offer quantitative evaluations. What is the difference between measuring the performance of a design and running a simulation to test the design? The key difference is that the former is much more direct and reliable, and offers absolute quantitative results. Simulations, in contrast, at best offer a relative appraisal of a design’s probable performance in comparison to the probable performance of another design. The reason that underlies this difference from direct measurement is that while measurement is based on a relatively simple algorithm without any uncertain presumptions, simulations work via complex models that rely on many uncertain assumptions. To use an example from the discipline of structural engineering: we can measure the total tonnage of a proposed structural design if the design has been sufficiently detailed or specified. This is a direct measure that can be used in a comparative evaluation but that also delivers an absolute value to contractor and client. In contrast, the structural stability of the structural design can be simulated via a complex model only on the basis of many ultimately uncertain assumptions about steel grades, their load capacities, the proper execution of connection details etc. That is why the results of such simulations are not considered fully reliable and must be bolstered by rather high safety margins. While many measurements are straightforward, like GFA, measurements can sometimes also be based on more complex algorithms. This is the case, for instance, with the space syntax measure of ‘integration’. Space syntax operates with the tools of network analysis, and these mathematically defined measures can be very complex, but they are still mere “measurements” as defined here. However, when space syntax theory makes claims about the probable social usage of the plan configurations so measured, it goes beyond reporting the results of a measurement. For instance, space syntax theory claims that street segments or space cells with a higher integration measure within a given network attract more circulation than less integrated street segments or space cells. This theoretical generalisation – which is ultimately an empirical question – depends on many ceteris paribus assumptions that usually do not hold or cannot be ascertained in concrete design examples. Here only relative advantages can be ascertained, and no absolute values. However, the integration value itself is a measurement delivering an absolute value. This combination of measurement with theory seems to offer something of comparable value to what we would expect from a simulation concerning social functionality. However, these are not simulations as defined here. As examples of social functionality simulations proper we can cite agent-based crowd simulations, and it is these types of simulations and their information value for design appraisals that we will investigate in the remainder of this article.
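As an illustration of integration understood as a mere measurement, the following sketch computes a simplified integration-like value for a small spatial network using the networkx library. The normalisation is deliberately crude (actual space syntax software uses more refined measures such as real relative asymmetry), and the graph is a made-up toy example.

```python
import networkx as nx

# Toy justified graph: nodes are convex spaces or street segments,
# edges are direct connections between them (illustrative names only).
G = nx.Graph([("entry", "hall"), ("hall", "office"), ("hall", "meeting"),
              ("office", "meeting"), ("meeting", "terrace")])

def integration(graph, node):
    """Simplified integration: inverse of mean topological depth from 'node'
    to all other spaces. This is a measurement of the configuration itself,
    not yet a prediction of social usage."""
    depths = nx.shortest_path_length(graph, source=node)
    mean_depth = sum(d for n, d in depths.items() if n != node) / (len(graph) - 1)
    return 1.0 / mean_depth

for space in G:
    print(space, round(integration(G, space), 3))
```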

Agent-based Life-Process Simulations

The simulation methodology developed under the research agenda ‘Agent-based Parametric Semiology’ (iii) is conceived as a generalisation and corresponding upgrade of the kind of crowd simulations currently offered by traffic and engineering consultants concerned with evacuation, circulatory throughput, the identification of bottlenecks and zones of congestion etc. These crowd simulations conducted by engineers operate at the border between architecture and engineering. The demarcation criterion the author has proposed elsewhere (iv) suggests that architecture is in charge of the built environment’s social functionality, while our collaborating engineering consultants are concerned with the subsidiary technical functionality of the built environment, solving the technical problems posed by the spatial solutions architects offer to facilitate the social functionality desires of clients and their end users. The problem of circulatory optimization sits on the edge between the two disciplines. Mere physical throughput, which treats users as physical bodies and simulates crowds like a physical fluid, might be regarded as an engineering question. The demarcation criterion between engineering and architecture can be rephrased in terms of how each respective discipline conceives of the end users its design considerations cater for.

The engineering disciplines are concerned with users only as physical and physiological bodies of a certain size and weight, with certain physiological requirements in terms of temperature, air change and lux levels. In contrast, architectural design considerations are concerned with socialized actors who orient themselves within a semantically coded environment and read the designed spaces as communications. The core competency of architecture is the ordering of social processes via designed environments with a differentiated panoply of designated zones, dedicated to different social situations, activities and interaction scenarios, each with its own eligibility conditions and participation protocols. Accordingly, the simulations that must be developed to get a handle on this ordering contribution of the designed environments, and their facilitation of various desired social interaction scenarios, will have to be quite a bit more elaborate than the current crowd models testing circulation processes.

The most obvious difference is the expansion of the menu of action types that must be considered beyond walking and standing: this includes various modes of walking and standing, like pacing, strolling, stop-and-go, lingering or, more specifically, window shopping etc. Further activities or interaction situations to be considered, for instance within the domain of office life, are concentrated work, team work, formal meetings, formal presentations, greetings, casual chats and more in-depth socialising.
The second major difference of these architectural simulations is that the agent population should no longer be homogenous. Rather, it should be socially differentiated. For instance, within the domain of corporate office life, the agent model should distinguish various types of agents in accordance with their organisational role, rank, departmental or team affiliation etc. Because of this expansion of action types, including multi-user interaction types, and because of the differentiation of the agent population into various types of social roles, the author proposes to talk about life-process simulations rather than crowd simulations.
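A minimal sketch of such a socially differentiated agent population is given below; the roles, ranks and attributes are illustrative assumptions, not the schema of the actual research models.

```python
from dataclasses import dataclass
import random

@dataclass
class OfficeAgent:
    """Socially differentiated agent: behavioural parameters vary with role,
    rank and team affiliation instead of all agents being identical."""
    role: str          # e.g. "designer", "project_lead" (hypothetical roles)
    rank: str          # e.g. "junior", "senior"
    team: str          # team affiliation
    sociability: float = 0.5   # propensity to initiate casual chats

def spawn_population(n):
    roles, ranks, teams = ["designer", "project_lead"], ["junior", "senior"], ["t1", "t2", "t3"]
    return [OfficeAgent(random.choice(roles), random.choice(ranks),
                        random.choice(teams), random.random()) for _ in range(n)]

population = spawn_population(50)   # heterogeneous population rather than a uniform crowd
```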

The third significant difference of architectural life-process simulations in comparison to the engineer’s crowd simulations is the designation dependency of the agents’ behaviours. Within the architect’s perspective, in accordance with his responsibility and core competency, the designed environments are always semantically encoded and zoned in the sense of specifically designated areas and subareas. The general formula of order – a place for everything and everything in its place – applies here too, in the case of the spatial ordering of social processes. For the agents this implies that they carry a whole stack of behavioural rule sets and, depending on where they are or which threshold they cross, a different rule set is activated and applies. When various actors enter the same designated zone or space, they should be able to infer the designation from the relative position of the space in the overall matrix of spaces as well as from morphological clues. They thus recognize the kind of social situation possible here, and all participants are, as it were, on the same page, with aligned expectations and rule sets. Due to this semiological inscription of designations and behavioural rules, these agent-based models might be termed semiological models or simulations. Only within such a spatially differentiated and semiologically encoded order can specific social purposes be readily accomplished.
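The following sketch illustrates this designation dependency in the simplest possible terms: a hypothetical mapping from designated zones to behavioural rule sets, swapped in whenever an agent crosses into a new zone. The zone names and rules are invented for illustration.

```python
# Hypothetical mapping from designated zones to behavioural rule sets.
# Crossing a threshold into a new zone activates the corresponding rule set.
ZONE_RULES = {
    "reception":      {"may_linger": True,  "may_work": False, "greeting_expected": True},
    "open_workzone":  {"may_linger": False, "may_work": True,  "greeting_expected": False},
    "meeting_alcove": {"may_linger": True,  "may_work": True,  "greeting_expected": True},
}

class Agent:
    def __init__(self, name):
        self.name = name
        self.rules = None

    def enter_zone(self, zone):
        # The agent infers the designation of the zone it has entered and
        # switches to the behavioural rule set associated with that designation.
        self.rules = ZONE_RULES[zone]
        return self.rules

visitor = Agent("visitor")
print(visitor.enter_zone("reception"))   # rule set aligned with the reception situation
```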

The fourth aspect that distinguishes these architectural-semiological models from the circulatory crowd models is the following: congenial with contemporary cultural conditions, the underlying presumption of these models is that agents are largely self-directed, rather than running on pre-scheduled tracks, and self-select their actions and the interactions they participate in. These selections are guided by multi-dimensional, dynamic utility functions that can utilise contingent opportunities encountered within the environment these agents browse. These utility functions are implemented in the decision trees that control the agents’ actions on the basis of internal states due to prior actions and environmental offerings perceived. Within the preferred societal domain investigated by this research project – the domain of corporate working life – the increasingly widespread use of non-territorialised, activity-based office landscapes is most congenial to the assumptions and capacities of the life-process simulations promoted here. It is under these rather fluid conditions that the necessity of agent-based simulations becomes most compelling, as an intuitive grasp of the effects of various possible configurations becomes quite obviously impossible. Without simulations the architect is here condemned to ignorance, and thus to incompetence, and the more so the more complex the overall collaboration process is.

This leads us to the fifth significant difference, namely that the focus shifts from the aggregation of parallel individual actions to the simulation of social interactions. This extension of the simulation scope from individual action to social interaction is to some extent implied in all or most architectural programme domains. However, it is most pronounced in the case of corporate working life, as well as in university life or in the case of conferences etc. This feature is less pronounced in institutions like museums, in retail environments or in residential projects, where individual lives and actions run in parallel rather than integrating into complex, dynamic social scenarios. It is the interesting challenge of these latter social scenarios, where global interaction patterns must be generated from individual, rule-based interactions, that has led the author to focus the research project on the domain of working life, an inherently collaborative life-process proceeding via integrated, synergetic lives. In contrast to circulation processes, which are effectively zero-sum competitive processes, the processes of facilitating encounter and communication density are essentially non-zero-sum interaction processes.

The sixth important differentiating aspect of this new methodology of generalized life-process simulations is the fact that there can be no single generic agent model that could be transferred from project to project. In contrast to current crowd models concerned with mere circulation, life-process models need to be tailor-made for each specific domain of life and for each type of institution, and at least heavily adapted and customized for each specific client within a given domain of life, especially with respect to corporate work life. The model should attempt, as far as available information permits, to model the actual user group, its differentiation into status groups and roles, and even its established network relations, as might be retrieved via questionnaires or various electronic communication records. This customisation of the agent model serves to enhance the veracity of the simulation, as well as the information richness of the simulation results and the related ability of the methodology to home in on particular success criteria that might involve interaction facilitation between particular status groups or roles, or even the empowerment and accessibility of important individuals.

Here is the summary list of innovations that the research group is working on and that must be delivered by a generalized and semiologically informed life-process modelling:

1. expansion of action/behaviour types
2. differentiation of agent population
3. designation dependency of behaviours
4. agent decisions via dynamic utility functions
5. focus on social interactions and event scenarios
6. domain tailoring and client customization

All these differentiating features imply challenges and necessary complications, or, positively phrased, necessary sophistications for this much more ambitious modelling and simulation effort. The research team is currently building up increasingly large, differentiated and sophisticated agent populations using the Unity game development software as a base system, with a substantial amount of additional original coding. The development work concerning our agent populations benefits from a technology transfer from the game development industry, both with respect to basics like action animation and simple tasks like pathfinding, obstacle avoidance etc., and with respect to the more complex decision-making processes that need to be modelled, termed ‘game AI’. Sophisticated games populate their gaming scenes with increasingly versatile, intelligent, tactical, autonomous, seemingly spontaneous, life-like agents.

As already hinted at above, work life entails multiple agent types and, more importantly, many different action types, each dependent on multiple conditions. This implies the requirement for the agent system to handle a large number of input factors for each agent and to use these to select from a large number of possible actions at each moment in the ongoing life-process of the model. This situation is too complex to work with the simplest and most widespread game AI methodology of Finite State Machines (FSM). Finite State Machines work by defining states and conditions for transitioning between states. They are very open-ended and flexible, being able to transition from any state to any other state by specifying conditions. However, this approach lacks a systematic ordering principle that could help game developers to systematically work through the complexity of human decision making. Beyond a certain complexity threshold it becomes too difficult for AI developers to map out all scenarios. The remedy for this shortcoming was the methodology of Hierarchical Finite State Machines, which reduce complexity by separating certain behaviours into substates. This led to the idea of organising tasks in tree-like structures, so-called Behaviour Trees, currently the dominant technique used by the game development industry. This methodology orders all actions into a hierarchy with a (left-to-right) priority sequence at each level. The decision of which action to execute is evaluated at a certain tick rate. At each tick the whole tree is walked through until conditions are met that trigger an action. Complex Behaviour Trees can allow for concurrent actions as well as for sequences of behaviours. For large behaviour trees, the computational cost of evaluating the whole tree becomes prohibitive. Subtrees can then be introduced so that a subtree can continue executing, without invoking the whole tree, until some exit condition is met. However, this brings back the complexities of Finite State Machines.
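For readers unfamiliar with the technique, the following compressed Python sketch shows the basic mechanics of a behaviour tree: composite nodes walk their children in left-to-right priority order at each tick until a leaf succeeds or fails. This is a textbook-style illustration, not the project’s Unity implementation, and it omits the ‘running’ state that real implementations use for actions spanning several ticks.

```python
SUCCESS, FAILURE = "success", "failure"

class Leaf:
    """Leaf node wrapping a condition check or an action."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, agent):
        return SUCCESS if self.fn(agent) else FAILURE

class Sequence:
    """Succeeds only if all children succeed, evaluated left to right."""
    def __init__(self, *children):
        self.children = children
    def tick(self, agent):
        for child in self.children:
            if child.tick(agent) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Returns on the first child that succeeds; children are priority-ordered."""
    def __init__(self, *children):
        self.children = children
    def tick(self, agent):
        for child in self.children:
            if child.tick(agent) == SUCCESS:
                return SUCCESS
        return FAILURE

# Illustrative tree: join a chat if a colleague is nearby, otherwise return to work.
tree = Selector(
    Sequence(Leaf(lambda a: a["colleague_nearby"]),
             Leaf(lambda a: a.update({"action": "chat"}) or True)),
    Leaf(lambda a: a.update({"action": "work"}) or True),
)
agent = {"colleague_nearby": False}
tree.tick(agent)
print(agent["action"])   # -> "work"
```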

The latest game AI methodology that is becoming more widespread in the game development industry is one employing utility functions, so-called ‘Utility AI’. Instead of switching between a set of finite states based on conditioning via triggers, or moving through a whole decision tree until trigger conditions are met, in Utility AI agents constantly assess the actions available to them in their current environment and assign a utility score to each of those actions on a continuous scale. The utility system then selects the behaviour option that scores highest amongst the currently available options, based on the circumstances. Circumstances are both external and internal states, the latter depending on what went on in the game or simulation so far, i.e. the urgency of a desire, and thus the utility of the related action, recedes or drops after the action has been successfully completed and the desire satisfied. The basic laws of subjective economics, like the law of diminishing marginal utility, can thus be implemented here. The normalized utility functions bring the most diverse and otherwise incommensurable measures into direct comparison. Each choice of action is relative rather than based on absolute conditionals. These are temporary prioritizing decisions, based on internal states like desires, their urgency and available energy levels, as well as on opportunities afforded by environmental offerings in proximity to the current location. There can still be absolute conditionals placed in front, i.e. the various designated zones pre-condition the available action menu. Utility AI can take any group of action options, destination objects or interaction chances and score them. This makes the methodology very versatile for decision-making. This approach comes closer to modelling emergent behaviour and realistic decision-making under incomplete information.
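A minimal sketch of such a utility system is given below. The desires, scoring curves and numbers are invented for illustration and do not reflect the calibrated agent models of the research project.

```python
def score_actions(agent_state, available_actions):
    """Utility AI core: score every currently available action on a
    normalized 0..1 scale and pick the highest-scoring one."""
    scored = {name: utility(agent_state) for name, utility in available_actions.items()}
    best = max(scored, key=scored.get)
    return best, scored

# Hypothetical internal states (desires and their urgency) plus one external condition.
state = {"thirst": 0.7, "fatigue": 0.2, "social_need": 0.5, "coffee_bar_nearby": True}

actions = {
    # Utility of getting coffee rises with thirst, but only if the bar is in reach.
    "get_coffee":     lambda s: s["thirst"] * (1.0 if s["coffee_bar_nearby"] else 0.0),
    # Utility of chatting tracks the unmet social need, damped by fatigue;
    # after a chat the need, and hence this utility, would drop.
    "join_chat":      lambda s: s["social_need"] * (1.0 - s["fatigue"]),
    # Returning to the desk wins when no desire is particularly urgent.
    "return_to_desk": lambda s: 1.0 - max(s["thirst"], s["social_need"]),
}

best, scores = score_actions(state, actions)
print(best, scores)   # the agent picks the highest-utility action given its current state
```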

This technology transfer from the gaming industry delivers thinking tools, formalisation strategies and coding techniques for the elaboration of sophisticated autonomous agents capable of navigation and interaction within semantically charged environments. However, the specification of the appropriate action types and of realistic decision rules for the domain of life to be simulated had to be elaborated without readily available sources of knowledge. The members of our development team have to reflectively retrieve and rely on their own knowledge and experience of office life and working environments. Thankfully the domain of study – creative industry work environments – is a domain we all know from our own daily life experience. The task thus is to make our intuitive competency as well-socialized users of office environments and participants in corporate life explicit, and then to formalize it into behavioural rule sets. The same kind of reflection is required to tease out plausible desires, goals and quantified utility functions. Critics might question the veracity of these constructions and worry about problematic degrees of subjectivity. However, some of these competencies are fairly obvious, and we are here in a similar position to grammarians who use their own language competency as a basis and testing bed for the formalisation of the rules of grammar. Similarly, micro-economics is based on a priori assumptions about utility-maximizing choices under constraints that are corroborated by reflection on our own economic choice competency rather than via empirical data. This does not mean that empirical observation should be excluded. It is indeed one of the ambitions and components of the research project to include empirical data collection as a means for model calibration. However, the efforts that must go into this are considerable, and the research should not have to wait for this.

The agent populations developed are being released into systematically varied designed environments. The resultant interaction events are then recorded to evaluate these environments with respect to defined criteria of success. The domain explored is the domain of corporate office life. We run two research strands in parallel: one uses the London office and organisation of Zaha Hadid Architects as a case study, the other builds up an increasingly complex, generic model. The ZHA case study focusses on the reception area and open social space within our offices at 10 Bowling Green Lane. The scene includes various settings, including the reception desk, a long self-serve coffee and refreshment counter, several circular stand-up tables, several circular sit-down coffee tables, a small meeting alcove, a large open meeting table, and plenty of free-standing and circulation space. Two exterior and four interior doors lead into this space. This space was selected for its versatile settings and its openness encouraging dynamic interaction patterns, offering plenty of opportunities for chance encounters and spontaneous group formation, although appointments and scheduled meetings are also in the mix, inviting spontaneous participation. The generic scene is larger and includes various formal working zones and meeting spaces as well as social spaces.

The research focusses on communicative interaction events. These are the events we record, classified according to their duration, the number of participants and the types of participants, i.e. their rank and whether the interaction was an inter- or intra-team event. The final success criteria that determine the evaluation of alternative office designs would be specified in accordance with the client’s particular change requirements or management agendas.
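As a sketch of the kind of event record and success tally this involves, the following snippet defines a hypothetical interaction-event structure and one possible success measure; the field names and the cross-team weighting are placeholders for the client-specific criteria mentioned above.

```python
from dataclasses import dataclass

@dataclass
class InteractionEvent:
    """One recorded communicative interaction event from a simulation run."""
    duration_s: float     # how long the interaction lasted
    participants: list    # agent ids
    ranks: list           # rank of each participant
    teams: list           # team affiliation of each participant
    cross_team: bool      # inter-team (True) or intra-team (False)

def cross_team_contact_rate(events):
    """Example success measure: share of interaction time that bridges teams.
    A client brief might instead weight ranks or specific roles."""
    total = sum(e.duration_s for e in events)
    bridged = sum(e.duration_s for e in events if e.cross_team)
    return bridged / total if total else 0.0

log = [
    InteractionEvent(120, ["a1", "a2"], ["associate", "associate"], ["t1", "t1"], False),
    InteractionEvent(300, ["a3", "a4", "a5"], ["director", "associate", "associate"], ["t1", "t2", "t2"], True),
]
print(cross_team_contact_rate(log))   # -> about 0.71 under this illustrative weighting
```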

The question of the realism of the simulations is a difficult one. Empirical experimentation and calibration will eventually be required to give confidence here. That will be the role of the ZHA test bed, where observations and sensor data collections are under way. However, we should also realize that absolute success measurements are not necessary to achieve a comparative evaluation and selection of the best design alternative. The presumption that relative performance advantages have veracity, even if absolute performance measures are false, seems plausible enough to merit investing in the simulation methodology, especially if the alternative is the designer’s absolute ignorance and incompetence. Finally, the advantage of the Unity-based simulations is that they also deliver immersive visualisations that can be used as an intuitive plausibility and desirability check by both designers and clients.

References:

Boero, R.; Morini, M.; Sonnessa, M.; Terna, P., Agent-based Models of the Economy: From Theories to Applications, Palgrave Macmillan, London 2015.
Chen, Shu-Heng, Agent-Based Computational Economics, Taylor & Francis, New York 2016.
Millington, Ian; Funge, John, Artificial Intelligence for Games, Taylor & Francis, New York 2009.
Rabin, Steve, Game AI Pro 3: Collected Wisdom of Game AI Professionals, Taylor & Francis, New York 2017.
Squazzoni, Flaminio, Agent-Based Computational Sociology, Wiley.

(i) Total Fluidity: Studio Zaha Hadid, Projects 2000–2010, University of Applied Arts Vienna, Edition Angewandte, Vienna 2011.
(ii) Fluid Totality: Studio Zaha Hadid, Projects 2010–2015, University of Applied Arts Vienna, Edition Angewandte, Birkhaeuser, Basel 2015.
(iii) Patrik Schumacher, Advancing Social Functionality via Agent-Based Parametric Semiology. Published in: AD Parametricism 2.0 – Rethinking Architecture’s Agenda for the 21st Century, Editor: H. Castle, Guest-edited by Patrik Schumacher, AD Profile #240, March/April 2016.
(iv) Patrik Schumacher, The Autopoiesis of Architecture, Vol. 1: A New Framework for Architecture, John Wiley & Sons, London 2010.
