Artificial intelligence, and the opportunity to be genuinely intelligent ourselves

Photo by Nick Page on Unsplash

When I was an undergraduate, I once got up before dawn and made my way to the end of Southend Pier to watch the sun rise. This wasn't the kind of eccentric behaviour that gets you a reputation at architecture school, or not entirely, anyway. It was site analysis. A tutor had set us a project on the seafront and I wanted, in the slightly over-earnest way of an eighteen-year-old who has just discovered that architecture is about more than drawing, to understand what the place actually was. What it felt like at the margins of the day, when the light came in low across the estuary and the whole strange, flat, end-of-the-line quality of the place was at its most itself. That, we were being told, was what separated building design from architecture. Finding the Genius Loci. The spirit of the place.

I haven't done anything like it since I graduated, and I don't think I'm unusual in that.

So what has any of this got to do with artificial intelligence? Bear with me.

The conversation about AI and the built environment tends to be, at its most ambitious, essentially about efficiency. How do we produce more documents, faster? How do we process planning policy without reading all of it? How do we stop junior architects spending three days on a site area calculation that should take twenty minutes? These are reasonable questions, and in our studio we are already finding good answers to them. We use AI tools to help manage the structure and narrative of complex documents, keeping track of changes and the overall argument, a task that is genuinely error-prone when you're doing it in your head at eleven o'clock at night. We use them for research, cutting the time it takes to understand best practice on something like, say, the layout of a school within a mixed-use development from a day's reading to something considerably more manageable. And, if I'm honest, there is something quietly useful about having a tool that is always working at its best, which can smooth over the moments when I am very much not, filling in the gaps when the brain is elsewhere and the deadline is not.

But the efficiency conversation, while useful, is the boring part. The more interesting question is what happens when you start giving time back to the people doing the work, and what they might do with it.

Take consultation, as one example among many. We design homes and neighbourhoods for an enormous range of people, most of whom do not live lives that look anything like mine. I have, as it happens, quite strong views about how I want to live, informed by being a white man of a certain age without children, living in the Peak District with a bulldog who requires considerably more logistical accommodation than most people would choose to give her, with a particular set of interests and habits and assumptions that I carry around without always noticing them. The families, the elderly residents, the people from communities and cultures whose relationship with home and street and neighbourhood is different from anything in my experience, the people who are going to live in the places I help to design for the next hundred years, they need something better than my best guess at what they want. AI can help us reach more people, process more data, and sift through responses in a way that surfaces genuine insight rather than just confirming what we already thought. That seems like it might produce better neighbourhoods, which is presumably the point.

Photo by Dorota Trzaska on Unsplash

Or take the question of site-responsive design. The reason most British housebuilding looks the way it does is not because developers are philistines, though that is a convenient narrative. It is because non-standard homes are slower and more expensive to document, design and build, and the pattern book exists as a rational response to that pressure. If AI can meaningfully reduce the time and cost of designing and documenting something that actually responds to where it is, then the economic argument for the pattern book starts to weaken, and the designer gets to ask what would actually work here, on this particular piece of land, for these particular people, rather than reaching for the nearest standard house type and hoping for the best. It also gives back some of the time to do the kind of analysis that I used to think was normal, the kind that involves walking a site rather than looking at it on a screen, sitting in it at different times of day, talking to the people who already live nearby, and trying to understand something that no amount of GIS data will tell you. Whether that rises to the level of getting up at four in the morning to watch the light change over the Thames Estuary is another matter, but the principle is the same.

And then there is viability, which is the thing that kills more good ideas in British residential development than planning policy and client indecision combined. The gap between what a site could be and what it is financially viable to build on it is a problem that better, faster iteration can genuinely address. The tools to model a site mix, test the numbers, understand in something approaching real time what each design decision does to the bottom line, these are not exotic ambitions. I have been working with relatively basic versions of this kind of analysis for over a decade, and even those basic versions make a significant difference. What is coming is considerably more capable, and considerably more integrated into the design process, which means that the viability gap starts to look less like an unfortunate constraint and more like a failure of imagination.

These are three examples from what is, frankly, a much longer list. The point is not the list, it is the direction of travel.

Which brings me to what I think is the actually interesting question, and the one that tends to get crowded out by the ones we seem to prefer asking. The usual debate about AI runs roughly as follows: will it take my job? How does my country get rich from it? How does the company that built it make enough money to justify what it spent? These are the questions of an economy that has, over a few hundred years, arranged itself primarily around the production and accumulation of money rather than around the flourishing of the people in it, and they are not, I would suggest, the most useful frame.

Bhutan, of all places, has for some years been measuring national happiness as a formal component of government strategy, on the fairly straightforward basis that if the goal of a society is the wellbeing of its people, you might want to check how that's going. It is treated as a curiosity in most Western commentary, which perhaps tells you something. But if you start from that premise, the questions you ask about AI look quite different. Not what happens to my mortgage if AI does my job, but what would I do with my time if I didn't have to spend most of it doing things I don't particularly want to do? Not how does our economy grow, but how does our society become more fulfilling for the people in it? Not how does OpenAI maximise its return on a considerable investment, but how could the knowledge and resources of the AI industry be directed towards the most positive change it is possible to make?

Photo by Unma Desai on Unsplash

If we are all working less, which seems like a plausible outcome if we actually choose to pursue it rather than simply using AI to squeeze more productivity out of the same hours, then what do we do with the difference? Make art, perhaps, or grow food and cook it properly, or talk to the people in our communities who are lonely, or spend time with our children (or bulldogs), or learn something we have always meant to learn, or sit at the end of a pier and watch the light change, and think about what it means. Maybe we stop needing two weeks in the sun because we are taking actual pleasure in where we live. Maybe we stop filling the gap with fast fashion and a new games console because the gap has been filled with something that is not, in the end, a gap at all.

Our society has become, somewhere along the way, a machine for making money rather than a place for people to live well in. That is not inevitable, and it is not irreversible, and a technology that is capable of meeting many of our material needs more efficiently than we can meet them ourselves seems like a fairly significant opportunity to reconsider the arrangement. The question is whether we will use it to do that, or whether we will use it to make the machine run faster.

Artificial intelligence could, if we let it, give us the opportunity to be genuinely intelligent ourselves. What a thing that would be.
