
International Women’s Day 2026: How can architecture make AI more inclusive?

As artificial intelligence (AI) becomes more deeply woven into daily life and is used more frequently in architecture, the question of who gets to participate – and who risks being excluded – has never been more important.

Aside from greater efficiency and productivity, potential economic growth and enhanced data analysis, the Organisation for Economic Co-operation and Development (OECD) names “inclusive growth” as one of its guiding principles when it comes to AI. It calls for the “responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, well-being, sustainable development and environmental sustainability”.

However, technology, and now specifically AI, has long been shaped by a culture dominated by ‘tech bros’, where (usually white) male perspectives have historically set the tone and the priorities.

This year, International Women’s Day offers a perfect opportunity to reflect on and discuss inclusivity within AI systems and their use by architects and the wider built environment. To help us do this, we’ve asked Katie Fisher (Director at CARD Projects and RIBA J Rising Star), Olivia Stobs-Stobart (Design and AI Lead at Plan A Consultants), and Renee Dobre (Architect and Design Computation Team Leader at NBBJ) to discuss this topic.

How do you evaluate whether an AI system is ‘inclusive’?

Olivia Stobs-Stobart: “Any new process or technology we adopt has to pass a few key tests, and inclusivity is always one of them. We take feedback seriously, running structured pilot tests on potential systems, gathering input and tracking engagement from diverse team members across different roles, experience levels, and working styles. We then work with our People & Culture team to ensure accessibility, ease of use and suitability for neurodiverse team members are all considered before full adoption. It isn’t an afterthought for us; it’s part of the process from the beginning. We want our tools to empower and support our team, creating a space where everyone can do their best work rather than being left behind by the technology around us.”

Where do you see the biggest points of gender imbalance in the AI pipeline today (research, engineering, data work, product, leadership)?

Katie Fisher: “From what I’ve seen and read, the imbalance is most visible in leadership, core engineering and venture funding [for those creating the systems]. The teams building large language models (LLMs), BIM-integrated AI plugins or generative design platforms are still overwhelmingly male. For example, optimisation tools that prioritise speed and cost over care or community engagement reflect who sets the metrics. Women are more visible in ethics panels and facilitation roles, but less often shaping the underlying code or investment decisions.”

Olivia Stobs-Stobart: “All of it. Gender imbalance in tech isn’t a new conversation, but in AI it carries a particular weight. When our systems learn from our data, reflect our instructions, and are shaped by our ideas and values, we must ask ourselves: whose ideas are in the room?”

“How can we responsibly say that AI is a fair representation of our population when only 22% of all AI and data professionals in the UK are women? The gender imbalance doesn’t exist in isolation either; it compounds with other forms of exclusion, resulting in harmful biases and feedback loops.”

“I’m fortunate to work in an environment where I feel supported and where learning about AI is something I can actively pursue. But stepping into broader industry spaces and wider conversations, I’m reminded that my experience may be the exception, not the norm.”

“As an industry we have a responsibility not just to open doors, but to make sure the right people know those doors exist. That means actively encouraging women, and every underrepresented group, to have a seat at the table and build something that works for everyone.”

Renee Dobre: “While there is a clear imbalance in who codes the base models, as a firmwide computation leader, I see a massive bottleneck in the integration and technical leadership phase within architecture, engineering and construction firms. We have a gap in who is directing how these tools are actually deployed on projects. If the technical directors and computation leaders making decisions about AI adoption are predominantly male, the workflows we prioritise will naturally reflect their perspectives. We need more women in the ‘translator’ space: those leading the charge to turn raw AI capability into applied, human-centric architectural workflows.”

We often hear about bias in LLMs. Where do you think gender bias most often enters AI systems?

Renee Dobre: “It enters right at the foundation: the training data. For LLMs and image-generation models in our industry, the data is scraped from a historical canon of architecture that has systematically elevated male ‘starchitects’ and marginalised female designers. If an AI system is trained on historical portfolios, its baseline definition of ‘good’ architecture is already gender biased. It also enters when we define what AI should ‘optimise’ for. If an AI tool is trained to optimise purely for spatial yield and cost, it often ignores the nuanced, qualitative aspects of space that support caregiving, community, and inclusivity.”
