AI and Architecture:
Augmentation or Disruption?

Thursday 29th June 2023

Hosted by Ollie Cronk

Host of Architect Tomorrow, and seasoned speaker on Architecture and AI, Ollie Cronk shared his expertise at our recent roundtable and engaged in a thought-provoking discussion about the challenges and opportunities presented by GenAI. In this blog post, we’ll delve into the key points raised during the session and explore the implications of embracing AI within architecture.

Perceptions and awareness of AI

To start the roundtable, Ollie conducted a survey revealing that only a quarter of the attendees were familiar with the distinction between AI and machine learning. However, a majority stated that their organisations already have teams working with AI, suggesting a widespread adoption of this technology. This indicates that many companies are recognising the potential benefits and are embracing AI to drive innovation and efficiency.

The role of AI tools


Ollie emphasised that current AI tools, such as language models like GPT, can provide substantial assistance, but they are not yet capable of completing tasks entirely on their own. These tools can augment human work by delivering an “80% job”. Ollie says he sees many smaller firms adopting AI and trying to harness its power to add extra resource to their teams.


However, it is crucial to understand that the output of AI tools can vary in accuracy and creativity. Ollie introduced the concept of “Creativity vs Hallucination”: as the temperature or other sampling parameters of AI tools change, they can shift from providing factual information to generating more imaginative yet potentially misleading responses. This was starkly illustrated by the case of an American lawyer who cited fake cases in court after conducting case research through ChatGPT (Forbes, 2023). It is essential for users to apply critical thinking and domain expertise when evaluating AI-generated outputs.
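To make the “Creativity vs Hallucination” trade-off concrete, here is a minimal sketch of how temperature works in a language model's sampling step. The logit values are invented for illustration; the point is that a low temperature concentrates probability on the most likely next token (more deterministic, factual-leaning output), while a high temperature flattens the distribution (more varied output, with a higher risk of confident-sounding nonsense).

```python
import math

def softmax_with_temperature(logits, temperature):
    """Rescale logits by temperature, then normalise into probabilities.
    Low temperature sharpens the distribution; high temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores, purely for illustration.
logits = [4.0, 2.0, 1.0]

cautious = softmax_with_temperature(logits, 0.2)  # near one-hot on the top token
creative = softmax_with_temperature(logits, 2.0)  # much flatter distribution

print(cautious[0] > creative[0])  # prints True: low temperature favours the top token
```

This is the mechanism behind the temperature setting exposed by most LLM APIs; the model itself is far more complex, but the final sampling step behaves exactly like this.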

Managing risks

One major risk Ollie highlighted is reputational damage, as there is a possibility of generative AI producing inappropriate or biased content that may be detrimental to an organisation’s image. Other risks include intellectual property concerns, privacy issues, the emergence of shadow IT (unauthorised AI usage), and the readiness of existing systems to accommodate AI technology. It is crucial for organisations to evaluate these risks and develop strategies to mitigate them effectively.

The future of AI

Ollie anticipates that upcoming technologies will be capable of fusing information even more effectively, extending beyond language-based applications. The focus will be on harnessing technology to make informed decisions based on factual data, while leveraging AI tools to fill knowledge gaps and augment human decision-making.

Audience Questions and Insights

The audience raised thought-provoking questions during the session, shedding light on important aspects of AI adoption. One concern was the potential bias in AI models and the need for domain experts to verify the accuracy of AI-generated results. Ollie emphasised the importance of striking a pragmatic balance between reducing bias and acknowledging the limitations of AI systems, most of which stem from the accuracy of the data fed into them: intelligence built on inaccurate information cannot be trusted.


Regulatory and ethical considerations from the audience

The conversation turned towards the need for regulatory frameworks for AI adoption. Ollie highlighted forthcoming EU AI regulations and mentioned that the UK was adopting a more wait-and-see approach. In an ideal world, Ollie notes, stronger regulation would already be in place, as we do not want a repeat of the catastrophes faced when the social media era began. Even OpenAI CEO Sam Altman has said he is “a little bit scared” of his own technology.


Overall, regulation seems to be very reactive rather than proactive, but Ollie expects the US to lead the way here. The discussion emphasised the importance of striking the right balance between innovation and regulation, with strong consideration for long-term impacts and the potential risks associated with AI.

Where can graph and AI work in tandem?

Graph structures serve as the backbone for language models, making them indispensable in AI applications. Adopting a graph-based approach allows for a comprehensive representation of information, enabling language models to extract insights effectively. Ollie, for instance, has developed a graph structure that models conversations, offering a profound understanding of desired outcomes and required data points. With this information, the conversation’s parameters become clearer, and customers can ask questions and receive outcomes that satisfy their queries effectively.
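The conversation graph described above can be sketched as a simple directed graph, where each node is a conversation stage and records the data points it must elicit. The node names and data points below are illustrative assumptions, not Ollie's actual model; the sketch only shows how a graph makes the conversation's required data points explicit and traversable.

```python
# Minimal sketch of a conversation modelled as a directed graph.
# Stage names and data points are hypothetical, for illustration only.
conversation_graph = {
    "greeting":        {"next": ["identify_need"],   "data_points": []},
    "identify_need":   {"next": ["gather_details"],  "data_points": ["topic"]},
    "gather_details":  {"next": ["propose_outcome"], "data_points": ["budget", "deadline"]},
    "propose_outcome": {"next": [],                  "data_points": []},
}

def required_data_points(graph, start):
    """Walk the graph from `start`, collecting every data point the
    conversation must elicit before it can reach an outcome."""
    seen, stack, points = set(), [start], []
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        points.extend(graph[node]["data_points"])
        stack.extend(graph[node]["next"])
    return points

print(required_data_points(conversation_graph, "greeting"))
# prints ['topic', 'budget', 'deadline']
```

Once the graph is explicit like this, a language model can be prompted with exactly the data points a given stage still needs, which is what makes the conversation's parameters clearer for both the system and the customer.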

How does generative AI such as GPT differ from predictive text generators?

Ollie explains that, unlike traditional predictive text generators, GPT leverages machine learning algorithms and deep neural networks to generate human-like text. This has the potential to democratise coding and bridge the gap between junior and senior developers: junior coders gain access to solutions curated from the work of more experienced developers, and individuals with lower levels of technical expertise can access a wealth of resources and tools, reducing barriers to entry.
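The contrast is easier to see with a sketch of what a traditional predictive text generator actually is. The classic approach is a bigram model: count which word follows which, then suggest the most frequent follower, much like an older phone keyboard. The corpus below is invented for illustration. Unlike GPT's deep neural network, this model has no notion of meaning or long-range context, only immediate word pairs.

```python
from collections import defaultdict, Counter

def build_bigram_model(text):
    """Count, for each word, how often each other word follows it."""
    model = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower, like a keyboard suggestion."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = "the architect reviewed the design and the architect approved the plan"
model = build_bigram_model(corpus)
print(predict_next(model, "the"))  # prints "architect" (its most frequent follower)
```

GPT replaces these frequency tables with billions of learned parameters over long contexts, which is why it can produce whole coherent functions rather than one plausible next word.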


Attendee discussions

All attendees on table one were currently engaged in prototyping or already had AI solutions in place. One participant in the legal industry explained that they had taken the proactive approach of experimenting, rather than waiting on the sidelines. They shared their experience of bringing language models to market for the past seven years, highlighting that the technology has been around for a considerable period, referring to it as ‘Old AI vs New AI.’


No one at the table opposed this concept of adopting a co-pilot, but they emphasised that AI outcomes still require expert approval. Ideally, they saw a co-pilot creating High-Level Designs (HLDs) as its best use. The overall sentiment here was to embrace the technology rather than fear it.

The participants on table two had varying degrees of exposure and familiarity with AI, but raised concerns about scenarios where an AI tool produces an accurate result yet the user perceives it as incorrect: has this eventuality been fully considered? They also felt that the specific area of architecture in which AI is applied makes a difference. In Enterprise Architecture, where influencing stakeholders relies on gut feeling and intuition, AI implementation may not be as useful as in domains with more definitive answers, such as Solution Architecture.


The discussion at table three revealed a divide between those willing to adopt AI and those who are not. An attendee in the construction sector expressed curiosity about what AI entails for their industry, but recognised its potential value in allowing faster access to information. They raised important questions: How are queries constructed? Who utilises it? Who possesses the subject-matter expertise (SME)? And where should the line be drawn to prevent an over-reliance on AI and ensure that essential knowledge and understanding are not compromised?


Even though it can be hard not to feel overwhelmed, by working with your risk, security, regulatory and/or legal stakeholders as early as possible, it is possible to ‘shake things up’. Ollie feels that architecture has become exciting again with the dawn of GenAI: now is the time to differentiate and innovate, managing the risks to your organisation whilst allowing for a whole new wave of innovation.

To keep the discussion going with Ollie, connect with him on LinkedIn here: 

Alternatively, contact Konvergent Partner, Ben Clark, to discuss the topic further:

Ollie is the host of the Architect Tomorrow podcast, which you can find here:
