Security Architecture Reviews (SARs) are an important part of how Cybersecurity teams work with development teams to prevent security issues further down the development process.
They are traditionally conducted by experienced Architects who examine the design of a web application — its components, data flows, trust boundaries, dependencies, and exposure to threats, both before and during development. Their goal is to uncover potential vulnerabilities and validate that security controls are properly defined.
Much of this engagement is based on the expertise and knowledge of the Architect, as well as their ability to engage with their colleagues effectively. When done well, Security Architects not only educate the development teams on how to prevent similar vulnerabilities in future projects, but they also encourage them to engage with the Cybersecurity team much earlier in the development lifecycle, allowing any questions and issues to be addressed much earlier in the process.
However, Artificial Intelligence (AI) is rapidly reshaping how Cybersecurity teams operate, and it also has the potential to revolutionize how SARs are conducted. What will the SAR experience look like in the future, and can AI replace the role of the Architect in the process?
The potential of AI
Today, AI is being widely used by Cybersecurity teams to address traditional challenges with automation, consistency, and scalability. The SAR process is no exception, and there are already a number of companies on the market whose tools can:
- Remove the need to create Architecture Diagrams from scratch by auto-generating threat models and insights from design documents and code, enabling faster feedback and scalability
- Reduce blind spots and standardize and automate documentation by applying established frameworks (e.g., STRIDE, PASTA)
- Improve risk identification at scale by scanning vast, unstructured data to uncover hidden dependencies and subtle risks, reducing the dependency on the Architects’ knowledge
- Allow better tracking of findings, easing audits and compliance.
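To make the first two capabilities concrete, here is a minimal, hypothetical sketch of rule-based STRIDE threat enumeration over a toy architecture description. The component names, element types, and rule table are invented for illustration and do not represent any vendor's engine; real tools infer this model from design documents and code rather than from a hand-written list.

```python
# Illustrative sketch: map architecture elements to candidate STRIDE threats.
# The rule table follows the conventional STRIDE-per-element mapping.
STRIDE_RULES = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information Disclosure", "Denial of Service",
                "Elevation of Privilege"],
    "data_store": ["Tampering", "Information Disclosure",
                   "Denial of Service"],
    "data_flow": ["Tampering", "Information Disclosure",
                  "Denial of Service"],
}

def enumerate_threats(components):
    """Return (component, threat) pairs for each element in the design."""
    findings = []
    for name, kind in components:
        for threat in STRIDE_RULES.get(kind, []):
            findings.append((name, threat))
    return findings

# Hypothetical design document: (component name, element type)
design = [
    ("browser", "external_entity"),
    ("web_api", "process"),
    ("user_db", "data_store"),
]

for component, threat in enumerate_threats(design):
    print(f"{component}: {threat}")
```

Even this toy version shows why such tools scale well: the enumeration is mechanical and repeatable, while deciding which candidate threats actually matter still requires human judgment.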
Given the high quality of AI-generated output, it’s only natural that organizations would seek to automate SARs using AI.
By adopting a self-service model, developers could leverage AI independently, reducing their reliance on Architects and enabling more agile workflows. In turn, this approach would empower development teams to take greater accountability for security, fostering a security-first culture across the organization. All the while, the Security Architects could redirect their efforts toward addressing novel threats and making strategic decisions.
A win-win scenario for everyone involved, right?
The other side of the coin
However, AI is not flawless. Like any tool, it has limitations, and several key challenges remain, including:
- False positives that overwhelm developers with irrelevant alerts
- False negatives where subtle, context-driven threats go undetected
- Dependence on accurate input where outdated or incomplete architecture documents produce flawed results
- Overreliance, where teams trust AI outputs too heavily and reduce manual oversight
- Gaps in explainability, making it hard to justify or audit AI-driven findings
- AI security risks such as prompt injection or data leakage from sensitive inputs, and
- Lack of context, where AI can only give clear guidance if the relevant context has been documented and provided.
Furthermore, fully automating the SAR process risks eliminating valuable opportunities for learning and collaboration.
While AI can certainly handle many tasks, it cannot replicate those informal “watercooler moments” where Architects share their unique expertise to guide developers to new approaches or clarify important policies. These interactions spark lasting changes in how teams think about and approach their work, influencing both current and future projects.
To truly build with a security-first mindset, this culture of learning and awareness must be in place before a project is even conceived, not after the last sprint has already finished.
The future of SARs
So, what should the ideal SAR process look like in the future?
With the advent of AI, it is essential that organizations leverage its capabilities to stay ahead. AI will continue to evolve, and it’s reasonable to anticipate that soon it will dynamically review architectures with every design or infrastructure change, tailor its insights for industries like finance or healthcare, simulate attack paths by modelling lateral movement across microservices, and provide on-premises solutions for highly sensitive sectors — just to name a few.
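To make the attack-path idea concrete, here is a small, hypothetical sketch that models lateral movement as reachability over a microservice call graph. The service names and edges are invented for illustration; a real tool would derive the graph from infrastructure definitions and network policy rather than a hand-written dictionary.

```python
from collections import deque

# Hypothetical microservice call graph: edges are permitted network paths.
service_graph = {
    "internet": ["api_gateway"],
    "api_gateway": ["orders", "auth"],
    "orders": ["payments", "orders_db"],
    "auth": ["users_db"],
    "payments": ["payments_db"],
}

def attack_paths(graph, start, target):
    """Enumerate simple (cycle-free) paths an attacker could take
    from start to target via breadth-first search."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:  # avoid revisiting services
                queue.append(path + [nxt])
    return paths

for path in attack_paths(service_graph, "internet", "payments_db"):
    print(" -> ".join(path))
```

In this toy model there is a single route from the internet to the payments database, via the gateway and the orders service; in practice the interesting output is every such route, ranked by how many controls sit along it.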
Despite the benefits they bring, relying solely on a self-service AI model may not be the best way to strengthen your security posture. While it might accelerate processes, the real value lies in the unique synergy created when developers and Security Architects collaborate — a value that can’t be measured in purely monetary terms.
As with other AI-enabled tools, the ideal approach is to combine the strengths of both AI and human expertise. By partnering with your Architects to choose tools that amplify, rather than replace, their capabilities, you empower your Cybersecurity team to engage more effectively and make a lasting impact across the organization. At the same time, you continue to empower your developers to learn and to build a secure organization for the future.
Authors: Sofia Ylén-Buxton (VP Cybersecurity Governance and Communications) & Piyush Sharma (Cybersecurity Architect)
FactSet clients have access to insightful technology articles on the FactSet Developer Hub.
Want to gain access to a network of industry experts, developers, architects and technologists? Request access to the FactSet Developer Hub via this link: https://hub.factset.com/signup?utm=medium
