Martyn Day, Author at AEC Magazine (https://aecmag.com/author/martyn/)

Driving AI design upstream
https://aecmag.com/bim/driving-ai-design-upstream/
Thu, 09 Oct 2025
There may well come a time when AI will take a sketch or basic idea and design the entire building

Software developers are using AI to generate co-pilots and remove the drudgery of repetitive manual tasks. However, there may well be a time when AI will take a sketch or basic idea and design the entire building. Amazingly, North Carolina-based Higharc appears close to delivering that, writes Martyn Day

Higharc is a cloud-based service for US housebuilders of timber-frame buildings, aimed at a market of users more likely to use AutoCAD and dumb 2D sketches than BIM.

Having a single focus on a specific type of building and process has enabled the development team to highly automate modelling, drawings, QA, costings and many other parts of the design process. While this may not be aimed at the type of buildings you create, it’s well worth looking at what this expert BIM system can do.

Higharc possesses a wealth of industry knowledge and has already secured significant financial backing, having raised $25 million in Series A funding and later $53 million in Series B funding. The leadership team contains veterans from relevant technology fields, including CEO Marc Minor, who came from the 3D printing world.

There are several former employees of Autodesk. CTO Peter Boyer is an ex-Autodesker who was a founding member of Dynamo, and Michael Bergin, VP of Product, was a research lead for Autodesk’s Industry AEC team. Bergin previously worked on Dreamcatcher, Autodesk’s AI/ML design system for manufacturing design, and his motivation stemmed from recognising the broken system of manual architecture design.


Find this article plus many more in the September / October 2025 Edition of AEC Magazine

This year we are starting to see some of the AI development that Bergin has been leading for Higharc users. In April 2025, Bergin released a video demonstrating a very cool use of AI: a new generative AI capability specifically designed for on-boarding designs to its cloud-based BIM platform. This AI accelerates the conceptual phase by converting a 2D hand sketch directly into a functional 3D BIM model (AEC Magazine covered it here).

Available in beta, the software uses visual intelligence to analyse architectural features contained in a plan view sketch. As most people in US residential design are not BIM experts and have little CAD knowledge, driving a modelling system from hand-drawn sketches bypasses the need to learn complex CAD.

The process involves importing the sketch, which is then analysed to automatically detect room boundaries and wall types. The AI identifies and tags spaces, even handwritten notes denoting room allocation types (like kitchen or bedroom). The “magic” of the process is that it is not simply form finding; it involves mapping the rooms to specific data types within Higharc’s system of defined spaces.

Since Higharc is a data-first BIM system built from the ground up, the AI transforms the sketch directly into this organised data model. In a matter of minutes, the software generates the correlated 3D elements identified in the drawing, including walls, doors, windows, and fixtures.



Expanded AI

In September 2025 Bergin posted a new video to LinkedIn showing his latest beta developments in applying AI to the Higharc modelling process.

Moving on from converting sketches to models, Bergin is aiming to take the process from sketch to completed model, with drawings and AI renderings. And ultimately to be able to talk to the model, to get it to update design features.

Bergin explained what the video showed, “Powered by our building design agent, a one-shot prompt turns a floor plan hand sketch into an enriched Higharc model. As with any Higharc model, once translated we have real-time dimensioned plans, building estimates, sales experiences, marketing materials, and more.

“Most importantly, for the first time, we’re opening the door to a research preview: DreamGraph, our first step toward enabling people who are not Pro Homebuilders to experience Higharc firsthand.”

The video kicks off with an imported 2D drawing. In the side menu, the user enters “Analyse this floor plan image and translate to Studio” (Studio being Higharc’s home design modelling tool).

The AI replies “Great! I’ll begin analysing your floor plan image and translating it into a Studio home design. This process involves several steps: creating the underlay, detecting and scaling the floor plan, generating rooms, identifying doors and windows, and finalising the layout. Let’s get started.”

We are entering a different phase in design tool evolution. We will start talking to and working with AI from concept to document delivery

The system can then be seen running through routines on the screen. The AI scales the image, which is used as an underlay to extract the room outlines. It then detects the room boundaries and converts them into Studio rooms, matching the original floor plan. Doors, windows and other entities are identified and placed, and the layout is analysed and refined to ensure logical room types and adjacencies.

Blocks and roofs are generated for a complete, buildable home structure. All this happens in a matter of seconds, and you can even look at the structural timber frame for the roof that was never drawn or designed. It’s all quite gobsmacking.
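The stages described above — scale the underlay, detect rooms, place openings, refine the layout — can be sketched in miniature. None of these function or field names come from Higharc; this is a hypothetical illustration of the flow only.

```python
# Hypothetical sketch of a sketch-to-model pipeline. All names are invented
# for illustration and are not Higharc's actual API.

def compute_scale(known_px: float, known_inches: float) -> float:
    """Derive pixels-per-inch from one dimension the user confirms."""
    return known_px / known_inches

def outlines_to_rooms(outlines: list[dict], scale: float) -> list[dict]:
    """Convert detected pixel outlines into typed rooms in real-world units."""
    return [
        {
            "type": o["label"].lower(),    # e.g. a handwritten "Kitchen" note
            "width_in": o["w_px"] / scale,
            "depth_in": o["h_px"] / scale,
        }
        for o in outlines
    ]

def refine_layout(rooms: list[dict]) -> list[dict]:
    """Toy refinement pass: flag implausibly small rooms for review."""
    for room in rooms:
        room["needs_review"] = room["width_in"] < 36 or room["depth_in"] < 36
    return rooms

# One detected outline: a 960 x 720 px region labelled "Kitchen", with a wall
# the user says is 120 px = 24 inches (so the scale is 5 px per inch).
scale = compute_scale(120, 24)
rooms = refine_layout(outlines_to_rooms(
    [{"label": "Kitchen", "w_px": 960, "h_px": 720}], scale))
# rooms[0] is a 192 x 144 inch kitchen with no review flag
```

The point of the sketch is the mapping step: detected geometry is not just traced, it is converted into typed, dimensioned data objects that the rest of the system can reason about.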

This then initiates the automated documentation capabilities of Higharc, delivering architectural plans, 3D views and renders. This is the first demonstration of the automation of sketch to model to drawings.

To create a BIM model and all associated documentation, with costings and Bill of Materials, all you need to be able to do is sketch; it’s really quite amazing.

Bergin then posted a subsequent, shorter video demonstrating editing capabilities. With the completed model, he typed into the natural language interface ‘bring out porch 180 inches deep’. Higharc paused, identified that the existing porch was 96 inches deep, and then extended that part of the model by 84 inches, while maintaining the original porch width.
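The arithmetic behind that edit is worth spelling out: the prompt gives an absolute target (180 inches), the system looks up the current dimension and applies the difference. A minimal sketch, with invented function names:

```python
# Hypothetical helper illustrating how a natural language edit with an
# absolute target resolves to a relative change in the model.

def resolve_depth_edit(current_depth_in: int, target_depth_in: int) -> int:
    """Return the extension (positive) or reduction (negative) to apply."""
    return target_depth_in - current_depth_in

delta = resolve_depth_edit(96, 180)  # porch was 96 in deep: extend by 84 in
```

The same resolution step works in reverse: asking for a 96-inch porch on a 180-inch one yields a negative delta, i.e. a reduction.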

Expert BIM systems

Higharc is the perfect example of what a BIM 2.0 system can do. The only drawback is that it’s an expert system dedicated to a very niche market. By designing a BIM system to operate within the constraints of a single building type, the team has been able to drill incredibly deeply into the granularity of the construction type, enabling a wealth of riches in terms of data out and automation in modelling, drawing, costings etc. Every house is a variation on a theme and a reconfiguration of the granular entities that make up a US timber frame house.

While in the future the team could expand to cover other building types and construction methodologies, each would take immense focus and work to repeat what Higharc has done, for concrete offices or modular hotels etc.

In musings with Greg Schleusner, principal and director of design technology at HOK, we have discussed whether expert BIM systems are the way forward, as opposed to the generic systems that even most of the BIM 2.0 players are creating, following the Revit replacement route.

HOK has many hotel and lab design jobs, so should there be bespoke BIM systems that cater for these building types, as opposed to having a generic tool and developing your own internal layers to try to create a customised system? There will always be a problem when the software is created by programmers who have never worked in a design firm, while the designers who know all the problems sit in practices but don’t write the software.

The jury is still out on this. But what might change the equation is the impact of AI on coding: applications on demand. Firms may be able to describe a building type with all its nuances and get an automated, programmatic response.

Programs like Hypar could well sit in this space as they are BIM 2.0 and potentially flexible to define expert systems.

Conclusion

We are entering a different phase in design tool evolution. We will start talking to and working with AI from concept to document delivery. This kind of interface is coming to generic BIM tools as well as these powerful expert systems. But due to their intrinsic knowledge of a design type, it’s easier for AI to deliver deep productivity saving results.

With the AI-powered conceptual technology in Forma demonstrated at Autodesk University, Snaptrude’s AI launch and technology like Skema, which can take massing models and replace low level of detail models with high level of detail, productivity savings are coming – and for relatively common building types the level of automation will get quite frightening, quite quickly.

Contract killers: how EULAs are shifting power from users to shareholders
https://aecmag.com/business/contract-killers-how-eulas-are-shifting-power-from-users-to-shareholders/
Fri, 03 Oct 2025
Most architects overlook software small print, but today’s EULAs are redefining ownership, data rights and AI use — shifting power from users to vendors

Most architects and engineers never read the fine print of software licences. But today’s End User Licence Agreements (EULAs) and Terms of Use reach far beyond stating your installation rights. Software vendors are using them to claim rights over your designs, control project data, limit AI training, and reshape developer ecosystems — shifting power from customers to shareholders. Martyn Day explores the rapidly changing EULA landscape

The first time I used AutoCAD professionally was about 37 years ago. At the time I knew a licence cost thousands of pounds and was protected by a hardware dongle, which plugged into the back of the PC.

The company I worked for had been made aware by its dealer that the dongle was the proof of purchase and if stolen it would cost the same amount to replace, so we were encouraged to have it insured. This was probably the first time I read a EULA and had that weird feeling of having bought something without actually owning it. Instead, we had just paid for the right to use the software.

Back then, the main concern was piracy. Vendors were less worried about what you did with your drawings and more worried about stopping you from copying the software itself. That’s why early EULAs, and the hardware dongles that enforced them, focused entirely on access.

The contract was clear: you hadn’t bought the software, you had bought permission to use it, and that right could be revoked if you broke the rules.



As computing spread through the 1980s and 1990s, so did the mechanisms of digital rights management (DRM). Dongles gave way to serial numbers, activation codes and eventually online licence checks tied to your machine or network. Each step made it harder to install the software without permission, but the scope was narrow. The EULA told you how many copies of the software you could run, what hardware it could be installed on, and that you could not reverse-engineer it.

What it didn’t do was tell you what you could or could not do with your own work. Your drawings, models and outputs were your business. The protection was wrapped tightly around the software, not around the data created with it. That boundary is what has changed today.

The rising power of the EULA

As software moved from standalone desktop products to subscription and cloud delivery, the scope of EULAs began to widen. No longer tied to a single boxed copy or physical medium, licences became fluid, covering not just installation but how services could be accessed, updated and even terminated.

The legal fine print shifted from simple usage restrictions to broad behavioural rules, often with the caveat that terms could be changed unilaterally by the vendor.

At first the transition was subtle. Subscription agreements introduced automatic renewals, service-level clauses and restrictions on transferring licences. Cloud services were layered in terms around uptime, data storage, and security responsibilities. What once was a static contract at the point of sale evolved into a living document, updated whenever the vendor saw fit. And in the last five to seven years, we have seen more frequent updates.

Software firms now have an extraordinary new power: the ability to reshape the customer relationship through the EULA itself. Where early agreements were about protecting intellectual property against piracy, modern ones increasingly function as business strategy tools. They dictate not just who can access the software, but how customers interact with their data, APIs, and even with third-party developers. The fine print is no longer just about access control; it has become a mechanism of broader control.

EULAs are no longer obscure boilerplate legalese, tucked at the end of an installer. They have become the front line in a new battle, not over software piracy, but over who controls the data, workflows, and ecosystems that shape the future of design

Profound changes

The most striking shift in recent years is that EULAs have moved beyond software access and into the realm of customer data. What you produce with the tools (models, drawings, schedules, and outputs) has become strategically valuable to the software developers – as valuable as the software itself. Vendors now see customer data as fuel for things like analytics, training, and new AI services. The contract language has followed and there are varying degrees of land grab going on.

This year alone we have seen two firms – Midjourney and D5 Render – attempt to change their EULAs to automatically claim perpetual rights to access and use customer-created data (mainly AI renderings) to train their AI models, as well as the right to pass on liability to customers should any of those images infringe copyright.

Many of the pure-play AI firms will lay claim to your first born given half a chance.



D5 Render provided a response to this article to clarify its position on customer data rights including details on ownership of content, training data and liability published below.






Autodesk

Closer to home, Autodesk provides another example. Its current Terms of Use, which serves as the primary agreement for subscription and cloud users, includes a clause which prohibits training AI systems on data or models created with its software. An earlier draft of this article suggested the restriction was recent, but Autodesk has since clarified that it dates back to 2018.

On a strict reading, this clause implies that even if you create designs entirely in-house, you may not be allowed to use your own data to train and develop your own AI models. If correct, Autodesk could hold the right to decide if, when, or how your data can be used for such purposes.

As we are on the cusp of an AI revolution, this is a profound change. Historically, your files were yours: a Revit model or AutoCAD drawing was protected only by your own governance. Now the licence agreement could potentially dictate not only how the software runs, but also how you can use the fruits of your own labour.

Autodesk’s licensing language creates a subtle but important tension between ownership and control. In its Terms of Use (which serves as the effective EULA for all subscription and cloud customers), Autodesk reassures customers with familiar phrases such as “You own Your Work” and “Your Content remains yours.”

On the surface, this means that the models, drawings, and other outputs you create belong to you, not Autodesk. However, deeper in the Terms of Use and the accompanying Acceptable Use Policy (AUP), the scope of what you can do with that work becomes more constrained — particularly in relation to AI or derivative use cases.

Talking with May Winfield, global director of commercial, legal and digital risks for global engineering consultancy Buro Happold, she suggests this goes further: Autodesk’s Acceptable Use Policy’s purported restrictions on customer outputs may even conflict with copyright laws in certain jurisdictions, where authors automatically own their creations unless they expressly transfer or license those rights. The question becomes: if copyright law guarantees authorship, but Autodesk contractually limits permitted uses, which prevails?

In these documents, Autodesk introduces the term “Output,” meaning any file or result generated using its software. The AUP states that customers must not use “any Offering or related Output in connection with the training of any machine learning or artificial intelligence algorithm, software, or system.” In practice, this means that even though Autodesk concedes ownership of your designs, it may contractually restrict you from applying them in one of the most strategically valuable ways: training your own AI models.

I know many of the more progressive AEC firms that attend our NXT BLD event are training their own in-house AI based on their Revit models, Revit-derived DWGs and PDFs. With no caveats or carve-outs for customers, they potentially now have the Sword of Damocles hanging over their data. As worded, the broad use of the word ‘output’ could theoretically even apply to an Industry Foundation Classes (IFC) file exported from Revit, as it’s an output from Autodesk’s product stack, which could mean you are not even allowed to train AI on an open standard!

Legally, the company has not taken your intellectual property; instead, it may have ring-fenced its permitted uses, in a very specific way. This creates what I’d characterise as a “legal DRM moat” around customer data.

Autodesk potentially positions itself as the arbiter of how your data can be exploited, leaving you in possession of your files but without full freedom to decide their fate. The fine print ensures Autodesk maintains leverage over emerging AI workflows, even while telling customers their data still belongs to them. And the one place where this restriction doesn’t apply is within Autodesk’s cloud ecosystem, now called Autodesk Platform Services (APS). Only last month at Autodesk University, Autodesk was showing the AI training of data within the Autodesk Cloud.



Autodesk provided a response to this article, published below.

For clarity, several edits have since been made to this article.



Knock-on risks for consultants

Winfield also points out that Autodesk’s broad claims over “outputs” may have knock-on consequences for customer–client agreements. Most design and consultancy contracts require the consultant to warrant that deliverables are original and fully owned by them. If a vendor asserts ownership rights through its licence terms, that warranty could be undermined. The risk goes further: consultancy agreements often contain indemnities, requiring the designer to protect the client against copyright breaches or claims. If a software vendor were to allege ownership or misuse under its EULA, a client might look to recover damages from the consultant. This creates a potential double exposure — liability to the vendor, and liability to the client.

Possible reasons

The rationale behind this clause is open to interpretation. Autodesk maintains that its intent is to protect intellectual property and ensure AI use occurs within secure, governed environments. Some industry observers worry that the breadth could inadvertently chill legitimate customer innovation, despite Autodesk’s stated intent.

Others have speculated that such clauses could serve to pre-empt potential misuse of design data by large AI firms. However, the 2018 publication date predates the current wave of generative AI, suggesting the clause was originally framed as a broad IP-protection measure rather than a defence against AI firms challenging Autodesk’s hold on its customers. 2018 was a long time before these major AI players were a potential threat.

The simple solution would be for Autodesk to refine the language in its Terms of Use and remove such an implied broad restriction on customers creating their own trained AIs from their own design data, irrespective of the software that produced it.

There is a lot of daylight between what Autodesk claims to be its intent and the plain language of what is written. If the intent is to stop reverse engineering of Autodesk AI IP, then why not state that clearly?

The reverse engineering of its products and services is already covered quite extensively in section 13, Autodesk Proprietary Rights, of its General Terms. The machine learning, AI, data harvesting and API restrictions are all in addition to this.

When Nathan Miller, a digital design strategist and developer from Proving Ground, discovered these limitations, he ran a series of posts on LinkedIn. Prior to this, none of the AEC firms we had spoken with for this article had any insight into the issue, despite the Terms of Use being published seven years ago.

While it was certainly a topic hotly commented on, the only Autodesk-related person to add their thoughts to the LinkedIn posts was Aaron Wagner of reseller Arkance. He commented:

“I don’t think the common interpretation is accurate to the spirit of that clause. Your data is your data and the way you use it is under your own discretion. Of course, you should always seek legal counsel to refine any grey areas.

“This statement to me reads that the clause is from a standpoint of Autodesk wanting to protect its products from being reverse engineered and hold themselves free of liability of sharing private information, but model element authors can still freely use AI/ML to study their own data / designs and improve them.”

Buro Happold’s Winfield gave her perspective, “Contract interpretation is generally not impacted by spirit of a clause – if the drafting is clear, it is not changed by the assertion of a different intention? Unless there are contradictions in other clauses and copyright law then it all needs to be read together and squared up to be interpreted in a workable way? It may be the “outputs” in the clause needs to qualify / clarify its intentions, if different from the seemingly clear drafting of read alone?”

The interpretation that this was a sweeping restriction on AI training using any output from Autodesk software has not gone unnoticed by major customers. Autodesk already has a reputation for running compliance audits and issuing fines when licence breaches are discovered, so the presence of this clause in an updated, binding contract has raised alarm.

The fear was simple: if the restriction exists, it can be enforced. Several design IT directors have already told their boards that, on a strict reading of Autodesk’s updated terms, their firms are probably now out of compliance – not for piracy, but for training their own AI models, on their own project data.

Some of the commenters on Miller’s original LinkedIn post reported that they had raised the issue with Autodesk execs in meetings. By and large, these execs had not heard of the EULA changes and said they would go and find out more.

Other vendors

Looking at what other firms have done here, their EULAs do include clauses about AI training on data, but these always appear to relate to protecting IP or preventing reverse engineering of commercial software – not broad prohibitions.

Adobe has explicit rules around its Firefly generative AI features and the company’s Generative AI User Guidelines forbid customers from using any Firefly-generated output to train other AI or machine learning models. However, in product-specific terms, Adobe defines “Input” and “Output” as your content and extends the same protections to both.

Graphisoft has so far left customer data largely unconstrained in terms of AI use. Bentley Systems sits somewhere in between, allowing AI outputs for your use but prohibiting their use in building competing AI systems. The standard Allplan EULA / licence terms do not appear to contain blanket prohibitions on using output for AI training.

Meanwhile, Autodesk’s wording has no caveats or carve-outs for customers’ data, just what appears to be a blanket restriction on AI training using outputs from its software, combined with an exception for its own cloud ecosystem. This appears to effectively grant the company a monopoly over how design data can fuel AI. Customers are free to create, but if they wish to train internal AI on their own project history, the contract could shut the door — unless that training happens inside Autodesk’s APS environment. The effect is to funnel innovation into Autodesk’s platform, where the company retains commercial leverage.

This mirrors tactics used in other industries. Social media platforms, for example, restrict third-party scraping to ensure AI training occurs only within their walls – although in that instance the third party would be using data it does not own.

If licence agreements prevent firms from using their own outputs to train AI, they forfeit the ability to build unique, in-house intelligence from their past projects

In finance, regulators have intervened to stop institutions from controlling both infrastructure and the datasets flowing through them. Europe’s Digital Markets Act directly targets such gatekeeping, while US antitrust agencies are scrutinising restrictive contract terms that entrench platform dominance.

For the AEC sector, the potential impact of the restrictions in Autodesk’s Acceptable Use Policy is clear: it risks concentrating AI innovation inside Autodesk’s ecosystem, raising barriers for independent development and narrowing customer choice.

Proving is difficult

How Autodesk might enforce an AI training ban is an open question. Traditional licence audits can detect unlicensed installs or overuse, but proving that a customer has trained an AI on Autodesk outputs is far more complex. However, Autodesk file formats (DWG, RVT, etc.) do contain unique structural fingerprints that could, in theory, be detected in a trained model’s weights or outputs – for example, if an AI consistently reproduces proprietary layering systems, metadata tags, or parametric structures unique to Autodesk tools.

Autodesk could also monitor API usage patterns: large-scale systematic exports or conversions may signal that datasets are being harvested for training. Another possible avenue is watermarking — embedding invisible markers in outputs that survive export and could later be detected.
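The watermarking idea can be illustrated with a deliberately crude toy: embed a marker derived from a vendor secret into exported metadata, then check for it later. Real watermarks would be far subtler (hidden in geometry or encoding rather than a visible field), and every name here is invented; this only shows the embed/detect principle.

```python
# Toy metadata watermark: embed a vendor-derived marker and detect it later.
# The field name and hashing scheme are hypothetical illustrations only.
import hashlib

MARKER_KEY = "_origin_tag"  # invented field name

def embed_marker(metadata: dict, vendor_secret: str) -> dict:
    """Return a copy of the metadata with a vendor marker added."""
    tagged = dict(metadata)
    tagged[MARKER_KEY] = hashlib.sha256(vendor_secret.encode()).hexdigest()[:16]
    return tagged

def detect_marker(metadata: dict, vendor_secret: str) -> bool:
    """Check whether the expected vendor marker survives in the metadata."""
    expected = hashlib.sha256(vendor_secret.encode()).hexdigest()[:16]
    return metadata.get(MARKER_KEY) == expected

exported = embed_marker({"layers": ["A-WALL", "A-DOOR"]}, "vendor-key")
found = detect_marker(exported, "vendor-key")  # marker present in export
```

The enforcement difficulty the article describes lives in the gap between this toy and reality: a marker in a metadata field is trivially stripped, while a marker robust enough to survive export, conversion and model training is an unsolved research problem.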

APIs, APS and developers

Autodesk is also making significant changes to other areas of its business – changes that could have a big impact on those that develop or use complementary software tools. Autodesk’s API and Autodesk Platform Services (APS) ecosystem has long been central to the company’s success, enabling customers and commercial third parties to extend tools like Autodesk Revit and Autodesk Construction Cloud (ACC).

But what was once a relatively open environment is now being reshaped into a monetised, tightly governed platform — with serious implications for customers and developers.

Nathan Miller of Proving Ground points out that virtually every practice he has worked with relies on open source scripts, third-party add-ins, or in-house extensions. These are the utilities that make Autodesk’s software truly productive. By introducing broad restrictions and fresh monetisation barriers, Autodesk risks eroding the very ecosystem that helped drive its dominance.

The most visible change is the shift of APS into a metered, consumption-based service. Previously bundled into subscriptions, APIs will now incur line-item costs for common tasks such as model translations, batch automations and dashboard integrations.

A capped free tier remains, but high-value services like Model Derivative, Automation and Reality Capture will now be billed per use. For firms, this means operational budgets must now account for API spend, with the risk of projects stalling mid-delivery if quotas are exceeded or unexpected charges are triggered.
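What budgeting for metered API use looks like in practice can be sketched with a simple guard. The per-call token costs and the free-tier cap below are invented numbers, not Autodesk's actual pricing; the point is that consumption now has to be tracked against an allowance.

```python
# Hypothetical budget guard for a metered API. Costs and the free-tier cap
# are assumptions for illustration, not Autodesk's real pricing.

FREE_TIER_TOKENS = 100          # assumed monthly allowance
COST_PER_CALL = {               # assumed token cost per operation
    "model_translation": 2,
    "batch_automation": 5,
    "dashboard_query": 1,
}

class ApiBudget:
    def __init__(self, free_tokens: int = FREE_TIER_TOKENS):
        self.used = 0
        self.free_tokens = free_tokens

    def record(self, operation: str) -> bool:
        """Record a call; return True while still within the free tier."""
        self.used += COST_PER_CALL[operation]
        return self.used <= self.free_tokens

budget = ApiBudget()
for _ in range(19):
    budget.record("batch_automation")  # 19 calls x 5 tokens = 95 tokens
at_cap = budget.record("batch_automation")  # 100 tokens: exactly at the cap
```

A guard like this would typically sit in front of batch jobs, so a pipeline can pause or warn before it tips into per-use billing mid-delivery rather than discovering the overage on the invoice.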

Autodesk has also tightened authentication rules. All integrations must be registered with APS and use Autodesk-controlled OAuth scopes. These scopes, which define the exact permissions an app has, can be added, redefined or retired by Autodesk — improving security, but also centralising control over what kinds of applications are permitted.
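In concrete terms, a registered integration obtains access through the OAuth2 client-credentials flow with the scopes granted to the app. The sketch below shows how such a scoped token request is typically assembled; the endpoint and scope names reflect APS documentation at the time of writing, but treat the specifics as illustrative rather than authoritative.

```python
# Minimal sketch of a two-legged (client-credentials) token request for a
# registered APS integration. Endpoint and scope names per APS docs at the
# time of writing; treat as illustrative.
import base64

APS_TOKEN_URL = "https://developer.api.autodesk.com/authentication/v2/token"

def build_token_request(client_id: str, client_secret: str, scopes: list[str]):
    """Assemble headers and body for a scoped token request.
    The vendor can add, redefine or retire the scopes an app may request."""
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {creds}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = {"grant_type": "client_credentials", "scope": " ".join(scopes)}
    return APS_TOKEN_URL, headers, body

# Request a read-only token; POSTing this (e.g. with urllib or requests)
# would return a bearer token limited to exactly the granted scopes.
url, headers, body = build_token_request(
    "my-app-id", "my-secret", ["data:read", "viewables:read"])
```

Because every token is bounded by vendor-defined scopes, this mechanism is precisely the centralised control point the article describes: retiring or redefining a scope changes what every dependent app can do, without touching the app's code.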

Perhaps the most profound change is not technical, but contractual. Firms can still create internal tools for their own use. But turning those into commercial products — or even sharing them with peers — now requires Autodesk’s explicit approval. The line between “internal tool” and “commercial app” is no longer a matter of technology but of contract law. Innovation, once free to circulate, is now fenced in.

This changing landscape for software development is not unique to Autodesk. Dassault Systèmes (DS), which is strong in product design, manufacturing, automotive, and aerospace, has sparked controversy by revising its agreements with third party developers for its Solidworks MCAD software. DS is demanding developers hand over 10% of their turnover along with detailed financial disclosures. Small firms fear such terms could make their businesses unviable.

Across the CAD/BIM sector, ecosystems are being re-engineered into revenue streams. What were once open pipelines of user-driven innovation are narrowing into gated conduits, designed less to empower customers than to deliver shareholder returns.

Why all this matters

The stakes are high for both customers and developers. For customers, the greatest risk is losing meaningful control over their design history. Project files, BIM models and CAD data are no longer just records of completed work; they are the foundation for future AI-driven workflows. If licence agreements prevent firms from using their own outputs to train AI, they forfeit the ability to build unique, in-house intelligence from their past projects. The value of their data, arguably their most strategic asset, is redirected into the vendor’s ecosystem. The result is growing dependence: firms must rely on vendor tools, AI models and pricing, with fewer options to innovate independently or move their data elsewhere.

For software developers, the risks are equally severe. Independent vendors and in-house innovators who once built add-ons or utilities to extend core CAD/BIM platforms now face new costs and restrictions. Revenue-sharing models, such as Dassault Systèmes’ 10% royalty scheme, threaten commercial viability, especially for smaller firms. When API use is metered and distribution fenced in by contract, ecosystems shrink. Innovation slows, customer choice narrows, and vendor lock-in grows.

AI is the existential threat vendors don’t want to admit. Smarter systems could slash the number of licences needed on a project, deliver software on demand, and let firms build private knowledge vaults more valuable than off-the-shelf tools. Vendors see the danger: EULAs are now their defensive moat, crafted to block customers from using their own data to fuel AI. The fine print isn’t just about compliance — it’s about making sure disruption happens on the vendor’s terms, not those of the customer.

This trajectory is not inevitable. Customers and developers can push back. Large firms, government bodies and consortia hold leverage through procurement. They can demand carve-outs that preserve ownership of outputs and guarantee the right to train AI. Developers, too, can resist punitive revenue-sharing schemes and press for fairer terms. Only collective action will ensure innovation remains in the hands of the wider AEC community, not locked in vendor boardrooms.

The tightening of EULAs and developer agreements is not happening in a vacuum. In Europe, new regulations like the Digital Markets Act (DMA) and the Data Act could directly challenge these practices. The DMA targets “gatekeepers” that restrict competition, while the Data Act enshrines customer rights to access and use data they generate, including for AI training. Clauses restricting firms from training AI on their own outputs may sit uncomfortably with these principles.

In the US, antitrust law is less settled but moving in the same direction. The FTC has signalled increased scrutiny of contract terms that suppress competition, and restrictions such as Autodesk’s AI-output restriction or Solidworks’ 10% developer royalty could draw attention.

For customers and developers, this creates negotiating leverage. Large firms, government clients, and consortia can push for carve-outs citing regulatory rights, while developers may resist punitive revenue-sharing as disproportionate. Yet smaller players face a harder reality: challenging vendors risks losing access to platforms that underpin longstanding businesses.

A Bill of Rights?

With so many software firms busily updating their business models, EULAs and terms, the one group standing still and taking the full force of this wave is customers. A constructive way forward could be the creation of a Bill of Rights for AEC software customers — a simple but powerful framework that customers could insist their vendors sign up to and be held accountable against. The goal is not to hobble innovation, but to ensure it happens on a foundation of fairness and trust, giving customers confidence that this month’s ‘we have updated our EULA’ notice will not transgress a set of core principles.

At its heart we’re suggesting five core principles:

Data Ownership – a statement that customers own what they create; vendors cannot claim control of drawings, models, or project data through the fine print.

AI Freedom – guarantees that firms may use their own outputs to train internal AI systems, preserving the ability to innovate independently rather than relying solely on vendor-driven tools.

Developer Fairness – ensures that APIs remain open, with transparent and non-punitive revenue models that allow third-party ecosystems to thrive.

Transparency – requires vendors to clearly disclose when and how customer data is used in their own AI training or analytics.

Portability – commits vendors to interoperability and open standards, so that customers are never locked into one ecosystem against their will.

Such a Bill of Rights would not prevent Autodesk, Bentley Systems, Nemetschek, Trimble and others from building profitable AI services or new subscription tiers. But it would establish clear boundaries: vendors innovate and capture value, but not at the expense of customer autonomy. For customers, developers, and ultimately the built environment itself, this would restore balance and accountability in a market where the fine print has become as important as the software itself.

AEC Magazine is now working with a group of customers, developers and software vendors to see how this could be shaped in the coming months.

Conclusion

EULAs are no longer obscure boilerplate legalese, tucked at the end of an installer. They have become the front line in a new battle, not over software piracy, but over who controls the data, workflows, and ecosystems that shape the future of design.

In my view, as currently worded, Autodesk’s clause could be interpreted as a prohibition on AI training, although this may be counter to Autodesk’s intentions with regards to customer ‘outputs’. Furthermore, Dassault Systèmes’ demand for a slice of developer revenues illustrates just how quickly the ground is shifting. Contracts are no longer just protective wrappers around software; they are strategic levers which can be used to lock in customers and monetise ecosystems.

This should concern everyone in AEC. Customers risk losing the ability to use their own project history to innovate, while mature developers face sudden, new revenue-sharing models that could undermine entire businesses. Left unchallenged, the result will be less competition, less innovation, and greater dependency on a handful of large vendors whose first loyalty is to shareholders, not users.

The only path forward I see is collective action. Customers and developers must push back, demand transparency, insist on long-term contractual safeguards, and possibly unite around a shared Bill of Rights for AEC software. The question is no longer academic: in the age of AI, do you own your tools and your data — or does your vendor own you?


Editor’s note / Autodesk response:

In response to this article, Autodesk provided the following statement:

“The clause included in Martyn Day’s recent article has been part of our Terms of Use since they were originally published in May 2018. 

 “This clause was written to prevent the use of AI/ML technology to reverse engineer Autodesk’s IP or clone Autodesk’s product functionalities, a common protection for software companies. It does not broadly restrict our users’ ability to use their IP or give Autodesk ownership to our users’ content.

“We know things are moving fast with the accelerated advancement in AI/ML technology. We, along with just about every software company, are adapting to this changing landscape, which includes actively assessing how best to meet the evolving needs and use cases of our customers while protecting Autodesk’s core IP rights. As these technologies advance, so will our approach, and we look forward to sharing more in the months ahead.”

Autodesk also clarified that the License and Services Agreement only applies to legacy customers who still use perpetual licences. The Terms of Use from May 2018 supersede that agreement to cover both desktop and cloud services.


Correction (8 Oct 2025): An earlier version of this article incorrectly suggested that the changes to the Terms of Use were made in May 2025. Based on Autodesk’s statement above, this article has been corrected and updated for clarity.


D5 Render’s response

In response to this article, D5 Render provided the following statement:

We fully understand and share the community’s concerns regarding data rights in the evolving field of AI. We remain committed to maintaining clear and fair agreements that protect user rights while fostering innovation.

Our Terms of Service (publicly available at www.d5render.com/service-agreement) do not claim any ownership or perpetual usage rights over user-generated content, including AI-rendered images. On the contrary, Section 6 of our Terms of Service explicitly states that users “retain rights and ownership of the Content to the fullest extent possible under applicable law; D5 does not claim any ownership rights to the Content.”

When users upload content to our services, D5 is granted only a non-exclusive, purpose-limited operational license, which is a standard clause in most cloud-based software products. This license merely allows us to technically operate, maintain, and improve the service. D5 will never use user content as training data for the Services or for developing new products or services without users’ express consent.

As for liability, Sections 8 and 9 of our Terms of Service are standard in the software industry. They are designed to protect D5 from risks arising from user-uploaded content that infringes on third-party rights. These clauses are not intended to transfer the liability of D5’s own actions to users.


Explainer #1 – EULA vs Terms of Use: what’s the difference?

At first glance, a EULA (End User Licence Agreement) and Terms of Use can look like the same thing. In practice, they operate at different levels — and together form the legal framework that governs how customers engage with software and cloud services.

The EULA is the traditional licence agreement tied to desktop software. It explains that you do not own the software itself, only the right to use it under certain conditions. Typical clauses cover installation limits, restrictions on copying or reverse-engineering, and confirmation that the software is licensed, not sold.

The Terms of Use apply more broadly to online services, platforms, APIs and cloud tools. They include acceptable use rules, data storage and sharing conditions, API restrictions, and often a right for the vendor to change the terms unilaterally.

One unresolved issue is how to interpret contradictions. If the EULA states ‘you own your work’ but the Acceptable Use Policy restricts what you can do with that work, and neither agreement specifies which takes precedence, which clause governs? In practice, customers may only discover the answer in the event of a dispute — an unsettling prospect for firms relying on predictable rights.


Explainer #2 – Why is data the new goldmine?

As the industry moves into an era defined by artificial intelligence and machine learning, customer content has become more than just the product of design work; it has become the raw material for training and insight.

BIM and CAD models are no longer viewed solely as deliverables for projects, but as vast datasets that can be mined for patterns, efficiencies, and predictive value. This is why software vendors increasingly frame customer content as “data goods” rather than private work.

With so much of the design process shifting to cloud-based platforms, vendors are in a powerful position to influence, and often restrict, how those datasets can be accessed and reused.

The old mantra that “data is the new oil” captures this shift neatly: just as oil companies controlled not only the drilling but also the refining and distribution, software firms now want to control both the pipelines of design data and the AI refineries that turn it into intelligence.

What used to be customer-owned project history is being reconceptualised as a strategic asset for software vendors themselves, and EULAs and Terms of Use are the contractual tools that allow them to lock down that value.


Explainer #3 – Autodesk’s Terms of Use

What it says

Autodesk’s Acceptable Use Policy (AUP) appears to ban AI/ML training on any “output” from its software unless done within Autodesk’s APS cloud. This could include models, drawings, exports, even IFCs.

Why it matters

Customers risk losing the ability to train internal AI on their own design history. Strict licence audits mean firms could be flagged non-compliant even without intent.

Legal experts warn the AUP’s broad claims over “outputs” may conflict with copyright law, which in many jurisdictions gives authors automatic ownership of their creations.

Consultants could face knock-on risks if client contracts require them to warrant full ownership of deliverables — raising potential indemnity exposure.

Autodesk gains leverage by funnelling AI innovation into its paid ecosystem.

The big picture

This move mirrors gatekeeping strategies in other tech sectors, where platforms wall off data to consolidate control. Regulators in the EU (Digital Markets Act, Data Act) and US antitrust bodies are increasingly scrutinising such practices.


Explainer #4 – Developers at risk

What changed?

Autodesk has overhauled Autodesk Platform Services (APS): APIs are now metered, consumption-based, and gated by stricter terms. While firms can still build internal tools, sharing or commercialising scripts now requires Autodesk’s explicit approval.

Why it matters

Independent developers face new costs and quotas for integrations that were once bundled into subscription fees. In-house teams must now budget for API usage, turning process automation into an ongoing operational cost.

Quota limits mean projects risk disruption if thresholds are unexpectedly exceeded mid-delivery.

The contractual line between “internal tool” and “commercial app” is now defined by Autodesk, not developers.

Innovation that once flowed freely into the wider ecosystem is fenced in, with Autodesk deciding what can be shared.

The big picture

Across the CAD/BIM sector, developer ecosystems are being monetised and restricted to generate shareholder returns. What were once open innovation pipelines are narrowing into vendor-controlled platforms, threatening the independence of smaller developers and reducing customer choice.


Recommended viewing: May Winfield @ NXT DEV

May Winfield

At AEC Magazine’s NXT DEV event this year, May Winfield, global director of commercial, legal and digital risks at Buro Happold, presented “EULA and Other Agreements: You signed up to what?”, in which she invited the audience to reconsider the contracts they’ve implicitly accepted.

How many users digest the fine print of EULAs and AI tool terms? Winfield warns that their assumptions often misalign with contractual reality and highlights key clauses that tend to lurk in user agreements: ownership of content, usage rights, and liability limitations.

In her presentation, Winfield does not offer legal advice, but she provides a practical reminder: what you think you own or can do might be constrained by what you signed up to — underscoring the urgency for users, developers, and governance bodies to delve into EULAs and demand clarity.

■ Watch @ www.nxtaec.com

The post Contract killers: how EULAs are shifting power from users to shareholders appeared first on AEC Magazine.

FenestraPro – façade design / envelope analysis
https://aecmag.com/sustainability/fenestrapro-facade-design-envelope-analysis/
Fri, 03 Oct 2025 08:14:07 +0000
This façade design optimisation tool works with Revit and Forma to help create sustainable, detailed designs

FenestraPro offers a façade design optimisation tool for Revit and an envelope analysis tool for Forma that, when combined, can be used in workflows to create sustainable, detailed designs, writes Martyn Day

The building envelope has always been one of architecture’s most demanding battlegrounds. A façade is expected to satisfy multiple, often conflicting requirements. It must express design intent, meet performance targets for energy efficiency, comfort and daylight, and comply with regulations.

Traditionally, the assessments needed to ensure these requirements are met have been left until late in projects, once a design is largely fixed and alterations become expensive.

Dublin-based FenestraPro was created to address this issue, giving architects direct access to façade performance tools inside their existing BIM workflows, at the point when their decisions can most effectively influence outcomes.

Established in 2012 by technologists Simon Whelan and Dave Palmer, FenestraPro emerged from a frustration with digital analysis tools that were either too specialist for day-to-day design work or too disconnected from the platforms that architects actually use.


Find this article plus many more in the September / October 2025 Edition of AEC Magazine
👉 Subscribe FREE here 👈

The goal was to bring performance data into the design process itself, enabling architects to weigh the consequences of their choices while still sketching and modelling.

Today, FenestraPro is used by international firms such as AECOM, Jacobs and HKS, where architects and engineers rely on it to help close the gap between aesthetic intent and energy performance.

Face value

FenestraPro’s technology centres on façade analysis and offers deep integration with Autodesk environments. Its best-known product, FenestraPro for Revit, runs as an add-on and allows users to test glazing proportions, shading devices and material selections without leaving their BIM model.

A partner application extends similar functionality into Autodesk’s emerging Forma conceptual design platform, enabling performance analysis from the massing stage onwards. In this way, designers can quickly evaluate how orientation, window-to-wall ratios or shading strategies will affect daylight levels and energy use.

Instead of waiting on external reports, the system provides immediate feedback, with colour-coded surfaces and dynamic charts that highlight potential problem areas such as glare or excessive solar gain.
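FenestraPro’s internal calculations aren’t published, but the kind of quick check it automates can be sketched in a few lines. The function names, the SHGC value and the first-order gain formula (Q = A × SHGC × I) below are illustrative assumptions, not details of the product:

```python
# Hypothetical sketch of the kind of quick facade check FenestraPro
# automates. Function names and values are illustrative, not taken
# from the product.

def window_to_wall_ratio(glazed_area_m2: float, gross_wall_area_m2: float) -> float:
    """Fraction of the facade area that is glazing."""
    return glazed_area_m2 / gross_wall_area_m2

def solar_gain_w(glazed_area_m2: float, shgc: float, irradiance_w_m2: float) -> float:
    """First-order solar heat gain: Q = A * SHGC * I (watts)."""
    return glazed_area_m2 * shgc * irradiance_w_m2

# A 90 m2 facade with 36 m2 of glazing (SHGC 0.4) under 500 W/m2 sun:
wwr = window_to_wall_ratio(36.0, 90.0)   # 0.4
gain = solar_gain_w(36.0, 0.4, 500.0)    # ~7,200 W
```

A tool of this sort would recompute such figures on every geometry edit and colour-code any surface that exceeds a target threshold.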

The software deliberately avoids imposing the heavy computational demands associated with full building simulation tools. Instead, it delivers a lightweight, responsive engine designed for iteration.



This makes it possible for users to compare multiple façade options in quick succession, guiding design choices before geometry becomes too fixed. The package also incorporates a database of more than a thousand glazing products, complete with accurate thermal and solar properties. Recent integrations, such as a link with Vitro Architectural Glass, allow data from manufacturers’ specification platforms to flow directly into the FenestraPro environment, grounding analysis in real-world products rather than generic assumptions.

As projects evolve, the software continues to add value. It supports detailed façade modelling inside Revit, from panelisation through to mullion layouts, while maintaining live performance feedback.

One notable feature is its ability to identify errors or weaknesses in BIM energy models – issues that can compromise downstream analysis. By flagging these early, the tool ensures that data exported from Revit is both reliable and compliant. Reports and outputs can then be generated for a range of uses, from compliance submissions to client presentations.

Design teams can evaluate options in minutes, not days, which accelerates iteration and avoids costly late-stage changes. Building owners get the assurance that the building envelope has been optimised for operational energy consumption and improved occupant comfort. Meanwhile, architects can have greater confidence that their aesthetic choices will work in harmony with performance and sustainability requirements.

Connecting the dots

FenestraPro does not aim to replace engineering-grade simulation packages. Instead, it focuses on providing architects with the early intelligence they need to make smart façade decisions. By connecting the dots between early-stage exploration in Forma and detailed design in Revit, the platform promotes a joined-up approach to performance.

With sustainability targets becoming stricter and clients demanding more accountability, tools that embed envelope analytics into mainstream BIM workflows are gaining in importance.

FenestraPro’s strategy is to complement existing design environments, rather than reinvent them, positioning itself as a pragmatic but powerful partner in the pursuit of sustainable architecture.

Prices start at $29 per month for Envelope Analysis in Forma and $149 per month for a Premium offering, which adds Revit integration, detailed thermal analysis, carbon benchmarking, model checking and export tools. Discounts are available for teams.

The post FenestraPro – façade design / envelope analysis appeared first on AEC Magazine.

Vectorworks 2026
https://aecmag.com/bim/vectorworks-2026/
Thu, 09 Oct 2025 05:00:25 +0000
Martyn Day explores how the Vectorworks product set is evolving under new CEO Jason Pletcher

The arrival of Autumn also means the arrival of Vectorworks’ annual updates to its Architect, Landmark, Spotlight and Design Suite products. Martyn Day looks at how the product set is evolving under new Vectorworks CEO Jason Pletcher

Vectorworks has undergone some big changes over the last couple of years, as it navigates the shift to a more subscription-based model for customers and, more recently, adapts to new leadership. With Jason Pletcher now at the company’s helm, there could be further transformation ahead.

Pletcher was announced as the new CEO of Vectorworks in February 2025, taking the reins from Dr Biplab Sarkar, who retired in March after an impressive 25-year tenure at the company.

Pletcher came to Vectorworks from another Nemetschek brand, GoCanvas, where he served as chief operating and financial officer and, according to Nemetschek executives, was instrumental in almost quadrupling GoCanvas’ business over a 5-year period.

Hopes are presumably high that he can pull off a similar trick at Vectorworks, improving its business and expanding its market reach.



The new Vectorworks CEO has wasted no time in emphasising his conviction that design creativity should drive business results, rather than be hindered by software limitations. That’s an interesting statement, perhaps suggesting that Vectorworks might be readying itself to explore the world of cloud-based services, a market in which GoCanvas already operates as a provider of mobile field work management software.

Moving forward

One thing that hasn’t changed, however, is Vectorworks’ commitment to providing its customers with an annual refresh of product capabilities – with the additional flourish this year of declaring Vectorworks 2026 as its most “forward thinking software version yet”.

As Pletcher put it: “Designers are ambitious and Vectorworks 2026 offers the tools to transform their big ideas into reality. Our latest version allows designers to work more efficiently, break free from busy work, automate manual processes and unleash their design freedom, so their best work can move forward.”

The overarching themes of this version include integrating sustainability metrics, enhancing collaboration and reducing manual and repetitive tasks through smarter automation.

On that last point, various updates across the portfolio – which includes the Architect, Landmark, Spotlight and Design Suite products – are engineered to automate routine adjustments, increase productivity and give designers more time for exploration and design refinement.

For example, the automated Depth Cueing feature is designed to improve the clarity and spatial depth of drawings with minimal user intervention, dynamically adjusting the visual properties of objects based on their distance from the viewer in both Hidden Line and Shaded viewports.

This includes the automatic manipulation of line weights, tonal values and pixel transparency, causing objects farther away to appear lighter or fainter, while foreground elements remain prominent. This feature is most impactful for generating presentation-quality elevations and sections directly from a model, significantly improving the graphical output for design review and client communication.
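As a rough illustration of how such depth cueing works, line weight and opacity can be interpolated linearly between near and far planes. The parameter names and ranges below are assumptions, not Vectorworks’ actual implementation:

```python
# Illustrative depth-cueing sketch: fade line weight and opacity with
# distance from the viewer. Parameter names and ranges are assumptions,
# not Vectorworks' actual implementation.

def depth_cue(distance: float, near: float, far: float,
              weight_near: float = 0.7, weight_far: float = 0.18,
              alpha_near: float = 1.0, alpha_far: float = 0.25):
    """Return (line_weight_mm, opacity) interpolated by depth."""
    t = (distance - near) / (far - near)
    t = min(max(t, 0.0), 1.0)            # clamp to the cueing range
    weight = weight_near + t * (weight_far - weight_near)
    alpha = alpha_near + t * (alpha_far - alpha_near)
    return weight, alpha

foreground = depth_cue(0.0, near=0.0, far=100.0)    # heaviest, fully opaque
background = depth_cue(100.0, near=0.0, far=100.0)  # lightest, most transparent
```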

With Worksheet User Interface and Slicing, meanwhile, customers will see a new ribbon-style toolbar that provides them with a more intuitive interface for worksheet operations. The new slicing capability allows users to split large, complex reports into smaller, linked sections – particularly useful for controlling page layouts, as it ensures data fits neatly within specified print areas without manual reformatting. The interface now supports pinned headers that remain visible during scrolling. These updates make creating complex reports and documents more manageable, according to Vectorworks executives.



Elsewhere, File Health Checker is a new palette designed to maintain project performance and stability, available only to subscription customers. This diagnostic tool proactively scans active documents for issues likely to degrade performance (such as hidden geometry or resource inefficiencies). The workflow presents users with smart suggestions to resolve these problems, many of which can be executed with a single click. The aim here is to tackle a common pain point in collaborative projects, where imported third-party files can introduce performance-degrading data and even lead to file corruption.

When it comes to Vectorworks’ own graphical scripting tool, Marionette, key updates streamline the process of creating custom parametric objects and workflows. Marionette now supports Python-powered nodes for faster execution, and expanded Python library support gives access to a large ecosystem of existing Python libraries for complex data manipulation, geometric calculations and interoperability tasks. Vectorworks executives hope this streamlining will make Marionette a more direct competitor to McNeel Grasshopper and Autodesk Dynamo.

Finally, 3D modelling gets a new Offset Face mode within the Push/Pull tool, to enable simultaneous offsetting of multiple planar and non-planar faces on a 3D model. Users can adjust multiple surfaces at one time, without having to recreate dependent features such as fillets. The tool also provides a real-time preview and allows for on-the-fly parameter adjustments.

Architect-specific enhancements

In the Vectorworks Architect 2026 product, updates focus on advanced BIM workflows and integrated sustainability analysis. For example, there are now tools to assist in designing in line with certifications such as LEED and BREEAM and in compliance with regulations such as the UK’s Biodiversity Net Gain (BNG) law.

A new sustainability dashboard provides a number of environmental analysis tools via one interface. It provides real-time monitoring of sustainability metrics as a design evolves, tracking specific data points including embodied carbon calculations, urban greening scores, biomass density and BNG.

A door and window assembly tool supports the creation of complex architectural openings, enabling users to combine elements such as doors, windows, symbols and panels into single, unified assembly objects. (Previously, this was an error-prone process that often omitted data from schedules and quantity take-offs.) This new tool replaces manual workarounds with fully parametric and data-rich objects.

New detailing capabilities for the 2D graphical representation of walls, doors and windows allow for the customisation of 2D graphics at multiple detail levels, ensuring construction documents appear exactly as intended. By automating the creation of high-quality, standards-compliant drawings, the tool helps maintain consistency and accuracy across document sets while saving time.

Data Manager, meanwhile, now has an enhanced focus on accelerating and simplifying BIM workflows. The tool’s primary role is to automate data standards compliance. This version streamlines Industry Foundation Classes (IFC) data mapping across different versions and drives data compliance with project-specific or industry-wide BIM standards.

Landmark for landscaping

Vectorworks is the industry’s only BIM tool with a dedicated ‘flavour’ for landscaping design. In this release, there’s a new Plant Style Manager, a spreadsheet-style tool that helps users to build, manage and customise a dedicated plant library. It supports batch editing, importing data from nursery partners and plant placement. Since it’s based on a centralised system, this capability drives data consistency from design through to procurement.

The existing Tree tool is improved to support the creation of more realistic and data-rich tree models for regulation-compliant landscape design. The most significant enhancements are support for Maxon Plant Geometry, image props and 3D symbols. Existing trees can be integrated with geographic information system (GIS) data.

Grade Objects have been enhanced and can be created using curves and polylines in both 2D and 3D views. The tool integrates with data tags, allowing for instant labelling of elevations and streamlined reporting of site grading information.

Finally, the Massing Model tool has been updated to accommodate the planning of mixed-use structures. The tool now allows designers to define unique heights, classes and usages for individual floors within a single massing model object.


Vectorworks Landmark 2026: plant style manager

Spotlight for entertainment

The one market in which Vectorworks stands alone is in providing CAD/BIM capabilities for entertainment design, particularly stage and theatre design, covering everything from lighting and mixing desks to stage elements.

The updates in Spotlight 2026 focus on streamlining the design of advanced A/V equipment and on improving collaborative workflows for live events and installations.

There’s a new LED Wall creation tool, which can create walls of virtually any shape, including flat, curved and three-dimensional forms. The tool can calculate technical specifications, such as power and data requirements, overall size and weight, and pixel resolution.
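The arithmetic behind such totals is straightforward. A hypothetical back-of-envelope sketch for a flat wall of identical panels might look like the following; all panel specifications below are invented examples, not defaults from the tool:

```python
# Hypothetical back-of-envelope for an LED wall built from identical
# panels, giving the kind of totals the Spotlight tool reports.
# All panel specs below are invented examples.

from dataclasses import dataclass

@dataclass
class LedPanel:
    width_mm: int = 500
    height_mm: int = 500
    pixels_x: int = 104          # e.g. roughly 4.8 mm pixel pitch
    pixels_y: int = 104
    max_power_w: float = 150.0
    weight_kg: float = 7.5

def wall_specs(cols: int, rows: int, p: LedPanel = LedPanel()) -> dict:
    """Aggregate size, resolution, power and weight for a cols x rows wall."""
    count = cols * rows
    return {
        "size_m": (cols * p.width_mm / 1000, rows * p.height_mm / 1000),
        "resolution_px": (cols * p.pixels_x, rows * p.pixels_y),
        "max_power_kw": count * p.max_power_w / 1000,
        "weight_kg": count * p.weight_kg,
    }

specs = wall_specs(8, 4)   # a 4.0 m x 2.0 m wall: 832 x 416 px, 4.8 kW, 240 kg
```

The real tool works from manufacturer panel data rather than hand-entered defaults, but the aggregation it performs is of this general shape.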

A new, dedicated tool for common rigging hardware (specifically clamps and side arms) has been added, replacing the previous method of using generic symbols or complex grouped objects, which often led to inaccurate inventory counts and imprecise geometry in rigging plots requiring significant manual verification.

Spotlight now supports MVRxchange, a local network protocol that allows users to instantly share, commit and request My Virtual Rig (MVR) files with other connected applications, such as lighting consoles or pre-visualisation software.

The Showcase feature for real-time visualisation has had several enhancements, including animated fog for creating atmospheric effects, false colour rendering for technical lighting analysis and DMX-driven control of lighting devices. There are also user interface enhancements for tuning the output.


Vectorworks Spotlight 2026: LED video wall

Future directions

Vectorworks is fleshing out its formative cloud services offering. In this release, it aims to offload some of the processing work from the desktop to the cloud. There’s a new ‘Cloud Status’ widget integrated directly into the Vectorworks view bar, which provides real-time updates on the progress of cloud processing jobs and direct access to results without leaving the desktop application.

For subscribers only, Vectorworks’ cloud services can process Revit and IFC file imports, offloading the processing of large files so that workstations aren’t locked up for 30 minutes. Users can carry on working, uninterrupted.

For now, there seems to be a pretty good spread of features for all users in the various disciplines that Vectorworks targets. There is a clear drive to assist with automation and reporting, increasing document accuracy and productivity.

Those features that are limited to subscribers, we would suggest, are highly desirable and fit well with the company’s drive to get customers onto subscription contracts.

With a new CEO on board – and one recruited from a SaaS provider – we anticipate an increasing effort to convert the customer base to subscription payments over the coming years, along with greater cloud integration.

The post Vectorworks 2026 appeared first on AEC Magazine.

Snaptrude on AI https://aecmag.com/bim/snaptrude-on-ai/ https://aecmag.com/bim/snaptrude-on-ai/#disqus_thread Mon, 06 Oct 2025 07:46:28 +0000 https://aecmag.com/?p=24904 AEC Magazine spoke with Snaptrude CEO Altaf Ganihar about the AI capabilities that his company is about to launch

After years of AI hype, we’re starting to see AI technology appear in established BIM applications. Autodesk is already on the case, but Revit’s competitors are not far behind. AEC Magazine spoke with Snaptrude CEO Altaf Ganihar about the AI capabilities that his company is about to launch

Artificial intelligence and machine learning promise so much for automation and retention of knowledge in the future – but right now, we’re still looking for killer features and applications that can be used by everybody.

Key vendors such as Autodesk are starting to provide clues as to how these might look, in the form of new AI capabilities in specific workflows. In this first phase of deployment, we expect to see AI applied as a copilot in defined functions and workflows and delivering productivity benefits in very generic workflows, particularly those associated with conceptual and querying tools.

The likely long-term implications of AI in the AEC industry are much harder to assess. Customers will have access to software on demand, where AI will create custom programmes to solve client-specified problems, without the need to acquire or download a vendor’s generic application. New levels of automation will significantly challenge current thinking around architectural billable hours as it proves its ability to make decisions based on huge numbers of competing constraints far faster than any human. It will radically transform detail and drawing output.


Find this article plus many more in the September / October 2025 Edition of AEC Magazine
👉 Subscribe FREE here 👈

Recently, AEC Magazine caught up with Snaptrude CEO Altaf Ganihar to discuss the company’s imminent AI update and its likely impact. We kicked off the conversation with a brief look at Snaptrude’s development to date and how its new AI tools are designed to complement the highly workflow-led nature of its BIM tool.

Four phases in Snaptrude

In this year’s release, a lot of thought has gone into how Snaptrude breaks down the design process into distinct phases and at which point new AI updates should help automate and rationalise Snaptrude’s workflow methodology.

Phase 1 focuses on concept design generation via AI (Autonomous Mode). This process begins when the user inputs a prompt (such as an RFP, a brief or an Excel-based room schedule) that describes the desired building. This could be a seven-storey culinary institute that includes student and faculty housing at a particular university or college, for example. This process involves:

Setting data and constraints. Here, accurate site data, including the plot’s parcelling and zoning codes, is loaded. Snaptrude then studies the site context and considers the requirements, automatically creating an RFP by making assumptions (if no specific rationale is provided) and considering factors such as climate, floor allowance, flood zones and the plot’s zoning code.

AI orchestration. A master AI orchestrates the sequence, instructing specialised AI agents what to do and when. This orchestration involves physics and climate-aware models, as well as large language models or LLMs, to perform tasks such as creating the initial programme, conducting climate analysis, studying adjacencies, and generating a massing envelope.

Output. The autonomous AI process generates a working model, aiming for an LOD 250/300-ish model that follows adjacencies and complies with zoning/building codes. It typically delivers this output in about seven to ten minutes. The AI also provides reasoning for its decisions and presents diagrams, which serve as the first few presentation slides for a developer.
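The orchestration described above – a master AI sequencing specialised agents over a shared project state – can be sketched in outline. The code below is purely illustrative: the agent names, state fields and rules are our own assumptions, not Snaptrude’s implementation.

```python
# Illustrative orchestration sketch: a master controller dispatches
# specialised "agents" (plain callables here) in sequence, each reading
# and enriching a shared project state. Hypothetical names throughout.

def create_programme(state):
    # Derive a crude room programme from the brief (stubbed at 40 m2 each)
    state["programme"] = [{"room": r, "area_m2": 40} for r in state["brief"]["rooms"]]
    return state

def climate_analysis(state):
    # Toy climate rule: favour a south-facing aspect in the northern hemisphere
    state["orientation"] = "south" if state["site"]["hemisphere"] == "north" else "north"
    return state

def generate_massing(state):
    # Pack the total programme area into the permitted number of floors
    total = sum(r["area_m2"] for r in state["programme"])
    floors = state["site"]["max_floors"]
    state["massing"] = {"floors": floors, "footprint_m2": total / floors}
    return state

class MasterOrchestrator:
    """Runs each agent in order over a shared state dictionary."""
    def __init__(self, agents):
        self.agents = agents

    def run(self, state):
        for agent in self.agents:
            state = agent(state)
        return state

state = {
    "brief": {"rooms": ["kitchen", "dorm", "lecture"]},
    "site": {"hemisphere": "north", "max_floors": 7},
}
result = MasterOrchestrator(
    [create_programme, climate_analysis, generate_massing]
).run(state)
```

In a real system each agent would be a physics-aware model or an LLM call rather than a stub, but the dispatch structure is the same.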


Snaptrude



Phase 2 involves refinement and artistic input, conducted in design/editing mode by an architect. This involves:

Envelope editing. The initial envelope generated by the AI is considered a ‘draft’ and can be modified to make it ‘more fancy’ by using tools such as Boolean operations, or importing complex elements such as facades from a tool like Rhino.

Repacking/resolving. When an envelope is altered, the AI understands the new geometry and can be instructed to repack the programme (space planning) within any new constraints. If the required programme cannot fit, the software flags up the violation by showing in programme mode that ‘target versus achieved’ has gone down.

Delegation. The architect can delegate specific tasks back to the AI, which refines or applies checks, such as researching building codes, showing best-practice adjacencies or providing floor-planning for a specific floor.


Phase 3 aims at achieving a more detailed state and sees the project move into BIM mode. This involves:

Detailing and compliance. At this stage, elements like doors, fire exits and detailed components are addressed. The AI helps transition the design by choosing appropriate detailed components, such as fire-rated walls for corridors, based on metadata and historical project data (for example, from ten previous hospital projects).

A lot of thought has gone into how Snaptrude breaks down the design process into distinct phases and at which point new AI updates should help automate and rationalise Snaptrude’s workflow methodology

Model quality. The goal is to reach an LOD 300 model, which requires more detailed Revit families, though users can manually make changes, as the environment is a full authoring tool. The software uses its own data schema to understand and rationalise all geometry, including imported Revit files, helping it make decisions based on metadata such as construction cost, demolition cost, and procurement processes.


Finally, in Phase 4, the project moves into presentation mode for documentation. Here, AI drives auto-documentation and auto-drawings, with the system automatically creating floor plans, 3D views, and adjacency and bubble diagrams. The user can then configure these.

The goal is that, right up until the schematic phase, users should not have to touch traditional BIM tools such as Revit.



Q&A with Snaptrude CEO Altaf Ganihar

AEC Magazine: Altaf, as Snaptrude has developed, you and your team have clearly rethought many of its workflows and features. So what can you tell us about how AI will change the way that Snaptrude sees next generation BIM tools?

Snaptrude
Altaf Ganihar

Altaf Ganihar: Snaptrude’s first AI deliverables aim to automate much of the early concept design phase, combining various critical checks into a single process. What we are launching in October 2025 will do most of the concept design, literally taking users from an RFP to an LOD 300 model. And it’s not just some random shape that is generated, but something that follows adjacencies, something which looks at zoning codes, building codes, and takes into account the climate to generate a building option, automatically, in seven to ten minutes. It’s very different and much quicker than manual BIM 1.0 development.

The strength of the software lies in its comprehensive, connected ecosystem, initiated by a spreadsheet-like environment. This drives the programming, the massing tool, the early BIM tool, and the presentation and Miro-like interface for documentation. Everything is live. So you make a change in your spreadsheet, design updates, presentation updates, render – it’s all here. You can tell the full story without having to go out of Snaptrude.


AEC Magazine: Is it fair to say that you rethought Snaptrude and, instead of a single application, you chose to break it down?

Altaf Ganihar: Snaptrude is built on four modes: programming to host data, design for geometry, BIM for detail, and geometry and presentation for documentation. We built it such that each one of these modes is an independent product, but also connected, and each one of them has AI agents to do tasks.

Regarding geometry flexibility, Snaptrude has added Boolean functions and integrates Rhino geometry both ways, allowing users to import complex designs, edit them in Snaptrude and potentially take them back to Rhino.

(Note: Altaf added that Snaptrude is also working on allowing users to import a complex Rhino envelope and reverse-engineer the internals, which we think would be hugely useful to signature architects using a Rhino-first approach.)


AEC Magazine: Many people are worried about AI’s propensity to hallucinate. Is this a concern for you and how does Snaptrude address this risk?

Altaf Ganihar: Snaptrude’s approach uses a sophisticated, multi-layered AI architecture to ensure outcomes are constrained, deterministic and compliant with real-world physics and codes. If you follow recent AI developments closely and layer AI using sophisticated techniques, you can get very little hallucination – in fact, almost no hallucination.

The Snaptrude system uses multiple AI models. The key technique is to have one AI do the creation and another AI critique it. That critique should be based on either actual geometry or numbers – something deterministic. It’s a combination of AI modules. Some of them are LLMs, some are not, and some we had to build ourselves – ones that use physics and climate-aware models. They’re all run by this master AI, which figures out which one to use and when.
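The create-and-critique technique Ganihar describes can be illustrated with a deliberately simple loop: one function proposes a massing, a second checks it against a deterministic number (here, a hypothetical floor-area-ratio cap), and the proposal is revised until the critic passes it. Every name and rule below is illustrative, not Snaptrude’s code.

```python
# One component creates, another critiques against deterministic numbers.
# Here the "critic" is a floor-area-ratio (FAR) check; the reviser simply
# sheds floors until the check passes. Illustrative rules only.

def propose(floors, footprint):
    return {"floors": floors, "footprint": footprint, "gfa": floors * footprint}

def critique(design, site_area, max_far=2.5):
    # Deterministic check: gross floor area / site area must not exceed the cap
    far = design["gfa"] / site_area
    return far <= max_far, far

def generate_compliant(site_area, floors=10, footprint=800):
    design = propose(floors, footprint)
    ok, far = critique(design, site_area)
    while not ok:
        floors -= 1                               # revise the proposal...
        design = propose(floors, footprint)
        ok, far = critique(design, site_area)     # ...and re-critique it
    return design, far

design, far = generate_compliant(site_area=2000)
```

Because the critic works on geometry and numbers rather than on generated text, a hallucinated proposal simply fails the check and gets revised.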


AEC Magazine: And what about the concern that AI will eliminate jobs in the AEC industry, with automation meaning that fewer architects are required?

Altaf Ganihar: The goal of the AI is not to replace the architect entirely, but to automate repetitive and ‘boring’ tasks, allowing professionals to focus on creativity. We position our AI as a helpful collaborator or intern. You are the principal architect and you do the creative jobs. You don’t want to sit and do research on building codes and fit this mass in. Well, delegate that to the AI, come back after a coffee, and then edit the design. You should be able to go from an RFP to a design presentation in a few minutes, and take as much control as you need, because at the end of the day, you’re the architect.


AEC Magazine: The whole business model for software firms in an age of AI has still to be worked out. On the face of it, it would seem to remove the need for licences with automation. What do you think the business model of the future might look like?

Altaf Ganihar: I think we have to move away from subscriptions. We are moving away from subscriptions with this launch. The planned pricing structure involves tokens, and customers can use these tokens however they want, in terms of paying for processing.


AEC Magazine: For architects who charge per hour, might it not be problematic that AI is not only automating but also speeding up workflows?

Altaf Ganihar: I think fees generally would have to go up, and people have to move away from this ‘billable hour’ concept. Maybe software costs need to be directly linked to project profitability. You need to align it to the right outcomes. If you’re getting more projects, you spend more, and you consider that spend as part of the profit/loss of the project. Maybe if you want to stick to that, you’ll have to consider software as a person.


AEC Magazine: And what can you tell us about training and protecting the IP of your customers?

Altaf Ganihar: Our software is built to be enterprise-ready and capable of handling the proprietary intellectual property of large architectural firms such as Gensler and HOK, treating the AI as a platform using firm-specific IP. The thing is, we can customise it for each company. Like, you can connect your Google Drive or Dropbox tomorrow and start using your data to make decisions privately. We have built the product that way.

So, the AI is a platform, it’s not a tool. We can swap out the knowledge base. You can think of those customers I mentioned using their own Gensler knowledge base or HOK knowledge base.

And when it comes to training, we have made it possible to connect your Google Drive or SharePoint or Egnyte or ACC with us. The AI can then dynamically call up all the past 20 hospital projects, find all the PDFs, all the spreadsheets, all the Revit files and discover the design data and adjacencies.


AEC Magazine: To conclude, what comes next following this initial AI-enabled release?

Altaf Ganihar: This is Version 1. It’s the starting point, similar to early GPT models. There will be a V2 or V3 with significant improvements by the end of the year and the immediate feedback loop from users is what will drive incremental development at Snaptrude.


The post Snaptrude on AI appeared first on AEC Magazine.

Explorations in GeoBIM https://aecmag.com/geospatial/explorations-in-geobim/ https://aecmag.com/geospatial/explorations-in-geobim/#disqus_thread Thu, 09 Oct 2025 05:00:29 +0000 https://aecmag.com/?p=24854 We caught up with Esri’s Marc Goldman to discuss the geospatial company’s focus on BIM integration

With more AEC collaborative design solutions available, employees in disciplines that once worked in silos are increasingly connected and sharing information with their colleagues. Martyn Day caught up with Marc Goldman, director of AEC industry at Esri, to discuss the company’s focus on BIM integration

Since 2017, Esri and Autodesk have pursued a strategic partnership to bridge longstanding divides between GIS (geospatial) and BIM (building/infrastructure design) data.

The shared ambition of executives at the two companies is to enable engineers, planners and asset owners to author, analyse and manage projects in a unified, spatially aware environment, from design through to operations.

Initially, the two companies announced plans to build a ‘bridge’ between BIM and GIS, so that Revit models could be brought into Esri platforms and to support enhanced workflows in ArcGIS Indoors and ArcGIS Urban.



Over time, this partnership has evolved, to include Connectors for ArcGIS – tools for Civil 3D, InfraWorks, and AutoCAD Map3D – that support live linking of GIS data into BIM software with bidirectional updates.

Today, that integration is embodied by ArcGIS GeoBIM, a web-based platform linking Autodesk Construction Cloud (also known as ACC and previously named BIM 360) to Esri’s ArcGIS. This enables project teams to visualise, query and coordinate BIM models within their real-world geographic context, according to Marc Goldman, director of AEC industry at Esri.

“GeoBIM provides a common dashboard for large-scale projects, allowing AEC firms and owner-operators to visualise GIS context alongside BIM content and object properties, even though the source files may reside in ACC,” he explains.

The technical integration now takes two distinct forms, tailored to project needs.

Esri
ArcGIS for Autodesk Forma

The first is Building Layers with ArcGIS Pro, to support detailed, element-level analysis, design review and asset management. Models retain full BIM structure, including geometry, categories, phases and attributes, enabling precise filtering by architectural element or building level.

The second is Simplified 3D Models with ArcGIS GeoBIM, introduced in June 2025, to optimise performance and agility for construction monitoring, mobile workflows and stakeholder engagement. The Add Document Models tool generates lightweight, georeferenced models from Revit and IFC files while preserving links back to their source.

Esri has also extended its partnership with Autodesk with ArcGIS for Autodesk Forma, embedding geospatial reference data directly into Autodesk’s cloud-based planning platform. Forma users can now draw on the ArcGIS Living Atlas, municipal datasets and enterprise geodatabases, all natively georeferenced. This allows environmental, infrastructure, zoning and demographic layers to be overlaid onto early-stage conceptual designs.

“GeoBIM provides a common dashboard for large-scale projects, allowing AEC firms and owner-operators to visualise GIS context alongside BIM content and object properties, even though the source files may reside in ACC” – Marc Goldman, director of AEC industry, Esri

As Goldman notes, “Designs created in Forma inherit coordinate systems and spatial metadata, ensuring that when they move downstream into Revit, Civil 3D or ArcGIS Pro, they remain consistent and location-aware. Beyond visualisation, ArcGIS for Forma supports rapid scenario testing, such as climate risk or transport connectivity, within the context of a live GIS fabric.”

Autodesk Tandem and the broader world of digital twins have also caught the attention of executives at Esri, he adds: “Esri is working with the Tandem team to serve GIS context for customers managing clusters of buildings. This could enable Tandem to evolve into a multi-building digital twin platform.”

AI, NLQ et al

According to Goldman, Esri has been using AI technology internally for years – long before the recent surge of hype around the technology. Now, he says, AI is being deployed to automate complex GIS tasks for users, lowering the barrier to entry for non-specialists.

One example of this can be found in reality capture and asset management. Esri’s reality suite, based on its 2020 acquisition of nFrames, uses geosplatting and computer vision to create high-quality 3D objects from 360-degree cameras or video inspections.


Esri
ArcGIS GeoBIM

“AI enables automated feature extraction from reality capture data, such as LiDAR,” he explains. “Organisations like Caltrans can process hundreds of miles of roads overnight. Segmentation automatically recognises barriers, trees, signage and more, making the data asset-management ready.”

Meanwhile, natural language query (NLQ) capabilities in ArcGIS are also paving the way for the democratisation of GIS data. Users can now perform advanced analysis without specialist training.

“Say I need a map of central London, showing the distance between tube stops and grocery stores, overlaid with poverty levels,” Goldman illustrates. “The system generates the map and suggests visualisations, making spatial insights accessible to anyone.”

Urban planning remains a hot topic. That was certainly the case at our recent NXT BLD event, where innovations were showcased by Cityweft, Giraffe, GeoPogo and, of course, Esri.

It’s a domain in which Esri has long contributed and continues to do so, with technologies to enable scenario evaluation and parametric city modelling.

As Goldman puts it: “Architects and planners need to evaluate scenarios, like population growth, by bringing in demographic and visual context. Esri’s tools ensure design choices are made in the right place, with the right influences. And with AI, the possibilities for urban planning expand even further.”

In summary, Esri’s partnership with Autodesk continues to transform the relationship between GIS and BIM data, with AI set to drive the next great wave of integration. As both companies continue to expand their cloud portfolios and ecosystems, Esri is embedding spatial intelligence, predictive analytics and automated decision support directly into AEC workflows.

The convergence of ArcGIS, GeoBIM and Forma with AI-driven insights offers the AEC industry a significant opportunity to move beyond static models towards dynamic, learning digital twins. In this way, says Goldman, the Esri and Autodesk partnership will help that industry “create a more sustainable, resilient and context-aware built environment.”

The post Explorations in GeoBIM appeared first on AEC Magazine.

Infrastructure design automation https://aecmag.com/civil-engineering/infrastructure-design-automation/ https://aecmag.com/civil-engineering/infrastructure-design-automation/#disqus_thread Thu, 09 Oct 2025 05:00:29 +0000 https://aecmag.com/?p=24933 Transcend is looking to bring new efficiencies to the design of water, wastewater and power infrastructure

The post Infrastructure design automation appeared first on AEC Magazine.

Transcend aims to automate one of engineering’s slowest frontend processes – the design of water, wastewater and power infrastructure. Its cloud-based tool generates LOD 200 designs in hours rather than weeks and is already reshaping how some utilities, consultants and OEMs approach projects

The Transcend story begins inside Organica Water, a company based in Budapest, Hungary and specialising in the design and construction of wastewater treatment facilities.

Transcend was a tool built by engineers at Organica to solve the persistent headache of producing preliminary designs for these facilities quickly and at scale. They found traditional manual design processes too limiting, so they put together a digital tool that connected spreadsheets, calculations and process logic in order to automate much of the work associated with early-stage design.

This tool, the Transcend Design Generator (TDG), was a big success at Organica, slashing the time it took engineers to produce proposals and enabling them to explore multiple design scenarios side-by-side.

By 2019, it was clear that while Transcend may have started off as an internal productivity aid, it had matured sufficiently to represent a significant business opportunity in its own right. Transcend was spun off as an independent company, led by Ari Raivetz, who served as Organica CEO between 2011 and 2020.



Today, TDG is positioned as a generative design and automation solution for the infrastructure sector, targeted at companies building critical infrastructure assets such as water and wastewater plants and power stations. It is billed as accelerating the way that such facilities are conceived, embedding sustainability and resilience into designs from their earliest stages.

Among Transcend’s strategic partnerships is one with Autodesk, which sees TDG integrated with mainstream BIM workflows, providing a bridge between early engineering and detailed designs. Autodesk is also an investor in Transcend, having contributed to its 2023 Series B funding round. To date, Transcend has raised over $35 million and employs some 100 people globally.

A look at Transcend’s tech

A wealth of capability is baked into the TDG software, which goes beyond geometry generation and parametric modelling to also embrace process engineering, civil and electrical logic, simulation and cost modelling.

Engineers enter a minimal set of inputs, such as site characteristics, flow rates and regulatory requirements, and the tool generates complete conceptual designs that are validated against engineering rules. Outputs include models, drawings, bills of quantities, schematics, cost estimates and carbon footprint calculations. Every decision and iteration is tracked, producing an audit trail that would be difficult to achieve in manual workflows.

The difference compared to traditional design practices is quite stark. With manual conceptual design, weeks of work may yield only one or two viable options, locking in assumptions before alternatives can be properly tested.

Transcend compresses this process into hours, producing multiple design variants that can be compared quickly and objectively. Because the data structures and outputs are already aligned with BIM and downstream processes, the work does not need to be redone at the detailed design stage.


Transcend


Transcend
Transcend has a strategic partnership with Autodesk, which sees TDG integrated with mainstream BIM workflows, providing a bridge between early engineering and detailed designs

Transcend executives say that using TDG on a project creates a shift from reactive, labour-intensive conceptual engineering to a more proactive approach. The tool, they claim, is capable of delivering part of a typical initial design package, with outputs detailed enough to support option analysis, secure stakeholder approval, underpin bids and provide reliable cost and carbon estimates.

The intent, however, is not to replace detailed design teams. Instead, it is to accelerate and standardise the slowest stage of the workflow, so that engineers can move into the final stage of detailed design with a far clearer, validated baseline.

Impressively transdiscipline

TDG is very much a BIM 2.0 product for civil/infrastructure design and is, at its heart, generative design software.

It uses rules-based automation and algorithms to generate early-stage models, drawings and documentation, solving complex engineering problems through auditable, traceable data, rather than relying on less-reliable LLMs.

All TDG’s processing is in the cloud, so it works without the need for a desktop application and can be accessed from any device with a web browser.

We also find it to be impressively transdiscipline, integrating the design processes of mixed teams to produce complete, multi-option design packages that reflect the work and experience of mechanical, civil and electrical design experts.

This end-to-end, multidisciplinary approach certainly appears to be a key differentiator for Transcend in the automation space.


Q&A with Transcend co-founder Adam Tank

Adam Tank is co-founder and chief communications officer at Transcend. AEC Magazine met with Tank to focus on the company’s Transcend Design Generator (TDG) tool and hear more about its future product roadmap.

Transcend
Adam Tank

AEC Magazine: To begin, we’re curious to know how you define TDG, or Transcend Design Generator, Adam. Is it a configurator, is it AI, is it both – or is it something else entirely?

Adam Tank: TDG is fundamentally parametric design software. While people often mistake sophisticated automation for artificial intelligence, our software is built on processes that are really thought out. It operates as a massive parametric solver, similar to tools used in site development like TestFit, but applied to multidisciplinary engineering for critical infrastructure.

We utilise rules-based automation and algorithms to generate complete, viable design options, based on inputs, constraints and standards. TDG can produce designs quickly, by combining first-principles engineering, parametric design rules and proprietary data sets.

Our primary focus is on solving complex engineering problems through auditable, traceable data, rather than relying solely on large language models that might hallucinate. Every decision the software makes can be traced back to a literal textbook calculation or a rule of thumb provided by an expert engineer.
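The audit-trail idea – every result traceable back to a textbook calculation – might look like this in miniature. The pump-power equation is a standard hydraulics formula; the function signature and audit record format are our own invention, not TDG’s.

```python
# Auditable rules-based calculation: every result carries a record of the
# formula and inputs that produced it. The pump-power equation is the
# standard hydraulic formula; the audit structure itself is hypothetical.

def pump_power_kw(flow_m3_s, head_m, efficiency=0.7, audit=None):
    # Hydraulic power: P = rho * g * Q * H / eta (textbook formula)
    rho, g = 1000.0, 9.81
    power = rho * g * flow_m3_s * head_m / efficiency / 1000.0
    if audit is not None:
        audit.append({
            "step": "pump_power",
            "formula": "rho*g*Q*H/eta",
            "inputs": {"Q_m3_s": flow_m3_s, "H_m": head_m, "eta": efficiency},
            "result_kw": power,
        })
    return power

audit_trail = []
power = pump_power_kw(0.05, 20.0, audit=audit_trail)
```

The point is that the trail is a by-product of running the rule, not a document written after the fact – which is what makes the output auditable.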


AEC Magazine: So what exactly does the output for a project produced by TDG look like and how deep does the generated geometry go?

Adam Tank: TDG supports the entire early-stage design process. The software is built to follow the same sequential workflow as a multi-disciplinary engineering team, beginning with process calculations, then moving on to mechanical, electrical and civil calculations.

Consequently, it is capable of generating a comprehensive set of validated, reusable data sets and outputs. These outputs include PFDs (process flow diagrams), BOQs (bills of quantities), and full P&IDs (piping and instrumentation diagrams), because it captures all the required data, such as the full equipment list, the geometry, the motor horsepower rating and the electrical consumption of the equipment.

These schematics can be produced in either AutoCAD or Revit. TDG also produces 3D BIM files with geometry generated at LOD 200. This includes key components like slabs, walls, doors, windows, concrete quantities and steel structures. LOD 200 is sufficient for the conceptual design phase, enabling teams to determine the total capital cost of a project within a 10% to 20% margin.

Furthermore, Transcend also generates drawings from the model. Because the model geometry is guaranteed accurate through automation – starting from precise specifications rather than attempting to fix poor modelling errors in the drawings – the resulting drawings can be relied upon.


AEC Magazine: So how does TDG effectively combine knowledge and requirements of multiple engineering disciplines into one unified solution?

Adam Tank: The key to TDG is that it functions as an end-to-end, multi-disciplinary, first-principles engineering automation tool. We built the software to follow the exact same sequential thought process that a multidisciplinary team of engineers uses today.

The process begins with the software taking user inputs regarding location, desired consumption, and facility requirements, and combining this with first principles engineering, parametric design rules, and proprietary data sets. Critically, every decision the software makes can be traced back to a textbook calculation or an engineer’s rule of thumb, providing the auditable, traceable data required in this high-risk industry.

The engine then executes the workflow. It starts with the process set of calculations. Once that data is validated, the software transfers that data to the next stage, flowing through a mechanical engine that handles the calculations and then subsequently translating the data for electrical and civil engineering needs.

Essentially, TDG integrates process, mechanical, civil and electrical design logic into one tool, acting as an engine that ‘chews it all up’, from a multi-disciplinary perspective, and produces the unified outputs required by engineers.

This complex system handles local and regional standards, equipment standards and regulatory constraints, guaranteeing that the design options generated are viable and grounded in real engineering standards.


AEC Magazine: The process certainly sounds heavily automated – but where, specifically, does TDG use AI today and what are the company’s future plans for incorporating more AI into the tool?

Adam Tank: Currently, the only part of our software that uses AI is the site arrangement, where we employ an evolutionary algorithm to optimise site layout. When a user inputs the parcel of land and specifications, the software checks constraints and runs through thousands of combinations to determine the optimal arrangement. This algorithm optimises site footprint, while taking into consideration required ingress/egress points for power and water, traffic flow and other necessary clearances.
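An evolutionary algorithm of the kind Tank describes can be sketched as follows: candidate layouts are scored by a deterministic fitness function, the fittest half survive each generation, and mutated copies fill the population back up. The parcel size, building footprints and fitness rule below are toy assumptions for illustration only, not Transcend’s constraints.

```python
import random

# Toy evolutionary algorithm for site arrangement: candidate layouts
# (x-positions of two buildings along a parcel) are scored, the fittest
# half survive, and mutated copies refill the population.
# Parcel size, footprints and fitness rule are illustrative assumptions.

random.seed(42)
PARCEL = 100.0            # parcel width in metres
WIDTHS = [20.0, 30.0]     # footprint widths of the two buildings

def fitness(xs):
    (x1, w1), (x2, w2) = zip(xs, WIDTHS)
    if x1 < 0 or x2 < 0 or x1 + w1 > PARCEL or x2 + w2 > PARCEL:
        return -1e9       # layout spills outside the parcel
    # Reward clearance between the two footprints (negative if they overlap)
    return max(x2 - (x1 + w1), x1 - (x2 + w2))

def evolve(generations=200, pop_size=30):
    pop = [[random.uniform(0, PARCEL), random.uniform(0, PARCEL)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # elitist selection
        children = [[x + random.gauss(0, 5) for x in s] for s in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

A production version would score many more variables – ingress/egress, traffic flow, clearances – but the select-and-mutate loop is the core of the technique.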

For future AI development, we are focused on applications that build user trust and enhance productivity. For example, while TDG already produces a preliminary engineering report as part of its output package, we are looking at leveraging AI for text generation within this report.

There’s also scope for an engineering co-pilot. We’d like to integrate an AI-powered co-pilot that guides the user through the TDG interface and, critically, explains the reasoning behind the software’s design decisions. Engineers are accustomed to manipulating every variable manually, so when the computer generates the solution, they need to understand why certain components are placed the way they are. This co-pilot could quote bylaws, manufacturer limitations or engineering standards, effectively allowing the user to query the model itself.


AEC Magazine: How does Transcend handle the complexity of standards and multi-disciplinary data flow across separate but collaborating engineering functions?

Adam Tank: Our software must handle local and regional standards, equipment standards and regulatory constraints, so the amount of data collection is immense.

The complex engine we have built follows the standard engineering workflow. It starts with a user inputting project data, such as location, water flow, desired treatment and existing site conditions. This data feeds the process engineer calculation models, which run sophisticated simulations of the treatment kinetics.

TDG acts as the multi-disciplinary engine. It feeds data into those process models, takes the output and then translates it into the next required discipline—mechanical, then electrical, then civil.

This means the engineering itself is still being done, but our engine chews up all the multi-disciplinary requirements and produces the unified outputs that engineers require.


AEC Magazine: Into which markets does Transcend hope to expand next – and why hasn’t the company so far sought to offer higher levels of detail, such as LOD 300 and LOD 400?

Adam Tank: Our focus has been to remain the only company offering end-to-end, multi-disciplinary, first principles engineering automation for critical infrastructure. We don’t have a direct competitor, because our competition is scattered across specialised automation tools that only handle specific parts of the process, such as MEP automation or architectural configuration. We were purpose-built specifically for water, power and wastewater infrastructure, and we are the only generative design software focused entirely on these complex sectors.

Regarding LODs, we have made a deliberate strategic decision not to pursue higher LOD specifications. In the conceptual design phase, we generate geometry at LOD 200. The time and complexity required to reach LOD 300 or 400 would divert resources from attracting new clients and expanding into new conceptual design verticals.

If it were entirely up to me, the next big market we would pursue is transportation, covering roads and bridges. In terms of total design dollars spent, it is a massive market, almost double the size of water and wastewater.

We also get asked a lot about data centre design. This expansion is technologically feasible for us. For instance, early in our company history, we developed a similar rapid configuration tool for Black & Veatch to design COVID testing facilities during the pandemic. We see a potential natural fit with companies like Augmenta, which specialises in electrical wiring automation, where we could automate the building structure and they could handle the wiring complexity.

The post Infrastructure design automation appeared first on AEC Magazine.

]]>
https://aecmag.com/civil-engineering/infrastructure-design-automation/feed/ 0
Bentley introduces iTwin Platform APIs for Cesium https://aecmag.com/digital-twin/bentley-systems-introduces-itwin-platform-apis-to-cesium-developers/ https://aecmag.com/digital-twin/bentley-systems-introduces-itwin-platform-apis-to-cesium-developers/#disqus_thread Fri, 22 Aug 2025 17:54:23 +0000 https://aecmag.com/?p=24586 Bentley Systems introduces iTwin Platform APIs to Cesium Developers.

The post Bentley introduces iTwin Platform APIs for Cesium appeared first on AEC Magazine.

]]>
Bentley has released iTwin Platform APIs for Cesium, enabling developers to integrate engineering design data with Cesium’s 3D geospatial visualisation.

Bentley Systems has released new resources to help developers use its iTwin Platform APIs within Cesium, the 3D geospatial visualisation technology that Bentley acquired in September 2024. The move is intended to streamline integration between infrastructure digital twins and large-scale geospatial applications.


The iTwin Platform provides a set of open-core APIs for creating and managing digital twins—data-rich virtual models used across the lifecycle of infrastructure assets. Cesium, now operating as part of Bentley, is known for its 3D globe and mapping technology and for originating the 3D Tiles open standard, widely adopted for streaming large-scale 3D datasets.
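For readers unfamiliar with the format: a 3D Tiles tileset is a JSON tree of tiles, each carrying a bounding volume, a geometric error and optional content and children, as defined in the open specification. The toy traversal below shows the core selection idea; a real client such as CesiumJS also handles refinement modes and view-dependent culling, which are omitted here.

```python
import json

# Minimal inline tileset in the shape defined by the 3D Tiles spec:
# a tree of tiles, each with a geometricError and optional content/children.
TILESET = json.loads("""
{
  "asset": {"version": "1.1"},
  "geometricError": 500,
  "root": {
    "boundingVolume": {"region": [-1.32, 0.69, -1.31, 0.70, 0, 100]},
    "geometricError": 100,
    "content": {"uri": "root.b3dm"},
    "children": [
      {"boundingVolume": {"region": [-1.32, 0.69, -1.315, 0.695, 0, 50]},
       "geometricError": 10,
       "content": {"uri": "child0.b3dm"}}
    ]
  }
}
""")

def walk(tile, error_cutoff, found=None):
    """Collect content URIs, descending into children only while a tile's
    geometric error exceeds the cutoff -- the basic idea behind streaming
    progressively finer detail."""
    if found is None:
        found = []
    if "content" in tile:
        found.append(tile["content"]["uri"])
    if tile.get("geometricError", 0) > error_cutoff:
        for child in tile.get("children", []):
            walk(child, error_cutoff, found)
    return found

print(walk(TILESET["root"], error_cutoff=50))
```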

The latest release provides tutorials and example workflows that demonstrate how iTwin APIs can be used inside CesiumJS applications. Developers can, for instance, combine geospatial context streamed from Cesium ion with engineering design data managed in iTwin, and visualise both in a single environment.


Key functions highlighted in the release include:

  • Data integration: Support for engineering formats from applications such as MicroStation, Revit, AutoCAD, Navisworks, and Rhino. Data is automatically converted into optimised 3D Tiles for visualisation.
  • Design history: APIs allow applications to display versioned design states, enabling comparisons of alternate options.
  • Metadata access: ECSQL support makes it possible to query and filter models based on attributes.
  • Project workflows: Developers can incorporate tasks such as clash detection, change management, and issue tracking directly within Cesium-based applications.

Three tutorials currently available focus on adding real-world geospatial context, visualising iTwin design data in CesiumJS, and switching between design options. Bentley has said further tutorials will expand to metadata querying, navigating to individual model elements, combining multiple iTwins in a single scene, and exposing advanced iTwin APIs.

The integration reflects Bentley’s strategy following the Cesium acquisition: to bring together detailed engineering models with scalable geospatial visualisation under a single umbrella, while continuing to support open standards. For infrastructure owners, operators, and developers, the alignment is designed to reduce duplication of effort when linking project data to broader geographic settings.

The iTwin–Cesium connection is particularly relevant for organisations that need to situate detailed infrastructure data within a regional or national context, such as utilities, transportation agencies, and government bodies. It also supports use cases that involve public communication, planning, and monitoring, where both engineering detail and geographic scale are required.


 

By publishing these APIs and supporting resources, Bentley is signalling its intention to make its digital twin technology more accessible to developers working with open geospatial ecosystems. With Cesium now part of Bentley, the release formalises an integration that has been evolving since the company first backed the 3D Tiles standard in 2018.

Documentation and tutorials are available through Bentley’s iTwin developer portal and Cesium’s channels.

The post Bentley introduces iTwin Platform APIs for Cesium appeared first on AEC Magazine.

]]>
https://aecmag.com/digital-twin/bentley-systems-introduces-itwin-platform-apis-to-cesium-developers/feed/ 0
Arcol unleashed – BIM 2.0 https://aecmag.com/bim/arcol-unleashed-bim-2-0/ https://aecmag.com/bim/arcol-unleashed-bim-2-0/#disqus_thread Thu, 24 Jul 2025 06:00:59 +0000 https://aecmag.com/?p=24431 We explore the recent launch of the BIM 2.0 start-up that has an initial focus on collaborative conceptual design

The post Arcol unleashed – BIM 2.0 appeared first on AEC Magazine.

]]>
BIM 2.0 start-up Arcol officially launched its product at the start of June and presented at our recent NXT BLD conference. With an initial focus on providing a browser-based environment for collaborative conceptual design, the software is already attracting a growing fan base

While the idea of BIM 2.0 is exciting, there is a lot of confusion among BIM users as to how new code streams should manifest themselves. If the intention is to compete against Revit, then surely rivals should resemble Revit in terms of user experience and offer comparable features?

Not necessarily. Revit, after all, is 25 years old. It covers a wide range of design phases, including concept, detail design, rendering and documentation/drawings. It also provides tools for multiple disciplines, such as MEP and structural engineering.

In contrast, the BIM 2.0 software developers are all opting to drop in and develop products at different points along the design phase. Their aim is to provide something useful, quickly, and then grow out their applications over time, adding breadth and depth to the tools along the way.


Find this article plus many more in the July / August 2025 Edition of AEC Magazine
👉 Subscribe FREE here 👈


Take, for example, Arcol: the leadership team at this start-up has chosen to start specifically at the conceptual phase. That makes sense, since it’s the start of the AEC process.

However, this is also an extremely busy segment of the market, boasting many established players as well as so many start-ups that we have lost count.

Obviously, the 800lb gorillas of this segment are SketchUp and, at the high end, McNeel's Rhino with Grasshopper. But then we also have Autodesk Forma, which is essentially ‘given away’ as part of the company’s AEC Collection.

Simply put, customers are not short of options, and some capabilities are already included in tools for which they’ve already paid. In software terms, this is a tough neighbourhood. Any start-up that wants to elbow in needs to offer a product sufficiently compelling for its buzz to be heard above the general noise.

Origin story

Arcol was the original ‘Figma for BIM’ protagonist. Its founder and CEO Paul O’Carroll was already a huge Figma fan and recognised how it was possible to make a generic web-based, collaborative interface design tool. His father, meanwhile, is an architect, so he has always been attracted to the industry and understands its frustration with the lack of innovation and collaboration in the current generation of BIM tools.

With a background in games development and experience of running a design agency that built custom tools for clients, O’Carroll felt these were challenges he could tackle. In 2021, he founded Arcol in order to build a 3D, sketch-based, collaborative design tool. He quickly raised seed money from a cast of industry notables. These include Craig ‘Tooey’ Courtemanche, founder of Procore, and Amar Hanspal, ex-joint CEO of Autodesk and now CEO at competitor Motif (but that’s another story), along with some core Figma alumni and VC firms. With that funding tied down, O’Carroll moved from Ireland to New York and started building out Arcol’s young team.

Four years later, having taken a cautious approach to publicly showing exactly what Arcol was developing, O’Carroll deemed that the product had enough features to address key pain points and was ready to charge a subscription.

Prior to its June 2025 launch, Arcol opened up the product for a free trial for one week last year. Most of the beta testing, however, was conducted with trusted architectural firms. In late 2024, the company attracted a further round of VC funding, of an undisclosed amount, on top of its original seed funding of $5 million.

Arcol presented at NXT BLD this year and has been actively hiring all summer. Its plan now is to build on its newly commercialised code base and start expanding its concept capabilities, as well as begin to address more complex modelling centric issues.


Arcol
Arcol is looking to address the architect’s need for presentation tools through ‘Boards’, which can include 3D model views, design-related data, mood images and external content such as client project briefs

Core capabilities

Arcol is a next-generation, cloud-native BIM 2.0 modelling system for architects. As it’s all web-based, it eliminates the need for downloads, installations or specific client operating systems. But being web-based doesn’t mean it’s slow. In fact, it’s remarkably zippy on large context models (although it does not yet support highly complex geometry).

As mentioned, Arcol primarily focuses on pre-design and early-stage design and offers real-time, multi-user collaboration, complex modelling and seamless data integration, along with integrated presentation tools – and all within a web-based environment. A single version of the truth is shared and collaborated in real time between teams. The user interface is fresh, simple to navigate and easy to use, with a very low barrier to entry and gentle learning curve.

For now, the capabilities of Arcol break down to Boards, Modelling, Metrics and Collaboration. In some ways, Arcol is a combination of SketchUp, TestFit, Miro, InDesign, Slack, and Figma, all in one package. Concepts can be modelled in context. Building metrics can be derived from multiple complex design decisions made by teams and then used to develop presentations that can be shared internally or to clients to sell architectural designs.

Support for more complex detailed modelling is under development. Deep integration with Rhino and Grasshopper is also in the works, with Arcol working closely with McNeel, according to O’Carroll.

First, let’s take a look at Boards. From very early on, O’Carroll wanted to address the architect’s need for presentation tools. While lockdown saw the growth of Miro and Mural, Arcol was the first BIM 2.0 tool to recognise that AEC-specific mood board creation and presentations should be a core function. That’s a capability that we have since seen copied by Motif and Snaptrude in their applications.

Arcol boards are ‘live-synced’ and presentation layouts automatically update as the building changes. These boards can include 3D model views, design-related data, mood images and external content such as client project briefs. Sharing of boards is performed via a web link and users don’t need a paid licence for Arcol if they’re just there for viewing purposes. Arcol wants to replace static PDF exports.

Another key capability is Arcol’s presentation to users of a single version of truth. Being cloud-based, data is centralised and shared amongst collaborators. Only one version of the model and its related data exists, so users don’t have to worry about multiple files or navigating a revision management system. Every user, in every browser, gets access to the same data. It’s always consistent.

This also extends to the comments capability. In order to keep communication connected to a design, Arcol has a built-in commenting system, linked to dropped bubbles in the design space, so that teammates can add their input to designs. Clicking on a bubble brings up complete related threads in the sidebar. When problems are resolved, users hit the check mark and the comment gets hidden.

At NXT BLD, the team gave a demonstration that involved one user in the UK modelling the bottom half of a building, while a colleague in New York modelled the top half. This was all done in the same session, using the same drafting tools. Meanwhile, a third user was working on the presentation layout of the yet-to-be-completed building – all in one continuous stream of collaborative work.

Modelling and metrics

Modelling in Arcol is deceptively easy. Deceptive, because under the surface, there’s a lot of complexity being masked from the user. Each modelling operation exists in a live, editable history graph, so users can adjust past steps at any time and the model regenerates. Due to the constraints of working in-browser, this must be done quickly and with memory efficiency.

Arcol can be used to create lofts, push/pull edits, sweeps, extrudes and Booleans. It supports custom drawing planes and parametrics and comes with a range of architectural primitive elements, covering all the basics.

Masses can be divided into floors, to generate plans that then feed into the metric calculations engine, providing information on areas, costs and parking needs, for example. Materials can be added and shadow studies created.

Site context and DWGs can be imported to anchor an Arcol model. Arcol mass designs can be exported to Revit, which brings the geometry in as native Revit masses. At this moment in time, Arcol’s role is pre-Revit detail design, and it is therefore competitive with massing in Revit, Forma or SketchUp. It’s also possible to export glTF for game engines and arch viz tools.

In terms of metrics, Arcol currently offers the following analyses: Site Zoning, Floor & Building, Cost Estimation and Shadow Studies. It connects the results with any live associated documents, updating not just related presentation board layouts and the physical model in a board view, but also any related text, such as area and cost.

In its right-hand panel, Arcol displays any relevant building metric data, supporting a range of building, site, environment and cost metrics. For building information, it calculates total floor area, number of floors, floor area ratio, gross internal area, unit count and floor height. For site information, Arcol calculates site area, percentage coverage, setbacks, land-use allocation and parking counts. Cost-wise, the software performs construction cost estimates, cost per unit or per floor, and envelope area.
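Most of these figures are simple planning arithmetic. As a sketch (not Arcol's implementation), the headline metrics reduce to a few formulas:

```python
# Illustrative versions of the metrics Arcol reports; the formulas are
# standard planning arithmetic, not Arcol's actual implementation.
def building_metrics(site_area, footprint, floors, floor_area, cost_per_m2):
    total_floor_area = floors * floor_area
    return {
        "total_floor_area_m2": total_floor_area,
        # Floor area ratio: total built floor area relative to the site
        "floor_area_ratio": total_floor_area / site_area,
        # Site coverage: how much of the plot the building footprint occupies
        "site_coverage_pct": 100.0 * footprint / site_area,
        # A flat-rate construction cost estimate
        "construction_cost": total_floor_area * cost_per_m2,
    }

m = building_metrics(site_area=2000.0, footprint=600.0, floors=5,
                     floor_area=550.0, cost_per_m2=1800.0)
print(m)
```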

To generate a shadow study, the user simply enters a project’s geolocation, date and time. Again, using the boards function, multiple sun studies can be compared. The team can’t be far off delivering daylight studies, which are essentially the reverse of this.
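Arcol doesn't document its method, but any shadow study rests on a sun-position calculation. The sketch below uses the standard low-precision formula for solar altitude from latitude, day of year and solar time; a production tool would use a higher-precision ephemeris and true 3D geometry rather than a flat-ground shadow length.

```python
import math

def solar_altitude(lat_deg, day_of_year, solar_hour):
    """Approximate solar altitude (degrees) from latitude, day of year and
    local solar time, using the common low-precision declination formula."""
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_hour - 12.0)   # degrees from solar noon
    lat, d, h = map(math.radians, (lat_deg, decl, hour_angle))
    alt = math.asin(math.sin(lat) * math.sin(d) +
                    math.cos(lat) * math.cos(d) * math.cos(h))
    return math.degrees(alt)

def shadow_length(object_height, altitude_deg):
    """Length of the shadow cast on flat ground by a vertical object."""
    return object_height / math.tan(math.radians(altitude_deg))

# 20 m building in London (51.5 N) at solar noon on the June solstice (~day 172)
alt = solar_altitude(51.5, 172, 12.0)
print(round(alt, 1), round(shadow_length(20.0, alt), 1))
```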

Pricing and future plans

As with most software today, Arcol is offered on a subscription basis. There is a free version available for solo users, although this comes with some omissions, such as Revit Export, and some limitations, in terms of project numbers. A Team subscription costs $100 per user, per month, with a few extra perks for subscribers who pay annually. There are ‘editor’ seats for those who design, and ‘collaborator’ seats for those who only need to view and comment on designs. While editors need a licence, collaborators (such as clients or contractors) do not and simply get access via a shared link.

It’s worth noting that, at present, Arcol doesn’t support Apple Safari and recommends Google Chrome or Microsoft Edge.

What strikes us about Arcol is that O’Carroll’s vision hasn’t changed since the first day we spoke to him. The aim is to build on top of Arcol and eventually compete against Revit by offering everything from concept to detail design and on through to documentation.

While his team could take shortcuts and licence OEM technology in order to build out faster, O’Carroll has already got ideas as to how Arcol should implement features such as the creation of drawings. As such, he intends to chart his own path, relying on internal development effort, rather than bolting on third-party generic capabilities. He explains that he wants to build trust with customers in the early stages of design before tackling drawings.

Either way, Arcol looks set to carve its own niche in the AEC space. It is clearly currently a conceptual tool and blends a number of features to help solve a variety of competing design requirements, while also giving teams a new way to work together and create and present schemes to clients. There is an argument that Arcol could pay for itself by replacing a number of seats of various tools, such as Miro, SketchUp, and InDesign, although SketchUp is more of a design and drawing environment.

Arcol’s interface is sleek and its graphics really are eye-popping. While geometrically, models look fairly simple, it does offer parametrics, curves, push/pull and Booleans for massing. Support for more complex detailed modelling is under development. Deep integration with Rhino and Grasshopper is also in the works, with Arcol working closely with McNeel, according to O’Carroll. This will open up Arcol to more advanced architectural practices where Rhino is the core design environment.

While the obvious target market for Arcol is architects, O’Carroll tells us that he’s excited at the traction the company is achieving among contractors. Here, we guess, the company’s close links with Procore could be of serious benefit.

For developers, there are a range of feasibility tools that enable real-time costs to be displayed as a model is updated. This data is just as useful to architects as it approaches the floorplan level of detail. It’s obviously not as fully featured for site development as something like TestFit Site Solver, which costs $8k a year for a full seat, but I think the target markets are slightly different.

Since Arcol’s launch and its presentations at both the AIA annual conference and our own NXT BLD event, the company has received a lot of attention from investors and competitors.

By focusing on a very identifiable phase of the design process, the team has developed a product that is easy to compare and contrast with other well-known products. This is something that’s very hard to do with offerings from other BIM 2.0 startups such as Snaptrude, Motif, Hypar and Qonic.

The fact is that, by choosing to develop software that would compete in an overheated and over-serviced market where ‘freemium’ models are commonplace, O’Carroll took a big risk. In fact, we’d bet that he probably got bored of hearing that warning, over and over again.

But through a hefty dose of self-belief, a clear execution strategy, surviving the occasional shower of shit and a strong streak of bloody mindedness, Arcol has arrived. What’s been delivered is very impressive and strikingly close to what O’Carroll described to us at the company’s earliest stages. With new investment and new additions to the team, we expect to see the velocity of development accelerate sharply in months to come.


Recommended viewing

At AEC Magazine’s NXT BLD, Arcol’s Aaron Fife & Mike Buss demonstrated how the browser-based design tool can unify model, data, and presentations in a real-time, multiplayer environment using designers located in London and New York.

CLICK HERE to watch the whole presentation

Watch the teaser below


Main image: Arcol’s user interface is fresh, simple to navigate and easy to use, with a very low barrier to entry and a gentle learning curve.

The post Arcol unleashed – BIM 2.0 appeared first on AEC Magazine.

]]>
https://aecmag.com/bim/arcol-unleashed-bim-2-0/feed/ 0
NavLive – ‘scan to drawings’ https://aecmag.com/reality-capture-modelling/navlive-scan-to-drawings/ https://aecmag.com/reality-capture-modelling/navlive-scan-to-drawings/#disqus_thread Thu, 24 Jul 2025 05:59:23 +0000 https://aecmag.com/?p=24402 ‘Scan to BIM’ is fast becoming a reality. This UK starup is addressing one step before that

The post NavLive – ‘scan to drawings’ appeared first on AEC Magazine.

]]>
With all the progress being made to convert point clouds to 3D models, ‘scan to BIM’ is fast becoming a reality. One step before that would be ‘scan to drawings’ — and an Oxford-based startup sparked plenty of buzz around this at our recent NXT BLD event, writes Martyn Day

True industry disruption rarely comes from a single new technology. More often, it’s the convergence of multiple innovations that reshapes workflows and drives meaningful change. AI is clearly one of the most influential technologies in this mix, and it’s now being woven into nearly every aspect of software and hardware development.

A great example of this convergence is NavLive, which combines LiDAR technology with advanced AI processing to scan buildings and generate precise site drawings in minutes.

The company was formed in 2022 as an Oxford University spin-out from PhD research on SLAM, 3D mapping and autonomous robots carried out by co-founder David Wisth, the company’s CTO.

The CEO and other co-founder, Chris Davison, comes from an investment background and was one of the co-founders and CEO of BigPay, a Singapore challenger bank.


Find this article plus many more in the July / August 2025 Edition of AEC Magazine
👉 Subscribe FREE here 👈


The company has raised £4 million from investment and grants to develop a unique SLAM scanner, which captures point clouds and digital images and processes them on the device using AI powered by Nvidia GPUs. Data is then shared via the cloud to deliver a rapid scanning solution that automatically generates 2D floor plans, 3D models, sections and elevations, with a claimed accuracy of about 1cm for 1:100 RICS-grade surveys.

NavLive has a team of around 15 people and, at the moment, the handheld scanners are hand-made in the UK.

Currently, in this space you have Matterport, which has a tripod-based solution at around £6,000, and products like the Leica BLK2GO at £40,000, the Faro Orbis at £45,000 and the NavVis VLX 2 or 3 at about $30,000 to $60,000.

At just £25,000, NavLive hits a sweet spot for rapid SLAM-style scanning, with the added benefit of delivering 2D drawings and 3D models. It comes with all the necessary software and on-board processing, and the company is also working on how it could convert these models from 3D to intelligent BIM.

While at NXT BLD, NavLive scanned the Queen Elizabeth II building as a data set (see Figure 1) and gave demos showing how quickly it scanned spaces, simply by walking around. These were instantly turned into 2D drawings on the Samsung device built into the scanner. It was also possible to see the model and interactively create sections and elevations.


Navlive.ai
Raw scan from NXT BLD: floorplan of the QEII Centre

Key features

NavLive is a real-time system. The input is the point cloud (SLAM is typically ‘noisy’) and the outputs are the 2D plans, sections and elevations. While it will capture people and other items within the scan, these can be cleaned up.
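NavLive's pipeline is proprietary, but a classic first step in turning a noisy point cloud into a floor plan is to slice the cloud at wall height, bin the points into a 2D occupancy grid and keep only densely hit cells, which naturally discards sparse noise returns. A toy version, with made-up thresholds:

```python
# Toy scan-to-plan step, assuming nothing about NavLive's actual pipeline:
# slice a point cloud at wall height, bin the points into a 2D occupancy
# grid, and keep the densely hit cells as candidate wall segments.
def slice_to_plan(points, z_min=1.0, z_max=1.5, cell=0.25, min_hits=3):
    grid = {}
    for x, y, z in points:
        if z_min <= z <= z_max:           # keep only the wall-height band
            key = (int(x // cell), int(y // cell))
            grid[key] = grid.get(key, 0) + 1
    # Cells hit often enough are treated as wall; sparse hits are noise.
    return {key for key, hits in grid.items() if hits >= min_hits}

# Synthetic "room": a dense wall along y = 0 plus a few stray returns
wall = [(x * 0.05, 0.02, 1.2) for x in range(100)]
noise = [(1.0, 3.0, 1.2), (2.5, 4.0, 1.1), (0.5, 2.0, 0.3)]
plan_cells = slice_to_plan(wall + noise)
print(len(plan_cells))
```

The surviving cells would then be vectorised into wall lines for a DWG/DXF export; that step (line fitting, corner detection) is where most of the real difficulty lies.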

The device contains three HD image capture cameras for visual reference and documentation. NavLive will automatically work out and plot the path the user has walked, and photos can be looked at, at any point, to identify features that might not be obvious from the scan drawing.

The team claims that the NavLive device is the quickest AI-powered scan-to-BIM tool on the market, delivering ‘instant site surveys’ in one self-contained unit. The scanner is light and requires very little training to operate. It is capable of being used in multiple environments and has already been trialled in nuclear facilities.

At just £25,000, NavLive hits a sweet spot for rapid SLAM-style scanning, with the added benefit of delivering 2D drawings and 3D models

The software is mobile and desktop enabled. It automatically syncs point clouds, drawings and models to the cloud, and the results can be seen live by any other team member, irrespective of their distance from the actual scanning. Users can easily download files in all standard formats, including DWG, DXF and PDF for drawings, E57 or LAS for point clouds, and JPG for images. It integrates with CAD/BIM software, such as Revit, AutoCAD and Archicad, speeding up scan-to-BIM workflows.

Conclusion

It’s highly unusual to find such an initially well-funded and interesting scanning device coming out of the UK; even more impressive that the scanner is assembled here too. While SLAM techniques are well understood, the big benefit here is having the necessary ‘oomph’ on board to do the processing, not just in cleaning up the point cloud but in actually delivering something immediately useful: 2D drawings, plans and sections and (hopefully) ultimately BIM models.

This is an Oxford University spin out and start-up that certainly has legs.

It raises the question: could NavLive’s automatic 2D floorplan algorithms work with point clouds from any scanner?

Having covered BIMify in the May/June edition of AEC Magazine, one wonders whether you could scan a building using NavLive and then send the generated drawings to BIMify — enabling the creation of Revit BIM models using a customer’s own component libraries. That said, BIMify also tells us they support direct 3D point cloud to BIM conversion.

At NXT BLD, NavLive certainly created a buzz. Unfortunately, BIMify’s CEO was unable to make the event, but that would have been a good introduction!

Rapid reality modelling is evolving fast — Robert Klashka hosted an excellent panel at NXT DEV that explored some of the latest developments.

Automation is certainly coming to nearly every granular process within traditional BIM workflows. Tasks that once took days or even weeks are being dramatically compressed by emerging technologies. The tedious, time-consuming “grunt work” is being minimised — freeing up teams to focus more on design and decision-making.

Solutions like NavLive are helping drive this shift, lowering the cost of capture while significantly reducing the time it takes to go from survey to as-built drawings — and even to as-built BIM. It’s no exaggeration to say this is the most exciting era for AEC technology innovation in the past 30 years.

The post NavLive – ‘scan to drawings’ appeared first on AEC Magazine.

]]>
https://aecmag.com/reality-capture-modelling/navlive-scan-to-drawings/feed/ 0