Autodesk shows its AI hand
https://aecmag.com/ai/autodesk-shows-its-ai-hand/
Thu, 02 Oct 2025

At AU Autodesk presented live, production-ready tools, giving customers a clear view of how AI could soon reshape workflows

Autodesk’s AI story has matured. While past Autodesk University events focused on promises and prototypes, this year Autodesk showcased live tools, giving customers a clear view of how AI could soon reshape workflows across design and engineering, writes Greg Corke

At AU 2025, Autodesk took a significant step forward in its AI journey, extending far beyond the slide-deck ambitions of previous years.

During CEO Andrew Anagnost’s keynote, the company unveiled brand-new AI tools in live demonstrations using pre-beta software. It was a calculated risk — particularly in light of recent high-profile hiccups from Meta — but the reasoning was clear: Autodesk wanted to show it has tangible, functional AI technology and it will be available for customers to try soon.

The headline development is ‘neural CAD’, a completely new category of 3D generative AI foundation models that Autodesk says could automate up to 80–90% of routine design tasks, allowing professionals to focus on creative decisions rather than repetitive work. The naming is very deliberate, as Autodesk tries to differentiate itself from the raft of generic AEC-focused AI tools in development.


Find this article plus many more in the September / October 2025 Edition of AEC Magazine

neural CAD AI models will be deeply integrated into BIM workflows through Autodesk Forma, and product design workflows through Autodesk Fusion. They will ‘completely reimagine the traditional software engines that create CAD geometry.’

Autodesk is also making big AI strides in other areas. Autodesk Assistant is evolving beyond its chatbot product support origins into a fully agentic AI assistant that can automate tasks and deliver insights based on natural-language prompts.

Big changes are also afoot in Autodesk’s AEC portfolio – developments that will have a significant impact on the future of Revit.

The big news was the release of Forma Building Design, a brand-new tool for LoD 200 detailed design (learn more in this AEC Magazine article). Autodesk also announced that its existing early-stage planning tool, Autodesk Forma, will be rebranded as Forma Site Design and Revit will gain deeper integration with the Forma industry cloud, becoming Autodesk’s first Connected client.

neural CAD

neural CAD marks a fundamental shift in Autodesk’s core CAD and BIM technology. As Anagnost explained, “The various brains that we’re building will change the way people interact with design systems.”

Unlike general-purpose large language models (LLMs) such as ChatGPT and Claude, or AI image generation models like Stable Diffusion and Nano Banana, neural CAD models are specifically designed for 3D CAD. They are trained on professional design data, enabling them to reason at both a detailed geometry level and at a systems and industrial process level.

neural CAD marks a big leap forward from Project Bernini, which Autodesk demonstrated at AU 2024. Bernini turned a text, sketch or point cloud ‘prompt’ into a simple mesh that was not best suited for further development in CAD. In contrast, neural CAD delivers ‘high quality’ ‘editable’ 3D CAD geometry directly inside Forma or Fusion, just like ChatGPT generates text and Midjourney generates pixels.


Autodesk CEO Andrew Anagnost joins experts on stage to live-demo upcoming AI software during the AU keynote

Autodesk has so far presented two types of neural CAD models: ‘neural CAD for geometry’, which is being used in Fusion and ‘neural CAD for buildings’, which is being used in Forma.

For Fusion, there are two AI model variants, as Tonya Custis, senior director, AI research, explained, “One of them generates the whole CAD model from a text prompt. It’s really good for more curved surfaces, product use cases. The second one, that’s for more prismatic sort of shapes. We can do text prompts, sketch prompts and also what I call geometric prompts. It’s more of like an auto complete, like you gave it some geometry, you started a thing, and then it will help you continue that design.”

On stage, Mike Haley, senior VP of research, demonstrated how neural CAD for geometry could be used in Fusion to automatically generate multiple iterations of a new product, using the example of a power drill.

“Just enter the prompts or even drawing and let the CAD engines start to produce options for you instantly,” he said. “Because these are first class CAD models, you now have a head start in the creation of any new product.”

It’s important to understand that the AI doesn’t just create dumb 3D geometry – neural CAD also generates the history and sequence of Fusion commands required to create the model. “This means you can make edits as if you modelled it yourself,” he said.
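The distinction between 'dumb' geometry and history-carrying geometry can be pictured as a replayable sequence of modelling operations. The sketch below is purely illustrative: the operation names and structure are invented for this article, not Fusion's actual command set or API.

```python
# Illustrative only: a toy 'parametric history' -- an ordered list of
# modelling operations that can be replayed or edited. Operation names
# are hypothetical, not Fusion commands.

history = [
    {"op": "sketch_rectangle", "width": 40.0, "height": 20.0},
    {"op": "extrude", "distance": 15.0},
    {"op": "fillet", "radius": 2.5},
]

def edit_step(history, index, **changes):
    """Edit one step; downstream steps would replay against the new result."""
    new_history = [dict(step) for step in history]
    new_history[index].update(changes)
    return new_history

# Changing the extrusion depth leaves the rest of the recipe intact,
# which is why history-based AI output stays editable.
revised = edit_step(history, 1, distance=25.0)
print(revised[1]["distance"])  # 25.0
```

Because the AI emits the recipe rather than only its result, a later edit to any step regenerates everything downstream, exactly as if a human had modelled it.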

Meanwhile, in the world of BIM, Autodesk is using neural CAD to extend the capabilities of Forma Building Design to generate BIM elements.

The current aim is to enable architects to ‘quickly transition’ between early design concepts and more detailed building layouts and systems with the software ‘autocompleting’ repetitive aspects of the design.

Instead of geometry, ‘neural CAD for buildings’ focuses more on the spatial and physical relationships inherent in buildings as Haley explained. “This foundation model rapidly discovers alignments and common patterns between the different representations and aspects of building systems.



“If I was to change the shape of a building, it can instantly recompute all the internal walls,” he said. “It can instantly recompute all of the columns, the platforms, the cores, the grid lines, everything that makes up the structure of the building. It can help recompute structural drawings.”

At AU, Haley demonstrated ‘Building Layout Explorer’, a new AI-driven feature coming to Forma Building Design. He presented an example of an architect exploring building concepts with a massing model: “As the architect directly manipulates the shape, the neural CAD engine responds to these changes, auto generating floor plan layouts,” he said.

But, as Haley pointed out, for the system to be truly useful the architect needs to have control over what is generated, and therefore be able to lock down certain elements, such as a hallway, or to directly manipulate the shape of the massing model.

“The software can re-compute the locations and sizes of the columns and create an entirely new floor layout, all while honouring the constraints the architect specified,” he said.

This feels like a pivotal moment in Autodesk’s AI journey, as the company moves beyond ambitions and experimentation into production-ready AI that is deeply integrated into its core software

Of course, it’s still very early days for neural CAD and, in Forma, ‘Building Layout Explorer’ is just the beginning.

Haley alluded to expanding to other disciplines within AEC, “Imagine a future where the software generates additional architectural systems like these structural engineering plans or plumbing, HVAC, lighting systems and more.”

In the future, neural CAD in Forma will also be able to handle more complexity, as Custis explains. “People like to go between levels of detail, and generative AI models are great for that because they can translate between each other. It’s a really nice use case, and there will definitely be more levels of detail. We’re currently at LoD 200.”

The training challenge

neural CAD models are trained on the typical patterns of how people design. “They’re learning from 3D design, they’re learning from geometry, they’re learning from shapes that people typically create, components that people typically use, patterns that typically occur in buildings,” said Haley.

In developing these AI models, one of the biggest challenges for Autodesk has been the availability of training data. “We don’t have a whole internet source of data like any text or image models, so we have to sort of amp up the science to make up for that,” explained Custis.

For training, Autodesk uses a combination of synthetic data and customer data. Synthetic data can be generated in an ‘endless number of ways’, said Custis, including a ‘brute force’ approach using generative design or simulation.


Tonya Custis, senior director, AI research, Autodesk

Customer data is typically used later on in the training process. “Our models are trained on all data we have permission to train on,” said Amy Bunszel, EVP, AEC.

But customer data is not always perfect, which is why Autodesk also commissions designers to model things for them, generating what chief scientist Daron Green describes as gold standard data. “We want things that are fully constrained, well annotated to a level that a customer wouldn’t [necessarily] do, because they just need to have the task completed sufficiently for them to be able to build it, not for us to be able to train against,” he said.

Of course, it’s still very early days for neural CAD and Autodesk plans to improve and expand the models. “These are foundation models, so the idea is we train one big model and then we can task adapt it to different use cases using reinforcement learning, fine tuning. There’ll be improved versions of these models, but then we can adapt them to more and more different use cases,” said Custis.

In the future, customers will be able to customise the neural CAD foundation models by tuning them to their organisation’s proprietary data and processes. This could be sandboxed, so no data is incorporated into the global training set unless the customer explicitly allows it.

“Your historical data and processes will be something you can use without having to start from scratch again and again, allowing you to fully harness the value locked away in your historical digital data, creating your own unique advantages through models that embody your secret sauce or your proprietary methods,” said Haley.

Agentic AI: Autodesk Assistant

When Autodesk first launched Autodesk Assistant, it was little more than a natural language chatbot to help users get support for Autodesk products.

Now it’s evolved into what Autodesk describes as an ‘agentic AI partner’ that can automate repetitive tasks and help ‘optimise decisions in real time’ by combining context with predictive insights.

Autodesk demonstrated how in Revit, Autodesk Assistant could be used to quickly calculate the window to wall ratio on a particular façade, then replace all the windows with larger units. The important thing to note here is that everything is done through natural language prompts, without the need to click through multiple menus and dialogue boxes.
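For reference, the window-to-wall ratio itself is a simple quantity: total glazed area divided by gross exterior wall area. A minimal sketch of the calculation Assistant is automating, using made-up element areas rather than any Revit API call:

```python
def window_to_wall_ratio(window_areas, gross_wall_area):
    """WWR = total glazed area / gross exterior wall area."""
    glazed = sum(window_areas)
    if gross_wall_area <= 0:
        raise ValueError("wall area must be positive")
    return glazed / gross_wall_area

# Hypothetical facade: four 2 m^2 windows on a 40 m^2 wall
wwr = window_to_wall_ratio([2.0, 2.0, 2.0, 2.0], 40.0)
print(f"{wwr:.0%}")  # 20%
```

The value of the agentic approach is not the arithmetic, which is trivial, but that the Assistant gathers the element areas from the model and then acts on the result (swapping the windows) from a single prompt.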


‘Building Layout Explorer’, a new AI-driven feature coming to Forma Building Design
Autodesk Assistant in Revit enables teams to quickly surface project insights using natural language prompts, here showing how it could be used to quickly calculate the window to wall ratio on a particular façade, then replace all the windows with larger units

Autodesk Assistant can also help with documentation in Revit, making it easier to use drawing templates, populate title blocks and automatically tag walls, doors and rooms. While this doesn’t yet rival the auto-drawing capabilities of Fusion, when asked about bringing similar functionality to Revit, Bunszel noted, “We’re definitely starting to explore how much we can do.”

Autodesk also demonstrated how Autodesk Assistant can be used to automate manual compliance checking in AutoCAD, a capability that could be incredibly useful for many firms.

“You’ll be able to analyse a submission against your drawing standards and get results right away, highlighting violations in layers, lines, text and dimensions,” said Racel Amour, head of generative AI, AEC.

Meanwhile, in Civil 3D it can help ensure civil engineering projects comply with regulations for safety, accessibility and drainage. “Imagine if you could simply ask the Autodesk Assistant to analyse my model and highlight the areas that violate ADA regulations and give me suggestions on how to fix it,” said Amour.

So how does Autodesk ensure that Assistant gives accurate answers? Anagnost explained that it takes into account the context that’s inside the application and the context of work that users do.

“If you just dumped Copilot on top of our stuff, the probability that you’re going to get the right answer is just a probability. We add a layer on top of that that narrows the range of possible answers.”

“We’re building that layer to make sure that the probability of getting what you want isn’t 70%, it’s 99.99 something percent,” he said.

While each Autodesk product will have its own Assistant, the foundation technology has also been built with agent-to-agent communication in mind – the idea being that one Assistant can ‘call’ another Assistant to automate workflows across products and, in some cases, industries.

“It’s designed to do three things: automate the manual, connect the disconnected, and deliver real time insights, freeing your teams to focus on their highest value work,” said CTO Raji Arasu.


Autodesk CTO Raji Arasu

In the context of a large hospital construction project, Arasu demonstrated how a general contractor, manufacturer, architect and cost estimator could collaborate more easily through natural language in Autodesk Assistant. She showed how teams across disciplines could share and sync select data between Revit, Inventor and Power BI, and manage regulatory requirements more efficiently by automating routine compliance tasks. “In the future, Assistant can continuously check compliance in the background. It can turn compliance into a constant safeguard, rather than just a one-time step process,” she said.

Arasu also showed how Assistant can support IT administration — setting up projects, guiding managers through configuring Single Sign-On (SSO), assigning Revit access to multiple employees, creating a new project in Autodesk Construction Cloud (ACC), and even generating software usage reports with recommendations for optimising licence allocation.

Agent-to-agent communication is being enabled by Model Context Protocol (MCP) servers and Application Programming Interfaces (APIs), including the AEC data model API, that tap into Autodesk’s cloud-based data stores.

APIs will provide the access, while Autodesk MCP servers will orchestrate and enable Assistant to act on that data in real time.

As MCP is an open standard that lets AI agents securely interact with external tools and data, Autodesk will also make its MCP servers available for third-party agents to call.
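MCP itself is built on JSON-RPC 2.0: an agent lists the tools a server exposes, then invokes one with the `tools/call` method. The sketch below simply assembles such a request envelope; the tool name and arguments are hypothetical illustrations, not a documented Autodesk endpoint.

```python
import json

def mcp_tool_call(request_id, tool_name, arguments):
    """Build an MCP 'tools/call' request (JSON-RPC 2.0 envelope)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool and arguments, for illustration only
req = mcp_tool_call(1, "query_model_elements",
                    {"category": "Walls", "level": "Level 2"})
print(json.dumps(req, indent=2))
```

Because any MCP-aware agent can construct this same envelope, exposing Autodesk data through MCP servers is what makes the third-party, agent-to-agent scenarios described above plausible.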

All of this will naturally lead to an increase in API calls, which were already up 43% year on year even before AI came into the mix. To pay for this Autodesk is introducing a new usage-based pricing model for customers with product subscriptions, as Arasu explains, “You can continue to access these select APIs with generous monthly limits, but when usage goes past those limits, additional charges will apply.”
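Usage-based pricing of this kind typically means a free monthly allowance plus a per-call rate beyond it. The figures below are entirely hypothetical (Autodesk has not published rates) but show how overage costs scale with call volume in an iterative workflow:

```python
def monthly_api_cost(calls, free_allowance, rate_per_call):
    """Cost under a simple allowance-plus-overage model (hypothetical rates)."""
    overage = max(0, calls - free_allowance)
    return overage * rate_per_call

# Hypothetical: 100k free calls per month, $0.001 per extra call
print(monthly_api_cost(80_000, 100_000, 0.001))   # 0.0, within allowance
print(monthly_api_cost(250_000, 100_000, 0.001))  # 150.0
```

The key point is that cost is flat until the allowance is crossed, then grows linearly with every additional iteration, which is exactly why design teams worry about limits shaping behaviour.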

But this has raised understandable concerns among customers about the future, including potential cost increases and whether these could ultimately limit design iterations.

The human in the loop

Autodesk is designing its AI systems to assist and accelerate the creative process, not replace it. The company stresses that professionals will always make the final decisions, keeping a human firmly in the loop, even in agent-to-agent communications, to ensure accountability and design integrity.

“We are not trying to, nor do we aspire to, create an answer,” says Anagnost. “What we’re aspiring to do is make it easy for the engineer, the architect, the construction professional – reconstruction professional in particular – to evaluate a series of options, make a call, find an option, and ultimately be the arbiter and person responsible for deciding what the actual final answer is.”

AI computation

It’s no secret that AI requires substantial processing power. Autodesk trains all its AI models in the cloud, and while most inferencing — where the model applies its knowledge to generate real-world results — currently happens in the cloud, some of this work will gradually move to local devices.

This approach not only helps reduce costs (since cloud GPU hours are expensive) but also minimises latency when working with locally cached data.



AI research

Autodesk also gave a sneak peek into some of its experimental AI research projects. With Project Forma Sketch, an architect can generate 3D models in Forma by sketching out simple massing designs with a digital pencil and combining that with speech. In this example, the neural CAD foundation model interacts with large language models to interpret the stream of information.

Elsewhere, Amour showed how Pointfuse in ReCap Pro is building on its capability to convert point clouds into segmented meshes for model coordination and clash detection in Revit. “We’re launching a new AI-powered beta that will recognise objects directly from scans, paving the way for automated extraction, for building retrofits and renovations,” she said.

Autodesk has also been working with global design, engineering, and consultancy firm Arcadis to pilot a new technology that uses AI to see inside walls to make it easier and faster to retrofit existing buildings.

Instead of destructive surveys, where walls are torn down, the AI uses multimodal data – GIS, floor plans, point clouds, thermal imaging, and radio frequency (RF) scans – to predict hidden elements, such as mechanical systems, insulation, and potential damage.


The AI-assisted future

AU 2025 felt like a pivotal moment in Autodesk’s AI journey. The company is now moving beyond ambitions and experimentation into a phase where AI is becoming deeply integrated into its core software.

With the neural CAD and Autodesk Assistant branded functionality, AI will soon be able to generate fully editable CAD geometry, automate repetitive tasks, and gain ‘actionable insights’ across both AEC and product development workflows.

As Autodesk stresses, this is all being done while keeping humans firmly in the loop, ensuring that professionals remain the final decision-makers and retain accountability for design outcomes.

Importantly, customers do not need to adopt brand new design tools to get onboard with Autodesk AI. While neural CAD is being integrated into Forma and Fusion, users of traditional desktop CAD/BIM tools can still benefit through Autodesk Assistant, which will soon be available in Revit, Civil 3D, AutoCAD, Inventor and others.

With Autodesk Assistant, the ability to optimise and automate workflows using natural-language feels like a powerful proposition, but as the technology evolves, the company faces the challenge of educating users on its capabilities — and its limitations.

Meanwhile, data interoperability remains front and centre, with Autodesk routing everything through the cloud and using MCP servers and APIs to enable cross-product and even cross-discipline workflows.

It’s easy to imagine how agent-to-agent communication might occur within the Autodesk world, but AEC workflows are fragmented, and it remains to be seen how this will play out with third parties.

Of course, as with other major design software providers, fully embracing AI means fully committing to the cloud, which will be a leap of faith for many AEC firms.

Among customers we have spoken with, there remain genuine concerns about becoming locked into the Autodesk ecosystem, as well as the potential for rising costs, particularly related to increased API usage. ‘Generous monthly limits’ might not seem so generous once the frequency of API calls increases, as it inevitably will in an iterative design process. It would be a real shame if firms end up actively avoiding these powerful tools because of budgetary constraints.

Above all, AU is sure to have given Autodesk customers a much clearer idea of Autodesk’s long-term vision for AI-assisted design. There’s huge potential for Autodesk Assistant to grow into a true AI agent while neural CAD foundation models will continue to evolve, handling greater complexity, and blending text, speech and sketch inputs to further slash design times.

We’re genuinely excited to see where this goes, especially as Autodesk is so well positioned to apply AI throughout the entire design build process.


Main image: Mike Haley, senior VP of research, presents the AI keynote at Autodesk University 2025  

Contract killers: how EULAs are shifting power from users to shareholders
https://aecmag.com/business/contract-killers-how-eulas-are-shifting-power-from-users-to-shareholders/
Fri, 03 Oct 2025

Most architects overlook software small print, but today’s EULAs are redefining ownership, data rights and AI use — shifting power from users to vendors

Most architects and engineers never read the fine print of software licences. But today’s End User Licence Agreements (EULAs) and Terms of Use reach far beyond stating your installation rights. Software vendors are using them to claim rights over your designs, control project data, limit AI training, and reshape developer ecosystems — shifting power from customers to shareholders. Martyn Day explores the rapidly changing EULA landscape

The first time I used AutoCAD professionally was about 37 years ago. At the time I knew a licence cost thousands of pounds and was protected by a hardware dongle, which plugged into the back of the PC.

The company I worked for had been made aware by its dealer that the dongle was the proof of purchase and if stolen it would cost the same amount to replace, so we were encouraged to have it insured. This was probably the first time I read a EULA and had that weird feeling of having bought something without actually owning it. Instead, we had just paid for the right to use the software.

Back then, the main concern was piracy. Vendors were less worried about what you did with your drawings and more worried about stopping you from copying the software itself. That’s why early EULAs, and the hardware dongles that enforced them, focused entirely on access.

The contract was clear: you hadn’t bought the software, you had bought permission to use it, and that right could be revoked if you broke the rules.



As computing spread through the 1980s and 1990s, so did the mechanisms of digital rights management (DRM). Dongles gave way to serial numbers, activation codes and eventually online licence checks tied to your machine or network. Each step made it harder to install the software without permission, but the scope was narrow. The EULA told you how many copies of the software you could run, what hardware it could be installed on, and that you could not reverse-engineer it.

What it didn’t do was tell you what you could or could not do with your own work. Your drawings, models and outputs were your business. The protection was wrapped tightly around the software, not around the data created with it. That boundary is what has changed today.

The rising power of the EULA

As software moved from standalone desktop products to subscription and cloud delivery, the scope of EULAs began to widen. No longer tied to a single boxed copy or physical medium, licences became fluid, covering not just installation but how services could be accessed, updated and even terminated.

The legal fine print shifted from simple usage restrictions to broad behavioural rules, often with the caveat that terms could be changed unilaterally by the vendor.

At first the transition was subtle. Subscription agreements introduced automatic renewals, service-level clauses and restrictions on transferring licences. Cloud services were layered in terms around uptime, data storage, and security responsibilities. What once was a static contract at the point of sale evolved into a living document, updated whenever the vendor saw fit. And in the last five to seven years, we have seen more frequent updates.

Software firms now have an extraordinary new power: the ability to reshape the customer relationship through the EULA itself. Where early agreements were about protecting intellectual property against piracy, modern ones increasingly function as business strategy tools. They dictate not just who can access the software, but how customers interact with their data, APIs, and even with third-party developers. The fine print was no longer just about access control; it became a mechanism of control.

EULAs are no longer obscure boilerplate legalese, tucked at the end of an installer. They have become the front line in a new battle, not over software piracy, but over who controls the data, workflows, and ecosystems that shape the future of design

Profound changes

The most striking shift in recent years is that EULAs have moved beyond software access and into the realm of customer data. What you produce with the tools (models, drawings, schedules, and outputs) has become strategically valuable to the software developers – as valuable as the software itself. Vendors now see customer data as fuel for things like analytics, training, and new AI services. The contract language has followed and there are varying degrees of land grab going on.

This year alone we have seen two firms – Midjourney and D5 Render – attempt to change their EULAs to automatically claim perpetual rights to access and use customer-created data (mainly AI renderings), as well as the right to pass on any copyright-infringement lawsuits arising if those images are subsequently used by the software vendor to train its AI models.

Many of the pure-play AI firms will lay claim to your firstborn given half a chance.



D5 Render provided a response to this article to clarify its position on customer data rights including details on ownership of content, training data and liability published below.






Autodesk

Closer to home, Autodesk provides another example. Its current Terms of Use, which serves as the primary agreement for subscription and cloud users, includes a clause which prohibits training AI systems on data or models created with its software. An earlier draft of this article suggested the restriction was recent, but Autodesk has since clarified that it dates back to 2018.

On a strict reading, this clause implies that even if you create designs entirely in-house, you may not be allowed to use your own data to train and develop your own AI models. If correct, Autodesk could hold the right to decide if, when, or how your data can be used for such purposes.

As we are on the cusp of an AI revolution, this is a profound change. Historically, your files were yours: a Revit model or AutoCAD drawing was protected only by your own governance. Now the licence agreement could potentially dictate not only how the software runs, but also how you can use the fruits of your own labour.

Autodesk’s licensing language creates a subtle but important tension between ownership and control. In its Terms of Use (which serves as the effective EULA for all subscription and cloud customers), Autodesk reassures customers with familiar phrases such as “You own Your Work” and “Your Content remains yours.”

On the surface, this means that the models, drawings, and other outputs you create belong to you, not Autodesk. However, deeper in the Terms of Use and the accompanying Acceptable Use Policy (AUP), the scope of what you can do with that work becomes more constrained — particularly in relation to AI or derivative use cases.

Talking with May Winfield, global director of commercial, legal and digital risks for global engineering consultancy Buro Happold, she suggests this goes further: Autodesk’s Acceptable Use Policy’s purported restrictions on customer outputs may even conflict with copyright laws in certain jurisdictions, where authors automatically own their creations unless they expressly transfer or license those rights. The question becomes: if copyright law guarantees authorship, but Autodesk contractually limits permitted uses, which prevails?

In these documents, Autodesk introduces the term “Output,” meaning any file or result generated using its software. The AUP states that customers must not use “any Offering or related Output in connection with the training of any machine learning or artificial intelligence algorithm, software, or system.” In practice, this means that even though Autodesk concedes ownership of your designs, it may contractually restrict you from applying them in one of the most strategically valuable ways: training your own AI models.

I know many of the more progressive AEC firms that attend our NXT BLD event are training their own in-house AI based on their Revit models, Revit derived DWGs and PDFs. With no caveats or carve outs for customers, they potentially now have the Sword of Damocles hanging over their data. As worded, the broad use of the word ‘output’ could theoretically even apply to an Industry Foundation Classes (IFC) file exported from Revit, as it’s an output from Autodesk’s product stack, which could mean you are not even allowed to train AI on an open standard!

Legally, the company has not taken your intellectual property; instead, it may have ring-fenced its permitted uses, in a very specific way. This creates what I’d characterise as a “legal DRM moat” around customer data.

Autodesk potentially positions itself as the arbiter of how your data can be exploited, leaving you in possession of your files but without full freedom to decide their fate. The fine print ensures Autodesk maintains leverage over emerging AI workflows, even while telling customers their data still belongs to them. And the one place where this restriction doesn’t apply is within Autodesk’s cloud ecosystem, now called Autodesk Platform Services (APS). Only last month at Autodesk University, Autodesk was showing the AI training of data within the Autodesk Cloud.



Autodesk provided a response to this article, published below.

For clarity, several edits have since been made to this article.



Knock-on risks for consultants

Winfield also points out that Autodesk’s broad claims over “outputs” may have knock-on consequences for customer–client agreements. Most design and consultancy contracts require the consultant to warrant that deliverables are original and fully owned by them. If a vendor asserts ownership rights through its licence terms, that warranty could be undermined. The risk goes further: consultancy agreements often contain indemnities, requiring the designer to protect the client against copyright breaches or claims. If a software vendor were to allege ownership or misuse under its EULA, a client might look to recover damages from the consultant. This creates a potential double exposure — liability to the vendor, and liability to the client.

Possible reasons

The rationale behind this clause is open to interpretation. Autodesk maintains that its intent is to protect intellectual property and ensure AI use occurs within secure, governed environments. Some industry observers worry that the breadth could inadvertently chill legitimate customer innovation, despite Autodesk’s stated intent.

Others have speculated that such clauses could serve to pre-empt potential misuse of design data by large AI firms. However, the clause’s 2018 publication date predates the current wave of generative AI, suggesting it was originally framed as a broad IP-protection measure rather than a response to AI players challenging Autodesk’s hold on its customers; in 2018, those players were not yet a potential threat.

The short-term solution would be for Autodesk to refine the language in its Terms of Use, removing the implied broad restriction on customers training their own AIs on their own design data, irrespective of the software that produced it.

There is a lot of daylight between what Autodesk claims to be its intent and the plain language of what is written. If the intent is to stop reverse engineering of Autodesk AI IP, then why not state that clearly?

The reverse engineering of its products and services is already covered quite extensively in section 13, Autodesk Proprietary Rights, of its General Terms. The machine learning, AI, data harvesting and API restrictions all sit in addition to this.

When Nathan Miller, a digital design strategist and developer from Proving Ground, discovered these limitations, he ran a series of posts on LinkedIn. Prior to this, none of the AEC firms we had spoken with for this article had any insight into the issue, despite the Terms of Use being published seven years ago.

While it was certainly a topic hotly commented on, the only Autodesk-related person to add their thoughts to the LinkedIn posts was Aaron Wagner of reseller Arkance. He commented:

“I don’t think the common interpretation is accurate to the spirit of that clause. Your data is your data and the way you use it is under your own discretion. Of course, you should always seek legal counsel to refine any grey areas.

“This statement to me reads that the clause is from a standpoint of Autodesk wanting to protect its products from being reverse engineered and hold themselves free of liability of sharing private information, but model element authors can still freely use AI/ML to study their own data / designs and improve them.”

Buro Happold’s Winfield gave her perspective, “Contract interpretation is generally not impacted by spirit of a clause – if the drafting is clear, it is not changed by the assertion of a different intention? Unless there are contradictions in other clauses and copyright law then it all needs to be read together and squared up to be interpreted in a workable way? It may be the “outputs” in the clause needs to qualify / clarify its intentions, if different from the seemingly clear drafting of read alone?”

The interpretation that this was a sweeping restriction on AI training using any output from Autodesk software has not gone unnoticed by major customers. Autodesk already has a reputation for running compliance audits and issuing fines when licence breaches are discovered, so the presence of this clause in an updated, binding contract has raised alarm.

The fear was simple: if the restriction exists, it can be enforced. Several design IT directors have already told their boards that, on a strict reading of Autodesk’s updated terms, their firms are probably now out of compliance – not for piracy, but for training their own AI models on their own project data.

Some of the commenters on Miller’s original LinkedIn post reported that they raised the issue with Autodesk execs in meetings. By and large, these execs had not heard of the EULA changes and said they would find out more.

Other vendors

Looking at what other firms have done here, their EULAs do include clauses about AI training on data, but these always appear to relate to protecting IP or preventing the reverse engineering of commercial software – not broad prohibitions.

Adobe has explicit rules around its Firefly generative AI features and the company’s Generative AI User Guidelines forbid customers from using any Firefly-generated output to train other AI or machine learning models. However, in product-specific terms, Adobe defines “Input” and “Output” as your content and extends the same protections to both.

Graphisoft has so far left customer data largely unconstrained in terms of AI use. Bentley Systems sits somewhere in between, allowing AI outputs for your use but prohibiting their use in building competing AI systems. The standard Allplan EULA / licence terms do not appear to contain blanket prohibitions on using output for AI training.

Meanwhile, Autodesk’s wording has no caveats or carve-outs for customers’ data, just what appears to be a blanket restriction on AI training using outputs from its software, combined with an exception for its own cloud ecosystem. This appears to effectively grant the company a monopoly over how design data can fuel AI. Customers are free to create, but if they wish to train internal AI on their own project history, the contract could shut the door — unless that training happens inside Autodesk’s APS environment. The effect is to funnel innovation into Autodesk’s platform, where the company retains commercial leverage.

This mirrors tactics used in other industries. Social media platforms, for example, restrict third-party scraping to ensure AI training occurs only within their walls – although in that instance the third party would be using data it does not own.

If licence agreements prevent firms from using their own outputs to train AI, they forfeit the ability to build unique, in-house intelligence from their past projects

In finance, regulators have intervened to stop institutions from controlling both infrastructure and the datasets flowing through them. Europe’s Digital Markets Act directly targets such gatekeeping, while US antitrust agencies are scrutinising restrictive contract terms that entrench platform dominance.

For the AEC sector, the potential impact of the restrictions in Autodesk’s Acceptable Use Policy is clear: it risks concentrating AI innovation inside Autodesk’s ecosystem, raising barriers for independent development and narrowing customer choice.

Proving is difficult

How Autodesk might enforce an AI training ban is an open question. Traditional licence audits can detect unlicensed installs or overuse, but proving that a customer has trained an AI on Autodesk outputs is far more complex. Autodesk file formats (DWG, RVT, etc.) do contain unique structural fingerprints that could, in theory, be detected in a trained model’s weights or outputs – for example, if an AI consistently reproduces proprietary layering systems, metadata tags, or parametric structures unique to Autodesk tools.

Autodesk could also monitor API usage patterns: large-scale systematic exports or conversions may signal that datasets are being harvested for training. Another possible avenue is watermarking — embedding invisible markers in outputs that survive export and could later be detected.
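As a toy illustration of the watermarking idea (entirely hypothetical, not any vendor’s actual scheme), a marker can be hidden in the least-significant bits of an 8-bit image, where it is imperceptible to the eye but trivially machine-detectable:

```python
import numpy as np

# Hypothetical LSB watermark sketch: embed a bit pattern in the
# least-significant bits of an 8-bit greyscale image, then read it back.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)

watermark = np.zeros((8, 8), dtype=np.uint8)
watermark[::2, ::2] = 1                  # the hidden marker pattern

stamped = (image & 0xFE) | watermark     # overwrite each pixel's LSB

recovered = stamped & 1                  # detection: read the LSBs back
assert np.array_equal(recovered, watermark)
```

Each pixel changes by at most 1/255 of full brightness, which is why such marks can survive casual viewing; real schemes are far more elaborate so they also survive export, compression and cropping.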

APIs, APS and developers

Autodesk is also making significant changes to other areas of its business – changes that could have a big impact on those that develop or use complementary software tools. Autodesk’s API and Autodesk Platform Services (APS) ecosystem has long been central to the company’s success, enabling customers and commercial third parties to extend tools like Autodesk Revit and Autodesk Construction Cloud (ACC).

But what was once a relatively open environment is now being reshaped into a monetised, tightly governed platform — with serious implications for customers and developers.

Nathan Miller of Proving Ground points out that virtually every practice he has worked with relies on open-source scripts, third-party add-ins, or in-house extensions. These are the utilities that make Autodesk’s software truly productive. By introducing broad restrictions and fresh monetisation barriers, Autodesk risks eroding the very ecosystem that helped drive its dominance.

The most visible change is the shift of APS into a metered, consumption-based service. Previously bundled into subscriptions, APIs will now incur line-item costs for common tasks such as model translations, batch automations and dashboard integrations.

A capped free tier remains, but high-value services like Model Derivative, Automation and Reality Capture will now be billed per use. For firms, this means operational budgets must now account for API spend, with the risk of projects stalling mid-delivery if quotas are exceeded or unexpected charges triggered.

Autodesk has also tightened authentication rules. All integrations must be registered with APS and use Autodesk-controlled OAuth scopes. These scopes, which define the exact permissions an app has, can be added, redefined or retired by Autodesk — improving security, but also centralising control over what kinds of applications are permitted.

Perhaps the most profound change is not technical, but contractual. Firms can still create internal tools for their own use. But turning those into commercial products — or even sharing them with peers — now requires Autodesk’s explicit approval. The line between “internal tool” and “commercial app” is no longer a matter of technology but of contract law. Innovation, once free to circulate, is now fenced in.

This changing landscape for software development is not unique to Autodesk. Dassault Systèmes (DS), which is strong in product design, manufacturing, automotive, and aerospace, has sparked controversy by revising its agreements with third party developers for its Solidworks MCAD software. DS is demanding developers hand over 10% of their turnover along with detailed financial disclosures. Small firms fear such terms could make their businesses unviable.

Across the CAD/BIM sector, ecosystems are being re-engineered into revenue streams. What were once open pipelines of user-driven innovation are narrowing into gated conduits, designed less to empower customers than to deliver shareholder returns.

Why all this matters

The stakes are high for both customers and developers. For customers, the greatest risk is losing meaningful control over their design history. Project files, BIM models and CAD data are no longer just records of completed work; they are the foundation for future AI-driven workflows. If licence agreements prevent firms from using their own outputs to train AI, they forfeit the ability to build unique, in-house intelligence from their past projects. The value of their data, arguably their most strategic asset, is redirected into the vendor’s ecosystem. The result is growing dependence: firms must rely on vendor tools, AI models and pricing, with fewer options to innovate independently or move their data elsewhere.

For software developers, the risks are equally severe. Independent vendors and in-house innovators who once built add-ons or utilities to extend core CAD/BIM platforms now face new costs and restrictions. Revenue-sharing models, such as Dassault Systèmes’ 10% royalty scheme, threaten commercial viability, especially for smaller firms. When API use is metered and distribution fenced in by contract, ecosystems shrink. Innovation slows, customer choice narrows, and vendor lock-in grows.

AI is the existential threat vendors don’t want to admit. Smarter systems could slash the number of licences needed on a project, deliver software on demand, and let firms build private knowledge vaults more valuable than off-the-shelf tools. Vendors see the danger: EULAs are now their defensive moat, crafted to block customers from using their own data to fuel AI. The fine print isn’t just about compliance — it’s about making sure disruption happens on the vendor’s terms, not those of the customer.

This trajectory is not inevitable. Customers and developers can push back. Large firms, government bodies and consortia hold leverage through procurement. They can demand carve-outs that preserve ownership of outputs and guarantee the right to train AI. Developers, too, can resist punitive revenue-sharing schemes and press for fairer terms. Only collective action will ensure innovation remains in the hands of the wider AEC community, not locked in vendor boardrooms.

The tightening of EULAs and developer agreements is not happening in a vacuum. In Europe, new regulations like the Digital Markets Act (DMA) and the Data Act could directly challenge these practices. The DMA targets “gatekeepers” that restrict competition, while the Data Act enshrines customer rights to access and use data they generate, including for AI training. Clauses restricting firms from training AI on their own outputs may sit uncomfortably with these principles.

In the US, antitrust law is less settled but moving in the same direction. The FTC has signalled increased scrutiny of contract terms that suppress competition, and restrictions such as Autodesk’s AI-output restriction or Solidworks’ 10% developer royalty could draw attention.

For customers and developers, this creates negotiating leverage. Large firms, government clients, and consortia can push for carve-outs citing regulatory rights, while developers may resist punitive revenue-sharing as disproportionate. Yet smaller players face a harder reality: challenging vendors risks losing access to platforms that underpin longstanding businesses.

A Bill of Rights?

With so many software firms busily updating their business models, EULAs and terms, the one group standing still and taking the full force of this wave is customers. A constructive way forward could be the creation of a Bill of Rights for AEC software customers — a simple but powerful framework that customers could insist their vendors sign up to and be held accountable against. The goal is not to hobble innovation, but to ensure it happens on a foundation of fairness and trust, with the assurance that this month’s ‘we have updated our EULA’ notice will not transgress a set of core principles.

At its heart we’re suggesting five core principles:

Data Ownership – a statement that customers own what they create; vendors cannot claim control of drawings, models, or project data through the fine print.

AI Freedom – guarantees that firms may use their own outputs to train internal AI systems, preserving the ability to innovate independently rather than relying solely on vendor-driven tools.

Developer fairness – ensures that APIs remain open, with transparent and non-punitive revenue models that allow third-party ecosystems to thrive.

Transparency – requires vendors to clearly disclose when and how customer data is used in their own AI training or analytics.

Portability – commits vendors to interoperability and open standards, so that customers are never locked into one ecosystem against their will.

Such a Bill of Rights would not prevent Autodesk, Bentley Systems, Nemetschek, Trimble and others from building profitable AI services or new subscription tiers. But it would establish clear boundaries: vendors innovate and capture value, but not at the expense of customer autonomy. For customers, developers, and ultimately the built environment itself, this would restore balance and accountability in a market where the fine print has become as important as the software itself.

AEC Magazine is now working with a group of customers, developers and software vendors to see how this could be shaped in the coming months.

Conclusion

EULAs are no longer obscure boilerplate legalese, tucked at the end of an installer. They have become the front line in a new battle, not over software piracy, but over who controls the data, workflows, and ecosystems that shape the future of design.

In my view, as currently worded, Autodesk’s clause could be interpreted as a prohibition on AI training, although this may be counter to Autodesk’s intentions with regards to customer ‘outputs’. Furthermore, Dassault Systèmes’ demand for a slice of developer revenues illustrates just how quickly the ground is shifting. Contracts are no longer just protective wrappers around software; they are strategic levers which can be used to lock in customers and monetise ecosystems.

This should concern everyone in AEC. Customers risk losing the ability to use their own project history to innovate, while mature developers face sudden, new revenue-sharing models that could undermine entire businesses. Left unchallenged, the result will be less competition, less innovation, and greater dependency on a handful of large vendors whose first loyalty is to shareholders, not users.

The only path forward I see is collective action. Customers and developers must push back, demand transparency, insist on long-term contractual safeguards, and possibly unite around a shared Bill of Rights for AEC software. The question is no longer academic: in the age of AI, do you own your tools and your data — or does your vendor own you?


Editor’s note / Autodesk response:

In response to this article, Autodesk provided the following statement:

“The clause included in Martyn Day’s recent article has been part of our Terms of Use since they were originally published in May 2018. 

 “This clause was written to prevent the use of AI/ML technology to reverse engineer Autodesk’s IP or clone Autodesk’s product functionalities, a common protection for software companies. It does not broadly restrict our users’ ability to use their IP or give Autodesk ownership to our users’ content.

“We know things are moving fast with the accelerated advancement in AI/ML technology. We, along with just about every software company, are adapting to this changing landscape, which includes actively assessing how best to meet the evolving needs and use cases of our customers while protecting Autodesk’s core IP rights. As these technologies advance, so will our approach, and we look forward to sharing more in the months ahead.”

Autodesk also clarified that the License and Services Agreement only applies to legacy customers who still use perpetual licences. The Terms of Use from May 2018 supersede that agreement, covering both desktop and cloud services.


Correction (8 Oct 2025): An earlier version of this article incorrectly suggested that the changes to the Terms of Use were made in May 2025. Based on Autodesk’s statement above, this article has been corrected and updated for clarity.


D5 Render’s response

In response to this article, D5 Render provided the following statement:

We fully understand and share the community’s concerns regarding data rights in the evolving field of AI. We remain committed to maintaining clear and fair agreements that protect user rights while fostering innovation.

Our Terms of Service (publicly available at www.d5render.com/service-agreement) do not claim any ownership or perpetual usage rights over user-generated content, including AI-rendered images. On the contrary, Section 6 of our Terms of Service explicitly states that users “retain rights and ownership of the Content to the fullest extent possible under applicable law; D5 does not claim any ownership rights to the Content.”

When users upload content to our services, D5 is granted only a non-exclusive, purpose-limited operational license, which is a standard clause in most cloud-based software products. This license merely allows us to technically operate, maintain, and improve the service. D5 will never use user content as training data for the Services or for developing new products or services without users’ express consent.

As for liability, Sections 8 and 9 of our Terms of Service are standard in the software industry. They are designed to protect D5 from risks arising from user-uploaded content that infringes on third-party rights. These clauses are not intended to transfer the liability of D5’s own actions to users.


Explainer #1 – EULA vs Terms of Use: what’s the difference?

At first glance, a EULA (End User Licence Agreement) and Terms of Use can look like the same thing. In practice, they operate at different levels — and together form the legal framework that governs how customers engage with software and cloud services.

The EULA is the traditional licence agreement tied to desktop software. It explains that you do not own the software itself, only the right to use it under certain conditions. Typical clauses cover installation limits, restrictions on copying or reverse-engineering, and confirmation that the software is licensed, not sold.

The Terms of Use apply more broadly to online services, platforms, APIs and cloud tools. They include acceptable use rules, data storage and sharing conditions, API restrictions, and often a right for the vendor to change the terms unilaterally.

One unresolved issue is how to interpret contradictions. If the EULA states ‘you own your work’ but the Acceptable Use Policy restricts what you can do with that work, and neither agreement specifies which takes precedence, which clause governs? In practice, customers may only discover the answer in the event of a dispute — an unsettling prospect for firms relying on predictable rights.


Explainer #2 – Why is data the new goldmine?

As the industry moves into an era defined by artificial intelligence and machine learning, customer content has become more than just the product of design work; it has become the raw material for training and insight.

BIM and CAD models are no longer viewed solely as deliverables for projects, but as vast datasets that can be mined for patterns, efficiencies, and predictive value. This is why software vendors increasingly frame customer content as “data goods” rather than private work.

With so much of the design process shifting to cloud-based platforms, vendors are in a powerful position to influence, and often restrict, how those datasets can be accessed and reused.

The old mantra that “data is the new oil” captures this shift neatly: just as oil companies controlled not only the drilling but also the refining and distribution, software firms now want to control both the pipelines of design data and the AI refineries that turn it into intelligence.

What used to be customer-owned project history is being reconceptualised as a strategic asset for software vendors themselves, and EULAs and Terms of Use are the contractual tools that allow them to lock down that value.


Explainer #3 – Autodesk’s Terms of Use

What it says

Autodesk’s Acceptable Use Policy (AUP) appears to ban AI/ML training on any “output” from its software unless done within Autodesk’s APS cloud. This could include models, drawings, exports, even IFCs.

Why it matters

Customers risk losing the ability to train internal AI on their own design history. Strict licence audits mean firms could be flagged non-compliant even without intent.

Legal experts warn the AUP’s broad claims over “outputs” may conflict with copyright law, which in many jurisdictions gives authors automatic ownership of their creations.

Consultants could face knock-on risks if client contracts require them to warrant full ownership of deliverables — raising potential indemnity exposure.

Autodesk gains leverage by funnelling AI innovation into its paid ecosystem.

The big picture

This move mirrors gatekeeping strategies in other tech sectors, where platforms wall off data to consolidate control. Regulators in the EU (Digital Markets Act, Data Act) and US antitrust bodies are increasingly scrutinising such practices.


Explainer #4 – Developers at risk

What changed?

Autodesk has overhauled Autodesk Platform Services (APS): APIs are now metered, consumption-based, and gated by stricter terms. While firms can still build internal tools, sharing or commercialising scripts now requires Autodesk’s explicit approval.

Why it matters

Independent developers face new costs and quotas for integrations that were once bundled into subscription fees. In-house teams must now budget for API usage, turning process automation into an ongoing operational cost.

Quota limits mean projects risk disruption if thresholds are unexpectedly exceeded mid-delivery.

The contractual line between “internal tool” and “commercial app” is now defined by Autodesk, not developers.

Innovation that once flowed freely into the wider ecosystem is fenced in, with Autodesk deciding what can be shared.

The big picture

Across the CAD/BIM sector, developer ecosystems are being monetised and restricted to generate shareholder returns. What were once open innovation pipelines are narrowing into vendor-controlled platforms, threatening the independence of smaller developers and reducing customer choice.


Recommended viewing: May Winfield @ NXT DEV


At AEC Magazine’s NXT DEV event this year, May Winfield, global director of commercial, legal and digital risks at Buro Happold, presented “EULA and Other Agreements: You signed up to what?”, inviting the audience to reconsider the contracts they’ve implicitly accepted.

How many users digest the fine print of EULAs and AI tool terms? Winfield warns that their assumptions often misalign with contractual reality and highlights key clauses that tend to lurk in user agreements: ownership of content, usage rights, and liability limitations.

In her presentation, Winfield does not offer legal advice, but she provides a practical reminder: what you think you own or can do might be constrained by what you signed up to — underscoring the urgency for users, developers, and governance bodies to delve into EULAs and demand clarity.

■ Watch @ www.nxtaec.com

The post Contract killers: how EULAs are shifting power from users to shareholders appeared first on AEC Magazine.

Chaos: from pixels to prompts https://aecmag.com/visualisation/chaos-from-pixels-to-prompts/ https://aecmag.com/visualisation/chaos-from-pixels-to-prompts/#disqus_thread Thu, 09 Oct 2025 05:00:40 +0000 https://aecmag.com/?p=24806 Chaos is blending AI with traditional viz, rethinking how architects explore, present and refine ideas

Chaos is blending generative AI with traditional visualisation, rethinking how architects explore, present and refine ideas using tools like Veras, Enscape, and V-Ray, writes Greg Corke

From scanline rendering to photorealism, real-time viz to real-time ray tracing, architectural visualisation has always evolved hand in hand with technology.

Today, the sector is experiencing what is arguably its biggest shift yet: generative AI. Tools such as Midjourney, Stable Diffusion, Flux, and Nano Banana are attracting widespread attention for their ability to create compelling, photorealistic visuals in seconds — from nothing more than a simple prompt, sketch, or reference image.

The potential is enormous, yet many architectural practices are still figuring out how to properly embrace this technology, navigating practical, cultural, and workflow challenges along the way.

The impact on architectural visualisation software as we know it could be huge. But generative AI also presents a huge opportunity for software developers.


Find this article plus many more in the September / October 2025 Edition
👉 Subscribe FREE here 👈

Like some of its peers, Chaos has been gradually integrating AI-powered features into its traditional viz tools, including Enscape and V-Ray. Earlier this year, however, it went one step further by acquiring EvolveLAB and its dedicated AI rendering solution, Veras.

Veras allows architects to take a simple snapshot of a 3D model or even a hand-drawn sketch and quickly create ‘AI-rendered’ images with countless style variations. Importantly, the software is tightly integrated with CAD / BIM tools like SketchUp, Revit, Rhino, Archicad and Vectorworks, and offers control over specific parts within the rendered image.

With the launch of Veras 3.0, the software’s capabilities now extend to video, allowing designers to generate short clips featuring dynamic pans and zooms, all at the push of a button.

“Basically, [it takes] an image input for your project, then generates a five second video using generative AI,” explains Bill Allen, director of products, Chaos. “If it sees other things, like people or cars in the scene, it’ll animate those,” he says.

This approach can create compelling illusions of rotation or environmental activity. A sunset prompt might animate lighting changes, while a fireplace in the scene could be made to flicker. But there are limits. “In generative AI, it’s trying to figure out what might be around the corner [of a building], and if there’s no data there, it’s not going to be able to interpret it,” says Allen.

Chaos is already looking at ways to solve this challenge of showcasing buildings from multiple angles. “One of the things we think we could do is take multiple shots – one shot from one angle of the building and another one – and then you can interpolate,” says Allen.


Model behaviour

Veras uses Stable Diffusion as its core ‘render engine’. As the generative AI model has advanced, newer versions of Stable Diffusion have been integrated into Veras, improving both realism and render speed, and allowing users to achieve more detailed and sophisticated results.

“We’re on render engine number six right now,” says Allen. “We still have render engines four, five and six available for you to choose from in Veras.”

But Veras does not necessarily need to be tied to a specific generative AI model. In theory it could evolve to support Flux, Nano Banana or whatever new or improved model variant may come in the future.

But, as Allen points out, the choice of model isn’t just down to the quality of the visuals it produces. “It depends on what you want to do,” he says. “One of the reasons that we’re using Stable Diffusion right now instead of Flux is because we’re getting better geometry retention.”

One thing that Veras doesn’t yet have out of the box is the ability for customers to train the model using their own data, although as Allen admits, “That’s something we would like to do.”

In the past, Chaos has used LoRAs (Low-Rank Adaptations) to fine-tune the AI model for certain customers in order to accurately represent specific materials or styles within their renderings.

Roderick Bates, head of product operations, Chaos, imagines that demand for fine-tuning will go up over time, but there might be other ways to get the desired outcome, he says. “One of the things that Veras does well is that you can adjust prompts, you can use reference images and things like that to kind of hone in on style.”


Chaos Veras 3.0 – still #1
Chaos Veras 3.0 – still #2

Post-processing

While Veras experiments with generative creation, Chaos is also exploring how AI can be used to refine output from its established viz tools using a variety of AI post-processing techniques.

Chaos AI Upscaler, for example, enlarges render output by up to four times while preserving photorealistic quality. This means scenes can be rendered at lower resolutions (which is much quicker), then at the click of a button upscaled to add more detail.

While AI upscaling technology is widely available – both online and in generic tools like Photoshop – Chaos AI Upscaler benefits from being accessible at the click of a button directly inside viz tools like Enscape that architects already use. Bates points out that if an architect uses another tool for this process, they must download the rendered image first, then upload it elsewhere, which fragments the workflow. “Here, it’s all part of an ecosystem,” he explains, adding that it also avoids the need for multiple software subscriptions.

Chaos is also applying AI in more intelligent ways, harnessing data from its core viz tools. Chaos AI Enhancer, for example, can improve rendered output by refining specific details in the image. This is currently limited to humans and vegetation, but Chaos is looking to extend this to building materials.

“You can select different genders, different moods, you can make a person go from happy to sad,” says Bates, adding that all of this can be done through a simple UI.

There are two major benefits: first, you don’t have to spend time searching for a custom asset that may or may not exist and then have to re-render; second, you don’t need highly detailed 3D asset models to achieve the desired results, which would normally require significant computational power, or may not even be possible in a tool like Enscape.

With Veras 3.0, the software’s capabilities now extend to video, allowing designers to generate short clips featuring dynamic pans and zooms, all at the push of a button

The real innovation lies in how the software applies these enhancements. Instead of relying on the AI to interpret and mask off elements within an image, Chaos brings this information over from the viz tool directly. For example, output from Enscape isn’t just a dumb JPG — each pixel carries ‘voluminous metadata’, so the AI Enhancer automatically knows that a plant is a plant, or a human is a human. This makes selections both easy and accurate.
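To illustrate the principle (the class IDs below are hypothetical, since the exact metadata format isn’t public), a per-pixel ID channel carried alongside the image turns “select the humans” into a simple lookup rather than an AI segmentation guess:

```python
# Hypothetical semantic class IDs: the point is that each pixel already
# knows what it is, so no AI masking is needed
SKY, BUILDING, VEGETATION, HUMAN = 0, 1, 2, 3

# A tiny 2x3 "render": a class-ID channel carried alongside the RGB image
class_ids = [
    [SKY,      HUMAN,      HUMAN],
    [BUILDING, VEGETATION, HUMAN],
]

# Selecting every 'human' pixel is a lookup, not a segmentation guess
human_mask = [[c == HUMAN for c in row] for row in class_ids]
count = sum(cell for row in human_mask for cell in row)
print(count)  # → 3
```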

As it stands, the workflow is seamless: a button click in Enscape automatically sends the image to the cloud for enhancement.

But there’s still room for improvement. Currently, each person or plant must be adjusted individually, but Chaos is exploring ways to apply changes globally within the scene.

Chaos AI Enhancer was first introduced in Enscape in 2024 and is now available in Corona and V-Ray 7 for 3ds Max, with support for additional V-Ray integrations coming soon.

AI materials

Chaos is also extending its application of AI into materials, allowing users to generate render-ready materials from a simple image. “Maybe you have an image from an existing project, maybe you have a material sample you just took a picture of,” says Bates. “With the [AI Material Generator] you can generate a material that has all the appropriate maps.”

Initially available in V-Ray for 3ds Max, the AI Material Generator is now being rolled out to Enscape. In addition, a new AI Material Recommender can suggest assets from the Chaos Cosmos library, using text prompts or visual references to help make it faster and easier to find the right materials.
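As a sketch of what “all the appropriate maps” might look like as data (the field names are illustrative assumptions, not Chaos’s actual output format):

```python
from dataclasses import dataclass

@dataclass
class PBRMaterial:
    # Typical physically-based rendering maps; the exact set and naming
    # produced by the AI Material Generator is an assumption here
    name: str
    diffuse_map: str
    normal_map: str
    roughness_map: str

brick = PBRMaterial("photographed_brick", "brick_diffuse.png",
                    "brick_normal.png", "brick_rough.png")
print(brick.name)  # → photographed_brick
```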

Cross pollination

Chaos is uniquely positioned within the design visualisation software landscape. Through Veras, it offers powerful one-click AI image and video generation, while tools like Enscape and V-Ray use AI to enhance classic visualisation outputs. This dual approach gives Chaos valuable insight into how AI can be applied across the many stages of the design process, and it will be fascinating to see how ideas and technologies start to cross-pollinate between these tools.

A deeper question, however, is whether 3D models will always be necessary. “We used to model to render, and now we render to model,” replies Bates, describing how some firms now start with AI images and only later build 3D geometry.

“Right now, there is a disconnect between those two workflows, between that pure AI render and modelling workflow – and those kind of disconnects are inefficiencies that bother us,” he says.

For now, 3D models remain indispensable. But the role of AI — whether in speeding up workflows, enhancing visuals, or enabling new storytelling techniques — is growing fast. The question is not if, but how quickly, AI will become a standard part of every architect’s viz toolkit.

The post Chaos: from pixels to prompts appeared first on AEC Magazine.

Infrastructure design automation
https://aecmag.com/civil-engineering/infrastructure-design-automation/ – Thu, 09 Oct 2025

Transcend is looking to bring new efficiencies to the design of water, wastewater and power infrastructure

The post Infrastructure design automation appeared first on AEC Magazine.

Transcend aims to automate one of engineering’s slowest frontend processes – the design of water, wastewater and power infrastructure. Its cloud-based tool generates LOD 200 designs in hours rather than weeks and is already reshaping how some utilities, consultants and OEMs approach projects

The Transcend story begins inside Organica Water, a company based in Budapest, Hungary and specialising in the design and construction of wastewater treatment facilities.

Transcend was a tool built by engineers at Organica to solve the persistent headache of producing preliminary designs for these facilities quickly and at scale. They found traditional manual design processes too limiting, so they put together a digital tool that connected spreadsheets, calculations and process logic in order to automate much of the work associated with early-stage design.

This tool, the Transcend Design Generator (TDG), was a big success at Organica, slashing the time it took engineers to produce proposals and enabling them to explore multiple design scenarios side-by-side.

By 2019, it was clear that while Transcend may have started off as an internal productivity aid, it had matured sufficiently to represent a significant business opportunity in its own right. Transcend was spun off as an independent company, led by Ari Raivetz, who served as Organica CEO between 2011 and 2020.


Find this article plus many more in the September / October 2025 Edition
👉 Subscribe FREE here 👈

Today, TDG is positioned as a generative design and automation solution for the infrastructure sector, targeted at companies building critical infrastructure assets such as water and wastewater plants and power stations. It is billed as accelerating the way that such facilities are conceived, embedding sustainability and resilience into designs from their earliest stages.

Among Transcend’s strategic partnerships is one with Autodesk, which sees TDG integrated with mainstream BIM workflows, providing a bridge between early engineering and detailed designs. Autodesk is also an investor in Transcend, having contributed to its 2023 Series B funding round. To date, Transcend has raised over $35 million and employs some 100 people globally.

A look at Transcend’s tech

A wealth of capability is baked into the TDG software, which goes beyond geometry generation and parametric modelling to also embrace process engineering, civil and electrical logic, simulation and cost modelling.

Engineers enter a minimal set of inputs, such as site characteristics, flow rates and regulatory requirements, and the tool generates complete conceptual designs that are validated against engineering rules. Outputs include models, drawings, bills of quantities, schematics, cost estimates and carbon footprint calculations. Every decision and iteration is tracked, producing an audit trail that would be difficult to achieve in manual workflows.
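A hypothetical sketch of that input/output contract (field names are invented for illustration; TDG’s real schema is not public here):

```python
from dataclasses import dataclass

@dataclass
class PlantInputs:
    """Hypothetical minimal input set mirroring the article's description;
    field names are illustrative, not TDG's actual schema."""
    site_area_m2: float
    avg_flow_m3_per_day: float
    effluent_bod_limit_mg_l: float  # regulatory requirement

# The output package described in the article
OUTPUTS = ["models", "drawings", "bills of quantities",
           "schematics", "cost estimates", "carbon footprint calculations"]

job = PlantInputs(site_area_m2=12_000,
                  avg_flow_m3_per_day=5_000,
                  effluent_bod_limit_mg_l=25)
print(len(OUTPUTS))  # → 6
```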

The difference compared to traditional design practices is quite stark. With manual conceptual design, weeks of work may yield only one or two viable options, locking in assumptions before alternatives can be properly tested.

Transcend compresses this process into hours, producing multiple design variants that can be compared quickly and objectively. Because the data structures and outputs are already aligned with BIM and downstream processes, the work does not need to be redone at the detailed design stage.


Transcend
Transcend has a strategic partnership with Autodesk, which sees TDG integrated with mainstream BIM workflows, providing a bridge between early engineering and detailed designs

Transcend executives say that using TDG on a project creates a shift from reactive, labour-intensive conceptual engineering to a more proactive approach. The tool, they claim, is capable of delivering part of a typical initial design package, with outputs detailed enough to support option analysis, secure stakeholder approval, underpin bids and provide reliable cost and carbon estimates.

The intent, however, is not to replace detailed design teams. Instead, it is to accelerate and standardise the slowest stage of the workflow, so that engineers can move into the final stage of detailed design with a far clearer, validated baseline.

Impressively transdiscipline

TDG is very much a BIM 2.0 product for civil/infrastructure design and is, at its heart, generative design software.

It uses rules-based automation and algorithms to generate early-stage models, drawings and documentation, solving complex engineering problems through auditable, traceable data, rather than relying on less-reliable LLMs.

All TDG’s processing is done in the cloud, so it works without the need for a desktop application and can be accessed from any device with a web browser.

We also find it to be impressively transdiscipline, integrating the design processes of mixed teams to produce complete, multi-option design packages that reflect the work and experience of mechanical, civil and electrical design experts.

This end-to-end, multidisciplinary approach certainly appears to be a key differentiator for Transcend in the automation space.


Q&A with Transcend co-founder Adam Tank

Adam Tank is co-founder and chief communications officer at Transcend. AEC Magazine met with Tank to focus on the company’s Transcend Design Generator (TDG) tool and hear more about its future product roadmap.

Adam Tank

AEC Magazine: To begin, we’re curious to know how you define TDG, or Transcend Design Generator, Adam. Is it a configurator, is it AI, is it both – or is it something else entirely?

Adam Tank: TDG is fundamentally a parametric design software. While people often mistake sophisticated automation for artificial intelligence, our software is built on processes that are really thought-out. It operates as a massive parametric solver, similar to tools used in site development like TestFit, but applied to multidisciplinary engineering for critical infrastructure.

We utilise rules-based automation and algorithms to generate complete, viable design options, based on inputs, constraints and standards. TDG can produce designs quickly, by combining first-principles engineering, parametric design rules and proprietary data sets.

Our primary focus is on solving complex engineering problems through auditable, traceable data, rather than relying solely on large language models that might hallucinate. Every decision the software makes can be traced back to a literal textbook calculation or a rule of thumb provided by an expert engineer.


AEC Magazine: So what exactly does the output for a project produced by TDG look like and how deep does the generated geometry go?

Adam Tank: TDG supports the entire early-stage design process. The software is built to follow the same sequential workflow as a multi-disciplinary engineering team, beginning with process calculations, then moving on to mechanical, electrical and civil calculations.

Consequently, it is capable of generating a comprehensive set of validated, reusable data sets and outputs. These outputs include PFDs (process flow diagrams), BOQs (bills of quantities), and full P&IDs (piping and instrumentation diagrams), because it captures all the required data, such as the full equipment list, the geometry, the motor horsepower rating and the electrical consumption of the equipment.

These schematics can be produced in either AutoCAD or Revit. TDG also produces 3D BIM files with geometry generated at LOD 200. This includes key components like slabs, walls, doors, windows, concrete quantities and steel structures. LOD 200 is sufficient for the conceptual design phase, enabling teams to determine the total capital cost of a project within a 10% to 20% margin.
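In practice that margin simply brackets the estimate, as a trivial sketch shows (the $50m figure is an arbitrary example):

```python
def capex_range(estimate, margin):
    """Bracket a conceptual-stage capital cost estimate by a +/- margin."""
    return estimate * (1 - margin), estimate * (1 + margin)

low, high = capex_range(50_000_000, 0.20)  # a 20% margin on a $50m estimate
print(round(low), round(high))  # → 40000000 60000000
```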

Furthermore, Transcend also generates drawings from the model. Because the model geometry is generated from precise specifications and guaranteed accurate through automation, rather than patched up later in the drawings, the resulting drawings can be relied upon.


AEC Magazine: So how does TDG effectively combine knowledge and requirements of multiple engineering disciplines into one unified solution?

Adam Tank: The key to TDG is that it functions as an end-to-end, multi-disciplinary, first-principles engineering automation tool. We built the software to follow the exact same sequential thought process that a multidisciplinary team of engineers uses today.

The process begins with the software taking user inputs regarding location, desired consumption, and facility requirements, and combining this with first principles engineering, parametric design rules, and proprietary data sets. Critically, every decision the software makes can be traced back to a textbook calculation or an engineer’s rule of thumb, providing the auditable, traceable data required in this high-risk industry.

The engine then executes the workflow. It starts with the process set of calculations. Once that data is validated, the software transfers that data to the next stage, flowing through a mechanical engine that handles the calculations and then subsequently translating the data for electrical and civil engineering needs.

Essentially, TDG integrates process, mechanical, civil and electrical design logic into one tool, acting as an engine that ‘chews it all up’, from a multi-disciplinary perspective, and produces the unified outputs required by engineers.

This complex system handles local and regional standards, equipment standards and regulatory constraints, guaranteeing that the design options generated are viable and grounded in real engineering standards.
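The sequential hand-off Tank describes can be sketched as a toy pipeline (the disciplines and their order follow the article; every multiplier below is a made-up placeholder):

```python
# Toy sequential pipeline: process -> mechanical -> electrical -> civil.
# Each stage validates its inputs before handing data to the next discipline.

def process_stage(inputs):
    assert inputs["flow_m3_per_day"] > 0, "invalid flow"
    return {**inputs, "tank_volume_m3": inputs["flow_m3_per_day"] * 0.4}

def mechanical_stage(d):
    return {**d, "pump_count": 2}

def electrical_stage(d):
    return {**d, "installed_kw": d["pump_count"] * 15}

def civil_stage(d):
    return {**d, "concrete_m3": d["tank_volume_m3"] * 0.15}

design = {"flow_m3_per_day": 5_000}
for stage in (process_stage, mechanical_stage, electrical_stage, civil_stage):
    design = stage(design)

print(design["installed_kw"], round(design["concrete_m3"]))  # → 30 300
```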


AEC Magazine: The process certainly sounds heavily automated – but where, specifically, does TDG use AI today and what are the company’s future plans for incorporating more AI into the tool?

Adam Tank: Currently, the only part of our software that uses AI is the site arrangement, where we employ an evolutionary algorithm to optimise site layout. When a user inputs the parcel of land and specifications, the software checks constraints and runs through thousands of combinations to determine the optimal arrangement. This algorithm optimises site footprint, while taking into consideration required ingress/egress points for power and water, traffic flow and other necessary clearances.
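A minimal evolutionary loop of this kind, reduced to a one-dimensional toy lot with a single setback constraint (nothing here is Transcend’s actual algorithm), might look like:

```python
import random
random.seed(42)

LOT = 100.0       # toy one-dimensional site, metres
SETBACK = 10.0    # required clearance from each boundary
ACCESS = 0.0      # ingress/egress point at the lot edge

def fitness(x):
    # Hard-penalise constraint violations; otherwise prefer positions near access
    if x < SETBACK or x > LOT - SETBACK:
        return float("inf")
    return abs(x - ACCESS)

# Minimal (mu + lambda) evolutionary loop: keep the 5 best, mutate each 3 times
pop = [random.uniform(0, LOT) for _ in range(20)]
for _ in range(50):
    parents = sorted(pop, key=fitness)[:5]
    pop = parents + [p + random.gauss(0, 2.0) for p in parents for _ in range(3)]

best = min(pop, key=fitness)
print(round(best, 1))  # close to 10.0: pressed against the setback, nearest to access
```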

For future AI development, we are focused on applications that build user trust and enhance productivity. For example, while TDG already produces a preliminary engineering report as part of its output package, we are looking at leveraging AI for text generation within this report.

There’s also scope for an engineering co-pilot. We’d like to integrate an AI-powered co-pilot that guides the user through the TDG interface and, critically, explains the reasoning behind the software’s design decisions. Engineers are accustomed to manipulating every variable manually, so when the computer generates the solution, they need to understand why certain components are placed the way they are. This co-pilot could quote bylaws, manufacturer limitations or engineering standards, effectively allowing the user to query the model itself.


AEC Magazine: How does Transcend handle the complexity of standards and multi-disciplinary data flow across separate but collaborating engineering functions?

Adam Tank: Our software must handle local and regional standards, equipment standards and regulatory constraints, so the amount of data collection is immense.

The complex engine we have built follows the standard engineering workflow. It starts with a user inputting project data, like location, water flow, desired treatment, existing site conditions. This data is used by the process engineer calculation models, which run sophisticated simulations to predict kinetics and mathematics.

TDG acts as the multi-disciplinary engine. It feeds data into those process models, takes the output and then translates it into the next required discipline—mechanical, then electrical, then civil.

This means the engineering itself is still being done, but our engine chews up all the multi-disciplinary requirements and produces the unified outputs that engineers require.


AEC Magazine: Into which markets does Transcend hope to expand next – and why hasn’t the company so far sought to offer higher levels of detail, such as LOD 300 and LOD 400?

Adam Tank: Our focus has been to remain the only company offering end-to-end, multi-disciplinary, first principles engineering automation for critical infrastructure. We don’t have a direct competitor, because our competition is scattered across specialised automation tools that only handle specific parts of the process, such as MEP automation or architectural configuration. We were purpose-built specifically for water, power and wastewater infrastructure, and we are the only generative design software focused entirely on these complex sectors.

Regarding LODs, we have made a deliberate strategic decision not to pursue higher LOD specifications. In the conceptual design phase, we generate geometry at LOD 200. The time and complexity required to achieve that depth would divert resources from attracting new clients and expanding into new conceptual design verticals.

If it were entirely up to me, the next big market we would pursue is transportation, covering roads and bridges, which represents a massive market in terms of total design dollars spent, eclipsing water and wastewater by almost double.

We also get asked a lot about data centre design. This expansion is technologically feasible for us. For instance, early in our company history, we developed a similar rapid configuration tool for Black & Veatch to design COVID testing facilities during the pandemic. We see a potential natural fit with companies like Augmenta, which specialises in electrical wiring automation, where we could automate the building structure and they could handle the wiring complexity.

Studio Tim Fu: AI-driven design
https://aecmag.com/ai/studio-tim-fu-ai-driven-design/ – Wed, 16 Apr 2025

The London practice is reimagining architectural workflows, blending human creativity with machine intelligence

The post Studio Tim Fu: AI-driven design appeared first on AEC Magazine.

The pioneering London practice is reimagining architectural workflows through AI, blending human creativity with machine intelligence to accelerate and elevate design, writes Greg Corke

It’s rare to see an architectural practice align itself so openly with a specific technology. But Studio Tim Fu is breaking that mould. Built from the ground up as an AI-first practice, the London-based studio is unapologetically committed to exploring how generative AI can reshape architecture—from the earliest concepts to fully constructable designs.

“We want to explore in depth how we can use the technology of generative AI, of neural networks, deep learning, and large language models as well, in an effort to facilitate an accelerated way of designing and building, but also thinking,” explains founder Tim Fu.


Find this article plus many more in the March / April 2025 Edition of AEC Magazine
👉 Subscribe FREE here 👈

Studio Tim Fu’s current methodology uses AI early in the design process to boost creativity, accelerate visualisation, and improve client communication — all while maintaining technical feasibility.

The technological journey began during Fu’s time at Zaha Hadid Architects, where he explored the potential of computational design to rationalise complex geometries. “We were thinking about the complexity of design and how we can bring that to fruition through computational processes and technologies,” he recalls.

This early exploration laid the groundwork to the Studio’s current AI-driven approach, which involves a sophisticated iterative process that blends human intention with machine learning capabilities. Initial AI-generated concepts are refined through human guidance, then reinterpreted by diffusion AI technology. This creates a dynamic feedback loop for rapid conceptualisation, where hundreds of design expressions can be explored in a single day.

Fu’s technical approach employs a complex system of AI tools, from common text-to-image generators such as Midjourney, Dall-E and Stable Diffusion to custom-trained models. Using these tools at the start of a project presents a ‘gradient of possibilities’, says Fu, both using AI’s creative agency and incorporating human intentions. The team uses text prompts to spark fresh ideas, producing ‘mood boards’ of synthetic visuals, as well as hand sketches to guide the AI.

“We use a mesh of back and forth with different design tools,” he explains. Ideas are generated and refined before they are translated into 3D geometry using modelling tools like Rhino.

Once we figure out the architectural design and planning that solves real life situation and constraints and context, we bring those back into the AI visualising models, to visualise and continue to iterate over our existing 3D models

“Once we figure out the architectural design and planning that solves real life situation and constraints and context, we bring those back into the AI visualising models, to visualise and continue to iterate over our existing 3D models,” he says. This enables the design team to see, for example, different possible expressions of window details and geometries. It’s a continuous loop—a creative dialogue between human intention and machine imagination.

Fu believes the results speak for themselves: in just one week, his team can deliver high-quality, client-ready concepts that far exceed what’s possible using conventional methods within the same time frame.


Lake Bled Estate masterplan in Slovenia. Credit: Studio Tim Fu

This level of efficiency brings new economic opportunities. Studio Tim Fu can charge clients less than traditional architects while boosting its earnings, all within conventional pricing structures. “We can lower the price because we can, and we can up the value, so it’s a win for the client and it’s good for us,” he says.

AI meets heritage

The Studio’s work on the Lake Bled Estate masterplan in Slovenia, its first fully AI-driven architectural project, serves as a landmark demonstration of these technical capabilities.

Spanning an expansive 22,000 square metre site, the project comprises six ultra-luxury villas set alongside the historic Vila Epos, a protected cultural monument of the highest national significance.

To produce a design that respects its historical context while creating an elevated luxury space, Studio Tim Fu synthesises heritage data with AI.

The Studio captured the local architectural vernacular by analysing material characteristics and extracting geometric parameters to comply with strict heritage regulations, including roof layout, height, and slope.

“This is the first time we are showing AI in its most contextually reflective way,” says Fu, “Something that is contrary to all the AI experiments that have come out since the dawn of diffusion AI processes.

“We want to showcase that this whole diffusion process can be completely controlled under our belt and be used for specifically addressing those issues [of respecting historical context].”


Delivering the details

Studio Tim Fu currently applies AI primarily at the concept-to-detail design stage. However, Fu believes we’re at a pivotal moment where AI is poised to take on more technical aspects of architectural design—particularly in areas like BIM modelling and dataset management.

“Because these are technical requirements, technical needs, and technical goals, it’s something that can be quantified,” he explains. “If it’s maximising certain functionality, while minimising the use of material and budget, these are numerical data that can be optimised. We’re just beginning that process of developing artificial general intelligence.”

But where does this leave humans? While Fu acknowledges that we must humbly recognise our limitations, he believes that human specialists—architects, designers, and fabricators—will remain essential, each working with AI within their own domain. At the same time, he sees enormous potential for AI to unify these fields.

“What AI can do is bring all of the human processes into a cohesive, streamlined decision making, to design to production process, because that’s what AI is good at. It’s good at cohesing large data sets, it’s good at addressing macro scale and micro scale values in the same time.”


Main image: Lake Bled Estate masterplan in Slovenia. Credit: Studio Tim Fu

AI agents for civil engineers
https://aecmag.com/civil-engineering/ai-agents-for-civil-engineers/ – Wed, 16 Apr 2025

How LLMs can help engineers work more efficiently, while still respecting professional responsibilities

The post AI agents for civil engineers appeared first on AEC Magazine.

Anande Bergman explores how AI agents can be used to create powerful solutions to help engineers work more efficiently but still respect their professional responsibilities

As a structural engineer, I’ve watched how AI is transforming various industries with excitement. But I’ve also noticed our field’s hesitation to adopt these technologies — and for good reason. We deal with safety-critical systems where reliability is a requirement.

In this article, I’ll show you how we can harness AI’s capabilities while maintaining the reliability we need as engineers. I’ll demonstrate this with an AI agent I created that can interpret truss drawings and run FEM analysis (code repository included), and I’ll give you resources to create your own agents.



The possibilities here have me truly excited about our profession’s future! I’ve been in this field for years, and I haven’t been this excited about a technology’s potential to transform how we work since I first discovered parametric modelling.


Find this article plus many more in the March / April 2025 Edition of AEC Magazine
👉 Subscribe FREE here 👈

What makes AI agents different?

Unlike traditional automation that follows fixed rules, AI agents can understand natural language, adapt to different situations, and even solve problems creatively. Think of them as smart assistants that can understand what you want and get it done.

For example, while a traditional Python script needs exact coordinates, boundary conditions, and forces to analyse a truss, an AI agent can look at a hand-drawn sketch or AutoCAD drawing and figure out the structure’s geometry by itself (see image below). It can even request any missing information needed for the analysis. This flexibility is powerful, but it also introduces unpredictability — something we engineers typically try to avoid.
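For contrast, here is the kind of explicit input a conventional rule-based script demands up front, using a toy two-bar truss (illustrative data only):

```python
# The explicit input a traditional truss script needs before it can do
# anything; an AI agent instead extracts this from a drawing.
# Units: metres and newtons; values are illustrative.
truss = {
    "nodes": {1: (0.0, 0.0), 2: (4.0, 0.0), 3: (2.0, 3.0)},
    "elements": [(1, 3), (2, 3)],            # members as node pairs
    "supports": {1: "pinned", 2: "pinned"},  # boundary conditions
    "loads": {3: (0.0, -10_000.0)},          # 10 kN downward at the apex
}
print(len(truss["elements"]), len(truss["loads"]))  # → 2 1
```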


Anande Bergman


The rise of specialised AI agents

It’s 2025, and you’ve probably heard of ChatGPT, Claude, Llama, and other powerful Large Language Models (LLMs) that can do amazing things, like being incredibly useful coding assistants. However, running these large models in production is expensive, and their general-purpose nature sometimes makes them underperform in specific tasks.

This is where specialised agents come in. Instead of using one large model for everything, we can create smaller, fast, focused agents for specific tasks — like analysing drawings or checking building codes. These specialised agents are:

  • More cost-effective to run
  • Better at specific tasks
  • Easier to validate

Agents are becoming the next big thing. As Microsoft CEO Satya Nadella points out, “We’re entering an agent era where business logic will increasingly be handled by specialised AI agents that can work across multiple systems and data sources”.

For engineering firms, this means we can create agents that understand our specific workflows and seamlessly integrate with our existing tools and databases.

The engineering challenge

Here’s our core challenge: while AI offers amazing flexibility, engineering demands absolute reliability. When you’re designing a bridge or a building, you need to be certain about your calculations. You can’t tell your client “the AI was 90% sure this would work.”

On the other hand, creating a rule-based engineering automation tool that can handle all kinds of inputs and edge cases while maintaining 100% reliability is a significant challenge. But there’s a solution.

Bridging the gap: reliable AI agents

We can combine the best of both worlds by creating a system with three key components (see image below):


Anande Bergman


  1. AI agents handle the flexible parts – understanding requests, interpreting drawings, and searching for data.
  2. Validated engineering tools perform the critical calculations.
  3. Human in the loop: You, the engineer, maintain control — verifying data, checking results, and approving modifications.
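The three components can be sketched in miniature, with a stub standing in for the LLM (all values and formulas below are toy placeholders):

```python
def agent_interpret(drawing_path):
    # Flexible part: in the real system an LLM extracts this from an image.
    # Stubbed here with fixed values.
    return {"span_m": 6.0, "load_kn": 12.0}

def validated_solver(model):
    # Deterministic, validated calculation (toy formula: PL/4 midspan moment, kNm)
    return model["load_kn"] * model["span_m"] / 4

def human_approves(result_knm):
    # The engineer stays in the loop before anything is signed off
    return result_knm < 100

model = agent_interpret("truss_sketch.png")
moment = validated_solver(model)
print(moment, human_approves(moment))  # → 18.0 True
```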

Let me demonstrate this approach with a practical example I built: a truss analysis agent.

Engineering agent to analyse truss structures

Just as an example, I created a simple agent that calculates truss structures using the LLM Claude Sonnet. You give it an image of the truss, it extracts all the data it needs, runs the analysis, and gives you the results.

You can also ask the agent for any kind of information, like material and section properties, or to modify the truss geometry, loads, forces, etc. You can even give it some more challenging problems, like “Find the smallest IPE profile so the stresses are under 200 MPa”, and it does!
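A toy version of that profile search shows the kind of deterministic check the agent delegates to a tool (the section moduli below are approximate catalogue values; verify them against a real steel table before use):

```python
# Approximate elastic section moduli W_el,y in cm^3 for a few IPE sizes
IPE_WEL_CM3 = {"IPE 80": 20.0, "IPE 100": 34.2, "IPE 120": 53.0,
               "IPE 140": 77.3, "IPE 160": 109.0}

def smallest_ipe(moment_knm, limit_mpa=200.0):
    """Return the smallest listed IPE whose bending stress stays under the limit."""
    w_required_cm3 = moment_knm * 1e6 / limit_mpa / 1e3  # kNm -> Nmm, mm^3 -> cm^3
    for name, w in sorted(IPE_WEL_CM3.items(), key=lambda kv: kv[1]):
        if w >= w_required_cm3:
            return name
    return None  # nothing in the table is big enough

print(smallest_ipe(15.0))  # → IPE 140
```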

The first time I saw this working I couldn’t help but feel that childlike excitement engineers get when something cool actually works. Here is where you start seeing the power of AI agents in action.

It is capable of interpreting different types of drawings and creating a model, which saves a lot of time in comparison with the typical Python script where you would need to enter all the node coordinates by hand, define the elements and their properties, loads, etc.

Additionally, it solves problems using information I did not define in the code, like the section properties of IPE profiles or material properties of steel, or what is the process to choose the smallest beam to fulfil the stress requirement. It does everything by itself. N.B. You can find the source code of this agent in the resources section at the end.

In the video below, you can see the app I made using VIKTOR.AI


How does it work: an overview

Now let’s look behind the screen to understand how our AI agent works, so you can make one yourself.

In the image below you can see that in the centre you have the main AI agent, the brains of the operation. This is the agent that chats with the user and accepts text and images as input.


Anande Bergman


Additionally, it has a set of tools at its disposal, including another AI Agent, which it uses when it believes they are needed to complete the job:

  • Analyse Image: AI Agent specialised in interpreting images of truss structures and returning the data needed to build the FEM model.
  • Plot Truss: A simple Python function to display the truss structures.
  • FEM Analysis: Validated FEM analysis script programmed in Python.
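To give a flavour of what such a validated FEM script involves, here is a minimal 2D truss direct-stiffness sketch. It is my own illustration of the technique, not the author's tool:

```python
import numpy as np

def solve_truss(nodes, elements, EA, loads, fixed_dofs):
    """nodes: list of (x, y) coords; elements: node-index pairs; EA: axial
    stiffness; loads: global force vector (2 dof per node); fixed_dofs:
    set of constrained dof indices. Returns the displacement vector."""
    n_dof = 2 * len(nodes)
    K = np.zeros((n_dof, n_dof))
    for i, j in elements:
        dx, dy = np.asarray(nodes[j]) - np.asarray(nodes[i])
        L = np.hypot(dx, dy)
        c, s = dx / L, dy / L
        # standard 4x4 bar-element stiffness in global coordinates
        k = (EA / L) * np.outer([-c, -s, c, s], [-c, -s, c, s])
        dofs = [2 * i, 2 * i + 1, 2 * j, 2 * j + 1]
        K[np.ix_(dofs, dofs)] += k
    free = [d for d in range(n_dof) if d not in fixed_dofs]
    u = np.zeros(n_dof)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], np.asarray(loads, float)[free])
    return u

# Single horizontal bar, fixed at node 0, loaded axially at node 1.
# Expected axial displacement: u = F*L/(EA) = 100*2/1000 = 0.2
u = solve_truss(nodes=[(0, 0), (2, 0)], elements=[(0, 1)], EA=1000.0,
                loads=[0, 0, 100, 0], fixed_dofs={0, 1, 3})
```

The agent never does this arithmetic itself; it only decides when to call the tool and with what data, which is what keeps the numerical results trustworthy.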

The Main agent

The Main agent is powered by Claude 3.7 Sonnet, the latest LLM from Anthropic at the time of writing. Essentially, you are using the same model you chat with when using Claude in the browser, but you access it from your code via Anthropic’s API, give the model clear guidelines on how to behave, and provide it with a set of tools it can use to solve problems.

You can also use other models, like ChatGPT, Llama 3.x and others, as long as they support tool calling natively (using functions). Otherwise, it gets complicated to connect your validated engineering scripts.

For example, here’s how we get an answer from Claude using Python (see image below).


Anande Bergman


Let’s break down these key components:

  • SYSTEM MESSAGE: Text that defines the agent’s role, behaviour guidelines, boundaries, etc.
  • TOOLS_DESCRIPTION: Description of the tools the agent can use, with their inputs and outputs.
  • messages: The complete conversation, including all previous user and assistant (Claude) messages, so Claude knows the context of the conversation.
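Assembled in code, those three components end up in one request to the Anthropic Messages API. The sketch below is illustrative (the system text, tool schema and model name are my assumptions, not the author's exact code), and the actual network call is shown but not executed:

```python
# The three components described above, assembled into the payload shape
# the Anthropic Messages API expects.
SYSTEM_MESSAGE = (
    "You are a structural engineering assistant. Use the provided tools "
    "for any FEM calculation; never invent numerical results yourself."
)

TOOLS_DESCRIPTION = [
    {
        "name": "fem_analysis",
        "description": "Run a validated 2D truss FEM analysis.",
        "input_schema": {
            "type": "object",
            "properties": {"truss_data": {"type": "object"}},
            "required": ["truss_data"],
        },
    }
]

def build_request(messages, model="claude-3-7-sonnet-20250219"):
    """Collect system message, tools and conversation into one request dict."""
    return {
        "model": model,
        "max_tokens": 2048,
        "system": SYSTEM_MESSAGE,
        "tools": TOOLS_DESCRIPTION,
        "messages": messages,  # full history: user and assistant turns
    }

# The actual call (requires the `anthropic` package and an API key):
#   client = anthropic.Anthropic()
#   response = client.messages.create(**build_request(messages))
request = build_request([{"role": "user", "content": "Analyse this truss."}])
```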

Tool use

One of the most powerful features of Claude and other modern LLMs is their ability to use tools autonomously. When the agent needs to solve a problem, it can decide which tools to use and when to use them. All it needs is a description of the available tools, like in the image below.


Anande Bergman


The agent can’t directly access your computer or tools — it can only request to use them. You need a small intermediary function that listens to these requests, runs the appropriate tool, and sends the results back. So don’t worry, Claude won’t take over your laptop… yet 😉
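A minimal sketch of such an intermediary function, assuming the tool_use block shape used by the Anthropic Messages API. The tool names and results here are illustrative placeholders:

```python
# Toy stand-ins for the real tools; the FEM result is a placeholder value.
def plot_truss(truss_data):
    return f"plotted {len(truss_data.get('nodes', []))} nodes"

def fem_analysis(truss_data):
    return {"max_stress_mpa": 187.5}  # would come from the validated script

TOOLS = {"plot_truss": plot_truss, "fem_analysis": fem_analysis}

def handle_tool_use(tool_use_block):
    """Run the requested tool and wrap its output as a tool_result message
    that gets sent back to the model in the next turn."""
    tool = TOOLS[tool_use_block["name"]]
    result = tool(**tool_use_block["input"])
    return {
        "type": "tool_result",
        "tool_use_id": tool_use_block["id"],
        "content": str(result),
    }

# Example: the assistant's response contained this tool_use block
request = {"type": "tool_use", "id": "toolu_01", "name": "fem_analysis",
           "input": {"truss_data": {"nodes": [[0, 0], [4, 0]]}}}
reply = handle_tool_use(request)
```

The loop continues until the model stops requesting tools and produces a final text answer for the user.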

The Analyse image agent

Here’s a fun fact: the agent that analyses truss images is actually another instance of Claude! So yes, we have Claude talking to Claude (shhh…. don’t tell him 🤫). I did this to show how agents can work together, and honestly, it was the simplest way to get the job done.

This second agent uses Claude’s ability to understand both images and text. I give it an image and ask it to return the truss data in a specific JSON format that we can use for FEM analysis. Here is the prompt I use.


Anande Bergman


I’m actually quite impressed by how well Claude can interpret truss drawings right out of the box. For complex trusses, though, it sometimes gets confused, as you can see in the test cases later.
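For illustration, here is a hypothetical example of the kind of JSON structure such a prompt might request (the article's exact schema may differ), along with a cheap sanity check before handing the data to the FEM tool:

```python
# Hypothetical truss data in the shape a FEM script could consume.
truss_data = {
    "nodes": [{"id": 0, "x": 0.0, "y": 0.0},
              {"id": 1, "x": 4.0, "y": 0.0},
              {"id": 2, "x": 2.0, "y": 3.0}],
    "elements": [{"id": 0, "nodes": [0, 1]},
                 {"id": 1, "nodes": [1, 2]},
                 {"id": 2, "nodes": [2, 0]}],
    "supports": [{"node": 0, "type": "pinned"},
                 {"node": 1, "type": "roller"}],
    "loads": [{"node": 2, "fx": 0.0, "fy": -60000.0}],  # forces in N
}

def validate(data):
    """Check that every element, support and load references a real node."""
    node_ids = {n["id"] for n in data["nodes"]}
    ok = all(set(e["nodes"]) <= node_ids for e in data["elements"])
    ok = ok and all(s["node"] in node_ids for s in data["supports"])
    ok = ok and all(l["node"] in node_ids for l in data["loads"])
    return ok
```

A validation step like this catches the most common vision mistakes (phantom nodes, dangling elements) before they reach the solver.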

This is where a specialised agent, trained specifically for analysing truss images, would make a difference. You could create this using machine learning or by fine-tuning an LLM. Fine-tuning means giving the model additional training on your specific type of data, making it better at that task (though potentially worse at others).

Test case: book example

The first test case is an image of a book (see image below). What’s interesting is that the measurements and forces are given with symbols, and then the values are provided below. You can also see the x and y axis with arrows and numbers, which could be distracting.


Anande Bergman


The agent did a very good job. Dimensions, forces, boundary conditions and section properties are all correct. The only issue is that element 8 points in the wrong direction, which I asked the agent to correct, and it did.

Test case: AutoCAD drawing

This technical drawing has many more elements than the first case (see image below). You can also see many numerical annotations, which could be distracting.


Anande Bergman


Again, the agent did a great job. Dimensions and forces are perfect. Notice how the agent understands that, for example, the force 60k is 60,000 N. The only error I could spot is that, while the supports are placed at the correct locations, two of them should be rollers instead of fixed supports, but given how small the symbols are, this is very impressive. Note that the agent receives a low-resolution (1,600 x 400 pixel) PNG image, not a real CAD file.

Test case: transmission tower

This is definitely the most challenging of the three trusses, and all the data is given as text. It also requires the agent to do a lot of maths. For example, the forces are applied at an angle, so it needs to calculate the x and y components of each force. It also needs to calculate the x and y positions of nodes by adding up different measurements, like this: x = a + a + b + a + a.

As you can see in the image below, this was a bit too much of a challenge for our improvised truss vision agent, and for more serious jobs, we need specialist agents. Now, in defence of the agent, the image size was quite small (700 x 600 pixels), so maybe with larger images and better prompts, it would do a better job.


Anande Bergman


An open-source agent for you

I’ve created a simplified version of this agent that demonstrates the core concepts we’ve discussed. This implementation focuses on the essential components:

  • A basic terminal interface for interaction
  • Core functionality for truss analysis
  • Integration with the image analysis and FEM tools

The code is intentionally kept minimal to make it easier to understand and experiment with. You can find it in this GitHub repository. This simplified version is particularly useful for:

  • Understanding how AI agents can integrate with engineering tools
  • Learning how to structure agent-based systems
  • Experimenting with different approaches to truss analysis

While it doesn’t include all the features of the full implementation, it provides a solid foundation for learning and extending the concept. You can use it as a starting point to build your own specialised engineering agents. See video below.



Conclusions

After building and testing this truss analysis agent, here are my key takeaways:

1) AI agents are game changers for engineering workflows

  • They can handle ambiguous inputs like hand-drawn sketches
  • They adapt to different ways of describing problems
  • They can combine information from multiple sources to solve complex tasks

2) Reliability comes from smart architecture

  • Let AI handle the flexible, creative parts
  • Use validated engineering tools for critical calculations
  • Keep engineers in control of key decisions

3) The future is specialised

  • Instead of one large AI trying to do everything
  • Create focused agents for specific engineering tasks
  • Connect them into powerful workflows

4) Getting started is easier than you think

  • Modern LLMs provide a great foundation
  • Tools and APIs are readily available
  • Start small and iterate

Remember: AI agents aren’t meant to replace engineering judgment — they’re tools to help us work more efficiently while maintaining the reliability our profession demands. By combining AI’s flexibility with validated engineering tools and human oversight, we can create powerful solutions that respect our professional responsibilities.

I hope you’ll join me in exploring what’s possible!

Resources


About the author

Anande Bergman is a product strategist and startup founder who has contributed to multiple successful tech ventures, including a globally-scaled engineering automation platform.

With a background in aerospace engineering and a passion for innovation, he specialises in developing software and hardware products and bringing them to market.

Drawing on his experience in both structural engineering and technology, he writes about how emerging technologies can enhance professional practices while maintaining industry standards of reliability.

The post AI agents for civil engineers appeared first on AEC Magazine.

Regarding digital twins https://aecmag.com/digital-twin/regarding-digital-twins/ Wed, 16 Apr 2025 We spoke with the developer of Twinview to hear the latest on digital twins

The post Regarding digital twins appeared first on AEC Magazine.

AEC Magazine caught up with Rob Charlton, CEO of Newcastle’s Space Group, to talk about digital twin adoption and advances. Twinview, created by the company’s BIM Technologies spin-off, is one of the most mature solutions on the market today and now has global customers

It’s tough being one of the first to enter a market but for Space, one of the country’s most BIM-centric architectural practices, it was a case of needs must. In 2016, its BIM consultancy spin-off, BIM Technologies, identified a need for its clients to access their model data without expensive software or hardware. Development started and this eventually became Twinview, launched in 2019.

Space Group is a practising architecture firm, a BIM software developer, a services supplier, and a creator and distributor of a BIM components / objects library. So, not only does it develop BIM software, it also uses the software in its own practice, as well as selling its solutions and services to other firms.


Find this article plus many more in the March / April 2025 Edition of AEC Magazine
👉 Subscribe FREE here 👈

Selling twins

In previous conversations with CEO Rob Charlton on the market’s appetite for digital twins, he has been frank about the difficulty of getting buy-in from fellow architects, developers and even owner-operators. The customers who got into twins early were firms that owned portfolios of buildings sold as eco-grade investments.

Charlton acknowledges that he always expected it to be a long-term endeavour: “We started this development knowing it was a five-year-plus journey to any level of maturity or even awareness”. He draws a parallel to the adoption of BIM, recalling that even though Space bought its first licence of Revit around 2001, it didn’t gain significant traction until around 2011, and even then, this was largely due to UK BIM mandates.

The early digital twin market development was a ‘slow burn’. Charlton contrasts BIM Technologies’ patient, self-funded approach with companies that seek large VC funding, arguing that “the market will move at the level it’s ready for”.

He explains that the good news is that over the last year there has been an increase in awareness of the value of digital twins, particularly in the last six months.

This awareness is seen in the fact that clients are now putting out Requests for Proposals (RFPs) for digital twin solutions. For Charlton, this is a fundamental difference compared to the past, where they would have to approach firms to explain the benefits of digital twins. Now, the clients themselves have made the decision that they want a digital twin and are seeking proposals from providers.

Priorities and needs

There’s a lot of talk about digital twins but very little about the actual benefits of investing in building them. Charlton explains that a lot of twin clients are increasingly interested in reducing carbon in buildings, whether embodied or operational, as well as in compliance and safety. “It’s an area that Space is particularly passionate about, but there is an inconsistency in how embodied carbon reviews and measurements are conducted,” he says.

Customer access to operational data is also important, explains Charlton: “Clients want to gain insights into how their buildings are actually performing in real time.”

He also notes that the facilities and the integration with facilities management is equally important, to streamline maintenance, manage issues, and improve overall building operations.

Clients value the ability to have “access to their information in one place” adds Charlton. And here, the cloud is the perfect solution to deliver a unified platform which consolidates models, and documents related to building assets.

Twinview clients are especially interested in owning their own data. Charlton gives the example of a New Zealand archive project, explaining that the client was particularly interested in having Twinview to maintain independence when using a subcontractor or external service provider, which might come and go over the project lifetime.

Back in the UK, Twinview is being used in conjunction with ‘desk sensors’ on an NHS project to optimise space and potentially avoid unnecessary capital expenditure. Charlton explains that the client was finding the digital twin useful for “analysis on how the space is used” because they were seeking to validate or challenge space needs assessments by consultants.

Increasingly, contractual obligations include performance data. For one of Space’s school clients, the DFA Woodman Academy, there’s a contractual obligation to provide energy performance data at one month, three months and 12 months. Digital twin technology facilitated this compliance goal within the performance-based contract. The IoT sensors also identified high levels of CO2 in the classrooms, prompting an investigation into the cause.

Twinview goes beyond the traditional digital twin model for operations and has been used to connect residents to live building information. On a residential project, tenants access Twinview data on their mobile phones to see energy levels, temperatures and CO2 in the building, all through their own app.

Artificial Intelligence

Everyone is talking about AI, and Twinview now features a ChatGPT-like front end. This enables plain language search within the digital twin, both at an asset level and with regards to performance data. Charlton explains that while the AI in Twinview has a ‘ChatGPT-like interface’, it is not directly ChatGPT, although it does connect to it. Twinview developed its own system, possibly due to the commercial costs associated with using ChatGPT for continuous queries. The AI in Twinview accesses all building information, including the model, operational data and tickets, which are stored in a single bucket on AWS.

Looking to the future, Charlton mentions that the next stage of AI development for Twinview will be focused on prediction and learning. This includes the ability to generate reports automatically (e.g. weekly on average CO2 levels), predict future energy usage, and suggest ways to improve building performance.

A key differentiator for AI in Twinview in the future will be its capacity to understand correlations between disparate datasets that are often siloed, such as occupancy data, fire analysis and energy consumption. By applying a GPT-like technology over this connected data, the aim is to uncover new insights and solutions.

Development Journey

From a slow burn start, despite being a relatively small UK business and competing with big software firms with deep pockets, Charlton told us that Twinview had already won international clients and is currently being shortlisted for other significant international projects, including one on the west coast of America, against international competition.



Motif to take on Revit: exclusive interview https://aecmag.com/bim/motif-to-take-on-revit-exclusive-interview/ Fri, 07 Feb 2025 BIM startup is led by former Autodesk co-CEO Amar Hanspal and backed by a whopping $46 million in funding

The post Motif to take on Revit: exclusive interview appeared first on AEC Magazine.

BIM startup Motif has just emerged from stealth, aiming to take on Revit and provide holistic solutions to the fractured AEC industry. Led by former Autodesk co-CEO Amar Hanspal and backed by a whopping $46 million in funding, Motif stands out in a crowded field. In an exclusive interview, Martyn Day explores its potential impact.

The race to challenge Autodesk Revit with next-generation BIM tools has intensified with the launch of Motif, a startup that has just emerged out of stealth. Motif joins other startups including Arcol, Qonic, and Snaptrude, who are already on steady development paths to tackle collaborative BIM. However, like any newcomer competing with a well-established incumbent, it will take years to achieve full feature parity. This is even the case for Autodesk’s next generation cloud-based AEC technology, Forma.

What all these new tools can do quickly is bring new ideas and capabilities into existing Revit (RVT) AEC workflows. This year, we’re beginning to see this happening across the developer community, a topic that will be discussed in great detail at our NXT BLD and NXT DEV conferences on 11 and 12 June 2025 at the Queen Elizabeth II Centre in London.

Though a late entrant to the market, Motif stands out. It’s led by Amar Hanspal and Brian Mathews, two former Autodesk executives who played pivotal roles in shaping Autodesk’s product development portfolio.



Hanspal was Autodesk CPO and, for a while, joint CEO. Mathews was Autodesk VP platform engineering / Autodesk Labs and led the industry’s charge into adopting reality capture. They know where the bodies are buried, have decades of experience in software ideation and running large teams, and have immediate global networks with leading design IT directors. Their proven track record also makes it easier for them to raise capital and be taken as a serious contender from the get-go.


Further reading – Motif V1: our first thoughts

 


Motif

In late January, the company had its official launch alongside key VC investors. Motif secured $46 million in seed and Series A funding. The Series A round was led by CapitalG, Alphabet’s independent growth fund, while the seed round was led by Redpoint Ventures. Pre-seed venture firm Baukunst also participated in both rounds. This makes Motif the second largest funded start-up in the ‘BIM’ space – the biggest being HighArc, a cloud-based expert system for US homebuilders, at $80 million.

Motif has been in stealth for almost two years, operating under the name AmBr (we are guessing, for Amar and Brian). Major global architecture firms have been involved in shaping the development of the software, even before any code was written, all under strict NDAs (non-disclosure agreements).

The firms working with Hanspal’s team deliver the most geometrically complex and large projects. The core idea is that by tackling the needs of signature architectural practices, the software should deliver more than enough capability for those who focus on more traditional, low-risk work.



There is considerable appetite to replace the existing industry standard software tools. This hunger has been expressed in multiple ‘Open Letters to Autodesk’, based on a wish for more capable BIM tools – a zeitgeist which Motif is looking to harness, as BIM eventually becomes a replacement market.

The challenge

Motif’s mission is to modernise the AEC software industry, which it sees as being dominated by ‘outdated 20th-century technology’. Motif aims to create a next-generation platform for building design, integrating 3D, cloud, and machine learning technologies. Challenges such as climate resilience, rapid urbanisation modelling, and working with globally distributed teams will be addressed, and the company’s solutions will integrate smart building technology.

Motif will fuse 3D, cloud, and AI with support for open data standards within a real-time collaborative platform, featuring deep automation. The unified database will be granular, enabling sharing at the element level. This, in many ways, follows the developments of other BIM start-ups such as Snaptrude and Arcol, which pitch themselves as the ‘Figma’ for BIM. In fact, Hanspal was an early investor in Arcol, alongside Procore’s Tooey Courtemanche.

At the moment, there is no software for the public to see, just some hints of the possible interface on the company’s website. Access is by request only. AEC Magazine is not privy to any product demonstrations, only what we have gleaned through conversations with Motif employees. The launch provided us with an exclusive interview with Hanspal to discuss the company, the technology and what the BIM industry needs.

A quantum of history

Before we dive into the interview, let’s have a quick look at how we got here. At Autodesk University 2016, while serving as Autodesk’s joint CEO, Hanspal introduced his bold vision for the future of BIM. Called Project Quantum, the aim was to create a new platform that would move BIM workflows to the cloud, providing a common data environment (CDE) for collaborative working.

Hanspal aimed to address problems which were endemic in the industry, arising from the federated nature of Architecture, Engineering, and Construction (AEC) processes and how software, up to that point, doubled down on this problem by storing data in unconnected silos.

Instead of focusing on rewriting or regenerating Revit as a desktop application, the vision was to create a cloud-based environment to enable different professionals to work on the same project data, but with different views and tools, all connected through the Quantum platform.


Advertisement

Quantum would feature connecting workspaces, breaking down the monolithic structure of typical AEC solutions. This would allow data and logic to be accessible anywhere on the network and available on demand, in the appropriate application for a given task. These workspaces were to be based on professional definitions, providing architects, structural engineers, MEP (Mechanical, Electrical, and Plumbing) professionals, fabricators, and contractors with access to the specific tools they need.

Hanspal recognised that interoperability was a big problem, and any new solution needed to facilitate interoperability between different software systems, acting as a broker, moving data between different data silos. One of the key aspects of Quantum was that the data would be granular, so instead of sharing entire models, Quantum could transport just the components required. This would mean users receive only the information pertinent to their task, without the “noise” of unnecessary data.

Eight months later, the Autodesk board elected fellow joint CEO Andrew Anagnost as sole Autodesk CEO, and Hanspal left Autodesk. Meanwhile, the concept of Quantum lived on, and development teams continued exploratory work under Jim Awe, Autodesk’s chief software architect.

Months turned into years and by 2019, Project Quantum had been rebranded Project Plasma, as the underlying technology was seen as a much broader company-wide effort to build a cloud-based, data-centric approach to design data. Ultimately, Autodesk acquired Spacemaker in 2020 and assigned its team to develop the technology into Autodesk Forma, which launched in 2023 – more than six years after Hanspal first introduced the Quantum concept.

However, Forma is still at the conceptual stage, with Revit continuing to be the desktop BIM workflow, with all its underlying issues.

In many respects, Hanspal predicted the future of next generation BIM in his 2016 Autodesk University address. Up until that point, Autodesk had wrestled for years with cloud-based design tools, its first test being the Mechanical CAD (MCAD) software Autodesk Fusion, which was demoed in 2009 and shipped in 2013. Cloud-based design applications were a tad ahead of the web standards and infrastructure that have since helped products like Figma make an impact.


Advertisement

In conversation

On leaving Autodesk in 2017, after his 15+ year stint, Hanspal thought long and hard about what to do next. In various conversations over the years, he admitted that the most obvious software demand was for a new, modern-coded BIM tool, as he had proposed in some detail with Quantum. However, Hanspal was mindful that it might be seen as sour grapes. Plus, developing a true Revit competitor came with a steep price tag – he estimated it would take over $200 million. Instead, Hanspal opted to start Bright Machines, a company which delivers scalable automation through robot modules and control software that uses computer-vision machine learning to manufacture small goods, like electronics.

After almost four years at Bright Machines, in 2021, Hanspal exited and returned to the AEC problem, which, in the meantime, had not made any progress. During COVID, AEC Magazine was talking with some very early start-ups, and pretty much all had been in contact with Hanspal for advice and/or stewardship.


Martyn Day: Your approach to the market isn’t a single-platform approach, like Revit?

Amar Hanspal: In contrast to the monolithic approach of applications like Revit, we aim to target specific issues and workflows. There will be common elements. With the cloud, you build a common back end, but the idea is that you solve specific problems along the way. You only need one user management system, one payment system, collaboration etc. There are some technology layers that are common. But the idea is about solving end-user problems like design review, modelling, editing, QA, QC.

This isn’t a secret! I talked about this in the Quantum thing seven years ago! I always say ideas are not unique. Execution is. When it comes down to it, can anybody else do this? Of course they can. Will they do this? Of course not!


The current Motif website

Martyn Day: Data storage and flow is a core differential from BIM 2.0. Will your system use granular data, and how will you bypass limitations of browser-based applications. You talk about ‘open’, which is very in vogue. Does that mean that your core database is Industry Foundation Classes (IFC), or is there a proprietary database?

Amar Hanspal: There are three things we have to figure out. One is how to run in a browser, where you have limited memory, so you can’t just send everything. You’ve got to get really clever about how to figure out what [data] people receive – and there’s all sorts of modern ways of doing that.

Second is you have to be open from the get-go. However we store the data, anybody should be able to access it, from day one.

And then the third thing is, you can’t assume that you have all the data, so you have to be able to link to other sources and integrate where it makes sense. If it’s a Revit object, you should be able to handle it but if it’s not, you should be able to link to it.

You have to do some things for performance – it’s not proprietary, but you’re always doing something to speed up your user experience. The one path is, here’s your client, then you have to get data fast to them, and you have to do that in a very clever way, all while you’re encrypting and decrypting it. That’s just for user experience and performance. But from a customer perspective, any time you want to interrogate the data and request all the objects in the database, there is a very standard web API that you can use, and it’s always available.

Of course we’ll support IFC, just like we support RVT and all these formats. But that’s not connected, not our core data format. Our core data format is a lot looser, because we realised in this industry, it’s not just geometric objects you’re dealing with, you must deal with materials, and all sorts of data types. In some ways, you must try and make it more like the internet in a way. Brian [Mathews] would explain that the internet is this kind of weirdly structured yet linked data, all at the same time. And I think that’s what we are figuring out how to do well.


Advertisement

Martyn Day: We have seen all sorts of applications now being developed for the web. Some are thick clients with a 20 GB download – basically a desktop application running in a web browser, utilising all the local compute, with the data on the cloud. Some are completely on the cloud with little resource requirement on the local machine. Autodesk did a lot of experimentation to try and work out the best balance. What are you doing?

Amar Hanspal:  It’s a bit of a moving edge right now. I would say that you want to begin first principles. You want to get the client as thin as possible so that if you can, you avoid the big download at all costs. That can be through trickery, it’s also where WebGPU and all these new things that are showing up are helping. You can start using browsers for more and more [things] every day that will help deliver applications. But I do think that there are situations in which the browser is going to get overwhelmed, in which case, you’re going to require people to add something. Like, when the objects get really large and very graphical, sometimes you can deliver a better user experience if you give somebody a thicker client.  I think that’s some way off for us to try and deal with, but our first principle is to just leverage the browser as much as possible and not require users to download something to use our application. I think it may become, ‘you hit this wall for this particular capability’, then you’ll need to add something local.


Martyn Day: You have folks that have worked on Revit in your team. Will this help your RVT ability from the get-go?

Amar Hanspal: We’ve not reverse engineered the file format, but, you know, we do know how this works. We’re staying good citizens and will play nice. We’re not doing any hacks, we’re going to integrate very cleanly with whatever – Revit, Rhino, other things that people use – in a very clean way. We’re doing it in an intelligent way, to understand how these things are constructed.


Martyn Day: The big issue is that Revit is designed to predominantly model, in order to produce drawings. Many firms are fed up with documentation and modelling to produce low level of detail output. Are you looking to go beyond the BIM 1.0 paradigm?

Amar Hanspal: Yes, fabrication is very critical for modular construction. Fabrication is really one of the things that you have to ‘rethink’ in some way. It’s probably the most obvious other thing that you have to do. I also think that there are other experiences coming out – not that we are an AR/VR play – but you’re creating other sorts of experiences and deliverables that people want. We need to think through that more expansively.


Amar Hanspal sharing his vast experience in software development at AEC Magazine’s NXT DEV conference. (Click the image to watch the video)


Martyn Day: Are you using a solid modelling engine underneath, like Qonic?

Amar Hanspal: Yes, there is an answer to that, but what we’re coming out with first, won’t need all that complexity, but yeah, of course, we will do all that stuff over time.  There is a mixture of tech that we can use – off the shelf – like license one or use something that is relatively open source.


Martyn Day: Most firms who have entered this space, taking on Revit, is the software equivalent of scaling the North face of the Eiger – 20 years of development, multidiscipline, broadly adopted. All of the new tools initially look like SketchUp, as there’s so much to develop. Some have focused on one area, like conceptual, others have opted to develop all over the place to have broad, but shallow functionality. Are you coming to market focussing on a sweet spot?

Amar Hanspal:  One of the things we learned from speaking to customers is that [in] this whole concept modelling / Skema / TestFit world there are so many things that developers are doing. We’re going after a different problem set. In some ways, the first thing that we’re doing will feel much more like a companion, collaboration product, and it will look like a creation thing. I don’t want to take anything to market that feels half complete. The lessons we’ve learned from everything is that even to do the MVP (Minimum Viable Product) in modelling, we will be just one of sixteen things that people are using. I think, you know, I’d much rather go up to the North face and scale it.



Martyn Day: Many of the original letter writers were signature architects, complaining that they couldn’t model the geometry in Revit so used Rhino / Grasshopper then dropped the geometry into Revit. So, are you talking to the most demanding group of users to please?

Amar Hanspal:  I 100% agree with you. I think someone has to go up the North face of the Eiger. That’s my thing, it’s the hardest thing to do. It’s why we need this special team. It’s why we need this big capital. That’s why Brian and I decided to do it. I was thinking, who else is going to do it? Autodesk isn’t doing it! This Forma stuff isn’t really leading to the reinvention of Revit.

All these small developers that are showing up, are going to the East face. I give them credit. I’m not dissing them, but if they’re not going to scale the North face… I’m like, OK, this is hard, but we have got to go up the North face of the Eiger, and that’s what we’re going to do.

It’s like Onshape [cloud-based MCAD software] took ten years. Autodesk Fusion took ten years. And this might take us ten years to do it – I don’t think it will. So, what you will see from us – and maybe you might even criticise us for – is while we’re scaling, it’s going to look like little, tiny subsets coming out. But there’s no escaping the route we have to go.



Martyn Day: From talking with other developers, it looks like it will take five years to be feature-comparable. The problem is that products come to market before they are fleshed out; they get evaluated and dismissed because they look like SketchUp, not a Revit replacement, and it’s hard to get the market’s attention again after that.

Amar Hanspal:  Yeah, I think it’s five years. And that’s why, deliberately, the first product that’s going to come out is not going to be the editor. It’s going to look a little bit more Revizto-like because I think that’s what gives us time to go do the big thing. If you’re gonna come for the King, you better not miss. We’ve got to get to that threshold where somebody looks at it and goes, ‘It doesn’t do 100% but it does 50% or 60%’ or I can do these projects on it and that’s where we are – it’s why we’re working [with] these big guys to keep us honest. When they tell us they can really use this, then we open it up to everybody else. Up until then, we’ll do this other thing that is not a concept modeller but will feel useful.


Martyn Day: How many people are in the team now?

Amar Hanspal: We’re getting to 35 plus. I think we’re getting close to 40. It’s mostly engineering people. Up until two weeks ago, it was 32 engineers and myself. Now I have one sales guy and one marketing person, so we’ll have a little bit of go-to-market. But it’s mainly all product people. We are a distributed company, based around Boston, New York or the Bay Area – that’s our core.

We’re constructing the team with three basic capabilities. There’s classic geometry folks – and these are the usual suspects. The place where we have newer talent is on the cloud side, both in trying to do 3D on the browser front end and on the back-end side, where we’re talking about the data structures. None of those people come from CAD companies – none of them. They are all from Twitter, Uber or robotics companies – different universes to traditional CAD.

The third skill set that we’re developing is machine learning. Again, none of those guys are coming from Cloud or 3D companies. These are research-focused, coming from first principles, that kind of focus.



Martyn Day: By trying to rethink BIM while being heavily influenced by what came before, like Revit, is there a danger of being constrained by past concepts? Someone described Revit to me as 70s thinking in 80s programming. Obviously, computer science, processors and the cloud have all moved on since then. The same goes for business models. This weekend, I watched the CEO of Microsoft say SaaS was dead!

Amar Hanspal: We know we’re living in a post-subscription world. A post ‘named user’ world is the way I would describe it. The problem with subscription right now is that it’s all named user – you’ve got to be onboarded – and then this token model at Autodesk means that if you use the product for 30 seconds, you get charged for the whole day.

It’s still very tied to a human being sitting in front of a screen in a chair. That’s what has to change. Now, what does that end up looking like? Of the prevalent models, there are three that are getting a lot of interest. One is the OpenAI ChatGPT model: you get a subscription, which includes a bunch of tokens. You exceed them, you buy more.

The other one, which I don’t think works in AEC, is outcome-based pricing, which works for call centres. You close a call, you generate seven bucks for the software. I don’t see that happening. What’s the equivalent in AEC? Produce a drawing, seven bucks? That just seems wrong. I think we’re going to end up in this somewhat hybrid tokenised / ChatGPT-style model, but you know, we have to figure that out. We have to account for people’s ability to flex up and down – they have work that comes in and goes out. That’s the weakness of the subscription business model: customers are just stuck.
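The contrast Hanspal draws between the two metering models can be sketched in a few lines. Everything below is a hypothetical illustration – the function names, rates and rules are assumptions for the sake of the example, not any vendor’s actual pricing:

```python
# Illustrative contrast of the two metering models described above.
# All names, rates and rules here are hypothetical -- not any vendor's
# actual pricing.

def daily_flex_charge(seconds_used_per_day, day_rate):
    """Flex/token model as described: any use at all in a day,
    even 30 seconds, bills the full day."""
    return sum(day_rate for s in seconds_used_per_day if s > 0)

def token_bucket_charge(tokens_used, included, overage_rate):
    """ChatGPT-style model: a subscription includes a bucket of
    tokens; exceed it and you buy more."""
    return max(0, tokens_used - included) * overage_rate

# 30 seconds of use on three of five days still bills three full days:
print(daily_flex_charge([30, 0, 30, 30, 0], day_rate=10.0))          # 30.0
# 200 tokens over the included 1,000, at 5 cents each:
print(token_bucket_charge(1200, included=1000, overage_rate=0.05))   # 10.0
```

The token-bucket model at least lets a firm flex down to zero in a quiet month, which is exactly the weakness of per-seat subscription that Hanspal identifies.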


Martyn Day: Why didn’t Autodesk redevelop Revit between 2010 and 2015?

Amar Hanspal: What I remember of those days – it’s been a while – is that there was a lot of focus on just trying to finish off Revit Structure and MEP. I think that was the one Revit idea, and then suites and subscriptions. There was so much focus on business models back then. But you’re right. Looking back, that was the time we should have redone Revit. I started to do it with Quantum, but I didn’t last long enough to be able to finish it!


Conclusion

One could argue that Autodesk’s decision not to rewrite Revit, and to minimise its development, was a great move, profit-wise. For the last eight years, Revit sales haven’t slowed down and copies are still flying off the shelves. Revit is a mature product with millions of trained users, and RVT is the lingua franca of the AEC world, as defined in many contracts. There is weight to the argument that software is sticky, and that sticky grip gives Autodesk plenty of time to flesh out and build its Forma cloud strategy.

Autodesk has taken an active interest in the start-ups that have appeared, even letting Snaptrude exhibit at Autodesk University, while it assesses the threat and considers investing in, or buying, useful teams and tech. If there is one thing Autodesk has, it’s deep pockets, and throughout its history it has bought each subsequent replacement BIM technology – from Architectural Desktop (ADT) to Revit. Forma would have been the first in-house development, although that partially came out of the Spacemaker acquisition.

But this isn’t the whole story. With Revit, it’s not just that the software is old, or that the files are big, or that the Autodesk team has given up on delivering major new productivity benefits. From talking with firms, there’s an almost allergic reaction to the business model, coupled with the threat of compliance audits, added to the perceived lack of product development. In 35+ years of doing this, it’s still odd seeing Autodesk customers inviting in BIM start-ups to try to help the competitive products become match-fit in order to provide real productivity benefits – and this has been happening for two years.

With Hanspal now officially throwing his hat in the ring, it feels like something has changed, without anything changing. The BIM 2.0 movement now has more gravitas, adding momentum to the idea that cloud-based collaborative workflows are inevitable. This is not to take anything away from Arcol, Snaptrude and Qonic, which are possibly years ahead of Motif, having already delivered products to market, with much more to come.

From our conversation with Hanspal, we have an indication of what Motif will be developing without any real physical proof of concept. We know it has substantial backing from major VCs and this all adds to the general assessment that Revit and BIM is ripe for the taking.

At this moment in the AEC space, attempting a full-frontal assault on the Revit installed base is like climbing the North Face of the Eiger – you had better take a mighty big run up and have plenty of reserves. And, for a long time, it’s going to look like you are going nowhere. Here, Motif is playing its cards close to its chest, unlike the other start-ups, which have been sharing in open development from very early on, dropping new capabilities weekly. While it is easy to assess the velocity with which Snaptrude, Arcol and Qonic deliver, I think it’s going to be hard to measure Motif’s modeller technology until it’s considerably further along in development. It’s a different approach. That doesn’t mean it’s wrong, and with regular workshops and collaboration with the signature architects, there should be some comfort for investors that progress is being made. But, as Hanspal explained, it’s going to be a slow drip of capability.

While Autodesk may have been inquisitive about the new BIM start-ups, I suspect the ex-Autodesk talent in Motif, carrying out a similar Quantum plan, would be seen as a competitor that might do some damage if given space, time and resources. Motif is certainly well funded but with a US-based dev team, it will have a high cash burn rate.

By the same measure, Snaptrude is way ahead, with a larger, purely Indian development team, substantially lower costs and a lower capital burn rate. Arcol has backing from Tooey Courtemanche (aka Mr. Procore), and Qonic, which has been totally self-funded, is doing fast things with big datasets that just look like magic. BIM 2.0 already has quality and depth. The challenge is to offer enough benefit, at the right price, to make customers want to switch – that is the bar for a minimum viable product.

It’s only February and we already know that this will be the year that BIM 2.0 gets real. All the key players and interested parties will be at our NXT BLD and NXT DEV conferences in London on 11-12 June 2025 – that’s Arcol, Autodesk, Bentley Systems, Dassault Systèmes, Graphisoft, Snaptrude, Qonic and others. As these products are being developed, we need as many AEC firms as possible on board to help guide their direction. We need to ensure the next generation of tools are what is needed, not what software programmers think we need, or limited to concepts which constrained workflows in the past. Welcome, Motif, to the melee for the hearts and minds of next generation users!

The post Motif to take on Revit: exclusive interview appeared first on AEC Magazine.

The NXT 2025 experience
https://aecmag.com/nxt-bld/the-nxt-2025-experience/
Mon, 10 Feb 2025 13:38:16 +0000

The post The NXT 2025 experience appeared first on AEC Magazine.

AEC firms constantly fine-tune their workflows and software estates, seeking productivity improvements. On 11 – 12 June, our annual NXT BLD and NXT DEV conferences will bring together leading AEC firms and software developers to help drive next generation workflows and tools

Planning is already underway for AEC Magazine’s annual two-day, dual-focus conference, NXT BLD (Next Build) and NXT DEV (Next Development), in conjunction with Lenovo workstations. The event will be held on 11 and 12 June 2025 at the prestigious Queen Elizabeth II Conference Centre in London.

Year on year, the NXT experience has grown in reputation, and we now attract design IT directors from multiple continents, together with a plethora of innovative start-ups looking to push the industry forward to next generation workflows and BIM 2.0.

NXT BLD brings innovative industry ideas, in-house development, new workflows and bleeding-edge technology to two conference stages, plus an exciting exhibition. Presentations range from design IT directors sharing insights into their processes to the latest in workstation, AR and VR technology.


Find this article plus many more in the Jan / Feb 2025 Edition of AEC Magazine
👉 Subscribe FREE here 👈

NXT DEV addresses the fact that the AEC technologies we use are at a crossroads. The industry is reliant on old software that doesn’t utilise modern processor architectures, while the benefits of combining cloud, database and granular data await with the next generation of tools. AEC professionals can’t leave it to software developers and computer scientists to deliver change and need to help shape what comes next. NXT DEV is a forum for discussion, a great way to meet the start-ups, venture capitalists (VCs) and fellow design IT directors who are eager to find more productivity and smarter tools.

AEC Magazine is inviting you to come, get inspired and join the discussion.

For more info visit www.nxtbld.com and www.nxtdev.build.
Early bird tickets with 20% discount are available until 15 April 2025.

Topics

We are early in the planning stages for the events but you can be sure that we will be talking about BIM 2.0, Autodrawings, AI, Generative Design, AR and VR, GIS and BIM, Open Source, Rapid Reality Capture, Expert Automation Systems, Digital Fabrication, the future of data and API access.

Talks

There will be inspirational presentations from Heatherwicks, Alain Waha (Buro Happold), Patrick Cozzi (Cesium, now Bentley Systems), Lenovo, Perkins and Will, Augmenta, Finch3D, Ismail Seleit (LoRA and ControlNet AI rendering), Antonio Gonzalez Viegas (ThatOpenCompany), Qonic, Snaptrude, Arcol, Gräbert (Autodrawings), Autodesk, Foster + Partners, and Jonathan Asher (Dassault Systèmes) – to name but a few.

More speakers will be announced in the coming weeks, as we shape the two-day NXT 2025 program. The editorial team are looking forward to seeing you there!

The two days of NXT offer an intense dive into the future of the industry. Simultaneous stages offer a breadth of topics and areas of interest, plus there’s plenty of exciting new technologies to see on the show floor. You would certainly benefit from bringing a team to ensure you don’t miss anything important.


NXT BLD 2025
Wednesday 11 June 2025

NXT DEV 2025
Thursday 12 June 2025

Queen Elizabeth II Centre
Westminster, London, UK


NXTAEC – inspirational presentations on demand

Presentations from previous NXT events are available to view free on our dedicated website – NXTAEC.com
Here are some highlights

The future AEC software specification
Aaron Perry
AHMM

Transforming the future of home construction
Bruce Bell & Oliver Thomas
Facit Homes

10 things you should know about developing AEC software products
Amar Hanspal
Motif

Synthesising design and execution
John Cerone
SHoP Architects


Hypar 2.0 – putting the spotlight on space planning
https://aecmag.com/bim/hypar-2-0/
Wed, 12 Feb 2025 07:59:25 +0000

The post Hypar 2.0 – putting the spotlight on space planning appeared first on AEC Magazine.

Towards the end of 2024, software developer Hypar released a whole new take on its cloud-based design tool, focused on space planning and with a cool new web interface. Martyn Day spoke with Hypar co-founder Ian Keough to get the inside track on this apparent pivot

Founded in 2018 by Anthony Hauck and Ian Keough, Hypar has certainly been on a journey in terms of its public-facing aims and capabilities.

Both co-founders are well-established figures in the software field. Hauck previously led Revit’s product development and pioneered Autodesk’s generative design initiatives. Keough, meanwhile, is widely recognised as the creator of Dynamo, a visual programming platform for Revit.

Initially, their creation Hypar looked very much like a single, large sandpit for generative designers familiar with scripting, enabling them to create system-level design applications. It also served non-programmers looking to rapidly generate layouts, duct routing and design variations, and get feedback on key metrics, which could then be exported to Revit.



Back in 2023, we were blown away by Hypar’s integration of ChatGPT at the front end. This aimed to give users the ability to rapidly generate conceptual buildings and then progress to fabrication-level models. This capability was subsequently demonstrated in tandem with DPR Construction.

One year later and the company’s front end has changed yet again. With a whole new interface and a range of capabilities specifically focused on space planning and layout, it feels as if Hypar has made a big pivot. What was once the realm of scripters now looks very much like a cloud planning tool that could be used by anyone.

AEC Magazine’s Martyn Day caught up with the always insightful Ian Keough to discuss Hypar’s development and better understand what seems like a change in direction at the company, as well as to get his more general views on AEC development trends.


Martyn Day: Developers such as Arcol, Snaptrude and Qonic are all aiming firmly at Revit, albeit coming at the market from different directions and picking their own entry points in the workflow to add value, while supporting RVT. Since Revit is so broad, it seems clear that it will take years before any of these newer products are feature-comparable with Revit, and all these companies have different takes on how to get there. With that in mind, how do you define a next-generation design tool and what is Hypar’s strategy in this regard?

Ian Keough: At Hypar, we’ve been thinking about this problem for five or six years from a fundamentally different place. Our very first pitch deck for Hypar showed images from work done in the 1960s at MIT, when they were starting to imagine what computers would be used for in design. They weren’t imagining that computers would be used for drafting, of course. Ivan Sutherland had already done that years before and we have all seen those images.

I think there are a lot of people who have very uninteresting ideas around AI in architecture, and those involve things like using AI to generate renderings and stuff like that. It’s nifty to look at, but it’s so low value in terms of the larger story of what all this computing power could do for us – Ian Keough

What they were imagining is that computers would be used to design buildings, and they were making punch card programmes to lay out hospitals and stuff like that. To me, that’s a very pro-future kind of vision. It imagined that computing capacity would grow to a point where the computer would become a partner in the process of design, as opposed to a slightly better version of the drafting board.

However, when it eventually happened, AutoCAD was released in the 1980s and instead we took the other fork of history. The result of taking that other fork has been interesting. If you look at this from a historic perspective, computers did what they did and they got massively more powerful over the years. But the small layer on top of that was all of our CAD software, which used very little of that available computing power. In a real sense, it used the local CPU, but not the computing power of all the data centres around the world which have come online. We were not leveraging that compute power to help us design more efficiently, more quickly, more correctly. We were just complaining that we couldn’t visualise giant models, and that’s still a thing that people talk about.


Hypar 2.0

That’s still a big problem for people’s workloads. I don’t want to dismiss it. If you’re building an airport, you have got to load it, federate all of these models and be able to visualise it. I get that problem. But the larger problem is that, in order to get to that giant model you’re complaining about, there are many, many years of labour, of people building sticks-and-bricks models. How many airports have we designed in the history of human civilisation?

So, thinking about the fork we face – and I think we’re experiencing a ‘come to Jesus’ moment here – people are now seeing AI. As a result, they’re equal parts hopeful that it will suddenly, at a snap of the fingers, remove all the toil that they’re experiencing in building these bigger and more complicated models, and equal parts afraid that it will embody all the expertise that is in their heads and leave them out of a job!


Martyn Day: I can envisage a time where AI can design a building in detail, but I can’t see it happening in our lifetime. What are your thoughts?

Ian Keough: I don’t think that’s the goal. I don’t think that’s the goal of anybody out there – even the people who I think have the most interesting and compelling ideas around AI and architecture. But I do think there are a lot of people who have very uninteresting ideas around AI in architecture, and those involve things like using AI to generate renderings and stuff like that. It’s nifty to look at, but it’s so low value in terms of the larger story of what all this computing power could do for us.

At AEC Magazine, you’ve already written about experiments that we’ve conducted in terms of designing through our chat prompt / text-to-BIM capability. We took the summation of the five years of work that we have done on Hypar as a platform and the compute infrastructure, and, when LLMs came along, Andrew Heumann on our team suggested it would be cool to see if we could map human natural language down into input parameters for our generative system.

We did that. We put it out there. And everybody got really, really excited. But we quickly realised the limitations of that system. It’s very, very hard to design anything real through a chat prompt. It’s one thing to generate an image of a building. It’s another thing to generate a building. You’ll see in the history of Hypar that the creation of this new version of the product directly follows the ‘text-to-BIM thing’, because what the ‘text-to-BIM thing’ showed us is that we have this very powerful platform.


Hypar 2.0

The new Hypar 2.0, which was released in September 2024 – and more specifically, its layout suggestions capability – was our first nod towards AI-infused capabilities. The platform is all about seeing if we can make a design tool that’s a design tool first and foremost.

The problem with AI-generated rendering is you get what you get, and you can’t really change it, except by changing the prompt – you’re totally out of control. What designers want is control. They want to be able to move quickly, to control the design and to understand the input parameters of the design. Hypar 2.0 is really about that. It’s about how you create a design tool and then lift all of this compute and seamlessly integrate it with the design experience, so that computation is not some other experience on top of your model.


Martyn Day: Historically, we have been used to seeing Hypar perform rapid conceptual modelling through scripting, generate building systems and be capable of multiple levels of detail to quickly model and then swap out to scale fidelity. The whole Hypar experience, looking at the website now, seems to be about space planning. Would you agree?

Ian Keough: That’s the head-scratcher for a lot of people when it comes to this new version. People who have seen me present on the work we did with DPR and other firms to make these incredibly detailed and sophisticated building systems are saying, “Wait, you’re a space planning software now?”

That may seem like a little bit of a left turn. But the mission continues to enable anyone to build really richly detailed models from simple primitives without extra effort. We do this in the same way that we could take a low-resolution Revit wall and turn it into a fully clad DPR drywall layout, including all the fabrication instructions and the robotic layout instructions that go on the floor, and everything else. That capability still lives in Hypar, underneath the new interface.

What we are doing is getting back to software that solves real problems again. This is a very gross simplification of what’s going on, but what problem does Revit actually solve? The answer is drawings – documentation. That’s the problem that Revit solves today and has solved since the beginning. What it does not solve is the problem of how to turn an Excel spreadsheet that represents a financial model into the plan for a hospital. It does not solve that at all. That is solved by human labour and human intellect. And right now, it’s solved in a very haphazard way, because the software doesn’t help you. It doesn’t offer you any affordances to do that. Everybody is largely either doing this as cockamamie-crazy, nested-family Lego blocks and jelly cubes in Revit, or as just a bunch of coloured polygons in Bluebeam. That’s no way to utilise compute.

At the end of the day, it is still the architect’s experience and intellect that creates a building. What the design tool should do is remove all of the toil.

To give you an example of this, now that we’ve reached a point where users can use our software in a certain production context, to create these larger space plans, they’re starting to ask for the next layer of capabilities, such as clearances as a semantic concept. This is the idea that, if I’m sitting at this desk, there should be a clearance in front of this desk, so that people have enough room to walk by. Sometimes, clearances are driven by code – so why has no piece of architectural design software in the last 20 years had a semantic notion of a clearance that you could either set specifically or derive from code? You might be able to write a checker in Solibri in the post-design phase, but what about the designer at the point of creating the model?
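Keough’s clearance idea lends itself to a simple data model. The sketch below is purely illustrative – the names, dimensions and rules are assumptions for the example, not Hypar’s actual API or any real code authority’s values:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a 'semantic clearance' -- not Hypar's actual API,
# and the code minimums below are illustrative numbers, not real code values.

CODE_MINIMUMS = {"desk_front": 0.9, "door_swing": 1.2}  # metres

@dataclass
class Clearance:
    element_type: str
    depth: float    # required clear depth in metres
    source: str     # "code" or "user"

def clearance_for(element_type: str, user_depth: Optional[float] = None) -> Clearance:
    """Set a clearance specifically, or derive it from code; a user value
    can only tighten the requirement, never relax it below the code minimum."""
    code_min = CODE_MINIMUMS.get(element_type, 0.0)
    if user_depth is not None and user_depth >= code_min:
        return Clearance(element_type, user_depth, "user")
    return Clearance(element_type, code_min, "code")

def violates(available_depth: float, clearance: Clearance) -> bool:
    """Checked at the point of design, not in a post-design model checker."""
    return available_depth < clearance.depth

desk = clearance_for("desk_front")   # falls back to the code minimum, 0.9 m
print(violates(0.6, desk))           # True: not enough room to walk by
print(violates(1.0, desk))           # False
```

The point of making the clearance a first-class object, rather than a rule in a downstream checker, is that the design tool can flag the violation the moment the desk is placed.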

Clearances are just one example. There are plenty of others, but the other impetus for a lot of what we’re doing right now is the fact that organisations like HOK have a vast storehouse of encoded design knowledge, in the form of all of the work that they’ve done in the past. Often, they cannot reuse this knowledge, except by way of hiring architects and transmitting this expertise from one person to the next, in a form that we have used for thousands of years – by storytelling, right?

What firms want is a way to capture that knowledge in the form of spaces, specific spaces, and all the stuff that’s in a space and the reasons for that stuff being there. And then they just want to transfer that knowledge from one project to another, whether it’s a healthcare project or any other kind of project that they’ve carried out before.

At the beginning of defining the next version of Hypar, when we started talking with architects about this problem, I was amazed by the cleverness of the architects. They’re actually finding solutions to do this with the software they have now. They build these giant, elaborate Revit models with hundreds of standard room types in them, and then they have people open those Revit models and copy and paste out stuff from the library.

I had one guy who referred to his model as ‘the Dewey Decimal System’. He had grids in Revit numbered in the Dewey Decimal System manner, such that he could insert new standards into this crazy grid system. And he referred to them by their grid locations.

In other words, architects have overcome the limitations that we’ve put in place in terms of software. But why isn’t it possible in Revit to select a room and save it as a standard, so the next time I place a room tag that says exam room, such as a paediatric exam room, it just infills it with what I’ve done for the last ten projects?
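That ‘save a room as a standard’ workflow can be sketched as a small library keyed by room tag. This is a hypothetical illustration of the concept only – not a real Revit or Hypar API, and the room data is invented:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Hypothetical sketch of 'save room as standard' -- illustrative only,
# not a real Revit or Hypar API.

@dataclass
class RoomStandard:
    tag: str                          # e.g. "paediatric exam room"
    area_m2: float
    contents: List[str] = field(default_factory=list)

class StandardsLibrary:
    """A firm's encoded design knowledge, keyed by room tag."""

    def __init__(self) -> None:
        self._standards: Dict[str, RoomStandard] = {}

    def save(self, room: RoomStandard) -> None:
        """Capture a finished room from one project for reuse on the next."""
        self._standards[room.tag] = room

    def infill(self, tag: str) -> Optional[RoomStandard]:
        """Placing a room tag pulls in everything done on past projects."""
        return self._standards.get(tag)

library = StandardsLibrary()
library.save(RoomStandard("paediatric exam room", 12.0,
                          ["exam table", "handwash sink", "parent seating"]))

room = library.infill("paediatric exam room")
print(room.contents)   # the stored fit-out, reused instead of rebuilt by hand
```

A library like this is essentially what the ‘Dewey Decimal’ Revit model was approximating by hand: a lookup from room type to a previously designed fit-out.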

To get back to your question about what the next generation looks like, I guess the simplest way to explain how we’re approaching it is that we’re picking a problem to solve that’s at the heart of designing buildings. It’s at the moment of creation, literally, of a building. We want to solve that problem and use software as a way to accelerate the designer, rather than a way to demonstrate that we can visualise larger models. That will come in time, but really, we want to use this vast computational resource that we have to undergird this sort of design, and make a great, snappy, fun design tool.


Martyn Day: Old BIM systems are oneway streets. They are about building a detailed model to produce drawings. But you have gone on record talking about tasks that need different levels of abstraction and multiple levels of scale, depending on the task. Can you explain how this functions in Hypar?

Ian Keough: You’ll notice in the new version of Hypar that there’s something called ‘bubble mode’. It’s a diagram mode for drawing spaces, but you’re drawing them in this kind of diagrammatic, ‘bubbly’ way.

That was an insight that we gleaned from spending literally hundreds of hours watching architects at the very early stage of designing buildings. They would use that way of communicating when they were doing departmental layout or whatever. They were hacking tools like Miro and other things, where they were having these conversations to do this stuff. But it was never at scale.

We were already thinking of this idea of being able to move them from low-level detail to a high level of detail without extra effort, by means of leveraging compute. Now, in Hypar – and I’ll admit the bits are not totally connected yet in this idea – you’ll notice that people will start planning in this bubble mode, and then they’ll have conversations around bubble mode, at that level of detail.

Meanwhile, the software is already working behind the scenes, creating a network of rooms for them. Then they’ll perform the next step and use this clever stuff to intelligently lay out those rooms and the contents in the rooms. The next level of detail past that will be connectors to other building systems – so let’s generate the building system. There’s this continuous thread that follows levels of detail from diagram to space, to spaces with equipment and furniture, to building systems.
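The continuous thread Keough describes – diagram to rooms to rooms-with-contents – can be pictured as a pipeline of progressively enriched data. The sketch below is a hypothetical illustration of that idea; the function names and data shapes are assumptions, not Hypar’s actual pipeline:

```python
from typing import Dict, List

# Hypothetical sketch of the level-of-detail thread described above:
# bubble diagram -> network of rooms -> rooms with contents.
# Function names and data shapes are illustrative, not Hypar's pipeline.

def bubbles_to_rooms(bubbles: List[dict]) -> List[dict]:
    """While designers talk at diagram level, derive a room network
    behind the scenes."""
    return [{"name": b["name"], "area_m2": b["area_m2"], "contents": []}
            for b in bubbles]

def lay_out_contents(rooms: List[dict], standards: Dict[str, List[str]]) -> List[dict]:
    """Next level of detail: infill each room's contents from saved standards."""
    for room in rooms:
        room["contents"] = standards.get(room["name"], [])
    return rooms

bubbles = [{"name": "exam", "area_m2": 12.0}, {"name": "waiting", "area_m2": 30.0}]
standards = {"exam": ["exam table", "sink"]}
rooms = lay_out_contents(bubbles_to_rooms(bubbles), standards)
print(rooms[0]["contents"])   # ['exam table', 'sink']
```

Each stage consumes the previous stage’s output unchanged and only adds detail, which is what lets the designer keep conversing at bubble level while richer representations accumulate behind the scenes.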


Martyn Day: We have seen Hypar focus on conceptual work, space planning, fabrication-level modelling. Is the goal here to try and tackle every design phase?

Ian Keough: We’re marching there. The great thing about this is that there’s already value in what we offer. This is something that I think start-ups need to think about. You’re solving a problem, and if you want to make any money at all, that problem needs to have value at every point along the trajectory. That’s unless you raise a ton of capital, and say, ‘Ten years from now, we’ll have something that does everything.’

The reality is at day five, after you’ve built some software, and you put it in customers’ hands, that thing has to have value for them. The good news is that just in the way that we design buildings now, from low-level detail to high-level detail, there’s value in all those places along the design journey.

Why isn’t it possible in Revit to select a room and save it as a standard, so the next time I place a room tag that says exam room, such as a paediatric exam room, it just infills it with what I’ve done for the last ten projects?

The other thing that I think is going to happen, to achieve what we’ve been envisioning since the beginning of Hypar, is fully generated buildings. I do not believe in the idea that there’s this zero-sum game that we’re all playing, where somebody’s going to build the one thing that ‘owns the universe’.

This is a popular construct in people’s minds, because they love this notion of somebody coming along and slaying the dragon of Revit in some way, and replacing it with another dragon.

What’s going to happen is, in the same way that we see with massively connected systems of apps on your phone and on the internet, these things are going to talk to each other. It’s quite possible that the API of the future for generating electrical systems is going to be owned by a developer like Augmenta (www.augmenta.ai). And since we’re allowing people to lay out space in a very agile way, Hypar plugs into that and asks the user, ‘Would you like this app to asynchronously generate a system for you?’

Now, it might be that, over Hypar’s lifetime, there will be real value in us building those things as well, because most of the work that we’re doing right now is really about the tactility of the experience. So it might be that, to achieve the experience that we want, we have to be the ones who own the generation of those systems as well, but I can’t say yet whether or not that’s the case.

Everything we’re doing right now in terms of the new application is around just building that design experience. What we do in the next six months to one year, vis-à-vis how we connect back into functions that are on the platform and start to expose that capability, I can’t speculate right now.

What we need to do is land this thing in the market and then get enough people interested in using it, so that it starts to take hold. Some of the challenge in doing that is what you alluded to earlier, which is that people are trying to pigeon-hole you. They’ll ask, ‘Are you trying to kill Revit?’, or, ‘Are you trying to kill this part of the process that I currently do in Revit?’ That’s a challenge for all start-ups.

The decision that we made to rebuild the UI is about the long-term vision we have for Hypar. That vision has always been to put the world’s building expertise in the hands of everyone, everywhere. And if you think about that long-term vision, everybody will have access to the world’s building expertise. But how do they access it? If it’s through an interface that only the Dynamo and Grasshopper script kids can use or want to use, then we will not have fulfilled our vision.

The post Hypar 2.0 – putting the spotlight on space planning appeared first on AEC Magazine.
