Featured Archives - AEC Magazine: Technology for the product lifecycle
https://aecmag.com/featured/

Agentic AI platform to help automate engineering (Mon, 17 Nov 2025)
https://aecmag.com/structural-engineering/agentic-ai-platform-to-help-automate-engineering/

The post Agentic AI platform to help automate engineering appeared first on AEC Magazine.

A.Engineer enables civil and structural engineers to build their own calculation tools

A.Engineer, a new agentic AI platform designed to help civil and structural engineers automate calculations and reporting, has launched in Europe.

Developed by Tyréns NEXT, the innovation arm of engineering consultancy Tyréns Group, A.Engineer combines engineering data, calculation tools, and report generation into a single “intelligent workspace”.

The platform is designed to simplify workflows, allowing engineers to spend less time on manual tasks and more time on creative design, instructions, and quality assurance.


Discover what’s new in technology for architecture, engineering and construction — read the latest edition of AEC Magazine
👉 Subscribe FREE here

According to the developers, A.Engineer is built on the principle that engineering is empowered by AI yet always verified by a professional user.

The platform provides a full audit trail for every step the AI takes. Engineers can see all data inputs, outputs, underlying code, and the reasoning behind each decision. There are no outputs without verification.

A.Engineer includes an Agentic Calculation Tool Builder, where engineers can upload their own data, Excel tools, or connect to legacy systems, and the system then generates the necessary calculation tools — including both the code and the user interface — “in minutes”.

Meanwhile, an Agentic Report Builder connects data and verified calculations to “automatically build” professional reports with tables, graphs, summaries, and visualisations.

The platform can integrate (via MCP connection) with established engineering tools such as Revit, Sparkel, ETABS, SAP2000, and Strusoft.
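Under the hood, MCP (Model Context Protocol) connections like these exchange JSON-RPC 2.0 messages. As a rough, hypothetical illustration (the tool name and arguments below are invented, not A.Engineer's actual interface), a client request asking an analysis server to run a model might be serialised like this:

```python
import json

def mcp_tool_call(request_id, tool, arguments):
    """Serialise a JSON-RPC 2.0 'tools/call' request, the message shape
    the Model Context Protocol uses for tool invocation. The tool name
    and arguments used below are invented for illustration."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical request asking a structural-analysis server to run a model
print(mcp_tool_call(1, "run_analysis", {"model": "frame_01", "load_case": "wind"}))
```

In practice an MCP SDK would handle the transport and message framing; the point is simply that each integration is a named tool the agent can call with structured arguments.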

According to Richard Parker, senior structural engineer at AKT II and product lead for A.Engineer, early users are already seeing major efficiency gains. “Our research shows that 40–80% of engineering work is still manual,” he said. “With A.Engineer, we can automate over half of those manual tasks. A calculation report that might have taken half a day can now be done in under an hour. Engineers working side by side with A.Engineer deliver world‑class results in record time.”

Europe, the home market of the Tyréns Group, will be the first region to gain access to A.Engineer. A global rollout of A.Engineer is scheduled for Q1/2026.


NHS Foundation Trust creates smart estate with digital twin (Mon, 17 Nov 2025)
https://aecmag.com/digital-twin/nhs-foundation-trust-creates-smart-estate-with-digital-twin/

3D model of six hospitals supports digital transformation at Manchester University NHS Foundation Trust, one of the UK’s largest NHS Trusts

Manchester University NHS Foundation Trust (MFT) has gone live with a digital twin of six hospitals as part of its strategy to create a smart estate. Designed to provide a single source of estates data to support new workflows and better decision making, the 3D model is a major milestone in MFT’s digital transformation to improve operational efficiency and patient safety.

Replacing disparate systems and paper-based processes, the digital twin visualises floors, rooms and spaces with associated data and is already being used to understand space optimisation and support the management of RAAC and asbestos. Future plans include adding indoor navigation, patient contact tracing and real-time asset tracking.

Created using Esri UK’s GIS platform, which includes indoor mapping, spatial analysis, navigation and asset tracking, the digital twin went live in October 2025. BIS Consult, MFT’s strategic data partner, led the development of the underlying data strategy and the integration of the multiple information sources required.



Spanning 274,000 square metres of internal floor space, the 3D model includes Manchester Royal Infirmary, Royal Manchester Children’s Hospital, Manchester Royal Eye Hospital and Saint Mary’s Hospital on the Oxford Road campus, plus Altrincham Hospital and Withington Community Hospital.

David Bailey, Head of Digital Estates at MFT, who led the project, said: “Integrating all of our existing data into one 3D model has created the foundation for building a digital twin and is driving new opportunities for efficiency gains. Moving from analogue to digital achieves a better understanding of our buildings and assets which helps improve their management and maintenance, as well as improving patient safety.”

The digital twin is being used in a trial to better understand the use of space, by quickly showing where room usage is not being optimised. Full roll-out will provide all staff with a real-time view of occupancy levels and space requests, while clinicians will be able to examine existing facilities more easily and plan new services.
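As a simple illustration of the kind of space-optimisation metric involved (the room names, hours and 50% threshold below are invented for the example, not MFT data), under-used rooms can be flagged by comparing booked hours against available hours:

```python
def underused_rooms(usage_hours, available_hours, threshold=0.5):
    """Flag rooms whose booked share of available hours falls below a
    threshold. Room names, hours and the 50% cut-off are illustrative."""
    flagged = {}
    for room, used in usage_hours.items():
        ratio = used / available_hours if available_hours else 0.0
        if ratio < threshold:
            flagged[room] = round(ratio, 2)
    return flagged

weekly = {"Clinic A": 12.0, "Clinic B": 38.0, "Meeting 1": 6.0}
print(underused_rooms(weekly, available_hours=40.0))
# {'Clinic A': 0.3, 'Meeting 1': 0.15}
```

A real digital twin would pull occupancy from sensors or booking systems rather than a hand-typed dictionary, but the decision logic reduces to a comparison like this.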

New applications for RAAC and asbestos management involve performing digital surveys on mobile devices, which feed directly into the 3D model and visualise the different risk levels.

The next phase will map the remaining four hospitals in MFT’s estate and digitise building condition surveys to help tackle the maintenance backlog. This will involve mobile data capture feeding into the digital twin, providing a clearer picture of requirements and helping to prioritise resources. Replacing a manual spreadsheet approach, data and reports will be shared more easily among project teams. Energy usage data will also be added to the digital twin to help analyse and reduce energy costs.

The project overcame a major data integration challenge, combining MFT data from multiple systems, including CAFM (Computer-Aided Facility Management) software and CAD floor plans, while improving overall data quality. The team also established new data governance to ensure information connected to the 3D model remains accurate and up to date.



Duncan Booth, Head of Health & Social Care, Esri UK, said: “Indoor mapping is playing a central role in the modernisation of MFT’s estates and facilities department by giving users situational awareness of the entire site. Optimising the use of existing buildings and making RAAC and asbestos management more efficient are the first of many new benefits. Already used at airports, universities and industrial sites, the technology is helping large organisations realise plans for digital twins and is now experiencing growth in healthcare.”

Plans for the future include using Esri’s GIS platform to create applications for indoor navigation for patients and staff to reduce missed appointments, contact tracing of patients to help stop the spread of pathogens inside the hospital and digital asset tracking, enabling equipment such as beds, scanners or wheelchairs to be located more quickly.

Nicholas Campbell-Voegt, Director at BIS Consult, commented: “This project shows how smart use of data can transform NHS estates. By creating a single source of truth for assets and space, MFT is paving the way for a new standard in how Trusts manage their estates. The approach provides a blueprint that other NHS organisations can follow, helping build smarter, safer and more sustainable healthcare environments.”


Trimble launches Agentic AI Platform (Wed, 12 Nov 2025)
https://aecmag.com/ai/trimble-launches-agentic-ai-platform/

Supports the creation of AI-driven workflows to help users analyse data and automate tasks

Trimble has launched a new “open and extensible” agentic AI platform: a collection of core services, security frameworks, and tools that allows Trimble to build and deploy agentic AI systems.

The company’s vision is to enable partners and customers to develop AI agents and multi-agent workflows across Trimble’s suite of construction solutions. Trimble is currently piloting the platform, Trimble Agent Studio, with select customers.

“As agentic AI use cases multiply, there is a growing need for common infrastructure that allows creators to rapidly and responsibly develop, deploy, monitor, and maintain high-value AI agents at scale,” said Mark Schwartz, senior vice president of AECO software at Trimble.

“We see the platform as the engine that will help Trimble, its partners, and its customers extract more value from both our solutions and their data.”



According to Trimble, its agentic AI platform and other AI capabilities are currently being used to help users learn and navigate Trimble software more efficiently. AI is also being used to eliminate many of the manual steps traditionally required to model from scratch, allowing users to generate 3D objects simply by describing what they want to create.

Trimble’s AI can also convert voice memos into field documents, capture status updates from crews, and reduce the time teams spend in front of computer screens back at the office. Beyond design, Trimble’s AI accelerates access to data and streamlines asset maintenance and permitting workflows.

“We are building an industry ecosystem aimed at breaking down data silos and empowering our customers to make smarter decisions, collaborate effectively and work faster,” said Rob Painter, Trimble CEO.

“By embedding AI into our solutions and enabling improved data flow, we’re taking the next steps towards unlocking the power of connected data.”

Trimble is making several AI tools available through Trimble Labs (Labs), a pre-release, early engagement program that enables customers to test new features and provide user feedback.

Trimble ProjectSight 360 Capture enhances site visibility (Wed, 12 Nov 2025)
https://aecmag.com/reality-capture-modelling/trimble-projectsight-360-capture-enhances-site-visibility/

Reality capture technology applies AI to images captured by 360-degree cameras to enhance project management workflows

Trimble has announced ProjectSight 360 Capture, a new tool designed to give contractors better visibility into site progress by applying AI to 360-degree images captured while walking the construction site.

ProjectSight 360 Capture, which is built into Trimble’s ProjectSight project management software, enables construction teams to conduct virtual jobsite walkthroughs, track progress and resolve issues collaboratively.

A cloud-based AI algorithm automatically processes the 360-degree images, identifies key locations, and links them to project drawings, enabling comparison of as-built conditions over time or against the design. AI-powered privacy filtering also blurs faces on the jobsite, protecting individual privacy.
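Privacy filtering of this kind typically detects faces with a trained model and then blurs the detected regions. The sketch below assumes the detection step has already produced bounding boxes and shows only the blurring: a plain box blur applied inside each box of a grayscale image, using nothing beyond the standard library.

```python
def blur_regions(image, boxes, k=2):
    """Box-blur the pixels inside each (x0, y0, x1, y1) bounding box of a
    grayscale image stored as nested lists of ints. Face *detection* itself
    would be done by a trained model; the boxes here are assumed inputs."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # leave the source image untouched
    for (x0, y0, x1, y1) in boxes:
        for y in range(max(0, y0), min(y1, h)):
            for x in range(max(0, x0), min(x1, w)):
                # Average the (2k+1) x (2k+1) neighbourhood, clipped at edges
                ys = range(max(0, y - k), min(h, y + k + 1))
                xs = range(max(0, x - k), min(w, x + k + 1))
                vals = [image[yy][xx] for yy in ys for xx in xs]
                out[y][x] = sum(vals) // len(vals)
    return out

# A 2x2 image blurred across its whole extent averages every pixel to 127
print(blur_regions([[0, 255], [255, 0]], [(0, 0, 2, 2)], k=1))
# [[127, 127], [127, 127]]
```

Production systems would use an image library and a stronger blur or pixelation, but the principle (irreversibly averaging away detail inside detected regions) is the same.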



Project managers can connect these images directly to workflows — such as change orders or RFIs — by embedding captures within requests. Through Trimble Connect’s common data environment, all imagery is shared and accessed in one centralised location.

“Traditional methods of capturing and communicating project status are typically time consuming, complex and incomplete, making it difficult for contractors to collaborate, quickly correct problems and keep projects on budget and on schedule,” said Lawrence Smith, vice president and general manager of construction management solutions at Trimble. “Directly pairing critical project management tasks with 360-degree image captures gives users a clear picture of actual conditions and status on job sites, helping turn data into effective decision making.

“ProjectSight 360 Capture makes real-world data easy to capture and use, giving project managers critical insights through intuitive visualisation and navigation,” added Smith. “By streamlining documentation, tracking changes over time, and simplifying issue management across the Trimble ecosystem, it gives project managers a reliable, real-time view of job site conditions regardless of their location.”



Part3 to give architects control over construction drawings (Fri, 07 Nov 2025)
https://aecmag.com/data-management/part3-to-give-architects-control-over-construction-drawings/

ProjectFiles is designed to provide a single source of truth for drawings, feeding into submittals, RFIs, change documents, instructions, and field reports.

ProjectFiles from Part3 is a new construction drawing and documentation management system for architects, designed to help ensure the right drawings are always accessible on site, in real time, to everyone who needs them.

According to the company, unlike other tools that were built for contractors and retrofitted for everyone else, ProjectFiles was designed specifically with architects in mind.

ProjectFiles is a key element of Part3’s broader construction administration platform, and also connects drawings to the day-to-day management of submittals, RFIs, change documents, instructions, and field reports.



Automatic version tracking helps ensure the entire team is working from the most up-to-date drawings and documents. According to Part3, it’s designed to overcome problems such as walking onto site and finding contractors working from outdated drawings, or wasting time hunting through folders trying to find the current structural set before an RFI deadline.

The software also features AI-assisted drawing detection, where files are automatically tagged with the correct drawing numbers, titles, and disciplines.
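Automatic tagging of this sort can be approximated with simple pattern matching before any AI is involved. The sketch below is illustrative only: the numbering convention (discipline letter, dash, three-digit sheet number) is an assumption for the example, not Part3's actual scheme, and Part3's detection is AI-assisted rather than a fixed pattern.

```python
import re

# Illustrative numbering convention, e.g. "A-101" or "S-201";
# real office standards vary widely.
DRAWING_RE = re.compile(r"\b([ASME])-(\d{3})(?!\d)")

DISCIPLINES = {"A": "Architectural", "S": "Structural",
               "M": "Mechanical", "E": "Electrical"}

def tag_filename(name):
    """Extract a drawing number and discipline from a file name, if present."""
    m = DRAWING_RE.search(name)
    if not m:
        return None
    return {"number": m.group(0), "discipline": DISCIPLINES[m.group(1)]}

print(tag_filename("A-101_Level1_FloorPlan_rev03.pdf"))
# {'number': 'A-101', 'discipline': 'Architectural'}
```

Where file names don't follow a convention at all, only something like Part3's AI-assisted detection (reading the title block itself) can recover the metadata.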


Meanwhile, learn more about Part3’s AI capabilities, along with tonnes of other AI-powered tools, in AEC Magazine’s AI Spotlight Directory

Autodesk shows its AI hand (Thu, 02 Oct 2025)
https://aecmag.com/ai/autodesk-shows-its-ai-hand/

Autodesk’s AI story has matured. While past Autodesk University events focused on promises and prototypes, this year Autodesk showcased live tools, giving customers a clear view of how AI could soon reshape workflows across design and engineering, writes Greg Corke

At AU 2025, Autodesk took a significant step forward in its AI journey, extending far beyond the slide-deck ambitions of previous years.

During CEO Andrew Anagnost’s keynote, the company unveiled brand-new AI tools in live demonstrations using pre-beta software. It was a calculated risk — particularly in light of recent high-profile hiccups from Meta — but the reasoning was clear: Autodesk wanted to show it has tangible, functional AI technology and it will be available for customers to try soon.

The headline development is ‘neural CAD’, a completely new category of 3D generative AI foundation models that Autodesk says could automate up to 80–90% of routine design tasks, allowing professionals to focus on creative decisions rather than repetitive work. The naming is very deliberate, as Autodesk tries to differentiate itself from the raft of generic AEC-focused AI tools in development.


Find this article plus many more in the September / October 2025 Edition of AEC Magazine

neural CAD AI models will be deeply integrated into BIM workflows through Autodesk Forma, and product design workflows through Autodesk Fusion. They will ‘completely reimagine the traditional software engines that create CAD geometry.’

Autodesk is also making big AI strides in other areas. Autodesk Assistant is evolving beyond its chatbot product support origins into a fully agentic AI assistant that can automate tasks and deliver insights based on natural-language prompts.

Big changes are also afoot in Autodesk’s AEC portfolio – developments that will have a significant impact on the future of Revit.

The big news was the release of Forma Building Design, a brand-new tool for LoD 200 detailed design (learn more in this AEC Magazine article). Autodesk also announced that its existing early-stage planning tool, Autodesk Forma, will be rebranded as Forma Site Design and Revit will gain deeper integration with the Forma industry cloud, becoming Autodesk’s first Connected client.

neural CAD

neural CAD marks a fundamental shift in Autodesk’s core CAD and BIM technology. As Anagnost explained, “The various brains that we’re building will change the way people interact with design systems.”

Unlike general-purpose large language models (LLMs) such as ChatGPT and Claude, or AI image generation models like Stable Diffusion and Nano Banana, neural CAD models are specifically designed for 3D CAD. They are trained on professional design data, enabling them to reason at both a detailed geometry level and at a systems and industrial process level.

neural CAD marks a big leap forward from Project Bernini, which Autodesk demonstrated at AU 2024. Bernini turned a text, sketch or point cloud ‘prompt’ into a simple mesh that was not well suited for further development in CAD. In contrast, neural CAD delivers ‘high-quality’, ‘editable’ 3D CAD geometry directly inside Forma or Fusion, just as ChatGPT generates text and Midjourney generates pixels.


Autodesk CEO Andrew Anagnost joins experts on stage to live-demo upcoming AI software during the AU keynote

Autodesk has so far presented two types of neural CAD models: ‘neural CAD for geometry’, which is being used in Fusion and ‘neural CAD for buildings’, which is being used in Forma.

For Fusion, there are two AI model variants, as Tonya Custis, senior director, AI research, explained, “One of them generates the whole CAD model from a text prompt. It’s really good for more curved surfaces, product use cases. The second one, that’s for more prismatic sort of shapes. We can do text prompts, sketch prompts and also what I call geometric prompts. It’s more of like an auto complete, like you gave it some geometry, you started a thing, and then it will help you continue that design.”

On stage, Mike Haley, senior VP of research, demonstrated how neural CAD for geometry could be used in Fusion to automatically generate multiple iterations of a new product, using the example of a power drill.

“Just enter the prompts or even drawing and let the CAD engines start to produce options for you instantly,” he said. “Because these are first class CAD models, you now have a head start in the creation of any new product.”

It’s important to understand that the AI doesn’t just create dumb 3D geometry – neural CAD also generates the history and sequence of Fusion commands required to create the model. “This means you can make edits as if you modelled it yourself,” he said.
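A minimal way to picture this ‘history plus geometry’ idea: represent the model as an ordered list of operations, so that editing any parameter and replaying the list regenerates the result. The toy feature history below (an extrude followed by a shell, tracked only as a volume) is purely illustrative and vastly simpler than Fusion's actual parametric engine.

```python
from dataclasses import dataclass

@dataclass
class Op:
    """One step in a toy parametric history (names are illustrative)."""
    name: str
    params: dict

def replay(history):
    """Re-evaluate a feature history, here tracked only as a volume:
    a profile extruded to a solid, then shelled. Because the AI emits
    the history rather than just the shape, edits replay cleanly."""
    volume = 0.0
    for op in history:
        if op.name == "extrude":
            volume = op.params["area"] * op.params["height"]
        elif op.name == "shell":
            volume *= 1.0 - op.params["removed_fraction"]
    return volume

history = [Op("extrude", {"area": 20.0, "height": 5.0}),
           Op("shell", {"removed_fraction": 0.5})]
print(replay(history))               # 50.0
history[0].params["height"] = 10.0   # edit one parameter...
print(replay(history))               # ...and replay: 100.0
```

This is why generating the command sequence matters more than generating the final mesh: the sequence is what makes the output behave like a model you built yourself.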

Meanwhile, in the world of BIM, Autodesk is using neural CAD to extend the capabilities of Forma Building Design to generate BIM elements.

The current aim is to enable architects to ‘quickly transition’ between early design concepts and more detailed building layouts and systems with the software ‘autocompleting’ repetitive aspects of the design.

Instead of geometry, ‘neural CAD for buildings’ focuses more on the spatial and physical relationships inherent in buildings as Haley explained. “This foundation model rapidly discovers alignments and common patterns between the different representations and aspects of building systems.



“If I was to change the shape of a building, it can instantly recompute all the internal walls,” he said. “It can instantly recompute all of the columns, the platforms, the cores, the grid lines, everything that makes up the structure of the building. It can help recompute structural drawings.”

At AU, Haley demonstrated ‘Building Layout Explorer’, a new AI-driven feature coming to Forma Building Design. He presented an example of an architect exploring building concepts with a massing model, “As the architect directly manipulates the shape, the neural CAD engine responds to these changes, auto generating floor plan layouts,” he said.

But, as Haley pointed out, for the system to be truly useful the architect needs to have control over what is generated, and therefore be able to lock down certain elements, such as a hallway, or to directly manipulate the shape of the massing model.

“The software can re-compute the locations and sizes of the columns and create an entirely new floor layout, all while honouring the constraints the architect specified,” he said.

This feels like a pivotal moment in Autodesk’s AI journey, as the company moves beyond ambitions and experimentation into production-ready AI that is deeply integrated into its core software

Of course, it’s still very early days for neural CAD and, in Forma, ‘Building Layout Explorer’ is just the beginning.

Haley alluded to expanding to other disciplines within AEC, “Imagine a future where the software generates additional architectural systems like these structural engineering plans or plumbing, HVAC, lighting systems and more.”

In the future, neural CAD in Forma will also be able to handle more complexity, as Custis explains. “People like to go between levels of detail, and generative AI models are great for that because they can translate between each other. It’s a really nice use case, and there will definitely be more levels of detail. We’re currently at LoD 200.”

The training challenge

neural CAD models are trained on the typical patterns of how people design. “They’re learning from 3D design, they’re learning from geometry, they’re learning from shapes that people typically create, components that people typically use, patterns that typically occur in buildings,” said Haley.

In developing these AI models, one of the biggest challenges for Autodesk has been the availability of training data. “We don’t have a whole internet source of data like any text or image models, so we have to sort of amp up the science to make up for that,” explained Custis.

For training, Autodesk uses a combination of synthetic data and customer data. Synthetic data can be generated in an ‘endless number of ways’, said Custis, including a ‘brute force’ approach using generative design or simulation.


Tonya Custis, senior director, AI research, Autodesk

Customer data is typically used later in the training process. “Our models are trained on all data we have permission to train on,” said Amy Bunszel, EVP, AEC.

But customer data is not always perfect, which is why Autodesk also commissions designers to model things for them, generating what chief scientist Daron Green describes as gold standard data. “We want things that are fully constrained, well annotated to a level that a customer wouldn’t [necessarily] do, because they just need to have the task completed sufficiently for them to be able to build it, not for us to be able to train against,” he said.

Of course, it’s still very early days for neural CAD and Autodesk plans to improve and expand the models, “These are foundation models, so the idea is we train one big model and then we can task adapt it to different use cases using reinforcement learning, fine tuning. There’ll be improved versions of these models, but then we can adapt them to more and more different use cases,” said Custis. In the future, customers will be able to customise the neural CAD foundation models, by tuning them to their organisation’s proprietary data and processes. This could be sandboxed, so no data is incorporated into the global training set unless the customer explicitly allows it.

“Your historical data and processes will be something you can use without having to start from scratch again and again, allowing you to fully harness the value locked away in your historical digital data, creating your own unique advantages through models that embody your secret sauce or your proprietary methods,” said Haley.

Agentic AI: Autodesk Assistant

When Autodesk first launched Autodesk Assistant, it was little more than a natural language chatbot to help users get support for Autodesk products.

Now it’s evolved into what Autodesk describes as an ‘agentic AI partner’ that can automate repetitive tasks and help ‘optimise decisions in real time’ by combining context with predictive insights.

Autodesk demonstrated how in Revit, Autodesk Assistant could be used to quickly calculate the window to wall ratio on a particular façade, then replace all the windows with larger units. The important thing to note here is that everything is done through natural language prompts, without the need to click through multiple menus and dialogue boxes.
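The window-to-wall ratio itself is a straightforward calculation once the element areas are known; what the Assistant adds is extracting those areas from the model via a prompt. A minimal sketch of the metric, with made-up areas rather than real Revit data:

```python
def window_to_wall_ratio(wall_area, window_areas):
    """Glazed area divided by gross wall area for one facade. In practice
    the element areas would be read from the BIM model; the figures in
    the example are invented."""
    glazed = sum(window_areas)
    if glazed > wall_area:
        raise ValueError("window area exceeds wall area")
    return glazed / wall_area

# Hypothetical south facade: 300 m2 of wall containing ten 4.5 m2 windows
print(window_to_wall_ratio(300.0, [4.5] * 10))  # 0.15
```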


‘Building Layout Explorer’, a new AI-driven feature coming to Forma Building Design

Autodesk Assistant in Revit enables teams to quickly surface project insights using natural language prompts, here showing how it could be used to quickly calculate the window to wall ratio on a particular façade, then replace all the windows with larger units

Autodesk Assistant can also help with documentation in Revit, making it easier to use drawing templates, populate title blocks and automatically tag walls, doors and rooms. While this doesn’t yet rival the auto-drawing capabilities of Fusion, when asked about bringing similar functionality to Revit, Bunszel noted, “We’re definitely starting to explore how much we can do.”

Autodesk also demonstrated how Autodesk Assistant can be used to automate manual compliance checking in AutoCAD, a capability that could be incredibly useful for many firms.

“You’ll be able to analyse a submission against your drawing standards and get results right away, highlighting violations in layers, lines, text and dimensions,” said Racel Amour, head of generative AI, AEC.

Meanwhile, in Civil 3D it can help ensure civil engineering projects comply with regulations for safety, accessibility and drainage, “Imagine if you could simply ask the Autodesk Assistant to analyse my model and highlight the areas that violate ADA regulations and give me suggestions on how to fix it,” said Amour.

So how does Autodesk ensure that Assistant gives accurate answers? Anagnost explained that it takes into account the context that’s inside the application and the context of work that users do.

“If you just dumped Copilot on top of our stuff, the probability that you’re going to get the right answer is just a probability. We add a layer on top of that that narrows the range of possible answers.”

“We’re building that layer to make sure that the probability of getting what you want isn’t 70%, it’s 99.99 something percent,” he said.

While each Autodesk product will have its own Assistant, the foundation technology has also been built with agent-to-agent communication in mind – the idea being that one Assistant can ‘call’ another Assistant to automate workflows across products and, in some cases, industries.

“It’s designed to do three things: automate the manual, connect the disconnected, and deliver real time insights, freeing your teams to focus on their highest value work,” said CTO, Raji Arasu.


Autodesk CTO Raji Arasu

In the context of a large hospital construction project, Arasu demonstrated how a general contractor, manufacturer, architect and cost estimator could collaborate more easily through natural language in Autodesk Assistant. She showed how teams across disciplines could share and sync select data between Revit, Inventor and Power BI, and manage regulatory requirements more efficiently by automating routine compliance tasks. “In the future, Assistant can continuously check compliance in the background. It can turn compliance into a constant safeguard, rather than just a one-time step process,” she said.

Arasu also showed how Assistant can support IT administration — setting up projects, guiding managers through configuring Single Sign-On (SSO), assigning Revit access to multiple employees, creating a new project in Autodesk Construction Cloud (ACC), and even generating software usage reports with recommendations for optimising licence allocation.

Agent-to-agent communication is being enabled by Model Context Protocol (MCP) servers and Application Programming Interfaces (APIs), including the AEC data model API, that tap into Autodesk’s cloud-based data stores.

APIs will provide the access, while Autodesk MCP servers will orchestrate and enable Assistant to act on that data in real time.

As MCP is an open standard that lets AI agents securely interact with external tools and data, Autodesk will also make its MCP servers available for third-party agents to call.

All of this will naturally lead to an increase in API calls, which were already up 43% year on year even before AI came into the mix. To pay for this Autodesk is introducing a new usage-based pricing model for customers with product subscriptions, as Arasu explains, “You can continue to access these select APIs with generous monthly limits, but when usage goes past those limits, additional charges will apply.”
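The mechanics of such an allowance-plus-overage model are simple to sketch. The allowance and the per-thousand rate below are placeholders; Autodesk has not published its actual figures here.

```python
def monthly_api_charge(calls, included, rate_per_thousand):
    """Charge for API calls beyond a monthly included allowance. The
    allowance and rate are illustrative placeholders, not Autodesk's
    published pricing."""
    overage = max(0, calls - included)
    return (overage / 1000) * rate_per_thousand

# Hypothetical month: 1.2M calls against a 1M allowance at $0.50 per 1,000
print(monthly_api_charge(1_200_000, 1_000_000, 0.50))  # 100.0
```

Under any such scheme, cost scales with how aggressively agents call the APIs, which is exactly why customers are asking whether heavy design iteration could become expensive.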

But this has raised understandable concerns among customers about the future, including potential cost increases and whether these could ultimately limit design iterations.

The human in the loop

Autodesk is designing its AI systems to assist and accelerate the creative process, not replace it. The company stresses that professionals will always make the final decisions, keeping a human firmly in the loop, even in agent-to-agent communications, to ensure accountability and design integrity.

“We are not trying to, nor do we aspire to, create an answer,” says Anagnost. “What we’re aspiring to do is make it easy for the engineer, the architect, the construction professional – reconstruction professional in particular – to evaluate a series of options, make a call, find an option, and ultimately be the arbiter and person responsible for deciding what the actual final answer is.”

AI computation

It’s no secret that AI requires substantial processing power. Autodesk trains all its AI models in the cloud, and while most inferencing — where the model applies its knowledge to generate real-world results — currently happens in the cloud, some of this work will gradually move to local devices.

This approach not only helps reduce costs (since cloud GPU hours are expensive) but also minimises latency when working with locally cached data.


AI research

Autodesk also gave a sneak peek into some of its experimental AI research projects. With Project Forma Sketch, an architect can generate 3D models in Forma by sketching out simple massing designs with a digital pencil and combining that with speech. In this example, the neural CAD foundation model interacts with large language models to interpret the stream of information.

Elsewhere, Amour showed how Pointfuse in ReCap Pro is building on its capability to convert point clouds into segmented meshes for model coordination and clash detection in Revit. “We’re launching a new AI-powered beta that will recognise objects directly from scans, paving the way for automated extraction for building retrofits and renovations,” she said.

Autodesk has also been working with global design, engineering, and consultancy firm Arcadis to pilot a new technology that uses AI to see inside walls to make it easier and faster to retrofit existing buildings.

Instead of destructive surveys, where walls are torn down, the AI uses multimodal data – GIS, floor plans, point clouds, thermal imaging, and radio frequency (RF) scans – to predict hidden elements, such as mechanical systems, insulation, and potential damage.


The AI-assisted future

AU 2025 felt like a pivotal moment in Autodesk’s AI journey. The company is now moving beyond ambitions and experimentation into a phase where AI is becoming deeply integrated into its core software.

With the neural CAD and Autodesk Assistant branded functionality, AI will soon be able to generate fully editable CAD geometry, automate repetitive tasks, and gain ‘actionable insights’ across both AEC and product development workflows.

As Autodesk stresses, this is all being done while keeping humans firmly in the loop, ensuring that professionals remain the final decision-makers and retain accountability for design outcomes.

Importantly, customers do not need to adopt brand new design tools to get onboard with Autodesk AI. While neural CAD is being integrated into Forma and Fusion, users of traditional desktop CAD/BIM tools can still benefit through Autodesk Assistant, which will soon be available in Revit, Civil 3D, AutoCAD, Inventor and others.

With Autodesk Assistant, the ability to optimise and automate workflows using natural language feels like a powerful proposition, but as the technology evolves, the company faces the challenge of educating users on its capabilities — and its limitations.

Meanwhile, data interoperability remains front and centre, with Autodesk routing everything through the cloud and using MCP servers and APIs to enable cross-product and even cross-discipline workflows.

It’s easy to imagine how agent-to-agent communication might occur within the Autodesk world, but AEC workflows are fragmented, and it remains to be seen how this will play out with third parties.

Of course, as with other major design software providers, fully embracing AI means fully committing to the cloud, which will be a leap of faith for many AEC firms.

From customers we have spoken with, there remain genuine concerns about becoming locked into the Autodesk ecosystem, as well as the potential for rising costs, particularly related to increased API usage. ‘Generous monthly limits’ might not seem so generous once the frequency of API calls increases, as it inevitably will in an iterative design process. It would be a real shame if firms end up actively avoiding using these powerful tools because of budgetary constraints.

Above all, AU 2025 will surely have given customers a much clearer idea of Autodesk’s long-term vision for AI-assisted design. There’s huge potential for Autodesk Assistant to grow into a true AI agent, while neural CAD foundation models will continue to evolve, handling greater complexity and blending text, speech and sketch inputs to further slash design times.

We’re genuinely excited to see where this goes, especially as Autodesk is so well positioned to apply AI throughout the entire design build process.


Main image: Mike Haley, senior VP of research, presents the AI keynote at Autodesk University 2025  

The post Autodesk shows its AI hand appeared first on AEC Magazine.

Managing aging water infrastructure: a proactive approach
https://aecmag.com/sponsored-content/managing-aging-water-infrastructure-a-proactive-approach/
Tue, 18 Nov 2025 07:54:24 +0000

By Keaton Clay, National Account Executive, Oldcastle Infrastructure CivilSense
Beneath our communities lies a complex network of pipes, pumps, and treatment infrastructure essential for public health, economic stability, and daily life. Much of this critical water infrastructure is aging, deteriorating, and increasingly vulnerable to failure. The consequences of failure are significant, including disruptive service interruptions, catastrophic main breaks and substantial financial losses through non-revenue water (NRW). For municipalities and water managers, shifting from a reactive repair model to a proactive asset management strategy is now a necessity.

The challenge is immense. Many of our nation’s water systems were built decades ago, with some pipes dating back over a century. The American Society of Civil Engineers (ASCE) consistently gives the nation’s drinking water infrastructure a “C-” grade, pointing to a water main break every two minutes and an estimated loss of six billion gallons of treated water each day. This NRW represents a massive drain on resources. It is water that has been abstracted, treated and pressurized at a significant cost, only to leak back into the ground without ever reaching the customer.
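
The underlying arithmetic is the standard water-balance definition: non-revenue water is simply the water put into the system minus the water billed to customers. A minimal sketch, with illustrative figures for a single utility:

```python
def non_revenue_water(system_input: float, billed_consumption: float) -> tuple[float, float]:
    """Return (NRW volume, NRW as a percentage of system input).

    Both volumes must be in the same units over the same period.
    """
    nrw = system_input - billed_consumption
    return nrw, 100.0 * nrw / system_input

# Illustrative daily figures for one utility, in million gallons:
volume, pct = non_revenue_water(system_input=50.0, billed_consumption=41.0)
print(f"{volume:.1f} MG/day lost ({pct:.0f}% NRW)")
```

Every unit of that gap carries the full cost of abstraction, treatment, and pumping, which is why NRW is such a direct drain on budgets.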

Addressing this issue requires a fundamental change in mindset, moving beyond emergency repairs to embrace a forward-thinking, data-driven approach to infrastructure management.

The High Cost of Reactive Maintenance

Historically, many water utilities have operated on a “run-to-failure” basis. Assets are repaired or replaced only after they break. While this might seem cost-effective in the short term, the long-term consequences are severe and multifaceted.

  • Financial Strain: Emergency repairs are significantly more expensive than planned maintenance. They involve overtime labor, expedited material procurement, and often result in more extensive damage to surrounding public and private property like roads and buildings.
  • Disruption: Unplanned shutdowns inconvenience residents and can cripple businesses that rely on a consistent water supply, such as restaurants, hospitals, and manufacturing plants, while main breaks directly disrupt access to businesses, infrastructure, and essential services.
  • Public Health Risks: Main breaks can lead to pressure loss in the system, creating a risk of contamination from groundwater infiltration. This can trigger boil-water advisories and pose a serious threat to community health.
  • Resource Depletion: Every gallon of water lost through leaks is a waste of the energy, chemicals, and labor used to treat and distribute it. In an era of increasing water scarcity and focus on sustainability, this level of waste is untenable.

The issue of NRW exacerbates these problems. It silently drains utility budgets, forces rate hikes for paying customers, and places unnecessary strain on water sources. Effectively managing an aging system means getting a handle on NRW.

The Power of Proactive Asset Management

A proactive approach to water infrastructure management focuses on understanding the condition of assets, predicting potential failures, and prioritizing maintenance and replacement activities to maximize system reliability and minimize lifecycle costs. This strategy empowers water managers to make informed decisions that extend the life of existing infrastructure while planning for future needs.

The benefits are clear and compelling:

  1. Reduced Operational Costs: By identifying and repairing small leaks before they become major breaks, utilities can save millions in emergency repair costs, reduce water loss, and lower energy consumption for pumping.
  2. Enhanced System Reliability: A planned maintenance schedule ensures that the most critical components of the network are in good working order, significantly reducing the frequency and impact of unexpected service interruptions.
  3. Improved Capital Planning: With a comprehensive understanding of asset health, utilities can develop accurate, long-term capital improvement plans. This allows for strategic budgeting and prevents the financial shock of a sudden, large-scale failure.
  4. Increased Sustainability: Conserving water by minimizing leaks is a powerful act of environmental stewardship. It protects precious water resources, reduces the carbon footprint associated with water treatment and distribution, and builds community resilience.

Implementing a proactive strategy is about moving from guesswork to data-driven certainty and requires the right tools and technologies.

Leveraging Technology for Intelligent Water Management

Modern advancements in sensor technology and data analytics have revolutionized how underground infrastructure is monitored and managed. These smart solutions provide the real-time visibility needed to transition from a reactive to a proactive operational model, allowing managers to see what’s happening within their distribution network without expensive and disruptive excavation.

This is where solutions like Oldcastle Infrastructure’s CivilSense™ prove invaluable. This AI-driven technology operates alongside existing systems, using network and sensor data to analyze network performance, identify risks, and optimize asset management. By temporarily deploying sensors that gather data to identify the telltale acoustic signature of an undetected leak, CivilSense™ transforms passive structures into active data collection points across the water network.

To further empower water managers in making informed investment decisions, Oldcastle Infrastructure offers the CivilSense™ ROI Calculator. This user-friendly tool enables utilities and municipalities to evaluate the financial impact of adopting proactive solutions, including early leak detection and predictive maintenance. By inputting system-specific variables, decision-makers can quantify potential cost savings, reductions in non-revenue water, and improved operational efficiency. The ROI Calculator provides clear, data-backed guidance, demonstrating how technologies like CivilSense™ deliver measurable value over time — making it easier to justify investments and prioritize projects.

With access to precise ROI data, water managers can transition confidently from traditional practices to advanced asset management models, ensuring sustainability and long-term savings.
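
The actual inputs to the CivilSense™ ROI Calculator are not listed here, so the sketch below uses a deliberately simplified payback model with placeholder figures to show the kind of arithmetic such a tool performs:

```python
def simple_payback_years(annual_savings: float, upfront_cost: float,
                         annual_subscription: float) -> float:
    """Years to recoup the upfront cost from net annual savings (hypothetical model)."""
    net = annual_savings - annual_subscription
    if net <= 0:
        return float("inf")  # the investment never pays back under these inputs
    return upfront_cost / net

# Placeholder values: $250k/yr in avoided repairs and water loss,
# $300k deployment cost, $50k/yr subscription.
print(round(simple_payback_years(250_000, 300_000, 50_000), 1))  # 1.5
```

A real evaluation would fold in discount rates, avoided emergency premiums, and NRW reductions, but even this toy version shows how system-specific variables turn into a defensible payback figure.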

This constant stream of data and actionable insights provides unprecedented visibility into system performance. For water managers, this means the ability to:

  • Detect Leaks Early: Subtle changes in flow patterns or water levels detected by sensors can indicate a leak long before it surfaces. This early warning allows crews to perform a targeted, planned repair at a fraction of the cost of an emergency response.
  • Monitor System-Wide Health: Aggregating data from multiple points provides a holistic view of the entire distribution network. This helps identify systemic issues and pinpoint areas of high water loss.
  • Predict Failures: Advanced analytics can process historical and real-time data to identify trends that signal an impending asset failure. This predictive capability allows utilities to intervene proactively and replace a vulnerable pipe or valve before it leads to a catastrophic event.
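
The early-detection idea in the first bullet can be illustrated with a toy baseline comparison; this is a minimal sketch of the principle (a sustained rise in minimum flow as a leak signature), not CivilSense™'s actual analytics:

```python
def flag_flow_anomalies(flow, window=5, threshold=1.25):
    """Flag readings that exceed a rolling-average baseline by `threshold`x.

    A sustained overnight rise in minimum night flow is a classic leak signature.
    """
    flags = []
    for i, reading in enumerate(flow):
        # Average of up to `window` preceding readings forms the baseline.
        baseline = sum(flow[max(0, i - window):i]) / max(1, min(i, window))
        flags.append(i >= window and reading > threshold * baseline)
    return flags

# Stable night flow, then a step change that could indicate a new leak:
readings = [10, 11, 10, 10, 11, 10, 16, 17, 16]
print(flag_flow_anomalies(readings))
```

Production systems combine many such signals (acoustic signatures, pressure transients, historical failure data), but the pattern is the same: compare live readings against an expected baseline and alert on sustained deviation.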

By equipping decision makers with this insight into the health of their water infrastructure systems — and leveraging financial evaluation tools like the CivilSense™ ROI Calculator — building a smarter, more resilient foundation for our communities can become a reality. It’s an investment in prevention that pays dividends through reliability, cost savings, and sustainability.

Building the Resilient Communities of Tomorrow

The challenge of aging water infrastructure is not going away. As pipes continue to age and climate change places additional stress on water resources, the need for intelligent, proactive management will only intensify. The cost of inaction is far too high to ignore.

Municipalities and water managers must take decisive steps to get ahead of the problem. This involves securing funding for infrastructure renewal, adopting comprehensive asset management plans, and embracing innovative technologies that provide critical operational insights.

Oldcastle Infrastructure is committed to engineering solutions that help build stronger, safer, and more sustainable communities. By integrating smart monitoring like CivilSense™ into the very fabric of our water systems, Oldcastle Infrastructure provides the tools necessary to manage these vital resources effectively. A proactive approach protects our infrastructure, conserves water, and ensures that our communities can continue to thrive for generations to come. The future of water management is not about reacting to the past; it’s about intelligently preparing for the future.


About CivilSense™

Oldcastle Infrastructure’s CivilSense™ is an advanced water infrastructure management platform that leverages artificial intelligence, predictive analysis, and real-time data to proactively detect and address leaks before they escalate into emergencies. With a market-leading accuracy rate of 93%, CivilSense™ enables municipalities to transition from reactive maintenance to strategic, data-driven asset management, effectively reducing water loss and associated costs. By integrating multi-source data — including GIS, infrastructure, and climate insights — CivilSense™ identifies high-risk pipeline segments, allowing for targeted interventions that enhance the resilience and sustainability of water systems. This innovative solution empowers communities to safeguard their water resources, minimize service disruptions, and optimize infrastructure investments.


About Oldcastle Infrastructure

Oldcastle Infrastructure, a CRH company, is a leading provider of engineered building solutions across North America. With nearly 80 manufacturing facilities and a workforce of over 4,000 employees, the company delivers a comprehensive portfolio of more than 16,000 products, including precast concrete, polymer concrete, and plastic components. These solutions serve critical sectors such as water, energy, communications, and transportation, supporting the development and maintenance of essential infrastructure.

Committed to sustainability and innovation, Oldcastle Infrastructure aligns its operations with the United Nations Sustainable Development Goals, focusing on responsible consumption, climate action, and the advancement of sustainable communities. As part of CRH plc, one of the world’s largest building materials companies, Oldcastle Infrastructure combines global resources with local expertise to deliver reliable, high-quality solutions that meet the evolving needs of modern infrastructure projects.


How AI Is transforming productivity in architecture
https://aecmag.com/sponsored-content/how-ai-is-transforming-productivity-in-architecture/
Fri, 07 Nov 2025 08:00:30 +0000
Three years after AI hit the mainstream, it still dominates architectural discourse. A new Chaos white paper based on 2025 practitioner interviews and internal research clarifies where AI adds value, where it falls short, and what should come next.

AI as a Creative Partner in Architecture

While AI is shifting the balance of work in architecture, the designer remains central to every major decision. Instead of displacing creativity, new tools are reducing the manual steps that have long slowed early design. Tasks like documentation setup and visualization preparation are moving faster, allowing architects to devote more attention to ideas that shape the experience and performance of a project.

As a result, architects are becoming guides and interpreters, ensuring that AI-generated suggestions serve the narrative and the brief. As clients experiment with AI on their own, professional authorship increasingly depends on how well designers communicate context, feasibility, and direction.

These shifts elevate the role of judgment. When visuals match the level of development, conversations stay focused on what matters most. When iteration answers a specific question, progress becomes decisive rather than overwhelming. Tools such as Chaos’ AI Image Enhancer support that shift by refining visuals within the design environment, while still relying on architects to determine alignment and meaning. Used with intention, AI strengthens the thread of design intent that runs from concept through delivery.

Responsible Use of AI Is Critical

As adoption grows, architects are discovering that AI introduces new risks alongside new efficiencies. Some are familiar, but others are emerging only as usage spreads across real projects. Data privacy is one clear concern. Public models often learn from what is uploaded into them, which poses challenges for work containing client-sensitive or proprietary information. Some firms are now guided by contract language that defines which tools may touch project data and which must remain strictly internal.

Authorship is another area of attention. When multiple stakeholders generate ideas with the same tools, styles can begin to converge and creative identity can flatten. Designers are learning to protect the distinctive qualities of their work by curating references carefully and reviewing outputs with context in mind. Overtrust remains a risk as well, since AI can look convincing while misunderstanding the brief.

These new responsibilities are quickly becoming standard parts of delivery. Firms are establishing policies for safe use, training teams to verify outputs early, and keeping clients informed about how AI influences decisions. It’s clear that responsibility is no longer a separate topic, but rather, a core ingredient of design integrity.

AI Delivers Real Productivity Gains in Design Workflows

According to the insights from this white paper, the most valuable gains will come from eliminating redundant steps in the process rather than speeding up existing ones. Instead of rebuilding ideas multiple times, early inputs could move farther without interruption, supported by tools that understand phase, context, and performance goals from the start.

This shift is likely to be driven by AI capabilities embedded inside the authoring environments architects already rely on. When tasks like asset creation stop interrupting modeling flow — for instance, using Chaos’ AI Material Generator to turn a reference photo into a ready-to-use texture — teams keep momentum focused on design rather than tool switching.

The experts interviewed expect this transition to unfold gradually, but the direction is already visible.

The Future of AI in AEC

The firms best prepared for what comes next will be those that pair strong design judgment with adaptability and clear process discipline. AI will not replace expertise, but it will make it more valuable, amplifying the importance of skills that filter, interpret, and communicate design intent.

To design smarter with AI, visit www.chaos.com.


Download the white paper 


PropVR’s digital twins help real estate developer boost revenue by 10x
https://aecmag.com/sponsored-content/propvrs-digital-twins-help-real-estate-developer-boost-revenue-by-10x/
Mon, 10 Nov 2025 11:52:30 +0000

DAMAC gave its clients a compelling Unreal Engine-powered immersive real-time experience
When DAMAC wanted an immersive and compelling real-time experience for its clients, PropVR provided a robust, realistic, and engaging digital twin framework, powered by Unreal Engine, helping the developer increase revenue from $40 million to $400 million.

PropVR is pioneering the creation of large-scale, photorealistic digital twins for cities, and has already modeled Dubai, Abu Dhabi, and Riyadh with its procedural workflow, using 2D maps, geospatial (GIS) data, and street view imagery for enhanced realism. This approach enables rapid development, turning vast urban landscapes into immersive 3D models in days rather than months.
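
PropVR's actual procedural pipeline is proprietary, but the core idea behind city-scale massing from 2D GIS data can be sketched as a simple polygon extrusion: each building footprint plus a height attribute becomes a 3D block. The footprint coordinates below are illustrative:

```python
def extrude_footprint(footprint, height):
    """Turn a 2D building footprint (list of (x, y) vertices) into a simple
    3D massing block: bottom ring, top ring, and quad side faces."""
    bottom = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, height) for x, y in footprint]
    n = len(footprint)
    # One quad per footprint edge, wrapping around at the last vertex.
    sides = [(bottom[i], bottom[(i + 1) % n], top[(i + 1) % n], top[i])
             for i in range(n)]
    return bottom, top, sides

# A 20 m x 10 m footprint extruded to a 30 m block, as GIS data might describe it:
b, t, s = extrude_footprint([(0, 0), (20, 0), (20, 10), (0, 10)], 30.0)
print(len(s))  # 4 side faces for a rectangular footprint
```

Repeating this over thousands of footprints, then layering street-view-derived textures on the resulting blocks, is one plausible reading of how a procedural workflow covers a whole city in days.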



While city digital twins bring long-term gains for planning, infrastructure, and even tourism, real estate digital twins bring a more personal perspective—the ability to experience a future home or office space in immersive, realistic 3D before it’s built, and truly get a feel for what life or work will feel like for residents or tenants. These digital twins also give real estate developers a competitive edge by capitalizing on the novelty factor to generate deeper engagement from potential buyers during the sales process.

For DAMAC Properties, one of the top luxury real estate developers in the Middle East, this collaboration with PropVR has enabled the deployment of unique interactive experiences, powered by Unreal Engine. DAMAC was able to boost conversions from $24 million to $400 million and improve customer engagement by up to 400%.

How PropVR made it happen

We spoke to Sunder Jagannathan, the Co-founder and CEO of PropVR, to get more details on the company’s approach to digital twins, and about the comprehensive digital twin project the company created for DAMAC.



Q: Please tell us about your career background. What led you to found PropVR?

Jagannathan: My career has always been at the intersection of technology, architecture, and immersive experiences. Over the years, I’ve focused on blending advanced 3D technologies like Unreal Engine, XR, and AI with industries such as real estate and smart cities—creating new ways for people to visualize, interact with, and make better decisions about spaces.

At PropVR, I lead our strategy, innovation, and business growth, with a mission to scale digital twin solutions globally and redefine how environments are experienced.


Q: Tell us about your view and vision for digital twins.

J: A digital twin, to me, is much more than a 3D model. It is a dynamic, data-driven replica of a physical asset or environment that allows for real-time interaction, monitoring, and insights. It bridges the physical and virtual worlds through data integration—empowering smarter decision-making and stakeholder collaboration.

At PropVR, we bring this vision to life by creating interactive, high-fidelity 3D environments with the future potential of integrating live data streams like IoT sensors, GIS layers, and AI analytics. Our approach helps stakeholders not just visualize but deeply understand and engage with their assets.



Q: What kinds of projects does PropVR work on?

J: PropVR is a spatial technology company focused on building immersive digital twins, real-time applications, and interactive 3D experiences across real estate, hospitality, aviation, and smart cities.

Over time, we’ve developed robust pipelines that allow companies to build and deploy these experiences faster across multiple platforms.

Whether it’s for virtual sales, masterplan visualization, or smart city twins, our focus remains on making data-rich environments explorable, interactive, and actionable.


Q: Tell us more about the digital twins project for DAMAC. Why was creating a digital twin the right approach?

J: The DAMAC project involved creating a hyper-realistic digital twin of their master-planned communities to transform how they approach sales, marketing, and stakeholder engagement.

Traditional real estate sales often rely on static brochures and renders—but with a digital twin, DAMAC was able to offer interactive walkthroughs, real-time project updates, and immersive presentations. This approach enabled their teams to showcase projects globally, improve buyer confidence, and close deals more efficiently.

Today, DAMAC is one of the leading adopters of this technology in the region, actively scaling digital twins across their diverse portfolio that includes skyscrapers, communities, and branded residences.



Q: How does a digital twin help stakeholders explore the building?

J: The DAMAC digital twin has been designed as a next-generation engagement platform with several interactive features like interactive 3D navigation, AI-powered MetaHuman sales avatars, and customization of interior spaces.

Clients are able to explore the project across levels—from masterplan to location highlights, amenities, floor plans, unit layouts, and interior spaces—in an immersive, interactive format. The sales avatars offer opportunities for interactive sales conversations within the digital twin.


Q: What if the digital twin needs to be customized or expanded?

J: The DAMAC digital twin environment is highly adaptable to customization, and for expansion to other types of sales content.

For one thing, the system is set up in such a way that sales teams can easily create custom animations and dynamic walkthroughs for presentations and social media. They can also repurpose and customize content for offline touchpoints like exhibitions, events, and sales galleries, ensuring a consistent buyer journey.

They can also use the same high-fidelity assets for browser-based experiences, maintaining quality across devices.



Q: What kinds of live, continuous data does the digital twin process in real time?

J: The digital twin connects directly with DAMAC’s inventory and CRM systems for real-time sales tracking and lead management within the digital twin. It also has the capability to integrate IoT sensor data—including environmental metrics, occupancy, lighting, and utilities—to enable operational twins for smarter asset management.

While the current focus is on interactive sales and visualization, the system has been architected to scale toward live data integration for operational and smart city use cases. Future upgrades will support live construction updates, phase tracking, and inventory management.


Q: Wow, that’s a lot of features! What made Unreal Engine the right tool for developing this type of digital twin?

J: Unreal Engine provides the perfect balance of cinematic-quality visuals and real-time interactivity. Its capabilities in lighting realism, scalability, and multi-platform support—including VR, AR, web, and touchscreen—make it our go-to solution.

Additionally, Unreal’s flexibility has allowed us to build custom 3D pipelines that accelerate the creation, deployment, and optimization of these experiences across varied devices and channels.



Q: Can you walk us through the process of creating your digital twin solution, from import CAD data to final delivery?

J: Our creation process follows these structured stages:

  1. Data collection – Gather CAD drawings, elevation plans, drone captures, mood boards, and marketing materials.
  2. 3D modeling – Convert these inputs into detailed 3D models of interiors and exteriors.
  3. Environment building – Integrate models into Unreal Engine with lighting, materials, animations, and interactivity.
  4. Feature development – Add camera points of interest, guided tours, UI/UX layers, and data dashboards.
  5. Deployment optimization – Use our in-house pipeline to deploy across VR, AR, web, touchscreen displays, and high-end devices.
  6. Testing and quality assurance – Optimize for performance and compatibility across platforms.
  7. Final delivery – Deploy as standalone EXE, browser-based experience, VR/MR application, or streaming solution based on the use case.

Q: How has this digital twin improved sales and marketing?

J: The DAMAC digital twin has had a transformative impact on sales and marketing. DAMAC reported a sales jump from $24 million to over $400 million, directly attributed to the interactive digital experience.

Buyer engagement increased by over 400% across sales channels. The platform enabled global reach, allowing potential buyers to explore properties remotely with confidence.

Sales teams and agents reported faster deal closures and stronger client relationships due to the immersive presentations.



Q: How do people experience the digital twin?

J: We offer the digital twin experience across multiple formats based on the audience and use case:

  • Touchscreen applications (sales galleries)
  • Pixel Streaming sales tool with video calling (web)
  • Browser-based interactive platforms (remote scalability)
  • VR and MR experiences (high-immersion investor or stakeholder demos)
  • LED walls and experience centers (large-scale immersive walkthroughs)
  • Holographic displays (for exhibitions and events)

The choice of platform depends on the context—buyers and sales teams typically use web and touchscreen versions, while government stakeholders and investors prefer VR and immersive formats.


Q: What has been the reaction of prospective tenants?

J: Prospective tenants and investors have been highly impressed by the interactive and personalized experience. Being able to explore the project remotely, select units, view balcony options, and customize interiors has significantly enhanced their confidence in decision-making.

The customizability of the platform enables us to tailor the experience to:

  • Buyers: Unit selection, interiors, views, and amenity exploration
  • Investors: Development phases, ROI modeling, and construction progress
  • Authorities: Compliance checks, infrastructure planning, and masterplan visualization

This adaptability ensures that every stakeholder sees what matters most to them.



Q: What are the biggest learnings you gained while working on this project?

J: One of the biggest learnings was the power of interactive storytelling in real estate. Moving beyond static visuals to fully explorable environments dramatically changes buyer engagement and perception.

Another key insight was the need for ease of content updates—enabling marketing and sales teams to manage inventory changes, pricing, and availability without requiring technical intervention.


Q: What do you see for the future of digital twins?

J: I believe digital twins will evolve into living, connected ecosystems where real-time data, AI insights, and immersive experiences seamlessly converge.

In the next five years, we expect that digital twins will play a critical role in smart building operations, sustainability, and energy management, and they will extend beyond sales into asset management, predictive maintenance, and urban planning. With advancements in AI, procedural generation, and automation, creating and scaling digital twins will become faster, cost-effective, and accessible to more industries.

At PropVR, we’re excited to continue pushing the boundaries of what’s possible—shaping the future of how environments are visualized, experienced, and managed.


Company bios

DAMAC Properties is the UAE’s leading property developer with a global footprint that includes Dubai, Riyadh, Toronto, London, and Miami. DAMAC has delivered over 48,000 homes, with more than 50,000 underway.

PropVR specializes in immersive 3D visualization and virtual reality (VR) solutions for real estate. Based in Dubai, PropVR offers digital twin solutions for virtual sales, masterplan visualization, and smart city monitoring with high-fidelity digital environments.

The post PropVR’s digital twins help real estate developer boost revenue by 10x appeared first on AEC Magazine.

Contract killers: how EULAs are shifting power from users to shareholders

Fri, 03 Oct 2025

Most architects and engineers never read the fine print of software licences. But today’s End User Licence Agreements (EULAs) and Terms of Use reach far beyond stating your installation rights. Software vendors are using them to claim rights over your designs, control project data, limit AI training, and reshape developer ecosystems — shifting power from customers to shareholders. Martyn Day explores the rapidly changing EULA landscape

The first time I used AutoCAD professionally was about 37 years ago. At the time I knew a licence cost thousands of pounds and was protected by a hardware dongle, which plugged into the back of the PC.

The company I worked for had been made aware by its dealer that the dongle was the proof of purchase and if stolen it would cost the same amount to replace, so we were encouraged to have it insured. This was probably the first time I read a EULA and had that weird feeling of having bought something without actually owning it. Instead, we had just paid for the right to use the software.

Back then, the main concern was piracy. Vendors were less worried about what you did with your drawings and more worried about stopping you from copying the software itself. That’s why early EULAs, and the hardware dongles that enforced them, focused entirely on access.

The contract was clear: you hadn’t bought the software, you had bought permission to use it, and that right could be revoked if you broke the rules.


Find this article plus many more in the September / October 2025 Edition of AEC Magazine
👉 Subscribe FREE here 👈

As computing spread through the 1980s and 1990s, so did the mechanisms of digital rights management (DRM). Dongles gave way to serial numbers, activation codes and eventually online licence checks tied to your machine or network. Each step made it harder to install the software without permission, but the scope was narrow. The EULA told you how many copies of the software you could run, what hardware it could be installed on, and that you could not reverse-engineer it.

What it didn’t do was tell you what you could or could not do with your own work. Your drawings, models and outputs were your business. The protection was wrapped tightly around the software, not around the data created with it. That boundary is what has changed today.

The rising power of the EULA

As software moved from standalone desktop products to subscription and cloud delivery, the scope of EULAs began to widen. No longer tied to a single boxed copy or physical medium, licences became fluid, covering not just installation but how services could be accessed, updated and even terminated.

The legal fine print shifted from simple usage restrictions to broad behavioural rules, often with the caveat that terms could be changed unilaterally by the vendor.

At first the transition was subtle. Subscription agreements introduced automatic renewals, service-level clauses and restrictions on transferring licences. Cloud services added layers of terms around uptime, data storage and security responsibilities. What was once a static contract at the point of sale evolved into a living document, updated whenever the vendor saw fit. And in the last five to seven years, these updates have become more frequent.

Software firms now have an extraordinary new power: the ability to reshape the customer relationship through the EULA itself. Where early agreements were about protecting intellectual property against piracy, modern ones increasingly function as business strategy tools. They dictate not just who can access the software, but how customers interact with their data, APIs, and even with third-party developers. The fine print is no longer just about access control; it has become a mechanism of control.

EULAs are no longer obscure boilerplate legalese, tucked at the end of an installer. They have become the front line in a new battle, not over software piracy, but over who controls the data, workflows, and ecosystems that shape the future of design

Profound changes

The most striking shift in recent years is that EULAs have moved beyond software access and into the realm of customer data. What you produce with the tools (models, drawings, schedules, and outputs) has become strategically valuable to the software developers – as valuable as the software itself. Vendors now see customer data as fuel for things like analytics, training, and new AI services. The contract language has followed and there are varying degrees of land grab going on.

This year alone we have seen two firms – Midjourney and D5 Render – attempt to change their EULAs to automatically claim perpetual rights to access and use customer-created data (mainly AI renderings), as well as the right to pass liability on to customers if any of those images infringe copyright and are subsequently used by the vendor to train its AI models.

Many of the pure-play AI firms will lay claim to your firstborn, given half a chance.



D5 Render provided a response to this article, published below, clarifying its position on customer data rights, including ownership of content, training data and liability.






Autodesk

Closer to home, Autodesk provides another example. Its current Terms of Use, which serves as the primary agreement for subscription and cloud users, includes a clause which prohibits training AI systems on data or models created with its software. An earlier draft of this article suggested the restriction was recent, but Autodesk has since clarified that it dates back to 2018.

On a strict reading, this clause implies that even if you create designs entirely in-house, you may not be allowed to use your own data to train and develop your own AI models. If correct, Autodesk could hold the right to decide if, when, or how your data can be used for such purposes.

As we are on the cusp of an AI revolution, this is a profound change. Historically, your files were yours: a Revit model or AutoCAD drawing was protected only by your own governance. Now the licence agreement could potentially dictate not only how the software runs, but also how you can use the fruits of your own labour.

Autodesk’s licensing language creates a subtle but important tension between ownership and control. In its Terms of Use (which serves as the effective EULA for all subscription and cloud customers), Autodesk reassures customers with familiar phrases such as “You own Your Work” and “Your Content remains yours.”

On the surface, this means that the models, drawings, and other outputs you create belong to you, not Autodesk. However, deeper in the Terms of Use and the accompanying Acceptable Use Policy (AUP), the scope of what you can do with that work becomes more constrained — particularly in relation to AI or derivative use cases.

Talking with May Winfield, global director of commercial, legal and digital risks for global engineering consultancy Buro Happold, she suggests this goes further: Autodesk’s Acceptable Use Policy’s purported restrictions on customer outputs may even conflict with copyright laws in certain jurisdictions, where authors automatically own their creations unless they expressly transfer or license those rights. The question becomes: if copyright law guarantees authorship, but Autodesk contractually limits permitted uses, which prevails?

In these documents, Autodesk introduces the term “Output,” meaning any file or result generated using its software. The AUP states that customers must not use “any Offering or related Output in connection with the training of any machine learning or artificial intelligence algorithm, software, or system.” In practice, this means that even though Autodesk concedes ownership of your designs, it may contractually restrict you from applying them in one of the most strategically valuable ways: training your own AI models.

I know many of the more progressive AEC firms that attend our NXT BLD event are training their own in-house AI based on their Revit models, Revit derived DWGs and PDFs. With no caveats or carve outs for customers, they potentially now have the Sword of Damocles hanging over their data. As worded, the broad use of the word ‘output’ could theoretically even apply to an Industry Foundation Classes (IFC) file exported from Revit, as it’s an output from Autodesk’s product stack, which could mean you are not even allowed to train AI on an open standard!

Legally, the company has not taken your intellectual property; instead, it may have ring-fenced its permitted uses, in a very specific way. This creates what I’d characterise as a “legal DRM moat” around customer data.

Autodesk potentially positions itself as the arbiter of how your data can be exploited, leaving you in possession of your files but without full freedom to decide their fate. The fine print ensures Autodesk maintains leverage over emerging AI workflows, even while telling customers their data still belongs to them. And the one place where this restriction doesn’t apply is within Autodesk’s cloud ecosystem, now called Autodesk Platform Services (APS). Only last month at Autodesk University, the company was demonstrating AI training on data within the Autodesk cloud.



Autodesk provided a response to this article, published below.

For clarity, several edits have since been made to this article.



Knock-on risks for consultants

Winfield also points out that Autodesk’s broad claims over “outputs” may have knock-on consequences for customer–client agreements. Most design and consultancy contracts require the consultant to warrant that deliverables are original and fully owned by them. If a vendor asserts ownership rights through its licence terms, that warranty could be undermined. The risk goes further: consultancy agreements often contain indemnities, requiring the designer to protect the client against copyright breaches or claims. If a software vendor were to allege ownership or misuse under its EULA, a client might look to recover damages from the consultant. This creates a potential double exposure — liability to the vendor, and liability to the client.

Possible reasons

The rationale behind this clause is open to interpretation. Autodesk maintains that its intent is to protect intellectual property and ensure AI use occurs within secure, governed environments. Some industry observers worry that the breadth could inadvertently chill legitimate customer innovation, despite Autodesk’s stated intent.

Others have speculated that such clauses could serve to pre-empt potential misuse of design data by large AI firms. However, the clause’s 2018 publication date predates the current wave of generative AI, suggesting it was originally framed as a broad IP-protection measure rather than a response to AI players challenging Autodesk’s hold on its customers. 2018 was long before these major AI firms posed a credible threat.

The short solution would be for Autodesk to refine the language in its Terms of Use so that it does not imply such a broad restriction on customers training their own AI models on their own design data, irrespective of the software that produced it.

There is a lot of daylight between what Autodesk claims to be its intent and the plain language of what is written. If the intent is to stop reverse engineering of Autodesk AI IP, then why not state that clearly?

The reverse engineering of its products and services is already covered quite extensively in section 13, Autodesk Proprietary Rights, of its General Terms. The clauses on machine learning, AI, data harvesting and APIs are all in addition to this.

When Nathan Miller, a digital design strategist and developer from Proving Ground, discovered these limitations, he ran a series of posts on LinkedIn. Prior to this, none of the AEC firms we had spoken with for this article had any insight into the restriction, despite the Terms of Use being published seven years ago.

While it was certainly a topic hotly commented on, the only Autodesk-related person to add their thoughts to the LinkedIn posts was Aaron Wagner of reseller Arkance. He commented:

“I don’t think the common interpretation is accurate to the spirit of that clause. Your data is your data and the way you use it is under your own discretion. Of course, you should always seek legal counsel to refine any grey areas.

“This statement to me reads that the clause is from a standpoint of Autodesk wanting to protect its products from being reverse engineered and hold themselves free of liability of sharing private information, but model element authors can still freely use AI/ML to study their own data / designs and improve them.”

Buro Happold’s Winfield gave her perspective, “Contract interpretation is generally not impacted by spirit of a clause – if the drafting is clear, it is not changed by the assertion of a different intention? Unless there are contradictions in other clauses and copyright law then it all needs to be read together and squared up to be interpreted in a workable way? It may be the “outputs” in the clause needs to qualify / clarify its intentions, if different from the seemingly clear drafting of read alone?”

The interpretation that this was a sweeping restriction on AI training using any output from Autodesk software has not gone unnoticed by major customers. Autodesk already has a reputation for running compliance audits and issuing fines when licence breaches are discovered, so the presence of this clause in an updated, binding contract has raised alarm.

The fear was simple: if the restriction exists, it can be enforced. Several design IT directors have already told their boards that, on a strict reading of Autodesk’s updated terms, their firms are probably now out of compliance – not for piracy, but for training their own AI models, on their own project data.

Some of the commenters on Miller’s original LinkedIn post reported that they had raised the issue with Autodesk execs in meetings. By and large, these execs had not heard of the EULA changes and said they would find out more.

Other vendors

Looking at what other firms have done here, their EULAs do include clauses about AI training on data, but these always appear to relate to protecting IP or preventing the reverse engineering of commercial software – not broad prohibitions.

Adobe has explicit rules around its Firefly generative AI features and the company’s Generative AI User Guidelines forbid customers from using any Firefly-generated output to train other AI or machine learning models. However, in product-specific terms, Adobe defines “Input” and “Output” as your content and extends the same protections to both.

Graphisoft has so far left customer data largely unconstrained in terms of AI use. Bentley Systems sits somewhere in between, allowing AI outputs for your use but prohibiting their use in building competing AI systems. The standard Allplan EULA / licence terms do not appear to contain blanket prohibitions on using output for AI training.

Meanwhile, Autodesk’s wording has no caveats or carve-outs for customers’ data, just what appears to be a blanket restriction on AI training using outputs from its software, combined with an exception for its own cloud ecosystem. This appears to effectively grant the company a monopoly over how design data can fuel AI. Customers are free to create, but if they wish to train internal AI on their own project history, the contract could shut the door — unless that training happens inside Autodesk’s APS environment. The effect is to funnel innovation into Autodesk’s platform, where the company retains commercial leverage.

This mirrors tactics used in other industries. Social media platforms, for example, restrict third-party scraping to ensure AI training occurs only within their walls – although in that instance the third party would be using data it does not own.

If licence agreements prevent firms from using their own outputs to train AI, they forfeit the ability to build unique, in-house intelligence from their past projects

In finance, regulators have intervened to stop institutions from controlling both infrastructure and the datasets flowing through them. Europe’s Digital Markets Act directly targets such gatekeeping, while US antitrust agencies are scrutinising restrictive contract terms that entrench platform dominance.

For the AEC sector, the potential impact of the restrictions in Autodesk’s Acceptable Use Policy is clear: it risks concentrating AI innovation inside Autodesk’s ecosystem, raising barriers for independent development and narrowing customer choice.

Proving is difficult

How Autodesk might enforce an AI training ban is an open question. Traditional licence audits can detect unlicensed installs or overuse, but proving that a customer has trained an AI on Autodesk outputs is far more complex. That said, Autodesk file formats (DWG, RVT, etc.) do contain unique structural fingerprints that could, in theory, be detected in a trained model’s weights or outputs – for example, if an AI consistently reproduces proprietary layering systems, metadata tags, or parametric structures unique to Autodesk tools.

Autodesk could also monitor API usage patterns: large-scale systematic exports or conversions may signal that datasets are being harvested for training. Another possible avenue is watermarking — embedding invisible markers in outputs that survive export and could later be detected.

APIs, APS and developers

Autodesk is also making significant changes to other areas of its business – changes that could have a big impact on those that develop or use complementary software tools. Autodesk’s API and Autodesk Platform Services (APS) ecosystem has long been central to the company’s success, enabling customers and commercial third parties to extend tools like Autodesk Revit and Autodesk Construction Cloud (ACC).

But what was once a relatively open environment is now being reshaped into a monetised, tightly governed platform — with serious implications for customers and developers.

Nathan Miller of Proving Ground points out that virtually every practice he has worked with relies on open-source scripts, third-party add-ins, or in-house extensions. These are the utilities that make Autodesk’s software truly productive. By introducing broad restrictions and fresh monetisation barriers, Autodesk risks eroding the very ecosystem that helped drive its dominance.

The most visible change is the shift of APS into a metered, consumption-based service. Previously bundled into subscriptions, APIs will now incur line-item costs for common tasks such as model translations, batch automations and dashboard integrations.

A capped free tier remains, but high-value services like Model Derivative, Automation and Reality Capture will now be billed per use. For firms, this means operational budgets must now account for API spend, with the risk of projects stalling mid-delivery if quotas are exceeded or unexpected charges are triggered.

Autodesk has also tightened authentication rules. All integrations must be registered with APS and use Autodesk-controlled OAuth scopes. These scopes, which define the exact permissions an app has, can be added, redefined or retired by Autodesk — improving security, but also centralising control over what kinds of applications are permitted.

Perhaps the most profound change is not technical, but contractual. Firms can still create internal tools for their own use. But turning those into commercial products — or even sharing them with peers — now requires Autodesk’s explicit approval. The line between “internal tool” and “commercial app” is no longer a matter of technology but of contract law. Innovation, once free to circulate, is now fenced in.

This changing landscape for software development is not unique to Autodesk. Dassault Systèmes (DS), which is strong in product design, manufacturing, automotive, and aerospace, has sparked controversy by revising its agreements with third party developers for its Solidworks MCAD software. DS is demanding developers hand over 10% of their turnover along with detailed financial disclosures. Small firms fear such terms could make their businesses unviable.

Across the CAD/BIM sector, ecosystems are being re-engineered into revenue streams. What were once open pipelines of user-driven innovation are narrowing into gated conduits, designed less to empower customers than to deliver shareholder returns.

Why all this matters

The stakes are high for both customers and developers. For customers, the greatest risk is losing meaningful control over their design history. Project files, BIM models and CAD data are no longer just records of completed work; they are the foundation for future AI-driven workflows. If licence agreements prevent firms from using their own outputs to train AI, they forfeit the ability to build unique, in-house intelligence from their past projects. The value of their data, arguably their most strategic asset, is redirected into the vendor’s ecosystem. The result is growing dependence: firms must rely on vendor tools, AI models and pricing, with fewer options to innovate independently or move their data elsewhere.

For software developers, the risks are equally severe. Independent vendors and in-house innovators who once built add-ons or utilities to extend core CAD/BIM platforms now face new costs and restrictions. Revenue-sharing models, such as Dassault Systèmes’ 10% royalty scheme, threaten commercial viability, especially for smaller firms. When API use is metered and distribution fenced in by contract, ecosystems shrink. Innovation slows, customer choice narrows, and vendor lock-in grows.

AI is the existential threat vendors don’t want to admit. Smarter systems could slash the number of licences needed on a project, deliver software on demand, and let firms build private knowledge vaults more valuable than off-the-shelf tools. Vendors see the danger: EULAs are now their defensive moat, crafted to block customers from using their own data to fuel AI. The fine print isn’t just about compliance — it’s about making sure disruption happens on the vendor’s terms, not those of the customer.

This trajectory is not inevitable. Customers and developers can push back. Large firms, government bodies and consortia hold leverage through procurement. They can demand carve-outs that preserve ownership of outputs and guarantee the right to train AI. Developers, too, can resist punitive revenue-sharing schemes and press for fairer terms. Only collective action will ensure innovation remains in the hands of the wider AEC community, not locked in vendor boardrooms.

The tightening of EULAs and developer agreements is not happening in a vacuum. In Europe, new regulations like the Digital Markets Act (DMA) and the Data Act could directly challenge these practices. The DMA targets “gatekeepers” that restrict competition, while the Data Act enshrines customer rights to access and use data they generate, including for AI training. Clauses restricting firms from training AI on their own outputs may sit uncomfortably with these principles.

In the US, antitrust law is less settled but moving in the same direction. The FTC has signalled increased scrutiny of contract terms that suppress competition, and restrictions such as Autodesk’s AI-output restriction or Solidworks’ 10% developer royalty could draw attention.

For customers and developers, this creates negotiating leverage. Large firms, government clients, and consortia can push for carve-outs citing regulatory rights, while developers may resist punitive revenue-sharing as disproportionate. Yet smaller players face a harder reality: challenging vendors risks losing access to platforms that underpin longstanding businesses.

A Bill of Rights?

With so many software firms busily updating their business models, EULAs and terms, the one group standing still and taking the full force of this wave is customers. A constructive way forward could be the creation of a Bill of Rights for AEC software customers — a simple but powerful framework that customers could insist their vendors sign up to and be held accountable against. The goal is not to hobble innovation, but to ensure it happens on a foundation of fairness and trust, so that this month’s ‘we have updated our EULA’ notice need never transgress a set of core principles.

At its heart we’re suggesting five core principles:

Data Ownership – a statement that customers own what they create; vendors cannot claim control of drawings, models, or project data through the fine print.

AI Freedom – guarantees that firms may use their own outputs to train internal AI systems, preserving the ability to innovate independently rather than relying solely on vendor-driven tools.

Developer Fairness – ensures that APIs remain open, with transparent and non-punitive revenue models that allow third-party ecosystems to thrive.

Transparency – requires vendors to clearly disclose when and how customer data is used in their own AI training or analytics.

Portability – commits vendors to interoperability and open standards, so that customers are never locked into one ecosystem against their will.

Such a Bill of Rights would not prevent Autodesk, Bentley Systems, Nemetschek, Trimble and others from building profitable AI services or new subscription tiers. But it would establish clear boundaries: vendors innovate and capture value, but not at the expense of customer autonomy. For customers, developers, and ultimately the built environment itself, this would restore balance and accountability in a market where the fine print has become as important as the software itself.

AEC Magazine is now working with a group of customers, developers and software vendors to see how this could be shaped in the coming months.

Conclusion

EULAs are no longer obscure boilerplate legalese, tucked at the end of an installer. They have become the front line in a new battle, not over software piracy, but over who controls the data, workflows, and ecosystems that shape the future of design.

In my view, as currently worded, Autodesk’s clause could be interpreted as a prohibition on AI training, although this may run counter to Autodesk’s intentions with regard to customer ‘outputs’. Furthermore, Dassault Systèmes’ demand for a slice of developer revenues illustrates just how quickly the ground is shifting. Contracts are no longer just protective wrappers around software; they are strategic levers which can be used to lock in customers and monetise ecosystems.

This should concern everyone in AEC. Customers risk losing the ability to use their own project history to innovate, while mature developers face sudden, new revenue-sharing models that could undermine entire businesses. Left unchallenged, the result will be less competition, less innovation, and greater dependency on a handful of large vendors whose first loyalty is to shareholders, not users.

The only path forward I see is collective action. Customers and developers must push back, demand transparency, insist on long-term contractual safeguards, and possibly unite around a shared Bill of Rights for AEC software. The question is no longer academic: in the age of AI, do you own your tools and your data — or does your vendor own you?


Editor’s note / Autodesk response:

In response to this article, Autodesk provided the following statement:

“The clause included in Martyn Day’s recent article has been part of our Terms of Use since they were originally published in May 2018. 

 “This clause was written to prevent the use of AI/ML technology to reverse engineer Autodesk’s IP or clone Autodesk’s product functionalities, a common protection for software companies. It does not broadly restrict our users’ ability to use their IP or give Autodesk ownership to our users’ content.

“We know things are moving fast with the accelerated advancement in AI/ML technology. We, along with just about every software company, are adapting to this changing landscape, which includes actively assessing how best to meet the evolving needs and use cases of our customers while protecting Autodesk’s core IP rights. As these technologies advance, so will our approach, and we look forward to sharing more in the months ahead.”

Autodesk also clarified that the License and Services Agreement only applies to legacy customers who still use perpetual licenses. The Terms of Use from May 2018 supersedes that agreement to cover both desktop and cloud services.


Correction (8 Oct 2025): An earlier version of this article incorrectly suggested that the changes to the Terms of Use were made in May 2025. Based on Autodesk’s statement above, this article has been corrected and updated for clarity.


D5 Render’s response

In response to this article, D5 Render provided the following statement:

We fully understand and share the community’s concerns regarding data rights in the evolving field of AI. We remain committed to maintaining clear and fair agreements that protect user rights while fostering innovation.

Our Terms of Service (publicly available at www.d5render.com/service-agreement) do not claim any ownership or perpetual usage rights over user-generated content, including AI-rendered images. On the contrary, Section 6 of our Terms of Service explicitly states that users “retain rights and ownership of the Content to the fullest extent possible under applicable law; D5 does not claim any ownership rights to the Content.”

When users upload content to our services, D5 is granted only a non-exclusive, purpose-limited operational license, which is a standard clause in most cloud-based software products. This license merely allows us to technically operate, maintain, and improve the service. D5 will never use user content as training data for the Services or for developing new products or services without users’ express consent.

As for liability, Sections 8 and 9 of our Terms of Service are standard in the software industry. They are designed to protect D5 from risks arising from user-uploaded content that infringes on third-party rights. These clauses are not intended to transfer the liability of D5’s own actions to users.


Explainer #1 – EULA vs Terms of Use: what’s the difference?

At first glance, a EULA (End User Licence Agreement) and Terms of Use can look like the same thing. In practice, they operate at different levels — and together form the legal framework that governs how customers engage with software and cloud services.

The EULA is the traditional licence agreement tied to desktop software. It explains that you do not own the software itself, only the right to use it under certain conditions. Typical clauses cover installation limits, restrictions on copying or reverse-engineering, and confirmation that the software is licensed, not sold.

The Terms of Use apply more broadly to online services, platforms, APIs and cloud tools. They include acceptable use rules, data storage and sharing conditions, API restrictions, and often a right for the vendor to change the terms unilaterally.

One unresolved issue is how to interpret contradictions. If the EULA states ‘you own your work’ but the Acceptable Use Policy restricts what you can do with that work, and neither agreement specifies which takes precedence, which clause governs? In practice, customers may only discover the answer in the event of a dispute — an unsettling prospect for firms relying on predictable rights.


Explainer #2 – Why is data the new goldmine?

As the industry moves into an era defined by artificial intelligence and machine learning, customer content has become more than just the product of design work; it has become the raw material for training and insight.

BIM and CAD models are no longer viewed solely as deliverables for projects, but as vast datasets that can be mined for patterns, efficiencies, and predictive value. This is why software vendors increasingly frame customer content as “data goods” rather than private work.

With so much of the design process shifting to cloud-based platforms, vendors are in a powerful position to influence, and often restrict, how those datasets can be accessed and reused.

The old mantra that “data is the new oil” captures this shift neatly: just as oil companies controlled not only the drilling but also the refining and distribution, software firms now want to control both the pipelines of design data and the AI refineries that turn it into intelligence.

What used to be customer-owned project history is being reconceptualised as a strategic asset for software vendors themselves, and EULAs and Terms of Use are the contractual tools that allow them to lock down that value.


Explainer #3 – Autodesk’s Terms of Use

What it says

Autodesk’s Acceptable Use Policy (AUP) appears to ban AI/ML training on any “output” from its software unless done within Autodesk’s APS cloud. This could include models, drawings, exports, even IFCs.

Why it matters

Customers risk losing the ability to train internal AI on their own design history. Strict licence audits mean firms could be flagged non-compliant even without intent.

Legal experts warn the AUP’s broad claims over “outputs” may conflict with copyright law, which in many jurisdictions gives authors automatic ownership of their creations.

Consultants could face knock-on risks if client contracts require them to warrant full ownership of deliverables — raising potential indemnity exposure.

Autodesk gains leverage by funnelling AI innovation into its paid ecosystem.

The big picture

This move mirrors gatekeeping strategies in other tech sectors, where platforms wall off data to consolidate control. Regulators in the EU (Digital Markets Act, Data Act) and US antitrust bodies are increasingly scrutinising such practices.


Explainer #4 – Developers at risk

What changed?

Autodesk has overhauled Autodesk Platform Services (APS): APIs are now metered, consumption-based, and gated by stricter terms. While firms can still build internal tools, sharing or commercialising scripts now requires Autodesk’s explicit approval.

Why it matters

Independent developers face new costs and quotas for integrations that were once bundled into subscription fees. In-house teams must now budget for API usage, turning process automation into an ongoing operational cost.

Quota limits mean projects risk disruption if thresholds are unexpectedly exceeded mid-delivery.

The contractual line between “internal tool” and “commercial app” is now defined by Autodesk, not developers.

Innovation that once flowed freely into the wider ecosystem is fenced in, with Autodesk deciding what can be shared.

The big picture

Across the CAD/BIM sector, developer ecosystems are being monetised and restricted to generate shareholder returns. What were once open innovation pipelines are narrowing into vendor-controlled platforms, threatening the independence of smaller developers and reducing customer choice.


Recommended viewing: May Winfield @ NXT DEV


At AEC Magazine’s NXT DEV event this year, May Winfield, global director of commercial, legal and digital risks at Buro Happold, presented “EULA and Other Agreements: You signed up to what?”, in which she invited the audience to reconsider the contracts they’ve implicitly accepted.

How many users digest the fine print of EULAs and AI tool terms? Winfield warns that their assumptions often misalign with contractual reality and highlights key clauses that tend to lurk in user agreements: ownership of content, usage rights, and liability limitations.

In her presentation, May does not offer legal advice, but she provides a practical reminder: what you think you own or can do might be constrained by what you signed up to — underscoring the urgency for users, developers, and governance bodies to delve into EULAs and demand clarity.

■ Watch @ www.nxtaec.com

The post Contract killers: how EULAs are shifting power from users to shareholders appeared first on AEC Magazine.
