Part3 to give architects control over construction drawings

ProjectFiles is designed to provide a single source of truth for drawings, feeding into submittals, RFIs, change documents, instructions, and field reports.

ProjectFiles from Part3 is a new construction drawing and documentation management system for architects designed to help ensure the right drawings are always accessible on site, in real time, to everyone who needs them.

According to the company, unlike other tools that were built for contractors and retrofitted for everyone else, ProjectFiles was designed specifically with architects in mind.

ProjectFiles is a key element of Part3’s broader construction administration platform, and also connects drawings to the day-to-day management of submittals, RFIs, change documents, instructions, and field reports.



Automatic version tracking helps ensure the entire team is working from the most up-to-date drawings and documents. According to Part3, it’s designed to overcome problems such as walking onto site and finding contractors working from outdated drawings, or wasting time hunting through folders trying to find the current structural set before an RFI deadline.

The software also features AI-assisted drawing detection, where files are automatically tagged with the correct drawing numbers, titles, and disciplines.
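
Part3 has not said how its detection works under the hood, but the tagging output it describes is easy to picture. The sketch below is purely illustrative: it parses a hypothetical sheet-numbering convention from filenames, whereas Part3’s AI reads the drawings themselves.

```python
import re

# Hypothetical sheet-numbering convention, e.g. "A-101 Ground Floor Plan.pdf".
# Part3's actual detection is AI-based and reads the drawing content; this toy
# version only illustrates the kind of tags produced (number, title, discipline).
DISCIPLINES = {"A": "Architectural", "S": "Structural", "M": "Mechanical", "E": "Electrical"}
PATTERN = re.compile(r"^(?P<disc>[A-Z])-(?P<num>\d{3})\s+(?P<title>.+)\.pdf$", re.IGNORECASE)

def tag_drawing(filename: str) -> dict | None:
    """Return drawing number, title and discipline parsed from a filename."""
    match = PATTERN.match(filename)
    if not match:
        return None  # a real system would fall back to reading the title block
    return {
        "number": f"{match['disc'].upper()}-{match['num']}",
        "title": match["title"].strip(),
        "discipline": DISCIPLINES.get(match["disc"].upper(), "Unknown"),
    }

print(tag_drawing("A-101 Ground Floor Plan.pdf"))
# {'number': 'A-101', 'title': 'Ground Floor Plan', 'discipline': 'Architectural'}
```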


Meanwhile, learn more about Part3’s AI capabilities, along with tonnes of other AI-powered tools, in AEC Magazine’s AI Spotlight Directory

Autodesk shows its AI hand

Autodesk’s AI story has matured. While past Autodesk University events focused on promises and prototypes, this year Autodesk showcased live tools, giving customers a clear view of how AI could soon reshape workflows across design and engineering, writes Greg Corke

At AU 2025, Autodesk took a significant step forward in its AI journey, extending far beyond the slide-deck ambitions of previous years.

During CEO Andrew Anagnost’s keynote, the company unveiled brand-new AI tools in live demonstrations using pre-beta software. It was a calculated risk — particularly in light of recent high-profile hiccups from Meta — but the reasoning was clear: Autodesk wanted to show it has tangible, functional AI technology and it will be available for customers to try soon.

The headline development is ‘neural CAD’, a completely new category of 3D generative AI foundation models that Autodesk says could automate up to 80–90% of routine design tasks, allowing professionals to focus on creative decisions rather than repetitive work. The naming is very deliberate, as Autodesk tries to differentiate itself from the raft of generic AEC-focused AI tools in development.



neural CAD AI models will be deeply integrated into BIM workflows through Autodesk Forma, and product design workflows through Autodesk Fusion. They will ‘completely reimagine the traditional software engines that create CAD geometry.’

Autodesk is also making big AI strides in other areas. Autodesk Assistant is evolving beyond its chatbot product support origins into a fully agentic AI assistant that can automate tasks and deliver insights based on natural-language prompts.

Big changes are also afoot in Autodesk’s AEC portfolio – developments that will have a significant impact on the future of Revit.

The big news was the release of Forma Building Design, a brand-new tool for LoD 200 detailed design (learn more in this AEC Magazine article). Autodesk also announced that its existing early-stage planning tool, Autodesk Forma, will be rebranded as Forma Site Design and Revit will gain deeper integration with the Forma industry cloud, becoming Autodesk’s first Connected client.

neural CAD

neural CAD marks a fundamental shift in Autodesk’s core CAD and BIM technology. As Anagnost explained, “The various brains that we’re building will change the way people interact with design systems.”

Unlike general-purpose large language models (LLMs) such as ChatGPT and Claude, or AI image generation models like Stable Diffusion and Nano Banana, neural CAD models are specifically designed for 3D CAD. They are trained on professional design data, enabling them to reason at both a detailed geometry level and at a systems and industrial process level.

neural CAD marks a big leap forward from Project Bernini, which Autodesk demonstrated at AU 2024. Bernini turned a text, sketch or point cloud ‘prompt’ into a simple mesh that was not well suited for further development in CAD. In contrast, neural CAD delivers ‘high quality’ ‘editable’ 3D CAD geometry directly inside Forma or Fusion, just like ChatGPT generates text and Midjourney generates pixels.


Autodesk CEO Andrew Anagnost joins experts on stage to live-demo upcoming AI software during the AU keynote

Autodesk has so far presented two types of neural CAD models: ‘neural CAD for geometry’, which is being used in Fusion and ‘neural CAD for buildings’, which is being used in Forma.

For Fusion, there are two AI model variants, as Tonya Custis, senior director, AI research, explained, “One of them generates the whole CAD model from a text prompt. It’s really good for more curved surfaces, product use cases. The second one, that’s for more prismatic sort of shapes. We can do text prompts, sketch prompts and also what I call geometric prompts. It’s more of like an auto complete, like you gave it some geometry, you started a thing, and then it will help you continue that design.”

On stage, Mike Haley, senior VP of research, demonstrated how neural CAD for geometry could be used in Fusion to automatically generate multiple iterations of a new product, using the example of a power drill.

“Just enter the prompts or even drawing and let the CAD engines start to produce options for you instantly,” he said. “Because these are first class CAD models, you now have a head start in the creation of any new product.”

It’s important to understand that the AI doesn’t just create dumb 3D geometry – neural CAD also generates the history and sequence of Fusion commands required to create the model. “This means you can make edits as if you modelled it yourself,” he said.

Meanwhile, in the world of BIM, Autodesk is using neural CAD to extend the capabilities of Forma Building Design to generate BIM elements.

The current aim is to enable architects to ‘quickly transition’ between early design concepts and more detailed building layouts and systems with the software ‘autocompleting’ repetitive aspects of the design.

Instead of geometry, ‘neural CAD for buildings’ focuses more on the spatial and physical relationships inherent in buildings, as Haley explained. “This foundation model rapidly discovers alignments and common patterns between the different representations and aspects of building systems.



“If I was to change the shape of a building, it can instantly recompute all the internal walls,” he said. “It can instantly recompute all of the columns, the platforms, the cores, the grid lines, everything that makes up the structure of the building. It can help recompute structural drawings.”

At AU, Haley demonstrated ‘Building Layout Explorer’, a new AI-driven feature coming to Forma Building Design. He presented an example of an architect exploring building concepts with a massing model, “As the architect directly manipulates the shape, the neural CAD engine responds to these changes, auto generating floor plan layouts,” he said.

But, as Haley pointed out, for the system to be truly useful the architect needs to have control over what is generated, and therefore be able to lock down certain elements, such as a hallway, or to directly manipulate the shape of the massing model.

“The software can re-compute the locations and sizes of the columns and create an entirely new floor layout, all while honouring the constraints the architect specified,” he said.
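
Autodesk has not revealed how this recomputation works internally. As a deliberately simplified, hypothetical sketch of the general idea, the function below regenerates a regular column grid along one axis of a resized footprint while honouring grid lines the architect has locked; the real system infers far richer building logic than this.

```python
import math

def regenerate_grid(length_m: float, max_bay_m: float = 9.0,
                    locked_lines: tuple[float, ...] = ()) -> list[float]:
    """Recompute grid line positions along one axis of a changed footprint.

    Locked lines (for example, either side of a hallway the architect has fixed)
    are kept exactly; the remaining spans are subdivided into equal bays no wider
    than max_bay_m. A toy stand-in for what the neural model actually infers.
    """
    anchors = sorted({0.0, length_m, *locked_lines})
    grid: list[float] = []
    for start, end in zip(anchors, anchors[1:]):
        bays = max(1, math.ceil((end - start) / max_bay_m))
        grid.extend(start + (end - start) * i / bays for i in range(bays))
    grid.append(length_m)
    return [round(x, 2) for x in grid]

# Footprint stretched from 30 m to 42 m, with grid lines locked at 12 m and 18 m
print(regenerate_grid(42.0, locked_lines=(12.0, 18.0)))
# [0.0, 6.0, 12.0, 18.0, 26.0, 34.0, 42.0]
```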

This feels like a pivotal moment in Autodesk’s AI journey, as the company moves beyond ambitions and experimentation into production-ready AI that is deeply integrated into its core software

Of course, it’s still very early days for neural CAD and, in Forma, ‘Building Layout Explorer’ is just the beginning.

Haley alluded to expanding to other disciplines within AEC, “Imagine a future where the software generates additional architectural systems like these structural engineering plans or plumbing, HVAC, lighting systems and more.”

In the future, neural CAD in Forma will also be able to handle more complexity, as Custis explains. “People like to go between levels of detail, and generative AI models are great for that because they can translate between each other. It’s a really nice use case, and there will definitely be more levels of detail. We’re currently at LoD 200.”

The training challenge

neural CAD models are trained on the typical patterns of how people design. “They’re learning from 3D design, they’re learning from geometry, they’re learning from shapes that people typically create, components that people typically use, patterns that typically occur in buildings,” said Haley.

In developing these AI models, one of the biggest challenges for Autodesk has been the availability of training data. “We don’t have a whole internet source of data like any text or image models, so we have to sort of amp up the science to make up for that,” explained Custis.

For training, Autodesk uses a combination of synthetic data and customer data. Synthetic data can be generated in an ‘endless number of ways’, said Custis, including a ‘brute force’ approach using generative design or simulation.
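
Autodesk has not detailed its data pipeline, but the ‘brute force’ idea of sweeping a parametric generator to produce labelled training samples can be sketched generically. Everything below is invented for illustration; the parameters, ranges and labels bear no relation to Autodesk’s actual training data.

```python
import random

def synthetic_floorplate(rng: random.Random) -> dict:
    """Generate one labelled synthetic sample: a rectangular floorplate with a core.

    A real pipeline would drive a generative design or simulation engine and export
    full geometry; here each sample is just a small parameter dictionary.
    """
    width = rng.uniform(20.0, 60.0)       # metres (hypothetical range)
    depth = rng.uniform(15.0, 40.0)
    core_ratio = rng.uniform(0.08, 0.15)  # core area as a share of the floorplate
    return {
        "width_m": round(width, 1),
        "depth_m": round(depth, 1),
        "core_area_m2": round(width * depth * core_ratio, 1),
        "bay_spacing_m": rng.choice([6.0, 7.5, 9.0]),
    }

rng = random.Random(42)
dataset = [synthetic_floorplate(rng) for _ in range(10_000)]
print(len(dataset), dataset[0])
```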


Tonya Custis, senior director, AI research, Autodesk

Customer data is typically used later on in the training process. “Our models are trained on all data we have permission to train on,” said Amy Bunszel, EVP, AEC.

But customer data is not always perfect, which is why Autodesk also commissions designers to model things for them, generating what chief scientist Daron Green describes as gold standard data. “We want things that are fully constrained, well annotated to a level that a customer wouldn’t [necessarily] do, because they just need to have the task completed sufficiently for them to be able to build it, not for us to be able to train against,” he said.

Of course, it’s still very early days for neural CAD, and Autodesk plans to improve and expand the models. “These are foundation models, so the idea is we train one big model and then we can task adapt it to different use cases using reinforcement learning, fine tuning. There’ll be improved versions of these models, but then we can adapt them to more and more different use cases,” said Custis. In the future, customers will be able to customise the neural CAD foundation models by tuning them to their organisation’s proprietary data and processes. This could be sandboxed, so no data is incorporated into the global training set unless the customer explicitly allows it.

“Your historical data and processes will be something you can use without having to start from scratch again and again, allowing you to fully harness the value locked away in your historical digital data, creating your own unique advantages through models that embody your secret sauce or your proprietary methods,” said Haley.

Agentic AI: Autodesk Assistant

When Autodesk first launched Autodesk Assistant, it was little more than a natural language chatbot to help users get support for Autodesk products.

Now it’s evolved into what Autodesk describes as an ‘agentic AI partner’ that can automate repetitive tasks and help ‘optimise decisions in real time’ by combining context with predictive insights.

Autodesk demonstrated how in Revit, Autodesk Assistant could be used to quickly calculate the window to wall ratio on a particular façade, then replace all the windows with larger units. The important thing to note here is that everything is done through natural language prompts, without the need to click through multiple menus and dialogue boxes.
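
The window to wall ratio itself is simple arithmetic: total glazed area divided by gross exterior wall area. The value of the Assistant lies in extracting those quantities from the model via a prompt. The sketch below shows only the underlying calculation, with invented areas.

```python
def window_to_wall_ratio(window_areas_m2: list[float], gross_wall_area_m2: float) -> float:
    """WWR = total glazed area / gross exterior wall area for the facade."""
    return sum(window_areas_m2) / gross_wall_area_m2

# Invented example: twelve 2.4 m2 windows on a 180 m2 south facade
windows = [2.4] * 12
print(f"WWR = {window_to_wall_ratio(windows, 180.0):.1%}")  # WWR = 16.0%
```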


‘Building Layout Explorer’, a new AI-driven feature coming to Forma Building Design
Autodesk Assistant in Revit enables teams to quickly surface project insights using natural language prompts, here showing how it could be used to quickly calculate the window to wall ratio on a particular façade, then replace all the windows with larger units

Autodesk Assistant can also help with documentation in Revit, making it easier to use drawing templates, populate title blocks and automatically tag walls, doors and rooms. While this doesn’t yet rival the auto-drawing capabilities of Fusion, when asked about bringing similar functionality to Revit, Bunszel noted, ‘We’re definitely starting to explore how much we can do.’

Autodesk also demonstrated how Autodesk Assistant can be used to automate manual compliance checking in AutoCAD, a capability that could be incredibly useful for many firms.

“You’ll be able to analyse a submission against your drawing standards and get results right away, highlighting violations in layers, lines, text and dimensions,” said Racel Amour, head of generative AI, AEC.

Meanwhile, in Civil 3D it can help ensure civil engineering projects comply with regulations for safety, accessibility and drainage, “Imagine if you could simply ask the Autodesk Assistant to analyse my model and highlight the areas that violate ADA regulations and give me suggestions on how to fix it,” said Amour.

So how does Autodesk ensure that Assistant gives accurate answers? Anagnost explained that it takes into account the context that’s inside the application and the context of work that users do.

“If you just dumped Copilot on top of our stuff, the probability that you’re going to get the right answer is just a probability. We add a layer on top of that that narrows the range of possible answers.”

“We’re building that layer to make sure that the probability of getting what you want isn’t 70%, it’s 99.99 something percent,” he said.

While each Autodesk product will have its own Assistant, the foundation technology has also been built with agent-to-agent communication in mind – the idea being that one Assistant can ‘call’ another Assistant to automate workflows across products and, in some cases, industries.

“It’s designed to do three things: automate the manual, connect the disconnected, and deliver real time insights, freeing your teams to focus on their highest value work,” said CTO Raji Arasu.


Autodesk CTO Raji Arasu

In the context of a large hospital construction project, Arasu demonstrated how a general contractor, manufacturer, architect and cost estimator could collaborate more easily through natural language in Autodesk Assistant. She showed how teams across disciplines could share and sync select data between Revit, Inventor and Power BI, and manage regulatory requirements more efficiently by automating routine compliance tasks. “In the future, Assistant can continuously check compliance in the background. It can turn compliance into a constant safeguard, rather than just a one-time step process,” she said.

Arasu also showed how Assistant can support IT administration — setting up projects, guiding managers through configuring Single Sign-On (SSO), assigning Revit access to multiple employees, creating a new project in Autodesk Construction Cloud (ACC), and even generating software usage reports with recommendations for optimising licence allocation.

Agent-to-agent communication is being enabled by Model Context Protocol (MCP) servers and Application Programming Interfaces (APIs), including the AEC data model API, that tap into Autodesk’s cloud-based data stores.

APIs will provide the access, while Autodesk MCP servers will orchestrate and enable Assistant to act on that data in real time.

As MCP is an open standard that lets AI agents securely interact with external tools and data, Autodesk will also make its MCP servers available for third-party agents to call.
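
Autodesk has not published its MCP tool catalogue, but because MCP is an open JSON-RPC 2.0 protocol, the general shape of one agent invoking a tool on a server can be sketched. The tool name and arguments below are invented for illustration and are not real Autodesk endpoints.

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP 'tools/call' request (JSON-RPC 2.0) for a hypothetical tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical: an assistant asking an AEC data model server for wall elements
print(mcp_tool_call(1, "query_elements", {"category": "Walls", "project": "Hospital-West"}))
```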

All of this will naturally lead to an increase in API calls, which were already up 43% year on year even before AI came into the mix. To pay for this, Autodesk is introducing a new usage-based pricing model for customers with product subscriptions, as Arasu explains: “You can continue to access these select APIs with generous monthly limits, but when usage goes past those limits, additional charges will apply.”
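
Autodesk has not published rates or allowances, so the figures below are entirely hypothetical, but the structure of usage-based billing is simple to model:

```python
def monthly_api_charge(calls_used: int, included_calls: int, rate_per_call: float) -> float:
    """Charge only for calls beyond the included monthly allowance."""
    return max(0, calls_used - included_calls) * rate_per_call

# Entirely hypothetical figures: 10,000 calls included, $0.002 per additional call
print(monthly_api_charge(calls_used=60_000, included_calls=10_000, rate_per_call=0.002))  # 100.0
```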

But this has raised understandable concerns among customers about the future, including potential cost increases and whether these could ultimately limit design iterations.

The human in the loop

Autodesk is designing its AI systems to assist and accelerate the creative process, not replace it. The company stresses that professionals will always make the final decisions, keeping a human firmly in the loop, even in agent-to-agent communications, to ensure accountability and design integrity.

“We are not trying to, nor do we aspire to, create an answer,” says Anagnost. “What we’re aspiring to do is make it easy for the engineer, the architect, the construction professional – reconstruction professional in particular – to evaluate a series of options, make a call, find an option, and ultimately be the arbiter and person responsible for deciding what the actual final answer is.”

AI computation

It’s no secret that AI requires substantial processing power. Autodesk trains all its AI models in the cloud, and while most inferencing — where the model applies its knowledge to generate real-world results — currently happens in the cloud, some of this work will gradually move to local devices.

This approach not only helps reduce costs (since cloud GPU hours are expensive) but also minimises latency when working with locally cached data.


With Project Forma Sketch, an architect can generate 3D models in Forma by sketching out simple massing designs with a digital pencil and combining that with speech.

AI research

Autodesk also gave a sneak peek into some of its experimental AI research projects. With Project Forma Sketch, an architect can generate 3D models in Forma by sketching out simple massing designs with a digital pencil and combining that with speech. In this example, the neural CAD foundation model interacts with large language models to interpret the stream of information.

Elsewhere, Amour showed how Pointfuse in ReCap Pro is building on its capability to convert point clouds into segmented meshes for model coordination and clash detection in Revit. “We’re launching a new AI powered beta that will recognise objects directly from scans, paving the way for automated extraction, for building retrofits and renovations,” she said.

Autodesk has also been working with global design, engineering, and consultancy firm Arcadis to pilot a new technology that uses AI to see inside walls to make it easier and faster to retrofit existing buildings.

Instead of destructive surveys, where walls are torn down, the AI uses multimodal data – GIS, floor plans, point clouds, thermal imaging, and radio frequency (RF) scans – to predict hidden elements, such as mechanical systems, insulation, and potential damage.


The AI-assisted future

AU 2025 felt like a pivotal moment in Autodesk’s AI journey. The company is now moving beyond ambitions and experimentation into a phase where AI is becoming deeply integrated into its core software.

With its neural CAD and Autodesk Assistant branded functionality, AI will soon be able to generate fully editable CAD geometry, automate repetitive tasks, and deliver ‘actionable insights’ across both AEC and product development workflows.

As Autodesk stresses, this is all being done while keeping humans firmly in the loop, ensuring that professionals remain the final decision-makers and retain accountability for design outcomes.

Importantly, customers do not need to adopt brand new design tools to get on board with Autodesk AI. While neural CAD is being integrated into Forma and Fusion, users of traditional desktop CAD/BIM tools can still benefit through Autodesk Assistant, which will soon be available in Revit, Civil 3D, AutoCAD, Inventor and others.

With Autodesk Assistant, the ability to optimise and automate workflows using natural language feels like a powerful proposition, but as the technology evolves, the company faces the challenge of educating users on its capabilities — and its limitations.

Meanwhile, data interoperability remains front and centre, with Autodesk routing everything through the cloud and using MCP servers and APIs to enable cross-product and even cross-discipline workflows.

It’s easy to imagine how agent-to-agent communication might occur within the Autodesk world, but AEC workflows are fragmented, and it remains to be seen how this will play out with third parties.

Of course, as with other major design software providers, fully embracing AI means fully committing to the cloud, which will be a leap of faith for many AEC firms.

Among customers we have spoken with, there remain genuine concerns about becoming locked into the Autodesk ecosystem, as well as the potential for rising costs, particularly related to increased API usage. ‘Generous monthly limits’ might not seem so generous once the frequency of API calls increases, as it inevitably will in an iterative design process. It would be a real shame if firms end up actively avoiding using these powerful tools because of budgetary constraints.

Above all, AU is sure to have given Autodesk customers a much clearer idea of Autodesk’s long-term vision for AI-assisted design. There’s huge potential for Autodesk Assistant to grow into a true AI agent while neural CAD foundation models will continue to evolve, handling greater complexity, and blending text, speech and sketch inputs to further slash design times.

We’re genuinely excited to see where this goes, especially as Autodesk is so well positioned to apply AI throughout the entire design build process.


Main image: Mike Haley, senior VP of research, presents the AI keynote at Autodesk University 2025  

KREOD to bring “aerospace-grade precision” to AECO

KREODx platform aims to redefine how buildings are designed, engineered, manufactured, assembled, operated, and maintained

London-based KREOD is planning to bring “aerospace-grade precision” to the built environment with its new KREODx platform, which has just launched in beta.

The software harnesses Parasolid from Siemens Digital Industries Software, a geometric modelling kernel that is typically found inside mechanical CAD tools such as Dassault Systèmes Solidworks, Siemens Solid Edge, and Siemens NX.

The software combines Design for Manufacture and Assembly (DfMA) principles with a building-centric approach to Product Lifecycle Management (PLM) — a process commonly used in manufacturing to manage a product’s data, design, and development throughout its entire lifecycle.

KREODx is said to be powered by “Intelligent Automation” with parametric design and engineering workflows that “eliminate errors and accelerate delivery”.

The software offers full support for Bill of Materials (BoM) to deliver what the company describes as a single source of truth for costs, materials, and procurement, giving transparency from model to assembly.
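
KREOD has not published its data model. Purely to illustrate what a BoM-driven single source of truth implies, the sketch below rolls quantities and costs up from parts to an assembly; the assembly, quantities and prices are invented.

```python
from dataclasses import dataclass, field

@dataclass
class BomItem:
    """One line in a bill of materials: a part or a sub-assembly."""
    name: str
    quantity: int = 1
    unit_cost: float = 0.0              # cost of the part itself, if it is a leaf
    children: list["BomItem"] = field(default_factory=list)

    def rolled_up_cost(self) -> float:
        """Total cost of this item, including everything beneath it."""
        own = self.unit_cost + sum(c.rolled_up_cost() for c in self.children)
        return self.quantity * own

# Invented facade cassette assembly
cassette = BomItem("Facade cassette", quantity=120, children=[
    BomItem("Aluminium frame", unit_cost=85.0),
    BomItem("Glazing unit", unit_cost=210.0),
    BomItem("Fixing bracket", quantity=4, unit_cost=3.5),
])
print(cassette.rolled_up_cost())  # 37080.0 for the full run of 120 cassettes
```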

According to the company, KREODx is also aligned with the circular economy, extending building lifespans, reducing waste, and enabling re-use and adaptability over time.



Chaos V-Ray to support AMD GPUs

The photorealistic rendering software will now work on AMD hardware, including the AMD Ryzen AI Max Pro processor with up to 96 GB of graphics memory

Chaos V-Ray will soon support AMD GPUs, so users of the photorealistic rendering software can choose from a wider range of graphics hardware including the AMD Radeon Pro W7000 series and the AMD Ryzen AI Max Pro processor that has an integrated Radeon GPU.

Until now, V-Ray’s GPU renderer has been limited to Nvidia RTX GPUs via the CUDA platform, while its CPU renderer has long worked with processors from both Intel and AMD.

Chaos plans to roll out the changes publicly in every edition of V-Ray, including those for 3ds Max, SketchUp, Revit, Rhino, Maya, and Blender.

At Autodesk University last month, both Dell and HP showcased V-Ray GPU running on AMD GPUs – Dell on a desktop workstation with a discrete AMD Radeon Pro W7600 GPU and HP on an HP ZBook Ultra G1a with the new AMD Ryzen AI Max+ 395 processor, where up to 96 GB of the 128 GB unified memory can be allocated as VRAM.



“[With the AMD Ryzen AI Max+ 395] you can load massive scenes without having to worry so much about memory limitations,” says Vladimir Koylazov, head of innovation, Chaos. “We have a massive USD scene that we use for testing, and it was really nice to see it actually being rendered on an AMD [processor]. It wouldn’t be possible on [most] discrete GPUs, because they don’t normally have that much memory.”

This new capability has been made possible through AMD HIP (Heterogeneous-Compute Interface for Portability) — an open-source toolkit that allows developers to port CUDA-based GPU applications to run on AMD hardware without the need to create and maintain a new code base.

“HIP handles complicated pieces of code, like V-Ray GPU, a lot better than OpenCL used to do,” says Koylazov. “Everything we support in V-Ray GPU on other platforms is now supported on AMD GPUs.”

Chaos isn’t alone in embracing AMD GPUs. Earlier this year, product design focused viz tool KeyShot also added support, which we put to the test in our HP ZBook Ultra G1a review.


Trimble brings collaboration directly into SketchUp

3D modelling tool now offers private sharing control, in-app commenting, and more

Trimble has built a new suite of collaboration tools directly into the heart of SketchUp for Desktop, alongside improvements to documentation, site context, and visualisation.

The latest release of the popular push/pull 3D modelling software introduces private sharing control, in-app commenting, and real-time viewing, allowing designers to collect feedback from clients and stakeholders without leaving the SketchUp environment.

“Great designs are shaped by conversation, iteration and shared insight,” said Sandra Winstead, senior director of product management, architecture and design at Trimble. “Rather than jumping between email threads or third-party tools to hold conversations, collaborate and make design decisions, we’ve built collaboration directly into SketchUp.”

With these new tools, designers can securely share models with selected stakeholders, controlling who can view and comment. Feedback is attached directly to 3D geometry, ensuring comments are linked to the right part of the model.


All collaborators see updates instantly, creating what Trimble describes as a shared space for real-time design conversations. Cursor and camera tracking features also allow clients and colleagues to follow along during live presentations.

Elsewhere, SketchUp now includes professional 2D drafting tools in LayOut, the companion application used for presentations and documentation.

According to Trimble, users gain access to more intuitive and precise drawing features for common documentation tasks, along with new scrapbooks offering standard architectural graphics such as doors and windows for scaled 2D composition.

An enhanced DWG export workflow helps ensure that SketchUp geometry and Tags are accurately preserved when transferring designs from 3D SketchUp into 2D CAD or BIM tools.

Trimble has also upgraded Scan Essentials, the SketchUp plug-in for turning point cloud data into 3D models. The latest release makes it easier to incorporate existing buildings into terrain as pre-built 3D geometry, supporting more accurate visualisation, climate analysis, and site planning.

SketchUp’s visualisation capabilities have been further refined, offering greater stylistic control and a broader set of rendering options, including Color Ambient Occlusion, Ambient Occlusion Scale Multiplier, and Invert Roughness.

Finally, for AI-assisted rendering, a new Diffusion Labs update delivers higher-fidelity results and greater creative control over AI-generated imagery.

Viktor to simplify engineering automation with App Builder

New AI tool enables engineers without coding skills to build custom tools ‘in minutes’

Viktor, a specialist in AI for engineering automation, has launched App Builder, a new tool designed to enable engineers without coding experience to automate tasks and build tools ‘in minutes’.

App Builder is designed not only to accelerate workflows but also to improve consistency and quality by eliminating the need to retype data or copy and paste between Excel and design tools, while ensuring transparency, governance, and compliance.

Automation tasks include design checks, selection workflows, and calculations through a ‘user-friendly’ interface that can be shared and reused across projects.
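
Viktor’s own SDK is not shown here. As a generic example of the kind of design check an engineer might wrap in such a tool, the sketch below tests the mid-span deflection of a simply supported beam under a uniform load against a span/250 limit, using the standard 5wL⁴/(384EI) formula with invented inputs.

```python
def udl_deflection_ok(w_kn_per_m: float, span_m: float, e_gpa: float,
                      i_cm4: float, limit_ratio: float = 250.0) -> bool:
    """Check mid-span deflection of a simply supported beam under a uniform load.

    delta = 5 * w * L^4 / (384 * E * I), compared against a span/limit_ratio cap.
    Inputs are hypothetical; a production check would also cover the strength and
    serviceability cases required by the relevant design code.
    """
    w = w_kn_per_m * 1e3   # kN/m  -> N/m
    e = e_gpa * 1e9        # GPa   -> Pa
    i = i_cm4 * 1e-8       # cm^4  -> m^4
    delta = 5 * w * span_m ** 4 / (384 * e * i)
    return delta <= span_m / limit_ratio

# Hypothetical inputs: 12 kN/m over a 6 m span, steel (E = 210 GPa), I = 8,356 cm^4
print(udl_deflection_ok(w_kn_per_m=12.0, span_m=6.0, e_gpa=210.0, i_cm4=8356.0))  # True
```

Wrapped in a shareable interface, a check like this is exactly the sort of repetitive calculation the company says App Builder is intended to automate.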

App Builder can also integrate with AEC platforms and tools such as Autodesk Construction Cloud (ACC), Autodesk Platform Services (APS), Plaxis, Rhino, and Revit.

According to Viktor, during beta testing, 80% of engineers created a working tool in under an hour, suggesting a quick learning curve. Viktor cites a digital transformation lead in the industrial sector who used App Builder to create multiple tools — including ones for pump sizing and tank selection — each within minutes.

Firms including Vinci, Arcadis, Jacobs, and WSP have already been using App Builder.

Amulet Hotkey boosts 1:1 datacentre workstations

CoreStation HX2000 gains Intel’s Core Ultra 9 285H processor, while forthcoming CoreStation HX3000 promises even greater performance

Amulet Hotkey has updated its CoreStation HX2000 datacentre remote workstation with a new Intel Core Ultra 9 285H processor option, delivering higher clock speeds and built-in NPU AI acceleration.

The CoreStation HX2000 is built around a 5U rack mounted enclosure that can accommodate up to 12 single-width workstation nodes that can be removed, replaced, or upgraded.

Each workstation node is accessed by a single user over a 1:1 connection and can be configured with a choice of discrete MXM laptop GPUs – the Nvidia RTX A1000 (4 GB) or Nvidia RTX 2000 Ada (8 GB) – making it well suited to mainstream CAD and BIM workflows.

Features include redundant power and cooling, hot-swappable components and ‘full remote system management’ including core management capabilities such as secure remote access, power control, BIOS-level KVM access, and system-wide firmware updates.

The Intel Core Ultra 9 285H features six performance (P) cores and eight efficient (E) cores, and delivers clock speeds of up to 5.4 GHz. Although the processor is typically used in laptops, Amulet Hotkey says its datacentre integration provides greater power and cooling headroom than a mobile platform.

Amulet Hotkey is also developing a new CoreStation HX3000, due to launch in Q1 2026. Built around the same 5U enclosure, the HX3000 will feature full Intel Core desktop processors, up to the Intel Core Ultra 9 285K, alongside low-profile Nvidia RTX and Intel Arc Pro GPUs.

With eight P-cores, 16 E-cores, and a higher TDP, the Core Ultra 9 285K (read our review) promises a significant uplift in multi-threaded workflows.

Supported GPUs such as the Nvidia RTX 2000 Ada (16 GB) (read our review), RTX 4000 SFF Ada (20 GB) (read our review), and RTX 2000 Pro Blackwell (16 GB) are expected to deliver major performance gains over their MXM laptop counterparts – not only offering more speed but also significantly more memory for handling larger datasets and demanding visualisation tools such as Enscape, Lumion, and Twinmotion.

The CoreStation HX is designed, built, and manufactured in the UK by Amulet Hotkey.


The CoreStation HX2000 is purpose built for the datacentre with redundant power and cooling, hot-swappable components and ‘full remote system management’

Chaos: from pixels to prompts

Chaos is blending generative AI with traditional visualisation, rethinking how architects explore, present and refine ideas using tools like Veras, Enscape, and V-Ray, writes Greg Corke

From scanline rendering to photorealism, real-time viz to real-time ray tracing, architectural visualisation has always evolved hand in hand with technology.

Today, the sector is experiencing what is arguably its biggest shift yet: generative AI. Tools such as Midjourney, Stable Diffusion, Flux, and Nano Banana are attracting widespread attention for their ability to create compelling, photorealistic visuals in seconds — from nothing more than a simple prompt, sketch, or reference image.

The potential is enormous, yet many architectural practices are still figuring out how to properly embrace this technology, navigating practical, cultural, and workflow challenges along the way.

The impact on architectural visualisation software as we know it could be huge. But generative AI also presents a huge opportunity for software developers.



Like some of its peers, Chaos has been gradually integrating AI-powered features into its traditional viz tools, including Enscape and V-Ray. Earlier this year, however, it went one step further by acquiring EvolveLAB and its dedicated AI rendering solution, Veras.

Veras allows architects to take a simple snapshot of a 3D model or even a hand drawn sketch and quickly create ‘AI-rendered’ images with countless style variations. Importantly, the software is tightly integrated with CAD / BIM tools like SketchUp, Revit, Rhino, Archicad and Vectorworks, and offers control over specific parts within the rendered image.

With the launch of Veras 3.0, the software’s capabilities now extend to video, allowing designers to generate short clips featuring dynamic pans and zooms, all at the push of a button.

“Basically, [it takes] an image input for your project, then generates a five second video using generative AI,” explains Bill Allen, director of products, Chaos. “If it sees other things, like people or cars in the scene, it’ll animate those,” he says.

This approach can create compelling illusions of rotation or environmental activity. A sunset prompt might animate lighting changes, while a fireplace in the scene could be made to flicker. But there are limits. “In generative AI, it’s trying to figure out what might be around the corner [of a building], and if there’s no data there, it’s not going to be able to interpret it,” says Allen.

Chaos is already looking at ways to solve this challenge of showcasing buildings from multiple angles. “One of the things we think we could do is take multiple shots – one shot from one angle of the building and another one – and then you can interpolate,” says Allen.


Model behaviour

Veras uses Stable Diffusion as its core ‘render engine’. As the generative AI model has advanced, newer versions of Stable Diffusion have been integrated into Veras, improving both realism and render speed, and allowing users to achieve more detailed and sophisticated results.

“We’re on render engine number six right now,” says Allen. “We still have render engine four, five and six available for you to choose from in Veras.”

But Veras does not necessarily need to be tied to a specific generative AI model. In theory it could evolve to support Flux, Nano Banana or whatever new or improved model variant may come in the future.

But, as Allen points out, the choice of model isn’t just down to the quality of the visuals it produces. “It depends on what you want to do,” he says. “One of the reasons that we’re using Stable Diffusion right now instead of Flux is because we’re getting better geometry retention.”
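
Veras’s pipeline is proprietary, but the underlying pattern of conditioning a diffusion model on a viewport capture, so that the output respects the modelled geometry, can be sketched with the open source diffusers library. The checkpoint, file names and strength value below are placeholders rather than anything Veras actually uses.

```python
import torch
from diffusers import AutoPipelineForImage2Image
from PIL import Image

# Placeholder checkpoint; this is not the model Veras ships with.
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Snapshot of the 3D model exported from the CAD / BIM tool (placeholder file name)
viewport = Image.open("sketchup_viewport.png").convert("RGB")

# A low strength keeps the massing and perspective of the input largely intact,
# which is the geometry-retention trade-off discussed above.
result = pipe(
    prompt="contemporary timber-clad office building, overcast daylight, photorealistic",
    image=viewport,
    strength=0.35,
    guidance_scale=7.0,
).images[0]
result.save("ai_rendered.png")
```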

One thing that Veras doesn’t yet have out of the box is the ability for customers to train the model using their own data, although as Allen admits, “That’s something we would like to do.”

In the past Chaos has used LoRAs (Low-Rank Adaptations) to fine-tune the AI model for certain customers in order to accurately represent specific materials or styles within their renderings.
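
In diffusers terms, applying such a fine-tune amounts to loading a small adapter on top of a base checkpoint. The sketch below is generic, not Chaos’s tooling, and both the checkpoint and the LoRA repository name are placeholders.

```python
from diffusers import AutoPipelineForText2Image

# Placeholder identifiers: neither the base checkpoint nor the LoRA repo is a real Chaos asset.
pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-2-1")
pipe.load_lora_weights("my-practice/facade-style-lora")  # small adapter trained on a firm's own imagery
pipe.fuse_lora(lora_scale=0.8)  # blend the style adaptation into the base weights

image = pipe("street-level view of a brick-faced residential block, golden hour").images[0]
image.save("styled_render.png")
```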

Roderick Bates, head of product operations, Chaos, imagines that the demand for fine tuning will go up over time, but there might be other ways to get the desired outcome, he says. “One of the things that Veras does well is that you can adjust prompts, you can use reference images and things like that to kind of hone in on style.”


Chaos Veras 3.0 – still #1
Chaos Veras 3.0 – still #2

Post-processing

While Veras experiments with generative creation, Chaos is also exploring how AI can be used to refine output from its established viz tools using a variety of AI post-processing techniques.

Chaos AI Upscaler, for example, enlarges render output by up to four times while preserving photorealistic quality. This means scenes can be rendered at lower resolutions (which is much quicker), then at the click of a button upscaled to add more detail.

While AI upscaling technology is widely available – both online and in generic tools like Photoshop – Chaos AI Upscaler benefits from being accessible at the click of a button, directly inside viz tools like Enscape that architects already use. Bates points out that if an architect uses another tool for this process, they must download the rendered image first, then upload it to another place, which fragments the workflow. “Here, it’s all part of an ecosystem,” he explains, adding that it also avoids the need for multiple software subscriptions.

Chaos is also applying AI in more intelligent ways, harnessing data from its core viz tools. Chaos AI Enhancer, for example, can improve rendered output by refining specific details in the image. This is currently limited to humans and vegetation, but Chaos is looking to extend this to building materials.

“You can select different genders, different moods, you can make a person go from happy to sad,” says Bates, adding that all of this can be done through a simple UI.

There are two major benefits: first, you don’t have to spend time searching for a custom asset that may or may not exist and then have to re-render; second, you don’t need highly detailed 3D asset models to achieve the desired results, which would normally require significant computational power, or may not even be possible in a tool like Enscape.

With Veras 3.0, the software’s capabilities now extend to video, allowing designers to generate short clips featuring dynamic pans and zooms, all at the push of a button

The real innovation lies in how the software applies these enhancements. Instead of relying on the AI to interpret and mask off elements within an image, Chaos brings this information over from the viz tool directly. For example, output from Enscape isn’t just a dumb JPG — each pixel carries ‘voluminous metadata’, so the AI Enhancer automatically knows that a plant is a plant, or a human is a human. This makes selections both easy and accurate.
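
Chaos has not documented the metadata format, but the principle of using per-pixel object IDs supplied by the renderer, rather than asking the AI to segment the image, can be illustrated with plain NumPy. The ID values and the ‘enhancement’ below are invented; a real enhancer would hand the masked region to a generative model.

```python
import numpy as np

# rgb: H x W x 3 rendered image; object_ids: H x W integer buffer exported by the
# renderer alongside the beauty pass. All ID values here are invented.
VEGETATION_IDS = {12, 13, 14}

def enhance_vegetation(rgb: np.ndarray, object_ids: np.ndarray, boost: float = 1.15) -> np.ndarray:
    """Apply an adjustment only to pixels the renderer tagged as vegetation.

    A real enhancer would pass the masked region to a generative model; the
    'enhancement' here is just a brightness lift to show the masking mechanism.
    """
    mask = np.isin(object_ids, list(VEGETATION_IDS))
    out = rgb.astype(np.float32)
    out[mask] = np.clip(out[mask] * boost, 0, 255)
    return out.astype(np.uint8)

rgb = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
ids = np.random.randint(0, 20, (720, 1280), dtype=np.int32)
print(enhance_vegetation(rgb, ids).shape)  # (720, 1280, 3)
```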

As it stands, the workflow is seamless: a button click in Enscape automatically sends the image to the cloud for enhancement.

But there’s still room for improvement. Currently, each person or plant must be adjusted individually, but Chaos is exploring ways to apply changes globally within the scene.

Chaos AI Enhancer was first introduced in Enscape in 2024 and is now available in Corona and V-Ray 7 for 3ds Max, with support for additional V-Ray integrations coming soon.

AI materials

Chaos is also extending its application of AI into materials, allowing users to generate render-ready materials from a simple image. “Maybe you have an image from an existing project, maybe you have a material sample you just took a picture of,” says Bates. “With the [AI Material Generator] you can generate a material that has all the appropriate maps.”

Initially available in V-Ray for 3ds Max, the AI Material Generator is now being rolled out to Enscape. In addition, a new AI Material Recommender can suggest assets from the Chaos Cosmos library, using text prompts or visual references to help make it faster and easier to find the right materials.

Cross-pollination

Chaos is uniquely positioned within the design visualisation software landscape. Through Veras, it offers powerful one-click AI image and video generation, while tools like Enscape and V-Ray use AI to enhance classic visualisation outputs. This dual approach gives Chaos valuable insight into how AI can be applied across the many stages of the design process, and it will be fascinating to see how ideas and technologies start to cross-pollinate between these tools.

A deeper question, however, is whether 3D models will always be necessary. “We used to model to render, and now we render to model,” replies Bates, describing how some firms now start with AI images and only later build 3D geometry.

“Right now, there is a disconnect between those two workflows, between that pure AI render and modelling workflow – and those kind of disconnects are inefficiencies that bother us,” he says.

For now, 3D models remain indispensable. But the role of AI — whether in speeding up workflows, enhancing visuals, or enabling new storytelling techniques — is growing fast. The question is not if, but how quickly, AI will become a standard part of every architect’s viz toolkit.

Egnyte puts AEC AI Agents to work

Agents extract details from specification files and deliver AI guidance for building code compliance

Egnyte has embedded its first ‘secure, domain-specific’ AI agents within its platform to target some of the most time-consuming and costly parts of the AEC process, from bid to completion.

The Specifications Analyst and Building Code Analyst are designed to extract details from large specification files and quickly deliver AI guidance for building code compliance.

“These tools enable customers to take advantage of the power of AI without having to move their data and potentially expose it to security, compliance, and governance risks,” said Amrit Jassal, co-founder and CTO at Egnyte. “The AEC industry relies heavily on complex, content-intensive documents to make informed decisions throughout the project lifecycle, and a single error in a spec sheet or misinterpretation of a building code can lead to significant project delays and cost overruns. These AEC AI agents fundamentally reduce project risk and help firms to deliver better, more profitable outcomes.”



According to Egnyte, the Specifications Analyst allows users to transform any size specification document or multiple documents into source data that delivers fast and useful answers. Users can apply smart filters, including table of contents and materials, to quickly locate key sections and aggregate extracted spec data across the spec divisions.

Meanwhile, the Building Code Analyst is designed to consolidate disparate codebooks (i.e. state, county, and municipality) into a unified source of truth. Egnyte explains that the agent enables users to quickly find, compare, and check code requirements across relevant codebooks and produce consistent, useful AI-powered answers.

The agent instantly surfaces key passages with links to the relevant source text and automatically flags overlapping or contradictory code provisions, even providing the ability to include previous clarifications to speed up the resolution of such issues.
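
Egnyte has not described its implementation. As a toy sketch of the overlap-flagging idea, the snippet below pairs provisions from two invented codebooks that appear to address the same topic, using TF-IDF similarity so a reviewer can compare them side by side.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented example provisions; a real system works on full state, county and city codebooks.
state_code = [
    "Corridors serving an occupant load of 30 or more shall be at least 44 inches wide.",
    "Exit doors shall swing in the direction of egress travel when serving 50 or more occupants.",
]
city_code = [
    "Minimum corridor width shall be 48 inches where the occupant load exceeds 30.",
    "Guards shall be at least 42 inches high at open-sided walking surfaces.",
]

vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform(state_code + city_code)
sims = cosine_similarity(matrix[: len(state_code)], matrix[len(state_code):])

for i, row in enumerate(sims):
    j = row.argmax()
    if row[j] > 0.2:  # arbitrary threshold: likely the same topic, worth a side-by-side review
        print(f"Possible overlap:\n  state: {state_code[i]}\n  city:  {city_code[j]}")
```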

Egnyte states that all of its AI agents have access to content in the Egnyte repository while preserving its security, compliance, and data governance.

The agents also have access to data sources on the internet. According to the company, this helps ensure their outputs reflect the latest updates and amendments to building codes and other relevant information without compromising data saved in Egnyte repositories.

Autodesk unleashes neural CAD

3D generative AI foundation models coming to Fusion and Forma to automate design

Autodesk has introduced neural CAD, a new category of 3D generative AI foundation models coming to Fusion and Forma, which the company says will “completely reimagine the traditional software engines that create CAD geometry” and “automate 80 to 90% of what you [designers] typically do.”

Unlike general-purpose large language models (LLMs) such as ChatGPT, Gemini, and Claude, neural CAD models are trained on professional design data, enabling them to reason at both a detailed geometry level and at a systems and industrial process level – exploring ideas like efficient machine tool paths or standard building floorplan layouts.

According to Mike Haley, senior VP of research, Autodesk, neural CAD models are trained on the typical patterns of how people design, using a combination of synthetic data and customer data. “They’re learning from 3D design, they’re learning from geometry, they’re learning from shapes that people typically create, components that people typically use, patterns that typically occur in buildings.”



Learn more about neural CAD and Autodesk’s evolving AI strategy in AEC Magazine’s in-depth report: Autodesk shows its AI hand



Autodesk says that in the future, customers will be able to customise the neural CAD foundation models, by tuning them to their organisation’s proprietary data and processes.

Autodesk has so far presented two types of neural CAD models: ‘neural CAD for geometry’ and ‘neural CAD for buildings’.

With neural CAD for geometry, designers using Autodesk Fusion will be able to use language, sketching or imagery to produce ‘first-class’ CAD geometry which can then be used directly in manufacturing processes.

With neural CAD for buildings architects using Forma will be able to ‘quickly transition’ between early design concepts and more detailed building layouts and systems with the software ‘autocompleting’ repetitive aspects of the design.

“If I was to change the shape of a building, it can instantly recompute all the internal walls,” says Haley. “It can instantly recompute all of the columns, the platforms, the cores, the grid lines, everything that kind of makes up the structure of the building. It can help recompute structural drawings.”

At Autodesk University this week, Autodesk will be demonstrating Project Think Aloud, a new research project that explores how generative AI neural CAD models can help with architectural blocking.

Designers create buildings by sketching with an electronic pencil and talking at the same time. “The AI is able to take the speech and the text and reason about what your intent is to produce, building directly in Forma,” says Haley.

Meanwhile, in related news, Autodesk has announced Forma Building Design, a detailed building design solution that is said to offer LOD 200/300 detail, ‘AI-powered’ automation and integrated analysis.


neural CAD for geometry can create accurate CAD designs based on a text prompt.

