Technology Archives – AEC Magazine – https://aecmag.com/technology/

AEC Magazine September / October 2025 (9 October 2025) – https://aecmag.com/bim/aec-magazine-september-october-2025/
The hidden threat inside EULAs, Autodesk shows its AI hand, Chaos embraces AI and lots more

Highlights of our September / October 2025 edition
  • Cover story ‘Contract killers’ – most architects overlook software small print, but today’s EULAs are redefining ownership, data rights and AI use — shifting power from users to vendors.
  • From pixels to prompts: How Chaos is blending AI with traditional viz, rethinking how architects explore, present and refine ideas
  • Autodesk shows its AI hand: Autodesk has presented live, production-ready tools, giving customers a clear view of how AI could soon reshape workflows
  • Autodesk Forma is finally expanding beyond its early-stage design roots with a brand-new product – Forma Building Design – focused on detailed design

It’s available to view now, free, along with all our back issues.

Subscribe to the digital edition free + all the latest AEC technology news in your inbox, or take out a print subscription for $49 per year (free to UK AEC professionals).


Explorations in GeoBIM (9 October 2025) – https://aecmag.com/geospatial/explorations-in-geobim/
We caught up with Esri’s Marc Goldman to discuss the geospatial company’s focus on BIM integration

With more AEC collaborative design solutions available, employees in disciplines that once worked in silos are increasingly connected and sharing information with their colleagues. Martyn Day caught up with Marc Goldman, director of AEC industry at Esri, to discuss the company’s focus on BIM integration

Since 2017, Esri and Autodesk have pursued a strategic partnership to bridge longstanding divides between GIS (geospatial) and BIM (building/infrastructure design) data.

The shared ambition of executives at the two companies is to enable engineers, planners and asset owners to author, analyse and manage projects in a unified, spatially aware environment, from design through to operations.

Initially, the two companies announced plans to build a ‘bridge’ between BIM and GIS, so that Revit models could be brought into Esri platforms, supporting enhanced workflows in ArcGIS Indoors and ArcGIS Urban.



Over time, this partnership has evolved to include Connectors for ArcGIS – tools for Civil 3D, InfraWorks and AutoCAD Map 3D – that support live linking of GIS data into BIM software with bidirectional updates.

Today, that integration is embodied by ArcGIS GeoBIM, a web-based platform linking Autodesk Construction Cloud (also known as ACC and previously named BIM 360) to Esri’s ArcGIS. This enables project teams to visualise, query and coordinate BIM models within their real-world geographic context, according to Marc Goldman, director of AEC industry at Esri.

“GeoBIM provides a common dashboard for large-scale projects, allowing AEC firms and owner-operators to visualise GIS context alongside BIM content and object properties, even though the source files may reside in ACC,” he explains.

The technical integration now takes two distinct forms, tailored to project needs.

Image: ArcGIS for Autodesk Forma (credit: Esri)

The first is Building Layers with ArcGIS Pro, to support detailed, element-level analysis, design review and asset management. Models retain full BIM structure, including geometry, categories, phases and attributes, enabling precise filtering by architectural element or building level.

The second is Simplified 3D Models with ArcGIS GeoBIM, introduced in June 2025, to optimise performance and agility for construction monitoring, mobile workflows and stakeholder engagement. The Add Document Models tool generates lightweight, georeferenced models from Revit and IFC files while preserving links back to their source.

Esri has also extended its partnership with Autodesk with ArcGIS for Autodesk Forma, embedding geospatial reference data directly into Autodesk’s cloud-based planning platform. Forma users can now draw on the ArcGIS Living Atlas, municipal datasets and enterprise geodatabases, all natively georeferenced. This allows environmental, infrastructure, zoning and demographic layers to be overlaid onto early-stage conceptual designs.


As Goldman notes, “Designs created in Forma inherit coordinate systems and spatial metadata, ensuring that when they move downstream into Revit, Civil 3D or ArcGIS Pro, they remain consistent and location-aware. Beyond visualisation, ArcGIS for Forma supports rapid scenario testing, such as climate risk or transport connectivity, within the context of a live GIS fabric.”

Autodesk Tandem and the broader world of digital twins have also caught the attention of executives at Esri, he adds: “Esri is working with the Tandem team to serve GIS context for customers managing clusters of buildings. This could enable Tandem to evolve into a multi-building digital twin platform.”

AI, NLQ et al

According to Goldman, Esri has been using AI technology internally for years – long before the recent surge of hype around the technology. Now, he says, AI is being deployed to automate complex GIS tasks for users, lowering the barrier to entry for non-specialists.

One example of this can be found in reality capture and asset management. Esri’s reality suite, based on its 2020 acquisition of nFrames, uses geosplatting and computer vision to create high-quality 3D objects from 360-degree cameras or video inspections.


Image: ArcGIS GeoBIM (credit: Esri)

“AI enables automated feature extraction from reality capture data, such as LiDAR,” he explains. “Organisations like Caltrans can process hundreds of miles of roads overnight. Segmentation automatically recognises barriers, trees, signage and more, making the data asset-management ready.”

Meanwhile, natural language query (NLQ) capabilities in ArcGIS are also paving the way for the democratisation of GIS data. Users can now perform advanced analysis without specialist training.

“Say I need a map of central London, showing the distance between tube stops and grocery stores, overlaid with poverty levels,” Goldman illustrates. “The system generates the map and suggests visualisations, making spatial insights accessible to anyone.”

Urban planning remains a hot topic. That was certainly the case at our recent NXT BLD event, where innovations were showcased by Cityweft, Giraffe, GeoPogo and, of course, Esri.

It’s a domain in which Esri has long contributed and continues to do so, with technologies to enable scenario evaluation and parametric city modelling.

As Goldman puts it: “Architects and planners need to evaluate scenarios, like population growth, by bringing in demographic and visual context. Esri’s tools ensure design choices are made in the right place, with the right influences. And with AI, the possibilities for urban planning expand even further.”

In summary, Esri’s partnership with Autodesk continues to transform the relationship between GIS and BIM data, with AI set to drive the next great wave of integration. As both companies continue to expand their cloud portfolios and ecosystems, Esri is embedding spatial intelligence, predictive analytics and automated decision support directly into AEC workflows.

The convergence of ArcGIS, GeoBIM and Forma with AI-driven insights offers the AEC industry a significant opportunity to move beyond static models towards dynamic, learning digital twins. In this way, says Goldman, the Esri and Autodesk partnership will help that industry “create a more sustainable, resilient and context-aware built environment.”

Plugging the leaks with CivilSense (9 October 2025) – https://aecmag.com/civil-engineering/plugging-the-leaks-with-civilsense/
A two-pronged approach to technology deployment is enabling the timely detection of leaks on water networks

Small leaks from water networks are not only a major headache for utilities providers, but also have the potential to lead to major outages and significant disruption for the communities they serve – but a two-pronged approach to technology deployment can help, according to Peter Delgado of Oldcastle Infrastructure

In late September 2025, thousands of residents in Novi, Michigan and the surrounding area were forced to contend with days of disruption following a severe water main break. Some homes and businesses faced total water outages. Others were issued a ‘boil water advisory’, recommending they boil the water from their taps due to potential contamination that might make it unsafe to drink.

This kind of scenario is all too common and causes real headaches for communities. It’s hugely damaging for utility providers, too. Alongside the significant costs of a major repair job, they are also likely to experience an angry backlash from customers and, in some cases, financial penalties from regulators.

But even more problematic for utilities is the impact of slow but steady leakage of water from their networks. According to estimates from strategy firm McKinsey, some 14% to 18% of total treated potable water in the US is lost through leaks before it even reaches customers. In England, the figure is around 19%, according to a 2024 report by the UK Environment Agency. In other words, it simply makes sound business sense to catch and fix small leaks before they lead to major problems.



Technology can help, but to tackle leaks effectively, utilities need to take a two-pronged approach to its deployment, according to Peter Delgado, director of commercial excellence at Oldcastle Infrastructure, which is part of CRH, a global provider of building materials for transportation and critical infrastructure.

Says Delgado: “You need prediction technology, so that you know where leaks are most likely to occur across the many miles of network that you manage. And you need leak detection technology that enables you to pinpoint the location and size of leaks.”

Oldcastle Infrastructure’s CivilSense solution, he claims, is the only one that enables customers to adopt this two-pronged approach and address both sides of the coin. To do so, Oldcastle Infrastructure white-labels technology from two other companies and combines it with its CivilSense software platform.

First, CivilSense uses AI-driven predictive analysis from Boston, Massachusetts-based VODA.ai to flag sections of a network that are at higher risk of pipe failure or breakage, based on its analysis of geographic information systems (GIS), climate and infrastructure asset data. These analyses include the ranking of different areas of often vast water networks by risk.

Next comes the deployment by frontline teams of acoustic sensors from Bicester, UK-based FIDO Tech. These sensors detect, locate and size actual leaks in real time. They are magnetically attached to valves via manholes in areas of particular concern for a monitoring period typically lasting a day or two, but sometimes up to a week, Delgado says. The sensors are engineered to ‘listen’ for the particular sounds and vibrations produced by a leaking pipe, often at levels far beyond the limits of human hearing, and AI-driven analysis of that data can pinpoint a leak to an accuracy of around three metres, says Delgado. That data is also fed back into CivilSense.

Finally, CivilSense is the platform where both types of intelligence – predictive analytics and leak detection – are aggregated and visualised for utility workers, in the form of dashboards and maps accessible from any device, including the smartphones and tablets typically used by workers in the field. In this way, utilities can respond proactively to leaks before they become severe, prioritise repairs, allocate frontline resources to repair jobs and plan preventative maintenance – not to mention avoid the cost, waste and disruption of digging in a location where no leak actually exists.
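Purely to illustrate that two-pronged idea in code, the sketch below combines a predicted risk score per pipe section with any confirmed acoustic detections to produce a ranked repair list. It is a hypothetical sketch with invented names and fields — not CivilSense’s implementation, nor the VODA.ai or FIDO Tech APIs.

    # Hypothetical illustration only: names and fields are invented, not the
    # CivilSense, VODA.ai or FIDO Tech APIs.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PipeSection:
        section_id: str
        predicted_risk: float                         # 0..1 score from predictive analytics
        detected_leak_size: Optional[float] = None    # litres/min from acoustic sensors, if any

    def repair_priority(s: PipeSection) -> float:
        # Confirmed leaks outrank predictions; bigger leaks rank higher still.
        if s.detected_leak_size is not None:
            return 100.0 + s.detected_leak_size
        return s.predicted_risk

    sections = [
        PipeSection("A-12", predicted_risk=0.82),
        PipeSection("B-07", predicted_risk=0.35, detected_leak_size=4.5),
        PipeSection("C-03", predicted_risk=0.61),
    ]
    for s in sorted(sections, key=repair_priority, reverse=True):
        print(s.section_id, round(repair_priority(s), 2))   # B-07 first, then A-12, C-03

Prioritisation logic like this is trivial on its own; the value in a platform such as CivilSense lies in keeping risk scores and sensor readings flowing into one place so the ranking stays current.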

It’s important to remember that much of this infrastructure is hidden away, deep underground, and that’s part of the problem, says Delgado. “The ‘out-of-sight, out-of-mind’ nature of leaks can lead to a reactive mindset, but that’s no solution to the challenges that utilities increasingly face.” Water systems are ageing in the US and many other countries, he points out, with infrastructure such as pipes and valves fast approaching the end of its service life.

Couple that with the impacts of rising demand for water among consumers, extreme weather events and ageing workforces in which many skilled utilities engineers are nearing retirement, he says, and you’ve got a perfect storm that simply demands a more proactive mindset.

“Without more innovative solutions, without new technologies, the current approaches used by utilities providers to deal with leaks will soon prove hopelessly inadequate,” he warns.

Indeed, if you ask the people of Novi, Michigan, they would probably tell you that the utilities industry reached that point some time ago.

Review: HP Z2 Mini G1a (11 September 2025) – https://aecmag.com/workstations/review-hp-z2-mini-g1a/
HP rewrites the rulebook for compact workstations thanks to a groundbreaking AMD processor

With an integrated graphics processor that has fast access to more memory than any other GPU in its class, HP is rewriting the rulebook for compact workstations, writes Greg Corke

When the HP Z2 Mini first launched in 2017 it redefined the desktop workstation. By delivering solid performance in an exceedingly compact, monitor-mountable form factor, HP created a new niche — a workstation ideal for space-constrained environments.

Fast forward several generations, and the Z2 Mini has evolved significantly. It’s no longer just a standalone desktop — it’s become a key component of HP’s datacentre workstation ecosystem, providing each worker with remote access to a dedicated workstation over a 1:1 connection.

With the latest model, the Z2 Mini G1a, HP introduces something new: an AMD processor at the heart of the machine, denoted by the ‘a’ suffix in its product name. This is the first time the Z2 Mini has featured AMD silicon, and the results are impressive.

The processor in question is the AMD Ryzen AI Max Pro, the exact same chip found in the hugely impressive HP ZBook Ultra G1a 14-inch mobile workstation, which we reviewed earlier this year.

Unlike traditional processors, this groundbreaking chip features an integrated GPU with performance on par with a mid-range discrete graphics card. Crucially, the GPU can also be configured with up to 96 GB of system memory. This far exceeds the memory ceiling of most discrete GPUs in its class and unlocks new possibilities for memory-intensive workloads, including AI.

While the ZBook Ultra G1a mobile workstation runs the Ryzen AI Max Pro within a 70 W thermal design power (TDP), the Z2 Mini G1a desktop cranks that up significantly — more than doubling the power budget to 150 W. This allows the chip to maintain higher clock speeds for longer, delivering more performance in both multi-threaded CPU workflows like rendering, simulation and reality modelling, as well as GPU-intensive tasks such as real-time visualisation and AI.

That said, doubling the power doesn’t double the performance. As with most processors, the Ryzen AI Max Pro reaches a point of diminishing returns, where additional wattage yields increasingly modest improvements. However, for compute-intensive workflows, that extra headroom can still deliver a meaningful advantage.

The compact workstation

The Z2 Mini G1a debuts with a brand-new chassis that’s even more compact than its Intel-based sibling, the Z2 Mini G1i. The smaller footprint is hardly surprising, given the AMD-based model doesn’t need room for a discrete GPU — unlike the Intel version, which currently supports options up to the double-height, low-profile Nvidia RTX 4000 SFF Ada Generation (read our review).

But what’s really clever is that HP’s engineers have also squeezed the power supply inside the machine. That might not seem like a big deal for desktops, but for datacentre deployments, where external power bricks and excess cabling can create clutter, interfere with airflow, and complicate rack management, it’s a significant improvement. Unfortunately, the HP Remote System Controller, which provides out-of-band management, is still external.


The chassis is divided into two sections, separated by the system board. The top two-thirds house the key components, fans and heatsink, while the bottom third is mostly reserved for the 300W power supply.

Despite its compact form factor, the Z2 Mini G1a doesn’t skimp on connectivity. At the rear you’ll find two Thunderbolt 4 ports (USB-C, 40Gbps), two USB Type-A (480Mbps), two USB Type-A (10Gbps), two Mini DisplayPort 2.1, and a 2.5GbE LAN. For easy access on the side, there’s an additional USB Type-C (10Gbps) and USB Type-A (10Gbps).

Serviceability on the Z2 Mini G1a is limited, as the processor and system memory are soldered to the motherboard, leaving no scope for major upgrades. It’s therefore crucial to select the right specifications at purchase (more on this later). The two M.2 NVMe SSDs and several smaller components, however, are easily replaceable, and two Flex I/O ports allow for additional USB connections or a 10GbE LAN upgrade.

The beating heart

The AMD Ryzen AI Max Pro processor at the heart of the Z2 Mini G1a is a powerful all-in-one chip that combines a high-performance multi-core CPU, with a remarkably capable integrated GPU and a dedicated Neural Processing Unit (NPU) for AI.

While the spotlight is understandably on the flagship model, the AMD Ryzen AI Max+ Pro 395, with a considerable 16 CPU cores and Radeon 8060S graphics capable of handling entry-level to mainstream visualisation, the other processor options shouldn’t be overlooked. With fewer cores and less powerful GPUs, they should still offer more than enough performance for typical CAD and BIM workflows (see table below).


Table: AMD Ryzen AI Max Pro processor options (not reproduced here)

A massive pool of memory

The standout feature of the AMD Ryzen AI Max Pro is its memory architecture, and how it gives the GPU direct and fast access to a large, unified pool of system RAM. This is in contrast to discrete GPUs, such as Nvidia RTX, which have a fixed amount of on-board memory.

The integrated GPU can use up to 75% of the system’s total RAM, allowing for up to 96 GB of GPU memory when the Z2 Mini G1a is configured with its maximum 128 GB.

This means the workstation can handle certain workloads that simply aren’t possible with other GPUs in its class.

When a discrete GPU runs out of memory, it has to ‘borrow’ from system memory. Because this data transfer occurs over the PCIe bus, it is highly inefficient. Depending on how much memory is borrowed, performance can drop sharply: renders can take much longer, frame rates can fall from double digits to low single digits, and navigating models or scenes can become nearly impossible. In extreme cases, the software may even crash.

The Z2 Mini G1a allows users to control how much memory is allocated to the GPU. In the BIOS, simply choose a profile – from 512 MB, 4 GB, 8 GB, all the way up to 96 GB (should you have 128 GB of RAM to play with). Of course, the larger the profile, the more it eats into your system memory, so it’s important to strike a balance.

The amazing thing about AMD’s technology is that, should the GPU run out of its ring-fenced memory, it can in many cases seamlessly borrow more from system memory, if available, temporarily expanding its capacity. Since this memory resides in the same physical location, access remains very fast.

Even with the smallest 512 MB profile, borrowing 10 GB for CAD software Solidworks caused only a slight drop in 3D performance, maintaining that all-important smooth experience within the viewport.

This means that if system memory is in short supply, opting for a smaller GPU memory profile can offer more flexibility by freeing up RAM for other tasks.

Of course, because memory is fixed in the Z2 Mini G1a, and cannot be upgraded, you must choose very wisely at time of purchase. For CAD/BIM workflows, we recommend 64 GB as the entry-point with 128 GB giving more flexibility for the future, especially as AI workflows evolve (more on that later).
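A quick back-of-envelope check of that 75% ceiling: the 96 GB figure for a 128 GB machine follows directly from it, while the 48 GB the snippet prints for a 64 GB machine is our own inference from the same rule, not an HP specification.

    # Illustrative arithmetic only, applying the 75% ceiling described above.
    for system_ram_gb in (64, 128):
        max_gpu_gb = int(system_ram_gb * 0.75)      # largest possible GPU allocation
        remaining_gb = system_ram_gb - max_gpu_gb   # left for Windows and applications
        print(f"{system_ram_gb} GB RAM -> up to {max_gpu_gb} GB for the GPU, {remaining_gb} GB remaining")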

Performance testing

We put the Z2 Mini G1a to work in a variety of real-world CAD, visualisation, simulation and reality modelling applications. Our test machine was fully loaded with the top-end AMD Ryzen AI Max+ Pro 395 and 128 GB of system memory, of which 32 GB was allocated to the AMD Radeon 8060S GPU. All testing was done at 4K resolution.

We compared the Z2 Mini G1a with an identically configured HP ZBook Ultra G1a, primarily to assess how its 150 W TDP stacks up against the laptop’s more constrained 70 W. For broader context, we also benchmarked it against a range of desktop tower workstation CPUs and GPUs.

CPU tests

In single threaded workloads, we saw very little difference between the Z2 Mini G1a and ZBook Ultra G1a laptop. That’s because the power draw of a single CPU core remains well below 70W so there is no benefit from a larger TDP.

Both machines delivered very similar performance in both single threaded and lightly threaded tasks in Solidworks (CAD), laser scan import in Capturing Reality and the single core test in rendering benchmark Cinebench.

It was only in multi-threaded tests where we started to see a difference and that’s because the Z2 Mini G1a pushes the AMD Ryzen AI Max+ Pro 395 processor much closer to 150W. When rendering – a highly multi-threaded process that makes full use of all cores – the Z2 Mini G1a was around 16-17% faster in Corona Render 10, V-Ray 6.0, and Cinebench 2024.

Meanwhile, when aligning images and laser scans in Capturing Reality, it was around 11% faster. And in select simulation workflows in both SPECwpc benchmarks, the performance increase was as high as 82%!

But how does the Z2 Mini G1a stack up against larger desktop towers? AMD’s top-tier mainstream desktop processor, the Ryzen 9 9950X, shares the same Zen 5 architecture as the Ryzen AI Max+ Pro, but delivers significantly better performance. It’s 22% faster in Cinebench, 18% faster in Capturing Reality, and 15–33% faster in Solidworks. But that’s hardly surprising, given it draws up to 230W, as tested in a Scan 3XS tower workstation with a liquid cooler and heatsink roughly the size of the entire Z2 Mini G1a!

We saw similar from Intel’s flagship Core Ultra 9 285K in a Scan 3XS tower, which pushes power even further to 253W. While this Intel chip is technically available as an option in the HP Z2 Mini G1a’s Intel-based sibling, the HP Z2 Mini G1i, it would almost certainly perform well below its full potential due to the power and thermal limits of the compact chassis.


GPU tests

The Z2 Mini G1a’s 150W TDP pushes the Radeon 8060S GPU harder, outperforming the ZBook Ultra G1a in several demanding graphics workloads.

The Z2 Mini G1a impressed in D5 Render, completing scenes 15% faster and delivering a 39% boost in real-time viewport frame rates. Twinmotion also saw a notable 22% faster raster render time, though in Lumion, performance remained unchanged.

The biggest leap came in AI image generation. In the Procyon AI benchmark, the Z2 Mini G1a was 50% faster than the ZBook Ultra G1a in Stable Diffusion 1.5 and an impressive 118% faster in Stable Diffusion XL.

But how does the Radeon 8060S compare with discrete desktop GPUs like the low-profile Nvidia RTX A1000 (8 GB) and RTX 2000 Ada Generation (16 GB), popular options in the Intel-based Z2 Mini G1i?

In the D5 Render benchmark, which only requires 4 GB of GPU memory, the Radeon 8060S edged ahead of the RTX A1000 but lagged behind the RTX 2000 Ada Generation.

Its real advantage appears when memory demands grow: with 32 GB available, the Radeon 8060S can handle larger datasets that overwhelm the RTX A1000 (8 GB) and even challenge the RTX 2000 Ada Generation (16 GB) in our Twinmotion raster rendering test. Path tracing in Twinmotion, however, caused the AMD GPU to crash, highlighting some of the broader software compatibility challenges faced by AMD, which we explore in our ZBook Ultra G1a review.

Meanwhile, in our Lumion test, which only needs 11 GB for efficient rendering at FHD resolution, the RTX 2000 Ada Generation (16 GB) demonstrated a clear performance advantage.

Of course, while the Radeon 8060S allows large models to be loaded into memory, it’s still an entry-level GPU in terms of raw performance and complex viz scenes may stutter to a few frames per second. Waiting for renders may be acceptable to architects, but laggy viewport navigation is not.

Overall, the Radeon 8060S shines when memory capacity is the limiting factor, but it cannot match higher-end discrete GPUs in sustained rendering performance. For more on these trade-offs, see our review of the HP ZBook Ultra G1a.


Gently does it

Out of the box, the Z2 Mini G1a is impressively quiet when running CAD and BIM software. Fan noise becomes much more noticeable under multithreaded CPU workloads and, to a lesser extent, GPU-intensive tasks. The good news is that this can be easily managed without significantly impacting performance: in the BIOS, users can select from four performance modes — ‘high-performance,’ ‘performance,’ ‘quiet,’ and ‘rack’ — which operate independently of the standard Windows power settings.

The HP Z2 Mini G1a ships with ‘high performance’ mode enabled by default, allowing the processor to run at its full 150W TDP. In V-Ray rendering, it maintains an impressive all-core frequency of 4.6 GHz, although the fans ramp up noticeably after a minute or so.

Switching to Quiet Mode (after a reboot) prioritises acoustics over raw performance. The CPU automatically downclocks, and fan noise becomes barely audible — even during extended V-Ray renders. For short bursts, such as a one-minute render, the system still delivers 140W with a minimal frequency drop. Over a one-hour batch render, however, power levels dipped to 120W, and clock speeds averaged around 4.35 GHz.

The good news: this appeared to have negligible impact on performance, with V-Ray benchmark scores falling by just 1% compared to High Performance mode. In short, Quiet Mode looks to be more than sufficient for most workflows, offering near-peak performance with significantly reduced fan noise.

Finally, Rack Mode prioritises reliability over acoustics. Fans run consistently — even at idle — to ensure thermal stability in densely packed datacentre deployments.

Local AI

Most AEC firms will use the Z2 Mini G1a for everyday tasks — your typical CAD, BIM, and visualisation workflows. But thanks to the way the GPU has access to a large pool of system memory, it also opens the door to some interesting AI possibilities.

With 96 GB to play with, the Z2 Mini G1a can take on much bigger AI models than a typical discrete GPU with fixed memory. In fact, AMD recently reported that the Ryzen AI Max Pro can now support LLMs with up to 128 billion parameters — about the same size as GPT-3.

This could be a big deal for some AEC firms. Previously, running models of this scale required cloud infrastructure and dedicated datacentre GPUs. Now, they could run entirely on local workstation hardware. AMD goes into more detail in this blog post and FAQ.
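A rough sizing calculation shows why that 96 GB allocation matters. Assuming 4-bit quantisation — our assumption, as the exact format will vary — the weights of a 128-billion-parameter model alone come to roughly 64 GB, which fits within a 96 GB allocation with headroom left for the KV cache and runtime overhead.

    # Rough, assumption-laden sizing: 4-bit weights, ignoring KV cache and overhead.
    params = 128e9                 # 128 billion parameters
    bytes_per_param = 0.5          # 4-bit quantisation
    weights_gb = params * bytes_per_param / 1e9
    print(f"~{weights_gb:.0f} GB of weights")   # ~64 GB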

Of course, the AMD Ryzen AI Max Pro won’t even get close to matching the performance of a high-end Nvidia GPU, especially one in the cloud. But in addition to cost, the big attraction is that you could run AI locally, under your full control, with no data ever leaving your network.

On a more practical level for AEC firms experimenting with text-to-image AI for early-stage design, AMD also explains that the Ryzen AI Max+ can handle text-to-image models with up to 12 billion parameters, like FLUX Schnell in FP16. This could make it attractive for those wanting more compelling, higher resolution visuals, if they are willing to wait for the results.

Finally, thanks to the Ryzen AI Max Pro’s built-in NPU, there’s also dedicated AI hardware for efficient local inference. And at 50 TOPS, the NPU is more powerful than other desktop workstation NPUs, and the only one we know of that meets Microsoft’s requirements for a Copilot+ PC.

The verdict

The HP Z2 Mini G1a represents a major step forward for compact workstations, delivering strong performance and enabling new workflows in a datacentre-ready form factor.

At its heart, the AMD Ryzen AI Max Pro processor not only delivers a powerful multi-core CPU and a remarkably capable integrated GPU, but also an advanced memory architecture that allows the GPU to tap directly into a large pool of system memory — up to 96 GB.

This makes the Z2 Mini G1a stand out from traditional discrete GPU-based workstations — even some with much larger chassis — by offering an advantage in select memory-intensive workloads, from visualisation to advanced AI.

Of course, the Ryzen AI Max Pro is no silver bullet. While the 16-core chip delivers impressive computational performance, AMD faces tough competition from Nvidia on the graphics front – both in terms of hardware and software compatibility.

Nvidia’s recently announced low-profile Blackwell GPUs offer improved performance and more memory (up to 24 GB) and are expected to debut soon in the HP Z2 Mini G1i.

As reviewed, the Z2 Mini G1a with the AMD Ryzen AI Max+ Pro 395 and 128 GB RAM is priced at £2,280 + VAT, while a lower-spec model with the Ryzen AI Max Pro 390 and 64 GB RAM (our recommended minimum) comes in at £1,710 + VAT.

While this isn’t exactly cheap, pricing is competitive given the performance and workflow potential on offer. More than anything, the Z2 Mini G1a shows how far compact workstations have come — delivering desktop and datacentre power in a form factor that was once considered a compromise.

AEC Magazine July / August 2025 Edition (24 July 2025) – https://aecmag.com/bim/aec-magazine-july-august-2025-edition/
Arcol, the future of Revit, NXT BLD / DEV on-demand and lots more

Highlights of AEC Magazine July / August 2025 edition

  • NXT BLD / DEV 2025 on-demand: Watch every presentation on demand, free of charge.
  • We explore the recent launch of Arcol, the BIM 2.0 start-up that has an initial focus on collaborative conceptual design.
  • The future of Revit: ‘Forma is not the replacement for Revit’ is the latest messaging from Autodesk. We analyse what this might mean.
  • Keir Regan-Alexander explores how architects are using AI models and takes a deeper dive into Midjourney V7 and how it compares to Stable Diffusion and Flux.
  • With all the progress being made to convert point clouds to 3D models, ‘scan to BIM’ is fast becoming a reality. UK startup NavLive is addressing one step before that.
  • Lenovo Access offers ready-made ‘blueprints’ to aid deployment of high-performance remote workstations.
  • Motif introduces ‘single click’ AI rendering: architecturally-tuned AI renderer balances simplicity with easy customisation
  • HP ZBook Fury G1i gets power boost – new 16-inch and 18-inch mobile workstations deliver new chips with higher TDP

It’s available to view now, free, along with all our back issues.

Subscribe to the digital edition free + all the latest AEC technology news in your inbox, or take out a print subscription for $49 per year (free to UK AEC professionals).



NavLive – ‘scan to drawings’ (24 July 2025) – https://aecmag.com/reality-capture-modelling/navlive-scan-to-drawings/
‘Scan to BIM’ is fast becoming a reality. This UK startup is addressing one step before that

With all the progress being made to convert point clouds to 3D models, ‘scan to BIM’ is fast becoming a reality. One step before that would be ‘scan to drawings’ — and an Oxford-based startup sparked plenty of buzz around this at our recent NXT BLD event, writes Martyn Day

True industry disruption rarely comes from a single new technology. More often, it’s the convergence of multiple innovations that reshapes workflows and drives meaningful change. AI is clearly one of the most influential technologies in this mix — and it’s now being woven into nearly every aspect of software and hardware development.

A great example of this convergence is NavLive, which combines LiDAR technology with advanced AI processing to scan buildings and generate precise site drawings in minutes.

The company was formed in 2022 as an Oxford University spin-out from PhD research on SLAM, 3D mapping and autonomous robots carried out by co-founder David Wisth, who is the company CTO.

The CEO and other co-founder, Chris Davison, comes from an investment background and was one of the co-founders and CEO of BigPay, a Singapore challenger bank.




The company has raised £4 million in investment and grants to develop a unique SLAM scanner, which captures point clouds and digital images and processes them on the device using AI running on Nvidia GPUs. Data is then shared via the cloud to deliver a rapid scanning solution that automatically generates 2D floor plans, 3D models, sections and elevations, with a claimed accuracy of about 1 cm for 1:100 RICS-grade surveys.

NavLive has a team of around 15 people and, at the moment, the handheld scanners are hand-made in the UK.

Currently, in this space you have Matterport, which has a tripod-based solution at around £6,000, and products like the Leica BLK2GO at £40,000, the Faro Orbis at £45,000 and the NavVis VLX 2 or 3 at about $30,000 to $60,000.

At just £25,000, NavLive hits a sweet spot for rapid SLAM-style scanning, with the added benefit of delivering 2D drawings and 3D models. It comes with all the necessary software and on-board processing and the company is also working on how it could convert these models from 3D to intelligent BIM.

While at NXT BLD, NavLive scanned the Queen Elizabeth II building as a data set (see Figure 1) and gave demos showing how quickly it scanned spaces, simply by walking around. These were instantly turned into 2D drawings on the Samsung device built into the scanner. It was also possible to see the model and interactively create sections and elevations.


Figure 1: Raw scan from NXT BLD – floorplan of the QEII Centre (credit: NavLive.ai)

Key features

NavLive is a real-time system. The input is the point cloud (SLAM data is typically ‘noisy’) and the outputs are the 2D plans, sections and elevations. While it will capture people and other items within the scan, these can be cleaned up.
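For readers curious how a noisy point cloud can yield a 2D plan at all, the sketch below shows one generic approach: take a horizontal slice at roughly wall height, rasterise it into an occupancy grid, and treat densely hit cells as wall candidates. This is a conceptual illustration with made-up thresholds — NavLive has not disclosed its algorithm and this is not it.

    # Generic 'slice and rasterise' illustration; not NavLive's pipeline.
    import numpy as np

    def slice_to_occupancy(points: np.ndarray, z_min=1.0, z_max=1.2, cell=0.05, min_hits=5):
        """points: Nx3 array in metres. Returns a 2D occupancy grid of the slice."""
        sl = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
        if len(sl) == 0:
            return np.zeros((1, 1), dtype=int)
        origin = sl[:, :2].min(axis=0)
        ij = np.floor((sl[:, :2] - origin) / cell).astype(int)
        grid = np.zeros(tuple(ij.max(axis=0) + 1), dtype=int)
        np.add.at(grid, (ij[:, 0], ij[:, 1]), 1)           # count points per cell
        return (grid >= min_hits).astype(int)              # dense cells ~ wall candidates

    # Wall cells can then be traced into line segments (e.g. via a Hough transform)
    # and exported as DXF polylines for the 2D plan.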

The device contains three HD image capture cameras for visual reference and documentation. NavLive will automatically work out and plot the path the user has walked, and photos can be looked at, at any point, to identify features that might not be obvious from the scan drawing.

The team claims that the NavLive device is the quickest AI-powered scan-to-BIM tool on the market, delivering ‘instant site surveys’ in one self-contained unit. The scanner is light and requires very little training to operate. It is capable of being used in multiple environments and has already been trialled in nuclear facilities.


The software is mobile and desktop enabled. It automatically syncs to the cloud – point clouds, drawings and models – and the results can be seen live by any other team member, wherever they are relative to the actual scanning. Users can easily download files in all standard formats, including DWG, DXF and PDF for drawings, E57 or LAS for point clouds, and JPG for images. It integrates with CAD/BIM software, such as Revit, AutoCAD and Archicad, speeding up scan-to-BIM workflows.

Conclusion

It’s highly unusual to find such an initially well-funded and interesting scanning device coming out of the UK; even more impressive that the scanner is assembled here too. While SLAM techniques are well understood, the big benefit here is having the necessary ‘oomph’ on board to do the processing – not just cleaning up the point cloud, but actually delivering something immediately useful: 2D drawings, plans and sections and (hopefully) ultimately BIM models.

This is an Oxford University spin out and start-up that certainly has legs.

It raises the question: could NavLive’s automatic 2D floorplan algorithms work with point clouds from any scanner?

Having covered BIMify in the May/June edition of AEC Magazine, one wonders whether you could scan a building using NavLive and then send the generated drawings to BIMify — enabling the creation of Revit BIM models using a customer’s own component libraries. That said, BIMify also tells us they support direct 3D point cloud to BIM conversion.

At NXT BLD NavLive certainly created a buzz. Unfortunately, BIMify’s CEO was unable to make the event, but that would have been a good introduction!

Rapid reality modelling is evolving fast — Robert Klashka hosted an excellent panel at NXT DEV that explored some of the latest developments.

Automation is certainly coming to nearly every granular process within traditional BIM workflows. Tasks that once took days or even weeks are being dramatically compressed by emerging technologies. The tedious, time-consuming “grunt work” is being minimised — freeing up teams to focus more on design and decision-making.

Solutions like NavLive are helping drive this shift, lowering the cost of capture while significantly reducing the time it takes to go from survey to as-built drawings — and even to as-built BIM. It’s no exaggeration to say this is the most exciting era for AEC technology innovation in the past 30 years.

AEC Magazine May / June 2025 Edition (28 May 2025) – https://aecmag.com/bim/aec-magazine-may-june-2025-edition/
Discover new technologies that can go from 2D to 3D and back, plus lots more

In the May / June 2025 edition of AEC Magazine we

  • Put the spotlight on new technologies that can transform 2D drawings into models and generate drawings automatically from 3D model data.
  • Explore how AI is reshaping design culture, streamlining visualisation, and boosting creativity with thought-provoking articles from Keir Regan-Alexander and Tudor Vasiliu
  • Discover new tools like Snaptrude’s AI-powered BIM platform, a free environmental analysis Rhino plug-in from Foster + Partners and new AEC-focused Catia bundles from Dassault Systèmes.
  • Catch up on the key tech themes from our NXT BLD and NXT DEV events, including BIM 2.0, AI, data lakes and the latest workstation breakthroughs.
  • Also inside: Tal Friedman on why AEC still lacks unicorn startups — and what can be done to change that, plus an in-depth review of a game-changing mobile workstation from HP

It’s available to view now, free, along with all our back issues.

Subscribe to the digital edition free + all the latest AEC technology news in your inbox, or take out a print subscription for $49 per year (free to UK AEC professionals).


AEC Magazine March / April 2025 Edition (16 April 2025) – https://aecmag.com/technology/aec-magazine-march-april-2025-edition/
Get the low down on Motif, the £46 million funded BIM 2.0 startup, plus lots, lots more

In the March / April 2025 edition of AEC Magazine we get the low down on Motif, the £46 million funded BIM 2.0 startup, discover how Higharc is using AI to generate 3D BIM models from 2D sketches, hear from Qonic, Snaptrude, Arcol, and Motif about their visions for BIM 2.0, and find out how Studio Tim Fu is reimagining architectural workflows, blending human creativity with machine intelligence – plus lots, lots more.



It’s available to view now, free, along with all our back issues.

Subscribe to the digital edition free + all the latest AEC technology news in your inbox, or take out a print subscription for $49 per year (free to UK AEC professionals).



 

Autodesk Tandem in 2025 (16 April 2025) – https://aecmag.com/digital-twin/autodesk-tandem-in-2025/
Autodesk’s cloud-based digital twin platform is evolving at an impressive pace. We take a closer look at what’s new.

Autodesk Tandem, the cloud-based digital twin platform, is evolving at an impressive pace. Unusually, much of its development is happening out in the open, with regular monthly or quarterly feature preview updates and open Q&A sessions. Martyn Day takes a closer look at what’s new

Project Tandem, as it used to be known, was initiated in February 2020, previewed at Autodesk University 2020, and released for public beta in 2021. Four years on, there are still significant layers of technology being added to the product, now focusing on higher levels of functionality beyond dashboards and connecting to IoT sensors, adding systems knowledge, support for timeline events and upgrades to fundamentals such as visualisation quality.

Tandem development seems to have followed a unique path, maintaining its incubator-like status, with Autodesk placing a significant bet on the future size of an embryonic market.



For those following the development of Tandem the one thing that comes across crystal clear, is that creating a digital twin of even a single building — model generation, tagging and sorting assets, assigning subsystems, connecting to IoT, and building dashboards — is a huge task that requires ongoing maintenance of that data.

It’s not really ‘just an output of BIM’, which many might feel is a natural follow-on. It has the capability to go way beyond the scope of what is normally called Facilities Management (FM), which has mainly been carried out with 2D drawings.



Realising the quantitative benefit of building a digital twin requires dedication, investment and the adoption of twins as a core business strategy. For large facilities, like airports, universities, hospitals – anything with significant operating expenses – this should be a ‘no brainer’, but as with any investment the owner/operator has to pay upfront to build the twin, then realise the benefits in the long tail, measured in years and decades. This, to me, makes the digital twins market not a volume product play.


Image: Autodesk Tandem


Tandem evolution

My first observation is that the visual quality of Tandem has really gone up a notch, or three. Tandem is partially developed using Autodesk’s Forge components (now called Autodesk Platform Services). The model viewer front end came from the Forge viewer, which to be honest was blocky and a bit crappy-looking, in a 1990s computer graphics kind of way. The updated display brings up the rendering quality and everything looks sharper. The models look great and the colour feedback when displaying in-model data is fantastic. It’s amazing that this makes such a difference, but it brings the graphics into the 21st century. Tandem looks good.

As Tandem has added more layers of functionality the interface tool palettes have grown. The interface is still being refined, and Autodesk is now adopting the approach of offering different UIs to cater to different user personas, such as operators who might be more familiar with 2D floor plans than 3D.

Other features that have been added include the ability to use labels or floor plans to isolate them in the display, auto views to simplify navigation, asset property cards (which can appear in view, as opposed to bringing up the large property panel) and thresholds, which can be set to fire off alerts when unexpected behaviour is identified. Users can now create groups of assets and allocate them to concepts such as ‘by room’. Spaces can now also be drawn directly in Tandem.

Speed is also improved. As Tandem is database centric, not file based, it enables dynamic loading of geometry and data, leading to fast performance even with complex models. It also facilitates the ability to retain all historical data and easily integrate new data sources as the product grows. This is the way all design-related software will run. Tandem benefits from being conceived in this modern cloud era.

That said, development of Tandem has moved beyond simply collecting, filtering, tagging and visualising data to providing actionable insights and recommendations. From talking with Bob Bray, vice president and general manager of Autodesk Tandem, and Tim Kelly, head of Tandem product strategy, it’s clear the next big step for Tandem is to analyse the rich data collected to identify issues and suggest optimisations. These proactive insights would include potential cost savings and carbon footprint reduction through intelligent HVAC management based on actual occupancy data.

Systems tracing

Having dumb geometry in dumb spaces was pretty much the full extent of traditional CAFM. Digital twins can and should be way smarter. The systems tracing capability in Tandem simplifies the understanding of all the complex building systems and their spatial relationships, aiding operations, maintenance and troubleshooting. By clicking on building system elements, you can see the connections between different elements within a building’s systems, see how networks of branches and zones relate to the physical spaces they serve, and identify where critical components are located within the space. This means that if something goes wrong – whether that is discovered via IoT or reported by an occupant – systems tracing allows the issue to be pinpointed down to a specific level and room. Users can select a component like an air supply and then trace its connection down through subsystems to the spaces it serves.
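Conceptually, this kind of tracing is a graph traversal over ‘serves’ relationships between components, subsystems and spaces. The minimal sketch below uses an invented graph — it is not Tandem’s data model or API.

    # Minimal illustration of tracing an air-supply system to the rooms it serves.
    # The graph and names are invented; this is not Tandem's data model or API.
    from collections import deque

    serves = {                       # component/zone -> things it feeds
        "AHU-1": ["Duct-Main"],
        "Duct-Main": ["VAV-2.1", "VAV-2.2"],
        "VAV-2.1": ["Room 201", "Room 202"],
        "VAV-2.2": ["Room 203"],
    }

    def trace(component: str) -> list[str]:
        seen, queue, spaces = set(), deque([component]), []
        while queue:
            node = queue.popleft()
            if node in seen:
                continue
            seen.add(node)
            children = serves.get(node, [])
            if not children and node != component:
                spaces.append(node)          # leaves of the graph are the spaces served
            queue.extend(children)
        return spaces

    print(trace("AHU-1"))   # ['Room 201', 'Room 202', 'Room 203']

Run in reverse — from a room back up to the equipment feeding it — the same idea supports the root-cause hunting described above when an occupant reports a problem.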


Building in this connection between components to make a ‘system’ used to be a pretty manual process. Now, Tandem can automatically map the relationships between spaces and systems and use them for analysis to identify the root cause of problems.

Timelines

Data is valuable, and BMS (building management systems) and IoT sensors generate the building equivalent of an ‘ECG’ every couple of seconds. The historical as well as the live data is incredibly valuable. Timelines in Tandem display this historic sensor data in a visual context. Kelly demonstrated an animated heatmap overlaid on the building model showing how temperature values fluctuate across a facility. It’s now possible to navigate back and forth through a defined period, either stepping through specific points or via animation, seeing changes to assets and spaces.

While the current implementation focuses on visualising historic data, Kelly mentioned the future possibility of the timeline being used to load or hide geometry based on changes over time, reflecting renovations or other physical alterations to the building.

Bray added that Tandem never deletes anything, implying that the historical data required for the timeline functionality is automatically retained within the system. This allows users to access and analyse past performance and conditions within the building at any point in the future, should that become a need.

Asset monitoring

Asset monitoring dashboards in Tandem are designed to provide users with a centralised view for monitoring the performance and status of their key assets. This feature, which is now in beta, aims to help operators identify issues and prioritise their actions. The dashboards will be customisable, and users can create them to monitor the specific assets they care about. This allows for a tailored overview of the most critical equipment and systems within their facility.

The dashboards will likely allow users to establish KPIs and tolerance thresholds for their assets. By setting these parameters, the system can accurately measure asset performance and identify when an asset is operating outside of expected or acceptable ranges with visual feedback of assets out of optimal performance.

Assets that are consistently operating out of tolerance or experiencing recurring issues can be grouped to aid focus – for example, by level, room or manufacturer. With this in mind, Tandem also has a ‘trend analysis’ capability, allowing users to identify potential future problems based on current performance patterns. The goal of these asset monitoring dashboards is to help drive preventative maintenance and planning for equipment replacement.

Tandem Connect

Digital twin creation and connectivity to live information means there is a big integration story to tell, and it’s different on nearly every implementation. Tandem is a cloud-based conduit, pooling information from multiple sources, which is then refined by each user to give them insight into layers of spatial and telemetric data. To do that, Autodesk needed integration tools to tap into, or export out to, the established systems, whether that be CAFM, IoT, BMS, BIM, CAD, databases etc.

Tandem Connect is designed to simplify that process and comes with prepackaged integration solutions for a broad range of commonly used BMS, IoT and asset management tools. This is not to be confused with other developments such as Tandem APIs or SDKs.


Image: Autodesk Tandem


The application was acquired and so has a different style of UI to other Autodesk products. Using a graphical front end, integrations can be initially plug and play, such as connecting to Microsoft Azure, through a graph interface. The core idea behind this is to ‘democratise the development of visual twins’ and not require a software engineer to get involved. However more esoteric connections may require some element of coding. Bray admitted there was significant ‘opportunity for consultancy’ that arises from the whole connectivity piece of the pie and that a few large system integrators were already talking with Autodesk about that opportunity.

Bray explained that Tandem Connect enables not only data inflow and outflow but also ‘workflow automation and data manipulation’. He gave an example where HVAC settings could be read into Tandem Connect, and a comfort index could be written, which was demonstrated at Autodesk University 2024.
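That comfort-index example is easy to picture. Below is a toy sketch of the sort of derived metric that might be written back; the formula and thresholds are invented for illustration and are not Autodesk’s.

    # Illustrative only: a toy comfort score from temperature and relative humidity.
    def comfort_index(temp_c: float, rh_percent: float) -> float:
        # Score 0..1: best near 21 degC and 45% RH, falling off linearly either side.
        temp_score = max(0.0, 1.0 - abs(temp_c - 21.0) / 6.0)
        rh_score = max(0.0, 1.0 - abs(rh_percent - 45.0) / 25.0)
        return round(0.6 * temp_score + 0.4 * rh_score, 2)

    print(comfort_index(23.5, 55))   # 0.59 for a slightly warm, humid room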

Product roadmap

Autodesk keeps a product roadmap which, given the regular video updates, has been a pretty accurate guide to the direction of travel.

Two of the more interesting capabilities in development are portfolio optimisation and the development of more SDK options, plus the possibility of future integration of applications. Portfolio optimisation will allow users to view data of multiple facilities in one central location and should provide analytics to predict future events with suggested actions for streamlining operations.

Beyond the current REST API, Autodesk is developing a full JavaScript Tandem SDK to build custom applications that leverage Tandem’s logic and visual interactivity. In the long term, Autodesk says it may enable extensions for developers to include functionality within the Tandem application itself.

Conclusion

Tandem development continues relentlessly. The capabilities being added now are starting to get into the high-value category. While refinements are always being made to creation and filtering, once the data is in, tagged and intelligently organised into systems, it’s then about deep integration, alerts for out-of-nominal operation at a granular level, and historical analysis of systems, spaces and rooms, all with easy visual feedback and the potential for yet more data analysis and intelligence.

Bray uses a digital twin maturity model to outline the key stages of development needed to realise the full potential of digital twin technology. It starts with building a Descriptive Twin (as-built replica), then Informative Twin (granular operational data), then Predictive Twin (enabling predictive analytics), Comprehensive Twin (what-if simulation) and Autonomous Twin (self-tuning facilities).

At the moment, Tandem is crossing from Informative to Predictive, but the stated intent for higher-level functionality is there. The warning, however, is that your digital twin is only ever as good as the quality of the data you put in.

Some of the early users of Tandem are now being highlighted by the company. In a recent webinar, Brendan Dillon, director of digital facilities & infrastructure at Denver International Airport, gave a deep dive into how they integrated Maximo with Tandem to monitor facility operations.

Tandem is an Autodesk outlier. It’s not a volume product and it’s not something that Autodesk’s channel can easily sell. It’s an investment in a product development that is quite unusual at the company. It doesn’t necessarily map to the way Autodesk currently operates as, from my perspective, it’s really a consultancy sale, to a relatively small number of asset owners – unlike Bentley Systems, whose digital twin offerings often operate at national scale across sectors like road and rail. The good news is that Autodesk has a lot of customers, and they will be self-selecting potential Tandem customers, knowing they need to implement a digital twin strategy and probably have a good understanding of the arduous journey that may be. The Tandem team is trying to make that as easy as possible and clearly developing it out in the open brings a level of interaction with customers that in these days is to be commended.

Meanwhile, with its acquisition of niche products such as Innovyze for hydraulic modelling, there are indications that Autodesk is looking to cater for more involved engagements with big facility owners. I see Tandem falling into that category for now, while the broader digital twin market has yet to be clearly defined.

The post Autodesk Tandem in 2025 appeared first on AEC Magazine.

Higharc AI 3D BIM model from 2D sketch https://aecmag.com/bim/higharc-ai-3d-bim-model-from-2d-sketch/ https://aecmag.com/bim/higharc-ai-3d-bim-model-from-2d-sketch/#disqus_thread Wed, 16 Apr 2025 05:00:07 +0000 https://aecmag.com/?p=23466 A cloud-based design solution for US timber frame housing presents impressive new AI capabilities

In the emerging world of BIM 2.0, there will be generic new BIM tools as well as expert systems dedicated to specific building types. Higharc is a cloud-based design solution for US timber frame housing. The company recently demonstrated impressive new AI capabilities

While AI is in full hype cycle and barely a day passes without some grandiose claim, the occasional press release still manages to raise wizened eyebrows at AEC Magazine HQ. North Carolina-based start-up Higharc has demonstrated a new AI capability which can automatically convert 2D hand sketches into 3D BIM models within its dedicated housing design system. This type of capability is something that several generic BIM developers are currently exploring in R&D.

Higharc AI, currently in beta, uses visual intelligence to auto-detect room boundaries and wall types by analysing architectural features sketched in plan. In a matter of minutes, the software then creates a correlated model comprising all the essential 3D elements that were identified in the drawing – doors, windows, and fixtures.


Find this article plus many more in the March / April 2025 Edition of AEC Magazine
👉 Subscribe FREE here 👈

Everything is fully integrated with Higharc’s existing auto-drafting, estimating, and sales tools, so that construction documents, take-offs, and marketing collateral can be automatically generated once the design work is complete.

In one of the demonstrations we have seen, a 2D sketch of a second floor is imported and analysed, and the software then automatically generates all the sketched rooms and doors, along with interior and exterior walls and windows. The AI-generated layout even drives the roof design, which adapts accordingly. Higharc AI is now available via a beta program to select customers.



Marc Minor, CEO and co-founder of Higharc, explains the driving force behind Higharc AI. “Every year, designers across the US waste weeks or months in decades-old CAD software just to get to a usable 3D model for a home,” he says.

“Higharc AI changes that. For the first time, generative AI has been successfully applied to BIM, eliminating the gap between hand sketches and web-based 3D models. We’re working to radically accelerate the home design process so that better homes can be built more affordably.”

AI demo

In the short video provided by Higharc, we can see a hand-drawn sketch imported into the Autolayout tool. The sketch is a plan view of a second floor, with bedrooms, bathrooms and stairs, and with walls, doors and windows indicated. There are some rough area dimensions and handwritten notes denoting room allocation type. The image is then analysed. The result is an opaque overlay, with each room (space) tagged appropriately, and a confirmation of how many rooms were found. There are settings for rectangle tolerance and minimum room area. The next phase is to generate the rooms from this space plan.

We then switch to Higharc’s real-time rendered modelling and drawing environment, where each room is inserted on the second floor of an existing single-storey residential BIM model. Walls, windows, doors and stairs are added and materials are applied, all while referencing an image of the sketch. The result is an accurate BIM model, created by combining traditional modelling with AI sketch-to-BIM generation.
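
To give a feel for the kind of raster analysis such a tool has to perform, here is a minimal sketch that finds roughly rectangular room regions in a scanned plan using OpenCV, with tolerance and minimum-area parameters in the spirit of the settings described above. It is an illustrative assumption, not Higharc’s actual pipeline, and the thresholds are arbitrary.

```python
# A minimal sketch of raster room detection, assuming OpenCV is installed.
# An illustrative stand-in for the kind of analysis described above,
# not Higharc's actual implementation; thresholds are arbitrary.
import cv2

def detect_rooms(image_path: str,
                 rect_tolerance: float = 0.02,
                 min_room_area_px: int = 5_000) -> list[tuple[int, int, int, int]]:
    """Return candidate room bounding boxes (x, y, w, h) found in a plan sketch."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Dark pen strokes become white foreground on a black background.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)

    rooms = []
    for contour in contours:
        area = cv2.contourArea(contour)
        if area < min_room_area_px:
            continue  # ignore text, fixtures and noise
        # Closed, near-rectangular outlines are treated as candidate rooms.
        perimeter = cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, rect_tolerance * perimeter, True)
        if 4 <= len(approx) <= 8:
            rooms.append(cv2.boundingRect(approx))
    return rooms

# Example: print candidate room rectangles for a scanned second-floor sketch.
# for box in detect_rooms("second_floor_sketch.png"):
#     print(box)
```

In a production system the detected regions would then be mapped to typed spaces before any walls, openings or roofs are generated, which is where the real value of a data-first BIM platform comes in.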

What is Higharc?

Founded in 2018, Higharc develops a tailored cloud-based BIM platform, specifically designed to automate and integrate the US housing market, streamlining the whole process of designing, selling and constructing new homes.

Higharc is a service sold to home builders. It provides a tailored solution which integrates 3D parametric modelling, the automatic creation of drawings, 3D visualisations, material quantities and cost estimates, related construction documents and planning permit applications. AEC Magazine looked at the development back in 2022.

The company’s founders, some of whom were ex-Autodesk employees, recognised the need for new cloud-based BIM tools and felt the US housing market offered a greenfield opportunity, as most of the developers and construction firms in this space had completely avoided the BIM revolution and were still tied to CAD and 2D processes. With this new concept, Higharc offered construction firms easy-to-learn design tools, which even prospective house buyers could use to design their dream homes. As the Higharc software models every plank and timber frame, accurate quantities can be connected to ERP systems for immediate and detailed pricing of every modification to the design.
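
The quantities-to-pricing link can be pictured with a toy takeoff: sum the components enumerated from the framing model, then price them against a unit-cost table. All of the SKUs and figures below are invented for illustration; a real workflow would pull live prices from the builder’s ERP or merchant catalogue.

```python
# A toy material takeoff: quantities per component priced against a unit-cost
# table. All SKUs and figures are invented for illustration; a real workflow
# would pull live prices from the builder's ERP or merchant catalogue.
from collections import Counter

# Components as they might be enumerated from a framing model (assumed shape).
components = [
    {"sku": "stud-2x4-2.4m", "qty": 320},
    {"sku": "plate-2x4-4.8m", "qty": 48},
    {"sku": "osb-sheathing-2440x1220", "qty": 96},
]

unit_prices = {                 # hypothetical price list, per unit
    "stud-2x4-2.4m": 3.85,
    "plate-2x4-4.8m": 8.10,
    "osb-sheathing-2440x1220": 14.50,
}

takeoff = Counter()
for item in components:
    takeoff[item["sku"]] += item["qty"]

total = sum(qty * unit_prices[sku] for sku, qty in takeoff.items())
for sku, qty in takeoff.items():
    print(f"{sku:28s} x{qty:4d} = {qty * unit_prices[sku]:9.2f}")
print(f"{'Estimated material cost':28s}       = {total:9.2f}")
```

Re-running a takeoff like this after every design tweak is what makes instant, detailed re-pricing possible.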

The company claims its technology enhances efficiency, accelerating a builder’s time to market by two to three times, reducing the timeline for designing and launching new plots by 75% (approximately 90 days). Higharc also claims that plan designs and updates are carried out 100 times faster than with traditional 2D CAD software.

To date, Higharc has raised $80 million, attracting significant investment and support from firms such as Home Depot Ventures and Standard Investments, as well as from former Autodesk CEO Carl Bass. The company has gained traction in the US market and its platform is being used to build over 40,000 homes annually, representing $19 billion in new home sales volume.

While Higharc’s initial go-to-market targeted established house building firms, the company has used the money raised to expand its reach to those who want to design and build their own homes. The investment by Home Depot would also indicate that the system will integrate with the popular building merchant, so self-builders can get access to more generic material supply information. The company also plans to extend the building types it can design, eventually adding retail and office to its residential origins.

In conversation

After the launch, AEC Magazine caught up with co-founder Michael Bergin and company CEO Marc Minor to dig a little deeper into the origins of the AI tool and how it’s being used. We discovered that this is probably the most useful AI introduction we have seen in any BIM solution to date, as it actually solves a real-world problem – it is not just a nice-to-have piece of demoware.

The only reason we were able to do it, is because of what Higharc is in the first place. It’s a data-first BIM system, built for the web from the ground up

In previous conversations with Higharc, it became apparent that the company had become successful – almost too successful – as onboarding new clients to the system was a bottleneck. Obviously, every house builder has different styles and capabilities which have to be captured and encoded in Higharc, but there was also the issue of digital skill sets. Typically, firms opting to use Higharc were not traditional BIM firms – they were housebuilders, more likely to use AutoCAD or a hand-drawn sketch than to have much understanding of BIM or modelling concepts. It turns out that the AI sketch tool originated from a need to include the non-digital, but highly experienced, house building workforce.


Marc Minor: The sketch we used to illustrate at launch is a real one, from one of our customers. We have a client, a very large builder in Texas, who builds 4,000 houses per year just in Texas. They have a team of 45 or so designers and drafters, and they have a process that’s very traditional. They start on drawing boards, just sketching. They spend three months or so in conceptual design and eventually they’ll pass on their sketches to another guy who works on the computer, where he models in SketchUp, so they can do virtual prototype walk-throughs to really understand the building and the design choices, and then make changes to it.

The challenge here is that it takes a long time to go back and forth. We showed them this new AI sketch-to-model work we were doing, and they gave us one of their sketches for one of the homes they’re working on. The results blew their mind. They said, for them, ‘this is huge’. They told us they can cut weeks or months from their conceptual stage and probably bring more folks in at the prototype walk-through stage. It’s a whole new way of interacting with design.

What makes this so special, and is the only reason we were able to do it, is because of what Higharc is in the first place. It’s a data-first BIM system, built for the web from the ground up. Because it’s data first, it means that we can not only generate a whole lot of synthetic data for training rapidly, but we really have a great target for a system like this – taking a sketch and trying to create something meaningful out of the sketch. It’s essentially trying to transform the sketch into our data model. And when you do that, you get all the other features and benefits of the Higharc system right on top of it.


Martyn Day: As the software processes the file, it seems to go through several stages. Is the first form finding?

Marc Minor: It’s not just form finding, actually, it’s mapping the rooms to particular data types. And those types carry with them all kinds of rules and settings.

Michael Bergin: At the conceptual / sketch design phase these are approximate dimensions. Once you’ve converted the rooms into Higharc, the model is extremely flexible. You can stretch all the rooms, you can scale them, and everything will replace itself and update automatically. We also have a grid resolution setting, so the sketch could even be a bubble diagram, or very rough lines, and you just set the grid resolution to be quite high, and you can still get a model out of that.

Higharc contains procedural logic as to how windows are placed, how the foundation is placed, and the relationships between the rooms. So the interaction you see as the AI processes the sketch and makes the model – placing the windows, the doors and the spaces between the rooms – is all coming from rules that relate to the specifications for our builder.
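
A crude way to picture the grid-resolution setting Bergin mentions: snap rough room rectangles to a coarse grid so that even a loose bubble diagram produces clean, adjoining spaces. The sketch below is a hypothetical illustration of that idea, not Higharc’s algorithm.

```python
# A hypothetical illustration of snapping rough room outlines to a grid so a
# loose sketch still yields clean, adjoining spaces. Not Higharc's algorithm.

def snap_room(x0: float, y0: float, x1: float, y1: float,
              grid_mm: float = 300.0) -> tuple[float, float, float, float]:
    """Snap a rough room rectangle (coordinates in mm) to the nearest grid lines."""
    def snap(v: float) -> float:
        return round(v / grid_mm) * grid_mm
    return (snap(x0), snap(y0), snap(x1), snap(y1))

# A wobbly hand-drawn bedroom, roughly 3.1 m x 3.6 m.
rough = (120.0, 95.0, 3240.0, 3710.0)
print(snap_room(*rough))                    # (0.0, 0.0, 3300.0, 3600.0)
# A coarser grid forgives an even rougher bubble diagram.
print(snap_room(*rough, grid_mm=600.0))     # (0.0, 0.0, 3000.0, 3600.0)
```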


Martyn Day: If doors collide, or designs do not comply with local codes, do you get alerted if you transgress some kind of design rule?

Michael Bergin: We have about 1,000 settings in Higharc that relate to the building and are there to adjust for and align with issues of code compliance. When you get into automated rule checking – evaluating and digesting code rules and then applying them to the model – we have produced some exciting results in more of a research phase in that direction. There are certainly lots of opportunities to express design logic and design rules, and we’ll continue to develop in that direction.
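
To give a flavour of what that kind of automated rule checking might look like, the sketch below tests rooms against a couple of made-up minimum-area rules. The thresholds and data model are illustrative assumptions, not actual building-code values or Higharc’s settings.

```python
# Illustrative only: the rule thresholds and data model are made up, not real
# building-code values or Higharc's compliance settings.

MIN_AREA_M2 = {"bedroom": 7.0, "bathroom": 3.0, "kitchen": 6.0}  # assumed rules

def check_rooms(rooms: list[dict]) -> list[str]:
    """Return human-readable warnings for rooms that break the assumed rules."""
    warnings = []
    for room in rooms:
        minimum = MIN_AREA_M2.get(room["type"])
        if minimum is not None and room["area_m2"] < minimum:
            warnings.append(
                f"{room['name']}: {room['area_m2']} m2 is below the "
                f"{minimum} m2 minimum for a {room['type']}"
            )
    return warnings

plan = [
    {"name": "Bed 2", "type": "bedroom", "area_m2": 6.2},
    {"name": "Bath 1", "type": "bathroom", "area_m2": 3.4},
]
for warning in check_rooms(plan):
    print(warning)   # flags Bed 2 as undersized; Bath 1 passes
```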

Marc Minor: One of the ways we use this is when we go to a home builder we want as a customer. In advance of having a sales chat, we’ll actually go to their website and screenshot one of their floor plans. We’ll pull it into the AI tool and set it up as the house. We want to help folks understand that it’s not as painful and as hard as you might think. The whole BIM revolution happened in commercial; that’s kind of what’s happening in home building now. But 90% or more of all home builders use AutoCAD. We rarely come across Revit.


Martyn Day: I can see how you can bring non-digital housebuilders into the model creation side of things, where before everything would be handled by the computer expert. With this AI tool, does that mean suddenly everyone can contribute to the Higharc model?

Michael Bergin: Yes! That’s extremely important to us. Bringing more of the business into the realm of the design is really the core of our business. How do we bring the purchasing and estimating user into the process of design? How do we take the operations user, who’s scheduling all of the work to be done on the home, into the design? Because ultimately, they all have feedback. The sales people have feedback. The field team have feedback, but they’re all blocked out. They are always working through an intermediary, perhaps through an email to a CAD operator, and it goes into a backlog. Cutting the distance between all the stakeholders in the design process and the artefact of the design has driven a lot of our development.

It’s exciting to see them engaging in the process, to see new opportunities opening up for them, which I think is broadly a great positive aspect of what’s happening with the AI revolution.


Martyn Day: You have focused on converting raster images, which is hard, as opposed to vector. But could you work with vector drawings?

Michael Bergin: While it would have been easier to use a vector representation to do the same AI conversion work, the reason we focused on raster is that vector would have been quite limiting. It would have blocked us out from using conceptual representations. If our customers are using a digital tool at all, they are building sketches in something like FigJam. In this early conceptual design stage, we have not seen the Rayon tools, or really any of the new class of tools the market is opening up for. Our market of US home builders tends to work the way it has been working for some decades, and it works well for them, and we are fortunate that they have determined that Higharc is the right tool for their business.

Making it possible for their business processes to change has required us to develop a lot of capabilities, like integrating with the purchasing and estimation suite, integrating with the sales team, integrating with ERPs – really mirroring their business. Otherwise, I don’t think we would have an excellent case for the adoption of new tools in this industry.

The post Higharc AI 3D BIM model from 2D sketch appeared first on AEC Magazine.
