Visualisation Archives - AEC Magazine
https://aecmag.com/visualisation/

Chaos boosts Corona 14 with AI
https://aecmag.com/visualisation/chaos-boosts-corona-14-with-ai/ – Wed, 12 Nov 2025

New features include support for Gaussian Splats, AI-powered creation, Night Sky, and Fabric Materials

Chaos has released Corona 14, the latest version of its photorealistic architectural rendering engine for 3ds Max and Cinema 4D. New features include AI-assisted creation tools, support for Gaussian Splats, procedural material generation, and new environmental effects.

In Corona 14, support for Gaussian Splats enables visualisers to “rapidly create” 3D scenes by placing buildings in a real-world context, and to render complex 3D environments with accurate reflections and refractions.

Gaussian Splats, which use AI to create a rich 3D scene from a series of photos or videos, are said to yield smoother surfaces, richer volumetric detail and a more natural sense of depth for designers looking to bring real-life environments and objects into their work.
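For readers unfamiliar with the technique, the sketch below is purely illustrative (it follows the published 3D Gaussian Splatting formulation, not Chaos’s internal format): a captured scene is essentially a very large array of records like this, which the renderer projects and blends into an image.

```python
# Illustrative sketch of one Gaussian splat record, loosely following the
# published 3D Gaussian Splatting formulation -- not Chaos's internal format.
from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianSplat:
    position: np.ndarray   # (3,) centre of the Gaussian in world space
    rotation: np.ndarray   # (4,) unit quaternion orienting the ellipsoid
    scale: np.ndarray      # (3,) per-axis extent of the ellipsoid
    opacity: float         # alpha used when splats are blended together
    sh_coeffs: np.ndarray  # (16, 3) spherical-harmonic coefficients for
                           # view-dependent colour

    def covariance(self) -> np.ndarray:
        """World-space 3x3 covariance: R S S^T R^T (the ellipsoid's shape)."""
        w, x, y, z = self.rotation
        R = np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])
        S = np.diag(self.scale)
        return R @ S @ S.T @ R.T

# A captured street or interior is typically millions of these records.
splat = GaussianSplat(np.zeros(3), np.array([1.0, 0.0, 0.0, 0.0]),
                      np.full(3, 0.1), 0.9, np.zeros((16, 3)))
print(splat.covariance())  # 3x3 matrix describing the splat's footprint
```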



Corona 14 also provides access to a range of AI tools, which can be enabled or disabled, so creatives can decide when to apply them and firms can comply with strict AI policies and client requirements.

The Chaos AI Material Generator allows creators to upload a photo of a real-world surface and turn the image into a tileable, render-ready PBR material, including all necessary maps, in a few clicks. According to Chaos, it’s an ideal solution for secondary materials that don’t require art direction – but without any loss of realism.

Meanwhile, the Chaos AI Image Enhancer is designed to elevate the realism, texture, and detail of supporting elements — such as foliage, people, or terrain — without altering the core design.

Advanced controls allow users to adjust the appearance of people assets and refine vegetation for precision and consistency. Corona 14 also lets users send LightMix results directly to the AI Image Enhancer to explore multiple lighting scenarios or fine-tune the mood.

AI Upscaler is designed to turn low-resolution drafts or renders into high-quality, presentation-ready visuals. According to Chaos, this can save hours of rendering time while still delivering crisp, photoreal results.

Elsewhere, a new Night Sky feature allows designers to add realistic moonlight, stars, and the galactic backdrop of the Milky Way without having to rely on HDRIs.

Finally, a new Fabric Material feature creates fabrics with “true-to-life” woven detail, with full control over the weave or threads — including opacity, bump, displacement, and more.


Chaos Night Sky

Chaos V-Ray to support AMD GPUs
https://aecmag.com/visualisation/chaos-v-ray-to-support-amd-gpus/ – Mon, 13 Oct 2025

Photorealistic rendering software will now work on the AMD Ryzen AI Max Pro processor with up to 96 GB of graphics memory

Chaos V-Ray will soon support AMD GPUs, so users of the photorealistic rendering software can choose from a wider range of graphics hardware including the AMD Radeon Pro W7000 series and the AMD Ryzen AI Max Pro processor that has an integrated Radeon GPU.

Until now, V-Ray’s GPU renderer has been limited to Nvidia RTX GPUs via the CUDA platform, while its CPU renderer has long worked with processors from both Intel and AMD.

Chaos plans to roll out the changes publicly in every edition of V-Ray, including those for 3ds Max, SketchUp, Revit, Rhino, Maya, and Blender.

At Autodesk University last month, both Dell and HP showcased V-Ray GPU running on AMD GPUs – Dell on a desktop workstation with a discrete AMD Radeon Pro W7600 GPU, and HP on an HP ZBook Ultra G1a with the new AMD Ryzen AI Max+ 395 processor, where up to 96 GB of the 128 GB of unified memory can be allocated as VRAM.



“[With the AMD Ryzen AI Max+ 395] you can load massive scenes without having to worry so much about memory limitations,” says Vladimir Koylazov, head of innovation, Chaos. “We have a massive USD scene that we use for testing, and it was really nice to see it actually being rendered on an AMD [processor]. It wouldn’t be possible on [most] discrete GPUs, because they don’t normally have that much memory.”

This new capability has been made possible through AMD HIP (Heterogeneous-Compute Interface for Portability) — an open-source toolkit that allows developers to port CUDA-based GPU applications to run on AMD hardware without the need to create and maintain a new code base.

“HIP handles complicated pieces of code, like V-Ray GPU, a lot better than OpenCL used to do,” says Koylazov. “Everything we support in V-Ray GPU on other platforms is now supported on AMD GPUs.”

Chaos isn’t alone in embracing AMD GPUs. Earlier this year, product design-focused viz tool KeyShot also added support, which we put to the test in our HP ZBook Ultra G1a review.


Chaos Vantage 3 launches for real time viz
https://aecmag.com/visualisation/chaos-vantage-3-launches-for-real-time-viz/ – Mon, 20 Oct 2025

Latest release of real-time ray tracing software adds support for Gaussian Splats, AI Materials, USD and MaterialX

Chaos has released Chaos Vantage 3, a major update to its visualisation platform that allows AEC professionals to explore arch viz scenes in real time, complete with ray tracing.

Headline features include support for Gaussian splats, enabling users to place their projects directly into lifelike environments; USD and MaterialX, for asset exchange across varied pipelines; and access to the Chaos AI Material Generator, which gives AEC users precise control over the look of a scene.

“Vantage has always been about giving artists and designers an immediate, photoreal view of their work, whether they’re creating buildings, products or entire worlds,” said Allan Poore, chief product officer, Chaos. “With Vantage 3, we’ve taken that even further for AEC users by introducing USD and MaterialX support, adding new tools to explore designs in a real-world context, refining materials and lighting with greater control, all while keeping the creative process fast, fluid and inspiring.”

With Vantage 3, AEC users can now make the most of Gaussian splats that are part of their V-Ray Scene files. Gaussian splatting allows the real world to be captured as detailed 3D data, quickly turning photos or scans of objects, streets or entire neighbourhoods into editable 3D scenes.

Architects and designers can then place their projects directly into lifelike environments, creating an immediate sense of scale and context. New volumetric rendering takes the immersion even further by adding fog, smoke and light rays, while the Night Sky system introduces astronomically accurate stars, moon phases and even the Milky Way for striking exterior views.

Users also have access to Chaos AI Material Generator, available directly inside the Chaos Cosmos browser, and a new material editor, giving AEC users precise control over the look of a scene, down to the smallest detail.

Integration with the Chaos Cosmos 3D asset library adds thousands of ready-to-use assets — from people and vegetation to furniture — while support for USD unlocks the entire KitBash3D library of 20,000+ production-ready assets.

Chaos: from pixels to prompts
https://aecmag.com/visualisation/chaos-from-pixels-to-prompts/ – Thu, 09 Oct 2025

Chaos is blending generative AI with traditional visualisation, rethinking how architects explore, present and refine ideas using tools like Veras, Enscape, and V-Ray, writes Greg Corke

From scanline rendering to photorealism, and real-time viz to real-time ray tracing, architectural visualisation has always evolved hand in hand with technology.

Today, the sector is experiencing what is arguably its biggest shift yet: generative AI. Tools such as Midjourney, Stable Diffusion, Flux, and Nano Banana are attracting widespread attention for their ability to create compelling, photorealistic visuals in seconds — from nothing more than a simple prompt, sketch, or reference image.

The potential is enormous, yet many architectural practices are still figuring out how to properly embrace this technology, navigating practical, cultural, and workflow challenges along the way.

The impact on architectural visualisation software as we know it could be huge. But generative AI also presents a huge opportunity for software developers.



Like some of its peers, Chaos has been gradually integrating AI-powered features into its traditional viz tools, including Enscape and V-Ray. Earlier this year, however, it went one step further by acquiring EvolveLAB and its dedicated AI rendering solution, Veras.

Veras allows architects to take a simple snapshot of a 3D model or even a hand-drawn sketch and quickly create ‘AI-rendered’ images with countless style variations. Importantly, the software is tightly integrated with CAD / BIM tools like SketchUp, Revit, Rhino, Archicad and Vectorworks, and offers control over specific parts within the rendered image.

With the launch of Veras 3.0, the software’s capabilities now extend to video, allowing designers to generate short clips featuring dynamic pans and zooms, all at the push of a button.

“Basically, [it takes] an image input for your project, then generates a five second video using generative AI,” explains Bill Allen, director of products, Chaos. “If it sees other things, like people or cars in the scene, it’ll animate those,” he says.

This approach can create compelling illusions of rotation or environmental activity. A sunset prompt might animate lighting changes, while a fireplace in the scene could be made to flicker. But there are limits. “In generative AI, it’s trying to figure out what might be around the corner [of a building], and if there’s no data there, it’s not going to be able to interpret it,” says Allen.

Chaos is already looking at ways to solve this challenge of showcasing buildings from multiple angles. “One of the things we think we could do is take multiple shots – one shot from one angle of the building and another one – and then you can interpolate,” says Allen.


Model behaviour

Veras uses Stable Diffusion as its core ‘render engine’. As the generative AI model has advanced, newer versions of Stable Diffusion have been integrated into Veras, improving both realism and render speed, and allowing users to achieve more detailed and sophisticated results.

“We’re on render engine number six right now,” says Allen. “We still have render engines four, five and six available for you to choose from in Veras.”

But Veras does not necessarily need to be tied to a specific generative AI model. In theory it could evolve to support Flux, Nano Banana or whatever new or improved model variant may come in the future.

But, as Allen points out, the choice of model isn’t just down to the quality of the visuals it produces. “It depends on what you want to do,” he says. “One of the reasons that we’re using Stable Diffusion right now instead of Flux is because we’re getting better geometry retention.”
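To make the geometry-retention trade-off concrete, here is a minimal, generic image-to-image sketch using the open-source Stable Diffusion route via Hugging Face diffusers. It is not Veras’s actual implementation, and the file names are hypothetical; the strength parameter is the dial in question, with lower values keeping more of the input view’s geometry.

```python
# Generic Stable Diffusion img2img sketch (Hugging Face diffusers), not Veras
# itself. File names are hypothetical placeholders.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",   # any Stable Diffusion checkpoint works
    torch_dtype=torch.float16,
).to("cuda")

viewport = Image.open("massing_view.png").convert("RGB")  # plain model snapshot

result = pipe(
    prompt="brick apartment building at dusk, photorealistic, soft lighting",
    image=viewport,
    strength=0.45,        # lower = retain more of the input geometry
    guidance_scale=7.5,   # how strongly the text prompt is followed
).images[0]
result.save("ai_render.png")
```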

One thing that Veras doesn’t yet have out of the box is the ability for customers to train the model using their own data, although as Allen admits, “That’s something we would like to do.”

In the past Chaos has used LoRAs (Low-Rank Adaptations) to fine-tune the AI model for certain customers in order to accurately represent specific materials or styles within their renderings.

Roderick Bates, head of product operations, Chaos, imagines that the demand for fine tuning will go up over time, but there might be other ways to get the desired outcome, he says. “One of the things that Veras does well is that you can adjust prompts, you can use reference images and things like that to kind of hone in on style.”


Chaos Veras 3.0 – still #1
Chaos Veras 3.0 – still #2

Post-processing

While Veras experiments with generative creation, Chaos is also exploring how AI can be used to refine output from its established viz tools using a variety of AI post-processing techniques.

Chaos AI Upscaler, for example, enlarges render output by up to four times while preserving photorealistic quality. This means scenes can be rendered at lower resolutions (which is much quicker), then at the click of a button upscaled to add more detail.
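The same render-small-then-enlarge workflow can be illustrated with an open diffusion upscaler. The sketch below uses the generic Stable Diffusion x4 upscaler in Hugging Face diffusers, not Chaos AI Upscaler (whose internals are not public), and the file names are hypothetical.

```python
# Generic diffusion-based 4x upscaling sketch (not Chaos AI Upscaler).
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

draft = Image.open("enscape_draft.png").convert("RGB")   # hypothetical low-res render
hires = pipe(prompt="architectural exterior, photorealistic", image=draft).images[0]
hires.save("presentation_4x.png")   # roughly four times the draft's resolution
```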

While AI upscaling technology is widely available – both online and in generic tools like Photoshop – Chaos AI Upscaler benefits from being accessible at the click of a button inside the viz tools, like Enscape, that architects already use. Bates points out that if an architect uses another tool for this process, they must download the rendered image first, then upload it elsewhere, which fragments the workflow. “Here, it’s all part of an ecosystem,” he explains, adding that it also avoids the need for multiple software subscriptions.

Chaos is also applying AI in more intelligent ways, harnessing data from its core viz tools. Chaos AI Enhancer, for example, can improve rendered output by refining specific details in the image. This is currently limited to humans and vegetation, but Chaos is looking to extend this to building materials.

“You can select different genders, different moods, you can make a person go from happy to sad,” says Bates, adding that all of this can be done through a simple UI.

There are two major benefits: first, you don’t have to spend time searching for a custom asset that may or may not exist and then have to re-render; second, you don’t need highly detailed 3D asset models to achieve the desired results, which would normally require significant computational power, or may not even be possible in a tool like Enscape.

With Veras 3.0, the software’s capabilities now extend to video, allowing designers to generate short clips featuring dynamic pans and zooms, all at the push of a button

The real innovation lies in how the software applies these enhancements. Instead of relying on the AI to interpret and mask off elements within an image, Chaos brings this information over from the viz tool directly. For example, output from Enscape isn’t just a dumb JPG — each pixel carries ‘voluminous metadata’, so the AI Enhancer automatically knows that a plant is a plant, or a human is a human. This makes selections both easy and accurate.
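The sketch below illustrates that mechanism in the most generic terms: a per-pixel object-ID pass from the renderer, rather than AI segmentation, drives the selection masks. The pass layout and category IDs are hypothetical, not Enscape’s actual buffers.

```python
# Hedged sketch: building pixel-accurate masks from a renderer's object-ID pass.
import numpy as np

VEGETATION_ID, PERSON_ID = 12, 7                    # hypothetical category IDs

beauty = np.random.rand(720, 1280, 3)               # stand-in for the rendered image
id_pass = np.random.randint(0, 20, (720, 1280))     # stand-in for per-pixel metadata

masks = {
    "vegetation": id_pass == VEGETATION_ID,         # every plant pixel, exactly
    "people": id_pass == PERSON_ID,                 # every person pixel, exactly
}

# Only the masked regions would be handed to the cloud enhancer; a mild
# contrast lift stands in here for the AI-refined pixels.
enhanced = beauty.copy()
for mask in masks.values():
    enhanced[mask] = np.clip((enhanced[mask] - 0.5) * 1.1 + 0.5, 0.0, 1.0)
```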

As it stands, the workflow is seamless: a button click in Enscape automatically sends the image to the cloud for enhancement.

But there’s still room for improvement. Currently, each person or plant must be adjusted individually, but Chaos is exploring ways to apply changes globally within the scene.

Chaos AI Enhancer was first introduced in Enscape in 2024 and is now available in Corona and V-Ray 7 for 3ds Max, with support for additional V-Ray integrations coming soon.

AI materials

Chaos is also extending its application of AI into materials, allowing users to generate render-ready materials from a simple image. “Maybe you have an image from an existing project, maybe you have a material sample you just took a picture of,” says Bates. “With the [AI Material Generator] you can generate a material that has all the appropriate maps.”

Initially available in V-Ray for 3ds Max, the AI Material Generator is now being rolled out to Enscape. In addition, a new AI Material Recommender can suggest assets from the Chaos Cosmos library, using text prompts or visual references to help make it faster and easier to find the right materials.

Cross pollination

Chaos is uniquely positioned within the design visualisation software landscape. Through Veras, it offers powerful one-click AI image and video generation, while tools like Enscape and V-Ray use AI to enhance classic visualisation outputs. This dual approach gives Chaos valuable insight into how AI can be applied across the many stages of the design process, and it will be fascinating to see how ideas and technologies start to cross-pollinate between these tools.

A deeper question, however, is whether 3D models will always be necessary. “We used to model to render, and now we render to model,” replies Bates, describing how some firms now start with AI images and only later build 3D geometry.

“Right now, there is a disconnect between those two workflows, between that pure AI render and modelling workflow – and those kind of disconnects are inefficiencies that bother us,” he says.

For now, 3D models remain indispensable. But the role of AI — whether in speeding up workflows, enhancing visuals, or enabling new storytelling techniques — is growing fast. The question is not if, but how quickly, AI will become a standard part of every architect’s viz toolkit.

Chaos launches AI image-to-video generator
https://aecmag.com/visualisation/chaos-launches-ai-image-to-video-generator/ – Tue, 16 Sep 2025

Veras 3.0 is one of several new AI-powered tools from the viz specialist and developer of V-Ray and Enscape

Veras 3.0, the latest release of the AI-powered visualisation software from Chaos, includes a new image-to-video generation capability designed to transform static renderings into ‘dynamic animations’ through simple prompts.

It builds on the software’s original capabilities, which enable AEC professionals to take 3D models, 2D drawings, and images and quickly create AI-rendered design ideas and style variations.

Image-to-video generation in Veras allows designers to pan and zoom cameras, animate weather, and change the time of day with ‘just a few clicks.’ Once the look is determined, motion can be added to the scene through vehicles and digital people, turning still images into ‘immersive, moving stories.’
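For readers curious about the underlying technique, the sketch below shows a generic open-source equivalent: a single still passed to an image-to-video diffusion model (Stable Video Diffusion via Hugging Face diffusers). It is not Veras, the file names are hypothetical, and unlike Veras this particular model takes no text prompt.

```python
# Generic image-to-video sketch (Stable Video Diffusion), not Veras itself.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16
).to("cuda")

still = load_image("courtyard_render.png")           # hypothetical static rendering
frames = pipe(still, decode_chunk_size=8).frames[0]  # a short burst of frames
export_to_video(frames, "courtyard_clip.mp4", fps=7)
```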

Veras was one of the first AI visualisation tools to be designed specifically for AEC, integrating natively with Revit, Rhino, SketchUp and other modelling software. It was originally developed by EvolveLAB, which Chaos acquired in 2024.



Chaos has also previewed three new AI-powered tools – AI Mood Match, AI Upscaler and Cylindo Quickshot.

AI Mood Match allows users to match ‘complex lighting and environment’ settings automatically to a reference image, removing the need for manual sky and sun adjustments.

AI Upscaler uses AI to enlarge render outputs by up to four times, while ‘preserving photoreal’ quality.

Cylindo Quickshot, a tool purpose-built for furniture, enables product and marketing teams to turn standard product images into photorealistic lifestyle scenes in ‘just a few clicks.’

According to Chaos, Cylindo Quickshot preserves detail and scale accuracy while giving users control over lighting, props, and backgrounds. It addresses common AI shortcomings, such as distorted products, inconsistent details, and off-brand results.

Meanwhile, Chaos has also announced improvements to some existing AI tools – AI Enhancer and AI Material Generator.

AI Enhancer automatically transforms flat renders of people and vegetation into photoreal elements. It was first introduced in Enscape in 2024 and is now available in Corona and V-Ray 7 for 3ds Max, with support for additional V-Ray integrations coming soon.

AI Material Generator, currently available for V-Ray for 3ds Max, will soon be available for more V-Ray integrations and other Chaos products. The software transforms real-world photos into reusable physically-based materials that can be stored in Chaos Cosmos.

“The AI Enhancer saves me time by refining images instantly, reducing the need for extra post-production,” said Agnieszka Klich, Co-founder, archvizartist.com. “When I don’t have textures available online, the Material Generator lets me quickly create a material without disrupting my workflow. Both tools make the process faster and more creative, which is exactly what I value in Corona.”

“As AI continues to transform the AEC industry, Chaos is paving the way with responsible AI tools designed to serve as creative companions for architects, designers, and visualization artists while ensuring that they retain control and ownership of their work,” said Iveta Cabajova, recently appointed Chaos CEO.

Motif introduces ‘single click’ AI rendering
https://aecmag.com/visualisation/motif-introduces-single-click-ai-rendering-tool/ – Thu, 17 Jul 2025

Architecturally tuned AI renderer balances simplicity with easy customisation

Motif has added a new AI rendering technology to its AEC collaboration platform, which is designed for early-stage design exploration or more developed design presentations.

Motif explains that, unlike generic AI rendering tools, its technology is specifically optimised for architectural visualisation and understands the nuances of building design, materials, and spatial relationships.

The new tool aims to simplify the rendering process by offering single-click generation of architectural images. The idea is to allow architects to generate multiple visualisation options much quicker than is possible with traditional rendering workflows.




The renderer works directly with 3D models streamed from Revit or Rhino. Alternatively, it can take input directly from a sketch or image of a physical model, allowing architects to quickly transform simple ideas into rendered images in seconds.

Motif’s AI rendering technology focuses on maintaining original geometric designs while interpreting material properties and environmental contexts.

From any given view, renders can be created with a single click, or multiple rendering styles can be applied, such as ‘photorealistic’, ‘photorealistic black and white’, ‘watercolour’, and ‘posterised’. For each style, users are presented with multiple variations on a theme, all displayed alongside each other in Motif’s ‘infinite canvas’.

Renders can be refined further through simple modifiers for environment (e.g. rural, forest, mountain, desert), weather (clear, cloudy, foggy, snowy) and time of day (sunrise, daytime, sunset, night). For more control, users can define open-ended text prompts to add specific colours or materials, or to augment renders with people, furniture, and foliage.
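As a purely hypothetical illustration (Motif has not published its internals), the sketch below shows how preset modifiers like these, plus an open-ended prompt, might be combined into the text that ultimately drives an image model.

```python
# Hypothetical sketch only: Motif's internals are not public. This simply shows
# how preset modifiers plus an open-ended prompt could be combined into the
# text that drives an image model.
STYLES = {"photorealistic", "photorealistic black and white", "watercolour", "posterised"}
ENVIRONMENTS = {"rural", "forest", "mountain", "desert"}
WEATHER = {"clear", "cloudy", "foggy", "snowy"}
TIMES = {"sunrise", "daytime", "sunset", "night"}

def build_prompt(style: str, environment: str, weather: str, time_of_day: str,
                 extra: str = "") -> str:
    """Join the selected modifiers and any free-text additions into one prompt."""
    assert style in STYLES and environment in ENVIRONMENTS
    assert weather in WEATHER and time_of_day in TIMES
    parts = [style, f"{environment} setting", f"{weather} weather", time_of_day, extra]
    return ", ".join(p for p in parts if p)

print(build_prompt("watercolour", "forest", "foggy", "sunrise",
                   "timber facade, people in the foreground"))
```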

Meanwhile, learn more about Motif in this in-depth article about Motif V1


BBC Television Centre
https://aecmag.com/sponsored-content/bbc-television-centre/ – Wed, 30 Jul 2025

Breathing new life into a cultural icon with technology

One of London’s most iconic landmarks, the former BBC Television Centre, is a cornerstone of British history. After over 50 years of service, the centre underwent significant redevelopment to breathe new life into this much-loved part of the city’s skyline. In 2013, the site was transformed into a mixed-use destination, combining luxury residences, creative workspaces and premium amenities.

However, reimagining a complex once celebrated as the “cathedral of entertainment and news” presented a major challenge – preserving its cultural heritage while adapting the space for modern living.

The challenge: selling the vision before construction

The Boundary, a global creative agency specialising in best-in-class photorealistic visuals and immersive experiences for products, architecture and the built environment, was selected to bring the redevelopment to life. The Boundary focused on the new residential building in the complex, The Ariel – a residence yet to be built.

The team crafted photorealistic marketing visuals that honoured the site as a cultural landmark, whilst also creating a visual experience that could be imagined as a home for future residents. The Boundary’s deliverables included CGIs, animations, a marketing film, and interactive real-time tours.

The Ariel project demanded a visual narrative that would help prospective buyers and tenants imagine life within the space. Materials needed to capture atmosphere, from how light shifts throughout the day, to how textures feel, and how people move throughout the space. Bringing that level of realism to life – with both accuracy and emotion – required the right technology.


Transforming visuals with Chaos Corona

Redefining design workflows with up to 80% time savings

The Boundary relied on Chaos Corona, an easy-to-use rendering software for architectural visualisation, for all rendered content. The tool’s flexibility meant the team could create photorealistic images and easily tweak them based on feedback, without having to start from scratch each time the developer requested a change. This made the design process faster and more collaborative, saving time and money for the wider project, with fewer iterations being produced.

The interiors at The Television Centre showcased a complex ceiling design that went through numerous iterations. What would typically require skilled modellers several hours to model and implement changes at each design development stage was streamlined to a fraction of the time using Corona’s Slicer tool.

By leveraging the tool’s ability to modify geometry with remarkable precision, the team significantly decreased the time spent on remodelling while ensuring accuracy. They could swiftly implement the design changes while ensuring the ceiling’s intricate details aligned with aesthetic and functional requirements.

Used with the Slicer, the Corona Pattern tool simplified the mapping of the ceiling’s intricate details and complex panelling. When designs changed multiple times, the team could adapt without compromising quality or project timelines.

The Boundary estimates that Corona tools saved up to 80% of the usual time spent to generate a render as intricate as this one, without compromising quality. In turn, the ability to rapidly test and refine designs significantly lowered expenses related to mistakes and delays.

Making space for creativity

At the heart of any project lies creativity. The Boundary relied on Corona Sun and Sky to create initial clay compositions, offering greater flexibility and the ability to test solutions using a range of tools. Sun direction and size were changed to quickly output lighting tests from morning to night, from sharp sun to overcast. With Volume Effect, the team could create depth that would otherwise require costly volumetrics. Designs, textures and materials were trialled and refined with technology that gave the team creative licence, without the risk of costly mistakes.


When vision meets technology through partnership

Today, what was once home to the BBC is now set to be home to approximately 950 residents, enjoying a reimagined creative district with a host of new amenities. The revival of an icon was never an easy project to conceptualise, but through emotive visuals, the end goal became easier to picture. The success of the redevelopment hinges not just on technology, but on the creative teams who utilised it to its full potential.

For Chaos, each of our customers is viewed as a creative partner. Projects like this exemplify that collaboration, giving us the chance to see the tangible impact of our tools while building strong, lasting relationships with the people who use them. Meanwhile, our customers can rely on technology that they can trust.

“I have been using Corona since the very initial release and never looked back. It is one of the most utilised and trusted software we use in our studio.”

Eleonora Galimberti, Senior Associate at The Boundary

This redevelopment is a testament to what happens when creative vision meets trusted technology, transforming not just spaces but the way we experience them.

Visit www.chaos.com to learn more.

Ai & design culture (part 2)
https://aecmag.com/ai/ai-design-culture-part-2/ – Thu, 24 Jul 2025

In the second of a two-part article on Ai image generation and the culture behind its use, Keir Regan-Alexander gives a sense of how architects are using Ai models and takes a deeper dive into Midjourney V7 and how it compares to Stable Diffusion and Flux

In the first part of this article I described the impact of new LLM-based image tools like GPT-Image-1 and Gemini 2.0 Flash (Experimental Image Mode).

Now, in this second part, I turn my focus to Midjourney, a tool that has recently undergone a few pivotal changes that I think are going to have a big impact on the fundamental design culture of practices. That makes them worthy of critical reflection as practices begin testing and adopting them:


1) Retexture – Reduces randomness and brings “control net” functionality to Midjourney (MJ). This means rather than starting with random form and composition, we give the model linework or 3D views to work from. Previously, despite the remarkable quality of image outputs, this was not possible in MJ.

2) Moodboards – Make it easy to very quickly “train your own style” with a small collection of image references. Previously we have had to train “LoRAs” in Stable Diffusion (SD) or Flux, taking many hours of preparation and testing. Moodboards provide a lower fidelity but much more convenient alternative.

3) Personal codes – Tailor your outputs to your taste profile using ‘Personalize’ (US spelling). You can train your own “–p” code by offering up hundreds of your own A/B test preferences within your account – you can then switch to your ‘taste’ profile extremely easily. In short, once you’ve told MJ what you like, it gets a whole lot better at giving it back to you each time.

A model that instantly knows your aesthetic preferences

Personal codes (or “Personalization” codes, to be more precise) allow us to train MJ on our style preferences for different kinds of image material. To better understand the idea, in Figure 1 below you’ll see a clear example of running the same text prompt both with and without my “–p” code. For me there is no contest: I consistently prefer the images generated with my –p code, by a wide margin, over those generated without it.


Keir Regan-Alexander
(Left) an example of a generic MJ output, from a text prompt. The subject is a private house design in Irish landscape. (Right) an output running the exact same prompt, but applying my personal “–p” code, which is trained on my preferences of more than 450 individual A/B style image rankings

When enabled, Personalization substantially improves the average quality of your output, everything goes quickly from fairly generic ‘meh’ to ‘hey!’. It’s also now possible to develop a number of different personal preference codes for use in different settings. For example, one studio group or team may have a desire to develop a slightly different style code of preferences to another part of the studio, because they work in a different sector with different methods of communication.




Midjourney vs Stable Diffusion / Flux

In the last 18 months, many heads have been turned by the potential of new tools like Stable Diffusion in architecture, because they have allowed us to train our own image styles, render sketches and gain increasingly configurable control over image generation using Ai – often without even making a 3D model. Flux, a new parallel open-source model ecosystem, has taken the same methods and techniques from SD and added greater levels of quality.

We may marvel at what Ai makes possible in shorter time frames, but we should all be thinking – “great, let’s try to make a bit more profit this year” not “great let’s use this to undercut my competitor

But for ease of use, broad accessibility and consistency of output, the closed-source (and paid product) Midjourney is now firmly winning for most practices I speak to that are not strongly technologically minded.

Anecdotally, when I do Ai workshops, perhaps 10% of attendees really ‘get’ SD, whereas more like 75% immediately tend to click with Midjourney and I find that it appeals to the intuitive and more nuanced instincts of designers who like to discover design through an iterative and open-ended method of exploration.

While SD & Flux are potentially very low cost to use (if you run them locally and have the prerequisite GPUs) and offer massive flexibility of control, they are also much, much harder to use effectively than MJ and, more recently, GPT-4o.

For a few months now, Midjourney has sat within a slick web interface that is very intuitive to use and will produce top-quality output with minimal stress and technical research.

Before we reflect on what this means for the overall culture of design in architectural practice going forwards, here are two notable observations to start with:

1) Practices who are willing to try their hand with diffusion models during feasibility or competition stage are beginning to find an edge. More than one recent conversation is suggesting that the use of diffusion models during competition stages has made a pivotal difference to recent bid processes and partially contributed to winning proposals.

2) I now see a growing interest from my developer client base, who want to go ahead and see vivid imagery even before they’ve engaged an architect or design team – they simply have an idea and want to go directly to seeing it visualised. In some cases, developers are looking to use Ai imagery to help dispose of sites, to quickly test alternative (visual) options to understand potential, or to secure new development contracts or funding.

Make of that what you will. I’m sure many architects will be cringing as they read that, but I think both observations are key signals of things to come for the industry whether it’s a shift you support or not. At the same time, I would say there is certainly a commercial opportunity there for architects if they’re willing to meet their clients on this level, adjust their standard methods of engagement and begin to think about exactly what value they bring in curating initial design concepts in an overtly transparent way at the inception stage of a project.

Text vs Image – where are people focused?

While I believe focusing on LLM adoption currently offers the most immediate and broadest benefits across practice and projects – the image realm is where most architects are spending their time when they jump into Generative Ai.

If you’re already modelling every detail and texture of your design and you want finite control, then you don’t use an Ai for visualisation, just continue to use CGI

Architects are fundamentally aesthetic creatures and so perhaps unsurprisingly they assume the image and modelling side of our work will be the most transformed over time. Therefore, I tend to find that architects often really want to lean into image model techniques above alternative Ai methods or Generative Design methods that may be available.

In the short term, image models are likely to be the most impactful for “storytelling” and in the initial briefing stages of projects where you’re not really sure what you think about a distinctive design approach, but you have a framework of visual and 3D ideas you want to play with.

Mapping diffusion techniques to problems

If you’re not sure what all of this means, see table below for a simple explanation of these techniques mapped to typical problems faced by designers looking to use Ai image models.


Keir Regan-Alexander – table mapping diffusion techniques to typical problems faced by designers using Ai image models

Changes with Midjourney v7

Midjourney recently launched its v7 model and it was met with relatively muted praise, probably because people were so blown away by the groundbreaking potential of GPT-image-1 (an auto-regression model), which arrived just a month before.

This latest version of the MJ model was trained entirely from scratch so as a result it behaves differently to the familiar v6.1 model. I’m finding myself switching between v7 and 6.1 more regularly than with any previous model release.

One of the striking things about v7 is that you can only access the model once you have provided at least 200 “image rating” preferences, which points to an interesting new direction for more customised Ai experiences. Perhaps Midjourney has realised that the personalisation that is now possible in the platform is exactly what people want in an age of abundant imagery (increasingly created with Ai).


Keir Regan-Alexander
Example of what the new MJ v7 model can do. (Left) an image set in Hamburg, created with a simple text-to-image prompt. (Right) a nighttime view of the same scene, created by ‘retexturing’ the left-hand image within v7 and with ‘personalize’ enabled. The output is impressive because it is very consistent with the input image, and the transformations in the fore- and mid-ground parts of the image are very well executed.

I, for one, much prefer using a model that feels like it’s tuned just for me – more broadly, I suspect users want to feel that only they can produce the images they create, and that they have a more distinctive style as a result. Leaning more into “Personalize” mode is helping with that, and I like that MJ gates access to v7 behind the image ranking process.

I have achieved great results with the new model, but I find it harder to use and you do need to work differently with it. Here is some initial guidance on best use:

  • v7 has a new function called ‘draft’ mode which produces low-res options very fast. I’m finding that to get the best results in this version you have to work in this manner, first starting with draft mode enabled and then enhancing to larger resolution versions directly from there. It’s almost like draft mode helps v7 work out the right composition from the prompt, and enhance mode then refines the resolution from there. If you try to go for full-res v7 in one rendering step, you’ll probably be confused by the below-par output.
  • Getting your “personalize” code is essential for accessing v7 and I’m finding my –p code only begins to work relatively effectively from about 1,000+ rankings, so set aside a couple of hours to train your preferences in.
  • You can now prompt with voice activation mode, which means having a conversation about the composition and image type you are looking for. As you speak v7 will start producing ideas in front of you.

Letting the model play

Image models improvise and this is their great benefit. They aren’t the same as CGI.

The biggest psychological hurdle that teams have to cross in the image realm is to understand that using Ai diffusion models is not like rendering in the way we’ve become accustomed to – it’s a different value proposition. If you’re already modelling every detail and texture of your design and you want finite control, then you don’t use an Ai for visualisation, just continue to use CGI.

However, if you can provide looser guidance with your own design linework before you’ve actually designed the fine detail, feeding inputs for the overall 3D form and imagery for textures and materials, then you are essentially allowing the model to play within those boundaries.

This means letting go of some control and seeing what the model comes back with – a step that can feel uncomfortable for many designers. When you let the model play within boundaries you set, you likely find striking results that change the way you’re thinking about the design that you’re working on. You may at times find yourself both repulsed and seduced in short order as you search around through one image to the next, searching for a response that lands in the way you had hoped.

A big shift that I’m seeing is that Midjourney is making “control net” type work and “style transfer” with images accessible to a much wider audience than would naturally be inclined to try out a very technical tool like SD.
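For anyone wondering what that “control net” work looks like on the open-source side, here is a minimal diffusers sketch: a ControlNet conditioned on line or edge input constrains the composition while the text prompt supplies materials and mood. This is the SD route rather than Midjourney’s closed Retexture feature, and the file names are hypothetical.

```python
# Minimal ControlNet sketch (Stable Diffusion + diffusers), not Midjourney Retexture.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",   # any SD 1.5 checkpoint
    controlnet=controlnet, torch_dtype=torch.float16,
).to("cuda")

linework = load_image("hidden_line_view.png")        # hypothetical line/edge export

image = pipe(
    "brick housing scheme, warm evening light, photorealistic",
    image=linework,
    controlnet_conditioning_scale=1.0,               # how strictly the lines are followed
).images[0]
image.save("concept_render.png")
```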


Keir Regan-Alexander
Latest updates from Midjourney now allow control net drawing inputs (left), meaning for certain types of view we can go from hidden line design frameworks to rendered concept imagery, or with a further step of complexity, training our own ‘moodboard’ to apply consistent styling (right). Note, this technique works best for ‘close-up’ subjects

I think that Midjourney’s decision to finally take the tool out of the very dodgy-feeling Discord and launch a proper, easy-to-use web UI has really made the difference to practices. I still love to work with SD most of all, but I really see these ideas beginning to land in MJ because it’s just so much easier to get a good result first time, and it’s become really delightful to use.

Midjourney has a bit more work to do on its licence agreements (it is currently set up for single prosumers rather than enterprise) and privacy (it trains on your inputs). While you may immediately rule the tool out on this basis, consider – in most cases your inputs are primitive sketches or Enscape white card views, so do you really mind if they are used for training, and do they give away anything that would be considered privileged? With Stealth mode enabled (which you have to be on the pro level for), your work can’t be viewed in public galleries. In order to get going with Midjourney in practice, you will need to allay all current business concerns, but with some basic guardrails in place for responsible use I am now seeing traction in practice.

Looking afresh at design culture

The use of “synthetic precedents” (i.e. images made purely with Ai) is also now beginning to shape our critical thinking about design in the early stages. Midjourney has an exceptional ability to tell vivid first-person stories around projects, design themes and briefs, with seductive landscapes, materials and atmosphere. From the evidence I’ve seen so far, the images very much appeal to clients.

We are now starting to see Ai imagery be pinned up on the wall for studio crits and therefore I think we need to consider the impact of Ai on the overall design culture of the profession.


Keir Regan-Alexander
Example of sketch-to-render using Midjourney, but including style transfer. In this case a “synthetic precedent” is used to seed the colour and material styles to the final render using –sref tool.

If we put Ai aside for a moment – in architectural practice, I think it’s a good idea to regularly reflect on your current studio design culture by first considering:

  • Are we actually setting enough time aside to talk about design or is it all happening ad-hoc at peoples’ desks or online?
  • Do we share a common design method and language that we all understand implicitly?
  • Are we progressing and getting better with each project?
  • Are all team members contributing to the dialogue or waiting passively to be told what to do by a director with a napkin sketch?
  • Are we reverting to our comfort zone and just repeating tired ideas?
  • Are we using the right tools and mediums to explore each concept?

When people express frustration with design culture, they often refer specifically to some aspect of technological “misuse”, for example:

  1. “People are using SketchUp too much. They’re not drawing plans anymore”
  2. “We are modelling everything in Revit at Stage 3, and no one is thinking about interface detailing”
  3. “All I’m seeing is Enscape design options wall to wall. I’m struggling to engage”
  4. “I think we might be relying too heavily on Pinterest boards to think about materials”, or maybe;
  5. “I can’t read these computer images. I need a model to make a decision”.

… all things I’ve heard said in practice.

Design culture has changed a lot since I entered the profession, and I have found that our relationship with the broad category of “images” in general has changed dramatically over time. Perhaps this is because we used to have to do all our design research collecting monograph books and by visiting actual buildings to see them, whereas now I probably keep up to date on design in places like Dezeen or Arch Daily – platforms that specifically glorify the single image icon and that jump frenetically across scale, style and geography.

One of the great benefits of my role with Arka Works is that I get to visit so many design studios (more than 70 since I began) and I’m seeing so many different ways of working and a full range of opinions about Ai.

I recently heard from a practice leader who said that in their practice, pinning up the work of a deceased (and great) architect was okay, because if it’s still around it must have stood the test of time and also presumably it’s beyond the “life plus 70 year Intellectual Property rule” – but in this practice the random pinning up of images was not endorsed.

Other practice leads have expressed to me that they consider all design work to be somehow derivative and inspired by things we observe – in other words – it couldn’t exist without designers ruminating on shared ideas, being enamoured of another architects’ work, or just plain using peoples’ design material as a crib sheet. In these practices, you can pin up whatever you like – if it helps to move the conversation forward.

Some practices have specific rules about design culture – they may require a pin up on a schedule with a specific scope of materials – you might not be allowed to show certain kinds of project imagery, without a corresponding plan, for example (and therefore holistic understanding of the design concepts). Maybe you insist on models or prefer no renders.

I think those are very niche cases. More often I see images and references simply being used as a shortcut for words and I also think we are a more image-obsessed profession than ever. In my own experience so far, I think these new Ai image tools are extremely powerful and need to be wielded with care, but they absolutely can be part of the design culture and have a place in the design review, if adopted with good judgement.

This is an important caveat. The need for critical judgment at every step is absolutely essential and made all the more challenging by how extraordinary the outputs can be – we will be easily seduced into thinking “yes that’s what I meant”, or “that’s not exactly what I meant, but it’ll do”, or worse “that’s not at all what I meant, but the Ai has probably done a better job anyway – may as well just use Ai every time from now on.”

Pinterestification

This shortening of attention spans is a problem we face in all realms of popular culture, as we become more digital every day. We worry that quality will suffer as people’s attention spans cause more laziness around design idea creation and testing – which would cause a broad dumbing down effect. This has been referred to as the ‘idiot trap’, where we rely so heavily on subcontracting thinking to various Ais, that we forget how to think from first principles.

You might think as a reaction – “well let’s just not bother using Ai altogether” and I think that’s a valid critique if you believe that architectural creativity is a wholly artisanal and necessarily human crafted process.

Probably the practices that feel that way just aren’t calling me to talk about Ai, but you would be surprised by the kind of ‘artisanal’ practices who are extremely interested in adopting Ai image techniques because rather than seeing them as a threat, they just see it as another way of exercising and exploring their vision with creativity.

Perhaps you have observed something I call “Pinterestification” happening in your studio?

I describe this as the algorithmic convergence of taste around common tropes and norms. If you pick a chair you like in Pinterest, it will immediately start nudging you in the direction of living room furniture, kitchen cabinets and bathroom tiles that you also just happen to love.

They all go so well on the mood board…

It’s almost like the algorithm has aggregated the collective design preferences of millions of tastemakers and packaged it up onto a website with convenient links to buy all the products we hanker after – and that’s because it has.


Keir Regan-Alexander
(Left) a screenshot from the “ArkaPainter_MJ” moodboard, which is a selection of 23 synthetic training images, the exact same selection that were recently used to train an SD LoRA with similar style. (Right) the output from MJ applies the paint and colour styles of the moodboard images into a new setting – in this case the same kitchen drawing as presented previously

Pinterest is widely used by designers and now heavily relied upon. The company has mapped our clicks; they know what goes together, what we like, what other people with similar taste like – and the incentives of ever greater attention mean that it’s never in Pinterest’s best interest to challenge you. Instead, Pinterest is the infinite design ice cream parlour that always serves your favourite flavour; it’s hard to stop yourself going back every time.

Learning about design

I’ve recently heard that some universities require full disclosure of any Ai use and that in other cases it can actually lead to disciplinary action against the student. The academic world is grappling with these new tools just as practice is, but with additional concerns about how students develop fundamental design thinking skills – so what is their worry?

The tech writer Paul Graham once said “writing IS thinking” and I tend to agree. Sure, you could have an LLM come up with a stock essay response – but the act of actually thinking by writing down your words and editing yourself to find out where you land IS the whole point of it. Writing is needed to create new ideas in the world and to solve difficult problems. The concern from universities therefore is that if we stop writing, we will stop thinking.

For architects, sketching IS our means of design thinking – it’s consistently the most effective method of ‘problem abstraction’ that we have. If I think back to the most skilful design mentors I had in my early career, they were ALL expert draftspeople.

That’s because they came up on the drawing board, and what that meant was they could distil many problems quickly and draw a single thread through things to find a solution, in the form of an erudite sketch. They drew sparingly, putting just the right amount of information in all the right places and knowing when to explore different levels of detail – because when you’re drawing by hand, you have to be efficient – you have to solve problems as you go.

Someone recently said to me that the less time the profession has spent drawing by hand (by using CAD, Revit, or Ai), the less that architects have earned overall. This is indeed a bit of a mind puzzle, and the crude problem is that when a more efficient technology exists, we are forced into adoption because we have to compete for work, whether it’s in our long term interests or not – it’s a Catch 22.

But this observation contains a signal too; that immaculate CAD lines do a different job from a sketching or hand drawing. The sketch is the truly high-value solution, the CAD drawing is the prosaic instructions for how to realise it.

I worry that “the idiot trap” for architects would be losing the fundamental skills of abstract reasoning that combines spatial, material, engineering and cultural realms and in doing so failing to recognise this core value as being the thing that the client is actually paying for (i.e. they are paying for the solution, not the instructions).

Clients hire us because we can see complete design solutions and find value where others can’t and because we can navigate the socio-political realm of planning and construction in real life – places where human diplomacy and empathy are paramount.

They don’t hire us to simply ‘spend our time producing package information’ – that is a by-product and in recent years we’ve failed to make this argument effectively. We shouldn’t be charging “by the time needed to do the drawing”, we should be charging “by the value” of the building.

So as we consider things being done more quickly with Ai image models, we need to build consensus that we won’t dispense with the sketching and craft of our work. We have to avoid the risk of simply doing something faster and giving the saving straight back to the market in the form of reduced prices and undercutting. We may marvel at what Ai makes possible in shorter time frames, but we should all be thinking – “great, let’s try to make a bit more profit this year” not “great let’s use this to undercut my competitor”.

Conclusion: judicious use

There is a popular quote (by Joanna Maciejewska) that has become a meme online:

I want Ai to do my laundry and dishes, so that I can do art and writing, not for Ai to do my art and writing so that I can do my laundry and dishes

If we translate that into our professional lives, for architects that would probably mean having Ai assisting us with things like regulatory compliance and auditing, not making design images for us.

Counter-intuitively Ai is realising value for practices in the very areas we would previously have considered the most difficult to automate: design optioneering, testing and conceptual image generation.

When architects reach for a tool like Midjourney, we need to be aware that these methods go right to the core of our value and purpose as designers. More than that, Ai imagery forces us to question our existing culture of design and methods of critique.

Unless we expressly dissuade our teams from using tools like Midjourney (which would be a valid position), anyone experimenting with it will now find it to be so effective that it will inevitably percolate into our design processes in ways that we don’t control, or enjoy.

Rather than allow these ad-hoc methods to creep up on us in design reviews unannounced and uncontrolled, a better approach is to consider first what an 'aligned' mode of adoption would look like within our design processes – one that fits with the core culture and mission of the practice – and then to make more deliberate use of it through endorsed design processes that create repeatable outputs we really appreciate.


Keir Regan-Alexander
Photo taken during a design review at Morris+Company in 2022 – everyone standing up, drawings pinned up, table of material samples, working models, coffee cups. How will Ai imagery fit into this kind of crit setting? Should it be there at all? (photo: Architects from left to right: Kehinde, Funmbi, Ben, Miranda & David)

If you have a particularly craft-based design method, consider how that mode of thinking might be applied to your use of Ai. Can you take a particularly experimental view of adoption that aligns with your specific priorities? Think Archigram with the photocopier.

We also need to ask whether, when something is pinned up on a wall alongside other material, it can be judged objectively on its merits and relevance to the project – and if it stands up to that test, does it really matter to us how it was made? If I tell you it's "Ai generated", does that reduce its perceived value?

I find that experimentation with image models is best led by the design leaders in a practice, because they are its "tastemakers" and usually create the permission structures around design. Image models are often mistakenly categorised as technical phenomena and, while they require some knowledge and skill, they are actually far more integral to the aesthetic, conceptual and creative aspects of our work.

To get a picture of what "aligned adoption of Ai" would mean for your practice: it should feel like turning up the volume on the particular areas of practice you already excel at or, conversely, like mitigating the aspects of practice where you feel acutely weaker.

Put another way, Ai should be used either to reinforce whatever your specialist niche is or to help you remedy your perceived vulnerabilities. I particularly like the idea of leaning into our specialisms, because it will make our deployment of Ai much more experimental, more bespoke and more differentiated in practice.

When I am applying Ai in practice, I don't see depressed and disempowered architects – I am reassured to find that the people most effective at writing bids with Ai also tend to be some of the best bid writers. The people who become the most experimental and effective at producing good design images with Ai image models also tend to be great designers, and this pattern holds in every area where I see Ai being used judiciously – so far, without exception.

The "judicious use" part is most important, because only a practitioner who really knows their craft can apply these ideas in ways that actually explore new avenues for design and realise true value in project settings. If you feel that description matches you, then you should be getting involved and having an opinion about it. In the Ai world this is referred to as keeping the "human-in-the-loop", but we could think of it as the "architect-in-the-loop": continuing to curate decisions, steering things away from creative cul-de-sacs and driving design more effectively.


Recommended viewing

Keir Regan-Alexander is director at Arka Works, a creative consultancy specialising in the Built Environment and the application of AI in architecture. At NXT BLD 2025 he explored how to deploy Ai in practice.

CLICK HERE to watch the whole presentation free on-demand

Watch the teaser below

The post Ai & design culture (part 2) appeared first on AEC Magazine.

]]>
https://aecmag.com/ai/ai-design-culture-part-2/feed/ 0
Chaos releases V-Ray for Blender https://aecmag.com/visualisation/chaos-releases-v-ray-for-blender/ https://aecmag.com/visualisation/chaos-releases-v-ray-for-blender/#disqus_thread Tue, 01 Jul 2025 11:19:35 +0000 https://aecmag.com/?p=24227 Production renderer arrives natively in open-source 3D modeller

The post Chaos releases V-Ray for Blender appeared first on AEC Magazine.

]]>
Production renderer arrives natively in open-source 3D modeller

Chaos has launched V-Ray for Blender, bringing its production rendering technology to the open-source 3D creation tool for the first time.

According to Chaos, V-Ray for Blender enables everything from photorealistic scenes to stylised animations. Intuitive controls let users mimic real-world camera effects and lighting using Chaos’ Global Illumination technology, which simulates natural light behaviour. The software also supports adaptive lighting and PBR-ready materials.

“Blender’s open-source model and active community make it one of the most versatile 3D creation tools for users of any level, and adding V-Ray takes it a step further,” said Allan Poore, chief product officer at Chaos. “With this plugin, Blender artists can render with confidence, all without compromising a thing.”

Blender users will also have access to over 5,600 free, high-quality assets through the Chaos Cosmos asset library, all of which can be accessed within Blender.

Once a scene is ready to render, users can access noise-free, interactive viewport rendering with the Nvidia AI Denoiser and Intel Open Image Denoise, or produce clean final images through the V-Ray denoiser. From there, they will have a full range of post-processing tools for colour correction, light mix, compositing layers and masking, all available directly within the Blender UI.

V-Ray for Blender supports CPU, GPU and hybrid rendering configurations, making it fully scalable based on available hardware. Users can also utilise Chaos Cloud to move their data off their local machines and render in the cloud.

The post Chaos releases V-Ray for Blender appeared first on AEC Magazine.

]]>
https://aecmag.com/visualisation/chaos-releases-v-ray-for-blender/feed/ 0
AI and design culture (part 1) https://aecmag.com/ai/ai-design-culture-part-1/ https://aecmag.com/ai/ai-design-culture-part-1/#disqus_thread Wed, 28 May 2025 06:33:54 +0000 https://aecmag.com/?p=23767 Keir Regan-Alexander explores the opportunities and tensions between creativity and computation

The post AI and design culture (part 1) appeared first on AEC Magazine.

]]>
As AI tools rapidly evolve, how are they shaping the culture of architectural design? Keir Regan-Alexander, director of Arka.Works, explores the opportunities and tensions at the intersection of creativity and computation — challenging architects to rethink what it means to truly design in the age of AI

An awful lot has been happening recently in the AI image space, and I've written and rewritten this article about three times to try to account for everything. Every time I think it's done, there seems to be another release that moves the needle. That's why this article is in two parts: first I want to look at recent changes from Gemini and GPT-4o, then take a deeper dive into Midjourney V7 and give a sense of how architects are using these models.

I'll start by describing the developments themselves and conclude by speculating on what I think they mean for the culture of design.


Arka Works
(Left) an image used as input (created in Midjourney). (Right) an image returned from Gemini that precisely followed my text-based request for editing

Right off the bat, let’s look at exactly what we’re talking about here. In the figure above you’ll see a conceptual image for a modern kitchen, all in black. This was created with a text prompt in Midjourney. After that I put the image into Gemini 2.0 (inside Google AI Studio) and asked it:

“Without changing the time of day or aspect ratio, with elegant lighting design, subtly turn the lights (to a low level) on in this image – the pendant lights and strip lights over the counter”

Why is this extraordinary?

Well, there is no 3D model for a start. But look closer at the light sources and shadows. The model knew exactly where to place the lights. It understood the difference between a pendant light and a strip light and how each diffuses light. It also knew where to cast the multi-directional shadows, and that the material textures of each surface would have diffuse, reflective or caustic illumination qualities. Here's another one (see below). This time I'm using GPT-4o in Image Mode.


Arka Works
(Left) a photograph taken in London on my ride home (building on Blackfriars Road). (Right) GPT-4o’s response to my request, a charming mock up of a model sample board of the facade

“Create an image of an architectural sample board based on the building facade design in this image”

Why is this one extraordinary?

Again, there is no 3D model, and with only a couple of minor exceptions, the architectural language of specific ornamentation, materials, colours and proportion has been very well understood. The image is also (in my opinion) very charming. During the early stages of design projects, I have always enjoyed looking at the local "Architectural Taxonomy" of buildings in context, and this is a great way of representing it.

If someone in my team had made these images in practice, I would have been delighted and happy for them to be included in my presentations and reports without further amendment.
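
As an aside for practices that want to fold this kind of text-driven edit into a repeatable pipeline rather than working through the Google AI Studio interface, the same request can in principle be made programmatically. The sketch below is a minimal illustration, not the workflow used for the images above: it assumes the google-genai Python SDK, an image-capable Gemini model (the model name, the response_modalities setting and the file names are assumptions to verify against Google's current documentation) and a valid API key.

import os
from io import BytesIO

from PIL import Image
from google import genai
from google.genai import types

# Assumptions: the google-genai SDK is installed, GEMINI_API_KEY is set in the
# environment, and "kitchen_concept.png" is the Midjourney concept image on disk.
client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
source = Image.open("kitchen_concept.png")

prompt = (
    "Without changing the time of day or aspect ratio, with elegant lighting design, "
    "subtly turn the lights (to a low level) on in this image - the pendant lights "
    "and strip lights over the counter"
)

# Ask an image-capable Gemini model to return an edited image alongside any text.
# The model name below is an assumption; substitute whichever current Gemini model
# supports image output.
response = client.models.generate_content(
    model="gemini-2.0-flash-exp",
    contents=[prompt, source],
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

# Save any returned image parts for side-by-side review against the original.
for i, part in enumerate(response.candidates[0].content.parts):
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save(f"kitchen_relit_{i}.png")

The point of a script like this is repeatability: once a prompt produces an edit the team is happy with, the same wording can be re-run across a batch of views rather than re-typed into a chat window.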

A radical redistribution of skills

There is a lot of hype in AI, which can be tiresome, and I always want to be relatively sober in my outlook and to avoid hyperbole. You will probably have seen your social media feeds fill with depictions of influencers as superhero toys in plastic wrappers, or maybe you've noticed a sudden improvement in someone's graphic design skills and a surprisingly judicious use of fonts and infographics … that's all GPT-4o Image Mode at work.


Find this article plus many more in the May / June 2025 Edition of AEC Magazine
👉 Subscribe FREE here 👈


So, despite the frenzy of noise, the surges of insensitivity towards creatives and the abundance of Studio Ghibli IP infringement surrounding this release – in case it needs saying just one more time – in the most conservative of terms, this is indeed a big deal.

The first time you get a response from these new models that far exceeds your expectations, it will shock you, and you will be filled with a genuine sense of wonder. I imagine the reaction feels similar to that of the first people to see a photograph in the early 19th century – it must have seemed genuinely miraculous and inexplicable. You feel the awe and wonder, then you walk away, you start to think about what it means for creators, for design methods … for your craft … and you get a sinking feeling in your stomach. For a couple of weeks after trying these new models for the first time, I had a lingering feeling of sadness with a bit of fear mixed in.

These techniques are so accessible in nature that we should expect to see our clients briefing us with ever-more visual material. We therefore need to not be afraid or shocked when they do

I think this feeling was my brain finally registering the hammer dropping on a long-held hunch: that we are in an entirely new industry whether we like it or not, and that even if we wanted to return to the world of creative work before AI, it is impossible. Yes, we can opt to continue to do things however we choose, but this new method now exists in the world and it can't be put back in the box.

I'll return to this internal conflict in my conclusion. If we set aside the emotional reaction for a moment, the early testing I've been doing in applying these models to architectural tasks suggests that, in both cases, the latest OpenAI and Google releases could prove to be "epoch-defining" moments for architects and for all kinds of creatives who work in the image and video domains.

This is because the method of production and the user experience are so profoundly simple compared to existing practices that the barrier to entry for image production in many, many realms has now come right down.

Again, we may not like to think about this from the perspective of having spent years honing our craft, yet the new reality is right in front of us and it's not going anywhere. These new capabilities from image models can only lead to a permanent change in the working relationship between the commissioning client and the creative designer, because the means of graphic and image production have been completely reconfigured. In a radical act of forced redistribution, access to sophisticated skill sets is now being packaged up by the AI companies for anyone who pays the licence fee.

What has not (yet) been distributed is wise judgement, deep experience in delivery, good taste, entirely new aesthetic ideas, emotional human insight, vivid communication and political diplomacy: all attributes that come with being a true expert and practitioner in any creative and professional realm.

These are qualities that for now remain inalienable, and they give a hint at where we have to focus our energies in order to ensure we can continue to deliver our highest value for our patrons, whoever they may be. For better or worse, soon they will have the option to try to do things without us.

Chat-based image creation & editing

For a while, attempting to produce or edit images within chat apps produced only sub-standard results. The likes of DALL-E, which could be accessed only within otherwise text-based applications, had really fallen behind and were producing 'instantly AI-identifiable' images that felt generic and cheesy. Anything that is so obviously AI-created (and low quality) is instantly attributed a low value.

As a result, I was seeing designers flock instead to more sophisticated options like Midjourney v6.1, Stable Diffusion SDXL or Flux, where we can be very particular about the level of control and styling, and where the results are often indistinguishable either from reality or from human creations. In the last couple of months that dynamic has been turned upside down: people can now achieve excellent imagery and edits directly within the chat-based apps again.

The methods that came before, such as MJ, SD and Flux, are still remarkable and highly applicable to practice – but they all require a fair amount of technical nous to get consistent and repeatable results. I have found through my advisory work with practices that having a technical solution isn't what matters most; it's having it packaged up and made enjoyable enough to use that it can actually change rigid habits.

A lesser tool with a great UX will beat a more sophisticated tool with a bad UX every time.

These more specialised AI image methods aren't going away, and they still represent the most 'configurable' option, but text-based image editing is something anyone with a keyboard can do, and it is absurdly simple to perform.

More often than not, I’m finding the results are excellent and suitable for immediate use in project settings. If we take this idea further, we should also assume that our clients will soon be putting our images into these models themselves and asking for their ideas to be expressed on top…


Arka Works
(Left) Image produced in Midjourney. (Right) Gemini has changed the cladding to dark red standing-seam zinc and also changed the season to spring. The mountains are no longer visible, but the edit is extremely high quality.

We might soon hear our clients saying: “Try this with another storey”, “Try this but in a more traditional style”, “Try this but with rainscreen fibre cement cladding”, “Try this but with a cafe on the ground floor and move the entrance to the right”, “Try this but move the windows and make that one smaller”…

You get the picture.

Again, whether we like this idea or not (and I know architects will shudder even thinking of this), when our clients receive the results back from the model, they are likely to be similarly impressed with themselves, and this can only lead to a change in briefing methods and working dynamics on projects.

To give a sense of what I mean exactly, in the image below I’ve included an example of a new process we’re starting to see emerge whereby a 2D plan can be roughly translated into a 3D image using 4o in Image Mode. This process is definitely not easy to get right consistently (the model often makes errors) and also involves several prompting steps and a fair amount of nuance in technique. So far, I have also needed to follow up with manual edits.


Arka Works
(Left) Image produced in Midjourney using a technique called ‘moodboards’. (Right) Image produced in GPT-4o Image Mode with a simple text prompt

Despite those caveats, we can assume that in the coming months the models will solve these friction points too. I first saw this idea validated by Amir Hossein Noori (co-founder of the AI Hub), and while I've managed to roughly reproduce his process, he gets full credit for working it out and explaining the steps to me – suffice it to say, it's not as simple as it first appears!

Conclusion: the big leveller

1) Client briefing will change

My first key conclusion from the last month is that these techniques are so accessible in nature that we should expect to see our clients briefing us with ever-more visual material. We therefore need to not be afraid or shocked when they do.

I don't expect this shift to happen overnight, and I don't think all clients will necessarily want to work in this way, but over time it's reasonable to expect it to become much more prevalent – particularly among clients who are already inclined to make sweeping aesthetic changes when briefing on projects.

Takeaway: As clients decide they can exercise greater design control through image editing, we need to be clearer than ever on how our specialisms are differentiated and better able to explain how our value proposition sets us apart. We should be asking: what are the really hard, domain-specific niches that we can lean into?

2) Complex techniques will be accessible to all

Next, we need to reconsider whether technical hurdles really are a 'defensive moat' for our work. The most noticeable trend of the last couple of years is that things which appear profoundly complicated at first often go on to become much simpler to execute later.

As an example, a few months ago we had to use ComfyUI (a complex node-based interface for Stable Diffusion) to 're-light' imagery. That method remains optimal for control, but in many situations we can now just make a text request and let the model work out how to solve it directly. Let's extrapolate that trend and assume, as a generalisation, that the harder things we do will gradually become easier for others to replicate.

Muscle memory is also a real thing in the workplace: it's often much easier to revert to the way we've done things in the past. People will say, "Sure, it might be better or faster with AI, but it also might not – so I'll just stick with my current method". This is exactly the challenge I see everywhere, and the people who make progress are the ones who insist on proactively adapting their methods and systems.

The major challenge I observe for organisations through my advisory work is that behavioural adjustments to working methods – especially when you're under stress or a deadline – are the real bottleneck. The idea here is that while a 'technical solution' may exist, change will only occur when people are willing to do something in a new way. I do a lot of work now on "applied AI implementation" and engagement across practice types and scales. I see again and again that there are pockets of technical innovation and skill among certain team members, but that this is not being translated into actual changes in the way people do things across the broader organisation. This has a lot to do with access to suitable training, but also with a lack of awareness that improving working methods is much more about behavioural incentives than it is about 'technical solutions'.

In a radical act of forced redistribution, access to sophisticated skill sets is now being packaged up by the AI companies for anyone who pays the licence fee

There is an abundance of groundbreaking new technology now available to practices – maybe even too much; we could be busy for a decade with the inventions of the last couple of years alone. But in the next period, the real difference-maker will not be technical but behavioural. How willing are you to adapt the way you're working and try new things? How curious is your team? Are they being given permission to experiment? This could prove a liability for larger practices and make smaller, more nimble practices more competitive.

Takeaway: Behavioural change is the biggest hurdle. As the technical skills needed for the 'means of creative production' become more accessible to all, the challenge for practices in the coming years may not be about technical solutions so much as about their willingness and ability to adjust behaviour and culture. The teams who succeed won't necessarily be those with the most technically accomplished solutions; more likely, they will be those who achieve the most widespread and practical adaptations of their working systems.

3) Shifting culture of creativity

I've seen a whole spectrum of reactions to Google and OpenAI's latest releases, and I think it's likely that these new techniques are causing many designers a huge amount of stress as they consider the likely impacts on their work. I have felt the same apprehension many times too. I know that a number of 'crisis meetings' have taken place in creative agencies, for example, and it is hard for me to see these model releases as anything other than a direct threat to at least a portion of their creative scope of work.

This is happening to all industries, not least computer science – after all, LLMs can write exceptional code too. From my perspective, it's certainly coming for architecture as well, and if we are to maintain the architect's central role in design and place-making, we need to shift our thinking and current approach, or our moat will gradually be eroded too.

The relentless progression of AI technology cares little about our personal career goals and business plans, and when I consider the sense of inevitability of it all, I'm left with a strong feeling that the best strategy is actually to run towards the opportunities that change brings, even if that means feeling uncomfortable at first.

Among the many posts I've seen from thought leaders and influencers celebrating recent developments and seeking attention and engagement, I can see a cynical thread emerging: (mostly) tech and sales people patting themselves on the back for having "solved art".


Arka Works
(Left) An example plan of an apartment (AI Hub), with a red arrow denoting the camera position. (Right) A render produced with GPT-4o Image Mode (produced by Arka Works)

The posts I really can't stand are the cavalier ones that seem to rejoice at the idea of not needing creative work anymore, salivating at the budget savings they will make … they seem to think you can just order "creative output" off a menu, and that these new image models are a cure for some long-held frustration towards creative people.

Takeaway: The model "output" is indeed extraordinarily accomplished and produced quickly, but creative work is not something that is "solvable"; it either moves you or it doesn't. Design is similar — we try to explain objectively what great design quality is, but it's hard. Yes, it fits the brief, but the intangible and emotional reasons are more powerful and harder to explain. We know it when we see it.

While AIs can exhibit synthetic versions of our feelings, for now they represent an abstracted shadow of humanness – a useful imitation, for sure, and I see widespread applications in practice, but in the creative realm I think it's unlikely to nourish us in the long term. The next wave of models may begin to 'break rules' and explore entirely new problem spaces, and when they do I will have to reconsider this perspective.

We mistake the mastery of a particular technique for creativity and originality, but the thing about art is that it comes from humans who’ve experienced the world, felt the emotional impulse to share an authentic insight and cared enough to express themselves using various mediums. Creativity means making something that didn’t exist before.

That essential impulse, the genesis, the inalienably human insight and direction is still, for me, everything. As we see AI creep into more and more creative realms (like architecture), we need to be much more strategic about how we value the specifically human parts – and for me that means ceasing to sell our time and instead learning to sell our value.

In part 2 I will look in depth at Midjourney and how it's being used in practice, with a closer look at the latest release (V7). Until then — thanks for reading.


Catch Keir Regan-Alexander at NXT BLD
Arka Works

Keir Regan-Alexander is director at Arka Works, a creative consultancy specialising in the Built Environment and the application of AI in architecture.

He will be speaking on AI at AEC Magazine’s NXT BLD in London on 11 June.

The post AI and design culture (part 1) appeared first on AEC Magazine.

]]>
https://aecmag.com/ai/ai-design-culture-part-1/feed/ 0