Chaos: from pixels to prompts

Chaos is blending generative AI with traditional visualisation, rethinking how architects explore, present and refine ideas using tools like Veras, Enscape, and V-Ray, writes Greg Corke

From scanline rendering to photorealism, real-time viz to real-time ray tracing, architectural visualisation has always evolved hand in hand with technology.

Today, the sector is experiencing what is arguably its biggest shift yet: generative AI. Tools such as Midjourney, Stable Diffusion, Flux, and Nano Banana are attracting widespread attention for their ability to create compelling, photorealistic visuals in seconds — from nothing more than a simple prompt, sketch, or reference image.

The potential is enormous, yet many architectural practices are still figuring out how to properly embrace this technology, navigating practical, cultural, and workflow challenges along the way.

The impact on architectural visualisation software as we know it could be huge. But generative AI also presents a huge opportunity for software developers.



Like some of its peers, Chaos has been gradually integrating AI-powered features into its traditional viz tools, including Enscape and V-Ray. Earlier this year, however, it went one step further by acquiring EvolveLAB and its dedicated AI rendering solution, Veras.

Veras allows architects to take a simple snapshot of a 3D model or even a hand drawn sketch and quickly create ‘AI-rendered’ images with countless style variations. Importantly, the software is tightly integrated with CAD / BIM tools like SketchUp, Revit, Rhino, Archicad and Vectorworks, and offers control over specific parts within the rendered image.

With the launch of Veras 3.0, the software’s capabilities now extend to video, allowing designers to generate short clips featuring dynamic pans and zooms, all at the push of a button.

“Basically, [it takes] an image input for your project, then generates a five second video using generative AI,” explains Bill Allen, director of products, Chaos. “If it sees other things, like people or cars in the scene, it’ll animate those,” he says.

This approach can create compelling illusions of rotation or environmental activity. A sunset prompt might animate lighting changes, while a fireplace in the scene could be made to flicker. But there are limits. “In generative AI, it’s trying to figure out what might be around the corner [of a building], and if there’s no data there, it’s not going to be able to interpret it,” says Allen.

Chaos is already looking at ways to solve this challenge of showcasing buildings from multiple angles. “One of the things we think we could do is take multiple shots – one shot from one angle of the building and another one – and then you can interpolate,” says Allen.


Model behaviour

Veras uses Stable Diffusion as its core ‘render engine’. As the generative AI model has advanced, newer versions of Stable Diffusion have been integrated into Veras, improving both realism and render speed, and allowing users to achieve more detailed and sophisticated results.

“We’re on render engine number six right now,” says Allen. “We still have render engines four, five and six available for you to choose from in Veras.”

But Veras does not necessarily need to be tied to a specific generative AI model. In theory it could evolve to support Flux, Nano Banana or whatever new or improved model variant may come in the future.

But, as Allen points out, the choice of model isn’t just down to the quality of the visuals it produces. “It depends on what you want to do,” he says. “One of the reasons that we’re using Stable Diffusion right now instead of Flux is because we’re getting better geometry retention.”
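
To make the geometry-retention trade-off concrete, here is a minimal sketch of the kind of image-to-image Stable Diffusion call that underpins tools in this category, using the open-source diffusers library. It is not Veras’s actual implementation; the model checkpoint, file names and slider-to-parameter mapping are assumptions for illustration. The lower the strength value, the more of the original snapshot’s geometry survives in the output.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load a Stable Diffusion image-to-image pipeline (checkpoint chosen for illustration).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

snapshot = Image.open("model_snapshot.png").convert("RGB")  # viewport capture of the 3D model
result = pipe(
    prompt="modern timber office building, overcast daylight, photorealistic",
    image=snapshot,
    strength=0.35,       # low strength = more of the original geometry is retained
    guidance_scale=7.5,  # how strongly the output follows the text prompt
).images[0]
result.save("ai_render.png")
```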

One thing that Veras doesn’t yet have out of the box is the ability for customers to train the model using their own data, although as Allen admits, “That’s something we would like to do.”

In the past, Chaos has used LoRAs (Low-Rank Adaptations) to fine-tune the AI model for certain customers in order to accurately represent specific materials or styles within their renderings.
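
As a rough illustration of how such fine-tuning plugs in at render time, a LoRA trained on a practice’s own reference imagery can be layered onto the base model before generation. This is a generic diffusers sketch, not Chaos’s pipeline, and the weights path and prompt are hypothetical.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Layer LoRA weights fine-tuned on a practice's own material/style imagery.
pipe.load_lora_weights("./practice_style_lora")  # hypothetical weights folder

styled = pipe(
    prompt="lobby interior in the practice's signature brick and oak palette",
    image=Image.open("model_snapshot.png").convert("RGB"),
    strength=0.4,
).images[0]
styled.save("styled_render.png")
```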

Roderick Bates, head of product operations, Chaos, imagines that the demand for fine tuning will go up over time, but there might be other ways to get the desired outcome, he says. “One of the things that Veras does well is that you can adjust prompts, you can use reference images and things like that to kind of hone in on style.”


Chaos Veras 3.0 video stills

Post-processing

While Veras experiments with generative creation, Chaos is also exploring how AI can be used to refine output from its established viz tools using a variety of AI post-processing techniques.

Chaos AI Upscaler, for example, enlarges render output by up to four times while preserving photorealistic quality. This means scenes can be rendered at lower resolutions (which is much quicker), then at the click of a button upscaled to add more detail.

While AI upscaling technology is widely available – both online and in generic tools like Photoshop – Chaos AI Upscaler benefits from being accessible at the click of a button, directly inside viz tools like Enscape that architects already use. Bates points out that if an architect uses another tool for this process, they must download the rendered image first, then upload it to another place, which fragments the workflow. “Here, it’s all part of an ecosystem,” he explains, adding that it also avoids the need for multiple software subscriptions.
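
The workflow Bates describes (render small, then upscale in place) can be approximated with an open-source diffusion upscaler. The sketch below is not Chaos AI Upscaler itself; the checkpoint and file names are assumptions, and it simply shows a low-resolution render being enlarged four times.

```python
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

# Render at a modest resolution in the viz tool (fast), then upscale 4x afterwards.
upscaler = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("enscape_render_960x540.png").convert("RGB")
high_res = upscaler(
    prompt="architectural exterior render, crisp photorealistic detail",
    image=low_res,
).images[0]  # roughly four times the input resolution
high_res.save("render_upscaled_4x.png")
```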

Chaos is also applying AI in more intelligent ways, harnessing data from its core viz tools. Chaos AI Enhancer, for example, can improve rendered output by refining specific details in the image. This is currently limited to humans and vegetation, but Chaos is looking to extend this to building materials.

“You can select different genders, different moods, you can make a person go from happy to sad,” says Bates, adding that all of this can be done through a simple UI.

There are two major benefits: first, you don’t have to spend time searching for a custom asset that may or may not exist and then have to re-render; second, you don’t need highly detailed 3D asset models to achieve the desired results, which would normally require significant computational power, or may not even be possible in a tool like Enscape.


The real innovation lies in how the software applies these enhancements. Instead of relying on the AI to interpret and mask off elements within an image, Chaos brings this information over from the viz tool directly. For example, output from Enscape isn’t just a dumb JPG — each pixel carries ‘voluminous metadata’, so the AI Enhancer automatically knows that a plant is a plant, or a human is a human. This makes selections both easy and accurate.
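
A minimal sketch of what per-pixel metadata buys you: if the renderer writes an object- or category-ID pass alongside the beauty image, selections become exact array lookups rather than AI segmentation guesses. The file name and ID values below are hypothetical, since Chaos has not published its metadata format.

```python
import numpy as np
from PIL import Image

# A renderer can emit an object-ID (or category-ID) pass alongside the beauty
# image: every pixel stores which kind of asset it belongs to.
VEGETATION_ID = 12   # hypothetical category code written by the renderer
PEOPLE_ID = 7

# Assumed to be a single-channel image with one category ID per pixel.
id_pass = np.array(Image.open("render_object_ids.png"))
vegetation_mask = (id_pass == VEGETATION_ID)
people_mask = (id_pass == PEOPLE_ID)

# Each mask can then be handed to an enhancement step so edits stay confined
# to the matching pixels (e.g. only re-synthesise the masked region).
Image.fromarray((people_mask * 255).astype(np.uint8)).save("people_mask.png")
Image.fromarray((vegetation_mask * 255).astype(np.uint8)).save("vegetation_mask.png")
```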

As it stands, the workflow is seamless: a button click in Enscape automatically sends the image to the cloud for enhancement.

But there’s still room for improvement. Currently, each person or plant must be adjusted individually, but Chaos is exploring ways to apply changes globally within the scene.

Chaos AI Enhancer was first introduced in Enscape in 2024 and is now available in Corona and V-Ray 7 for 3ds Max, with support for additional V-Ray integrations coming soon.

AI materials

Chaos is also extending its application of AI into materials, allowing users to generate render-ready materials from a simple image. “Maybe you have an image from an existing project, maybe you have a material sample you just took a picture of,” says Bates. “With the [AI Material Generator] you can generate a material that has all the appropriate maps.”

Initially available in V-Ray for 3ds Max, the AI Material Generator is now being rolled out to Enscape. In addition, a new AI Material Recommender can suggest assets from the Chaos Cosmos library, using text prompts or visual references to help make it faster and easier to find the right materials.
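
For a sense of what ‘all the appropriate maps’ means, the sketch below derives a crude PBR map set (base colour, roughness, normal) from a single photo using simple heuristics. It is deliberately non-AI and is not how the Chaos AI Material Generator works; the file names are assumptions, and it just shows the kind of outputs such a tool has to produce.

```python
import numpy as np
from PIL import Image

# Base colour is the photo itself; roughness is approximated from inverted
# brightness; a normal map is derived from luminance gradients.
photo = np.asarray(Image.open("brick_sample.jpg").convert("RGB"), dtype=np.float32) / 255.0
luma = photo @ np.array([0.299, 0.587, 0.114])

roughness = 1.0 - luma                     # brighter pixels treated as glossier
gy, gx = np.gradient(luma)                 # height gradients from luminance
normal = np.dstack([-gx, -gy, np.ones_like(luma)])
normal /= np.linalg.norm(normal, axis=2, keepdims=True)

Image.fromarray((photo * 255).astype(np.uint8)).save("basecolor.png")
Image.fromarray((roughness * 255).astype(np.uint8)).save("roughness.png")
Image.fromarray(((normal * 0.5 + 0.5) * 255).astype(np.uint8)).save("normal.png")
```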

Cross pollination

Chaos is uniquely positioned within the design visualisation software landscape. Through Veras, it offers powerful one-click AI image and video generation, while tools like Enscape and V-Ray use AI to enhance classic visualisation outputs. This dual approach gives Chaos valuable insight into how AI can be applied across the many stages of the design process, and it will be fascinating to see how ideas and technologies start to cross-pollinate between these tools.

A deeper question, however, is whether 3D models will always be necessary. “We used to model to render, and now we render to model,” replies Bates, describing how some firms now start with AI images and only later build 3D geometry.

“Right now, there is a disconnect between those two workflows, between that pure AI render and modelling workflow – and those kind of disconnects are inefficiencies that bother us,” he says.

For now, 3D models remain indispensable. But the role of AI — whether in speeding up workflows, enhancing visuals, or enabling new storytelling techniques — is growing fast. The question is not if, but how quickly, AI will become a standard part of every architect’s viz toolkit.

Chaos Envision launches for immersive presentations

Software brings together real-time ray tracing, animation and smart assets into a single architect-friendly environment

Following its beta launch last year, Chaos has released Chaos Envision, a real-time storytelling tool that helps architects and designers turn 3D models into immersive, cinematic presentations. Chaos has also announced a range of ‘affordable’ suites of curated, industry-specific tools for architectural design, architectural visualisation, and media and entertainment.

“Envision offers new alternatives for architects that have had to find workarounds to avoid compromising on quality, speed and flexibility in their presentations,” said Petr Mitev, VP of solutions for designers at Chaos. “Now, anyone can produce high-fidelity visuals much earlier in the process, tapping V-Ray-like photorealism to resolve internal questions and get more stakeholders on board.”



Chaos Envision can bring multiple 3D components into its collaborative environment. The software accepts content from any application that hosts Enscape or V-Ray and can import common industry formats, so users ‘don’t have to worry about scene prep or data loss’. All lighting, materials and assets from their original CAD or Enscape design will carry over as-is.

Users can add entourage with Chaos Cosmos, and through a direct integration with the Chaos Anima engine, drag-and-drop ‘hyper-realistic 4D people and crowds’ with AI-enhanced behaviour into scenes, and then direct their movement by assigning them a path. Paths can also be applied to other objects and cameras for more cinematic looks.
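
Conceptually, assigning a path boils down to sampling positions along a set of waypoints over the duration of a shot, as in this generic sketch. It is not Envision’s API; the waypoints and frame count are made up for illustration.

```python
import numpy as np

def sample_path(waypoints, n_frames):
    """Linearly interpolate a path through 3D waypoints, evenly spaced along its length."""
    waypoints = np.asarray(waypoints, dtype=float)
    # Cumulative distance along the polyline, normalised to [0, 1].
    seg = np.linalg.norm(np.diff(waypoints, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)]) / seg.sum()
    frames = np.linspace(0.0, 1.0, n_frames)
    return np.column_stack([np.interp(frames, t, waypoints[:, i]) for i in range(3)])

# A walking character (or a camera) follows the assigned path over 120 frames.
path = sample_path([(0, 0, 0), (5, 0, 0), (5, 8, 0), (0, 8, 0)], n_frames=120)
print(path[:3])   # first few positions along the path
```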


In addition, Envision supports variation-based animation to help designers depict sun studies, construction phasing or even cycle through design options.

Each visualisation can also be rendered with ray-traced realism. According to Chaos, this gives designers access to real-time and offline accuracy, even when polygon counts run into the trillions.

“What stands out to me is being able to import different rendering software files into Envision and edit them there without having to go back to the original programmes, all without any lag,” said architect David Tomic.

Chaos Suites

The new industry-specific Chaos Suites for architectural design, architectural visualisation, and media and entertainment offer three tiers: Solo, Premium and Collection, each based on one of the flagship products (Enscape, V-Ray or Corona).

With the Architectural Design suites, users start with the Enscape Solo option, or upgrade to the Enscape Premium suite, which includes the AI-powered Veras for faster ideation, design interactions and enhanced image details. The Enscape Collection adds Chaos Envision and Enscape Impact.

With the Archvis suites, the Solo suite features V-Ray or Corona, along with the complete Chaos Cosmos asset library, while the Premium version adds the Chaos Phoenix dynamic simulator, the Chaos Player and 20 Chaos Cloud credits. The Collection includes everything from the Premium suite, as well as Chaos’ real-time ray tracer Chaos Vantage and the crowd/people animation system Chaos Anima.

The Media & Entertainment suites are built around V-Ray and include Vantage, Anima, Phoenix, Player and 20 Chaos Cloud credits—everything artists need to bring any idea to life.

Chaos acquires AI software firm EvolveLab

Developer of V-Ray and Enscape will gain valuable AI visualisation technology and unlock new opportunities in AEC design software

Chaos, a specialist in arch viz software, has acquired EvolveLab, a developer of AI tools for streamlining visualisation, generative design, documentation and interoperability for AEC professionals.

According to Chaos, the acquisition will reinforce its design-to-visualisation workflows, while expanding to include critical tools for BIM automation, AI-driven ideation and computational design.



Founded in 2015, EvolveLab was the first to integrate generative AI technology into architectural modelling software, demonstrating the massive potential of mixing imaginative prompts with 3D geometry. Through its flagship software Veras – which AEC Magazine reviewed back in 2023 – EvolveLab connected this capability to leading BIM tools like SketchUp, Revit, Vectorworks, and others, before expanding into smart documentation and generative design.


Even before the acquisition, designers relied on the combination of EvolveLab and Chaos tools, using Veras and Enscape to accelerate both design and reviews. In the schematic design phase, this means rapidly generating ideas in Veras before committing the design to BIM, where Enscape’s real-time visualisation capabilities push the project even further.

“Over a year ago, we began exploring AI tools to speed up our workflows and were excited to discover Veras, a solution specifically designed for AEC that seamlessly integrates with host platforms,” said Hanns-Jochen Weyland of Störmer Murphy and Partners, an award-winning architectural practice based in Hamburg, Germany. “Veras is now our go-to for initial ideation before transitioning to renderings in Enscape. This powerful combination accelerates concept development and ensures reliable outcomes.”

Enscape render enhanced with AI visualisation software Veras

“At Cuningham, we integrate EvolveLab’s Veras and Glyph alongside Chaos’ Enscape to enhance our design process,” said Joseph Bertucci, senior project design technologist of Cuningham, an integrated design firm with offices across the United States. “Using both Enscape and Veras allows us to visualise, iterate, and explore design concepts in real-time while leveraging AI-driven enhancements for rapid refinement. Meanwhile, Glyph has been a game-changer for auto-documentation, enabling us to efficiently generate views and drawing sets, saving valuable time in project setup. These tools collectively streamline our workflows, boosting efficiency, precision, and creativity.”

Chaos and the EvolveLab teams are exploring ways to integrate their products and accelerate their AI roadmaps. EvolveLab products will remain available to customers. The EvolveLab team will join Chaos, with Bill Allen serving as director of product management and EvolveLab chief technology officer Ben Guler as director of software development.

EvolveLab apps include Veras, for AI-powered visualisation; Glyph, for automating and standardising documentation tasks; Morphis, for generating designs in real-time; and Helix, for interoperability between BIM tools.

What AEC Magazine thinks

Like many long-established architectural visualisation software developers, Chaos has undoubtedly sensed growing competition from AI renderers over the past few years.

While tools like EvolveLab’s Veras aren’t yet mature enough, or don’t offer the necessary control, to replace software like Enscape, they are already capable of handling certain aspects of the arch viz workflow—particularly in the early phases of a project. AI renderers can also enhance final outputs, improving visual quality. In fact, last year, Chaos introduced its own AI Enhancer for Enscape, which uses AI to transform assets like people and vegetation into high-quality, photorealistic visuals—minimising the need for high-poly, resource-intensive models.

Looking ahead, the role of AI in traditional visualisation software will only expand, making the acquisition of EvolveLab a smart strategic move for Chaos. It will be fascinating to see how the two development teams collaborate to integrate their respective technologies.

While EvolveLab’s AI rendering technology and expertise were likely the main drivers behind the acquisition, Chaos has also gained access to powerful tools for BIM automation, AI-driven ideation, and computational design. In our interview with EvolveLab CEO Bill Allen last year, he spoke of the company’s ambitious vision, including auto-generated drawings.

With the launch of Enscape Impact last year—bringing building performance analysis into Enscape’s real-time environment—Chaos has already shown its willingness to expand into new areas of AEC technology. Now, with advanced AEC design tools in its portfolio, it will be interesting to see how the company continues to evolve.

EvolveLab: bringing AI to architecture

With Veras, EvolveLab was the first software firm to apply AI image generation to BIM. Martyn Day talked with company CEO, Bill Allen, on the latest AI tools in development, including an AI assistant to automate drawings

Nothing has shown the potential for AI in architecture more than generative AI image generators like Midjourney, DALL-E, and Stable Diffusion.

Through the simple entry of text (a prompt) to describe a scene, early experimenters produced some amazing work for early-stage architectural design. These tools have increased in capability and moved beyond learning from large databases of images to working from bespoke user input, such as hand drawn sketches, to increase repeatability and enable design evolution. With improved control and predictability, AI conceptual design and rendering is becoming mainstream.

With early text-based generative AI, the output took no cues from geometry modelled with traditional 3D design tools. The first developer to integrate AI image generation with traditional BIM was Colorado-based developer, EvolveLab. The company’s Veras software brought AI image generation and rendering into Revit, SketchUp, Forma, Vectorworks and Rhino. This forced the AI to generate within the constraints of the 3D model and bypassed a lot of the grunt work and skills related to using traditional architectural visualisation tools.



Veras uses 3D model geometry as a substrate guide for the software. The user then adds text input prompts and preferences, and the AI engine quickly generates a spectrum of design ideas.

Veras lets the user control how far the AI overrides the BIM geometry and materials, using simple sliders like ‘Geometry Override’, ‘Creativity Strength’ and ‘Style Strength’ — the higher the setting, the less the output will adhere to the core BIM geometry. There are additional toggles to help, such as interior, nature, atmosphere and aerial view. But even with the same settings you can generate very different ideas with each iteration, which is great for ideation.

Once you have decided on a design, you can refine it further by selecting one of the outputs as a ‘seed image’. This allows more subtle changes, such as the colour of glass or materials, to be made with user prompts, without the radical design jumps.

A recent addition is the ability to create a selection within an image for specific edits, like changing the landscaping, floor material, or to select one façade for regeneration. This is useful if starting from a photograph of a site, as the area for ideation can be selected.
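
Both ideas, locking onto a chosen ‘seed image’ and regenerating only a selected region, map naturally onto diffusion inpainting. The sketch below uses the open-source diffusers inpainting pipeline to show the shape of such a call; it is not Veras’s implementation, and the checkpoint, mask and file names are assumptions.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

seed_image = Image.open("chosen_iteration.png").convert("RGB")   # the design you want to keep
facade_mask = Image.open("facade_mask.png").convert("L")         # white = region to regenerate

generator = torch.Generator("cuda").manual_seed(42)  # fixed RNG seed keeps runs repeatable
revised = pipe(
    prompt="same building, ground-floor facade in weathered corten steel",
    image=seed_image,
    mask_image=facade_mask,
    generator=generator,
).images[0]
revised.save("facade_variant.png")
```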

Drawing automation

EvolveLab is also working on Glyph Co-Pilot, an AI assistant for its drawing automation tool, Glyph, that uses ChatGPT to help produce drawings.

Glyph is a Revit add-in that can perform a range of ‘tasks’, including create (automate views), dimension (auto-dimension), tag (automate annotation), import (into sheets) and place (automate sheet packing).




These tasks can be assembled into a ‘bundle’ which ‘plays’ a customised collection of tasks with a single click. One can automate all the elevation views, auto-dimension the drawings, auto-tag the drawings, automatically place them on sheets and automatically arrange the layout. Within the task structure things can get complex, and users can define at a room, view or sheet level just what Glyph does. Once mastered, Glyph can save a lot of time in drawing creation, but there are a lot of clicks to set this up.

With Glyph Co-Pilot, currently in closed beta, the development team has fused a ChatGPT front end to the Glyph experience, as Allen explains. “Users can write, ‘dimension all my floorplans for levels 1 through 16 and elevate all my curtain walls on my project’ and it will go off and do it.

“I can prompt the application by asking it to elevate rooms 103 through 107, or create an enlarged plan of rooms 103 through 107. Glyph Co-Pilot understands that I don’t have to list all the rooms in between,” he says.
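
A toy illustration of the plumbing behind such a prompt: the assistant’s job is to turn natural language into the structured task calls a tool like Glyph already knows how to execute, including expanding ranges like ‘rooms 103 through 107’. The task name and schema below are invented for illustration; in practice the structured call would come back from an LLM via function calling or JSON mode.

```python
import json
import re

def expand_range(text):
    """Turn phrases like 'rooms 103 through 107' into explicit lists of room numbers."""
    match = re.search(r"(\d+)\s+through\s+(\d+)", text)
    if not match:
        return []
    start, end = int(match.group(1)), int(match.group(2))
    return list(range(start, end + 1))

prompt = "Create enlarged plans of rooms 103 through 107"
task = {
    "action": "create_enlarged_plan",   # hypothetical Glyph-style task name
    "rooms": expand_range(prompt),      # [103, 104, 105, 106, 107]
}
print(json.dumps(task, indent=2))
# A dispatcher would then hand each structured task to the host BIM tool's API.
```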

Co-Pilot is currently limited to Revit, but in the future it will be possible to plug into SketchUp, Archicad, Rhino etc. This means one could get auto-drawings directly from Rhino, something that many architects are asking for.

But how do you do this, when Rhino lacks the rich metadata of Revit, such as spaces to indicate rooms? “There’s inferred metadata,” explains Allen. “Rhino has layers, and that’s typically how people organise their information, and there are obviously properties too. McNeel is also starting to build out its BIM data components. I expect some data hierarchy will start to manifest itself within the platform, which we can leverage.”

The future of AI

“AI is a bit of a buzzword,” admits Allen. “That means we see firms and products that may have been around for eight to ten years suddenly claim to be AI applications. We are honest: we offer some AI apps, and some prescriptive ones.”

For example, EvolveLab also develops Morphis for generative design, Helix, a conversion tool for SketchUp and AutoCAD, and Bento, a smorgasbord of Revit productivity apps, none of which are branded AI.

But true AI tools are developing at an incredible pace and, while Allen finds it hard to envision where AI will lead, he is clear that it will have a transformative impact on AEC.

“AI is going to be very disruptive,” he says. “Pre the advent of AI, I did this talk at AU / Yale, called ‘The Future of BIM will not be BIM – and it’s coming faster than you think’.


“At the time I felt I had a pretty good idea where the industry was going. Now, with the advent of AI, I have no idea where the hell this train is going!

“I feel like the technology is so crazy, every step of the journey, I’m shocked at what is possible, and I didn’t anticipate or expect the rate of development.

“There’s a really good book from Jay Samit called ‘Disrupt You’. In it, the author talks about the value chain, and markets and where money flows. What happens is you get some kind of disruption, like a Netflix that disrupts Blockbuster or Uber that disrupts the taxi market. And AI is definitely doing that.

“A good example is Veras disrupting the rendering market. Users don’t have to go and grab a material and apply it or drop in a sky for their model in Photoshop. You just change the prompt, and you get this beautiful image.

“I feel like [AI] is going to change the market. I do feel like jobs will be lost. But at the end of the day, the AI still needs us, humans, to give it direction. It’s still just a tool, like Revit, Rhino, or Grasshopper is a tool. AI offers the maximum amount of efficiency and productivity benefits, and it will disrupt jobs and the marketplace. But you’re always going to have people driving the tools.”


About EvolveLab

EvolveLab started off exclusively offering consulting, graduating to services. Over time the company saw itself as ‘the mechanic that worked on everyone else’s car’, carrying out all sorts of tasks, like creating dashboards for BIM, and building apps to meet the needs of clients. Over the last few years, the company has made an intentional effort to flesh out these apps, productise them and be more like a software developer. It’s a business model that works. The company still does management services for clients, but its focus, especially this year, has been on the product side.

The company now has five core applications: Veras for AI rendering, Glyph for ‘autodrawings’, Morphis for generative design, Helix, a conversion tool for SketchUp and AutoCAD, and Bento, a smorgasbord of Revit productivity apps. The company will soon launch Glyph Co-Pilot, the AI-powered drawing automation assistant.

Veras brings AI visualisation to Forma

Software uses AI to generate visualisations from 3D models in Autodesk’s conceptual design tool

Veras, the AI-powered visualisation tool for SketchUp, Revit, Rhino and the web, is now available for Autodesk Forma.

The software uses 3D model geometry as a ‘substrate for creativity and inspiration’. Users input prompts or preferences, then the AI engine generates a spectrum of design ideas.

Users have the ability to override geometry and materials with simple sliders. Geometry override, for example, specifies the amount the geometry will change from the original model.

Other features include render selection, which allows users to render specific regions of their designs. The user simply selects a portion of their image, redefines their vision with a new prompt, and then ‘renders’. Use cases include an interior designer looking to swap out furniture, a visualisation artist wanting to transform backgrounds, or an architect focused on refining specific building elements.

Veras also includes an Explore Mode, which offers pre-made settings and configurations; and seed-based rendering, to help users generate variations of designs.



Veras: AI-based renderer for Revit models

AI-enabled tools such as Midjourney and DALL-E can create some incredible artwork, but these are essentially flat images. EvolveLab is taking on a different and more complex task, as the first AI-based renderer for BIM models created in Revit, writes Martyn Day

Artificial intelligence (AI) is coming to the AEC market and will impact all areas of design. So far, the applications that have most vividly captured the public imagination have been image creation tools such as DALL-E and Midjourney, and writing tool ChatGPT, but there is so much more yet to come. In fact, AI is already automating an amazing range of creative activities. When used well, the results seem almost like magic.

In AEC, we have seen experts experiment with conceptual imagery and even master iterative design enhancements. Take, for example, the work of Hassan Ragab, which demonstrates the power of AI as a tool for architectural exploration. But many of these experiments have, up to this point, focused on the realm of concept art. How could this technology be applied to 3D models?

EvolveLab believes it has an answer, in the form of Veras, its recently released, AI-powered add-on for Revit versions released between 2019 and the present day. It uses the base 3D geometry for a design as a substrate, to which it applies prompt-based augmentations. EvolveLab has plans to create versions for SketchUp and Rhino, too.


Controllable constraints

The great thing about Veras is that the geometry gives the AI a 3D constraint that is hard to control in 2D AI tools. It won’t change the shape or volume of the building too radically, but it is amazingly quick at changing materials, environment and time of day.

The best results are delivered in perspective view, with Revit’s graphic display set to show edges and with shadows set to cast. In the Veras window, there are sliders for ‘Creativity Strength’ and ‘Style Strength’, together with the ability to adjust resolution width and how many renders you want.

The higher the strength setting, the more the AI will replace what’s in Revit. The Style Strength controls how strictly the AI adheres to a text prompt. And there’s an additional selection set that helps the AI to know whether it’s dealing with an interior or exterior shot, as well as understand aspects of nature, atmosphere (for example, fog) and whether it’s an aerial view. To control the view, the user alters the perspective view in Revit and then updates the Veras window. All images are saved to the default image location.

The only other input box is a text prompt. Here, you can enter a description of what it is that you want to create, specifying a huge range of materials, types of glazing, environment, style (such as timber cabin, modern concrete, residential/office), night-time, natural material, marble floors and so on. With the same text prompt, users can play with the strength and style sliders and get a different result every time, usually very realistic and with minimal fuss.

It really is an excellent way to run through many different materials and styles in a very short amount of time. The results may not always be to your liking, but change the word input or sliders, and you get a totally different look and feel. And, if you like an output, it’s possible to change the perspective view and get a similar output, although there seem to be some variances, such as in material mapping.

The first 30 images are free. Standard subscription is $34 per month billed yearly, or $49 per month for a monthly subscription. Veras allows unlimited renderings, resolutions up to 1,334 x 768, and the rendering of up to four images simultaneously. An enterprise version is coming soon, designed to empower whole design teams.

An impressive opening gambit

For the opening gambit of an AI-enabled rendering application, Veras is seriously impressive. It’s a lot more practical than the 2D generators and operates well within the constraints of actual architectural geometry.


For now, as there is still a lot of reliance on the randomness of AI, Veras’s appeal may not extend beyond conceptual work into later design phases, but the results it delivers could certainly inspire design directions and material choices.

Veras can and will take some liberties; rendering geometry that might not be in the original, for example, and getting creative with glazing. However, I’m convinced it will be a tool that takes many projects in new, interesting directions.

Veras can generate very impressive photorealistic images, something that could be useful in presentations to clients. It can also apply materials impressively, but, as with all software, getting the best results requires experimentation, so the user can get to grips with how it obeys inputs.

As AI tools evolve, from learning from mass photos and inputs to allowing users to train them on customers’ own images and designs, I can only see AI tools like Veras becoming more and more important to firms. At the same time, outcomes will become more controllable.

Right now, text inputs are very generic. The more specific these can be, the more faithfully the architect’s specification can be reproduced. Either way, this technology has huge implications for architectural visualisation and the skills required in-house. This is the start of a revolution.
