Ai & design culture (part 2)
https://aecmag.com/ai/ai-design-culture-part-2/
Thu, 24 Jul 2025
How architects are using Ai models and how Midjourney V7 compares to Stable Diffusion and Flux

In the second of a two-part article on Ai image generation and the culture behind its use, Keir Regan-Alexander gives a sense of how architects are using Ai models and takes a deeper dive into Midjourney V7 and how it compares to Stable Diffusion and Flux

In the first part of this article I described the impact of new LLM-based image tools like GPT-Image-1 and Gemini 2.0 Flash (Experimental Image Mode).

Now, in this second part, I turn my focus to Midjourney, a tool that has recently undergone a few pivotal changes that I think are going to have a big impact on the fundamental design culture of practices. That makes them worthy of critical reflection as practices begin testing and adopting them:

Keir Regan-Alexander
Click the image to read Part 1

1) Retexture – Reduces randomness and brings “control net” functionality to Midjourney (MJ). This means rather than starting with random form and composition, we give the model linework or 3D views to work from. Previously, despite the remarkable quality of image outputs, this was not possible in MJ.

2) Moodboards – Make it easy to “train your own style” very quickly with a small collection of image references. Previously we had to train “LoRAs” in Stable Diffusion (SD) or Flux, taking many hours of preparation and testing. Moodboards provide a lower fidelity but much more convenient alternative.

3) Personal codes – Tailor your outputs to your taste profile using ‘Personalize’ (US spelling). You can train your own “–p” code by offering up hundreds of A/B test preferences within your account, then switch to your ‘taste’ profile extremely easily. In short, once you’ve told MJ what you like, it gets a whole lot better at giving it back to you each time.

A model that instantly knows your aesthetic preferences

Personal codes (or “Personalization” codes, to be more precise) allow us to train MJ on our style preferences for different kinds of image material. To better understand the idea, in Figure 1 below you’ll see a clear example of running the same text prompt both with and without my “–p” code. For me there is no contest: I consistently prefer the images created with my –p code applied over those without.
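To make the A/B concrete, the comparison in Figure 1 boils down to appending a single parameter to an otherwise identical prompt (illustrative, Discord-style syntax; the web UI exposes the same switch as a toggle):

```
/imagine prompt: a private house in a green Irish landscape, overcast light
/imagine prompt: a private house in a green Irish landscape, overcast light --p
```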


Keir Regan-Alexander
(Left) an example of a generic MJ output, from a text prompt. The subject is a private house design in an Irish landscape. (Right) an output running the exact same prompt, but applying my personal “–p” code, which is trained on my preferences from more than 450 individual A/B style image rankings

When enabled, Personalization substantially improves the average quality of your output; everything goes quickly from fairly generic ‘meh’ to ‘hey!’. It’s also now possible to develop a number of different personal preference codes for use in different settings. For example, one studio group or team may want a slightly different style code from another part of the studio, because they work in a different sector with different methods of communication.




Midjourney vs Stable Diffusion / Flux

In the last 18 months, many heads have been turned by the potential of new tools like Stable Diffusion in architecture, because they have allowed us to train our own image styles, render sketches and gain increasingly configurable control over image generation using Ai – and often without even making a 3D model. Flux, a new parallel open-source model ecosystem, has taken the same methods and techniques from SD and added greater levels of quality.

We may marvel at what Ai makes possible in shorter time frames, but we should all be thinking – “great, let’s try to make a bit more profit this year” not “great, let’s use this to undercut my competitor”

But for ease of use, broad accessibility and consistency of output, the closed-source (and paid product) Midjourney is now firmly winning for most practices I speak to that are not strongly technologically minded.

Anecdotally, when I run Ai workshops, perhaps 10% of attendees really ‘get’ SD, whereas more like 75% immediately click with Midjourney. I find that it appeals to the intuitive and more nuanced instincts of designers who like to discover design through an iterative and open-ended method of exploration.

While SD and Flux are potentially very low cost to use (if you run them locally and have the requisite GPUs) and offer massive flexibility of control, they are also much, much harder to use effectively than MJ and, more recently, GPT-4o.

For a few months now, Midjourney has sat within a slick web interface that is very intuitive to use and will produce top-quality output with minimal stress and technical research.

Before we reflect on what this means for the overall culture of design in architectural practice going forwards, here are two notable observations to start with:

1) Practices who are willing to try their hand with diffusion models during feasibility or competition stage are beginning to find an edge. More than one recent conversation suggests that the use of diffusion models during competition stages has made a pivotal difference to recent bid processes and partially contributed to winning proposals.

2) I now see a growing interest from my developer client base, who want to go ahead and see vivid imagery even before they’ve engaged an architect or design team – they simply have an idea and want to go directly to seeing it visualised. In some cases, developers are looking to use Ai imagery to help dispose of sites, to quickly test alternative (visual) options to understand potential, or to secure new development contracts or funding.

Make of that what you will. I’m sure many architects will be cringing as they read that, but I think both observations are key signals of things to come for the industry whether it’s a shift you support or not. At the same time, I would say there is certainly a commercial opportunity there for architects if they’re willing to meet their clients on this level, adjust their standard methods of engagement and begin to think about exactly what value they bring in curating initial design concepts in an overtly transparent way at the inception stage of a project.

Text vs Image – where are people focused?

While I believe focusing on LLM adoption currently offers the most immediate and broadest benefits across practice and projects, the image realm is where most architects spend their time when they jump into Generative Ai.

If you’re already modelling every detail and texture of your design and you want finite control, then you don’t use an Ai for visualisation, just continue to use CGI

Architects are fundamentally aesthetic creatures, so perhaps unsurprisingly they assume the image and modelling side of our work will be the most transformed over time. I therefore tend to find that architects often want to lean into image model techniques over the alternative Ai or generative design methods that may be available.

In the short term, image models are likely to be the most impactful for “storytelling” and in the initial briefing stages of projects where you’re not really sure what you think about a distinctive design approach, but you have a framework of visual and 3D ideas you want to play with.

Mapping diffusion techniques to problems

If you’re not sure what all of this means, see the table below for a simple explanation of these techniques mapped to typical problems faced by designers looking to use Ai image models.


Keir Regan-Alexander

Changes with Midjourney v7

Midjourney recently launched its v7 model and it was met with relatively muted praise, probably because people were so blown away by the groundbreaking potential of GPT-image-1 (an auto-regression model), which arrived just a month before.

This latest version of the MJ model was trained entirely from scratch, so it behaves differently to the familiar v6.1 model. I find myself switching between v7 and v6.1 more regularly than with any previous model release.

One of the striking things about v7 is that you can only access the model once you have provided at least 200 “image rating” preferences, which points to an interesting new direction for more customised Ai experiences. Perhaps Midjourney has realised that the personalisation now possible in the platform is exactly what people want in an age of abundant imagery (increasingly created with Ai).


Keir Regan-Alexander
Example of what the new MJ v7 model can do. (Left) an image set in Hamburg, created with a simple text-to-image prompt. (Right) a nighttime view of the same scene, created by ‘retexturing’ the left-hand image within v7 with ‘personalize’ enabled. The output is impressive because it’s very consistent with the input image, and the transformations in the fore- and mid-ground parts of the image are very well executed.

I, for one, much prefer using a model that feels like it’s tuned just for me. More broadly, I suspect users want to feel that only they can produce the images they create and that they have a more distinctive style as a result. Leaning more into “Personalize” mode is helping with that, and I like that MJ gates access to v7 behind the image-ranking process.

I have achieved great results with the new model, but I find it harder to use and you do need to work differently with it. Here is some initial guidance on best use:

  • v7 has a new function called ‘draft’ mode, which produces low-res options very fast. I’m finding that to get the best results in this version you have to work in this manner: first start with draft mode enabled, then enhance to larger-resolution versions directly from there. It’s almost like draft mode helps v7 work out the right composition from the prompt, and enhance mode then refines the resolution. If you try to go for full-res v7 in one rendering step, you’ll probably be confused by the below-par output (see the illustrative prompt after this list).
  • Getting your “personalize” code is essential for accessing v7, and I’m finding my –p code only begins to work effectively from about 1,000 rankings, so set aside a couple of hours to train your preferences in.
  • You can now prompt with voice activation mode, which means having a conversation about the composition and image type you are looking for. As you speak, v7 will start producing ideas in front of you.
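Pulling that guidance together, a minimal v7 session might look like this (illustrative, Discord-style parameters – the web UI exposes draft mode and personalization as toggles, and ‘enhance’ as a button on each draft):

```
/imagine prompt: courtyard housing scheme in London stock brick, overcast light --v 7 --draft --p
```

From the resulting grid of fast, low-res drafts, pick the strongest composition and enhance it to full resolution, rather than attempting a single full-res render in one step.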

Letting the model play

Image models improvise and this is their great benefit. They aren’t the same as CGI.

The biggest psychological hurdle that teams have to cross in the image realm is to understand that using Ai diffusion models is not like rendering in the way we’ve become accustomed to – it’s a different value proposition. If you’re already modelling every detail and texture of your design and you want finite control, then you don’t use an Ai for visualisation, just continue to use CGI.

However, if you can provide looser guidance with your own design linework before you’ve actually designed the fine detail, feeding inputs for the overall 3D form and imagery for textures and materials, then you are essentially allowing the model to play within those boundaries.

This means letting go of some control and seeing what the model comes back with – a step that can feel uncomfortable for many designers. When you let the model play within boundaries you set, you will likely find striking results that change the way you’re thinking about the design you’re working on. You may at times find yourself both repulsed and seduced in short order as you move from one image to the next, searching for a response that lands the way you had hoped.

A big shift that I’m seeing is that Midjourney is making “control net” type work and “style transfer” with images accessible to a much wider audience than would naturally be inclined to try out a very technical tool like SD.
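To make the contrast concrete, here is roughly what the ‘technical’ SD route looks like in code – a minimal sketch using the Hugging Face diffusers library with a Canny-edge ControlNet, broadly the open-source equivalent of feeding linework into Midjourney’s Retexture. The model IDs, file names and prompt are illustrative assumptions, not a recommended production setup:

```python
# Minimal sketch: ControlNet-conditioned Stable Diffusion via diffusers.
# The edge-map image constrains composition; the text prompt supplies style.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A hidden-line / edge-map export of a 3D view (hypothetical file name)
linework = load_image("hidden_line_view.png")

result = pipe(
    prompt="timber-clad house in a green Irish landscape, overcast light",
    image=linework,
    num_inference_steps=30,
).images[0]
result.save("concept_render.png")
```

Every choice here – base checkpoint, ControlNet weights, step count, GPU setup – is a knob the user has to understand, which is precisely the friction Midjourney’s web UI removes.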


Keir Regan-Alexander
Latest updates from Midjourney now allow control net drawing inputs (left), meaning for certain types of view we can go from hidden line design frameworks to rendered concept imagery, or with a further step of complexity, training our own ‘moodboard’ to apply consistent styling (right). Note, this technique works best for ‘close-up’ subjects

I think that Midjourney’s decision to finally take the tool out of the very dodgy-feeling Discord and launch a proper, easy-to-use UI has really made the difference for practices. I still love working with SD most of all, but I can see these ideas beginning to land in MJ because it’s just so much easier to get a good result first time, and it’s become really delightful to use.

Midjourney has a bit more work to do on its licence agreements (it is currently set up for single prosumers rather than enterprise) and privacy (it trains on your inputs). While you may immediately rule the tool out on this basis, consider: in most cases your inputs are primitive sketches or Enscape white-card views – do you really mind if they are used for training, and do they give away anything that would be considered privileged? With Stealth mode enabled (which requires the Pro plan), your work can’t be viewed in public galleries. To get going with Midjourney in practice you will need to allay all current business concerns, but with some basic guardrails in place for responsible use I am now seeing traction in practice.

Looking afresh at design culture

The use of “synthetic precedents” (i.e. images made purely with Ai) is also now beginning to shape our critical thinking about design in early stages. Midjourney has an exceptional ability to tell vivid first-person stories around projects, design themes and briefs, with seductive landscapes, materials and atmosphere. From the evidence I’ve seen so far, the images very much appeal to clients.

We are now starting to see Ai imagery being pinned up on the wall for studio crits, and therefore I think we need to consider the impact of Ai on the overall design culture of the profession.


Keir Regan-Alexander
Example of sketch-to-render using Midjourney, including style transfer. In this case a “synthetic precedent” is used to seed the colour and material styles of the final render using the –sref tool.

If we put Ai aside for a moment – in architectural practice, I think it’s a good idea to regularly reflect on your current studio design culture by first considering:

  • Are we actually setting enough time aside to talk about design or is it all happening ad-hoc at peoples’ desks or online?
  • Do we share a common design method and language that we all understand implicitly?
  • Are we progressing and getting better with each project?
  • Are all team members contributing to the dialogue or waiting passively to be told what to do by a director with a napkin sketch?
  • Are we reverting to our comfort zone and just repeating tired ideas?
  • Are we using the right tools and mediums to explore each concept?

When people express frustration with design culture, they often refer specifically to some aspect of technological “misuse”, for example:

  1. “People are using SketchUp too much. They’re not drawing plans anymore”
  2. “We are modelling everything in Revit at Stage 3, and no one is thinking about interface detailing”
  3. “All I’m seeing is Enscape design options wall to wall. I’m struggling to engage”
  4. “I think we might be relying too heavily on Pinterest boards to think about materials”, or maybe;
  5. “I can’t read these computer images. I need a model to make a decision”.

… all things I’ve heard said in practice.

Design culture has changed a lot since I entered the profession, and I have found that our relationship with the broad category of “images” in general has changed dramatically over time. Perhaps this is because we used to have to do all our design research by collecting monograph books and visiting actual buildings to see them, whereas now I probably keep up to date on design in places like Dezeen or ArchDaily – platforms that specifically glorify the single image icon and that jump frenetically across scale, style and geography.

One of the great benefits of my role with Arka Works is that I get to visit so many design studios (more than 70 since I began) and I’m seeing so many different ways of working and a full range of opinions about Ai.

I recently heard from a practice leader who said that in their practice, pinning up the work of a deceased (and great) architect was okay, because if it’s still around it must have stood the test of time – and presumably it’s beyond the “life plus 70 years” intellectual property rule – but the random pinning up of images was not endorsed.

Other practice leads have expressed to me that they consider all design work to be somehow derivative and inspired by things we observe – in other words, it couldn’t exist without designers ruminating on shared ideas, being enamoured of another architect’s work, or just plain using other people’s design material as a crib sheet. In these practices, you can pin up whatever you like – if it helps to move the conversation forward.

Some practices have specific rules about design culture – they may require a pin-up on a schedule with a specific scope of materials. You might not be allowed to show certain kinds of project imagery without a corresponding plan, for example (and therefore without a holistic understanding of the design concepts). Maybe you insist on models or prefer no renders.

I think those are very niche cases. More often I see images and references simply being used as a shortcut for words and I also think we are a more image-obsessed profession than ever. In my own experience so far, I think these new Ai image tools are extremely powerful and need to be wielded with care, but they absolutely can be part of the design culture and have a place in the design review, if adopted with good judgement.

This is an important caveat. The need for critical judgment at every step is absolutely essential and made all the more challenging by how extraordinary the outputs can be – we will be easily seduced into thinking “yes that’s what I meant”, or “that’s not exactly what I meant, but it’ll do”, or worse “that’s not at all what I meant, but the Ai has probably done a better job anyway – may as well just use Ai every time from now on.”

Pinterestification

A shortening of attention spans is a problem we face in all realms of popular culture as we become more digital every day. We worry that quality will suffer as shrinking attention spans breed laziness around creating and testing design ideas – a broad dumbing-down effect. This has been referred to as the ‘idiot trap’: we rely so heavily on subcontracting thinking to various Ais that we forget how to think from first principles.

You might think as a reaction – “well let’s just not bother using Ai altogether” and I think that’s a valid critique if you believe that architectural creativity is a wholly artisanal and necessarily human crafted process.

Probably the practices that feel that way just aren’t calling me to talk about Ai, but you would be surprised by the kind of ‘artisanal’ practices that are extremely interested in adopting Ai image techniques: rather than seeing them as a threat, they see them as another way of exercising and exploring their vision with creativity.

Perhaps you have observed something I call “Pinterestification” happening in your studio?

I describe this as the algorithmic convergence of taste around common tropes and norms. If you pick a chair you like on Pinterest, it will immediately start nudging you in the direction of living room furniture, kitchen cabinets and bathroom tiles that you also just happen to love.

They all go so well on the mood board…

It’s almost like the algorithm has aggregated the collective design preferences of millions of tastemakers and packaged them up onto a website with convenient links to buy all the products we hanker after – and that’s because it has.


Keir Regan-Alexander
(Left) a screenshot from the “ArkaPainter_MJ” moodboard, which is a selection of 23 synthetic training images – the exact same selection that was recently used to train an SD LoRA with a similar style. (Right) the output from MJ applies the paint and colour styles of the moodboard images to a new setting – in this case the same kitchen drawing as presented previously

Pinterest is widely used by designers and now heavily relied upon. The company has mapped our clicks; they know what goes together, what we like, what other people with similar taste like – and the incentives of ever greater attention mean that it’s never in Pinterest’s best interest to challenge you. Instead, Pinterest is the infinite design ice cream parlour that always serves your favourite flavour; it’s hard to stop yourself going back every time.

Learning about design

I’ve recently heard that some universities require full disclosure of any Ai use and that in other cases it can actually lead to disciplinary action against the student. The academic world is grappling with these new tools just as practice is, but with additional concerns about how students develop fundamental design thinking skills – so what is their worry?

The tech writer Paul Graham once said “writing IS thinking” and I tend to agree. Sure, you could have an LLM come up with a stock essay response – but the act of actually thinking by writing down your words and editing yourself to find out where you land IS the whole point of it. Writing is needed to create new ideas in the world and to solve difficult problems. The concern from universities therefore is that if we stop writing, we will stop thinking.

For architects, sketching IS our means of design thinking – it’s consistently the most effective method of ‘problem abstraction’ that we have. If I think back to the most skilful design mentors I had in my early career, they were ALL expert draftspeople.

That’s because they came up on the drawing board, which meant they could distil many problems quickly and draw a single thread through things to find a solution, in the form of an erudite sketch. They drew sparingly, putting just the right amount of information in all the right places and knowing when to explore different levels of detail – because when you’re drawing by hand, you have to be efficient; you have to solve problems as you go.

Someone recently said to me that the less time the profession has spent drawing by hand (by using CAD, Revit, or Ai), the less architects have earned overall. This is a bit of a mind puzzle, and the crude problem is that when a more efficient technology exists, we are forced into adoption because we have to compete for work, whether it’s in our long-term interests or not – it’s a Catch-22.

But this observation contains a signal too: immaculate CAD lines do a different job from a sketch or hand drawing. The sketch is the truly high-value solution; the CAD drawing is the prosaic instructions for how to realise it.

I worry that “the idiot trap” for architects would be losing the fundamental skills of abstract reasoning that combine the spatial, material, engineering and cultural realms – and, in doing so, failing to recognise this core value as the thing that the client is actually paying for (i.e. they are paying for the solution, not the instructions).

Clients hire us because we can see complete design solutions and find value where others can’t and because we can navigate the socio-political realm of planning and construction in real life – places where human diplomacy and empathy are paramount.

They don’t hire us to simply ‘spend our time producing package information’ – that is a by-product and in recent years we’ve failed to make this argument effectively. We shouldn’t be charging “by the time needed to do the drawing”, we should be charging “by the value” of the building.

So as we consider things being done more quickly with Ai image models, we need to build consensus that we won’t dispense with the sketching and craft of our work. We have to avoid the risk of simply doing something faster and giving the saving straight back to the market in the form of reduced prices and undercutting. We may marvel at what Ai makes possible in shorter time frames, but we should all be thinking – “great, let’s try to make a bit more profit this year” not “great let’s use this to undercut my competitor”.

Conclusion: judicious use

There is a popular quote (by Joanna Maciejewska) that has become a meme online:

I want Ai to do my laundry and dishes, so that I can do art and writing, not for Ai to do my art and writing so that I can do my laundry and dishes

If we translate that into our professional lives, for architects that would probably mean having Ai assisting us with things like regulatory compliance and auditing, not making design images for us.

Counter-intuitively, Ai is realising value for practices in the very areas we would previously have considered the most difficult to automate: design optioneering, testing and conceptual image generation.

When architects reach for a tool like Midjourney, we need to be aware that these methods go right to the core of our value and purpose as designers. More than that, Ai imagery forces us to question our existing culture of design and methods of critique.

Unless we expressly dissuade our teams from using tools like Midjourney (which would be a valid position), anyone experimenting with it will now find it to be so effective that it will inevitably percolate into our design processes in ways that we don’t control, or enjoy.

Rather than allow these ad-hoc methods to creep up on us in design reviews unannounced and uncontrolled, a better approach is to first consider what an ‘aligned’ mode of adoption would look like within our design processes – one that fits the core culture and mission of the practice – and then to make more deliberate use of it through endorsed design processes that create repeatable outputs we really appreciate.


Keir Regan-Alexander
Photo taken during a design review at Morris+Company in 2022 – everyone standing up, drawings pinned up, table of material samples, working models, coffee cups. How will Ai imagery fit into this kind of crit setting? Should it be there at all? (photo: Architects from left to right: Kehinde, Funmbi, Ben, Miranda & David)

If you have a particularly craft-based design method, you could consider how that mode of thinking could be applied to your use of Ai. Can you take a particularly experimental view of adoption that aligns with your specific priorities? Think Archigram with the photocopier.

We also need to ask, when something is pinned up on a wall alongside other material, whether it can be judged objectively on its merits and relevance to the project – and if it stands up to this test, does it really matter to us how it was made? If I tell you it’s “Ai generated”, does that reduce its perceived value?

I find that experimentation with image models is best led by the design leaders in practice, because they are the “tastemakers” and usually create the permission structures around design. Image models are often mistakenly categorised as technical phenomena; while they require some knowledge and skill, they are actually far more integral to the aesthetic, conceptual and creative aspects of our work.

To get a picture of what “aligned adoption of Ai” would mean for your practice, it should feel like you’re turning up the volume on the particular areas of practice you already excel at – or, conversely, like you’re mitigating the aspects of practice you feel acutely weaker in.

Put another way – Ai should be used to either reinforce whatever your specialist niche is or to help you remedy your perceived vulnerabilities. I particularly like the idea of leaning into our specialisms because it will make our deployment of Ai much more experimental, more bespoke and more differentiated in practice.

When I am applying Ai in practice, I don’t see depressed and disempowered architects. I am reassured to find that the people most effective at writing bids with Ai also tend to be some of the best bid writers. The people who end up becoming the most experimental and effective at producing good design images with Ai image models also tend to be great designers, and so far this trend holds in all areas where I see Ai being used judiciously – without exception.

The “judicious use” part is most important, because only a practitioner who really knows their craft can apply these ideas in ways that actually explore new avenues for design and realise true value in project settings. If that description matches you, then you should be getting involved and having an opinion about it. In the Ai world this is referred to as keeping the “human-in-the-loop”, but we could think of it as the “architect-in-the-loop” – continuing to curate decisions, steer things away from creative cul-de-sacs and more effectively drive design.


Recommended viewing

Keir Regan-Alexander is director at Arka Works, a creative consultancy specialising in the Built Environment and the application of AI in architecture. At NXT BLD 2025 he explored how to deploy Ai in practice.

CLICK HERE to watch the whole presentation free on-demand


AI and design culture (part 1)
https://aecmag.com/ai/ai-design-culture-part-1/
Wed, 28 May 2025
Keir Regan-Alexander explores the opportunities and tensions between creativity and computation

As AI tools rapidly evolve, how are they shaping the culture of architectural design? Keir Regan-Alexander, director of Arka.Works, explores the opportunities and tensions at the intersection of creativity and computation — challenging architects to rethink what it means to truly design in the age of AI

An awful lot has been happening recently in the AI image space, and I’ve written and rewritten this article about three times to try to account for everything. Every time I think it’s done, there seems to be another release that moves the needle. That’s why this article is in two parts: first I want to look at recent changes from Gemini and GPT-4o, and then take a deeper dive into Midjourney V7 and give a sense of how architects are using these models.

I’ll start by describing all the developments and conclude by speculating on what I think it means for the culture of design.


Arka Works
(Left) an image used as input (created in Midjourney). (Right) an image returned from Gemini that precisely followed my text-based request for editing

Right off the bat, let’s look at exactly what we’re talking about here. In the figure above you’ll see a conceptual image for a modern kitchen, all in black. This was created with a text prompt in Midjourney. After that I put the image into Gemini 2.0 (inside Google AI Studio) and asked it:

“Without changing the time of day or aspect ratio, with elegant lighting design, subtly turn the lights (to a low level) on in this image – the pendant lights and strip lights over the counter”

Why is this extraordinary?

Well, there is no 3D model for a start. But look closer at the light sources and shadows. The model knew exactly where to place the lights. It knows the difference between a pendant light and a strip light and how each diffuses light. It then knows where to cast the multi-directional shadows, and that the material textures of each surface would have diffuse, reflective or caustic illumination qualities.
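As an aside, the same kind of edit can be scripted rather than typed into AI Studio. The sketch below uses Google’s google-genai Python SDK; the model name and the experimental image-output mode are my assumptions based on the release discussed here, so treat it as illustrative rather than definitive:

```python
# Rough sketch: text-driven image editing via the Gemini API (google-genai SDK).
from google import genai
from google.genai import types
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key
kitchen = Image.open("kitchen_concept.png")    # the Midjourney image to edit

response = client.models.generate_content(
    model="gemini-2.0-flash-exp",  # assumed experimental image-capable model
    contents=[
        "Without changing the time of day or aspect ratio, with elegant "
        "lighting design, subtly turn the lights (to a low level) on in "
        "this image - the pendant lights and strip lights over the counter",
        kitchen,
    ],
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

# Save any image part the model returns
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("kitchen_lights_on.png", "wb") as f:
            f.write(part.inline_data.data)
```

Here’s another one (see below). This time I’m using GPT-4o in Image Mode.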


Arka Works
(Left) a photograph taken in London on my ride home (building on Blackfriars Road). (Right) GPT-4o’s response to my request, a charming mock up of a model sample board of the facade

“Create an image of an architectural sample board based on the building facade design in this image”

Why is this one extraordinary?

Again, no 3D model – and with only a couple of minor exceptions, the architectural language of specific ornamentation, materials, colours and proportion has all been very well understood. The image is also (in my opinion) very charming. During the early stages of design projects, I have always enjoyed looking at the local “Architectural Taxonomy” of buildings in context, and this is a great way of representing it.

If someone in my team had made these images in practice I would have been delighted and happy for them to be included in my presentations and reports without further amendment.

A radical redistribution of skills

There is a lot of hype in AI which can be tiresome, and I always want to be relatively sober in my outlook and to avoid hyperbole. You will probably have seen your social media feeds fill with depictions of influencers as superhero toys in plastic wrappers, or maybe you’ve observed a sudden improvement in someone’s graphic design skills and surprisingly judicious use of fonts and infographics … that’s all GPT-4o Image Mode at work.




So, despite the frenzy of noise, the surges of insensitivity towards creatives and the abundance of Studio Ghibli IP infringement surrounding this release – in case it needs saying just one more time – in the most conservative of terms, this is indeed a big deal.

The first time you get a response from these new models that far exceeds your expectations, it will shock you and you will be filled with a genuine sense of wonder. I imagine the reaction feels similar to that of the first humans to see a photograph in the early 19th century – it must have seemed genuinely miraculous and inexplicable. You feel the awe and wonder, then you walk away and start to think about what it means for creators, for design methods … for your craft … and you get a sinking feeling in your stomach. For a couple of weeks after trying these new models for the first time I had a lingering feeling of sadness with a bit of fear mixed in.

These techniques are so accessible in nature that we should expect to see our clients briefing us with ever-more visual material. We therefore need to not be afraid or shocked when they do

I think this feeling was my brain finally registering the hammer dropping on a long-held hunch; that we are in an entirely new industry whether we like it or not and even if we wanted to return to the world of creative work before AI, it is impossible. Yes, we can opt to continue to do things however we choose, but this new method now exists in the world and it can’t be put back in the box.

I’ll return to this internal conflict in my conclusion. If we set aside the emotional reaction for a moment, the early testing I’ve been doing in applying these models to architectural tasks suggests that, in both cases, the latest OpenAI and Google releases could prove to be “epoch defining” moments for architects and for all kinds of creatives who work in the image and video domains.

This is because the method of production and the user experience is so profoundly simple and easy compared to existing practices, that the barrier for access to image production in many, many realms has now come right down.

Again, we may not like to think about this from the perspective of having spent years honing our craft, yet the new reality is right in front of us and it’s not going anywhere. These new capabilities can only lead to a permanent change in the working relationship between the commissioning client and the creative designer, because the means of graphical and image production have been completely reconfigured. In a radical act of forced redistribution, access to sophisticated skill sets is now being packaged up by the AI companies for anyone who pays the licence fee.

What has not become distributed (yet) is wise judgement, deep experience in delivery, good taste, entirely new aesthetic ideas, emotional human insight, vivid communication and political diplomacy; all attributes that come with being a true expert and practitioner in any creative and professional realm.

These are qualities that for now remain inalienable and should give a hint at where we have to focus our energies in order to ensure we can continue to deliver our highest value for our patrons, whomever they may be. For better or worse, soon they will have the option to try and do things without us.

Chat-based image creation & editing

For a while, attempting to produce or edit images within chat apps produced only sub-standard results. The likes of “Dall-E”, which could be accessed only within otherwise text-based applications, had really fallen behind and were producing ‘instantly AI-identifiable’ images that felt generic and cheesy. Anything that is so obviously AI created (and low quality) means that we instantly attribute a low value to it.

As a result, I was seeing designers flock instead to more sophisticated options like Midjourney v6.1 and Stable Diffusion SDXL or Flux, where we can be very particular about the level of control and styling and where the results are often either indistinguishable from reality or indistinguishable from human creations. In the last couple of months that dynamic has been turned upside down; people can now achieve excellent imagery and edits directly with the chat-based apps again.

The methods that have come before, such as MJ, SD and Flux, are still remarkable and highly applicable to practice – but they all require a fair amount of technical nous to get consistent and repeatable results. I have found through my advisory work with practices that having a technical solution isn’t what matters most; it’s having it packaged up and made enjoyable enough to use that it can change rigid habits.

A lesser tool with a great UX will beat a more sophisticated tool with a bad UX every time.

These more specialised AI image methods aren’t going away, and they still represent the most ‘configurable’ option, but text-based image editing is a format that anyone with a keyboard can do, and it is absurdly simple to perform.

More often than not, I’m finding the results are excellent and suitable for immediate use in project settings. If we take this idea further, we should also assume that our clients will soon be putting our images into these models themselves and asking for their ideas to be expressed on top…


Arka Works
(Left) Image produced in Midjourney (Right) Gemini has changed the cladding to dark red standing seam zinc and also changed the season to spring. The mountains are no longer visible but the edit is extremely high quality.

We might soon hear our clients saying; “Try this with another storey”, “Try this but in a more traditional style”, “Try this but with rainscreen fibre cement cladding”, “Try this but with a cafe on the ground floor and move the entrance to the right”, “Try this but move the windows and make that one smaller”…

You get the picture.

Again, whether we like this idea or not (and I know architects will shudder even thinking of this), when our clients receive the results back from the model, they are likely to be similarly impressed with themselves, and this can only lead to a change in briefing methods and working dynamics on projects.

To give a sense of what I mean exactly, in the image below I’ve included an example of a new process we’re starting to see emerge, whereby a 2D plan can be roughly translated into a 3D image using 4o in Image Mode. This process is definitely not easy to get right consistently (the model often makes errors) and involves several prompting steps and a fair amount of nuance in technique. So far, I have also needed to follow up with manual edits.


Arka Works
(Left) Image produced in Midjourney using a technique called ‘moodboards’. (Right) Image produced in GPT-4o Image Mode with a simple text prompt

Despite those caveats, we can assume that in the coming months the models will solve these friction points too. I saw this idea first validated by Amir Hossein Noori (co-founder of the AI Hub) and while I’ve managed to roughly reproduce his process, he gets full credit for working it out and explaining the steps to me – suffice to say it’s not as simple as it first appears!

Conclusion: the big leveller

1) Client briefing will change

My first key conclusion from the last month is that these techniques are so accessible in nature that we should expect to see our clients briefing us with ever-more visual material. We therefore need to not be afraid or shocked when they do.

I don’t expect this shift to happen overnight, and I also don’t think all clients will necessarily want to work in this way, but over time it’s reasonable to expect this to become much more prevalent and this would be particularly the case for clients who are already inclined to make sweeping aesthetic changes when briefing on projects.

Takeaway: As clients decide they can exercise greater design control through image editing, we need to be clearer than ever on how our specialisms are differentiated and to be able to better explain how our value proposition sets us apart. We should be asking; what are the really hard and domain-specific niches that we can lean into?

2) Complex techniques will be accessible to all

Next, we need to reconsider technical hurdles as a ‘defensive moat’ for our work. The most noticeable trend of the last couple of years is that things that appear profoundly complicated at first often go on to become much simpler to execute later.

As an example, a few months ago we had to use ComfyUI (a complex node-based interface for using Stable Diffusion) for ‘re-lighting’ imagery. This method remains optimal for control, but now for many situations we could just make a text request and let the model work out how to solve it directly. Let’s extrapolate that trend and assume that as a generalisation; the harder things we do will gradually become easier for others to replicate.

Muscle memory is also a real thing in the workplace; it’s often so much easier to revert to the way we’ve done things in the past. People will say, “Sure, it might be better or faster with AI, but it also might not – so I’ll just stick with my current method”. This is exactly the challenge that I see everywhere, and the people who make progress are the ones who insist on proactively adapting their methods and systems.

The major challenge I observe for organisations through my advisory work is that behavioural adjustment to working methods when you’re under stress or a deadline is the real bottleneck. While a ‘technical solution’ may exist, change will only occur when people are willing to do something in a new way. I do a lot of work now on “applied AI implementation” and engagement across practice types and scales. I see again and again that there are pockets of technical innovation and skill among certain team members, but that this is not being translated into actual changes in the way people do things across the broader organisation. This has a lot to do with access to suitable training, but also with a lack of awareness that improving working methods is much more about behavioural incentives than about ‘technical solutions’.

In a radical act of forced redistribution, access to sophisticated skill sets is now being packaged up by the AI companies for anyone who pays the licence fee

There is an abundance of new groundbreaking technology now available to practices, maybe even too much – we could be busy for a decade with the inventions of the last couple of years alone. But in the next period, the real difference maker will not be technical, it will be behavioural. How willing are you to adapt the way you’re working and try new things? How curious is your team? Are they being given permission to experiment? This could prove a liability for larger practices and make smaller, more nimble practices more competitive.

Takeaway: Behavioural change is the biggest hurdle. As the technical skills needed for the ‘means of creative production’ become more accessible to all, the challenge for practices in the coming years may not be all about technical solutions; it will be more about their willingness and ability to adjust behaviour and culture. The teams who succeed won’t be the people with the most technically accomplished solutions; more likely it will be those who achieve the most widespread and practical adaptations of their working systems.

3) Shifting culture of creativity

I’ve seen a whole spectrum of reactions towards Google and OpenAI’s latest releases, and I think it’s likely that these new techniques are causing many designers a huge amount of stress as they consider the likely impacts on their work. I have felt the same apprehension many times too. I know that a number of ‘crisis meetings’ have taken place in creative agencies, for example, and it is hard for me to see these model releases as anything other than a direct threat to at least a portion of their scope of creative work.

This is happening to all industries, not least across computer science, after all – LLMs can write exceptional code too. From my perspective, it’s certainly coming for architecture as well, and if we are to maintain the architect’s central role in design and place making, we need to shift our thinking and current approach or our moat will gradually be eroded too.

The relentless progression of AI technology cares little about our personal career goals and business plans and when we consider the sense of inevitability of it all – I’m left with a strong feeling that the best strategy is actually to run towards the opportunities that change brings, even if that means feeling uncomfortable at first.

Among the many posts I’ve seen celebrating recent developments from thought leaders and influencers seeking attention and engagement, I can see a cynical thread emerging … of (mostly) tech and sales people patting themselves on the back for having “solved art”.


Arka Works
(Left) An example plan of an apartment (AI Hub), with a red arrow denoting the camera position. (Right) a render produced with GPT-4o Image Mode (produced by Arka Works)

The posts I really can’t stand are the cavalier ones that actually seem to rejoice at the idea of not needing creative work anymore and salivating at the budget savings they will make … they seem to think you can just order “creative output” off a menu and that these new image models are a cure for some kind of long held frustration towards creative people.

Takeaway: The model “output” is indeed extraordinarily accomplished and produced quickly, but creative work is not something that is “solvable”; it either moves you or it doesn’t, and design is similar — we try to explain objectively what great design quality is, but it’s hard. Certainly it fits the brief – yes, but the intangible and emotional reasons are more powerful and harder to explain. We know it when we see it.

While AIs can exhibit synthetic versions of our feelings, for now they represent an abstracted shadow of humanness – it is a useful imitation for sure and I see widespread applications in practice, but in the creative realm I think it’s unlikely to nourish us in the long term. The next wave of models may begin to ‘break rules’ and explore entirely new problem spaces and when they do I will have to reconsider this perspective.

We mistake the mastery of a particular technique for creativity and originality, but the thing about art is that it comes from humans who’ve experienced the world, felt the emotional impulse to share an authentic insight and cared enough to express themselves using various mediums. Creativity means making something that didn’t exist before.

That essential impulse, the genesis, the inalienably human insight and direction is still for me, everything. As we see AI creep into more and more creative realms (like architecture) we need to be much more strategic about how we value the specifically human parts and for me that means ceasing to sell our time and instead learning to sell our value.

In part 2 I will look in depth at Midjourney and how it’s being used in practice. I’ll also look specifically at the latest release (V7) in more detail. Until then — thanks for reading.


Catch Keir Regan-Alexander at NXT BLD

Keir Regan-Alexander is director at Arka Works, a creative consultancy specialising in the Built Environment and the application of AI in architecture.

He will be speaking on AI at AEC Magazine’s NXT BLD in London on 11 June.

AI and the future of arch viz
https://aecmag.com/visualisation/ai-and-the-future-of-arch-viz/
Fri, 21 Feb 2025
Streamlining workflows, enhancing realism, and unlocking new creative possibilities without compromising artistic integrity

Tudor Vasiliu, founder of architectural visualisation studio Panoptikon, explores the role of AI in arch viz, streamlining workflows, pushing realism to new heights, and unlocking new creative possibilities without compromising artistic integrity.

AI is transforming industries across the globe, and architectural visualisation (let’s call it ‘Arch Viz’) is no exception. Today, generative AI tools play an increasingly important role in an arch viz workflow, empowering creativity and efficiency while maintaining the precision and quality expected in high-end visuals.

In this piece I will share my experience and best practices for how AI is actively shaping arch viz by enhancing workflow efficiency, empowering creativity, and setting new industry standards.

Streamlining workflows with AI

AI, we dare say, has proven not to be a bubble or a simple trend, but a proper productivity driver and booster of creativity. Our team at Panoptikon and others in the industry leverage generative AI tools to the maximum to streamline processes and deliver higher-quality results.



Tools like Stable Diffusion, Midjourney and Krea.ai transform initial design ideas or sketches into refined visual concepts. Platforms like Runway, Sora, Kling, Hailuo or Luma can do the same for video.

With these platforms, designers can enter descriptive prompts or reference images, generating early-stage images or videos that help define a project’s look and feel without lengthy production times.

This capability is especially valuable for client pitches and brainstorming sessions, where generating multiple iterations is critical. Animating a still image is possible with the tools above just by entering a descriptive prompt, or by manipulating the camera in Runway.ml.

Sometimes, clients find themselves under pressure due to tight deadlines or external factors, while studios may also be fully booked or working within constrained timelines. To address these challenges, AI offers a solution for generating quick concept images and mood boards, which can speed up the initial stages of the visualisation process.

In these situations, AI tools provide a valuable shortcut by creating reference images that capture the mood, style, and thematic direction for the project. These AI-generated visuals serve as preliminary guides for client discussions, establishing a strong visual foundation without requiring extensive manual design work upfront.

Although these initial images aren’t typically production-ready, they enable both the client and visualisation team to align quickly on the project’s direction.

Once the visual direction is confirmed, the team shifts to standard production techniques to create the final, high-resolution images that would accurately showcase the full range of technical specifications that outline the design. While AI expedites the initial phase, the final output meets the high-quality standards expected for client presentations.

Dynamic visualisation

For projects that require multiple lighting or seasonal scenarios, Stable Diffusion, LookX or Project Dream allow arch viz artists to produce adaptable visuals by quickly applying lighting changes (morning, afternoon, evening) or weather effects (sunny, cloudy, rainy).

Additionally, AI’s ability to simulate seasonal shifts allows us to show a park, for example, lush and green in summer, warm-toned in autumn, and snow-covered in winter. These adjustments make client presentations more immersive and relatable.
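As a sketch of how such variants might be scripted outside a point-and-click tool, the snippet below uses Stable Diffusion img2img via the Hugging Face diffusers library to spin seasonal versions from one base render. The model ID, file names, prompts and strength value are illustrative assumptions:

```python
# Rough sketch: seasonal variants of a base render with SD img2img (diffusers).
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

base = load_image("park_summer_render.png")  # hypothetical base render

variants = {
    "autumn": "the same park in autumn, warm golden foliage, low afternoon sun",
    "winter": "the same park in winter, snow-covered ground, overcast sky",
}
for season, prompt in variants.items():
    # strength trades fidelity to the base render against the new conditions
    image = pipe(prompt=prompt, image=base, strength=0.55).images[0]
    image.save(f"park_{season}.png")
```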

Adding realism through texture and detail

AI tools can also enhance the realism of 3D renders. By specifying material qualities through prompts or reference images in Stable Diffusion, Magnific and Krea, materials like wood, concrete and stone – or greenery and people – can be quickly improved.

The tools add nuanced details like weathering to any surface or generate intricate enhancements that may be challenging to achieve through traditional rendering alone. The visuals become more engaging and give clients a richer sense of the project’s authenticity and realistic quality.

This step may not replace traditional rendering or post-production but serves as a complementary process to the overall aesthetic, bringing the image closer to the level of photorealism clients expect.

Bridging efficiency and artistic quality

While AI provides speed and efficiency, human expertise remains mandatory for technical precision. AI handles repetitive tasks, but designers need to review and refine each output so that the visuals meet the exact technical specifications of each project’s design brief.

Challenges and considerations

It is essential to approach the use of AI with awareness of its limitations and ethical considerations.

Maintaining quality and consistency: AI-generated images sometimes contain inconsistencies or unrealistic elements, especially in complex scenes. These outputs require human refinement to align with the project’s vision so that the result is accurate and credible.

Ethical concerns around originality: There’s an ongoing debate about originality in AI-generated designs, as many AI outputs are based on training data from existing works. We prioritise using AI as a support tool rather than a substitute for human creativity, as integrity is among our core values.

Future outlook: innovation with a human touch: Looking toward and past 2025, AI’s role in arch viz is likely to expand further – supporting, rather than replacing, human creativity. AI will increasingly handle technical hurdles, allowing designers to focus on higher-level creative tasks.

AI advancements in real-time rendering are another hot topic, expected to enable more immersive, interactive tours, while predictive AI models may suggest design elements based on client preferences and environmental data, helping studios anticipate client needs.

AI’s role in arch viz goes beyond productivity gains. It’s a catalyst for expanding creative possibilities, enabling responsive design, and enhancing client experiences. With careful integration and human oversight, AI empowers arch viz studios – us included – to push the boundaries of what’s possible while, at the same time, preserving the artistry and precision that define high-quality visualisation work.


About the author

Tudor Vasiliu is an architect turned architectural visualiser and the founder of Panoptikon, an award-winning high-end architectural visualisation studio serving clients globally. With over 18 years of experience, Tudor and his team help the world’s top architects, designers, and property developers realise their vision through high-quality 3D renders, films, animations, and virtual experiences. Tudor has been honoured with the CGarchitect 3D Awards 2019 – Best Architectural Image, and has led panels and speaking engagements at industry events internationally, including the D2 Vienna Conference; State of Art Academy Days, Venice, Italy; and Inbetweenness, Aveiro, Portugal – among others.


Main image caption: Rendering by Panoptikon for ‘The Point’, Salt Lake City, Utah. Client: Arcadis (Credit: Courtesy of Panoptikon, 2025)

Will AI design your next building? https://aecmag.com/ai/will-ai-design-your-next-building/ Fri, 21 Jul 2023

Will AI take architects’ jobs too, or will it make them much more fulfilling instead? asks Akos Pfemeter of Graphisoft

Although the field is not new at all – AI research started during the Dartmouth Summer Research Project in 1956 – it took a half-century-long "AI winter" and the unexpected breakthrough of OpenAI’s ChatGPT, which reached 100 million users in just two months, for AI to become all the rage in 2023. And for a reason: AI now shows human-level cognitive abilities, capable of passing the US Uniform Bar Exam with a higher score than 90% of humans.

AI fares well on the creative side, too: it can compose music, write poetry (helpful in creating lyrics for the music it just composed) and generate images of anything imaginable – or rather, anything "prompt"-able in natural human language. No wonder news headlines are full of "[BLANK] profession is in danger of losing jobs to AI" – feel free to fill in the blank with "journalists," "marketeers," "programmers," "lawyers" or many other white-collar job titles.

What does all this mean for the AEC industry? Will AI take architects’ and engineers’ jobs too? Or will it make them much more fulfilling instead? No one knows the future, and there are multiple scenarios for AI to unfold, but one thing is certain – our profession will be fundamentally different by the end of the decade. What follows is a discussion of the relevant points to help answer the questions stated, giving you a better overall understanding of the subject so that you can prepare.



Let’s start with definitions: AI (artificial intelligence), ML (machine learning), and DL (deep learning) are used interchangeably in colloquial language, and while they are related, they are far from the same thing.

AI is an umbrella term for machines that can do things beyond sheer automation – a robot vacuum cleaner that can clean rooms of any shape without specific instructions is already considered AI. Its “intelligence” comes from its software; simply put, the AI revolution is a software revolution.

In traditional software development, people (programmers) specify the instructions for the computer. AI uses a new programming paradigm called machine learning, where computers aren’t given specific instructions. Instead, they are shown vast numbers of example input<>output pairs (e.g., input: a photo of a bus <> output: image label "bus") from which they distil patterns that, in this example, become image recognition software. (Fun fact: with CAPTCHA tests you, the human, have been teaching the computer these patterns.)
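
As a deliberately trivial illustration of that pairs-to-program idea (a toy sketch with made-up feature vectors, nothing like a production image recogniser), a classifier can be "written" entirely from labelled examples:

```python
from sklearn.neural_network import MLPClassifier

# Inputs: invented two-number summaries standing in for photos
X = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
# Outputs: the label a human attached to each example
y = ["bus", "bus", "tree", "tree"]

# "Training" distils the pattern from the pairs; no rules are hand-written
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict([[0.85, 0.15]]))  # -> ['bus'] for a new, unseen input
```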

The AI breakthrough is the result of recent developments in GPU-based mass computing married with traditional "statistical" methods to find patterns in large datasets of images (photos, medical X-rays, video frames), sound (speech and music) or text (emails, websites, and books). The resulting software can be used to recognise similar patterns in new datasets (e.g., cancer diagnosis, voice recognition), and to generate new, similar datasets (prompt-based image, text, and music generation).

You might wonder if it’s possible to train AI on large datasets of architectural plans – or, better still, building information models – and have AI design buildings with the design programme as the system "prompt". There are startups already offering promises very close to this vision, but there are still more questions than answers today. Would you (or your client) really like this approach to building design? Would you (or your client) agree to contribute your existing design (asset) IP to train the AI? Wouldn’t such an approach lead to too much uniformity?

The more relevant question is how AI can contribute to building design today. It most certainly "won’t design your next building", but it can augment your capabilities, making you a better, more efficient designer.

AI will undoubtedly have a lasting impact on our industry but probably even more so on humanity

You can already use generative AIs such as Stable Diffusion or MidJourney to help you with design ideas in the form of renderings based on your mass model. There are third-party integrations with BIM software already available, and native integrations (e.g., one from Graphisoft) are underway.
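
In practice, "based on your mass model" usually means conditioning the image model on a render or depth map exported from the design tool. A minimal sketch of that pattern, using the diffusers library with a depth-trained ControlNet; the checkpoints named are real public ones, but the file name and prompt are placeholders:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Depth render exported from the BIM/CAD massing model
depth = Image.open("mass_model_depth.png")

out = pipe(
    "timber-clad community library, dusk, photorealistic",
    image=depth,            # geometry steers the image, prompt sets the look
    num_inference_steps=30,
).images[0]
out.save("concept.png")
```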

In the immediate future, we can expect further integrations with existing large AI models to make specific tasks easier, better or automated. The target range is extremely broad, including voice, natural-text or even gesture control for BIM software (a revamped "command line", anyone?), a BIM-integrated AI chatbot for personal training and support, and an AI-translated "lightweight" API for broad accessibility and more democratised add-on development.

In the not-so-distant future, model generation from drawings, images, point clouds or even natural-language prompts is a promising use of AI. This will provide the key to unlocking vast reservoirs of dumb/analogue/proprietary information into intelligent/parametric/open formats, accelerating the digitisation of building-related data and content. A specific example of this model generation is using natural-language prompts for parametric object scripting (GDL, anyone?).

But the highest achievable AI goal should be to free human designers from mundane tasks such as technical documentation. In this use case, the architect would focus on what is best in design, while the computer carried out the laborious tasks of technical detailing and documentation (not to mention the multiple cycles of updates while the design is still changing). This is undoubtedly one of the more complex tasks for AI implementation, but still within our reach.

Whether AI will ever provide complete architectural services directly to clients is still an open question. One thing is for sure: future architectural and engineering practices will require a different type of workforce with different skillsets than today; "prompt engineering", for example, is an emerging field where you learn how to ask the right questions of AI – a skill we will need to master to a degree, should we want to reap the benefits of existing and future AI systems.

AI will undoubtedly have a lasting impact on our industry but probably even more so on humanity. AI truly has the potential to turn society on its head, disrupting everything, including wealth distribution, political control, and how we live – similar to what the steam engine did to Europe during the Industrial Revolution of the 18th century. A growing number of AI scientists go even further, demanding a six-month pause on all large AI model training so that we humans can catch up with the latest progress in AI.

Should we be optimistic about AI? If we survive the singularity, we will certainly have a powerful ally to help solve our biggest problems, such as global warming, crime, poverty, pandemics and erosion – but to get there, we need safeguards. We should not only worry about IP rights, AI alignment, and a new arms race, but also ensure we don’t transmit our own human biases to the machines. If we succeed, AI will bring never-before-seen prosperity to humanity.

AI special edition of AEC Magazine https://aecmag.com/technology/ai-special-edition-of-aec-magazine/ Wed, 26 Oct 2022

We explore the current and future potential of Artificial Intelligence (AI) in AEC

We have some incredible stories in the latest edition of AEC Magazine, available to view now, free, along with all of our back issues.

Subscribe here to the digital edition free, or take out a print subscription for $49 per year (free to UK AEC professionals).

What’s inside our Artificial Intelligence (AI) special edition?

  • What is the potential impact of AI on architectural design?
  • What AI could bring to future design systems
  • Why AI will augment, not change, AEC workflows
  • How next-generation design tools will automate mundane, repetitive tasks
  • Is there a place for AI design in real-world practice?
  • Could text-to-image AI be used to spark creativity in architectural design?

Plus lots, lots more

  • Autodesk Forma: Our thoughts on Autodesk’s new AEC platform
  • Revizto – clash of the titan
  • How Scan Computers is deploying custom solutions for Nvidia Omniverse
  • Revit Data Exchange Connector for Rhino
  • Autodesk bundles Twinmotion with Revit
  • 13th Gen Intel Core CPUs launch for CAD and beyond
  • 4D construction simulation driving Everton stadium build
  • KPF and SimScale explore wind analysis for early stage design

Artificial Intelligence (AI): the coming tsunami https://aecmag.com/ai/ai-the-coming-tsunami-architecture/ Mon, 24 Oct 2022

While we see design software marginally improve year on year, there has been growing unrest at the pace/scale of improvements. Questions have been raised about how well BIM workflows map to how the industry actually works. Martyn Day looks at the potential impact of artificial intelligence on architecture

As a society, living in a technological age, we have become incredibly used to rapid change. Sometimes it feels like the one constant we can rely on is that everything will change. For millennia humankind lived in caves, scrawling drawings on the walls. The Stone Age was 2.5 million years long, then came the Bronze Age and, with it, urbanisation, which lasted 1,500 years. The first Industrial Revolution lasted just 80 years (1760 – 1840). Before we reached our current, digital age, the Wright Brothers perfected powered flight and just 66 years later, our species had escaped Earth’s gravity, traversed the vacuum of space and landed on the moon. We are making advances in ever shorter timeframes and have industrialised innovation through the development of ever-smarter tools.

The next revolution is already here but, as the saying goes, it will not be evenly distributed. At the moment, many aspects of our working lives are still going through digital transformation. Everything is becoming data and the more that becomes centralised, the more insights it enables, offering a greater opportunity for knowledge processing.

Artificial Intelligence and Machine Learning have gone from science fiction to science fact and are rapidly being used by increasing numbers of industries to improve productivity, knowledge capture and in the creation of expert systems. Businesses will need to transform as quickly as these technologies are deployed as they will bring structural and business model changes at rates which we have not yet truly anticipated.

In the last few months, I’ve seen demonstrations of design technology currently in development that will, at the very least, automate labour intensive detail tasks and perhaps greatly lessen the need for architects on certain projects.

First warning

During the lockdown in 2020, I watched with interest an Instagram post by designer and artist Sebastian Errazuriz. It soon became a series and more of a debate. He said, “I think it’s important that architects are warned as soon as possible that 90% of their jobs are at risk.”

His argument condensed down to the fact that architecture takes years to learn and requires years of practice. Machine learning-based systems can build experience at such an accelerated rate that humans cannot possibly compete.

As we already have millions of houses and enormous quantities of data, including blueprints, why do we need a new house designed from scratch when we can have an AI trained on them blend all the best designs? “Now try to imagine what 1,000 times this tech and 10 years will do to the industry,” concluded Errazuriz.

The interesting thing is, at that point in time there was very little technology offering anything like that. Perhaps Errazuriz had seen Google’s Sidewalk Labs which was experimenting with generative design to create and optimise neighbourhood design. At the time I thought it was a good marketing ploy for himself, although the comments turned into a pile-on.

Current AI reality

We are still some way off from fulfilling anything like the true potential of AI in generative design, a view shared by Michael Bergin of Higharc, who used to head up a machine learning research group at Autodesk. “The full impact of a generative model that uses a deep learning system, what we call an Inference Model, is not ready for primetime yet but it’s incredibly interesting,” he says.

But there have already been several fascinating applications of AI/ML in AEC. Autodesk, for example, has delivered some niche uses of the technology. Autodesk Construction IQ is aimed at project risk management in commercial, healthcare, institutional, and residential markets. It examines drawings and identifies possible high-risk issues ahead. AutoCAD has a ‘My Insights’ feature, which examines how customers use AutoCAD commands. The AI then offers tailored advice on improving productivity and making better use of the tools.

Like all hype cycles, the impact of machine intelligence on jobs will be overestimated in the short term and underestimated in the long-term

There is also a range of adaptive and ‘solver’ tools available, such as Testfit, Autodesk Spacemaker and Finch 3D, which all solve multiple competing variables to help arrive at optimised solutions. While not strictly AI/ML, their results feel like magic and actually help designers make better-informed decisions and reduce the pain of complexity.

Bricsys has also been investing in AI. Bricscad BIM doesn’t use the Lego CAD paradigm of modelling with walls, doors, windows etc. Instead the user models with solids and then, using the BIMify command, runs AI over the geometry, which it identifies as IFC components, windows, floors, walls etc.

AI applications so far have either predominantly been at the conceptual side or have tried to ‘learn’ from the knowledge of past projects.

Recent advances

Over the last two years, in conversations with AEC firms fed up with the limitations of their BIM tools and looking for significant productivity improvements, many seemed to want to completely automate the 2D drawing process.

While drawings are a legal requirement, heavily model-based firms are calculating that they could save millions by having AI take over drawing production, leaving them more time for design. Around the time of our NXT BLD conference in June 2022, I started to see early alpha code of software looking to apply AI/ML to design. And, in subsequent conversations with design IT directors at leading architectural firms, there was an appreciation that for many standard, repeatable building types – schools, hospitals, offices, and houses – automated systems will soon heavily impact bread-and-butter projects.

One firm was already running projects in the Middle East with an in-house system which only required one architect, whose task was to define and control the design’s ‘DNA’, with the rest of the team being engineers focussed on streamlining fabrication. I’ve also seen a demonstration of a system that requires mere polyline input to derive fabrication drawings for modular buildings, missing out detail design completely. There’s also Augmenta, which is looking to automate the routing of electrical, plumbing, MEP and structural detail modelling.

Another gift from lockdown was construction giant Bouygues Construction working with Dassault Systèmes to develop an expert system based on the 3D Experience platform (Catia for us old schoolers).

Drop in a Revit model and the system outputs a fully costed, documented virtual construction model for fabrication – all based on the rules, processes and machines which Bouygues has defined in its workflow, all managed through a Dassault Systèmes Product Lifecycle Management (PLM) backbone.

While the system is based on configuration and constraints, and is low on AI/ML, there is a drive to build bespoke expert systems that harness a company’s well-defined internal processes. As with Higharc, next-generation tools are more likely to be platforms aimed at solving niche market segments.

Pictures that infinitely paint

Ten years ago, machine learning systems were only just getting the hang of identifying what the subject of a photograph was. Is this a bear or is this a dog? Today’s systems can write entire paragraphs describing a scene from computer vision. This advance is probably just as well, as there are already automated taxis with no human drivers – Cruise and Waymo – driving around San Francisco picking up passengers.

The rise of DALL-E, Midjourney, DeepRender and Stable Diffusion has flooded social media with all sorts of amazing images. In this issue you can see the work of many readers who have been experimenting with these tools, to great effect. Trained on billions of photographs, and now allowing users to add their own, this technology seems to be advancing week by week to a point where the output becomes useful at the conceptual phase of design.

AI-generated image of a building facade produced by Hassan Ragab in midjourney, automatically converted to a 3D mesh using Kaedim, an AI that turns 2D images to 3D models

That’s a view shared by computational designer / digital artist, Hassan Ragab, one of the most accomplished users of the technology. “There will be a point in the near future where these tools could be directly employed into the design process,” he says. “For now many architects and designers are using it as sketching / inspirational tools, but for me, I am just trying to explore what these tools mean to our creative process; by trying to push my imagination to its limits and visualising what is on my mind using these powerful tools (and also to observe how these tools are changing how my mind works).”

Second warning

In August 2022, Sebastian Errazuriz was on Instagram again, this time identifying that illustrators will, unfortunately, be the first artists to be replaced by AI. Illustrations are commissioned based on text descriptions, which is how these AI systems work.

“The only difference between a human and the AI is that it takes a human about five hours to make a decent illustration that’s going to be published. It takes the computer five seconds,” said Errazuriz.

He went on to recommend jumping in as fast as humanly possible to understand how the tools work and for illustrators to use their abilities to augment these designs. Experience will now help artists learn how to better describe an image to the machine.

I recently spent a weekend with friends who own a visualisation and media company. One of the partners confided to me that he thought that, being a creative, he would never have to compete against artificial intelligence. In the last two months his company has had to invest hours of time learning to make use of and understand how these new tools can be harnessed for their business. They even have clients requesting AI-generated presenters, which read out written text in their videos, to save money. It would seem Errazuriz was right on the money.

AI to BIM?

Having seen the incredibly consistent Midjourney building designs by Hassan Ragab and followed the community, it was interesting when a UK company called Kaedim popped up, which appeared to be developing a service to convert 2D images to 3D mesh models. I contacted the CEO, Konstantina Psoma, to see if we could try out the service.

Kaedim was designed to offer the games industry a SaaS platform to quickly convert 2D assets into 3D meshes for games content. We sent over one of Hassan’s complex images and got an OBJ file back containing a single meshed object. It was interesting to see the interpretation, but obviously there was no detail on any of the other sides of the building. Psoma had warned me that Kaedim hadn’t been trained on architectural assets but was up for giving it a go.

Photo of early modernist architecture, automatically converted to a 3D mesh using Kaedim, an AI that turns 2D images to 3D models

Given the complex nature of the Midjourney output, I next put through a photograph of some early modernist architecture, which was very rectilinear; this gave much better results. I then tried to put the mesh through Bricscad BIM to see if the BIMify command could turn it into a BIM model.

While I was hoping this would deliver the world’s first AI concept design to BIM model, incompatibilities in the software meant it fell a little short. Kaedim creates a single sealed mesh, whereas Bricscad BIM is expecting multiple meshes in its models. However, it did come temptingly close, especially with simplified geometry.
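
One speculative workaround – untested here, and resting on the assumption that BIMify simply needs the geometry delivered as separate parts – would be to segment the sealed mesh into planar regions before import, for example with the open-source trimesh library:

```python
import trimesh

mesh = trimesh.load("kaedim_output.obj")  # the single sealed mesh

# facets = groups of adjacent coplanar faces: rough stand-ins for walls,
# floors and roofs (curved regions are ignored in this crude sketch)
parts = [mesh.submesh([facet], append=True) for facet in mesh.facets]

# Export one object per planar region for the BIM tool to classify
trimesh.Scene(parts).export("kaedim_segmented.obj")
```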

At some point these AI systems are most certainly going to start producing 3D models based on a description, or the AI will be capable of rendering all façades, enabling some degree of 3D. Instead of feeding them flat 2D models, imagine an AI trained on every award-winning architectural 3D model, or all the changes to architectural vocabulary throughout history, from Imhotep (2,700 BCE) to Zaha Hadid Architects (2016). Or an AI engineering system, which generates a fabricable engineering design of a hospital at 1:1, but allows the architect to design the façade panels, possibly inspired by another AI tool?

Conclusion

AI/ML, configurators and solvers are coming and coming fast. Over the next five years it will be fascinating to see how this all unfolds. To stay ahead of the game, the best survival advice is to familiarise yourself with these new systems, when you get the chance.

Established BIM developers of the existing tools are working out which elements of their software AI/ML can be applied to. These could range from the boring but essential, such as stair design, to form optimisation based on multiple analysis criteria.

This piecemeal approach to improvement will please existing users but won’t radically change the process. It will be for others, with nothing to lose, to come up with more powerful design systems which offer higher speeds of concept to design throughput. The focus might not be on architectural design but on construction because of the value benefit that could be applied.

Augmenta, for example, is looking to automate all the phases of detailed design. If this were to be driven into fabrication as well, the whole process might also go from 3D model to G-code.

Like all hype cycles, the impact of machine intelligence on jobs will be overestimated in the short term and underestimated in the long-term. From what I can see, efforts are being made to automate detail design, together with drawing production.

Both of these tasks are highly demanding and require sizeable teams to carry out mundane work and coordinate design changes. Automation could ultimately bring about reductions in head count at firms. The dream of having more time to design may hold some truth, but architects would need to change their business models, as billing by the hour and a change-driven fee structure are not going to survive the impact of automation in detail design.

The other thing that comes to mind is that, with all this time compression technology and ability to turn a process which has traditionally taken years into maybe weeks, it doesn’t really allow for the nature of humans and the reality of clients changing their mind.

I remember hearing of one successful collaborative BIM project that coordinated its project teams on an office building design and got early sign-off from the client, at which point they ordered the steel. Much later, the client changed their mind on the design, but it was too late as the steel had been cut. AI might help deliver zero clashes and vastly reduced waste, but we can’t forget about the state of flux which is core to the human condition.


AI in architecture: by Clifton Harness, CEO of Testfit

It was scheme “F0” fully printed and delivered to higher-ups for review. This baby was the sixth major site plan design, but the tenth minor iteration that slightly improved the developer’s financial outcome. Finance said it was a winner.

On the walk back to my desk at 11:14pm, I counted the units, again. 253. Good. I counted the parking stalls. It was ready for review. The next morning, I arrived to review “F0” and caught my 30-years-an-architect boss hard at work counting the stalls and units. This is when it really hit me: software has barely scratched the surface of building design. I think that this thought, in this moment, was the TestFit founding moment.

I was so deeply struck with the very real absurdity that, industry-wide, hundreds of thousands of hours are spent checking math on parking stalls. Imagine if we could fix that? Or more meaningful things? Like improving the hit rate for housing projects. Or employing artificial intelligence to help humans comply with the rise of ever more complex zoning and compliance codes?

Now to the meat of how I see AI playing out in architecture:

AI in architecture will result in better architecture, as long as there is actually a human architect running that AI. This will put the modern architect at a crossroads: do they embrace technologies that can make them super architects or do they reject them and watch the engineering and development industries embrace them? Either way, we will get better buildings, and the choice is the architect’s now.

If user-editable configurators like TestFit’s technology are employed, the project team has detailed control to achieve the design vision. It enables software engineers to use meaningful procedures to develop forms and understand why they break. The major strength (or weakness) of procedures is that they are all human-informed.

In the past few years, we have seen very impressive machine-learning algorithms start to tackle things like noise, daylighting, energy use, or microclimate analysis. These are promising, but ultimately computers were the ones doing those analyses anyway. The definition of form to meet project requirements continues to be the fundamental task at the heart of the design process.

Mixed-AI workflows are also quite promising. An example of this is using a simple procedure to generate massing, and then to ask a neural network for its best guess on column sizing for said mass.

Another thing I am absolutely convinced of: all these avenues of AI penetrating the architecture industry will still go through architecture firms. I’ve worked personally with hundreds of real estate developers, and nearly all of them would prefer to work with architects that have a long track record of success.

The real fear, I think, for the architecture industry, is when the Startup Development or Start-up Architecture shops start to leverage this technology and develop asymmetrical advantages over real estate investment trusts (REITs) or the Genslers of the world. AEC has always been soft on process, and AI is the process holy hand grenade.

Midjourney architecture: Hassan Ragab https://aecmag.com/ai/midjourney-architecture-hassan-ragab-conceptual-ai-rtist/ Wed, 12 Oct 2022

Social media has been alive with an explosion of realistic AI-generated art, created by inputting natural language descriptions into AI tools DALL-E, Midjourney, Latent Diffusion and others. Hassan Ragab has quickly established himself as one of the most prolific and coherent architectural AI concept artists

Generating AI art is pretty straightforward; it’s based on text descriptions and biases. Want to see what Gaudi would have created if commissioned to design a petrol station? Simply type ‘Gaudi gas station’ into one of the many AI generators and that’s exactly what you will get. We can now get conceptual architecture by merely writing a string of descriptive words. While this might seem like magic, the real challenge is trying to produce iterations of design variations, while maintaining a level of consistency, refining the word ‘recipe’.


Over the past few months, Hassan Ragab has been posting his Midjourney conceptual architectural work on LinkedIn and is clearly enjoying exploring the nuances of refining the AI output, mixing free-flowing architectural styles with biomimicry materials such as feathers and plant structures.


Ragab has a broad design background, with experience in everything from furniture design to digital fabrication. He is currently working in construction in downtown Los Angeles, using his computational design skills with Rhino and Grasshopper.


“For me, AI generated art is about pushing your idea, pushing what you want to do and not actually coming up with something that’s entirely out of the world, that looks cool,” he explains. “I’m really interested in designing facades, and having weird, or very interesting shapes interact with them,” adding that he sometimes explores other areas of architectural design, such as interiors.

By creating so many variations on a theme, does Ragab feel he has control of the AI generation process? “I don’t think anyone would be able to fully control the outcome of these generators,” he says. “You have only a certain amount of control. But again, that’s the beauty of using them! You don’t want to use them to create a certain thing, you don’t want them to create something that’s in your head, you want them to push your idea, to have another perspective, another outcome.


“However, the more I work with Midjourney, the more I feel the need for control, because when you’re very ambiguous about what you want, the AI has biases and that’s why many people [are] having similar results. The main reason for that is that they are not being really specific enough.


“The way I use the prompts is really important,” adds Ragab. “Again, it’s about building from the bottom up, using simple prompts. This is a good way to keep the ideas in control. But the main element of control is within the creation process, is to be specific with your definitions. But at some point, I might get this really, really cool output and that will change my direction entirely!”


Ragab explains how he uses branching to refine designs. “Sometimes I go into different branches in parallel and, if I like the output, I’ll tweak my prompts based on the ones I like. I have to improvise all the time on how I generate my prompts, while also trying to stay in control. And no matter how much I feel in control, I always get surprised. But again, that’s actually really what I want!”


From talking with Ragab, it’s clear that a certain mental approach to trial and error helps. To get to the images displayed on these pages, Ragab would typically have gone through 100 image iterations and word definition / bias experiments before being happy with the end result.

With the AI artist having total control through prompts, it’s also important for the scene to be set. Many users forget that the framing and context of what’s generated is also controllable, as Ragab explains, “There are certain elements that I always define, for example, like the image angle, close-up or zoomed out. If I want to make a realistic photograph, for example, one way is to put your building in context, with streets, people and cars.”
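
That discipline can be reduced to a simple recipe in which the framing and context terms stay fixed while the design idea varies. The template below is purely illustrative – these are not Ragab’s actual prompts:

```python
# Hypothetical Midjourney prompt recipe: vary the subject, hold the rest steady
subject = "undulating concrete facade woven with feather-like timber fins"
framing = "street-level photograph, slightly zoomed out"
context = "busy street with people and cars, late afternoon light"

prompt = f"{subject}, {framing}, {context} --ar 3:2"
print(prompt)  # paste into Midjourney's /imagine command
```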


In our conversations with Ragab, we briefly talked about AI design and a possible future where AI applications actually create 3D geometry, as opposed to images. “I know at some point this technology will drive architecture and it will be fascinating. I think it’s very important for everybody to understand this technology. This will affect architects and designers, at some point, so it’s really time to learn how these generators work, the limitations, the biases,” he explains. “This is a preparation period as AI will have a major impact when there is a merger between AI and 3D model generation. I think things will get very chaotic, very quickly and we need to be prepared for that. So right now, I’m just focusing on how can I control it, how can I understand how to deal with it and what are the limitations.


“AI is not a threat to artists; it’s a threat to the skill of the artist – the skills that are acquired. In my opinion, art is a mix between skill and spirit, and the spirit is more important to the artist. AI kind of gives accessibility to a lot of people who don’t really have those artistic skills. Anyone can now produce that kind of art. That’s the real threat. If you’re a true artist, in my opinion, then you will find a meaningful way to use these AI tools in your own workflows.”

■ Instagram www.instagram.com/hsnrgb
■ Website www.hsnrgb.com
■ LinkedIn www.linkedin.com/in/hsnrgb

AI design: together in electric dreams https://aecmag.com/ai/together-in-electric-dreams/ Fri, 07 Oct 2022

Is there a place for AI design in real-world practice? asks Edward Crump

These days there are many people popping up on my social media feeds with weird and wonderful CGI-esque images that look like creations of the Grimm brothers. Upon seeing these for the umpteenth time, I decided to finally test for myself the newfound craze that is ‘AI design’. I created an account on Midjourney and embarked on creating the images that accompany this article.

Once registered, you need to enter what resembles a chatroom and essentially create a search using keywords. Rather than getting the familiar response from a conventional search engine — more words — the programme seeks to respond through a series of images.

To begin my search I considered that the Art Deco style is very ‘in-vogue’ these days. Lots of interior designs, particularly lending themselves to angular patterns, strong colours, curved forms and framed elements. I wanted to see what the AI thought about the future of Art Deco for architecture — Neo-Decoism, so to speak. Thus, I entered my keywords:

Building, Front Elevation, Street, Art Deco, Entrance Door, Cinematic Lighting, Trees, Autumn, People, Cars.

After pressing the search button, the software whirred away in the background and finally presented me with four images bearing characteristics of my search. I was then given a series of options, inviting me to ask for further variations on a particular image or to ‘upscale’ one of my choosing to increase its size and detail.



While I have seen many people posting curiously realistic renders they claim were the result of their searching and refining process, which include buildings clad in piñata elements, neo-classical haystacks or structures that resemble tall strands of pale broccoli, the outputs of my efforts appeared to present themselves in a more artistic, textural fashion. Nonetheless, once you have received your result and overcome the thrill of what you have achieved with such ease, you do begin to zoom in and notice imperfections within the images, not unlike where a painter may intentionally blur their work at moments of precision – as if there is an unwillingness to give too much away by illustrating the finer details.

Having pressed all of the buttons to ensure I fully explored the software, what I can conclude is very interesting: firstly, you can’t just ‘Google’ a design. Now this may be obvious, but everything that is produced in this manner doesn’t respond to a particular site condition; this software merely produces a ‘realistic collage’, so to speak, based on keywords. Therefore, as an act of design, it relies on the literary skill of the ‘programmer’ and, because the results embody a sense of placelessness, it cannot be taken seriously in its present form as an act of considered real-world design.

That being said, I think it would be wrong to dismiss AI design completely. I found that, as a tool for generating ideas, it managed to succeed in creating a series of visuals that surprised me — that bore no reasonable resemblance to the images I had constructed cognitively when I wrote my keywords. To reflect how this process differs from how we design in professional practice, from experience I know a lot of ‘concept design research’ involves finding work from other practitioners (Pinterest, yep we all thought it!) and ‘creatively editing’ our preferred precedents into a format that fits the project we are working on. There is good reason for this. If such a design already exists in the real world, then it can most likely be constructed for the yet-to-be-built project we are working on.

While this process allows for touches of creativity and innovation — around the fringes of making a design work in a particular location or manipulating it slightly according to the project theme — it doesn’t allow for the ‘pure’ blank-canvas pie-in-the-sky ideas approach that is often heralded through architecture and design history (think Le Corbusier’s ‘Cartesian Skyscrapers’).

The most important act we can undertake as designers is to propose a vision of the world to give humanity something to believe in – a direction to work towards

In contrast to this process through precedent — which, it could be argued, is extremely limiting — we have a tool into which we can insert words or phrases relevant to our project, and it will generate ideas that allow us, as designers, to widen our field of view and challenge our existing constraints.

It is through this process of reflection and refocusing, therefore, that I believe the value lies. I think we can accept that professional practice has relied too heavily upon the ‘precedent’ approach to design creation and that has led to the frustrations outlined by Thomas Heatherwick (Heatherwick studio) in his recent speech in Singapore proclaiming that “we’re living through an epidemic of boringness.”

Does this mean that AI design is the silver bullet to this issue? Probably not. The process of taking these designs and converting them into feasible structures, which will then be ruthlessly value-engineered beyond any meaningful existence, suggests our wider political structures have a stronger influence upon the nullification of inspiration and the formation of deep-set cognitive constraints. But it’s nice to dream, so who is to say why not?

At the end of the day, as tacitly suggested by Adam Curtis in his documentary ‘HyperNormalisation’ the most important act we can undertake as designers is to propose a vision of the world to give humanity something to believe in — a direction to work towards — and if AI design is part of this exercise, then there lies its true power and relevance to the “dull, flat, shiny, straight, inhuman” world we live in today.


Edward Crump is a University Tutor and award winning architectural designer with an interest in art and digital technology. @edthearch

Artificial Intelligence (AI) for concept design https://aecmag.com/concept-design/artificial-intelligence-ai-for-concept-design/ Wed, 12 Oct 2022

Corey Weiner had a play with Midjourney, an AI program that creates images from textual descriptions. Could it be used to spark creativity in architectural design?

I started playing around with Midjourney’s AI and was soon able to generate a lot of varied graphics using text prompts. All are somewhat “sketch-like” or roughly rendered.

Since I am not an architect, I don’t know their typical ideation process, however I imagine sketching multiple concepts from a blank page is time consuming. I think being able to generate 10-15 usable concepts in an hour would give a designer a great head start.

The first set of prompts might give you nothing, or a glimmer of hope to keep tweaking and regenerating. The exact prompts used to create the images can be seen in the captions below. Zooming in reveals lots of strange solutions the AI considered appropriate. So I think it will be a while before it is blindly relied upon for detailed concepts.

Corey Weiner is founder of as-built laser scanning service c2a.studio.

(and below) Ultra luxury apartment building façade, dark wood parametric architecture, gray concrete balconies, tall windows, one cherry blossom tree, inside the main entrance is a staircase, highly detailed orthophoto, gregory crewdson lighting, 3ds Max render, photo realistic

Striking elegant modern Islamic cathedral, detailed architectural section cut, Autocad technical drawing, fine details
Magnificent striking modern cathedral, glass, detailed architectural section cut, Autocad technical drawing, fine details
