Cost-effective innovation
My goal was to leverage generative AI to replace the conventional photography process for automotive accessories. By developing custom AI models, I aimed to produce on-brand, high-quality imagery for a full range of vehicle accessories, significantly cutting both production costs and turnaround time. This initiative was designed to demonstrate how AI can streamline and modernize the visual pipeline for automotive brands.

Chevrolet | Buick | GMC | Cadillac

Spent on parts & accessories photography annually, per brand
Of accessory sales happen within the first 12 months of ownership

— Director, Sales & Marketing
I utilized a custom-trained AI model to bring this vision to life, laying the groundwork for seamless integration of accessories onto any vehicle image. Below, you'll see how these AI-driven techniques set the stage for flexible and cost-effective visual solutions.
I began with the Thule EasyFold XT bike rack as the initial 3D model precisely because of its visual complexity. I wanted to test this approach on something intricate, knowing that if I could handle a challenging accessory, I could handle anything. In these early stages, the renders were more like abstract prototypes than final imagery, and they were pretty far off from where I ultimately wanted to be. But this choice set the stage for refining our process and pushing the boundaries of what our AI model could achieve.

Through careful refinement of our training data and iteration on the AI model, I transformed those early rough outputs into highly accurate, true-to-life renderings. The final generated images now closely matched the actual bike rack in color, geometry, and detail, showcasing the power of iterative AI training.



As part of the process, I acquired and integrated a detailed 3D model of a vehicle. This provided the foundation for all subsequent visualizations. With this model in place, I had the perfect canvas to showcase how our refined AI would apply accessories in a realistic and adaptable way.

At this stage, I merged the trained bike rack and truck models in a 3D environment to produce a series of composite renderings. These rendered images formed the core dataset that trained the AI model to generate accurate and realistic final imagery. This digital integration step ensured that the AI could learn from precisely aligned and realistic data.
By iterating on these composite images, I created a strong foundation for the AI to generate realistic final renders. This step allowed me to move confidently into producing the AI-generated imagery, showcasing just how lifelike and accurate these models can be.
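The pairing of composite renders with descriptive captions is the heart of this dataset step. A minimal sketch of how such a training manifest might be assembled is below; the file paths, caption template, and function names are illustrative, not the actual pipeline.

```python
from itertools import product

def build_caption(accessory, vehicle, angle, lighting):
    """Compose a training caption describing one composite render."""
    return f"{accessory} mounted on {vehicle}, {angle} view, {lighting} lighting"

def build_dataset_manifest(accessory, vehicle, angles, lightings):
    """Pair each composite render with its caption.

    Filenames follow a hypothetical naming scheme; in practice the
    images would come out of the 3D compositing step described above.
    """
    manifest = []
    for i, (angle, lighting) in enumerate(product(angles, lightings)):
        manifest.append({
            "image": f"renders/{accessory.replace(' ', '_')}_{i:03d}.png",
            "caption": build_caption(accessory, vehicle, angle, lighting),
        })
    return manifest

manifest = build_dataset_manifest(
    "Thule EasyFold XT bike rack", "pickup truck",
    angles=["front three-quarter", "rear", "side"],
    lightings=["studio", "overcast daylight"],
)
print(len(manifest))  # 3 angles x 2 lighting setups = 6 image/caption pairs
```

Sweeping camera angles and lighting setups like this is what gives the model enough aligned examples to learn the accessory's geometry from every viewpoint.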
In this phase, I leveraged Stable Diffusion through the highly customizable ComfyUI interface to build and refine a custom workflow. Using a combination of positive and negative prompts, I fine-tuned the generated images for realism and accuracy, and I daisy-chained multiple custom-trained models so each pass progressively enhanced the output of the last. This approach allowed me to create highly customized, lifelike results tailored specifically to the project's goals.
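The daisy-chaining described above is essentially function composition: each custom-trained model, conditioned on its own positive/negative prompt pair, takes the previous stage's output as input. A minimal sketch of that structure follows; the stage names and prompts are hypothetical, and the string stands in for the latent/image data a real ComfyUI graph would pass between sampler nodes.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Stage:
    """One custom-trained model in the chain, with its prompt pair."""
    name: str
    positive: str
    negative: str

def run_chain(stages: List[Stage], image: str) -> str:
    """Feed each stage's output into the next, as in a chained workflow."""
    for stage in stages:
        # Each pass would condition on stage.positive / stage.negative;
        # here we just record the order of application.
        image = f"{stage.name}({image})"
    return image

chain = [
    Stage("rack_geometry", "Thule EasyFold XT, accurate geometry", "warped, distorted"),
    Stage("materials",     "anodized aluminum, red reflectors",    "plastic sheen"),
    Stage("photo_finish",  "studio product photography",           "cartoon, CGI look"),
]
print(run_chain(chain, "composite_render"))
# photo_finish(materials(rack_geometry(composite_render)))
```

Ordering the stages from geometry to materials to photographic finish mirrors the refinement sequence described in this section: get the shape right first, then the surfaces, then the overall photographic quality.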

In this stage, I began to see significant leaps forward. While these initial AI-generated renders still lacked the final material details—like the correct reflectors and aluminum finishes—they already demonstrated how accurately the bike rack integrated with the various vehicles. These renders were a clear indicator that I was on the right track, making strides toward the lifelike accuracy I aimed to achieve in the final imagery.



In this final image, you can see the culmination of these efforts: a fully realized, high-fidelity render with correct materials and details in place. This image is a testament to the power of the custom-trained AI models and the iterative process I followed. It highlights not just the technical achievement, but the practical cost and time savings this approach can deliver.
