Generative AI has the potential to revolutionize product photography, transforming how e-commerce businesses create images. Serhii Zinchenko, who has over 8 years of experience working with startups like Claid.AI and Let's Enhance, shares the story of his journey of building an AI product.
I want to share our journey at Claid: how we stepped into uncharted territory and leveraged generative AI. Claid's mission is to fully automate end-to-end image creation and editing and boost e-commerce through our suite of AI products.
Our latest achievement on this front is AI Photoshoot, a tool that harnesses a blend of AI technologies to effortlessly generate lifestyle scenes from simple product shots. But how exactly did we build it? And what motivated us to create it?
Our Origins: Let’s Enhance and Claid.AI
The company began in 2018 with Let's Enhance, an AI tool that took low-resolution images and made them look way better. It was based on GANs that could restore details and enhance images. We were working with generative AI before it became hyped.
While customers loved Let's Enhance, and the product grew organically to over 5 million, our due diligence on e-commerce revealed a real need for better product photography, an even bigger opportunity in helping businesses. That's when we decided to create Claid.ai, a platform for automating image editing for large marketplaces.
Embracing Uncertainty: Leap into Generative AI
By 2022, Claid.AI had gained traction: we started signing contracts with some of the fastest-growing online marketplaces. At the same time, we paid close attention to market trends and emerging technologies, searching for opportunities to accelerate our growth.
The breakthrough in generative AI, with technologies like DALL-E 2 and Stable Diffusion, finally created fresh opportunities for companies to tackle problems they couldn't solve before.
Those systems, capable of creating photorealistic images from text prompts, resonated with our core competence in image enhancement but posed a new challenge: How do we integrate them into our products?
We went into full brainstorming mode. The ideas ranged from generating slides to creating personalized photobooks.
We ranked our ideas based on market potential, feasibility, and strategy compatibility. After a lot of back and forth, we picked the generation of product photo scenes as a target direction to explore.
Estimating the Opportunity: Diving in
Discovering the chance to create AI-based lifestyle product photos, we paused to size up both the market and our abilities before diving in. Here’s what guided our decision to proceed:
Understanding the Market:
- Size: By expanding our value proposition, we could address an even bigger market, one still untapped by competitors.
- Growth: The market was growing fast, signaling more opportunities for success.
- Competitive Density: The field was open, with plenty of room for us to make our mark.
These factors painted a clear picture: the overall market attractiveness was high. It wasn’t just about the numbers; it was about the vibrant potential we saw in this space.
Analyzing Our Strengths:
- Integration with Existing Products: With our roots in e-commerce and existing clients, this new venture felt like a natural extension of Claid.AI.
- Competition: No one else was using generative AI for e-commerce photography at that time. It was a gap we were eager to fill.
- Shared Capabilities: Our existing expertise in technology and industry knowledge meant we weren’t starting from scratch. We had a foundation to build on, and our established brand added extra weight to our position.
Combining the insights from the macro-level market analysis with our micro-level understanding of our own capabilities, we saw more than just a new product idea. We saw a pathway that aligned perfectly with where we were and where we wanted to go.
It all added up: the total addressable market is over $10 billion (the share of e-commerce marketing budgets allocated to visual content creation). The new market's size, its growth potential, and the competitive landscape all pointed in the right direction. Our skills and brand could bridge the gap between ideas and reality.
Gathering First Insights
Our focus settled on a shortlist of ideas, from generating product photo scenes to food photography and virtual car showrooms. This decision was based on our strategic goals within the realm of generative AI.
Here’s how we went about it:
- Selecting Multiple Ideas: After brainstorming and evaluating strategy, we decided to explore 3 applications: product photo scenes, food photography, and virtual car showrooms.
- Launching Landing Page and Collecting Data: We simultaneously launched a landing page featuring these ideas and integrated a Typeform survey.
The primary goal was to validate interest and identify who our main customer segments might be.
- Driving Traffic: We directed traffic to our new landing page, leveraging our existing brands’ popularity. Then, we collected user requests with Typeform, using questions like “Can you give some examples of how image generation could be helpful for your business?”.
- Customer Interviews: We sifted through Typeform responses to select individuals for more detailed interviews to understand their needs. For example, some of the interview questions were: "What does your workflow look like when you create visual content?" and "What challenges do you face in visual content creation?"
- Identifying Common Challenges: After about 30 interviews, we saw a pattern: the difficulty of consistently creating engaging content for social media.
- Refining Focus: Using these insights, we adjusted our landing page to focus more on helping brands to create lifestyle product photos.
After completing the research, we adjusted our positioning, which was reflected in the new landing page messaging.
Technological Challenges
While we were doing business research, the tech team needed to create a proof of concept and validate the feasibility. In the beginning, we had to take some big chances. We knew that people wanted good product photos and that the technology had amazing potential, but we didn’t know if they would work well together. The core issues we faced were:
- Preserving the Product: Off-the-shelf AI models distorted products by altering labels, textures, shapes, and so on.
- Overgrowth: Edges of objects would sometimes get bigger or change in uncontrolled ways.
- Misplaced objects: Items didn’t always look naturally placed, or were “floating in the air.”
- Adjusting the Lighting: Beyond placement, the lighting had to be calibrated so products didn’t look artificially pasted in.
- Noisy Backgrounds: The AI didn't always generate good results; from time to time, the network produced strange artifacts and patterns.
“The problem at hand has unique challenges that make it research intensive. This can be seen as a desirable entry barrier for competitors that don’t have the traction in the fundamental technologies involved.”
Carlos Sánchez Mendoza, Head of AI.
We needed to address these issues to maintain the product's true appearance and high-resolution details, which was crucial for e-commerce. We knew that overcoming them would be key to delivering a product to the market.
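To make the first challenge concrete, preserving the product is often handled by masking: the scene is generated around the product, and the original product pixels are then copied back so nothing about its labels, textures, or shape can drift. The sketch below is a minimal illustration of that idea, not Claid's actual pipeline; `paste_product` is a hypothetical helper written with NumPy.

```python
import numpy as np

def paste_product(generated: np.ndarray, original: np.ndarray,
                  mask: np.ndarray) -> np.ndarray:
    """Copy the original product pixels back into a generated scene.

    generated, original: H x W x 3 uint8 images of the same size.
    mask: H x W boolean array where True marks product pixels.

    Because the masked region is copied verbatim from the source shot,
    the product itself cannot be distorted by the generative model.
    """
    out = generated.copy()          # leave the generated scene untouched
    out[mask] = original[mask]      # restore exact product pixels
    return out
```

In practice, a hard binary paste like this tends to look "cut out," which is exactly the lighting problem described above; production systems typically feather the mask edges and harmonize color and shadows so the product sits naturally in the scene.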
We tried different approaches, and when a new version of the underlying technology came out, it helped a lot. We kept tweaking and fine-tuning, and finally, we got product photos that we thought were good enough.
It was a risky journey, but we believed in the tech and the need for better photos, even when things got tough. By January, we had an imperfect, but functional alpha version of AI Photoshoot.
The Race to Be First
The advent of generative AI sparked a competitive race among companies. Everyone was trying to stake their claim, even before they had a fully ready solution.
Sofi Shvets – our CEO – and I visited an AI hackathon by HF0, and it was a moment of major insight for us. Surrounded by hackers and innovators, we found a room with a team working on a project like ours and a PM from a huge company focused on solving a similar problem.
The experience was a wake-up call. As a distributed team, we had focused our attention on building the product and hadn't been exposed to such intense competition. But in San Francisco, we found that we weren't alone; several teams were working on similar ideas. The hackathon taught us that a great product alone wasn't enough: building and early marketing must go hand in hand to compete in the market swiftly and efficiently.
These experiences underlined the need for us not just to identify our product-market fit and build high-quality products, but to quickly differentiate ourselves, and beat the market.
Iterating with Early Community
Just two months in, we launched AI Photoshoot as a closed beta. With a new landing page, we continued to gather a more focused audience. The product was raw, but it had the core functionality (transforming simple product shots into lifestyle scenes using a blend of AI technologies) and allowed us to gather a community of beta testers and iterate on product development based on customer feedback.
It was essential to build the product on short feedback loops with customers, which allowed us to spot any gaps or challenges in the user experience. We learned the importance of that from Let's Enhance, when the AI technology was new and users didn't know how to use it.
We conducted a second interview round combined with a product demo, where we asked more specific questions related to the problem we were solving (helping businesses with their marketing), for example: "How do you engage and acquire customers?"
To speed things up, we moved to weekly sprints, making incremental progress each iteration and using each round of feedback as an opportunity to improve.
What we changed based on user feedback
One of the biggest insights and changes: we initially displayed a black-and-white version of the image to help customers understand that the AI would be guided by the image's composition, while each run would produce newly generated results. But during user testing, customers told us they didn't need the black-and-white step. Our Product Designer proposed what seemed like a radical solution at the time. She explained that people don't think in steps; they expect to see the final possible result. We implemented the solution and saw much higher engagement with the product.
“I came to the realization that UX design in AI tools should not adhere to implementation logic, as it can be too complex for users who are not familiar with the development process. Instead, design should align with the mental model of the customers and how they expect to interact with the product.”
Anna Prodvoiska, Principal Product Designer
We also made what might seem like small changes, but they were essential for smooth product adoption.
Prompt Box Improvement: Users often confused our prompt box with ChatGPT's. For instance, they would type commands like "place product on a table" instead of describing the scene. To improve clarity, we rewrote the message and provided an example.
Changed Sidebar: Initially, we had a black sidebar as part of the UI, and during user testing, we found that people completely ignored the template space.
Based on user feedback, we fine-tuned our product, making it ready for public release.
Public Release
We aimed to launch at Shoptalk, a major e-commerce event, on March 30th, 2023, and to debut the first public version of AI Photoshoot on Product Hunt at the same time.
Our booth attracted constant interest throughout Shoptalk. Hundreds of excited attendees kept us busy, eager to learn how AI could upgrade their product shots. The public release marked months of effort in transforming our initial AI concept into a polished tool for photorealistic imagery.
[show what the final version of AI Photoshoot can do]
This new product has shifted our company's focus. We've updated our landing page to better share what we're now offering. AI Photoshoot is showing a 90% month-over-month growth rate, making it our fastest-growing initiative. Yet it's not just about the numbers; in just six months, we've evolved our initial MVP into something much more: photorealistic product imagery.
What We’ve Learned So Far
We took a calculated risk by diving into generative AI, and it has worked well for us. It wasn't easy, but we navigated through the tech challenges and market uncertainties to build something meaningful.
One thing is clear: there will always be competition. Today or tomorrow, someone else will try to do what we’re doing. That’s why we keep our eyes on what really matters — our customers. This project has taught us to get moving quickly, start selling as soon as possible, and not to hold back on letting people know what we’re up to, even in the early days.
Written by Serhii Zinchenko,
former Founding Head of Product at AI startups Claid.ai and LetsEnhance.io.