#716 – AI-Generated Amazon Product Videos

Carrie Miller, Principal Brand Evangelist at Helium 10
33 minute read
Published: November 8, 2025 · Modified: November 11, 2025

How can still product photos evolve into mesmerizing cinematic experiences, such as Amazon product videos? AI expert Andrew Bell joins us for a captivating discussion on the transformative power of AI in e-commerce. Andrew shares his insights on how AI can elevate brand content by seamlessly turning static lifestyle images into dynamic videos without losing the essence of traditional production values. We explore the storytelling artistry and strategic complexities involved in using AI tools to create engaging motion content, offering e-commerce sellers innovative ways to captivate their audience and enhance product presentation.

We also navigate the intricacies of generating product videos with AI, focusing on items like blankets, and the challenges of rapidly producing multiple versions. Our conversation dives into refining custom GPTs to evoke themes of elegance and sophistication. From blending AI-generated visuals with real assets to utilizing cutting-edge tools like Sora 2, Google Gemini, and Runway, listeners will gain practical tips on maximizing creative possibilities even when resources are constrained. Join us for this informative journey into the future of AI-driven video content creation, packed with insights and strategies for aspiring digital storytellers.

Want to see the AI magic in action? 🎥 Head over to our YouTube channel to watch this episode and see Andrew Bell create videos from still photos, live on screen: https://youtu.be/ea5pqM1QmxY

Want to try Andrew Bell's AI video GPTs and tools yourself?
🚀 Access his custom GPTs for video creation using the link in the show notes and start turning your product photos into cinematic videos today: https://andrewbell.craft.me/hip1qvfS5vkZ9Y

In episode 716 of the Serious Sellers Podcast, Carrie and Andrew discuss:

00:00 – Creating Cinematic Videos From Still Photos
01:51 – AI Enhancing Brand Motion Content
11:33 – Product Photography for Water Bottles
13:34 – Exploring AI-Generated Video Creation
15:53 – Creating Visual Content for Business
22:36 – Optimizing Video Generation for Amazon
28:32 – Creating Videos on Google Gemini
32:53 – Video Generation Model Options Considered

Transcript

Carrie Miller: In this episode of the Serious Sellers Podcast, we have AI expert Andrew Bell join us, and he is talking about how you can create videos from still photos using AI.

Bradley Sutton: How cool is that? Pretty cool, I think. Hello everybody, and welcome to another episode of the Serious Sellers Podcast by Helium 10. I'm your host, Bradley Sutton, and this is the show that's a completely BS-free, unscripted, and unrehearsed organic conversation about serious strategies for serious sellers of any level in the e-commerce world.

Carrie Miller: We have a really exciting webinar. I know you guys are really excited for this one. We have Andrew Bell, who has been doing all of our AI webinars. He's an AI expert, and he really understands e-commerce as well. What's really helpful for all of us is that he's not just showing us a concept; he's going to show us how to use it for e-commerce. So I'm going to go ahead and just bring Andrew on. Hello Andrew.

Andrew: Hey, how's everyone doing?

Carrie Miller: Yeah, good, and I'm sure they're very excited to hear about this AI.
I know, for me especially, it's really expensive to do a lot of this stuff without AI, so I think a lot of people are very happy to know that you can create cool videos from a still picture using AI. I'm excited to hear this, and I know people are definitely excited in the comments. So do you want to go ahead and take it away?

Andrew: Yeah, let's do it. I want to get right into it, and as a start, I want to be clear about what we're doing. I'm not talking here about AI-generated UGC, user-generated content. Honestly, most of that looks bad. It's uncanny, it's inauthentic, and it often hurts brand trust more than it helps. What we're doing here is completely different, and I'm not going to tell you that AI is going to replace your studio. If you have the resources, I think you should keep going with that. The important idea here is that we're using AI as a studio, not the studio, so it's not a substitute for real people, for influencers and things like that. It's about turning your existing lifestyle imagery, the assets you've already invested in, into cinematic brand motion content. If you have a video studio and a photography studio, you should definitely use those resources.

Andrew: But a lot of people only have a way to take pictures, or maybe they can't even take pictures, so they use AI to take their products and put them into lifestyle scenes. The hardest thing to do is probably to put something into video, and we know video converts even better than static images. So how can we turn those into cinematic, brand-driven motion content? Again, this isn't about pretending to have influencers or fake people holding your product. It's about giving your product its own presence: the lighting, the motion, the story, all designed to make your product look like it belongs in a professional commercial, not a fabricated influencer clip.
So if you've been seeing those AI UGC videos floating around, that's not what we're talking about today. This is about quality storytelling at speed, with no shortcuts. Before we dig right into examples, I want to mention the two GPTs that I built for you guys. One is simpler in the way it does things: it'll take a lifestyle image and bring it to life within a 10-to-15-second clip by staying within that lifestyle image, whereas the other GPT will actually create a scene from scratch that's relevant to your product. I'm really excited to go into this.

Andrew: But first I'd like to start with some really good tips. The first one is to start simple. You want to begin with short, focused prompts that describe a single subject, simple motion, and clear lighting, and then, from there, add complexity gradually. You don't want to just say, oh, create it, it's studio ready. You want to iterate and iterate and iterate, because even in a studio, when you do videos, a lot of the work is actually the editing. Here you're actually getting the creative assets, and from there you're able to put those together using editing software and things like that.

Shivali Patel: Thinking about selling on TikTok Shop? Or maybe you are already in it and you're ready to scale. Unlock all of Helium 10's brand-new TikTok Shop tools with our Diamond plan, everything from bulk Amazon-to-TikTok listing conversions to instant Amazon MCF fulfillment. Best of all, you can use the code TT10 to get 10% off Diamond for six months, even if you've used a coupon before. So go ahead and upgrade and let Helium 10 do all the heavy lifting for you so you can focus on what really matters. For more info on our new TikTok Shop offerings, visit h10.me/TikTok. I'll see you there.

Andrew: So we're going to go right into looking at the lifestyle-image-to-cinematic-motion GPT.
So, basically, you can upload an image of a certain product, let's say this water bottle, and it'll create five different prompts that are hyper-relevant to that image. For example, option one, Arctic Refresh: cool, diffused light ripples across the metallic bottle surface as condensation beads form and slowly trickle downward. A hand enters the frame to lift the bottle, ice cubes inside clinking softly, while the background water ripples subtly. The camera remains static, and the intent is to emphasize purity, endurance, and the sensory chill of long-lasting cold. All these tips I'm going to recommend might make you think, oh gosh, do I have to be a cinematographer? No, I'm giving you GPTs that'll help you form prompts that bring images to life. And you see several different kinds here. Option two: soft daylight from the left brightens the bottle surface as a hiker's hand clips the carabiner to the backpack loop, testing the grip before setting it down again. Then you see option three, contrast development, option four, hydration focus, and a fifth option as well. Then, if I want to work with a lifestyle image of a different bottle and I click here, it'll come up with another five. So you can put as many images as you want in here and it'll produce these prompts for you. Here's how you can make this image go: a gentle hand reaches down to grab the bottle while the skateboard slowly spins in the background, and a subtle reflection glimmers on the metal surface. Motion is fluid and minimal; the intent is to convey motion, readiness, and youthful energy. We're going to actually go and test a lot of these too. Condensation beads appear briefly on the bottle, catching light before fading. Some of these you might see and think, ah no, that wouldn't work, but with others you're like, yeah, that definitely fits what I'm going to do.
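The pattern behind the prompts Andrew's simpler GPT produces, pairing a subject, motion, lighting, and camera behavior with an explicit intent, can be sketched as a small template. This is an illustrative sketch, not Andrew's actual GPT; the function name, field names, and sample values are assumptions:

```python
# Sketch of the prompt structure the lifestyle-image GPT produces:
# each option pairs concrete visual directions with a stated intent.
# Field names and sample values here are illustrative assumptions.

def motion_prompt(subject, motion, lighting, camera, intent):
    """Assemble a short image-to-video prompt from its parts."""
    return (
        f"{lighting}. {subject}. {motion}. "
        f"Camera: {camera}. Intent: {intent}."
    )

option_one = motion_prompt(
    subject="Metallic water bottle on a wet stone ledge",
    motion="Condensation beads form and slowly trickle downward; "
           "a hand enters frame to lift the bottle",
    lighting="Cool, diffused light ripples across the bottle surface",
    camera="remains static",
    intent="emphasize purity, endurance, and the chill of long-lasting cold",
)
print(option_one)
```

Generating five options is then just five calls with different fields, which mirrors how the GPT offers several directions per image and lets you discard the ones that don't fit.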
Andrew: So we're going to look at prompts specifically from that GPT, and then we're also going to do prompts from the other one. What you do is either describe your product or upload the product image itself. Let's do the same one here, and it'll create an even more in-depth prompt. Let's say the format is 15 seconds. It's going to give you the aspect ratio, how to capture it, frame rate, motion, texture, lenses and filtration, grade palette, lighting and atmosphere, location and framing, wardrobe, props, extras, and sound. This is actually based on a guide that OpenAI, the creators of ChatGPT and Sora 2, published, so this prompt is built on that. Then it gives you an optimized shot list: it'll break the 15-second scene into four seconds, another four seconds, another four seconds, and then a final three seconds. It'll give camera notes and then the finishing touches as well.

Andrew: So now, let's try this prompt in Sora 2. We're going to go right here and set 15 seconds. You want to follow the format there, so just go to Portrait, set 15 seconds, and hit enter. One thing I really suggest, because you want to iterate, is to have multiple scenes generating at once, so I'd go ahead and start another one to test that: Portrait here, enter here. Now we're going to go back to one of these. We're going to take this image right here and one of those prompts.

Andrew: Let's just try this one. We'll upload the image here in Sora 2, and then we'll enter the prompt like this. It really doesn't matter for this one.
Whether you choose portrait or a different duration, I recommend portrait here, just because the image is square and it'll actually be better if you go more vertical.

Andrew: In most cases with Sora, this is important too: you can't use images that already show people's faces. You can use images that have people in them as long as the face isn't visible. That's the big thing. So we're going to create here. Now I'm going to show you something funny. What I mean by iterate is, you start with something like this from a lifestyle image, and I'm going to show you some bloopers of what happened. I do the video, and you can see the hand going down like this, right? Then you ask it to do it again, and it's just a hand reaching out really awkwardly, and look, it doesn't even have a head. These are things you have to keep working with.

Andrew: One thing I've done with the prompt, through the GPTs, is make it much better, and you can see this one is definitely improved: someone walks up to the mirror, and you can put text in the background. But you've got to watch it too. Again, I'm showing you the inside look of what it takes; it's going to take time. Here, notice it's not as good. The problem is the girl walks up and, look, she's not even showing up in the mirror. It looks good that she's walking in front of the mirror, it shows the size of the mirror, and so on, but the problem is it's not actually showing her reflection, whereas this one does: she comes in and looks at herself, which is a little better. So there are things like that to watch for.

Andrew: Now, this one is actually just for a general water bottle. This is the scene we were looking at together: 15 seconds, the aspect ratio, and this is how the scene goes.
It shows the water coming off this way, and obviously it's going to require a lot of editing. This one was actually done with the photo, and it's something you want to continually iterate on. When she drinks, she's actually drinking the water, right? You notice, okay, she could have put that back down; it's probably a little too fast. But this is something you want to do over and over again, and the fact that it got it like this in just one try is actually hard to do. The intent here was to convey effortless style and hydration as part of an active urban lifestyle. You'll notice I was able to do that with the actual lifestyle image, and this one is almost ready to go.

Andrew: Here's one more. It could be for another water bottle that you have. Here I didn't upload an image of a water bottle, but it shows you what you can do with the prompt itself. And see, consistently, it's going through the same scenes, same character, everything. What you'll eventually be able to do is put your own product in there too, which is super important. Moving on, I want to show you something you can actually do. I'm curious: has everyone here played with the image and video generation on Amazon specifically? It's in Creative Studio. If you can't find it, go to Campaign Manager, and under there you should see Creative Studio. Once you hit Creative Studio, it should give you an option to generate images or video. Hit video and you'll be able to do it from there.

Andrew: Blanket, firefighter costume for kids, I think that would probably be a hard one to do. Dog treat container, spray bottle, blanket. Let's try a blanket, that sounds good. Hoping for a good one here. So what you do is you generate the videos.

Andrew: One thing I noticed is you actually have to wait.
One thing I was hoping for is that you could generate as many as you want: generate here, go back, do another product. But it turns out you have to wait for each one. So as that's going, I'm actually going to share one of my GPTs in the chat. Let's do this one here. I want to show you one more example, just so you can see that you can do it.

Andrew: This is the image I used in those videos I was telling you about, and this is how I made the GPT better: I would go in, generate with a prompt, say, okay, that's not working, okay, do this one, then the next one, and the next one. This GPT is actually the product of that continued iteration, seeing how good the video would get, and by the end, this is what I got. And just so you know, this is not a GPT I created just for the webinar. Yes, you guys will get it exclusively, but more importantly, I'm going to keep iterating on it, so over the next month or so you'll see a 2.0, a 3.0, a 4.0.

Andrew: One thing about my GPTs is I don't just publish them and stop worrying about them. I'm going to continually update these for you guys, and they should be gated for the next three months or so, so you'll have exclusive access to them for the first three months and the next three versions as well. So you can hear: a soft golden light drifts through the curtain, you can see the curtain there gently brightening the room, and a person enters the reflection in the mirror.

Andrew: I mean, these are things you wouldn't even necessarily think of. You can't expect someone to be a cinematographer.
All the prompting guides out there will tell you that, and that's good; you should have those tips. But most importantly, not everybody has that skill. In fact, I'd say the vast majority of people don't. So having people who went through the work means you don't have to spend the time on it. You can focus on your business, on what you're doing, while someone creates these things for you that reflect those best practices. And it gives you the intent.

Andrew: The intent of each of these is spelled out. Here: evoke timeless elegance and self-assurance. Here: suggest nostalgia and hidden stories within refined surroundings. Here: convey the poetic passage of time and serene domestic intimacy. I don't know if I would use serene domestic intimacy. Here: highlight beauty in simplicity and the touch of human care. Here: evoke classic sophistication and contemplative stillness. See, I like those. These are things you'll keep going through, and you'll notice every option gets better with each iteration. All right.

Andrew: So let's go back to this tab, and here is one of the videos. Some motion is better than no motion; I think that's the big thing here, and it has the text correct on there. This one's better, and they're only six seconds long, but that's about what you want sometimes for, say, a Sponsored Brands ad, especially if you don't have the equipment or the money to invest those resources in a video. Again, if you have the resources, if you have the ability to produce video assets at scale with a studio, I definitely think you should do that. But one thing you can do, too, is put things together like that.
You can combine AI, studio-driven video generation with the real assets that you have.

Andrew: So here's another one with a person in it. Not horrible, is it? Okay, that part doesn't make any sense, does it? One thing you have to watch for: why would you hold your coffee right over your blanket like that? I don't know, maybe some people do; it just doesn't seem natural. Then let's look at this one again: zoom in, cozy moments, perfectly wrapped, and here's another version with the human. It had a little bit extra there at the end. I'm kind of hoping these are the same ones, you know what I mean. And then you can hit generate more and see what else it can create. I definitely recommend doing this. In fact, if you have the chance with your product specifically, I would spend maybe an hour just going through and producing and producing and producing, and see what you can get from it. Maybe it's all the same stuff, just repeating, but there might be a ton of assets where you think, oh, you know what, I can put that together. If you generate for up to an hour, you're getting well over 50 clips that you can take into whatever editing software you use and put together seamlessly. So I would definitely recommend trying that. We'll see what this generates here.

Andrew: In the meantime, I'd like to talk about some other things that are important. Say you decide, Andrew, this is all great, but I think I would like to prompt on my own. What kind of things should you do?
One thing you want to do is replace any vague terms with specific framing. You don't want anything weak like "cinematic look," "close-up," or "camera moves." Instead, use things like wide shot, low angle, medium close-up, slight angle from behind, or slow dolly-in from eye level. Effective framing examples: wide establishing shot at eye level, medium close-up over the shoulder, wide shot at a slight downward angle, tight close-up on hands, macro detail. And remember, my cinematic prompt generator is able to do this kind of thing for you. You'll notice it goes through everything.

Andrew: It goes through the aspect ratio, capture, frame rate, motion, texture, lenses. It covers the mids and blacks, the palette, the lighting and atmosphere. What's the key light? Here, the ambiance is daylight reflected from modern glass facades, the fill is a balance of practicals, and the atmospherics are clean, open air with subtle depth from urban reflections. You have the direction of it, and then the location and framing as well: a contemporary urban plaza with reflective architecture. Then you have the framing itself: a wide shot, subject in environment, a sense of urban leisure; then a mid shot, a human gesture, checking a phone while the skateboard sits idle; then a close shot, condensation on the bottle, light brushing the surface. You actually notice that in the video, too. When you look at it, it goes through that scene, and watch right there: you see the level of detail it captured when it said condensation on bottle, light brushing surface.

Andrew: It's not going to be 100% perfect, but again, the more specific you are, the better it's going to be. I'm not saying overload it and make it crazy big, but these are the best practices that come from OpenAI on how to do video generation, and this is kind of what I expected from Amazon.
It's the same kind of thing. It just regenerated some of the videos, and it doesn't look like it's going to give you anything new; it's going to be very similar to what it was before. And unfortunately, right now with videos, you can't really give your own prompt; it doesn't let you do that. But it's still good to have. Number one, it's free, and you don't have to worry about going to Sora 2 and doing all this work. You can do it right within your Amazon ads. And think about this: you get six videos that you can test, so you can say, hey, I want to beta test all of these and see which ones do better in Sponsored Brands or Sponsored Display ads.

Andrew: I'm wrapping up here. Again, be specific about movement. Like I've said before, you want to describe things like the depth of field, the motion, and the timing. For example, instead of saying "a cyclist moves quickly," say "cyclist pedals three times, brakes, stops at the crosswalk." The reason you want to do that is because OpenAI's models are now much better at getting the physics right. You want to anchor realism as well: use descriptors like "handheld jitter" or "overcast afternoon" to ground a video in a believable style. And you want to keep the number of characters small and the motion simple, because complex interactions can reduce the fidelity of the output.

Andrew: Then the most important thing is to iterate. Don't expect the first generation to be perfect. What I showed you on Sora, the multiple generations of that mirror scene, is, I think, the most important lesson in video generation. It's not going to be perfect at first; it only gets better if you iterate, and that's what those GPTs are built on as well. Other than that, I encourage two things: iteration and patience.
That's what video generation takes: iteration and patience. If you remember two things from here, make it iteration and patience. And, of course, use these GPTs to help. I actually don't recommend prompting from scratch, but if you want to, you'll have a guide that I've put together along with the two GPTs, and possibly my new GPT created specifically for Sora 2. These should also work in Veo 3.1 which, if people don't know, is the video generation model in Gemini, from Google. And remember, Sora is from OpenAI, the makers of ChatGPT.

Andrew: And I do not recommend Grok. Grok is not there yet in terms of the technology. Kling, I think, is great. Runway is good too. I don't know anything about DaVinci; I'll have to look into that. I definitely recommend Sora 2 and Veo 3, and you can try the image and video generation within Amazon as well, which is good.

Carrie Miller: Awesome, that's great information. Thank you so much, Andrew. Somebody asked: how do you get access to Sora 2?

Andrew: Right now, from what I understand, it is invite only, but you can get the first Sora, and a lot of the same prompting techniques apply. I recommend going through OpenAI.com first to get to Sora, and then you'll be able to sign up for free. You can also get to Sora 2, because if you're on Sora 2, you can technically go back to the old Sora. The exact link you want to type in is sora.chatgpt.com, and if it's invite only for you, you'll eventually get an invite soon if you sign up, especially as they're releasing it to more and more people.
But you can definitely use the first Sora, and if you have trouble with that, I recommend reaching out to me on LinkedIn, because I feel led to do this. If you guys are having trouble logging in and getting into it, I'm happy to help troubleshoot that for you and get you into a video generation model. My name is Andrew Bell.

Carrie Miller: All right, somebody was asking about TopView AI, I think.

Andrew: I don't know that one, TopView AI. I'm going to write it down, though, along with DaVinci.

Carrie Miller: And somebody else asked, what about Runway?

Andrew: Yes, Runway's good. The prompts do not need to be as long with Runway; in fact, it's advised that you keep them much shorter. But if you're going to do a product video, I highly recommend Sora, and Veo 3 is good too.

Andrew: In fact, if you're on Google Gemini, let me share one more example. What we're going to do is go to Gemini, take a lifestyle image, and actually create a video out of that. I'm going to show you a cool trick here. So let's upload an image, say it's this one, then go to "create videos with Veo," and just say "bring this image to life."

Andrew: This is a hack that I think is really important, because we talk about all these big prompts, but this is one you can actually do. You have less control over it, but you'll notice it actually works; it's pretty cool. So go to Gemini, go to "create videos with Veo," upload one of your lifestyle creative assets, and then enter "bring this image to life within this scene." Give it a second to create the video, and you should be able to create multiple videos at once as well.
So people know, you can obviously go here, upload a file, take this image, and say "make the bottle blue." Once you do that, it'll come back with an image of the bottle in blue, and from there you can create a video off of it as well. So it's generating the image here; you notice the blue, like this, and here's what you want to do from there.

Carrie Miller: Do you just have to tell it to keep everything the same color and look the same in that prompt?

Andrew: Yeah, exactly. All you have to do is say "make the bottle blue," and it'll know to change only the color of the bottle. I could keep going: okay, go green now. And then the next one, go aqua blue. See, it went green like that, and then aqua blue. Then what you can do is download the image, start a new chat, upload that same image, and say "bring this image to life." All right, let's see what it did there. You can see that it does very well.

Carrie Miller: Wow.

Andrew: Yeah, I'm very impressed with Veo 3. The only problem with Veo 3 is the clips aren't as long, and you don't have as much creative freedom either. But this, to me, is good enough, and you can actually go to what's called Flow, and maybe that's another one we can do, where you put it together as a storyboard and chain as many shots as you want. So the next one could be: now I want her to pick up the bottle. You have this scene, then you create another scene where she picks up the bottle, and then you can cut the scenes together into separate shots.
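The recolor-then-animate loop Andrew demonstrates can be sketched as a small batch script. This is an illustrative sketch, not a real Gemini integration; the `variant_jobs` helper and its field names are hypothetical stand-ins for whatever image-editing and video tools you actually use:

```python
# Sketch of the recolor-then-animate workflow from the demo:
# for each color, one edit prompt to recolor the product image,
# then a follow-up animation prompt for a FRESH chat, since (per the
# demo) Gemini won't go image-to-video within the same chat.
# The helper and field names are hypothetical, for illustration only.

COLORS = ["blue", "green", "aqua blue"]

def variant_jobs(colors):
    """Build the edit prompt and animation prompt for each color variant."""
    jobs = []
    for color in colors:
        jobs.append({
            "edit_prompt": f"Make the bottle {color}.",
            "video_prompt": "Bring this image to life within this scene.",
            # Download target for the edited image before starting a new chat.
            "outfile": f"bottle_{color.replace(' ', '_')}.png",
        })
    return jobs

for job in variant_jobs(COLORS):
    print(job["outfile"], "->", job["edit_prompt"])
```

Keeping the prompts this short mirrors the "hack" above: you trade fine-grained control for speed, and rely on the model to preserve everything except the named change.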
Andrew: So it's creating the video here, "bring this image to life," and it just made something different. That's because you cannot go from image to video in the same chat on Gemini. You have to stay with the image, download it, and then go to a separate chat for the video. So if you don't have access to Sora 2, my second recommendation is to go to Gemini and use their video generation model. You'll find it at the bottom when you click here: you see deep research, create videos with Veo, create images. You want "create videos," and that's where you'll be able to generate them.

Andrew: As you can see, that kind of motion is a little harder, whereas with a phone it's a lot easier. Here she's very hesitant: do I pick this water up or not? No confidence. You don't want that in your commercial, you don't want that in a product video. But something like this, to me, is usable. Tell me if I'm wrong in the chat, but this is definitely usable, especially since it comes from this lifestyle image. I don't see any real distortions here. It does a phenomenal job.

Carrie Miller: I think that's all the questions we have. That was pretty straightforward, good information, so people can start doing some of this. Where do you think most e-commerce sellers could use these videos? What do you think they're best for?

Andrew: Definitely best for Sponsored Brands videos, I'd say number one. Then product videos; since product videos are much longer, I think there's opportunity there. Like I said, Google has the ability, it's called Flow, where you can chain multiple video generations into a storyboard. I've been able to get a video, and I should share it in that link too, up to two minutes.
Andrew: I've been able to get a consistent product video. But here's the thing: it's taken way longer than it would just to shoot it on your own. I wanted to see what it can do, and the fact that it can do it at all is evidence that it's only going to get better from here. This is the worst these tools are ever going to be, for sure.

Carrie Miller: That's a good point, because then people can start doing this. We've been doing an advertising series, and one of the things you could do is take one of these still photos and change the background. For example, Destiny always uses the example of protein powder. Say it's protein powder for bodybuilders: if you show a picture or a video of a normal person who's not a bodybuilder, it's not really going to answer what the person is searching for. But if you had a bodybuilder with the protein, maybe drinking a protein shake, then they're going to be more likely to click on it. So you can do a lot with these videos, which is really cool. Well, thanks again, Andrew, this is great; we really appreciate you. It looks like people want more webinars in the future, so we'll try to plan more content around all this. Thanks everyone for joining, and we'll see you all on the next webinar. Bye, everyone.

Andrew: Thanks, guys.

Enjoy this episode? Be sure to check out our previous episodes for even more content to propel you to Amazon FBA Seller success! And don't forget to "Like" our Facebook page and subscribe to the podcast on iTunes, Spotify, or wherever you listen to our podcast. Get snippets from all episodes by following us on Instagram at @SeriousSellersPodcast Want to absolutely start crushing it on Amazon?
Here are a few carefully curated resources to get you started:

Freedom Ticket: Taught by Amazon thought leader Kevin King, get A-Z Amazon strategies and techniques for establishing and solidifying your business.

Helium 10: 30+ software tools to boost your entire sales pipeline, from product research to customer communication and Amazon refund automation. Make running a successful Amazon or Walmart business easier with better data and insights. See what our customers have to say.

Helium 10 Chrome Extension: Verify your Amazon product idea and validate how lucrative it can be with over a dozen data metrics and profitability estimation.

SellerTrademarks.com: Trademarks are vital for protecting your Amazon brand from hijackers, and sellertrademarks.com provides a streamlined process for helping you get one.

Serious Sellers Podcast: Get weekly insider strategies from top e-commerce sellers and thought leaders.

Serious Sellers: Spanish: Get weekly insider strategies from top e-commerce sellers and thought leaders. Now in Spanish.

Serious Sellers: German: Get weekly insider strategies from top e-commerce sellers and thought leaders. Now in German.

AM/PM Podcast: Join Kevin every Thursday as he sits down with top experts to talk about all things entrepreneurship and e-commerce.

Weekly Buzz: Bringing you the latest news in e-commerce, interviews with experts, and your training tip of the week.

Carrie Miller, Principal Brand Evangelist at Helium 10: A 7-figure e-commerce seller, Carrie began her journey on Amazon, expanding rapidly to Shopify and now Walmart.com. Currently serving as the Principal Brand Evangelist for Walmart.com tools at Helium 10, she's deeply passionate about sharing success strategies, tips, and tricks with fellow e-commerce sellers.
Published in: Serious Sellers Podcast