/bbwai/ - bbwai

Stable Diffusion, Loras, Chatbots


(1.49 MB 480x768 002.webp)
(1.20 MB 480x768 005.webp)
(1.77 MB 768x608 009.webp)
(1.37 MB 768x608 013.webp)
(1.51 MB 768x608 020.webp)
(1.55 MB 768x608 024.webp)
Wan1.2 T2V Wan2.1 AI Video 03/03/2025 (Mon) 17:59:57 No. 32127
What up coomers <3 It's been a while since I've posted anything over in the Hunyuan thread. I felt like I'd hit a wall with progress for Lora improvement. Luckily, Wan2.1 (t2v + i2v) was released...and wow. It blows Hunyuan out of the water in pretty much every application. Not only that, but diffusion-pipe was updated to support training almost immediately, and the datasets for Hunyuan/Wan are interchangeable.

I trained my first Lora last night - ga1n3rb0t_wan2.1 version 1.0. I've been testing it out today and I am more than pleased with the results. I'm not exaggerating when I say 99% of generations are keepers. Wan2.1 has exceptional prompt adherence and appears to train much more easily than Hunyuan. My current Lora is trained for the Wan2.1 14B T2V model, which can generate 480p or 720p videos. I have no idea if it would work for the I2V or 1.3B models - it's possible, but I haven't tested it.

JSYK: For local generation, like Hunyuan, Wan2.1 is very resource heavy and generation times can take a few minutes. However, new workflows are quickly being released to accommodate lower-end consumer cards - I've seen some for 6GB of VRAM already. Keep your eyes on CivitAI; I'm sure you could find something that'll work for your setup. Like with Hunyuan, to experience this tech to its full potential, I recommend renting a runpod.

As always, I plan on releasing this Lora publicly ASAP. Unfortunately, CivitAI has not yet updated their model list to include Wan2.1, so I am just waiting on that. As soon as it becomes available, I will upload it there and create a post here with a link to the page. Until then, I am happy to take prompt requests, which I will generate and post here.

Finally, here are some examples of what I have generated so far. I'm excited to hear what you think. I'll continue posting throughout the day as generations come in. Thanks for the support. -drewski
(1.53 MB 768x608 006.webp)
(1.19 MB 768x608 008.webp)
(1.45 MB 768x608 010.webp)
(1.27 MB 768x608 022.webp)
(1.79 MB 768x608 023.webp)
More examples -drewski
Also, oops, just realized I accidentally named the thread "Wan1.2" instead of "Wan2.1" sorry for any confusion. -drewski
That's lovely squish in the first and second gif. Might I ask, do you have a special prompt in your training to make them, or does it happen randomly?
>>32133 Thanks man! I'm not sure - do you mean like a trigger word? If so, yes: for my ga1n3rb0t Loras I always begin my dataset captions with "ga1n3rb0t", and when prompting for a generation I always place "ga1n3rb0t" at the beginning of every prompt. Other than that, I can't really say I do anything special. I hand-caption everything; I haven't found an auto-caption program that I like, especially when it comes to bbw content. I try to caption things with variety, structured how I imagine people will style their prompts. -drewski
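Not from the post, just an illustrative sketch: if you hand-caption like this, a tiny script can enforce the trigger-word-first convention across a dataset folder of `.txt` caption files. The comma separator after the trigger is my assumption, not something drewski specified.

```python
from pathlib import Path

TRIGGER = "ga1n3rb0t"  # trigger word from the post

def prefix_caption(text: str, trigger: str = TRIGGER) -> str:
    """Prepend the trigger word to a caption unless it's already there."""
    text = text.strip()
    if text.startswith(trigger):
        return text
    return f"{trigger}, {text}"

def prefix_dataset(caption_dir: str) -> int:
    """Rewrite every .txt caption in a dataset folder; returns count touched."""
    count = 0
    for path in Path(caption_dir).glob("*.txt"):
        path.write_text(prefix_caption(path.read_text()))
        count += 1
    return count
```

Then the same trigger word goes at the front of every generation prompt, exactly as described above.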
>>32137 lol I do not understand?
>>32136 So it's not a trigger word? Can it generate a woman standing outdoors jiggling her belly in black cargo pants?
>>32140 My Lora uses the trigger-word I've assigned to it. Base model Wan2.1 will not generate results close to what I have posted here unfortunately. It'll have no clue what "ga1n3rb0t" is. A Lora is definitely required. I will upload my Lora to CivitAI as soon as they update their model list to include Wan2.1, which should be in a day or two (they're usually pretty fast). Until then, I'll gladly run a prompt for you. I'm sure my Lora could handle black cargo pants. Feel free to get as creative as you want with it and I'll test it out. -drewski
(1.05 MB 768x560 026.webp)
(1.68 MB 560x768 025.webp)
(1.43 MB 768x560 030.webp)
(1.02 MB 768x560 029.webp)
>>32140 >>32142 Here you go -drewski
There are still some errors, like a belly button appearing out of nowhere, but the physics seems nice. Any ideas how it could be improved further to get more realistic-looking outputs, like Hunyuan Video did?
Any chance you could host the LoRA somewhere else for now to share here? Like mediafire?
>>32146 The bellybutton glitching really only happens in the pant-unbuttoning example; it seems to be a non-issue in other cases so far. Which is fair enough, because there was only one example of unbuttoning in the training data. More/better data should improve it. I agree, Hunyuan defaults to more realistic and can be crisper/smoother with generations sometimes. But Wan2.1 really shines with physics, like you said. Collisions and multiple people are handled much better. What's really pulling me into it is the general success rate of Wan vs. Hunyuan. When Hunyuan doesn't work, it really doesn't work. But with Wan, even if it isn't exactly what you were looking for, it's usually pretty solid. It'll be interesting to see what happens in the next couple of weeks as people develop upscaling and refining tools for Wan. Right now I am using the most basic workflow provided by the developer for generations (no refining/upscaling). Future workflows could improve things drastically.

>>32147 Probably, but I prefer to wait for CivitAI. I trust the hosting and I'm sure it won't be long at all. Genuinely, I'd be surprised if it's more than a day or two. -drewski
I mean, so does it randomly give either realistic or cartoony outputs for now? Or did you choose something for a more realistic video look?
>>32150 Oh, sorry, I misunderstood. Yeah, it's kind of 50/50 right now. That is probably due to my training data - I used some images I generated with Pony_Realism, which can still be a little Pixar-y. I'll try to correct that in future versions. What I'll try now is lowering the Lora strength and seeing what that does (atm it's set at 1.00). -drewski
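For context on why lowering strength helps: a LoRA adds a low-rank delta to the base weights, and the strength setting just scales that delta. A toy numpy sketch (the dimensions are made up for illustration, not Wan's actual sizes):

```python
import numpy as np

def apply_lora(W, A, B, strength=1.0):
    """Effective weight with a LoRA applied: W' = W + strength * (B @ A)."""
    return W + strength * (B @ A)

rng = np.random.default_rng(0)
d, r = 8, 2                     # toy sizes; real models use d in the thousands
W = rng.normal(size=(d, d))     # base model weight
A = rng.normal(size=(r, d))     # lora down-projection
B = rng.normal(size=(d, r))     # lora up-projection

full = apply_lora(W, A, B, strength=1.00)
soft = apply_lora(W, A, B, strength=0.80)
# at 0.80, the lora's contribution is exactly 80% of what it is at 1.00
assert np.allclose(soft - W, 0.8 * (full - W))
```

So dialing 1.00 down to 0.80 doesn't change what the lora learned, only how hard it pulls the output away from the base model's more realistic defaults.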
>>32151 Oh yeah, makes sense - so more training on realistic-looking content would help?
I mean I have some videos that might be useful if you want to check them out. https://mega.nz/folder/eJBUQbLS#LvrhGuo3qTcI2R8NLJEFMw
eyo... I took a break for a bit because it felt like we hit a "let's wait for this to get better" wall... and this is definitely looking promising. Seconding the request to just get it uploaded somewhere so I can play with it on runpod.
(17.52 KB 300x162 th-3.jpeg)
Well, as suspected, CivitAI officially added Wan Video to their model list today - told ya it wouldn't be long :P My Lora is now available for download: https://civitai.com/models/1144518/ga1n3rb0t

As always, the trigger word is "ga1n3rb0t". Recommended lora strength is 1.00; however, if you find you are getting semi-realistic gens and prefer more realism/hyper-realism, set the strength to 0.80 and this should resolve the issue.

Excited to see what can be created! Please share generations here as well as on the CivitAI page - I'd love to see them. It also really helps to find any shortcomings, which can be fixed in upcoming versions. If you enjoy the lora, please drop the model a like on my CivitAI page; it's much appreciated and earns me some buzz. Happy generating everyone!
Any way you could generate something based off the videos I just sent?
>>32180 Do we have to use ComfyUI for this or can it be made to work with ReForge UI? Also, can you link the Wan model you recommend? I can't find one with that exact name on civ
>>32181 Those are all videos. It would be possible with the video2video models - but I'm not sure if my lora would work with that, since it was trained for the text2video model. Also, the lora can generate what's depicted in those videos pretty easily with a simple prompt. Not much going on there other than belly jiggling, so I'm not too sure what you'd be looking for/expecting out of using those, sorry.

>>32201 As far as I am aware, ComfyUI is currently the only way to run Wan Video locally. I have never used ReForge UI, so my apologies, I have no idea when it comes to that. I use ComfyUI for everything. Once you get a grasp on working with nodes, it becomes very easy and you'll really start to appreciate the customization. If you are just starting out with ComfyUI, do not make the mistake of going to CivitAI and downloading overly complex workflows that promise a bunch of nonsense. When trying out a new model, always stick to the smallest workflow possible, usually the ones provided by the model developers. If you click on my example videos from my lora page on CivitAI, I believe it will let you copy the workflow I have been using.

As for the model downloads:
https://github.com/Wan-Video/Wan2.1
^This is the main Wan Video page, where you can find links from the developer.

If you are interested in downloading GGUF versions for low-VRAM cases, here is the link for those:
https://huggingface.co/city96/Wan2.1-T2V-14B-gguf/tree/main
^If you are using GGUF, you must install the GGUF model loader custom node via the "Comfy Manager" to properly load the files. This City96 dude has GGUFs available for most video models; definitely look over his page.

Keep in mind, the lora was trained with the 14B T2V model. I have not tested it on either the I2V or 1.3B models. It could work, it could not. No idea yet. Hope this all helps - let me know if you have any questions.
>>32202 Sure, I wanted some belly jiggling with a small belly overhang over the pants. I can send other ones, just tell me which ones you would need.
From what I've heard, it trains not only on the movements but on the style of the video itself, so adding them could improve your training a bit and make it more realistic-looking.
>>32203 >>32204 You don't need to send me videos, it's all good. Just write out a prompt that you would like to see and I can run it for you. Be as descriptive as you want, the more descriptive the better. Belly overhang, no problem. But what environment, hair-style, skin color, clothing, movement/actions, time of day, multiple people?, etc, etc. Write out a few sentences.
(37.23 KB 335x225 thi.jpg)
>>32206 A scene of two girls in a kitchen, standing near the table, preparing donuts or eating them. Also, by "overhanging bellies" I just meant the belly fold over the love handles (it's in the videos I just sent, so you can check them as reference).
(3.11 MB 768x608 ComfyUI_00191_.webp)
(2.69 MB 768x608 ComfyUI_00196_.webp)
(3.03 MB 768x608 ComfyUI_00194_.webp)
>>32207 Here are a few of my attempts. Not perfect, but hope you like them. Can't really say the lora is great at smaller bellies lol.
>>32210 The third one is definitely better. Did you make any changes to your lora, or did you manage to generate them without it? Because it would be nice if the lora could already make these. I wish they were more detailed and the belly could jiggle a bit too.
>>32213 nope, no changes to the lora, it can just do this. Just figuring out the proper wording for the prompt. Had to specify that the jeans were partially lowered, belly was exposed and hanging over waistband, lovehandles. At first I was prompting for low-rise jeans, but that wasn't giving the results we were looking for. To change the lora would mean re-training it entirely which would take several hours and cost $$. I'll make an updated version in the future, but not for a while. This current version still has a lot of potential.
Sure, it looks good, but it's still a bit too cartoony. Still, it looks quite good and has potential.
btw, have you tried training it more on faces too, like last time? It might give more realistic results next time. Same for the lovehandle movement - you could try training on that next time.
1) Can this AI generate expansion? 2) Can this AI generate strongfat women?
>>32214 Do you think this has the potential to generate more anime-style outputs with the right prompting?
I've been working on a LoRA for Wan i2v, but it still needs a lot of work. The dataset I built it from was like 9 videos I managed to scrape from all corners of the internet. So now I'm taking the best generations I can get from what I've generated and seeing if I can't make it better generalized. Not sure if I'll even upload it, but I probably will.
>>32352 Which videos did you use for the dataset? And what are you looking for when building a dataset? What type of videos would you need?
>>32353 Well, this is just stage one. Stage one is establishing a weight gain LoRA. It's still cartoony and not super realistic. The next step is to, well... reverse the videos and make a weight loss LoRA. Then I gather images of real overweight people and make them lose weight... then reverse it again and make another LoRA, which should in theory give me a realistic weight gain LoRA. So basically just gotta build, train, and train again. Working on the weight loss part now.
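The reverse trick in stage two is simple enough to sketch. This is illustrative only (for actual video files, ffmpeg's `reverse` filter does the frame flip); the caption swap is a hypothetical stand-in for however the trainer relabels the clips:

```python
def reverse_for_weight_loss(frames, caption):
    """Flip a weight-gain clip into a weight-loss training example.

    frames  - an ordered sequence of video frames (any objects)
    caption - the clip's training caption
    """
    reversed_frames = list(reversed(frames))
    # hypothetical relabel: the real trigger words are up to the trainer
    new_caption = caption.replace("weight gain", "weight loss")
    return reversed_frames, new_caption
```

Run over the whole dataset, this turns every gain clip into a loss clip for the intermediate LoRA, which then gets reversed again in the final stage.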
Just thought I'd share a progress report on the LoRA. The idea of reversing the weight gain videos to make a weight loss LoRA, then reversing again to get a more realistic endpoint for the weight gain, seems to be working. I'm almost done with the dataset for the cleaned-up weight gain LoRA. Here's some of the weight loss LoRA.
>>32365 this is very shocking, good job anon. btw, is there a way to make it look more like inflation? Like if the body were a huge inflatable latex suit?
>>32367 I'm sure there is. There isn't too much data floating around for it though so I'd probably have to tool around in blender and make some "bad" data so I can fish out some good data later.
>>32368 wow, that would be awesome - a realistic vid of a buff gurl inflating into a huge, extremely bloated, pneumatic-like human balloon struggling to flex
Looks nice. Anyway, sort of a request, if you can ofc: could you give these 2 images the same weight loss treatment as in those videos? Just a small request https://mega.nz/folder/iU4AQJrC#3dzyIThApxSlVX5LWX-OAg
>>32370 Oh boy, here come the request spammers.
Here's the final gain LoRA at 10 epochs. I'm gonna leave it running overnight and see if I can get it to work a little better. It's been disappointing on portrait shots so far. I assume that's because it is still very undertrained.
>>32373 So cool! Can't wait to try this. Thanks for the previous jiggling lora - it gives bodies good physics, but it's impossible to do size-change manipulations with it. So I'm really looking forward to the new one!
>>32365 >>32373 Damn! Nice work anon! Fantastic progress. Wan has so much more potential than Hunyuan. Looking forward to seeing more updates on your progress. Bit of a shameless plug, but I have started a deviantart page in case anyone wants to check out some more generations. Same username as my civitai page: https://www.deviantart.com/drewski95572 I recently developed a new lora which I've been really happy with. Focus is more on the bigger side of things and I worked in sexual positions. I haven't posted it to my civitai page, keeping this one under wraps for a bit (sorry).
What's the name of the weight gain lora?
>>32474 Turns out the creator of the weight gain Lora and the creator of the jiggling Lora are different people, and my thanks went to the wrong place. Correcting my karma. Drewski, thank you for your Lora! >>32373 How's your progress? Can I ask you to share the weight gain Lora if you've completed the training, please?
>>32482 Done. Here are the LoRAs I used including the weight loss one I used to train the weight gain one. Weight loss: https://civitai.com/models/1385301/ozempic-wan21-i2v-lora Weight gain: https://civitai.com/models/1385270/weight-gain-wan21-i2v-lora
>>32501 Thanks a lot! Did you use only one trigger word in the training data of the weight gain Lora?
>>32527 Yeah just one. These were done with the I2V models and were a little bit scuffed because of it. I'll be going through again and training it with the T2V model.
Would retraining the lora on the AI content you generated work too?
>>32533 yup! It’s the circle of life. Depending on what you’re shooting for, sometimes there aren’t a lot of existing examples to train on - so you scrape together a rough Lora that works 1/10 times > make 100s of gens using rough Lora, maybe get 10-15 decent gens > retrain with those gens > rinse and repeat. But it is easy to fuck up/lose quality using this method if you aren’t careful sifting through. AI, especially videos will have inconsistencies and artifacts that you don’t want to train on. >>32482 Thanks broski, glad you’re enjoying it!
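The generate-sift-retrain cycle described above can be sketched as a loop. Everything here is a placeholder stand-in (the real pipeline is diffusion-pipe training, Wan generation, and manual sifting), not a real API:

```python
def bootstrap(train, generate, score_fn, dataset, rounds=3, keep=0.8):
    """Rough lora -> generate lots -> keep the best -> retrain, repeated.

    train, generate and score_fn are callables standing in for the real
    pipeline; score_fn models the manual sifting step, and careless
    scoring is exactly where artifacts creep back into the dataset.
    """
    lora = train(dataset)
    for _ in range(rounds):
        gens = generate(lora, n=100)                         # make 100s of gens
        dataset = [g for g in gens if score_fn(g) >= keep]   # keep the ~10-15%
        lora = train(dataset)                                # retrain on keepers
    return lora
```

The `keep` threshold is the whole game: set it too loose and each round trains on the previous round's inconsistencies and artifacts, compounding them.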
(563.82 KB 1248x720 Wan2.1_00011_converted.webm)
(627.55 KB 1248x720 WanVideo2_1_00209_converted.webm)
(604.28 KB 1248x720 Wan2.1_00007_converted.webm)
So I made a few changes to the dataset and how I trained the weight gain LoRA. Basically, training on the I2V model isn't ideal, so I switched over to T2V, which works just as well with I2V. I also reduced the rank a bit, as the previous model was overfitting quite a bit, causing warping with clothing and stuff. T2V lets you train on both images and video, and I think it gives better results. I'll probably update the civit model later.
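For anyone wondering what "reducing the rank" buys: a LoRA adapter's parameter count, and with it the capacity to memorize/overfit details like specific clothing, scales linearly with rank. A quick back-of-the-envelope helper (the 4096 dimension below is illustrative, not Wan's actual hidden size):

```python
def lora_param_count(d_in, d_out, rank):
    """Parameters in one LoRA pair: A is (rank x d_in), B is (d_out x rank)."""
    return rank * (d_in + d_out)

# halving the rank halves the adapter size - and its capacity to overfit
print(lora_param_count(4096, 4096, 16))  # 131072
print(lora_param_count(4096, 4096, 8))   # 65536
```

A smaller rank forces the adapter to learn the broad concept (weight change) rather than memorizing incidental details from a tiny dataset.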
(590.79 KB 640x640 AnimateDiff_00350.webm)
>>32546 yes please! having fun with the first version. thank you for doing this.
>>32546 Looks promising, looking forward to trying out the I2V
