/bbwai/ - bbwai

Stable Diffusion, Loras, Chatbots

For the first time in my life I decided to just put "fat anime girl" in the prompt with nothing else. All the pictures kinda look the same, so here you can have a couple of examples of absolute lameness.
Anyone got the weights for bigger girls?
test
(54.84 KB 512x512 1.jpg)
(51.32 KB 512x512 2.jpg)
(45.17 KB 512x512 4.jpg)
(43.69 KB 512x512 8.jpg)
Is it OK to try with more realistic prompts?
(855.63 KB 712x712 7.png)
(832.82 KB 712x712 23.png)
(932.15 KB 712x712 8.png)
(896.29 KB 712x712 9.png)
(967.57 KB 712x712 25.png)
(803.97 KB 712x712 26.png)
More if you're interested
>>2135 >>2136 Wtf are ya using to get those results, and what's the mix? I got this with just the bigger girls model.
would anyone be willing to generate more gray wolf?
(422.23 KB 512x768 tmp5n0xr_ax.png)
(448.14 KB 512x768 tmp5r0qr9ne.png)
(347.97 KB 512x768 tmphpz7zqk5.png)
(527.79 KB 640x768 tmp6r_c9axa.png)
(338.47 KB 512x640 tmpj9qjfurz.png)
(278.95 KB 512x640 tmpd6_rtv5u.png)
Year of chubby bunny
Is there any way that you can make these cool? Like can you give them dongs or something?
>>2145 Ye you can, but you'll have to go to /ee/ with that stuff. If you need prompts for that check AIbooru, there's enough stuff like that
>>2144 how do you get such consistency in the design of the pink cat? could you share an example of some prompts or parameters?
>>2143 what model did you use
I know this isn't exactly a request thread, but if someone could generate some of the vtubers Buffpup or Shylily, I would be forever grateful. (Rough is OK, since I assume you guys don't have trained models for them.)
>>2154 I bow down to you, thank you. Their designs are so complicated; I'd love it if you shared the prompts, even if they take 20 minutes on my PC. Or if you can manage another batch with them larger. Sorry if I'm asking for too much, haha.
>>2158 I actually like this. What prompts and model did you use?
>>2157 >>2158 Wtf these are genuinely amazing
(492.18 KB 512x768 tmpnk7c0eff.png)
(487.21 KB 512x768 tmp6d0i9g4l.png)
(532.90 KB 640x768 tmpbdlgug2d.png)
(634.12 KB 512x768 tmp0tspuh_1.png)
(364.50 KB 512x768 tmpg4ktja6b.png)
>>2140 The bigger girls model is cool, but it gives way better results when you merge it with another model; I merged with AnythingV4. I also trained it a little using TheLastBen's dreambooth colab. It got an update that lets you train using training images with captions, which really helps the quality and also lets you train specific tags, e.g. fats, hips, T&A, futa, feet, etc. Also use the DPM++ 2S a Karras sampler if you have the option: it's about as fast as Euler but gives better details and can also do better hands. 10-20 steps for roughs and 40-60 for real pics.
>>2144 Kino
>>2147 Idk what the other guy used in his prompts, but I used "nyatasha nyanners, light pink hair, pink cat ears, twintails, purple eyes, hair ribbon". You'll have to inpaint to get black hair ribbons, cause they usually come out pink or purple regardless of prompt, and you can replace "twintails" with whatever hair you want.
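For anyone wondering what "merging" actually computes, here is a minimal sketch of the per-tensor weighted sum that the webui's checkpoint merger performs; the filenames and the 0.3 ratio are placeholders, not the poster's exact recipe:

```python
import torch

def load_state_dict(path):
    ckpt = torch.load(path, map_location="cpu")
    return ckpt.get("state_dict", ckpt)  # some ckpts nest weights under "state_dict"

def merge(path_a, path_b, m=0.3, out="merged.ckpt"):
    a, b = load_state_dict(path_a), load_state_dict(path_b)
    merged = {}
    for key, ta in a.items():
        tb = b.get(key)
        if torch.is_tensor(ta) and torch.is_tensor(tb) and ta.shape == tb.shape:
            merged[key] = ta * (1.0 - m) + tb * m  # out = A*(1-M) + B*M
        else:
            merged[key] = ta  # keep A's value where B has no matching tensor
    torch.save({"state_dict": merged}, out)

# placeholder filenames; mixes 30% of model B into model A
merge("biggergirls_v2.ckpt", "anything_v4.ckpt", m=0.3)
```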
(784.44 KB 512x800 651.png)
(891.28 KB 512x800 644.png)
(784.60 KB 512x800 671.png)
(799.27 KB 512x712 963.png)
(1.12 MB 640x976 223.png)
(770.34 KB 512x712 988.png)
I don't see any better place to post these, so here we go .... Enjoy.
(610.57 KB 512x688 3725.png)
(918.66 KB 720x920 1721.png)
(1.09 MB 560x808 1678.png)
S'more
(343.08 KB 512x512 a something.png)
>>2153 Dall-E has its own unique art style. And it knows it's better to hide the hands than show them off, lol.
(226.72 KB 512x512 01.png)
(292.38 KB 512x512 02.png)
>>137720 Else
here's a link to it: https://huggingface.co/WarriorMama777/OrangeMixs and to a pre-made colab notebook if you can't do AI on your PC: https://colab.research.google.com/github/camenduru/stable-diffusion-webui-colab/blob/main/abyss_orange_mix_2_webui_colab.ipynb
I wish I waz back
>>2168 I almost fucking pissed myself seeing the painting in the background. Thank you for the purple cat; it was technically a vtuber, and more like a lavender cat, but it's appreciated.
>>2169 love whenever AI adds extra random boobs in the corner of the picture. does this model have booru tags for characters?
>>2164 Nice — what model did you use for these?
>>2164 ayyy nice ones! What models are you using for these?
>>2173 Yea, most probably, didn't try that yet
>>2176 interested to see how this model handles those, if you do them please post
Once again my PC has no idea what it is doing. DPM2 a Karras, 40 steps. For some reason the last two images have had an attempt at text. Tried "cat mouth" on the one before, and the mouths were hell.
>>2174 >>2175 Forthemasses model but it's really inconsistent. https://civitai.com/models/4673
>>2179 Does it work with USSBBW sizes?
>>2181 Nicely done! What model did you use for this?
>>2182 https://civitai.com/models/3950/art-and-eros-aeros-a-tribute-to-beauty this one
Has anyone had any luck making fat traps? Either photorealistic or drawn? Most models I've used don't handle androgyny well, and adding dicks into the mix just complicates things.
>>2184 >chicks with dicks
(368.24 KB 512x768 tmp7lvir4qz.png)
(441.06 KB 512x768 tmpq8bd3a_8.png)
(328.48 KB 512x768 tmpfkwh3fxw.png)
(451.94 KB 512x768 tmp_aqpy9lq.png)
(447.07 KB 512x768 tmp0e4fh18k.png)
(419.49 KB 512x768 tmprviloop4.png)
>>2170 In the colab version, how do you merge the model with the bigger girls model?
>>2189 Lmao, I recommend putting "teeth" and "open mouth" in negative prompts. for some reason it associates having teeth out with "nerd"-type prompts, which I noticed you're using, so be judicious about mixing tags with those
(308.14 KB 2065x1735 harold.jpg)
>>2189
>hide the pain
>>2190 Maybe it's because anime girls are toothless or have the flesh teeth?
>>2172 Hahah, thanks
>>2153 >>2178 😭
>>2184 You gae
>>2187 See those !wget commands? They download stuff and put it in the directory specified after -O. So just make another wget command, put the Bigger Girls download link in, and have it output to the same directory as all the others; just name it bigger_girls.ckpt or the like. After you boot up, go to the checkpoint merger, choose which model you want to be main (A) and which secondary (B), choose a ratio of the secondary model in the final result (0 = original A; 1 = original B), and choose the checkpoint format based on what format your main model was, either safetensors or ckpt. Don't touch the mixing method unless you're doing it with 3 models, the 3rd being stuff you DON'T want in the final model.
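A hypothetical example of the extra colab cell described above; the civitai URL is a placeholder for the model's direct download link, and the output path assumes the usual webui folder layout these colabs use:

```python
# Extra colab cell: fetch one more checkpoint next to the others.
# Replace XXXX with the direct download link from the model page.
!wget "https://civitai.com/api/download/models/XXXX" \
  -O /content/stable-diffusion-webui/models/Stable-diffusion/bigger_girls.ckpt
```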
IT'S OUT, I REPEAT, IT'S OUT! https://youtu.be/foeS4UBm-EU
>>2192 >>2193 >>2194 >>2195 Actually some pretty incredible stuff in here, color me surprised
>>137925 I liked it. Thanks for sharing, anon.
>>2210 Nah i'd say it's just amateurish
I imagined this chick I met at a bar that looked kinda like this... but fuller...
>>2210 what model(s) are you using here? Nice works
>>137984 too horny to type
>>2210 Yeah, the dev probably meant this to be a thing; that's why he released both v2 and a safe version with simplified prompting. Maybe try that one as well? I don't use it on its own most of the time anyway, so I didn't see much difference aside from the overall better shapes its mix with Anything 3 can do now.
(500.73 KB 512x768 tmppr32w1pf.png)
(501.99 KB 512x768 tmp8m5if449.png)
(507.73 KB 512x768 tmpat4hy0_4.png)
(484.21 KB 512x768 tmpi_elncwo.png)
(471.39 KB 512x768 tmpgv5xo1n6.png)
(469.12 KB 512x768 tmp1iz2fkt8.png)
chub'd
>>2204 I tried adding the link and just copying the directory from the other commands, which made Bigger Girls appear in the selections under checkpoint merger, but I selected Abyss Orange as A and Bigger Girls as B, and when I tried to merge them, both boxes changed to ERROR in red regardless of the format I picked. What could have gone wrong? One difference I notice between the base four links and the last is that civitai doesn't end its link in either .safetensors or .ckpt, if that makes a difference.
>>2218 Huh, that's interesting. I mean, I tried merging v2 just this morning and this mf just crashed my whole colab becuz it depleted all the RAM, but I never got an error message with 1.2 and practically any other model. Here's how my code looks; check your code for errors maybe.
>>2219 Oh I see what I did wrong. I had typed the site that hosted the download link, not the link to the direct file. Bigger Girls is apparently huge as a file so I get why yours crashed. How do you guys use img2img to improve your results? I get some neat stuff but with the occasional random error here or there that I'm sure is resolved with good img2img usage.
Stupid question, but which program do I need to make this kind of AI?
>>2221 Have a ripped PC with an Nvidia graphics card? Wait for someone else; they'll tell ya how to work with the client software, maybe. Got unlucky with a weak machine, or running on an AMD card? Go here: https://github.com/camenduru/stable-diffusion-webui-colab , pick your model of choice and boot the thing up; our overlords at google give you a ripped fucking machine for 3 hours a day to do literally whatever. If you want stylized and anime images, pick any version of Anything, Eimis, Elysium, Counterfeit, Certain; they can make kinda chubby gals on their own. But if you want to go broader, go to Civitai.com and take either BiggerGirls or ChubbyGirlsGolden, add them with a !wget command and either use them on their own or merge them with any model you want. It's a huge field, so you'll have to experiment on your own. If you want to make realistic fat gal images, you'll have to wait for someone who works with that. Adding a bunch of images I posted in the deleted thread as a dummy guide to working with colab.
>>2221 >>2222 If anything goes south, kill the whole thing the most brutal way possible and check that everything looks somewhat similar. The first image is a way to kill everything; here >>2219 is about how the wget section should look. All models come in either .ckpt or .safetensors format, and all of them require VAE files to operate properly, so check which models need which VAEs; usually they're listed on the model page and come in their respective colab file, if it's not done by retards that is.
IMPORTANT! Can someone make a stable diffusion guide for total beginners trying to make fat art? I want to start but I'm totally lost.
>>2225 Well what models do you have?
>>2205 Link?
>>2222 how do i make those wget comands for bigger girls?
How do you guys get actual characters? I keep getting random OCs.
>>2229 It's usually not enough to just punch in "[character], fat"; you usually have to describe the character you want in detail. For example, image #1 of this post is just "gawr gura", and is completely unrecognizable as Gura, but image #2 is "gawr gura, hololive, virtual youtuber, blunt bangs, grey hair, blue hair, blue hoodie, blue eyes, bare legs, shark girl, shark tail, shark hair ornament, shark hood, two side up, shark costume" with negative tags "blonde hair, purple eyes, grey hoodie, cat ears, animal ears, shorts, short shorts, skirt, miniskirt, thigh_strap, pants, leggings".
Just go to gelbooru or danbooru, pull up an image of whatever character you want to replicate, then copy its tags. If your character isn't on a booru, or doesn't have many results, you'll have to either make an embed or get creative with prompting & inpainting. Then, once you're satisfied, you can start adding the tags you actually want; images #3 and #4 have the gura prompt + some fat tags. And when you have a good image, you can do some inpainting to fix details like hands, faces, clothes, anatomy, etc. A good candidate here would've been Gura's shark tail or hands.
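The same booru-tag prompting works outside the webui too; here is a hedged sketch using the diffusers library, assuming a recent version with from_single_file support. The model file is a placeholder and the prompts are abridged from the post above:

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder checkpoint filename; any SD 1.x .ckpt works here.
pipe = StableDiffusionPipeline.from_single_file(
    "bigger_girls_v2.ckpt", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt=("gawr gura, hololive, virtual youtuber, blunt bangs, "
            "shark girl, shark tail, shark hood, fat, gigantic belly"),
    negative_prompt="blonde hair, purple eyes, cat ears, shorts, skirt",
    num_inference_steps=25,
    guidance_scale=7.5,
).images[0]
image.save("gura_fat.png")
```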
(416.23 KB 512x768 tmphj1k_d86.png)
(370.97 KB 512x768 tmp1epid8p9.png)
(443.63 KB 512x768 tmpksi2gwm4.png)
(498.85 KB 512x768 tmpscqox3cu.png)
(437.60 KB 512x768 tmpv6cvfpf5.png)
(355.82 KB 512x768 tmpmu8nmd1l.png)
>>2225 I'm gonna copy and paste this from /vt/
>NAI Leak Speedrun
https://rentry.org/nai-speedrun | https://rentry.org/sdg_FAQ | https://rentry.org/sdhypertextbook
>/vtai/ Resources
Holomem Prompt List: https://rentry.org/3y56t | VTuber Model Mixes: https://rentry.org/8nxtk | Embed/Thread Archive: https://mega.nz/folder/23oAxTLD#vNH9tPQkiP1KCp72d2qINQ
>/hdg/ FAQ + Embeds, Hypernetworks, Models
https://rentry.org/hdgfaq | https://rentry.org/hdglorarepo | https://rentry.org/hdgrecipes
>Local Install
Nvidia: https://rentry.org/voldy | https://github.com/AbdBarho/stable-diffusion-webui-docker
CPU: https://rentry.org/cputard | https://rentry.org/webui-cpu
AMD: Native: https://rentry.org/sd-nativeisekaitoo | Docker: https://rentry.org/sdamd | Onnx: https://rentry.org/ayymd-stable-diffustion-v1_4-guide
>Cloud Hosted Install
Colab: https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb | Paperspace: https://rentry.org/865dy
Colab for Complete Retards: https://colab.research.google.com/drive/1STL60qfoY-iSlhRb9zFETRLTqhNbznRf
>Subscription sites
https://novelai.net/ | https://nijijourney.com/en/ | https://yodayo.com/
>Guides
Prompting: https://rentry.org/hdgpromptassist
Inpainting/Outpainting: https://rentry.org/drfar | https://rentry.org/inpainting-guide-SD
Upscaling: https://rentry.org/sdupscale | https://rentry.org/hiresfixjan23
Animation: https://rentry.org/AnimAnon | https://rentry.org/AnimAnon-Deforum
Wildcards: https://rentry.org/NAIwildcards
Embeddings: https://rentry.org/simplified-embed-training
Hypernetworks: https://rentry.org/HNSpeedrun
LoRAs: https://rentry.org/lora_train | https://rentry.org/HDGLoRaIssues
>Old News
https://rentry.org/sdupdates3
>GPU Performance Data
https://docs.getgrist.com/3mjouqRSdkBY/sdperformance
>Upload your images
https://aibooru.online/ | https://www.pixiv.net/en/
>More Resources
4chanX Catbox userscript: https://rentry.org/hdgcb
Essentially everything: https://rentry.org/sdgoldmine
Tags: https://danbooru.donmai.us/wiki_pages/tag_groups
/g/ wiki: https://wiki.installgentoo.com/wiki/Stable_Diffusion
5ch NovelAI wiki: https://seesaawiki.jp/nai_ch/
Models: https://civitai.com/ | https://huggingface.co/ | https://rentry.org/sdmodels
More links: https://rentry.org/sdg-link
>>2226 Bigger girls Orange abyss v2
>>2147 I'm only a little late on responding to this, lmao. I couldn't give an answer until I booted up the program again, because I normally don't boot it unless I know I'm gonna be busy, but I don't mind running it just to fetch the exact prompts again.
For the positive keywords I use: cat girl, twin tails, long hair, messy bangs, (pink hair:1.1), (pink cat ears:1.1), black bows, purple eyes, pink cat tail. For the negatives I add: blue eyes, black cat ears, white bows, purple bows, unibrow.
>>2163 I stopped using "nyatasha nyanners" when I realized that danbooru only had around 200+ results. I also didn't really have trouble getting the ribbons to stay a certain color by just using the negatives. When I was originally setting it all up, I just did some tests and added each negative color keyword until it stopped pushing out that color, and if it did push it out again once I started doing larger batches, I would just change it when doing img2img.
>>2163 Fuck, I forgot to ask: what prompts and/or model mix did you use for the Gray Wolfs here? Tossing them into PNG Info didn't show many keywords.
>>2242 You ever thought about opening an art account to rake in those delicious LIKES and VIEWS? This stuff blows away virtually everything else being put out by anyone, IMO. Mad cred to everyone else who is building the thread, as well.
>>2244 I mean, I drop my stuff to AIbooru from time to time, but come to think of it, why not make a brand new account for that somewhere on twitter or DA
>>2243 Was the second one done with the button gap LORA that's on civitai? I was wanting to experiment with it, but 99% of the prompts I have the AI generate normally involve swimsuits or latex so I didn't consider getting to it yet
>>2246 Holofat model? You've piqued my curiosity.
>>2247 Nah, I was just fucking around with a public ai site and got it by chance. I’m extremely new to ai, so I’ve been just experimenting with it all day. I’ve been using the available Abyss Orange model, and I’ve gotten some alright results.
I know asking for help is looked down upon here, but I'm seriously lost. I've got Stable Diffusion, Anything v3 and Bigger Girls v2, but I can't figure out how to make it generate girls that look as good as the ones I see posted here. Are there certain prompts or settings I need to change? The closest I got was feeding it an img2img and adding a specific character in the prompts, but even then the image still came out somewhat messy.
>>2250 ignore this, just saw the beginners guide posted a few posts ago
>>2251 >>2250 Unignore this, the "guide" doesn't really guide you, just gives you download links for things. I'm looking for specific prompts that will get me good fatties.
Finally tried fucking around a little with this on a web-based version (yodayo). I've gotten it to work reasonably OK, but I'm still struggling to get it to draw specific characters, and not even obscure ones. Am I better off just installing the program and maybe trying img2img rather than continuing to tinker with a web-based version?
>>2253 I don't know what to tell you, brother. Do you like huge breasts, or tiny ones?
fuckin around a bit, surprisingly fun stuff
(1.22 MB 720x824 00012-1770739494.png)
(992.88 KB 720x824 00021-1044165255.png)
(1.27 MB 720x824 00011-1834350101.png)
(1.24 MB 720x824 00013-1770739495.png)
(1.26 MB 720x824 00016-3046812896.png)
(1.30 MB 720x824 00017-3046812897.png)
More AI generated realisthicc fatties
(1.06 MB 720x824 00020-1044165254.png)
(1.01 MB 720x824 00022-2634975826.png)
(1.05 MB 720x824 00027-1868144893.png)
(1.10 MB 720x824 00023-2634975827.png)
(1.05 MB 720x824 00026-1868144892.png)
(1.23 MB 720x824 00029-3067658837.png)
(722.44 KB 536x664 00133-4038006264.png)
(812.36 KB 536x664 00103-1074169307.png)
(1.15 MB 720x824 00030-3067658838.png)
(1.14 MB 720x824 00061-2382088233.png)
(1.17 MB 720x824 00068-106931422.png)
(1.31 MB 720x824 00080-116162508.png)
(801.57 KB 536x664 00151-713898239.png)
(727.14 KB 536x664 00179-1610541998.png)
(766.76 KB 536x664 00166-4290023209.png)
(911.97 KB 2504x3124 have.jpg)
>>2261 Holy shit man XDD
>>2248 Scroll up to the /vt/ copy pasta and click "model mixes" and you'll find all the models in there including holofat.
>>2257 Whoa, these are great! What model/mix is this out of curiosity?
>>2265 Forthemasses model, just updated today BTW. https://civitai.com/models/4673/forthemasses
(534.03 KB 512x768 tmpa2brxny0.png)
(511.32 KB 512x768 tmp3kcrj5jq.png)
(551.24 KB 512x768 tmp1qibg_1d.png)
(526.08 KB 512x768 tmpy9qjtvca.png)
(515.97 KB 512x768 tmpuxq59c02.png)
fat foxgirl from my gacha game
>>2268 good stuff, we need more arknights fats.
Add voices to your pictures!: https://beta.elevenlabs.io/ AUDIO AI HAS BEEN RELEASED! Here’s an example of Patrick Bateman from a 4chan thread. https://vocaroo.com/1mcYA2LGqbJg
>>2270 Anon, while I do appreciate the funny drawn cat pictures, it feels like whiplash to see them under an AI fat art thread.
>>2192 How did you get the colors so vibrant? It might just be me but the lines look clearer too. Been playing around with a few models and mixes like this one, and there's a number that seem to have a slight sepia haze I haven't been able to get rid of.
>>2272 Not the poster, but check that you're using the right VAE for your model. I know when I use the standard Stable Diffusion VAE it can have that effect. If it still happens, you could try out different VAEs, or try mixing your model a bit with one that has more vibrant colors. For a workaround fix, you could try adding "bright colors" or "high saturation" to your prompts.
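For the scripting-inclined, swapping VAEs looks something like this in diffusers; the checkpoint name is a placeholder, and stabilityai/sd-vae-ft-ema is one of the publicly available VAEs (in the webui you'd instead pick a VAE under Settings):

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Load a standalone VAE; this repo id exists on HuggingFace.
vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-ema", torch_dtype=torch.float16
)

# Placeholder model filename.
pipe = StableDiffusionPipeline.from_single_file(
    "your_model.ckpt", torch_dtype=torch.float16
)
pipe.vae = vae  # decode latents with the swapped-in VAE instead
```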
>>2271 I don’t think you can fathom how proud I am of my kink and how god tier this innovation is
>>2274 You're a woman.
https://moritz.pm/posts/parameters This site has a big list of prompt keywords; I thought the lighting keywords were especially cool. I added "diffused lighting, rim lighting, volumetric lighting, hard shadows, subsurface scattering" on the last pic of this post and it did a pretty good job, although it's hit or miss (there were like a dozen other pics that gave the girl blue & black skin for some reason).
>>2238 I merged anythingv4 & biggergirls and trained it a little more, mostly on pics of hips & asses. My prompt was "masterpiece, best quality, highres, 1girl, grey wolf \(kemono friends\), animal ears, fur collar, black hair, yellow eyes, plaid pleated skirt, black thighhighs, cropped black suit jacket, grey necktie, sleeve cuffs, white gloves, fat, gigantic belly, midriff", with img2img alongside inpainting for the heterochromia & necktie+cleavage.
>>2246 Very nice details, I want to try and get something like this.
>>2274 well congrats anon, now give me credit for the last pic because that art style is fucking mint.
(539.85 KB 512x768 tmpij_bhfkp.png)
(453.39 KB 512x768 tmp4g1lz36a.png)
(469.57 KB 512x768 tmp60azj80d.png)
(492.69 KB 512x768 tmpee9x6pdf.png)
(496.96 KB 512x768 tmp6i3nf6ii.png)
(534.85 KB 512x768 tmp0101zfgb.png)
>want to generate AI fatties
>computer is probably too weak to handle running it locally
>I have no idea how this shit works even if it somehow could
>any and all information I can find on it (including the guides posted in this thread) requires you to already know a shit-ton about computers to use any of it apparently
>try connecting to a colab instead
>requires a google account
>try to make a new one anonymously so I'm not tying this to my normal account
>can't because it needs a phone number
>disposable ones are always detected and not allowed as a result
I apologize for being an absolute retard, but is there something I'm missing with either option, or am I just stuck paying the price for being dumb?
>>2280 Try making the spare google account on a phone itself? I've never been asked for a phone number that way.
>>2281 I could try that, yeah. Issue is I don't exactly have a "burner" phone to do it with, nor do I have the cash for one or the ability to keep it from being known to others, being a penniless NEET. And I'm not about to tie my real phone to it, for the same reason I won't use my real google account: probably unfounded and pointless paranoia that people will find out I like fat chicks and mock me relentlessly for it. Stupid, I know, but a hard fear for me to shake. I don't mean to fuck up the thread, by the way; if there isn't an easy way to get around this (yet?), I'll just go back to silently enjoying the results without being able to add any of my own.
>>2282 No shame in asking questions, especially if you're nice about it. It's not like it's your job to know these things (although I agree that you're worrying too much).
It's a lot better to have a beefy computer for the local install, but if you don't have a complete shit can, you can get away with running it on mediocre hardware (no less than 4gb of vram on your GPU) if you change some settings. You can also do cpu-only mode if your GPU is bad; it's incredibly slow, but you can just leave it running in the background. Nvidia users will have an easy installation, and AMD users are kinda shafted unless they're on linux.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies
Follow steps 1 and 2: install Python and Git, and once you have git, you can clone (download) the program into whatever directory you want. Ignore step 3, since you're gonna want the fat girl model instead.
If that worked, you can proceed to setup, which is mostly automatic. Navigate into the downloaded folder and run webui-user.bat. It will take a while, and when it's done, close it if it doesn't close itself.
After setup, download the bigger girls model: https://civitai.com/models/2664/bigger-girls-model and put it in the models/stable-diffusion folder of your install.
You should also change the startup parameters. Edit webui-user.bat and add the following to COMMANDLINE_ARGS= : --medvram --opt-split-attention --no-half-vae
Sub --medvram for --lowvram if you have 4gb vram or less; it works slower. --opt-split-attention helps reduce vram costs with a speed penalty. --no-half-vae increases vram costs but fixes an issue where images come out solid black. The full list of arguments is here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-and-Settings
After that you're pretty much set and only need to learn how to prompt, which is the easy part. Access SD by going to http://127.0.0.1:7860/ when the .bat is running.
I don't know the specifics because I don't use windows or nvidia. I also don't use the SD colab, so I got no advice there. But hopefully this helps.
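If it helps, here is roughly what the edited webui-user.bat from that step looks like; note that standard copies of this file set the flags without surrounding quotes:

```bat
@echo off
rem Sketch of the edit described above; swap --medvram for --lowvram
rem on cards with 4gb of vram or less.
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram --opt-split-attention --no-half-vae
call webui.bat
```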
>>2282 As long as your phone's an android, you should be able to make a burner account without a phone number. Hell, I made one for an androidtv device with a phone that's still on 6.0.
>>2277 Do you have any tips on training? Can you train an artist's style? I tried using dreambooth back in October and trained about 300 images, but it didn't work.
>>2268 >>2279 What model and prompts are you using, might I ask? I'm interested in generating my own arknights fats but don't know where to start
I know it's a long shot with how complex her design is, but having tried on my own for a bit and gotten nowhere with my rudimentary understanding of these programs, anyone willing to try and do the independent vtuber Juniper Actias?
>>2283 Also, I forgot to mention that if cpu mode doesn't enable itself, you can just replace --medvram with --use-cpu.
>>2285 I got a few.
Manually crop/resize & caption your training images; the autocrop is crap, and autocaption tags every big girl regardless of size & shape as 'fat', making it really hard to train specific sizes or proportions, and really hard to train new concepts. Takes a while, but the end result is better, you can use tags to generate, and you don't have to train as long. Just treat the file name as a string of tags in order of importance, like "big_belly, fat, button_gap, office_lady, sitting_in_chair, brown_eyes" (there's a sketch of this scheme below).
I also ran into a problem where including a specific character as part of the tags of an image made it a lot harder to generate them; I think it's because I didn't have enough training images of them. Whatever the problem is, it really sucks, because now I can't generate Patchouli Knowledge without this horrible sketchy-cartoony style (see last image of this post). And while I haven't tried specific artists, it looks like it's possible >>2195 >>2266
I found 60 steps per image on the default settings (with captioning turned on!) to be good. So, if you had 300 images, train for 18000 steps, maybe 20000. You're also fine not training the text encoder at all, since it's not needed if you've already captioned your own images.
If you're on a free account, the training takes a long time and you run the risk of being disconnected regardless of whether or not you're afk, so you should save checkpoints frequently or shell out $10 for premium. Good luck
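As a concrete illustration of the filename-as-caption scheme mentioned above, here is a hypothetical Python loop; the captions dict is made up, so fill in your own files and tag strings:

```python
import os

# Hypothetical mapping of training images to ordered tag strings,
# most important tag first, per the advice above.
captions = {
    "img001.png": "big_belly, fat, button_gap, office_lady, sitting_in_chair",
    "img002.png": "huge_hips, fat, standing, brown_eyes",
}

for fname, tags in captions.items():
    ext = os.path.splitext(fname)[1]
    os.rename(fname, tags + ext)  # note: identical tag strings would collide
```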
>>2257 now THESE are great
>>2270 I want to hear feederism audio
>>2257 I'm very curious what model or mix you used for this, simply because most of the models that use photorealistic sources don't do well with larger figures and these seem to be great.
>>2291 Not the poster, but this may be a mix of some model with "ForTheMasses", or just the latter on its own. I haven't worked with it myself, but stumbled upon it on civitai: https://civitai.com/models/4673/forthemasses
Pardon my double post, but I have some questions relating to some of the models.
>>2293 What is the other 70% you're mixing with Dreamshaper? I tried Bigger Girls v2 [F3C16C83DE] 70% with 30% DreamShaper [13DFC9921F], but it gave me [47f6f54edb] rather than your hash of [73610e09da]. Similarly, for >>2192 I tried 70% BiggerGirls-v1.2 Beta [FD3DDC7587] and 30% BaronMix [2C541F81CE] and got [4baaaa2f79] rather than [03079c9c55]. My output is similar enough, but I'm still curious.
>>2293 Might I recommend SwinIR_4x if you have trouble with Latent? You are correct to notice that Latent overcooks fat faces. I also notice that it tends to add extra fingers. Of the options besides Latent, SwinIR_4x is the one I ended up settling for. I used a prompt very similar to one of yours to run these tests.
Hoping someone here can help me. Today, out of nowhere, my webui no longer runs and I don't know why. The command prompt tells me "stable diffusion model failed to load".
Maybe I'm blind, but I didn't see the artist list and tag text file for bigger girls v2 until today.
art by trinity-fate62, 544
art by sooftly, 488
art by axel-rosered, 384
art by kurocaze, 380
art by midnitemorty, 231
art by metalforever, 211
art by kuron, 204
art by squarewave, 199
art by toroboro, 191
art by kipteitei, 166
art by honi san, 155
art by pixiveo, 138
art by jaykuma, 137
art by Unotiltedforthewin, 135
art by bamboo ale, 132
art by woot, 115
art by pewbutt, 110
art by aimbot-jones, 102
art by katou, 101
art by better with salt, 95
art by thedookus, 53
art by plaga+trinity-fate62, 13
art by melonstand+woot, 10
(913.11 KB 568x856 00472-3302358097.png)
(747.61 KB 512x768 00329-3086480759.png)
(940.28 KB 568x856 00463-2624211423.png)
(840.38 KB 568x856 00461-2214009521.png)
(1.18 MB 768x768 00004-3862135086.png)
(850.11 KB 512x768 00094-3089860175.png)
Playing with bodyshapes is awesome.
(836.04 KB 512x768 00085-678759491.png)
(870.87 KB 512x768 00080-1818338406.png)
(808.94 KB 512x768 00071-2643535497.png)
(813.07 KB 512x768 00082-2551542884.png)
(927.89 KB 512x768 00041-2332537937.png)
(1.02 MB 768x768 00026-2515050199.png)
(1.23 MB 712x920 00007-4279778699.png)
(1.20 MB 712x920 00012-3927839703.png)
(994.22 KB 712x816 00398-2903982177.png)
(934.56 KB 712x712 00367-2070926478.png)
(868.29 KB 712x712 00365-3686637061.png)
(1.16 MB 712x816 00395-1704080627.png)
(727.86 KB 640x640 00263-301584987.png)
(818.37 KB 640x640 00253-1733266323.png)
(694.63 KB 512x688 00114-1980074943.png)
(795.90 KB 512x768 00164-3529949108.png)
(819.75 KB 640x640 00244-3590097562.png)
(992.17 KB 624x784 00128-780341477.png)
(622.23 KB 512x688 00076-789496912.png)
(715.04 KB 560x712 00185-1571399907.png)
(876.59 KB 512x768 00324-2684964711.png)
(1.04 MB 768x768 00410-419344757.png)
(1.14 MB 768x768 00367-216077756.png)
(1.18 MB 768x768 00389-753785339.png)
(969.68 KB 512x768 00321-712525786.png)
(894.02 KB 512x768 00307-1623116278.png)
(835.21 KB 512x768 00223-1590636377.png)
(788.31 KB 512x768 00213-856455750.png)
(806.23 KB 512x776 00881-3919128256.png)
(1.39 MB 768x768 00247-1843755414.png)
(761.36 KB 512x776 00849-2412720276.png)
(644.85 KB 512x640 00431-2682752390.png)
(717.99 KB 512x776 00846-3060513249.png)
(1.03 MB 712x712 00127-1798889119.png)
(1.07 MB 712x712 00130-1798889122.png)
(1.03 MB 712x712 00118-267797974.png)
(518.38 KB 536x520 00097-462485877.png)
(763.94 KB 512x840 00069-2959477668.png)
(776.41 KB 512x840 00061-592904182.png)
(716.43 KB 512x768 00958-10943952.png)
(751.35 KB 512x768 00998-507536150.png)
(863.92 KB 512x768 00999-2442127629.png)
>>2306 Honestly, I've not really found that adding the artist tags does much good. I mean, it will give you something more similar, such as more anime-ish, or in the case of someone like unotiltedforthewin, something more artistic, but overall it doesn't seem to do that much. That said, I may just be a fool; I've never managed to get anything so detailed when I generate things.
(1.41 MB 1024x1024 00004-801081011.png)
>>2295 I'm not even getting similar output. I tried merging the same ckpts (they match the hashes you have) but I get different ones again and output that looks like a goofy classical painting (related) instead of what the OP there had with the swifties on a picnic.
>>2314 What model was used to make these? Could you link me please?
>>2317 forthemasses https://civitai.com/models/4673
>>2318 Could you share the prompts used? Appreciate any suggestions!
>>2316 I don't get the exact same output as >>2158 (for the same prompt and seed) but it's similar enough, with BG1.2 and Baron mixed at 30%. First one at clip skip 1, second at 2. Idem with images 3 and 4, but I didn't have the bad-artist-anime embedding at the time. I have a hypothesis. Is it possible that you applied the mix percentages backwards? That you merged 70% of baronmix rather than 30%. Images 5 and 6 are that result (clip 1 and 2 respectively). In the checkpoint merger, a multiplier (M) of 0.7 means that you get 0.7 of model B. If it's not that... What is the .vae you're using with it? I'm using the anythingV3 vae.
(64.80 KB 1088x577 barmixinfo.PNG)
(1.52 MB 1024x1024 baron_to_BG12_070.png)
(1.51 MB 1024x1024 baron_to_BG12_070_3.png)
(55.83 KB 1077x492 bgmixinfo.PNG)
(1.51 MB 1024x1024 eldrich_horror.png)
(1.50 MB 1024x1024 BG12_to_baron_030.png)
First off, thanks for the response. Here's what I have. From how I named the models, I thought I blended them correctly, but I was high as a kite doing this and may have completely fucked it up, so I'm going to try to retrace the steps. I know I didn't apply ENSD correctly, but I have fixed that now. The results I'm getting today are much better overall, so I definitely had something messed up before.
I have, for the purposes of recreating this:
Bigger Girls 1.2 beta [fd3ddc7587]
BaronMix_1.0 [2c541f81ce]
The first mix is 30% baron, 70% BG1.2 (shown in first image); it came out as [f21e029cec]. The second mix is 70% BG1.2, 30% baron (second image); it shows as [369bcbae87].
The mix formula A*(1-M)+B*M would suggest that they'd be the exact same if you swap B for A and change M from 0.3 to 0.7, since it's just a weighted sum of the models, but the results are quite different. The *only* perceivable difference from my end is mixing them in a different order, and there looks to be something in there that's not accounted for that creates this difference.
Images in order:
1. Showing recipe of first mix
2. First output of first mix (Baron primary)
3. Third output of first mix (Baron as primary, batch size 3)
4. Second recipe (Bigger Girls 1.2 as primary, but same mix)
5. First output of 2nd mix (closest match to the original, but it's horrifying)
6. Third output of 2nd mix (best output from batch of 3)
7. Third output of 2nd mix with AnythingV3_fp16 vae applied
What is the clip skip parameter? This is the first I've heard of it, but I saw it in your embeds too. This shit is complicated; I feel like I see a new option every time. But it's come such a long way since the first thread here.
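A quick note on the algebra in that post: on paper, the weighted sum really is symmetric when you swap A and B and flip M,

$$A(1 - M) + BM \Big|_{M=0.3} \;=\; 0.7A + 0.3B \;=\; B(1 - M') + AM' \Big|_{M'=0.7}$$

so identical mixes should give identical tensors in exact arithmetic. In floating point, though, 1 - 0.3 is not bit-identical to 0.7, and that rounding alone can change the saved file's bytes, and therefore its hash, even when the outputs look the same; the bigger visible differences here are more plausibly the VAE or clip skip, as the replies suggest.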
(1.41 MB 1024x1024 BG12_to_baron_030_VAE.png)
>>2320 >>2321 Oops I hit the image limit, this is #7 from the last set. Applying the VAE does sharpen it quite a bit and reduce some of the watercolor effect, which is what I was looking for. I just didn't know if it was going to apply properly
>>2321 >>2322 Well, you're clearly on the right path. The vae sharpening up the colors is exactly what should happen. While you're not getting the exact same results (neither am I), your mix seems good to work with. You can find "clip skip" in the Settings tab, under the Stable Diffusion sub-tab (or just click "show all pages" and control-F for "clip skip"). Some models work better with 2, other with 1. Bigger Girls was intended to work with 1 afaik.
Cool dude, I love your AI; these arts are awesome! Do you have more art?
(2.27 MB 768x1024 1675995982594456.webm)
(1.43 MB 1024x1024 00383-801081011.png)
(1.57 MB 1024x1024 00422-801081013.png)
(1.29 MB 1024x1024 00446-801081013.png)
(1.35 MB 1024x1024 00516-2150758077.png)
(1.36 MB 1024x1024 00453-801081016.png)
(1.27 MB 1024x1024 00541-3180258786.png)
Some of the latest from messing around with other model combinations: baron, dreamshaper, AbyssOrange, all mixed into bigger girls v2 at 30%.
Be
>>2321 It's really weird, but I have both the Bigger Girls 1.2 model and the BaronMix with those hashes... but mixing them, even following your screenshots to the letter, results in vastly different hash outcomes.
>>2329 Please tell us your secrets. What was the process in making this animation? After Effects?
(803.60 KB 768x1024 00489-501688211.png)
(453.50 KB 768x1024 00019-621185928.png)
(1.02 MB 768x1024 00488-2346857153.png)
>>2333 The animation was made with After Effects utilizing a collection of stable diffusion assets.
Is anyone else getting "RuntimeError: Cannot add middleware after an application has started" when trying to run webui-user? I've run Abyss Orange Mix in colab successfully a couple times, only for it to give me this no matter what model I use. Anybody know what's up?
>>2332 From earlier in this thread it seems like the hashes from the combinations won't necessarily match; I followed someone else's recipe and got different result hashes from them as well. I don't know if there's some element of randomness to the mix, but you should get similar results, even if the hashes aren't the same, if you combined the blocks in the same way. What really helps narrow down the best is doing an X/Y grid and mixing parameters like CFG scale and denoising (for the 2nd step upscale). You can find the combination that starts making good results that don't overfit the prompt and get too weird, and then gradually refine that.
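For anyone without the webui's X/Y plot script handy, the same comparison can be scripted by hand; a minimal sketch with diffusers, assuming from_single_file support (the model path and prompt are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "merged_model.ckpt", torch_dtype=torch.float16  # placeholder mix
).to("cuda")

# Sweep CFG scale with a fixed seed so only one variable changes per image.
for cfg in (6, 8, 10, 12):
    gen = torch.Generator("cuda").manual_seed(1234)
    img = pipe("1girl, fat, sundress", guidance_scale=cfg,
               num_inference_steps=25, generator=gen).images[0]
    img.save(f"cfg_{cfg}.png")  # compare the row like an X/Y grid
```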
Bless this thread and bless technological advancements
>>2345 A time machine would be nice.
(685.54 KB 768x768 00074-2288139844.png)
(708.06 KB 768x768 00077-863917063.png)
(764.19 KB 768x768 00078-863917066.png)
(668.51 KB 768x768 00081-4072069323.png)
(791.88 KB 768x768 00082-4072069324.png)
(706.87 KB 768x768 00087-2805505153.png)
>>2337 Just tried out stable diffusion for the first time using this blend as a starting point. Some surprisingly good results!
(781.24 KB 768x768 00090-2447105477.png)
(744.66 KB 768x768 00092-1897881698.png)
(702.85 KB 768x768 00095-1437086582.png)
(709.79 KB 768x768 00099-1928472217.png)
(697.53 KB 768x768 00100-3342130439.png)
(687.29 KB 768x768 00115-4082530501.png)
(781.80 KB 768x768 00118-329950620.png)
(637.85 KB 768x768 00121-1503817216.png)
(658.05 KB 768x768 00122-1503817219.png)
(757.34 KB 768x768 00123-1750315407.png)
(655.78 KB 768x768 00125-3129021359.png)
(617.50 KB 768x768 00129-3299579717.png)
Darker complexions are pretty challenging to get out though. Overly cartoonish
(566.34 KB 696x696 00147-3204369413.png)
(809.23 KB 768x768 00148-631359665.png)
(688.01 KB 768x768 00151-574036993.png)
(730.28 KB 768x768 00153-3059706987.png)
(680.07 KB 768x768 00164-2907623990.png)
(646.61 KB 768x768 00165-4013914962.png)
Also tried doing only 30% Dream/BGM and 70% forthemasses
(700.62 KB 768x768 00166-4013914966.png)
(652.96 KB 768x768 00167-3365343725.png)
(586.97 KB 768x768 00169-1781337115.png)
(743.27 KB 768x768 00171-3542786712.png)
(747.03 KB 768x768 00172-337524124.png)
(683.71 KB 768x768 00179-2076059867.png)
>>2348 >>2350 Stunning results!
(952.33 KB 512x784 00056-1928363259.png)
(880.74 KB 512x784 00058-1928363261.png)
(842.73 KB 512x784 00061-1928363264.png)
(898.07 KB 512x784 00062-1928363265.png)
(904.17 KB 512x784 00064-1928363267.png)
(554.19 KB 512x512 00013-3762613778.png)
(585.00 KB 512x512 00017-3256666275.png)
(826.55 KB 512x784 00036-4174376514.png)
(870.92 KB 512x784 00039-601288336.png)
(840.45 KB 512x784 00043-2119810010.png)
(783.52 KB 512x784 00045-1376552392.png)
(881.03 KB 512x784 00047-833883201.png)
(964.08 KB 512x1000 00308-246886840.png)
(941.18 KB 512x784 00054-3052177846.png)
(923.65 KB 512x784 00055-3052177847.png)
(1.32 MB 768x768 00319-2383917413.png)
(1.35 MB 768x768 00333-843242271.png)
(1.52 MB 768x768 00314-1953844369.png)
(788.32 KB 512x768 00382-2935491928.png)
(823.72 KB 512x768 00385-225394744.png)
(1.22 MB 768x768 00336-3540090308.png)
(1.43 MB 768x768 00334-3386616513.png)
(794.84 KB 512x768 00393-616081425.png)
(905.72 KB 512x768 00409-1998105269.png)
>>140849 The RL ones are getting pretty cringy.
>>2359 I agree. I think the AI is going the opposite way of "beautiful girl" as much as they possibly can.
Can one of the people making realistic girls try making girls with an Echo-style body/size?
(918.67 KB 512x848 00118-434107472.png)
(976.07 KB 512x848 00116-2791927468.png)
(861.80 KB 512x848 00123-3125424038.png)
(773.47 KB 512x848 00155-3420445995.png)
(839.92 KB 512x848 00154-1175537807.png)
(922.61 KB 512x848 00128-2691327587.png)
Some variations around the "black girl cosplay" theme. Enjoy.
(690.22 KB 512x848 00163-3872556854.png)
(785.03 KB 512x848 00161-3090552994.png)
(823.78 KB 512x848 00165-1946427459.png)
(974.04 KB 512x848 00175-1519240128.png)
(889.40 KB 512x848 00176-1519240129.png)
(1021.95 KB 512x848 00177-996654751.png)
>>140883 That's precisely what I want. Reality isn't perfect. Thanks for the compliment. ^ ^
If we wanted to see realistic girls we would be on /bbw/. But this is the DRAWN board. Make a new thread for this shit on /bbwalt/.
>>2365 Agreed. The "hyper-realistic" shit doesn't fit /bbwdraw
>>140891 I understand where you're coming from and agree but the issue is less about the quality and more about the subject matter. Even if they were posting perfect 10/10 fat girls if they're realistic they don't belong on the drawing board. We would be on /bbw/ if we wanted realism.
I'd say the Hyper realistic stuff should go on /ee/. It's not quite Real and not quite Drawn.
(536.14 KB 512x704 00218-1417336008.png)
(483.06 KB 512x704 00209-350671495.png)
(501.20 KB 512x704 00219-1023354348.png)
(731.67 KB 512x848 00268-3562234206.png)
(897.74 KB 512x848 00247-3625593670.png)
(1.04 MB 512x848 00273-645071837.png)
Trying different, more extreme bodyshapes. Kudos to those who encouraged me, there will be more to come.
(586.31 KB 512x848 00277-2140277268.png)
(759.56 KB 512x848 00301-1395134481.png)
(686.45 KB 512x848 00300-1395134480.png)
(886.82 KB 512x848 00303-74023421.png)
(705.85 KB 512x848 00296-934416441.png)
(972.17 KB 512x848 00298-1619154016.png)
somebody went overboard at the trough, fell into a food coma, and woke up sweaty, horny, and fatter.
>>2311 Those last three look like Foxy Roxie's younger sister
Does anyone know a way to pull out the supported tags from a model? I have to imagine that when it's trained, what it responds to is embedded somewhere; for stuff like specific tags or artist names, surely there's a way to extract this? Anyone know how?
(2.95 MB 1920x1920 08972.png)
(2.84 MB 1920x1920 08973.png)
>>2373 Love it. Ran it through img2img to upscale & add detail.
>>2375 Doesn't seem possible, although this would be super nice to have.
>>2372 Nice pics. What's the step count? I've been using DPM++ 2S a Karras because it does a good job in few steps (15-25) and has a decent chance of producing OK hands.
(767.79 KB 512x728 00090-3845567822.png)
(790.20 KB 512x728 00089-3845567821.png)
(903.78 KB 776x776 00242-2290626025.png)
(1021.39 KB 776x776 00245-2142892259.png)
(1.07 MB 776x776 00243-2290626026.png)
(1.02 MB 776x776 00244-2142892258.png)
Semi-realistic and realistic tests
(680.83 KB 512x896 00310-778969254.png)
(864.58 KB 512x896 00321-1760258440.png)
(997.95 KB 512x896 00342-3616186420.png)
(1.09 MB 648x808 00360-205281501.png)
(1.13 MB 648x808 00366-2592858484.png)
(1.20 MB 768x784 00354-4265327696.png)
(972.48 KB 648x808 00368-1930074346.png)
(986.16 KB 696x808 00373-2130169454.png)
(1.10 MB 648x808 00367-2592858485.png)
(1.00 MB 696x808 00374-1095328332.png)
(1.22 MB 784x808 00378-3311888133.png)
(1.52 MB 904x808 00377-487567917.png)
(1001.82 KB 616x808 00385-1844518507.png)
(926.18 KB 616x808 00386-3306197801.png)
(1.11 MB 784x808 00379-3311888134.png)
(1.09 MB 784x808 00381-2883190537.png)
(874.02 KB 616x808 00387-3306197802.png)
(996.03 KB 616x808 00388-3692186024.png)
Last ones for the time being, enjoy.
>>2375 That would be interesting indeed. The only way I found to see whether a tag was effective or not was to reuse the same seed and generate with a slightly different prompt each time; takes hours to experiment and fine-tune.
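One partial trick for the tag-extraction question, assuming the model is a .safetensors file saved by a kohya-style trainer: the format starts with a JSON header, and training metadata such as ss_tag_frequency sometimes survives there. Plain merged .ckpt files won't carry this, so treat it as a long shot:

```python
import json
import struct

def safetensors_header(path):
    # A .safetensors file begins with an 8-byte little-endian header
    # length, followed by that many bytes of JSON.
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(n))

# Placeholder filename; works on any .safetensors file.
meta = safetensors_header("some_model.safetensors").get("__metadata__", {})
print(meta.get("ss_tag_frequency", "no training-tag metadata present"))
```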
(93.60 KB 512x512 Oop.jpg)
(93.65 KB 512x512 oop2.jpg)
it's kind of hard to believe how much fun I'm having with this.
>>2382 We believe you, anon. Make sure your AI doesn't lump transexual male faces in together with real female faces, or else your females may start looking ugly.
(107.46 KB 512x512 oop3.jpg)
>>2383 what a strange thing to say but, thanks for the tip, I guess? lol wtf i swear to god people nowadays
>>2382 Obviously this joke was aimed at AI technologies, and not at ugly transexuals that don't resemble girls. All the transexuals wish they were cute little white girls, but it's not possible, is it? That is why you have to believe. That is why bad things happen to the ugly. That is why the truth will set you free, and love will find a way to your heart.
>>2383 >>2385 Can you go be a schizo somewhere else?
>>2386 Sorry.
>>2388 what model did you use for these?
(1.07 MB 768x768 00075-316299377.png)
(675.56 KB 512x672 00057-261152509.png)
(651.21 KB 512x672 00059-1393624150.png)
(1.09 MB 768x768 00073-2293524887.png)
Korean doctor
(703.50 KB 512x592 00161-3576938757.png)
(731.39 KB 512x592 00159-3576938755.png)
(706.12 KB 512x592 00158-3576938754.png)
(1.02 MB 592x768 00152-3913477580.png)
(880.65 KB 512x768 00134-1606931910.png)
(843.43 KB 512x768 00128-1606931904.png)
Besties
(814.76 KB 512x752 00191-3890563653.png)
(954.85 KB 552x752 00208-2802156965.png)
(970.38 KB 552x752 00201-120320570.png)
(905.66 KB 512x752 00192-3890563654.png)
(1004.25 KB 552x752 00194-454584352.png)
(932.91 KB 552x752 00196-2538348302.png)
Fantasy
More extreme body shapes.
(595.38 KB 552x752 00222-2968630837.png)
(623.13 KB 552x752 00221-2968630836.png)
(756.24 KB 552x696 00232-154157950.png)
(577.30 KB 552x752 00213-3099976722.png)
(799.48 KB 552x696 00230-551118208.png)
S'more
(562.51 KB 552x696 00245-2211991594.png)
(541.42 KB 552x696 00244-2148625700.png)
(597.09 KB 552x696 00235-1229982075.png)
(647.21 KB 552x696 00241-1064188309.png)
(577.96 KB 552x696 00234-604149375.png)
(767.83 KB 528x720 00254-2219784909.png)
(679.52 KB 552x696 00251-2343723400.png)
(726.89 KB 552x696 00247-2646861939.png)
(934.49 KB 528x720 00253-2219784908.png)
(821.84 KB 552x696 00248-2646861940.png)
(1.20 MB 768x768 00255-3493146396.png)
My take on anime style BBWs
>>2397 Is there any way you can make the art more creepy and ugly with the AI program? This is interesting to me.
(1.47 MB 768x768 00011-247564985.png)
(1.42 MB 768x768 00012-247564986.png)
(1.40 MB 768x768 00015-2434440522.png)
(1.55 MB 768x768 00016-2434440523.png)
>>2398 Obviously. SPOILER FOR NIGHTMARE FUEL
>>2399 la creatura...
>>2399 I'm sorry that I can't sarcasm, but I am glad that these are not only better art but also better porn.
>>2399 The first one reminds me of a Kewpie Doll.
>>2403 What is inpainting?
>>2404
>https://rentry.org/drfar
NSFW images, but a more helpful guide:
>https://rentry.org/inpainting-guide-SD
May I please request a very obese Bayonetta or Cremia from Majora's Mask
>>2407 I appreciate it mayne
(362.90 KB 1400x933 Stella'sInterview.jpg)
The origin story of the cowgirl I posted earlier...keep going or nah?
(415.66 KB 1400x933 stellasfirstday.jpg)
(22.20 KB 481x592 Mr_Burns_evil.jpg)
>>2411 Oh, you better believe there will be weight gain. But I'm kind of a slow-burn kind of man... so it will be slow... over multiple days and panels...
>>2412 Keep going, it's good stuff
(911.63 KB 560x840 00044-1362934153.png)
(936.30 KB 560x840 00030-580174688.png)
(804.12 KB 512x808 00270-371121364.png)
(922.36 KB 560x712 00020-2166341317.png)
(1.00 MB 560x840 00025-1712943020.png)
(1.02 MB 560x840 00036-834788277.png)
>>2406 Loosely inspired from this request
(889.45 KB 560x840 00034-1674144516.png)
(989.88 KB 560x840 00035-1674144517.png)
(933.04 KB 560x840 00032-2066487830.png)
(1015.25 KB 560x840 00033-2066487831.png)
(918.06 KB 560x840 00039-3035860368.png)
(964.50 KB 560x840 00037-834788278.png)
Enjoy^ ^
>>2402 I had to google it... Shockingly true lol
>>2415 Damn, is the first one also AI generated? And are these all forthemasses?
>>2417 All AI generated, since it's the thread's topic ^ ^
>And are these all forthemasses?
Yes, but I used like a shit ton of loras and textual inversions
(697.99 KB 3840x3072 00034.jpg)
(237.66 KB 1536x1920 00046.jpg)
(282.17 KB 1536x1920 000042.jpg)
(684.13 KB 3072x3840 000039.jpg)
(633.73 KB 3072x3840 00043.jpg)
(838.99 KB 3072x3840 00042.jpg)
Random stuff.
>>2419 Finally, something good after an endless stream of creatures from the namefagging retard.
>>2354 I just wanted to ask: what kind of configuration did you use to get the results in the last two images? I'm still pretty new to all of this and still trying to figure out the fundamentals, but I'm slowly learning. Previously I just downloaded and set up the basic NovelAI and used it blindly, but now I kinda wanna understand how to experiment and stuff. Thanks in advance.
(1.25 MB 768x768 00138-3892775285.png)
(1.33 MB 768x768 00139-3892775286 (1).png)
(1.26 MB 768x768 00144-2569555105.png)
(1.33 MB 768x768 00139-3892775286.png)
(1.40 MB 768x768 00150-124952817.png)
(1.49 MB 768x768 00151-1308667700.png)
Playing with random tags
(1005.26 KB 512x768 00188-273245551.png)
(992.24 KB 512x768 00189-273245552 (1).png)
(1.40 MB 768x768 00153-2505301979.png)
(1.50 MB 768x768 00157-2505301983.png)
(1.50 MB 768x768 00156-2505301982.png)
(1.47 MB 768x768 00181-3621221706.png)
(884.92 KB 768x768 00155-2505301981.png)
(987.35 KB 512x768 00191-1078280581.png)
(1.01 MB 512x768 00199-1280670174.png)
(1001.79 KB 512x768 00201-2963737263.png)
(1.02 MB 512x768 00202-2088030943.png)
(583.86 KB 512x768 00211-1355404618.png)
(606.03 KB 512x768 00209-1269673382.png)
(824.57 KB 512x768 00204-3075693632.png)
(873.76 KB 512x768 00206-680401445.png)
Some surprising outputs
(393.39 KB 512x768 105294795_p0.png)
(447.34 KB 512x768 105294795_p2.png)
(371.22 KB 512x768 105370379_p1.png)
(392.57 KB 512x768 105370379_p3.png)
found some good ones on pixiv, uploader didn't say what he used
(631.69 KB 512x768 00137-3655755554.png)
(572.24 KB 512x768 00005-943347680.png)
(639.41 KB 512x768 00023-2559612474.png)
(591.02 KB 512x768 00015-1786810502.png)
(503.51 KB 512x768 00051-1819684830.png)
(334.20 KB 512x512 00469-2719881529.png)
Some experiments as I try to nail down prompts / styles / checkpoint recipes. I want something I'd be comfortable doing a fairly long series with, so I really want to be sure before I start generating thousands of images per scene just to get one that can be hammered to a fine sheen. Open to suggestions. Favorite recipe so far is Bigger Girls V2 mixed with Chubby Girls Golden at .5 for variety, since bigger girls is a bit lacking, then diluted with Abyss Orange Hard at .3.
Maybe it's asking for too much, but the current issue is that when you dilute checkpoints like Bigger Girls and Chubby Girls Golden, eventually their "average" body sizes go down enough that hitting the extreme end of the range is difficult. So the workflow usually involves making thousands of 20-step "roughs" in BG or CGG to get extremely fat bodies, then sending those to img2img with a merged checkpoint for variety. Cumbersome, but if you want both variety AND extreme fats, I guess that's the cost ATM.
(498.57 KB 512x768 tmp2srypog1.png)
(381.55 KB 512x768 tmp7tczb96r.png)
(435.54 KB 512x768 tmp39ppu11j.png)
(411.94 KB 512x768 tmpctlmfthb.png)
(440.11 KB 512x768 tmpef97wtuz.png)
(482.08 KB 512x768 tmpw7jjq50y.png)
Reminder to make feederism audio vocaroos via elevenlabs.io
>>2431 Stop with the RL shit.
So the guy who made the biggergirls model just released a photorealistic model: https://civitai.com/models/10380/bigger-girls-photorealism haven't tried it yet. Seems promising though?
>>2431 >>2432 Who is it though?
(464.65 KB 512x768 tmpr7sz3u7i.png)
(480.63 KB 512x768 tmp5sekhrci.png)
(504.76 KB 512x768 tmpch_kes02.png)
(474.00 KB 512x768 tmpbahvencg.png)
(453.21 KB 512x768 tmpcjq68ixq.png)
(442.07 KB 512x768 tmpyuakftwn.png)
>>2421 >>2390 and >>2288 pretty much sum me up. For experimentation, you just have to be willing to play with the parameters to find what works for you. In particular, it's good to get a feel for what the denoising slider does and how it can make your inpainting & upscaling & img2img-ing better. And of course use guides (>>2232) or ask questions.
>>2428 >>2429 Your research is paying off, cause these look great. If you're able to, try generating at higher resolutions with highres fix. Sometimes I get issues like crap anatomy/faces/composition that get fixed with literally no input on my part just by generating higher res, so maybe you don't need to spend as long generating & tweaking as you think.
>>2435 Good stuff, what's the model?
>>2436 I am lost with the guides; can you explain to me simply how you arrive at this result, and with what, please?
>>2436 If I could one day do the same as you, that would be wonderful! O_O
(301.78 KB 512x512 00018-3680144887.png)
(323.69 KB 512x512 00101-98756337.png)
>>2437 Barclay's Stable Diffusion for Dummies
Assuming you have a recent Nvidia GPU and Windows, just 8 steps to get started:
1. Download and install Python: https://www.python.org/ftp/python/3.10.10/python-3.10.10-amd64.exe
2. Download and install Git: https://github.com/git-for-windows/git/releases/download/v2.39.2.windows.1/Git-2.39.2-64-bit.exe
3. Download and extract AUTOMATIC1111's WebUI to its own folder: https://github.com/AUTOMATIC1111/stable-diffusion-webui/archive/refs/heads/master.zip
4. Download at least one checkpoint to the webui's models/stable-diffusion folder (start with Bigger Girls V2): https://civitai.com/api/download/models/6327?type=Pruned%20Model&format=PickleTensor
5. Download at least one VAE to use for color correction if you are merging checkpoints or using checkpoints without a baked-in VAE (any work fine really, you just need one, or you will get purple splotch disease or extremely faded or sepia tones):
Xpero End1ess VAE (vibrant colors): https://civitai.com/api/download/models/7307?type=VAE&format=Other
Anything v4 VAE (standard anime color scheme, recommended): https://huggingface.co/andite/anything-v4.0/blob/main/anything-v4.0.vae.pt
Stable Diffusion VAE, photorealism colors (must download to separate stable-diffusion-webui/models/VAE folder): https://huggingface.co/stabilityai/sd-vae-ft-ema-original/resolve/main/vae-ft-ema-560000-ema-pruned.ckpt
6. Save the VAE in the models/stable-diffusion folder AND select it for use with all models after you run the webui.
7. Run webui-user.bat. Once it finishes loading, head to 127.0.0.1:7860 in your browser to access the web ui.
8. Don't forget to set up your VAE as instructed earlier (Settings tab - Stable Diffusion - SD VAE). You can also check 'Ignore selected VAE for stable diffusion checkpoints that have their own .vae.pt next to them'; this will make the selected VAE only be used if the checkpoint you are generating from does not have one baked in or already downloaded right next to it. I wouldn't recommend using this option: if you use many models and generate between them, the color scheme may not be consistent. Picking one from the drop-down and using it for all generations, regardless of whether the checkpoint has a baked-in VAE or a separate VAE with it, is usually best in my opinion. Make sure to hit "Apply and Restart WebUI" for the change to take effect.
HOW TO PROMPT:
Start with simple positive prompts and build from there. A typical positive prompt might look something like this:
masterpiece, best quality, highres, 1girl, (chubby cute teenage anime cowgirl redhead standing in front of a desk), (beautiful green eyes), (cow ears), (cow horns), (medium breasts), (blank expression), jeans, (white t-shirt), (freckled face), deep skin, office lighting
PROMPT STRUCTURING:
masterpiece, best quality, highres, 1girl - This is telling the model primarily (by putting it at the front of the prompt, i.e., weighting) to make the generation resemble art tagged as masterpiece, that was originally uploaded in high resolution, and was specifically tagged as 1girl, meaning it was tagged on a Danbooru as having only one female subject within frame. (Add the Danbooru autocomplete extension for help with learning those.)
(chubby cute teenage anime cowgirl redhead standing in front of a desk) - putting this in brackets tells the model to focus on this specific grouping of tokens more than those that are not in brackets. Emphasis.
This is also where you typically put the main subject of the generation, in the form of ADJECTIVE DESCRIPTOR FLAVOR SUBJECT LOCATION ACTIVITY.
(beautiful green eyes), (cow ears), (cow horns), (medium breasts), (blank expression) - these are also in brackets, but behind our main subject. This helps the model apply and emphasize these features AFTER the main subject is 'visualized' in frame by the AI in the first 10 steps or so. Applying these before the main subject could result in TOO much emphasis, i.e. cow ears everywhere, eyes on things that shouldn't have eyes, eyes and ears not aligned to the characters because they were 'drawn' first, etc.
jeans, (white t-shirt), (freckled face), deep skin, office lighting - we prefer jeans, but we do not mind if they are otherwise; same with office lighting. If the model decides hey, maybe shorts and candlelight, hey, let the boy try.
NEGATIVE PROMPTING:
(lazy eye), (heterochromia), lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, bad feet, extra limbs, (multiple navels), (two navels), (creases), (folds), (double belly), thin, slim, athletic, muscular, fit, fat face, blemished stomach, rash, skin irritation
These are all things we DON'T want to see, and we can use emphasis here as well. You don't have to use a negative prompt, but it's often quite helpful for achieving what you're going for. In this example, I wanted to make sure that the subject would not be described as muscular or athletic.
Hit generate and watch the magic happen.
Experiment and find your favorite sampler. I tend to favor the three DPM++ options. All samplers vary in speed, quality, amount of steps required for good results, variety, etc. It will take some experimentation to find your favorites, and you may need to use different ones depending on context (if generating from scratch or img2img, for example). Note that the original base model was trained on DDIM, so you may want to play with that one at least a little bit, to get an idea of how the model generated images by default, before we had an array of other samplers to choose from.
CFG scale refers to how closely the model should try to follow the prompt. A lower scale of 6-8 will produce more variety but may not follow the text prompt as closely as a higher scale of 9-11. Higher than 11 (13+) can 'overcook' an image, and lower than 6 (1-3) can produce messy, blurry, unfocused generations.
Tick the 'Extra' checkbox and drag the variation strength to .5-.7 for interesting inspirations. Especially effective with simple prompts only describing a subject, not what they are doing or where they are.
Edited last time by admin on 03/24/2023 (Fri) 01:47:39.
(338.38 KB 512x512 00151-675972307.png)
(510.78 KB 512x768 00058-2096888230.png)
>>2437
EXTRA CREDIT: Merge a checkpoint to keep most of the parameters / general style of the first but add variety and realism from the second.

Bigger Girls V2 (referred to as BGV2) is practically essential for our workflow, since getting ssbbw and usbbw sizes is easy with it. However, it suffers from same-face syndrome, has a number of low-quality, low-effort art pieces included in the training data, and lacks a proper assortment of backgrounds and locations. Being able to merge checkpoints essentially means you can create custom recipes and fine-tune what the model generates. When you see people saying they are using Bigger Girls with 30% this or that, they mean they have merged checkpoints using the checkpoint merger tab. After selecting two checkpoints, the slider indicates how much of the second checkpoint (B) you want represented in the merge compared to the first one (A). So if you leave the slider at the default .3, you are creating a 70% A / 30% B mix. Again, if you select Bigger Girls V2 as checkpoint A and Abyss Orange as checkpoint B, leave everything at default and merge, you get a 70% BGV2 / 30% AO mix. The slider runs from 0 to 1, with every increment representing a percentage of B: .15 would be 15%, .5 would be 50%, etc. (If you want to see what the merger is actually doing under the hood, there's a short sketch of the math at the end of this post.) You can then merge that merged checkpoint with another at a lower percentage to add even further variety, but if you keep mixing checkpoints that are dissimilar enough, you do start to get an 'overcooked' recipe where generations are unfocused, blurry, don't make sense, etc.

There are three schools of thought here.

"start fat, stay fat"
That is, always use a model that more or less defaults to the body size you want without having to strangle it with a prompt or abuse GIMP's warp tool and img2img. Think mostly BGV2 but with a little sprinkling of other checkpoints. What if we take BGV2 and dilute it just 30% with, say, a model like Abyss Orange NSFW (https://civitai.com/api/download/models/5036?type=Model&format=SafeTensor)? Well, the merged checkpoint would still be primarily fat-focused, but you've given it way more depth to pull from when generating. Better faces, better backgrounds, better colors. It's like the difference between giving a cook an entire spice rack or just salt and pepper. Well, then you could go even further, right? You take the 70/30 mix and then dilute it 15% further with a model like, say, Chubby Girls Golden (https://civitai.com/api/download/models/4163). Then, any overall/average/median size reduction hit you took from mixing in Abyss Orange (which was not as fat-focused) is mitigated and fat is reinforced. Not to mention, you're adding even more depth to pull from for generations. You can see how there is a checkpoint and checkpoint-recipe rabbit hole. If civitai.com is anything to go by, eventually there will be a specific checkpoint for EVERYTHING, and the recipes will be endless.

"start chubby, get fat"
Say you are on a checkpoint recipe binge like I was, and eventually you dilute BGV2 so far that really it's only 20% or so of the merged checkpoint. You start finding that, while you really LOVE the aesthetic / colors / faces / etc, even when you prompt (massive huge belly) or (morbidly obese), your merged model spits out a barely overweight teenage diva.
Now in this instance, you could take your super lovely but just never fat enough diva, run her through GIMP, use the warp tool (Tools - Transform Tools - Warp Tool) to expand her tiny potbelly just a tad and push out her tits and ass just a touch, then run her through the exact same model again in img2img. With a combination of low (.1-.3) denoising strength (to keep the generation CLOSER to the original) and high (.55-.7) variation (to give the model a chance to make her bigger or wider in some way, plus additional prompts like super wide, thick thighs, etc), doing this process over and over again CAN result in superfats that look exceedingly good. The problem is this method is cumbersome and slow, and generally starts getting cooked after about 3-4 loopbacks.

"start simple, get creative"
Another strategy is to run a bunch (thousands) of generations on a base model like BGV2, which only requires simple text prompts, at low steps (20) and a low CFG scale (7-8), cherry-pick good generations, then switch to a merged checkpoint, up the CFG scale to 9 or 10, and run those cherry picks with a much more detailed and varied text prompt, higher variation setting, different samplers, textual inversions, LoRA models, etc. Personally this seems to be the best of both worlds. BGV2 will often get the general gist of what I want but with bad faces/simple backgrounds, which are then more or less corrected in a merged checkpoint via img2img. This has the least manual GIMP work (still some), but there's a lot of sorting through trash.

Some interesting checkpoints you might consider for recipes:
https://civitai.com/models/6231/xpero-end1ess-model
https://huggingface.co/WarriorMama777/OrangeMixs
https://civitai.com/models/3748/chubby-girls-golden
https://civitai.com/models/3449/anylactation
https://huggingface.co/andite/anything-v4.0/blob/main/anything-v4.5.ckpt
https://civitai.com/models/3627/protogen-v22-anime-official-release
https://huggingface.co/eimiss/EimisAnimeDiffusion_1.0v

INPAINTING
Inpainting refers to masking an area of an image, then generating from a prompt within the mask while taking a small area around the mask into consideration. Think a perfect generation with a horrendous face: mask the face, then craft a new text prompt about a beautiful anime face and boom, you've got a now-good face without changing the already-good body/background. You can also mask out objects and remove or replace them this way. Fix hands, fingers, blemishes. The possibilities with inpainting are huge.

img2img
In addition to using a text prompt, you can use the img2img feature to supply an image prompt. This will tend to make the generations at least vaguely similar to the image prompt in some way, depending on CFG scale and variation settings. This can be especially helpful if you just can't seem to guide a model to a specific pose / scene / subject with a text prompt alone. Denoising strength in the context of img2img is most helpfully explained like this: the lower the denoising strength, the more like the image prompt the generations will be; the higher it goes, the more generations will deviate from the image prompt in fun and unexpected ways while still maintaining some aspect of the original, especially in combination with the variation setting accessed from the 'Extra' checkbox.

Download at least one textual inversion package to the webui's embeddings folder.
Textual inversion is somewhat new, but essentially these are prompt injections that push generations towards a specific POV, scene, or pose without having to manually craft a specific prompt to do so. People are sharing new textual inversions on civitai all the time.
https://civitai.com/models/4218/corneos-cowgirl-position-embedding-for-anime
https://civitai.com/models/4725/corneos-pov-bound-wrists-missionary-embedding
https://civitai.com/models/5811/corneos-spitroast-threesome-ti-embedding
https://civitai.com/models/6005/corneos-ball-gag-ti-embedding
https://civitai.com/models/4463/corneos-pov-oral-embedding
https://civitai.com/models/4475/corneos-pov-paizuri-embedding
https://civitai.com/models/5371/corneos-side-view-deepthroat-ti-embedding
https://civitai.com/models/4551/corneos-arm-grab-doggystyle-embedding
https://civitai.com/models/5202/corneos-covering-breasts-ti-embed-two-hands-version
https://civitai.com/models/5203/corneos-covering-breasts-ti-embed-one-arm-version
https://civitai.com/models/5241/corneos-covering-breasts-ti-embed-arms-crossed-version

Download at least one LoRA model to the webui's models/lora folder: https://civitai.com/api/download/models/10069
LoRAs are small add-on weight sets trained to supplement checkpoints without requiring a merge; they guide generations like textual inversions do, and they're scalable. Basically textual inversion+. Fairly new feature. This example makes adding milking machines/cups/pumps/hoses to a scene much more reliable.

Download and install Resynthesizer (content-aware fill: Filters - Enhance - Heal Selection) for GIMP. Find your plugins folder under Edit - Preferences - Folders - Plug-ins, then extract everything from the folder inside this zip into it (but not the folder itself): https://github.com/pixlsus/registry.gimp.org_static/raw/master/registry.gimp.org/files/Resynthesizer_v1.0-i686.zip
Have almost the perfect generation for img2img, but a certain defect keeps getting brought over? Lasso select and content-aware fill, boom: now instead of having to edit out the defect in all subsequent gens, it's gone from the get-go! Or use it to touch up near-perfect generations that just need an extra arm, hand, person, or object removed.

Add the wildcards extension (Extensions tab - Available - wildcards) and start using __nameofwildcardfile__ (no file extension, surrounded by double underscores) in your prompt to get effortless creativity.

Check other people's generations for good prompts/keywords/settings with the PNG Info tab. You can also send it directly to img2img, with the prompt (if it was generated in Automatic1111)! If an image wasn't generated in Automatic1111, you can still use img2img's 'Interrogate CLIP' to get a general idea of the text prompt parameters.

Steal the Manga Master font if you want to make anime panels: https://www.dropbox.com/s/71rdeje512z9wwh/MangaMaster%20BB%20-%20by%20Blambot.rar

That's about all I can think of ATM.
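Oh, and the promised merge sketch: at its core, a weighted-sum merge is just a per-tensor average of the two models. A rough Python sketch of the idea; the file names are made up, and a real merge needs more care (keys that only exist in one model, baking in a VAE, etc). Also remember plain .ckpt files are pickles, so only load ones you trust:

# Rough sketch of a 70/30 weighted-sum checkpoint merge.
import torch

alpha = 0.3  # the merger tab's slider: fraction of checkpoint B in the result

a = torch.load("biggerGirlsV2.ckpt", map_location="cpu")  # checkpoint A (hypothetical names)
b = torch.load("abyssOrange.ckpt", map_location="cpu")    # checkpoint B

# Most SD checkpoints keep their weights nested under 'state_dict'.
sd_a = a.get("state_dict", a)
sd_b = b.get("state_dict", b)

merged = {}
for key, tensor_a in sd_a.items():
    tensor_b = sd_b.get(key)
    if tensor_b is not None and tensor_b.shape == tensor_a.shape:
        merged[key] = (1.0 - alpha) * tensor_a + alpha * tensor_b
    else:
        merged[key] = tensor_a  # keep A's weights where B has nothing matching

torch.save({"state_dict": merged}, "bgv2-70_ao-30.ckpt")

That (1 - alpha) * A + alpha * B line is the whole "70% A / 30% B" story; everything else is bookkeeping.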
Edited last time by admin on 03/27/2023 (Mon) 21:00:54.
(1009.33 KB 512x848 00063-927521050.png)
(1003.54 KB 512x848 00064-927521051.png)
(938.54 KB 512x768 00072-3375272164.png)
(935.43 KB 512x768 00083-3652417023.png)
(993.34 KB 512x768 00080-1491265331.png)
(976.79 KB 512x768 00084-965670374.png)
>>2435 >>2441 Amazing stuff !
(884.44 KB 512x768 00098-1359196414.png)
(999.83 KB 512x768 00088-3531634976.png)
(929.71 KB 512x768 00099-1007340569.png)
(873.62 KB 512x768 00100-1007340570.png)
(841.34 KB 512x768 00106-1656774990.png)
(983.10 KB 512x768 00086-1884288937.png)
>>2439 >>141928
Lots of good resources here; when there ends up being another thread, this should be reposted.
>>2437
I'm assuming you got SD running. https://gofile.io/d/EdWyfm that's my model. It's just Bigger Girls & AnythingV4, VAE file not included. Because of the merge it doesn't do extreme sizes as well as base Bigger Girls, and because of my training it's biased towards big hips & thighs. But you can control proportions with tags, emphasis, & negatives like normal.
I use this to generate (but after reading the guide I need to reevaluate):
DPM++ 2S a Karras on 15-30 steps
Highres fix on latent or latent(nearest-exact) and denoising on 0.45-0.6; lower looks better but may have artifacting or errors
CFG scale on 8-12
AnythingV4 VAE
"masterpiece, best quality, highres" at the start of every positive prompt
"(lowres, blurry:1.1), bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, artist name" at the start of every negative prompt
Same settings used for inpainting and img2img.
If you need to know what tags to use, you can download images from this thread and run them through PNG Info to see the generation info. And check out the guide that just got posted, it's pretty good.
(3.40 MB 1920x1080 Test.mp4)
slightly redone for funsies
>>2446 Original pics?
(906.37 KB 512x1000 00645-2260256676.png)
(915.15 KB 512x1000 00663-3249544910.png)
(888.58 KB 512x1000 00659-3571128624.png)
(944.78 KB 512x1000 00658-3571128623.png)
(1.04 MB 512x1000 00682-1078489171.png)
(1.02 MB 512x1000 00671-1156973430.png)
Stuck in doorway
>>2439 >>2440 Thank you so much! Your guide is great! :) But my images are bland. I followed what you said, but my images lack color and are not as pretty as other people's... do you have an email where I can reach you? >>2441 >>2445
(871.59 KB 768x960 no vae.png)
(789.27 KB 768x960 anythingv3 vae.png)
(889.65 KB 768x960 SD vae.png)
(723.45 KB 768x960 orange vae.png)
(737.02 KB 3072x3840 002.jpg)
(603.28 KB 3072x3840 005.jpg)
>>2451
>color
Sounds like a VAE issue. Use whichever one your model recommends: usually the Stable Diffusion VAE for photorealism and the AnythingV3 VAE for anime drawings. You can also use extra prompts like "vivid color".
>>2452 Is that something I should merge with "BiggerGirlsV2"?
(381.34 KB 512x512 00053-694605199.png)
(302.94 KB 512x512 00057-1974254664.png)
>>2447 OC do not steal lol. Nah just playin, here ya go. >>2449 it's the TikTok remix of Turn It Up that blew up last year, from that old ass Busta Rhymes song. https://www.youtube.com/watch?v=xfKd5itj8K8 >>2451 glad you liked it. It sounds like you didn't get your VAE set up correctly. I'll update the guide with slightly clearer descriptions and more VAE options. And e-mail? bruh this is a chan, we don't do that. >>2453 No, you don't merge VAEs (well, technically you can bake them in during a checkpoint merge, not relevant here). VAEs are separate files that you can assign to specific checkpoints, or override and use one VAE for all checkpoints. Again, reference the guide where I talk about selecting the SD VAE.
>>2349 Which model did you use for this? The 00125 image is godtier IMO.
From the beloved Japanese restaurant slice-of-life anime "Bāgā no Ai", three busty young girls attempt to sell traditional American burgers while their stern but caring manager tries his best to market western food to an eastern audience. It's amazing what a difference one season can make in the animation of some shows. (inspired by the picture earlier in the thread with the lady holding a burger, in front of a table with burgers, with a TV with burgers on it. I ran it through image-to-image a few too many times and ended up with these lol)
Still figuring out some of the better ways to prompt right now, but I figured I'd share some of the high points and fine-tunes. Tossed in a couple of earlier ones that turned out nicely too, but I'm trying to go beyond just bashing others' prompts together.
Also these, more focused on 'growing out of clothes', but these were from when I was trying to force a non-merged model (100% Kotos, I think). 'Beefcake' as a framework tag is interesting, but I think I'll stick to the merged checkpoints for the immediate future.
>>2456 Season two with the Mendy's rival chain selling homegrown Wagyu beef steak in the life or death taste challenge was intense.
>>2452 So if I understood correctly, you are merging BGV2 with another checkpoint. You create the image, then go to img2img and use another checkpoint? Do you save the result and start again in a loop with another checkpoint? Is that right?
>>2452 And I downloaded "Anything V4", but it is in .pt format and not .ckpt, so I can't use it.
>>2461 You downloaded Anything v4's VAE color correction file, which should only be about 800mb. Here is Anything v4 checkpoint (full): https://huggingface.co/andite/anything-v4.0/blob/main/anything-v4.0.ckpt Here is the pruned version (basically same, half size): https://huggingface.co/andite/anything-v4.0/blob/main/anything-v4.0-pruned.ckpt The full 4.5 checkpoint: https://huggingface.co/andite/anything-v4.0/blob/main/anything-v4.5.ckpt The pruned 4.5 checkpoint: https://huggingface.co/andite/anything-v4.0/blob/main/anything-v4.5-pruned.ckpt
(11.17 KB 201x223 setting.png)
>>2461 The .ckpt files are model files, and VAE files are what models use to get their colors right and fix issues like poor details and desaturation. You need to use them both, or download a different model that has the VAE included, like the other guy suggested. The VAE should be in .pt format, and you put it in your models/VAE folder. After that you go to Settings and change your VAE settings to use that specific VAE, or to automatically select the appropriate one for the model (I prefer to select, bc auto usually gets it wrong for me)
>>2436 nah man, you got to tell us in more detail how you made that first image, like what model and, if you mixed models together, what percent. it's not cool to not tell us the formula
(2.22 MB 1225x1575 Prelude.png)
The introductory prelude to my forthcoming, entirely AI-made comic.
(664.77 KB 1400x1800 ResearchAndDevelopment-Ch1.jpg)
(703.67 KB 1400x1800 ResearchAndDevelopment-Ch2.jpg)
(2.22 MB 1225x1575 ResearchAndDevelopment-Ch3.png)
>>2439 >>2440 Thanks for taking the time to write such a great guide. I don't know if I'll ever use it or even generate a single image... but rest assured, I'll save this in a file, since I have a strong feeling in my gut that this is the kind of thing that could come in handy in the future. You are a gentleman and a scholar; hope you have a great day.
(497.38 KB 2824x2048 2s.jpg)
(359.28 KB 3660x1194 22s.jpg)
(195.56 KB 1252x1988 222s.jpg)
(148.32 KB 1896x896 2222s.jpg)
(278.33 KB 2886x1038 22222s.jpg)
(7.95 MB 2688x2688 08980.png)
>>2464 I'm sorry, but I don't think I understand. If you mean to say you want an example of a workflow, then here:
>>2445 the model is here. No further merges or mixes.

First, prompt what you want. We'll reuse the prompt of the image you mentioned. So the positive prompt will be:
"masterpiece, best quality, highres, office lady, dress shirt, (button gap:1.1), undersized clothes, (huge belly on table, fat:1.2), pants, black hair, brown eyes, in an office"
And the negative prompt will be:
"(lowres, blurry:1.1), bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, artist name, loli, child, chibi, monochrome, sketch"

Run a big batch of 512x images. It's hard to get something good on the first try, so running big batches at a low resolution and low step count is a very good idea. I'll do DPM++ 2S a Karras on 28 steps for 16 images. I like the first image a lot, so that's the one I'll use.

Turn on highres fix and adjust the upscale to whatever you want. I will do 1.75x for 896x, but you could go higher or lower depending on your PC and patience. I keep the steps set to 28, set the upscaler to "Latent (nearest-exact)", set denoising to anywhere between .45 and .55, and CFG scale to 8. Finally, I take the desired image's seed, punch it into the seed box, and re-generate it.

In img 2 of this post you can see an example of what the denoising slider does. Lower values make the upscaler respect the source img more and add better details, but may introduce artifacting or keep errors from the source. Higher values are good at preventing errors, but lack fine detail and may differ too much from the source. I bring this up because there is no single best value. It varies from seed to seed, tag to tag, and with your preference, and you'll probably want to try more than one value when you upscale (but .5 pretty much always works).

I'll send the .45 denoising image to inpaint. Inpaint will fix things you don't like by regenerating specific parts; it's basically img2img on an area you define. Don't do the hands or face yet. I used 0.6 denoising and "Just resize (latent upscale)", and the inpaint tags should be the same as what you used to generate, but changed to reflect what you want/don't want in your image. For example, I specified "black office shirt" instead of just "office shirt" to get rid of the white collar and white undershirt, and I also added "necklace" to the negatives to get rid of that weird blue thing on her tits.

It generates, and the changes are good, but her shirt is still a little off. I send it back to inpaint. This time I only want to work on her shirt and don't want to potentially undo the other changes I made, so I undo all my selections, select only the collar and the bottom of the shirt, and leave everything else untouched. I remove button gap from the prompt, which appeared to be fucking up her collar. Img 3 shows the inpaints.

Next, I want to make her shirt cover her belly a little, because she's supposed to be wearing a shirt, not a crop top. I open my image editor and scribble a little where I want there to be a shirt. Upload, inpaint, and now the shirt is draped over her gut. (This is also how you can easily do feeding tubes and slime and whatever else.) Img #4

Now that I'm satisfied with the rest of the image, I can do the hands and then the face.
I do them separately because the AI is more likely to give you a good generation that way, and I also don't want to end up with good hands just to lose them trying to redo the shirt or face at the same time. Generate, and then send it back to inpaint when I'm happy.

Finally, the face. Set inpaint area to "Only masked". This makes the AI regenerate the area at the specified resolution (so here it generates at 896x instead of 100x or whatever the original size is) and then downscale it, making it easy to get pretty and detailed faces. Also, set "Only masked padding, pixels" to the max. This is how much "context" the generation will have, which is important so that you actually generate a nice face instead of shit like a navel where a mouth is supposed to be, like in img #5. I'll add "looking away, burping", generate, and call it there. Final image is #6.

All in all it takes maybe 6-10 minutes to generate and clean up an image, but it could definitely take more if you get unlucky generations or have a hard/undertrained subject (or have a dumpy computer). The image browser extension is also super nice to have because it lets you see your history and save images & prompts, so get it if you don't have it. For extra credit you could go to the 'Extras' tab and run your final image through an upscaler before posting.
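Side note: if you're comparing denoising values a lot, this is also easy to script through the webui's API (launch with --api). A rough sketch; the endpoint and field names are the webui API's, the file name and prompt are just placeholders:

# Sweep img2img denoising values over one source image via the Automatic1111 API.
import base64
import requests

with open("source.png", "rb") as f:
    src = base64.b64encode(f.read()).decode()

for dn in (0.45, 0.50, 0.55):
    payload = {
        "init_images": [src],          # base64-encoded source image
        "prompt": "masterpiece, best quality, highres, office lady, huge belly on table",
        "negative_prompt": "lowres, blurry, worst quality, watermark",
        "denoising_strength": dn,      # lower = closer to the source
        "steps": 28,
        "cfg_scale": 8,
    }
    r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload, timeout=600)
    r.raise_for_status()
    with open(f"denoise_{dn:.2f}.png", "wb") as out:
        out.write(base64.b64decode(r.json()["images"][0]))

Three side-by-side results later, you'll know which denoising value that particular seed wants.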
>>2468 Hey, I'm curious what AI site you're using to generate this great-looking art?
(3.17 MB 2048x1536 00048-455639480.png)
(304.24 KB 512x512 00059-455639483.png)
(333.25 KB 544x544 00060-2893633715.png)
(334.57 KB 544x544 00062-3915972623.png)
(337.63 KB 544x544 00064-3343856170.png)
>>2468 Hi, I followed your little tutorial and here is the result. I can't go bigger than 600x600, otherwise my computer can't keep up. ^^ As a checkpoint I use a merge of BGV2 at 30% and ChubbyGirlGoldenV2 at 70%. I used the same prompts as well as the same negative prompts. 512x, DPM++ 2S a Karras and 28 steps for 10 images. After making the 10 images, I selected one, then went to img2img with your settings, which gave the 3rd image. I then removed the line that passed through the navel, and I redid the bottom of the shirt. I use "Anything v4.5" as a VAE and I still find I lack color... :/ and realism, like in your picture
(1.27 MB 896x896 00001-2798739812.png)
>>2468 Can confirm these upscaling settings are nice, if finicky to nail down, as they must be slightly tweaked for each picture. Still, considering I had mostly given up on upscaling due to the inconsistency / amount of time required for little improvement, I will say, kudos for sussing out some really decent upscaling settings. I will add some of this to my guide.
When I try to generate a high-definition image, I get an error message right as it hits 100%. Do you know why? File "G:\----\stable-diffusion-webui-master\venv\lib\site-packages\torch\functional.py", line 378, in einsum return _VF.einsum(equation, operands) # type: ignore[attr-defined] torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 10.50 GiB (GPU 0; 4.00 GiB total capacity; 2.23 GiB already allocated; 513.26 MiB free; 2.24 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
>>2472 You are likely hitting the VRAM limit; upscaling is fairly intense. Try closing any and all GPU-accelerated applications (extra browser tabs, Discord, etc). If you still can't, attempt 1.25x or 1.5x instead of 1.75x or 2x. Or run GFPGAN/ESRGAN as their own separate applications, instead of within the WebUI. https://github.com/TencentARC/GFPGAN
>>2472 Oh also, only upscale one image at a time. If you attempt multiple, it tries to load them all into memory at once, and that can really be too much if you don't have 24 GB of VRAM.
>>2469 This is done locally using the Automatic1111 Stable Diffusion webui. There is a quick guide on getting started here >>2439
>>2470 Are you sure you're using the right VAE file? It should be "anything-v4.0.vae.pt" and the direct download is here https://huggingface.co/andite/anything-v4.0/resolve/main/anything-v4.0.vae.pt
>>2472 >>2473
>>2474 is on the nose. You've hit the limit of what your computer can generate. You can try changing launch settings to work around it. Open webui-user.bat and add the following to COMMANDLINE_ARGS if they're not already there:
> --medvram --opt-split-attention --no-half-vae
--medvram and --opt-split-attention decrease speed but make GPU memory usage more efficient; --no-half-vae makes VAEs compatible with these settings, otherwise you get solid black pictures.
So the full line should look like this on Windows:
> set COMMANDLINE_ARGS=--medvram --opt-split-attention --no-half-vae
On Linux you edit webui-user.sh instead, and it should look like this, with quotes:
> export COMMANDLINE_ARGS="--medvram --opt-split-attention --no-half-vae"
This lets my RX 6500 XT 8GB generate at 1088x, a nice improvement over 640x, but it generates ~30% slower. Also, the webui just generally has shitty memory management, including multiple memory leaks, so we have to deal until it gets taken care of at some point.
Also, --xformers is said to drastically increase performance, but only on Nvidia cards. I have no idea how to get it set up and haven't bothered, bc I'm on AMD (I'm feeling heavy buyer's remorse). You can try yourself: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers
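For reference, a complete webui-user.bat with all of that might look like the sketch below. Treat it as a starting point: the PYTORCH_CUDA_ALLOC_CONF line is the tweak the OOM message itself suggests (128 is just a commonly used value, not gospel), and --xformers is supposed to fetch and enable itself on first launch on Nvidia cards.
> @echo off
> set PYTHON=
> set GIT=
> set VENV_DIR=
> rem Suggested by the CUDA OOM message; can reduce fragmentation on small cards
> set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
> rem Drop --xformers if you are on an AMD card
> set COMMANDLINE_ARGS=--medvram --opt-split-attention --no-half-vae --xformers
> call webui.bat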
this any good?
>>2476 Looks a *wee* bit over-cooked to me and might have looked better at -less- than 2048x2048.
(987.69 KB 896x896 00089-242857618.png)
(878.55 KB 896x896 00090-242857617.png)
(815.45 KB 896x896 00098-4281968058.png)
(865.72 KB 896x896 00101-3777850865.png)
(827.01 KB 896x896 00106-601872897.png)
Thank you for all your advice ! I finally succeeded ! :)
>>2466 "Feed the ducks". When you're aware about what people do while reading your comic.
>>2479 I can't describe how tickled it makes me that someone noticed and commented on that.
(692.78 KB 3840x3072 007.jpg)
(629.50 KB 3072x3840 009.jpg)
(1.03 MB 3584x3584 010.jpg)
(570.82 KB 3584x3072 016.jpg)
(762.18 KB 3072x3840 017.jpg)
(664.10 KB 3072x3840 018.jpg)
bump
>>142718 Yeah, it's Stable Diffusion on the Automatic1111 webui. There are a couple of guides on getting started floating around >>2468 >>2439
Could I request some vtubers? Maybe a fauna or ookami mio, round or thick?
>>2482 I'm mixing in 30% Xpero End1ess right now. I inpaint the faces most of the time, even when they turn out okay from the latent upscaler. I don't do any fine-tuning with the faces, just re-rolling on Euler a with a high denoising until something comes up that would scale okay at 4x.
(335.46 KB 512x512 00003-1597609848.png)
(344.07 KB 512x512 00001-1597609846.png)
(361.76 KB 512x512 00002-1597609847.png)
(254.23 KB 512x512 00003-1908491398.png)
I have been playing around a little bit. Around 50% are horrifying. These are some that have worked fairly well.
(760.60 KB 1024x1024 skdfhjskjfhd.png)
anything apart from anime is uncanny valley desu
(813.68 KB 816x1024 tmp5ubjwq1r.png)
(968.41 KB 816x1024 tmp21r59y3_.png)
(1.08 MB 816x1024 tmpm2hth8t6.png)
(686.98 KB 816x1024 tmpdzomxs4r.png)
(774.55 KB 816x1024 tmpyhv8s3gw.png)
(758.01 KB 816x1024 tmpaorr0tq5.png)
How do I get my results to not look horribly generic when using text-to-image? You know what I'm talking about: the tell-tale AI-generated art style. Is this an issue with prompts or settings? Should I reference particular artists or add something about art style to the prompt? No matter what I do in terms of tags/descriptions, the art style is always exactly the same between prompts.
>>2490 Could you attach a couple of examples of what you want to avoid? In general, using different VAE files makes some difference, or try a different model from the one you use now. Maybe merge two models together, so that more interesting stuff comes through from the other model you chose.
>>2492 What is not vegan?
>>2494 Woah, catbox pls?
>>2495 Not much of a point since bbwchan saves metadata (just dl the image and put it in png info), but here you go https://files.catbox.moe/yddry7.png https://files.catbox.moe/5el5k0.png
(56.68 KB 1260x264 notepad++_2023-03-01_01-05-01.png)
>>2496 Just so you know, I downloaded that second Fauna image in your post and the metadata isn't present.
>>143098 I was wondering why this thread got so slow. Was everybody banned? >>2497 Shit, my bad, I could've sworn it worked like that. I'll remember to use catbox. Here's that Fauna pic https://files.catbox.moe/zorjot.png And here's this post's pics https://files.catbox.moe/8xkpsx.png https://files.catbox.moe/q0574z.png https://files.catbox.moe/gky4qi.png https://files.catbox.moe/0zm3dy.png
>>2498 Thanks for the links. And I'm also pretty positive I've saved things directly from these threads before and used them. Maybe when the site updated recently they changed it.
>>2494 Do you have a link to the model for these? they're pretty spectacular
>>2502 much appreciated friend, I just realised I've seen your stuff on /vt/ today as well, nice to see fellow architects on there
>>2503 There was a regular /ag/ thread, but a singular mod has a vendetta
>>2505 ah, I've got that LoRA already, but thank you. I just quickly threw a couple models together and got a nice mix of detail and {{bigger}}.
>>2499 Unfortunately, I can confirm that while we have always had 'strip exif data' enabled, it was unwittingly not functioning until the last lynxchan front-end update. While this is a loss for this specific thread, it is a gain for the overall anonymity of the site. Kind of torn. It sure was handy to just be able to drag stuff directly from the thread to the PNG info tab. We also can't have it just disabled for bbwdraw... it's all or nothing :(
>>2507 Exif data is handy. The privacy concern is real when it comes to pics being tagged with a smartphone's GPS. Talk about an oopsie. I use an obscure filesystem now on Linux, but back when I used Windows, file metadata was useful.
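If you want to check a PNG without opening the webui at all: Automatic1111 writes its generation info into a PNG text chunk named 'parameters', which Pillow exposes through the image's info dict. A quick sketch (run as python checkmeta.py image.png; the script name is made up):

# Print Automatic1111 generation info from a PNG, if it survived.
import sys
from PIL import Image

img = Image.open(sys.argv[1])
# A1111 stores prompt/settings in a text chunk called 'parameters'.
params = img.info.get("parameters")
print(params if params else "no generation info found (metadata was stripped)")

Anything dragged straight off the thread now should print the 'stripped' line; catbox copies should print the full prompt block.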
(610.86 KB 3584x3584 036.jpg)
(898.06 KB 3584x3584 039.jpg)
(968.69 KB 3584x3584 041.jpg)
(842.79 KB 3584x3584 042.jpg)
(904.46 KB 3584x3584 043.jpeg)
(968.74 KB 3584x3584 044.jpeg)
Attempts at masked characters. It definitely helps to draw a rough shape first, then inpaint. The model knows Batman/Superman logos.
Okay I got all this installed but fucking hell it takes forever to generate an image. Even as low as 5 sampling steps it still takes 3 minutes. What can I do to fix it?
>>2510 Get a better computer
> "can you please stop making cute chuubas fat....." The nerve of some people. >>2510 There is a pretty high chance that you're running on CPU mode and not GPU mode. Even the slowest of gpus won't take 5 minutes to do so little. I might be wrong though, check task manager while you're generating to see if your CPU or GPU usage spikes. If you really are in CPU mode, you'll have to google it, cause i forgot how to fix it
>>2512 I've tried getting everything set up, and every single time I go to make anything like the images you generate, it's never even close. I followed all the steps and used the same model and VAE. What am I doing wrong? I even copied the images' settings using the WebUI's PNG info and it still doesn't work, even after copying all the data exactly, seed and everything. Please, if you can help or provide any advice, it'd be much appreciated.
(1.03 MB 1024x1024 00089.png)
(2.05 MB 1344x1344 00087.png)
How long does your computer take to make a high-definition image? Honestly I'm not far from 3 minutes xD
>>2513 Yeah, the seed won't do you much good. The seed info only shows the most recently used seed, not the original seed, so if an image is inpainted (most of the stuff in this thread is), it'll show the inpaint seed, which is always different from the seed used for generation. Same with img2img. But besides that, what problems are you getting? Off the top of my head, have you tried:
Generating above 512x with highres fix (512x and below have a lot of errors that go away with zero effort at higher resolutions)
Using a VAE, if your colors are desaturated
Post pics of the issues you got
>>2515 my imgs didn't post? >>2514 It's similar for me, about 2:40 for a 1024x
>>2516 Except that I get an error message, and I have to do 512x with 1.5x upscale, no more... ^^
(292.43 KB 768x512 normals_test.webm)
(936.62 KB 768x1152 00023-3598525720.png)
(1.50 MB 1024x1536 00024-1072086656.png)
(863.95 KB 1024x1024 00010-2206969744.png)
(881.80 KB 1024x1024 00019-2047882849.png)
(817.60 KB 1024x1024 00022-1117491675.png)
(841.02 KB 1024x1024 00015-2993571446.png)
Huge thanks to all of you posting great info here!
(969.18 KB 1024x1024 00086-3432798835.png)
(998.18 KB 1024x1024 00085-3432798833.png)
(779.22 KB 1024x1024 00084-774371182.png)
(1.06 MB 1024x1024 00075-2353264396.png)
(1.10 MB 1024x1024 00092-2518549586.png)
(1.02 MB 1024x1024 00093-2030734262.png)
Playing with Lora combinations
>>2523 >>2522 I don't know the Kanye West meme pic, but this is the best AI art that has been posted thus far. I hope you don't find this reaffirming post annoying. Keep up the good work.
>>2522 What is a LoRA? Is that a model? If not, what models are you using to create those pictures? Is it with Stable Diffusion? Also, what prompts did you use? Those look amazing
>>2526 >Seems okay so far, but less consistent. Don't worry about that. This is America. Just make sure you grow old and thin. Nothing else matters. Money.
I'm confused, what are some of the tags used to make some of these pictures? Because when I do it, they're not as big as the ones in this thread
>>2533 what model are you using? Normal models have very little fat training and can't do it well. Try using biggergirls and get back to us https://civitai.com/models/2664/bigger-girls-model
>>2519 Could you upload the 4th and 5th picture here https://catbox.moe/ and post the links for me please?
>>2439
>https://huggingface.co/stabilityai/sd-vae-ft-ema-original/resolve/main/vae-ft-ema-560000-ema-pruned.ckpt
Most VAE files I find are .pt, and this EMA VAE is a .ckpt, which makes me wonder if it's really a VAE or just another checkpoint?
