
FLUX BOOBA

LORA



Version Detail

FLUX.1
ATTENTION! For best results, please read below. To promt just use natural language " a nude woman blah blah blah" nude was used in the training captions so might work better versus "naked". 1. If using the Fp8 dev Flux model, to get good results make sure and use the fp8_e4m3fn version. 2. Use the lora at about strength of 0.7-.75. Higher strengths will increase likelihood of generating the little details better but also increase chances of unwanted artifacts like messy fingers and other unwanted things. Lowering the strength below 0.7 will increase the cohesion of the image but reduce the details on nipples and reduce the chances of getting anything decent from down below (underwear starts to show up). 3. In comfy for the model sampling flux node make sure and use the mas_shift strength of .5 and base_shift at 0.5 respectively. 4. Use Euler as the sampler and Beta as the scheduler with 25 steps minimum. 5. Higher resolutions like 1024x1400 or 1024x1216 seem to produce best results. Also use 2x3 aspect ratio (portrait) for best results. Some info about this lora. This is an early alpha lora and is not fully cooked yet so you will not get very detailed genitals or nipples just yet and there are some artifacts here or there. It was trained on 100 images and manual caption pair's of women all in "cowboy shot" where the subject is seen from thighs up, so the images generated with this lora will be very biased in that camera shot and angle. A woman seen from different angles can be generated successfully with good quality but you need to reduce the strength of the lora to prevent mutations and other cohesion issues for other angles, so play around with the strength of the lora for best results in your use case. This is an early testing lora so dont expect miracles just yet, ill be working on a more generalized lora that includes men and also a larger data set with more natural body shapes and a diverse set of poses, angles and shots. This will probably take a while so be patient. Some basic info about training process. This lora was trained on an A100 using the simple tuner training script (props to the dev!). The lora was trained on an fp16 dev base flux model, during training it was using about 27gb worth of VRAM for the following settings. The training speeds are about 2.3 sec/it on the A100. We used prodigy with constant, 64 rank and 64 alpha, bf16, gamma 5. No dropout used, batch size of 1 (batch size 1 yields better results versus using any other batch size). Because nudity is new to flux it takes quite a while for the concept to converge decently at about 350 steps per image minimum and 650 steps per image for good results. Lots of tests were performed to converge on the best hyperparameters and this is what we settled on (more testing needed for manual hyperparameters as I expect a large speedup with use of adam8w and such..). Some other notes of interest. We trained on an fp8 flux variant and results were just as good as the fp16 flux model at the cost of 2x convergence speed. That means it now took 700 minimum steps to converge on the subject decently and 1400 steps to converge on a good result. Training on an fp8 flux model took about 16.3gb worth of vram with our settings so I don't see a reason training cant happen on any card that has that VRAM, and possibly with some optimizations maybe could even happen on cards with 16gb of vram for fp8 lora training. 
Special thanks to Raj, who provided the A100, got the SimpleTuner training script working, and modified it for our needs.

Project Permissions

Model reprinted from : https://civitai.com/models/639046?modelVersionId=714655

Reprinted models are for communication and learning purposes only, not for commercial use. Original authors can contact us to transfer the models through our Discord channel --- #claim-models.
