
Boring Reality Faces - Absolute Realism

LORA
Reprint



NOTE: Please read below before working with these LoRAs. They are unlikely to give good results when used individually or as-is.

This model is actually a collection of LoRAs that each behave a bit differently but work together to achieve photo-realism. They use experimental techniques that rely on training on only a small number of portrait shots in order to preserve complex scenes. Future versions will likely change drastically once this approach is better understood.

For now, follow this short guide to get started:

  • Because of the small number of faces trained on, generated faces will often be distorted and tend to share the same features (hands will also be bad). A powerful upscaler such as Magnific AI is strongly recommended for fixing faces, since it also evens out the rest of the scene. Face-only fixers such as ADetailer may make the scene's sharpness look inconsistent.

  • These LoRAs are primarily intended for the SDXL base model. Using a different SDXL checkpoint will likely reduce the scene complexity (though it may improve the faces somewhat).

  • Each LoRA version is attuned to slightly different scenes. BoringReality_primaryV3 has the most general capabilities, followed by BoringReality_primaryV4. It is best to start by combining multiple versions at equal, low weights, and then adjust the weights to see which results work best for you.

  • Currently, adding any negative prompt will likely ruin the image. Keep the positive prompt relatively short as well.

  • For even better results, try an img2img + depth ControlNet approach (see the sketch after this list). In Auto1111, place a "style image" in img2img and set the denoising strength to around 0.90; the style image can be literally any image and only nudges the colors/lighting of the result toward it. Then use a second image with a pose/scene layout you like (it could be something you generated in text2img) as the ControlNet input with a depth model, and set the control weight to lean more toward the prompt.

  • Avoid very simple prompts like "a man", which can produce bizarre results at times, but also avoid very long descriptions, which tend to make the results bland.
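The guide above describes the img2img + depth ControlNet workflow in Auto1111 terms. As a rough equivalent, here is a minimal sketch using the Hugging Face diffusers library; the LoRA file names, image paths, example prompt, and the choice of the public diffusers/controlnet-depth-sdxl-1.0 checkpoint are assumptions for illustration, not part of the original guide, and the depth map for the layout image is assumed to be precomputed.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline

# Public SDXL depth ControlNet (any SDXL depth ControlNet should work here).
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Stack the LoRAs at equal, low weights (placeholder file names).
pipe.load_lora_weights("boringRealism_primaryV4.safetensors", adapter_name="primary_v4")
pipe.load_lora_weights("boringRealism_primaryV3.safetensors", adapter_name="primary_v3")
pipe.load_lora_weights("boringRealism_facesV4.safetensors", adapter_name="faces_v4")
pipe.set_adapters(["primary_v4", "primary_v3", "faces_v4"], adapter_weights=[0.4, 0.4, 0.4])

style_image = Image.open("style.png")       # any image; only broad colors/lighting carry over
depth_map = Image.open("layout_depth.png")  # precomputed depth map of the pose/layout image

image = pipe(
    prompt="a woman reading in a cluttered living room",  # short example prompt
    image=style_image,                  # img2img "style image"
    control_image=depth_map,            # depth ControlNet input
    strength=0.90,                      # high denoise, as suggested above
    controlnet_conditioning_scale=0.5,  # lean toward the prompt over the control
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("controlled_output.png")
```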

For initial prompts, consider starting with something like <lora:boringRealism_primaryV4:0.4><lora:boringRealism_primaryV3:0.4> <lora:boringRealism_facesV4:0.4> and experimenting from there.

Also start with standard DPM++ 2M Karras, 20-25 steps, and CFG around 7.0. You can often raise the CFG quite a bit for a sharper look, though at some point the image will start to distort badly.
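For users outside Auto1111, here is a minimal diffusers sketch of these starting settings: the <lora:...:0.4> tags map to LoRA adapter weights, and DPM++ 2M Karras maps to DPMSolverMultistepScheduler with Karras sigmas. The LoRA file names and the example prompt are placeholders, not part of the original guide.

```python
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

# SDXL base model, as recommended above.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# DPM++ 2M Karras equivalent in diffusers.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

# Rough equivalent of <lora:boringRealism_primaryV4:0.4><lora:boringRealism_primaryV3:0.4>
# <lora:boringRealism_facesV4:0.4>; file names are placeholders for the downloaded weights.
pipe.load_lora_weights("boringRealism_primaryV4.safetensors", adapter_name="primary_v4")
pipe.load_lora_weights("boringRealism_primaryV3.safetensors", adapter_name="primary_v3")
pipe.load_lora_weights("boringRealism_facesV4.safetensors", adapter_name="faces_v4")
pipe.set_adapters(["primary_v4", "primary_v3", "faces_v4"], adapter_weights=[0.4, 0.4, 0.4])

image = pipe(
    prompt="a tired office worker eating lunch at a cluttered desk",  # short prompt, no negative
    num_inference_steps=22,   # 20-25 steps
    guidance_scale=7.0,       # CFG ~7.0; raise carefully for a sharper look
).images[0]
image.save("boring_reality.png")
```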

Version Detail

SDXL 1.0

Project Permissions

Model reprinted from: https://civitai.com/models/310571?modelVersionId=348828

Reprinted models are for communication and learning purposes only, not for commercial use. Original authors may contact us through our Discord channel, #claim-models, to have their models transferred.
