Knifestripe

All you need to know about adetailer

Hello everyone, it's me again. I have been planning to write this article for a long time to share my experience using the Enhance feature (in other words, "After Detailer"). In this article I will refer to it as Adetailer, true to its Stable Diffusion origins. If you are not satisfied with this feature, then maybe the following article will be somewhat useful for you.

So what is Adetailer?

As mentioned above, Adetailer is short for the phrase "After Detailer". Sometimes the AI will produce weird or poor-looking details in the image, often in parts that are difficult for the AI to recognize, such as the nose, skin, and hands, and in some rare cases other parts of the body. When you use the Adetailer feature, then after the image has been created the AI will run one more pass with an Adetailer model to fix these details.

To make it easier to understand, I will use a ComfyUI workflow to analyze it for you. After you turn on the Adetailer feature (in this case, let's say I want to fix the face), the AI will work normally as always and still create the image. As you can see, the details in the face are very bad. However, once the image is finished, the AI connects to Adetailer's detection model (here I am choosing the face_yolov8m model) to determine which part of the original photo is the subject's face. From there, it recreates the details within that area only; here we are talking about correcting the details of the face.

And here is the result.

So what are the parameters that you need to pay attention to here?

Although Tungsten does not yet expose many options besides "level of denoising" (which tells the AI how much of the detail you want reconstructed; I recommend a level of 0.1~0.4), I will still explain some of them so you understand:

guide_size: Detail recovery is attempted at the original scale only when the detected mask is smaller than this value. If it is larger, this feature increases the resolution before attempting detail recovery.

guide_size_for: Determines whether guide_size is measured against the detected face (bbox) or against the crop area that includes the face, broadly cropped by crop_factor.

max_size: guide_size upscales so that the shorter side reaches guide_size; for masks with elongated shapes this can cause a significant scale-up. max_size limits the maximum size of one side.

feather: When compositing the recovered details onto the original image, a gradient is used so that the boundaries are not visible. This value determines the thickness of that gradient.

force_inpaint: Tries to force regeneration even if the mask is smaller than guide_size. This is useful when you simply want to apply a different prompt rather than recover detail. In this case, upscale is forcibly fixed to 1.

noise_mask: Determines whether the area regenerated by KSampler is limited by a mask. When noise_mask is enabled, only the masked area of the image is regenerated; when it is disabled, generation occurs over the entire cropped area, with only the mask area being cut and pasted back.

threshold: Only detections whose recognized confidence is above this value are used.

dilation: Expands the detected mask area.

crop_factor: Determines how much of the surrounding area is included in the detail-recovery process, as a multiple of the detected mask area. If this value is small, the restoration may not work well because the surrounding context cannot be seen.

So should you always use Adetailer or not?

Of course not, even with the older Stable Diffusion 1.5 models. In the following cases you don't need to use it:

1. Portrait/upper-body photos.
2. When using a celebrity or real-person LoRA, since Adetailer will cause the face to become inaccurate compared to the original.
3. Landscape photos.

With SDXL, deformities rarely occur unless the model is poorly trained, or the LoRA is small (under 200 MB), which often produces poor resolution.

A few small notes:

1. Do not use Adetailer denoising above 0.5, to avoid deforming the face or hands; this also saves a lot of time.
2. The further the face and hands are from the viewer's perspective, the higher the denoising level must be. For example, portrait photos do not need it, upper-body photos should only be at 0.2~0.3, and full-body views around 0.4.
3. With SD 1.5 models, Adetailer for hands is almost useless due to the low-quality/low-resolution hand data it was trained on.
4. Check your prompt carefully before using Adetailer, because as mentioned above the AI will run again according to your prompt. Make sure none of your prompts have a weight like (prompt:1.8) or (((((((prompt))))))), because that will make the AI focus on reproducing the weighted term instead of reconstructing the face or hand.
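For readers who like to see the mechanics, the whole Adetailer pass boils down to "detect, mask, inpaint, composite". Below is a minimal Python sketch of that loop, assuming a YOLOv8 face detector and a diffusers inpainting pipeline. The model files, default values, and the adetailer_pass helper are my own assumptions for illustration, not Tensor.Art's actual implementation, and the crop_factor/guide_size cropping and upscaling steps are omitted for brevity.

```python
# Illustrative sketch only -- model names and defaults are assumptions,
# not Tensor.Art's actual implementation.
import torch
from PIL import Image, ImageDraw, ImageFilter
from ultralytics import YOLO
from diffusers import StableDiffusionInpaintPipeline

detector = YOLO("face_yolov8m.pt")  # assumed local detector weights
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # assumed inpaint checkpoint
    torch_dtype=torch.float16,
).to("cuda")

def adetailer_pass(image: Image.Image, prompt: str,
                   denoise: float = 0.3,    # "level of denoising", 0.1~0.4
                   threshold: float = 0.3,  # keep detections above this confidence
                   dilation: int = 8,       # expand the detected mask
                   feather: int = 5) -> Image.Image:
    # Detect faces in the finished image.
    for box in detector(image)[0].boxes:
        if float(box.conf) < threshold:
            continue
        x1, y1, x2, y2 = (int(v) for v in box.xyxy[0])
        # Build a mask over the detection, expanded by `dilation`.
        mask = Image.new("L", image.size, 0)
        ImageDraw.Draw(mask).rectangle(
            (x1 - dilation, y1 - dilation, x2 + dilation, y2 + dilation),
            fill=255,
        )
        # Feather the mask edges so the composite boundary is invisible.
        mask = mask.filter(ImageFilter.GaussianBlur(feather))
        # Re-run generation only where the mask is, at the chosen denoise level.
        fixed = pipe(prompt=prompt, image=image, mask_image=mask,
                     strength=denoise).images[0].resize(image.size)
        image = Image.composite(fixed, image, mask)
    return image
```

A real Adetailer implementation additionally crops around the mask (crop_factor) and upscales small detections before inpainting (guide_size/max_size), which is exactly why those parameters exist.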
[TUT] How to make perfect hands and pose with controlnet depth

As promised, today I will show you how to use controlnet depth to create the pose you want with near-perfect accuracy. In my previous article we covered openpose, but in terms of accuracy it is still not really perfect. Hand and finger problems always happen when using Stable Diffusion, and this ControlNet solves exactly that kind of problem.

The way controlnet depth works is very simple: it analyzes the 3D surface of the object in the image you give it, and then renders the new image from that 3D surface.

1/ First, click here
2/ Select controlnet depth
3/ Select the working mode for controlnet depth. Here I choose zoe instead of midas, because the accuracy of zoe is much higher.
4/ Upload your reference image via this option
5/ Adjust the weight of controlnet depth; I recommend 0.5~0.7 for the best results without losing much detail.
6/ And tadaaaa UwU

Try again with another picture. And the result.

Hope you enjoy my post; subscribe or support me for more quality content next time (乃^o^)乃
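For those who prefer to script this outside the UI, here is a rough diffusers sketch of the same depth workflow: estimate a depth map (ZoeDepth, as in step 3), then condition generation on it. The model IDs and the 0.6 weight below are my assumptions for illustration, not what Tensor.Art runs.

```python
# Rough sketch of a ControlNet-depth run with diffusers; model IDs are assumed.
import torch
from PIL import Image
from transformers import pipeline
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Step 3: estimate a depth map from the reference image (zoe-style estimator).
depth_estimator = pipeline("depth-estimation", model="Intel/zoedepth-nyu-kitti")
reference = Image.open("pose_reference.png")  # step 4: your reference image
depth_map = depth_estimator(reference)["depth"].convert("RGB")

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

image = pipe(
    "a ballet dancer, detailed hands",
    image=depth_map,
    controlnet_conditioning_scale=0.6,  # the "weight" from step 5 (0.5~0.7)
).images[0]
image.save("out.png")
```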
[TUT] How to fix common errors while using Stable Diffusion pt.1

Have you ever encountered these problems: you use Stable Diffusion and suddenly many girls appear in one frame, or the limbs are deformed, or the body is stretched out?

These are common errors when you use Stable Diffusion. The causes are very simple, and many people ignore them because they think they are not important: the resolution and image ratio you have chosen are wrong.

A brief explanation. As we know, Stable Diffusion currently has two most popular versions: SD 1.5 and SDXL. SD 1.5 was trained on millions of small-resolution images, with a pixel size of only about 512x512. Therefore, when you choose an image size more than about 1.5 times larger than 512x512, the AI encounters many difficulties in the sampling and denoising process, and errors will certainly occur. The same goes for SDXL: although it clearly understands prompts better than SD 1.5, and its training resolution is twice as large (1024x1024), it still produces the stretched-body error. You can use negative prompts to suppress these errors, but they will certainly still occur frequently, which is really annoying.

So to avoid these errors, I recommend keeping the resolution to only 1.1~1.5 times the size of the original training images.

And here are the settings that I often use; please pay attention to the ratio too (d ^o^ b)

1. 768x768 (ratio 1:1) with SD 1.5, and 1024x1024 (ratio 1:1) with SDXL: portraits, logos, vectors, close-up shots.
2. 640x960 (ratio 2:3) with SD 1.5, and 768x1024 (ratio 3:4) with SDXL: when you want full-body views, cowboy shots, or street/landscape photos in portrait orientation.
3. 960x640 (ratio 3:2) with SD 1.5, and 1024x768 (ratio 4:3) with SDXL: when you want shots of many people, or natural scenery with many details.
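If you generate via scripts rather than the UI, these presets map naturally onto a small lookup table. A minimal sketch; the names and structure here are mine, not from any library:

```python
# Recommended width/height presets from the list above, keyed by base model
# and layout. Illustrative helper only -- names are my own.
RESOLUTIONS = {
    "sd15": {"square": (768, 768),   "portrait": (640, 960),  "landscape": (960, 640)},
    "sdxl": {"square": (1024, 1024), "portrait": (768, 1024), "landscape": (1024, 768)},
}

def pick_resolution(model: str, layout: str) -> tuple[int, int]:
    """Return (width, height) within ~1.1-1.5x of the model's training size."""
    return RESOLUTIONS[model][layout]

print(pick_resolution("sd15", "portrait"))  # (640, 960)
```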
[TUT] How to make a perfect pose with control_net

Hi guys, this is my first article on TensorArt. Have you ever found it difficult to make a character hold a tricky pose, such as dancing? Even though you used many specific words to describe it, it seems like the AI doesn't understand your prompt.

Today I will introduce you to the control_net that will make your "AI life" easier than ever ^o^ : control_net openpose.

How to use? It's very simple. Let's say you want to make a ballet dancer:

1/ Go to the "Add control net" option
2/ Upload the image sample you have, then select the working model of control_net (for example: openpose)
3/ Then wait for the result

One more example with an akimbo pose, which in my opinion is very hard for the AI to understand.

Of course, because this is a very basic controlnet pose model, it is understandable that the accuracy is not high. In the next article, I will show you a more advanced option called control_depth, which gives results far more accurate than openpose.
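As a bonus, here is roughly what the same openpose flow looks like when scripted with diffusers and controlnet_aux: extract a pose skeleton from the sample image, then condition generation on it. The model names are my assumptions for illustration, not Tensor.Art's internals.

```python
# Rough sketch of the openpose flow with diffusers; model IDs are assumed.
import torch
from PIL import Image
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Step 2: extract a pose skeleton from the uploaded sample image.
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose = openpose(Image.open("ballet_sample.png"))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

# Step 3: generate, conditioned on the extracted skeleton.
image = pipe("a ballet dancer on stage", image=pose).images[0]
image.save("dancer.png")
```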