Tensor.Art Articles

Some prompts I've collected

Style prompts (Stylization):
infographic drawing, concept character sheet, character sheet style, character sheet illustration, realistic, bokeh (background blur), ethereal, warm tones, soft lighting, natural light, ink wash, splash pigment effect, cybernetic illuminations, Neo-Pop, Art Nouveau, Grandparentcore (retro, old-fashioned), Cleancore (minimalist), red theme, sticker, reflection, backlit, depth of field, digital double exposure photo, blurry foreground, blurry background, motion_blur, split theme, Paisley patterns, lineart, silhouette art, concept art, graffiti art, Gothic art, Goblincore, ukiyo-e, sumi-e (ink painting), magazine cover, commercial poster

View prompts:
perspective view, three-quarter view, thigh-level perspective, close-up, macro photo, headshot, portrait, low angle shot, front and back view, various views (multiple views), panoramic view, mid-shot/medium shot, cowboy_shot, waist-up view, bust shot, torso shot, foot focus, looking at viewer, from above, from below, full body, sideways/profile view, fisheye lens, environmental portrait

Facial expression prompts:
smile, grin, biting lip, adorable, tearing up/crying tears, tearful, wavy mouth, spiral_eyes, cheerful, nose blush (flushed), running mascara

Hairstyle prompts:
smooth, hair over one eye, twintails, ponytail, diagonal bangs, dynamic hair (flowing hair), hanging hair, ahoge (cowlick), braid, braided bun, undercut

Ornament prompts:
forehead mark, mole under eye (tear mole), skindentation, eyepatch, blindfold, hairpin, hairclip, headband, hair holder, hair ribbon, ribbon (bow), maid headdress, headveil, tassel, thigh strap

Clothing prompts:
jk/seifuku (Japanese school uniform), miko (shrine maiden), idol clothes, competition swimsuit, Rococo, pelvic curtain, midriff, halterneck, enmaided (maid outfit), backless sweater, turtleneck sweater, French-style suspender skirt, winter coat, trench coat, race queen, highleg/leotard, slit skirt, stirrup legwear, fishnet stockings, thighhighs/thigh-high socks, kneehighs, toeless legwear, yoga pants, frilled

Action prompts:
crossed_legs_(sitting)/crossed legs, cross-legged sitting, semi-reclining position, head tilt, leaning forward, planted sword, heart hand duo, double thumbs up, peace sign, salute, energetic pose (≧ω≦), sitting in seiza

Body prompts:
thick eyebrows, abs, toned, navel, off-shoulder, tsurime (upturned eyes), cyborg, tan skin, cocoa skin, fit physique
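As an illustration of how these combine in practice (my own example, not from the original collection), a prompt might draw one or two tags from each category:

1girl, smile, twintails, hair ribbon, seifuku, looking at viewer, full body, natural light, depth of field, ukiyo-e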

Understanding Score_9, Score_8_Up, Score_7_Up, Score_6_Up, Score_5_Up, Score_4_Up

Introduction
In the realm of AI image generation, particularly with models like Pony Diffusion, achieving consistently high-quality outputs is a significant challenge. A crucial innovation addressing this challenge is the use of aesthetic ranking tags such as score_9, score_8_up, score_7_up, score_6_up, score_5_up, and score_4_up. These tags play a vital role in guiding the model to produce better images by leveraging human-like aesthetic judgments. This article delves into what these tags are, their purpose, and how they are utilized to enhance the quality of AI-generated images.

What Are Score Tags?
Score tags are annotations added to image captions during the training phase of AI models. These annotations indicate the aesthetic quality of the images, based on a scale derived from human ratings. Because of the "_up" suffix, each tag is cumulative, covering every image at or above its percentile:
1. score_9: the highest-quality images, roughly the top 10%.
2. score_8_up: images in the top 20% (quality percentile 80 and above).
3. score_7_up: images in the top 30% (percentile 70 and above).
4. score_6_up: images in the top 40% (percentile 60 and above).
5. score_5_up: images in the top 50% (percentile 50 and above).
6. score_4_up: images in the top 60% (percentile 40 and above).
These tags are used during training to help the model distinguish between different levels of image quality, enabling it to generate better images during the inference phase.

Purpose of Score Tags
Enhancing model training: The primary purpose of score tags is to improve the training process by giving the model a clear understanding of what constitutes a good image. By repeatedly exposing the model to images annotated with these tags, it learns to recognize the characteristics that make an image aesthetically pleasing.
Providing fine-grained control: Score tags offer fine-grained control over the quality of the generated images. Users can specify the desired quality level in their prompts, ensuring that the output meets their expectations. For example, using the score_9 tag in a prompt indicates that the user expects the highest-quality images.
Overcoming data-quality challenges: In large datasets, not all images are of high quality. Score tags help filter out lower-quality images during the training phase, ensuring that the model is trained on the best possible data. This selective training leads to better overall performance and higher-quality outputs.

How Score Tags Are Used
Training phase: Images in the dataset are manually or semi-automatically annotated with score tags based on their aesthetic quality. This involves:
1. Data collection: gathering a diverse set of images from various sources.
2. Manual ranking: expert reviewers rank the images on a scale, typically from 1 to 5, based on aesthetic criteria.
3. Tag assignment: images are tagged with the corresponding score tags (e.g., score_9 for top-tier images).
The model is then trained on this annotated dataset, learning to associate the score tags with the quality levels of the images.

Inference phase: Users can include score tags in their prompts to influence the quality of the generated images. For example:
• A prompt with the tag score_9 will generate images that the model has learned to associate with the highest quality.
• A prompt with the tag score_6_up will generate images that meet the quality standards from the 60th percentile up to 100%.
This tagging system gives users the flexibility to request images of varying quality levels, depending on their specific needs.

Practical Application
In practice, the use of score tags can vary depending on the tools and interfaces available. Some tools, like the PSAI Discord bot, automatically add these tags to prompts, simplifying the process for users. In other interfaces, such as Auto1111, users may need to add the tags manually, either by saving them as a style or by copying and pasting them at the beginning of their prompts.

Limitations and Future Improvements
While score tags significantly enhance the quality of AI-generated images, there are some limitations:
1. Bias in tags: the tags can introduce biases, especially when using style- or artist-specific LoRAs, which may affect the diversity and creativity of the outputs.
2. Negative tags: negative tags (e.g., score_4) are less effective because the training data does not include extremely low-quality images, so their impact on steering the model away from bad images is limited.
Future improvements for Pony Diffusion V7 aim to refine the tagging system and enhance the model's ability to understand and utilize these tags effectively. Simplifying the tags and ensuring a more diverse training dataset are key areas of focus.

Conclusion
Score tags like score_9, score_8_up, score_7_up, score_6_up, score_5_up, and score_4_up play a crucial role in enhancing the quality of AI-generated images in models like Pony Diffusion. By providing a clear indication of image quality and enabling fine-grained control during the inference phase, these tags help achieve more consistent and aesthetically pleasing outputs. As AI models continue to develop, refining these tagging systems and addressing their limitations will further improve the quality and versatility of AI-generated content.

If you like this article, please give it a thumbs up and share it. You can also try using my Pony Diffusion model for generation. Thank you.
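As a practical footnote to the inference-phase usage above, here is a minimal Python sketch (my own helper, not official Pony Diffusion tooling; the function name is hypothetical) showing one way to always place the quality-tag block at the front of a prompt:

QUALITY_TAGS = "score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up"

def build_prompt(subject: str) -> str:
    # Place the aesthetic tags first: earlier tokens carry more influence.
    return f"{QUALITY_TAGS}, {subject}"

print(build_prompt("1girl, cyberpunk city, neon lights, looking at viewer"))
# -> score_9, score_8_up, ..., 1girl, cyberpunk city, neon lights, looking at viewer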

Welcome to FuturEvoLab!

Welcome to FuturEvoLab! We greatly appreciate your continuous support. Our mission is to delve deep into the world of AI-generated content (AIGC), bringing you the latest innovations and techniques. Through this platform, we hope to learn and exchange ideas with you, pushing the boundaries of what's possible in AIGC. Thank you for your support, and we look forward to learning and collaborating with all of you.

In our exploration, we recommend several powerful models:

Pony XL (Realistic):
[Pony XL]Aurora Realism - FuturEvoLab
[Pony XL]Lifelike Doll Romance - FuturEvoLab

Pony XL (Anime):
[Pony XL]Cyber Futuristic Maidens - FuturEvoLab
[Pony XL]Cyberworld Anime - FuturEvoLab
Dream Brush SDXL - FuturEvoLab

SDXL 1.0 (Realistic):
[SDXL]Lover's Light - FuturEvoLab
[SDXL]Real style fantasy - FuturEvoLab
[SDXL]Soulful Particle Genesis - FuturEvoLab

SDXL 1.0 (Anime):
[SDXL]Lovepunk Synth - FuturEvoLab
FutureDreamWorks-SDXL-FuturEvoLab
DreamEvolution-SDXL-FuturEvoLab

Stable Diffusion 1.5 (Realistic):
[SD1.5]Genesis Realistic - FuturEvoLab
Temptation Core - FuturEvoLab
[SD1.5]Meris Realistic - FuturEvoLab
[SD1.5]Fantasy Epic - FuturEvoLab
[SD1.5]Fantasy - FuturEvoLab

Stable Diffusion 1.5 (Anime):
[SD1.5]LoveNourish EX Anime - FuturEvoLab
[SD1.5]LoveNourish Anime - FuturEvoLab
[SD1.5]Temptation Heart【2.5D style】- FuturEvoLab

By leveraging these models, creators can generate images that range from hyper-realistic to vividly imaginative, catering to various artistic and practical applications.

Negative Prompts

Avoid unwanted artifacts! Did you know that using negative prompts can benefit your image? Here is a short list of good negative prompts I have collected over one year of being a Tensor user (SD 1.5 and XL compatible).

Negatives for landscapes:
blurry, boring, close-up, dark (optional), details are low, distorted details, eerie, foggy (optional), gloomy (optional), grains, grainy, grayscale (optional), homogenous, low contrast, low quality, lowres, macro, monochrome (optional), multiple angles, multiple views, opaque, overexposed, oversaturated, plain, plain background, portrait, simple background, standard, surreal, unattractive, uncreative, underexposed

Negatives for street views:
animals (optional), asymmetrical buildings, blurry, cars (optional), close-up, creepy, deformed structures, grainy, jpeg artifacts, low contrast, low quality, lowres, macro, multiple angles, multiple views, overexposed, oversaturated, people (optional), pets (optional), plain background, scary, solid background, surreal, underexposed, unreal architecture, unreal sky, weird colors

Negatives for people:
3D, absent limbs, age spot, additional appendages, additional digits, additional limbs, altered appendages, amputee, asymmetric, asymmetric ears, bad anatomy, bad ears, bad eyes, bad face, bad proportions, beard (optional), broken finger, broken hand, broken leg, broken wrist, cartoon, childish (optional), cloned face, cloned head, collapsed eyeshadow, combined appendages, conjoined, copied visage, corpse, cripple, cropped head, cross-eyed, depressed, desiccated, disconnected limb, disfigured, dismembered, disproportionate, double face, duplicated features, eerie, elongated throat, excess appendages, excess body parts, excess extremities, extended cervical region, extra limb, fat, flawed structure, floating hair (optional), floating limb, four fingers per hand, fused hand, group of people, gruesome, high depth of field, immature, imperfect eyes, incorrect physiology, kitsch, lacking appendages, lacking body, long body, macabre, malformed hands, malformed limbs, mangled, mangled visage, merged phalanges, missing arm, missing leg, missing limb, mustache (optional), nonexistent extremities, old, out of focus, out of frame, parched, plastic, poor facial details, poor morphology, poorly drawn face, poorly drawn feet, poorly drawn hands, poorly rendered face, poorly rendered hands, six fingers per hand, skewed eyes, skin blemishes, squint, stiff face, stretched nape, stuffed animal, surplus appendages, surplus phalanges, surreal, ugly, unbalanced body, unnatural, unnatural body, unnatural skin, unnatural skin tone, weird colors

Negatives for photorealism:
3D render, aberrations, abstract, anime, black and white (optional), cartoon, collapsed, conjoined, creative, drawing, extra windows, harsh lighting, illustration, jpeg artifacts, low saturation, monochrome (optional), multiple levels, overexposed, oversaturated, painting, photoshop, rotten, sketches, surreal, twisted, UI, underexposed, unnatural, unreal engine, unrealistic, video game

Negatives for drawings and paintings:
3d, bad art, bad artist, bad fan art, CGI, grainy, human (optional), inaccurate sky, inaccurate trees, kitsch, lazy art, less creative, lowres, noise, photorealistic, poor detailing, realism, realistic, render, stacked background, stock image, stock photo, text, unprofessional, unsmooth

Additional negatives (each with usable synonyms):
Bad anatomy: flawed structure, incorrect physiology, poor morphology, misshaped body
Bad proportions: improper scale, incorrect ratio, disproportionate
Blurry: unfocused, hazy, indistinct
Cloned face: duplicated features, replicated countenance, copied visage
Cropped: trimmed, cut, shortened
Dark images: dark theme, underexposed, dark colors
Deformed: distorted, misshapen, malformed
Dehydrated: dried out, desiccated, parched
Disfigured: mangled, dismembered, mutilated
Duplicate: copy, replicate, reproduce
Error: mistake, flaw, fault
Extra arms: additional limbs, surplus appendages, excess extremities
Extra fingers: additional digits, surplus phalanges, excess appendages
Extra legs: additional limbs, surplus appendages, excess extremities
Extra limbs: additional appendages, surplus extremities, excess body parts
Fingers: conjoined fingers, crooked fingers, merged fingers, fused fingers, fading fingers
Fused fingers: joined digits, merged phalanges, combined appendages
Gross proportions: disgusting scale, repulsive ratio, revolting dimensions
JPEG artifacts: compression artifacts, digital noise, pixelation
Long neck: extended cervical region, elongated throat, stretched nape
Low quality: poor resolution, inferior standard, subpar grade
Lowres: low resolution, inadequate quality, deficient definition
Malformed limbs: deformed appendages, misshapen extremities, malformed body parts
Missing arms: absent limbs, lacking appendages, nonexistent extremities
Missing legs: absent limbs, lacking appendages, nonexistent extremities
Morbid: gruesome, macabre, eerie
Mutated hands: altered appendages, changed extremities, transformed body parts
Mutation: genetic variation, aberration, deviation
Mutilated: disfigured, dismembered, butchered
Out of frame: outside the picture, beyond the borders, off-screen
Poorly drawn face: badly illustrated countenance, inadequately depicted visage, incompetently sketched features
Poorly drawn hands: badly illustrated appendages, inadequately depicted extremities, incompetently sketched digits
Signature: autograph, sign, mark
Text: written language, printed words, script
Too many fingers: excessive digits, surplus phalanges, extra appendages
Ugly: unattractive, unsightly, repellent
Username: screen name, login, handle
Watermark: identifying mark, branding, logo
Worst quality: lowest standard, poorest grade, worst resolution

Do not forget to set this article as a favorite if you found it useful. Happy generations!
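For example, a compact negative prompt for a portrait, assembled from the lists above (my own selection; trim or extend to taste):

bad anatomy, bad proportions, poorly drawn face, poorly drawn hands, extra fingers, fused fingers, missing arm, long body, lowres, low quality, jpeg artifacts, watermark, text, signature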

Understanding the Use of Parentheses in Prompt Weighting for Stable Diffusion

Prompt weighting in Stable Diffusion allows you to emphasize or de-emphasize specific parts of your text prompt, giving you more control over the generated image. Different types of brackets are used to adjust the weights of keywords, which can significantly affect the resulting image. In this tutorial, we will explore how to use parentheses (), square brackets [], and curly braces {} to control keyword weights in your prompts.

Basics of Prompt Weighting
By default, each word or phrase in your prompt has a weight of 1. You can increase or decrease this weight to control how much influence a particular word or phrase has on the generated image. Here's a quick guide to the different types of brackets:
1. Parentheses (): increase the weight of the enclosed word or phrase.
2. Square brackets []: decrease the weight of the enclosed word or phrase.
3. Curly braces {}: in some implementations, behave similarly to parentheses but with slightly different multipliers.

Using Parentheses to Increase Weight
Parentheses () increase the weight of the enclosed keywords, so the model gives these words more importance when generating the image.
• Single parentheses multiply the weight by 1.1. Example: (girl) raises the weight of "girl" to 1.1.
• Nested parentheses increase the weight further. Example: ((girl)) raises the weight of "girl" to 1.21 (1.1 × 1.1).
• You can also specify a custom weight with an exact multiplier. Example: (girl:1.5) raises the weight of "girl" to 1.5.
Example prompt:
(masterpiece, best quality), (beautiful girl:1.5), highres, looking at viewer, smile

Using Square Brackets to Decrease Weight
Square brackets [] decrease the weight of the enclosed keywords, so the model gives these words less importance when generating the image.
• Single square brackets multiply the weight by 0.9. Example: [background] lowers the weight of "background" to 0.9.
• Nested square brackets decrease the weight further. Example: [[background]] lowers the weight of "background" to 0.81 (0.9 × 0.9).
Example prompt:
(masterpiece, best quality), (beautiful girl:1.5), highres, looking at viewer, smile, [background:0.8]

Using Curly Braces
Curly braces {} are less commonly used, but in some implementations (e.g., NovelAI) they serve a similar purpose to parentheses with different default multipliers; for instance, {word} might be equivalent to (word:1.05).
Example prompt:
(masterpiece, best quality), {beautiful girl:1.3}, highres, looking at viewer, smile

Combining Weights
You can combine different types of brackets to fine-tune the prompt further.
Example prompt:
(masterpiece, best quality), ((beautiful girl):1.2), highres, looking at viewer, smile, [[background]:0.7]

Practical Examples
Increasing emphasis, to generate an image where the focus is heavily on the "girl":
(masterpiece, best quality), (beautiful girl:1.5), highres, looking at viewer, smile, [background:0.8]
Decreasing emphasis, to generate an image where the "background" is less emphasized:
(masterpiece, best quality), beautiful girl, highres, looking at viewer, smile, [background:0.5]

Conclusion
By using parentheses, square brackets, and curly braces effectively, you can guide Stable Diffusion to prioritize or de-prioritize certain elements in your prompt, resulting in images that better match your vision.
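To make the multiplier arithmetic concrete, here is a small Python sketch (my own illustration, not part of any Stable Diffusion UI) that computes the effective weight of a keyword wrapped in nested emphasis brackets:

def effective_weight(parens: int = 0, brackets: int = 0, explicit: float = 1.0) -> float:
    # Each layer of ( ) multiplies the weight by 1.1,
    # each layer of [ ] multiplies it by 0.9;
    # an explicit (word:w) sets the base weight before nesting applies.
    return explicit * (1.1 ** parens) * (0.9 ** brackets)

print(round(effective_weight(parens=2), 2))               # ((girl))       -> 1.21
print(round(effective_weight(brackets=2), 2))             # [[background]] -> 0.81
print(round(effective_weight(parens=1, explicit=1.5), 2)) # ((girl:1.5))   -> 1.65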
Practice using these weighting techniques to see how they affect your generated images, and adjust accordingly to achieve the best results.

PhotoReal Makeup Edition - V3 Slider

PhotoReal Makeup Edition - V3 Slider (no trigger)

Introducing the PhotoReal Makeup Edition - V3 Slider! Slide to the right to add beautiful, realistic makeup. Slide to the left to reduce the makeup effect for a more natural look. It's perfect for adjusting the makeup to get just the style you want. Try it out and see the amazing changes you can make!

More information: Model link

Your feedback is invaluable to me. Feel free to share your experiences and suggestions in the comment section. For more personal interactions, join our Discord server where we can discuss and learn together. Thank you for your continued support!

Let's make LoRA with 🎨TensorArt 🖌️

Hello everyone! This time I will briefly explain the LoRA creation feature in TensorArt, which allows you to create your own training files.
⚠️ As I am self-studying, I would appreciate it if you could read this for reference only ⚠️

Overview:
LoRA (Low-Rank Adaptation) is an efficient method for fine-tuning AI models. TensorArt allows you to easily create your own training files with specific styles and characteristics using LoRA. This lets you generate images for individual projects and creative needs.

Process:
1. Login and menu selection: First, log into TensorArt and select the Online Training option from the menu.
2. Upload images: Add the images you want to train on to the upload area on the left. Prepare multiple images with the specific characteristics you want, such as the character's expression or pose.
3. Set model and trigger word: Select the model theme, the base model (choose from anime, reality, 2.5D, or standard), and set the trigger word. Depending on your purpose, a trigger word is not always necessary.
4. About tags: Tags are automatically generated for each image when you upload it. Click on an uploaded image to check, delete, or add tags. By deleting the tags for the features you want the LoRA to learn, you can make it learn those features more accurately; adjusting the tags improves the quality of the generated images. (If this is your first time, feel free to skip this step.)
5. Run the training: Once the settings are complete, click the "Train Now" button at the bottom right. Training can take minutes to hours, and you can track progress on a dedicated page. The amount of credits consumed varies with the number of images, training sessions, and epochs, so proceed in a planned manner.
6. Download the file: Once training is complete, download the generated LoRA file and use it for actual image generation.
7. Host the model: Create a project from "Host my model", enter the necessary information, and click the create button.

Completion and testing:
Try generating images with the trained LoRA file. Features can be learned well even from a small number of images, but with fewer reference images, similar compositions and poses are more likely to be generated. Completed: "Shizuku-chan" general-purpose XL.

Credit consumption:
Creating a LoRA consumes credits, and the amount varies depending on the number of images, training sessions, and epochs, so it is important to check how many credits you need in advance and proceed accordingly. If you don't have enough credits, consider purchasing more or a Pro plan.

Summary:
TensorArt's LoRA creation function lets you create illustrations with a higher degree of freedom. The feature is easy to use even for beginners, and with LoRA your creative projects will be even more fulfilling. Let's create!!
Prompt reference for "Lighting Effects"

Prompt reference for "Lighting Effects"

Hello! I usually add "lighting/lighting effects" words when generating images, so here I will introduce some of the words I use. Please note that these words alone do not provide 100% effectiveness: the effect you get will differ depending on the base model, the LoRA, the sampling method, and where you place the word in the prompt.

Words related to "lighting effects":
・ Backlight: light from behind the subject.
・ Colorful lighting: the subject itself is not recolored, but the color changes depending on the light.
・ Moody lighting: natural lighting, not direct artificial light.
・ Studio lighting: the artificial lighting of a photography studio.
・ Directional light: a light source that shines parallel rays in a chosen direction.
・ Dramatic lighting: lighting techniques from the field of photography.
・ Spot lighting: artificial light concentrated on a small area.
・ Cinematic lighting: an umbrella term for several lighting techniques used in movies.
・ Bounce lighting: light reflected by a reflector board or similar.
・ Practical lighting: compositions that depict the light source itself in the frame.
・ Volumetric lighting: a term from 3DCG; it tends to produce images with a divine golden light source.
・ Dynamic lighting: I don't really understand what it means, but it tends to create high-contrast images.
・ Warm lighting: creates a warm picture illuminated with warm colors.
・ Cold lighting: a cold light source.
・ High-key lighting: soft light, minimal shadows, low contrast, resulting in bright frames.
・ Low-key lighting: high contrast, but a somewhat weak impression.
・ Hard light: strong light; highlights appear strong.
・ Soft light: faint, diffused light.
・ Strobe lighting: strong artificial light (stroboscopic lighting).
・ Ambient light: ambient or indoor lighting.
・ Flash lighting: for some reason, the characters themselves tend to emit light, and there are often flashes of light (flash lighting photography).
・ Natural lighting: tends to create a natural-looking picture, in contrast with artificial light.
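As an illustration (my own example; results will vary by model and LoRA), a prompt combining a few of these words might look like:

1girl, city street at night, rain, cinematic lighting, backlight, volumetric lighting, high contrast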
My Models list & need Suggestions in comment. Part - 1

My Models list & need Suggestions in comment. Part - 1

Hello all! I have uploaded my models in three different categories.

1st - ColorMax
I made the ColorMax models only for comic, cartoon, and anime art styles. All my models with the ColorMax title have different art styles.

Models list:

ColorMax (SDXL) - https://tensor.art/models/628320743211433592
This was my first SDXL model, with a unique 2.5D + anime art style. It works better with detailed prompts.

ColorMax Anime-Verse (SD1.5) - https://tensor.art/models/667900735154841978
A detailed semi-anime + 2.5D style. I made two versions: V1 was a beta, and V2 is the full, highly detailed release, but it takes 25 or more steps to make a proper image. I will start work on V3 soon.

ColorMax Anime (Shiny) - https://tensor.art/models/660959423759510296
My first, best, and most detailed anime model. I made it with a 3 GB+ anime image dataset, and I am glad and thankful for everyone's love and the positive response. I added "Shiny" because I made two models, and the second one has a dark style.

ColorMax Anime (Dark) - https://tensor.art/models/667209700686802899
Same as ColorMax Anime (Shiny), but trained on a dark anime dataset; it has some issues. I planned to make a V2 but decided to give other models priority first, according to demand.

ColorMax Eye details - https://tensor.art/models/680228304906628453
After lots of requests, I made an eye detailer. I am glad people love it.

ColorMax AniAir - https://tensor.art/models/700620204037809468
A model for multi-colorful anime art; it can easily make sci-fi, fantasy, or any unique anime art.

ColorMax Anime V2 - https://tensor.art/models/701391262336569195
After many requests, and some small prompt issues in ColorMax Anime (Shiny), I decided to improve it and made V2: highly detailed, with a low-steps setting that needs only 20 steps to make an image.

ColorMax Neon lights and background - https://tensor.art/models/695080125722681685
My first LoRA made with TA Training, mainly for neon styles in 2.5D or anime images.

ColorMax Toon Style - https://tensor.art/models/713960892150149766
My newest SD1.5 model; very detailed, with a cartoon style.

ColorMax Anime Shiny Detailer & Enhancer - https://tensor.art/models/715722275418201651
A detailer for SDXL colorful anime, made with TA Training.

I am not going to add Pony models to this list. To be continued in Part 2...

The Significance of "BREAK" in Stable Diffusion Prompts

Understanding Prompts and Tokenization
Stable Diffusion interprets prompts by dividing them into tokens, with earlier tokens often having a stronger influence on the resulting image than later ones. The model processes prompts in blocks of 75 tokens, making the order of tags within the prompt crucial for achieving the desired visual effects.

The Role of "BREAK"
"BREAK" creates a deliberate separation within the prompt: it fills the current block out to the full 75 tokens even if the text so far is shorter, so that everything after it starts in a fresh block. This resets the influence of subsequent words, allowing for more controlled and segmented impacts within the prompt.

Practical Example
Consider the following prompt sequence:

Score_9, Score_8_up, Score_7_up, Score_6_up, Score_5_up, Score_4_up, masterpiece, best quality, BREAK (FuturEvoLabBadge:1.5), cyberpunk badge, BREAK Cyberpunk style, Cyberpunk girl head, wings composed of brushes behind her, BREAK front view, Purple energy gem, neon colors, intricate design, symmetrical pattern, futuristic emblem, vibrant hues, high-tech, black background

This prompt is divided into segments using "BREAK" to manage the influence of each section:
1. Quality and style (Score_9 through best quality): ensures the image is of the highest quality.
2. Cyberpunk badge ((FuturEvoLabBadge:1.5), cyberpunk badge): specifies a detailed, futuristic badge with a cyberpunk theme.
3. Character description (Cyberpunk style, Cyberpunk girl head, wings composed of brushes behind her): describes the main character and specific elements such as wings made of brushes.
4. Additional elements (front view, Purple energy gem, neon colors, intricate design, symmetrical pattern, futuristic emblem, vibrant hues, high-tech, black background): these details enhance the overall composition with specific colors, patterns, and a high-tech appearance.

Benefits of Using "BREAK"
1. Controlled influence: isolates specific tags to minimize unwanted interactions and maintain visual coherence.
2. Editing flexibility: allows easier adjustments and refinements of prompts without drastically altering other parts of the image.
3. Precision: ensures that certain descriptive tags only affect the intended parts of the image.

By strategically placing "BREAK" in your prompts, you can significantly enhance control over the image generation process, leading to more precise and visually appealing results.

Conclusion
Inserting "BREAK" into Stable Diffusion prompts is a powerful method for advanced users aiming for high levels of control and detail in AI-generated images. It helps manage the influence of different tags, ensuring each element of the prompt contributes as intended to the final output. By understanding and applying "BREAK," users can improve their prompt-crafting skills, leading to more sophisticated and desired AI art creations.
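As a small appendix, here is a Python sketch (my own; it uses a naive word count, whereas the real 75-token limit is counted by the CLIP tokenizer) that splits a prompt at its BREAK markers so you can inspect each block separately:

def split_at_breaks(prompt: str) -> list[str]:
    # Split the prompt into the segments separated by the BREAK keyword.
    return [seg.strip(" ,") for seg in prompt.split("BREAK")]

prompt = ("score_9, score_8_up, masterpiece, best quality, BREAK "
          "(FuturEvoLabBadge:1.5), cyberpunk badge, BREAK "
          "front view, neon colors, black background")

for i, segment in enumerate(split_at_breaks(prompt), start=1):
    # Rough size check only: a word count merely approximates CLIP tokens.
    print(f"Block {i} (~{len(segment.split())} words): {segment}")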

Tips for new Users

Intro
Hey there! If you're reading this, you're probably new to AI image generation and want to learn more. If you're not, you probably already know more than me :). Yeah, full disclosure: I'm still pretty inexperienced at this whole thing, but I thought I could still share some of the things I've learned with you! So, in no particular order:

1. You can like your own posts
I doubt there's anyone who doesn't know this already, but if you're posting your favorite generations and you care about getting likes, you can always like them yourself. Sketchy? Kinda. Do I still do it? Yes. And on the topic of getting more likes:

2. Likes will often be returned
Whenever I receive a like on one of my posts, I'll look at that person's pictures and heart any that I particularly enjoy. I know a lot of people do this, so one of the best ways to get people to notice and like your content is to just browse through posts and be generous with your own likes. It's a great way to get inspiration too!

3. Use turbo/lightning LoRAs
If you find yourself running out of credits, there are ways to conserve them. When I'm iterating on an idea, I'll use an SDXL model (Meina XL) paired with this LoRA. This lets me get high-quality images in 10 steps for only 0.4 credits! It's really nice, and it works with any SDXL model. Unfortunately, if there is a similar method for speeding up SD 1.5 models, I don't know it, so it only works with XL.

4. Use ADetailer smartly
ADetailer is the best solution I've found for improving faces and hands. It's also a little difficult to figure out, so, though I'm still not a professional with it, I thought I could share some of the tricks I've learned. The models I normally use are face_yolov8s.pt and hand_yolov8s.pt; the "8s" versions are better than the "8n" versions, though they are slightly slower. In addition to these models, I'll often add the Attractive Eyes and Perfect Hand LoRAs respectively. These are all just little things you can do to improve these notoriously hard parts of image generation. Also, using ADetailer before upscaling the image is cheaper in terms of credits, though the upscaling process can sometimes mess up the hands and face a little bit, so there's some give and take there.

5. Use an image editing app
Wait a minute, I hear you saying, isn't this a guide for using Tensor Art? Yes, but you can still use other tools to improve your images. If I don't like a specific part of my image, I'll download it, open it in Krita (or Photoshop or GIMP) and work on it. My art skills are pretty bad (which is why I'm using this site in the first place), but I can still remove, recolor, or edit certain aspects of the image. I can then reupload it to Tensor Art and use img2img with a high denoising strength to improve it further. You could also just try inpainting the specific thing you want to change, but I always find it a bit of a struggle to get inpaint to make the changes I want.

6. Experiment!
The best way to learn is to do, so just start generating images, fiddling with settings, and trying new things. I still feel like I'm learning new stuff every day, and this technology is improving so fast that I don't think anyone will ever truly master it. But we can still try our hardest and hone our skills through experimentation, sharing knowledge, and getting more familiar with these models. And all the anime girls are a big plus too.

Outro
If you have anything to add, or even a tip you'd like to share, definitely leave a comment and maybe I can add it to this article. This list is obviously not exhaustive, and I'm nowhere near as talented as some of the people on this platform. Still, though, I hope to have helped at least one person today. If that was you, maybe give the article a like? I appreciate it a ton, so if you enjoyed, just let me know. Thanks for reading!

Batch Resize Images

https://www.presize.io/
Batch resize images for dataset training (models, LoRA, etc.) to the size you need (512x512, 1024x1024, etc.) in one go. Allows tweaking each image's scale and zoom.

Optional:
https://www.birme.net/
Also batch resizes images for dataset training (models, LoRA, etc.) to the size you need (512x512, 1024x1024, etc.) in one go.
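If you prefer to do this locally instead of through a website, here is a minimal Python sketch using Pillow (the folder names and target size are placeholders; adjust to your dataset) that center-crops each image to a square and resizes it:

import os
from PIL import Image

SRC, DST, SIZE = "raw_images", "dataset", (1024, 1024)  # hypothetical paths/size

os.makedirs(DST, exist_ok=True)
for name in os.listdir(SRC):
    if not name.lower().endswith((".png", ".jpg", ".jpeg", ".webp")):
        continue
    img = Image.open(os.path.join(SRC, name)).convert("RGB")
    # Center-crop to a square first so the resize does not distort the image.
    side = min(img.size)
    left, top = (img.width - side) // 2, (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side)).resize(SIZE, Image.LANCZOS)
    img.save(os.path.join(DST, os.path.splitext(name)[0] + ".png"))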
How to enable "Mature Content" (N-SFW or R-18) and S*ensitive Word lists  ლ(́◕◞౪◟◕‵ლ)

How to enable "Mature Content" (N-SFW or R-18) and S*ensitive Word lists ლ(́◕◞౪◟◕‵ლ)

I believe the most important guideline for new users should be: how to enable "Mature Content" ლ(́◕◞౪◟◕‵ლ)
Just kidding. Alright, I just saw the new article-publishing feature and noticed that there are very few people here, so I'll take the initiative to explain it ( ~'ω')~

How to enable "Mature Content":
Go to settings; the option to enable "Mature Content" should be the first button. Activate it and you're good to go, enjoy! (́◉◞౪◟◉‵)
If you don't want N-SFW content to cause your social de/ath, I suggest enabling "Blur Mature Content." This way, all N-SFW content will be blurred. This button is located just below "Mature Content" and will only appear once you have enabled "Mature Content."

Q: Why should I enable "Mature Content"?
A: Besides some reasons you don't want to admit, it's because Tensor Art activated AI filtering for N-SFW content. Unfortunately, the AI filter they're using is not very stable yet, and it often mistakenly identifies SFW images as N-SFW. If you haven't enabled "Mature Content," the misjudged images you post will be hidden from your account; even you won't be able to see your own pictures. From your perspective, you might think that the image you just posted was immediately deleted by Tensor Art.
So, I'm also forced to enable "Mature Content." I'm doing this for academic purposes, definitely not for any reasons I don't want to admit. I'm pure, really! Please believe me! (◕ܫ◕)

Q: Would it be harmful to my account if many of my posts were classified as N-SFW content?
A: No, it wouldn't have any negative impact on your account. Your posts will simply not be visible to users who haven't enabled "Mature Content" or to unregistered visitors. It may also make you wonder if the AI is trolling you.
Updated on 23 May 2024: if you post child p*ornography related images, your account may be banned; please be aware. In addition, please be careful with "The Forbidden Fox" (see "The Forbidden Fox" incident below for more information).

Q: How does Tensor Art's AI review my posts?
A: I'm not an official representative, but based on my observations, Tensor Art's AI determines whether your post contains N-SFW content by analyzing the text prompt, rather than directly analyzing the image itself. Therefore, you may notice that Tensor Art's AI can detect N-SFW content less than a second after posting (although its accuracy may...) (:3 」∠ )

Here are some prompts that I have observed which might not resemble N-SFW content but could still be mistakenly identified as such (as of May 23, 2024; may be updated from time to time).
S*ensitive Word lists:
b*reast, b*are, s*eductive, b*lood, unc*ensored, s*exy, ad⚠️ult, hard⚠️core

Updated on 23 May 2024: "The Forbidden Fox" incident
Date of occurrence: May 22, 2024
Hazard level: post replacement
Details: If your prompt contained certain specific words, including "ad⚠️ult" and "hard⚠️core," your post would be automatically banned by the AI filter. Images were replaced with an illustration of a fox wearing a crown interacting with a safe, accompanied by the message "Websites Oops! Forbidden," as shown in the images below.
The individuals affected by this fox image would not be aware that their pictures had been replaced: in their own accounts they would see their original images instead of the fox image, while others would only see the fox image. The purpose behind the fox's interaction with the safe remains unknown, but it is evident that the victims' images were replaced and taken without their knowledge.
According to official sources, the appearance of this image was considered abnormal, and a team of hunters was dispatched to eliminate the fox. The encounter between the hunters and the fox took place on ██th of ██, and the battle officially commenced. The battle lasted for ██ hours, with the hunters attempting to resolve the situation using the "UPDATE" method. However, the fox retaliated with a curse called "BUG." During this period, the "BUG" disrupted the order of all users' posts, with over ██ posts hidden by the fox and approximately ██ users affected, forcing the hunters to retreat.
Later, the hunters regrouped and sought the assistance of a sage. They utilized time-reversal magic to restore the posts affected by the "BUG" back to normal. With the sage's cooperation, the hunters employed the "UPDATE" method once again and successfully executed the long-wicked fox.
Currently, prompts such as "ad⚠️ult" and "hard⚠️core" will no longer result in post replacements, and the majority of previously stolen posts have been successfully recovered. As stated by official personnel, the affected images can be reported for restoration. If you discover that your pictures have been taken by this fox, please report it immediately through the official Discord bug-report channel. The official Discord link is located in the top right corner of the website, just to the left of your avatar. If you can't locate it, you can also click on this link.

Q: Why does the AI filter often make mistakes?
A: Perhaps Tensor Art's AI is still only ten years old, but she will grow up (I hope).

[ 🔥🔥🔥 SD3 MEDIUM OPEN DOWNLOAD - 2024.06.12 🔥🔥🔥]

Finally! It's happening! The Medium version will be released first!

Stability.AI Co-CEO Christian Laporte has announced the release of the weights:

"Stable Diffusion 3 Medium, our most advanced text-to-image model, will soon be available! You can download the weights from Hugging Face starting Wednesday, June 12.

SD3 Medium is the SD3 model with 2 billion parameters, designed to excel in areas where previous models struggled. Key features include:
• Photorealism: overcomes common artifacts in hands and faces to deliver high-quality images without complex workflows.
• Typography: provides powerful typography results that surpass the latest large models.
• Performance: optimized size and efficiency make it ideal for both consumer systems and enterprise workloads.
• Fine-tuning: can absorb fine details from small datasets, perfect for customization and creativity.

SD3 Medium weights and code are available for non-commercial use only. If you wish to discuss a self-hosting license for commercial use of Stable Diffusion 3, please fill out the form below and our team will contact you shortly."

The future of AI image generation: endless possibilities

Introduction
{{For those who are about to start AI image generation}}
In recent years, advances in AI technology have brought about revolutionary changes in the field of image generation. In particular, AI-powered illustration generation has become a powerful tool for artists and designers. However, as this technology advances, issues of creativity and copyright arise. In this article, we will explain the possibilities of AI image generation, specific use cases, how to create prompts, how to use LoRA and its effects, keywords for improving image quality, and considerations around copyright.

Fundamentals of AI image generation
AI image generation uses artificial intelligence to learn from data and generate new images. Deep learning techniques are often used for this, and one notable approach is Stable Diffusion. Stable Diffusion employs a probabilistic method called a diffusion model, which gradually removes noise during image generation, resulting in highly realistic, high-quality output.

Generating realistic images
AI technology is excellent not only for creating cute illustrations, but also for generating realistic images. For example, you can generate high-resolution images that resemble photorealistic landscapes or portraits. By utilizing Stable Diffusion, it is possible to generate more detailed images, which expands the possibilities of application in various fields such as advertising, film production, and game design.

Generating cute illustrations
One practical application of AI image generation is the creation of cute illustrations. This is useful for things like character design and avatar creation, allowing you to quickly generate different styles. The process typically involves collecting a large dataset of illustrations, training an AI model on this data to learn different styles and patterns, and generating new illustrations based on user input or keywords.

Creativity and AI
AI image generation also influences creative ideas. Artists can use AI-generated images as inspiration for new works or to expand on ideas, which can lead to the creation of new styles and concepts never thought of before.

Use and effects of LoRA
LoRA (Low-Rank Adaptation) is a technique used to improve the performance of AI models. Its benefits include:
1. Fine-tuning models: LoRA allows you to fine-tune existing AI models to learn specific styles and features, allowing for customization based on user needs.
2. Efficient learning: LoRA reduces the need for large-scale data collection and training costs by efficiently training models on small datasets.
3. Rapid adaptation: LoRA allows you to quickly adapt to new styles and trends, making it easy to generate images tailored to current needs.
For example, LoRA can be leveraged to efficiently achieve high-quality results when generating illustrations in a specific style.

Creating a prompt
When instructing an AI to generate illustrations, it's important to create effective prompts. Key points include providing specific instructions, using the right keywords, trial and error, and optionally a reference image to help the AI figure out what you're looking for.

Keywords for improving image quality
When creating prompts for AI image generation, you can incorporate keywords related to image quality to improve the overall quality of the generated images. Useful keywords include "high resolution," "detail," "clean lines," "high quality," "sharp," "bright colors," and "photorealistic."

Copyright considerations
Image generation using AI also raises copyright issues. If the dataset used to train an AI model contains copyrighted works, the resulting images may infringe copyright. When using AI image generation tools, it's important to be aware of the data source, ensure that the generated images comply with copyright law, and check the license agreement.

Conclusion
AI image generation offers great possibilities for artists and designers, but it also raises challenges related to copyright. By using data responsibly and understanding copyright law, you can leverage AI technology to create innovative work. Leveraging technologies like LoRA can further improve efficiency and quality, and users can adjust the output by incorporating image-enhancement keywords into the prompt. Let's explore new ways of expression while staying aware of advances in AI technology and the considerations that come with them!

Your work can be sold at...

You can sell models, LoRAs, etc. on Ko-fi and Sociabuzz.

In today's digital era, machine learning and AI tools are valuable assets that can be leveraged in various applications, from developing innovative technologies to conducting cutting-edge research. If you have the skills to create high-quality models, LoRAs, and other related resources, you have a great opportunity to monetize those skills. Two platforms where you can sell ready-to-use AI resources are Ko-fi and Sociabuzz.

Why sell AI resources?
Well-prepared models and related resources are in high demand among researchers, developers, and tech companies. The process of creating and fine-tuning these resources often requires significant time and expertise. By providing ready-to-use AI resources, you can help others save time and effort while earning financial rewards for your work.

The Ko-fi platform
Ko-fi is a popular (global) platform that allows creators to receive financial support from their fans and buyers. Although initially designed to support artists and content creators, Ko-fi has become a good place to sell digital products, including AI models and more.
Steps to sell AI resources on Ko-fi:
1. Create a Ko-fi account: sign up and create a profile on Ko-fi.
2. Add digital products: use the "Shop" feature to add your models, LoRAs, or other AI resources as digital products.
3. Describe the resource: create an informative and engaging description, including details such as the type of model, use cases, and performance metrics.
4. Set a price: determine a reasonable price, considering the quality and uniqueness of the product you offer.
5. Promote your product: share the link to your Ko-fi page on social media, blogs, or relevant communities to attract potential buyers.

The Sociabuzz platform
Sociabuzz is another popular platform in Indonesia (though, like Ko-fi, it can be used by anyone from any country) that allows creators to receive support from their fans and buyers. Additionally, Sociabuzz provides features for selling services and digital products.
Steps to sell AI resources on Sociabuzz:
1. Create a Sociabuzz account: sign up and create a profile on Sociabuzz.
2. Add services or products: use the "Create Service" feature to add your models, LoRAs, or other AI resources as products for sale.
3. Describe the resource: write a detailed description, including important information like the type of model, use cases, and performance metrics.
4. Set a price: set a competitive price for your resource.
5. Promote your Sociabuzz page: use social media, forums, and blogs to promote your page and attract interested buyers.

Tips for success
Quality is key: ensure that the models and resources you sell are high-quality, well-tested, and relevant to market needs.
Clear descriptions: clearly explain what is included in your resource and how it can be used.
Free samples: consider providing free samples of your resources to attract buyer interest.
Active promotion: actively promote your product on various platforms and in relevant communities.

By leveraging platforms like Ko-fi and Sociabuzz, you can sell ready-to-use AI models, LoRAs, and other related resources to a broader audience while earning income from your hard work in creating and fine-tuning these tools. Start today and see how your AI resources can become a valuable source of revenue!

Resources Kpop / Actress

Kpop
https://kpopping.com/
Notes: log in first, then search for the idol you want in the search field, click on the list of photos you want, and press the download menu (no need to download one by one, as you would when downloading from Google, etc.). There are other websites I know, but they are a little more complicated because you download one by one, and the content is more or less the same as kpopping, while kpopping can download a lot at once (you will see once you try it).

Actress
https://www.hancinema.net/
Notes: no need to log in, but you must download one by one. First search for the desired artist in the search menu, then click on the artist's profile, then click on the pictures menu / more pictures, then download.
What exactly are the "node" and the "workflow" in AI image platform (explanation for the beginner)

What exactly are the "node" and the "workflow" in AI image platform (explanation for the beginner)

The Traditional Way of Generating AI Images for the Beginner

If you are a beginner in the AI community, you may be very confused and have no clue about what a "Node" and a "Workflow" are, and how they relate to the "AI Tools" on TensorArt.

To start in the simplest way, we first need to mention how a user generates an image using the "Remix" button, which brings us to the normal Creation menu. By just editing the prompt (what you would like your picture to look like) and the negative prompt (what you do not want to see in the output image), then pushing the Generate button, the wonderful AI tool will kindly draw a new illustration for you within a minute!

That sounds great, don't you think? Especially if we imagine how much time humans used to spend publishing just one single piece of art. (Yeah, today, in 2024, in my personal opinion, AI and human abilities are still not fully interchangeable, especially in terms of a beautiful, perfect hand :P) However, the backbone behind the user-friendly menu that lets us "Select model", "Add LoRA", "Add ControlNet", "Set the aspect ratio (the original size of the image)" and so on is a collection of "Nodes" in a very complex "Workflow".

PS.1. The Checkpoint and the Model often refer to the same thing. They are the core program that has been trained to draw the illustration. Each one has its strengths and weaknesses (e.g., anime-oriented or realistic-oriented).
PS.2. A LoRA (Low-Rank Adaptation) is like an add-on to the model, allowing it to adapt to a different style, theme, or user preference. A concrete example is an anime character LoRA.
PS.3. ControlNet is like a condition setting for the image. It helps the model truly understand what the text prompt alone cannot describe, for instance how a character poses in each direction and the angle of the camera.

So here comes "ComfyFlow" (a nickname of the Workflow; people also call it "ComfyUI"), which gave me a super headache when I saw something like this for the first time in my life! (The image is a flow I spent a lot of time studying; it combines what is in two images into a single one.) Yeah, maybe it is my fault that I did not take a class about workflows from the beginning or search for a tutorial on YouTube first (as English is not my first language). But wouldn't it be better if we had an instructor to walk us through it step by step here on Tensor.Art? That is what inspired me to write this article solely for beginners. So let's start with the main content.

What is ComfyFlow?

ComfyFlow, or the Workflow, is an innovative AI image-generating system that lets users create stunning visuals with ease. To get the most out of this tool, it's important to understand two key concepts: "workflow" and "node". Let's break these down in the simplest way possible.

What is a Workflow?

A workflow is like a blueprint or a recipe that guides the creation of an image. Just as a recipe outlines the steps to make a dish, a workflow outlines the steps and processes needed to generate an image. It's a sequence of actions that the AI follows to produce the final output. Think of it like this:
Recipe (Workflow): tells you what ingredients to use and in what order.
Ingredients (Nodes): each step or component used in the recipe.

Despite the recommended pre-set templates that TensorArt kindly gives its users, from a beginner's viewpoint, without knowledge of the workflow they are not that helpful, because after clicking the "Try" button we are bombarded with the complexity of the nodes!

What is a Node?

Nodes are the building blocks of a workflow. Each node represents a specific action or process that contributes to the final image. In ComfyFlow, nodes can be thought of as individual steps in the workflow, each performing a distinct function. Imagine nodes as parts of a puzzle: individual pieces that fit together to complete the picture (the workflow).

How Do Workflows and Nodes Work Together?

1-2) Starting point: every workflow begins with initial nodes, such as an image input from the user, together with the Checkpoint and LoRA serving as image references.
3-4) Processing nodes: these nodes draw or modify the image in some way, such as adding color or texture, or applying filters.
5) Ending point: the final node outputs the completed image, and it works very closely with the nodes of the previous stage in terms of sampling and the VAE.

PS. A Variational Autoencoder (VAE) is a generative model that learns from input data, such as images, to reconstruct them and generate new, similar images or variations based on the patterns it has learned.

Here is the list of nodes I used for ordinary image generation of my Waifu, using 1 checkpoint and 2 LoRAs, to help you understand how ComfyFlow works. The numbers 1-5 represent the overall flow and the role of each type of node mentioned above. However, for more complex tasks like AI Tools, the number of nodes is sometimes higher than 30!

By the way, when starting from an empty ComfyFlow page, the way to add a node is: Right Click -> Add Node -> then look near the top of the list, since the most frequently used nodes are there.

1) loaders -> Load Checkpoint
As in the normal task creation menu, this is the node where we choose the Checkpoint, or core model. It is important to note that nodes work together through inputs and outputs. The "MODEL/CLIP/VAE" output circles have to connect to the corresponding inputs of the next node. We link them by left-clicking inside a circle and dragging to the destination.
PS. CLIP (Contrastive Language-Image Pre-training) is a model developed by OpenAI that links images and text together in a way that helps AI understand and generate images based on textual descriptions.

2) loaders -> Load LoRA
The Checkpoint is very closely related to the LoRA, which is why they are connected through the inputs/outputs named "model/MODEL" and "clip/CLIP". Since in this example I used 2 LoRAs (the first for the theme of the picture, the second as the character reference for my Waifu), the two LoRA nodes have to be chained together as well. Here we can adjust the strength (weight) of each LoRA, just like in the normal task generation menu.

3) CLIP Text Encode (Prompt)
This node holds the prompt and negative prompt we normally see in the menu. The only input here is "clip" (from CLIP, Contrastive Language-Image Pre-training) and the output is "CONDITIONING".
User tip: if you click on an output circle of the "Load LoRA" node and drag it to an empty area, ComfyFlow will pop up a list of compatible next nodes, so you can create the new node with ease.

4) KSampler & Empty Latent Image
The sampling method tells the AI how it should start generating visual patterns from the initial noise, and everything associated with its adjustment is set here in this sampling node, together with the "Empty Latent Image" node. The inputs at this step are the model (from the LoRA node) and the positive and negative conditionings (from the prompt nodes), and the output is "LATENT".

5) VAE Decode & final output node
Once we set up the sampling node, its "LATENT" output has to connect to "samples", while "vae" is the link between this node and the "Load Checkpoint" node from the beginning. And when everything is done, the "IMAGE" final output will be served into your hands.

PS. An AI Tool is a more complex workflow created to do a specific task, such as swapping a face in the original picture with a target face, or changing the style of an input illustration into another one, etc.
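To make the graph idea concrete, here is a minimal sketch of how such a workflow can be written down as data, in the spirit of ComfyUI's API JSON format. The node IDs, model file names, and prompts are placeholders of my own; treat this as an illustration of how each node's outputs (references like ["1", 0]) feed the inputs of later nodes, not as a workflow you can import as-is.

```python
# A minimal sketch of a ComfyUI-style workflow as a graph of nodes.
# File names and prompts are placeholders; [node_id, output_index]
# references are how one node's output feeds another node's input.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "anime_model.safetensors"}},        # placeholder
    "2": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0], "clip": ["1", 1],
                     "lora_name": "waifu_character.safetensors",       # placeholder
                     "strength_model": 0.8, "strength_clip": 0.8}},
    "3": {"class_type": "CLIPTextEncode",                              # positive prompt
          "inputs": {"clip": ["2", 1], "text": "1girl, silver hair, smile"}},
    "4": {"class_type": "CLIPTextEncode",                              # negative prompt
          "inputs": {"clip": ["2", 1], "text": "lowres, bad hands"}},
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 768, "batch_size": 1}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["2", 0], "positive": ["3", 0], "negative": ["4", 0],
                     "latent_image": ["5", 0], "seed": 42, "steps": 25,
                     "cfg": 7.0, "sampler_name": "euler",
                     "scheduler": "normal", "denoise": 1.0}},
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "SaveImage", "inputs": {"images": ["7", 0]}},
}
```

Reading it top to bottom mirrors steps 1-5 above: checkpoint -> LoRA -> prompts -> KSampler with an empty latent -> VAE decode -> saved image.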
Favorite works (Re)


This is my favorite work that I have created 😊 (This is a repost because the previous article disappeared due to an error)
How to Unlock the Power of CUDA 12.1 with DWPose and Onnxruntime-GPU


Unlocking the Power of CUDA 12.1 with DWPose and Onnxruntime-GPU: A Simple Guide

If you're looking to harness the capabilities of DWPose or Onnxruntime with CUDA 12.1, follow these straightforward steps to get started.

Step-by-Step Installation Guide:

1. Uninstall the old version of Onnxruntime-GPU. Begin by removing any existing version to avoid conflicts:
pip uninstall onnxruntime-gpu

2. Install PyTorch with CUDA 12.1. Install the specific versions of PyTorch, Torchvision, and Torchaudio that are compatible with CUDA 12.1:
pip install torch==2.3.0+cu121 torchvision==0.18.0+cu121 torchaudio==2.3.0+cu121 --extra-index-url https://download.pytorch.org/whl/cu121

3. Install Onnx and Onnxruntime:
pip install onnx onnxruntime

4. Install Onnxruntime-GPU for CUDA 12.1 using the provided extra index URL:
pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/

Portable version installation. If you need a portable setup, use:
python_embeded\python.exe -m pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/

By following these steps, you'll be equipped to fully utilize the power of CUDA 12.1 with DWPose and Onnxruntime-GPU, ensuring optimal performance and compatibility for your projects. Happy coding!
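After installing, it can save debugging time to verify that both libraries actually see the GPU. Here is a minimal sketch of such a check (assuming the installs above succeeded on a CUDA 12.1 machine):

```python
# Quick sanity check: confirm PyTorch and onnxruntime-gpu detect the CUDA device.
import torch
import onnxruntime as ort

print(torch.__version__, "CUDA available:", torch.cuda.is_available())  # expect True
print(ort.get_available_providers())  # expect 'CUDAExecutionProvider' in this list
```

If 'CUDAExecutionProvider' is missing, the CPU-only onnxruntime is probably shadowing the GPU build, which is exactly why step 1 uninstalls it first.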
Mastering Graph Representation Learning: Unlocking the Mysteries of Node Mapping and Random Walks


Graph Representation Learning: Unveiling the Secrets of Node Mapping and Random Walks

Node Mapping: Simplifying Complexity
Node mapping is like using mathematical functions to represent images. By translating an image into a function, we can drastically simplify it. This technique also allows us to reduce the dimensionality of the data, shrinking the overall data volume from a power of n (for instance, an n x n adjacency structure) down to the order of 2n, thus significantly reducing the total amount of data.

The Magic of Random Walks
Unbiased Random Walks: imagine exploring a maze where every path is equally probable. An unbiased random walk represents this scenario: the probability of moving in any direction is the same.
Biased Random Walks: now picture a maze where some paths are more likely to be chosen. A biased random walk occurs when there is a higher probability of moving in certain directions.

Decoding with Encoders
Feature Processing: encoders help us process data features, enabling us to understand the similarities between different data points based on their paths.
Similarity Representation: think of it as measuring the closeness of two friends based on how often their paths cross.

Deep Walks with Consistent Steps
Random Walk: begin by performing random walks on the graph. Start at a node and move randomly along the edges of the graph for a set number of steps, forming a random walk sequence. This is like exploring the connections between nodes by wandering through the graph.
Word2Vec Model: apply the Word2Vec model to the random walk sequences to learn vector representations of nodes. The model treats node sequences as word sequences, using a neural network to learn node vectors. Nodes that frequently appear together in sequences end up closer in vector space.
Node Representation Learning: finally, map each node into a low-dimensional vector space using the trained Word2Vec model. These representations can be used for graph analysis, node classification, link prediction, and more.

Visualizing Graphs in Images
For a generated image, the walk path represents the movement path of pixels or feature points in the image. This path records the movement trajectory on the image, starting from an initial position and following certain rules. Specifically, the walk path can be interpreted as:
Feature Point Movement Trajectory: if the nodes in the image represent feature points or key points, the walk path can be understood as the movement trajectory between these points. This path captures the structural information or key features of the image.
Pixel Scanning Order: if the image is treated as a collection of pixels, the walk path represents the scanning order of these pixels. This path helps traverse the image and capture its content or texture information.
Object Movement Path: if the nodes in the image represent objects, the walk path can be understood as the movement path of these objects. This path simulates the motion or behavior of objects within the image.

Graph representation learning unveils the hidden connections and movements within data, transforming complex relationships into understandable patterns. Through node mapping and random walks, we can unlock new insights and applications in data analysis and visualization.
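To ground the "random walks + Word2Vec" recipe, here is a minimal DeepWalk-style sketch of my own (not from the article), using networkx for the graph and gensim for Word2Vec. The example graph, walk count, and vector size are arbitrary choices for illustration.

```python
# DeepWalk in miniature: unbiased random walks, then Word2Vec over the walks.
# Requires: pip install networkx gensim
import random
import networkx as nx
from gensim.models import Word2Vec

G = nx.karate_club_graph()  # small, well-known example graph

def random_walk(graph, start, length=10):
    """Unbiased random walk: every neighbor is equally likely at each step."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return [str(n) for n in walk]  # Word2Vec expects sequences of string tokens

# Treat walks as "sentences" and nodes as "words".
walks = [random_walk(G, node) for node in G.nodes() for _ in range(20)]
model = Word2Vec(walks, vector_size=32, window=5, min_count=0, sg=1, epochs=5)

# Nodes that often co-occur on walks end up close in vector space.
print(model.wv.most_similar("0", topn=5))
```

The learned vectors (model.wv) are exactly the node representations the article describes, ready for clustering, classification, or link prediction.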
Enchanting Script Creation and Animation Prep for Knitted Masterpieces


Captivating Script Creation and Animation Preparation for Knitted Products

Image Generation
- Precision and Style: harnessing the power of SD for ultimate control, ensuring a cohesive series of images (utilizing masks, showcasing the same attire on different models, and seamlessly switching model poses).

Seamless Transitions
- Immersive Zoom Out: experience the magic of scene transitions with Midjourney's captivating zoom-out effects.

Product Showcase
- Dynamic Scene Changes: highlight your products through smooth and engaging scene transitions.

Animation Production
- Advanced Techniques: employ cutting-edge tools like Ipiv's Morph, img2vid, AnimateDiff, LCM, and Hyper-SD for superior continuity and fluidity between keyframes.

Keyframe Mastery
- Auxiliary Elements: enhance the narrative with detailed auxiliary elements.
- Unified Elements: combine auxiliary elements with main elements for a richer visual experience.
- Style Transformation: watch as main elements undergo stunning style transitions.
- Final Masterpiece: culminate in a breathtaking final image.

Deform
- Unique AI Visuals: dive into scene transitions that feature AI's distinctive, large-scale imagery.

Runway Magic
- Dynamic Movements: capture small yet impactful model movements and action close-ups for a compelling visual experience.

Enchanting Style Transfer
- Fantasy Meets Reality: blend fantasy with the knitted aesthetic for a mesmerizing look.

Script Preparation
- Perfect Timing: craft a 15-second masterpiece, with each segment highlighting a unique workflow effect.

Segment One: Eight Stunning Images
- Zoom out (1), texture close-up (2), fabric close-up (3), product close-up (4), model close-ups (5, 6, 7), scene close-up (8).
- Each image displayed for a tantalizing 0.5 seconds.

Segment Two: Dreamlike Transitions
- Fall into a surreal lake, maintaining the main element's action frames while switching scenes.

Segment Three: Keyframe Brilliance
- Four keyframes featuring Ipiv's Morph, showcasing auxiliary elements (water splashes, lines) and the main elements (knitted product models).

Segment Four: Artistic Style Transfer
- Transform realistic water splash images into a knitted fantasy style using LoRA.
- Compare materials and IP effects for a stunning conclusion.

Bring your knitted products to life with this captivating and visually enchanting script and animation preparation. Let your audience be mesmerized by the blend of advanced techniques and artistic creativity.
Did you know? Tensor.Art's [Enhance Prompt] feature


Hello everyone. Today I would like to introduce a feature of Tensor.Art that is often overlooked: [Enhance Prompt]. Have you heard of it? It sits a little inconspicuously on the image generation screen you usually use, but it is very useful.

1. What is this feature?
Enhance Prompt automatically generates multiple prompts that are enriched and reconstructed by AI from the words of the prompt you entered. This makes the original prompt richer and more diverse, and increases the variety of the generated images.

2. How to use it
It's very easy!
Enter the prompt: first, enter the prompt for the image you want to generate as you normally would.
Click the button: click the "Enhance Prompt" button at the bottom right of the prompt input field.

3. Check the suggested prompts
The AI automatically analyzes the prompt you entered and suggests several restructured and enhanced prompts. All you have to do is choose the one you like and click the Select button; it will be automatically filled into the prompt field. Then generate as usual!

4. Uses
Broaden your ideas: prompt enhancement can reveal new expressions and ideas that you never thought of.
Improve the quality of your projects: more sophisticated prompts can improve the quality of the generated images.
Break down creative barriers: when you're stuck for ideas, this feature can help you find new inspiration.

5. Actual effects
By using prompt enhancement, you can achieve the following:
Generate more diverse images: increase the variety of images generated from the original prompt.
Improve accuracy: with more specific and detailed prompts, you can more easily generate images that are closer to your expectations.
Save time: you avoid the trouble of manually iterating on prompts.

Summary
As you can see, Tensor.Art's Enhance Prompt is a powerful tool that can further stimulate your creativity and streamline the image generation process. Give it a try and see the effects for yourself!
List of style collection - focusing on anime character examples (continually updated)


AI image-generating platforms like Tensor.Art offer diverse anime styles, enabling users to create artwork in various distinct styles inspired by popular anime aesthetics. These collections aim to cater to different preferences, from classic to contemporary anime illustration, all in one place.

P.S.1 I will keep updating this post, maybe every 2 weeks, whenever I find a unique style (either LoRA or model) that is worth listing here, purely from my own perspective. If anyone has a list of favorite styles in mind, feel free to share them here or even create your own post. :D

P.S.2 People normally mix multiple LoRAs at once, and the core model (checkpoint) varies in base style depending on the prompt used. Therefore, in the following examples I choose only a single LoRA or checkpoint to represent each style, without mixing anything. However, if there is any confusion about what contributes to a style, I apologize in advance, since I am just a beginner in the art community.

Here are some examples:
- Anime Lineart / Manga-like (线稿/線画/マンガ風/漫画风) Style (LoRA): https://tensor.art/models/623935989624337542
- Spacezin Sketch Style (LoRA): https://tensor.art/models/638083414328801488
- Cute Chibi - V.1 (LoRA): https://tensor.art/models/726716640076597245
- CAT - Citron Anime Treasure (Checkpoint): https://tensor.art/models/713607777118974323
- LizMix V.7.0 (Checkpoint): https://tensor.art/models/721034681811855891
- Flower style (LoRA): https://tensor.art/models/699582840586758007
- Art Nouveau Style - Oosayam (LoRA): https://tensor.art/models/654562112921690173
- Torino Style - v.2.0.09 (LoRA): https://tensor.art/models/705577639974520212
- Yody PVC 3D Print - 1.0 (Checkpoint): https://tensor.art/models/673632484975460872
- Eldritch Expressionism style (LoRA): https://tensor.art/models/708171473803739178
- [Y5] Impressionism Style 印象派风格 (LoRA): https://tensor.art/models/621173217551417505
- surrealism - 2024-02-17 (LoRA): https://tensor.art/models/695557949424221333
- pop-art - 01 style (LoRA): https://tensor.art/models/697182692602582375
- FF Style: Kazimir Malevich | Suprematism (LoRA): https://tensor.art/models/655758742350092928

Hoping these collections (today and in the future) will allow A.I. artists and enthusiasts to generate anime-inspired images effortlessly, blending creativity with advanced AI technology to bring their visions to life. :D
ComfyUI - AnimateDiff - DW Pose - Face Swap - ReActor - Face Restore - Upscayl - Video Generation Workflows


Video Generation Workflows (30 nodes)
Download Workflows.json 👈👈
My video gallery link 🎥🎬
My models list (need suggestions in the comments). Part 2


Hello all!

2nd: ☣NUKE. The ☣NUKE models are mainly based on people's demands and requests. ☣NUKE covers all styles of models, plus enhancers and detailers too. If anyone requests a style in the future, that model will be uploaded under ☣NUKE.

Model list:

☣NUKE - PhotoRealism = https://tensor.art/models/662687551095681601
One of the best models I ever made. It needs only 25 steps to make amazing, detailed realistic images, and I'm glad people love it.

☣NUKE - Semi-Realistic = https://tensor.art/models/662684763661880107
This model is basically for detailed semi-realistic images. V1 had some issues, so I improved it and made V2.

☣NUKE - Disney Pixar Cartoon = https://tensor.art/models/662777277257529287
This was my first Disney-style model. It had many issues I tried to fix, but SD1.5 is not good enough to make a V2.

☣NUKE - Perfect Anime Details = https://tensor.art/models/663438977099025152
As people demanded, I made a perfect body and anime color LoRA. This LoRA is good for anime characters.

☣NUKE - Realistic Fantasy = https://tensor.art/models/666406043586181918
This model was a test of how people like the realistic + fantasy style, but it is a bit redundant because one good checkpoint and LoRA can achieve it easily.

☣NUKE - Real Sci-fi = https://tensor.art/models/711223782506802179
This model makes detailed cinematic and sci-fi style images. V1 had some issues, which I fixed in V2.

☣NUKE - RealMix = https://tensor.art/models/705011516040072711
I made many mistakes in V1 of this model, but I improved it in V2, and I'm glad people like it; it is one of my best detailed realistic models.

☣NUKE - Realistic = https://tensor.art/models/670457593610683566
Same as ☣NUKE - PhotoRealism but with better prompt understanding.

☣NUKE - RealisticMix = https://tensor.art/models/672847742911096880
Same as RealMix but with amazing prompt understanding, and it is more detailed. Also try making images with my PhotoRealism Enhancer and Perfect Real Photo LoRA; together they make amazing images.

☣NUKE - Perfect Real Photo = https://tensor.art/models/672995640109825255
As you saw in the photo above, this LoRA is mainly for enhancing photography and making images look realistic. Set the width from 5 to 8; this LoRA is a little sensitive, so start with a width of 5.

☣NUKE - Disney Pixar Style SDXL = https://tensor.art/models/677504969223601395
One of my best and most detailed models. After the SD1.5 Disney model failed, I made this one in SDXL with a larger and more detailed dataset. I am very thankful people love it and are still making amazing art with it.

☣NUKE - SDXL Art & Real Detailer + Enhancer = https://tensor.art/models/679365785279483346
Mainly requested by artists who make amazing and very unique styles with SDXL models, and I am very glad it works for them.

☣NUKE - AniMix = https://tensor.art/models/682723887064160000
An anime character maker that works best with LoRAs; it can handle any kind of anime.

☣NUKE - PhotoRealism V2 = https://tensor.art/models/682801217950248993
Same as ☣NUKE - PhotoRealism, but with a lot of improvements, made to give detailed results even for users with simple prompts.

☣NUKE - Art DiffusionXL = https://tensor.art/models/684590930822489042
A combo of all styles: 2.5D, realistic, comic, and anime. Its only issue is that it needs detailed prompts; better prompt, better image.

☣NUKE - PhotoRealism Enhancer = https://tensor.art/models/703121077594962012
This LoRA mainly enhances realism: skin, hair, environment, and clothes. It always works amazingly with any checkpoint.

☣NUKE - Realistic Fusion = https://tensor.art/models/707622057292305356
A combination of sci-fi, fantasy, and photography; amazing images with very realistic detail.

I am not going to add Pony models to this list. In the last part, I will talk about some of my unique models.
Understanding the Impact of Negative Prompts: When and How Do They Take Effect?


📝 - Synthical
The Dynamics of Negative Prompts in AI: a comprehensive study by Yuanhao Ban (UCLA), Ruochen Wang (UCLA), Tianyi Zhou (UMD), Minhao Cheng (PSU), Boqing Gong, and Cho-Jui Hsieh (UCLA).

This study addresses the gap in understanding the impact of negative prompts in AI diffusion models. By focusing on the dynamics of the diffusion steps, the research aims to answer the question: "When and how do negative prompts take effect?" The investigation categorizes the mechanism of negative prompts into two primary tasks: noun-based removal and adjective-based alteration.

The role of prompts in AI diffusion models is crucial for guiding the generation process. Negative prompts, which instruct the model to avoid generating certain features, have been studied far less than their positive counterparts. This study provides a detailed analysis of negative prompts, identifying the critical steps at which they begin to influence the image generation process.

Findings

Critical Steps for Negative Prompts
Noun-Based Removal: the influence of noun-based negative prompts peaks at the 5th diffusion step. At this critical step, the negative prompt initially generates the target object at a specific location within the image, then neutralizes the positive noise through a subtractive process, effectively erasing the object. However, introducing a negative prompt in the early stages paradoxically results in the generation of the specified object. Therefore, the optimal timing for introducing these prompts is after the critical step.
Adjective-Based Alteration: the influence of adjective-based negative prompts peaks around the 10th diffusion step. During the initial stages, the absence of the object leads to a subdued response. Between the 5th and 10th steps, as the object becomes clearer, the negative prompt accurately focuses on the intended area and maintains its influence.

Cross-Attention Dynamics
At the peak around the 5th step for noun-based prompts, the negative prompt attempts to generate objects in the middle of the image, regardless of the positive prompt's context. As this process approaches its peak, the negative prompt begins to assimilate layout cues from its positive counterpart, trying to remove the object. This represents the zenith of its influence. For adjective-based prompts, during the peak around the 10th step, the negative prompt maintains its influence on the intended area, accurately targeting the object as it becomes clear.

The study highlights the paradoxical effect of introducing negative prompts in the early stages of diffusion, which leads to the unintended generation of the specified object. This finding suggests that the timing of negative prompt introduction is crucial for achieving the desired outcome.

Reverse Activation Phenomenon
A significant phenomenon observed in the study is reverse activation. This occurs when a negative prompt, introduced early in the diffusion process, unexpectedly leads to the generation of the object it specifies. To explain this, the researchers borrowed the concept of the energy function from Energy-Based Models to represent the data distribution. Real-world distributions often feature elements like clear blue skies or uniform backgrounds, alongside distinct objects such as the Eiffel Tower. These elements typically have low energy scores, making the model inclined to generate them. The energy function is designed to assign lower energy to more "likely" or "natural" images according to the model's training data, and higher energy to less likely ones.

A positive energy difference indicates that the presence of the negative prompt effectively induces the inclusion of the component in the positive noise: the negative prompt promotes the formation of the object within the positive noise. Without the negative prompt, the implicit guidance alone is insufficient to generate the intended object; applying the negative prompt early intensifies the distribution guidance towards the object instead of preventing it from materializing. As a result, negative prompts typically do not attend to the correct place until around step 5, well after the positive prompt has taken hold, and using negative prompts in the initial steps can significantly skew the diffusion process, potentially altering the background.

Conclusions
- Use no fewer than about 10 sampling steps; going beyond roughly 25 steps makes no further difference for negative prompting.
- Negative prompts can enhance your positive prompts, depending on how well the model and LoRA have learned their keywords, so they can be understood as an extension of their counterparts.
- Weighting up negative keywords may cause reverse activation and break your image; try keeping the influence ratios of all your LoRAs and models equal.

Reference
https://synthical.com/article/Understanding-the-Impact-of-Negative-Prompts%3A-When-and-How-Do-They-Take-Effect%3F-171ebba1-5ca7-410e-8cf9-c8b8c98d37b6
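To make the timing idea concrete, here is a toy sketch of my own (not the paper's code) of classifier-free guidance where the negative prompt is only introduced after a critical step. The denoiser `predict_noise` is a hypothetical stand-in for a real diffusion UNet, and the update rule is a toy substitute for a real scheduler; only the control flow is the point.

```python
# Toy sketch: delay the negative prompt past the critical step (~5 for nouns)
# so it cannot trigger reverse activation in the early, layout-forming steps.
import numpy as np

def predict_noise(latent: np.ndarray, prompt: str) -> np.ndarray:
    """Hypothetical epsilon prediction; a real model conditions on text embeddings."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.standard_normal(latent.shape) * 0.1

def cfg_step(latent, positive, negative, scale=7.0):
    """Classifier-free guidance: the negative prompt replaces the unconditional branch."""
    eps_pos = predict_noise(latent, positive)
    eps_neg = predict_noise(latent, negative)
    return eps_neg + scale * (eps_pos - eps_neg)

latent = np.zeros((4, 64, 64))
for step in range(25):
    # Before the critical step, fall back to an empty prompt.
    neg = "" if step < 5 else "hat, glasses"
    eps = cfg_step(latent, "portrait of a woman", neg)
    latent = latent - 0.1 * eps  # toy update standing in for a real scheduler
```

The study's step counts (5 for noun removal, about 10 for adjective alteration) would slot directly into the `step < 5` threshold.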
The Importance of Data Cleansing for AI LoRA Training Datasets: A Case of Anime Characters


In the world of artificial intelligence, particularly in training models using Low-Rank Adaptation (LoRA), the quality and integrity of the dataset play a pivotal role. This is especially true when dealing with specialized domains such as anime characters, where aesthetic and stylistic nuances are crucial. Here, we explore the importance of retouching and cleansing image datasets before using them in the LoRA training process, aimed at beginners.

Ensuring Data Quality
Anime characters are often depicted with a high degree of stylistic consistency, so the dataset must be impeccable to train an AI model that can accurately generate or recognize these characters. Raw image datasets frequently contain noise, irrelevant details, and inconsistencies that can confuse the model. Retouching images involves enhancing their quality, removing noise, and correcting visual imperfections, ensuring each image meets a high standard. This step is essential to avoid training the model on flawed representations, which could lead to poor performance.

(Image: an example dataset I was too lazy to spend time retouching. In this case, Scama from Overlord is an anime character with limited fanart pictures.)
(Image: the result I obtained when generating images with my trained LoRA. This problem can be mitigated with a negative prompt, but that is not always accurate.)

So... don't be lazy: clean your data in the first place!!! :P

Removing Irrelevant Data
Datasets often include images that, while related, do not serve the training purpose. For example, background scenes, side characters, or promotional art with a different artistic direction can dilute the learning process. Cleansing the dataset involves filtering out these irrelevant images and ensuring that the model is trained only on relevant data. This specificity allows the AI to develop a deeper and more precise understanding of the main characters and their typical representations.

(Image: a good example of a training dataset with only simple backgrounds.)

Enhancing Feature Recognition
Anime characters are defined by distinct features such as eye shapes, hairstyles, and clothing details. Retouching images to highlight these features can significantly improve the model's ability to recognize and reproduce them. Techniques such as adjusting contrast, sharpening details, and standardizing colors ensure that these defining characteristics are prominent in the training data, helping the model learn what makes each character unique. However, when reference images are scarce, going back to basics by commissioning a human artist may be needed.

(At the beginning of creating a LoRA for my favorite waifu, who had only a single-digit number of low-resolution fanart images, I decided to spend money on good, high-resolution image references. That helped a lot: the AI gradually drew the details of my character more accurately. Artist name: พิมพ์วิมล เจิมมงคล)

Avoiding Bias and Redundancy
Datasets can inadvertently introduce bias if certain character poses, expressions, or angles are overrepresented. Cleansing the dataset involves ensuring a balanced representation of the characters' various aspects, preventing the model from becoming biased towards specific images. Additionally, removing redundant images that add no new information helps optimize the training process, making it more efficient and effective.

(Although I'm still lazy about data cleansing, at least while training the SDXL model of Calca I spent extra effort to select 100+ reference images carefully, especially for variety in style and expression, despite a dominance of upper-body portraits.)

Conclusion
In the AI training process, particularly with specialized applications like anime character generation or recognition using LoRA, the importance of retouching and cleansing the image dataset cannot be overstated. High-quality, consistent, and relevant data are the cornerstones of successful AI model training. By investing time in retouching and cleansing datasets, developers can ensure that their AI models achieve high accuracy and produce results that meet the aesthetic and stylistic standards expected in the anime domain.
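As a practical footnote to the above, here is a minimal first-pass cleanup sketch. The folder path and resolution floor are assumptions of my own; it only flags unreadable files and low-resolution images, which is a cheap way to start the cleansing the article recommends.

```python
# First-pass dataset cleanup: flag corrupt files and images below a size floor.
# Requires: pip install Pillow
from pathlib import Path
from PIL import Image

MIN_SIDE = 512                    # assumption: below this, fine details train poorly
dataset = Path("dataset/calca")   # hypothetical dataset folder

for path in sorted(dataset.glob("*")):
    try:
        with Image.open(path) as img:
            img.verify()          # cheap integrity check; file must be reopened after
        with Image.open(path) as img:
            w, h = img.size
        if min(w, h) < MIN_SIDE:
            print(f"low-res, consider removing: {path.name} ({w}x{h})")
    except Exception:
        print(f"corrupt, remove: {path.name}")
```

Deduplication, balance checks, and actual retouching still need human judgment; this only clears the obvious failures before you spend time on them.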
A List of Prompts/Themes I Like


Hey there! This is mostly just for personal use, I like having all of my favorite topics in one place, but feel free to use this list as inspiration if you need ideas! I liked the idea of collecting a bunch of words and elements I use and compiling them into one list, so here we are! It's subject to change, as I'll likely keep adding to and improving it, but it's a good start.

So, without any further ado, and in no particular order, here are a few of my favorite things:

People/Bodies:
- silver/white hair
- very long hair
- side-shave
- red eyes
- gold eyes
- blank eyes
- grey skin (really cool, but hard to get working sometimes)
- wings
- angels/demons
- automatons (clockwork, not mecha or androids)
- plastic/metal skin/face

Clothing:
- sweater dress
- hoodie
- onesie (I don't use this one a lot, but I still like it)
- halter top
- slit dress
- asymmetrical clothes
- off-the-shoulder
- suit
- jacket
- exposed midriff
- deep v-neck
- wide-brimmed hat
- mask (not COVID style)
- wet clothes
- dancer clothes? (not entirely sure what they're called)

Props/Subjects:
- instruments
- scythes
- animals (some)
- ruins
- fantastical landscapes
- flowers
- glass items
- mushrooms
- magic

Themes/Styles:
- monochrome/duochrome
- minimalism
- watercolor
- abstract (sometimes)
- gold
- eldritch
- water
- music
- dancing
- abandoned things
- art-deco
- space/stars (duh?)
- comedy
- miniature
- lineart
- low-poly
- bioluminescence
- cursed generations

Misc.:
- anime girls (duh.)

And, that's a wrap! It's definitely not comprehensive, but it's a good start. (Cover art: literally all of these words in one prompt:)
ComfyUI - FreeU: You NEED This! Upgrade any model


FreeU WORKFLOWS
ComfyUI-FreeU (YouTube) explanation
The Importance of LoRA (Low-Rank Adaptation) - anime example and comparison


Among the myriad techniques that drive AI image-generating platforms, Low-Rank Adaptation (LoRA) stands out as particularly crucial and effective.

Understanding LoRA: The Fundamentals
Low-Rank Adaptation is a technique that reduces a large model's complexity by decomposing its weight updates into low-rank matrices. This decomposition allows for efficient storage and computation, which is particularly valuable in AI image generation, where models often require substantial computational resources.

Improved Model Adaptability
One of the challenges in AI image generation is the need to adapt models to different styles, themes, and user preferences. LoRA facilitates this adaptability through efficient fine-tuning and transfer learning.

Style Transfer: LoRA can fine-tune pre-trained models to generate images in specific artistic styles or adapt to the visual themes needed. This is achieved without extensive retraining, thanks to the reduced parameter space.

Note: all of the images below use the same prompt and parameters except the style-related LoRA.
(Image: CalicoMix FlatAni v1.0 without any other LoRAs apart from the one for my waifu, Calca Bessarez)
(Image: CalicoMix FlatAni v1.0 with DesillusionRGB as an extra LoRA)
(Image: CalicoMix FlatAni v1.0 with Glitter and Shiny details as an extra LoRA)
(Image: CalicoMix FlatAni v1.0 with Retro Lofi - Pop Art (Style) v2.0 as an extra LoRA)
(Image: CalicoMix FlatAni v1.0 with Fireflies ホタル as an extra LoRA)
(Image: CalicoMix FlatAni v1.0 with tshee - vector style art v1.0 as an extra LoRA)

Personalization: users can personalize image generation by training models on their own datasets or preferences, especially for specific art styles, sceneries, or characters. LoRA enables these customizations to be performed quickly and efficiently, enhancing user satisfaction and engagement, like Calca Bessarez in the example above.
(Image: CalicoMix FlatAni v1.0 with Swamp / Giant Tree Forest 绪儿-巨树森林背景 - XRYCJ as an extra LoRA)

However, when eliminating variation, customizing the outfit, or changing the character's physical appearance, the LoRA weight needs to be considered: the default of 0.8 to 1.0 stays similar to the original design, above 1.0 looks very close to the original design, and below 0.5 pays little attention to the original design.
(Image: CalicoMix FlatAni v1.0 without any LoRA, even the one for the character design, so the result for the same prompt is just an ordinary girl)
(Image: CalicoMix FlatAni v1.0 with Calca's LoRA at weight 0.3; there is no kingdom symbol on her chest and the tiara's shape is not the original one)
(Image: CalicoMix FlatAni v1.0 with Calca's LoRA at weight 0.5; her dress and tiara are very close to the original, but there is no kingdom symbol on the chest)

A Key Limitation
Although Low-Rank Adaptation offers significant benefits in AI image-generating platforms, including reduced computational complexity and memory usage, it also comes with limitations that can affect its effectiveness and applicability:

1) Approximation Errors: LoRA approximates a high-dimensional matrix with two lower-dimensional matrices. This approximation can introduce errors that affect the performance and quality of the model, specifically loss of detail and bias in representation.

2) Model Compatibility and Integration: while LoRA is effective in many scenarios, integrating it into existing AI frameworks and models can present challenges such as compatibility issues, as not all models are equally suited to low-rank approximations.

3) Scalability Limitations: although LoRA helps reduce the computational load, there are still scenarios where scalability remains an issue. Extremely large models: even with the reduced matrices, an extremely large model still requires considerable computational resources and memory. Real-time constraints: in applications demanding ultra-low latency, such as real-time image processing, the approximation process might still introduce unacceptable delays.
(Image: result of real-time generation from the same prompt used in the previous examples; in this case even the main model, CalicoMix FlatAni v1.0, is not available)

4) Complexity and Availability on the Platform: LoRA models, despite being optimized for efficiency, still consume computational resources such as memory and processing power. Allowing an unlimited number of LoRAs could:
Overwhelm system resources: the computational demands of managing multiple LoRA models simultaneously could exceed the available system resources, leading to slower performance or crashes.
Increase latency: each additional LoRA model increases the complexity of the image generation process, potentially leading to higher latency and slower response times for users.

This might be a reason why many AI image-generating platforms limit the number of LoRAs for one batch of images (3 for free users and 6 for Pro users in the case of Tensor.Art). Yeah, I'm not here to sell the subscription, but if you have no financial constraints, it would be very helpful to support our dedicated developers here on Tensor.Art, myself included :P, and you get the benefit of using up to 6 LoRAs in total.
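For readers who want to see the low-rank idea itself, here is a minimal PyTorch sketch of my own (an illustration of the technique, not any platform's implementation). The rank, alpha, and layer sizes are arbitrary example values; the point is that only the two small matrices A and B are trained while the base weight stays frozen.

```python
# The low-rank trick behind LoRA: adapt a frozen weight W with a product B @ A,
# training only rank*(d_in + d_out) parameters instead of d_in*d_out.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # the original model stays frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # starts as a no-op
        self.scale = alpha / rank  # behaves like the "LoRA weight" slider in UIs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(2, 768))
print(out.shape)  # torch.Size([2, 768]), same as the base layer
```

Scaling `self.scale` up or down mirrors the 0.3 / 0.5 / 0.8 weight comparison in the Calca examples above: it directly multiplies how much the adaptation overrides the base model.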
Contrast Boost XL


Contrast Boost XL!!!

This is a Contrast Boost slider model. Slide to the right and the contrast goes up, making your image pop with more contrast and sharper details. But here's the cool part: slide to the left and the contrast goes down, giving your image a softer, more subtle look. It's perfect for fine-tuning your photos to get just the look you want. Try it out and see the difference it can make!

More information: Model link

Your feedback is invaluable to me. Feel free to share your experiences and suggestions in the comment section. For more personal interactions, join our Discord server where we can discuss and learn together. Thank you for your continued support!
ControlNet Dw_openpose ComfyUi


ControlNet Dw_openpose
Things to consider before training.


The Importance of Proper Dataset Selection in Training to Prevent Overfitting

In the realm of machine learning, a well-performing model hinges significantly on the quality and appropriateness of the training dataset. One of the critical challenges during model training is overfitting, where the model learns the training data too well, including its noise and outliers, resulting in poor generalization to new, unseen data. To mitigate overfitting, it is imperative to select and curate the right dataset. Here is why a proper dataset is essential in preventing overfitting, and how that can be achieved.

Understanding Overfitting
Overfitting occurs when a model becomes overly complex, capturing not only the underlying patterns in the training data but also the noise. This leads to high accuracy on the training dataset but poor performance on validation or test datasets. Essentially, an overfitted model has memorized the training data rather than learning to generalize from it. The issue is particularly prevalent in datasets that are too small, noisy, or unrepresentative of the problem space.

The Role of a Proper Dataset
Diversity and Representativeness: a good dataset should be diverse and representative of the various scenarios the model will encounter in real-world applications. This means including a wide range of examples, ensuring that the model learns to generalize across different patterns and conditions rather than memorizing specific instances.
Sufficient Size: the size of the dataset is a crucial factor. Small datasets often lead to overfitting because the model does not have enough examples to learn the underlying patterns adequately. Larger datasets provide more varied examples, reducing the chance of overfitting.
Balanced and Unbiased Data: an imbalanced dataset, where certain classes or conditions are overrepresented, can bias the model towards those classes and lead to overfitting on them. Ensuring that the dataset is balanced helps the model generalize across all classes more effectively.
Clean and Preprocessed Data: noisy data with errors or irrelevant information can mislead the model during training. Proper preprocessing, such as removing outliers, normalizing values, and handling missing data, is essential to provide the model with clean data that accurately reflects the problem domain.
Augmentation Techniques: data augmentation creates variations of the training data through transformations such as rotations, translations, and scaling. This artificially increases the dataset's size and diversity, helping to prevent overfitting by exposing the model to more varied examples.

Strategies to Ensure a Proper Dataset
Cross-Validation: splitting the dataset into multiple training and validation folds gives a better estimate of the model's performance and helps identify overfitting. This method ensures that the model is tested on different subsets of data, promoting better generalization.
Regularization: applying regularization techniques such as L1 or L2 penalizes overly complex models, encouraging simpler models that generalize better. This approach works well in conjunction with a well-curated dataset.
Data Splitting: properly splitting the data into training, validation, and test sets is crucial. The training set is used to train the model, the validation set to tune hyperparameters, and the test set to evaluate the final model's performance. Ensuring that these sets are representative of the entire dataset helps achieve a balanced training process.
Monitoring Learning Curves: by monitoring the learning curves of training and validation losses, practitioners can spot overfitting early. If the training loss keeps decreasing while the validation loss starts increasing, that is a clear sign of overfitting (a minimal sketch of these checks follows below).
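Here is the promised sketch, using scikit-learn on synthetic data as an assumed stand-in for a real dataset. It demonstrates the two cheapest checks from the list above: comparing train vs. validation accuracy, and 5-fold cross-validation.

```python
# Spotting overfitting: a large train/validation gap, checked two ways.
# Requires: pip install scikit-learn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(max_depth=None, random_state=0)  # deliberately flexible
model.fit(X_train, y_train)

train_acc = model.score(X_train, y_train)
val_acc = model.score(X_val, y_val)
print(f"train={train_acc:.3f} val={val_acc:.3f}")  # a large gap signals overfitting

# Cross-validation gives a more robust estimate than a single split.
cv_scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print("5-fold accuracy:", cv_scores.mean())
```

The same habit transfers directly to image-model training: watch the validation curve, not just the training loss.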
Area Composition


Get more specific generations every time!

Have you ever heard of area composition? Area composition is a technique where you specify and set custom locations for every element you want to generate. To build this simple but effective workflow, all you need are the following nodes:

1. Load Checkpoint: here you select your desired model.
2. Load LoRA: here you select your desired style with any LoRA (optional).
3. CLIP Set Last Layer: this node works as your Clip Skip (set it to -2 for better results).
4. CLIP Text Encode: this is where your lovely prompt goes. You will need two of these, because one works as your positives and the other as your negatives.
5. KSampler: this node is important because it is like the brain of the main process. Here your prompt and image size are read and transformed into an image. You can use the sampler and scheduler you like most (set the denoise strength to 1.0 for better results).
6. Empty Latent Image: as important as the KSampler, this node is where you decide the exact size of your initial image (portrait or landscape).
7. CLIP Text Encode: wait, again? Yes. Just like the earlier ones, this node focuses on a specific element you want to generate. It is important to keep it simple and consider only the main element to represent. (You can have as many of these nodes as elements you want to generate; keep in mind that they only work as positives. For this example I use 2 of them.)
8. MultiArea Conditioning: this is the most important node of the process. For explanation purposes, I will call each one of my positives a conditioning:
- conditioning 0 is my first positive (the one from step 4);
- conditionings 1 and 2 are my second and third positives (the ones from step 7).
It is very important to know that for each conditioning you have to set a desired size and position. In this example I set conditioning 0 to 512x718, because it is the base prompt and I want the whole canvas to represent it. For conditioning 1, my main character, I set 384x576 in the lower part of the center of the canvas. And for conditioning 2, the background/setting, I set 512x718 because I want the whole canvas to work as the background. (You may notice that while setting each conditioning's position, a different color shows on the MultiArea Conditioning node. Keep calm; these colors are just a visual representation of each element's position.) Also important: as you may have figured out, this node works as a super detailed composition instruction, so it acts as your positive; be sure to connect it as the positive input of your KSampler. (The sketch after this article shows what these per-area conditionings amount to.)
9. Upscale Latent: up to this point we have only created the base image, so it is time to upscale it. The Upscale Latent node not only upscales the image to a desired size but also introduces more detail in the process.
10. KSampler: yes, again. This second KSampler works together with the Upscale Latent node to refine details, so using the same configuration as the first one (step 5) is a good idea. Lowering the denoise strength on this second KSampler helps avoid drastic changes; for this example I set it to 0.5.
11. VAE Decode: the variational autoencoder (VAE) node is important because it transforms the latent noise and commands into your beautiful masterpiece.
12. Preview/Save Image: lastly, add the Preview/Save Image node. (This one does not need an explanation, right?)

And there you go, you can now generate more personalized images. Intended image to create: a cyborg girl inside an abandoned building.

Do not forget to set this article as a favorite if you found it useful. Happy generations!
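Here is the sketch referenced in step 8: an illustrative data layout of my own (not a real node API) showing the information each MultiArea conditioning carries, using the exact sizes from the example above.

```python
# What MultiArea conditioning encodes: one prompt per region of the canvas.
conditionings = [
    # (prompt, x, y, width, height); conditioning 0 covers the whole 512x718 canvas
    ("base scene, cinematic lighting",   0,   0, 512, 718),
    # conditioning 1: the main character, lower middle of the canvas
    ("cyborg girl, detailed face",      64, 142, 384, 576),
    # conditioning 2: the background, again the full canvas
    ("abandoned building interior",      0,   0, 512, 718),
]

for i, (prompt, x, y, w, h) in enumerate(conditionings):
    print(f"conditioning {i}: '{prompt}' -> area {w}x{h} at ({x},{y})")
```

The (x, y) offset for conditioning 1 is an assumed placement; in the actual node you drag the region into position and the colors show you where each one sits.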
How to Create Effective Prompts for AI Image Generation


In recent years, artificial intelligence (AI) technology has advanced rapidly, including in the field of image generation. One popular application of AI is generating images from text prompts. This process, known as text-to-image generation, allows users to create digital images simply by providing a text description. This article discusses how to create effective prompts for generating images with AI and offers tips for achieving optimal results.

What is a Prompt for Image Generation?
A prompt is a text description used to give an AI model instructions about the image you want to generate. AI models like DALL-E, Stable Diffusion, and MidJourney use these prompts to understand the context and details desired by the user, then generate an image that matches the description.

How to Create an Effective Prompt
Be Clear and Specific: the clearer and more specific your prompt, the more accurate the generated image will be. For example, instead of just writing "dog", you could write "a golden retriever playing in a park on a sunny afternoon with a clear sky".
Use Relevant Keywords: include relevant keywords to help the AI model understand the essential elements of the image you want to create. These keywords can cover the subject, setting, mood, colors, and style.
Describe Emotions and Atmosphere: if you want an image with a particular mood or emotion, mention it in your prompt. For instance, "a peaceful mountain landscape with a warm sunset" provides a specific atmosphere compared to just "mountains".
Include Visual Details: visual details like colors, textures, and composition greatly assist in generating an image that matches your vision. For example, "a vintage red car with white stripes on an empty road".
Experiment with Styles and Formats: if you want an image in a specific style (e.g., cartoon, realistic, painting), be sure to mention it in your prompt. For example, "a portrait of a face in cartoon style with a colorful background".

Examples of Effective Prompts
Landscape: "A vast green valley with a river flowing through it, surrounded by distant blue mountains under a clear sky."
Portrait: "A portrait of a young woman with long red hair, wearing a blue dress, standing in front of an old brick wall with soft lighting."
Animals: "A white Persian cat sleeping on a brown couch in a living room decorated with green plants and sunlight streaming through the window."
Fantasy: "A large dragon with shimmering silver scales flying over an ancient castle atop a mountain, with a night sky full of stars in the background."

Tips for Improving Results
Provide Enough Context: don't hesitate to give additional context in your prompt if it helps clarify the image you want.
Use Synonyms and Variations: if the desired result isn't achieved, try using synonyms or variations of words to describe the same elements.
Experiment with Prompt Length: sometimes longer, more detailed prompts generate better images; other times shorter, more to-the-point prompts are more effective.
Use the Right Model: each AI model has its strengths and weaknesses. Experiment with different models to find the one that best fits your needs.

By understanding how to create effective prompts, you can leverage AI technology to generate stunning images that match your desires. Happy experimenting!
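To show where these prompt and negative-prompt strings actually land in code, here is a minimal sketch using the Hugging Face diffusers library. The model id is just one example checkpoint, and the snippet assumes a CUDA GPU and a downloaded model; any Stable Diffusion checkpoint would work the same way.

```python
# Passing a detailed prompt and a negative prompt to a text-to-image pipeline.
# Requires: pip install diffusers transformers accelerate, plus a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint, not the only option
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="a golden retriever playing in a park on a sunny afternoon, clear sky",
    negative_prompt="blurry, lowres, bad anatomy",
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("dog.png")
```

Everything the article recommends, specificity, mood, style keywords, goes into the `prompt` string; unwanted traits go into `negative_prompt`.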