Articles

Some prompts I've collected


Style prompts (Stylization):
Infographic drawing, concept character sheet, character sheet style, character sheet illustration, realistic, bokeh (defocused background), ethereal, warm tones, soft lighting, natural light, ink wash, splash pigment effect, cybernetic illuminations, Neo-Pop, Art Nouveau, Grandparentcore (retro, old-fashioned style), Cleancore (minimalist style), red theme, sticker, reflection, backlit, depth of field, digital double exposure photo, blurry foreground, blurry background, motion_blur, split theme, Paisley patterns, lineart, silhouette art, concept art, graffiti art, Gothic art, Goblincore, ukiyo-e, sumi-e, magazine cover, commercial poster

View prompts:
Perspective view, three-quarter view, thigh-level perspective, close-up, macro photo, headshot, portrait, low angle shot, front and back view, various views (multi-view), panoramic view, mid-shot/medium shot, cowboy_shot, waist-up view, bust shot, torso shot, foot focus, looking at viewer, from above, from below, full body, sideways/profile view, fisheye lens, environmental portrait

Facial expression prompts:
Smile, grin, biting lip, adorable, tearing up/crying_tears, tearful, wave mouth, spiral_eyes, cheerful, nose blush, running mascara

Hairstyle prompts:
Smooth, hair over one eye, twintails, ponytail, diagonal bangs, dynamic (flowing) hair, hanging hair, ahoge (single standing strand), braid, braided bun, undercut

Ornament prompts:
Forehead mark, mole under eye (tear mole), skindentation, eyepatch, blindfold, hairpin, hairclip, headband, hair holder, hair ribbon, ribbon (bow), maid headdress, headveil, tassel, thigh strap

Clothing prompts:
Seifuku/JK (Japanese school uniform), miko (shrine-maiden outfit), idol clothes, competition swimsuit, Rococo, pelvic curtain, midriff, halterneck, enmaided (maid outfit), backless sweater, turtleneck sweater, French-style suspender skirt, winter coat, trench coat, race queen, highleg/leotard, slit skirt, stirrup legwear, fishnet stockings, thighhighs/thigh-high socks, kneehighs, toeless legwear, yoga pants, frilled (ruffled trim)

Action prompts:
Crossed_legs (sitting)/crossed legs, cross-legged sitting, semireclining position, head tilt, leaning forward, planted sword (sword stuck in the ground), heart hand duo, double thumbs up, peace sign, salute, energetic pose, sitting on seiza (formal kneeling)

Body prompts:
Thick eyebrows, abs, toned, navel, off-shoulder, tsurime (upturned eyes), cyborg, tan skin, cocoa skin, fit physique
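To turn this vocabulary into a working prompt, pick one or two terms per category and join them with commas. A small illustrative combination (no model-specific weighting assumed):

Prompt: concept art, warm tones, natural light, three-quarter view, looking at viewer, smile, twintails, hair ribbon, seifuku, head tilt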
10 Halloween Character Inspirations for Women That Still Look Beautiful - Halloween2024


Halloween isn't always about frightening or gory characters. Many artists and creators blend elements of horror with elegance and beauty, crafting Halloween characters that are memorable, haunting, and captivating. Here are 10 inspiration ideas for creating beautiful female Halloween characters for 2024. Each character includes a sample prompt to generate a unique and mesmerizing AI creation.

1. Elegant Vampire Queen
Imagine a vampire queen who is more graceful than frightening. Picture a regal vampire in a luxurious gown, with pale skin and an alluringly mysterious gaze.
Prompt: "A beautiful vampire queen with pale skin, wearing an elegant dark gown with intricate lace details, crimson lips, piercing eyes, and a mysterious, alluring smile. The scene is set in a dark Gothic castle."

2. Glamorous Modern Witch
A witch with a modern twist, wearing a sparkling outfit and bold makeup. She's mysterious yet fashion-forward.
Prompt: "A glamorous modern witch with dark, smoky makeup, dressed in a sleek black velvet dress with sparkling accessories, holding a magical crystal ball. Background with a soft mystical light."

3. Dark Forest Fairy
This fairy has a dark aura yet remains graceful. Imagine a forest fairy with long, silvery hair, a dress made of natural leaves, and a slightly eerie but enchanting look.
Prompt: "A dark forest fairy with long silvery hair, wearing a dress made of forest leaves and twigs, glowing emerald eyes, delicate wings with dark tones. A haunting, mysterious beauty surrounded by mist."

4. Victorian Ghost Princess
A ghostly character resembling a princess from the Victorian era, wearing a vintage gown with lace and transparent fabrics that give a floating effect.
Prompt: "A beautiful Victorian ghost princess with flowing white gown, lace details, soft and translucent skin, gentle expression with a hint of sorrow. Background in an abandoned mansion with soft, ethereal lighting."

5. Enchanting Angel of Death
Inspired by the Grim Reaper, but more graceful, with black wings, a mysterious aura, and captivating beauty.
Prompt: "A stunning female Grim Reaper with long black hair, dark angel wings, wearing an elegant black robe. Her expression is calm and entrancing, holding a scythe with delicate, intricate designs."

6. Deep-Sea Dark Mermaid
This mermaid captures the eerie beauty of the deep sea, with flowing dark hair, shimmering scales, and pale skin.
Prompt: "A dark deep-sea mermaid with shimmering dark blue scales, long flowing black hair, pale skin, mysterious glowing eyes, and an elegant, haunting beauty. Background in an underwater cave with bioluminescent light."

7. Seductive Succubus
Create a succubus who is charming and alluring. Perfect for a mysterious and playful, yet glamorous character.
Prompt: "A seductive succubus with dark red lips, smoky eyes, soft curls, dressed in a refined, dark lace gown, with a playful and mysterious smile. Wings are delicate yet dark."

8. Mysterious Fire Queen
This character merges the elements of fire and beauty. Picture a queen with blazing red hair and a warm yet dangerous aura.
Prompt: "A beautiful fire queen with flaming red hair, wearing an intricate gold and crimson gown with fire motifs, eyes that burn like embers, and a powerful, enchanting aura."

9. Dark Angel
A dark angel character with black wings that retains elegance and calm.
Prompt: "A dark angel with majestic black wings, long wavy dark hair, wearing a flowing black dress with silver accents, soft and serene expression with gentle, glowing eyes."

10. Vintage Beauty Zombie
This zombie inspiration is not overly scary. Create a stylish zombie character in a 1920s-inspired outfit, retaining a sense of elegance despite the undead vibe.
Prompt: "A vintage-style beautiful zombie woman from the 1920s, with soft, pale makeup, wearing a tattered but elegant flapper dress, soft curls, and a subtle, haunting smile. Background of an old, dimly lit ballroom."

With these character inspirations, artists can craft works that blend horror and beauty, creating a Halloween atmosphere that's not only eerie but also enchanting. Happy creating, and may this Halloween be filled with stunning, artistic creations!
Mastering FLUX Prompt Engineering: A Practical Guide with Tools and Examples


FLUX AI Tools:
https://tensor.art/template/768387980443488839
https://tensor.art/template/759877391077124092
https://tensor.art/template/761803391851647087
https://tensor.art/template/763734477867329638

FLUX Prompt Tools:
https://chatgpt.com/g/g-NLx886UZW-flux-prompt-pro
Although I am doing my best to optimize my AI prompt generation tool, I am currently facing malicious negative reviews from competitors. If you have any suggestions for improvement, please feel free to share, and I will do my best to make the necessary optimizations. However, please refrain from giving unfair ratings, as it really discourages my creative efforts. If you find this GPT helpful, please give it a fair rating. Thank you.

AI-generated images are revolutionizing the creative landscape, and mastering the art of prompt engineering is crucial for creating visually stunning outputs with models like FLUX. This guide provides practical steps, examples, and introduces a specialized tool to help you craft the perfect prompts for FLUX.

1. Start with Descriptive Adjectives
The foundation of any good prompt lies in the details. Descriptive adjectives are essential for guiding the AI to produce the nuances you desire. For instance, instead of a simple "cityscape," you might specify "a bustling, neon-lit cityscape at dusk with reflections on wet asphalt." This level of detail helps FLUX understand the specific atmosphere and mood you're aiming for, leading to richer and more visually engaging results.

2. Integrate Specific Themes and Styles
Incorporating themes or art styles can significantly shape the output. For example, you could combine cyberpunk elements with classic art references: "a cyberpunk city with Baroque architectural details, under a sky filled with digital rain." This blend of styles allows FLUX to draw from various visual traditions, creating a unique and layered image.

3. Utilize Technical Specifications
Beyond adjectives and themes, technical aspects like lighting, perspective, and camera angles add depth to your images. Consider using prompts such as "soft, diffused lighting" or "extreme close-up with shallow depth of field" to control how FLUX renders the scene. These details can make a significant difference, turning a simple image into a masterpiece by manipulating light and shadow, and focusing attention where it matters most.

4. Combine Multiple Elements
To achieve a more complex and detailed output, combine several of the above strategies in a single prompt. For example: "A close-up shot of a futuristic warrior standing on a neon-lit street, wearing cyberpunk armor with glowing accents, under a sky filled with dark clouds and lightning." This prompt merges detailed descriptions, stylistic choices, and technical elements to create a vivid and engaging scene (Magai).

5. Experiment and Iterate
Prompt engineering is an iterative process. Start with a basic idea and refine it based on the results FLUX generates. If the initial output isn't what you expected, adjust the adjectives, tweak the themes, or alter the technical specifications. Continuous refinement is key to mastering prompt engineering (Hostinger).

6. Utilize the FLUX Prompt Pro Tool
If you're finding it challenging to craft precise prompts, or if you want to speed up your process, try using the FLUX Prompt Pro tool. This tool is designed to generate accurate English prompts specifically for the FLUX AI model. By inputting your basic idea, the tool helps you flesh out the details, ensuring that your prompts are both clear and comprehensive.
It's an excellent way to enhance your creative process and achieve better results faster. Try it here: 🚀FLUX Prompt Pro!🚀 https://chatgpt.com/g/g-NLx886UZW-flux-prompt-pro

7. Practical Example
Let's put all these strategies into practice with an example:
Basic Idea: A futuristic city.
Refined Prompt: "A wide-angle shot of a neon-lit, futuristic city at night, with towering skyscrapers reflecting in rain-soaked streets, cyberpunk style, featuring soft backlighting from holographic billboards, and a lone figure in a trench coat standing on a rooftop."
This prompt uses descriptive adjectives, specific themes, technical specifications, and combines multiple elements to create a detailed and dynamic image. By following these steps, you can consistently produce high-quality visuals with FLUX. A small code sketch of this layering strategy follows at the end of the article.

Conclusion
Mastering FLUX prompt engineering involves blending creativity with precision. By leveraging descriptive language, specific themes, and technical details, and by iterating on your prompts, you can unlock the full potential of FLUX to generate stunning, personalized images. Don't forget to use the FLUX Prompt Pro tool to streamline your process and achieve even better results. Keep experimenting, stay curious, and enjoy creating!

======================================================
If you enjoy listening to great music while creating AI-generated art, I highly recommend subscribing to my SUNO AI music channel. I believe it will help ignite your inspiration and creativity even more. I'll be regularly updating the channel with new AI-generated music. Thank you all for your support! Feel free to leave suggestions or let me know what music styles you'd like to hear. I'll be creating more tracks in various styles over time. Here are my AI music channel and featured playlists:
Lo-fi music: https://suno.com/playlist/e1087fe1-950a-448b-94f4-ddb17ccf84d0
FuturEvoLab AI music: https://suno.com/@futurevolab
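As promised above, here is a minimal Python sketch of the layering strategy from sections 1 through 4. It only assembles a prompt string, so it runs anywhere; the layer values are taken from the article's own examples.

```python
# Minimal prompt-layering sketch: subject + adjectives + style + technical specs.
subject = "futuristic city at night"
adjectives = ["neon-lit", "rain-soaked"]            # section 1: descriptive adjectives
style = "cyberpunk style"                           # section 2: themes and styles
technical = ["wide-angle shot",                     # section 3: technical specifications
             "soft backlighting from holographic billboards"]

def build_prompt(subject, adjectives, style, technical):
    """Join the layers into one comma-separated FLUX prompt (section 4)."""
    head = "a " + ", ".join(adjectives) + " " + subject
    return ", ".join([head, style] + technical)

print(build_prompt(subject, adjectives, style, technical))
# a neon-lit, rain-soaked futuristic city at night, cyberpunk style,
# wide-angle shot, soft backlighting from holographic billboards
```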
AI Tool for Storytelling: Visualizing Fictional Worlds with Custom Parameters


The power of storytelling has always been one of the most fundamental elements of human culture, connecting people through inspiring, captivating, and mesmerizing tales. Now, artificial intelligence (AI) technology is opening new doors for creators to bring their fictional worlds to life visually. With AI tools for generating images, such as DALL·E, Stable Diffusion, and MidJourney, artists and writers can create complex fictional universes using only textual descriptions. This article explores how AI tools can support storytelling in innovative and unique ways.

Why is Visualization Important in Storytelling?
Visualization adds depth to storytelling. In fiction, the worlds built through words can often be difficult to imagine in detail. AI tools solve this challenge by:
- Bringing Abstract Concepts to Life: With simple descriptions, AI can create complex images, such as "a city at the edge of the galaxy with crystal-based architecture."
- Ensuring Visual Consistency: AI can produce a series of images following a specific theme or style, maintaining coherence throughout the narrative.
- Accelerating the Creative Process: Without requiring technical drawing skills, writers can focus on their imagination without being limited by artistic ability.

Using Custom Parameters for More Accurate Results
One of the most powerful features of AI tools is their ability to customize parameters to achieve results that align with the creator's vision. Here are some steps to make the most of custom parameters in storytelling (a short code sketch of the negative-prompt step follows at the end of the article):

1. Define Style and Atmosphere
Before starting, think about the mood of your fictional world. Is it a dark dystopia? Or perhaps a vibrant utopia? Use parameters such as:
- Lighting: "Dimly lit with neon hues."
- Texture: "Smooth and futuristic vs. rugged and natural."

2. Use Negative Prompts to Avoid Unwanted Details
If you want specific results, negative prompts help eliminate distracting elements. They are entered as a separate field alongside the main prompt. For example, for a futuristic city without trees, use:
Prompt: "A futuristic city skyline." Negative prompt: "vegetation, trees, natural elements."

3. Outpainting for Expansive Worlds
If your story requires a large and interconnected world, use the outpainting feature to expand images, creating seemingly endless landscapes.

4. Reference Base Images
For continuity, use a base image as a reference. AI will generate variations based on the image, ensuring your world feels cohesive.

Case Study: Visualizing a Fictional World with AI Tools
Imagine a writer creating a story titled "The Shattered Realms," set in a parallel world blending magic and technology.
Here's how to visualize it using AI:

Sky and Landscape:
Prompt: "A fractured sky with glowing magical rifts, over a city built on floating islands with gears and steam-powered towers."

Characters:
Prompt: "A mage with glowing blue tattoos, holding a staff of crystal shards, standing in a windswept meadow of neon flowers."

Action:
Prompt: "A battle scene between two armies, one wielding swords of fire, the other armed with glowing shields powered by ancient tech."

Advantages and Challenges
Advantages:
- Flexibility: AI allows easy exploration of various styles and concepts.
- Accessibility: High-quality visuals can be created without professional art skills.
- Multidisciplinary Collaboration: AI tools enable collaboration between writers, artists, and designers.
Challenges:
- Detail Consistency: AI sometimes generates inconsistent elements between images.
- Understanding Context: AI is still limited in comprehending complex contexts, requiring manual adjustments.

Conclusion
AI tools for generating images not only support storytelling but also redefine creative boundaries. By leveraging custom parameters, creators can visually bring their fictional worlds to life, offering a more immersive experience for readers or audiences. These tools are not just instruments but collaborative partners in crafting extraordinary stories. If you're a writer, artist, or creator, why not give these tools a try and see how far your imagination can go? Your fictional world is waiting to come to life!
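As promised, here is what the negative-prompt step looks like in code. This is a minimal sketch assuming the Hugging Face diffusers library and a Stable Diffusion checkpoint; the model id below is just a common public one, so substitute any SD-compatible checkpoint you have access to.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load any Stable Diffusion checkpoint (substitute a model id you have access to).
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a futuristic city skyline, crystal-based architecture, dusk light",
    negative_prompt="vegetation, trees, natural elements",  # keeps the scene plant-free
    num_inference_steps=30,
).images[0]
image.save("skyline.png")
```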
Negative Prompts


Avoid unwanted artifacts! Did you know using negative prompts can benefit your image? Here is a short list of good negative prompts I have collected over one year of being a Tensor user (SD 1.5, 3.0, and XL compatible).

Negatives for landscapes:
blurry, boring, close-up, dark (optional), details are low, distorted details, eerie, foggy (optional), gloomy (optional), grains, grainy, grayscale (optional), homogenous, low contrast, low quality, lowres, macro, monochrome (optional), multiple angles, multiple views, opaque, overexposed, oversaturated, plain, plain background, portrait, simple background, standard, surreal, unattractive, uncreative, underexposed

Negatives for street views:
animals (optional), asymmetrical buildings, blurry, cars (optional), close-up, creepy, deformed structures, grainy, jpeg artifacts, low contrast, low quality, lowres, macro, multiple angles, multiple views, overexposed, oversaturated, people (optional), pets (optional), plain background, scary, solid background, surreal, underexposed, unreal architecture, unreal sky, weird colors

Negatives for people:
3D, absent limbs, age spot, additional appendages, additional digits, additional limbs, altered appendages, amputee, asymmetric, asymmetric ears, bad anatomy, bad ears, bad eyes, bad face, bad proportions, beard (optional), broken finger, broken hand, broken leg, broken wrist, cartoon, childish (optional), cloned face, cloned head, collapsed eyeshadow, combined appendages, conjoined, copied visage, corpse, cripple, cropped head, cross-eyed, depressed, desiccated, disconnected limb, disfigured, dismembered, disproportionate, double face, duplicated features, eerie, elongated throat, excess appendages, excess body parts, excess extremities, extended cervical region, extra limb, fat, flawed structure, floating hair (optional), floating limb, four fingers per hand, fused hand, group of people, gruesome, high depth of field, immature, imperfect eyes, incorrect physiology, kitsch, lacking appendages, lacking body, long body, macabre, malformed hands, malformed limbs, mangled, mangled visage, merged phalanges, missing arm, missing leg, missing limb, mustache (optional), nonexistent extremities, old, out of focus, out of frame, parched, plastic, poor facial details, poor morphology, poorly drawn face, poorly drawn feet, poorly drawn hands, poorly rendered face, poorly rendered hands, six fingers per hand, skewed eyes, skin blemishes, squint, stiff face, stretched nape, stuffed animal, surplus appendages, surplus phalanges, surreal, ugly, unbalanced body, unnatural, unnatural body, unnatural skin, unnatural skin tone, weird colors

Negatives for photorealism:
3D render, aberrations, abstract, anime, black and white (optional), cartoon, collapsed, conjoined, creative, drawing, extra windows, harsh lighting, illustration, jpeg artifacts, low saturation, monochrome (optional), multiple levels, overexposed, oversaturated, painting, photoshop, rotten, sketches, surreal, twisted, UI, underexposed, unnatural, unreal engine, unrealistic, video game

Negatives for drawings and paintings:
3d, bad art, bad artist, bad fan art, CGI, grainy, human (optional), inaccurate sky, inaccurate trees, kitsch, lazy art, less creative, lowres, noise, photorealistic, poor detailing, realism, realistic, render, stacked background, stock image, stock photo, text, unprofessional, unsmooth

Additional negatives (each term with alternative phrasings):
Bad anatomy: flawed structure, incorrect physiology, poor morphology, misshaped body
Bad proportions: improper scale, incorrect ratio, disproportionate
Blurry: unfocused, hazy, indistinct
Cloned face: duplicated features, replicated countenance, copied visage
Cropped: trimmed, cut, shortened
Dark images: dark theme, underexposed, dark colors
Deformed: distorted, misshapen, malformed
Dehydrated: dried out, desiccated, parched
Disfigured: mangled, dismembered, mutilated
Duplicate: copy, replicate, reproduce
Error: mistake, flaw, fault
Extra arms: additional limbs, surplus appendages, excess extremities
Extra fingers: additional digits, surplus phalanges, excess appendages
Extra legs: additional limbs, surplus appendages, excess extremities
Extra limbs: additional appendages, surplus extremities, excess body parts
Fingers: conjoined fingers, crooked fingers, merged fingers, fused fingers, fading fingers
Fused fingers: joined digits, merged phalanges, combined appendages
Gross proportions: disgusting scale, repulsive ratio, revolting dimensions
JPEG artifacts: compression artifacts, digital noise, pixelation
Long neck: extended cervical region, elongated throat, stretched nape
Low quality: poor resolution, inferior standard, subpar grade
Lowres: low resolution, inadequate quality, deficient definition
Malformed limbs: deformed appendages, misshapen extremities, malformed body parts
Missing arms: absent limbs, lacking appendages, nonexistent extremities
Missing legs: absent limbs, lacking appendages, nonexistent extremities
Morbid: gruesome, macabre, eerie
Mutated hands: altered appendages, changed extremities, transformed body parts
Mutation: genetic variation, aberration, deviation
Mutilated: disfigured, dismembered, butchered
Out of frame: outside the picture, beyond the borders, off-screen
Poorly drawn face: badly illustrated countenance, inadequately depicted visage, incompetently sketched features
Poorly drawn hands: badly illustrated appendages, inadequately depicted extremities, incompetently sketched digits
Signature: autograph, sign, mark
Text: written language, printed words, script
Too many fingers: excessive digits, surplus phalanges, extra appendages
Ugly: unattractive, unsightly, repellent
Username: screen name, login, handle
Watermark: identifying mark, branding, logo
Worst quality: lowest standard, poorest grade, worst resolution

Do not forget to set this article as a favorite if you found it useful. Happy generations!
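If you reuse these lists often, it can help to keep them in code and join a category on demand. A tiny Python sketch; only abbreviated subsets of the lists above are included here, so extend the dictionary with the full lists as needed:

```python
# Abbreviated subsets of the negative-prompt lists above.
NEGATIVES = {
    "landscape": ["blurry", "boring", "close-up", "low contrast", "low quality",
                  "lowres", "overexposed", "oversaturated", "plain background"],
    "people": ["bad anatomy", "bad proportions", "cloned face", "extra limb",
               "fused hand", "malformed limbs", "poorly drawn hands"],
    "photorealism": ["3D render", "anime", "cartoon", "drawing", "illustration",
                     "jpeg artifacts", "painting", "unreal engine"],
}

def negative_prompt(*categories):
    """Join one or more categories into a single negative-prompt string."""
    terms = []
    for cat in categories:
        terms.extend(NEGATIVES[cat])
    return ", ".join(dict.fromkeys(terms))  # de-duplicate while keeping order

print(negative_prompt("people", "photorealism"))
```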
Understanding Score_9, Score_8_Up, Score_7_Up, Score_6_Up, Score_5_Up, Score_4_Up


Introduction
In the realm of AI image generation, particularly with models like Pony Diffusion, achieving high-quality outputs consistently is a significant challenge. A crucial innovation to address this challenge involves the use of aesthetic ranking tags such as score_9, score_8_up, score_7_up, score_6_up, score_5_up, and score_4_up. These tags play a vital role in guiding the model to produce better images by leveraging human-like aesthetic judgments. This article delves into what these tags are, their purpose, and how they are utilized to enhance the quality of AI-generated images.

What Are Score Tags?
Score tags are annotations added to image captions during the training phase of AI models. These annotations indicate the aesthetic quality of the images, based on a scale derived from human ratings. Here is a breakdown of the specific tags:
1. score_9: The highest-quality images, the top 10% of all images.
2. score_8_up: Images from the 80th percentile upward (the top 20%).
3. score_7_up: Images from the 70th percentile upward (the top 30%).
4. score_6_up: Images from the 60th percentile upward (the top 40%).
5. score_5_up: Images from the 50th percentile upward (the top 50%).
6. score_4_up: Images from the 40th percentile upward (the top 60%).
The "_up" suffix is cumulative: each tag covers everything from its percentile to the top, which is why the tags are usually chained together in prompts. These tags are used during the training of AI models to help the model distinguish between different levels of image quality, thereby enabling it to generate better images during the inference phase.

Purpose of Score Tags
Enhancing Model Training
The primary purpose of score tags is to improve the training process by providing the model with a clear understanding of what constitutes a good image. By repeatedly exposing the model to images annotated with these tags, it learns to recognize the characteristics that make an image aesthetically pleasing.
Providing Fine-Grained Control
Score tags offer fine-grained control over the quality of the generated images. Users can specify the desired quality level in their prompts, ensuring that the output meets their expectations. For example, using the score_9 tag in a prompt indicates that the user expects the highest quality images.
Overcoming Data Quality Challenges
In large datasets, not all images are of high quality. Score tags help in filtering out lower-quality images during the training phase, ensuring that the model is trained on the best possible data. This selective training helps in achieving better overall performance and higher quality outputs.

How Score Tags Are Used
Training Phase
During the training phase, images in the dataset are manually or semi-automatically annotated with score tags based on their aesthetic quality. This process involves:
1. Data Collection: Gathering a diverse set of images from various sources.
2. Manual Ranking: Expert reviewers rank the images on a scale, typically from 1 to 5, based on aesthetic criteria.
3. Tag Assignment: Images are tagged with the corresponding score tags (e.g., score_9 for top-tier images).
The model is then trained on this annotated dataset, learning to associate the score tags with the quality levels of the images.
Inference Phase
During the inference phase, users can include score tags in their prompts to influence the quality of the generated images.
For example:
- A prompt with the tag score_9 will generate images that the model has learned to associate with the highest quality.
- A prompt with the tag score_6_up will generate images that meet the quality standards from 60% to 100%.
This tagging system provides users with the flexibility to request images of varying quality levels, depending on their specific needs.

Practical Application
In practice, the use of score tags can vary depending on the tools and interfaces available. Some tools, like the PSAI Discord bot, automatically add these tags to prompts, simplifying the process for users. In other interfaces, such as Auto1111, users may need to manually add these tags to their prompts. This can be done by saving the tags as a style or by copying and pasting them at the beginning of the prompt. (A typical prefix is shown after the conclusion below.)

Limitations and Future Improvements
While score tags significantly enhance the quality of AI-generated images, there are some limitations:
1. Bias in Tags: The tags can introduce biases, especially when using style or artist-specific LoRAs. This may affect the diversity and creativity of the outputs.
2. Negative Tags: Negative tags (e.g., score_4) are less effective because the training data does not include extremely low-quality images. Therefore, their impact on steering the model away from bad images is limited.
Future improvements for Pony Diffusion V7 aim to refine the tagging system and enhance the model's ability to understand and utilize these tags effectively. Simplifying the tags and ensuring a more diverse training dataset are key areas of focus.

Conclusion
Score tags like score_9, score_8_up, score_7_up, score_6_up, score_5_up, and score_4_up play a crucial role in enhancing the quality of AI-generated images in models like Pony Diffusion. By providing a clear indication of image quality and enabling fine-grained control during the inference phase, these tags help in achieving more consistent and aesthetically pleasing outputs. As the development of AI models continues, refining these tagging systems and addressing their limitations will further improve the quality and versatility of AI-generated content.

If you like this article, please give it a thumbs up and share it. You can also try using my Pony Diffusion model for generation. Thank you.
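As a concrete illustration of the prefix mentioned above, the chained tags go at the very start of the positive prompt; the subject tags after the prefix are arbitrary examples:

score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, masterpiece, best quality, 1girl, silver hair, looking at viewer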
Let's make LoRA with 🎨TensorArt 🖌️


[Updated on October 26th]
Hello everyone, this time I will briefly explain the LoRA creation feature in TensorArt. This feature allows you to create your own training files.
⚠️ As I am self-studying, I would appreciate it if you could read this for reference only ⚠️

Overview:
LoRA (Low-Rank Adaptation) is an efficient method for fine-tuning AI models. TensorArt allows you to easily create your own training files with specific styles and characteristics using LoRA. This allows you to generate images for individual projects and creative needs.

Process:
1. Login and menu selection: First, log into TensorArt and select the Online Training option from the menu.
2. Upload images: Add the images you want to train on to the upload area on the left. Prepare multiple images with specific characteristics, such as the character's expression or pose.
3. Set model and trigger word: Select the model theme to be used, the base model (select from first, anime, reality, 2.5D, standard, custom), and set the trigger word, etc.
🌸 If you select Custom, you can now train using your favorite base model (SDXL, SD3, Hunyuan, FLUX).
🌸 Trigger words do not necessarily need to be set, depending on the purpose.
4. About tags: Tags are automatically generated for each image when you upload it. Click on the uploaded image to check, delete, or add the generated tags. By deleting the tags of the features you want to learn, you can learn the features more accurately. Adjusting the tags will improve the quality of the images produced. (If this is your first time, feel free to ignore this step.)
5. Run the training: Once the settings are complete, click the "Train Now" button at the bottom right. Training can take minutes to hours, and you can track your progress on a dedicated page. The amount of credits consumed will vary depending on the number of images you prepare, the number of training sessions, and the number of epochs, so please proceed in a planned manner.
6. Download the file: Once training is complete, download the generated LoRA file and use it for actual image generation.
7. Host the model: Proceed to create a project from "Host my model", enter the necessary information, and click the create button.

Completion and testing:
Let's try image generation using the trained LoRA file (see the sample prompt below). Although features can be learned sufficiently with a small number of images, similar compositions and poses are more likely to be generated if there are fewer reference images.
Completed: "Shizuku-chan" general-purpose XL

Credit consumption:
Creating a LoRA consumes credits. The number of credits consumed varies depending on the number of images you prepare, the number of training sessions, and the number of epochs, so it is important to check the number of credits you need in advance and proceed accordingly. If you don't have enough credits, we recommend purchasing more or considering a Pro plan.

Summary:
By using TensorArt's LoRA creation function, you can create illustrations with a higher degree of freedom. Please try out the features, which are easy to use even for beginners. By using LoRA, your creative projects will be even more fulfilling. Let's create!!
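As a quick test sketch for the hosted model: lead the generation prompt with the trigger word you set in step 3, then add the subject tags you trained on. The trigger word below is hypothetical; use your own:

Prompt: shizuku-chan, 1girl, smile, upper body, simple background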
Understanding the Use of Parentheses in Prompt Weighting for Stable Diffusion


Prompt weighting in Stable Diffusion allows you to emphasize or de-emphasize specific parts of your text prompt, giving you more control over the generated image. Different types of brackets are used to adjust the weights of keywords, which can significantly affect the resulting image. In this tutorial, we will explore how to use parentheses (), square brackets [], and curly braces {} to control keyword weights in your prompts.

Basics of Prompt Weighting
By default, each word or phrase in your prompt has a weight of 1. You can increase or decrease this weight to control how much influence a particular word or phrase has on the generated image. Here's a quick guide to the different types of brackets:
1. Parentheses (): Increase the weight of the enclosed word or phrase.
2. Square Brackets []: Decrease the weight of the enclosed word or phrase.
3. Curly Braces {}: In some implementations, they behave similarly to parentheses but with slightly different multipliers.

Using Parentheses to Increase Weight
Parentheses () are used to increase the weight of the enclosed keywords. This means the AI model will give more importance to these words when generating the image.
- Single Parentheses: Increase the weight by a factor of 1.1. Example: (girl) increases the weight of "girl" to 1.1.
- Nested Parentheses: Increase the weight further. Example: ((girl)) increases the weight of "girl" to 1.21 (1.1 * 1.1).
You can also specify a custom weight:
- Custom Weight: Specify the exact multiplier. Example: (girl:1.5) increases the weight of "girl" to 1.5.
Example Prompt:
(masterpiece, best quality), (beautiful girl:1.5), highres, looking at viewer, smile

Using Square Brackets to Decrease Weight
Square brackets [] are used to decrease the weight of the enclosed keywords. This means the AI model will give less importance to these words when generating the image.
- Single Square Brackets: Decrease the weight by a factor of 0.9. Example: [background] decreases the weight of "background" to 0.9.
- Nested Square Brackets: Decrease the weight further. Example: [[background]] decreases the weight of "background" to 0.81 (0.9 * 0.9).
Example Prompt:
(masterpiece, best quality), (beautiful girl:1.5), highres, looking at viewer, smile, [background:0.8]

Using Curly Braces
Curly braces {} are less commonly used, but in some implementations (e.g., NovelAI) they serve a similar purpose to parentheses with different default multipliers. For instance, {word} might be equivalent to (word:1.05).
Example Prompt:
(masterpiece, best quality), {beautiful girl:1.3}, highres, looking at viewer, smile

Combining Weights
You can combine different types of brackets to fine-tune the prompt further.
Example: ((beautiful girl):1.2), [[background]:0.7]
Example Prompt:
(masterpiece, best quality), ((beautiful girl):1.2), highres, looking at viewer, smile, [[background]:0.7]

Practical Examples
Increasing Emphasis: To generate an image where the focus is heavily on the "girl":
(masterpiece, best quality), (beautiful girl:1.5), highres, looking at viewer, smile, [background:0.8]
Decreasing Emphasis: To generate an image where the "background" is less emphasized:
(masterpiece, best quality), beautiful girl, highres, looking at viewer, smile, [background:0.5]

Conclusion
By using parentheses, square brackets, and curly braces effectively, you can guide Stable Diffusion to prioritize or de-prioritize certain elements in your prompt, resulting in images that better match your vision.
Practice using these weighting techniques to see how they affect your generated images, and adjust accordingly to achieve the best results.
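To see how the multipliers above compound, here is a minimal Python sketch using this article's factors: 1.1 per parenthesis layer, 0.9 per bracket layer, with an explicit (word:w) value simply overriding the nesting.

```python
# Effective token weight under nested emphasis, per the factors in this article.
def effective_weight(paren_depth=0, bracket_depth=0, explicit=None):
    """Weight after nesting; an explicit (word:w) value overrides nesting."""
    if explicit is not None:
        return explicit
    return (1.1 ** paren_depth) * (0.9 ** bracket_depth)

print(round(effective_weight(paren_depth=2), 2))    # ((girl))       -> 1.21
print(round(effective_weight(bracket_depth=2), 2))  # [[background]] -> 0.81
print(effective_weight(explicit=1.5))               # (girl:1.5)     -> 1.5
```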
Prompt reference for "Lighting Effects"


Hello. I usually use "lighting/lighting effects" words when generating images, so I will introduce some of the words I reach for when I want to add something. Please note that these words alone do not provide 100% effectiveness; the effect you get will differ depending on the base model, the LoRA, the sampling method, and where you place the word in the prompt.

Words related to "lighting effects":
- Backlight: Light from behind the subject.
- Colorful lighting: The subject itself is not recolored, but the color impression changes depending on the light.
- Moody lighting: Natural lighting, not direct artificial light.
- Studio lighting: A term for the artificial lighting of a photography studio.
- Directional light: A light source that shines parallel rays in a selected direction.
- Dramatic lighting: Lighting techniques from the field of photography.
- Spot lighting: A technique that uses artificial light in a small area.
- Cinematic lighting: A single word that covers several lighting techniques used in movies.
- Bounce lighting: Light reflected by a reflector plate or similar.
- Practical lighting: Compositions that depict the light source itself in the frame.
- Volumetric lighting: A word derived from 3DCG. It tends to produce a picture with a divine, golden light source.
- Dynamic lighting: Hard to pin down exactly, but it tends to create high-contrast images.
- Warm lighting: Creates a warm picture illuminated with warm colors.
- Cold lighting: Lights the scene with a cold light source.
- High-key lighting: Soft light, minimal shadows, low contrast, resulting in bright frames.
- Low-key lighting: Provides high contrast, though the effect can be a little weak.
- Hard light: Strong light; highlights appear strong.
- Soft light: A word that refers to faint, diffuse light.
- Strobe lighting: Strong artificial light (stroboscopic lighting).
- Ambient light: Refers to ambient or indoor lighting.
- Flash lighting: For some reason, the characters themselves tend to emit light, and there are often flashes of light (flash lighting photography).
- Natural lighting: Tends to create a natural-looking picture, in contrast with artificial light.
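These words are combined with a normal subject prompt. A small illustrative example; how strongly each word shows through depends on the model and on placement, as noted above:

Prompt: 1girl, portrait, forest clearing, volumetric lighting, soft light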
The Significance of “BREAK” in Stable Diffusion Prompts


Understanding Prompts and Tokenization
Stable Diffusion interprets prompts by dividing them into tokens, with earlier tokens often having a stronger influence on the resulting image than later ones. The model processes prompts in blocks of 75 tokens, making the order of tags within the prompt crucial for achieving the desired visual effects.

The Role of "BREAK"
"BREAK" is used to create a deliberate separation within the prompt, capping a token block at 75 tokens even if the segment is shorter. This forces the model to reset the influence of subsequent words, allowing for more controlled and segmented impacts within the prompt. (A conceptual code sketch of this chunking follows at the end of the article.)

Practical Example
Consider the following prompt sequence:
Score_9, Score_8_up, Score_7_up, Score_6_up, Score_5_up, Score_4_up, masterpiece, best quality, BREAK (FuturEvoLabBadge:1.5), cyberpunk badge, BREAK Cyberpunk style, Cyberpunk girl head, wings composed of brushes behind her, BREAK front view, Purple energy gem, neon colors, intricate design, symmetrical pattern, futuristic emblem, vibrant hues, high-tech, black background

This prompt is divided into segments using "BREAK" to manage the influence of each section:
1. Quality and Style: Score_9, Score_8_up, Score_7_up, Score_6_up, Score_5_up, Score_4_up, masterpiece, best quality, BREAK. This segment ensures the image is of the highest quality.
2. Cyberpunk Badge: (FuturEvoLabBadge:1.5), cyberpunk badge, BREAK. This specifies a detailed, futuristic badge with a cyberpunk theme.
3. Character Description: Cyberpunk style, Cyberpunk girl head, wings composed of brushes behind her, BREAK. This describes the main character and specific elements such as wings made of brushes.
4. Additional Elements: front view, Purple energy gem, neon colors, intricate design, symmetrical pattern, futuristic emblem, vibrant hues, high-tech, black background. These details enhance the overall composition with specific colors, patterns, and a high-tech appearance.

Benefits of Using "BREAK"
1. Controlled Influence: Isolates specific tags to minimize unwanted interactions and maintain visual coherence.
2. Editing Flexibility: Allows easier adjustments and refinements of prompts without drastically altering other parts of the image.
3. Precision: Ensures that certain descriptive tags only affect intended parts of the image.
By strategically placing "BREAK" in your prompts, you can significantly enhance control over the image generation process, leading to more precise and visually appealing results.

Conclusion
Inserting "BREAK" in Stable Diffusion prompts is a powerful method for advanced users aiming for high levels of control and detail in AI-generated images. It helps manage the influence of different tags, ensuring each element of the prompt contributes as intended to the final output. By understanding and applying the concept of "BREAK," users can improve their prompt crafting skills, leading to more sophisticated and desired AI art creations.
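To make the 75-token blocking concrete, here is the promised conceptual Python sketch: it splits a prompt at BREAK and pads each segment to a full block, which is what stops tags from bleeding across segments. Real implementations tokenize with CLIP; the plain whitespace splitting here is only an illustration.

```python
BLOCK = 75  # Stable Diffusion processes prompts in 75-token blocks

def chunk_prompt(prompt):
    """Split at BREAK and pad each segment to a full 75-token block."""
    chunks = []
    for seg in prompt.split("BREAK"):
        tokens = seg.split()              # stand-in for real CLIP tokenization
        tokens += ["<pad>"] * ((-len(tokens)) % BLOCK)
        chunks.append(tokens)
    return chunks

demo = "masterpiece, best quality BREAK cyberpunk badge BREAK front view, neon colors"
for i, chunk in enumerate(chunk_prompt(demo)):
    print(f"block {i}: {len(chunk)} tokens")   # each segment fills its own block
```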

Tips for new Users

Intro
Hey there! If you're reading this, you're probably new to AI image generation and want to learn more. If you're not, you probably already know more than me :). Yeah, full disclosure: I'm still pretty inexperienced at this whole thing, but I thought I could still share some of the things I've learned with you! So, in no particular order:

1. You can like your own posts
I doubt there's anyone who doesn't know this already, but if you're posting your favorite generations and you care about getting likes, you can always like them yourself. Sketchy? Kinda. Do I still do it? Yes. And on the topic of getting more likes:

2. Likes will often be returned
Whenever I receive a like on one of my posts, I'll look at that person's pictures and heart any that I particularly enjoy. I know a lot of people do this, so one of the best ways to get people to notice and like your content is to just browse through posts and be generous with your own likes. It's a great way to get inspiration too!

3. Use turbo/lightning LoRAs
If you find yourself running out of credits, there are ways to conserve them. When I'm iterating on an idea, I'll use an SDXL model (Meina XL) paired with this LoRA. This lets me get high-quality images in 10 steps for only 0.4 credits! It's really nice, and works with any SDXL model. Unfortunately, if there is a similar method for speeding up SD 1.5 models, I don't know it, so this only works with XL.

4. Use ADetailer smartly
ADetailer is the best solution I've found for improving faces and hands. It's also a little difficult to figure out. So, though I'm still not a professional with it, I thought I could share some of the tricks I've learned. The models I normally use are face_yolov8s.pt and hand_yolov8s.pt. The "8s" versions are better than the "8n" versions, though they are slightly slower. In addition to these models, I'll often add the Attractive Eyes and Perfect Hand LoRAs respectively. These are all just little things you can do to improve these notoriously hard parts of image generation. Also, using ADetailer before upscaling the image is cheaper in terms of credits, though the upscaling process can sometimes mess up the hands and face a little bit, so there's some give and take there.

5. Use an image editing app
Wait a minute, I hear you saying, isn't this a guide for using Tensor Art? Yes, but you can still use other tools to improve your images. If I don't like a specific part of my image, I'll download it, open it in Krita (or Photoshop or GIMP) and work on it. My art skills are pretty bad (which is why I'm using this site in the first place), but I can still remove, recolor, or edit certain aspects of the image. I can then reupload it to Tensor Art and use img2img with a high denoising strength to improve it further. You could also just try inpainting the specific thing you want to change, but I always find it a bit of a struggle to get inpaint to make the changes I want.

6. Experiment!
The best way to learn is to do, so just start generating images, fiddling with settings, and trying new things. I still feel like I'm learning new stuff every day, and this technology is improving so fast that I don't think anyone will ever truly master it. But we can still try our hardest and hone our skills through experimentation, sharing knowledge, and getting more familiar with these models. And all the anime girls are a big plus too.

Outro
If you have anything to add, or even a tip you'd like to share, definitely leave a comment and maybe I can add it to this article. This list is obviously not exhaustive, and I'm nowhere near as talented as some of the people on this platform. Still though, I hope to have helped at least one person today. If that was you, maybe give the article a like? I appreciate it a ton, so if you enjoyed, just let me know. Thanks for reading!
Radio Buttons are awesome in AI Tools: [How to set-up guide]


Dear Tensorians,
Thanks to the newly implemented radio-button feature for AI Tools, we can use AI tools with much more fun now. Because I'm the one who insisted on implementing it, and more importantly the radio buttons' setting import/export feature, I'll give an easy tutorial about them for beginners. 🤗😉
https://tensor.art/template/765762196352358016
This is an example AI tool using radio buttons. You can see the radio buttons on the right. The cool thing about a radio-button GUI is that you don't have to remember or re-type all those crazy prompt words any more. You can store them in buttons and click them! Especially if you have a very wide range of different prompting styles, as most users do, you cannot remember them all. I bet you already have your own backup memo file for those special prompts, lol. Yes, we have to do that for important prompts. However, more conveniently, if you make this kind of AI tool with a radio-button UI, you can store them online, next to you all the time. You can click the buttons and generate various images whenever you want, even when you are driving (just kidding, don't ever do that, lol). Of course, you can add extra prompt text together with the buttons: click the "custom" button and you can always input more prompt text.

To create the radio buttons, click "edit" in the "..." menu. You then move to the EDIT page of the AI tool. In the middle of the page, you see the user-configurable settings. By clicking the "Add" button, you can choose your AI interface. By clicking "edit" in the prompt's text box, you enter the radio-button options page. From scratch, you can choose the pre-defined groups and buttons. In addition, you can add your own new buttons: give a button a name and its content. The content is the part of the prompt you want the button to insert. After you are done with all the button settings, click "confirm" and then "publish" your AI tool. Then you'll see your radio buttons in the AI tool. (Note that certain prompt text box nodes in ComfyUI cannot be edited into buttons. Basic text prompt nodes and several other node types can be used for button editing. You can check this after you publish your workflow as a tool; if it doesn't support the radio buttons, use a different prompt text node.)

Whenever you update the workflow behind an AI tool, the whole AI tool UI is reset to none!! Yes, it was a real headache at the beginning. However, now we have import/export buttons for the radio-button settings (thank God~ 👯‍♀️⛈💯🤗). By the way, when you edit the button groups, you might choose only part of the 6 or 7 groups (e.g., the "character settings" and "role" groups) first and add some nice buttons, then later change your mind and want to add another group, e.g., the "style" group. If you press the add button for that, your previous button data will be gone!! You restart from the beginning. Be very careful! (You'll understand what I mean when it happens, lol.)

Before updating your workflow, you can export the radio-button settings as a JSON file. Then you can import it back later, any time you want. More importantly, you can edit the radio-button JSON in an editor (like MS Visual Studio) for easier copy and paste from existing files. Trust me, this will save you an enormous amount of time remaking those terrible buttons whenever the workflow is modified. Sometimes you will want to edit an existing button JSON file for another AI tool. Editing a JSON file is not really entertaining work. However, it's much better than remaking the whole set of radio buttons in the GUI~ So find the place to edit in the JSON file and change it very carefully. JSON syntax is not very editor-friendly and is error-prone, but you'll get used to it by trial and error. It's always useful to use the "find" command to look for the button you want in the file. You'll discover more interesting things while using the button JSON files; I'll leave them for your own pleasant surprise~ LOL.
I shared the JSON file of my AI tool in the comfy-chatroom on Discord. Feel free to use it.
I hope this article helped you make the radio-button UI more easily. Enjoy~ 🤗😉⛈
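Since the exact schema of the exported file is whatever Tensor Art produces, the field names in the following sketch are hypothetical; it only illustrates the kind of grouped structure you end up hand-editing and re-importing.

```python
import json

# Hypothetical structure of an exported radio-button settings file; the real
# field names depend on Tensor Art's export format.
buttons = {
    "groups": [
        {
            "name": "style",  # hypothetical group name
            "buttons": [
                {"label": "watercolor", "content": "watercolor, soft color wash"},
                {"label": "cyberpunk", "content": "cyberpunk, neon lights, rain"},
            ],
        },
    ],
}

with open("radio_buttons.json", "w", encoding="utf-8") as f:
    json.dump(buttons, f, indent=2)  # edit this file in an editor, then re-import
```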
Welcome to FuturEvoLab!


Welcome to FuturEvoLab! We greatly appreciate your continuous support. Our mission is to delve deep into the world of AI-generated content (AIGC), bringing you the latest innovations and techniques. Through this platform, we hope to learn and exchange ideas with you, pushing the boundaries of what's possible in AIGC. Thank you for your support, and we look forward to learning and collaborating with all of you.

In our exploration, we recommend several powerful models:

Pony XL (Realistic)
- [Pony XL]Aurora Realism - FuturEvoLab
- [Pony XL]Lifelike Doll Romance - FuturEvoLab

Pony XL (Anime)
- [Pony XL]Cyber Futuristic Maidens - FuturEvoLab
- [Pony XL]Cyberworld Anime - FuturEvoLab
- Dream Brush SDXL - FuturEvoLab

SDXL 1.0 (Realistic)
- [SDXL]Lover's Light - FuturEvoLab
- [SDXL]Real style fantasy - FuturEvoLab
- [SDXL]Soulful Particle Genesis - FuturEvoLab

SDXL 1.0 (Anime)
- [SDXL]Lovepunk Synth - FuturEvoLab
- FutureDreamWorks-SDXL-FuturEvoLab
- DreamEvolution-SDXL-FuturEvoLab

Stable Diffusion 1.5 (Realistic)
- [SD1.5]Genesis Realistic - FuturEvoLab
- Temptation Core - FuturEvoLab
- [SD1.5]Meris Realistic - FuturEvoLab
- [SD1.5]Fantasy Epic - FuturEvoLab
- [SD1.5]Fantasy - FuturEvoLab

Stable Diffusion 1.5 (Anime)
- [SD1.5]LoveNourish EX Anime - FuturEvoLab
- [SD1.5]LoveNourish Anime - FuturEvoLab
- [SD1.5]Temptation Heart【2.5D style】- FuturEvoLab

By leveraging these models, creators can generate images that range from hyper-realistic to vividly imaginative, catering to various artistic and practical applications.
Flux Ultimate's Custom Txt 2 Vid Tensor Workflow


Welcome to Dream Diffusion FLUX ULTIMATE, TXT 2 VID, with its own custom workflow made for Tensor Art's Comfy Workspace. The workflow can be downloaded on this page. ENJOY!

This is a second-stage trained checkpoint, successor to FLUX HYPER. When you think you had it nailed in the last version and then notice a 10% margin that could still be trained... well, that's what happened. So this version has even more font styles, better adherence, sharper image clarity, and a better grasp of anime, water painting, and so on. This model has the same setting parameters as FLUX HYPER.

Prompt example: Logo in neon lights, 3D, colorful, modern, glossy, neon background, with a huge explosion of fire with epic effects, the text reads "FLUX ULTIMATE, GAME CHANGER"

Settings:
- Steps: 20
- Sampler: DPM++ 2M or Euler (gives best results)
- Scheduler: Simple
- Denoise: 1.00
- Image size: 576 x 1024 or 1024 x 576 (you can choose any size, but this model is optimized for faster rendering at those sizes)

Downloads (save them to your Comfy folders):
- Comfy workflow: https://openart.ai/workflows/maitruclam/comfyui-workflow-for-flux-simple/iuRdGnfzmTbOOzONIiVV
- VAE: download the ae file to your vae folder inside your models folder, from https://huggingface.co/black-forest-labs/FLUX.1-schnell/tree/main/vae
- CLIP: download clip_l.safetensors and t5xxl_fp8_e4m3fn.safetensors and save them to your clip folder inside your models folder, from https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main

If you have any questions or issues, feel free to drop a comment below and I will get back to you as soon as I can. Enjoy. DICE
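The checkpoint is distributed for ComfyUI, but if you prefer scripting, a rough equivalent of the settings above in the Hugging Face diffusers library looks like the following sketch. The official FLUX.1-schnell weights are used as a stand-in, since the custom checkpoint's format may not load directly here.

```python
import torch
from diffusers import FluxPipeline

# Official FLUX.1-schnell weights as a stand-in for the custom checkpoint.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    prompt='Logo in neon lights, 3D, colorful, modern, glossy, neon background, '
           'the text reads "FLUX ULTIMATE, GAME CHANGER"',
    num_inference_steps=20,   # the article's recommended step count
    height=1024,
    width=576,                # one of the two recommended sizes
).images[0]
image.save("flux_ultimate_logo.png")
```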
50 Beauty Monster and Creature Inspirations - HALLOWEEN2024


Looking to stand out this Halloween with a fierce, captivating costume? Dive into our 50 Beauty Monster and Creature Inspirations for Halloween 2024! From the alluring vampire queen with fangs and pale skin to the mystical forest spirit with branches for hair, this list features a variety of iconic, feminine creatures to embody. Each entry provides five key characteristics to make your costume pop with creativity. Whether you want elegance, spookiness, or a combination of both, these ideas will help you slay this Halloween!

Vampire: Fangs, cloak, pale skin, red lips, pointed ears.
Witch: Pointed hat, broomstick, black dress, potion bottles, striped stockings.
Medusa: Snake hair, stony gaze, green skin, gold jewelry, ancient toga.
Banshee: Ghostly white dress, flowing hair, haunting scream, pale makeup, chains.
Succubus: Bat wings, red dress, horns, glowing eyes, tail.
Werewolf: Furry ears, sharp claws, fangs, torn clothes, wild hair.
Mermaid: Scales, seashell bra, fishtail, wet-look hair, pearls.
Harpy: Feathered wings, talons, bird-like eyes, fierce expression, ragged clothes.
Fairy: Sparkling wings, flower crown, wand, glittery makeup, light dress.
Zombie: Torn clothes, blood stains, decayed skin, lifeless eyes, open wounds.
Siren: Wet-look hair, seashell jewelry, seaweed skirt, alluring voice, eerie glow.
Elf: Pointed ears, elegant gown, bow and arrow, long hair, ethereal glow.
Gorgon: Snake tail, golden scales, slit eyes, regal crown, sharp claws.
Mummy: Wrapped in bandages, dark eye makeup, jewelry, ancient amulet, dusty appearance.
Ghost: Flowing white sheet, transparent, eerie wail, glowing eyes, pale hands.
Queen of the Dead: Black gown, skull crown, skeletal makeup, dark veil, red roses.
Demoness: Red skin, black horns, tail, wings, sharp claws.
Bride of Frankenstein: Black and white hair, stitched skin, bride gown, lightning bolts, scars.
Voodoo Priestess: Skull face paint, voodoo doll, bones, beads, tribal clothing.
Phoenix: Fiery wings, flame patterns, red and orange outfit, glowing skin, feathers.
Chimera: Lion mane, snake tail, dragon wings, golden eyes, muscular build.
Spider Queen: Black web dress, spider crown, long legs, red eyes, venomous fangs.
Lady Death: Black cloak, scythe, skeletal hands, skull mask, dark aura.
Nymph: Nature gown, flowers in hair, earthy tones, glowing skin, delicate wings.
Selkie: Fur cloak, watery skin, ocean jewels, seal tail, wet hair.
Giantess: Massive build, oversized clothes, earthy skin, towering presence, big jewelry.
Forest Witch: Mossy cloak, animal bones, green skin, potions, tree branches in hair.
Dragoness: Scaly skin, horns, tail, fiery breath, armored chestplate.
Lilith: Dark wings, black robe, seductive look, glowing red eyes, ancient symbols.
Hag: Wrinkled skin, tattered clothes, long nose, hunched posture, warts.
Valkyrie: Winged helmet, sword, battle armor, braided hair, shield.
Troll Woman: Green skin, sharp tusks, club, fur clothes, wild hair.
Ice Queen: Frosted crown, shimmering cape, blue skin, ice staff, glowing cold eyes.
Scarecrow: Straw-filled body, stitched mouth, tattered hat, pumpkin head, patched overalls.
Djinn: Flowing robes, magic lamp, glowing eyes, ornate jewelry, smoke swirling around.
Cheshire Cat: Striped fur, wide grin, cat ears, mischievous eyes, tail.
Swamp Creature: Muddy skin, algae hair, webbed fingers, water plants, gills.
Basilisk Queen: Reptilian skin, glowing eyes, snake tail, venomous fangs, ancient armor.
Lamia: Snake body, golden armor, hypnotic eyes, deadly claws, venomous bite.
Wendigo Woman: Deer antlers, skeletal body, glowing eyes, fur cloak, sharp claws.
Shadow Witch: Black shadowy figure, dark veil, glowing red eyes, spectral form, floating.
Frost Maiden: Icicle crown, snowflake gown, pale blue skin, icy breath, shimmering frost.
Baba Yaga: Hunched back, long nose, flying broom, warts, iron teeth.
Kitsune: Fox ears, fluffy tail, red kimono, mystical powers, mask.
Forest Spirit: Tree branches for hair, bark-like skin, moss gown, glowing eyes, ethereal glow.
Plague Doctoress: Black cloak, plague mask, long gloves, eerie eyes, dark potions.
Dullahan: Headless woman, flowing black cloak, horse-riding, holding a skull, eerie lantern.
Succubus Queen: Leather bodice, wings, horns, glowing eyes, seductive aura.
Dryad: Bark skin, leaves in hair, tree branches for arms, glowing green eyes, earthy gown.
Banshee Queen: Flowing black dress, ghostly hair, skeletal hands, pale skin, sorrowful wail.

Settings used: all images were created with the Juggernaut SDXL model, 25 steps, CFG 6, dpmpp_2m karras. Not every creature is recognized well by this checkpoint; you may need a LoRA or a different checkpoint to create certain characters.

With these 50 beauty monster and creature inspirations, you're all set to embrace the eerie, enchanting side of Halloween 2024. Whether you choose to transform into a seductive vampire, a magical forest spirit, or a chilling banshee queen, each idea is designed to make you stand out in both style and spookiness. Let your creativity soar this Halloween, and enjoy bringing these unique creatures to life. Get ready to slay (literally!) with hauntingly beautiful looks that will leave everyone spellbound!
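As a worked example (my own illustrative composition, not part of the original list), the five characteristics of any entry can be strung together into a single prompt and run with the settings above. Prompt: "A beautiful ice queen with a frosted crown, shimmering cape, blue skin, ice staff, and glowing cold eyes, elegant Halloween portrait." (Juggernaut SDXL, 25 steps, CFG 6, dpmpp_2m karras.)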
PhotoReal Makeup Edition - V3 Slider

PhotoReal Makeup Edition - V3 Slider (no trigger word)

Introducing the PhotoReal Makeup Edition - V3 Slider! Slide to the right to add beautiful, realistic makeup. Slide to the left to reduce the makeup effect for a more natural look. It's perfect for adjusting the makeup to get just the style you want. Try it out and see the amazing changes you can make!

More Information: Model link

Your feedback is invaluable to me. Feel free to share your experiences and suggestions in the comment section. For more personal interactions, join our Discord server where we can discuss and learn together. Thank you for your continued support!
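As a rough sketch of how slider LoRAs of this kind are usually applied in interfaces that use the <lora:name:weight> syntax (the file name here is illustrative, not the model's actual identifier): adding <lora:PhotoReal_Makeup_V3:1.0> to the prompt pushes toward stronger, more defined makeup, while a negative weight such as <lora:PhotoReal_Makeup_V3:-1.0> pushes the other way, toward a more natural, makeup-free look. Intermediate weights like 0.4 give subtle adjustments.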
How to enable "Mature Content" (N-SFW or R-18) and S*ensitive Word lists ლ(́◕◞౪◟◕‵ლ)

I believe the most important guideline for new users should be: How to enable "Mature Content" ლ(́◕◞౪◟◕‵ლ)

Just kidding. Alright, I just saw the new article-publishing feature and noticed that there are very few people here, so I'll take the initiative to explain it ( ~'ω')~

How to enable "Mature Content": go to settings; the option to enable "Mature Content" should be the first button. Activate it and you're good to go, enjoy! (́◉◞౪◟◉‵)

If you don't want N-SFW content to cause your social de/ath, I suggest also enabling "Blur Mature Content." This way, all N-SFW content will be blurred. This button is located just below "Mature Content" and only appears once "Mature Content" is enabled.

Q: Why should I enable "Mature Content"?
A: Besides some reasons you don't want to admit, it's because Tensor Art activated AI filtering for N-SFW content. Unfortunately, the AI filter they're using is not very stable yet, and it often mistakenly identifies SFW images as N-SFW. If you haven't enabled "Mature Content," any misjudged images you post will be hidden from your account; you won't even be able to see your own pictures. From your perspective, it looks as if the image you just posted was immediately deleted by Tensor Art. So I'm also forced to enable "Mature Content." I'm doing this for academic purposes, definitely not for any reasons I don't want to admit. I'm pure, really! Please believe me! (◕ܫ◕)

Q: Would it be harmful to my account if many of my posts were classified as N-SFW content?
A: No, it wouldn't have any negative impact on your account. Your posts will simply not be visible to users who haven't enabled "Mature Content" or to unregistered visitors. It may also make you wonder if the AI is trolling you.
--Updated on 23 May 2024: if you post child p*ornography related images, your account may be banned, please be aware. In addition, please be careful with 'The Forbidden Fox' (see 'The Forbidden Fox' incident below for more information).

Q: How does Tensor Art's AI review my posts?
A: I'm not an official representative, but based on my observations, Tensor Art's AI determines whether your post contains N-SFW content by analyzing the text prompt, rather than directly analyzing the image itself. That is why Tensor Art's AI can flag N-SFW content less than a second after posting (although its accuracy may...) (:3 」∠ )

Here are some prompts I have observed which might not resemble N-SFW content but could still be mistakenly identified as such (as of May 23, 2024, and may be updated from time to time). S*ensitive Word lists: b*reast, b*are, s*eductive, b*lood, unc*ensored, s*exy, ad⚠️ult, hard⚠️core

--Updated on 23 May 2024, "The Forbidden Fox" incident
Date of occurrence: May 22, 2024
Hazard level: post replacement
Details: If your prompt contained certain specific words, including "ad⚠️ult" and "hard⚠️core," your post would be automatically banned by the AI filter. Images were replaced with an illustration of a fox wearing a crown interacting with a safe, accompanied by the message "Websites Oops! Forbidden," as shown in the images below. The individuals affected by this fox image were not aware that their pictures had been replaced: in their own accounts they still saw their original images, while everyone else saw only the fox.

The purpose behind the fox's interaction with the safe remains unknown, but it is evident that the victims' images were replaced and taken without their knowledge. According to official sources, the appearance of this image was considered abnormal, and a team of hunters was dispatched to eliminate the fox. The encounter between the hunters and the fox took place on the ██th of ██, and the battle officially commenced. The battle lasted for ██ hours, with the hunters attempting to resolve the situation using the "UPDATE" method. However, the fox retaliated with a curse called "BUG." During this period, the "BUG" disrupted the order of all users' posts, with over ██ posts hidden by the fox and approximately ██ users affected, forcing the hunters to retreat. Later, the hunters regrouped and sought the assistance of a sage. They used time-reversal magic to restore the posts affected by the "BUG." With the sage's cooperation, the hunters employed the "UPDATE" method once again and finally executed the wicked fox.

Currently, prompts such as "ad⚠️ult" and "hard⚠️core" no longer result in post replacements, and the majority of previously stolen posts have been successfully recovered. As stated by official personnel, affected images can be reported for restoration. If you discover that your pictures have been taken by this fox, please report it immediately through the official Discord bug report channel. The official Discord is located in the top right corner of the website, just to the left of your avatar. If you can't locate it, you can also click on this link.

Q: Why does the AI filter often make mistakes?
A: Perhaps Tensor Art's AI is still only ten years old, but she will grow up (I hope).
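To see why a purely text-based filter misfires, here is a deliberately naive sketch in Python. This is pure speculation for illustration: the blocklist terms come from the word list above, but nothing here reflects Tensor Art's actual implementation.

# Speculative illustration of a text-only prompt filter.
# NOT Tensor Art's real filter; it only shows why substring
# matching on a blocklist produces false positives.
BLOCKLIST = ["bare", "blood", "seductive"]

def looks_nsfw(prompt: str) -> bool:
    p = prompt.lower()
    return any(term in p for term in BLOCKLIST)

# Harmless prompts get flagged because "blood" hides inside
# "blood-red" and "bare" inside "barefoot":
print(looks_nsfw("a knight with a blood-red banner"))  # True
print(looks_nsfw("barefoot dancer on a sunny beach"))  # True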
FLUX1 - Mastering Camera Exposure to Achieve Realism in AI Image Generation - Leica Camera

When venturing into the realm of AI-powered image generation, precision in your prompts is paramount to achieving truly compelling results. While the term "realistic photo" might seem straightforward, it lacks the specificity needed to guide these systems effectively. AI image generators are literal interpreters, meticulously following your instructions. To unlock their full potential and generate images that feel authentic and believable, we must embrace a more nuanced approach. By incorporating detailed camera exposure commands into our prompts, we can provide the AI with a clearer roadmap, leading to more focused and visually striking outputs. Let's explore the power of these commands by experimenting with variations of a single prompt, observing firsthand how subtle changes can dramatically impact the final image. Try experimenting with this sentence by copying and pasting variations at the end, and experience the differences for yourself.

① General

Leica M10-R, low exposure, high contrast black and white, ISO 100, with a 50mm prime lens.
Result: Emphasizes texture with deep shadows and fine monochrome details.

Leica SL2-S with tilt-shift lens, low exposure, high contrast, ISO 100, with a 45mm tilt-shift lens.
Result: Produces a surreal perspective with precise focus and deep shadows, highlighting scale and architectural details.

Leica Q2 Monochrom, low exposure, extreme high contrast, ISO 50, with a 28mm macro lens.
Result: Highlights intricate details with a glowing outline, using backlighting for a dramatic effect.

Leica S3, low exposure, high contrast, ISO 50, with a 120mm macro lens.
Result: Captures sharp macro details with soft contrast, enhancing textures and reflections in fine details.

Leica SL2 with long exposure, low exposure, high contrast, ISO 100, with a 35mm wide-angle lens.
Result: Captures dynamic movement with streaks of light, enhancing the contrast of urban night scenes. (For better separation of the subject and a stronger bokeh effect: 100mm lens.)

Leica M10 Monochrom, low exposure, high contrast, ISO 100, with a 50mm prime lens.
Result: Delivers fine natural textures and detail, emphasizing delicate patterns with soft lighting.

Leica SL2-S, low exposure, high contrast, ISO 50, with a 24-90mm zoom lens.
Result: Captures architectural details with dramatic lighting and long shadows, creating a striking urban landscape.

Leica Q2, low exposure, high contrast, ISO 50, with a 28mm prime lens.
Result: Reveals light refraction and swirling colors in delicate textures, producing a magical and ethereal image.

Leica SL2-S, low exposure, high contrast, ISO 64, with a 75mm prime lens.
Result: Enhances botanical detail with soft lighting, revealing the intricate patterns of petals.

Leica S3 with bellows extension, low exposure, high contrast, ISO 64, with a 100mm macro lens.
Result: Highlights mechanical precision with sharp contrast and deep shadows, focusing on fine details.

Leica M10-R with ND filter, low exposure, high contrast, ISO 50, with a 28mm wide-angle lens.
Result: Freezes fast-moving water with strong contrast, capturing dramatic texture under harsh light.

Leica Q2 Monochrom with polarizing filter, low exposure, high contrast, ISO 100, with a 28mm prime lens.
Result: Captures abstract reflections and interplay of light, emphasizing contrasts on smooth reflective surfaces.

Leica M10 Monochrom, low exposure, high contrast, ISO 400 (pushed to 800), with a 50mm prime lens.
Result: Creates a moody, silhouetted image with vintage film grain and intense backlighting.

Leica M-A with black and white film, low exposure, high contrast, ISO 400 (pushed to 1600), with a 50mm prime lens.
Result: Utilizes deep shadows and grain to create a powerful and evocative monochrome scene.

Leica M10 with infrared film, low exposure, high contrast, ISO 400 (pushed to 800), with a 35mm wide-angle lens.
Result: Produces a surreal and ethereal image, highlighting hidden patterns with unique lighting effects.

② Street & Documentary

Leica M10-R, low exposure, high contrast black and white, ISO 100, with a 35mm prime lens.
Result: Captures strong contrast and dynamic light interplay, emphasizing the urban textures and shadows.

Leica Q2, low exposure, high contrast, ISO 100, with a 28mm prime lens.
Result: Produces a stark silhouette with vibrant city lights creating dramatic contrast and a halo effect.

③ Portrait & Lifestyle

Leica M11, low exposure, high contrast, ISO 200, with a 50mm prime lens.
Result: Highlights natural light, creating an intimate and flattering portrait with soft shadows.

Leica Q2 Monochrom, low exposure, high contrast, ISO 800, with a 28mm prime lens.
Result: Emphasizes warm, romantic light, capturing candid emotions with nostalgic undertones.

④ Landscape & Architecture

Leica M10-R, low exposure, high contrast, ISO 100, with a 24mm wide-angle lens.
Result: Enhances landscape drama, capturing long shadows and the grandeur of the scene with precise detail.

Leica Q2, low exposure, high contrast, ISO 64, with a 28mm prime lens.
Result: Showcases intricate architectural details with side lighting, bringing out textures and design elements.

⑤ Reflection & Abstraction

Leica SL2-S with tilt-shift lens, low exposure, high contrast, ISO 100, with a 45mm tilt-shift lens.
Result: Creates a hyper-realistic scene with a unique perspective, using long shadows for dramatic effect.

Leica Q2 Monochrom, low exposure, extreme high contrast, ISO 50, with a 28mm macro lens.
Result: Emphasizes the fine details of a snowflake, using backlighting to create a glowing effect and enhance texture.

⑥ Film & Mood

Leica M10 Monochrom, low exposure, high contrast, ISO 400 (pushed to 800), with a 50mm prime lens.
Result: Captures a moody silhouette with film grain and backlighting, enhancing the emotional depth of the scene.

Leica M-A with black and white film, low exposure, high contrast, ISO 400 (pushed to 1600), with a 50mm prime lens.
Result: Creates a powerful black-and-white image with deep shadows and film grain, emphasizing the drama and raw emotion.

⑦ Still Life & Abstract

Leica S3, low exposure, high contrast, ISO 50, with a 120mm macro lens.
Result: Produces a sharp, high-detail macro image, capturing the delicate textures of a single water droplet with soft shadows.

Leica SL2 with long exposure, low exposure, high contrast, ISO 100, with a 35mm wide-angle lens.
Result: Captures dynamic city movement with long exposure, blending streaks of light and blurred motion for a high-energy urban image.

⑧ Mechanical & Detail

Leica S3 with bellows extension, low exposure, high contrast, ISO 64, with a 100mm macro lens.
Result: Emphasizes fine mechanical details with deep shadows and sharp focus, bringing out the intricate craftsmanship of the vintage watch.

🟨 FAQ

Why are Hasselblad, Phase One, and Leica cameras the primary focus, when there are other great options?
☝️ These three brands are known for their high-end cameras, which are often used by professionals due to their superior quality. Because the algorithm we call artificial intelligence was trained on images produced by these high-end professionals, it tends to produce output that reflects the exceptional quality associated with these brands.

For Phase One Camera - https://tensor.art/articles/771043378340050555
For Hasselblad Camera - https://tensor.art/articles/771012991446530480
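If you want to batch-test these commands, a few lines of Python can splice each variation onto one base sentence, so the resulting prompts differ only in their camera settings. (The base scene below is my own placeholder, not from the article.)

# Append each exposure command to a single base scene.
base = "A fisherman mending nets on a rainy pier"
variations = [
    "Leica M10-R, low exposure, high contrast black and white, ISO 100, with a 50mm prime lens",
    "Leica Q2, low exposure, high contrast, ISO 100, with a 28mm prime lens",
    "Leica M11, low exposure, high contrast, ISO 200, with a 50mm prime lens",
]
for v in variations:
    print(f"{base}, {v}.")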
Art Mediums (127 Style)

Art Mediums

Various art mediums, prompted with '{medium} art of a woman':

Metalpoint, Miniature Painting, Mixed Media, Monotype Printing, Mosaic Tile Art, Mosaic, Neon, Oil Paint, Origami, Papermaking, Papier-mâché, Pastel, Pen And Ink, Performance Art, Photography, Photomontage, Plaster, Plastic Arts, Polymer Clay, Printmaking, Puppetry, Pyrography, Quilling, Quilt Art, Recycled Art, Relief Printing, Resin, Reverse Glass Painting, Sand, Scratchboard Art, Screen Printing, Scrimshaw, Sculpture Welding, Sequin Art, Silk Painting, Silverpoint, Sound Art, Spray Paint, Stained Glass, Stencil, Stone, Tapestry, Tattoo Art, Tempera, Terra-cotta, Textile Art, Video Art, Virtual Reality Art, Watercolor, Wax, Weaving, Wire Sculpture, Wood, Woodcut, Glass, Glitch Art, Gold Leaf, Gouache, Graffiti, Graphite Pencil, Ice, Ink Wash Painting, Installation Art, Intaglio Printing, Interactive Media, Kinetic Art, Knitting, Land Art, Leather, Lenticular Printing, Light Projection, Lithography, Macrame, Marble, Metal, Colored Pencil, Computer-generated Imagery (CGI), Conceptual Art, Copper Etching, Crochet, Decoupage, Digital Mosaic, Digital Painting, Digital Sculpture, Diorama, Embroidery, Enamel, Encaustic Painting, Environmental Art, Etching, Fabric, Felting, Fiber, Foam Carving, Found Objects, Fresco, Augmented Reality Art, Batik, Beadwork, Body Painting, Bookbinding, Bronze, Calligraphy, Cast Paper, Ceramics, Chalk, Charcoal, Clay, Collage, Collagraphy, 3D Printing, Acrylic Paint, Airbrush, Algorithmic Art, Animation, Art Glass, Assemblage
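A short Python sketch shows how the template expands over the list (the subset of mediums here is just a sample from the full list above):

# Expand the article's '{medium} art of a woman' template
# over a few of the mediums listed above.
mediums = ["Watercolor", "Charcoal", "Mosaic", "Glitch Art", "Fresco"]
for m in mediums:
    print(f"{m} art of a woman")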
An examination of the effect of Denoise on Flux Img2Img with LoRA: a journey from Boat to Campervan

I made an AI Tool yesterday ( FLUX IMG2IMG + LORAS + UPSCALE + CHOICE | ComfyUI Workflow | Tensor.Art ) that allows you to combine up to 3 LoRAs and upscale. It has model switching to let you choose whether to turn on 0/1/2/3 of the available LoRA inputs, you can choose the weighting one by one, and you can swap out the base Flux model and all the LoRAs to your own preferences. I have implemented radio-button prompting so that the main trigger words for the LoRAs I use most often are already behind the buttons, and you can use "Custom" to add your own prompt or triggers into the mix.

For this test I used a 6k Adobe Stock licensed image of a boat on the beach, with the Model Switcher set to "2" to prevent any bleed from other LoRAs in the tool. Everything is upscaled by 4x-UltraSharp at a factor of 2 (the tool will size your longest edge to 1024 as it processes, so you will end up with a 2048-pixel final image ready for Facebook servers).

Original input image: (downsized for article)

The first test was simply putting it through the AI Tool on the base Flux model: no denoise, no LoRA at all.
Next I added the LoRA "TQ - Flux Frozen" by @TracQuoc at 0.9 weight, with 0.25 denoise.
At 0.5 denoise you can see subtle changes: a signature has appeared, the boat is starting to change in areas, and writing is appearing on the side of the boat.
At 0.6 denoise the boat is starting to adapt more and the beach is changing a lot.
By 0.65 you can really see dramatic changes as the boat starts to develop wheels; it's almost as if the AI has a plan for this one...
At 0.7 the second boat has disappeared altogether, the whole boat is on a trailer, and the beach is changing into grassland.
Now I am stepping in 0.01 increments, as all the drama normally happens between 0.7 and 0.8 with Flux.
0.71: ...
0.72: the boat is definitely changing its shape now, and you can start to see snow.
0.73: you can see it's becoming a land-based vehicle now.
0.74: it feels like a towing caravan/trailer.
0.75: more detail in the towing section.
0.76: everything changes and suddenly we have some kind of safari land cruiser.
0.77: now it's a camper van with a pop-up roof.
0.78: just some more camper-style detailing, but nothing dramatic.
0.79: there's almost no resemblance to the original scene except the sky and colours.
0.8: I can't see much change here.
Now I will go up in increments of 0.05 again.
0.85: the Frozen world has taken over, although it still has the style and colour feel of the original to some extent.
0.9: it's all gone (the tool ignored inputs over 0.9 and changed them back to 0.9).

I hope you have found this a useful experiment that will save you time and coins when playing with img2img and denoise. You can check out all my AI Tools and LoRAs on my profile here: KurtC PhotoEd. Let me know if you enjoyed this and I might make some more (this was my first one).
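If you want to reproduce this kind of sweep yourself, the schedule is easy to generate; the values below mirror the ones used in the experiment above.

# Denoise sweep: coarse steps first, fine 0.01 steps in the
# 0.70-0.80 range where Flux img2img changes fastest, then
# coarse steps again.
coarse_low = [0.25, 0.50, 0.60, 0.65]
fine = [round(0.70 + i * 0.01, 2) for i in range(11)]  # 0.70 ... 0.80
coarse_high = [0.85, 0.90]
for d in coarse_low + fine + coarse_high:
    print(f"denoise = {d}")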
What exactly are the "node" and the "workflow" in an AI image platform? (an explanation for beginners)

The Traditional Way of Generating AI Images for the Beginner

If you are a beginner in the AI community, you may be very confused and have no clue about what a "Node" and a "Workflow" are, and how they relate to "AI Tools" in TensorArt.

To start in the simplest way, we first need to mention how a user generates an image using the "Remix" button, which brings us to the normal creation menu. Needless to say, you just edit the prompt (what you would like your picture to look like) and the negative prompt (what you do not want to see in the output image), then push the Generate button, and the wonderful AI tool will kindly draw a new illustration for you within a minute!

That sounds great, don't you think? Especially if we imagine the huge amount of time humans spent in the past to publish just one single piece of art. (Although today, in 2024, in my personal opinion, neither AI nor human abilities are fully replaceable, especially when it comes to beautiful, perfect hands :P ) However, the backbone, meaning what happens behind the user-friendly menu that lets us "Select model", "Add LoRA", "Add ControlNet", "Set the aspect ratio (the original size of the image)" and so on, is a collection of "Nodes" in a very complex "Workflow".

PS.1. The Checkpoint and the Model often refer to the same thing: the core program that has been trained to draw the illustration. Each one has its strengths and weaknesses (i.e. anime-oriented or realistic-oriented).
PS.2. A LoRA (Low-Rank Adaptation) is like an add-on to the model, allowing it to adapt to a different style, theme, or user preference. A concrete example is an anime character LoRA.
PS.3. ControlNet is like a condition setting for the image. It helps the model truly understand what the text prompt alone cannot describe, for instance how a character poses in each direction, or the angle of the camera.

So here comes "ComfyFlow" (a nickname for the Workflow; people also call it "ComfyUI"), which gave me a super headache when I saw something like this for the first time in my life! (The image above is a flow I spent a lot of time studying: a flow for combining the contents of two images into a single one.) Yeah, maybe it is my fault that I did not take a class about workflows from the beginning or search for a tutorial on YouTube first (as English is not my first language). But wouldn't it be better if we had an instructor to explain it step by step here on Tensor.Art? That is why I was inspired to write this article solely for the beginner. So let's start with the main content of the article.

What is ComfyFlow?

ComfyFlow, or the Workflow, is an innovative AI image-generating platform that allows users to create stunning visuals with ease. To get the most out of this tool, it's important to understand two key concepts: "workflow" and "node". Let's break these down in the simplest way possible.

What is a Workflow?

A workflow is like a blueprint or a recipe that guides the creation of an image. Just as a recipe outlines the steps to make a dish, a workflow outlines the steps and processes needed to generate an image. It's a sequence of actions that the AI follows to produce the final output.

Think of it like this:
Recipe (Workflow): tells you what ingredients to use and in what order.
Ingredients (Nodes): each step or component used in the recipe.

Despite the recommended pre-set templates that TensorArt kindly gives to users, from a beginner's viewpoint, without knowledge of the workflow they are not that helpful, because after clicking the "Try" button we are bombarded with the complexity of the nodes!

What is a Node?

Nodes are the building blocks of a workflow. Each node represents a specific action or process that contributes to the final image. In ComfyFlow, nodes can be thought of as individual steps in the workflow, each performing a distinct function. Imagine nodes as parts of a puzzle: individual pieces that fit together to complete the picture (the workflow).

How Do Workflows and Nodes Work Together?

1-2) Starting point: every workflow begins with initial nodes, which might be an image input from the user, together with the Checkpoint and LoRA serving as image references.
3-4) Processing nodes: these draw or modify the image in some way, such as adding color or texture, or applying filters.
5) Ending point: the node that outputs the completed image, which works very closely with the previous stage's nodes for sampling and VAE decoding.

PS. A Variational Autoencoder (VAE) is a generative model that learns from input data, such as images, to reconstruct and generate new, similar images or variations based on the patterns it has learned.

Here is the list of nodes I used for an ordinary image generation of my waifu, using 1 checkpoint and 2 LoRAs, to help you understand how ComfyFlow works. The numbers 1-5 represent the overall process of the workflow and the role of each type of node mentioned above. In more complex tasks like AI Tools, however, the number of nodes is sometimes higher than 30!

By the way, when starting from an empty ComfyFlow page, the way to add a node is: right click -> "Add Node" -> browse the list that appears, where the most frequently used nodes are near the top.

1) loaders -> Load Checkpoint
As in the normal task creation menu, this is the node where we choose the Checkpoint, or core model. It is important to note that nodes work together through inputs and outputs. The "MODEL/CLIP/VAE" output circles have to connect to the corresponding inputs of the next node. We link them by left-clicking on a circle's inner area and dragging to the destination.
PS. CLIP (Contrastive Language-Image Pre-training) is a model developed by OpenAI that links images and text together in a way that helps AI understand and generate images based on textual descriptions.

2) loaders -> Load LoRA
The Checkpoint is very closely related to the LoRA, which is why they are connected by the inputs/outputs named "model/MODEL" and "clip/CLIP". Since in this example I used 2 LoRAs (the first for the theme of the picture and the second as the character reference for my waifu), the two LoRA nodes also have to be connected to each other. Here we can adjust the strength (weight) of each LoRA, just as in the normal task generation menu.

3) CLIP Text Encode (Prompt)
This node holds the prompt and negative prompt we normally see in the menu. The only input here is "clip" (Contrastive Language-Image Pre-training), and the output is "CONDITIONING".
User tip: if you click on an output circle of the "Load LoRA" node and drag it to an empty area, ComfyFlow will pop up a list of compatible next nodes so you can create one with ease.

4) KSampler & Empty Latent Image
The sampling method tells the AI how it should start generating visual patterns from the initial noise, and everything associated with its adjustment is set in this type of sampling node, together with "Empty Latent Image". The inputs at this step are the model (from the LoRA node) and positive and negative (from the prompt nodes); the output is "LATENT".

5) VAE Decode & final output node
Once the sampling node is established, its "LATENT" output has to connect with "samples", while "vae" is the link back to the "Load Checkpoint" node from the beginning. And when everything is done, the "IMAGE" final output is served into your hands.

PS. An AI Tool is a more complex workflow created for a specific task, such as swapping the face of the person in an original picture with a target face, changing the style of an input illustration to another one, etc.
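For readers who prefer to see the wiring written out, here is a minimal sketch of the five-step flow above, expressed as a Python dict in the shape of ComfyUI's API-format JSON. File names, prompt text, and settings are placeholders, and only one LoRA is shown for brevity; connections are [source_node_id, output_index] pairs.

# Minimal sketch of the workflow described above (placeholders throughout).
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "model.safetensors"}},
    "2": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0], "clip": ["1", 1],
                     "lora_name": "theme_lora.safetensors",
                     "strength_model": 0.8, "strength_clip": 0.8}},
    "3": {"class_type": "CLIPTextEncode",   # positive prompt
          "inputs": {"clip": ["2", 1], "text": "1girl, silver hair, garden"}},
    "4": {"class_type": "CLIPTextEncode",   # negative prompt
          "inputs": {"clip": ["2", 1], "text": "lowres, bad hands"}},
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 832, "height": 1216, "batch_size": 1}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["2", 0], "positive": ["3", 0],
                     "negative": ["4", 0], "latent_image": ["5", 0],
                     "seed": 42, "steps": 25, "cfg": 6.0,
                     "sampler_name": "dpmpp_2m", "scheduler": "karras",
                     "denoise": 1.0}},
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
}

Node "2" takes the MODEL and CLIP outputs of node "1", exactly like dragging the circles described above; a second LoraLoader for the character would chain onto node "2" in the same way.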
Did you know? Tensor.Art's [Enhance Prompt] feature

Hello everyone. Today I would like to introduce a feature of Tensor.Art that is often overlooked: [Enhance Prompt]. Have you heard of it? It's a slightly inconspicuous feature on the image generation screen you normally use, but it's very useful.

1. What is this feature?
Enhance Prompt automatically generates multiple prompts that are enriched and reconstructed by AI from the words of the prompt you entered. This makes the original prompt richer and more diverse, and increases the variety of images generated.

2. How to use it
It's very easy!
Enter the prompt: first, enter the prompt for the image you want to generate as you normally would.
Click the enhance button: click the "Enhance Prompt" button at the bottom right of the prompt input field.

3. Check the suggested prompts
The AI automatically analyzes the prompt you entered and suggests multiple enhanced prompts. Several restructured and enhanced prompts appear; all you have to do is choose the one you like and click the Select button. It will be automatically filled into the prompt field. Then generate as usual!

4. Uses
Broaden your ideas: the enhancement can reveal expressions and ideas you never thought of.
Improve the quality of your projects: more sophisticated prompts can improve the quality of the generated images.
Break down creative barriers: when you're stuck for ideas, this feature can help you find new inspiration.

5. Actual effects
By using Enhance Prompt, you can achieve the following:
Generate more diverse images: increase the variety of images generated from the original prompt.
Improve accuracy: with more specific and detailed prompts, you can more easily generate images that are close to your expectations.
Save time: you avoid the trouble of manually iterating on prompts.

Summary
As you can see, Tensor.Art's Enhance Prompt is a powerful tool that can further stimulate your creativity and streamline the image generation process. Give it a try and see the effects for yourself!
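To make the idea concrete, here is an illustrative before/after pair (my own example, not actual tool output): entering the prompt "a cat in a garden" might return enhanced suggestions along the lines of "a fluffy tabby cat sitting among blooming roses in a sunlit garden, soft morning light, shallow depth of field, highly detailed". You then pick whichever suggestion best matches your intent and generate as usual.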
Guide to Using SDXL

Guide to Using SDXL

I occasionally see posts about difficulties in generating images successfully, so here is an introduction to the basic setup.

1. Introduction
SDXL is a model that can generate images with higher accuracy compared to SD1.5. It produces high-quality representations of human bodies and structures, with fewer distortions and more realistic fine details, textures, and shadows. With SD1.5, generation parameters were generally applicable across different models, so there was no need for specific adjustments. While SDXL can still use some SD1.5 techniques without issues, the recommended generation parameters vary significantly depending on the model. Additionally, LoRA and Embeddings (such as EasyNegative) are completely incompatible, requiring a review of prompt construction. Notably, embeddings commonly used in SD1.5 negative prompts are recognized merely as strings by XL models, so you must replace them with corresponding embeddings or add appropriate tags. This guide explains the recommended parameter settings for using SDXL.

2. Basic Parameters
VAE: Selecting "sdxl-vae-fp16-fix.safetensors" will suffice. Many models have this built in, so specification might not be necessary.
Image Size: Using the resolution presets provided by TensorArt should be sufficient. Too-small or excessively large resolutions may not yield appropriate generation results, so please avoid the sizes that were frequently used with SD1.5 wherever possible. Even if you want to create vertically or horizontally elongated images, do so within a range that does not significantly alter the total pixel count (adjust by increasing height and decreasing width, for example).
Sampling Method: Choose the sampler recommended for the model first, then select according to your preference. Typically, Euler a or DPM++ 2M SDE Karras should work well.
Sampling Steps: XL models might generate images effectively at lower steps due to optimizations like LCM or Turbo. Be sure to check the recommended values for the selected model.
CFG Scale: This varies by model, so check the recommended values. Typically the range is around 2 to 8.
Hires.fix: For free users, specifying 1.5x might hit the upper limit, so use custom settings with the following resolutions:
768x1152 -> 1024x1536
1152x768 -> 1536x1024
1024x1024 -> 1248x1248
Choose the upscaler according to your preference. Set the denoising strength to around 0.3 to 0.4.

3. Prompt
SDXL handles natural language better. You can input elements separated by commas or simply write a complete sentence in English, and it will generate images as intended. Using a tool like ChatGPT to create prompts can also be beneficial. However, depending on how the model was additionally trained, it might be better to use existing tags. Furthermore, some models have tags specified to enhance quality, so always check the model's page. For example:
AnimagineXL3.1: masterpiece, best quality, very aesthetic, absurdres is recommended.
Pony models: score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up is recommended.
ToxicEchoXL: masterpiece, best quality, aesthetic is recommended.
In this way, especially for XL anime or illustration models, appropriate tag usage is crucial.

4. Negative Prompts
Forget the negative prompts used in SD1.5. "EasyNegative" is just a string. The embeddings usable on TensorArt are negativeXL_D and unaestheticXLv13; choose according to your preference. Some models have recommended prompts listed.
For AnimagineXL: nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract]
For ToxicEchoXL: nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digits, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name.
For photo models, sometimes it is better not to use negative prompts at all to create a certain atmosphere, so try various approaches.

5. Recommended SDXL Models
ToxicEnvisionXL - https://tensor.art/models/736585744778443103/ToxicEnvisionXL-v1
A recently released high-quality photo model. Yes, I created it. If you are looking for a photo model, you can't go wrong with this one. Check the related posts to see what kinds of images can be created. You can create a variety of realistic images, from analog photo styles to gravure, movies, fantasy, and surreal depictions. Although it is primarily a photo-based model, it can also create analog-style images.
ToxicEtheRealXL - https://tensor.art/models/702813703965453448/ToxicEtheRealXL-v1
A versatile model that supports both illustrations and photorealistic images. Yes, I created it. The model's flexibility requires well-crafted prompts to determine whether the output is more illustrative or photorealistic. Using a LoRA to strengthen the direction might make it easier to use.
ToxicEchoXL - https://tensor.art/models/689378702666043553/ToxicEchoXL-v1
A high-performance model specialized for illustrations. Yes, I created it. It features a unique style based on watercolor painting, with custom training and adjustments. I have also created various LoRA for style changes, so please visit my user page: https://tensor.art/u/649265516304702656. My current favorite is Beautiful Warrior XL + atmosphere. The model covers a range from illustrations to photos, so give it a try. However, it is weak at generating copyrighted characters, so use a LoRA or models like AnimagineXL or Pony for those. ToxicEchoXL can produce unique illustration styles when using character LoRA, making it highly suitable for fan art.

6. Conclusion
I hope this guide helps those who struggle to generate images as well as the model samples suggest. Well... if you remix from a model's showcase, you can create beautiful images without this guide... SD3 has also been released, so if possible I would like to create models for that as well. It seems that a paid license is required for commercial use, though...
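The "keep the total pixel count constant" advice in section 2 is easy to automate. Here is a minimal sketch (my own helper, not an official TensorArt utility) that picks a width and height near SDXL's 1024x1024 pixel budget for a given aspect ratio:

# Keep total pixels near SDXL's native 1024x1024 budget while
# changing aspect ratio; dimensions snap down to multiples of 64.
def sdxl_size(aspect_w: int, aspect_h: int, budget: int = 1024 * 1024):
    scale = (budget / (aspect_w * aspect_h)) ** 0.5
    w = int(aspect_w * scale) // 64 * 64
    h = int(aspect_h * scale) // 64 * 64
    return w, h

print(sdxl_size(2, 3))  # (832, 1216), a common SDXL portrait size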
How to transform your images into a Halloween party atmosphere. | 🎃 Halloween 2024

INSTRUCTIONS: This is a very simple workflow: just upload your image and press RUN. The PROMPT basically does not need to be modified, but you can still add more Halloween elements to make the theme richer. Hope you all have a good time.

PROMPT: (masterpiece), ((halloween elements)), a person, halloween striped thighhighs, witch hat, grin, (ghost), sweets, candy, candy cane, cookie, string of flags, halloween costume, jack-o'-lantern bucket, halloween, pumpkins, black cat, halloween, little ghost, magic robe, autumn leaves, candle, skull, 3d cg.

Negative PROMPT: None.

Workflow link: https://tensor.art/workflows/786144487641608308
AI-tool link: https://tensor.art/template/786150277257599620
Model used (CKPT): https://tensor.art/models/757279507095956705/FLUX.1-dev-fp8
[Guide] Make your own Loras, easy and free

This article helped me to create my first Lora and upload it to Tensor.Art. Although Tensor.Art has its own Lora Train feature, this article helps you understand how to create a Lora well.

🏭 Preamble
Even if you don't know where to start or don't have a powerful computer, I can guide you to making your first Lora and more! In this guide we'll be using resources from my GitHub page. If you're new to Stable Diffusion I also have a full guide to generate your own images and learn useful tools. I'm making this guide for the joy it brings me to share my hobbies and the work I put into them. I believe all information should be free for everyone, including image generation software. However, I do not support you if you want to use AI to trick people, scam people, or break the law. I just do it for fun. Also, here's a page where I collect Hololive loras.

📃 What you need
An internet connection. You can even do this from your phone if you want to (as long as you can prevent the tab from closing).
Knowledge about what Loras are and how to use them.
Patience. I'll try to explain these new concepts in an easy way. Just try to read carefully, use critical thinking, and don't give up if you encounter errors.

🎴 Making a Lora
It has a reputation for being difficult: so many options, and nobody explains what any of them do. Well, I've streamlined the process so that anyone can make their own Lora starting from nothing in under an hour, all while keeping some advanced settings you can use later on. You could of course train a Lora on your own computer, granted that you have an Nvidia graphics card with 6 GB of VRAM or more. We won't be doing that in this guide though; we'll be using Google Colab, which lets you borrow Google's powerful computers and graphics cards for free for a few hours a day (some say it's 20 hours a week). You can also pay $10 to get up to 50 extra hours, but you don't have to. We'll also be using a little bit of Google Drive storage. This guide focuses on anime, but it also works for photorealism. However, I won't help you if you want to copy real people's faces without their consent.

🎡 Types of Lora
As you may know, a Lora can be trained and used for: a character or person, an artstyle, a pose, a piece of clothing, etc.
However, there are also different types of Lora now:
LoRA: the classic, works well for most cases.
LoCon: has more layers which learn more aspects of the training data. Very good for artstyles.
LoHa, LoKR, (IA)^3: these use novel mathematical algorithms to process the training data. I won't cover them as I don't think they're very useful.

📊 First Half: Making a Dataset
This is the longest and most important part of making a Lora. A dataset is (for us) a collection of images and their descriptions, where each pair has the same filename (e.g. "1.png" and "1.txt"), and they all have something in common which you want the AI to learn. The quality of your dataset is essential: you want your images to have at least 2 examples each of poses, angles, backgrounds, clothes, etc. If all your images are face close-ups, for example, your Lora will have a hard time generating full body shots (but it's still possible!), unless you add a couple of examples of those. As you add more variety, the concept will be better understood, allowing the AI to create new things that weren't in the training data. For example, a character may then be generated in new poses and in different clothes. You can train a mediocre Lora with a bare minimum of 5 images, but I recommend 20 or more, and up to 1000.

As for the descriptions: for general images you want short and detailed sentences such as "full body photograph of a woman with blonde hair sitting on a chair". For anime you'll need to use booru tags (1girl, blonde hair, full body, on chair, etc.). Let me describe how tags work in your dataset. You need to be detailed, as the Lora will reference what's going on by using the base model you train on. If there is something present in all your images that you don't include in your tags, it will become part of your Lora. This is because the Lora absorbs details that can't be described easily with words, such as faces and accessories. Thanks to this, you can let those details be absorbed into an activation tag: a unique word or phrase that goes at the start of every text file and makes your Lora easy to prompt.

You may gather your images online and describe them manually. But fortunately, you can do most of this process automatically using my new 📊 dataset maker colab. Here are the steps:

1️⃣ Setup: This will connect to your Google Drive. Choose a simple name for your project and a folder structure you like, then run the cell by clicking the floating play button on the left side. It will ask for permission; accept to continue the guide. If you already have images to train with, upload them to your Google Drive's "lora_training/datasets/project_name" (old) or "Loras/project_name/dataset" (new) folder, and you may choose to skip step 2.

2️⃣ Scrape images from Gelbooru: In the case of anime, we will use the vast collection of available art to train our Lora. Gelbooru sorts images through thousands of booru tags describing everything about an image, which is also how we'll tag our images later. Follow the instructions on the colab for this step; basically, you want to request images that contain specific tags representing your concept, character or style. When you run this cell it will show you the results and ask if you want to continue. Once you're satisfied, type yes and wait a minute for your images to download.

3️⃣ Curate your images: There are a lot of duplicate images on Gelbooru, so we'll be using the FiftyOne AI to detect them and mark them for deletion. This will take a couple of minutes once you run the cell. They won't be deleted yet, though: eventually an interactive area will appear below the cell, displaying all your images in a grid. Here you can select the ones you don't like and mark them for deletion too. Follow the instructions in the colab; it is beneficial to delete low-quality or unrelated images that slipped their way in. When you're finished, send Enter in the text box above the interactive area to apply your changes.

4️⃣ Tag your images: We'll be using the WD 1.4 tagger AI to assign anime tags that describe your images, or the BLIP AI to create captions for photorealistic/other images. This takes a few minutes. I've found good results with a tagging threshold of 0.35 to 0.5. After running this cell it'll show you the most common tags in your dataset, which will be useful for the next step.

5️⃣ Curate your tags: This step for anime tags is optional, but very useful. Here you can assign the activation tag (also called trigger word) for your Lora. If you're training a style, you probably don't want any activation tag, so that the Lora is always in effect. If you're training a character, I myself tend to delete (prune) common tags that are intrinsic to the character, such as body features and hair/eye color. This causes them to get absorbed by the activation tag. Pruning makes prompting with your Lora easier, but also less flexible. Some people like to prune all clothing to have a single tag that defines a character outfit; I do not recommend this, as too much pruning will affect some details. A more flexible approach is to merge tags: for example, if we have some redundant tags like "striped shirt, vertical stripes, vertical-striped shirt", we can replace all of them with just "striped shirt". You can run this step as many times as you want.

6️⃣ Ready: Your dataset is stored in your Google Drive. You can do anything you want with it, but we'll be going straight to the second half of this tutorial to start training your Lora!

⭐ Second Half: Settings and Training
This is the tricky part. To train your Lora we'll use my ⭐ Lora trainer colab. It consists of a single cell with all the settings you need. Many of these settings don't need to be changed. However, this guide and the colab will explain what each of them does, so that you can play with them in the future. Here are the settings:

▶️ Setup: Enter the same project name you used in the first half of the guide and it'll work automatically. Here you can also change the base model for training. There are 2 recommended default ones, but alternatively you can paste a direct download link to a custom model of your choice. Make sure to pick the same folder structure you used in the dataset maker.

▶️ Processing: These are the settings that change how your dataset will be processed.
The resolution should stay at 512 this time, which is normal for Stable Diffusion. Increasing it makes training much slower, but it does help with finer details.
flip_aug is a trick to learn more evenly, as if you had more images, but it makes the AI confuse left and right, so it's your choice.
shuffle_tags should always stay active if you use anime tags, as it makes prompting more flexible and reduces bias.
activation_tags is important: set it to 1 if you added one during the dataset part of the guide. This is also called keep_tokens.

▶️ Steps: We need to pay attention here. There are 4 variables at play: your number of images, the number of repeats, the number of epochs, and the batch size. These result in your total steps. You can choose to set the total epochs or the total steps; we will look at some examples in a moment. Too few steps will undercook the Lora and make it useless, and too many will overcook it and distort your images. This is why we choose to save the Lora every few epochs, so we can compare and decide later. For this reason, I recommend few repeats and many epochs. There are many ways to train a Lora. The method I personally follow focuses on balancing the epochs, such that I can choose between 10 and 20 epochs depending on whether I want a fast cook or a slow simmer (which is better for styles). Also, I have found that more images generally need more steps to stabilize. Thanks to the new min_snr_gamma option, Loras take fewer epochs to train. Here are some healthy values for you to try (a small calculator for this arithmetic follows at the end of the article):
10 images × 10 repeats × 20 epochs ÷ 2 batch size = 1000 steps
20 images × 10 repeats × 10 epochs ÷ 2 batch size = 1000 steps
100 images × 3 repeats × 10 epochs ÷ 2 batch size = 1500 steps
400 images × 1 repeat × 10 epochs ÷ 2 batch size = 2000 steps
1000 images × 1 repeat × 10 epochs ÷ 3 batch size = 3300 steps

▶️ Learning: The most important settings. However, you don't need to change any of these your first time. In any case:
The unet learning rate dictates how fast your Lora will absorb information. Like with steps, if it's too small the Lora won't do anything, and if it's too large the Lora will deep-fry every image you generate. There's a flexible range of working values, especially since you can change the intensity of the Lora in prompts. Assuming you set dim between 8 and 32 (see below), I recommend 5e-4 unet for almost all situations. If you want a slow simmer, 1e-4 or 2e-4 will be better. Note that these are in scientific notation: 1e-4 = 0.0001.
The text encoder learning rate is less important, especially for styles. It helps learn tags better, but the Lora will still learn them without it. It is generally accepted that it should be either half or a fifth of the unet; good values include 1e-4 or 5e-5. Use Google as a calculator if you find these small values confusing.
The scheduler guides the learning rate over time. This is not critical, but it still helps. I always use cosine with 3 restarts, which I personally feel keeps the Lora "fresh". Feel free to experiment with cosine, constant, and constant with warmup; you can't go wrong with those. There's also the warmup ratio, which should help the training start efficiently; the default of 5% works well.

▶️ Structure: Here is where you choose the type of Lora from the 2 I mentioned at the beginning. Also, dim/alpha set the size of your Lora. Larger does not usually mean better. I personally use 16/8, which works great for characters and is only 18 MB.

▶️ Ready: Now you're ready to run this big cell, which will train your Lora. It will take 5 minutes to boot up, after which it starts performing the training steps. In total it should take less than an hour, and it will put the results in your Google Drive.

🏁 Third Half: Testing
You read that right. I lied! 😈 There are 3 parts to this guide. When you finish your Lora you still have to test it to know if it's good. Go to your Google Drive, inside the /lora_training/outputs/ folder, and download everything inside your project name's folder. Each of these is a different Lora saved at a different epoch of your training; each has a number like 01, 02, 03, etc.

Here's a simple workflow to find the optimal way to use your Lora:
1. Put your final Lora in your prompt with a weight of 0.7 or 1, and include some of the most common tags you saw during the tagging part of the guide. You should see a clear effect, hopefully similar to what you tried to train. Adjust your prompt until you're either satisfied or can't seem to get it any better.
2. Use the X/Y/Z plot to compare different epochs. This is a built-in feature in webui. Go to the bottom of the generation parameters and select the script. Put the Lora of the first epoch in your prompt (like "<lora:projectname-01:0.7>"), and in the script's X value write something like "-01, -02, -03", etc. Make sure the X value is in "Prompt S/R" mode. These perform replacements in your prompt, causing it to go through the different numbers of your Lora so you can compare their quality. You can first compare every 2nd or every 5th epoch if you want to save time. You should ideally do batches of images to compare more fairly.
3. Once you've found your favorite epoch, try to find the best weight. Do an X/Y/Z plot again, this time with an X value like ":0.5, :0.6, :0.7, :0.8, :0.9, :1". It will replace a small part of your prompt to go over different Lora weights. Again, it's better to compare in batches. You're looking for a weight that results in the best detail without distorting the image. If you want, you can do steps 2 and 3 together as X/Y; it'll take longer but be more thorough.
4. If you found results you liked, congratulations! Keep testing different situations, angles, clothes, etc., to see if your Lora can be creative and do things that weren't in the training data.

source: civitai/holostrawberry
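As a quick sanity check on the step arithmetic above, here is a minimal helper (my own sketch, matching the formula images × repeats × epochs ÷ batch size used in the "healthy values" list):

# Total training steps = images * repeats * epochs / batch_size.
def total_steps(images: int, repeats: int, epochs: int, batch_size: int) -> int:
    return images * repeats * epochs // batch_size

print(total_steps(10, 10, 20, 2))   # 1000
print(total_steps(1000, 1, 10, 3))  # 3333, listed as ~3300 above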
ComfyUI Core Nodes Loaders #CHRISTMAS WALKTHROUGH

1. Load CLIP Vision
Decodes the image to form descriptions (prompts), then converts them into conditional inputs for the sampler. Based on the decoded descriptions (prompts), it generates new, similar images. Multiple nodes can be used together. Suitable for transforming concepts and abstract things; used in combination with CLIP Vision Encode.

2. Load CLIP
The Load CLIP node can be used to load a specific CLIP model. CLIP models are used to encode text prompts that guide the diffusion process.
*Conditional diffusion models are trained using a specific CLIP model; using a different model than the one it was trained with is unlikely to result in good images. The Load Checkpoint node automatically loads the correct CLIP model.

3. unCLIP Checkpoint Loader
The unCLIP Checkpoint Loader node can be used to load a diffusion model specifically made to work with unCLIP. unCLIP diffusion models are used to denoise latents conditioned not only on the provided text prompt, but also on provided images. This node will also provide the appropriate VAE, CLIP, and CLIP vision models.
*Even though this node can be used to load all diffusion models, not all diffusion models are compatible with unCLIP.

4. Load ControlNet Model
The Load ControlNet Model node can be used to load a ControlNet model. Used in conjunction with Apply ControlNet.

5. Load LoRA

6. Load VAE

7. Load Upscale Model

8. Load Checkpoint

9. Load Style Model
The Load Style Model node can be used to load a Style model. Style models can be used to give a diffusion model a visual hint as to what kind of style the denoised latent should be in.
*Only T2IAdaptor style models are currently supported.

10. Hypernetwork Loader
The Hypernetwork Loader node can be used to load a hypernetwork. Similar to LoRAs, hypernetworks are used to modify the diffusion model, altering the way in which latents are denoised. Typical use cases include adding to the model the ability to generate in certain styles, or to better generate certain subjects or actions. One can even chain multiple hypernetworks together to further modify the model.
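Since Load LoRA (5) supports the same kind of chaining described for hypernetworks (10), here is a small sketch of two chained LoRA loaders in the shape of ComfyUI's API-format JSON, written as a Python dict. Node ids and file names are placeholders, and node "1" is assumed to be a Load Checkpoint node.

# Two LoraLoader nodes chained: the second takes the first's
# MODEL/CLIP outputs, so both LoRAs modify the checkpoint.
lora_chain = {
    "10": {"class_type": "LoraLoader",
           "inputs": {"model": ["1", 0], "clip": ["1", 1],
                      "lora_name": "style.safetensors",
                      "strength_model": 0.7, "strength_clip": 0.7}},
    "11": {"class_type": "LoraLoader",
           "inputs": {"model": ["10", 0], "clip": ["10", 1],
                      "lora_name": "character.safetensors",
                      "strength_model": 0.9, "strength_clip": 0.9}},
}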
Halloween2024 | Unlocking Creativity: The Power of Prompt Words in Writing

Writing can sometimes feel tough, especially when you're staring at a blank page. If you're struggling to find inspiration, prompt words can be a helpful tool. These words can spark ideas and make writing easier and more fun. Let's explore how prompt words can boost your creativity and how to use them effectively.

What Are Prompt Words?
Prompt words are specific words or phrases that inspire you to write. They can be anything from a single word to a short phrase that gets your imagination going. For example, words like "adventure," "friendship," or "mystery" can lead to exciting stories or poems.

Why Use Prompt Words?
1. Overcome Writer's Block: If you're stuck and don't know what to write, a prompt word can give you a direction to start.
2. Spark Creativity: One word can trigger a flood of ideas. It helps you think outside the box.
3. Try New Styles: Prompt words encourage you to write in different genres or styles you might not normally explore.
4. Build a Writing Habit: Using prompt words regularly can help you develop a consistent writing routine.

How to Use Prompt Words
1. Make a List: Start by writing down some prompt words that inspire you. A few examples: Adventure, Dream, Secret, Journey, Change.
2. Quick Writing Exercise: Pick a prompt word and set a timer for 10 minutes. Write anything that comes to mind without worrying about making it perfect. This helps you get your ideas flowing.
3. Write a Story or Scene: Choose a prompt word and try to write a short story or scene based on it. For example, if your word is "mystery," think about a detective solving a case.
4. Create a Poem: Use a prompt word to write a poem. Let the word guide your ideas and feelings. You can write a simple haiku or free verse.
5. Share with Friends: Share your prompt words with friends and challenge each other to write something based on the same word. This can lead to fun discussions and new ideas.

Tips for Using Prompt Words
- Write Daily: Spend a few minutes each day writing with a prompt word. This builds your skills and keeps your creativity flowing.
- Make a Prompt Jar: Write different prompt words on slips of paper and put them in a jar. Whenever you need inspiration, pull one out and start writing.
- Reflect on Your Work: After you write, take a moment to think about what you created. What did you like? What can you improve?
- Explore Different Genres: Use prompt words to try writing in genres you don't usually write in, like fantasy or poetry. This helps you grow as a writer.

Conclusion
Prompt words are a simple yet powerful way to boost your creativity and make writing enjoyable. They can help you overcome blocks, spark new ideas, and develop a consistent writing habit. So, the next time you feel stuck, remember that a single word can lead to amazing stories. Embrace the power of prompt words and watch your creativity soar!
RealAnime Event: Toon Drifter Faction Showdown! ~11/28 (official announcement, translated)

RealAnime, a TensorArt-exclusive model that lets anime characters break the fourth wall, is here! 🎉 Using the easy-to-use AI Tool, you can generate anime characters in real-world scenes. Just type a prompt and watch the magic happen.

Show Drifter
When the alarm rings, it's time to get up and go to work! Anime characters have to work hard to fill their stomachs too. Using the designated AI Tool, why not design the working life of your favorite anime character? 💼✨
Unlike Bruce Wayne, the Joker has to buy groceries and cook for himself after work. 🤡 Rem needs to learn how to make coffee and desserts at a maid café. Because his salary was low, Thanos decided to snap his fingers and blow up the company. 💥

Join the faction showdown!
Choose a faction and raise its reputation by posting with the designated tag!
Faction tags (probably required; use one of them):
#Driftermon
#DrifterAvengers
#DrifterDoom

Reputation calculation rule:
Reputation = (number of Pro users who posted × 0.4 + number of Standard users who posted × 0.2 + number of people who liked × 0.1 + number of people who remixed × 0.3) × 100
Each faction's reputation is updated daily, so don't forget to post every day and rally support for your team. 🏆
*The official event page has a tag that displays the team ratings.

Top reputation bonus: every member of the leading faction receives 500 Credits and 1 day of Pro. 🎉
Special bonus: high-quality posts also have a chance at mystery rewards! 🎁

Social media post rewards
Earn 100 Credits per social media post, up to 500 Credits.
Content format: unrestricted! Posts must include the tags #TensorArt and #RealAnime.
Supported platforms: Instagram, TikTok, Twitter, Facebook, Reddit, YouTube, Pinterest.
Additional rewards:
- 500+ likes: $20
- 500+ retweets: $70
- If you have more than 5,000 followers: $40 for 500+ likes and $140 for 500+ retweets.
Click the form icon, fill in your participation info, and claim your rewards! 📲

Event period
November 18 – November 28

Event rules
- The theme and content of posts must match the style of the event.
- Each post may include only one event tag.
- Users with the default avatar and nickname are not eligible for rewards.
- NSFW, child/celebrity pornography, and low-quality content do not count as valid entries.
- Cheating results in disqualification from the event.
- Final interpretation of the event belongs to Tensor.Art.

The correct generation method (official)
A piping-hot "fourth-wall-breaking" image in just 4 steps! Click the AI Tool and let's get started! 🖱️✨
Step 1: On the right side of the page, choose one of the character-name options, or click "Custom" and enter an anime character's name.
Step 2: In the "do something" section below, choose one of the action options, or click "Custom" and describe the action. A detailed description gives a more accurate result, e.g. "wearing a red dress, drinking wine in a real convertible."
Step 3: Choose the "image size". There are 9 common sizes to pick from, depending on your needs.
Step 4: Click the "go" button below and wait patiently for the image to generate. Switch tabs at the top to see past results.
Tips: Click "Translate" to translate your input text into English. If you're not satisfied with the result, change the character or scene and try again. 🎨✨

Hamster-style generation method 1: On the hunch that splitting the prompt across the two fields is probably fine, just write whatever you like.
Hamster-style generation method 2: Put nothing but a space (" ") in field ②.
Tip: honestly, writing a normal prompt is quicker.
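The reputation formula above is simple enough to check by hand; here is a minimal Python sketch of it (the function and variable names are mine, not TensorArt's):

```python
def faction_reputation(pro_posters: int, standard_posters: int,
                       likers: int, remixers: int) -> float:
    """Reputation = (Pro posters*0.4 + Standard posters*0.2
                     + likers*0.1 + remixers*0.3) * 100"""
    return (pro_posters * 0.4 + standard_posters * 0.2
            + likers * 0.1 + remixers * 0.3) * 100

# e.g. 5 Pro posters, 10 Standard posters, 40 likes, 8 remixes:
print(faction_reputation(5, 10, 40, 8))  # -> 1040.0
```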
halloween2024 Alien Pumpkin

PROMPT:
photography of Alien incubator, seen from above,lots Halloween pumpkin With facial features, A close-up shot of a Halloween pumpkin With facial features, (An top open Halloween pumpkin,There's a real brain inside, bloody, a chestburster Out of the pumpkin, facehugger jump Out of the pumpkin),The word "halloween" scrawled in blood on floor,Cold interior lighting,direct sunlight on Halloween pumpkin,dramatic light and shadow, Horrifying movie scenes

Negative PROMPT:
low quality, low resolution, unrealistic, semi-realistic, animation, drawing, 2D, painting, lack detail, flat background, bad lighting, bad composition

Workflow link:
https://tensor.art/workflow/editor/786104965352589871

AI-tool link:
https://tensor.art/template/786112649048914425

Models used:
CKPT: https://tensor.art/models/757279507095956705/FLUX.1-dev-fp8
LORA: https://tensor.art/models/768181864964856710/FLUX-Cinematic-V1
https://tensor.art/models/782762523020600332
https://tensor.art/models/783607553541144419
Decoding AI Art Prompts: Why "Score_9, etc" Won't Get You a Better Image.

⛔️ DO NOT USE Score_9, Score_8_Up, Score_7_Up, etc.

AI-powered image generation has surged in popularity, with models like FLUX, DALL-E 2, Stable Diffusion, and Midjourney producing highly realistic and imaginative images from simple text prompts. These tools have empowered users to create visual art with just a few words. However, understanding the inner workings of these models can help improve the quality of prompts and, ultimately, the images they generate.

🟥 What Are "Score_9, Score_8_Up" and Similar Terms?
You may have seen terms like "Score_9" or "Score_8_Up" in discussions about AI-generated images. These terms refer to internal scoring mechanisms used during the training of AI models, where the system assesses images at various quality levels. For example:
- "Score_9": Indicates the highest-quality images during training.
- "Score_5_Up": Refers to images of moderate quality, not as refined as those with a "Score_9."
The system uses these scores during training to fine-tune the model and help it differentiate between images of varying quality. Over time, this process leads to better, more accurate output once the model is fully trained.

🟨 Why Including These Scores in Prompts Is Ineffective
While these scoring mechanisms are crucial during model training, they serve no purpose when included in user prompts. Here's why:
🚷 Scores Are Internal: These scores are part of the model's training process and are not accessible or relevant to the end-user prompt system. When you include terms like "Score_9" or "Score_8_Up" in your prompt, the model does not understand them as it would a descriptive term. Instead, it may interpret them as arbitrary text, which can confuse the output and lead to unexpected or undesirable results.
⚠️ Prompts Should Be Descriptive, Not Coded: AI models work best when given clear, descriptive language. Including internal scoring jargon can dilute the clarity of your prompt, resulting in less relevant or lower-quality images.

🟩 How to Write Better AI Image Prompts
To create high-quality images, focus on providing the AI with precise, vivid descriptions. Here are some tips for improving your prompts:
- Use Clear, Concise Language: Be specific about what you want. Instead of relying on scoring terms, describe the image you envision. For example, instead of "Score_9", say "highly detailed portrait in soft lighting."
- Incorporate Key Details: Include information about the image's colors, style, lighting, composition, and subject. The more detail you provide, the more likely the model will produce an image that aligns with your vision.
- Provide Style References: Mention well-known artistic styles, mediums (such as watercolor or oil painting), or even specific artists (if relevant). Alternatively, if you have a particular style in mind, including links to reference images can help guide the AI's output.
- Experiment and Refine: AI image generation is still an evolving field. Don't hesitate to tweak your prompts, try different combinations of words, or run multiple iterations to explore the model's full capabilities. Experimenting is key to achieving better results.

🟦 Conclusion
While it may be tempting to use internal training terms like "Score_9" in your prompts, doing so won't improve the quality of your AI-generated images. These scores are meaningful only during the model's training phase and have no value when generating images for users. Instead, focus on crafting well-thought-out prompts using descriptive language, key details, and style references.
With clear and specific instructions, you'll be able to harness the full power of AI art generators and create visuals that align with your creative vision.

📚 References
- FLUX AI, DALL-E, and Midjourney documentation. (2023). Understanding AI image models and their scoring mechanisms.
- Brown, T., et al. (2020). "Language Models are Few-Shot Learners." OpenAI Research Paper.
- Radford, A., et al. (2021). "Learning Transferable Visual Models From Natural Language Supervision." CLIP (Contrastive Language–Image Pre-training), OpenAI Research Paper.
- Ramesh, A., et al. (2021). "DALL·E: Creating Images from Text." OpenAI Blog.
- Zhang, R., et al. (2022). "Diffusion Models in Vision: A Comprehensive Survey."
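One way to see why score tags read as "arbitrary text" to the prompt system is to look at how CLIP's tokenizer splits them. A minimal sketch using the Hugging Face transformers library (the model ID is the standard CLIP-L checkpoint used by Stable Diffusion 1.x; the prompts are examples):

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

for prompt in ["score_9, score_8_up",
               "highly detailed portrait in soft lighting"]:
    # A quality tag is just subword pieces to the tokenizer, not a quality
    # switch -- the model only "understands" it if it was trained on
    # captions that actually contained it.
    print(prompt, "->", tokenizer.tokenize(prompt))
```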
ComfyUI: Text2Image Basic Glossary

Hello! This is my first article; I hope it benefits whoever reads it. I still have limited knowledge about workflows, but I have researched and learned little by little. If anyone would like to contribute content, you are totally free to do so. Thank you.

I wrote this article to give a brief, basic explanation of core concepts in ComfyUI and workflows. This is a technology with many possibilities, and it would be great to make it easier for everyone to use!

What is Workflow?
Workflow is one of the two main image generation systems that Tensor.Art offers at the moment. It is a generation method characterized by a great capacity to stimulate users' creativity; it also lets Free users access some Pro features.

How do I access WorkFlow mode?
To access WorkFlow mode, hover the mouse cursor over the "Create" tab as if you were going to create an image by conventional means. Then click the "ComfyFlow" option and you are done.

After that, you will see a tab with two options: "New WorkFlow" and "Import WorkFlow". The first lets you start a workflow from a template or from scratch, while the second lets you load a workflow you have saved on your PC as a JSON file.

If you click "New WorkFlow", a tab with a list of templates is displayed (each template has a different purpose). The main one is "Text2Image"; it lets us create images from text, similar to the conventional method we always use. You can also create a workflow from scratch with the "Empty WorkFlow Template" option, but to better explain the basics we will use "Text2Image".

Once you click the "Text2Image" option, wait a few seconds and a new tab opens with the template, which contains the basics for creating an image from text.

Nodes and Borders: what are they and how do they work?
To understand the basics of how a workflow operates, you need a clear understanding of Nodes and Borders.

Nodes are the small boxes present in the workflow; each node has a specific function needed for creating, enhancing, or editing the image or video. The basic Text2Image nodes are the Checkpoint loader, the CLIP Text Encoders, the Empty Latent Image, the KSampler, the VAE decoder, and Save Image. Note that there are hundreds of other nodes besides these basics, all with many different functions.

On the other hand, the "Borders" are the small colored wires that connect the different nodes. They establish which nodes are directly related.
The Borders are color-coded, and each color generally corresponds to a specific function:

- Purple relates to the Model or LoRA used.
- Yellow connects the model or LoRA to the space where the prompt is written.
- Red refers to the VAE.
- Orange connects the prompt boxes to the "KSampler" node.
- Fuchsia refers to the latent, which serves many purposes; in this case it connects the "Empty Latent Image" node to the "KSampler" node and establishes the number and size of the images to be generated.
- Blue relates to everything to do with images; it has many uses, but here it connects to the "Save Image" node.

What are the Text2Image template nodes used for?
Understanding this is highly relevant, because it tells you what each node of this basic template is for. It's like knowing what each piece in a Lego set does and how the pieces connect to create a beautiful masterpiece! Also, once you know what these nodes do, it becomes easier to intuit the functionality of their variants and other derived nodes.

A) The first is the "Load Checkpoint" node, which has three specific functions. The first is to load the base model (checkpoint) used to create the image. The second is the CLIP output, which connects the positive and negative prompts you write to the checkpoint. And the third is to load and connect the VAE model.

B) The second is the "Empty Latent Image", the node in charge of setting the image dimensions in latent space. It has two functions: first, setting the width and height of the image; and second, setting how many images are generated simultaneously via the "Batch Size" option.

C) Third are the two "CLIP Text Encoder" nodes: there will always be at least two of these, since they set the positive and negative prompts you write to describe the image you want. They are usually connected to the "Load Checkpoint" (or any LoRA) and to the "KSampler" node.

D) Then there is the "KSampler" node. This node is the central point of the whole workflow; it sets the most important parameters of image creation. It has several functions: the first is to determine the seed of the image and regulate how much it changes from one generated image to the next via the "control_after_generate" option. The second is to set how many steps are used to create the image (you set them as you wish); the third is to determine which sampling method is used and which scheduler that method follows (this regulates how noise is removed over the steps while creating the image).

E) The penultimate one is the VAE decoder. This node assists in processing the image to be generated: its main function is materializing the written prompt into an image. That is, it reconstructs the description of the image we want as one of the final steps of the generation process. The information is then passed to the "Save Image" node to display the generated image as the final product.
F) The last node to explain is "Save Image". This node has the simple function of saving the generated image and giving the user a view of the final work, which is later stored in the taskbar where all generated images are located.

Final Consideration:
This has been a small summary and explanation of very basic ComfyUI concepts; you could even call it a small glossary of general terms. I have tried to provide a small foundation that makes this image generation tool easier to understand. There is still a lot to explain, but I will try to cover all the topics; the information would not fit in a single article (ComfyUI is a whole universe of possibilities). Thank you so much for taking the time to read this article!
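For reference, here is roughly how those six basic nodes connect when a Text2Image workflow is exported in ComfyUI's API (JSON) format — a minimal sketch; the node IDs, model file name, and prompts are placeholders:

```python
# Minimal Text2Image graph in ComfyUI API format (Python dict -> JSON).
# Each input wire is ["source_node_id", output_index].
text2image = {
    "1": {"class_type": "CheckpointLoaderSimple",            # Load Checkpoint
          "inputs": {"ckpt_name": "model.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",                    # positive prompt
          "inputs": {"text": "1girl, masterpiece", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",                    # negative prompt
          "inputs": {"text": "low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",                  # size + batch
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",                          # the sampler
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",                         # latent -> pixels
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "text2image"}},
}
```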
My Models List & Suggestions Needed (in the comments), Part 1

Hello all! I have uploaded my models in 3 different categories.

1st - ColorMax
I made the ColorMax models only for comic, cartoon, and anime art styles; all of my models with the ColorMax title have different art styles.

Models list:
- ColorMax (SDXL) - https://tensor.art/models/628320743211433592 — My first SDXL model, with a unique 2.5D + anime art style that works better with detailed prompts.
- ColorMax Anime-Verse (SD1.5) - https://tensor.art/models/667900735154841978 — A detailed semi-anime + 2.5D style. I made 2 versions: V1 was a beta and V2 is the full, highly detailed release, but it takes 25 or more steps to make a proper image. I will start work on V3 soon.
- ColorMax Anime (Shiny) - https://tensor.art/models/660959423759510296 — My first, best, and most detailed anime model, made with a 3 GB+ anime image dataset. I am so glad and thankful for everyone's love and the positive response. I added "Shiny" because I made 2 models, and the second one has a dark style.
- ColorMax Anime (Dark) - https://tensor.art/models/667209700686802899 — Same as ColorMax Anime (Shiny), but trained on a dark anime dataset; it has some issues. I planned to make a V2, but I decided to prioritize other models first according to demand.
- ColorMax Eye Details - https://tensor.art/models/680228304906628453 — After lots of requests, I made this eye detailer. I am glad people love it.
- ColorMax AniAir - https://tensor.art/models/700620204037809468 — A model for colorful, multi-style anime art; sci-fi, fantasy, or any unique anime art can easily be made with this model.
- ColorMax Anime V2 - https://tensor.art/models/701391262336569195 — After many requests, and some small prompt issues in ColorMax Anime (Shiny), I decided to improve it and made V2: highly detailed, with a low-step setting that needs only 20 steps to make an image.
- ColorMax Neon Lights and Background - https://tensor.art/models/695080125722681685 — My first LoRA made with TA Training, mainly for neon styles in 2.5D or anime images.
- ColorMax Toon Style - https://tensor.art/models/713960892150149766 — My newest SD1.5 model, very detailed, with a cartoon style.
- ColorMax Anime Shiny Detailer & Enhancer - https://tensor.art/models/715722275418201651 — A detailer for SDXL colorful anime, made with TA Training.

I am not going to add Pony models to this list... To be continued in Part 2.
[Generation Guide] Worldview: Add it to create a punk world.

Hello everyone. When you say "punk" in an image generation prompt, SteamPunk, CyberPunk, etc. are the words most often used, but there are many other punks that can produce interesting image effects. The effects will vary depending on the base model, LoRA, filters, etc., but here are a few of them.

Word — Image Features (Effects)
- SteamPunk — Steam engines and a Victorian atmosphere; icons such as gears, goggles, top hats, and corsets; distorted utopias and contradictions; airships, mechanical calculators, etc.
- CyberPunk — Neon colors, neon lights, neon green, neon blue, gadgets, goggles, near future, high-tech machines
- RococoPunk — A whimsical aesthetic derived from cyberpunk that thrusts a punk attitude into the 18th-century Rococo era
- SlimePunk — Radioactivity/biohazard symbols, green luminous objects, gas masks, rubber gloves, mall-goth themes and clothing, etc.; black, green, purple, red
- RayPunk — Technological psychedelia, non-standard "science", alternative or distorted, twisted realities, etc.
- AfroPunk — Afro hair, voluminous hair, perms, Black people, dark skin, thick lips, earth colors, etc.
- CatholicPunk — Crosses, priests, priest robes, cassocks
- ClownPunk — Clowns, red noses, white makeup, face paint, colorful, pop colors, red; funky hairstyles and fashion
- AtomPunk — Goggles, headgear, headphones, orange
- BioPunk — Green, tubes, cords, headgear, a scientist
- ClockPunk — Clocks and pocket watches, gears, goggles, hats, gadgets
- DecoPunk — Almost steampunk; hats, goggles, instruments
- DieselPunk — Diesel-type machinery, goggles, hats
- ElfPunk — Elves and fairies, green, pointy ears
- LunarPunk — Fewer gadgets and machines than steampunk
- MythPunk — A subgenre of mythological fiction that originates from mythology
- NanoPunk — Futuristic, sophisticated machines, not many gadgets
- NowPunk — Leather jackets, punk rock fashion

Other than these: SteelPunk, StonePunk, SpacePunk, SolarPunk, NeonPunk, AnalogPunk, AnimalPunk, BirdPunk, CosmoPunk, DigitalPunk, DinosaurPunk... and many more 🙂 It might be interesting to try other combinations too 🙂

The following images were created under these conditions:
Base model: RealCartoon - Special, LoRA: None, VAE: Automatic, Sampler: DPM2
Prompt = 1girl, creates beautiful women with the theme of "■■Punk"
Batch Resize Images

https://www.presize.io/
Batch resize images for dataset training (models, LoRA, etc.) to the general size desired — 512×512, 1024×1024, etc. — in one go. It also allows tweaking each image's scale and zoom.

Optional:
https://www.birme.net/
Also batch resizes images for dataset training (models, LoRA, etc.) to the desired size — 512×512, 1024×1024, etc. — in one go.
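If you'd rather do the same job locally, it's a few lines of Python with Pillow — a minimal sketch (the folder names and target size are placeholders):

```python
from pathlib import Path
from PIL import Image

SRC, DST, SIZE = Path("dataset_raw"), Path("dataset_512"), (512, 512)
DST.mkdir(exist_ok=True)

for path in SRC.iterdir():
    if path.suffix.lower() in {".png", ".jpg", ".jpeg", ".webp"}:
        img = Image.open(path).convert("RGB")
        # Center-crop to a square first so the resize doesn't distort subjects
        side = min(img.size)
        left, top = (img.width - side) // 2, (img.height - side) // 2
        img = img.crop((left, top, left + side, top + side))
        img.resize(SIZE, Image.LANCZOS).save(DST / f"{path.stem}.png")
```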
The future of AI image generation: endless possibilities

Introduction {{For those who are about to start AI image generation}}
In recent years, advances in AI technology have brought revolutionary changes to the field of image generation. In particular, AI-powered illustration generation has become a powerful tool for artists and designers. However, as this technology advances, issues of creativity and copyright arise. In this article, we will explain the possibilities of AI image generation, specific use cases, how to create prompts, how to use LoRA and its effects, keywords for improving image quality, copyright considerations, and more.

Fundamentals of AI image generation
AI image generation uses artificial intelligence to learn from data and generate new images. Deep learning techniques are often used for this, and one notable approach is Stable Diffusion. Stable Diffusion employs a probabilistic method called a diffusion model to gradually remove noise during image generation, resulting in highly realistic, high-quality output.

Generating real images
AI technology is excellent not only for creating cute illustrations, but also for generating realistic images. For example, you can generate high-resolution images that resemble photorealistic landscapes or portraits. By utilizing Stable Diffusion, it is possible to generate more detailed images, which expands the possibilities for application in various fields such as advertising, film production, and game design.

Generating cute illustrations
One practical application of AI image generation is the creation of cute illustrations. This is useful for things like character design and avatar creation, allowing you to quickly generate different styles. The process typically involves collecting a large dataset of illustrations, training an AI model on this data to learn different styles and patterns, and generating new illustrations based on user input or keywords.

Creativity and AI
AI image generation also influences creative ideas. Artists can use AI-generated images as inspiration for new works or to expand on ideas, which can lead to the creation of new styles and concepts never thought of before.

Use and effects of LoRA
LoRA (Low-Rank Adaptation) is a technique used to improve the performance of AI models. Its benefits include:
1. Fine-tuning models: LoRA allows you to fine-tune existing AI models to learn specific styles and features, allowing for customization based on user needs.
2. Efficient learning: LoRA reduces the need for large-scale data collection and training costs by efficiently training models on small datasets.
3. Rapid adaptation: LoRA allows you to quickly adapt to new styles and trends, making it easy to generate images tailored to current needs.
For example, LoRA can be leveraged to efficiently achieve high-quality results when generating illustrations in a specific style.

Creating a prompt
When instructing an AI to generate illustrations, it's important to create effective prompts. Key points include providing specific instructions, using the right keywords, trial and error, and optionally a reference image to help the AI figure out what you're looking for.

Keywords for improving image quality
When creating prompts for AI image generation, you can incorporate keywords related to image quality to improve the overall quality of the generated images.
Useful keywords include "high resolution," "detail," "clean lines," "high quality," "sharp," "bright colors," and "photorealistic."

Copyright considerations
Image generation using AI also raises copyright issues. If the dataset used to train an AI model contains copyrighted works, the resulting images may infringe copyright. When using AI image generation tools, it's important to be aware of the data source, ensure that the generated images comply with copyright laws, and check the license agreement.

Conclusion
AI image generation offers great possibilities for artists and designers, but it also raises challenges related to copyright. By using data responsibly and understanding copyright law, you can leverage AI technology to create innovative work. Leveraging techniques like LoRA can further improve efficiency and quality, and users can adjust the output by incorporating image-enhancement keywords into the prompt. Let's explore new ways of expression while staying aware of advances in AI technology and the considerations that come with them!
[Updated 12/9] Official Event: ChristmasWalkthrough, November 29 – December 26 (translated)

This is a translation of the Christmas event running from 11/29 to 12/26.

<If you're short on time or don't know what to do>
By 8:59 AM JST on December 13, post a video and an image with the "3DVideo" and "RealAnime" AI Tools pinned on the home page to get 2 days of Pro. It's a red-hot deal, so do at least this much.

Original article:
https://tensor.art/blackboard/ChristmasWalkthrough
https://docs.google.com/document/d/10GsQgVS-myqSHJGDLVQT3Su9o7gjxvCFl3CehL8ICwk/edit?tab=t.0

Hello, traveler! 🎅🎄 Welcome to Tensor Impact! You are about to set off on a Christmas adventure. Complete the exploration tasks one after another and claim wonderful rewards! ✨

⏰ Exploration period: from November 29 UTC (09:00 JST) to December 26 UTC (09:00 JST). Complete all the "Christmas Walkthrough" tasks in these 28 days and become a successful explorer! 🎁 Finishers receive a $49.9 cash reward and the New Year promotion (buy one, get one free!). In addition, each task can earn rewards worth up to $20, Pro membership days, and Credits.

📅 Exploration task calendar
One task is available each day; clear all the tasks within a week to get that week's badge! If you miss a task, you can make it up with a "Magic Badge", so don't worry! Each task shows a difficulty rating (e.g. 🌟 = easy, 🌟🌟🌟 = hard), and guides are provided for the hard ones. Don't forget to add the "#Christmas Walkthrough" tag to every post! 🎨

Week 1: November 29 – December 5
Complete all tasks during the week to receive 200 Credits (bonus included)!
- 11/29: Post on the daily theme → 20 Credits
- 11/30–12/5: Post following the theme calendar → 20 Credits per day

Week 2: December 6 – December 12 (note: these tasks were updated on 12/9!)
Complete all tasks during the week to receive 10 days of Pro!
- 12/6: Publish a workflow → 1 day of Pro
- 12/7: Publish a video post → 1 day of Pro
- 12/8: Publish an AI Tool that generates video → 1 day of Pro
- 12/9: Use the "RealAnime" AI Tool pinned on the homepage to publish a post → 1 day of Pro
- 12/10: Publish an article related to AI Tools, with "AI Tool" in the title → 1 day of Pro
- 12/11: Publish an AI Tool containing a "Radio Button" → 1 day of Pro
- 12/12: Create a buffet plan (subscription; counts as completed as long as it is created before 12/13) → 1 day of Pro
(Before the update, 12/8 was "post a video using the '3DVideo' AI Tool pinned on the home page".)

Week 3: December 13 – December 19
Complete all tasks during the week to earn a $20 cash reward!
- 12/13 (★★): Publish a Christmas-themed Model → $2
- 12/14 (★★): Publish an article related to Models, with "Model Training" in the title → $2
- 12/15 (★★): Publish a Model in one of the "Game Design, Visual Design, Space Design" channels, matching the style of the chosen channel → $2
- 12/16 (★★★): Publish a Model that successfully joined the TenStarFund → $2
- 12/17 (★★): Have a Model uploaded after November 29 with over 20 user posts → $2
- 12/18 (★★★): Publish a Model using Online Training with Illustrious as the base model → $2
- 12/19 (★★★): Subscription activity (buy or be bought), i.e. have a purchase record since 11/28 00:00 UTC → $2 (the updated table lists this reward as 20 Credits)

Bonus: complete all Week 3 exploration tasks to earn a total of $20 in cash ($2 × 7 plus an extra $6).

Week 4: December 20 – December 26
This week also includes tasks that award a special honor badge!
- 12/20: Have a post published during the event get "Remixed"
- 12/21: Share TensorArt-related content on social media and answer the survey
- 12/22: Like, comment on, or star a post tagged #Christmas Walkthrough
- 12/23: Exchange 30 Credits for a badge (My Page → Credits)
- 12/24: Have an AI Tool published during the event enter the "Black Horse AI Tool" ranking top 100
- 12/25: Have a Model published during the event enter the "Black Horse Model" ranking top 100
- 12/26: Enter the "Creator" ranking top 100

Now, enjoy the adventure! Santa is cheering for you! 🎁✨
Are score_tags necessary in PDXL/SDXL Pony Models? | Halloween2024

Consensus is that the latest generation of Pony SDXL models no longer require "score_9 score_8 score_7" written in the prompt to "look good".

//----//

It is possible to visualize our actual input to the SD model for CLIP_L (a 1x768 tensor) as a 16x16 grid of RGB values, since 16 x 16 x 3 = 768. I'll assume CLIP_G in the SDXL model can be ignored; it's assumed CLIP_G is functionally the same but with 1024 dimensions instead of 768.

So here we have the prompt: "score_9 score_8_up score_8_up"

Then I can do the same but for the prompt: "score_9 score_8_up score_8_up" + X, where X is some random extremely sus prompt I fetch from my gallery. Assume it fills up to the full 77 tokens (I set truncate=True on the tokenizer, so it just caps off past the 77-token limit).

Examples: etc. etc.

Granted, the first three tokens of the prompt greatly influence the "theme" of the output in the 768-dimensional encoding. But from the above images one can see that the "appearance" of the text encoding can vary a lot. Thus, the "best" way to write a prompt is rarely universal.

Here I'm running some random text I write myself to check similarity to our "score prompt" (the top result should be 100%, so I might have some rounding error):

score_6 score_7_up score_8_up : 98.03%
score 8578 : 85.42%
highscore : 82.87%
beautiful : 77.09%
score boobs score : 73.16%
SCORE : 80.1%
score score score : 83.87%
score 1 score 2 score 3 : 87.64%
score : 80.1%
score up score : 88.45%
score 123 score down : 84.62%

So even though the model is trained for "score_6 score_7_up score_8_up", we can be fairly loose in how we phrase it, if we want to phrase it at all. The same principle applies to all LoRAs and their activation keywords.

Negatives are special: the text we write in the negatives is split by whitespace, and the chunks are encoded individually.

Link to the notebook if you want to run your own tests:
https://huggingface.co/datasets/codeShare/fusion-t2i-generator-data/blob/main/Google%20Colab%20Jupyter%20Notebooks/fusion_t2i_CLIP_interrogator.ipynb

I use this thing to search up prompt words using the CLIP_L model.

//---//

These are the most similar items to the Pony model "score prompt" within my text corpus. Items of zero similarity (perpendicular vectors) or negative similarity (vectors pointing in the opposite direction) to the encoding are omitted from these results. Note that these are encodings similar to the "score prompt" trigger encoding, not an analysis of what the Pony model considers good quality.

Prompt phrases among my text corpus most similar to "score_9 score_8_up score_8_up" according to CLIP (the peak of the graph above): Community: sfa_polyfic - 68.3 % holding blood ephemeral dream - 68.3 % Excell - 68.3 % supacrikeydave - 68.3 % Score | Matthew Caruso - 67.8 % freckles on face and body HeadpatPOV - 67.8 % Kazuno Sarah/Kunikida Hanamaru - 67.8 % iers-kraken lun - 67.8 % blob whichever blanchett - 67.6 % Gideon Royal - 67.6 % Antok/Lotor/Regris (Voltron) - 67.6 % Pauldron - 66.7 % nsfw blush Raven - 66.7 % Episode: s08e09 Enemies Domestic - 66.7 % John Steinbeck/Tanizaki Junichirou (Bungou Stray Dogs) - 66.7 % populism probiotics airspace shifter - 65.4 % Sole Survivor & X6-88 - 65.4 % Corgi BB-8 (Star Wars) - 65.4 % Quatre Raberba Winner/Undisclosed - 65.2 % resembling a miniature fireworks display with a green haze.
Precision Shoot - 65.2 % bracelet grey skin - 65.2 % Reborn/Doctor Shamal (Katekyou Hitman Reborn!)/Original Male Character(s) - 65.2 % James/Madison Li - 65.1 % Feral Mumintrollet | Moomintroll - 65.1 % wafc ccu linkin - 65.1 % Christopher Mills - 65.0 % at Overcast - 65.0 % Kairi & Naminé (Kingdom Hearts) - 65.0 % with magical symbols glowing in the air around her. The atmosphere is charged with magic Ghost white short kimono - 65.0 % The ice age is coming - 65.0 % Jonathan Reid & Bigby Wolf - 65.0 % blue doe eyes cortical column - 65.0 % Leshawna/Harold Norbert Cheever Doris McGrady V - 65.0 % foxtv matchups panna - 65.0 % Din Djarin & Migs Mayfeld & Grogu | Baby Yoda - 65.0 % Epilogue jumps ahead - 65.0 % nico sensopi - 64.8 % 秦风 - Character - 64.8 % Caradoc Dearborn - 64.8 % caribbean island processing highly detailed by wlop - 64.8 % Tim Drake's Parents - 64.7 % probiotics hardworkpaysoff onstorm allez - 64.7 % Corpul | Coirpre - 64.7 % Cantar de Flor y Espinas (Web Series) - 64.7 % populist dialog biographical - 64.7 % uf!papyrus/reader - 64.7 % Imrah of Legann & Roald II of Conte - 64.6 % d brown legwear - 64.6 % Urey Rockbell - 64.6 % bass_clef - 64.6 % Royal Links AU - 64.6 % sunlight glinting off metal ghost town - 64.6 % Cross Marian/Undisclosed - 64.6 % ccu monoxide thcentury - 64.5 % Dimitri Alexandre Blaiddyd & Summoner | Eclat | Kiran - 64.5 %
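If you want to reproduce this kind of comparison outside the linked notebook, here is a minimal sketch using the Hugging Face transformers library (cosine similarity between CLIP_L text embeddings; the model ID is the standard CLIP-L checkpoint, and the test prompts are examples — expect slightly different numbers than the notebook, which has its own pipeline):

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModelWithProjection

model_id = "openai/clip-vit-large-patch14"  # CLIP_L
tokenizer = CLIPTokenizer.from_pretrained(model_id)
encoder = CLIPTextModelWithProjection.from_pretrained(model_id)

def embed(prompt: str) -> torch.Tensor:
    # Pad/truncate to the full 77-token context, like the article does.
    tokens = tokenizer(prompt, truncation=True, padding="max_length",
                       max_length=77, return_tensors="pt")
    with torch.no_grad():
        return encoder(**tokens).text_embeds[0]  # 1x768 projection

reference = embed("score_9 score_8_up score_8_up")
for text in ["score_6 score_7_up score_8_up", "highscore", "beautiful"]:
    sim = torch.nn.functional.cosine_similarity(reference, embed(text), dim=0)
    print(f"{text}: {100 * sim.item():.2f}%")
```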
Navigating the World of AI Image Upscaling: A Guide to Popular Models

In the age of digital imagery, we often encounter low-resolution photos or artwork that we wish were sharper and more detailed. Thankfully, AI-powered image upscaling models have emerged to address this need, breathing new life into pixelated images. But with a plethora of options available, choosing the right model can feel overwhelming. This article aims to demystify the world of AI upscaling by exploring some of the most popular models and their unique characteristics.

Understanding the Basics
Before diving into specific models, let's grasp some fundamental concepts:
- Upscaling: Increasing the size of an image (e.g., doubling or quadrupling its resolution) while attempting to preserve and enhance details.
- GANs (Generative Adversarial Networks): Many upscaling models are based on GANs, which pit two neural networks against each other: one generates upscaled images, while the other tries to distinguish real from generated images. This adversarial training process leads to increasingly realistic results.
- Datasets: The quality and diversity of the dataset used to train a model significantly influence its performance.

Exploring Popular Upscaling Models
Let's categorize these models by upscaling factor and delve into their strengths:

2x Upscaling:
- 2x-ESRGAN: A versatile and widely used model known for its good balance of detail preservation and sharpness.

4x Upscaling:
- 4x-AnimeSharp: Specifically designed for anime-style images, excelling at preserving line art and vibrant colors.
- 4x-UltraSharp: Aims for maximum sharpness and detail, but may introduce some artifacts.
- 4xFaceUpSharpDAT: Focuses on enhancing facial details, making it ideal for portraits.
- 4xLexicaDAT2_otf: Trained on a vast dataset from Lexica.art, known for producing visually appealing results.
- 4x_APISR_GRL_GAN_generator and 4x_APISR_RRDB_GAN_generator: These models utilize different GAN architectures (GRL and RRDB), each with its own nuances.
- 4x_IllustrationJaNa_V1_DAT2_190k and 4x_IllustrationJaNa_V1_ESRGAN_135k: Specialized for illustrations and anime art, trained on datasets tailored to this style.
- 4x_NMKD-Siax_200k and 4x_NMKD-Superscale-SP_178000_G: Models from the NMKD project, known for their solid performance across various image types.
- 4x_RealisticRescaler_100000_G: Aims for realistic upscaling, suitable for photographs and natural scenes.
- 4x_foolhardy_Remacri: A model known for its ability to handle complex textures and details.
- ESRGAN_4x: A widely used ESRGAN model specifically trained for 4x upscaling.

8x Upscaling:
- 8x_NMKD-Superscale_150000_G: An NMKD model capable of significant upscaling, but it may require more processing power.

Other Upscaling Factors:
- DAT_x2, DAT_x3, DAT_x4: Models from the DAT project, known for their speed and efficiency.
- SwinIR_4x: A model based on the Swin Transformer architecture, known for its ability to capture fine details.

RealESRGAN Models:
- RealESRGAN_x2plus and RealESRGAN_x4plus: Improved versions of RealESRGAN, focusing on realistic 2x and 4x upscaling respectively.
- RealESRGAN_x4plus_anime_6B: A variant specifically trained on anime-style images.

Finding the Right Model
The "best" model is subjective and depends on your specific needs:
- Image type: anime,
illustrations, photographs, etc.
- Desired style: sharpness, smoothness, realism, etc.
- Upscaling factor: 2x, 4x, 8x, etc.
- Processing power: some models are more demanding than others.

Experimentation is Key!
The best way to find the perfect model is to experiment. Many online tools and software packages let you easily test various upscaling models and compare the results. Don't be afraid to try different options until you find the one that best suits your needs.
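Most of the models above ship as .pth or .safetensors weight files that the usual UIs load for you. If you want to script a comparison yourself, one option is the spandrel library (the model loader ComfyUI uses) — a rough sketch under the assumption of spandrel's documented load_from_file API; the weight and image file names are placeholders:

```python
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor, to_pil_image
from spandrel import ImageModelDescriptor, ModelLoader

# Placeholder file name -- point this at any ESRGAN/DAT/SwinIR-family weights.
model = ModelLoader().load_from_file("4x-UltraSharp.pth")
assert isinstance(model, ImageModelDescriptor)  # image-to-image upscaler
model.eval()

img = to_tensor(Image.open("input.png").convert("RGB")).unsqueeze(0)  # 1x3xHxW
with torch.no_grad():
    upscaled = model(img)  # 1x3x(H*scale)x(W*scale)

to_pil_image(upscaled.squeeze(0).clamp(0, 1)).save("output.png")
```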