Tensor.Art

Creation

Get started with Stable Diffusion!

ComfyFlow

ComfyUI's amazing experience!

Host My Model

Share my models, get more attention!

Online Training

Make LoRA Training easier!
RealAnime/动漫牛马 (18K)
Tahilalat style (302)
3DVideo/3D视频 (3.7K)
Make your pictures come alive with CogVideo-5B (24K)
Western Vintage "XMas & Happy New year" Edition | FLUX (2.1K)
Fast Pastel Vintage Xmas Generate (16)
Arcane Flux (135)
Flux dev x Tensor (475K)
Flux Mimic (478)
Flux dev simple. cute anime girl (120)
Photo To Disney Style (1.7K)
Kolors (386)
Pony Vision (3.6K)
中割り動画生成 (In-between Animation Generation) (152)
Hidden Art (1.3K)
[SD3]Stable Diffusion 3 AI Tool - FuturEvoLab (1.7K)
Realistic Vision (1.7K)
Outpaint (Beta!) (970)
TATTOO Yourself! (2K)
Photo To Ghibli Style (1.4K)
DreamShoes - Trendy shoe designer (271)
Photo to Comic/Anime Style (7.1K)
Archi Retro - Flux (747)
Flux Pulid Halloween Yourself (2.7K)
3D Cartoon Vision | SD3M (1.1K)
Home interior design ideas (491)
Oil painting Wallpaper (Wide) V1.0 (6.4K)

Articles

[REYApping] Simple and Brief Explanation of AI Tool


Hello and welcome to the third edition of REYApping, a space where I write a bunch of nonsense. Without further ado, let's begin.

Never in my entire Tensor life did I think I would actually try to explain something. But here we are: an article about AI Tools. What is an AI Tool? Why make one? How is it different from "create mode"? I'll try to explain.

What is an AI Tool?

Now, I might be wrong here (roast me in the comments), but here's my answer: an AI Tool is a simplified, more straightforward interface over a ComfyUI workflow. It saves you from seeing the bunch of tangled spaghetti that can potentially break your eyes and mind. Instead of customizing the workflow nodes directly, you get an interface similar to "create mode". The downside is that it can have limited parameters, since those are set by the tool's creator, and you won't know how the workflow works internally. It also sucks up your credits and soul (Riiwa, 2024).

Here's an image of a ComfyUI workflow:

Here's that same workflow turned into an AI Tool:

Why Make an AI Tool?

Simplicity and straightforwardness in the palm(?) of your hand. That's it. It works especially well if your flow has only a few modifiable variables, such as prompts or steps. If your flow has many modifiable variables, and/or you want more control over it, then I suggest working directly in ComfyUI.

How Is It Different from Creation Mode?

Creation mode lets you control basic functions such as samplers, which T5 encoder to use, and extras like ADetailer, img2img, ControlNet, etc. An AI Tool can expose those too, if the author sets them up, but it's generally limited to basics such as prompts, steps, resolution, batch size, and maybe seeds. You can't use things like ADetailer or img2img on your own; you depend entirely on what the tool provides.

In short: Creation Mode allows a broader range of functions, but only with basic abilities, while an AI Tool mostly allows specific functions, but can produce better results thanks to the dark magic trickery inside its ComfyUI flow.

Thank you for reading this part of REYApping. See you in the next one (if there is one).
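The "limited parameters" point can be pictured as a thin wrapper over the workflow. A minimal, purely hypothetical sketch (this is not Tensor.Art's real API; all names here are invented for illustration):

```python
# Hypothetical sketch: an AI Tool as a thin wrapper that exposes only a
# few parameters of a larger workflow. Not Tensor.Art's actual API.

# The full workflow has many knobs the creator can tune...
FULL_WORKFLOW_DEFAULTS = {
    "prompt": "", "negative_prompt": "", "steps": 25, "cfg": 7.0,
    "sampler": "euler", "denoise": 1.0, "controlnet_strength": 0.8,
}

# ...but the published AI Tool only lets users touch a chosen subset.
USER_CONFIGURABLE = {"prompt", "steps"}

def run_ai_tool(**user_params):
    """Merge user input into the creator's fixed defaults."""
    hidden = set(user_params) - USER_CONFIGURABLE
    if hidden:
        raise ValueError(f"Not user-configurable: {sorted(hidden)}")
    settings = dict(FULL_WORKFLOW_DEFAULTS)
    settings.update(user_params)
    return settings

settings = run_ai_tool(prompt="cute anime girl", steps=30)
```

Everything outside `USER_CONFIGURABLE` stays at whatever the creator baked in, which is exactly why an AI Tool feels simpler (and more limited) than create mode.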
How to publish an AI Tool


To publish a tool, you need to have a workflow prepared. You can find your workflows in ComfyFlow. From there you can create a new workflow, import a workflow file, or choose an already made one.

1. Select the workflow you want to turn into an AI Tool and open its editor.
2. The workflow must contain at least one AI Tool node (TA Nodes) integrated into it. (More about TA Nodes: https://tensor.art/about/aitool-tutorial)
3. Run the workflow.
4. After it runs, press the "Publish" button in the top right corner and select "AI Tool".
5. Fill out the required fields (Name, Channel).
6. If everything was done correctly, you can also adjust the "User-configurable Settings".
7. Fill everything in according to your tool/workflow and press Publish.
🎨 AI Tool: Turning Your Workflow into a Magical Black Box of Creativity! 🪄


Hey there, fellow tinkerers and pixel wizards! 🌟 Ever wanted to create an AI tool so powerful that even your future self wouldn't know how it works? Well, buckle up! Today, we're diving into the quirky world of workflow wizardry, where you'll craft AI tools using ComfyUI and publish them like a mysterious, shiny black box. The best part? Your users won't see the chaos inside. 🤫

So, What's the Deal with AI Tools?

Imagine you're assembling a Lego masterpiece, except each piece is a node, and the result isn't a castle, it's an AI tool. 🏰 These tools take user inputs (like prompts or images), process them through a hidden workflow, and spit out something magical. Your users don't need to know what's under the hood; they'll just press buttons and enjoy the ride!

How to Build Your AI Tool (Without Losing Your Marbles):

1️⃣ Dream It: Start by conceptualizing what your AI tool will do. Want to turn doodles into masterpieces or mix Christmas sweaters with robot aesthetics? The possibilities are endless. 🎅🤖
2️⃣ Craft It: In ComfyUI, build your workflow by connecting nodes like a pro pipefitter. Each node has a purpose, from loading models to decoding images. This is where the magic happens, or chaos, depending on your coffee intake. ☕✨
3️⃣ Test It: Run the workflow as an AI tool. At this stage, expect some hiccups. Maybe the colors look weird, or your robot Santa has three arms. That's fine, it's all part of the process!
4️⃣ Polish It: Update, adjust, and repeat until your tool is sleeker than a freshly polished apple. 🍎 Then publish it for the world to admire (or fear).

The Secret Sauce: Export/Import User Settings 🍔

When you update your workflow, the user-configurable settings can reset. 😱 But fear not! With the Export/Import feature, you can save and reload those settings faster than you can say "workflow meltdown."

How It Works:
Export: Before hitting the update button, export your settings. Think of it as taking a backup of your genius. 💾
Import: After updating your workflow, reload the saved settings. Voilà, no more starting from scratch. 🪄
Pro Tip: This feature doesn't work if you change the nodes too drastically. So proceed with caution, or risk hearing your inner monologue scream. 😬

Nodes and Workflows: A Quickie Guide for the Clueless 🤷‍♂️

Nodes: Think of nodes as puzzle pieces. Each one handles a small task, like loading a model 🎒, decoding text 🧾, or sampling images 🎨. Connect them, and you've got a functional pipeline. Disconnected nodes, however, are just sad little islands of potential. 😢

Workflows: A workflow is what you get when you chain nodes together. It's like a recipe for your AI tool: load a model, process a prompt, generate an image, save it. Simple? Yes. Satisfying? Extremely.

When to Publish Your AI Tool 🎉

Once you've created your workflow and polished it to perfection, it's time to publish! Your users will only see the polished front end, not the spaghetti-like chaos of nodes and connections you wrangled into submission. Encourage users to interact by configuring input fields like prompts or sliders. Their creativity meets your innovation; it's a win-win!

Tips for AI Tool Wizards-in-Training 🧙

Start Small: Begin with simple workflows to avoid brain freeze. 🧊
Tinker Away: Play with parameters to see how they affect the output.
Be Bold: Experiment with styles and features. Combine multiple LoRAs for maximum chaos (and brilliance).

Conclusion

Congratulations, you're now equipped to create AI tools that will wow, confuse, and delight users! 🎉 So go forth and turn your wildest ideas into shiny black-box tools. And remember: with great power comes great responsibility, or at least some very weird outputs. 😜

Happy creating! 🎨🪄
BlackPanther
P.S. Don't forget to export those settings. Nobody likes redoing work twice!
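The export/import round trip described above amounts to saving your settings to a file before the update and loading them back afterwards. A rough sketch of the idea (the JSON shape here is invented for illustration; Tensor.Art's actual export format may differ):

```python
import json

# Hypothetical sketch of the export/import idea: user-configurable
# settings saved to JSON before a workflow update, restored after.
# The settings structure below is an invented example.

def export_settings(settings, path):
    """Save settings to a JSON file before updating the workflow."""
    with open(path, "w") as f:
        json.dump(settings, f, indent=2)

def import_settings(path):
    """Reload the saved settings after the workflow update reset them."""
    with open(path) as f:
        return json.load(f)

before_update = {
    "prompt": {"label": "Your prompt", "default": "robot santa"},
    "steps": {"label": "Steps", "default": 25, "min": 10, "max": 40},
}
export_settings(before_update, "ai_tool_settings.json")

# ...workflow gets updated, user-configurable settings reset...
restored = import_settings("ai_tool_settings.json")
```

As the Pro Tip warns, a restore like this can only work if the updated workflow still has the same configurable fields to map the saved settings onto.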
AI Tool -👌Easily create an Ai tool without prompt (Part 1)


Often we have a picture in our mind, and can even find a similar picture, but we don't know how to write the prompt for it. Tensor provides a reverse-inference tool, but it involves extra steps such as copying and pasting, and it does not support NSFW. In short, filling in the options is troublesome, and I am not the only one who thinks so!

Using a workflow to make a small tool can simplify a lot of this. You can see the various small tools I have made; basically there is no need to write prompt words, because I am a very lazy artist.

The following is a simple tutorial that teaches you how to make your first AI Tool. It is very simple. Just follow my pictures step by step!

Step 1: Create a new workflow.
Step 2: Select the img2img template.
Step 3: Double-click a blank area of the interface, search for [wd] in the panel that appears, and select the [WD14 Tagger] plugin.
Step 4: Drag from the image output on the Load Image panel and connect it to the image input on the WD14 panel. This is the basis of the workflow: connecting nodes!
Step 5: Change the WD14 model to the V3 version, the latest image-captioning model. With it, you can turn your image into a prompt.
Step 6: Right-click the CLIP Text Encode panel and select Convert Text to Input.
Step 7: Double-click a blank area again and search for String Function.
Step 8: Right-click the String Function panel and click [convert text_b to input]; then connect [string] on the WD14 panel to [text_b] on the String Function panel.
Step 9: Connect the string output of the String Function panel to the text input of the CLIP Text Encode panel, so your image becomes the positive prompt!
Step 10: Are you tired of reading? I am also tired of writing. Let's take a break 😀😀😀😀
Step 11: Click ckpt_name on the Load Checkpoint panel to select a model. This time we choose a Pony model.
Step 12: In the String Function panel and the other CLIP Text Encode panel, fill in Pony's quality prompts. [positive]: score_9,score_8_up,score_7_up [negative]: score_3,score_2,score_1
Step 13: It's almost done! Set the numbers in the KSampler panel; you can refer to my values.
Step 14: Click Upload on the Load Image panel, select an image you like (the longest side should not exceed 1280), then click Generate, and that's it!
Step 15: Click Publish in the upper right corner, then select Share Workflow. You now have your own workflow tool, and you can find and run it on your personal homepage.

This tutorial ends here. In the next issue, we will show you how to turn the workflow into a small tool and make it more useful and complete. Thank you for your support!
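Steps 8, 9, and 12 boil down to string concatenation: the tags the WD14 tagger extracts from your image get joined with the fixed Pony quality tags to form the positive prompt. A minimal sketch of that logic (the WD14 tag list here is an invented example):

```python
# Sketch of what the String Function node does in steps 8-12:
# concatenate the fixed Pony quality tags (text_a) with the tags the
# WD14 tagger extracted from the uploaded image (text_b).

def string_function(text_a: str, text_b: str) -> str:
    """Join two tag strings into one comma-separated prompt."""
    parts = [t.strip() for t in (text_a, text_b) if t.strip()]
    return ", ".join(parts)

quality_tags = "score_9, score_8_up, score_7_up"
wd14_tags = "1girl, red scarf, snow, smiling"   # example WD14 output

positive_prompt = string_function(quality_tags, wd14_tags)
negative_prompt = "score_3, score_2, score_1"
```

This is why you never have to type a prompt yourself: the uploaded image supplies `text_b` automatically on every run.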
🎭 AI Tool Spotlight: Facial Expression Adjuster & GPTs Flux Prompt PRO 🚀


Unleashing Creative Potential with AI: A Spotlight on the Facial Expression Adjuster and GPTs Flux Prompt PRO

In the ever-evolving world of artificial intelligence, precision and flexibility are at the heart of creating truly engaging and realistic digital content. From lifelike character animations to the fine-tuning of AI-generated imagery, a new generation of tools is enabling creators, animators, and designers to bring their visions to life with unprecedented control and detail. Two such cutting-edge tools, the Facial Expression Adjuster and GPTs Flux Prompt PRO, demonstrate the transformative power of intelligent automation in the creative workflow.

1. The Facial Expression Adjuster
Link: https://tensor.art/template/795874684511075193

The Facial Expression Adjuster is a versatile AI solution designed to enhance and personalize digital facial expressions down to the tiniest detail. Whether you're creating a 3D animated character or refining the emotional nuances of a still portrait, this tool lets you achieve unmatched accuracy and expressiveness. Key features include:

Head Positioning: easily control parameters such as pitch, yaw, and roll, ensuring perfect alignment and posture.
Eye Expressions: fine-tune blink and wink behaviors, adjust eyebrow angles, and position pupils for subtle or dramatic effects.
Mouth Phonetics: simulate mouth shapes corresponding to various phonemes ("A," "E," "W," etc.) to produce speech-like expressions.
Smile Calibration: dial in the intensity of smiles, from a faint grin to a broad beam, adding depth and realism to character personalities.

Ideal for animators, 3D artists, and AI developers, the Facial Expression Adjuster makes it simple to breathe life into digital avatars and scenes. By offering granular control over facial parameters, it unlocks new creative possibilities for storytelling and user engagement.

2. GPTs Flux Prompt PRO
Link: https://chatgpt.com/g/g-NLx886UZW-flux-prompt-pro

As AI-generated images increasingly reshape the creative landscape, the need for effective prompt engineering has never been greater. GPTs Flux Prompt PRO is a specialized tool that streamlines the process of crafting compelling, visually rich prompts for models like FLUX. By guiding creators through practical steps, offering real-world examples, and applying proven methods, it ensures that the prompts you design unlock the full potential of AI-generated visuals. Through this hands-on approach, even newcomers to prompt engineering can rapidly learn how to produce captivating outcomes that align with their artistic vision.

Reinventing Your Workflow with AI

By incorporating the Facial Expression Adjuster and GPTs Flux Prompt PRO into your toolkit, you can drastically enhance the quality and impact of your creative output. These tools don't just automate routine tasks; they empower you to direct AI-driven systems with precision and clarity, resulting in more refined, expressive, and emotionally compelling digital content.

From breathing authenticity into virtual characters to perfecting your prompt-crafting skills, these advanced resources provide a blueprint for success in a world where technology and artistry continue to converge. If you're ready to push your creative boundaries and discover new dimensions in AI-assisted art and animation, the Facial Expression Adjuster and GPTs Flux Prompt PRO stand ready to elevate your work to new heights.
how to create ai tool for beginner - Christmas Walkthrough AI TOOL


In this article I will share how easy it is to create an AI Tool as a beginner. Check it out.

1. Click ComfyFlow in the Create menu at the top.
2. Click New Workflow, or Import Workflow if you already have one.
3. Choose any template you want; in this walkthrough I will use the text2img template.
4. A new browser tab will appear; wait until it has loaded completely.
5. Set the parameters you want. In this walkthrough I will change only the checkpoint and prompt, then do a test run.
6. After a successful test, click Publish and choose AI Tool.
7. A new tab will appear; fill it in, then click Publish.
8. TADA, your AI Tool is now public.
Using New TA Nodes with SelectParams to adjust Redux Style Model (new AI Tool)


Guide to Using the New TA Nodes with SelectParams on Tensor.art

Tensor.art recently introduced the powerful TA Nodes tool, giving users more control and flexibility in AI-driven art creation. This article will guide you through using the SelectParams node to adjust the application intensity of the Redux Style Model via the ConditioningAverage node.

1. What are TA Nodes?
TA Nodes is a node-based workflow system that lets you connect nodes to customize your image-creation process. The SelectParams node is a crucial feature, enabling you to fine-tune input parameters and control how much the Style Model influences the final output.

2. The Redux Style Model and the Role of SelectParams
The Redux Style Model on Tensor.art is designed to produce artwork with a bold, minimalist yet sharp aesthetic. To manage the intensity of the Style Model's application and ensure the output aligns with your creative vision, the SelectParams node lets you adjust parameters dynamically via the ConditioningAverage node.

3. Steps to Use TA Nodes with SelectParams

Step 1: Create a workflow with the Redux Style Model.
Open the TA Nodes interface on Tensor.art. Add the Load Style Model node and select the model flux1-redux-dev.safetensors. Connect the Load Style Model node to the Apply Style Model node.

Step 2: Add a prompt.
Add the CLIP Text Encode (Prompt) node and input your creative idea. Example: "A cyberpunk cityscape at sunset with neon lights." Connect the output of CLIP Text Encode to the Apply Style Model node.

Step 3: Add the SelectParams node.
Add the SelectParams node from the node list and configure the settings. Creativity Levels: choose between Low, Medium, or High, and set corresponding values for each level (e.g., Low: 0.1, Medium: 0.5, High: 0.8). Connect the SelectParams node to the ConditioningAverage node.

Step 4: Integrate and adjust.
Connect the ConditioningAverage node to the output of the Apply Style Model node. In the ConditioningAverage node, fine-tune additional parameters like Conditioning Strength to blend the values from SelectParams effectively.

Step 5: Preview and finalize.
Click Preview AI Tool to inspect the output. If needed, go back and adjust the values in SelectParams. Once satisfied, click Go to generate the final artwork.

4. Benefits of the SelectParams Node
Flexible adjustments: SelectParams lets you increase or decrease the intensity of the Apply Style Model, ensuring the final image matches your creative intent.
Seamless integration with ConditioningAverage: it works directly with the ConditioningAverage node, letting you control the Style Model's application intensity based on predefined levels (Low, Medium, High).
Optimized workflow: quickly experiment with different settings without manually tweaking small parameters.
High precision: the ability to fine-tune specific levels helps you achieve the desired result without excessive trial and error.
Time-saving: predefined Low, Medium, and High settings make the adjustment process straightforward and efficient.

5. Tips for Using the SelectParams Node
Start with Medium: this level is balanced and ideal for initial experimentation.
Go High for bold results: increase to High when aiming for detailed or striking artistic effects.
Use Low for subtlety: lower the intensity when you want a natural, minimalist output.

6. Conclusion
The SelectParams node not only lets you adjust the application intensity of the Redux Style Model but also streamlines your creative process. It's an ideal tool for ensuring that every piece of artwork reflects your vision and style. Start experimenting today on Tensor.art! 🎨
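Under the hood, a conditioning average is a linear interpolation between two conditioning tensors, with the SelectParams value acting as the blend weight. A minimal sketch of that math, using plain lists as stand-ins for real conditioning tensors (the Low/Medium/High values follow the example in Step 3):

```python
# Sketch of the ConditioningAverage idea: linearly interpolate between
# the styled conditioning and the plain prompt conditioning, with the
# strength chosen in SelectParams as the blend weight.

SELECT_PARAMS = {"Low": 0.1, "Medium": 0.5, "High": 0.8}

def conditioning_average(cond_to, cond_from, strength):
    """Blend element-wise: strength * cond_to + (1 - strength) * cond_from."""
    return [strength * a + (1.0 - strength) * b
            for a, b in zip(cond_to, cond_from)]

styled_cond = [1.0, 2.0, 3.0]   # stand-in for Apply Style Model output
prompt_cond = [0.0, 0.0, 1.0]   # stand-in for plain CLIP conditioning

blended = conditioning_average(styled_cond, prompt_cond,
                               SELECT_PARAMS["Medium"])
```

At "Low" the output stays close to the plain prompt conditioning; at "High" the Redux style dominates, which matches the tips above.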
Understanding "Ai Tools" and How They Work on Tensor Art Platform


Understanding AI Tools and How They Work on the Tensor Art Platform

In recent years, Artificial Intelligence (AI) has revolutionized the way artists and creators produce visual content. One of the platforms making waves in this space is Tensor Art, a hub for AI-generated art enthusiasts and professionals. But how do AI tools work on such a platform, and what makes it special? Let's break it down.

What Are AI Tools?

AI tools are software or programs powered by machine learning algorithms that analyze and learn from large datasets. In the context of art, these tools are trained on millions of images, patterns, and artistic techniques. This enables them to mimic styles, create unique visuals, and assist artists in enhancing or generating content with ease.

How AI Works on Tensor Art

The Tensor Art platform integrates AI tools to provide users with a seamless creative experience. Here's a simple overview of how it functions:

Input Creation: Users provide an initial input, often in the form of text prompts, sketches, or existing images. For example, you might type, "A futuristic city at sunset with glowing skyscrapers."
AI Processing: The platform's AI engine processes the input using advanced algorithms. It deciphers the elements of your prompt, breaks down styles, and matches them with patterns in its database.
Image Generation: Based on the input, the AI generates an image. On Tensor Art, users can choose between different artistic styles, such as impressionism, photorealism, or surrealism.
Customization: Tensor Art allows users to refine the generated image by adjusting parameters like color tones, composition, or level of detail. This ensures that creators retain control over their work.
Exporting and Sharing: Once satisfied, users can download their art or share it directly through the platform's community. Tensor Art also supports high-resolution exports for professional use.

Why Use Tensor Art?

Tensor Art is designed with both amateurs and professionals in mind. Its user-friendly interface, combined with powerful AI capabilities, makes it ideal for:
Experimenting with new art styles.
Creating quick drafts or concepts.
Generating high-quality visuals for personal or commercial projects.

Final Thoughts

AI tools on platforms like Tensor Art are transforming how we approach creativity. By combining human imagination with machine precision, they open up endless possibilities for artists, designers, and hobbyists alike. Whether you're looking to explore new ideas or speed up your workflow, Tensor Art is a powerful ally in the world of AI-generated art.
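The five stages above can be sketched as a simple pipeline. This is purely illustrative; every function name below is a placeholder, not Tensor Art's real code:

```python
# Illustrative pipeline mirroring the five stages described above.
# All functions are invented placeholders, not Tensor Art's real API.

def process_input(prompt: str) -> dict:
    """Input Creation + AI Processing: split the prompt into elements."""
    return {"elements": [w.strip() for w in prompt.split(",")]}

def generate_image(parsed: dict, style: str) -> dict:
    """Image Generation: pretend to render in a chosen style."""
    return {"style": style, "elements": parsed["elements"]}

def customize(image: dict, **adjustments) -> dict:
    """Customization: refine parameters like color tone or detail."""
    image.update(adjustments)
    return image

def export(image: dict) -> dict:
    """Exporting and Sharing: mark the piece ready for download."""
    image["exported"] = True
    return image

art = export(customize(
    generate_image(
        process_input("futuristic city, sunset, glowing skyscrapers"),
        style="photorealism"),
    detail="high"))
```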
Christmas Walkthrough | AI Tool - small tips and tricks


Hi guys, it's me, Manuela. This is my first article, so if there are mistakes in my post, feel free to correct them d^o^b

Three small tips for beginners creating an AI Tool:

1. You can rename any node if you feel its name is unclear or could confuse new users.
2. You can edit the prompt directly this way, instead of going back to the ComfyUI workflow environment.
3. Instead of using the images created in the ComfyUI environment/workspace, you can upload your own unique cover image to make your AI Tool look better.

Hopefully this helps you somehow. Merry Xmas UwU
AI Tool & Radio Button. The beast is not as scary as it is portrayed.


Let me start by saying that to create an "AI Tool" you first need to make a working workflow. It is not for nothing that the first task of the second week of the "Christmas Walkthrough" event is to create your own workflow. To start creating one, just click here, as shown in the picture.

For a better understanding of working with workflows, create an empty workflow, as shown in the picture. Now, you probably got scared: a strange black grid and an incomprehensible interface. Everything is fine; it is all quite simple here. Everything consists of nodes connected to each other by connectors, similar to wires. You can watch the Tensor.art team's video on YouTube, which introduces the main nodes.

The method of adding nodes shown in the video has a drawback. The list of available nodes is very large and opens at the end, while the most frequently used nodes are at the beginning; scrolling to the beginning takes 2-3 minutes. Therefore, I advise using the search to add nodes. To open the node search, double-click the left mouse button on an empty space.

So which nodes exactly need to be added, and what are they called? Let's try a method we all know from school: copying someone's ready-made workflow. For copying, I suggest my workflow "Introvert Christmas & Phlegmatic New Year #Christmas Walkthrough". Try to copy everything as in the picture below. I made it according to the Tensor.art team's video guide on YouTube mentioned above, adding a couple of other nodes of my own. Add all the necessary nodes using the node search.

Now fill in all the nodes as in the "Introvert Christmas & Phlegmatic New Year #Christmas Walkthrough" workflow, or fill them with your own parameters. Then repeat the node connections exactly as in that workflow: hold down the left mouse button on the desired "light", then drag the wire to the other "light", as in the picture.

To complete the "AI Tool" task containing a Radio Button: in addition to the two "CLIP Text Encode (prompt)" nodes, I added one "TA Node - PromptText" node. Then I turned the node with the positive prompt, "CLIP Text Encode (prompt)", into an input, as in the picture. As a result, I got this. I checked that the workflow works with the "Run" button. After that, I added the Radio Button, as in the picture.

The buttons are added; now you can press the Publish button. Next, select publishing as an "AI Tool" and fill in all the sections. After pressing Publish again, the AI Tool is ready. It's not difficult, but was it scary at first? Congratulations!
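Conceptually, a Radio Button reduces to mapping a fixed set of labels to prompt snippets that feed the PromptText input. A hypothetical sketch of that mapping (the labels and snippets below are invented for illustration, not taken from the author's workflow):

```python
# Hypothetical sketch of what a Radio Button in an AI Tool does:
# each label the user can pick maps to a fixed prompt snippet that is
# appended to the positive prompt. Labels here are invented examples.

RADIO_OPTIONS = {
    "Cozy": "warm lights, fireplace, knitted sweater",
    "Snowy": "falling snow, frosted windows, cold breath",
    "Festive": "christmas tree, ornaments, gift boxes",
}

def build_positive_prompt(base_prompt: str, choice: str) -> str:
    """Combine the author's base prompt with the user's radio choice."""
    if choice not in RADIO_OPTIONS:
        raise ValueError(f"Unknown option: {choice}")
    return f"{base_prompt}, {RADIO_OPTIONS[choice]}"

prompt = build_positive_prompt("introvert christmas, 1girl, reading", "Snowy")
```

The user only ever sees the three labels; the snippet text stays hidden inside the workflow, which is the whole appeal of the Radio Button.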
⚙️Beginner's guide to creating "AI tool": workflow basics and practice⚙️


 IntroductionHello everyone. In this article, we will explain the basic mechanism for creating AI tools with "Tensor Art". We will introduce the particularly important meanings of "workflow" and "node", how to set them up, and procedures. What is an AI tool?AI Tools is a node-based tool for visually designing AI image generation. It consists of a processing flow (workflow) that combines nodes (like pieces of a puzzle) to generate an image.Main features of workflowIntuitive operability: simply place and connect nodes with drag and drop.Flexible configuration: Fully customize models (checkpoints), LoRA, prompts, and more.Real-time generation: You can start image generation immediately after setting.It is important to understand first! What is a node?A node is a "small unit responsible for one process" in image generation. For example, there are nodes such as "Load Checkpoint" that loads an AI model and "CLIP Text Encode" that analyzes prompts. Basic configuration of a node:Input: Materials that start processing (e.g. prompt or model).Output: Passes the results of processing to the next node.In an analogy, nodes are like the "parts" of a pipeline. Connecting these together creates the overall flow. What is a workflow?A workflow is a series of image generation flows designed by connecting multiple nodes.For example, create the following flow:Load an AI model (Load Checkpoint)Analyze the prompt (Prompt Encode)Generate an image (Sampler)Save the image (Save Node)Visually constructing these flows enables image generation in Tensor Art.Image Generation Workflow in Tensor Art: Basic Configuration and StepsBelow, we will explain the basic workflow and the role of each node in detail. 
Overview of the Basic Workflow The basic configuration for image generation in Tensor Art is as follows:Load Checkpoint (AI model): Select the base generative model.→ Node name: Load CheckpointEncode prompt (generation instruction): Specify the direction of image generation.→ Node name: CLIP Text EncodeApply LoRA model (optional): Add style and features.→ Node name: Load LoRAImage generation process: Generate image based on prompt and model.→ Node name: KSamplerVAE decode: Adjust generated image to make it human-readable.→ Node name: VAE DecodeSave image: Save generated image to file.→ Node name: Save ImageDetailed explanation and setting method for each node         🌸⬇️Let's use a workflow using FLUX as an example. ⬇️🌸1. Load CheckpointRole: Loads the model that is the basis of AI image generation.Settings: ckpt_name: Specify the model name you want to use.Example: FLUX-1-dev-fp8 (recommended checkpoint for TensorArt).2. Load LoRA (Add style)Role: Apply LoRA model that adds specific features and style.Settings: lora_name: Enter the name of the LoRA model you want to use.strength_model and strength_clip: Model influence (1.0 recommended).3. CLIP Text EncodeRole: Converts the content of the image to be generated (prompt) into a format that AI can understand.Settings:Example prompt: "futuristic cityscape, neon lights, digital painting".4. KSampler (Central process of image generation)Role: Generates the actual image based on the prompt and model.Settings:steps: Accuracy of generation (approximately 20-30).cfg: Applicability of the prompt (usually 1.0).sampler_name: Sampling method (e.g. Euler).5. VAE DecodeRole: Converts the generated latent image into the final image data.Note: Select a VAE that corresponds to the checkpoint.6. Save ImageRole: Saves the generated image as a file.Settings:filename_prefix: Specifies the beginning of the image name (e.g. "TensorArt_").Example of actual workflow: Node connectionBelow is an example of an actual node connection. 
Image generation is possible by reproducing this flow in the Tensor Art node editor:
1. Load Checkpoint → load the AI model.
2. Add Load LoRA if necessary and apply styles.
3. Enter prompts into CLIP Text Encode to set the generation content.
4. Use FluxGuidance (guidance scale) to fine-tune the influence of the prompt.
5. Generate an image with KSampler.
6. Decode the image with VAE Decode.
7. Finally, save the image with Save Image.

Frequently Asked Questions

Q1. What is the difference between a Checkpoint and a LoRA?
Checkpoint: the model that is the basis of AI image generation. It determines the overall style.
LoRA: a module that adds specific extra styles and fine-grained features.

Q2. Is a VAE required?
It is basically used together with the Checkpoint. Without a VAE, the colors and resolution of the image may not be rendered properly.

Summary

Once you understand how nodes and workflows work, you can create images exactly the way you want them. Use this guide to get started with a simple setup! 😆👍

⬆️ This is an image generated using the workflow introduced in this article 😊

Next steps
- Try your own prompts and settings.
- Combine multiple LoRAs to pursue originality.
- Experiment with high resolutions and special styles.
- By publishing the completed workflow, you can let many people use it as your AI tool 👍

Enjoy your creative adventure with Tensor Art!

Side note: tips for beginners
- Understand the basics of nodes: first, understand what each node does.
- Start with a simple workflow: try a workflow with the minimum number of nodes to understand how it works.
- Repeat experiments: adjust each node's parameters and see how the generated image changes.
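As a rough illustration, the flow above can be written down in the shape of ComfyUI's API-format workflow JSON, where each node has a class type and inputs, and links are written as [source_node_id, output_index]. This is only a hedged sketch: the node IDs and checkpoint filename are illustrative assumptions, and required inputs such as the latent and negative conditioning are omitted for brevity.

```python
# Simplified sketch of the basic workflow as a Python dict in the shape
# of ComfyUI's API-format JSON. Node IDs and the model filename are
# illustrative assumptions, not real Tensor Art values.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "FLUX-1-dev-fp8.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "futuristic cityscape, neon lights", "clip": ["1", 1]}},
    "3": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "steps": 25, "cfg": 1.0, "sampler_name": "euler"}},
    "4": {"class_type": "VAEDecode",
          "inputs": {"samples": ["3", 0], "vae": ["1", 2]}},
    "5": {"class_type": "SaveImage",
          "inputs": {"images": ["4", 0], "filename_prefix": "TensorArt_"}},
}

def dangling_links(wf):
    """Return (node, input) pairs whose link points at a missing node ID."""
    bad = []
    for node_id, node in wf.items():
        for name, value in node["inputs"].items():
            if isinstance(value, list) and value[0] not in wf:
                bad.append((node_id, name))
    return bad

print(dangling_links(workflow))  # → [] : every connection resolves
```

The check at the end mirrors what the node editor enforces visually: every input link must terminate at an existing node.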
🔰 “AI Tool” Export/import user settings

Hello 🙂 Today, I will explain the "Export" and "Import" settings used when publishing an AI tool, which have been mentioned several times on the site.

Roughly speaking, publishing an AI tool goes like this:
1) Create a workflow based on the concept of the AI tool you want to make
2) Configure, adjust, and test the workflow
3) Test it as an AI tool, then publish

After you are satisfied with step 2), when you turn the workflow into a tool and test it in step 3), you may notice defects or finishing discrepancies that were not visible in workflow mode. In that case, you adjust the underlying workflow again, then use "Update workflow" on the AI tool's "Edit" screen to reflect the changes, and publish again.

However, when you perform this "workflow update", the information displayed on the AI tool's operation screen changes: the "User-configurable Settings" are reset! Everything you went to the trouble of configuring and sorting starts again from scratch. I can't help but sigh at having to start over 😮

The answer to this reset is the "Export/Import settings" function. With it, even if you need to update your workflow repeatedly, you can easily restore the "User-configurable Settings" you set up when you first created the tool, avoiding unnecessary rework.

Here's how to do it:

● Export (save settings before updating the workflow)
1) Open the "Edit" screen of the AI tool and scroll down to "User-configurable Settings". To its right are three buttons: "Import", "Export", and "Empty". Click "Export".
2) On a PC, a dialog box will appear asking where to save the file. Save it in an easy-to-find location so you don't lose it later, and give it a recognizable file name. (The procedure is similar on smartphones, tablets, etc.) That completes the "Export".

◆ Import (load the saved settings after updating the workflow)
1) Go to "User-configurable Settings" in the same way as when exporting and click "Import".
2) A dialog box will ask which file to import; select the file you saved (exported) earlier and click "OK". (On smartphones and tablets, follow the equivalent steps.) This restores the "settings" that were reset by "Update workflow".

That covers how to "Export" and "Import" the "User-configurable Settings", but there is one thing to keep in mind: if you change a node's type, or add or delete a node when updating the workflow, "Export" and "Import" will not work. These functions save and restore the initial settings of the tool as it was first configured.

There are many similar articles on the site, but I hope this one helps those trying to create an AI tool for the first time for the "Christmas Walkthrough" event 🙂
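Since import fails when nodes change, it can help to compare the old and new exported files before re-importing. The real schema of the exported settings JSON is not documented here, so the snippet below is a hypothetical sketch: the "label" field and the example entries are assumptions used purely for illustration.

```python
import json

# Hypothetical sketch: compare two exported "User-configurable Settings"
# files to spot entries that would not survive a workflow update.
# The "label" field is an assumed identifier, not a documented schema.
def labels(settings):
    return {entry["label"] for entry in settings}

old = json.loads('[{"node": "CLIPTextEncode", "label": "Prompt"},'
                 ' {"node": "KSampler", "label": "Steps"}]')
new = json.loads('[{"node": "CLIPTextEncode", "label": "Prompt"}]')

missing = labels(old) - labels(new)
print(sorted(missing))  # → ['Steps'] : this setting no longer matches
```

In practice you would load the two files you exported before and after the update instead of the inline examples.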
Christmas Walkthrough | Add Radio Buttons to an Old AI Tool

What are Radio Buttons?
They let you use a named option in your prompt to pull a line of prompt text from a predefined list. In TensorArt we use them as a substitute for personalized wildcards, so Radio Buttons are pseudo-wildcards. Check this article to learn how to manipulate and personalize them. Radio Buttons require a <CLIP Text Encode> node to be stored within.

What do we need?
Any working AI Tool. In my exploration so far, only certain <CLIP Text Encode> nodes allow you to use them as Radio Button containers. For this example I'll use my AI tool: 📸 Shutterbug | SD3.5L Turbo.

1. Duplicate/Download your AI Tool workflow (to have a backup).
2. Add a <CLIP Text Encode> node.
3. Add a <Conditioning Combine> node.
4. Assemble the nodes as the illustration shows; be careful with the combine method. Use concat if you're not experienced at combining CLIP conditionings: it instructs your prompting to ADD the Radio Button's prompt text.
5. 💾 Save your AI Tool workflow.
6. Go to Edit mode in your AI Tool.
7. Export your current User-configurable Settings (JSON).
8. ↺ Update your AI Tool.
9. Import your old User-configurable Settings (JSON).
10. Look for the new <CLIP Text Encode> node and load it.
11. Hover over the new <CLIP Text Encode> tab and select Edit.
12. Configure your Radio Buttons.
13. Publish your AI Tool.

Done! Enjoy the Radio Button feature in your AI Tools. In my case, my new AI Tool looks like this: 📹 Shutterbug | SVD & SD3.5L Turbo.
Note: I also included SVD video to meet the requirements of the Christmas Walkthrough event.
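As a rough mental model of why concat "adds" rather than replaces: a CLIP text conditioning is roughly a tokens-by-channels array, and concat joins two of them along the token axis. The sketch below is conceptual numpy, not ComfyUI code; the 77 x 768 shape is typical of CLIP-L and is an assumption here.

```python
import numpy as np

# Conceptual sketch: a CLIP text conditioning as a (tokens x channels)
# array, e.g. 77 x 768 for CLIP-L (assumed shapes, for illustration only).
main_prompt = np.zeros((77, 768))   # stand-in for your regular prompt encoding
radio_button = np.ones((77, 768))   # stand-in for the Radio Button's extra prompt

# "concat" joins the two along the token axis, so the Radio Button text
# extends the main prompt instead of competing with it.
combined = np.concatenate([main_prompt, radio_button], axis=0)
print(combined.shape)  # → (154, 768)
```

This is why the article recommends concat for beginners: the extra conditioning is appended, not averaged against your main prompt.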

TensorArt New Feature Tutorial: Classic Workbench Text-to-Video and Image-to-Video

Hello everyone! TensorArt has recently launched a new feature in the Classic Workbench, supporting Text-to-Video and Image-to-Video. Today, I'll walk you through how to use these exciting new features to create your own video content!

Step 1: Open the Classic Workbench
First, open the TensorArt Classic Workbench and go to the main interface. Then locate the Text to Video module.

Step 2: Select Model and Settings
On the Text to Video page, you'll see two important options: Models and Settings. Currently, there are three models available to choose from.
- FPS (Frames Per Second): how many frames are displayed per second. The higher the FPS, the smoother the video looks. For example, setting the FPS to 24 is typically suitable for most video productions.
- Duration: how long your video plays from start to finish. You can set it in seconds, minutes, or longer, depending on your needs.
Once you've adjusted these settings, input your Prompts (the text description of what you want to generate) and click Generate. Voila! Your video will be created based on the prompts you provided! ✨

Step 3: Image-to-Video
Next, let's look at the Image to Video feature. Here, you'll see two models available. First, click to upload the image you want to use. Then set the related parameters, such as FPS and Duration. Finally, input your Prompts (describing how you want the image to be turned into a video) and click Generate. It's that simple! By adjusting the settings, you can create creative image-to-video works.

Summary
With just a few simple steps, you can turn text into lively video or transform static images into dynamic video content. Why not give it a try? If you have any questions or want to share your creations, feel free to leave a comment below! We look forward to seeing your creative works!
Come try out the Text-to-Video and Image-to-Video features on TensorArt today! 
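The two settings above interact simply: total frames = FPS × duration. A quick back-of-the-envelope sketch (the 4-second duration is just an example value, not a site default):

```python
# How FPS and Duration combine: the model must generate fps * seconds frames.
def total_frames(fps: int, seconds: float) -> int:
    return int(fps * seconds)

print(total_frames(24, 4))  # → 96 frames for a 4-second clip at 24 FPS
```

Doubling either setting doubles the work the generator has to do, which is worth keeping in mind for credit usage.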
[Updated 12/9] Official Event: Christmas Walkthrough, November 29 – December 26 (translated from Japanese)

This is a translation of the guide to the Christmas event running 11/29–12/26.

<For those short on time or unsure what to do>
Post a video and an image with the "3DVideo" and "RealAnime" AI tools pinned on the home page by 8:59 AM JST on December 13 to get 2 days of Pro. This is the hottest part of the event, so do at least this much.

Original articles:
https://tensor.art/blackboard/ChristmasWalkthrough
https://docs.google.com/document/d/10GsQgVS-myqSHJGDLVQT3Su9o7gjxvCFl3CehL8ICwk/edit?tab=t.0

Hello, traveler! 🎅🎄 Welcome to Tensor Impact! You are about to set out on a Christmas adventure. Complete the exploration tasks one after another and claim wonderful rewards! ✨

⏰ Exploration period: November 29 UTC (09:00 JST) to December 26 UTC (09:00 JST). Complete every "Christmas Walkthrough" task within these 28 days and become a successful explorer! 🎁 Finishers receive a $49.9 cash reward and a New Year promotion (buy one, get one free!). In addition, each task can earn rewards such as $20-equivalent prizes, Pro membership days, or credits.

📅 Exploration task calendar
There is one task per day; clear all of a week's tasks within that week to get the weekly badge! If you miss a task, you can make it up with a "Magic Badge", so don't worry. Each task shows its difficulty (e.g. 🌟 = easy, 🌟🌟🌟 = hard), and guides are provided for the hard tasks. Don't forget to add the tag "#Christmas Walkthrough" to every post!

🎨 Week 1: November 29 – December 5
Complete all tasks during the period to receive 200 credits (bonus included)!
11/29 Post on the daily theme — 20 credits
11/30–12/5 Post following the theme calendar — 20 credits per day

Week 2: December 6 – December 12 (note: this information has been updated!)
Complete all tasks during the period to receive 10 days of Pro membership!
12/6 Publish a workflow → 1 day of Pro
12/7 Publish a video-generating AI tool → 1 day of Pro
12/8 Post a video with the "3DVideo" AI tool pinned on the home page → 1 day of Pro
12/9 Post with the "RealAnime" AI tool pinned on the home page → 1 day of Pro
12/10 Publish an article related to AI tools → 1 day of Pro
12/11 Publish an AI tool containing radio buttons → 1 day of Pro
12/12 Open a subscription (counts if done before 12/13) → 1 day of Pro

Official task list (original English wording):
12.6 Task: Publish a workflow
12.7 Task: Publish a video post
12.8 Task: Publish an AI Tool that generates video
12.9 Task: Use the "RealAnime" AI Tool pinned on the homepage to publish a post
12.10 Task: Publish an article related to AI Tools, include the text "AI Tool" in the title
12.11 Task: Publish an AI Tool containing a "Radio Button"
12.12 Task: Create a buffet plan (considered as completed as long as it is created before 12.13)

Week 3: December 13 – December 19
Complete all tasks during the period to win a $20 cash reward!
12/13 Publish a Christmas-themed model → $2
12/14 Publish a model-related article → $2
12/15 Publish a model that fits the "Game Design", "Visual Design", or "Space Design" channels → $2
12/16 Publish a model that has joined TenStarFund → $2
12/17 Have a model uploaded after November 29 with more than 20 posts → $2
12/18 Publish a model trained with Online Training using Illustrious as the base model → $2
12/19 Subscription activity (buy, or be bought from) → $2

Week 4: December 20 – December 26
This week includes tasks that award a special Honor Badge!
12/20 Have a post published during the event get remixed
12/21 Share TensorArt-related content on social media and answer the survey
12/22 Like, comment on, or star a post tagged #Christmas Walkthrough
12/23 Redeem a badge for 30 credits (My Page → Credits)
12/24 Have an AI tool published during the event enter the top 100 of the "Dark Horse AI Tool" ranking
12/25 Have a model published during the event enter the top 100 of the "Dark Horse Model" ranking
12/26 Enter the top 100 of the "Creator" ranking

Enjoy the adventure! Santa is cheering for you! 🎁✨
Christmas Walkthrough, 11/29–12/26 (translated from Japanese)

This is a translation of the Christmas event announcement (revised 12/7).

Hello, traveler! Welcome to Tensor Impact. You are about to embark on a series of exploration tasks and win a variety of luxurious rewards.

⏰ Exploration period: November 29 to December 26 (UTC). Complete all of the Christmas Walkthrough tasks within 28 days to become a successful explorer! Win $49.9 in cash and a buy-one-get-one-free New Year promotion! Completing each exploration task also earns the corresponding reward ($20, Pro, or credits).

📅 Exploration task calendar
There is one task per day, and completing all of a week's tasks within that week earns a weekly badge. If you fail to complete a task, don't worry: check the badge redemption section and redeem a Magic Badge, which automatically marks the unfinished task as completed. The number of 🌟 after a task indicates its difficulty, and an "Exploration Task Guide" is provided for tasks rated three stars or more.

All participating models, AI tools, and posts must carry the tag "#Christmas Walkthrough" when published. Articles and workflows do not need the "ChristmasWalkthrough" tag.

Notice: tasks do not all have to be completed on their exact day; you can do them in advance or by the end of that week. However, a task not completed by the end of its week counts as missed.

Our Dark Horse leaderboard: [TensorArt] Christmas Walkthrough: Dark Horse Leaderboard
For the 12/21 task, answer the survey (Google Form) after posting on social media.

Daily themes / 🔱 Badge introduction

Badge types:
- Daily badges: 26 in total, awarded for completing each daily exploration task (valid until January 10).
- Weekly badges: 4 in total, awarded for completing all of a week's tasks (valid until January 10).
- Ultimate badge: 1 in total, awarded for completing every exploration task (valid for 90 days).
- 12/23 task badge: 1 in total; requires credits to redeem for the December 23 task (valid until January 10).
- Magic badges: 4 in total; can be redeemed to automatically mark unfinished tasks as completed, but grant no rewards (valid until January 10).
- Honor badge: 1 in total, automatically awarded for completing the December 26 task; redeemable, but grants no rewards (valid until January 10).

Issuing rules:
- All event times are calculated in UTC; be sure to complete tasks within UTC time.
- Badges for tasks completed in the previous week are issued every Friday.
- Weekly tasks (Friday through the following Thursday) must be completed within the same week to count. For example, the December 6 task must be completed between December 1 and December 7.
- Completing all of a week's tasks earns only the weekly badge, not the daily badges.
- Task, Magic, and Honor badges are granted automatically on redemption.
- The task badge can only be obtained by redemption and cannot be substituted with a Magic Badge; the same applies to the Honor Badge.

Redemption rules:
- The badge redemption period is November 29 to December 26.
- The December 26 task requires 10,000 credits to redeem the "Honor Badge" that marks it as completed.
- There are five Magic Badges; four of them cost 5, 50, 500, and 1,000 credits respectively to redeem, and they grant no completion rewards.
- Magic Badges cannot redeem the December 23 and December 26 badges.
- Redeemed badges cannot be returned.

📜 Event rules
- Users with the system's default avatar and nickname will not receive rewards.
- Cash rewards are deposited into the GPU Fund at the end of the event and can be withdrawn at any time.
- Event models must be original; reprints and merges do not count.
- Event content must comply with community rules. NSFW, child pornography, celebrity images, violence, and low-quality content are not eligible.
- Cheating results in disqualification. Tensor.Art reserves the final right of interpretation for the event. If you have questions, open a ticket on Discord and contact the staff.

Use the tag "#Christmas Walkthrough"
(Written large because it's easy to forget.) The "#" marks a tag; just type "Christmas Walkthrough" in the tag field. (Articles and workflows do not need the "ChristmasWalkthrough" tag.)

Notes for Japanese users
- Tasks most likely need to follow UTC time: 9:00 AM JST is 0:00 UTC.
- Set your username and avatar.
- Child-related content is judged more strictly than many Japanese users expect; avoid images of children and chibi characters.

Answers from the staff
Q: Can the same AI tool created in week 2 of the "Christmas Walkthrough" count for all of that week's tasks (if it meets all the requirements), or do I need to create separate tools? Also, does updating an old AI tool to meet the new requirements count, rather than publishing a new one?
A: An AI tool created in week 2 can count; every AI tool created after 11/29 counts. However, merely updating an old AI tool does not meet the requirements; it must be a new AI tool.

Q: About Magic Badges.
A: The Magic Badge is a compensation mechanism. If you miss or cannot complete a task on a given day, you can purchase a Magic Badge to redeem the missing badge, which makes it easier to win the final reward. For example, suppose the December 17 task ("upload a model with more than 20 user posts since November 29") was out of reach and you are one badge short of the final $49.9 reward: redeeming a Magic Badge automatically fills the gap so you can claim the grand prize.

Forgot to post?
Even if you forget to post the daily theme on its day, don't worry: publish the 7 daily-theme posts within the week and you will still earn the badges and rewards. If you missed a few days, catch up!

Exploration Task Guide
This guide gives detailed instructions for the high-difficulty tasks rated three stars or more.

12/7 task: Publish an AI tool that generates video.
How to complete: we recommend using one of the following video nodes: CogVideo, Mochi, or Pyramid-Flow. Create a video workflow (text-to-video or image-to-video) and publish it as an AI tool.

12/8 task: Use the "3DVideo" AI tool pinned on the homepage to publish a video post.
How to complete: use the designated AI tool 👉 3DVideo 👈, generate, and post.

12/9 task: Use the "RealAnime" AI tool pinned on the homepage to publish a post.
How to complete: use the designated AI tool 👉 RealAnime 👈, generate an image, and post it.

12/11 task: Publish an AI tool containing a "Radio Button".
How to complete: when publishing the AI tool, open a prompt node configured for users (e.g. text) and select "Radio Button" under "Input type".

12/16 task: Publish a model that has successfully joined TenStarFund.
How to complete: 💸 earn income by running your model through the TenStar Fund program. For detailed instructions, see: [link]

12/18 task: Publish a model trained with Online Training using Illustrious as the base model.
How to complete: follow the specific instructions provided for Online Training with the base model Illustrious.

12/26 task: Enter the top 100 of the "Creator" leaderboard.
How to complete: click the link to view the leaderboard. [link]
RealAnime Event: Toon Drifter Faction Showdown! Until 11/28 (translated from Japanese)

"RealAnime", a TensorArt-exclusive model that lets anime characters break the fourth wall, has arrived! 🎉 Using an easy AI tool, you can generate anime characters in real-world scenes. Just enter a prompt and watch the magic happen.

Show Drifter
When the alarm rings, it's time to get up and go to work! Anime characters have to work hard to fill their stomachs, too. Use the designated AI tool to design your favorite anime character's work life. 💼✨
- Unlike Bruce Wayne, the Joker has to buy groceries and cook for himself after work. 🤡
- Rem needs to learn how to make coffee and desserts at a maid café.
- Because his salary was low, Thanos decided to snap his fingers and blow up the company. 💥

Join the faction showdown!
Choose a faction and raise its reputation by posting with the designated tag!
Faction tags (probably required; use one of them):
#Driftermon
#DrifterAvengers
#DrifterDoom

Reputation calculation rule:
Reputation = (number of Pro users who posted × 0.4 + number of Standard users who posted × 0.2 + number of people who liked × 0.1 + number of people who remixed × 0.3) × 100
Each faction's reputation is updated daily, so remember to post every day and rally support for your team. 🏆
*The official event page has a tag showing team ratings.

Top reputation bonus: all members of the winning faction receive 500 credits and 1 day of Pro. 🎉
Special bonus: high-quality posts also have a chance at mystery rewards! 🎁

Social media post rewards
Earn 100 credits per social media post, up to 500 credits.
Content format: unrestricted! Must include the tags #TensorArt and #RealAnime.
Supported platforms: Instagram, TikTok, Twitter, Facebook, Reddit, YouTube, Pinterest.
Additional rewards:
- 500+ likes: $20
- 500+ retweets: $70
- If you have more than 5,000 followers: $40 for 500+ likes, $140 for 500+ retweets.
Click the form icon, confirm your participation information, and receive your rewards! 📲

Event period: November 18 – November 28

Event rules
- Post themes and content must match the style of the event.
- Each post may include only one event tag.
- Users with the default avatar and nickname are not eligible for rewards.
- NSFW, child or celebrity pornography, and low-quality content do not count as valid participation.
- Cheating will result in disqualification from the event.
- Tensor.Art holds the final right of interpretation for the event.

The correct way to generate (official)
Just four steps to a hot "fourth-wall-breaking" image! Click the AI tool to get started! 🖱️✨
Step 1: On the right side of the page, select a character-name option, or click "Custom" and enter an anime character's name.
Step 2: In the "do something" section below, select a corresponding action option, or click "Custom" and describe the action. A detailed description gives a more accurate result, e.g. "wearing a red dress, drinking wine in a real convertible".
Step 3: Select an "image size". There are nine common sizes to choose from based on your needs.
Step 4: Click the "go" button below and wait patiently for the image to generate. Switch the tabs above to see past results.

Tips
- Click "Translate" to translate your input text into English.
- If you're not satisfied with the result, change the character or scene and try again. 🎨✨

Hamster-style generation method 1: since the input is split into two fields, just write whatever you like in them.
Hamster-style generation method 2: put only a space " " in field ②.
Tip: writing a normal prompt is quicker.
Halloween2024 | Unlocking Creativity: The Power of Prompt Words in Writing

Unlocking Creativity: The Power of Prompt Words in Writing

Writing can sometimes feel tough, especially when you're staring at a blank page. If you're struggling to find inspiration, prompt words can be a helpful tool. These words can spark ideas and make writing easier and more fun. Let's explore how prompt words can boost your creativity and how to use them effectively.

What Are Prompt Words?
Prompt words are specific words or phrases that inspire you to write. They can be anything from a single word to a short phrase that gets your imagination going. For example, words like "adventure," "friendship," or "mystery" can lead to exciting stories or poems.

Why Use Prompt Words?
1. Overcome writer's block: if you're stuck and don't know what to write, a prompt word can give you a direction to start.
2. Spark creativity: one word can trigger a flood of ideas and help you think outside the box.
3. Try new styles: prompt words encourage you to write in genres or styles you might not normally explore.
4. Build a writing habit: using prompt words regularly can help you develop a consistent writing routine.

How to Use Prompt Words
1. Make a list. Start by writing down some prompt words that inspire you. Here are a few examples: adventure, dream, secret, journey, change.
2. Quick writing exercise. Pick a prompt word and set a timer for 10 minutes. Write anything that comes to mind without worrying about making it perfect. This helps you get your ideas flowing.
3. Write a story or scene. Choose a prompt word and write a short story or scene based on it. For example, if your word is "mystery," think about a detective solving a case.
4. Create a poem. Use a prompt word to write a poem, letting the word guide your ideas and feelings. You can write a simple haiku or free verse.
5. Share with friends. Share your prompt words with friends and challenge each other to write something based on the same word. This can lead to fun discussions and new ideas.

Tips for Using Prompt Words
- Write daily: spend a few minutes each day writing with a prompt word. This builds your skills and keeps your creativity flowing.
- Make a prompt jar: write different prompt words on slips of paper and put them in a jar. Whenever you need inspiration, pull one out and start writing.
- Reflect on your work: after you write, take a moment to think about what you created. What did you like? What can you improve?
- Explore different genres: use prompt words to try writing in genres you don't usually write in, like fantasy or poetry. This helps you grow as a writer.

Conclusion
Prompt words are a simple yet powerful way to boost your creativity and make writing enjoyable. They can help you overcome blocks, spark new ideas, and develop a consistent writing habit. So, the next time you feel stuck, remember that a single word can lead to amazing stories. Embrace the power of prompt words and watch your creativity soar!
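For the AI-art crowd, the "prompt jar" idea translates directly into a few lines of code: keep a list of prompt words and draw one at random when you need a spark. The word list below reuses the article's own examples.

```python
import random

# A digital "prompt jar": put prompt words in, draw one out when you
# need inspiration. Words are the article's own examples.
jar = ["adventure", "dream", "secret", "journey", "change", "mystery"]

def draw(seed=None):
    """Pull a random prompt word; pass a seed for a repeatable draw."""
    return random.Random(seed).choice(jar)

print(f"Write for 10 minutes about: {draw()}")
```

The same jar works just as well for image prompts as for writing exercises.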
ComfyUI Core Nodes Loaders #CHRISTMAS WALKTHROUGH

1. Load CLIP Vision
Encodes an image into a description (prompt) that is then converted into conditioning input for the sampler, so new, similar images can be generated from it. Multiple nodes can be used together. Suitable for transferring concepts and abstract ideas; used in combination with CLIP Vision Encode.

2. Load CLIP
The Load CLIP node can be used to load a specific CLIP model. CLIP models are used to encode text prompts that guide the diffusion process.
*Conditional diffusion models are trained with a specific CLIP model; using a different model than the one it was trained with is unlikely to produce good images. The Load Checkpoint node automatically loads the correct CLIP model.

3. unCLIP Checkpoint Loader
The unCLIP Checkpoint Loader node can be used to load a diffusion model made specifically to work with unCLIP. unCLIP diffusion models denoise latents conditioned not only on the provided text prompt but also on provided images. This node also provides the appropriate VAE, CLIP, and CLIP vision models.
*Even though this node can be used to load any diffusion model, not all diffusion models are compatible with unCLIP.

4. Load ControlNet Model
The Load ControlNet Model node can be used to load a ControlNet model; it is used in conjunction with Apply ControlNet.

5. Load LoRA
Loads a LoRA and applies it to the diffusion model and CLIP model to modify their behavior.

6. Load VAE
Loads a VAE model, used to encode images into latent space and decode latents back into images.

7. Load Upscale Model
Loads an upscale model, used to upscale images.

8. Load Checkpoint
Loads a model checkpoint, providing the MODEL, CLIP, and VAE used by the rest of the workflow.

9. Load Style Model
The Load Style Model node can be used to load a Style model. Style models give a diffusion model a visual hint of the style the denoised latent should be in.
*Only T2IAdapter style models are currently supported.

10. Hypernetwork Loader
The Hypernetwork Loader node can be used to load a hypernetwork. Similar to LoRAs, hypernetworks modify the diffusion model, altering the way latents are denoised. Typical use cases include adding to the model the ability to generate in certain styles, or to better generate certain subjects or actions. One can even chain multiple hypernetworks together to further modify the model.
Are score_tags necessary in PDXL/SDXL Pony Models? | Halloween2024

Consensus is that the latest generation of Pony SDXL models no longer require "score_9 score_8 score_7" written in the prompt to "look good".

//----//

It is possible to visualize our actual input to the SD model for CLIP_L (a 1x768 tensor) as a 16x16 grid, each cell with RGB values, since 16 x 16 x 3 = 768. I'll assume CLIP_G in the SDXL model can be ignored; it is assumed that CLIP_G is functionally the same but with 1024 dimensions instead of 768.

So here we have the prompt: "score_9 score_8_up score_8_up".

Then I can do the same but for the prompt: "score_9 score_8_up score_8_up" + X, where X is some random extremely sus prompt I fetch from my gallery. Assume it fills up to the full 77 tokens (I set truncate=True on the tokenizer so it just caps off past the 77-token limit).

Examples: etc. etc.

Granted, the first three tokens in the prompt for the 768 encoding greatly influence the "theme" of the output. But from the above images one can see that the "appearance" of the text encoding can vary a lot. Thus, the "best" way to write a prompt is rarely universal.

Here I'm running some random text I wrote myself to check similarity to our "score prompt" (the top result should be 100%, so I might have a rounding error):

score_6 score_7_up score_8_up : 98.03%
score 8578 : 85.42%
highscore : 82.87%
beautiful : 77.09%
score boobs score : 73.16%
SCORE : 80.1%
score score score : 83.87%
score 1 score 2 score 3 : 87.64%
score : 80.1%
score up score : 88.45%
score 123 score down : 84.62%

So even though the model is trained for "score_6 score_7_up score_8_up", we can be fairly loose in how we phrase it, if we want to phrase it at all. The same principle applies to all LoRA and their activation keywords.

Negatives are special. The text we write in the negatives is split by whitespace, and the chunks are encoded individually.

Link to the notebook if you want to run your own tests:
https://huggingface.co/datasets/codeShare/fusion-t2i-generator-data/blob/main/Google%20Colab%20Jupyter%20Notebooks/fusion_t2i_CLIP_interrogator.ipynb
I use this to search for prompt words using the CLIP_L model.

//---//

These are the most similar items to the Pony model "score prompt" within my text corpus. Items of zero similarity (perpendicular) or negative similarity (vector pointing in the opposite direction) to the encoding are omitted from these results. Note that these are encodings similar to the "score prompt" trigger encoding, not an analysis of what the Pony model considers good quality.

Prompt phrases in my text corpus most similar to "score_9 score_8_up score_8_up" according to CLIP (the peak of the graph above): Community: sfa_polyfic - 68.3 % holding blood ephemeral dream - 68.3 % Excell - 68.3 % supacrikeydave - 68.3 % Score | Matthew Caruso - 67.8 % freckles on face and body HeadpatPOV - 67.8 % Kazuno Sarah/Kunikida Hanamaru - 67.8 % iers-kraken lun - 67.8 % blob whichever blanchett - 67.6 % Gideon Royal - 67.6 % Antok/Lotor/Regris (Voltron) - 67.6 % Pauldron - 66.7 % nsfw blush Raven - 66.7 % Episode: s08e09 Enemies Domestic - 66.7 % John Steinbeck/Tanizaki Junichirou (Bungou Stray Dogs) - 66.7 % populism probiotics airspace shifter - 65.4 % Sole Survivor & X6-88 - 65.4 % Corgi BB-8 (Star Wars) - 65.4 % Quatre Raberba Winner/Undisclosed - 65.2 % resembling a miniature fireworks display with a green haze. Precision Shoot - 65.2 % bracelet grey skin - 65.2 % Reborn/Doctor Shamal (Katekyou Hitman Reborn!)/Original Male Character(s) - 65.2 % James/Madison Li - 65.1 % Feral Mumintrollet | Moomintroll - 65.1 % wafc ccu linkin - 65.1 % Christopher Mills - 65.0 % at Overcast - 65.0 % Kairi & Naminé (Kingdom Hearts) - 65.0 % with magical symbols glowing in the air around her.
The atmosphere is charged with magic Ghost white short kimono - 65.0 % The ice age is coming - 65.0 % Jonathan Reid & Bigby Wolf - 65.0 % blue doe eyes cortical column - 65.0 % Leshawna/Harold Norbert Cheever Doris McGrady V - 65.0 % foxtv matchups panna - 65.0 % Din Djarin & Migs Mayfeld & Grogu | Baby Yoda - 65.0 % Epilogue jumps ahead - 65.0 % nico sensopi - 64.8 % 秦风 - Character - 64.8 % Caradoc Dearborn - 64.8 % caribbean island processing highly detailed by wlop - 64.8 % Tim Drake's Parents - 64.7 % probiotics hardworkpaysoff onstorm allez - 64.7 % Corpul | Coirpre - 64.7 % Cantar de Flor y Espinas (Web Series) - 64.7 % populist dialog biographical - 64.7 % uf!papyrus/reader - 64.7 % Imrah of Legann & Roald II of Conte - 64.6 % d brown legwear - 64.6 % Urey Rockbell - 64.6 % bass_clef - 64.6 % Royal Links AU - 64.6 % sunlight glinting off metal ghost town - 64.6 % Cross Marian/Undisclosed - 64.6 % ccu monoxide thcentury - 64.5 % Dimitri Alexandre Blaiddyd & Summoner | Eclat | Kiran - 64.5 %
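The two mechanics used throughout this article can be sketched in a few lines of numpy: the 16x16x3 view works because 16 × 16 × 3 = 768, and the similarity percentages are cosine similarities between encodings. The vectors below are random stand-ins for real CLIP_L text encodings (real ones would come from the CLIP text model).

```python
import numpy as np

# Random stand-ins for CLIP_L text encodings (1x768 tensors).
rng = np.random.default_rng(0)
enc_a = rng.normal(size=768)  # e.g. the "score prompt" encoding
enc_b = rng.normal(size=768)  # e.g. some other prompt's encoding

# 1) 16 * 16 * 3 = 768, so an encoding can be viewed as a 16x16 RGB grid.
grid = enc_a.reshape(16, 16, 3)

# 2) The "%" figures in the article are cosine similarities.
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(grid.shape)                       # → (16, 16, 3)
print(round(cosine(enc_a, enc_a), 2))   # → 1.0 (a prompt vs. itself)
print(cosine(enc_a, enc_b))             # near 0 for unrelated random vectors
```

Swapping the random vectors for real encodings of "score_9 score_8_up score_8_up" and your own phrases reproduces the kind of similarity table shown above.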
My Personal Guide to Choosing the Right AI Base Model for Generating Halloween2024 Images

Simple comparison of the models (based on personal opinion)

1. SDXL: Best for producing high-quality, realistic images, and works well with various styles. It excels at detail enhancement, especially for faces, and offers many good LoRA variations. It generates large, sharp images that are perfect for detailed projects. However, some images may appear distinctly "AI-generated," which might not suit everyone's preference.

2. Pony Diffusion: Known for its artistic flexibility; it doesn't copy specific artist styles but gives beautiful, customizable results. It is also fine-tuning capable, producing stunning SFW and NSFW visuals with simple prompts. Users can describe characters specifically, making it versatile for various creative needs.

3. SD3: Focuses on generating realistic and detailed images, offering more control and customization than earlier versions. Despite the many controversies surrounding it, SD3 is also widely used in ComfyUI.

4. Flux: Ideal for fixing image issues like anatomy or structure problems. It enhances image quality by adding fidelity and detail, particularly in text and small image elements, and provides a clearer concept and better prompt adherence with more natural depiction.

5. Kolors: Great for styling and for colorful, vibrant artwork, especially in fantasy or creative designs.

6. Auraflow: Specializes in smooth, flowing images, often with glowing or ethereal effects, perfect for fantasy or sci-fi themes.

And if you want to combine the best of different AI models, you can try my workflow or my AI tool:
- SDXL MergeSimple: this simple workflow can merge 2 checkpoints with the same base.
- Pony + FLUX Fixer: try this AI tool if you want to merge 2 different bases; since FLUX is good at fixing images, text, and small details, it is effective without having to work twice.

Finally, all of this is my personal opinion from what I have experienced. How about you? Do you have a different opinion, and which model do you prefer? Share your thoughts in the comments below! Let's open the discussion!
LoRA Training for Stable Diffusion 3.5

Full article can be found here: Stable Diffusion 3.5 Large Fine-tuning Tutorial

Images should be cropped into these aspect ratios:
(1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216), (1344, 768), (768, 1344), (1472, 704)

If you need help automatically pre-cropping your images, this is a lightweight, barebones [script](https://github.com/kasukanra/autogen_local_LLM/blob/main/detect_utils.py) I wrote to do it. It will find the best crop depending on:
1. Is there a human face in the image? If so, the crop is oriented around that region of the image.
2. If no human face is detected, the crop uses a saliency map, which detects the most interesting region of the image. A best crop is then extracted centered around that region.

Here are some examples of what my captions look like:

k4s4, a close up portrait view of a young man with green eyes and short dark hair, looking at the viewer with a slight smile, visible ears, wearing a dark jacket, hair bangs, a green and orange background

k4s4, a rear view of a woman wearing a red hood and faded skirt holding a staff in each hand and steering a small boat with small white wings and large white sail towards a city with tall structures, blue sky with white clouds, cropped

If you don't have your own fine-tuning dataset, feel free to use this dataset of paintings by John Singer Sargent (downloaded from WikiArt and auto-captioned) or a synthetic pixel art dataset.

I'll be showing results from several fine-tuned LoRA models of varying dataset size to show that the settings I chose generalize well enough to be a good starting point for fine-tuning LoRA.

repeats duplicates your images (and optionally rotates, changes the hue/saturation, etc.) and captions as well, to help generalize the style into the model and prevent overfitting.
While SimpleTuner supports caption dropout (randomly dropping captions a specified percentage of the time), it doesn't support shuffling tokens (tokens are roughly the words in the caption) at this moment, but you can simulate the behavior of kohya's sd-scripts, where you can shuffle tokens while keeping an n amount of tokens in the beginning positions. Doing so helps the model not get too fixated on extraneous tokens.

Steps calculation

Max training steps can be calculated with a simple equation (for a single concept):

max_train_steps = (number of samples × number of repeats × epochs) / batch size

There are four variables here:
- Batch size: the number of samples processed in one iteration.
- Number of samples: the total number of samples in your dataset.
- Number of repeats: how many times the dataset is repeated within one epoch.
- Epochs: the number of times the entire dataset is processed.

There are 476 images in the fantasy art dataset, with the 5 repeats from multidatabackend.json on top. I chose a train_batch_size of 6 for two reasons:
1. This value would let me see the progress bar update every second or two.
2. It's large enough to take 6 samples in one iteration, ensuring more generalization during the training process.

If I wanted 30 or so epochs, the final calculation would be:

(476 × 5 × 30) / 6 = 11,900 steps

(476 × 5) / 6 represents the number of steps per epoch, which is 396. As such, I rounded these values up to 400 for CHECKPOINTING_STEPS.

⚠️ Although I calculated 11,900 for MAX_NUM_STEPS, I set it to 24,000 in the end. I wanted to see more samples of the LoRA training; anything after the original 11,900 would give me a good gauge of whether I was overtraining or not. So I just doubled the total steps, 11,900 × 2 = 23,800, then rounded up.

CHECKPOINTING_STEPS represents how often you want to save a model checkpoint. Setting it to 400 is pretty close to one epoch for me, so that seemed fine. CHECKPOINTING_LIMIT is how many checkpoints you want to save before overwriting the earlier ones.
In my case, I wanted to keep all of the checkpoints, so I set the limit to a high number like 60.

Multiple concepts

The above example is trained on a single concept with one unifying trigger word at the beginning: k4s4. However, if your dataset has multiple concepts/trigger words, then your step calculation could be something like this: 2 concepts [a, b].

Lastly, for learning rate, I set it to 1.5e-3, as any higher would cause the gradient to explode.

The other relevant settings are related to LoRA:

```json
{
  "--lora_rank": 768,
  "--lora_alpha": 768,
  "--lora_type": "standard"
}
```

Personally, I received very satisfactory results using a higher LoRA rank and alpha. You can watch the more recent videos on my YouTube channel for a more precise heuristic breakdown of how image fidelity increases as you raise the LoRA rank (in my opinion). Anyway, if you don't have the VRAM, storage capacity, or time to go so high, you can choose a lower value such as 256 or 128.

As for lora_type, I'm just going with the tried and true standard. There is another option for the lycoris type of LoRA, but it's still very experimental and not well explored.
I have done a deep dive into lycoris myself, but I haven't found settings that produce acceptable results.

Custom config.json miscellaneous

There are some extra settings that you can change for quality of life:

```json
{
  "--validation_prompt": "k4s4, a waist up view of a beautiful blonde woman, green eyes",
  "--validation_guidance": 7.5,
  "--validation_steps": 200,
  "--validation_num_inference_steps": 30,
  "--validation_negative_prompt": "blurry, cropped, ugly",
  "--validation_seed": 42,
  "--lr_scheduler": "cosine",
  "--lr_warmup_steps": 2400
}
```

These are pretty self-explanatory:

"--validation_prompt": The prompt used to generate validation images. This is your positive prompt.
"--validation_negative_prompt": Negative prompt.
"--validation_guidance": Classifier-free guidance (CFG) scale.
"--validation_num_inference_steps": The number of sampling steps to use.
"--validation_seed": Seed value when generating validation images.
"--validation_steps": The frequency at which validation images are generated. I set mine to 200, which is half of 400 (the number of steps in an epoch for my fantasy art example dataset), so I generate a validation image every half epoch. I suggest generating validation images at least every half epoch as a sanity check; if you don't, you might not catch errors as quickly as you could.

Lastly, "--lr_scheduler" and "--lr_warmup_steps": I went with a cosine scheduler. SimpleTuner defaults the warmup to 10% of the total training steps behind the scenes if you don't set it, and that's a value I use often, so I hard-coded it in (24,000 × 0.1 = 2,400). Feel free to change this.
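To see what the warmup-plus-cosine combination does, here is a minimal sketch of the schedule's shape. This is my own illustration using the 1.5e-3 learning rate and 2,400 warmup steps mentioned above, not SimpleTuner's actual implementation:

```python
import math

def lr_at(step, total_steps=24_000, base_lr=1.5e-3, warmup_steps=2_400):
    # Linear warmup from 0 to base_lr, then cosine decay back toward 0.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at(2_400))   # peak: 0.0015
print(lr_at(24_000))  # end of training: ~0.0
```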
This is what it will look like:### Memory usageIf you aren’t training the text encoders (we aren’t), `SimpleTuner` saves us about `10.4 GB` of VRAM.![image.png](https://prod-files-secure.s3.us-west-2.amazonaws.com/4e8dae13-2612-4518-91a4-53485ccdba7c/316002db-297b-45a9-b919-cec6b311c773/image.png)With the settings of `batch size` of `6` and a `lora rank/alpha` of `768`, the training consumes about `32 GB` of VRAM.![image.png](https://prod-files-secure.s3.us-west-2.amazonaws.com/4e8dae13-2612-4518-91a4-53485ccdba7c/c2aac70a-8c65-4f6f-b602-487f24de4bd2/image.png)Understandably, this is out of the range of consumer `24 GB` VRAM GPUs. As such, I tried to decrease the memory costs by using a `batch size` of `1` and `lora rank/alpha` of `128` .Tentatively, I was able to bring the VRAM cost down to around `19.65 GB` of VRAM.However, when running inference for the validation prompts, it spikes up to around `23.37 GB` of VRAM.![image.png](https://prod-files-secure.s3.us-west-2.amazonaws.com/4e8dae13-2612-4518-91a4-53485ccdba7c/0c5240d6-6f71-404e-bea7-b18cc35ee5ad/image.png)![image.png](https://prod-files-secure.s3.us-west-2.amazonaws.com/4e8dae13-2612-4518-91a4-53485ccdba7c/026be306-8331-45a2-9c02-541005f2cdfd/image.png)To be safe, you might have to decrease the `lora rank/alpha` even further to `64`. If so, you’ll consume around `18.83 GB` of VRAM during training.![image.png](https://prod-files-secure.s3.us-west-2.amazonaws.com/4e8dae13-2612-4518-91a4-53485ccdba7c/5edcaaf9-bf0d-4db0-a183-cfab44963b8e/image.png)During validation inference, it will go up to around `21.50 GB` of VRAM. 
This seems safe enough.![image.png](https://prod-files-secure.s3.us-west-2.amazonaws.com/4e8dae13-2612-4518-91a4-53485ccdba7c/bd41ce4e-a0db-443b-b3d2-63eac136779d/image.png)If you do decide to go with the higher spec training of `batch size` of `6` and `lora rank/alpha` of `768` , you can use the `DeepSpeed` config I provided [above](https://www.notion.so/Stable-Diffusion-3-5-Large-Fine-tuning-Tutorial-11a61cdcd1968027a15bdbd7c40be8c6?pvs=21) if your GPU VRAM is insufficient and you have enough CPU RAM.
Exploring DORA, LoRA, and LOKR: Key Insights Before Halloween2024 Training

In the world of artificial intelligence (AI), especially in training image-based models, the terms DORA, LoRA, and LOKR often play different but complementary roles in developing more efficient and accurate AI models. Each has a unique approach to understanding data, adapting models, and involving developers in the process. This article will discuss what DORA, LoRA, and LOKR are in the context of AI image training, as well as their respective strengths and weaknesses.

1. DORA (Distributed Organization and Representation Architecture) in AI Image Training

DORA is a model better known in the fields of cognitive science and AI, focusing on how systems understand and represent information. Although not commonly used directly in AI image training, DORA's principle of distributed representation can be applied to how models understand relationships between elements in an image, such as color, texture, shape, or objects, and how those elements are connected in a broader context.

Strengths:
- Understanding complex relationships: DORA allows AI models to understand complex relationships between objects in an image, crucial for tasks such as object recognition or object detection.
- Strong generalization: Helps models learn more abstract representations from visual data, allowing for object recognition even with variations in form or context.

Weaknesses:
- Less specific for certain visual tasks: DORA may be less optimal for tasks requiring high accuracy in image details, such as image segmentation.
- Computational complexity: Using a model based on complex representations like DORA requires more computational resources.

2. LoRA (Low-Rank Adaptation) in AI Image Training

LoRA is a method widely used in AI for fine-tuning large models without requiring significant resources. LoRA reduces model complexity by factoring heavy layers into low-rank representations.
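The "factoring heavy layers into low-rank representations" idea can be sketched numerically: instead of learning a full d×d weight update, LoRA learns two thin matrices whose product has rank at most r. This is a generic illustration with made-up sizes, not tied to any particular model:

```python
import numpy as np

d, r = 1024, 16                  # layer width and LoRA rank (r << d)
A = np.random.randn(d, r) * 0.01
B = np.zeros((r, d))             # B starts at zero, so the update starts at zero

delta_W = A @ B                  # the learned update, rank <= r
full_params = d * d              # parameters of a full-rank update
lora_params = d * r + r * d      # parameters LoRA actually trains

print(full_params, lora_params)  # 1048576 vs 32768: 32x fewer trainable parameters
```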
This allows for adjustments to large models (such as Vision Transformers or GANs) without retraining the entire model from scratch, saving time and cost.

Strengths:
- Resource efficiency: LoRA enables faster and more efficient adaptation of models, especially when working with large models and smaller datasets.
- Reduces overfitting: Since only a small portion of the parameters are adjusted, the risk of overfitting is reduced, which is essential when working with limited image datasets.
- Pretrained model adaptation: LoRA allows for the reuse of large pretrained models trained on vast datasets, making it easier to adapt them to more specific datasets.

Weaknesses:
- Limited to minor adjustments: LoRA is excellent for minor adjustments, but if significant changes are needed or if the dataset differs greatly from the original, the model may still require deeper retraining.
- Dependent on base model: The best results from LoRA heavily rely on the quality of the pretrained model. If the base model is not strong enough, the adapted results may be unsatisfactory.

3. LOKR (Locus of Control and Responsibility) in AI Image Training

LOKR, derived from psychology, refers to how a person perceives control and responsibility over something. In the context of AI development, this concept can be applied to how developers feel responsible for and control the training process of the model. Developers with an internal locus of control feel they have full control over the training process, while those with an external locus of control might feel that external factors such as datasets or hardware are more influential.

Strengths:
- Better decision-making: Developers with an internal locus of control are usually more focused on optimizing parameters and trying various approaches to improve results, which can lead to better AI models.
- High motivation: Developers who feel in control of the training outcomes are more motivated to continuously improve the model and overcome technical challenges.

Weaknesses:
- Challenges with external factors: Developers with an external locus of control might rely too much on external factors such as the quality of the dataset or available hardware, which can limit innovation and control over the training process.
- Not directly related to AI technicalities: While this concept provides good psychological insights, it does not offer direct solutions in the technical training of AI models.

Conclusion

DORA, LoRA, and LOKR bring different perspectives to AI image-based training. DORA offers insight into how models can understand complex relationships in images, though it comes with computational challenges. LoRA is highly useful for adapting large models in a more resource-efficient way, but has limitations if larger changes are required. Meanwhile, LOKR, although derived from psychology, can influence how AI developers approach training, especially in terms of control and responsibility. By understanding the strengths and weaknesses of each approach, developers can more effectively choose the method that best fits the specific needs of their AI projects, maximizing both efficiency and model performance in processing images.
Halloween2024 - ComfyUI experiences

Hello everyone. I have been working more intensively with various AI tools in the last few days and weeks. In this article I would like to briefly share my opinion on the "workflows" that you can create with ComfyUI.

First of all, my computer is not the "more expensive, faster, better" type. It is a Ryzen 5 with a GeForce 3060 Ti. So it is not bad, but by far not the best for training LoRAs, checkpoints or other AI things. It simply takes longer than with a Ryzen 9 and a GeForce 4090 ;)

But back to ComfyUI and the workflows. Since I have only been working with it for a few days (before that I used A1111 / Stable Diffusion), I am of course far from someone who can give you tips if you have problems. But one thing is certain: ComfyUI is definitely much faster than A1111 when creating images. With my current setup, I need over 2 minutes per XL image and almost 5 minutes for FLUX-based images with A1111. Anyone who can do a bit of math knows that this is really incredibly slow... ComfyUI, on the other hand, even with my setup, needs less than 20 seconds for an XL image and almost 60 seconds for a FLUX-based image. Of course, that depends on the workflow.

The problem with ComfyUI, in my opinion, is that it is not at all beginner-friendly. There is a "standard" workflow, but that is not enough. After all, we want to integrate or test various checkpoints, LoRAs or other things. So you start and look at the different options... and then... then you don't know what to do next. Without looking at various documentation or examples, you will have an extremely difficult time understanding this tool.

If we take a "fresh" installation of ComfyUI, after a long browse you will find that the things you actually want are "not" there. This includes things like using placeholders or a "better" way to save the files you create. This brings us to the possible extensions. Like in so many other communities, there are a huge number here. Unfortunately, this also makes things very confusing. Again, you have to look closely at what you want, need or expect, but even then it doesn't mean that the extension does what you want.

The worst thing about ComfyUI, in my opinion, is the confusing menu, and it gets worse with every extension. If you just look at the "Workflow" tool here on Tensor.Art, you immediately understand what I mean.

Still, ComfyUI is a very good and powerful tool. Most importantly, it is much faster than the other tools I have tried so far. I also really like the flexibility of the tool. However, it could have a "better" menu to make it more user-friendly. If you haven't done it before: it's worth checking out.
How to transform your images into a Halloween party atmosphere. | 🎃 Halloween 2024

INSTRUCTIONS:
This is a very simple workflow: just upload your image and press RUN. The prompt basically does not need to be modified, but you can still add more Halloween elements to make the theme richer. Hope you all have a good time.

PROMPT:
(masterpiece), ((halloween elements)), a person, halloween striped thighhighs, witch hat, grin, (ghost), sweets, candy, candy cane, cookie, string of flags, halloween costume, jack-o'-lantern bucket, halloween, pumpkins, black cat, halloween, little ghost, magic robe, autumn leaves, candle, skull, 3d cg.

Negative PROMPT:
None.

Workflow link: https://tensor.art/workflows/786144487641608308
AI Tool link: https://tensor.art/template/786150277257599620
Model used (CKPT): https://tensor.art/models/757279507095956705/FLUX.1-dev-fp8
50 Inspiration Beauty Monster or Creature - HALLOWEEN2024

Looking to stand out this Halloween with a fierce, captivating costume? Dive into our 50 Beauty Monster and Creature Inspirations for Halloween 2024!

From the alluring vampire queen with fangs and pale skin to the mystical forest spirit with branches for hair, this list features a variety of iconic, feminine creatures to embody. Each entry provides five key characteristics to make your costume pop with creativity. Whether you want elegance, spookiness, or a combination of both, these ideas will help you slay this Halloween!

1. Vampire: Fangs, cloak, pale skin, red lips, pointed ears.
2. Witch: Pointed hat, broomstick, black dress, potion bottles, striped stockings.
3. Medusa: Snake hair, stony gaze, green skin, gold jewelry, ancient toga.
4. Banshee: Ghostly white dress, flowing hair, haunting scream, pale makeup, chains.
5. Succubus: Bat wings, red dress, horns, glowing eyes, tail.
6. Werewolf: Furry ears, sharp claws, fangs, torn clothes, wild hair.
7. Mermaid: Scales, seashell bra, fishtail, wet-look hair, pearls.
8. Harpy: Feathered wings, talons, bird-like eyes, fierce expression, ragged clothes.
9. Fairy: Sparkling wings, flower crown, wand, glittery makeup, light dress.
10. Zombie: Torn clothes, blood stains, decayed skin, lifeless eyes, open wounds.
11. Siren: Wet-look hair, seashell jewelry, seaweed skirt, alluring voice, eerie glow.
12. Elf: Pointed ears, elegant gown, bow and arrow, long hair, ethereal glow.
13. Gorgon: Snake tail, golden scales, slit eyes, regal crown, sharp claws.
14. Mummy: Wrapped in bandages, dark eye makeup, jewelry, ancient amulet, dusty appearance.
15. Ghost: Flowing white sheet, transparent, eerie wail, glowing eyes, pale hands.
16. Queen of the Dead: Black gown, skull crown, skeletal makeup, dark veil, red roses.
17. Demoness: Red skin, black horns, tail, wings, sharp claws.
18. Bride of Frankenstein: Black and white hair, stitched skin, bride gown, lightning bolts, scars.
19. Voodoo Priestess: Skull face paint, voodoo doll, bones, beads, tribal clothing.
20. Phoenix: Fiery wings, flame patterns, red and orange outfit, glowing skin, feathers.
21. Chimera: Lion mane, snake tail, dragon wings, golden eyes, muscular build.
22. Spider Queen: Black web dress, spider crown, long legs, red eyes, venomous fangs.
23. Lady Death: Black cloak, scythe, skeletal hands, skull mask, dark aura.
24. Nymph: Nature gown, flowers in hair, earthy tones, glowing skin, delicate wings.
25. Selkie: Fur cloak, watery skin, ocean jewels, seal tail, wet hair.
26. Giantess: Massive build, oversized clothes, earthy skin, towering presence, big jewelry.
27. Forest Witch: Mossy cloak, animal bones, green skin, potions, tree branches in hair.
28. Dragoness: Scaly skin, horns, tail, fiery breath, armored chestplate.
29. Lilith: Dark wings, black robe, seductive look, glowing red eyes, ancient symbols.
30. Hag: Wrinkled skin, tattered clothes, long nose, hunched posture, warts.
31. Valkyrie: Winged helmet, sword, battle armor, braided hair, shield.
32. Troll Woman: Green skin, sharp tusks, club, fur clothes, wild hair.
33. Ice Queen: Frosted crown, shimmering cape, blue skin, ice staff, glowing cold eyes.
34. Scarecrow: Straw-filled body, stitched mouth, tattered hat, pumpkin head, patched overalls.
35. Djinn: Flowing robes, magic lamp, glowing eyes, ornate jewelry, smoke swirling around.
36. Cheshire Cat: Striped fur, wide grin, cat ears, mischievous eyes, tail.
37. Swamp Creature: Muddy skin, algae hair, webbed fingers, water plants, gills.
38. Basilisk Queen: Reptilian skin, glowing eyes, snake tail, venomous fangs, ancient armor.
39. Lamia: Snake body, golden armor, hypnotic eyes, deadly claws, venomous bite.
40. Wendigo Woman: Deer antlers, skeletal body, glowing eyes, fur cloak, sharp claws.
41. Shadow Witch: Black shadowy figure, dark veil, glowing red eyes, spectral form, floating.
42. Frost Maiden: Icicle crown, snowflake gown, pale blue skin, icy breath, shimmering frost.
43. Baba Yaga: Hunched back, long nose, flying broom, warts, iron teeth.
44. Kitsune: Fox ears, fluffy tail, red kimono, mystical powers, mask.
45. Forest Spirit: Tree branches for hair, bark-like skin, moss gown, glowing eyes, ethereal glow.
46. Plague Doctoress: Black cloak, plague mask, long gloves, eerie eyes, dark potions.
47. Dullahan: Headless woman, flowing black cloak, horse-riding, holding a skull, eerie lantern.
48. Succubus Queen: Leather bodice, wings, horns, glowing eyes, seductive aura.
49. Dryad: Bark skin, leaves in hair, tree branches for arms, glowing green eyes, earthy gown.
50. Banshee Queen: Flowing black dress, ghostly hair, skeletal hands, pale skin, sorrowful wail.

Settings used:
- All created using the Juggernaut SDXL model
- Steps: 25
- CFG: 6
- Sampler: dpmpp_2m karras
- Not all creatures are recognized well by the checkpoint; you may use a LoRA or another checkpoint if needed to create certain characters.

With these 50 beauty monster and creature inspirations, you're all set to embrace the eerie, enchanting side of Halloween 2024. Whether you choose to transform into a seductive vampire, a magical forest spirit, or a chilling banshee queen, each idea is designed to make you stand out in both style and spookiness. Let your creativity soar this Halloween, and enjoy bringing these unique creatures to life. Get ready to slay (literally!) with hauntingly beautiful looks that will leave everyone spellbound!
Algunos cambios / some changes

I've updated all my models so that people can generate unlimited images with them for free. Downloading them is still subject to paying for the Buffet, so go ahead and unleash your creativity.
🎃 Halloween2024 | Optimizing Sampling Schedules in Diffusion Models

You might have seen these kinds of images on Pinterest if you have girly tastes. Well, guess what? I'll teach you about some parameters to enhance your future Pony SDXL generations. It's been a while since my last post; today I'll cover a cool feature launched by NVIDIA on July 22, 2024. For this task I'll provide an alternative workflow (Diffusion Workflow) for SDXL. Now let's go with the content.

Models
For my research (AI Tool) I decided to use the following models:
- Checkpoint model: https://tensor.art/models/757869889005411012/Anime-Confetti-Comrade-Mix-v3
- 0.60 LoRA: https://tensor.art/models/702515663299835604
- 0.80 LoRA: https://tensor.art/models/757240925404735859/Sailor-Moon-Vixon's-Anime-Style-Freckledvixon-1.0
- 0.75 LoRA: https://tensor.art/models/685518158427095353

Nodes
The Diffusion Workflow has many nodes I've merged into single nodes; I'll explain them below. Remember you can group nodes and edit their values to enhance your experience.

👑 Super Prompt Styler // Advanced Manager
- (CLIP G) text_positive_g: positive prompt, subject of the scene (all the elements the scene is meant for, LoRA keyword activators).
- (CLIP L) text_positive_l: positive prompt, what the scene itself is meant to be (composition, lighting, style, scores, ratings).
- text_negative: negative prompt.
- ◀Style▶: artistic styler; select the direction for your prompt ('misc Gothic' for a Halloween direction).
- ◀Negative Prompt▶: prepares the negative prompt by splitting it in two (CLIP G and CLIP L) for the encoder.
- ◀Log Prompt▶: adds information to metadata; produces error 1406 when enabled, so turn it off.
- ◀Resolution▶: select the resolution of your generation.

👑 Super KSampler // NVIDIA Aligned Steps
- base_seed: similar to esnd (know more here).
- similarity: this parameter influences base_seed noise to be similar to the noise_seed value.
- noise_seed: the exact same noise seed you know.
- control after generate: dictates the behavior of noise_seed.
- cfg: guidance for the prompt; read about DynamicThresholdingFull to know the correct value. I recommend 12.
- sampler_name: sampling method.
- model_type: NVIDIA sampler for SDXL and SD models.
- steps: the exact same steps you know; dictates how much the sampling denoises the injected noise.
- denoise: the exact same denoise you know; dictates how strongly the sampling denoises the injected noise.
- latent_offset: select between {-1.00 darker to 1.00 brighter} to modify the input latent; any value other than 0 adds information to enhance the final result.
- factor_positive: upscale factor for the conditioning.
- factor_negative: upscale factor for the conditioning.
- vae_name: the exact same VAE you know; dictates how the injected noise is denoised by the sampler.

👑 Super Iterative Upscale // Latent/on Pixel Space
- model_type: NVIDIA sampler for SDXL and SD models.
- steps: number of steps the UPSCALER (Pixel KSampler) will use to correct the latent in pixel space while upscaling it.
- denoise: dictates the strength of the correction on the latent in pixel space.
- cfg: guidance for the prompt; read about DynamicThresholdingFull to know the correct value. I recommend 12.
- upscale_factor: number of times the upscaler will upscale the latent (must match factor_positive and factor_negative).
- upscale_steps: dictates the number of steps the UPSCALER (Pixel KSampler) will use to upscale the latent.

Miscellaneous

DynamicThresholdingFull
- mimic_scale: 4.5 (important value; see the link below to learn more)
- threshold_percentile: 0.98
- mimic_mode: half cosine down
- mimic_scale_min: 3.00
- cfg_mode: half cosine down
- cfg_scale_min: 0.00
- sched_val: 3.00
- separate_feature_channels: enable
- scaling_startpoint: mean
- variability_measure: AD
- interpolate_phi: 0.85
Learn more: https://www.youtube.com/watch?v=_l0WHqKEKk8

Latent Offset
Learn more: https://github.com/spacepxl/ComfyUI-Image-Filters?tab=readme-ov-file#offset-latent-image

Align Your Steps
Learn more: https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/

LayerColor: Levels
- black_point = 0 (base level of black)
- white_point = 255 (base level of white)
- output_black_point = 20 (makes blacks less black)
- output_white_point = 220 (makes whites less white)
Learn more: https://docs.getsalt.ai/md/ComfyUI_LayerStyle/Nodes/LayerColor%3A%20Levels/

LayerFilter: Film
- center_x: 0.50
- center_y: 0.50
- saturation: 1.75
- vignette_intensity: 0.20
- grain_power: 0.50
- grain_scale: 1.00
- grain_sat: 0.00
- grain_shadows: 0.05
- grain_highs: 0.00
- blur_strength: 0.00
- blur_focus_spread: 0.1
- focal_depth: 1.00
Learn more: https://docs.getsalt.ai/md/ComfyUI_LayerStyle/Nodes/LayerFilter%3A%20Film/?h=film

Result
AI Tool: https://tensor.art/template/785834262153721417

Downloads
Pony Diffusion Workflow: https://tensor.art/workflows/785821634949973948
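As a side note, the LayerColor: Levels settings listed above amount to a simple linear remap of pixel values. A minimal sketch of that mapping (my own illustration of the listed settings, not the node's actual code):

```python
def apply_levels(v, black_point=0, white_point=255,
                 output_black_point=20, output_white_point=220):
    # Map the input range [black_point, white_point] onto
    # [output_black_point, output_white_point], clamping values outside it.
    t = (v - black_point) / (white_point - black_point)
    t = min(max(t, 0.0), 1.0)
    return round(output_black_point + t * (output_white_point - output_black_point))

print(apply_levels(0), apply_levels(255))  # 20 220: pure black and white are softened
```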
The Trials and Tribulations of a Halloween2024 Face Swap through Facepaint work in FLUX1D

So I set out with what I thought was a simple idea: "Start with an image of someone's face and turn that into a spooky Halloween character, with costume, makeup and full facepaint, with a spooky background." BUT it had to look enough like them at the end that they would be pleased with the result...

The starting point was easy. I wanted to train a Halloween LoRA on lots of images of people wearing Halloween facepaint, so I did that... A couple of the 48 images I used to train with:

So I had a Flux LoRA. Now I tested it in Tensor.Art with simple prompts: "Man in Halloween Facepaint", "Woman in Halloween Facepaint". So far so good; I thought, OK, this is going to be easy peasy!

At this point (end of September 2024) there were limited options in TA for Flux face swap (no Pulid available then), so I started trying with Facedetailer... I built out the workflow, made a separate flow for the background, and was all excited... But no matter what I tried (and I tried a lot!), the Facedetailer would wipe out the facepaint from the LoRA, restoring the face back to the original person, nice and clean, or with a half-hearted smear of greasepaint. Or it would look nothing at all like the person and the makeup would look like a badly stuck-on mask...

So I went back to my Discord buddies and we talked about the options, and decided to try Reactor nodes with insightface... It would generate a Florence description of the original reference face (cropped), build a dummy Halloween image with a lookalike from the description and with facepaint, and then Reactor the reference face back over the top (or so I thought). But the Reactor'd result cleaned up the face and removed 90% of the makeup, and it didn't want to do the costume or background at all the way I had envisaged... As soon as I gave it enough freedom to be creative, the reference person was lost completely...

I think by now people in all my Discord groups were sick of me asking for ideas on how to do this. I tried every setting and balance on Reactor nodes. Could I use an LLM to rewrite the visual description of the face to include the Halloween description first? And so on. I looked at IPAdapter and using depth maps, but although they captured the shape of the face, they couldn't preserve the familiar features through costume-style makeup. At this point I pretty much gave up in disgust... I put out a final round of help requests on various Discords and went on to another project.

A few days later my good friend told me, "Hey, they finally released Pulid for Flux on TA!" I had already built Flux Pulid workflows for face swapping the previous week on my MimicPC cloud version of ComfyUI (where you can load any kind of node and model you want and really design and play with freedom), so I started to regain my enthusiasm... I managed to merge some of the earlier ideas for generating the Halloween style with LLMs and a Joycaption of the cropped reference face with the Flux Pulid face swaps, experimented with the positioning of the LoRA for maximum effect, and was finally able to release a workflow and AI Tool that did what I had seen in my head those few weeks back when I started: https://tensor.art/template/785795972520313546

And the workflow: https://tensor.art/workflows/785793305345589081
And the LoRA: https://tensor.art/models/785804669831296337

If you have enjoyed my article, please like and use my AI Tools and models. I welcome comments and constructive feedback.
🎃 Halloween2024 Generation Guide: Elevate Your Spooky Creations! 👻

Halloween is right around the corner, and it's time to infuse your generation models with a touch of spooky magic! Whether you're crafting images, stories, or even interactive AI experiences, this guide will help you conjure up the best Halloween-themed content for 2024. Let's dive into some tips and tricks to make your generative AI creations truly spine-chilling! 🧛‍♂️🕸️

1. Theme Selection: Classic Horror vs. Modern Thrills
Start by deciding the tone of your Halloween project. Are you going for classic horror, with haunted houses, creepy forests, and gothic vibes? Or are you leaning towards modern Halloween with neon lights, cyberpunk ghosts, or playful skeletons? Classic horror themes like vampires, witches, and ghosts never go out of style, but blending them with modern elements (think AI-enhanced haunted tech or neon-lit crypts) can bring a fresh twist to your content.

2. Prompts and Inspiration Ideas
For image generation, try prompts that capture the Halloween atmosphere:
- "A haunted Victorian mansion under a full moon, surrounded by fog and dark twisted trees"
- "A neon-lit skeleton playing an electric guitar on a cyberpunk street"
- "A witch stirring a glowing cauldron, with enchanted bats swirling around"

For story generation, build a suspenseful atmosphere with prompts like:
- "On Halloween night, a group of friends discovers a hidden portal in an abandoned amusement park..."
- "A town where every carved pumpkin holds the soul of a spirit seeking freedom"

Don't be afraid to add a bit of humor to your Halloween stories, like:
- "A vampire who's afraid of the dark trying to overcome his fear"

3. Style Adjustments: The Magic of Lighting
Lighting can make or break the eerie ambiance of your Halloween images. Play with shadows, moonlit scenes, or dimly lit rooms to add that sense of unease. Experiment with different color palettes: orange, black, and purple are classics, but consider adding splashes of neon green or eerie blue for a modern twist. For a vintage horror feel, use grainy textures, sepia tones, or black-and-white effects to mimic old horror films.

4. Interactive Elements: Make It a Thrilling Experience
For those building interactive experiences, consider adding branching storylines where users can explore haunted locations or solve spooky mysteries. Add random elements to make the experience unpredictable; imagine a haunted AI guide that offers different creepy clues each time users interact with it. Build suspense with sound effects like whispering winds, distant footsteps, or creaking doors that play as users engage with your content.

5. Community Collaboration: Share and Get Inspired!
The best part about generative projects is sharing them with the community! Post your Halloween creations, get feedback, and see how others are getting into the spirit. Participate in Halloween-themed challenges or host one yourself, like a Spookiest Story Contest or Best Halloween Image Generation. Don't forget to use the hashtag #Halloween2024 when sharing your spooky content so others can easily find and engage with your posts.

6. Ethical Considerations: Keep It Fun and Respectful
While Halloween is all about embracing the creepy and the supernatural, it's important to remain sensitive to cultural traditions and symbols. Respectful representation goes a long way in keeping the spirit of fun alive for everyone. Ensure that your generative content is age-appropriate if targeting younger audiences; creepy doesn't always have to mean terrifying!

Happy Halloween & Happy Generating! 🎃👻
We hope these tips help you create some truly terrifying (or delightfully spooky) Halloween content this year. Let your creativity run wild and embrace the eerie, the whimsical, and the downright strange.
Looking forward to seeing what you conjure up this Halloween season!
HORROR ARTIST AND ART STYLE (Special article for HALLOWEEN2024)

1. H.R. Giger (Biomechanical Horror)
Giger is famous for his nightmarish "biomechanical" art style, blending human forms with machinery and grotesque alien creatures. His designs inspired the terrifying creatures in the Alien film series, making his style a staple in sci-fi horror.

2. Junji Ito (Manga Horror)
Junji Ito is a Japanese manga artist known for his unsettling and disturbing imagery. His style combines detailed linework with surreal body horror, where human forms often twist, decay, or transform into unimaginable horrors.

3. Francis Bacon (Abstract Horror)
Bacon's style is known for its raw and chaotic energy, often depicting distorted, screaming faces and bodies. His abstract approach creates a sense of psychological horror, focusing on human suffering and existential dread.

4. Zdzisław Beksiński (Surreal Horror)
Beksiński's paintings are filled with surreal, dystopian landscapes and nightmarish creatures. His style is dreamlike, featuring decaying cities, skeletal figures, and eerie, otherworldly atmospheres that evoke a sense of dread and desolation.

5. Edward Gorey (Gothic Macabre)
Gorey's distinctive pen-and-ink illustrations have a whimsical yet dark, gothic tone. His art features Victorian-style settings, eerie characters, and morbid humor, often telling unsettling stories in a playful, minimalist way.

6. Clive Barker (Fantasy Horror)
Known for creating Hellraiser's Cenobites, Barker's art mixes body horror with fantasy. His style incorporates grotesque, skin-crawling depictions of demons and twisted creatures, blurring the line between pleasure and pain.

7. Wayne Barlowe (Dark Fantasy)
Barlowe's art focuses on the grotesque, otherworldly creatures of hellish dimensions. His works are often visually complex, mixing detailed anatomy with imaginative designs that are both disturbing and awe-inspiring.

8. Dave McKean (Mixed Media Horror)
McKean's style is a unique blend of photography, collage, and painting, creating eerie, surreal images that evoke fear through abstraction and texture. His works often appear in horror comics and graphic novels, including collaborations with Neil Gaiman.

Each of these artists brings a distinct approach to the horror genre, using their unique styles to evoke fear, unease, or existential dread.
How to install Kohya_SS on Ubuntu WSL under Windows 11

How to install Kohya_SS on Ubuntu WSL under Windows 11

1) Prepare:
1. Check CPU virtualization on Windows > Task Manager > Performance > CPU > Virtualization: Enabled or Disabled. If Disabled, access the UEFI (or BIOS). The way the UEFI (or BIOS) appears depends on your PC manufacturer: https://support.microsoft.com/en-us/windows/enable-virtualization-on-windows-c5578302-6e43-4b4b-a449-8ced115f58e1
2. Make sure you are using a recent version of Windows 10/11. If needed, update to the latest version. (No earlier than Windows 10, Version 1903, Build 18362)

2) Install WSL and Ubuntu
1. Open Terminal > use the command:
wsl --install
2. Open the Microsoft Store > find Ubuntu. (The Ubuntu listing that doesn't show a version in its name is the latest.)
3. Install Ubuntu
4. Open Ubuntu
5. Create a profile > for example:
Username - User
Password - User

3) Install Kohya_SS on WSL Ubuntu:

:: Prepare
sudo apt update
sudo apt install software-properties-common -y
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.10 python3.10-venv python3.10-dev -y
sudo apt update -y && sudo apt install -y python3-tk
sudo apt install python3.10-tk
sudo apt install git -y

:: NVIDIA CUDA Toolkit
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda
export PATH=/usr/local/cuda-12.6/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.6/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

:: Reboot
sudo reboot

:: Kohya_ss install
cd ~
git clone --recursive https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
git checkout sd3-sd3.5-flux
./setup.sh

:: Configuration settings
source venv/bin/activate
accelerate config
> This machine
> No distributed training
> No
> No
> No
> All
> Yes
If you have an RTX 30/40 series video card, choose > bf16. If you don't, choose > fp16.

4) Run Kohya_SS on WSL Ubuntu
cd kohya_ss
./gui.sh

Notes:
To find the Kohya_ss folder, use \\wsl.localhost\Ubuntu\home\user in Explorer. You can move a model to train and a dataset there.
Additional commands for Windows Terminal:
Shutdown - wsl --shutdown
Uninstall or reset Ubuntu - wsl --unregister Ubuntu
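Once everything is installed, a quick sanity check from inside Ubuntu can save a failed training run later. A minimal stdlib-only sketch (the tool names are just the ones this guide installs; adjust to your setup):

```python
import shutil

def check_prereqs(tools=("git", "wget", "python3.10", "nvcc")):
    """Return a dict mapping each required tool to whether it is on PATH."""
    return {tool: shutil.which(tool) is not None for tool in tools}

if __name__ == "__main__":
    # Print a simple OK/MISSING report for each prerequisite.
    for tool, found in check_prereqs().items():
        print(f"{tool}: {'OK' if found else 'MISSING'}")
```

If nvcc shows MISSING, re-check the CUDA Toolkit step and the two export lines above.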
Tensor.Art Becomes World's Largest VisionAI Resource Hosting Platform in Under a Year

source: Yahoo Finance

Tensor.Art Becomes World's Largest VisionAI Resource Hosting Platform in Under a Year, Empowering Enterprise AI

Founded in July 2023, Tensor.Art has seen its global traffic surpass 15 million in less than a year. Currently hosting over 330,000 resources and generating more than 2 million images daily, it has positioned itself as a leading generative AI service platform worldwide. Remarkably, Tensor.Art has already started to turn a profit.

As a pioneering explorer of a sustainable Gen AI ecosystem, Tensor.Art provides cloud computing power for model creators and users while offering AI solutions tailored to real-world applications across various industries.

Founder Shen, possessing a keen sense of computer and AI technology, quickly decided to enter the AIGC (AI-generated content) market during its early rise. This swift decision led to the establishment of a platform that offers robust support.

Tensor.Art is the world's first model platform to support online inference and online operation of full-scale models. As one of the first to deploy Stable Diffusion on the cloud, it maintains a keen insight into new AI technologies and rapidly integrates cutting-edge advancements, including globally impactful releases like Stable Diffusion 3, HunYuan DiT, Kolors, Flux, and more.

404's report on Tensor.Art as the world's first company to hold AI events in 2023

Operations head Sawoo states, "We are committed to providing the best platform and community for AI enthusiasts and model creators. As early as 2023, we were pioneers in the AIGC platform space, hosting diverse events and launching creator incentive programs, which have since been emulated by competitors like CivitAI. Moreover, we tirelessly promote new global technologies, ensuring rapid online integration and training capabilities. With a comprehensive community ecosystem and rich activities, Tensor.Art now leads the global growth in new foundational models, growing at 5-6 times the rate of other leading competitors, earning praise from AI enthusiasts and model creators worldwide."

A successful collaboration between Tensor.Art and Snapchat

In an effort to democratize AI and make generative services more accessible, Tensor.Art has explored numerous real-world applications. For instance, in February 2024, the platform used its AI generative capabilities in collaboration with Snapchat to create a new paradigm in creativity through AR. Subsequent partnerships include renowned tattoo artists from Austria, a famous sticker website in the UK, and an architectural firm in Turkey, offering AI-generated design inspiration.

API Service: https://tams.tensor.art/

Additionally, Tensor.Art is committed to serving the B2B sector by providing a GPU API platform and simplified AI tool workflows, significantly lowering the AI adoption barrier for enterprises and catering to customized needs. This makes AI services more accessible and efficient, enhancing corporate productivity and creative inspiration.

Looking ahead, Tensor.Art will maintain its competitive edge by continuing to explore and quickly integrate new global technologies while also launching its own large models. This vision aims to offer an even better community experience and technical capabilities for AI enthusiasts and model creators.
Flux Ultimate's Custom Txt 2 Vid Tensor Workflow

Welcome to Dream Diffusion FLUX ULTIMATE, TXT 2 VID, with its own custom workflow made for Tensor Art's Comfy Workspace. The workflow can be downloaded on this page. ENJOY!

This is a second-stage trained checkpoint, successor to FLUX HYPER. When you think you had it nailed in the last version and then notice a 10% margin that could still be trained... well, that's what happened. So this version has even more font styles, better adherence, sharper image clarity, and a better grasp of anime, water painting, and so on. This model has the same setting parameters as Flux Hyper.

Prompt example: Logo in neon lights, 3D, colorful, modern, glossy, neon background, with a huge explosion of fire with epic effects, the text reads "FLUX ULTIMATE, GAME CHANGER",
Steps: 20
Sampler: DPM++ 2M or Euler gives best results
Scheduler: Simple
Denoise: 1.00
Image size: 576 x 1024 or 1024 x 576. You can choose any size, but this model is optimized for faster rendering at those sizes.

Download the links below and save the files to your Comfy folders.
Comfy workflow: https://openart.ai/workflows/maitruclam/comfyui-workflow-for-flux-simple/iuRdGnfzmTbOOzONIiVV
Vae: download this to your Vae folder inside of your models folder. Download it from: https://huggingface.co/black-forest-labs/FLUX.1-schnell/tree/main/vae
Clip: download clip_l.safetensors and t5xxl_fp8_e4m3fn.safetensors and save them to your clip folder inside of your models folder. Download them from: https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main

If you have any questions or issues, feel free to drop a comment below and I will get back to you as soon as I can. Enjoy. DICE
An examination of the effect of Denoise on Flux Img2Img with LoRA, a journey from Boat to Campervan

I made an AI Tool yesterday (FLUX IMG2IMG + LORAS + UPSCALE + CHOICE | ComfyUI Workflow | Tensor.Art) that allows you to combine up to 3 LoRAs and upscale. It has model switching to let you choose whether to turn on 0/1/2/3 of the available LoRA inputs; you can choose the weighting one by one and swap out the base Flux model and all the LoRAs to your own preferences. I have implemented radio-button prompting so that the main trigger words for the LoRAs I use most often are already behind the buttons, and you can use "Custom" to add your own prompt or triggers into the mix.

For this test I used a 6k Adobe Stock licensed image of a boat on the beach, with the Model Switcher set to "2" to prevent any bleed from other LoRAs in the tool. Everything is upscaled by 4x-UltraSharp at a factor of 2 (the tool will size your longest edge to 1024 as it processes, so you will end up with a 2048-pixel final image ready for Facebook servers).

Original input image: (downsized for article)

The first test was simply putting it through the AI Tool on the base Flux model: no denoise, no LoRA at all.
Next I added the LoRA "TQ - Flux Frozen" by @TracQuoc at 0.9 weight, with 0.25 denoise.
Then I changed the denoise to 0.5; you can see subtle changes: a signature has appeared, the boat is starting to change in areas, and writing is appearing on the side of the boat.
At 0.6 denoise the boat is starting to adapt more and the beach is changing a lot.
By 0.65 you can really see dramatic changes as the boat starts to develop wheels; it's almost as if the AI has a plan for this one...
At 0.7 the second boat has disappeared altogether, the whole boat is on a trailer, and the beach is changing into grassland.
Now I am stepping in 0.01 increments, as all the drama normally happens between 0.7 and 0.8 with Flux.
0.71:
0.72: the boat is definitely changing its shape now, and you can start to see snow.
0.73: you can see it's becoming a land-based vehicle now.
0.74: it feels like a towing caravan/trailer.
0.75: more detail in the towing section.
0.76: everything changes and suddenly we have some kind of Safari Land Cruiser.
0.77: now it's a camper van with a pop-up roof.
0.78: just some more camper-style detailing, but nothing dramatic.
0.79: there's almost no resemblance to the original scene except the sky and colours.
0.8: I can't see much change here.
Now I will go up in increments of 0.05 again.
0.85: the Frozen world has taken over, although it still has the style and colour feel of the original to some extent.
0.9: it's all gone (the tool ignored inputs over 0.9 and changed them back to 0.9).

I hope you have found this a useful experiment, and that it will save you time and coins when playing with img2img and denoise. You can check out all my AI Tools and LoRAs on my profile here: KurtC PhotoEd. Let me know if you enjoyed this and I might make some more (this was my first one).
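The sweep above is easy to script if you want to repeat it with your own image. A minimal sketch that builds the same denoise schedule (coarse steps up to 0.7, fine 0.01 steps through the 0.7-0.8 "drama zone", then coarse again); the actual generate call is left as a placeholder, since it depends on your tool's interface:

```python
def denoise_schedule():
    """Build the denoise sweep used in this experiment:
    coarse steps first, 0.01 steps between 0.70 and 0.80,
    then 0.05 steps up to the 0.90 cap the tool enforces."""
    coarse = [0.25, 0.50, 0.60, 0.65]
    fine = [round(0.70 + i * 0.01, 2) for i in range(11)]  # 0.70 .. 0.80
    tail = [0.85, 0.90]
    return coarse + fine + tail

for d in denoise_schedule():
    # Here you would run your img2img AI Tool with denoise=d
    # (placeholder; the real call depends on the tool you use).
    print(f"denoise={d:.2f}")
```

Running the whole list in one batch makes it easy to lay the results side by side and spot exactly where the boat turns into a campervan.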
Radio Buttons are awesome in AI Tools: [How to set-up guide]

Dear Tensorians,

Thanks to the newly implemented radio-button feature for AI Tools, we can have much more fun with AI tools now. Because I'm the one who insisted on implementing it, and more importantly the radio-button setting import/export features, I'll give an easy tutorial about them for beginners. 🤗😉

https://tensor.art/template/765762196352358016

This is an example AI tool using radio buttons; you can see them on the right. The cool thing about a radio-button GUI is that you don't have to remember or re-type all those crazy prompt words any more. You can store them in buttons and click them! Especially if you have a very wide range of different prompting styles, as most users do, you cannot remember them all. I bet you already have your own backup memo file for those special prompts, lol. However, more conveniently, if you make this kind of AI tool with a radio-button UI, you can store them online, next to you all the time. You can click on the buttons and generate various images whenever you want, even when you are driving (just kidding, don't ever do that, lol). Of course, you can add an extra prompt together with the buttons: click the "custom" button and you can always input more prompt text!

To create the radio buttons, click "edit" in the "..." menu. Then you move to the EDIT page of the AI tool. In the middle of the page, you see the user-configurable settings. By clicking the "Add" button, you can choose your AI interface. By clicking "edit" in the prompt's text box, you can enter the radio-button options page. From scratch, you can choose the pre-defined groups and buttons. In addition, you can add your own new buttons! Give a button a name and its content; the content is the part of the prompt you want the button to insert. After you are done with all the button settings, click "confirm" and then "publish" your AI tool. Then you'll see your cool radio buttons in the AI tool. (Note that certain prompt text box nodes in ComfyUI cannot be edited for buttons. Basic text prompt nodes and some other nodes can be used for button editing. You can check this after you publish your workflow as a tool; if it doesn't support the radio buttons, use a different prompt text node.)

Whenever you update your workflow for the AI tool, the entire AI tool UI is reset to none!! Yes, it was a real headache at the beginning. However, now we have a cool import/export button for the radio buttons! (Thank God~ 👯‍♀️⛈💯🤗) By the way, when you edit the button groups, you might choose part of the 6 or 7 groups first (e.g., the "character settings" and "role" groups) and add some nice buttons, then later change your mind and want to add another group, e.g., the "style" group. However, if you press the add button for that, your previous button data will be gone!! You restart from the beginning. Be very careful! (You'll understand what I mean when it happens, lol.)

Before updating your workflow, you can export the radio-button settings as a JSON file. Then you can import it back later, any time you want. More importantly, you can edit the radio buttons in an editor (like MS Visual Studio) for easier copy and paste from existing files. Trust me, this will save you an enormous amount of time remaking those terrible buttons every time the workflow is modified. Sometimes you may want to edit an existing button JSON file for another AI tool. Editing a JSON file is not really entertaining work, but it's much better than remaking the whole set of radio buttons in the GUI~ So find the place to edit in the JSON file and change it very carefully. The JSON syntax is not very editor-friendly and is error-prone, but you'll get used to it soon by trial and error. It's always useful to use the "find" command to look for the button you want in the file. You'll discover more interesting things while using the button JSON files; I'll leave them for your own pleasant surprise~ LOL.

I shared my JSON file for the AI tool in the comfy-chatroom on Discord. Feel free to use it. I hope this article helped you make the radio-button UI more easily. Enjoy~ 🤗😉⛈
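Before re-importing a hand-edited button file, it helps to catch syntax slips first (a stray comma is the classic one). A small stdlib-only sketch that is independent of the export's exact field names (the "buttons" key in the usage example is hypothetical) and pinpoints the first error by line and column:

```python
import json

def check_button_json(path):
    """Parse an exported radio-button JSON file and return the data,
    or raise ValueError with the line/column of the first syntax
    error - much faster than hunting for it by eye."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    try:
        return json.loads(text)
    except json.JSONDecodeError as e:
        raise ValueError(
            f"JSON syntax error at line {e.lineno}, column {e.colno}: {e.msg}"
        )
```

Run it on your edited file before importing; if it raises, fix the reported spot and run again until it parses cleanly.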
Hunyuan-DiT: Recommendations

Review

Hello everyone; I want to share some of my impressions of the Chinese model Hunyuan-DiT from Tencent. First of all, let's start with some mandatory background so we (Westerners) can figure out what it is meant for.

Hunyuan-DiT works well in multi-modal dialogue with users (mainly in Chinese and English); the better explained your prompt, the better your generation will be. It is not necessary to use only keywords, although it understands them quite well. In terms of quality, HYDiT 1.2 sits between SDXL and SD3: it is not as powerful as SD3, but it beats SDXL at almost everything. For me, it is how SDXL should have been in the first place. One of the best parts is that Hunyuan-DiT is compatible with almost the entire SDXL node suite.

Hunyuan-DiT-v1.2 was trained with 1.5B parameters.
mT5 was trained with 1.6B parameters.
Recommended VAE: sdxl-vae-fp16-fix
Recommended samplers: ddpm, ddim, or dpmms

Prompt as you would in SD1.5; don't be shy, and go further in terms of length. Hunyuan-DiT combines two text encoders, a bilingual CLIP and a multilingual T5 encoder, to improve language understanding and increase the context length. It divides your prompt into meaningful IDs and then processes the entire prompt; the limit is 100 IDs, or up to 256 tokens. T5 works well on a variety of tasks out of the box by prepending a different prefix to the input for each task.

To improve your prompt, place your summarized prompt in the CLIP TextEncoder node box (if you disabled T5), or place your extended prompt in the T5 TextEncoder node box (if you enabled T5). You can use the "simple" text encode node to use only one prompt, or the regular one to pass different text to CLIP/T5. The worst part is that the model only benefits from moderate (high for TensorArt) step values: 40 steps is the baseline in most cases.

ComfyUI (ComfyFlow) (Example)
TensorArt added all the elements to build a good flow for us; you should try it too.

Additional
What can we do in the Open-Source plan? (link)
Official info for LoRA training (link)

References
Analysis of HunYuan-DiT | https://arxiv.org/html/2405.08748v1
Learn more about T5 | https://huggingface.co/docs/transformers/en/model_doc/t5
How CLIP and T5 work together | https://arxiv.org/pdf/2205.11487
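Since the 256-token budget is easy to overrun with SD1.5-style long prompts, a quick length check before generating can help. A rough stdlib-only sketch; the ~4-characters-per-token heuristic is an assumption for a quick estimate, not the model's actual CLIP/T5 tokenizer:

```python
def rough_token_estimate(prompt, chars_per_token=4):
    """Very rough token count (~4 characters per token is a common
    rule of thumb; the real CLIP/T5 tokenizers will differ)."""
    return max(1, len(prompt) // chars_per_token)

def check_prompt_budget(prompt, limit=256):
    """Flag an extended T5 prompt that likely exceeds the 256-token limit."""
    est = rough_token_estimate(prompt)
    return {"estimated_tokens": est, "limit": limit, "likely_ok": est <= limit}
```

If the estimate is over budget, move the descriptive detail into the T5 box and keep only a condensed summary in the CLIP box, as suggested above.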
Unlock the Power of Detailed Beauty with TQ-HunYuan-More-Beautiful-Detail v1.7

In the world of digital artistry, achieving that perfect blend of intricate details and stunning visuals can be a game-changer. That's where our latest model, TQ-HunYuan-More-Beautiful-Detail v1.7, comes into play. Designed with precision and a keen eye for aesthetics, this model is your go-to solution for elevating your artwork to new heights.

What is TQ-HunYuan-More-Beautiful-Detail v1.7?
TQ-HunYuan-More-Beautiful-Detail v1.7 is a state-of-the-art LoRA (Low-Rank Adaptation) model created to enhance the finer details in your digital creations. Whether you're working on portraits, landscapes, or abstract designs, this model ensures that every nuance and subtlety is brought to life with extraordinary clarity and beauty.

Why Choose TQ-HunYuan-More-Beautiful-Detail v1.7?
Unmatched Detail Enhancement: As the name suggests, this model excels at adding more beautiful details to your artwork. It meticulously enhances textures, refines edges, and highlights intricate patterns, making your creations visually striking.
Versatility Across Genres: No matter the style or genre of your artwork, TQ-HunYuan-More-Beautiful-Detail v1.7 adapts seamlessly. From hyper-realistic portraits to fantastical landscapes, this model enhances every element with precision.
User-Friendly Integration: Designed for ease of use, integrating TQ-HunYuan-More-Beautiful-Detail v1.7 into your workflow is straightforward. Compatible with various platforms and software, it allows artists of all levels to harness its power without a steep learning curve.
Boost Your Creativity: By handling the intricate details, this model frees up your creative energy. Focus on the broader aspects of your work while TQ-HunYuan-More-Beautiful-Detail v1.7 takes care of the fine-tuning, resulting in a harmonious and polished final piece.

How to Get Started
Getting started with TQ-HunYuan-More-Beautiful-Detail v1.7 is simple. Visit this link to access the model. Download and integrate it into your preferred digital art software, and watch as your creations transform with enhanced details and breathtaking beauty. Ready to take your art to the next level? Download TQ-HunYuan-More-Beautiful-Detail v1.7 now and start creating masterpieces with more beautiful detail than ever before.
SD3 - 3D lettering designer

SD3 understands prompts better than SDXL. You can use this to create interesting 3D lettering; for that purpose, use this WF! You can use a gradient as the background or any image you like. Have fun!
Link to workflow: SD3 - 3D lettering designer | ComfyUI Workflow | Tensor.Art