# Guide to HuggingFace: Animagine XL 3.0

# Hugging Face Tutorial

## Introduction
Welcome to this tutorial on Hugging Face, an open-source platform for building and sharing machine learning models and tools. In this tutorial, you'll learn about Animagine XL 3.0, the latest iteration of the sophisticated open-source anime text-to-image model developed by Cagliostro Research Lab. We'll cover model details, Gradio & Colab integration, Diffusers installation, usage guidelines, recommended settings, training and hyperparameters, limitations, acknowledgements, collaborators, and the model's license.

## Model Details
- **Developed by**: [Cagliostro Research Lab](https://huggingface.co/cagliostrolab)
- **Model type**: Diffusion-based text-to-image generative model
- **License**: [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd)
- **Finetuned from model**: [Animagine XL 2.0](https://huggingface.co/Linaqruf/animagine-xl-2.0)

## Gradio & Colab Integration
Animagine XL 3.0 is accessible through user-friendly platforms such as Gradio and Google Colab.

## Diffusers Installation
To use Animagine XL 3.0, install the required libraries using the following commands:
```bash
pip install diffusers --upgrade
pip install transformers accelerate safetensors
```
An example script for generating images with Animagine XL 3.0 is provided in the code block in the model card below.

## Usage Guidelines
### Tag Ordering
The structured prompt template is recommended for optimal results. The model was trained to understand concepts in this order: `1girl/1boy, character name, from what series, everything else in any order.`

### Special Tags
The model was trained with special tags to steer the result toward quality, rating, and the time when the posts were created. Quality modifiers, rating modifiers, and year modifiers are provided.

## Recommended Settings
To guide the model towards generating high-aesthetic images, use negative prompts like `nsfw`, `low quality`, and `jpeg artifacts`. For higher quality outcomes, prepend prompts with `masterpiece` and `best quality`. Careful use of quality modifiers, rating modifiers, and the classifier-free guidance (CFG) scale is recommended.

### Multi-Aspect Resolution
This model supports generating images at various dimensions and aspect ratios, as listed in the table in the model card below.

## Training and Hyperparameters
Animagine XL 3.0 was trained on 2x A100 80GB GPUs for 21 days. The training process encompassed three stages: Feature Alignment, Refining UNet, and Aesthetic Tuning. Hyperparameters for each stage are provided in the table below.

## Model Comparison
Training configurations for Animagine XL 2.0 and Animagine XL 3.0 are compared below, covering GPU, dataset size, shuffle separator, learning rate, batch size, and other parameters.

## Limitations
Several limitations of Animagine XL 3.0 are highlighted, including concept over art style focus, non-photorealistic design, anatomical challenges, dataset limitations, natural language processing, and NSFW content risk.

## Acknowledgements
The team and community that contributed to the development of Animagine XL 3.0, partners, collaborators, and community members are acknowledged.

## Collaborators
Various collaborators are mentioned for their contributions to the development and refinement of Animagine XL 3.0.

## License
Animagine XL 3.0 uses the Fair AI Public License 1.0-SD, which includes modification sharing, source code accessibility, distribution terms, and compliance requirements.

This concludes the tutorial overview of Animagine XL 3.0 and its usage guidelines. The full model card follows below.

Animagine XL 3.0 is the latest version of the sophisticated open-source anime text-to-image model, building upon the capabilities of its predecessor, Animagine XL 2.0. Developed based on Stable Diffusion XL, this iteration boasts superior image generation with notable improvements in hand anatomy, efficient tag ordering, and enhanced knowledge about anime concepts. Unlike the previous iteration, we focused on making the model learn concepts rather than aesthetics.



## Model Details

- **Developed by**: [Cagliostro Research Lab](https://huggingface.co/cagliostrolab)
- **Model type**: Diffusion-based text-to-image generative model
- **Model Description**: Animagine XL 3.0 is engineered to generate high-quality anime images from textual prompts. It features enhanced hand anatomy, better concept understanding, and prompt interpretation, making it the most advanced model in its series.
- **License**: [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd)
- **Finetuned from model**: [Animagine XL 2.0](https://huggingface.co/Linaqruf/animagine-xl-2.0)



## Gradio & Colab Integration

Animagine XL 3.0 is accessible through user-friendly platforms such as Gradio and Google Colab.
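As an illustration of what such an integration involves, here is a minimal, hypothetical Gradio wrapper; it is our sketch, not the official demo, and it assumes the `pipe` object constructed in the Diffusers section below:

```python
# Hypothetical sketch of a Gradio demo; assumes `pipe` from the
# Diffusers example below has been constructed and moved to the GPU.
import gradio as gr

def generate(prompt: str):
    # Return the first generated PIL image for the given prompt
    return pipe(prompt).images[0]

gr.Interface(fn=generate, inputs="text", outputs="image").launch()
```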



## 🧨 Diffusers Installation

To use Animagine XL 3.0, install the required libraries as follows:

```bash
pip install diffusers --upgrade
pip install transformers accelerate safetensors
```

Example script for generating images with Animagine XL 3.0:

```python
import torch
from diffusers import (
    StableDiffusionXLPipeline,
    EulerAncestralDiscreteScheduler,
    AutoencoderKL
)

# Load the fp16-fixed SDXL VAE to avoid half-precision artifacts
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix",
    torch_dtype=torch.float16
)

# Load Animagine XL 3.0 with the custom VAE in half precision
pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.0",
    vae=vae,
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16"
)

# Use the recommended Euler Ancestral sampler and move the pipeline to the GPU
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.to('cuda')

# Prompt follows the recommended tag order: subject, character, series, details
prompt = "1girl, arima kana, oshi no ko, solo, upper body, v, smile, looking at viewer, outdoors, night"
negative_prompt = "nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name"

# Generate at one of the supported resolutions (832 x 1216, 13:19 portrait)
image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=832,
    height=1216,
    guidance_scale=7,
    num_inference_steps=28
).images[0]
```
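The pipeline returns standard PIL images, so persisting the output is a one-liner; the filename below is illustrative, not part of the original script:

```python
# Save the generated PIL image; the path is an arbitrary example.
image.save("output.png")
```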



## Usage Guidelines



### Tag Ordering

Prompting works a bit differently in this iteration. For optimal results, it's recommended to follow the structured prompt template, because the model was trained on prompts in this order:

`1girl/1boy, character name, from what series, everything else in any order.`
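As a minimal sketch, here is one way to assemble a prompt in that order programmatically. The helper variables are ours, not from the model card; the character and series names come from the example script above:

```python
# Illustrative only: build a prompt following the trained tag order.
subject = "1girl"                 # 1girl/1boy
character = "arima kana"          # character name
series = "oshi no ko"             # from what series
details = "solo, upper body, smile, looking at viewer"  # everything else

prompt = ", ".join([subject, character, series, details])
print(prompt)
# 1girl, arima kana, oshi no ko, solo, upper body, smile, looking at viewer
```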



### Special Tags

Like the previous iteration, this model was trained with special tags to steer the result toward quality, rating, and when the posts were created. The model can still do the job without these special tags, but using them is recommended to make the model easier to handle.



### Quality Modifiers

| Quality Modifier | Score Criterion |
|------------------|-----------------|
| masterpiece | > 150 |
| best quality | 100-150 |
| high quality | 75-100 |
| medium quality | 25-75 |
| normal quality | 0-25 |
| low quality | -5 to 0 |
| worst quality | < -5 |



### Rating Modifiers

| Rating Modifier | Rating Criterion |
|-----------------|------------------|
| rating: general | General |
| rating: sensitive | Sensitive |
| rating: questionable, nsfw | Questionable |
| rating: explicit, nsfw | Explicit |



### Year Modifier

These tags help to steer the result toward modern or vintage anime art styles, ranging from newest to oldest.

| Year Tag | Year Range |
|----------|------------|
| newest | 2022 to 2023 |
| late | 2019 to 2021 |
| mid | 2015 to 2018 |
| early | 2011 to 2014 |
| oldest | 2005 to 2010 |



### Recommended Settings

To guide the model towards generating high-aesthetic images, use negative prompts like:

`nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name`

For higher quality outcomes, prepend prompts with:

`masterpiece, best quality`

However, be careful with `masterpiece, best quality`, because many high-scored images in the dataset are NSFW. It's better to add `nsfw, rating: sensitive` to the negative prompt and `rating: general` to the positive prompt. It's also recommended to use a lower classifier-free guidance (CFG) scale of around 5-7, sampling steps below 30, and Euler Ancestral (Euler a) as the sampler.
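Putting these recommendations together, here is a minimal sketch that reuses the `pipe` object from the installation example above; the prompt text itself is illustrative:

```python
# Quality tags up front, rating steered via positive/negative prompts,
# CFG and step count within the recommended ranges. The Euler Ancestral
# scheduler was already set when the pipeline was constructed above.
prompt = "masterpiece, best quality, rating: general, 1girl, solo, smile"
negative_prompt = "nsfw, rating: sensitive, lowres, bad anatomy, bad hands, worst quality, low quality"

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    guidance_scale=6.0,       # within the recommended 5-7 CFG range
    num_inference_steps=28,   # below the recommended ceiling of 30
).images[0]
```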



### Multi-Aspect Resolution

This model supports generating images at the following dimensions:

| Dimensions | Aspect Ratio |
|------------|--------------|
| 1024 x 1024 | 1:1 Square |
| 1152 x 896 | 9:7 |
| 896 x 1152 | 7:9 |
| 1216 x 832 | 19:13 |
| 832 x 1216 | 13:19 |
| 1344 x 768 | 7:4 Horizontal |
| 768 x 1344 | 4:7 Vertical |
| 1536 x 640 | 12:5 Horizontal |
| 640 x 1536 | 5:12 Vertical |
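Because SDXL-based models tend to produce better results at their trained resolutions, it can help to snap arbitrary sizes to the nearest supported pair. A small sketch follows; the helper and its name are ours, for illustration:

```python
# Supported (width, height) pairs from the table above.
SUPPORTED_RESOLUTIONS = [
    (1024, 1024), (1152, 896), (896, 1152),
    (1216, 832), (832, 1216), (1344, 768),
    (768, 1344), (1536, 640), (640, 1536),
]

def closest_supported(width: int, height: int) -> tuple[int, int]:
    """Return the supported resolution with the nearest aspect ratio."""
    target = width / height
    return min(SUPPORTED_RESOLUTIONS, key=lambda wh: abs(wh[0] / wh[1] - target))

print(closest_supported(1920, 1080))  # -> (1344, 768)
```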



## Training and Hyperparameters

Animagine XL 3.0 was trained on 2x A100 80GB GPUs for 21 days, or over 500 GPU hours. The training process encompassed three stages:

- Base:
  - Feature Alignment Stage: utilized 1.2M images to acquaint the model with basic anime concepts.
  - Refining UNet Stage: employed 2.5k curated images to fine-tune only the UNet.
- Curated:
  - Aesthetic Tuning Stage: employed 3.5k high-quality curated images to refine the model's art style.



### Hyperparameters

| Stage | Epochs | UNet Learning Rate | Train Text Encoder | Text Encoder Learning Rate | Batch Size | Mixed Precision | Noise Offset |
|-------|--------|--------------------|--------------------|----------------------------|------------|-----------------|--------------|
| Feature Alignment Stage | 10 | 7.5e-6 | True | 3.75e-6 | 48 x 2 | fp16 | N/A |
| Refining UNet Stage | 10 | 2e-6 | False | N/A | 48 | fp16 | 0.0357 |
| Aesthetic Tuning Stage | 10 | 1e-6 | False | N/A | 48 | fp16 | 0.0357 |



## Model Comparison

### Training Config

| Configuration Item | Animagine XL 2.0 | Animagine XL 3.0 |
|--------------------|------------------|------------------|
| GPU | A100 80G | 2 x A100 80G |
| Dataset | 170k + 83k images | 1,271,990 + 3,500 images |
| Shuffle Separator | N/A | True |
| Global Epochs | 20 | 20 |
| Learning Rate | 1e-6 | 7.5e-6 |
| Batch Size | 32 | 48 x 2 |
| Train Text Encoder | True | True |
| Train Special Tags | True | True |
| Image Resolution | 1024 | 1024 |
| Bucket Resolution | 2048 x 512 | 2048 x 512 |

Source code and training config are available [here](https://github.com/cagliostrolab/sd-scripts/tree/main/notebook).



## Limitations

While Animagine XL 3.0 represents a significant advancement in anime text-to-image generation, it's important to acknowledge its limitations to understand its best use cases and potential areas for future improvement.

1. **Concept Over Art Style Focus**: The model prioritizes learning concepts rather than specific art styles, which might lead to variations in aesthetic appeal compared to its predecessor.
2. **Non-Photorealistic Design**: Animagine XL 3.0 is not designed for generating photorealistic or realistic images, focusing instead on anime-style artwork.
3. **Anatomical Challenges**: Despite improvements, the model can still struggle with complex anatomical structures, particularly in dynamic poses, resulting in occasional inaccuracies.
4. **Dataset Limitations**: The training dataset of 1.2 million images may not encompass all anime characters or series, limiting the model's ability to generate lesser-known or newer characters.
5. **Natural Language Processing**: The model is not optimized for interpreting natural language, requiring more structured and specific prompts for best results.
6. **NSFW Content Risk**: Using high-quality tags like `masterpiece` or `best quality` carries a risk of inadvertently generating NSFW content, due to the prevalence of such images in high-scoring training datasets.

These limitations highlight areas for potential refinement in future iterations and underscore the importance of careful prompt crafting for optimal results. Understanding these constraints can help users better navigate the model's capabilities and tailor their expectations accordingly.



## Acknowledgements

We extend our gratitude to the entire team and community that contributed to the development of Animagine XL 3.0, including our partners and collaborators who provided resources and insights crucial for this iteration.

- **Main**: For the open-source grant supporting our research, thank you so much.
- **Cagliostro Lab Collaborators**: For helping with quality checking during pretraining and curating datasets during fine-tuning.
- **Kohya SS**: For providing the essential training script and merging our PR on `keep_tokens_separator` (the Shuffle Separator).
- **Camenduru Server Community**: For invaluable insights, support, and quality checking.
- **NovelAI**: For inspiring how to build the datasets and label them using tag ordering.



## Collaborators



## License

Animagine XL 3.0 now uses the [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd), compatible with Stable Diffusion models. Key points:

1. **Modification Sharing**: If you modify Animagine XL 3.0, you must share both your changes and the original license.
2. **Source Code Accessibility**: If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.
3. **Distribution Terms**: Any distribution must be under this license or another with similar rules.
4. **Compliance**: Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.

The choice of this license aims to keep Animagine XL 3.0 open and modifiable, aligning with the open-source community spirit. It protects contributors and users, encouraging a collaborative, ethical open-source community. This ensures the model not only benefits from communal input but also respects open-source development freedoms.

For further details and inquiries, refer to the official documentation and community channels.
