
codinginsomnia / digihuman-avatar


The DigiHuman project combines digital technology with human behavior analysis, using deep learning to interpret movements and facial expressions from digital data. It has applications in human-computer interaction, gaming, healthcare, and more, improving experiences and outcomes through intuitive interfaces and personalized interventions.

Languages: C# 61.77%, ShaderLab 9.39%, HLSL 1.43%, Dockerfile 0.02%, Python 27.38%, Shell 0.01%

digihuman-avatar's Introduction

Installation:

Backend Server Installation:

1. Install MediaPipe for Python:

   ```
   pip install mediapipe
   ```

2. Install OpenCV for Python:

   ```
   pip install opencv-python
   ```

3. Navigate to the backend directory and install the remaining requirements:

   ```
   pip install -r requirements.txt
   ```

4. Download the pre-trained generator model for the COCO dataset and place it in backend/checkpoints/coco_pretrained/.

Unity3D Installation:

1. Download and install UnityHub.
2. Add a new license in UnityHub and register it.
3. Install a Unity Editor inside UnityHub (an LTS version newer than 2020.3.25f1 is recommended).
4. In the Unity project's Player settings, allow HTTP connections.

Usage:

1. Run the backend server from the backend directory:

   ```
   python server.py
   ```

2. Open the Unity project and load the main scene at Assets\Scenes\MainScene.unity.
3. Test the program by uploading videos to the backend from the Unity project (you can also try the application by selecting one of the provided animations from the right-side menu).
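Video upload is normally driven by the Unity client, but the request it sends is ordinary multipart/form-data over HTTP. The sketch below builds such a body using only the standard library; the field name "video" is an assumption — check server.py for the actual route and expected form field.

```python
import io
import uuid

def build_multipart(field_name, filename, payload):
    """Build a multipart/form-data body for uploading a video file.

    Returns (body_bytes, boundary). The field name used by the real
    server is an assumption; consult server.py for the actual one.
    """
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        f'Content-Disposition: form-data; name="{field_name}"; '
        f'filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n".encode()
    )
    body.write(payload)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return body.getvalue(), boundary
```

The returned body can be sent with http.client or urllib by setting the Content-Type header to `multipart/form-data; boundary=<boundary>`.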

Adding New 3D Characters:

1. Find a 3D character model in the Unity Asset Store or download a free one.
2. Set the rig to Humanoid in the character's import settings.
3. Drag and drop the character model onto the CharacterChooser/CharacterSlideshow/Parent object in the Unity main scene.
4. Add the BlendShapeController and QualityData components to the character object in the scene.
5. Set the BlendShapeController values and assign the character's SkinnedMeshRenderer to the BlendShapeController component.
6. Add the character to the nodes property of the CharacterSlideshow object.
7. Run the application and select the character to render animation.

Features:

- Full-body animation.
- Animating multiple blendshapes on a 3D character (up to 40 blendshape animations are currently supported).
- Support for any 3D model with a Humanoid T-pose rig.
- Exporting animation to a video file.
- Saving animation data and re-rendering it later.
- Filtering MediaPipe outputs to remove noise and improve smoothness (low-pass filtering is currently used).
- Animating the character's face in great detail.
- Training a regression model to generate blendshape weights from the output data of MediaPipe FaceMesh (468 points).
- Using StyleGAN techniques to replace the whole character face mesh.
- Automatic rigging for 3D models without a humanoid rig (using deep neural network models such as RigNet).
- Generating a complete character mesh automatically using models such as PIFuHD (in progress!).
- Animating the 3D character's mouth in great detail using audio signals or natural language processing methods.
- Generating a complete environment in 3D.
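The low-pass filtering mentioned in the features can be sketched as a simple exponential moving average over per-frame landmark values. This illustrates the idea rather than reproducing the project's exact filter; the alpha value here is an arbitrary choice.

```python
def low_pass(samples, alpha=0.5):
    """Exponential moving-average low-pass filter.

    alpha in (0, 1]: smaller values smooth more but lag behind the
    signal. Applied per coordinate, this suppresses frame-to-frame
    jitter in landmark positions at the cost of some responsiveness.
    """
    filtered = []
    prev = None
    for x in samples:
        prev = x if prev is None else alpha * x + (1 - alpha) * prev
        filtered.append(prev)
    return filtered
```

For example, `low_pass([0.0, 1.0, 1.0], alpha=0.5)` yields `[0.0, 0.5, 0.75]`: a sudden jump in the input is spread over several frames, which is the smoothing effect applied to noisy per-frame pose estimates.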

digihuman-avatar's People

Contributors

codinginsomnia

