🛠️WIP expanded🛠️
Further experimenting with face-tracking and PNG/MP4/GIF-tubing by combining OBS & Live2D to communicate. Voice detection now drives the mouth. It's still a bit buggy, but the idea gets more real with every iteration.
Lots of potential going forward.
Comments
Again, if you're curious about the behind-the-scenes, look here:
https://ko-fi.com/post/Face-Tracking-PNG-Tuber-model--Behind-The-Scenes-V7V11AGWQK
Have you seen Pixel Match Switcher? I don't know if it would perform better or worse than chaining chroma keys, but you could use the pixel match to read the color of the signal and switch to the right OBS scene.
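The suggestion above boils down to: each source scene embeds a small solid-color "signal" swatch, and a matcher samples that pixel and switches to whichever OBS scene its color maps to. A minimal sketch of that matching logic, with made-up scene names and signal colors (the actual Pixel Match Switcher plugin does this inside OBS; this just illustrates the idea):

```python
# Hypothetical mapping from signal color -> OBS scene name.
# These names and colors are examples, not real config.
SCENE_COLORS = {
    (255, 0, 0): "Scene - Happy",
    (0, 255, 0): "Scene - Neutral",
    (0, 0, 255): "Scene - Angry",
}

def match_scene(pixel, tolerance=30):
    """Return the scene whose signal color is closest to `pixel`,
    or None if no color is within `tolerance` on every channel."""
    best, best_dist = None, None
    for color, scene in SCENE_COLORS.items():
        if all(abs(p - c) <= tolerance for p, c in zip(pixel, color)):
            # Squared distance just to break ties between close matches.
            dist = sum((p - c) ** 2 for p, c in zip(pixel, color))
            if best_dist is None or dist < best_dist:
                best, best_dist = scene, dist
    return best
```

The tolerance matters because capture and compression shift colors slightly, so an exact RGB compare would miss; the same reason chroma keys need a similarity slider.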
👀👀
Vtuber models should (imo) always express the emotions of the user. And if it's face-tracked, IT SHOULD track when I get happy, angry, or sad. And no button pressing either… But how?
Hopefully as the days go by I’ll answer that question
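The post doesn't claim an answer yet, but one buttonless approach would be to threshold the parameters the face tracker already outputs into discrete emotion states. A rough sketch, assuming the tracker reports normalized values in roughly [-1, 1]; the parameter names below are hypothetical, loosely modeled on typical Live2D-style params:

```python
def classify_emotion(params):
    """Map raw tracker parameters (dict of name -> value) to a
    coarse emotion label. Thresholds here are placeholder guesses
    that would need tuning per face and per tracker."""
    smile = params.get("MouthSmile", 0.0)
    brow_down = params.get("BrowDown", 0.0)
    mouth_open = params.get("MouthOpen", 0.0)

    if brow_down > 0.5:          # furrowed brows dominate
        return "angry"
    if smile > 0.4:              # clear upward mouth curve
        return "happy"
    if smile < -0.4 and mouth_open < 0.2:  # frown, mouth closed
        return "sad"
    return "neutral"
```

In practice you'd also want smoothing (e.g. only switch state after the reading holds for a few frames), or the model would flicker between expressions on every tracking jitter.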
Does this mean you don't have to do something like this for Live2D's parameters?
(Regardless of how you did it, this looks phenomenal!!!)