Mocap

Has anyone in ZA attempted using motion capture for character animation?
What hardware did you use, how was the experience (particularly integrating this into your workflow), and what would you do differently?

Comments

  • We've attempted to use mocap in a number of our projects, with mixed results. Given the amount of touch-up we had to do, it often ended up being better to just hand-animate. To be fair, that was some time ago, and the workflow may have improved since.

    Ideally, I think @counterspell should respond; he'd have the most useful information on it for you. He's away for a bit, but I've pinged him to comment here when he's back.
  • Thanks for the reply, @mattbenic.
    Out of interest, what equipment were you using?
  • @NickCuthbert I think the industry standard is Autodesk's MotionBuilder.
    I've had the misfortune of cleaning up mocap data before. The captured data isn't very accurate, and the cleanup really takes the fun out of creating animation, because it becomes a tedious technical exercise instead of a creative one. I, for one, can't wait for the robots to take this job away from us.
  • @Pomb Hmm, that's a little disappointing. Were you cleaning up data from an optical system, an IMU-based one, or did you go DIY and use something like a Kinect?

  • For one project we commissioned mocapped animations from a local studio with a professional rig; we ended up using just a handful of those animations, if I remember correctly (again, it was a while ago). We've also tried a DIY Kinect-based solution.
  • The reason I'm interested is that more studios/engines seem to be taking a data-driven approach to animation (https://m.youtube.com/watch?v=KSTn3ePDt50# and also https://blogs.unity3d.com/2018/06/20/announcing-kinematica-animation-meets-machine-learning/)

    Motion matching seems like it could really improve the responsiveness of character locomotion. I'm wondering whether the data needs to be that clean for this kind of use case.
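    (For the curious: at its core, motion matching is a nearest-neighbour search over pose and trajectory feature vectors. Here's a minimal sketch; the database, weights, and feature layout are all made up for illustration, not taken from any engine.)

```python
import numpy as np

# Hypothetical mocap database: one row of features per captured frame
# (e.g. joint positions, root velocity, sampled future trajectory).
rng = np.random.default_rng(0)
pose_db = rng.standard_normal((1000, 12))  # 1000 frames, 12 features each

def match_pose(query, database, responsiveness=1.0):
    """Return the index of the frame whose features best match the query.
    Weighting the trajectory features more heavily (responsiveness > 1)
    favours frames that follow the player's desired path."""
    weights = np.ones(database.shape[1])
    weights[-4:] *= responsiveness  # assume last 4 features describe trajectory
    dists = np.linalg.norm((database - query) * weights, axis=1)
    return int(np.argmin(dists))

# A query near frame 42 should still match frame 42 despite capture noise.
query = pose_db[42] + 0.01 * rng.standard_normal(12)
best = match_pose(query, pose_db)
```

    Noise in the data mostly shifts the distances rather than breaking the search outright, which may be part of why motion matching could tolerate dirtier captures than hand cleanup does; that's a guess, though.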
  • @NickCuthbert, I honestly don't know. We were just given the data to clean; I'm not sure what their capture pipeline looked like.
    Not sure if you've seen these guys, but they have something really interesting going on:
    https://www.youtube.com/watch?v=uFJvRYtjQ4c
  • That's really cool!
  • So, from my point of view, motion capture has not been worth doing in our projects.

    To get good-quality captures you need a LOT of money. The captures you get from cheaper alternatives take a lot of time to clean up and are very demoralizing for animators to work with.

    Then, assuming you get motion capture that you are happy with, you need to realize that you can't change it easily. If you are not happy with the exact way an animation reaches to pick up the gun, you will have to spend ten times as long changing the mocap as you would a keyframed animation, because every fraction of a second is filled with keyframes for every capture point.

    We have generally had to go back and tweak animation after the initial take many times over, and in that situation motion capture becomes a big drawback.

    If we could get our hands on tech like what Ubisoft demoed, that would change everything, but I'm not aware of anything that's public.

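    (To put numbers on the "every fraction of a second is filled with keyframes" point: mocap exports a key on every frame, and a usual first cleanup step is key reduction. A naive sketch of that idea, not any particular tool's algorithm:)

```python
import math

def decimate_keys(times, values, tol=0.001):
    """Greedily drop keys that linear interpolation between their
    neighbours already reproduces to within `tol`."""
    keep = [0]
    for i in range(1, len(times) - 1):
        t0, v0 = times[keep[-1]], values[keep[-1]]
        t1, v1 = times[i + 1], values[i + 1]
        # value at this key if we simply skipped it
        predicted = v0 + (v1 - v0) * (times[i] - t0) / (t1 - t0)
        if abs(values[i] - predicted) > tol:
            keep.append(i)  # this key carries real information
    keep.append(len(times) - 1)
    return keep

# A 2-second capture at 120 fps of a smooth motion: 240 keys in, far fewer out.
times = [i / 120 for i in range(240)]
values = [math.sin(t) for t in times]
kept = decimate_keys(times, values)
```

    Even after reduction, the surviving keys don't land where an animator would have placed them, which is a big part of why editing mocap stays painful.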
  • Thanks for the detailed answer, @Counterspell.

    So what I think you're saying is that for a mocap system to be viable, it needs to satisfy three criteria:
    1. Require little to no manual cleanup
    2. Be accessible (easy, fast and cheap to capture more motion, so that manual tweaking is less necessary)
    3. Be well integrated into the asset pipeline and engine (for example, the engine offering motion matching)

    Please correct me if I've misunderstood anything in your reply.