This week covers a reading of Edgar Allan Poe's classic poem Alone, read by Shane Morris (audio used is from the @BEKNOWN channel) with visuals by @playardstudios3161. The film uses @UnrealEngine's MetaHuman and @NVIDIAOmniverse's Audio2Face, and there are some impressive introspective looks achieved with the process... among a few other things we comment on, not least being Ricky's experience of reading poetry.
Speakers: Ricky Grove, Tracy Harwood, Damien Valentine (MIA zsOverman)
Editor/Producer: Ricky Grove
Music: Tattoo by CreateStudio (licensed)
What is And Now For Something Completely Machinima?
Machinima, real-time filmmaking, virtual production and VR. Four veteran machinimators share news, new films & filmmakers, and discuss the past, present and future of machinima.
Ricky Grove 00:10
Welcome, my name is Ricky Grove and this is the Completely Machinima podcast for everything machinima. We have reverted to the news format on our blog, and our regular podcast is covering films. We've done a series of films for November 2022, and this is our last one. It's a choice by Tracy. Please contact us if you have any thoughts or ideas about it at talk at completely machinima.com. Tracy Harwood is our guest and Damien Valentine is here. Hi, folks, how are you doing? Now, Phil Rice, who's a regular guest here, isn't with us because of the hurricane down in Florida. It's caused limited or no internet access, so he's been unable to participate. We wish him and his family the best and hope he'll be back in December. Take care, Phil. Yeah, absolutely. All right, Tracy, you've got a very interesting choice. You already had a terrific choice with your Unreal sort of demo story. What is this one? It's also an Unreal film, right?
Tracy Harwood 01:19
It's Unreal, absolutely, yeah. This one's called ALONE and it's by Playard Studios. It was released on the 26th of October, so it's only just out. It's described as a metaverse reading of Edgar Allan Poe's iconic poem, and it uses a reading taken from the BEKNOWN YouTube channel, which does readings of iconic texts by well-known actors. This one's been done by Shane Morris, a professional actor, and someone whom you may have heard on films and adverts over the years. If I had to characterise his voice, I'd say it's a cross between Carl Sagan and James Earl Jones's Darth Vader. Very deep and chocolatey, a really lovely combination. Right, so this machinima draws on this narration of a classic poem and presents a character that might say those words. If I had to imagine a character, I might not have actually come up with the visualisation that Playard Studios has, but it nonetheless works, at least for me; I get a real sense of deep loneliness from this character. And remember that the focus is on the MetaHuman, because it's actually a test of the MetaHuman tech, with some interesting assets in the background and an environment which portrays a kind of old family house, perhaps handed down through the years until finally this person is responsible for it. I must say the mouth movement is much better than some we've seen, and the teeth are nowhere near as denture-like, as I've commented on in the past. In fact, interestingly, they're a little old and jaded looking, just as this character is: a little bit uneven, yellowed in places, actually quite in sync with the character. I thought that was really quite well done. And the character has this kind of misty look in his eye, a real sort of bright glisten, which maybe is a little bit out of sync with an older person. It's something that I reflected on a little bit when we talked about The Remnants that Damien picked earlier in the month.
I don't know, maybe you'd normally see thread veins and bloodshot whites, I don't know. His hair, the hair on his face, the kind of orange-peel effect on the skin and the light in the eyes: all of that, I think, is really astonishing detail, absolutely incredible detail. But what caught my attention with this is the quality of the introspection portrayed by the character, which is kind of mesmerising. For example, the poem talks about the lightning and the storm, and his head's moving in the general direction where the storm is unfolding around him, but he doesn't really focus on any aspect of it; you can see that his focus is kind of inwards. I don't know how I know that, but it just feels that it is inward looking, where this guy's, you know, picking on a memory, looking into a memory. I think that is a very interesting effect to have achieved through the telling of this tale, the reading of this poem. And I think, for me, the only thing that lets it down a little bit is actually the other characters, the wolf and the crow; there's detail there, but it's nothing like as polished as this man. And the storm and the shadows are all great. I think it's interesting to hear all the dark kind of sound effects: you know, the flapping wings, the howling of the hound, the wind, the storm breaking around the character, and all the play with the light. It definitely comes over as a dark and otherworldly place. But the sense of aloneness is maybe made less convincing by the fact that there are these other creatures there, so they're kind of a collection of things together. So for me, it mostly works as a portrayal of the poem, but some further thought might have improved that connection between the visuals and the poem even more. What do you guys think?
Damien Valentine 05:41
Well, you mentioned the environment being such a dark place; the other aspect of that is, you know, there are old films and TV shows where you get someone sat by a fireplace reading a story, and this has kind of got an element of that, which makes it feel cosy rather than isolated. I'm not sure whether that's on purpose or not, but I do really enjoy it. I think it's visually stunning, and a lot of work has been put into the character: his face, the way his lips move and, as Tracy points out, the rest of his facial expressions as well. I believe it uses the NVIDIA Audio2Face tool.
Ricky Grove 06:31
Oh, oh, really? Yeah.
Damien Valentine 06:35
I haven't looked too much at that piece of software; it's something I need to look into properly. And you can see the results of it here: he looks very lifelike, and he animates well. You see a lot of animated characters where the mouth might move, but they forget to animate the rest of the face along with it, and I think this shows what you get if you put more effort into animating the whole face, because everyone's face moves. Like, my eyebrows are going up like that, and if you watch any of us talking now on the video, our faces are moving. And with this guy, I think there's a lot of evidence of that effort, and it really shows and makes the narration more effective. It makes him interesting to watch; it really pays off. And you've got little other aspects as well: like, my glasses have got the reflection of the lights around me, and his are doing exactly the same thing in the video, which shows the visual quality of it. Wherever you are in the poem, it's really well acted. I enjoyed it. I think Tracy's right that the animal creatures aren't as detailed as the human character; I guess, due to the tools and character models available, a human is always going to be more detailed than non-human, you know, animal characters. I really enjoyed it.
Ricky Grove 08:05
Cool. You know, before I make my comments, I'd like to point out that the essential task of anything that you create is to have the audience suspend their disbelief and believe in what they see as a story, or as a place, or as a creation. If you're unable to do that, it makes it very hard not to look at all of the flaws and problems, and you just can't go along with it. While I think the great majority of people would watch this short film and suspend their disbelief, I couldn't, and it meant that I couldn't finish it. I'll tell you why; there were two reasons. One is that the lip sync on it had this uncanny valley quality to it. Although the lips were synced, there were some strange things going on with the jaw that caused it to keep saying, "I'm a 3D model. I'm a 3D model. I'm a 3D model." I could never see this as a character, which made it very hard to follow. The second thing, and this is probably just me, is that there's something that has always bothered me about when people read poetry. That is, you see each line of the poem as a line of dialogue, and you read to the end of the line and you pause, then you go to the next line and you pause, then you go to the next line and you pause. Poetry isn't written like that. Many poems have run-on sentences, which means the thought doesn't finish until you get to the next period. You know what I mean? This actor made that classic mistake of reading each line and stopping, each line and stopping, each line and stopping. And that just drives me crazy. It's a particular prejudice, or observation, or annoyance that I've had ever since I started learning how to recite poetry. Thank God he didn't do the chanting, you know, like Dylan Thomas [imitates sing-song recitation]. He didn't do that, so I was really happy about that.
But those two things kept me from really seeing it and finishing it. I just didn't want to finish it. But I do acknowledge that the technology, aside from the lip syncing, was very interesting. However, I think the actual motion capture was better in The Remnants, our previous film. Of course, that didn't have talking, so you would never get that uncanny valley with the lip sync, but the expressions so fit each moment in the scene in The Remnants, whereas this one did seem kind of out of sync with the poem. You know what I mean? That, combined with the sort of recitation, stopping at the end of each line, made it really hard for me to get into it. So I didn't like it, and although I recognise it as an interesting piece of technology, it just didn't work for me.
Tracy Harwood 11:56
Really interesting points, Ricky, thank you for that. Sure.
Damien Valentine 12:00
I was just watching it again while you were talking about the lip sync, to see if I could pick up on what you're talking about, and I can kind of see it. It's hard to explain exactly what it is, but I almost want to say the lips are moving too much. Obviously, when it's a real person, like me talking now, my lips are moving quite a bit. But with an animated character, and I'm thinking about my own project when doing lip sync, when I use the tools in iClone to put in the dialogue, the initial result is that they move too much, because the lip sync is trying to pronounce every single letter of the word. So I turn that down, so the lips aren't moving quite like this, they're moving like this, and it feels more natural, even though if it was a real person saying it, their lips would be moving a lot more.
Ricky Grove 12:58
I think you really set yourself a challenge, which I think is what they embraced, when you create a headshot scene and the actor is, as Tracy pointed out, being internal. You really challenge yourself to get every aspect of that right, because from birth we focus on our mother's face, and we learn to read the signs of faces very quickly. I think viewers watching it are sophisticated; no matter who they are or what age they are, they're sophisticated at reading faces. And if there's the slightest problem with the face, unless it's highly stylized, it can really throw you off. One of my problems in games, for example, is when they have an NPC talking, they don't want to spend the extra effort of creating a lip sync, so they just have this general mouth movement. And we accept it, because it's a game and no one expects full lip sync on that. But that always bugs me. It always sort of breaks the believability, the sense that you're talking to a character which has a history. I mean, they spend all this time creating a backstory for the character, setting up the NPC, creating motion capture, but they don't do lip sync on it. It's as if they just don't want to take the extra time, and it just ruins it for me. So I think it's important that you get the lip sync right.
Tracy Harwood 14:44
It's really interesting that you've picked up on that, because, for whatever reason, I was more focused on the eyes. And I think it's because, I don't know, they've obviously got some new tech that is making the eyes look more realistic: the brightness, this glistening wetness. I think it's really kind of intriguing, because it was so glistening it's the bit that I focused on the most; I don't really remember looking so much at the mouth. But also, I remember thinking that, you know, the way they put a beard on the character, that kind of covers a little bit of the mouth as well. So maybe that was a way of getting around some of the faults that they picked up as they created it, I don't know. But I was definitely not focused so much on the mouth with this one.
Ricky Grove 15:48
I get it, I get it. I think Playard Studios is a professional-level company that produces professional-level results, so I'm just a tiny bit surprised that they didn't, you know, work on the lip sync and facial animations a bit more. I think it may also be an inherent problem in Audio2Face; they may need to do more development. The NVIDIA Omniverse Machinima tools are not as quickly developed, or in as heavy development, as some other aspects of Omniverse, because it's not really a money-making thing; they're doing it as a gesture towards the community, you know. And that's not to say that the people working on it aren't committed, because they are. But I think the emphasis is more on the commercial side: the ability for professional groups to work together to model and create scenes and graphics and things like that. I think they have a way to go to get to a higher level, like accents, you know? Anyway, interesting choice. I'm sorry I didn't glom on to it. I wanted to.
Tracy Harwood 17:15
It's always really interesting to hear your comments. Yeah, fascinating chatting to you about this one. Thank you.
Ricky Grove 17:23
I did a seminar, a participant seminar on reading poetry, not too long ago. And the biggest problem was this: going to the end of the line of the poem and stopping. It's what you're taught in school when you read poetry, and everybody says that's the way it's supposed to be. And it's ingrained; people do that and accept it, in general, when they recite poetry. But you have to remember that, if you look at the line, the sense of the line goes all the way through to the punctuation. So you have to look at the punctuation in the poem to tell you where to stop, or where to pause, rather than one line, pause, next line, pause, next line, pause, that kind of thing. It turns the poem into something more like speech than recitation, and I think that's how good poems should be read, you know? It makes them more interesting, and the ideas connect to each other. When I was in the master's degree programme at Yale, they called it imaging, meaning that you read through the complete image of the line. Each line has nouns and verbs, and you look through that to find what the essence of that line is about, and you speak that image. It's like the way Sinatra sings. When Sinatra sings, he doesn't go to the end of the verse and stop singing for a minute. He continues on all the way through, and he sings through it as a single expression. Anyway, I'll get off my soapbox there, but it's a fascinating thing. He's a good reader, obviously a wonderful voice, and skilled, but I wish he had taken that course; I think it would have changed the way he did the poem. All right, anything else on this at all you'd like to mention?
Damien Valentine 19:35
Not from me, I think we've covered it.
Ricky Grove 19:38
I'd like to point out once again that Unreal Engine and Nvidia's Omniverse are tending to dominate a lot of the interesting work that's being created. Omniverse is free, although at some point it's probably going to be charged for. Omniverse Machinima is part of the Omniverse platform; just check Nvidia's site and they'll give you a link for it. Audio2Face is also an interesting programme. They're a little bit clunky right now; the interfaces are a bit hard to use compared to Unreal Engine, which has everything laid out, and the tutorials on them are more buried than they are for Unreal. So you're a little bit handicapped there, but they're still there and you can learn them. They're interesting ways to do machinima. Well, that's it for this month. Thank you so much for watching and listening. If you have comments or ideas about some of the subjects we broached today, or if you think I'm a fool for not liking ALONE by Playard Studios, please write to us at talk at completely machinima.com. We hope to have Phil back in December. Best to you, Phil; get that internet back up, we need you. That's it. We'll see you next time. Bye bye.