The Drawing Process

The footage has been edited and assembled in the software. I begin with my usual systematic approach: first, identifying the areas of footage which are clearly visible and the areas which are not. Where the footage is difficult to render, a great deal of improvisation is involved. Facial areas usually pose the most problems, such as eyelashes, the corners of lips and hair curls. This is where line thickness is a big consideration. For these intricate areas, the footage is enlarged, which gives me greater control when rendering. For this animation I'm taking each sequence in stages. The first sequence involves 46 drawings. Frames 1 and 47 have been rendered and then textured. In my last animation, I textured the whole piece only after completing all 337 drawings. I'm hoping that by taking this approach, I can make better choices during the drawing process.
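For anyone curious about the enlargement step, here is a rough sketch of how a single frame region can be blown up for tracing, written in Python with OpenCV rather than in my actual drawing software; the file name and crop coordinates are placeholders only:

```python
import cv2

# Load a single exported frame from the edited footage (placeholder file name).
frame = cv2.imread("frame_001.png")

# Crop a facial region where the tricky details live: eyelashes, lip corners, hair curls.
# The coordinates below are arbitrary placeholders; in practice they come from eyeballing the frame.
y0, y1, x0, x1 = 120, 360, 200, 440
face_region = frame[y0:y1, x0:x1]

# Enlarge the region (here 4x) so the fine lines can be traced with more control.
# Cubic interpolation keeps the edges reasonably smooth at this scale.
enlarged = cv2.resize(face_region, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)

# Save the blow-up for tracing over in the drawing software.
cv2.imwrite("frame_001_face_4x.png", enlarged)
```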

Public Information Film Project: Frames 1 & 47 (non-textured/textured)  

Public Information Film Project (frame 1)

 

Public Information Film Project (frame 47)

Translucency (2015)

Inspired by bottled ornamental translucent fish being sold in a department store in Tokyo last summer, I began working on a side-project titled Translucency. Rob, a work colleague who also happens to be a visual artist (well known in Tokyo) and DJ, modelled for me one afternoon in a university classroom. The concept was to render the human figure as a transparent image, allowing the classroom environment behind it to remain visible, so that the viewer senses moving backgrounds and translucent foregrounds. I opted for a hallucinogenic-esque classroom. My rationale drew on literature, sartorialism and music: I had read an article about Timothy Leary around the same time, and with Rob's appearance and his being heavily into late-60s prog rock, it all seemed to tie in. However, over the past 11 months I have been far too busy on my course, so I had to shelve the 39-second (475 drawings) animation. Last month, while waiting in Dubai airport for nine hours, I started tinkering with it again. So far: 6.5 seconds (79 drawings).
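As a rough illustration of the translucency idea (not the hand-drawn process itself), here is a minimal alpha-blend sketch in Python with OpenCV; the file names and the 40% opacity are placeholder assumptions:

```python
import cv2

# Placeholder file names: a rendered figure frame and the classroom background behind it.
figure = cv2.imread("figure_frame_081.png")
background = cv2.imread("classroom_frame_081.png")

# Match the background size to the figure frame in case they were exported at different sizes.
background = cv2.resize(background, (figure.shape[1], figure.shape[0]))

# Blend at roughly 40% figure / 60% background so the room stays visible through the body.
alpha = 0.4
composite = cv2.addWeighted(figure, alpha, background, 1 - alpha, 0)

cv2.imwrite("translucency_081_composite.png", composite)
```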

Translucency Test 1 (6.5 seconds)

Frame 81

Translucency (frame 81)

 

‘Cicadas, Crackling and Popping Wet Wood and Lost Laughter in the Breeze’

As well as being glued to the cricket, I spent yesterday afternoon designing the audio in Audacity for the rotoscope animation from my last project, Don't Have Nightmares 0.2: The Tokyo Underground. The audio is made up of an eclectic mix of ambient sounds. The train carriage recording serves as the base layer, with ambient sounds such as cicadas, crackling and popping wet wood and lost laughter in the breeze arranged to fade in and out at various points. In the past, I've found that when experimenting in Audacity it's very easy to over-tweak, cut too much, over-amplify, suffocate the sound with heavy effects and end up with a cacophony. While editing yesterday, simplicity was key, with only subtle adjustments. I'm pleased with the overall arrangement, and finding my way around Audacity with more ease is encouraging.
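Audacity itself is all point-and-click, but the layering idea looks roughly like this as a sketch in Python with pydub; the file names, gain reductions and timing positions are placeholders, not my actual mix:

```python
from pydub import AudioSegment

# Placeholder file names for the source recordings.
train = AudioSegment.from_file("train_carriage.wav")      # base layer, runs the full length
cicadas = AudioSegment.from_file("cicadas.wav")
wet_wood = AudioSegment.from_file("wet_wood_crackle.wav")
laughter = AudioSegment.from_file("lost_laughter.wav")

# Keep the additions quiet relative to the base (gain in dB), and let each one
# fade in and out rather than cutting in abruptly: the "subtle adjustments" idea.
cicadas = (cicadas - 12).fade_in(2000).fade_out(2000)
wet_wood = (wet_wood - 15).fade_in(3000).fade_out(3000)
laughter = (laughter - 18).fade_in(1500).fade_out(1500)

# Overlay each ambient layer onto the train base at (placeholder) points along the timeline.
mix = train.overlay(cicadas, position=5_000)
mix = mix.overlay(wet_wood, position=20_000)
mix = mix.overlay(laughter, position=45_000)

mix.export("tokyo_underground_ambience.wav", format="wav")
```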

Alienation & Conformity (2015)

DATA UPDATE MARCH 2017

ALIENATION & CONFORMITY (2015) Rotoscope animation, 29 seconds, 337 drawings

The Tokyo Underground was a site-specific project made between February and July 2015 as part of a series (Don't Have Nightmares) which explores aspects of fear in art. The resulting rotoscope animation, Alienation & Conformity, is a personal interpretation of how fear pervades daily urban life while living and working in alien environments. The animated passenger pan is a comment on emotions and anxieties endured over a period of time. The journey begins with a feeling of trepidation, of being watched, analysed and placed under scrutiny; over time the tension begins to diminish, though the fear still lies underneath. Though the work is largely a personal experience, it is also a comment on how an economic and political system can be ruthlessly exposed and pushed into the psyche of its inhabitants: the many caught up within these brutal capitalist parameters, all carrying the same flag in pursuit of profit, wealth and material gain.

 

The Tokyo Underground: Project Presentation (June 2015)

My last project (Don't Have Nightmares 0.2: The Tokyo Underground) was due to be shown to my peers earlier this month. Unfortunately, due to time limitations, the project didn't get a critique (gutted!).

Final Reflections: To be honest, I don't think it's an accurate representation of the project. The main aim was to capture the fear of confined spaces on the Underground, but filming was so problematic at times that I ditched a lot of footage. In hindsight, I should have documented more of the downs, as opposed to the ups, on my blog. However, I'm pleased with the animation. I had little idea as to the end result, and that's the beauty of working with the medium. Tales of the Unexpected!

Sound: I've done quite a bit of experimenting with the audio in Audacity, using ambient train sound, crackling firewood and buzzing insects. The results aren't great given my technical ability, but I've assembled the audio the way I would like it.

PART ONE Duration: 3.00 (with voice-over)

The Tokyo Underground: Don't Have Nightmares 0.2 (Part One)

 

PART TWO Duration: 2.07 (with voice-over)

The Tokyo Underground: Don't Have Nightmares 0.2 (Part Two)

 

 

 

Alienation & Conformity Collage (2015)

Alienation & Conformity (frames 1-25 and 26-50)

 

 

Bruce Lee, Audrey Hepburn and the ethics of digital necromancy - Hannah Ellis-Petersen

The American actress Janet Leigh was born in 1927 and died in 2004 at the age of 77. My last project, Don't Have Nightmares 0.1, involved taking a segment of iconic cinematic footage from the film Psycho and animating approximately 28 seconds of it. In order to get a realistic likeness of Janet Leigh, I enlarged the film resolution, which enabled me to analyse information such as the light and shadow on her face, body and hair with greater visibility and clarity. However, I still needed still images and other footage of the actress, as the shower water made much of the original footage too blurry to animate. Using digital technology, I could scan a person as she was at 32 years old and manipulate her image with moderate success. It was the first time I had employed digital necromancy in an art project. It got me thinking: with more advanced software tools, a technically skilled team of animators, time and, of course, consent from the Leigh family or whoever holds her image rights, Janet Leigh could continue to star in films ten years after her death. It seems that resurrecting dead screen stars is becoming more prevalent in cinema and television these days. A few weeks ago I came across an engaging article in The Guardian (below) by Hannah Ellis-Petersen. I've read quite a few articles like this over the past year. It makes you wonder: with digital technology advancing all the time, will using human actors in films become an antiquated, archaic concept?

The Guardian, Saturday, 11th April 2015 

Bruce Lee, Audrey Hepburn and the ethics of digital necromancy

by Hannah Ellis-Petersen

Recent figures show posthumous earnings by celebrities from their likeness now exceeds £1bn, with some selling image rights before death
In Arthur C Clarke’s July 20, 2019: Life in the 21st Century, his 1986 novel speculating what a day in the 21st century might look like, Clarke envisions a cinema listing of the future.
“Still Gone with the Wind: The sequel picks up several years after where the 80-year-old original left off, with Rhett and Scarlett reuniting in their middle age, in 1880. Features the original cast (Clark Gable, Olivia de Havilland, and Vivien Leigh) and studio sets resurrected by computer graphic synthesis. Still Gone sets out to prove that they do make ‘em like they used to.”
Clarke’s book was pure science fiction, but almost 30 years later his predictions have proved prescient. Death, once the finite end to a celebrity career, is now only a marker for the next stage, and digitally resurrected celebrities – be they Paul Walker or Audrey Hepburn – are now posthumously making their way back onto our screens.

But such digital necromancy is raising concerns. It was announced at the end of March that plans are in the works to digitally insert Bruce Lee, 42 years after his death, into Ip Man 3, the third film in a series about his former teacher. It’s not the first time computer graphics (CG) technology has been used to bring the martial arts star back to life on screen – his digitally reanimated figure recently starred in an advert for Johnnie Walker Blue whisky. However, the Bruce Lee estate is now seeking legal action to prevent his CG likeness appearing in the film, with their lawyer stating the family are “justifiably shocked” at the idea.
It is perhaps to stop such situations that Robin Williams, it was revealed last week, signed a deed to prevent his image, or any likeness of him, being used at least 25 years after his death. It restricts any posthumous exploitation of the actor, be it through the use of CG to digitally resurrect him in Mrs Doubtfire 2 or as a live hologram performing comedy on stage – something that the advancement of technology has made an increasingly likely occurrence.
Indeed, recent figures have shown that the posthumous earnings made by celebrities from their image or likeness alone now exceeds £1bn, with some, such as Muhammad Ali, even selling their image rights before death so they can reap the profits while still alive.
While the practice has mainly been restricted to finishing off performances of actors who died midway through filming – such as Paul Walker in Fast and Furious 7 – it has also been utilized by advertisers, keen to attach famous faces to their brands. Most notable is the recent reanimation of Audrey Hepburn in an advert for Galaxy chocolate.
Mike McGee, the co-founder and creative director of Framestore, the special effects studio who won an Oscar for Gravity, was in the team responsible for the Audrey Hepburn reanimation and said it still required “vast” amounts of work to make the replicas appear alive. However, he predicted the phenomenon of reviving dead celebrities was only just beginning.
It took Framestore four months of work to create the lifelike Audrey Hepburn, for just 60 seconds of advert, and managed it by using a combination of old photographs and a body double to build an accurate CG digital form of everything from her skin to her eyelashes – even going on location to get the lifelike light and shadow.
“We found that we could create a realistic still image of Hepburn quite quickly but as soon as she has to move, turn her head or open her mouth, that’s when things can start to look uncanny, when things don’t look 100% real,” he said.
“The human eye can spot it because we’re so used to looking at our own reflection, so we subconsciously know all those tiny details and it’s that final 5% of realism that takes the most time to achieve. It’s all about getting the moisture in the eyes to look right, getting the eyelids to flutter correctly when someone blinks, the corner of someone’s lips to turn up a little just before they smile, because it’s those subtle signals and movements that make a great performance by any actor. And to ask an animator to copy that onto a computer model and capture a human performance is really challenging.”
He added: “I do think this will happen more and more. As the technology develops, I see no reason that in the future we wouldn’t see a CG performance by a dead actor up for a Bafta or an Oscar.”

Don’t Have Nightmares 0.1 (2015)