
Shareville

January 22, 2013 2 comments

Shareville is an online learning resource developed by the e-learning team at Birmingham City University. In essence, it is a virtual town with a hospital, university, school, law practice and care home. Environments are presented as 3D panoramas that students can explore, accessing material relevant to their area of study.

The Shareville website, showcasing a patient in the virtual hospital

Much of this comprises video scenarios: when a character in a location is clicked, the user is presented with a short video showcasing the character’s circumstances, followed by questions about it. Different answers lead to different consequences, again shown as video. Quite complex scenarios can be simulated and decisions explored.

A clip from a Shareville video scenario

As well as videos, other resources can be accessed including documents, pictures and websites. For example, a room in the care home may have several characters each with their own situation, a filing cabinet full of relevant documents, a telephone with answer phone messages on it and so forth. It certainly leads to a more immersive and engaging way of learning.

Use at City

The School of Health Sciences at City University London has recently started using the system in its own teaching and, working with the MILL, developed its own scenarios to populate one of the rooms in Shareville’s Children’s Health Centre. These scenarios are aimed at student nurses and showcase a teenage girl with mild autism, a toddler with Down’s syndrome and a young boy with hyperactivity. The scenarios explore how a nurse should deal with patients and their parents, who often need most of the help and support.

Children with their parents and the nurse

The videos were filmed in the MILL’s TV studio against blue screen – the idea being that the characters can be placed into the computer generated rooms of Shareville. There’s a good reason for this – some of the locations, such as care homes and hospitals, are impossible to film in and so a virtual computer-generated set is the only solution.

Filming against the TV Studio’s blue screen

They say never work with children or animals but, as it turned out, the filming went without issue and all the children, none of whom were professional actors, did an admirable job on the day.

Several issues did arise, though. Although the MILL has a brand-new blue screen, it was quite difficult to light evenly and we had to combat several dark shadows under tables and chairs. The room also had quite an echo that gave the sound a boomy quality. One way to combat this would have been lapel microphones, but these would have had to be hidden under clothing, with the risk of rubbing. Plus, toddlers and microphones are not the best combination!

Panasonic GH2

It was also a good test for the MILL’s new camera, a Panasonic GH2 dSLR. The video it recorded was of impressive quality, but one big issue did arise. The camera records video with 4:2:0 chroma subsampling, which means the colour information is stored at a much lower resolution than the brightness information. Although the footage looked fantastic, when the blue background was removed the edges of characters and furniture became quite blocky and not particularly sharp. Getting it to look good took a lot of tweaking with the blue-screen filter. One way to avoid this would have been an external recorder, using a format with higher colour detail, say 4:2:2, which would have led to much smoother edges. Still, considering these limitations, the final footage looked really good.

At the moment, the video is with Birmingham City University ready to be dropped into the computer-generated environment, but see below for a still before and after blue screen removal.

Before and after blue screen removal

We hope that in future more scenarios can be filmed for use in the school, and perhaps other schools can find a use for what seems to be an innovative and useful teaching resource.


Chromakey

November 26, 2012 Leave a comment

Chromakeying is a technique where a particular colour in video footage is removed and replaced with another video or image. Examples include weather broadcasts, where a plain background behind the presenter is replaced with a weather map. The colour removed can be any hue, but typically it is blue (blue screen) or, more commonly, green (green screen). The reason for blue and green is simple – skin doesn’t contain any blue or green tones and so remains unaffected by the colour removal. Of course, it’s important to ensure any presenters or actors are not wearing garments the same colour as the background, or these will magically disappear too! There are special blue or green suits for actors who need to be removed from the scene – typically they are used to secretly direct animals or hold props in mid-air for magical effects.

a dog against blue screen

blue removed and a new background added

Blue and green screens both work very effectively, but green tends to be the better choice, for several reasons. Firstly, green is a much brighter colour, and software finds it easier to identify and remove, particularly when it isn’t lit very well. Secondly, cameras tend to be more sensitive to green and set aside more room to record green information. And thirdly, people tend to wear more blue clothing than green, so green is a little more practical. The Media & Innovation Lab (MILL) has recently upgraded its studio chromakey with a very pure blue hue, which works much better than the old paint, a standard shade from a hardware store. The reason blue was chosen was quite simple – green is quite garish and, as the studio walls are often used simply as a plain background, it seemed sensible to stick with blue.

Most video software has some kind of keying filter for removing blue/green backgrounds, and these keep getting better. Final Cut Pro X has a particularly effective one that does a remarkable job, even on default settings. The general rule for good chromakey is to make the blue or green as bright and as consistently lit as possible, and to keep the foreground objects sharp and well defined. Blurred images or fast-moving objects (which cause motion blur) can give poor results – the software finds it difficult to decide exactly what is blue/green background and what is foreground. Fine hair causes similar issues.
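At its simplest, a keyer just measures how close each pixel’s colour is to the screen colour and swaps the close ones for the new background. The sketch below shows that idea in a few lines of Python with NumPy – it is a hard-threshold toy, not what Final Cut Pro’s keyer actually does (real keyers soften the mask edge and handle spill), and the function name and default values are my own illustration.

```python
import numpy as np

def chroma_key(frame, background, key_rgb=(0, 60, 220), threshold=100):
    """Replace pixels close to the key colour with the background.

    frame, background: HxWx3 uint8 arrays; key_rgb: the screen colour.
    A hard threshold like this gives the blocky edges described above;
    real keyers blend the edge pixels instead.
    """
    diff = frame.astype(np.int32) - np.array(key_rgb, dtype=np.int32)
    distance = np.sqrt((diff ** 2).sum(axis=-1))      # per-pixel colour distance
    mask = (distance > threshold)[..., np.newaxis]    # True where foreground
    return np.where(mask, frame, background).astype(np.uint8)
```

Anything further than `threshold` from the key colour is treated as foreground and kept; everything else is replaced, which is exactly why an evenly lit, very pure screen colour makes the software’s job easier.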

blurring problem: what’s foreground and what’s background?

Another issue is the video recording itself – most cameras record the brightness and colour information at different qualities, so although there may be lots of detail in the picture, the colour information is much cruder, and this affects the ability to isolate foregrounds from coloured backgrounds. High-end cameras record brightness and colour at similar qualities, which leads to much finer removals.
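The effect of that cruder colour channel is easy to simulate. The toy function below subsamples a one-dimensional row of colour values the way 4:2:0-style recording does – averaging neighbouring samples and stretching the average back out – and shows how a hard colour edge that falls mid-block gets smeared into an in-between value, which is what confuses the keyer. The function and numbers are illustrative, not a codec implementation.

```python
import numpy as np

def subsample_chroma(chroma, factor=2):
    """Simulate coarse chroma recording on a 1-D row of colour samples:
    average each block of `factor` samples, then repeat the average.
    Brightness would keep full resolution; only colour is coarsened."""
    blocks = chroma.reshape(-1, factor).mean(axis=1)
    return np.repeat(blocks, factor)

# A hard colour edge: blue screen (200) meets a character (20).
aligned = np.array([200, 200, 200, 200, 20, 20, 20, 20], dtype=float)
print(subsample_chroma(aligned))   # edge on a block boundary survives intact

shifted = np.array([200, 200, 200, 20, 20, 20, 20, 20], dtype=float)
print(subsample_chroma(shifted))   # edge mid-block smears to 110 – neither screen nor skin
</```

That smeared 110 value is neither clearly background nor clearly foreground, which is why edges go blocky once the blue is removed and why a 4:2:2 recording, with twice the colour resolution, keys so much more cleanly.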

Chromakeying is used extensively for special effects in movies and television, where coloured backgrounds can be replaced with computer-generated cities or worlds. TV news uses it too – newsreaders occupy small blue or green rooms which are replaced with much grander computer-generated studios. High-end chromakey software and hardware is much better at dealing with blurring and shadows, so the foreground fits seamlessly into its new artificial background. Here are a few examples – some are quite astonishing.

The MILL recently shot several films against blue screen for use in the Shareville project at the School of Health Sciences. Shareville is a web-based virtual town developed by Birmingham City University, with a college, care home and range of colourful characters. These characters are played by actors filmed against blue/green screen, which is removed and replaced with (quite elaborate) computer-generated rooms and environments. When lit properly, the characters really do look as if they’re in the locations rather than against a coloured wall. This was the first big project using the MILL’s new blue screen walls and flooring and, so far, the blue seems to be coming off really well. The only problems are caused by the relatively low-quality colour detail in the recording, which gives a little roughness on foreground elements, especially hair, and dark shadows under feet, which stay as black blobs.

Filming Shareville against bluescreen in the MILL’s TV studio

Other uses for chromakey at City have included presentations where slides display behind the presenter and interviews shot against blue and then replaced with more appealing backgrounds.

The alternative to chromakeying is to draw around every foreground element in every frame of video and cut it out from the background – a process known as rotoscoping. This gives fantastic results but is a very laborious process. However, it is sometimes the only solution on features, especially when a character steps outside the coloured background or the blue/green screen isn’t clean enough because of dust, smoke or mist.

Bradford, Animation & City University London

November 25, 2012 2 comments

Video, when made well, can be very good at engaging students in a subject and explaining concepts quickly and informatively. However, the realism of standard video can be a little dry and unsuitable for demonstrating how processes work. This is where animation can be invaluable: due to its artistic style and character, it can be immensely engaging to watch and very effective at explaining concepts using simplified or abstract images moving on the screen.


How Does Animation Work?

Each second of a movie is made up of a number of individual pictures, or frames. When these frames are displayed quickly, one after another, we see the illusion of movement. Typically there are 25 frames per second, so for a 5-minute movie that means 5 × 60 × 25 = 7,500 frames. If these frames are drawn, that’s 7,500 individual drawings!
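The arithmetic above is simple enough to put in a couple of lines (the function name is just for illustration):

```python
def total_frames(minutes, fps=25):
    """Number of individual frames in a film of the given length."""
    return minutes * 60 * fps

print(total_frames(5))   # 7500 drawings for a 5-minute film at 25 fps
```

At a traditional studio rate of a dozen or so drawings per animator per day, a figure like that makes it obvious why any shortcut – cels over painted backgrounds, tweening, reuse of poses – was seized on.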


Traditional Animation

In traditional animation each frame is drawn by an animator. In the golden age of animation, drawings were sketched first in pencil, then an inked version was created on a sheet of clear celluloid, or ‘cel’. These cels could then be composited on top of a painted background and photographed to give a complete frame. Disney, Warner Brothers and Hanna-Barbera used this technique for decades to produce some of their most famous cartoons, such as Dumbo, Bugs Bunny and Tom & Jerry. These days, computers are used to ink, colour and composite.

Bugs Bunny by Chuck Jones

A lot of traditional animation is now made using Adobe Flash. This allows the animator to create a library of characters, body parts and props that can be placed onto a background stage and moved around over time. Key frames can be created (say, the start and end point of an arm movement) and the software used to automatically create all the poses in between – a process known as ‘tweening’. This can speed up the animation process immensely. A very popular animation created using Flash is “Simon’s Cat” by Simon Tofield, who happens to work near City University in Islington.
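In its simplest form, tweening is just interpolation between the two keyframe values. The sketch below does it linearly for a single value such as an arm angle – the function is my own illustration; packages like Flash and After Effects also offer easing curves so movement accelerates and decelerates rather than travelling at constant speed.

```python
def tween(start, end, frames):
    """Linearly interpolate a value (e.g. an arm angle in degrees)
    between two keyframes, producing every in-between pose."""
    step = (end - start) / (frames - 1)
    return [start + step * i for i in range(frames)]

print(tween(0.0, 90.0, 5))   # [0.0, 22.5, 45.0, 67.5, 90.0]
```

Five frames from two drawings: the animator sets the keys, and the software fills in the three poses in between.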

Simon Tofield and Steven McCombe at the Cartoon Museum

In the Media & Innovation Learning Lab (MILL) at City, I’ve created some traditional animation using Adobe After Effects. This also does tweening but has the addition of motion blur, which gives a realistic blurring look to fast-moving objects. Here’s an example…

Stop-frame Animation

In stop-frame animation, models and puppets are used instead of drawings. These are moved small amounts between frames and, when played back, give the impression of life. Characters are sometimes made from Plasticine (in the case of “Wallace & Gromit”) or from a rubber-like material, with an adjustable skeleton, or armature, inside.

Wallace & Gromit Plasticine models

One of the pioneers of stop-frame animation is Ray Harryhausen, the animator behind the monster features of the 50s and 60s, such as “Jason & the Argonauts” and the Sinbad films. Regarded as a genius of his day, his work inspired many of today’s fantasy directors and leading animators. His puppets and sketches are soon to take pride of place in a permanent exhibition at the National Media Museum in Bradford. I was lucky enough to see some of these amazingly detailed puppets, including his famous Medusa from the film “Clash of the Titans”. With snakes for hair, each one had to be individually animated.

Medusa puppet by Ray Harryhausen

Stop-frame animation is still popular today, with several features released over the past few years – “Fantastic Mr Fox”, Tim Burton’s “Frankenweenie” and Aardman’s “The Pirates! In an Adventure with Scientists” being just a few examples. Although many of the techniques haven’t changed in decades, technology has made some big advances to the process. Colour 3D printers are used to ‘print’ different facial expressions for the characters, giving that extra bit of realism, and the brackets and stands that hold up the puppets can be removed from shots using computer graphics. However, the process is still laborious, and often only seconds of animation are created in a day.

Frankenweenie puppets

Pirates puppets

Pirates faces made with a 3D colour printer

Paranorman puppets

Computer-Generated Imagery or CGI

Some of the biggest advances in animation have come with the development of computer technology. With CGI, character models are moulded, painted and textured, a skeleton added and then animated – all inside the computer. The characters can be placed in elaborate sets, beautifully lit, and the resultant frames rendered out as a finished film. The beauty of computers is that changes can be made very quickly, both to the look of the visuals and to the animation, and tweening can be very well controlled. As well as this, movement can be programmed so that the material in CGI clothes can flow and behave like real cloth, objects can bounce and react to forces, and crowds of characters can move like a flock, obeying rules that govern their movement. Poses, facial expressions and animations can be stored and then strung together to form more complex movements.

Some of the most prominent names in CGI include Pixar, Dreamworks and of course Disney, who produce entirely CGI animated feature films and shorts. CGI is also used in blockbuster movies for combining animation with real footage seamlessly. Big names include Double Negative, MPC and The MILL in Soho (not to be confused with The MILL at City University London!)

___________

Animation has become a hugely important part of visual entertainment, marketing, computer games and educational multimedia. There are many animation festivals and conferences around the world, some of the most important being the Annecy Animation Festival in France and SIGGRAPH, the computer graphics expo. One of the best in the UK is the Bradford Animation Festival (BAF), held every year at the National Media Museum. Almost 20 years old, the festival covers all things animation, showcasing shorts and features from students, advertising and independent animators, as well as holding detailed masterclasses with some of the biggest names in the industry. Networking and interviews with prominent people in the field are also in abundance. It’s a great place to live and breathe animation and meet others with similar interests, as well as to spend time in the fantastic museum – nine floors devoted to film, animation and photography.

Media Museum

Museum entrance

BAF 2012

9 floors

Museum shop

TVs from every decade

Games Lounge

I’ve been lucky enough to attend for the past several years, and this year’s highlights included Double Negative talking about the special-effects animation on “Total Recall”, a retrospective on “Bugs Bunny” legend Chuck Jones with his granddaughter Valerie Kausen, and Laika Studios talking about stop-frame animation on “Coraline” and “Paranorman”, as well as hundreds of shorts, from cutting-edge CGI to educational films using traditional techniques. It’s a festival that is going from strength to strength.

Valerie Kausen interviewed by Professor Paul Wells

The MILL at the LDC mainly focuses on traditional video, but I have been fortunate enough to produce several animations for education, as well as assisting various students with their own. Partially for engagement, and partially to demonstrate processes clearly and effectively, each animation has its own character, style and charm, and the reaction so far has been very positive.

The trickiest piece I’ve attempted was a small film for the Law School. They requested a piece that showcased and demonstrated their Lawbore website in an engaging way. The site uses an owl logo, so for the film I created a CGI owl and incorporated it into the live action.

Lawbore Owl

The owl talks, jumps about and has a rather interesting Scottish accent. The creation was relatively straightforward – the owl was sculpted and painted within the software and a skeleton placed inside to give it movement.

Owl model and controllers

The owl was then layered on top of the video footage and lighting adjusted so that it fitted naturally into the scene. Finally, shadows were added to give that extra bit of realism and make the owl look as if it were actually there. Probably the most difficult process was getting the beak to move in sync with the dialogue so that it looked like it was actually speaking. I did find a workaround for this rather than having to animate the beak to every bit of speech. Thankfully!

Other pieces have included animations explaining podcasting, advertising university events and many animated title sequences for films. I’ve also created pieces in my own time for competitions, including this one for Red Bull, kindly voiced by a member of my team, Sian Lindsay.

For CGI animations I use Softimage from Autodesk, which has a steep learning curve and is expensive. However, there are several free packages, the best probably being Blender. This is capable of professional results, and several award-winning films have been made with it. For traditional-style animation I love to use Adobe After Effects. Again, it’s not cheap and has a steep learning curve, but the results are impressive. For simple animation, Final Cut Pro can be used, and students have used it to produce some very impressive stop-frame animations and title sequences. The following example by student Liz Hilder was produced by taking dozens of photographs, which were brought into Final Cut and rendered out as a finished film.

We’re left with the classic question: animation may be effective but is the reward worth the effort? It’s a question without a simple answer. If engagement is a must or a concept needs explaining clearly and cleanly, sometimes animation is not only the best option, it’s often the only option. I will finish with one of my favourite educational animations, created by the RSA. The talk itself is inspirational but the associated animation just nails the piece home. You won’t forget it in a hurry and I think it’s wonderful. Enjoy.

The rise of the still camera for film making

October 2, 2012 2 comments

Despite both using light to record images, until recently the worlds of video-making and photography were distinct entities, requiring separate equipment and facilities. If you wanted to take professional-looking photographs, you’d buy an SLR camera with a couple of good lenses. If you wanted to make quality videos, you’d buy a video camera: one with a decent flip-out screen and professional microphone inputs. With the transition to digital over the past 15 years, the two worlds have become entwined, and still cameras have become so proficient at recording video that the term ‘still’ is now a little misleading. We’re heading into the age of the hybrid – a camera that can shoot both photos and videos in superb quality.

Until recently, the only digital still cameras that would shoot video were small compacts. The video mode was a bit of a novelty: if you were out and about and wanted to grab a bit of video, you’d turn the dial, point and shoot. The quality wasn’t particularly inspiring, but it was good enough. The big change came when Nikon decided to put high-definition video onto one of its digital SLR cameras – the D90. With its large sensor and the ability to take a range of lenses, a whole assortment of creative possibilities was unleashed. HD video meant crisp, detailed visuals. A large sensor meant video could now be shot at night or in low light with great results. Zoom and wide-angle lenses could create interesting shots and angles. And most importantly, the large sensor and flexible aperture could give a very narrow depth of field, enabling blurred backgrounds for truly cinematic shots. The man in the street now had a camera capable of true cinema.

Blurred backgrounds resulting from a large sensor and open iris on the lens

Canon soon released their version, and pretty soon every camera manufacturer followed suit. We now have a range of dSLR cameras that can shoot stunning video, and the quality is improving all the time. Canon recently released the EOS-1D C, a camera capable of shooting 4K (ultra-high-definition) video – the quality we see projected at the cinema. And something not to forget – these are predominantly still cameras, designed for taking beautiful photos.

It sounds like a winning combination – a still camera that can take great video. But it isn’t all positive: there are several limitations.

Firstly, despite being small and easy to carry, the shape of a dSLR isn’t always practical for shooting video, especially over several hours. In response, some manufacturers have produced rigs for carrying dSLRs on the shoulder in more comfort. They also produce monitors, focusing systems and a range of other ingenious accessories.

A dSLR camera rig

Secondly, although they can record sound, many have poor provision for external microphones, meaning sound has to be recorded on a separate unit and later matched to the footage. This is how it’s done in the movies, but it makes editing a little more time-consuming. And it’s very easy to forget to press record on the audio recorder!

Thirdly, the exclusion of important assist functions – like zebra stripes indicating overly bright areas – makes setting up a little trickier.

Most dSLRs also have slow, noisy autofocus, can only record short clips at a time and suffer from moiré (distracting patterns in areas of fine detail). One would think these limitations would be a huge burden and get in the way. They do, but if you work around them, the quality of the footage is so good it more than makes up for the pain.

As cameras develop, these limitations (especially clip length and autofocus quality) are successfully being addressed. Some cameras have even been ‘hacked’ – users have written their own software that gives them extra features and functions. One of the most popular is the Magic Lantern software for Canon cameras. Video quality can be vastly improved, together with enhanced features for recording audio on the camera itself. A hack for the Panasonic GH2 – the dSLR owned by The MILL – can raise its video quality so high it can match footage from a movie camera costing ten times as much.

So, if still cameras are becoming so good at taking video, what about the dedicated video camera? These still exist, of course, and many are great at what they do. They have fast, silent autofocus and better provision for sound. And some even take still photos! But most have built-in lenses and small sensors, which in my opinion will ultimately lead to their downfall. We are heading towards the hybrid super-camera capable of all things visual: photos, video – maybe even ultra-slow motion. The Panasonic AF-100 is probably the closest thing to it – a reasonably small video camera that takes dSLR camera lenses and is capable of stunning results. It’s expensive at £5,000, but the size, weight and price of these cameras will come down over time.

Panasonic AF-100

This is a very exciting time for digital movie-making and cameras in general. We now have cameras that can send their images and footage wirelessly to a laptop, and cameras with such good shake reduction that they can iron out any wobble or jolt from the user. The new Apple iPhone has a fantastic camera for shooting hi-def video. Not only that, you can edit the video on the phone and upload it to a hosting site without the need for a separate PC. I’m looking forward to a camera that can do all of that too. Maybe the future of the camera is a pair of spectacles you wear that records exactly what you see!

We’ve shot a couple of videos with a dSLR in the MILL with great results and have some exciting projects in the pipeline where good visuals are a must. Many amateur and independent film makers use dSLRs to make their films and many are showcased on Vimeo, which is always worth a visit. As an example, below is a short video I shot in July 2012 for a competition. It was filmed in a darkened room with my dSLR, the Canon 550D. A normal video camera would have had great difficulty bringing out the level of detail in such low light or providing the depth of field I ultimately achieved. Enjoy.

 

Recommended dSLR cameras: Canon EOS 5D Mk II or Mk III (expensive but industry standard), Panasonic GH2 (cheaper camera but capable of equal or better results)
