System Architecture

The MUAN system was developed for multiple platforms (Linux, Mac OS, and Windows), and the software relies on freely distributed libraries and program code, making the application very flexible for evaluation.

The main idea of the MUAN system is to display images coming from a camera's video stream and allow the user to select the desired ones. The set of captured/selected images forms an animation. How this process works, and its requirements, are described below.

System Description

The Muan software provides essential Stop Motion features such as toggling between stored and live frames, instantaneous animation preview, frame deletion and insertion, frame markers, and frame flipping.

This last feature is very useful for Stop Motion because it lets the user compare the last stored frame with the image coming from the camera (the live incoming frame), check whether the frame to be stored is the desired one, and adjust the scene accordingly. Frame markers are used to modify the duration (delay) of frames and to play or delete an interval of frames. By duration we mean how long the frame is repeated within the animation sequence, as shown in the figure below.

Representation of frame duration

To provide Stop Motion features, we created an image list that stores the frames of the animation, that is, the captured images together with properties such as delay, time stamp, duration, and position in the list. With a list, we can manipulate positions and visualize any frame of the animation at any time. Thus, the user interface provides components for playing or stopping the animation, rewinding, forwarding, going to the first, the last, or an arbitrary frame position in the list, clearing the whole list, and removing one or several stored frames. Besides, the user can choose whether the application is in camera mode, showing live frames, or in memory mode, showing the captured images. In both modes, images are displayed in an OpenGL window, so the image to be shown is an array of pixels in the RGB color system. In the list, however, the stored frames are kept in DV format. Although this format contains both image and sound information, it is more compact than a raw array of pixels.

Because the application has to show live frames coming from the camera, we created a thread that continuously grabs images from the camera in real time. When connected via FireWire, the application establishes communication with the camera and grabs DV data. Images in DV format have a 3:2 aspect ratio, that is, widescreen. Since we are not working with sound for the animation and are focused on capturing images only, the captured DV frame has its sound track erased. When the connection with the camera is over a composite or S-Video input, the application opens a direct communication with the device, capturing images as an array of pixels with a 24-bit RGB palette. In this case, the captured images have a 4:3 aspect ratio, so we add black bands on the sides of the image and convert it into DV format before putting it in the list of captured frames.

Once all the desired frames have been captured, the user can save the animation by asking the application to generate a video file. Muan can record video files in AVI or MPEG format. Another option is to save each frame as an image file in JPEG format. We also provide functions to read AVI or MPG video files and RAS or JPG image files. The user can open one of these files and edit it, or insert files in these formats into an animation being created.

The user interface also offers options such as viewing the image window in full screen, playing the animation continuously (in a loop), and the flip operation, which alternates between camera mode and memory mode and gives the animator a helpful way to analyze the last or next frame to be captured. These options make the application friendlier.

System Interface

We can describe the interactions of our system as follows:


In order to provide the functionalities listed in the System Description section, we used some common, freely distributed libraries.

We created a video library that implements the I/O communication interface, using the appropriate libraries/drivers according to the type of connection.

To implement the functions that store the animation in video files, we use the libavcodec and libavformat libraries, which are part of the FFmpeg project. FFmpeg has functionality to record, convert, and stream audio and video; it is developed under Linux, but it can be compiled on most operating systems, including Windows and MacOS. The libavcodec library contains all the FFmpeg audio/video encoders and decoders, most of which were developed from scratch to ensure the best performance and high code reusability. The libavformat library contains parsers and generators for all common audio/video formats. With these libraries, the application is able to read and write MPEG files.

To import JPEG images into an animation, or to export the frames of an animation as JPEG images, we use libjpeg, a widely used free library for handling the JPEG image format, written and distributed by the IJG. Our code for converting to and from this format was based on cjpeg and djpeg, respectively, two programs included in this library.

The class ImgList implements the animation representation, with a list used to store frames. To represent images or the frames of an animation, we used two classes: one that implements the frame simply as an image, that is, an array of pixels (colors), and another that implements the frame as DV data, as it comes from digital devices through a FireWire connection.

Another widely used free C library that we use in MUAN is libsndfile, a simple and easy-to-use API for reading and writing a large number of file formats containing sampled sound; it is usable on Unix, Win32, MacOS, and other systems. On Linux, libsndfile can work on top of the ALSA (Advanced Linux Sound Architecture) driver, which provides audio and MIDI functionality to that operating system, and we require this option. As already mentioned, we are not working with sound for the animation itself, but we sometimes need sound in the user interface to provide a better and easier interaction. For this reason we use libsndfile and require the ALSA driver on the computer.