Monday 20 February 2012

College Task 2 - Video Fact Profile Continued...2

Interpolation = Motion interpolation is a form of video processing in which intermediate frames are generated between existing ones, in an attempt to make motion appear more fluid.

Compression = Video takes up a lot of space. Uncompressed footage from a camcorder takes up about 17MB per second of video. Because it takes up so much space, video must be compressed before it is put on the web. “Compressed” means that the information is packed into a smaller space. There are two kinds of compression: lossy and lossless.
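The "17MB per second" figure depends on the exact pixel format the camcorder records; as a rough sketch (assuming 8-bit 4:2:2 sampling, i.e. 2 bytes per pixel, at PAL resolution and frame rate), the uncompressed data rate comes out in the same ballpark:

```python
# Rough uncompressed data rate for PAL SD video.
# Assumes 8-bit 4:2:2 chroma sampling (2 bytes per pixel) at 25 fps;
# exact figures vary with the pixel format being recorded.
width, height = 720, 576          # PAL frame size
bytes_per_pixel = 2               # 8-bit 4:2:2
fps = 25                          # PAL frame rate

frame_bytes = width * height * bytes_per_pixel
rate_bytes = frame_bytes * fps
print(frame_bytes)                # 829440 bytes per frame
print(rate_bytes / 1_000_000)     # 20.736 MB per second
```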

Lossy compression = Lossy compression means that the compressed file has less data in it than the original file. In some cases this means lower-quality files, because information has been “lost,” hence the name. However, a relatively large amount of data can be discarded before the human eye notices any difference. Lossy compression makes up for the loss in quality by producing comparatively small files. For example, DVDs are compressed using the MPEG-2 format, which can make files 15 to 30 times smaller.

Lossless compression = Lossless compression is compression where none of the information is lost. The trade-off is that it cannot shrink files nearly as much as lossy compression, and some files barely shrink at all. This may seem pointless, as reducing the file size is the primary goal of compression. However, if file size is not an issue, lossless compression preserves a perfect-quality picture. For example, someone transferring footage from one computer to another on a hard drive might choose lossless compression to keep the quality.
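As a minimal demonstration (using Python's built-in zlib, which is a general-purpose lossless compressor rather than a video codec), the decompressed data is byte-for-byte identical to the original:

```python
import zlib

# A repetitive byte string, standing in for raw video data.
original = b"RGBRGBRGB" * 1000

compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(len(original))         # 9000
print(len(compressed))       # far smaller, because the data is repetitive
print(restored == original)  # True: nothing was lost
```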

Codecs = Codec is a short name for coder-decoder, the software that takes a raw data file and turns it into a compressed file. Because compressed files only contain some of the data found in the original file, the codec is the “translator” that decides what data makes it in to the compressed version and what data gets discarded.

Different codecs translate in different ways, so a video file compressed using the Intel Indeo codec will be different from a file compressed using the Cinepak codec, for example. Sometimes the difference is noticeable, sometimes not. Common examples include MPEG-2 and H.264 (the most widely used); MP4 is strictly a container format that typically holds H.264 video.
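A codec is just a matched coder-decoder pair. A toy run-length codec (not a real video codec, but the same encode/decode idea) can be sketched as:

```python
def encode(data: str) -> list:
    """Coder: turn runs of repeated characters into [char, count] pairs."""
    runs = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1
        else:
            runs.append([ch, 1])
    return runs

def decode(runs: list) -> str:
    """Decoder: expand [char, count] pairs back into the original string."""
    return "".join(ch * count for ch, count in runs)

frame_row = "BBBBBWWWWWWBBB"        # a row of black/white pixels
packed = encode(frame_row)
print(packed)                       # [['B', 5], ['W', 6], ['B', 3]]
print(decode(packed) == frame_row)  # True
```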



Progressive downloads = Progressive download is a term used to describe the transfer of digital media files from a server to a client. The consumer may begin playback of the media before the download is complete. The difference between streaming media and progressive download is how the digital media data is received and stored by the end user device that is accessing the digital media.

That means the video downloads into the consumer’s temporary files and plays from there. Because of this, the consumer can only watch up to the point in the video that has been downloaded so far, and cannot skip ahead to parts that have not yet arrived.

This is the preferred method of transmitting Flash video over the web and can be seen at YouTube, Google Video, MySpace, and almost any other video-enabled social video networking site.

The larger the file, the longer it will take to convert and upload to the server. Consequently, it will also take a long time to download depending on the consumer’s connection.
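The behaviour described above can be simulated: playback can begin before the whole file has arrived, but the viewer can only seek within the portion downloaded so far. A sketch (no real network involved; the "server file" is just a byte string):

```python
# Simulated progressive download: playback can start before the file
# has fully arrived, but seeking is limited to what has been downloaded.
server_file = bytes(range(256)) * 100   # stands in for a video file
CHUNK = 4096

downloaded = bytearray()

def can_play(position: int) -> bool:
    """The viewer can only watch up to the downloaded point."""
    return position < len(downloaded)

# Download the first few chunks, as a browser would.
for _ in range(3):
    start = len(downloaded)
    downloaded += server_file[start:start + CHUNK]

print(can_play(1000))                  # True: already downloaded
print(can_play(len(server_file) - 1))  # False: cannot skip ahead yet
```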

What colour saturation does = saturation controls the intensity of the colour in an image; reducing it to zero removes the colour entirely, so the image becomes black and white.
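In HSV terms, saturation is the intensity of the colour; setting it to zero leaves only the grey level. Using Python's built-in colorsys module:

```python
import colorsys

# A saturated red pixel (RGB values in the 0.0-1.0 range).
r, g, b = 1.0, 0.0, 0.0

h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(s)                          # 1.0: fully saturated

# Reduce saturation to zero and convert back.
grey = colorsys.hsv_to_rgb(h, 0.0, v)
print(grey)                       # (1.0, 1.0, 1.0): R, G and B equal, i.e. grey
```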


 

College Task 2 - Video Fact File continued

Interlacing
Since the interlaced signal contains the two fields of a video frame shot at two different times, it enhances motion perception to the viewer and reduces ‘flicker’ by taking advantage of the ‘persistence of vision’ effect.
Persistence of vision is the phenomenon of the eye by which an afterimage is thought to persist on the retina for approximately one twenty-fifth of a second.
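The two fields of an interlaced frame can be thought of as the even and odd lines of the picture; a deinterlacer "weaves" them back together. A sketch using rows as strings:

```python
# Weave two interlaced fields (even and odd line sets) into one frame.
field_top = ["line0", "line2", "line4"]     # even-numbered lines
field_bottom = ["line1", "line3", "line5"]  # odd-numbered lines

frame = [None] * (len(field_top) + len(field_bottom))
frame[0::2] = field_top      # even rows
frame[1::2] = field_bottom   # odd rows

print(frame)  # ['line0', 'line1', 'line2', 'line3', 'line4', 'line5']
```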


An ‘afterimage’ or ‘image burn-in’ is an optical illusion in which an image continues to appear in someone's vision after the original image has disappeared. One of the most common afterimages is the bright glow that seems to float before one's eyes after looking into a light source for a few seconds. Closing the eyes often makes the colours of the afterimage easier to see.


SD = 720 x 576 PAL (lines in the frame)         

Standard HD = 1280 x 720

Full HD = 1920 x 1080

Pixels = Short for ‘Picture Elements’

16:9 = widescreen aspect ratio – always a rectangle; the ratio of width to height
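Both HD resolutions above reduce to the same 16:9 shape, which can be checked with a little arithmetic:

```python
from math import gcd

def aspect(width: int, height: int) -> str:
    """Reduce a resolution to its simplest width:height ratio."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(aspect(1280, 720))   # 16:9  (standard HD)
print(aspect(1920, 1080))  # 16:9  (full HD)
print(aspect(720, 576))    # 5:4   (PAL SD frame, before pixel aspect ratio)
```

Note that the PAL SD frame is not 4:3 in raw pixels; PAL uses non-square pixels to fill a 4:3 display.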

CCD = Charge-Coupled Device – a chip located inside the camera. When light passes through the camera lens it hits a beam splitter, which separates the light into its red, green and blue (RGB) components before it reaches the sensor.
Charge-coupled devices (CCDs) were invented on 19 October 1969 by William S. Boyle and George E. Smith at Bell Telephone Laboratories. CCDs are electronic devices which work by converting light into electronic charge in a silicon chip. This charge is digitized and stored as an image file on a computer.
A CCD is a two-dimensional array of metal-oxide-semiconductor (MOS) capacitors. The charges are stored in the depletion region of the MOS capacitors. Charges are moved in the CCD circuit by manipulating the voltages on the gates of the capacitors so as to allow the charge to spill from one capacitor to the next (thus the name “charge-coupled” device).
A charge detection amplifier detects the presence of the charge packet, providing an output voltage that can be processed. The CCD is a serial device where charge packets are read one at a time.
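The serial readout described above, with charge packets spilling from one capacitor to the next and being read one at a time, can be sketched as a toy "bucket brigade" model:

```python
# Toy model of CCD serial readout: each step moves the end charge packet
# into the detection amplifier, and the rest shift one capacitor along.
charges = [5, 12, 7, 30]   # charge packets in a row of capacitors

readout = []
while charges:
    # The packet nearest the output reaches the charge detection amplifier.
    readout.append(charges.pop())
    # The remaining packets have each spilled one capacitor toward the output.

print(readout)  # [30, 7, 12, 5]: packets read out one at a time
```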

3 CCD = 3 Charge-Coupled Devices – Z1 cameras and other HD cameras use three CCDs: instead of having one sensor like a single-CCD camera, they have three, one for each primary colour (one sensor for red, one for green and one for blue). Because these cameras have three CCDs inside them, the picture quality is better, with sharper detail and less blur.
A three-CCD camera is a camera whose imaging system uses three separate charge-coupled devices, each taking a separate measurement of one of the primary colours: red, green or blue light.
Light coming into the lens is split by a trichroic prism assembly, which directs the appropriate wavelength ranges of light to their respective CCDs. The system is employed by still cameras and professional video cameras such as the Sony Z1.
Compared to cameras with only one CCD, three-CCD cameras generally provide better image quality, with higher resolution and lower noise. By taking separate readings of red, green and blue values for each pixel, three-CCD cameras achieve much better results than single-CCD cameras.
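The difference can be sketched in miniature: a single-sensor camera records only one colour component at each pixel site (a mosaic) and must guess the rest, while a three-sensor camera records all three at every pixel. The alternating mosaic pattern below is illustrative, not a real filter layout:

```python
# Scene: the true RGB value at each of four pixel positions.
scene = [(200, 10, 30), (190, 15, 35), (180, 20, 40), (170, 25, 45)]

# Three-CCD: every sensor sees every pixel, so all three values are kept.
three_ccd = scene

# Single-CCD with a colour mosaic: each site keeps only one component
# (pattern here is illustrative, cycling R, G, B, G).
pattern = [0, 1, 2, 1]                      # index into (R, G, B)
single_ccd = [px[pattern[i]] for i, px in enumerate(scene)]

print(three_ccd[0])   # (200, 10, 30): full colour at this pixel
print(single_ccd)     # [200, 15, 40, 25]: one value per site, rest must be interpolated
```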



CMOS = A CMOS sensor (Complementary Metal-Oxide-Semiconductor) is a sensor that is normally around ten times less sensitive than a CCD sensor. The human eye can see objects under 1 lux of illumination (a full-moon night).
A CCD sensor can normally see as well as, or better than, the human eye in this range, and is roughly 3 to 10 times more sensitive than a CMOS sensor.

Thursday 9 February 2012

College Task 2: Video Technology Standards Fact File




Analogue films = videos, tape recorders – 4:3 = almost square aspect ratio

Digital films = Records onto cards (SD Cards) and Hard Drives

SD = Standard Definition – Resolution: 720 x 576 pixels (PAL)

PAL 720 x 576 (screen width and height)



Frame Rate = 24 frames per second (24 fps) for film; PAL video uses 25 fps
Elements = Elements are basically what is in the shot (i.e. Green Screen, person etc.)
Composite shots = more than one element working together (i.e. titles, graphics, actors, green screens, music, voice-over, narration, lighting effects, cutaway shots, more than one video playing in the frame, ITV logo in the corner of the screen, colour etc.)
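At pixel level, combining elements into a composite shot is blending. A minimal alpha-blend ("over") sketch for a single pixel, assuming channel values in the 0.0-1.0 range:

```python
def over(fg, bg, alpha):
    """Blend a foreground pixel over a background pixel.
    alpha = 1.0 shows only the foreground element, 0.0 only the background."""
    return tuple(alpha * f + (1 - alpha) * b for f, b in zip(fg, bg))

title_pixel = (1.0, 1.0, 1.0)    # white title graphic
video_pixel = (0.0, 0.5, 1.0)    # frame of the underlying video

print(over(title_pixel, video_pixel, 1.0))   # (1.0, 1.0, 1.0): title fully opaque
print(over(title_pixel, video_pixel, 0.5))   # (0.5, 0.75, 1.0): semi-transparent overlay
```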





Frame rate = 24 Frames Per Second (film); PAL video uses 25 fps
CCD = Charge-Coupled Device – the light-sensing chip in the digital camera
DV Formats = Digital Video Format
RGB = Red, Green and Blue – Primary Colours



What does 720p and 1080i mean?

When watching television, the picture audiences see is ‘composed’ of many ‘scanned lines’. When put together, they compose the image on the screen.

Interlaced and progressive are the two scanning techniques used. Lines of resolution vary for digital televisions, such as 480p, 720p and 1080p. Therefore, the resolution of a television is defined by the lines and the type of scanning.

Progressive scan pushes the limits of standard resolution and provides a more film-like image display that is more pleasing for viewing DVDs on a television display. A 720p resolution is a television with 720 progressive scanned lines.
A 1080i resolution has 1080 interlaced scanned lines. Side-by-side, progressive scan will show a clearer picture than interlaced, but most HD programming is shown in the 1080i (Interlaced) resolution.
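The trade-off can be put in rough numbers: 1080i sends only one field (half its lines) per screen refresh, so per refresh it actually delivers fewer lines than 720p. A back-of-envelope sketch:

```python
# Lines actually delivered per screen refresh.
p720_lines_per_refresh = 720          # progressive: every line, every refresh
i1080_lines_per_refresh = 1080 // 2   # interlaced: one field (half the lines)

print(p720_lines_per_refresh)   # 720
print(i1080_lines_per_refresh)  # 540
```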

Video Links

• http://www.youtube.com/watch?v=DULT4L8c8IM - High Definition Television (HDTV) : Difference Between High & Standard Definition Video

• http://www.youtube.com/watch?v=GIXM2DF0dEM - Aspect Ratio 4x3 16x9 4:3 16:9 Letterbox Pillarbox Pan Scan