
The human visual system (the eye and its interface with the brain) can process 10 to 12 separate images per second, perceiving them individually.[1] The visual cortex retains an image for about one-fifteenth of a second, so if another image is received during that period an illusion of continuity is created, allowing a sequence of still images to give the impression of motion. Early silent films had frame rates from 14 to 24 FPS, but projectors with dual- and triple-blade shutters multiplied the rate seen by the audience by two or three times. Thomas Edison maintained that any rate below 46 FPS "will strain the eye."[2] In the mid- to late 1920s, the frame rate for silent films increased to about 20 to 26 FPS.[2]

When sound film was introduced in 1926, variations in film speed were no longer tolerated, as the human ear is more sensitive to changes in audio frequency. From 1927 to 1930, 24 FPS became the standard for 35 mm sound film, a film speed of 456 millimetres (18.0 in) per second.[1] This allowed simple two-blade shutters to project a series of images at 48 per second. Many modern 35 mm film projectors use three-blade shutters, giving 72 images per second: each frame is flashed on screen three times.[2]

Frame rates in film and television

As of 2012, there are three main frame rate standards in the TV and movie-making business: 24p, 25p, and 30p. However, there are many variations on these as well as newer emerging standards.

  • 24p is a progressive format and is now widely adopted by those planning to transfer a video signal to film. Film and video makers use 24p even when their productions are not going to be transferred to film, simply because the on-screen "look" of the (low) frame rate matches that of native film. When transferred to NTSC television, the rate is effectively slowed to 23.976 FPS (24×1000÷1001 to be exact), and when transferred to PAL or SECAM it is sped up to 25 FPS. 35 mm movie cameras use a standard exposure rate of 24 FPS, though many cameras offer rates of 23.976 FPS for NTSC television and 25 FPS for PAL/SECAM. The 24 FPS rate became the de facto standard for sound motion pictures in the mid-1920s.[2] Practically all hand-drawn animation is designed to be played at 24 FPS, but actually drawing 24 unique frames per second ("1's") is costly. Even big-budget films usually shoot animation on "2's" (each hand-drawn frame is shown twice, so only 12 unique frames per second),[3][4] and much animation is drawn on "4's" (each hand-drawn frame is shown four times, so only 6 unique frames per second).
  • 25p is a progressive format that runs at 25 progressive frames per second. This frame rate derives from the PAL television standard of 50i (50 interlaced fields per second). Film and television companies use this rate in 50 Hz regions for direct compatibility with television field and frame rates. Conversion for 60 Hz countries is achieved by slowing the media to 24p, which is then converted to 60 Hz systems using pulldown. While 25p captures only half the temporal resolution (motion) that normal 50i PAL registers, it yields a higher vertical spatial resolution per frame. Like 24p, 25p is often used to achieve a "cine" look, albeit with virtually the same motion artifacts. It is also better suited to progressive-scan output (e.g., on LCD displays, computer monitors and projectors) because no interlacing is present.
  • 30p is a progressive format that produces video at 30 frames per second. Progressive (noninterlaced) scanning mimics a film camera's frame-by-frame image capture. The effects of inter-frame judder are less noticeable than with 24p, and the format retains a cinematic appearance. Shooting video in 30p mode produces no interlace artifacts but can introduce judder on image movement and on some camera pans. The widescreen film process Todd-AO used this frame rate in 1954–1956.[5]
  • 50i (50 interlaced fields = 25 frames) is an interlaced format and is the standard video field rate per second for PAL and SECAM television.
  • 60i (actually 59.94, or 60 x 1000/1001 to be more precise; 60 interlaced fields = 30 frames) is an interlaced format and is the standard video field rate per second for NTSC television (e.g. in the US), whether from a broadcast signal, DVD, or home camcorder. This interlaced field rate was developed separately by Farnsworth and Zworykin in 1934,[6] and was part of the NTSC television standards mandated by the FCC in 1941. When NTSC color was introduced in 1953, the older rate of 60 fields per second was reduced by a factor of 1000/1001 to avoid interference between the chroma subcarrier and the broadcast sound carrier.
  • 50p/60p is a progressive format and is used in high-end HDTV systems. While it is not technically part of the ATSC or DVB broadcast standards, it is rapidly gaining ground in the areas of set-top boxes and video recordings.[citation needed]
  • 72p is a progressive format and is currently in experimental stages. Major institutions such as Snell have demonstrated 720p72 pictures as a result of earlier analogue experiments, in which 768-line television at 75 FPS looked subjectively better than 1150-line 50 FPS progressive pictures with higher available shutter speeds (and a correspondingly lower data rate).[7] Modern cameras such as the Red One can use this frame rate to produce slow-motion replays at 24 FPS. Douglas Trumbull, whose experiments with different frame rates led to the Showscan film format, found that the emotional impact on viewers peaked at 72 FPS.[8] 72 FPS is the maximum rate available in the WMV video file format.
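The 1000/1001 adjustments mentioned in the list above are exact rational ratios rather than rounded decimals. A small sketch using Python's standard fractions module makes the relationships explicit:

```python
from fractions import Fraction

def ntsc_rate(nominal):
    """An NTSC-compatible rate is the nominal rate slowed by a factor of 1000/1001."""
    return Fraction(nominal) * Fraction(1000, 1001)

# 24p transferred to NTSC: 24 × 1000/1001 = 24000/1001 ≈ 23.976 FPS
print(ntsc_rate(24), "≈", float(ntsc_rate(24)))
# 60i field rate: 60 × 1000/1001 = 60000/1001 ≈ 59.94 fields per second
print(ntsc_rate(60), "≈", float(ntsc_rate(60)))
```

Keeping the rates as exact fractions (24000/1001, 30000/1001, 60000/1001) is why long recordings at these rates stay in sync; the decimal forms 23.976 and 59.94 are only approximations.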

Higher frame rates, including 300 FPS, have been tested by BBC Research over concerns with sports and other broadcasts where fast motion with large HD displays could have a disorienting effect on viewers.[9] 300 FPS can be converted to both 50 and 60 FPS transmission formats without major issues.
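A 300 FPS master converts cleanly because 300 is an integer multiple of both 50 and 60. A minimal sketch (the `decimate` helper is hypothetical) of the simplest possible conversion, keeping every Nth frame:

```python
def decimate(frames, source_fps, target_fps):
    """Convert a high-rate capture to a lower rate by keeping every Nth frame.
    Only exact integer ratios are handled, which is what makes 300 FPS
    convenient: it divides evenly by both 50 and 60."""
    step, remainder = divmod(source_fps, target_fps)
    if remainder:
        raise ValueError("source rate must be an integer multiple of target rate")
    return frames[::step]

one_second = list(range(300))              # 300 captured frames
fifty = decimate(one_second, 300, 50)      # keep every 6th frame
sixty = decimate(one_second, 300, 60)      # keep every 5th frame
```

Real broadcast conversion would filter (blur) across the dropped frames rather than discard them outright, but the divisibility argument is the same.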

Owing to their flexibility, software-based video formats can specify arbitrarily high frame rates, and many cathode-ray-tube (CRT) consumer PC monitors can operate at hundreds of frames per second, depending on the selected video mode. LCD screens usually operate at 24, 25, 50, 60, or 120 FPS.

Director James Cameron stated his intention to film the two sequels to his film Avatar at a higher frame rate than 24 frames per second, in order to add a heightened sense of reality.[10] Peter Jackson is filming The Hobbit at 48 FPS.[11]


Frame rates in real-time computing

Frame rate is also a term used in real-time computing. In a fashion somewhat comparable to the moving-picture definition presented above, a real-time frame is the time it takes to complete a full round of the system's processing tasks. If the frame rate of a real-time system is 60 hertz, the system reevaluates all necessary inputs and updates the necessary outputs 60 times per second, under all circumstances.

The designed frame rates of real-time systems vary depending on the equipment. For a real-time system that is steering an oil tanker, a frame rate of 1 FPS may be sufficient, while a rate of even 100 FPS may not be adequate for steering a guided missile. The designer must choose a frame rate appropriate to the application's requirements.

Frame rates in video games

Frame rate in video games refers to the speed at which the image is refreshed (typically in frames per second, or FPS). Many underlying processes, such as collision detection and network processing, run at different or inconsistent frequencies, or on different physical components of a computer. FPS affects the experience in two ways: a low FPS does not give the illusion of motion effectively and hampers the user's capacity to interact with the game, while an FPS that varies substantially from one second to the next, depending on computational load, produces uneven, "choppy" movement or animation. Many games therefore lock their frame rate at a lower but sustainable level to give consistently smooth motion.

One of the first 3D first-person games for a personal computer, 3D Monster Maze, had a frame rate of approximately 6 FPS, and was still a success. In modern action-oriented games where players must visually track animated objects and react quickly, frame rates of between 30 and 60 FPS are considered acceptable by most, though this can vary significantly from game to game. Modern action games, including popular console shooters such as Halo 3, are locked at a maximum of 30 FPS, while others, such as Unreal Tournament 3, can run well in excess of 100 FPS on sufficient hardware. Additionally, some games, such as Quake 3 Arena, perform physics, AI, networking, and other calculations in sync with the rendered frame rate; this can cause inconsistencies in movement and network-prediction code if players are unable to maintain the designed maximum frame rate of 125 FPS. The frame rate within a game varies considerably depending on what is currently happening at a given moment and on the hardware configuration (especially in PC games). When the computation of a frame consumes more time than is allotted between frames, the frame rate decreases.
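The Quake 3 approach, stepping the simulation once per rendered frame, can be contrasted with the fixed-timestep loop many engines use instead. A sketch under stated assumptions (the `simulate` physics and dt = 1/128 s are hypothetical; 1/128 is chosen here so the float arithmetic is exact, whereas Quake 3 effectively uses 1/125 s):

```python
def simulate(state, dt):
    """One physics step: naive position/velocity integration (hypothetical)."""
    pos, vel = state
    return (pos + vel * dt, vel + 9.8 * dt)

def advance(state, accumulator, frame_time, dt=1.0 / 128):
    """Fixed-timestep accumulator: however long the rendered frame took,
    physics advances in exact dt-sized steps, so simulation results do not
    depend on the display frame rate."""
    accumulator += frame_time
    while accumulator >= dt:
        state = simulate(state, dt)
        accumulator -= dt
    return state, accumulator

# A slow renderer (32 FPS frames) and a fast one (64 FPS frames) reach the
# same physics state after the same amount of simulated time.
slow, acc_a = (0.0, 0.0), 0.0
for _ in range(8):
    slow, acc_a = advance(slow, acc_a, 1.0 / 32)
fast, acc_b = (0.0, 0.0), 0.0
for _ in range(16):
    fast, acc_b = advance(fast, acc_b, 1.0 / 64)
```

Because both runs execute exactly the same sequence of dt-sized steps, `slow` and `fast` are identical; a simulation stepped once per variable-length rendered frame would not have this property, which is the source of the Quake 3 inconsistencies described above.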

A culture of competition has arisen among game enthusiasts with regard to frame rates, with players striving to obtain the highest FPS possible, due to its utility in demonstrating a system's power and efficiency. Indeed, many benchmarks (such as 3DMark) released by the marketing departments of hardware manufacturers and published in hardware reviews focus on the FPS measurement. Even though the typical LCD monitors of today are locked at 60 Hz, making extremely high frame rates impossible to see in real time, playthroughs of game "timedemos" at hundreds or thousands of FPS for benchmarking purposes are still common.

Beyond measurement and bragging rights, such exercises do have practical bearing in some cases. A certain amount of discarded "headroom" frames is beneficial for eliminating uneven ("choppy" or "jumpy") output, and for preventing FPS from plummeting during intense sequences, when players need smooth feedback most.

Aside from frame rate, a separate but related factor unique to interactive applications such as gaming is latency. Excessive preprocessing can result in a noticeable delay between player commands and computer feedback, even when a full frame rate is maintained, often referred to as input lag.

Without realistic motion blurring, video games and computer animations do not look as fluid as film, even with a higher frame rate. When a fast moving object is present on two consecutive frames, a gap between the images on the two frames contributes to a noticeable separation of the object and its afterimage in the eye. Motion blurring mitigates this effect, since it tends to reduce the image gap when the two frames are strung together. The effect of motion blurring is essentially superimposing multiple images of the fast-moving object on a single frame. Motion blurring makes the motion more fluid for some people, even as the image of the object becomes blurry on each individual frame.
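The superimposition just described can be sketched directly. The toy example below (hypothetical one-dimensional "frames" of brightness values) averages several sub-frame samples of a moving object into a single output frame:

```python
def render(position, width=12):
    """One-dimensional 'frame': brightness 1.0 at the object's cell, 0 elsewhere."""
    frame = [0.0] * width
    frame[position] = 1.0
    return frame

def motion_blurred_frame(positions):
    """Superimpose several sub-frame samples into one output frame, averaging
    them so that total brightness is preserved."""
    frames = [render(p) for p in positions]
    n = len(frames)
    return [sum(column) / n for column in zip(*frames)]

# An object crossing cells 3, 4 and 5 within a single frame leaves a smear
# over all three cells instead of one sharp dot.
blurred = motion_blurred_frame([3, 4, 5])
```

The sharp dot becomes three dimmer cells: exactly the trade-off the paragraph describes, where each individual frame is blurrier but the gap between consecutive frames shrinks.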

A high frame rate still does not guarantee fluid movements, especially on hardware with more than one GPU. This effect is known as micro stuttering.

Visible frame rate

The human visual system does not see in terms of frames; it works with a continuous flow of light information.[12] A related question is how many frames per second are needed for an observer not to see artifacts, but this question, too, has no single straightforward answer. If the image switches between black and white each frame, it appears to flicker at frame rates slower than about 30 FPS (interlaced). In other words, the flicker fusion point, at which the eyes see steady gray instead of flickering, tends to be around 60 FPS, though it is not consistent. Fast-moving objects may require higher frame rates to avoid judder (non-smooth motion) artifacts, and the retinal fusion point varies between people and with lighting conditions. The flicker-fusion point applies only to images of absolute values, such as pure black and white; a more analogue representation can run at lower frame rates and still be perceived as continuous by a viewer. For example, motion blurring in digital games allows the frame rate to be lowered while the human perception of motion remains unaffected; this is the equivalent of introducing shades of gray into the black–white flicker.

Although human vision has no "frame rate", it may be possible to investigate the consequences of changes in frame rate for human observers. The most famous example may be the wagon-wheel effect, a form of aliasing in the time domain, in which a spinning wheel suddenly appears to change direction when its speed approaches the frame rate of the image capture/reproduction system.
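This temporal aliasing can be reproduced numerically. A small sketch (assuming a wheel whose spokes look identical every `spoke_period_deg` degrees; the helper itself is hypothetical) folds the wheel's true per-frame rotation into the rotation an observer would perceive:

```python
import math

def apparent_step(spoke_period_deg, true_deg_per_frame):
    """Apparent per-frame rotation of a spoked wheel sampled once per frame.
    Because spokes repeat every `spoke_period_deg` degrees, the true step
    aliases into the range (-period/2, period/2]."""
    step = math.fmod(true_deg_per_frame, spoke_period_deg)
    if step > spoke_period_deg / 2:
        step -= spoke_period_deg       # the nearer interpretation is backwards
    return step

# A wheel with spokes every 45°, turning 40° per frame, appears to rotate
# 5° backwards each frame: the wagon-wheel effect.
backwards = apparent_step(45, 40)
```

When the true step equals the spoke period exactly, the apparent step is zero and the wheel seems to stand still, which is the same phenomenon at its extreme.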

Different capture/playback systems may operate at the same frame rate, and still give a different level of "realism" or artifacts attributed to frame rate. One reason for this may be the temporal characteristics of the camera and display device.

Judder is a real problem today,[when?] when 46- and 52-inch (1,300 mm) television sets have become common. The distance an object moves between frames on screen is now of such a magnitude that objects and backgrounds can no longer be classed as "clear": letters cannot be read, and watching vertical objects such as trees and lamp posts while the camera pans sideways has even been known to cause headaches. The amount of motion blur needed to make 24 frames per second look smooth eliminates every remnant of detail from the frames. And although adding the right amount of motion blur eliminates the uncomfortable side effects, more often than not this is simply not done: it requires extra processing to turn the extra frames of a 120 FPS source (the current recording "standard"[citation needed]) into adequate motion blur for a 24 FPS target, and it would also remove the detail and clarity of background advertising.

Today,[when?] devices are up to the task of displaying 60 frames per second, and using them all in the source media is entirely possible; the amount of data that can be stored on Blu-ray, and the processing power available to decode it, are more than adequate. Though the extra frames, when not filtered correctly, can lend a somewhat video-like quality to the whole, the improvement to motion-heavy sequences is undeniable. Many televisions now offer some form of frame interpolation (a frame between two real frames is estimated to some degree) using technologies such as Trimension DNM. Sophisticated algorithms can use motion-compensation information to achieve a high degree of accuracy with few artifacts.
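A minimal sketch of the naive form of such interpolation, a simple cross-fade between two frames represented as lists of pixel brightnesses (motion-compensated interpolators such as Trimension DNM are far more sophisticated than this):

```python
def interpolate(frame_a, frame_b, t=0.5):
    """Estimate a frame between two real frames by blending them pixel by
    pixel; t is the position between frame_a (t=0) and frame_b (t=1)."""
    return [a * (1 - t) + b * t for a, b in zip(frame_a, frame_b)]

# A bright pixel moving one cell to the right produces a half-bright pair
# in the estimated in-between frame.
mid = interpolate([0.0, 1.0, 0.0], [0.0, 0.0, 1.0])
```

The cross-fade ghosts moving objects (both positions appear at half brightness), which is precisely why real interpolators track motion instead of blending in place.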

A JavaScript, web-browser-based application is available that lets users observe the visual differences between frame rates as a form of reference.[13]