Continuous-tone images are produced by analog optical and electronic devices, which accurately record image data by several methods, such as a sequence of electrical signal fluctuations or changes in the chemical nature of a film emulsion that vary continuously over all dimensions of the image. In order for a continuous-tone or analog image to be processed or displayed by a computer, it must first be converted into a computer-readable form or digital format. This process applies to all images, regardless of their origin and complexity, and whether they exist as black and white (grayscale) or full color. Because grayscale images are somewhat easier to explain, they will serve as a primary model in many of the following discussions.
To convert a continuous-tone image into a digital format, the analog image is divided into individual brightness values through two operational processes that are termed sampling and quantization, as illustrated in Figure 1. The analog representation of a miniature young starfish imaged with an optical microscope is presented in Figure 1(a). After sampling in a two-dimensional array (Figure 1(b)), brightness levels at specific locations in the analog image are recorded and subsequently converted into integers during the process of quantization (Figure 1(c)). The target objective is to convert the image into an array of discrete points that each contain specific information about brightness or tonal range and can be described by a specific digital data value in a precise location. The sampling process measures the intensity at successive locations in the image and forms a two-dimensional array containing small rectangular blocks of intensity information. After sampling is completed, the resulting data is quantized to assign a specific digital brightness value to each sampled data point, ranging from black, through all of the intermediate gray levels, to white. Because images are generally square or rectangular in dimension, each pixel that results from image digitization is represented by a coordinate pair with specific x and y values, arranged in a typical Cartesian coordinate system. The result is a numerical representation of the intensity, which is commonly referred to as a picture element or pixel, for each sampled data point in the array. The x coordinate specifies the horizontal position or column location of the pixel, while the y coordinate indicates the row number or vertical position. By convention, the pixel positioned at coordinates (0,0) is located in the upper left-hand corner of the array, while a pixel located at (158,350) would be positioned where the 158th column and 350th row intersect.
In many cases, the x location is referred to as the pixel number, and the y location is known as the line number. Thus, a digital image is composed of a rectangular (or square) pixel array representing a series of intensity values and ordered through an organized (x, y) coordinate system. In reality, the image exists only as a large serial array of numbers (or data values) that can be interpreted by a computer to produce a digital representation of the original scene. The horizontal-to-vertical dimensional ratio of a digital image is referred to as the aspect ratio of the image and can be calculated by dividing the horizontal width by the vertical height. The recommended NTSC (National Television Systems Committee) commercial broadcast standard aspect ratio for television and video equipment is 1.33, which translates to a ratio of 4:3, where the horizontal dimension of the image is 1.33 times wider than the vertical dimension. In contrast, an image with an aspect ratio of 1:1 (often utilized in closed-circuit television, or CCTV) is perfectly square. By adhering to a standard aspect ratio for display of digital images, gross distortion of the image, such as a circle appearing as an ellipse, is avoided when the images are displayed on remote platforms. The 4:3 aspect ratio standard, widely utilized for television and computer monitors, produces a display that is four units wide by three units high. For example, a 32-inch television (measured diagonally from the lower left-hand corner to the upper right-hand corner) is 25.6 inches wide by 19.2 inches tall. The standard aspect ratio for digital high-definition television (HDTV) is 16:9 (or 1.78:1), which results in a more rectangular screen. Sometimes referred to as widescreen format (see Figure 2), the 16:9 aspect ratio is a compromise between the standard broadcast format and that commonly utilized for motion pictures.
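As a concrete illustration of the sampling and quantization steps described above, the following Python sketch (using NumPy, which the text itself does not mention) samples a hypothetical continuous brightness function on a rectangular grid and quantizes each sample to one of 256 integer gray levels; the function `sample_and_quantize` and the example scene are assumptions for illustration only.

```python
import numpy as np

def sample_and_quantize(brightness, width, height, levels=256):
    """Sample a continuous brightness function on a width-by-height grid,
    then quantize each sample to an integer gray level.

    brightness(x, y) is expected to return values in [0.0, 1.0],
    where 0.0 is black and 1.0 is white.
    """
    # Sampling: evaluate the analog image at discrete (x, y) positions.
    # Row index y selects the line, column index x selects the pixel,
    # with (0, 0) in the upper left-hand corner by convention.
    samples = np.fromfunction(
        lambda y, x: brightness(x / width, y / height), (height, width)
    )
    # Quantization: map each continuous sample to one of `levels` integers,
    # ranging from black (0) through intermediate grays to white (levels - 1).
    return np.clip((samples * (levels - 1)).round(), 0, levels - 1).astype(np.uint8)

# Hypothetical analog scene: a smooth radial gradient, bright at the center.
scene = lambda x, y: np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) * 8)

image = sample_and_quantize(scene, width=640, height=480)
print(image.shape)                   # (480, 640): 480 lines of 640 pixels
print(image[0, 0], image[240, 320])  # dark corner, bright center
```

Note that the array is indexed as `image[y, x]`, matching the line-number/pixel-number convention: the first index picks the row, the second the column.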
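The 32-inch example above can be verified with a little arithmetic: for a w:h aspect ratio and diagonal d, the Pythagorean theorem gives width = d·w/√(w² + h²) and height = d·h/√(w² + h²). A minimal Python sketch (the function name is my own, not from the text):

```python
import math

def screen_dimensions(diagonal, ratio_w, ratio_h):
    """Return (width, height) of a display from its diagonal measurement
    and aspect ratio, via the Pythagorean theorem."""
    hypotenuse = math.hypot(ratio_w, ratio_h)  # diagonal of one w-by-h unit cell
    return diagonal * ratio_w / hypotenuse, diagonal * ratio_h / hypotenuse

# A 32-inch 4:3 display, as in the text:
w, h = screen_dimensions(32, 4, 3)
print(round(w, 1), round(h, 1))   # 25.6 19.2

# The same 32-inch diagonal in 16:9 yields a wider, shorter screen:
w, h = screen_dimensions(32, 16, 9)
print(round(w, 1), round(h, 1))   # 27.9 15.7
```

The second case shows why diagonal size alone understates the difference between formats: at equal diagonals, the 16:9 screen actually has less area than the 4:3 screen.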