September 29, 2010
Image Capture Methods
In computing, an image scanner is a device that optically scans images, printed text, handwriting, or an object, and converts it to a digital image. Common examples found in offices are variations of the desktop (or flatbed) scanner, where the document is placed on a glass window for scanning.
Scanners typically read red-green-blue (RGB) data from the array. This data is then processed with some proprietary algorithm to correct for different exposure conditions, and sent to the computer via the device’s input/output interface. Colour depth varies depending on the scanning array characteristics, but is usually at least 24 bits. Another qualifying parameter for a scanner is its resolution, measured in pixels per inch (ppi), sometimes more accurately referred to as samples per inch (spi). The size of the file created increases with the square of the resolution; doubling the resolution quadruples the file size. The file size can be reduced for a given resolution by using “lossy” compression methods such as JPEG, at some cost in quality. Reduced-quality files of smaller size can also be produced from such an image when required (e.g., a large file intended to be printed on a full page, and a much smaller file to be displayed as part of a fast-loading web page). The third important parameter for a scanner is its density range. A high density range means that the scanner is able to reproduce both shadow detail and highlight detail in one scan.
Scanning the document is only one part of the process. For the scanned image to be useful, it must be transferred from the scanner to an application running on the computer. There are two basic issues: (1) how the scanner is physically connected to the computer and (2) how the application retrieves the information from the scanner.
The amount of data generated by a scanner can be very large: a 600 DPI scan of a 9″×11″ page (slightly larger than A4 paper), as an uncompressed 24-bit image, is about 100 megabytes of data which must be transferred and stored.
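Those figures are easy to check with a little arithmetic. A minimal sketch (the function name is ours, not part of any scanner API) that also illustrates the “doubling the resolution quadruples the file size” rule:

```python
# Size of an uncompressed scan: (width_in * dpi) * (height_in * dpi) pixels,
# times bits per pixel divided by 8 (3 bytes per pixel for 24-bit colour).
def scan_size_bytes(width_in, height_in, dpi, bit_depth=24):
    pixels = (width_in * dpi) * (height_in * dpi)
    return pixels * bit_depth // 8

size_600  = scan_size_bytes(9, 11, 600)    # 106,920,000 bytes, i.e. roughly 100 MB
size_1200 = scan_size_bytes(9, 11, 1200)   # doubling the resolution quadruples this
```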
An application such as Adobe Photoshop must communicate with the scanner. There are many different scanners, and many of them use different protocols. To simplify applications programming, application programming interfaces (APIs) were developed. An API presents a uniform interface to the scanner, which means the application does not need to know the specific details of the scanner in order to access it. For example, Adobe Photoshop supports the TWAIN standard; therefore, in theory, Photoshop can acquire an image from any scanner that also supports TWAIN.
The scanned result is a non-compressed RGB image, which can be transferred to a computer’s memory. Once on the computer, the image can be processed with a raster graphics program (such as Photoshop) and saved on a storage device.
Pictures are normally stored in image formats such as uncompressed Bitmap, “non-lossy” (lossless) compressed TIFF and PNG, and “lossy” compressed JPEG. Documents are best stored in TIFF or PDF format; JPEG is particularly unsuitable for text. Optical character recognition (OCR) software allows a scanned image of text to be converted into editable text with reasonable accuracy, so long as the text is cleanly printed and in a typeface and size that can be read by the software. OCR capability may be integrated into the scanning software, or the scanned image file can be processed with a separate OCR program.
A screenshot (or screen shot), screen capture (or screencap), screen dump, screengrab (or screen grab), or print screen is an image taken by the computer to record the visible items displayed on the monitor, television, or another visual output device.
On Mac OS X, a user can take a screenshot of the entire screen by pressing Command-Shift-3, or of a chosen area of the screen with Command-Shift-4. A shell utility called “screencapture” (located at /usr/sbin/screencapture) can be used from the Terminal application or in shell scripts to capture screenshots and save them to files.
On Microsoft Windows, pressing Print Screen captures a screenshot of the entire desktop and places it in the clipboard, while Alt+Print Screen captures only the active window.
A digital camera (also digicam, or camera for short) is a camera that takes video or still photographs, or both, digitally by recording images via an electronic image sensor. Digital cameras can do things film cameras cannot: displaying images on a screen immediately after they are recorded, storing thousands of images on a single small memory device, recording video with sound, and deleting images to free storage space. The optical system works the same as in film cameras, typically using a lens with a variable diaphragm to focus light onto an image pickup device.
Digital cameras are made in a wide range of sizes, prices and capabilities. Professional photographers and many amateurs use larger, more expensive digital single-lens reflex cameras (DSLRs) for their greater versatility, while compact cameras occupy the other end of the range; between these extremes lie bridge digital cameras that “bridge” the gap between amateur and professional cameras.
Compact cameras are designed to be small and portable and are particularly suitable for casual “snapshot” use. They are usually designed to be easy to use, sacrificing advanced features and picture quality for compactness and simplicity; images can usually be stored only using lossy compression (JPEG).
Bridge cameras are higher-end digital cameras that physically and ergonomically resemble DSLRs and share some advanced features with them, but share with compacts the use of a fixed lens and a small sensor. Many of these cameras can store images in a raw image format, or processed and JPEG-compressed, or both (sometimes TIFF).
Digital single-lens reflex cameras (DSLRs) are digital cameras based on film single-lens reflex cameras (SLRs). They take their name from their unique viewing system, in which a mirror reflects light from the lens through a separate optical viewfinder. In order to capture an image the mirror is flipped out of the way, allowing light to fall on the imager. Since no light reaches the imager during framing, autofocus is accomplished using specialized sensors in the mirror box itself. Most 21st century DSLRs also have a “live view” mode that emulates the live preview system of compact cameras, when selected. They make use of interchangeable lenses; each major DSLR manufacturer also sells a line of lenses specifically intended to be used on their cameras. This allows the user to select a lens designed for the application at hand: wide-angle, telephoto, low-light, etc. The mirror flipping out of the way at the moment of exposure makes a distinctive “clack” sound.
The resolution of a digital camera is often limited by the camera sensor (typically a CCD or CMOS sensor chip) that turns light into discrete signals, replacing the job of film in traditional photography. The sensor is made up of millions of “buckets” that essentially count the number of photons that strike the sensor. Depending on the physical structure of the sensor, a color filter array may be used, which requires a demosaicing/interpolation algorithm. The number of resulting pixels in the image determines its “pixel count”. For example, a 640×480 image would have 307,200 pixels, or approximately 307 kilopixels; a 3872×2592 image would have 10,036,224 pixels, or approximately 10 megapixels.
There are three main methods of capturing the image, each based on the hardware configuration of the sensor and color filters: the first method is often called single-shot, in reference to the number of times the camera’s sensor is exposed to the light passing through the camera lens. The second method is referred to as multi-shot because the sensor is exposed to the image in a sequence of three or more openings of the lens aperture. The third method is called scanning because the sensor moves across the focal plane much like the sensor of a desktop scanner. Their linear or tri-linear sensors utilize only a single line of photosensors, or three lines for the three colors.
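In the single-shot case, each photosite behind a colour filter array records only one colour component, and demosaicing reconstructs full-colour pixels. A deliberately toy sketch (real algorithms interpolate from neighbouring tiles rather than collapsing each tile to one pixel):

```python
# Toy demosaic for an RGGB Bayer mosaic: each 2x2 tile holds one red, two
# green, and one blue sample; here we merge each tile into a single RGB pixel,
# averaging the two green photosites. Real demosaicing interpolates instead.
def demosaic_rggb(raw, width, height):
    """raw: flat list of photosite values, row-major, for an RGGB mosaic."""
    out = []
    for y in range(0, height, 2):
        for x in range(0, width, 2):
            r  = raw[y * width + x]
            g1 = raw[y * width + x + 1]
            g2 = raw[(y + 1) * width + x]
            b  = raw[(y + 1) * width + x + 1]
            out.append((r, (g1 + g2) // 2, b))   # one RGB pixel per tile
    return out
```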
Many digital cameras can connect directly to a computer to transfer data:
- Early cameras used the PC serial port. USB is now the most widely used method (most cameras are viewable as USB mass storage), though some have a FireWire port.
- Other cameras use wireless connections, via Bluetooth or IEEE 802.11 WiFi.
- A common alternative is the use of a card reader which may be capable of reading several types of storage media, as well as high speed transfer of data to the computer.
Problems associated with image capture
In physics, a moiré pattern is an interference pattern created, for example, when two grids are overlaid at an angle, or when they have slightly different mesh sizes. The term originates from moire (or moiré in its French form), a type of textile, traditionally of silk, with a rippled or ‘watered’ appearance.
Moiré patterns are often an undesired artifact of images produced by various digital imaging and computer graphics techniques, for example when scanning a halftone picture or ray tracing a checkered plane.
In graphic arts and prepress, the usual technology for printing full-color images involves the superimposition of halftone screens. These are regular rectangular dot patterns—often four of them, printed in cyan, yellow, magenta, and black. Some kind of moiré pattern is inevitable, but in favorable circumstances the pattern is “tight”; that is, the spatial frequency of the moiré is so high that it is not noticeable. In the graphic arts, the term moiré means an excessively visible moiré pattern. Part of the prepress art consists of selecting screen angles and halftone frequencies which minimize moiré. The visibility of moiré is not entirely predictable. The same set of screens may produce good results with some images, but visible moiré with others.
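For idealized line gratings there are standard formulas that show why close pitches and small angles produce coarse, highly visible fringes. A quick sketch (simplified geometry; real halftone screens are dot patterns, not line gratings):

```python
import math

# Two parallel line screens with slightly different pitches p1, p2:
# the moiré fringes repeat with period p1*p2 / |p1 - p2| -- the closer the
# pitches, the coarser (more visible) the fringe.
def moire_period_parallel(p1, p2):
    return p1 * p2 / abs(p1 - p2)

# Two identical screens of pitch p rotated by a small angle theta (radians):
# fringe spacing is roughly p / (2*sin(theta/2)), so small angles give
# coarse fringes and larger angles tighten the pattern.
def moire_period_rotated(p, theta):
    return p / (2 * math.sin(theta / 2))
```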
Some image scanner driver programs provide an optional filter, called a “descreen” filter, to remove Moiré-pattern artifacts which would otherwise be produced when scanning printed halftone images to produce digital images.
In computer graphics, pixelation is an effect caused by displaying a bitmap or a section of a bitmap at such a large size that individual pixels, small single-colored square display elements that comprise the bitmap, are visible to the eye.
Early graphical applications such as video games ran at very low resolutions with a small number of colors, and so had easily visible pixels. The resulting sharp edges gave curved objects and diagonal lines an unnatural appearance. However, when the number of available colors increased to 256, it was possible to gainfully employ antialiasing to smooth the appearance of low-resolution objects, not eliminating pixelation but making it less jarring to the eye. Higher resolutions would soon make this type of pixelation all but invisible on the screen, but pixelation is still visible if a low-resolution image is printed on paper.
Pixelation is a problem unique to bitmaps. Alternatives such as vector graphics or purely geometric polygon models can scale to any level of detail. This is one reason vector graphics are popular for printing: most modern computer monitors have a resolution of about 100 dots per inch, and at 300 dots per inch printed documents have about 9 times as many pixels per unit of area as a screen. Another solution sometimes used is algorithmic textures, textures such as fractals that can be generated on-the-fly at arbitrary levels of detail.
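The contrast between the two representations, and the 9× figure, can be sketched in a few lines (toy functions of our own, not a real graphics API):

```python
# A vector shape scales by multiplying its coordinates, so no detail is lost;
# a bitmap scaled up must invent pixels (nearest-neighbour here), producing
# the blocky appearance described above.
def scale_vector(points, factor):
    return [(x * factor, y * factor) for x, y in points]

def scale_bitmap_nearest(rows, factor):
    return [[px for px in row for _ in range(factor)]
            for row in rows for _ in range(factor)]

# The printing figure from above: 300 dpi print vs a ~100 ppi screen.
pixels_per_area_ratio = (300 / 100) ** 2   # 9 times as many pixels per area
```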
A colour cast is a tint of a particular colour, usually unwanted, which affects the whole of a photographic image evenly. Certain types of light can cause film and digital cameras to have a colour cast. In general, the human eye does not notice the unnatural colour, because our eyes and brains adjust and compensate for different types of light in ways that cameras cannot.
There are two main causes of colour cast: sunlight (or cool light) and incandescent light (or warm light). High-end digital cameras try to automatically detect and compensate for colour casts, and usually also offer a selection of manual white balance settings to choose from. Otherwise, photo editing programs, such as Photoshop, often have built-in colour correction facilities.
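One simple auto white balance heuristic is the “gray world” assumption: the scene should average out to neutral grey, so each channel is scaled until the channel means match. This is only a sketch of one heuristic; cameras and editors use more sophisticated variants:

```python
# Gray-world colour cast correction: scale R, G, B so their means are equal.
# pixels: list of (r, g, b) tuples with values in 0..255.
def gray_world(pixels):
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    grey = sum(means) / 3
    gains = [grey / m for m in means]   # per-channel correction factors
    return [tuple(min(255, round(v * g)) for v, g in zip(p, gains))
            for p in pixels]
```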
Image resolution describes the detail an image holds. The term applies to digital images, film images, and other types of images; higher resolution means more image detail. The term resolution is often used for a pixel count in digital imaging. An image N pixels high by M pixels wide can resolve at most N lines per picture height, or N TV lines. But when pixel counts are referred to as resolution, the convention is to describe the pixel resolution with a pair of positive integers, where the first number is the number of pixel columns (width) and the second is the number of pixel rows (height), for example 640 by 480. Another popular convention is to cite resolution as the total number of pixels in the image, typically given as a number of megapixels, which can be calculated by multiplying pixel columns by pixel rows and dividing by one million. Other conventions include describing pixels per length unit or pixels per area unit, such as pixels per inch or per square inch. None of these pixel counts are true resolutions, but they are widely referred to as such.
Below is an illustration of how the same image might appear at different pixel resolutions, if the pixels were poorly rendered as sharp squares.
An image that is 2048 pixels in width and 1536 pixels in height has a total of 2048×1536 = 3,145,728 pixels or 3.1 megapixels. One could refer to it as 2048 by 1536 or a 3.1-megapixel image.
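The conversion described above is a one-liner (the helper name is ours):

```python
# Megapixels = pixel columns * pixel rows / 1,000,000.
def megapixels(width, height):
    return width * height / 1_000_000

count = 2048 * 1536                  # 3,145,728 pixels
mp = round(megapixels(2048, 1536), 1)  # the "3.1-megapixel" figure above
```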
A bitmap is one of many types of file formats for images stored in computerized form. It carries the extension .BMP. Computers use bits of 1 and 0 to store data, and a bitmap is literally a map of bits that form a particular picture when rendered to a display such as a computer monitor. To understand how a bitmap image displays, it helps to understand the computer display screen: the display is made up of rows and columns of tiny blocks, or pixels. In a bitmap image, each pixel is assigned at least one bit to indicate whether the pixel should reflect the background color, the foreground color, or some other color.

In the case of a page of black and white text, consider a single letter. The many pixels that make up that letter require only one bit of data each: the pixel is either black or white, 1 or 0. When a bitmap displays a colored image, such as a lake scene, there are many gradations of color and lighting. In this case, each pixel in the bitmap might have 16, 24, or 48 bits of information associated with it. The more bits, the greater the colour depth of the bitmap—and the larger the file. Because bitmaps at the highest colour depths store so much information, they can make very beautiful images. However, a bitmap image doesn’t rescale well: if blown up using a graphics program, it becomes blocky and blurred, and if reduced, it loses clarity. Compression techniques are used to shrink the file size of the bitmap while maintaining as much data as is necessary to render a good picture. One such format is the 8-bit .GIF format, which uses a palette of 256 colors. The advantage of the compressed .GIF is that it is a smaller file, and because it uses lossless compression no further image data is discarded. The disadvantage is that it cannot faithfully reproduce images containing more than 256 colors.
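The relationship between bit depth and file size is plain arithmetic. A sketch (the 2550×3300 page size is our assumption of a US Letter page scanned at 300 dpi):

```python
# Raw (uncompressed) bitmap size = pixel count * bits per pixel / 8.
def raw_bitmap_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

mono_page  = raw_bitmap_bytes(2550, 3300, 1)    # 1-bit black-and-white page
true_color = raw_bitmap_bytes(2550, 3300, 24)   # 24-bit colour: 24x larger
palette_colours = 2 ** 8                        # 8 bits per pixel, as in GIF: 256 colours
```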
The BMP file format was created by Microsoft and IBM and is therefore very closely bound to the architecture of the main hardware platform that both companies support: the IBM-compatible PC.
JPEG (Joint Photographic Experts Group)
JPEG is a standardised image compression mechanism. JPEG is designed for compressing either full-colour (24-bit) or grey-scale digital images of “natural” (real-world) scenes. It works well on photographs, naturalistic artwork, and similar material; not so well on lettering, simple cartoons, or black-and-white line drawings (files come out very large). JPEG handles only still images, but there is a related standard called MPEG for motion pictures. JPEG is “lossy”, meaning that the image you get out of decompression isn’t quite identical to what you originally put in. The algorithm achieves much of its compression by exploiting known limitations of the human eye, notably the fact that small colour details aren’t perceived as well as small details of light-and-dark. Thus, JPEG is intended for compressing images that will be looked at by humans. A lot of people are scared off by the term “lossy compression”. But when it comes to representing real-world scenes, no digital image format can retain all the information that impinges on your eyeball. By comparison with the real-world scene, JPEG loses far less information than GIF.
GIF (Graphics Interchange Format)
The Graphics Interchange Format was developed in 1987 by CompuServe, which needed a platform-independent image format suitable for transfer across slow connections. It is a lossless compressed format (it uses LZW compression) and compresses at ratios of between 3:1 and 5:1. It is an 8-bit format, which means the maximum number of colours supported by the format is 256. There are two GIF standards, 87a and 89a (developed in 1987 and 1989 respectively). The 89a standard has additional features such as improved interlacing, the ability to define one colour as transparent, and the ability to store multiple images in one file to create a basic form of animation.
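The core of LZW is a dictionary that grows as the input is read, so repeated patterns are replaced by short codes. A minimal compressor of that kind (GIF adds variable-width codes, clear/end codes, and byte packing on top of this core scheme):

```python
# Minimal LZW compression: start with a dictionary of all 256 single bytes,
# extend it with every new sequence seen, and emit codes for the longest
# sequences already in the dictionary.
def lzw_compress(data: bytes):
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    current = b""
    out = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in table:
            current = candidate          # keep extending the match
        else:
            out.append(table[current])   # emit code for the longest match
            table[candidate] = next_code # learn the new sequence
            next_code += 1
            current = bytes([byte])
    if current:
        out.append(table[current])
    return out
```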
PNG (Portable Network Graphics format)
Portable Network Graphics (PNG) is a bitmapped image format that employs lossless data compression. PNG was created to improve upon and replace GIF (Graphics Interchange Format) as an image-file format not requiring a patent license.
In January 1995 Unisys, the holder of the patent on the LZW compression technique the GIF format uses, announced that it would be enforcing that patent. This meant that commercial developers including the GIF encoding or decoding algorithms had to pay a license fee to Unisys. This did not concern users of GIFs or non-commercial developers. However, a number of people banded together and created a completely patent-free graphics format called PNG (pronounced “ping”), the Portable Network Graphics format. PNG is superior to GIF in that it has better compression and supports millions of colours. PNG files end in a .png suffix. For more information, try the PNG home page.
PNG supports palette-based images (with palettes of 24-bit RGB or 32-bit RGBA colors), greyscale images (with or without alpha channel), and RGB[A] images (with or without alpha channel). PNG was designed for transferring images on the Internet, not for print graphics, and therefore does not support non-RGB color spaces such as CMYK.
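Two small, concrete details from the points above: every PNG file opens with the same 8-byte signature, and the alpha channel enables the standard “over” blend when a translucent pixel is drawn onto a background. A sketch (the blend function is our illustration of the compositing arithmetic, not a PNG library call):

```python
# The fixed 8-byte signature that begins every PNG file.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

# "Over" compositing: blend a foreground RGB pixel onto an opaque background
# using the foreground's alpha (0.0 = transparent, 1.0 = opaque).
def over(fg, alpha, bg):
    return tuple(round(alpha * f + (1 - alpha) * b) for f, b in zip(fg, bg))
```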
TIFF (Tagged Image File Format)
TIFF is a file format for storing images, popular among Apple Macintosh owners, graphic artists, the publishing industry, and both amateur and professional photographers in general. Originally created by the company Aldus for use with what was then called “desktop publishing”, the TIFF format is widely supported by image-manipulation applications, by publishing and page layout applications, and by scanning, faxing, word processing, optical character recognition and other applications. As of 2009, it is under the control of Adobe Systems.
TIFF is a flexible, adaptable file format for handling images and data within a single file, by including the header tags (size, definition, image-data arrangement, applied image compression) defining the image’s geometry. For example, a TIFF file can be a container holding compressed (lossy) JPEG and (lossless) PackBits compressed images. A TIFF file also can include a vector-based clipping path (outlines, croppings, image frames). Other TIFF options are layers and pages.
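The “tags in a header” design is visible right at the start of the file: a TIFF begins with a byte-order mark (“II” for little-endian, “MM” for big-endian), the magic number 42, and the offset of the first image file directory (IFD), which holds the tags. A minimal header reader (illustrative only; it stops before parsing the IFD itself):

```python
import struct

# Parse the 8-byte TIFF header: byte order, magic number 42, first IFD offset.
def read_tiff_header(data: bytes):
    order = {b"II": "<", b"MM": ">"}[data[:2]]
    magic, ifd_offset = struct.unpack(order + "HI", data[2:8])
    if magic != 42:
        raise ValueError("not a TIFF file")
    return ("little" if order == "<" else "big", ifd_offset)
```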
EPS (Encapsulated PostScript)
EPS is a DSC-conforming PostScript document with additional restrictions, intended to be usable as a graphics file format. In other words, EPS files are more or less self-contained, reasonably predictable PostScript documents that describe an image or drawing and can be placed within another PostScript document. EPS, together with DSC’s Open Structuring Conventions, forms the basis of early versions of the Adobe Illustrator Artwork file format. EPS files also frequently include a preview picture of the content, for on-screen display. The idea is to allow a simple preview of the final output in any application that can draw a bitmap. When EPS was first implemented, the only machines widely using PostScript were Apple Macintoshes.
A number of programs can save or convert text and vector art to EPS format.
PICT is a graphics file format introduced on the original Apple Macintosh computer as its standard metafile format. It allows the interchange of graphics (both bitmapped and vector), and some limited text support, between Mac applications, and was the native graphics format of QuickDraw.
The original version, PICT 1, was designed to be as compact as possible while describing vector graphics. To this end, it featured single byte opcodes, many of which embodied operations such as “do the previous operation again”. As such it was quite memory efficient, but not very expandable. With the introduction of the Macintosh II and Color QuickDraw, PICT was revised to version 2. This version featured 16-bit opcodes and numerous changes which enhanced its utility. PICT 1 opcodes were supported as a subset for backward compatibility.
Within a Mac application, any sequence of drawing operations could be simply recorded/encoded to the PICT format by opening a “Picture”, then closing it after issuing the required commands. By saving the resulting byte stream as a resource, a PICT resource resulted, which could be loaded and played back at any time. The same stream could be saved to a data file on disk (with 512 bytes of unused header space added) as a PICT file.
With the change to Mac OS X and deprecation of QuickDraw, PICT was dropped in favour of Portable Document Format (PDF) as the native metafile format, though PICT support is retained by many applications as it is so widely supported on the Mac.
Comparing Bitmaps and Vectors
A bitmapped (or raster) image is an image made up of pixels – for example, a JPEG photo from a digital camera or a GIF image in a web page. Bitmapped images are great for storing real-world images, such as photos, that can’t easily be mathematically defined. The main disadvantages of bitmapped images are their large file size and the fact that they can’t be scaled well – if you enlarge a bitmapped image you quickly start seeing aliasing or, to use a technical term, “jaggy bits”. In contrast, a vector image is defined mathematically using lines and curves. It’s not made up of pixels like a bitmapped image is. This means that vector images are resolution-independent; it doesn’t matter how big or small you scale a vector image, it never loses detail. Vectors are great for storing images that are easily described using lines and curves and that need to stay pin sharp at any resolution – for example: type, logos, geometric shapes, and charts. Excerpt from Photoshop CS3 Layers Bible, published by Wiley Publishing, Inc.