17

Digital Imaging & Photography

Michael Scott Sheerin, M.S.*

*Associate Professor, School of Journalism and Mass Communications, Florida International University (Miami, Florida)

“no time for cameras we’ll use our eyes instead”

--from the song Cameras by Matt and Kim

Introduction

Digital imaging and photography is by far the most popular visual medium of all the communication technologies. And it seems that we only need to use our eyes to take pictures, as image capture technology, to borrow from the Greek, has become “phusis,” defined as a natural endowment, condition, or instinct (Miller, 2012). Take a look around you, and I bet there is a device within three feet that you can use to capture an image; perhaps that device is even in your hand as you read this chapter on your mobile phone. For Millennials (18- to 35-year-olds), the mobile phone is a natural appendage, and one that we use a lot: it has been estimated that we will take 1.3 trillion photos in 2017, with nearly 80% of them captured on a mobile phone (Worthington, 2014). These digital images are ultimately shared on social networks like Snapchat (8,796 photos uploaded/second), WhatsApp (700 million uploaded/day), Facebook (350 million photos uploaded/day), or Instagram (70 million uploaded/day), for anyone to see, making the digital image ubiquitous in our culture (Morrison, 2015).

George Orwell’s novel Nineteen Eighty-Four, published in 1949, imagines a dystopian society stripped of its civil liberties by government surveillance, i.e., Big Brother. But as we see today, it’s not Big Brother alone that is watching: the plethora of images shared on social media platforms confirms that everyone is participating in the surveillance, including surveillance of themselves. According to a 2015 study conducted in London by SelfieCity, 5.6% of all images taken in the city and posted on Instagram are selfies, while another 20% are portraits (SelfieCity, 2016).

This opt-in social surveillance via digital self-imaging is especially popular with Millennials: the average age of the selfie takers in the study was 23.7 years for females and 28 years for males, and the majority of the images (62.8%) were taken and posted by women (SelfieCity, 2016). Much of the time, the surveillance Orwell imagined now takes place with the full compliance of its subjects. As Fred Ritchin, professor of Photography and Imaging at New York University’s Tisch School of the Arts, points out, “we are obsessed with ourselves” (Brook, 2011). Couple this “social surveillance” trend with facial-recognition technology, and it’s easy to see how someone could be “tagged” and tracked. In fact, your face doesn’t even need to be in the photo for you to be tagged, as a new, experimental algorithm being used in Facebook’s artificial intelligence lab can detect your presence in an image by other means. The algorithm uses different clues and looks for “unique characteristics like your hairdo, clothing, body shape and pose” (Rutkin, 2015). This detection algorithm concerns many who deal with privacy issues, including Ralph Gross, who conducts post-doctoral research on privacy protection at Carnegie Mellon University’s CyLab. Noting that the Facebook algorithm is impressive, Gross states, “If, even when you hide your face, you can be successfully linked to your identity—that will certainly concern people. Now is a time when it’s important to discuss these questions” (Rutkin, 2015).
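Facebook’s face-independent recognition research is proprietary, but the general idea of finding a person in an image from body shape rather than facial features can be illustrated with off-the-shelf tools. The short Python sketch below uses OpenCV’s stock HOG pedestrian detector; it is only a toy stand-in for the research described above, and the filename photo.jpg is a placeholder.

# A minimal sketch of face-independent person detection using OpenCV's
# built-in HOG pedestrian detector. This is not Facebook's algorithm; it
# only illustrates locating people by body shape rather than by the face.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("photo.jpg")  # placeholder filename
boxes, weights = hog.detectMultiScale(image, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in boxes:
    # draw a rectangle around each detected person
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("photo_detected.jpg", image)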

These findings suggest that the photograph is not just a standalone part of the digital image industry anymore. It has fully converged with the computer and cell phone industries, among others, and has changed the way we utilize our images “post shutter-release.” These digital images are not the same as the photographs of yesteryear. Those analog photos were continuous-tone images, while the digital image is made up of discrete pixels, malleable and traceable to a degree that grows with each new version of photo-manipulation software, GPS tagging, and detection algorithms. And unlike the discovery of photography, which happened when no one alive today was around, this sea change in the industry has happened right in front of us; in fact, we are all participating in it; we are all “Big Brother.” This chapter will look at some of the hardware and software inventions that continue to make capturing and sharing quality images easier, as well as the implications for society, as trillions of digital images enter all our media.

Background

Digital images of any sort, from family photographs to medical X-rays to geo-satellite images, can be treated as data. This ability to take, scan, manipulate, disseminate, track, or store images in a digital format has spawned major changes in the communication technology industry. From the photojournalist in the newsroom to the magazine layout artist, and from the social scientist to the vacationing tourist posting to Facebook via Instagram, digital imaging has changed media and how we view images.

The ability to manipulate digital images has grown exponentially with the addition of imaging software, and doing so has become increasingly easy. Don’t like your facial skin tones on that selfie you just took? Download Facetune to your iOS or Android device. “Instead of attempting to remove blemishes from your picture with an algorithm, the app asks you to swipe away spots” (Purewal, 2015). Want to make your images look like they were shot with a 35mm film camera? Use Faded, an iPhone and iPad app that brings “the nostalgia and beauty of classic film to your photos” (C/Net, 2016).

History tells us that photo manipulation dates back to the film era the Faded app tries to evoke. Images have been manipulated at least as far back as 1906, when a photograph of the San Francisco earthquake was said to have been altered as much as 30%, according to forensic image analyst George Reid. A 1984 National Geographic cover photo of the Great Pyramids of Giza shows two pyramids closer together than they actually are, as they were “moved” to fit the vertical layout of the magazine (Pictures that lie, 2011). In fact, repercussions stemming from the ease with which digital photographs can be manipulated caused the National Press Photographers Association (NPPA), in 1991, to update its code of ethics to encompass digital imaging (NPPA, 2016). Here is a brief look at how the captured, and now malleable, digital image got to this point.

The first photograph ever taken is credited to Joseph Niepce, and it turned out to be quite pedestrian in scope. Using a technique he derived from experimenting with the newly invented lithographic process, Niepce was able to capture the view outside his Saint-Loup-de-Varennes country house in a camera obscura in 1826 (Harry Ransom Center, University of Texas at Austin, 2016). The capture of this image involved an eight-hour exposure of sunlight onto bitumen of Judea, a type of asphalt. Niepce named this process heliography, Greek for “sun writing” (Lester, 2006). It wasn’t long afterward that the first photographic self-portrait, now known as a selfie, was recorded in 1839. (See Figure 17.1) Using the daguerreotype process (often considered the first practical photographic process, developed by Niepce’s business associate Louis Daguerre), Robert Cornelius “removed the lens cap, ran into the frame and sat stock still for five minutes before running back and replacing the lens cap” (Wild, 2016).

The next 150 years included significant innovation in photography. Outdated image capture processes kept giving way to better ones, from the daguerreotype to the calotype (William Talbot), the wet-collodion process (Frederick Archer), the gelatin-bromide dry plate (Dr. Richard Maddox), and the now slowly disappearing continuous-tone panchromatic black-and-white and autochromatic color negative films. Exposure times, meanwhile, have gone from Niepce’s eight hours to 1/500th of a second or less.

Figure 17.1

First ‘Selfie’


Source: Library of Congress

Cameras themselves did not change that much after the early 1900s until digital photography came along. Kodak was the first to produce a prototype digital camera, in 1975. The camera had a resolution of 0.01 megapixels and was the size of a toaster (Kodak, 2016). In 1981, Sony announced a still video camera called the MAVICA, which stands for magnetic video camera (Carter, 2015a). It was not until nine years later, in 1990, that the first digital still camera (DSC) was introduced. Called the Dycam (and manufactured by a company of the same name), it captured images in grayscale and had a resolution lower than that of most video cameras of the time. It sold for a little less than $1,000 and could hold 32 images on its internal memory chip (Aaland, 1992).

In 1994, Apple released the QuickTake 100, the first mass-market color DSC. The QuickTake had a resolution of 640 × 480, equivalent to an NTSC TV image, and sold for $749 (Kaplan & Segan, 2008). Complete with an internal flash and a fixed-focus 50mm lens, the camera could store eight 640 × 480 color images on an internal memory chip and could transfer images to a computer via a serial cable. Other mass-market DSCs released around this time were the Kodak DC-40 in 1995 for $995 (Carter, 2015b) and the Sony Cyber-Shot DSC-F1 in 1996 for $500 (Carter, 2015c).

DSCs and digital single lens reflex (DSLR) cameras work in much the same way as a traditional film camera. The lens and the shutter allow light into the camera based on the aperture and exposure time, respectively. The difference is that the light reacts with an image sensor, usually a charge-coupled device (CCD) sensor, a complementary metal oxide semiconductor (CMOS) sensor, or the newer Live MOS sensor. When light hits the sensor, it creates an electrical charge.
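Because aperture and exposure time work together, photographers often reason about them with the standard exposure value formula, EV = log2(N²/t), where N is the f-number and t is the shutter speed in seconds; combinations with the same EV admit the same amount of light. The short Python sketch below works through a few illustrative settings (the specific values are examples, not taken from this chapter).

# Exposure value from f-number and shutter speed: EV = log2(N^2 / t).
# Settings with equal EV let the same amount of light reach the sensor.
import math

def exposure_value(f_number, shutter_seconds):
    return math.log2(f_number ** 2 / shutter_seconds)

print(round(exposure_value(8, 1 / 500), 1))     # f/8 at 1/500 s    -> ~15.0
print(round(exposure_value(5.6, 1 / 1000), 1))  # f/5.6 at 1/1000 s -> ~14.9 (same exposure)
print(round(exposure_value(2.8, 1 / 30), 1))    # f/2.8 at 1/30 s   -> ~7.9 (far more light)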

The size of this sensor and the number of picture elements (pixels) found on it determine the resolution, or quality, of the captured image. The number of pixels on a given sensor, counted in millions, is referred to as its megapixel (MP) count. The sensors themselves come in different sizes. A common size is 18 × 13.5mm (a 4:3 ratio), now referred to as the Four Thirds System (Four Thirds, 2012). In this system, the sensor area is roughly a quarter of the exposure area of a traditional 35mm camera. Many of the sensors found in DSLRs are full-frame CMOS sensors, the same size as a 35mm film frame (Canon, 2014).
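These relationships are simple arithmetic, as the Python sketch below shows. The 6000 × 4000 pixel grid is a typical example of a 24 MP sensor and is used here only for illustration, and the full-frame dimensions assume the standard 36 × 24mm format.

# Sensor arithmetic from the paragraph above.
full_frame_area = 36.0 * 24.0       # mm^2, the 35mm film format (864 mm^2)
four_thirds_area = 18.0 * 13.5      # mm^2, the Four Thirds sensor size (243 mm^2)
print(four_thirds_area / full_frame_area)   # ~0.28, roughly the quarter cited above

# Megapixels are just the pixel grid dimensions multiplied out, in millions.
width, height = 6000, 4000          # a typical 24 MP sensor
print(width * height / 1e6)         # 24.0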

The pixel, also known in digital photography as a photosite, can only record light in shades of gray, not color. In order to produce color images, each photosite is covered with a red, green, or blue filter, arranged in a mosaic pattern, a technology derived from the broadcast industry. Each filter lets specific wavelengths of light pass through, according to the color of the filter, and blocks the rest. Each pixel is then assigned a full color through a process of mathematical interpolation. Because this is done for millions of pixels at once, it requires a great deal of computer processing. The image processor in a DSC must “interpolate, preview, capture, compress, filter, store, transfer, and display the image” in a very short period of time (Curtin, 2011).
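To make the interpolation step concrete, here is a deliberately simplified Python sketch of “demosaicing” an RGGB color-filter mosaic with bilinear interpolation: each missing color value is filled in by averaging the nearest recorded samples of that color. Real camera processors use far more sophisticated algorithms; this toy version only illustrates the principle, and the RGGB layout is assumed.

# Toy bilinear demosaic of an RGGB Bayer mosaic (2-D float array in [0, 1]).
import numpy as np
from scipy.ndimage import convolve

K_G = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], dtype=float) / 4.0   # green kernel
K_RB = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 4.0  # red/blue kernel

def demosaic_rggb(mosaic):
    h, w = mosaic.shape
    rows, cols = np.indices((h, w))
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)   # red photosites
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)   # blue photosites
    g_mask = ~(r_mask | b_mask)                  # green photosites (checkerboard)
    rgb = np.zeros((h, w, 3))
    rgb[..., 0] = convolve(mosaic * r_mask, K_RB, mode="mirror")
    rgb[..., 1] = convolve(mosaic * g_mask, K_G, mode="mirror")
    rgb[..., 2] = convolve(mosaic * b_mask, K_RB, mode="mirror")
    return rgb

Each output pixel ends up with three color values even though the sensor recorded only one, which is exactly the two-thirds interpolation described later in this chapter.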

This image processing hardware and software is not exclusive to DSCs or DSLRs, as the same technology has continued to improve the image capture capabilities of the camera phone. Since the first mobile phone that could take digital images, the Sharp J-Phone released in Japan in 2000, improvements in lenses, sensor quality, and processing power have made the mobile phone the go-to camera for most of the images captured today (Hill, 2013). An example of this improved image capture technology is the Sony Exmor RS, a 22.5 MP stacked smartphone sensor that is the first to place phase-detection autofocus pixels on a stacked imaging sensor (Horaczek, 2016).

Recent Developments

The digital imaging industry is reaching full maturity, but that doesn’t mean it is not a dynamic industry. Camera hardware and software improvements continue to evolve, as do apps for the manipulation, distribution, and sharing of images across many electronic platforms. Change has always been a part of the photographic landscape. Analog improvements, such as film rolls (an upgrade from expensive and cumbersome plate negatives) and Kodak’s Brownie camera (sold for $1.00), brought photography to the masses at the start of the 1900s. The advent of the digital camera and the digital images it produced in the late 20th century was another major change, as the “jump to screen phenomenon” gradually pushed the film development and printing industry off to the sidelines (Reis, 2016). But it wasn’t until this new digital imaging phase converged with the telephone industry at the start of the 21st century that the industry really underwent a holistic change. The technological advancement that has had the largest impact on the digital imaging and photography industry so far is the use of the mobile phone as a camera.

But technological advances don’t stop there. They continue, usually in one of two ways. The first is incremental improvement, which can be illustrated with a few examples. One is digital sensor improvement, demonstrated by the way camera sensors continue to increase in resolution. Canon’s EOS 5DS R has a full-frame CMOS sensor with a resolution of 50.6 MP, while a new Canon prototype that is not yet available to the public boasts a 250 MP APS-H (Advanced Photo System type-H) sensor (Hornyak, 2015). Perhaps stretching the term incremental is the advent of the gigapixel camera, such as the one being developed in Chile for the Large Synoptic Survey Telescope (LSST) Project. This 3.2-gigapixel camera will be the world’s largest camera and is expected to be completed in 2022 (Zhang, 2015). Each panoramic snapshot that the LSST captures will show an area 40 times the size of a full moon (LSST, 2016).
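A quick Python sketch puts these resolutions in perspective by converting pixel counts into the size of a single uncompressed frame. The assumption of 8-bit RGB (3 bytes per pixel) is purely illustrative and is not how the LSST actually stores its data.

# Rough scale of the resolutions mentioned above, assuming 8-bit RGB
# (3 bytes per pixel) purely to illustrate the data volumes involved.
for name, pixels in [("EOS 5DS R", 50.6e6),
                     ("Canon 250 MP prototype", 250e6),
                     ("LSST camera", 3.2e9)]:
    gigabytes = pixels * 3 / 1e9
    print(f"{name}: {pixels / 1e6:,.1f} MP, ~{gigabytes:.1f} GB uncompressed per frame")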

A dual-lens camera phone is another example of incremental improvement. Though some earlier camera phones had dual lenses, newer uses of the technology, expected in 2017 according to Sony, bring “the smartphone camera quality closer to a DSLR. By using one lens for color information and the other for brightness, the quality of images could improve dramatically” (Nazarian, 2016). Two other lens-related innovations begin to pull technological advancement from the incremental innovation column into the disruptive innovation column, the second of the two types.
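Before turning to those, the color-plus-brightness idea behind dual-lens phones can be sketched in a few lines of Python: keep the chroma from the color camera and take the luminance from the monochrome camera. This is only a toy illustration of the concept, not Sony’s actual pipeline; real dual-camera systems must also align the two views to account for parallax, which is omitted here.

# Toy fusion of a color capture with a cleaner monochrome capture of the
# same scene: chroma from the color camera, luminance from the mono camera.
import numpy as np

LUMA = np.array([0.299, 0.587, 0.114])   # BT.601 luma weights

def fuse(color_img, mono_img):
    # color_img: H x W x 3 floats in [0, 1]; mono_img: H x W floats in [0, 1]
    luma = (color_img * LUMA).sum(axis=-1)     # brightness of the color capture
    chroma = color_img - luma[..., None]       # what remains is the color information
    fused = chroma + mono_img[..., None]       # relight the chroma with the mono capture
    return np.clip(fused, 0.0, 1.0)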

The disruptive innovation that digital imaging is currently undergoing will have the biggest impact on the future of the industry. It’s not just the ubiquitous use of today’s mobile phones for capturing images, coupled with the sharing of those images (due to the rise of social media and the Internet of Things); it’s also the new ways these images are captured that contribute to this disruption. Consider LinX, a computational-camera and camera-array technology company bought by Apple in 2015. The purchase shows that Apple has invested heavily in disruptive innovation technology, starting with the development of multi-aperture camera phones that “can enable effects like background focus blur, parallax images and 3D picture capture,” producing images that rival the quality of DSLRs (Etherington, 2015). Apple’s main competition in this field is the L16 camera by Light, a 52 MP, 16-lens camera prototype that is the size of a smartphone. Set for a late 2016 release (the first promised shipment sold out in early 2016), the camera is touted as “the world’s first multi-aperture computational camera” (Light, 2016). The L16 joins the ranks of the Lytro plenoptic camera, first released in 2011, as a camera that uses light field technology. These cameras capture data rather than images, and process the data after the shutter release to produce the desired photo, including the ability to reinterpret it afterwards by refocusing on any part of the image field.

The light field, defined by Andrey Gershun in 1936 and first used in photography by a team of Stanford researchers in 1996, is part of a broader field known as computational photography, which also includes high-dynamic-range (HDR) images and digital panoramic images, as well as the aforementioned light field cameras and technology. The images produced in computational photography are “not a 1:1 record of light intensities captured on a photosensitive surface, but rather a reconstruction, based on multiple imaging sources” (Maschwitz, 2015). To be clear, optical digital image capture in general is not a 1:1 record of the light entering through the camera lens either. Each photosite records only one color value, as “two thirds of the digital image is interpolated by the processor in the conversion from RAW to JPG or TIF” (Mayes, 2015). The way we see with our own eyes, in fact, can be considered more like a lossy JPG than a true 1:1 recording, as “less than half of what we think of as ‘seeing’ is from light hitting our retinas and the balance is constructed by our brains applying knowledge models to the visual information” (Rubin, 2015). To take it a step further, computational photography can even be done with light captured by a lens-less camera. The light is diffracted through a glass sphere, or captured through a microscopic grate. Because algorithms in the camera’s computer are coded to understand “exactly how light will pass through the sphere (or grate), it decodes and stitches the data from each sensor to make a complete image” (Gershgorn, 2015).
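The simplest way to see what “a reconstruction, based on multiple imaging sources” means is frame stacking: combining a burst of noisy exposures of the same scene into one cleaner image. The Python sketch below simulates this with synthetic data; real computational pipelines (HDR merging, light field rendering, lens-less decoding) also align frames and solve much harder inverse problems, so this shows only the basic principle.

# Averaging a burst of noisy frames of a static scene: noise falls roughly
# with the square root of the number of frames combined.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.2, 0.8, size=(64, 64))                 # stand-in "true" scene
burst = [scene + rng.normal(0, 0.1, scene.shape) for _ in range(16)]

single = burst[0]
stacked = np.mean(burst, axis=0)
print("noise in one frame:  ", round(float(np.std(single - scene)), 3))   # ~0.100
print("noise after stacking:", round(float(np.std(stacked - scene)), 3))  # ~0.025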

Computational photography allows a final image to be reinterpreted in ways never before seen in the industry. The disconnect between the optical capture of old (itself an interpretation of the object being photographed, a “willful distortion of fact,” as Edward Weston eloquently put it in 1932) and the new data capture methods has opened up a realm of possibilities (Weston, et al., 1986). The camera obscura approach that has been in place for upwards of 160 years, used in both analog and digital image capture, with light passing through the lens aperture onto a light-recording medium, is changing. And digital photography is changing with it, as it enters “a world where the digital image is almost infinitely flexible, a vessel for immeasurable volumes of information, operating in multiple dimensions and integrated into apps and technologies with purposes yet to be imagined” (Mayes, 2015).

In the past, photography represented a window to the world, and photographers were seen as voyeurs: outsiders documenting the world for all of us to see. Today, with the image capture ability that a mobile phone puts in the hands of billions, photographers are no longer “them,” outsiders and talented specialists. Rather, they are “us,” insiders who actively participate in world events and everyday living, and, thanks to social media sites such as Snapchat, WhatsApp, Facebook, and Instagram, we share this human condition in a way that was never possible before. This participation also falls in the disruptive innovation column. In Bending the Frame, Fred Ritchin writes, “The photograph, no longer automatically thought of as a trace of a visible reality, increasingly manifested individuals’ desires for certain types of reality. And rather than a system that denies interconnectedness, the digital environment emphasizes the possibility of linkages throughout—from one image to another” (Ritchin, 2013). But Ritchin and others have questioned whether the steady stream of “linked” images posted to sites like Instagram are really photographs. These images, mostly of food, dogs, and cats, as documented in the SelfieCity London study, instead represent a flow of information and are more closely related to cinema than to the static photograph (Ritchin, 2015). This argument about what is or isn’t a photograph, spurred on in part by technological advances in the medium, has played out before, starting with George Eastman’s mass production of dry plates in 1878 (before that innovation, photographers made their own wet-collodion plates).

Lewis Carroll, noted early photographer and author, upon examining a dry plate, reportedly remarked, “Here comes the rabble” (Cicala, 2014). Soon after, film became the capture method of choice in photography, and the term “snapshot,” defined as shooting without aiming, came into vogue at the turn of the 20th century (Rubin, 2015). “No one likes change,” freelance photographer Andrew Lamberson says. “People who shot large format hated on the people who shot medium format, who in turn hated on the people who shot 35mm, who in turn hated on people who shot digital” (McHugh, 2013). It will be interesting to see how Live Photos, Apple’s take on multiple-image capture technology, will be viewed in this respect. The feature produces a short, three-second movie (with audio) at the same time you shoot a still image, and the result can be shared and viewed as either a still or a video, depending on how you interact with it (via Apple’s new 3D Touch technology). Is it a photograph, a video, or something altogether new?

Some post-shutter-release developments that are not a function of the camera itself have also contributed to this disruption, mainly in the mobile phone arena. Apps for image manipulation continue to evolve. Adobe’s Lightroom, perhaps the best photo editing software since Photoshop, was finally released for Android in 2016. It has also effectively replaced Apple’s Aperture for iPhone 6 and 6s users, as Apple is no longer offering support or upgrades for Aperture. Though the mobile version of Lightroom is not as robust as the desktop version, with Apple fully throwing its support behind it, the app promises to be the go-to mobile phone editing app (if it isn’t already). The most popular digital imaging apps for mobile phones are referred to as Combine apps. These apps combine multiple functionalities, “enabling the user to combine photos with other photos, video clips with other videos, or photos with videos. Combine apps also include apps that facilitate combining photos or videos with other objects, such as text, frames, stickers, or clipart” (PMA, 2016). Adobe’s Premiere Clip is a good example. “Users can drag and drop photos and video clips in a timeline interface, add transitions and music, and even import custom special effects from other Adobe Creative Cloud tools” (Corpuz, 2015).

Other apps are starting to make use of the data produced by computational image capture beyond the image itself, pushing the envelope of imaging technology and functionality available on a mobile phone. One example is Shazam’s updated app. Once known only as a music-identification app, it can now transform print images found in magazines or on posters from static images into dynamic pieces of content (Shazam, 2015). The 2016 Sports Illustrated Swimsuit edition featured many images that could be “photozamed” with a smartphone, connecting the reader to additional content such as more images of the swimsuit model and videos from the photo shoot. The app can also be used in conjunction with Google’s Cardboard, a virtual reality (VR) viewing device that lets one view interactive VR images, immersing the viewer in the full swimsuit production. In addition, Google’s Cardboard Camera app allows any Android phone to capture digital images on a continuous 360-degree plane. These images are then stitched together by the software to produce one’s own interactive VR experience.
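Google’s 360-degree stitching pipeline is proprietary, but the same general operation, combining overlapping photos into a single panorama, can be sketched with OpenCV’s high-level Stitcher. The Python example below assumes the opencv-python 4.x package and uses placeholder filenames; it is meant only to illustrate the kind of processing involved, not the Cardboard Camera app itself.

# Stitch overlapping photos into a panorama with OpenCV's built-in Stitcher.
import cv2

files = ["pan_01.jpg", "pan_02.jpg", "pan_03.jpg"]   # overlapping shots, placeholders
images = [cv2.imread(f) for f in files]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)            # write out the merged result
else:
    print("Stitching failed, status code:", status)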

Current Status

The number of digital images taken each year continues to grow at an exponential rate, increasing from 350 billion in 2010 to an estimated 1.3 trillion in 2017 (Heyman, 2015). (Figure 17.2) This means that about 30% of all images ever taken will be captured that year, or, “another way to think about it: Every two minutes, humans take more photos than ever existed in total 150 years ago” (Eveleth, 2015). It is estimated that 75% of the 1.3 trillion images taken in 2017 will be taken on mobile phones. (Figure 17.3) In fact, it’s estimated that “more than 90% of all humans who have ever taken a picture, have only done so on a camera phone, not a stand-alone digital or film-based ‘traditional’ camera” (Ahonen, 2013). So what happens to all these captured digital images? Of the estimated 1 trillion images taken in 2015, roughly 657 billion were uploaded to social media sites (Eveleth, 2015). The overwhelming majority of these images will not be printed, but will instead “jump to screen,” as we view, transfer, manipulate, and post them on high-definition televisions (HDTV), 5K monitors, computer screens, tablets, and mobile phones via social media sites. Because of the Wi-Fi capabilities of our mobile phones, we send images via email, post them in collaborative virtual worlds, or view them on other mobile phones, tablets, and handheld devices, including DSCs and DSLRs. Some of these images will only “jump to screen” for a limited time, due to the increased use of photo apps such as Snapchat, making them somewhat ephemeral in nature.
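A quick back-of-the-envelope calculation in Python shows what a 1.3-trillion-photo year works out to per minute; the figures below simply restate the chapter’s forecast as a rate.

# Back-of-the-envelope rates implied by 1.3 trillion photos in one year.
photos_per_year = 1.3e12
minutes_per_year = 365 * 24 * 60

per_minute = photos_per_year / minutes_per_year
print(f"{per_minute:,.0f} photos per minute")              # ~2.5 million
print(f"{per_minute * 2:,.0f} photos every two minutes")   # ~4.9 million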

Figure 17.2

Digital Photos Taken Worldwide


Data after 2013 are forecasts

Source: InfoTrends

Figure 17.3

Share of Total by Device


Figures are estimates of consumer photos taken based on national surveys. They do not include pictures taken by professional photographers

Source: InfoTrends

Due to this increased use of the mobile phone as a camera, digital camera sales continue to plummet. DSLR sales have dropped 15% since 2014. The lone bright spot has been sales of mirrorless interchangeable-lens cameras (MILCs), which have grown 16.5% over the last year, growth that can be attributed largely to purchases by those under 35 (PMA, 2015a). But as the quality and usability of the mobile phone continue to improve, look for this upward MILC trend to reverse.

One market that continues to grow is photo and video mobile apps. As of 2016, there are more than 78,000 such apps, more than twice the number from just two years earlier. As mentioned previously, the Combine apps are the most popular, making up 29% of the market, followed by apps that are mainly photo-editors at 18% (PMA, 2016).

The ubiquitous nature of the mobile phone has also had some interesting effects on other areas of society. Tourists at the White House, for the first time in 40 years, are allowed to take images with their mobile phones (or compact digital cameras with lenses shorter than 3 inches; DSLRs are still not allowed). And, in keeping with the times, this announcement was initially made on Instagram, as First Lady Michelle Obama tore up a sign of the old policy in a posted video. The accompanying text read, “Big news! Starting today, we’re lifting the ban on cameras and photos on the @WhiteHouse public tour. Visitors are now able to take photos and keep those memories for a lifetime!” (PMA, 2015b). On the negative side of the ledger, due to our obsession with selfies, many public places have banned the use of the selfie stick. Disneyland, the Kentucky Derby, the Museum of Modern Art, and the Smithsonian are a few of the places that have enacted this ban, while South Korea has taken it a step further, as it is “cracking down on selfie sticks and will regulate them via a government agency that monitors telecom devices” (Farace, 2015). Such scrutiny doesn’t seem far-fetched, because back home, according to Crime Feed, Americans are “caught on camera more than 75 times a day” (Crime Feed, 2015). Big Brother is watching you watching you!

Factors to Watch

The End of Moore’s Law?

Much has been said lately about the coming end of Moore’s Law (see Chapter 11), but those predictions may be premature. Intel says that “chips will continue to get denser over the next decade or so, even as costs continue to rise” (Miller, 2016). Also look for changes in chip design, such as the use of stacked sensors, as well as other advances designed for specific applications, to keep the process going.

Film Makes a Comeback?

With many in the Millennial “Hipster” movement rejecting the digital aesthetic, analog technology seems to be regaining some traction. As evidenced by vinyl record sales, which, according to Nielsen, have grown 260% since 2009, film photography may experience a similar bump (Farace, 2015). Time-Zero Polaroid film, produced by the Impossible Project, can be found at Urban Outfitters, among other retailers, with sales of the instant film growing 20% per year from 2013 to 2015 (Farace, 2015).

Energy Self-Sufficient Cameras Arrive

Another concept coming out of the Computer Vision Laboratory at Columbia University (the first being computational photography) is a camera powered by the same light that produces the image. Shree K. Nayar, the T.C. Chang Professor at the lab, states, “Why not redesign the pixels in the camera to do both? It can measure light and convert light to electricity” (Gershgorn, 2015).

Anti-Detection Clothing

As image-recognition algorithms improve and make their way into the industry en masse, privacy abuses are sure to follow. So far, governments and privacy advocates have had little success in establishing a code of conduct for this encroaching technology. Enter the CV Dazzle Project, whose goal is to “break apart the gestalt of a face, or object, and make it undetectable to computer vision algorithms, in particular face detection” (Harvey, 2016). The project name is part acronym, CV for computer vision, and part WWI strategy, as Dazzle refers to the camouflage used to deconstruct the gestalt image of a warship. It’s “an ongoing collaboration between hair stylists, makeup artists, and fashion designers,” exploring “how fashion can be used as camouflage from face-detection technology” (Harvey, 2016).

Getting a Job

Job growth for digital photographers is expected to be only 3% over the next decade, which is “slower than the average for all occupations” (Bureau of Labor Statistics, 2015). Opportunity lies in the integration of the digital imaging industry into other industries. Taking a cue from the most popular mobile photography apps, you can best prepare for these opportunities by combining your digital photography skill set with photo editing, videography and video editing, and multimedia production skills. But there is some good news: according to a recent study on the state of news photography, two-thirds of those working in the field “said they were happy with their choice of livelihood, and 55% feel mostly or always positive about the future” (Hadland, et al., 2015).

Projecting the Future

What will the field of digital imaging and photography look like in 15 years? Let’s start by looking back 15 years, to when the first mobile phones with digital camera capabilities had just been released. No photo-editing apps were available for these devices, and neither Facebook, Instagram, nor, well, any other social media sharing site existed yet. Digital cameras had sensors in the 2-3 MP range. Most predictions about the future of digital imaging and photography made at the time dealt with the sensor wars (true) or trouble ahead for the film industry (true), while most of the chatter was about DSC and DSLR improvements. Not much was said about camera phones, other than predictions that “anything 3 mp or under just won’t compete against the built-into cameras in phones” (Hogan, 2003). No one was predicting that mobile phones would become the dominant factor in digital imaging within 10 years. Big miss!

Maybe we can do better as we take a stab at the next 15 years. We’ll see the continued dominance of mobile phones in the early years, as “pocketography” cameras get better, offer easier-to-use apps for manipulation, editing, and sharing, and reach user saturation. But we’ll want an easier way to take images. Sophisticated algorithms will allow for the development of light-wave capture methods on devices that won’t even need our hands to operate (e.g., Google Glass), devices that are non-intrusive and part of our 24/7 existence. These computational lens-less cameras will continually shoot everything your eyes see (imagine 8K-quality lifelogging), allowing the user to retrieve any single shot (or video) and run it through a series of pre-ordained filters (if desired). The image could be shared on any social media platform or cloud app, complete with any other information the user may want to include (GPS location, local weather, time, venue information, emotional status, you name it), all in the literal blink of an eye. Additionally, technologies outside the digital imaging industry will continue to integrate into this process at an accelerated pace. A current example is HallettWx.com, a site that is “attempting to use science to predict when a sunset is going to be particularly photogenic” (Horaczek, 2015).

These software, hardware, and integration developments will push Henri Cartier-Bresson’s “decisive moment” to the post-capture realm, as the editorial process of choosing a singular image, or video, will play a more important role. There will be no shutter to release. Our eyes will basically become the camera, and our brain will determine the decisive moment and decide what images are worth keeping, editing, and disseminating, as we share our unique world view with others.

As we can see, digital imaging and photography is in a new transformative period. The infinite flexibility of the digital image, the information sources that it can carry, and the possibilities for future usage in apps and yet-to-be developed technologies all are indicative of how robust and dynamic the field is. And it’s exciting that the only tool we need to participate in the creation of digital images is in our hands.

Bibliography

Aaland, M. (1992). Digital photography. Avalon Books, CA: Random House.

Ahonen, T. (2013). The Annual Mobile Industry Numbers and Stats Blog. Communities Dominate Brands. Retrieved February 26, 2016 from http://communities-dominate.blogs.com/brands/2013/03/the-annual-mobile-industry-numbers-and-stats-blog-yep-this-year-we-will-hit-the-mobile-moment.html.

Brook, P. (2011). Raw Meet: Fred Ritchin Redefines Digital Photography. Wired. Retrieved February 24, 2016 from http://www.wired.com/rawfile/2011/09/fred-ritchin/all/1.

Bureau of Labor Statistics. (2015). Photographers. Job Outlook. Occupational Outlook Handbook. Retrieved February 27, 2016 from http://www.bls.gov/ooh/media-and-communication/photographers.htm#tab-6.

C/Net. (2016). Faded Photo Editor. Digital Photo Tools Retrieved February 24, 2016 from http://download.cnet.com/Faded-Photo-Editor/3000-12511_4-76282908.html.

Canon. (2014). Technology Used in Digital SLR Cameras. Canon Global. Retrieved February 24, 2016 from http://www.canon.com/technology/canon_tech/explanation/35mm.html.

Carter, R. L. (2015a). DigiCam History Dot Com. Retrieved February 24, 2016 from http://www.digicamhistory.com/1980_1983.html.

Carter, R. L. (2015b). DigiCam History Dot Com. Retrieved February 24, 2016 from http://www.digicamhistory.com/1995%20D-Z.html.

Carter, R. L. (2015c). DigiCam History Dot Com. Retrieved February 24, 2016 from http://www.digicamhistory.com/1996%20S-Z.html.

Cicala, R. (2014). Disruption and Innovation. PetaPixel. Retrieved February 25, 2016 from http://petapixel.com/2014/02/11/disruption-innovation/.

Corpuz, J. (2015). 10 Best Video Editing Apps for Phones and Tablets. Tom’s Guide. Retrieved February 25, 2016 from http://www.tomsguide.com/us/pictures-story/511-Video-Editor-Android-iOS-Video-Filters.html.

Crime Feed. (2015). How Many Times Are You Caught On Surveillance Cameras Per Day? Eyes on You. Retrieved February 26, 2016 from http://crimefeed.com/2015/02/eyes-many-times-caught-surveillance-cameras-per-day/.

Curtin, D. (2011). How a digital camera works. Retrieved February 24, 2016 from http://www.shortcourses.com/guide/guide1-3.html.

Etherington, D. (2015). Apple Buys LinX, A Camera Module Maker Promising DSLR-Like Mobile Performance. TechCrunch. Retrieved February 25, 2016 from http://techcrunch.com/2015/04/14/apple-buys-linx-a-camera-module-maker-promising-dslr-like-mobile-performance/.

Eveleth, R. (2015). How Many Photographs of You Are Out There In the World? The Atlantic. Retrieved February 26, 2016 from http://www.theatlantic.com/technology/archive/2015/11/how-many-photographs-of-you-are-out-there-in-the-world/413389/.

Farace, J. (2015). 7 Trends That Will Change Photography Next Year: Camera & Technology Preview For 2016. Shutterbug. Retrieved February 26, 2016 from http://www.shutterbug.com/content/7-trends-will-change-photography-next-year-camera-technology-preview-2016.

Four Thirds. (2012). Overview. Four Thirds: Standard. Retrieved February 24, 2016 from http://www.four-thirds.org/en/fourthirds/whitepaper.html.

Gershgorn, D. (2015). Photography Without a Lens? Future of Images May Lie in Data. Lens. N.Y. Times. Retrieved February 25, 2016 from http://lens.blogs.nytimes.com/2015/12/23/the-future-of-computational-photography/?_r=1.

Hadland, A. et al. (2015). The State of News Photography: The Lives and Livelihoods of Photojournalists in the Digital Age. Reuters Institute for the Study of Journalism. Retrieved February 27, 2016 from https://reutersinstitute.politics.ox.ac.uk/sites/default/files/The%20State%20of%20News%20Photography.pdf.

Harry Ransom Center-The University of Texas at Austin. (2016). The First Photograph. Exhibitions. Retrieved February 24, 2016 from http://www.hrc.utexas.edu/exhibitions/permanent/.

Harvey, A. (2016). CV Dazzle. Projects. Retrieved February 27, 2016 from https://ahprojects.com/projects/cv-dazzle/#summary.

Heyman, S. (2015). Photos, Photos Everywhere. N. Y. Times. Retrieved February 26, 2016 from http://www.nytimes.com/2015/07/23/arts/international/photos-photos-everywhere.html?_r=1.

Hill, S. (2013). From J-Phone to Lumia 1020: A complete history of the camera phone. Digital Trends. Retrieved February 24, 2016 from http://www.digitaltrends.com/mobile/camera-phone-history/.

Hogan, T. (2003). What Will Happen in 2004? Bythom. Retrieved February 29, 2016 from http://www.bythom.com/2003predictions.htm.

Horaczek, S. (2015). HallettWx.com Tries to Predict When The Sunsets Will Be Good For Photography. Popular Photography. Retrieved February 29, 2016 from http://www.popphoto.com/hallettwxcom-tries-to-predict-when-sunsets-will-be-good-for-photography.

Horaczek, S. (2016). Sony’s New Exmor Stacked Smartphone Camera Sensor Is The First To Use Hybrid Autofocus. Popular Photography. Retrieved February 24, 2016 from http://www.popphoto.com/sonys-new-exmor-stacked-smartphone-camera-sensor-is-first-to-use-hybrid-autofocus.

Hornyak, T. (2015). Canon goes big on resolution with 250-megapixel sensor. PC World. Retrieved February 25, 2016 from http://www.pcworld.com/article/2980945/canon-goes-big-on-resolution-with-250-megapixel-sensor.html.

Kaplan, J. and Segan, S. (2008). 21 Great Technologies That Failed. Features. Retrieved February 24, 2016 from http://www.pcmag.com/article2/0,2817,2325943,00.asp.

Kodak. (2016). Milestones-chronology: 1960-1975. Kodak. Retrieved February 24, 2016 from http://graphics.kodak.com/US/en/corp/aboutus/heritage/milestones/default.htm.

Lester, P. (2006). Visual communication: Images with messages. Belmont, CA: Wadsworth.

Light. (2016). Photography in a whole new Light. Light. Retrieved February 25, 2016 from https://light.co/.

LSST. (2016). The Large Synoptic Survey Telescope. Public and Scientists Home. Retrieved February 25, 2016 from http://www.lsst.org/lsst.

Maschwitz, S. (2015). The Light L16 Camera and Computational Photography. Prolost. Retrieved February 25, 2016 from http://prolost.com/blog/lightl16.

Mayes, S. (2015). The Next Revolution in Photography Is Coming. Time. Retrieved February 25, 2016 from http://time.com/4003527/future-of-photography/.

McHugh, M. (2013). Photographers tussle over whether ‘pro Instagrammers’ are visionaries or hacks. Digital Trends. Retrieved February 25, 2016 from http://www.digitaltrends.com/social-media/are-professional-instagrammers-photo-graphic-visionaries-or-just-hacks/.

Miller, A.D. (2012). A Theology Study of Romans. USA: Showers of Blessing Ministries International Publishing.

Miller, M. (2016). Moore’s Law at a New Crossroads. Forward Thinking. Retrieved February 27, 2016 from http://forward-thinking.pcmag.com/none/342047-moore-s-law-at-a-new-crossroads.

Morrison, K. (2015). How Many Photos Are Uploaded to Snapchat Every Second? SocialTimes. Retrieved February 24, 2015 from http://www.adweek.com/socialtimes/how-many-photos-are-uploaded-to-snapchat-every-second/621488.

National Press Photographers Association. (2016). NPPA Code of Ethics. Retrieved February 24, 2016 from https://nppa.org/node/5145.

Nazarian, R. (2016). Dual-lens cameras are coming to smartphones, but not until next year, says Sony. Digital Trends. Retrieved February 25, 2016 from http://www.digitaltrends.com/mobile/dual-lens-cameras-takeoff-2017/.

PMA. (2015a). Sony claims growth in Mirrorless camera sales. Newsline. Retrieved February 26, 2016 from http://pmanewsline.com/2015/06/07/sony-claims-growth-in-mirrorless-camera-sales/.

PMA. (2015b). 40-year ban lifted: Visitor photography allowed again in the White House. Newsline. Retrieved February 26, 2016 from http://pmanewsline.com/2015/07/01/40-year-ban-lifted-visitor-photography-allowed-again-in-the-white-house/.

PMA. (2016). Top-ranking photo and video apps win with new use cases beyond photo capture, enhancement. Newsline. Retrieved February 25, 2016 from http://pmanewsline.com/2016/02/04/top-ranking-photo-and-video-apps-win-with-new-use-cases-beyond-photo-capture-enhancement/.

Pictures that lie. (2011). C/NET News. Retrieved February 24, 2016 from http://news.cnet.com/2300-1026_3-6033210-1.html.

Purewal, S.J. (2015). 9 apps to help you up your selfie game. MacWorld. Retrieved February 24, 2016 from http://www.macworld.com/article/2907998/9-apps-to-help-you-up-your-selfie-game.html.

Reis, R. et al. (2016). Making the Best of Multimedia: Digital Photography. Writing and Reporting for Digital Media. Dubuque, IA: Kendall Hunt.

Ritchin, F. (2013). Bending the Frame: Photojournalism, Documentary, and the Citizen. New York, NY: Aperture Foundation.

Ritchin, F. (2015). Is Instagram Photography? Does it Matter? International Center of Photography. Retrieved February 24, 2016 from http://www.icp.org/perspective/is-instagram-photography-does-it-matter.

Rubin, M. (2015). The Future of Photography. Photoshop Blog. Retrieved February 25, 2016 from https://blogs.adobe.com/photoshop/2015/08/the-future-of-photography.html.

Rutkin, A. (2015). Facebook can recognise you in photos even if you’re not looking. New Scientist. Retrieved February 24, 2016 from https://www.newscientist.com/article/dn27761-facebook-can-recognise-you-in-photos-even-if-youre-not-looking/#.VYlaDBNViko.

SelfieCity. (2016). London Selfie Demographics. Retrieved February 23, 2016 from http://selfiecity.net/london/#intro.

Shazam. (2015). Shazam Introduces Visual Recognition Capabilities, Opening Up A New World Of Shazamable Content. News. Retrieved February 25, 2016 from http://news.shazam.com/pressreleases/shazam-introduces-visual-recognition-capabilities-opening-up-a-new-world-of-shazamable-content-1168520.

Weston, E. et al. (1986). Edward Weston: Color Photography. Tucson, AZ: Center for Creative Photography.

Wild, C. (2016). 1839. The First Selfie. Mashable. Retrieved February 24, 2016 from http://mashable.com/2014/11/07/first-selfie/#8.B0os9xSkqw.

Worthington, P. (2014). The Official Mylio Memories and Photography Blog: One Trillion Photos in 2015. Mylio. Retrieved February 24, 2015 from http://blog.mylio.com/one-trillion-photos-in-2015/.

Zhang, M. (2015). The World’s Largest and Most Powerful Camera Gets Funding for 3.2-Gigapixel Sky Photos. PetaPixel. Retrieved February 26, 2015 from http://petapixel.com/2015/01/15/worlds-largest-powerful-camera-gets-funding-3-2-gigapixel-sky-photos/.
