Richards Media Net is a company that specializes in helping you, your company, or your organization take advantage of Internet-delivered media to deliver your message!

Monday, December 28, 2009

Exploring Next-Gen Cameras (Photo and Video)

© 2009 – Aaron L Richards – All Rights Reserved

Next-generation recording technology for photography and videography (we are calling both cameras here) is busting loose! With advances in wireless communications, camera features and operations, and 3D editing, modeling, and animation software, the cameras of tomorrow will bear little resemblance to those of today!

The first aspect of cameras to look at is the newly arriving communications capabilities. There are a number of technologies to choose from. A camera may have Wi-Fi, WiMAX, Bluetooth, or cellular, perhaps all four. With these capabilities, photos or videos can be delivered to a PC for editing or sharing on YouTube, to a TV, or to a DVD burner anytime the camera is within range of a compatible wireless technology. There are also a number of activities that may be coordinated between multiple camera operators with wireless technologies. We may cover those at another time.
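To make the idea concrete, here is a minimal sketch in Python of the "camera in range, media lands on the PC" workflow. It assumes a hypothetical setup where the camera simply shows up as a mounted folder whenever it is on the Wi-Fi network; the folder paths are placeholders, not a real camera API.

```python
# Minimal sketch: when the camera is in range (mounted as a folder),
# copy any new media files into a PC "inbox" that editing software watches.
# CAMERA_MOUNT and PC_INBOX are hypothetical placeholder paths.
import shutil
import time
from pathlib import Path

CAMERA_MOUNT = Path("/mnt/camera/DCIM")      # hypothetical mount point
PC_INBOX = Path.home() / "Videos" / "inbox"  # where the editing software looks

def sync_new_media(poll_seconds: int = 30) -> None:
    PC_INBOX.mkdir(parents=True, exist_ok=True)
    seen = set()
    while True:
        if CAMERA_MOUNT.exists():            # camera is in range / mounted
            for media in CAMERA_MOUNT.glob("**/*"):
                if media.is_file() and media.name not in seen:
                    shutil.copy2(media, PC_INBOX / media.name)
                    seen.add(media.name)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    sync_new_media()
```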

On to the structural features and operational issues of future cameras.

First, it is quite clear that future cameras will be 3D capable for both stills and video. There exist a number of technologies for filming and consuming 3D content: stereo-optic red and blue, stereo-optic polarized lenses, time-of-flight systems, alternating LCD shutter/image glasses, and “bug's-eye” imaging systems. The benefit of time-of-flight and bug's-eye technologies is that, in addition to creating 3D stereo imagery, they also lend themselves to creating 3D meshes or digital models of objects in the field of view. 3D meshes and models allow a scene to be manipulated later in 3D by a future 3D image-manipulation and video-editing software suite.
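Here is a quick sketch of why depth-sensing (time-of-flight) data lends itself to 3D meshes: every pixel's depth can be back-projected into a 3D point using a simple pinhole-camera model, and those points are the raw material for a mesh. The focal length and image size below are purely illustrative.

```python
# Back-project a depth image into a 3D point cloud (pinhole camera model).
import numpy as np

def depth_to_points(depth: np.ndarray, focal_px: float) -> np.ndarray:
    """Convert a depth image (meters per pixel) into an (N, 3) point cloud."""
    h, w = depth.shape
    cx, cy = w / 2.0, h / 2.0                     # assume principal point at image center
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / focal_px                   # back-project along camera rays
    y = (v - cy) * z / focal_px
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example: a synthetic 4x4 depth image, 2 meters everywhere.
points = depth_to_points(np.full((4, 4), 2.0), focal_px=500.0)
print(points.shape)  # (16, 3) -- one 3D point per pixel, ready for meshing
```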

Today, video editing consists primarily of virtually cutting clips and putting them together in a layering system with creative transitions. Tomorrow's systems, incorporating stereo imagery derived from a 3D mesh/model system, will provide for editing by moving and positioning 3D models in a virtual space. This is in contrast to today's “green screen” video technology, where different video layers are superimposed on one another and may possess different lighting characteristics. Content in one layer also cannot interact with content in other layers, which creates a flat look.

This contrasts with 3D editing, where a 3D model of a person, vehicle or building can be copied from one 3D video and placed in another while maintaining consistent lighting and consistent interaction with other 3D models in a scene. In a 3D scene, created with 3D models, objects can quickly be added, deleted or posed. In addition, future 3D editing suites will facilitate the generation of 3D video from viewpoints differing from those originally filmed. It will become a relatively simple task to place a 3D movie star digital model into a 3D location where the star has never been, to do things the star has never done, and have the star look better than they have ever looked.
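Here is a toy illustration of that editing model: a scene holds 3D objects (mesh, texture, position) plus a single set of lights, so a model copied from one scene to another is automatically lit like everything else around it. The class names and fields are hypothetical, not any real editor's API.

```python
# Toy scene model: objects share the scene's lights, so a copied model
# picks up consistent lighting in its new scene.
import dataclasses
from dataclasses import dataclass, field

@dataclass
class Object3D:
    name: str
    mesh: list          # vertex data (placeholder)
    texture: str        # imagery wrapped around the mesh
    position: tuple = (0.0, 0.0, 0.0)

@dataclass
class Scene3D:
    lights: list = field(default_factory=list)   # one set of lights for the whole scene
    objects: list = field(default_factory=list)

    def place(self, obj: Object3D, position: tuple) -> None:
        obj.position = position
        self.objects.append(obj)                 # rendered under this scene's lights

studio = Scene3D(lights=["softbox_left", "softbox_right"])
beach = Scene3D(lights=["sun_low_warm"])
star = Object3D("movie_star", mesh=[], texture="star_skin.png")

studio.place(star, (0.0, 0.0, 0.0))
beach.place(dataclasses.replace(star), (3.0, 0.0, 1.5))   # same model, now lit by the beach scene's sun
```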

Today this is common in Hollywood's multi-million-dollar blockbusters. With tomorrow's 3D cameras and software, this will happen at prices suitable for the casual hobbyist.

Perhaps one of the most visually obvious innovations in camera design will be the “bug's-eye” lenses. Bug's-eye lenses are paired, hemispherical, geodesic forms used for both imaging and 3D mesh/model generation. Through the use of an interesting software algorithm and an innovative physical design, focusing an image will be achieved not by mechanically moving lenses and mirrors, but through a software technique I call “Algorithmic Focusing and Mesh Extraction.”

Using Algorithmic Focusing and Mesh Extraction, the number of moving parts in a camera is reduced, simplifying design and assembly, and longevity is increased through decreased focusing-related wear during use.
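For readers who want a feel for how focusing in software could work, here is a rough sketch in the spirit of Algorithmic Focusing: given many slightly offset views of a scene (the sort of thing a bug's-eye lens array might provide), shift each view in proportion to its offset and average them, and a chosen depth plane comes into focus with no moving parts. This is a standard shift-and-add refocus, offered as an assumption about how such a camera might operate rather than a description of any shipping product.

```python
# Shift-and-add refocusing from a grid of slightly offset views.
import numpy as np

def refocus(views: np.ndarray, alpha: float) -> np.ndarray:
    """views: (U, V, H, W) grid of grayscale sub-views.
    alpha picks which depth plane ends up sharp."""
    U, V, H, W = views.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))    # shift proportional to the lens offset
            dx = int(round(alpha * (v - cv)))
            out += np.roll(views[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

# Usage: sweep alpha in post-production to "focus" after the shot was taken.
stack = np.random.rand(5, 5, 64, 64)             # stand-in for real captured views
sharp_near = refocus(stack, alpha=2.0)
sharp_far = refocus(stack, alpha=-1.0)
```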

Additionally, the camera will provide the operator with a 3D viewer while imaging, perhaps with binocular-like optics or a diffraction lens.

A substantial amount of meta-data will be available for the imagery created. Even today, latitude, longitude, altitude, orientation, date, and time are available.

In the near future, adding 3D mesh coordinates and a textual representation of recorded dialog to the meta-data will be possible. This enables cool features such as searching through a video for keywords in the audio meta-data. In addition, using both image and mesh data, imaging systems will have the ability to identify things in the imagery generically, such as a “big old fat guy,” using object detection and distinction features. After the technology matures, the software will make both generic and specific distinctions, such as “big old fat guy” and “Uncle Tom sleeping.”
This information will also be embedded into the meta-data, enabling noun-verb searches like “Alex eating ice cream cone” and “Kris bouncing ball.”
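Here is a small sketch of what that meta-data might look like in practice: a per-clip record carrying position, a time-stamped transcript, and noun-verb tags, plus a search function that finds the moments matching a phrase. The schema is hypothetical, just enough structure to make the search work.

```python
# Hypothetical per-clip meta-data record plus a simple phrase search.
clip_metadata = {
    "file": "backyard_0042.vid",
    "latitude": 44.98, "longitude": -93.27, "altitude_m": 256.0,
    "recorded": "2009-12-28T14:03:00",
    "transcript": [                      # (seconds, spoken words)
        (12.5, "watch me bounce the ball"),
        (31.0, "who wants an ice cream cone"),
    ],
    "tags": [                            # (seconds, noun-verb description)
        (12.0, "Kris bouncing ball"),
        (30.5, "Alex eating ice cream cone"),
    ],
}

def find(metadata: dict, query: str) -> list:
    """Return (seconds, text) entries whose text contains the query."""
    hits = []
    for field_name in ("transcript", "tags"):
        for seconds, text in metadata[field_name]:
            if query.lower() in text.lower():
                hits.append((seconds, text))
    return sorted(hits)

print(find(clip_metadata, "bounc"))   # -> [(12.0, 'Kris bouncing ball'), (12.5, ...)]
```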

So let's review and further illustrate the characteristics of future cameras.

First, 3D capabilities are guaranteed, both in stills and in video. Next-Gen cameras will also create 3D meshes and models, consisting of both 3D coordinates and the visual imagery that gets wrapped around the 3D models, be it imagery of skin, bricks, or wood grain, providing a “real”-looking model. Focusing an image won't be done through moving lenses and mirrors but through a technique I call Algorithmic Focusing and Mesh Extraction, which generates 3D mesh and image information from the scene brought into the camera through “bug's-eye” lenses.

This Algorithmic Focusing and Mesh Extraction algorithm allows the camera operator to focus on any element in the photo for clear, sharp pictures, as well as generating a 3D mesh of the scene. Focusing and mesh generation can be done either in real time in the camera or in the post-production phase with software.

Cameras already create 4:3 and 16:9 imagery; that is old news. The new idea is to decompose a video or picture into elements such as a 3D mesh, imagery, audio data, and motion information. Images will be contained and described in a future markup language such as Microsoft's XAML. By decomposing pictures and video into computer- and human-legible XAML or a XAML-like markup language, absolutely incredible feats of audio/visual wonder will be enabled. This has implications in a variety of fields, from entertainment to Homeland Security.
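As a rough illustration (not real XAML or any existing schema), here is how one shot might be decomposed into such a markup document, with separate mesh, imagery, audio, and motion elements, built with Python's standard XML tools. All element and attribute names here are invented for the example.

```python
# Build a hypothetical XAML-like description of one decomposed shot.
import xml.etree.ElementTree as ET

shot = ET.Element("Shot", id="shot-017", aspect="16:9")
ET.SubElement(shot, "Mesh", source="shot-017.mesh", vertices="184220")
ET.SubElement(shot, "Imagery", source="shot-017_texture.png")   # wrapped onto the mesh
ET.SubElement(shot, "Audio", source="shot-017.wav", transcript="shot-017.txt")
motion = ET.SubElement(shot, "Motion")
ET.SubElement(motion, "Object", name="dog", path="run left-to-right",
              start="2.0", end="5.5")

# Both software and people can read the result, which is what makes the
# editing tricks described above (search, recomposition, new viewpoints) practical.
print(ET.tostring(shot, encoding="unicode"))
```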

With cameras' wireless communications capabilities, it will be possible to orchestrate photo and video sources, both live and in post-production, using video content from the cameras on a PC while the cameras are in range. In addition, using either a live feed or post-production footage, it will be possible to introduce elements into scenes that interact with the scene environment, even though they did not physically or visually exist in the scene at imaging time.

This suggests the possibility of merging virtual characters and worlds with a live feed from one or more cameras in various locations around a downtown center or other area of interest.

In the next 5-10 years, camera technologies will explode. You've heard my thoughts on the matter; what are yours?
