Abstract:
Techniques are disclosed for managing image capture and processing in a multi-camera imaging system. In such a system, each of a pair of cameras may output a sequence of frames representing captured image data. The cameras' outputs may be synchronized to each other to bring the cameras' image capture operations into synchronism. The system may assess the image quality of frames output from the cameras and, based on the image quality, designate a pair of the frames to serve as a reference frame pair. Thus, one frame from the first camera and a paired frame from the second camera are designated as the reference frame pair. The system may adjust each reference frame in the pair using other frames from its respective camera. The reference frames also may be processed by other operations within the system, such as image fusion.
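The pair-selection step above can be sketched as follows. This is a minimal illustration, not the patented method: the abstract does not specify the quality metric or how per-camera scores are combined, so the min-combination policy here (a pair is only as good as its weaker frame) is an assumption.

```python
def select_reference_pair(quality_a, quality_b):
    """quality_a[i] and quality_b[i] are image-quality scores for the
    i-th synchronized frame pair from cameras A and B.  Return the
    index of the pair designated as the reference frame pair."""
    assert len(quality_a) == len(quality_b)
    # Assumed policy: combine per-camera scores with min(), so a pair
    # scores only as well as its weaker frame.
    combined = [min(a, b) for a, b in zip(quality_a, quality_b)]
    return max(range(len(combined)), key=combined.__getitem__)

# Pair 2 (0.8, 0.7) has the best worst-case quality.
idx = select_reference_pair([0.4, 0.9, 0.8, 0.3], [0.5, 0.2, 0.7, 0.6])
```

Once the index is chosen, frame `idx` from each camera becomes that camera's reference frame for the subsequent adjustment and fusion stages.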
Abstract:
In an embodiment, an electronic device may be configured to capture still frames during video capture, but may capture the still frames in the 4×3 aspect ratio and at higher resolution than the 16×9 aspect ratio video frames. The device may interleave high resolution 4×3 frames and lower resolution 16×9 frames in the video sequence, and may capture the nearest higher resolution 4×3 frame when the user indicates the capture of a still frame. Alternatively, the device may display 16×9 frames in the video sequence, and then expand to 4×3 frames when a shutter button is pressed. The device may capture the still frame and return to the 16×9 video frames responsive to a release of the shutter button.
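The "nearest higher resolution frame" selection can be sketched as a timestamp search over the interleaved sequence. The frame timing and labels below are illustrative assumptions; the abstract only requires that, on a still-capture request, the 4×3 frame closest in time be chosen.

```python
def nearest_still_frame(frames, shutter_ts):
    """frames: list of (timestamp_ms, kind) tuples, where kind is
    '4x3' (high-res still candidate) or '16x9' (video frame).
    Return the timestamp of the 4x3 frame nearest the shutter press."""
    candidates = [ts for ts, kind in frames if kind == '4x3']
    return min(candidates, key=lambda ts: abs(ts - shutter_ts))

# An assumed interleaved sequence: every other frame is a 4x3 still.
frames = [(0, '16x9'), (33, '4x3'), (66, '16x9'), (99, '4x3')]
```

A shutter press at t = 70 ms would select the 4×3 frame at t = 99 ms, since it is nearer than the one at t = 33 ms.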
Abstract:
Some embodiments provide a method for initiating a video conference using a first mobile device. The method presents, during an audio call through a wireless communication network with a second device, a selectable user-interface (UI) item on the first mobile device for switching from the audio call to the video conference. The method receives a selection of the selectable UI item. The method initiates the video conference without terminating the audio call. The method terminates the audio call before allowing the first and second devices to present audio and video data exchanged through the video conference.
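The call-transition ordering described above (initiate video without dropping audio; drop audio only before presenting conference media) can be sketched as a small state machine. The state names and class shape are assumptions for illustration; the abstract specifies only the ordering of the transitions.

```python
from enum import Enum, auto

class CallState(Enum):
    AUDIO_CALL = auto()          # audio call in progress
    CONFERENCE_PENDING = auto()  # video conference initiated, audio still up
    VIDEO_CONFERENCE = auto()    # conference media presented, audio call ended

class Call:
    def __init__(self):
        self.state = CallState.AUDIO_CALL
        self.audio_call_active = True

    def on_ui_item_selected(self):
        # Initiate the video conference WITHOUT terminating the audio call.
        assert self.state is CallState.AUDIO_CALL
        self.state = CallState.CONFERENCE_PENDING

    def on_conference_established(self):
        # Terminate the audio call BEFORE presenting conference audio/video.
        assert self.state is CallState.CONFERENCE_PENDING
        self.audio_call_active = False
        self.state = CallState.VIDEO_CONFERENCE
```

The intermediate `CONFERENCE_PENDING` state captures the window in which both the audio call and the nascent video conference coexist.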
Abstract:
A controller for an image sensor includes a mode selector that receives a selection between an image capture mode and a data capture mode. An exposure sensor collects exposure data for a scene falling on the image sensor. A command interface sends commands to the image sensor to cause the image sensor to capture an image with a rolling reset shutter operation in which, if the image capture mode is selected, an integration interval for the image sensor is set based on the exposure data. If the data capture mode is selected, the integration interval for the image sensor is set to less than two row periods, preferably close to one row period, without regard to the exposure data. An analog gain may be increased to as large a value as possible in data capture mode. All pixels in a row may be summed before analog-to-digital (A/D) conversion in data capture mode.
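The mode-dependent configuration can be sketched as below. The specific numbers (a one-row-period interval, a 16× gain ceiling) are assumptions; the abstract only fixes the policy: exposure-driven integration in image capture mode, and a fixed sub-two-row interval with maximal analog gain in data capture mode.

```python
MAX_ANALOG_GAIN = 16.0  # assumed sensor maximum

def sensor_config(mode, exposure_rows):
    """Return (integration interval in row periods, analog gain) for
    the selected mode.  In image capture mode the interval follows the
    exposure data; in data capture mode it is pinned near one row
    period regardless of exposure, and gain is raised to the maximum."""
    if mode == 'image':
        return exposure_rows, 1.0      # exposure-driven, nominal gain
    if mode == 'data':
        return 1.0, MAX_ANALOG_GAIN    # ~one row period, max gain
    raise ValueError(f"unknown mode: {mode}")
```

With a rolling reset shutter, a one-row-period integration interval means each row is read out almost immediately after it is reset, which suits decoding data (e.g., machine-readable codes) rather than producing a well-exposed image.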
Abstract:
A method for automatic image capture control and digital imaging is described. An image buffer is initialized to store a digital image produced by an image sensor, through allocation of a region in memory for the buffer that is large enough to store a full resolution frame from the image sensor. While non-binned streaming frames from the sensor are held in the buffer and displayed in preview, the sensor is reconfigured into binning mode; binned streaming frames are then processed in the buffer, but without allocating a smaller region in memory for the buffer. Other embodiments are also described and claimed.
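The buffer-reuse idea can be sketched as follows. The resolution and 2×2 binning factor are assumed values; the point the abstract makes is that the full-resolution allocation is made once and reused for the smaller binned frames, avoiding a reallocation when the sensor switches modes.

```python
FULL_W, FULL_H = 4032, 3024   # assumed full-resolution sensor dimensions
BIN = 2                        # assumed 2x2 binning factor

# Allocate the buffer once, sized for a full-resolution frame.
buf = bytearray(FULL_W * FULL_H)

def process_binned_frame(raw):
    """Copy a binned frame into the existing full-size buffer rather
    than allocating a smaller one; the tail of the buffer goes unused.
    Returns the number of bytes occupied by the binned frame."""
    n = (FULL_W // BIN) * (FULL_H // BIN)
    buf[:n] = raw[:n]
    return n
```

Reusing the oversized buffer trades a little wasted memory for avoiding an allocate/free cycle (and possible allocation failure) at the moment the sensor is reconfigured mid-stream.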
Abstract:
Systems, methods, and computer readable media for dynamically adjusting an image capture device's autofocus (AF) operation based, at least in part, on the device's orientation are described. In general, information about an image capture device's orientation may be used to either increase the speed or improve the resolution of autofocus operations. More particularly, orientation information such as that available from an accelerometer may be used to reduce the number of lens positions (points-of-interest) used during an autofocus operation, thereby improving the operation's speed. Alternatively, orientation information may be used to reduce the lens' range of motion while maintaining the number of points-of-interest, thereby improving the operation's resolution.
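The two orientation-driven strategies can be sketched as below. The lens range, point counts, and the "facing down implies near subjects" heuristic are all assumptions; the abstract specifies only the trade: fewer points-of-interest for speed, or the same number of points over a reduced range for finer resolution.

```python
def af_positions_speed(near, far, n_points, facing_down):
    """Speed mode: keep the full lens range but cut the number of
    points-of-interest when orientation suggests part of the range is
    unlikely to contain the subject (assumed halving policy)."""
    n = n_points // 2 if facing_down else n_points
    step = (far - near) / (n - 1)
    return [near + i * step for i in range(n)]

def af_positions_resolution(near, far, n_points, facing_down):
    """Resolution mode: keep n_points but shrink the scanned range,
    packing the same points into half the travel (assumed policy)."""
    if facing_down:
        far = near + (far - near) / 2  # assume close subjects below
    step = (far - near) / (n_points - 1)
    return [near + i * step for i in range(n_points)]
```

In the first function the lens visits fewer positions, so the sweep finishes sooner; in the second it visits the same number of positions over a shorter travel, so adjacent positions are closer together and focus can be resolved more finely.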
Abstract:
Systems, methods, and devices for applying lens shading correction to image data captured by an image sensor are provided. In one embodiment, multiple lens shading adaptation functions, each modeled based on the response of a color channel to a reference illuminant, are provided. An image frame from the image data may be analyzed to select the lens shading adaptation function corresponding to the reference illuminant that most closely matches the current illuminant. The selected lens shading adaptation function may then be used to adjust a set of lens shading parameters.
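The illuminant-matching step can be sketched as a nearest-neighbor search over per-illuminant statistics. The (R/G, B/G) ratio representation and squared-Euclidean distance are assumed choices; the abstract says only that the reference illuminant most closely matching the current one is selected, and string placeholders stand in for the actual adaptation functions.

```python
def select_shading_function(frame_stats, reference_illuminants):
    """frame_stats: (R/G, B/G) color ratios measured from the current
    frame.  reference_illuminants maps an illuminant name to its
    reference ratios and its lens shading adaptation function.
    Return the adaptation function of the closest reference illuminant."""
    rg, bg = frame_stats

    def dist(name):
        ref_rg, ref_bg = reference_illuminants[name]['ratios']
        return (rg - ref_rg) ** 2 + (bg - ref_bg) ** 2

    best = min(reference_illuminants, key=dist)
    return reference_illuminants[best]['shading_fn']

# Assumed reference data: daylight (D65) vs incandescent (A).
refs = {
    'D65': {'ratios': (0.9, 1.1), 'shading_fn': 'lsc_d65'},
    'A':   {'ratios': (1.4, 0.6), 'shading_fn': 'lsc_a'},
}
```

A warm scene measuring (1.3, 0.7) would land nearer illuminant A, so the incandescent shading function would be selected to adjust the lens shading parameters.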