Abstract:
There is provided an optical sensor that acquires a first image frame corresponding to a first flicker period and a second image frame corresponding to a second flicker period. The optical sensor adds the first image frame to the second image frame to generate a sum of image frames for motion detection. Alternatively, the optical sensor respectively adds pixel data of every two pixels in neighboring rows of the first image frame and the second image frame to generate a low-resolution image frame for motion detection.
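As a rough illustration of the two options above, the following Python sketch (not taken from the patent; frame sizes, data types and the downstream motion-detection step are assumptions) sums two frames captured in consecutive flicker periods and, alternatively, adds pixel data from neighboring rows of the two frames to form a low-resolution frame.

```python
import numpy as np

def sum_frames(frame1, frame2):
    """Add the frame of the first flicker period to that of the second,
    producing a full-resolution sum frame for motion detection."""
    return frame1.astype(np.uint16) + frame2.astype(np.uint16)

def bin_neighboring_rows(frame1, frame2):
    """Add pixel data of pixels in neighboring rows of the two frames,
    yielding a half-height, low-resolution frame for motion detection."""
    rows, cols = frame1.shape
    low_res = np.empty((rows // 2, cols), dtype=np.uint16)
    for r in range(0, rows - 1, 2):
        # Row r of the first frame is combined with row r+1 of the second
        # frame (one assumed pairing of "every two pixels in neighboring rows").
        low_res[r // 2] = (frame1[r].astype(np.uint16) +
                           frame2[r + 1].astype(np.uint16))
    return low_res

# Example: two 8-bit frames captured in consecutive flicker periods.
f1 = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
f2 = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
motion_input_full = sum_frames(f1, f2)           # full-resolution sum frame
motion_input_low = bin_neighboring_rows(f1, f2)  # half-height frame
```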
Abstract:
There is provided an encoding and decoding method and an information recognition device using the same. A code block includes a center coding region and a peripheral coding region arranged around the center coding region. The encoding and decoding method uses, as codes, the feature of at least one microdot included in the center coding region, the feature of at least one microdot included in the peripheral coding region, and the relative feature between the center coding region and the peripheral coding region. The information recognition device compares the read features with pre-stored features to decode information such as position codes, object codes, parameter codes and control codes.
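A minimal decoding sketch, assuming the read features can be reduced to hashable labels and that the pre-stored features form a lookup table; the feature names, table contents and code categories below are purely illustrative and not taken from the patent.

```python
# Hypothetical pre-stored feature table: (center feature, peripheral feature,
# relative feature) -> decoded information.
FEATURE_TABLE = {
    ("dot_small", "dots_3", "offset_left"): {"type": "position", "value": (12, 7)},
    ("dot_large", "dots_2", "offset_up"):   {"type": "object",   "value": 0x41},
    ("dot_small", "dots_4", "offset_none"): {"type": "control",  "value": "select"},
}

def decode_block(center_feature, peripheral_feature, relative_feature):
    """Compare the features read from one code block with the pre-stored
    features and return the decoded information, or None if no match."""
    return FEATURE_TABLE.get((center_feature, peripheral_feature, relative_feature))

print(decode_block("dot_small", "dots_3", "offset_left"))
# -> {'type': 'position', 'value': (12, 7)}
```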
Abstract:
An imaging device including a pixel matrix and a processor is provided. The pixel matrix includes a plurality of phase detection pixels and a plurality of regular pixels. The processor performs autofocusing according to pixel data of the phase detection pixels, and determines an operating resolution of the regular pixels according to autofocused pixel data of the phase detection pixels, wherein the phase detection pixels are always-on pixels and the regular pixels are selectively turned on after the autofocusing is accomplished.
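The control flow might resemble the following sketch, assuming a hypothetical sensor object whose method names (read_phase_detection_pixels, in_focus, move_lens, phase_shift, estimate_detail, enable_regular_pixels), detail metric and threshold are invented for illustration; only the ordering, with always-on phase detection pixels driving autofocus before the regular pixels are selectively turned on, follows the abstract.

```python
def autofocus_and_select_resolution(sensor, sharpness_threshold=0.8):
    # The phase detection pixels are always on, so their data is available
    # before the regular pixels are enabled.
    pd_data = sensor.read_phase_detection_pixels()
    while not sensor.in_focus(pd_data):
        sensor.move_lens(sensor.phase_shift(pd_data))
        pd_data = sensor.read_phase_detection_pixels()

    # After autofocusing, the operating resolution of the regular pixels is
    # chosen from the autofocused phase-detection pixel data.
    detail = sensor.estimate_detail(pd_data)   # e.g. a local contrast measure
    resolution = "full" if detail >= sharpness_threshold else "binned"

    # The regular pixels are selectively turned on only now.
    sensor.enable_regular_pixels(resolution)
    return resolution
```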
Abstract:
An imaging device including a condenser lens and an image sensor is provided. The image sensor is configured to sense light penetrating the condenser lens and includes a pixel matrix, an opaque layer, a plurality of microlenses and an infrared filter layer. The pixel matrix includes a plurality of infrared pixels, a plurality of first pixels and a plurality of second pixels. The opaque layer covers a first region of the first pixels and a second region of the second pixels, wherein the first region and the second region are mirror-symmetrically arranged in a first direction. The plurality of microlenses is arranged upon the pixel matrix. The infrared filter layer covers the infrared pixels.
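One possible way to represent such a pixel arrangement in software is sketched below; the tiling pattern, the choice of which half of the first and second pixels is shielded, and the pixel spacing are assumptions for illustration only.

```python
import numpy as np

def build_layout(rows, cols):
    """Build an illustrative layout map: 'I' marks infrared pixels (covered by
    the infrared filter layer), 'L' marks first pixels whose left half is
    covered by the opaque layer, 'R' marks second pixels whose right half is
    covered, so the shielded regions are mirror-symmetric in the row direction,
    and '.' marks unshielded pixels."""
    layout = np.full((rows, cols), ".", dtype="<U1")
    layout[::4, ::4] = "I"       # sparse infrared pixels
    layout[1::4, 1::4] = "L"     # first pixels, left half shielded
    layout[1::4, 3::4] = "R"     # second pixels, right half shielded
    return layout

print(build_layout(8, 8))
```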
Abstract:
There is provided a pupil tracking device including an active light source, an image sensor and a processing unit. The active light source emits light toward an eyeball alternately at a first brightness value and a second brightness value. The image sensor captures a first brightness image corresponding to the first brightness value and a second brightness image corresponding to the second brightness value. The processing unit identifies a brightest region at corresponding positions of the first brightness image and the second brightness image as an active light image.
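A minimal sketch of the identification step, assuming 8-bit grayscale frames of equal size; the window size and the use of an elementwise minimum to find the region that is bright at corresponding positions in both images are assumptions rather than the patented method.

```python
import numpy as np

def find_active_light_image(bright_frame, dim_frame, win=5):
    """Return the (row, col) of the window that is brightest at corresponding
    positions of both brightness images."""
    # The elementwise minimum keeps only positions that are bright in BOTH
    # frames, suppressing glints that appear in only one of them.
    common = np.minimum(bright_frame.astype(np.int32), dim_frame.astype(np.int32))

    best_pos, best_sum = (0, 0), -1
    rows, cols = common.shape
    for r in range(rows - win + 1):
        for c in range(cols - win + 1):
            s = common[r:r + win, c:c + win].sum()
            if s > best_sum:
                best_sum, best_pos = s, (r, c)
    return best_pos  # top-left corner of the identified active light image
```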
Abstract:
There is provided a capacitive touch sensing device including a sensing element, a drive unit, a detection circuit and a processing unit. The sensing element has a first electrode and a second electrode configured to form a coupling capacitance therebetween. The drive unit is configured to input a drive signal to the sensing element. The detection circuit is configured to detect a detection signal coupled to the second electrode from the drive signal through the coupling capacitance and to modulate the detection signal respectively with two signals to generate a two-dimensional detection vector. The processing unit identifies a touch event according to the two-dimensional detection vector.
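The modulation into a two-dimensional detection vector could look like the following sketch, assuming the two mixing signals are quadrature references at the drive frequency; the sample rate, drive frequency, baseline handling and threshold are illustrative assumptions.

```python
import numpy as np

def detection_vector(samples, drive_freq, sample_rate):
    """Modulate (mix) the sampled detection signal with two reference signals
    and integrate, giving one component of the vector per reference."""
    t = np.arange(len(samples)) / sample_rate
    i_comp = np.mean(samples * np.cos(2 * np.pi * drive_freq * t))
    q_comp = np.mean(samples * np.sin(2 * np.pi * drive_freq * t))
    return np.array([i_comp, q_comp])

def is_touch(vector, baseline, threshold=0.05):
    """Identify a touch event from the change of the two-dimensional
    detection vector relative to a no-touch baseline."""
    return np.linalg.norm(vector - baseline) > threshold
```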
Abstract:
A pupil detection device includes an active light source, an image sensor and a processing unit. The active light source emits light toward an eyeball. The image sensor captures at least one image frame of the eyeball to serve as an image to be identified. The processing unit is configured to calculate a minimum gray value in the image to be identified and to identify a plurality of pixels surrounding the minimum gray value and having gray values within a gray value range as a pupil area.
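A minimal sketch of the pupil-area identification, assuming an 8-bit grayscale eye image; the gray value range and the flood fill used to collect the pixels surrounding the darkest pixel are assumptions, not necessarily the patented algorithm.

```python
import numpy as np

def find_pupil_area(image, gray_range=15):
    """Identify pixels surrounding the minimum gray value whose gray values
    fall within [min_gray, min_gray + gray_range] as the pupil area."""
    min_gray = int(image.min())
    seed = tuple(np.argwhere(image == min_gray)[0])   # one darkest pixel

    in_range = image <= (min_gray + gray_range)
    pupil = np.zeros_like(image, dtype=bool)
    stack = [seed]
    while stack:                                      # 4-connected flood fill
        r, c = stack.pop()
        if pupil[r, c] or not in_range[r, c]:
            continue
        pupil[r, c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]:
                stack.append((nr, nc))
    return pupil  # boolean mask of the pupil area
```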