Abstract:
There is provided an interactive system, which includes a remote controller. The remote controller is equipped with a camera to capture an operating frame having a user image and a background image therein; and a processing unit to analyze the operating frame to identify a user image section and a background image section within the operating frame corresponding to the user image and the background image respectively, wherein the processing unit generates movement information of the remote controller according to intensity distributions of the user image section and the background image section.
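One possible reading of this scheme can be sketched as follows. This is a minimal illustration, not the patented method: the intensity threshold, the pure-Python frame representation, and the use of the background section's centroid shift as the movement information are all assumptions.

```python
# Hypothetical sketch: segment a grayscale frame into user/background
# sections by an intensity threshold, then estimate remote-controller
# movement from the shift of the background section's centroid between
# two frames. The threshold value and centroid choice are assumptions.

def segment(frame, threshold=128):
    """Split pixel coordinates into user (bright) and background (dark) sections."""
    user, background = [], []
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            (user if value >= threshold else background).append((x, y))
    return user, background

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def movement(frame_a, frame_b, threshold=128):
    """Movement information: displacement of the background centroid between frames."""
    _, bg_a = segment(frame_a, threshold)
    _, bg_b = segment(frame_b, threshold)
    ca, cb = centroid(bg_a), centroid(bg_b)
    return (cb[0] - ca[0], cb[1] - ca[1])
```

A bright (user) region moving left across the frame shifts the background centroid right, so the sign of the result distinguishes the two intensity distributions' relative motion.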
Abstract:
An interactive electronic device includes an image capture module, a response module and a processing module. The image capture module is for capturing images. The processing module is for generating a first or second command set according to the image and outputting a control signal. The response module is for driving the interactive electronic device to perform a first continuous reaction corresponding to a specific pattern contained in the image according to the first command set, or driving the interactive electronic device to perform a second continuous reaction according to the second command set. The processing module is further for replacing, adding or deleting at least one command in the first command set in a random manner, thereby randomly obtaining a new command set.
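The random replace/add/delete step can be sketched as below. The list-of-names representation of a command set and the command pool are illustrative assumptions, not the device's actual command format.

```python
import random

# Minimal sketch (assumed representation): a command set is a list of
# command names; mutate() randomly replaces, adds, or deletes one command
# to obtain a new command set, as the abstract describes.

COMMAND_POOL = ["move_forward", "turn_left", "turn_right", "beep", "blink"]

def mutate(command_set, rng=random):
    new_set = list(command_set)
    # An empty set can only grow; otherwise pick one of the three operations.
    op = rng.choice(["replace", "add", "delete"]) if new_set else "add"
    if op == "replace":
        new_set[rng.randrange(len(new_set))] = rng.choice(COMMAND_POOL)
    elif op == "add":
        new_set.insert(rng.randrange(len(new_set) + 1), rng.choice(COMMAND_POOL))
    else:
        del new_set[rng.randrange(len(new_set))]
    return new_set
```

Passing a seeded `random.Random` instance as `rng` makes the mutation reproducible for testing.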
Abstract:
There is provided an encoding and decoding method and an information recognition device using the same. A code block includes a center coding region and a peripheral coding region arranged around the center coding region. The encoding and decoding method uses the feature of at least one microdot included in the center coding region as codes. The encoding and decoding method uses the feature of at least one microdot included in the peripheral coding region as codes. The encoding and decoding method uses the relative feature between the center coding region and the peripheral coding region as codes. The information recognition device compares the read feature with pre-stored features to decode information such as position codes, object codes, parameter codes and control codes.
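The comparison of read features against pre-stored features can be sketched as a table lookup. The tuple encoding of the three feature types and the example entries are assumptions for illustration only.

```python
# Illustrative sketch: decode a code block by matching its read features
# (center-region feature, peripheral-region feature, and the relative
# feature between the two regions) against a pre-stored feature table.
# The feature encoding and table contents shown here are assumptions.

PRE_STORED = {
    # (center_dots, peripheral_dots, relative_offset): decoded information
    (1, 4, "north"): {"type": "position", "value": (3, 7)},
    (2, 4, "east"):  {"type": "object",   "value": "pen"},
    (1, 6, "south"): {"type": "control",  "value": "undo"},
}

def decode(center_dots, peripheral_dots, relative_offset):
    """Return decoded information, or None when no pre-stored feature matches."""
    return PRE_STORED.get((center_dots, peripheral_dots, relative_offset))
```

In this framing, position codes, object codes, parameter codes and control codes differ only in the `type` field of the stored entry.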
Abstract:
There is provided a gesture detection device including two linear image sensor arrays and a processing unit. The processing unit is configured to compare sizes of pointer images in the image frames captured by the two linear image sensor arrays in the same period or different periods so as to identify a click event.
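One hedged interpretation of the size comparison can be sketched as follows: a pointer approaching and then retreating from the sensors appears larger, then smaller, across periods. The grow-then-shrink criterion and the ratio threshold are assumptions, not the claimed identification rule.

```python
# Hedged sketch of one possible interpretation: track the pointer-image
# size reported by the two linear sensor arrays over successive periods;
# when both sensors see the size grow and then shrink past a ratio
# threshold (pointer moving toward, then away from, the touch surface),
# report a click event. The threshold value is an assumption.

def is_click(sizes_a, sizes_b, ratio=1.3):
    """sizes_a / sizes_b: per-period pointer-image sizes from the two arrays."""
    def swell_and_shrink(sizes):
        peak = max(sizes)
        i = sizes.index(peak)
        return (0 < i < len(sizes) - 1
                and peak >= sizes[0] * ratio
                and peak >= sizes[-1] * ratio)
    return swell_and_shrink(sizes_a) and swell_and_shrink(sizes_b)
```

Requiring agreement between both arrays filters out single-sensor noise spikes.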
Abstract:
An interactive electronic device includes an image capture module, a response module and a processing module. The image capture module is configured to capture an image. The response module is configured to output a control signal according to a pattern contained in the image. The processing module is electrically connected to the image capture module and the response module and configured to drive the interactive electronic device according to the control signal.
Abstract:
There is provided a cleaning robot system including a charging station and a cleaning robot. The charging station includes multiple positioning beacons. The cleaning robot includes an image sensor and a processor. The image sensor is used to acquire light generated by the multiple positioning beacons on the charging station and generate an image frame. The processor is electrically connected to the image sensor, and is used to calculate a relative position with respect to the charging station according to beacon images of the multiple positioning beacons in the image frame to determine a recharge path accordingly.
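The relative-position calculation can be sketched under a pinhole-camera model with two beacons a known distance apart: the pixel separation of the beacon images gives range, and the offset of their midpoint from the image center gives bearing. The focal length, beacon spacing, and image width below are assumed values, not parameters from the patent.

```python
import math

# Minimal sketch, assuming two positioning beacons a known distance apart
# on the charging station and a pinhole-camera model. All constants are
# illustrative assumptions.

FOCAL_PX = 500.0        # assumed focal length in pixels
BEACON_SPACING = 0.20   # assumed beacon separation in meters
IMAGE_CENTER_X = 320.0  # assumed image width of 640 px

def relative_position(beacon_x1, beacon_x2):
    """Return (distance_m, bearing_rad) of the robot relative to the station."""
    pixel_gap = abs(beacon_x2 - beacon_x1)
    # Similar triangles: real spacing / distance == pixel gap / focal length.
    distance = FOCAL_PX * BEACON_SPACING / pixel_gap
    midpoint = (beacon_x1 + beacon_x2) / 2.0
    bearing = math.atan2(midpoint - IMAGE_CENTER_X, FOCAL_PX)
    return distance, bearing
```

A recharge path could then be planned as repeated moves along the current bearing, re-estimating position from each new image frame.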
Abstract:
A method for gesture identification with natural images includes generating a series of variant images by using each two or more successive ones of the natural images, extracting an image feature from each of the variant images, and comparing the varying pattern of the image feature with a gesture definition to identify a gesture. The method is inherently insensitive to image indistinctness, and supports motion estimation along the X, Y and Z axes without requiring the detected object to maintain a fixed gesture.
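The pipeline above can be sketched as below. The frame-difference variant image, the changed-pixel centroid feature, and the monotone-motion gesture definitions are illustrative choices, not the method's actual feature or definitions.

```python
# Hedged sketch of the described pipeline: form a "variant image" as the
# absolute difference of each two successive frames, extract a simple
# image feature (here, the centroid x of changed pixels), and match the
# feature's varying pattern against a gesture definition.

def variant_image(frame_a, frame_b):
    return [[abs(a - b) for a, b in zip(ra, rb)] for ra, rb in zip(frame_a, frame_b)]

def feature(variant, threshold=10):
    """Centroid x of pixels that changed by more than the threshold."""
    xs = [x for row in variant for x, v in enumerate(row) if v > threshold]
    return sum(xs) / len(xs) if xs else None

def identify(frames):
    """Match the feature's varying pattern against two sample gesture definitions."""
    feats = [feature(variant_image(a, b)) for a, b in zip(frames, frames[1:])]
    feats = [f for f in feats if f is not None]
    if len(feats) >= 2 and all(b > a for a, b in zip(feats, feats[1:])):
        return "swipe_right"
    if len(feats) >= 2 and all(b < a for a, b in zip(feats, feats[1:])):
        return "swipe_left"
    return None
```

Because only frame-to-frame differences are used, a blurred but moving object still produces a usable varying pattern, which is the insensitivity to indistinct images the abstract notes.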
Abstract:
There is provided an image positioning method including the steps of: capturing an image frame with an image sensor; identifying at least one object image in the image frame; comparing an object image size of the object image with a size threshold and identifying the object image having the object image size larger than the size threshold as a reference point image; and positioning the reference point image. There is further provided an interactive imaging system.
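The size-threshold comparison step can be sketched as a simple filter. The `(x, y, size)` tuple representation of an already-identified object image is an assumption for illustration.

```python
# Sketch of the size-comparison step, with an assumed representation:
# object images are (x, y, size) tuples already extracted from the image
# frame; those whose size exceeds the threshold are identified as
# reference point images, and their positions are returned.

def find_reference_points(object_images, size_threshold):
    """Keep object images larger than the threshold as reference points."""
    return [(x, y) for (x, y, size) in object_images if size > size_threshold]
```

The returned coordinates are what the final "positioning the reference point image" step would operate on.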