Abstract:
A navigation and positioning device including a storage, a mark feature analyzer, a first coordinate fusion component and a second coordinate fusion component is provided. The storage stores map information including a traffic mark feature and its mark coordinate. The mark feature analyzer analyzes whether a captured image has the traffic mark feature. When the captured image has the traffic mark feature, the mark feature analyzer analyzes the traffic mark feature in the captured image and calculates a device coordinate according to the mark coordinate. The first coordinate fusion component fuses the device coordinate and a first fusion coordinate and uses the fused coordinate as a second fusion coordinate. The second coordinate fusion component fuses the second fusion coordinate, traffic carrier inertial information and a global positioning coordinate and uses the fused coordinate as an updated first fusion coordinate.
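The abstract does not specify the fusion algorithm, so the following is a minimal sketch assuming a simple weighted average for both fusion components and a dead-reckoning step for the inertial information; the function names and weights are illustrative only.

```python
import numpy as np

def fuse(coord_a, coord_b, weight_a=0.5):
    """Weighted average of two coordinates (illustrative fusion rule)."""
    return weight_a * np.asarray(coord_a, dtype=float) + \
           (1.0 - weight_a) * np.asarray(coord_b, dtype=float)

def first_coordinate_fusion(device_coord, first_fusion_coord):
    """First coordinate fusion component: device coordinate + first fusion
    coordinate -> second fusion coordinate."""
    return fuse(device_coord, first_fusion_coord)

def second_coordinate_fusion(second_fusion_coord, inertial_delta, gps_coord, gps_weight=0.3):
    """Second coordinate fusion component: propagate with the traffic carrier's
    inertial displacement, then blend with the global positioning coordinate to
    produce the updated first fusion coordinate."""
    predicted = np.asarray(second_fusion_coord, dtype=float) + \
                np.asarray(inertial_delta, dtype=float)
    return fuse(gps_coord, predicted, weight_a=gps_weight)
```

In practice the two fusion components would more likely be realized with a Kalman or complementary filter; the weighted averages above only mirror the data flow described in the abstract.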
Abstract:
An image processing method is adapted to process images captured by at least two cameras in an image system. In an embodiment, the image processing method comprises: matching two corresponding feature points in the two images, respectively, to form a feature point set; selecting at least five most suitable feature point sets by using an iterative algorithm; calculating a most suitable radial distortion homography between the two images according to the at least five most suitable feature point sets; and fusing the images captured by the at least two cameras at each timing sequence by using the most suitable radial distortion homography.
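As a rough sketch of the matching and selection steps, the snippet below uses OpenCV ORB features with brute-force matching and RANSAC. Note that cv2.findHomography estimates a plain projective homography, which only stands in for the radial distortion homography described in the abstract, and the iterative selection of at least five feature point sets is handled internally by RANSAC's sampling rather than shown explicitly.

```python
import cv2
import numpy as np

def match_feature_point_sets(img1, img2):
    """Match corresponding feature points between two images (ORB + Hamming)."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    return pts1, pts2

def estimate_homography(pts1, pts2):
    """Select inlier feature point sets with RANSAC and estimate a homography
    between the two images (projective, as a stand-in for the radial
    distortion homography of the abstract)."""
    H, inlier_mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    return H, inlier_mask
```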
Abstract:
A method for training an image generator includes multiple iterations, each including: inputting a real image to a first generator; generating a generated image by an image transformation branch of the first generator; inputting the generated image to a discriminator; obtaining a loss value from the discriminator; generating a segmented image by an image segmentation branch of the first generator; obtaining a segmentation loss value according to the segmented image; inputting the generated image to a second generator; generating a reconstructed image by the second generator; and obtaining a reconstruction loss value according to the reconstructed image and the real image. A difference between the network weights of the image transformation branch and the image segmentation branch is computed to obtain a similarity loss value. Network parameters of the first and the second generators are updated according to the loss value, the segmentation loss value, the reconstruction loss value and the similarity loss value.
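A minimal sketch of one such iteration in PyTorch is shown below, assuming a first generator G1 whose two branches are exposed as transform_branch and segment_branch with identical layer shapes, a second generator G2 and a discriminator D; the attribute names, loss functions and unit loss weights are assumptions, not taken from the abstract.

```python
import torch
import torch.nn.functional as F

def training_step(real_image, seg_target, G1, G2, D, optimizer):
    # Hypothetical interface: G1 returns (generated_image, segmented_image);
    # its two branches are exposed as G1.transform_branch / G1.segment_branch
    # and are assumed to share the same layer shapes.
    generated, segmented = G1(real_image)

    # Loss value from the discriminator on the generated image.
    logits = D(generated)
    adv_loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    # Segmentation loss from the segmentation branch.
    seg_loss = F.cross_entropy(segmented, seg_target)

    # Reconstruction loss: the second generator maps the generated image back.
    reconstructed = G2(generated)
    recon_loss = F.l1_loss(reconstructed, real_image)

    # Similarity loss: difference between the two branches' network weights.
    sim_loss = sum(F.mse_loss(w_t, w_s) for w_t, w_s in
                   zip(G1.transform_branch.parameters(),
                       G1.segment_branch.parameters()))

    total = adv_loss + seg_loss + recon_loss + sim_loss
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()
```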
Abstract:
An image inpainting method includes the following steps: segmenting images, which includes acquiring a plurality of images, segmenting the plurality of images into noise-contained pixel images and non-noise-contained pixel images, and confirming the position of every noise pixel in the noise-contained pixel images; and performing inpainting on the noise-contained pixel images, which includes finding the offset map and the geometric relationship of the pixel correspondence that is not affected by noise and has minimum parallax, using the offset map or the geometric relationship to extract corresponding pixels that are not affected by noise, and inpainting and substituting the noise pixels in the plurality of images to generate at least one synthetic image free of noise.
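The following sketch illustrates only the substitution step, assuming the noise masks and the minimum-parallax offset maps between views are already available; the data layout (offset_maps[target][src] as per-pixel row/column offsets) is hypothetical.

```python
import numpy as np

def inpaint_with_offset_maps(images, noise_masks, offset_maps, target=0):
    # images:       list of H x W (or H x W x C) arrays from different viewpoints
    # noise_masks:  list of H x W boolean arrays marking noise pixels per image
    # offset_maps:  offset_maps[target][src] = (dy, dx), each H x W, mapping a
    #               pixel of the target image to the corresponding pixel of the
    #               source image (the minimum-parallax correspondence itself is
    #               assumed to be precomputed)
    out = images[target].copy()
    h, w = noise_masks[target].shape
    ys, xs = np.nonzero(noise_masks[target])
    for src in range(len(images)):
        if src == target or ys.size == 0:
            continue
        dy, dx = offset_maps[target][src]
        sy = np.clip(ys + dy[ys, xs], 0, h - 1).astype(int)
        sx = np.clip(xs + dx[ys, xs], 0, w - 1).astype(int)
        usable = ~noise_masks[src][sy, sx]          # source pixel must be noise-free
        out[ys[usable], xs[usable]] = images[src][sy[usable], sx[usable]]
        ys, xs = ys[~usable], xs[~usable]           # remaining pixels try the next view
    return out
```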
Abstract:
A dynamic fusion method of images includes: receiving broadcast information from surrounding vehicles of a host vehicle; determining, according to the broadcast information, whether at least one of the surrounding vehicles travels in the same lane as the host vehicle and thus becomes a neighboring vehicle of the host vehicle; determining whether the neighboring vehicle is so close to the host vehicle that it blocks the view of the host vehicle; and performing a transparentization or translucentization process on the neighboring vehicle in an image captured by the host vehicle when the neighboring vehicle blocks the view of the host vehicle.
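A minimal sketch of the transparentization/translucentization step follows, assuming the see-through view received from the neighboring vehicle has already been warped into the host camera's viewpoint and the blocking vehicle's bounding box is known; both assumptions go beyond what the abstract states.

```python
import numpy as np

def translucentize_vehicle(host_img, see_through_img, bbox, alpha=0.4):
    # bbox = (x, y, w, h): bounding box of the blocking neighboring vehicle in
    # the host image. see_through_img is assumed to already be aligned with the
    # host camera's viewpoint; how that view is obtained is not detailed here.
    x, y, w, h = bbox
    out = host_img.astype(np.float32).copy()
    roi_host = out[y:y + h, x:x + w]
    roi_remote = see_through_img[y:y + h, x:x + w].astype(np.float32)
    out[y:y + h, x:x + w] = alpha * roi_host + (1.0 - alpha) * roi_remote
    return out.astype(host_img.dtype)
```

Setting alpha to 0 would fully transparentize the blocking vehicle, while intermediate values keep it translucently visible.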
Abstract:
A method for in-image periodic noise pixel inpainting is provided. It is determined whether a current frame includes periodic noise pixels, and locations of periodic noise pixels are identified. Non-periodic-noise pixels in a reference frame are utilized to inpaint the periodic noise pixels in the current frame.
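A minimal sketch of the inpainting step, assuming the noise period and phase are already known and the reference frame is free of periodic noise at the flagged locations; the helper names are illustrative.

```python
import numpy as np

def periodic_noise_mask(shape, period, phase):
    # Illustrative mask for column-wise periodic noise with a known period and
    # phase; the abstract does not specify how the period is detected.
    mask = np.zeros(shape, dtype=bool)
    mask[:, phase::period] = True
    return mask

def inpaint_periodic_noise(current, reference, noise_mask):
    # Replace flagged pixels of the current frame with the co-located pixels of
    # a reference frame, assumed to be non-periodic-noise at those locations.
    out = current.copy()
    out[noise_mask] = reference[noise_mask]
    return out
```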
Abstract:
A surrounding bird view image generation method for use in an automobile-side device of an articulated vehicle is provided. The articulated vehicle includes a first body part, a second body part and a connection part. The proceeding directions of the first and the second body parts form an angle. The method includes the steps of: storing an angle-to-surrounding-image model table; detecting the angle and providing an angle measurement; accessing the angle-to-surrounding-image model table to obtain a selected angle and a selected surrounding image model corresponding to the angle measurement; capturing first to sixth adjacent images of the surrounding of the vehicle body by image capturers disposed on the six surrounding sides of the articulated vehicle; and obtaining a practical operating surrounding image by processing the first to the sixth images with the selected surrounding image model.
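The sketch below illustrates the table lookup and the model application, assuming each stored surrounding image model is a set of per-camera ground-plane homographies plus a canvas size; this dictionary layout is an assumption, and overlapping regions are simply overwritten rather than blended.

```python
import numpy as np
import cv2

def select_surrounding_image_model(angle_measurement, angle_model_table):
    # angle_model_table: {angle_deg: model_dict}; the table contents are
    # hypothetical stand-ins for the stored angle-to-surrounding-image models.
    selected_angle = min(angle_model_table, key=lambda a: abs(a - angle_measurement))
    return selected_angle, angle_model_table[selected_angle]

def build_surrounding_bird_view(images, model):
    # model["homographies"]: one ground-plane homography per image capturer;
    # model["canvas_shape"]: (H, W, 3) shape of the bird-view canvas.
    canvas = np.zeros(model["canvas_shape"], dtype=np.uint8)
    for img, H in zip(images, model["homographies"]):
        warped = cv2.warpPerspective(img, H, (canvas.shape[1], canvas.shape[0]))
        mask = warped.sum(axis=-1) > 0
        canvas[mask] = warped[mask]
    return canvas
```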
Abstract:
Surveillance systems with a plurality of cameras and an image processing method thereof are provided. Based on a plurality of images captured by the plurality of cameras, some of the images are translucentized with other images and the other images are stitched with each other according to the spatial geometric relations of the cameras. The benefit of the surveillance system is that monitoring the fields surveilled by the cameras no longer requires watching each image separately.
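As an illustration of the two operations, the snippet below stitches one view onto another with a homography derived from the cameras' spatial relation and blends two aligned views translucently; how views are chosen for stitching versus translucentization is not specified in the abstract.

```python
import cv2

def stitch_pair(img_a, img_b, H_ab):
    # Stitch camera B's image onto camera A's image plane using a homography
    # derived from the cameras' spatial geometric relation (assumed known).
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H_ab, (2 * w, h))
    canvas[:h, :w] = img_a
    return canvas

def translucent_overlay(base, overlay, alpha=0.5):
    # Blend two equal-size, pre-aligned views so one camera's image shows
    # through the other.
    return cv2.addWeighted(base, alpha, overlay, 1.0 - alpha, 0)
```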