Abstract:
Methods and systems for object detection using laser point clouds are described herein. In an example implementation, a computing device may receive laser data indicative of a vehicle's environment from a sensor and generate a two-dimensional (2D) range image that includes pixels indicative of respective positions of objects in the environment based on the laser data. The computing device may modify the 2D range image to provide values to given pixels that map to portions of objects in the environment lacking laser data, which may involve providing values to the given pixels based on the average value of pixels neighboring the given pixels. Additionally, the computing device may determine normal vectors of sets of pixels that correspond to surfaces of objects in the environment based on the modified 2D range image and may use the normal vectors to provide object recognition information to systems of the vehicle.
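For illustration only, the following is a minimal Python sketch of the described pipeline, assuming a dense floating-point range image in which 0.0 marks pixels lacking laser data; the function names, the 8-neighbor averaging window, and the gradient-based normal estimate are assumptions, not details taken from the disclosure.

```python
# Minimal sketch, assuming 0.0 marks pixels with no laser return.
import numpy as np

def fill_missing(range_image):
    """Assign each empty pixel the average of its valid 8-neighbors."""
    filled = range_image.copy()
    rows, cols = range_image.shape
    for r in range(rows):
        for c in range(cols):
            if range_image[r, c] == 0.0:  # pixel lacks laser data
                neighbors = range_image[max(r-1, 0):r+2, max(c-1, 0):c+2]
                valid = neighbors[neighbors > 0.0]
                if valid.size:
                    filled[r, c] = valid.mean()
    return filled

def estimate_normals(range_image):
    """Approximate per-pixel surface normals from local range gradients."""
    dr_dy, dr_dx = np.gradient(range_image)
    normals = np.dstack((-dr_dx, -dr_dy, np.ones_like(range_image)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals

image = np.random.rand(8, 8) + 1.0
image[3, 4] = 0.0                      # simulate a dropped laser return
normals = estimate_normals(fill_missing(image))
```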
Abstract:
Example methods and systems for detecting reflective markers at long range are provided. An example method includes receiving laser data collected from successive scans of an environment of a vehicle. The method also includes determining a respective size of one or more objects in the environment based on the laser data collected from respective successive scans. The method may further include determining, by a computing device and based at least in part on the respective size of the one or more objects for the respective successive scans, an object that exhibits a change in size as a function of distance from the vehicle. The method may also include determining that the object is representative of a reflective marker. In one example, a computing device may use the detection of one reflective marker to help detect subsequent reflective markers that may be in a similar position.
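A hedged sketch of the size-versus-distance test is given below; the least-squares slope criterion and the threshold value are illustrative assumptions rather than parameters from the disclosure.

```python
# Sketch only: flag an object whose apparent size varies with distance.
def exhibits_size_change(observations, min_slope=0.05):
    """observations: list of (distance_m, apparent_size_m) pairs for one
    tracked object across successive scans."""
    if len(observations) < 2:
        return False
    # Least-squares slope of apparent size with respect to distance.
    n = len(observations)
    mean_d = sum(d for d, _ in observations) / n
    mean_s = sum(s for _, s in observations) / n
    cov = sum((d - mean_d) * (s - mean_s) for d, s in observations)
    var = sum((d - mean_d) ** 2 for d, _ in observations)
    if var == 0:
        return False
    return abs(cov / var) > min_slope

scans = [(10.0, 0.3), (20.0, 0.9), (30.0, 1.6)]  # hypothetical track
print(exhibits_size_change(scans))  # True -> likely a reflective marker
```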
Abstract:
Aspects of the disclosure relate generally to detecting discrete actions by traveling vehicles. The features described improve the safety, use, driver experience, and performance of autonomously controlled vehicles by performing a behavior analysis on mobile objects in the vicinity of an autonomous vehicle. Specifically, an autonomous vehicle is capable of detecting and tracking nearby vehicles and is able to determine when these nearby vehicles have performed actions of interest by comparing their tracked movements with map data.
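As one illustrative example of comparing tracked movements with map data, the sketch below detects a lane change using a deliberately simplified lane model (parallel lanes of fixed width along one axis); the lane geometry and function names are assumptions, not details from the disclosure.

```python
# Simplified map model: parallel lanes along x, fixed width.
LANE_WIDTH = 3.7  # meters, a typical lane width

def lane_index(y_position):
    """Map a lateral position to a lane index."""
    return int(y_position // LANE_WIDTH)

def detect_lane_change(track):
    """track: chronological list of (x, y) positions of a nearby vehicle.
    Returns True if the tracked movement crosses a lane boundary."""
    lanes = [lane_index(y) for _, y in track]
    return any(a != b for a, b in zip(lanes, lanes[1:]))

observed = [(0.0, 1.5), (10.0, 2.4), (20.0, 4.1), (30.0, 5.3)]
print(detect_lane_change(observed))  # True: vehicle moved to next lane
```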
Abstract:
Aspects of the disclosure relate generally to notifying a pedestrian of the intent of a self-driving vehicle. For example, the vehicle may include sensors which detect an object such as a pedestrian attempting or about to cross the roadway in front of the vehicle. The vehicle's computer may then determine the correct way to respond to the pedestrian. For example, the computer may determine that the vehicle should slow down, yield, or stop if it is safe to do so. The vehicle may then provide a notification to the pedestrian of what the vehicle is going to do or is currently doing. For example, the vehicle may include a physical signaling device, an electronic sign or lights, a speaker for providing audible notifications, etc.
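The decision-and-notification flow might be sketched as follows; the distance and time thresholds and the notifier interface are hypothetical, not taken from the disclosure.

```python
# Sketch of choosing a response to a detected pedestrian and notifying
# them; thresholds and the notifier callable are assumed for illustration.
def choose_response(distance_m, time_to_contact_s):
    if time_to_contact_s < 2.0:
        return "stop"      # stop if it is safe to do so
    if distance_m < 30.0:
        return "yield"
    return "slow"

def notify_pedestrian(action, notifier):
    messages = {
        "stop": "Vehicle stopping",
        "yield": "Vehicle yielding to you",
        "slow": "Vehicle slowing down",
    }
    notifier(messages[action])   # e.g. electronic sign, lights, speaker

notify_pedestrian(choose_response(12.0, 1.5), print)
```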
Abstract:
Methods and systems for generating video from panoramic images using transition trees are provided. According to an embodiment, a method for generating a video from panoramic images may include receiving a transition tree corresponding to a current panoramic image from a server. The method may also include determining a path of the transition tree to a next panoramic image based on a user navigation request. The method may further include requesting and receiving a video chunk from the server for each edge of the determined path of the transition tree. The method may also include displaying the requested video chunks in sequence according to the transition tree. According to another embodiment, a system for generating a video from panoramic images may include a transition tree module and a video display module.
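A minimal sketch of the client-side behavior is shown below, assuming the transition tree is encoded as a nested dictionary with one video chunk per edge; fetch_chunk and play_chunk are hypothetical stand-ins for the server request and display steps.

```python
# Sketch: walk the transition tree to the next panorama, requesting and
# then displaying one video chunk per edge of the determined path.
def path_to_pano(tree, root, target):
    """Depth-first search for the edge path leading to `target`."""
    for pano, child in tree.get(root, {}).items():
        if pano == target:
            return [(root, pano)]
        tail = path_to_pano(tree, child, target)
        if tail is not None:
            return [(root, pano)] + tail
    return None

def play_transition(tree, root, target, fetch_chunk, play_chunk):
    path = path_to_pano(tree, root, target) or []
    chunks = [fetch_chunk(edge) for edge in path]  # one request per edge
    for chunk in chunks:                           # display in sequence
        play_chunk(chunk)

tree = {"A": {"B": "B"}, "B": {"C": "C"}}
play_transition(tree, "A", "C", fetch_chunk=lambda e: e, play_chunk=print)
```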
Abstract:
Example methods and systems for detecting weather conditions including wet surfaces using vehicle onboard sensors are provided. An example method includes receiving laser data collected for an environment of a vehicle. The method also includes determining laser data points that are associated with one or more objects in the environment, and based on laser data points being unassociated with the one or more objects in the environment, identifying an indication that a surface on which the vehicle travels is wet. The method may further include receiving radar data collected for the environment of the vehicle that is indicative of a presence of the one or more objects in the environment of the vehicle, and identifying the indication that the surface on which the vehicle travels is wet further based on laser data points being unassociated with the one or more objects in the environment indicated by the radar data.
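The cross-check between laser and radar data might be sketched as follows, with 2D bounding-box association and a 10% unassociated-point threshold as illustrative assumptions.

```python
# Sketch: laser returns that match no radar-confirmed object may be
# reflections off a wet road surface.
def point_in_box(point, box):
    (x, y), (xmin, ymin, xmax, ymax) = point, box
    return xmin <= x <= xmax and ymin <= y <= ymax

def surface_seems_wet(laser_points, radar_object_boxes, ratio=0.10):
    """Flag a wet surface when the fraction of laser points falling
    outside every radar-indicated object box exceeds `ratio`."""
    unassociated = [
        p for p in laser_points
        if not any(point_in_box(p, b) for b in radar_object_boxes)
    ]
    return len(unassociated) / max(len(laser_points), 1) > ratio

points = [(1.0, 1.0), (5.0, 5.0), (5.1, 4.9), (9.0, 2.0)]
boxes = [(0.0, 0.0, 2.0, 2.0)]           # one radar-confirmed object
print(surface_seems_wet(points, boxes))  # True: many stray returns
```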
Abstract:
The present invention relates to annotating images. In an embodiment, the present invention enables users to create annotations corresponding to three-dimensional objects while viewing two-dimensional images. In one embodiment, this is achieved by projecting a selecting object onto a three-dimensional model created from a plurality of two-dimensional images. The selecting object is input by a user while viewing a first image corresponding to a portion of the three-dimensional model. A location corresponding to the projection on the three-dimensional model is determined, and content entered by the user while viewing the first image is associated with the location. The content is stored together with the location information to form an annotation. The annotation can be retrieved and displayed together with other images corresponding to the location.
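One way to illustrate the projection step is the sketch below, which intersects the viewing ray with a single ground plane standing in for the three-dimensional model; the plane geometry and the dictionary used to store annotations are simplifying assumptions.

```python
# Sketch: cast the user's selection into the scene and key the entered
# content by the recovered 3D location.
import numpy as np

def project_selection(camera_pos, ray_dir, plane_z=0.0):
    """Intersect the viewing ray with the plane z = plane_z."""
    t = (plane_z - camera_pos[2]) / ray_dir[2]
    return camera_pos + t * ray_dir

annotations = {}

def annotate(camera_pos, ray_dir, content):
    location = tuple(project_selection(np.asarray(camera_pos, float),
                                       np.asarray(ray_dir, float)))
    annotations[location] = content   # content stored with its location
    return location

loc = annotate((0.0, 0.0, 10.0), (0.1, 0.2, -1.0), "storefront entrance")
print(loc, annotations[loc])
```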
Abstract:
Methods and systems are disclosed for cross-validating a second sensor with a first sensor. Cross-validating the second sensor may include obtaining sensor readings from the first sensor and comparing the sensor readings from the first sensor with sensor readings obtained from the second sensor. In particular, the comparison of the sensor readings may include comparing state information about a vehicle detected by the first sensor and the second sensor. In addition, comparing the sensor readings may include obtaining a first image from the first sensor, obtaining a second image from the second sensor, and then comparing various characteristics of the images. One characteristic that may be compared is the object label applied to the vehicle detected by the first and second sensors. The first and second sensors may be different types of sensors.
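A minimal sketch of the comparison, assuming each sensor reports a detected vehicle's position, speed, and object label, is given below; the tolerance values are illustrative assumptions.

```python
# Sketch: compare state information and object labels from two sensors.
def sensors_agree(reading_a, reading_b,
                  pos_tol_m=0.5, speed_tol_mps=1.0):
    dx = reading_a["x"] - reading_b["x"]
    dy = reading_a["y"] - reading_b["y"]
    position_ok = (dx * dx + dy * dy) ** 0.5 <= pos_tol_m
    speed_ok = abs(reading_a["speed"] - reading_b["speed"]) <= speed_tol_mps
    label_ok = reading_a["label"] == reading_b["label"]
    return position_ok and speed_ok and label_ok

lidar = {"x": 4.0, "y": 1.0, "speed": 12.5, "label": "vehicle"}
camera = {"x": 4.2, "y": 1.1, "speed": 12.9, "label": "vehicle"}
print(sensors_agree(lidar, camera))  # True: second sensor validated
```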
Abstract:
Aspects of the disclosure relate generally to methods and systems for improving object detection and classification. An example system may include a perception system and a feedback system. The perception system may be configured to receive data indicative of a surrounding environment of a vehicle, and to classify one or more portions of the data as representative of a type of object based on parameters associated with a machine learning classifier. The feedback system may be configured to request feedback regarding a classification of an object by the perception system based on a confidence level associated with the classification being below a threshold, and to cause the parameters associated with the machine learning classifier to be modified based on information provided in response to the request.
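The feedback loop might be sketched as follows, assuming a classifier that returns a label with a confidence value; request_feedback, the threshold, and the training queue are hypothetical.

```python
# Sketch: route low-confidence classifications through a feedback channel
# and queue the corrections for later parameter updates.
CONFIDENCE_THRESHOLD = 0.8
training_queue = []   # corrected examples used to update the classifier

def classify_with_feedback(sample, classifier, request_feedback):
    label, confidence = classifier(sample)
    if confidence < CONFIDENCE_THRESHOLD:
        corrected = request_feedback(sample, label)
        training_queue.append((sample, corrected))  # drives retraining
        return corrected
    return label

fake_classifier = lambda s: ("pedestrian", 0.55)
fake_feedback = lambda s, guess: "cyclist"
print(classify_with_feedback("sensor-blob", fake_classifier, fake_feedback))
```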
Abstract:
Methods and systems are disclosed for determining sensor degradation by actively controlling an autonomous vehicle. Determining sensor degradation may include obtaining sensor readings from a sensor of an autonomous vehicle, and determining baseline state information from the obtained sensor readings. A movement characteristic of the autonomous vehicle, such as speed or position, may then be changed. The sensor may then obtain additional sensor readings, and second state information may be determined from these additional sensor readings. Expected state information may be determined from the baseline state information and the change in the movement characteristic of the autonomous vehicle. A comparison of the expected state information and the second state information may then be performed. Based on this comparison, a determination may be made as to whether the sensor has degraded.
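A hedged sketch of the active check follows: record a baseline range, command a known speed change, predict the expected reading, and compare it with the measurement. The kinematic model (driving toward a static object from rest) and the tolerance are simplifying assumptions.

```python
# Sketch: compare expected state information against measured readings
# after a deliberate change in the vehicle's movement.
def expected_range(baseline_range_m, commanded_speed_mps, dt_s):
    """Predict the range to a static object after driving toward it at
    commanded_speed_mps for dt_s seconds, starting from rest."""
    return baseline_range_m - commanded_speed_mps * dt_s

def sensor_degraded(baseline_m, measured_m, commanded_speed_mps,
                    dt_s, tol_m=0.5):
    predicted = expected_range(baseline_m, commanded_speed_mps, dt_s)
    return abs(measured_m - predicted) > tol_m

# Vehicle drives at 2 m/s for 1 s toward a static object 50 m ahead.
print(sensor_degraded(50.0, 48.1, 2.0, 1.0))  # False: within tolerance
```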