A method performed by one or more computers, the method comprising: obtaining scene context data characterizing a scene in an environment at a current time point, wherein the scene context data includes features of the scene in a scene-centric coordinate system; generating a scene-centric encoded representation of the scene in the environment by processing the scene context data using an encoder neural network; for each target agent: obtaining agent-specific features for the target agent, processing the agent-specific features for the target agent and the scene-centric encoded representation of the scene using a fusion neural network to generate a fused scene representation for the target agent, and processing the fused scene representation for the target agent using a decoder neural network to generate a trajectory prediction output for the target agent in an agent-centric coordinate system for the target agent.
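A minimal Python sketch of the per-agent dataflow this claim recites. The encoder, fusion network, and decoder here are placeholder functions rather than the claimed neural networks; only the overall structure — one shared scene-centric encoding, per-agent fusion and decoding, and output expressed in each agent's own coordinate frame — follows the claim.

```python
# Placeholder per-agent trajectory-prediction dataflow (illustrative only).
import numpy as np

def encode_scene(scene_features):
    # Stand-in "encoder": pools scene-centric features into one embedding.
    return scene_features.mean(axis=0)

def fuse(agent_features, scene_embedding):
    # Stand-in "fusion network": concatenates agent and scene representations.
    return np.concatenate([agent_features, scene_embedding])

def decode_trajectory(fused, agent_pose, horizon=5):
    # Stand-in "decoder": emits waypoints, then expresses them in the agent's
    # own frame by rotating/translating with the agent's pose (x, y, heading).
    x, y, heading = agent_pose
    waypoints_scene = np.cumsum(np.tile(fused[:2], (horizon, 1)), axis=0) + [x, y]
    c, s = np.cos(-heading), np.sin(-heading)
    rot = np.array([[c, -s], [s, c]])
    return (waypoints_scene - [x, y]) @ rot.T  # agent-centric coordinates

scene = np.random.rand(10, 4)                 # 10 scene elements, 4 features each
scene_embedding = encode_scene(scene)         # computed once, shared by all agents
for pose in [(1.0, 2.0, 0.3), (5.0, -1.0, 1.2)]:   # two target agents
    agent_feats = np.random.rand(4)
    fused = fuse(agent_feats, scene_embedding)
    print(decode_trajectory(fused, pose))
```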
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing inputs using slice-based dynamic neural networks. One of the methods includes receiving a new input for processing by a neural network that includes a first conditional neural network layer that has a set of shared parameters and a respective set of slice parameters for each of a plurality of slices. One or more slices to which the new input belongs are identified. The new input is processed to generate a network output, including: receiving a layer input to the first conditional neural network layer; and processing the layer input using the set of shared parameters and the respective sets of slice parameters for the identified one or more slices, but not the respective sets of slice parameters for the slices to which the new input does not belong.
B60W 40/00 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit
G06F 18/211 - Selection of the most significant subset of features
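A hedged numpy sketch of the conditional layer described in the slice-based dynamic neural network abstract above: shared parameters always apply, while per-slice parameters apply only for the slices the input belongs to. The layer shapes and the additive combination of shared and slice outputs are assumptions for illustration.

```python
import numpy as np

class SliceConditionalLayer:
    def __init__(self, d_in, d_out, num_slices, seed=0):
        rng = np.random.default_rng(seed)
        self.w_shared = rng.normal(size=(d_in, d_out))
        self.w_slice = rng.normal(size=(num_slices, d_in, d_out))

    def __call__(self, x, slice_ids):
        # Shared parameters always apply; slice parameters apply only for the
        # identified slices. Parameters of other slices are never touched.
        out = x @ self.w_shared
        for s in slice_ids:
            out = out + x @ self.w_slice[s]
        return out

layer = SliceConditionalLayer(d_in=8, d_out=4, num_slices=10)
x = np.ones(8)
print(layer(x, slice_ids=[2, 7]))   # input belongs to slices 2 and 7
```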
Aspects of the disclosure provide for arranging trips for autonomous vehicles. For instance, a request for a trip may be received by one or more processors of one or more server computing devices. The request may identify an initial location. A weather condition at the initial location may be identified. One or more internal vehicle state conditions and one or more priorities for pulling over may be determined based on the weather condition. A second location may be determined based on the one or more priorities and the initial location. Dispatch instructions may be provided to an autonomous vehicle, the dispatch instructions identifying the second location and the one or more internal vehicle state conditions in order to cause computing devices of the autonomous vehicle to control the autonomous vehicle to the second location and adjust internal vehicle state conditions based on the one or more internal vehicle state conditions.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60H 1/00 - Heating, cooling or ventilating devices
B60W 10/30 - Conjoint control of vehicle sub-units of different type or different function including control of auxiliary equipment, e.g. air-conditioning compressors or oil pumps
Example embodiments relate to low-overhead, bidirectional error checking for a serial peripheral interface. An example device includes an integrated circuit. The device also includes a serial peripheral interface (SPI) with a Master In Slave Out (MISO) channel and a Master Out Slave In (MOSI) channel. The MOSI channel is configured to receive a write address, payload data, and a forward error-checking code usable to identify data corruption within the write address or the payload data. The integrated circuit is configured to calculate and provide a reverse error-checking code usable to identify data corruption within the write address or the payload data. Additionally, the integrated circuit is configured to compare the forward error-checking code to the reverse error-checking code. Further, the integrated circuit is configured to write, to the write address if the forward error-checking code matches the reverse error-checking code, the payload data.
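A hedged sketch of the bidirectional check described above. CRC-8 with polynomial 0x07 is an assumed error-checking code (the abstract does not name one) and the frame layout is illustrative; the structure — the device recomputes a reverse code over the received address and payload and writes only on a match — follows the abstract.

```python
# Assumed code: CRC-8, polynomial 0x07. Frame layout is illustrative.
def crc8(data: bytes, poly: int = 0x07) -> int:
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def device_write(memory, write_addr, payload, forward_code):
    frame = bytes([write_addr]) + payload
    reverse_code = crc8(frame)            # device-side recomputation
    if reverse_code == forward_code:      # codes match -> data not corrupted
        memory[write_addr] = payload      # perform the write
    return reverse_code                   # returned on MISO for host checking

memory = {}
addr, payload = 0x10, b"\xDE\xAD"
fwd = crc8(bytes([addr]) + payload)       # host-side forward code sent on MOSI
device_write(memory, addr, payload, fwd)
print(memory)                             # {16: b'\xde\xad'}
```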
The technology relates to camera systems for vehicles having an autonomous driving mode. An example system includes a first camera mounted on a vehicle in order to capture images of the vehicle's environment. The first camera has a first exposure time and no ND filter. The system also includes a second camera mounted on the vehicle in order to capture images of the vehicle's environment, the second camera having a second exposure time and an ND filter. The system also includes one or more processors configured to capture images using the first camera and the first exposure time, capture images using the second camera and the second exposure time, use the images captured using the second camera to identify illuminated objects, use the images captured using the first camera to identify the locations of objects, and use the identified illuminated objects and identified locations of objects to control the vehicle in an autonomous driving mode.
H04N 23/745 - Detection of flicker frequency or suppression of flicker wherein the flicker is caused by illumination, e.g. due to fluorescent tube illumination or pulsed LED illumination
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
H04N 23/61 - Control of cameras or camera modules based on recognised objects
H04N 23/72 - Combination of two or more compensation controls
H04N 23/73 - Circuitry for compensating brightness variation in the scene by influencing the exposure time
H04N 23/76 - Circuitry for compensating brightness variation in the scene by influencing the image signals
A method includes obtaining multiple images captured by pixel sensors of an image sensor, analyzing, using neural network circuitry integrated in the image sensor, the multiple images for object detection, generating, for each of the multiple images using the neural network circuitry integrated in the image sensor, neural network output data related to results of the analysis of the multiple images for object detection, and transmitting, from the image sensor, the neural network output data for each of the multiple images and image data for a subset of the multiple images instead of image data of each of the multiple images.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
7.
Systems and methods for vehicles with limited destination ability
Aspects of the present disclosure relate generally to limiting the use of an autonomous or semi-autonomous vehicle by particular occupants based on permission data. More specifically, permission data may include destinations, routes, and/or other information that is predefined or set by a third party. The vehicle may then access the permission data in order to transport the particular occupant to the predefined destination, for example, without deviation from the predefined route. The vehicle may drop the particular occupant off at the destination and may wait until the passenger is ready to move to another predefined destination. The permission data may be used to limit the ability of the particular occupant to change the route of the vehicle completely or by some maximum deviation value. For example, the vehicle may be able to deviate from the route up to a particular distance from or along the route.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
B60T 7/22 - Brake-action initiating means for automatic initiation; Brake-action initiating means for initiation not subject to will of driver or passenger, initiated by contact of vehicle, e.g. bumper, with an external object, e.g. another vehicle
B60T 8/00 - Arrangements for adjusting wheel-braking force to meet varying vehicular or ground-surface conditions, e.g. limiting or varying distribution of braking force
B60T 8/17 - Using electrical or electronic regulation means to control braking
B60T 8/88 - Arrangements for adjusting wheel-braking force to meet varying vehicular or ground-surface conditions, e.g. limiting or varying distribution of braking force responsive to a speed condition, e.g. acceleration or deceleration, with failure responsive means, i.e. means for detecting and indicating faulty operation of the speed responsive control means
G06T 7/223 - Analysis of motion using block-matching
G06T 7/231 - Analysis of motion using block-matching using full search
G06T 7/521 - Depth or shape recovery from laser ranging, e.g. using interferometry; Depth or shape recovery from the projection of structured light
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G07C 9/00 - Individual registration on entry or exit
B60W 30/186 - Preventing damage resulting from overload or excessive wear of the driveline; preventing excessive wear or burn out of friction elements, e.g. clutches
B60W 50/029 - Adapting to failures or work around with other constraints, e.g. circumvention by avoiding use of failed parts
B62D 6/00 - Arrangements for automatically controlling steering depending on driving conditions sensed and responded to, e.g. control circuits
G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
Example embodiments relate to techniques and systems for generating multi-axis radar velocity images using stereo radar. A vehicle radar system receives first radar data from a first radar and second radar data from a second radar, which are coupled at different locations on a vehicle traveling in an environment. The system determines a first radial speed and a second radial speed for an object based on the first and second radar data, respectively, and then estimates a velocity vector for the object relative to the vehicle based on the first and second radial speeds. The system can provide the estimated velocity vector as an input into a neural network and enable vehicle systems to control the vehicle based on the output from the neural network.
G01S 13/58 - Velocity or trajectory determination systems; Sense-of-movement determination systems
G01S 7/41 - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
G01S 13/89 - Radar or analogous systems, specially adapted for specific applications for mapping or imaging
G01S 13/931 - Radar or analogous systems, specially adapted for specific applications for anti-collision purposes of land vehicles
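A small worked example of the geometry the stereo-radar abstract above relies on: each radar's radial speed constrains the object velocity along that radar's line of sight, and two such constraints determine a 2-D velocity vector. The least-squares solve is an assumed estimation method; the abstract does not specify one.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

obj = np.array([20.0, 5.0])                       # object position, vehicle frame
radar1, radar2 = np.array([1.0, -0.8]), np.array([1.0, 0.8])   # mount locations
u1, u2 = unit(obj - radar1), unit(obj - radar2)   # lines of sight to the object

v_true = np.array([-10.0, 3.0])                   # object velocity (ground truth)
r1, r2 = u1 @ v_true, u2 @ v_true                 # the two measured radial speeds

A = np.vstack([u1, u2])                           # each row encodes v . u_i = r_i
v_est, *_ = np.linalg.lstsq(A, np.array([r1, r2]), rcond=None)
print(v_est)                                      # recovers approximately [-10, 3]
```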
9.
Pedestrian Countdown Signal Classification to Increase Pedestrian Behavior Legibility
Example embodiments relate to pedestrian countdown signal classification to increase pedestrian behavior legibility. An example embodiment includes a method that includes obtaining, by a computing system of a vehicle, a camera image patch. The method further includes determining, by the computing system, using the camera image patch and a pedestrian countdown signal classifier model, a state of a pedestrian countdown signal. The method also includes determining, by the computing system based on the state of the pedestrian countdown signal, a prediction of whether a pedestrian will enter a crosswalk governed by the pedestrian countdown signal. And the method includes, based on the prediction, causing, by the computing system, the vehicle to perform an invitation action that invites a pedestrian to enter the crosswalk.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
B60Q 5/00 - Arrangement or adaptation of acoustic signal devices
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
Systems and methods are disclosed to identify a presence of a volumetric medium in an environment associated with a LIDAR system. In some implementations, the LIDAR system may emit a light pulse into the environment, receive a return light pulse corresponding to reflection of the emitted light pulse by a surface in the environment, and determine a pulse width of the received light pulse. The LIDAR system may compare the determined pulse width with a reference pulse width, and determine an amount of pulse elongation of the received light pulse. The LIDAR system may classify the surface as either an object to be avoided by a vehicle or as air particulates associated with the volumetric medium based, at least in part, on the determined amount of pulse elongation.
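An illustrative sketch of the classification step in the abstract above: a return pulse elongated well beyond the reference width is attributed to a volumetric medium rather than a hard surface. The 1.5x elongation threshold is an assumed value for illustration.

```python
def classify_return(received_width_ns, reference_width_ns, elongation_ratio=1.5):
    # A pulse stretched well beyond its reference width suggests scattering by
    # a volumetric medium (e.g. fog or dust) rather than a hard surface.
    elongation = received_width_ns / reference_width_ns
    if elongation >= elongation_ratio:
        return "air_particulates"   # volumetric medium; not an obstacle
    return "object_to_avoid"        # hard surface; treat as an obstacle

print(classify_return(received_width_ns=12.0, reference_width_ns=5.0))  # air_particulates
print(classify_return(received_width_ns=5.4, reference_width_ns=5.0))   # object_to_avoid
```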
A method includes obtaining, by a processing device, radar sensor data from a driving environment of an autonomous vehicle (AV). The radar sensor data corresponds to a field-of-view (FOV) of a radar sensor of the AV. The method further includes identifying, by the processing device from the radar sensor data, a potential occlusion within the driving environment, obtaining, by the processing device, non-radar sensor data from the driving environment, the non-radar sensor data corresponding to a FOV of a non-radar sensor of the AV, determining, by the processing device from the non-radar sensor data, whether the potential occlusion is a false occlusion, and in response to determining that the potential occlusion is a false occlusion, removing, by the processing device, the false occlusion from the FOV of the radar sensor.
Aspects of the disclosure provide for automatically generating labels for sensor data. For instance, first sensor data for a vehicle may be identified. This first sensor data may have been captured by a first sensor of the vehicle at a first location during a first point in time and may be associated with a first label for an object. Second sensor data for the vehicle may be identified. The second sensor data may have been captured by a second sensor of the vehicle at a second location at a second point in time outside of the first point in time. The second location is different from the first location. A determination may be made as to whether the object is a static object. Based on the determination that the object is a static object, the first label may be used to automatically generate a second label for the second sensor data.
Aspects of the disclosure provide for controlling behaviors of autonomous vehicles based on evaluation of sensors of those vehicles. For instance, sensor data including distance and intensity information for a point in an environment of an autonomous vehicle may be received. An expected intensity value from a pre-stored fair-weather reference map may be identified based on a location of the point. An effective detection range for the sensor may be dynamically determined based on the expected intensity and the intensity information for the point. A behavior of the autonomous vehicle may be controlled based on the effective detection range.
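A hedged sketch of dynamically deriving an effective detection range from the ratio of measured to expected intensity, as in the abstract above. The inverse-square attenuation model behind the square-root scaling is an assumption; the abstract does not state how the two intensities are combined.

```python
def effective_range(nominal_range_m, expected_intensity, measured_intensity):
    ratio = max(min(measured_intensity / expected_intensity, 1.0), 1e-6)
    # Assumed model: received power ~ 1/R^2, so range scales with sqrt(ratio).
    return nominal_range_m * ratio ** 0.5

# A point returning a quarter of its fair-weather reference intensity halves
# the nominal 200 m range under this assumed model.
print(effective_range(200.0, expected_intensity=0.8, measured_intensity=0.2))  # ~100 m
```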
Aspects and implementations of the present disclosure address shortcomings of the existing technology by enabling velocity estimation for efficient object identification and tracking in autonomous vehicle (AV) applications, including: obtaining, by a sensing system of the AV, a plurality of return points, each return point having a velocity value and coordinates of a reflecting region that reflects a signal emitted by the sensing system, identifying an association of the velocity values and the coordinates of return points with a motion of a physical object, the motion being a combination of a translational motion and a rotational motion of a rigid body, and causing a driving path of the AV to be determined in view of the motion of the physical object.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G01S 7/48 - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 17/00
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 7/539 - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 15/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
G01S 17/34 - Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated using transmission of continuous, frequency-modulated waves while heterodyning the received signal, or a signal derived therefrom, with a locally-generated signal related to the contemporaneously transmitted signal
G01S 17/58 - Velocity or trajectory determination systems; Sense-of-movement determination systems
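A hedged numpy sketch of the velocity-estimation idea in the abstract above: fit a single rigid-body motion (translation plus rotation rate) to the radial velocities of many return points by least squares. The 2-D simplification, the rotation about the origin, and the sensor-at-origin geometry are assumptions.

```python
import numpy as np

# Each return point at position r with line of sight u contributes one
# equation: u . (v + w * perp(r)) = measured radial velocity, where v is the
# translational velocity and w the angular rate of the rigid body.
rng = np.random.default_rng(1)
pts = rng.uniform(-2, 2, size=(30, 2)) + [15.0, 4.0]   # reflecting regions
v_true, w_true = np.array([-8.0, 2.0]), 0.4

rows, radial = [], []
for r in pts:
    u = r / np.linalg.norm(r)            # sensor at origin; line of sight
    perp = np.array([-r[1], r[0]])       # w x r in two dimensions
    rows.append([u[0], u[1], u @ perp])
    radial.append(u @ (v_true + w_true * perp))

sol, *_ = np.linalg.lstsq(np.array(rows), np.array(radial), rcond=None)
print(sol)   # approximately [-8.0, 2.0, 0.4]: translation and rotation recovered
```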
The technology relates to detecting and responding to emergency vehicles. This may include using a plurality of microphones to detect a siren noise corresponding to an emergency vehicle and to estimate a bearing of the emergency vehicle. This estimated bearing is compared to map information to identify a portion of roadway on which the emergency vehicle is traveling. In addition, information identifying a set of objects in the vehicle's environment, as well as characteristics of those objects, is received from a perception system and used to determine whether one of the set of objects corresponds to the emergency vehicle. How to respond to the emergency vehicle is determined based on the estimated bearing, the identified road segments, and the determination of whether one of the set of objects corresponds to the emergency vehicle. This determined response is then used to control the vehicle in an autonomous driving mode.
G08G 1/0965 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages; responding to signals from another vehicle, e.g. emergency vehicle
G06F 16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G08G 1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
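A hedged sketch of bearing estimation for the emergency-vehicle entry above, using one microphone pair and time difference of arrival: cross-correlate the channels, then apply the far-field relation sin(theta) = c*dt/d. A production system would use more microphones and more robust estimators; all signal parameters here are illustrative.

```python
import numpy as np

FS, C, D = 48_000, 343.0, 0.5         # sample rate, speed of sound, mic spacing (m)

def bearing_from_pair(sig_a, sig_b):
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # lag in samples; + means a lags b
    dt = lag / FS
    return np.degrees(np.arcsin(np.clip(C * dt / D, -1.0, 1.0)))

t = np.arange(0, 0.05, 1 / FS)
siren = np.sin(2 * np.pi * 700 * t) * np.exp(-40 * t)   # toy siren burst
delay = 20                                              # 20-sample arrival offset
sig_a = np.concatenate([np.zeros(delay), siren])        # mic A hears it later
sig_b = np.concatenate([siren, np.zeros(delay)])
print(bearing_from_pair(sig_a, sig_b))                  # bearing in degrees (~17)
```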
A method includes determining a maximum first parameter value and a maximum second parameter value corresponding to an autonomous vehicle (AV) for traveling a route. The method further includes determining, based on a first load value at a first distal end of a first axle of the AV and a second load value at a second distal end of the first axle, an updated first parameter value and an updated second parameter value corresponding to the AV for traveling the route. The method further includes causing the AV to travel the route based on the updated first parameter value and the updated second parameter value.
An optical system may include a substrate and a plurality of silicon photomultipliers (SiPMs) monolithically integrated with the substrate. Each SiPM may include a plurality of single photon avalanche diodes (SPADs). The optical system also includes an aperture array having a plurality of apertures. The plurality of SiPMs and the aperture array are aligned so as to define a plurality of receiver channels. Each receiver channel includes a respective SiPM of the plurality of SiPMs optically coupled to a respective aperture of the plurality of apertures.
H01L 31/107 - Devices sensitive to infrared, visible or ultraviolet radiation characterised by only one potential barrier or surface barrier the potential barrier working in avalanche mode, e.g. avalanche photodiode
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
The technology relates to assisting large self-driving vehicles, such as cargo vehicles, as they maneuver towards and/or park at a destination facility. This may include a given vehicle transitioning between different autonomous driving modes. Such vehicles may be permitted to drive in a fully autonomous mode on certain roadways for the majority of a trip, but may need to change to a partially autonomous mode on other roadways or when entering or leaving a destination facility such as a warehouse, depot or service center. Large vehicles such as cargo trucks may have limited room to maneuver in and park at the destination, which may also prevent operation in a fully autonomous mode. Here, information from the destination facility and/or a remote assistance service can be employed to aid in real-time semi-autonomous maneuvering.
G05D 1/249 - Arrangements for determining position or orientation using signals provided by artificial sources external to the vehicle, e.g. navigation beacons; from positioning sensors located off-board the vehicle, e.g. from cameras
G05D 1/617 - Safety or protection, e.g. defining protection zones around obstacles or avoiding hazards
G05D 1/695 - Coordinated control of the position or course of two or more vehicles for maintaining a fixed relative position of the vehicles, e.g. for convoy travelling or formation flight
G05D 1/81 - Handing over between on-board automatic and on-board manual control
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G08G 1/00 - Traffic control systems for road vehicles
G08G 1/0967 - Systems involving transmission of highway information, e.g. weather, speed limits
19.
MASS DISTRIBUTION-INFORMED OPTIMIZATION FOR AUTONOMOUS DRIVING SYSTEMS
A method includes identifying suspension stiffness data and suspension deflection data associated with corresponding distal ends of one or more axles of an autonomous vehicle (AV). The method further includes determining, based on the suspension deflection data and the suspension stiffness data, driving constraint data for traveling at least a portion of a route. The method further includes causing, based on the driving constraint data, performance of a corrective action associated with the AV during the traveling of the at least a portion of the route.
A LIDAR device may transmit light pulses originating from one or more light sources and may receive reflected light pulses that are then detected by one or more detectors. The LIDAR device may include a lens that both (i) collimates the light from the one or more light sources to provide collimated light for transmission into an environment of the LIDAR device and (ii) focuses the reflected light onto the one or more detectors. The lens may define a curved focal surface in a transmit path of the light from the one or more light sources and a curved focal surface in a receive path of the one or more detectors. The one or more light sources may be arranged along the curved focal surface in the transmit path. The one or more detectors may be arranged along the curved focal surface in the receive path.
Systems and methods for monitoring and detecting partial or full mechanical failures of a spring contact configured to provide an electrical connection between two elements. In some examples, a spring contact may be arranged between a first element, such as a window or cover of an operational sensor, and a second element, such as a printed circuit board. The window or cover may have an electrical element with a first electrical contact, and the printed circuit board may be attached to the spring contact via a joint (e.g., solder, weld, adhesive). A processing system may be configured to determine a state of the joint based on vibrations sensed by one or more vibration sensors attached to the second element.
H01R 12/71 - Coupling devices for rigid printing circuits or like structures
H05B 3/84 - Heating arrangements specially adapted for transparent or reflecting areas, e.g. for demisting or de-icing windows, mirrors or vehicle windshields
Disclosed herein are systems and methods for providing supplemental identification abilities to an autonomous vehicle system. The sensor unit of the vehicle may be configured to receive data indicating an environment of the vehicle, while the control system may be configured to operate the vehicle. The vehicle may also include a processing unit configured to analyze the data indicating the environment to determine at least one object having a detection confidence below a threshold. Based on the at least one object having a detection confidence below a threshold, the processor may communicate at least a subset of the data indicating the environment for further processing. The vehicle is also configured to receive an indication of an object confirmation of the subset of the data. Based on the object confirmation of the subset of the data, the processor may alter the control of the vehicle by the control system.
B60W 50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
B60W 50/02 - Ensuring safety in case of control system failures, e.g. by diagnosing, circumventing or fixing failures
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G07C 5/00 - Registering or indicating the working of vehicles
H04L 67/125 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks involving control of end-device applications over a network
Systems and methods are described that relate to a light detection and ranging (LIDAR) device. The LIDAR device includes a fiber laser configured to emit light within a wavelength range, a scanning portion configured to direct the emitted light in a reciprocating manner about a first axis, and a plurality of detectors configured to sense light within the wavelength range. The device additionally includes a controller configured to receive target information, which may be indicative of an object, a position, a location, or an angle range. In response to receiving the target information, the controller may cause a rotational mount to rotate so as to adjust a pointing direction of the LIDAR. The controller is further configured to cause the LIDAR to scan a field-of-view (FOV) of the environment. The controller may determine a three-dimensional (3D) representation of the environment based on data from scanning the FOV.
Example embodiments relate to an air cooling system for an electronic spinning assembly. An example embodiment includes a plurality of vanes coupled to a static base. A vane cover is rotatably coupled to the static base. The vane cover includes at least one air inlet, at least one air duct extending from the vane cover, and at least one choke point disposed in the cover. The at least one choke point is configured to increase a pressure of the airflow.
Methods and systems for use of a reference image to detect a road obstacle are described. A computing device configured to control a vehicle may be configured to receive, from an image-capture device, an image of a road on which the vehicle is travelling. The computing device may be configured to compare the image to a reference image and identify a difference between the image and the reference image. Further, the computing device may be configured to determine a level of confidence for identification of the difference. Based on the difference and the level of confidence, the computing device may be configured to modify a control strategy associated with a driving behavior of the vehicle, and to control the vehicle based on the modified control strategy.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
The present disclosure relates to systems and methods for occlusion detection. An example system includes a primary reflective surface and a rotatable mirror configured to rotate about a rotational axis. The rotatable mirror includes a plurality of secondary reflective surfaces. The system also includes an optical element and a camera that is configured to capture at least one image of the optical element by way of the primary reflective surface and at least one secondary reflective surface of the rotatable mirror.
A method is provided that includes a vehicle receiving data from an external computing device indicative of at least one other vehicle in an environment of the vehicle. The vehicle may include a sensor configured to detect the environment of the vehicle. The at least one other vehicle may include at least one sensor. The method also includes determining a likelihood of interference between the at least one sensor of the at least one other vehicle and the sensor of the vehicle. The method also includes initiating an adjustment of the sensor to reduce the likelihood of interference between the sensor of the vehicle and the at least one sensor of the at least one other vehicle responsive to the determination.
G01S 13/32 - Systems for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
G01S 13/34 - Systems for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated using transmission of continuous, frequency-modulated waves while heterodyning the received signal, or a signal derived therefrom, with a locally-generated signal related to the contemporaneously transmitted signal
G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
G01S 13/931 - Radar or analogous systems, specially adapted for specific applications for anti-collision purposes of land vehicles
A method and a radar system are provided in the present disclosure. The radar system includes a radar unit having an antenna array configured to transmit and receive radar signals and a memory configured to store radar calibration parameters and radar channel parameters corresponding to the radar unit. The method provides for operation of the radar system. The radar system also includes a radar processor. The radar processor is configured to cause transmission of radar signals by the antenna array based on the radar channel parameters. The radar processor is also configured to process received radar signals based on the radar calibration parameters. The radar system further includes a central vehicle controller configured to operate a vehicle based on the processed radar signals.
Aspects of the disclosure provide for depot behaviors for autonomous vehicles. For instance, a signal to control an autonomous vehicle to a depot area may be received from a server computing device. A prioritized list of staging areas within the depot area may be identified. Each staging area of the prioritized list of staging areas enables the vehicle to observe stopping locations at which a need of the vehicle may be addressed. The vehicle may be controlled to a first staging area of the prioritized list. Once the vehicle has reached the first staging area, whether a stopping location that meets one or more needs of the vehicle is available may be determined. When one is available, the vehicle may be controlled to the available stopping location. When none is available, the vehicle may be controlled to a second staging area of the prioritized list.
Aspects of the disclosure provide for the evaluation of scheduling system software for managing autonomous vehicle scheduling and dispatching. For instance, a problem condition for a simulation may be identified. The simulation may be run using the identified problem condition. The simulation may include a plurality of simulated autonomous vehicles each utilizing its own autonomous vehicle control software and map information common to each simulated autonomous vehicle. The problem condition may relate to a particular simulated autonomous vehicle of the plurality. Output of the simulation may be analyzed to determine a score for the scheduling system software. The scheduling system software may be evaluated using the score.
G08G 1/00 - Traffic control systems for road vehicles
B62D 6/00 - Arrangements for automatically controlling steering depending on driving conditions sensed and responded to, e.g. control circuits
G06F 30/20 - Design optimisation, verification or simulation
G06F 119/02 - Reliability analysis or reliability optimisation; Failure analysis, e.g. worst case scenario performance, failure mode and effects analysis [FMEA]
31.
Methods and Systems for Real-time Automotive Radar Sensor Validation
Example embodiments relate to real-time health monitoring for automotive radars. A computing device may receive radar data from multiple radar units that have partially overlapping fields of view and detect a target object located such that the radar units both capture measurements of the target object. The computing device may determine a power level representing the target object for radar data from each radar unit, adjust these power levels, and determine a power difference between them. When the power difference exceeds a threshold power difference, the computing device may perform a calibration process to decrease the power difference below the threshold power difference or alert the vehicle, including onboard algorithms, to the reduced performance of the radar.
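An illustrative sketch of the cross-radar comparison described above. The R^4 range normalization (from the radar equation) and the 3 dB threshold are assumptions standing in for the unspecified power-level adjustment.

```python
import math

def check_radar_pair(power_a_db, power_b_db, range_a_m, range_b_m, threshold_db=3.0):
    # Assumed adjustment: remove the radar equation's 1/R^4 range dependence so
    # the two units' returns from the same target are directly comparable.
    adj_a = power_a_db + 40 * math.log10(range_a_m)
    adj_b = power_b_db + 40 * math.log10(range_b_m)
    diff_db = abs(adj_a - adj_b)
    # Exceeding the threshold triggers calibration or a performance alert.
    return ("calibrate_or_alert" if diff_db > threshold_db else "healthy"), diff_db

print(check_radar_pair(-62.0, -70.0, range_a_m=50.0, range_b_m=55.0))
```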
A laser diode firing circuit for a light detection and ranging (LIDAR) device that includes an inductively coupled feedback system is disclosed. The firing circuit includes a laser diode coupled in series with a transistor, such that current through the laser diode is controlled by the transistor. The laser diode is configured to emit a pulse of light in response to current flowing through the laser diode. A feedback loop is positioned to be inductively coupled to a current path of the firing circuit that includes the laser diode. As such, a change in current flowing through the laser diode induces a voltage in the feedback loop. A change in voltage across the leads of the feedback loop can be detected and the timing of the voltage change can be used to determine the time that current begins flowing through the laser diode.
A light detection and ranging (LIDAR) system can emit light toward an environment and detect responsively reflected light to determine a distance to one or more points in the environment. The reflected light can be detected by a plurality of photodiodes that are reverse-biased using a high voltage. Signals from the plurality of reverse-biased photodiodes can be amplified by respective transistors and applied to an analog-to-digital converter (ADC). The signal from a particular photodiode can be applied to the ADC by biasing a respective transistor corresponding to the particular photodiode while not biasing transistors corresponding to other photodiodes. The gain of each photodiode/transistor pair can be controlled by adjusting the bias voltage applied to each photodiode using a digital-to-analog converter. The gain of each photodiode/transistor pair can be controlled based on the detected temperature of each photodiode.
G01S 17/10 - Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
G01S 17/42 - Simultaneous measurement of distance and other coordinates
H04N 25/40 - Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
H04N 25/77 - Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components
H04N 25/772 - Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components comprising A/D, V/T, V/F, I/T or I/F converters
34.
IMPLEMENTING AUTONOMOUS VEHICLE LANE UNDERSTANDING SYSTEMS USING FILTER-BASED LANE TRACKING
A method includes obtaining, by a processing device, input data derived from a set of sensors of an autonomous vehicle (AV), generating, by the processing device using a set of lane detection classifier heads, at least one heatmap based on a fused bird's eye view (BEV) feature generated from the input data, obtaining, by the processing device, a set of polylines using the at least one heatmap, wherein each polyline of the set of polylines corresponds to a respective track of a first set of tracks for a first frame, and generating, by the processing device, a second set of tracks for a second frame after the first frame by using a statistical filter based on a set of extrapolated tracks for the second frame and a set of track measurements for the second frame, wherein each track measurement of the set of track measurements corresponds to a respective updated polyline obtained for the second frame.
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G01S 13/89 - Radar or analogous systems, specially adapted for specific applications for mapping or imaging
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
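A hedged sketch of the filter step in the lane-tracking entry above, with a scalar Kalman update standing in for the unspecified statistical filter: each track is extrapolated to the next frame and blended with the polyline-derived track measurement. The state, noise values, and one-dimensional simplification are assumptions.

```python
def kalman_track_update(x_prev, p_prev, x_meas, q=0.05, r=0.2):
    # Predict: extrapolate the track, inflating variance by process noise q.
    x_pred, p_pred = x_prev, p_prev + q
    # Update: blend the extrapolated track with the polyline measurement.
    k = p_pred / (p_pred + r)                # Kalman gain
    x_new = x_pred + k * (x_meas - x_pred)
    return x_new, (1 - k) * p_pred

tracks = [(1.80, 0.5), (-1.75, 0.5)]         # (lateral offset m, variance) per track
measurements = [1.95, -1.70]                 # updated polylines for the next frame
tracks = [kalman_track_update(x, p, z) for (x, p), z in zip(tracks, measurements)]
print(tracks)
```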
Example embodiments relate to devices, systems, and methods involving three-dimensional time-of-flight/visible image sensors. An example embodiment includes a device that includes a plurality of time-of-flight (ToF) sensor pixels and a plurality of visible image sensor (VIS) pixels. The device also includes a ToF communication channel configured to provide ToF image data indicative of one or more ToF pixels and a VIS communication channel configured to provide VIS image data indicative of one or more VIS pixels. The plurality of ToF pixels and the plurality of VIS pixels are arranged along a plane. The plurality of ToF pixels is disposed in an arrangement among the plurality of VIS pixels.
An autonomous vehicle is operated along a route according to a nominal driving solution that takes into account one or more first constraints including a first predicted trajectory for an agent vehicle. An alternate scenario is determined based on one or more second external constraints that include a second predicted trajectory for the agent vehicle different from the first predicted trajectory. A risk factor on the nominal driving solution is determined for the alternate scenario, and a secondary driving solution is determined based on the risk factor and the one or more second external constraints. A candidate switching point is identified where the secondary driving solution diverges from the nominal driving solution, and the nominal driving solution is revised up to the candidate switching point based on the secondary driving solution. The autonomous vehicle is then operated based on the revised nominal driving solution.
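An illustrative sketch of identifying the candidate switching point as the first waypoint where the secondary driving solution diverges from the nominal one beyond a tolerance. The waypoint representation and the 0.5 m tolerance are assumptions.

```python
import math

def candidate_switching_point(nominal, secondary, tol_m=0.5):
    for i, (a, b) in enumerate(zip(nominal, secondary)):
        if math.dist(a, b) > tol_m:
            return i          # solutions diverge here; revise nominal up to i
    return None               # solutions agree over the compared horizon

nominal   = [(0, 0), (5, 0), (10, 0),   (15, 0),   (20, 0)]
secondary = [(0, 0), (5, 0), (10, 0.1), (15, 1.2), (20, 3.0)]
print(candidate_switching_point(nominal, secondary))   # 3
```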
The described aspects and implementations enable efficient calibration of a sensing system of a vehicle. In one implementation, disclosed is a method and a system to perform the method, the system including the sensing system configured to obtain a plurality of images associated with a corresponding time of a plurality of times. The system further includes a data processing system operatively coupled to the sensing system and configured to generate a plurality of sets of feature tensors (FTs) associated with one or more objects of the environment depicted in a respective image. The data processing system is further configured to obtain a combined FT and process the combined FT using a neural network to identify one or more tracks characterizing motion of a respective object.
An autonomous vehicle configured for active sensing may also be configured to weigh expected information gains from active-sensing actions against risk costs associated with the active-sensing actions. An example method involves: (a) receiving information from one or more sensors of an autonomous vehicle, (b) determining a risk-cost framework that indicates risk costs across a range of degrees to which an active-sensing action can be performed, wherein the active-sensing action comprises an action that is performable by the autonomous vehicle to potentially improve the information upon which at least one of the control processes for the autonomous vehicle is based, (c) determining an information-improvement expectation framework across the range of degrees to which the active-sensing action can be performed, and (d) applying the risk-cost framework and the information-improvement expectation framework to determine a degree to which the active-sensing action should be performed.
B60W 30/08 - Predicting or avoiding probable or impending collision
B60K 35/28 - Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor characterised by the type of the output information, e.g. video entertainment or vehicle dynamics information; Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor characterised by the purpose of the output information, e.g. for attracting the attention of the driver
B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
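An illustrative sketch of applying the two frameworks from the active-sensing abstract above: evaluate the net benefit (information gain minus risk cost) across the range of degrees and perform the action only to the best-scoring degree. Both curves here are toy stand-ins for the claimed risk-cost and information-improvement frameworks.

```python
def choose_degree(degrees, info_gain, risk_cost):
    # Pick the degree with the best expected net benefit; performing the
    # action to degree 0.0 means not performing it at all.
    scores = [g - c for g, c in zip(info_gain, risk_cost)]
    best = max(range(len(degrees)), key=scores.__getitem__)
    return degrees[best], scores[best]

degrees   = [0.0, 0.25, 0.5, 0.75, 1.0]   # fraction of the full action
info_gain = [0.0, 0.40, 0.7, 0.85, 0.9]   # diminishing information returns
risk_cost = [0.0, 0.05, 0.2, 0.50, 1.0]   # rising risk with degree
print(choose_degree(degrees, info_gain, risk_cost))   # (0.5, 0.5)
```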
Aspects of the disclosure provide systems and methods for providing suggested locations for pick up and destination locations. Pick up locations may include locations where an autonomous vehicle can pick up a passenger, while destination locations may include locations where the vehicle can wait for an additional passenger, stop and wait for a passenger to perform some task and return to the vehicle, or for the vehicle to drop off a passenger. As such, a request for a vehicle may be received from a client computing device. The request may identify a first location. A set of one or more suggested locations may be selected by comparing the predetermined locations to the first location. The set may be provided to the client computing device.
Aspects of the disclosure provide for enabling distribution or prepositioning of roadside assistance vehicles within a service area for a fleet of autonomous vehicles. For instance, a clustering approach may be used to determine an assignment identifying a plurality of cluster locations and respective assigned ones of the roadside assistance vehicles. The clustering approach uses a number of clusters corresponding to the number of the roadside assistance vehicles. Distribution information may be sent to computing devices associated with technicians of the respective assigned ones of the roadside assistance vehicles. The distribution information may enable the technicians of the respective assigned ones of the roadside assistance vehicles to proceed to the cluster locations of the determined assignment.
Embodiments of the present disclosure relate to configuration-based sampling of run segments for simulating autonomous vehicle behavior. In at least one implementation, a method comprises: providing a set of run segments corresponding to scenarios for simulating operation of an autonomous vehicle; receiving a user-specified configuration comprising parameters for identifying target run segments within the set of run segments; applying the user-specified configuration to the set of run segments to identify the target run segments; and causing one or more simulations to be performed using the target run segments to generate a simulation output.
The technology employs a contrasting color scheme on different surfaces for sensor housing assemblies mounted on exterior parts of a vehicle that is configured to operate in an autonomous driving mode. Lighter and darker colors may be chosen on different surfaces according to a thermal budget for a given sensor housing assembly, due to the different types of sensors arranged along particular surfaces, or to provide color contrast for different regions of the assembly. For instance, differing colors such as black/white or blue/white, and different finishes such as matte or glossy, may be selected to enhance certain attributes or to minimize issues associated with a sensor housing assembly.
Example implementations are provided for an arrangement of co-aligned rotating sensors. One example device includes a light detection and ranging (LIDAR) transmitter that emits light pulses toward a scene according to a pointing direction of the device. The device also includes a LIDAR receiver that detects reflections of the emitted light pulses reflecting from the scene. The device also includes an image sensor that captures an image of the scene based on at least external light originating from one or more external light sources. The device also includes a platform that supports the LIDAR transmitter, the LIDAR receiver, and the image sensor in a particular relative arrangement. The device also includes an actuator that rotates the platform about an axis to adjust the pointing direction of the device.
The technology relates to controlling an autonomous vehicle through a multi-lane turn. In one example, data corresponding to a position of the autonomous vehicle in a lane of the multi-lane turn, a trajectory of the autonomous vehicle, and data corresponding to positions of objects in a vicinity of the autonomous vehicle may be received. A determination of whether the autonomous vehicle is positioned as a first vehicle in the lane or positioned behind another vehicle in the lane may be made based on a position of the autonomous vehicle in the lane relative to the positions of the objects. The trajectory of the autonomous vehicle through the lane may be adjusted based on whether the autonomous vehicle is positioned as a first vehicle in the lane or positioned behind another vehicle in the lane. The autonomous vehicle may be controlled based on the adjusted trajectory.
A method and apparatus for controlling a first vehicle autonomously are disclosed. For instance, one or more processors may plan to maneuver the first vehicle to complete an action and predict that a second vehicle will take a particular responsive action. The first vehicle is then maneuvered towards completing the action in a way that would allow the first vehicle to cancel completing the action without causing a collision between the first vehicle and the second vehicle, and in order to indicate to the second vehicle or a driver of the second vehicle that the first vehicle is attempting to complete the action. Thereafter, when the first vehicle is determined to be able to take the action, the action is completed by controlling the first vehicle autonomously according to whether the second vehicle begins to take the particular responsive action.
B60W 30/09 - Taking automatic action to avoid collision, e.g. braking and steering
B60Q 1/34 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor, the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating change of drive direction
A method includes monitoring, at a computing device, outputs of a sensor validator. Each output is generated by the sensor validator based on corresponding sensor data from a sensor coupled to an autonomous vehicle, and each output indicates whether the corresponding sensor data is associated with an event. The method also includes mutating, at the computing device, particular sensor data to generate mutated sensor data that is associated with a particular event. The method further includes determining, at the computing device, a performance metric associated with the sensor validator based on a particular output generated by the sensor validator. The particular output is based on the mutated sensor data.
B60W 50/02 - Ensuring safety in case of control system failures, e.g. by diagnosing, circumventing or fixing failures
B60W 50/04 - Monitoring the functioning of the control system
B60W 50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
H04L 43/065 - Generation of reports related to network devices
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
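A hedged sketch of the mutation step in the sensor-validator entry above: corrupt known-good frames so they should be flagged, then score the validator on the mutated frames. The dropout mutation, toy validator, and recall metric are assumptions standing in for the unspecified performance metric.

```python
def mutate_dropout(frame):
    return [0.0 for _ in frame]            # simulate a dead-sensor frame

def toy_validator(frame):
    return all(v == 0.0 for v in frame)    # "event" = all-zero readings

def validator_recall(frames, mutate, validator):
    # Every mutated frame is, by construction, associated with an event, so
    # the fraction flagged measures how reliably the validator catches it.
    mutated = [mutate(f) for f in frames]
    flagged = sum(validator(m) for m in mutated)
    return flagged / len(mutated)

clean_frames = [[0.2, 0.4, 0.1], [0.3, 0.3, 0.2], [0.5, 0.1, 0.4]]
print(validator_recall(clean_frames, mutate_dropout, toy_validator))   # 1.0
```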
47.
Antenna array calibration for vehicle radar systems
An example method for using antenna array calibration to adjust radar unit operation involves receiving radar data from a radar unit coupled to a vehicle during vehicle operation in an environment, where the radar unit receives the radar data from the environment via an antenna array of the radar unit. The method also involves detecting an object in the environment based on the radar data, determining that the detected object satisfies a set of conditions, and, in response to the set of conditions being satisfied, estimating a first phase array offset for the antenna array. The method also involves comparing the first phase array offset with a second phase array offset that represents a prior calibration for the radar unit, and, based on a difference between the first and second phase array offsets exceeding a threshold difference, adjusting operation of the radar unit according to the first phase array offset.
G01S 7/41 - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
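An illustrative sketch of the recalibration trigger in the antenna-array entry above: compare newly estimated per-element phase offsets against the stored calibration and flag the unit when any wrapped difference exceeds a threshold. The degree values and the 10-degree threshold are illustrative.

```python
import math

def wrapped_diff_deg(a, b):
    # Shortest angular distance between two phase values, in degrees.
    return abs((a - b + 180.0) % 360.0 - 180.0)

def needs_adjustment(current_offsets, stored_offsets, threshold_deg=10.0):
    return any(wrapped_diff_deg(c, s) > threshold_deg
               for c, s in zip(current_offsets, stored_offsets))

stored  = [0.0, 12.0, 24.0, 36.0]          # prior calibration per antenna element
current = [1.0, 13.5, 40.0, 37.0]          # estimated from the detected object
print(needs_adjustment(current, stored))   # True: element 2 drifted ~16 degrees
```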
Aspects of the present disclosure relate to using audible cues to guide a passenger to a vehicle having an autonomous driving mode. For instance, one or more processors of the vehicle may receive, from a server computing device, instructions to pick up the passenger at a pickup location. The one or more processors may maneuver the vehicle towards the pickup location in the autonomous driving mode. The one or more processors may receive a signal indicating that the passenger requests assistance locating the vehicle. The one or more processors may use the signal to generate the audible cues. The audible cues may be played by the one or more processors through a speaker of the vehicle in order to guide the passenger towards the vehicle.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for intervention behavior prediction. One of the methods includes receiving data characterizing a scene that includes a first agent and a second agent in an environment and receiving intervention data specifying a planned intervention to be performed by the second agent. A conditional behavior prediction output that assigns, to each of a plurality of possible future behaviors, (i) a respective conditional likelihood that the first agent performs the possible future behavior given that the second agent performs the planned intervention and (ii) a predicted value of a confounder variable for the possible future behavior is generated using a conditional behavior prediction model. An intervention behavior prediction for the first agent is generated by, for each possible future behavior, generating a corrected likelihood for the possible future behavior based on the respective conditional likelihood for the possible future behavior and the predicted value of the confounder variable for the possible future behavior.
A method and system to provide timebase synchronization for multiple processors in a multi-processor sensor system, where each processor operates according to a respective reference clock, and where the processors' respective reference clocks are out of sync with each other. An example method includes simultaneously injecting a synchronization pulse respectively into the multiple processors. Further, the method includes recording for each processor, according to the processor's respective reference clock, a respective synchronization-pulse timestamp of the simultaneously injected synchronization pulse, comparing the respective synchronization-pulse timestamps recorded for the processors, and, based on the comparing, computing for each processor a respective time offset. Additionally, the method includes using the per-processor computed time offsets as a basis to provide a synchronized timebase across the processors.
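An illustrative sketch of the offset computation described above: each processor timestamps the same injected pulse with its own clock, differences against one processor chosen as the reference yield per-processor offsets, and those offsets map local timestamps into a shared timebase. The reference choice and values are assumptions.

```python
def compute_offsets(pulse_timestamps, reference="cpu0"):
    # The same pulse was injected simultaneously, so timestamp differences
    # directly measure each clock's offset from the reference clock.
    ref_t = pulse_timestamps[reference]
    return {cpu: ref_t - t for cpu, t in pulse_timestamps.items()}

def to_synchronized(cpu, local_time, offsets):
    return local_time + offsets[cpu]   # express the event in the shared timebase

stamps = {"cpu0": 1000.000, "cpu1": 1000.750, "cpu2": 999.400}   # same pulse
offsets = compute_offsets(stamps)
print(offsets)                                      # cpu1: -0.75, cpu2: +0.6
print(to_synchronized("cpu1", 1005.750, offsets))   # 1005.0 in cpu0's timebase
```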
Aspects of the disclosure provide for the identification of parkable areas. In one instance, observations of parked vehicles may be identified from logged data. The observations may be used to determine whether a sub-portion of an edge of a roadgraph corresponds to a parkable area. In some examples, the edge may define a drivable area in the roadgraph. In addition, map information is generated based on the determination of whether the sub-portion of the edge corresponds to the parkable area.
The technology relates to identifying sensor occlusions due to the limits of the ranges of a vehicle's sensors and using this information to maneuver the vehicle. As an example, the vehicle is maneuvered along a route that includes traveling on a first roadway and crossing over a lane of a second roadway. A trajectory is identified from the lane that will cross with the route during the crossing at a first point. A second point beyond a range of the vehicle's sensors is selected. The second point corresponds to a hypothetical vehicle moving towards the route along the lane. A distance between the first point and the second point is determined. An amount of time that it would take the hypothetical vehicle to travel the distance is determined and compared to a threshold amount of time. The vehicle is maneuvered based on the comparison to complete the crossing.
B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
G01S 13/931 - Radar or analogous systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G01S 17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
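An illustrative sketch of the comparison in the sensor-occlusion entry above: a hypothetical vehicle is assumed just beyond sensor range, and the crossing proceeds only if that vehicle's travel time to the crossing point exceeds the time the maneuver needs. The assumed speed and margin are illustrative values.

```python
def safe_to_cross(distance_m, crossing_time_s, hyp_speed_mps=25.0, margin_s=2.0):
    # distance_m: from the point at the sensor-range limit (where the
    # hypothetical vehicle is assumed) to where its lane crosses the route.
    time_to_arrive_s = distance_m / hyp_speed_mps
    return time_to_arrive_s > crossing_time_s + margin_s

print(safe_to_cross(distance_m=150.0, crossing_time_s=4.0))   # False -> wait
print(safe_to_cross(distance_m=200.0, crossing_time_s=4.0))   # True  -> cross
```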
A method and apparatus are provided for determining one or more behavior models used by an autonomous vehicle to predict the behavior of detected objects. The autonomous vehicle may collect and record object behavior using one or more sensors. The autonomous vehicle may then communicate the recorded object behavior to a server operative to determine the behavior models. The server may determine the behavior models according to a given object classification, actions of interest performed by the object, and the object's perceived surroundings.
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G08G 1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
G08G 1/0967 - Systems involving transmission of highway information, e.g. weather, speed limits
The technology includes communicating the current status of a self-driving vehicle to users, such as passengers within the vehicle and other users awaiting pickup. Certain information about the trip and vehicle status is communicated depending on where the passenger is sitting within the vehicle or where the person awaiting pickup is located outside the vehicle. This includes disseminating the “monologue” of a vehicle operating in an autonomous driving mode to a user via an app on the user's device (e.g., mobile phone, tablet or laptop PC, wearable, or other computing device) and/or an in-vehicle user interface. The monologue includes current status information regarding driving decisions and related operations or actions. This alerts the user as to why the vehicle is taking (or not taking) a certain action, which reduces confusion and allows the user to focus on other matters.
The technology relates to controlling a vehicle based on a railroad light's activation status. In one example, one or more processors receive images of a railroad light. The one or more processors determine, based on the images of the railroad light, the illumination status of a pair of lights of the railroad light over a period of time as the vehicle approaches the railroad light. The one or more processors determine based on the illumination status of the pair of lights, a confidence level, wherein the confidence level indicates the likelihood the railroad light is active. The vehicle is controlled as it approaches the railroad light based on the confidence level.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
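One way to picture the confidence determination in the railroad-light abstract above is as an evidence accumulator over the per-frame illumination status of the two lights: active crossing lights flash alternately, so observed alternation raises confidence. The sketch below is a toy version under that assumption; the update constants and the alternation rule are invented for illustration and are not the patented method.

```python
# Minimal sketch of accumulating a confidence level that a railroad
# light is active, assuming per-frame detections of which of the two
# lights is illuminated. The update rule is illustrative only.

def activation_confidence(frames: list[tuple[bool, bool]]) -> float:
    """frames: per-frame (left_lit, right_lit) observations.
    Active crossing lights alternate, so alternating single-light
    frames raise confidence; dark frames lower it."""
    confidence = 0.0
    prev = None
    for left, right in frames:
        lit = (left, right)
        if lit == (False, False):
            confidence = max(0.0, confidence - 0.1)
        elif prev is not None and lit != prev and True in lit:
            confidence = min(1.0, confidence + 0.2)  # alternation seen
        prev = lit
    return confidence

obs = [(True, False), (False, True), (True, False), (False, True)]
print(round(activation_confidence(obs), 2))  # 0.6 after three alternations
```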
58.
Methods and Systems for Filtering Vehicle Self-reflections in Radar
Example embodiments relate to self-reflection filtering techniques within radar data. A computing device may use radar data to determine a first radar representation that conveys information about surfaces in a vehicle's environment. The computing device may use a predefined model to generate a second radar representation that assigns predicted self-reflection values to respective locations of the environment based on the information about the surfaces conveyed by the first radar representation. The predefined model can enable a predefined self-reflection value to be assigned to a first location based on information about a surface positioned at a second location and a relationship between the first location and the second location. The computing device may then modify the first radar representation based on the predicted self-reflection values in the second radar representation and provide instructions to a control system of the vehicle based on modifying the first radar representation.
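A minimal sketch of the two-representation scheme above, collapsed to a 1-D range profile for brevity. The "predefined model" here is a toy stand-in: a strong surface at range r predicts an attenuated ghost at range 2r (one first-location/second-location relationship); the real model and its geometry are not specified by the abstract, so every constant below is an assumption.

```python
import numpy as np

# Toy version of the filtering described above: build a second
# representation of predicted self-reflection values from the first,
# then modify the first representation using those predictions.

def predict_self_reflections(power: np.ndarray,
                             attenuation: float = 0.25) -> np.ndarray:
    predicted = np.zeros_like(power)
    for r, p in enumerate(power):
        ghost = 2 * r  # second location derived from the first
        if ghost < len(power):
            predicted[ghost] = max(predicted[ghost], attenuation * p)
    return predicted

def filter_representation(power: np.ndarray) -> np.ndarray:
    # Modify the first representation using the predicted values.
    return np.clip(power - predict_self_reflections(power), 0.0, None)

profile = np.array([0.0, 0.0, 8.0, 0.0, 2.0, 0.0])
print(filter_representation(profile))  # bin 4 suppressed to 0.0
```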
A camera includes a camera focus adjustment device, a lens, and an image sensor coupled to the camera focus adjustment device. The camera focus adjustment device includes a flexure structure. The flexure structure includes an outer framework of structural members continuously interconnected by flexure notch hinges. The flexure structure also includes two inner structural members oriented in parallel and extending from the outer framework of structural members. A gap is between the two inner structural members. The camera focus adjustment device also includes a piezoelectric material within the gap and a pair of wedges within the gap. The pair of wedges is affixed to the piezoelectric material and to one inner structural member of the two inner structural members. Based on temperature-based piezoelectric activity associated with the piezoelectric material, the camera focus adjustment device is operable to move the image sensor relative to the lens.
A method for operating a light detection and ranging (LIDAR) device is provided. The method includes driving, by an LLC resonant power converter, a wireless power signal at a primary winding of a transformer disposed on a first platform. The method includes transmitting the wireless power signal across a gap separating the first platform and a second platform. The second platform is configured to rotate relative to the first platform. The method includes receiving the wireless power signal at a secondary winding of the transformer. The secondary winding is disposed on the second platform. The method includes operating, by the LLC resonant power converter at a unity gain operating point and in an open loop mode without feedback control, a device mounted on the second platform based on the secondary winding receiving the wireless power signal.
G01S 7/4861 - Circuits for detection, sampling, integration or read-out
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
H02J 50/12 - Circuit arrangements or systems for wireless supply or distribution of electric power using inductive coupling of the resonant type
H02J 50/80 - Circuit arrangements or systems for wireless supply or distribution of electric power involving the exchange of data, concerning supply or distribution of electric power, between transmitting devices and receiving devices
62.
System and method for predicting behaviors of detected objects through environment representation
Aspects of the invention relate generally to autonomous vehicles. The features described improve the safety, use, driver experience, and performance of these vehicles by performing a behavior analysis on mobile objects in the vicinity of an autonomous vehicle. Specifically, the autonomous vehicle is capable of detecting nearby objects, such as vehicles and pedestrians, and is able to determine how the detected vehicles and pedestrians perceive their surroundings. The autonomous vehicle may then use this information to safely maneuver around all nearby objects.
Aspects of the present disclosure relate to a vehicle for maneuvering a passenger to a destination autonomously. The vehicle includes one or more computing devices that receive a request for a vehicle from a client computing device. The request identifies a first location. The one or more computing devices also determine whether the first location is within a threshold distance outside of a service area of the vehicle. When the location is within the threshold distance outside of the service area of the vehicle, the one or more computing devices identify, based on the first location, a second location within the service area of the vehicle where the vehicle is able to stop for a passenger. The one or more computing devices then provide a map and a marker identifying the position of the second location on the map for display on the client computing device.
E05F 15/70 - Power-operated mechanisms for wings with automatic actuation
E05F 15/76 - Power-operated mechanisms for wings with automatic actuation responsive to movement or presence of persons or objects responsive to devices carried by persons or objects, e.g. magnets or reflectors
G01C 21/20 - Instruments for performing navigational calculations
G01C 21/26 - Navigation; Navigational instruments not provided for in groups specially adapted for navigation in a road network
G01C 21/36 - Input/output arrangements for on-board computers
G06Q 10/1093 - Calendar-based scheduling for persons or groups
G06Q 50/40 - Business processes related to the transportation industry
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G07C 5/00 - Registering or indicating the working of vehicles
G08G 1/14 - Traffic control systems for road vehicles indicating individual free spaces in parking areas
64.
System and method for evaluating the perception system of an autonomous vehicle
A method and apparatus are provided for optimizing one or more object detection parameters used by an autonomous vehicle to detect objects in images. The autonomous vehicle may capture the images using one or more sensors. The autonomous vehicle may then determine object labels and their corresponding object label parameters for the detected objects. The captured images and the object label parameters may be communicated to an object identification server. The object identification server may request that one or more reviewers identify objects in the captured images. The object identification server may then compare the identification of objects by reviewers with the identification of objects by the autonomous vehicle. Depending on the results of the comparison, the object identification server may recommend or perform the optimization of one or more of the object detection parameters.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
B60T 7/22 - Brake-action initiating means for automatic initiation; Brake-action initiating means for initiation not subject to will of driver or passenger initiated by contact of vehicle, e.g. bumper, with an external object, e.g. another vehicle
B60T 8/00 - Arrangements for adjusting wheel-braking force to meet varying vehicular or ground-surface conditions, e.g. limiting or varying distribution of braking force
B60T 8/17 - Using electrical or electronic regulation means to control braking
B60T 8/88 - Arrangements for adjusting wheel-braking force to meet varying vehicular or ground-surface conditions, e.g. limiting or varying distribution of braking force responsive to a speed condition, e.g. acceleration or deceleration with failure responsive means, i.e. means for detecting and indicating faulty operation of the speed responsive control means
G06T 7/223 - Analysis of motion using block-matching
G06T 7/231 - Analysis of motion using block-matching using full search
G06T 7/521 - Depth or shape recovery from laser ranging, e.g. using interferometry; Depth or shape recovery from the projection of structured light
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G07C 9/00 - Individual registration on entry or exit
B60W 30/186 - Preventing damage resulting from overload or excessive wear of the driveline; excessive wear or burn out of friction elements, e.g. clutches
B60W 50/029 - Adapting to failures or work around with other constraints, e.g. circumvention by avoiding use of failed parts
B62D 6/00 - Arrangements for automatically controlling steering depending on driving conditions sensed and responded to, e.g. control circuits
G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
Example systems and methods allow for reporting and sharing of information reports relating to driving conditions within a fleet of autonomous vehicles. One example method includes receiving information reports relating to driving conditions from a plurality of autonomous vehicles within a fleet of autonomous vehicles. The method may also include receiving sensor data from a plurality of autonomous vehicles within the fleet of autonomous vehicles. The method may further include validating some of the information reports based at least in part on the sensor data. The method may additionally include combining validated information reports into a driving information map. The method may also include periodically filtering the driving information map to remove outdated information reports. The method may further include providing portions of the driving information map to autonomous vehicles within the fleet of autonomous vehicles.
B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
B60W 50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
G08G 1/00 - Traffic control systems for road vehicles
G08G 1/01 - Detecting movement of traffic to be counted or controlled
G08G 1/0967 - Systems involving transmission of highway information, e.g. weather, speed limits
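The report pipeline above is essentially validate, merge, and age out. Here is a minimal sketch under assumed data structures (a report dict with location, condition, and timestamp fields, plus a caller-supplied validation callback standing in for the sensor-data check); none of these names come from the patent.

```python
import time

# Minimal sketch: validate reports against sensor data, merge them
# into a map keyed by location, and drop stale entries. Structures
# and thresholds are illustrative assumptions.

def build_driving_map(reports, sensor_confirms, max_age_s=3600.0,
                      now=None):
    """reports: iterable of dicts with 'location', 'condition',
    'timestamp'. sensor_confirms(report) -> bool validates a report
    against fleet sensor data."""
    now = time.time() if now is None else now
    driving_map = {}
    for report in reports:
        if not sensor_confirms(report):
            continue  # validation against sensor data failed
        if now - report["timestamp"] > max_age_s:
            continue  # outdated report, filtered out
        driving_map.setdefault(report["location"], []).append(report)
    return driving_map

reports = [
    {"location": (1, 2), "condition": "construction", "timestamp": 0.0},
    {"location": (3, 4), "condition": "ice", "timestamp": 5000.0},
]
print(build_driving_map(reports, lambda r: True, now=5100.0))
# Only the recent report survives the age filter.
```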
66.
Positioning Vehicles to Improve Quality of Observations at Intersections
Disclosed herein are methods and apparatus for controlling autonomous vehicles utilizing maps that include visibility information. A map is stored at a computing device associated with a vehicle. The vehicle is configured to operate in an autonomous mode that supports a plurality of driving behaviors. The map includes information about a plurality of roads, a plurality of features, and visibility information for at least a first feature in the plurality of features. The computing device queries the map for visibility information for the first feature at a first position. The computing device, in response to querying the map, receives the visibility information for the first feature at the first position. The computing device selects a driving behavior for the vehicle based on the visibility information. The computing device controls the vehicle in accordance with the selected driving behavior.
Example embodiments relate to techniques for using vehicle imaging radar to estimate rain rate and other weather conditions. A computing device may receive radar data from a radar unit coupled to a vehicle. The radar data can represent the vehicle's environment. The computing device may use the radar data to determine a radar representation that indicates backscatter power and estimate, using a rain rate model, a rain rate for the environment based on the radar representation. The computing device may then control the vehicle based on the rain rate. In some examples, the computing device may provide the rain rate estimation and an indication of its current location to other vehicles to enable the vehicles to adjust routes based on the rain rate estimation.
G01S 13/95 - Radar or analogous systems, specially adapted for specific applications for meteorological use
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
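The abstract above leaves the rain rate model unspecified. As a hedged stand-in, the sketch below uses the classic Marshall-Palmer Z-R relation (Z = 200 R^1.6) to map a backscatter-derived reflectivity to a rain rate; this is a textbook weather-radar approximation, not the model in the patent.

```python
# Minimal sketch of mapping radar backscatter to a rain rate using
# the Marshall-Palmer relation Z = 200 * R^1.6 as a stand-in model.

def rain_rate_mm_per_h(reflectivity_dbz: float) -> float:
    z_linear = 10.0 ** (reflectivity_dbz / 10.0)  # dBZ -> mm^6/m^3
    return (z_linear / 200.0) ** (1.0 / 1.6)

for dbz in (20.0, 30.0, 40.0):
    print(f"{dbz} dBZ -> {rain_rate_mm_per_h(dbz):.1f} mm/h")
# roughly 0.6, 2.7, 11.5 mm/h: drizzle, light rain, heavy rain
```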
Example embodiments relate to radar reflection filtering using a vehicle sensor system. A computing device may detect a first object in radar data from a radar unit coupled to a vehicle and, responsive to determining that information corresponding to the first object is unavailable from other vehicle sensors, use the radar data to determine a position and a velocity for the first object relative to the radar unit. The computing device may also detect a second object aligned with a vector extending between the radar unit and the first object. Based on a geometric relationship between the vehicle, the first object, and the second object, the computing device may determine that the first object is a self-reflection of the vehicle caused at least in part by the second object and control the vehicle based on this determination.
The present disclosure relates to systems and devices having a rotatable mirror assembly. An example system includes a housing and a rotatable mirror assembly. The rotatable mirror assembly includes a plurality of reflective surfaces, a shaft defining a rotational axis, and a mirror body coupling the plurality of reflective surfaces to the shaft. The mirror body includes a plurality of flexible support members. The rotatable mirror assembly also includes a coupling bracket configured to removably couple the rotatable mirror assembly to the housing. The system also includes a transmitter configured to emit emission light into an environment of the system after interacting with at least one reflective surface of the plurality of reflective surfaces. The system additionally includes a receiver configured to detect return light from the environment after interacting with the at least one reflective surface of the plurality of reflective surfaces.
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01D 5/14 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means influencing the magnitude of a current or voltage
G01S 17/08 - Systems determining position data of a target for measuring distance only
70.
PRIVACY-RESPECTING DETECTION AND LOCALIZATION OF SOUNDS IN AUTONOMOUS DRIVING APPLICATIONS
The described aspects and implementations enable privacy-respecting detection, separation, and localization of sounds in vehicle environments. The techniques include obtaining, using audio detector(s) of a vehicle, a sound recording that includes a plurality of elemental sounds (ESs) in a driving environment of the vehicle, and processing, using a sound separation model, the sound recording to separate individual ESs of the plurality of ESs. The techniques further include identifying a content of individual ESs and causing a driving path of the vehicle to be modified in view of the identified content of the individual ESs. Further techniques include rendering speech imperceptible by redacting temporal portions of the speech, using sound recognition models to identify and discard recordings of speech, and driving at speeds that exceed threshold speeds at which speech becomes imperceptible due to noise masking.
G10L 15/20 - Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise or of stress induced speech
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
71.
SYNTHESIZING THREE-DIMENSIONAL VISUALIZATIONS FROM PERSPECTIVES OF ONBOARD SENSORS OF AUTONOMOUS VEHICLES
Aspects of the disclosure provide for generating a visualization of a three-dimensional (3D) world view from the perspective of a camera of a vehicle. For example, images of a scene captured by a camera of the vehicle and 3D content for the scene may be received. A virtual camera model for the camera of the vehicle may be identified. A set of matrices may be generated using the virtual camera model. The set of matrices may be applied to the 3D content to create a 3D world view. The visualization may be generated using the 3D world view as an overlay with the image, and the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content.
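The matrix pipeline described above can be illustrated with an ideal pinhole model: an extrinsic matrix maps world points into the camera frame, and an intrinsic matrix maps them to pixels, which is where 3D content lands as an overlay. The sketch below assumes that standard model; the patent's actual virtual camera model and set of matrices may differ.

```python
import numpy as np

# Minimal sketch of projecting 3D content into a camera image so it
# can be drawn as an overlay. Values are illustrative.

def make_matrices(fx, fy, cx, cy, cam_from_world: np.ndarray):
    intrinsics = np.array([[fx, 0, cx],
                           [0, fy, cy],
                           [0,  0,  1]], dtype=float)
    return intrinsics, cam_from_world  # 3x3 and 4x4

def project(points_world: np.ndarray, intrinsics, cam_from_world):
    """points_world: (N, 3). Returns (N, 2) pixel coordinates."""
    homo = np.hstack([points_world, np.ones((len(points_world), 1))])
    cam = (cam_from_world @ homo.T).T[:, :3]        # world -> camera
    pix = (intrinsics @ cam.T).T                    # camera -> image
    return pix[:, :2] / pix[:, 2:3]                 # perspective divide

K, T = make_matrices(1000, 1000, 640, 360, np.eye(4))
print(project(np.array([[1.0, 0.0, 10.0]]), K, T))  # [[740. 360.]]
```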
Example embodiments relate to techniques for enabling one or more systems of a vehicle (e.g., an autonomous vehicle) to request remote assistance to help the vehicle navigate in an environment. A computing device may be configured to receive a request for assistance from a vehicle. The request may include an image frame representative of a portion of an environment. The computing device may also be configured to initiate display of a graphical user interface (GUI) to visually represent the image frame. Further, the computing device may determine a bounding region for the image frame. The bounding region may be associated with one or more objects in the image frame. Additionally, the computing device may be configured to receive, via the GUI, an input that includes an object identifier, and associate the object identifier with each of the one or more objects in the bounding region.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
H04W 4/40 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
Example embodiments relate to lane adjustment techniques for slow lead agents. A vehicle computing system may use sensor data depicting the surrounding environment to detect when another vehicle is traveling in front of the vehicle at a speed that is less than a threshold minimum speed. If the other vehicle fails to increase speed above the minimum speed, the computing system may determine whether to change lanes to avoid the other vehicle based on speed data for other lanes. In some implementations, the computing system assigns penalties to lane segments surrounding the vehicle based on speed data for the different lane segments. For instance, the path finding system for the vehicle can use penalties and speed data to determine efficient routes that safely circumvent slow agents.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
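A minimal sketch of the penalty idea from the slow-lead-agent abstract above: each candidate lane segment is scored by how far its observed speed falls below a desired speed, and the path finder prefers the lowest-penalty lane. The quadratic penalty and the data layout are illustrative assumptions, not the patented assignment rule.

```python
# Toy lane-penalty scoring for routing around a slow lead agent.
# Penalty grows quadratically with the speed shortfall (assumption).

def lane_penalty(segment_speed_mps: float, desired_speed_mps: float) -> float:
    shortfall = max(0.0, desired_speed_mps - segment_speed_mps)
    return shortfall ** 2  # strongly avoid very slow lanes

def best_lane(segments: dict[str, float], desired_speed_mps: float) -> str:
    return min(segments,
               key=lambda lane: lane_penalty(segments[lane],
                                             desired_speed_mps))

segments = {"current": 3.0, "left": 12.0, "right": 9.0}
print(best_lane(segments, desired_speed_mps=13.0))  # "left"
```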
74.
Systems and Devices for Strain Relief for Magnetic Cores and Assemblies
An example device includes a mounting structure including a first material having a first coefficient of thermal expansion (CTE). The mounting structure includes a center portion and an outer portion. The device further includes a magnetic core for an electrical component that is coupled to the outer portion of the mounting structure. The magnetic core includes a second material having a second CTE. The magnetic core is split into a plurality of sections separated by spaces extending from the center portion to an outer edge of the outer portion. Each of the plurality of sections is separately coupled to the mounting structure, and each of the plurality of sections is connected to the electrical component.
Aspects and implementations of the present disclosure relate to augmenting point cloud data with artificial return points. An example method includes: receiving point cloud data comprising a plurality of return points, each return point being representative of a reflecting region that reflects a beam emitted by a sensing system, and generating a plurality of artificial return points based on presence or absence of return points along radial paths of beams emitted from the sensing system.
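A minimal sketch of the augmentation above, reduced to one range value per beam: a beam with no return is taken to indicate free space along its radial path, and an artificial point is placed at maximum range to record that. The free-space policy is an assumption for illustration; the patent's generation rule may be richer.

```python
import numpy as np

# Toy augmentation: add artificial "no-obstacle" points for beams
# that produced no return. Policy is an illustrative assumption.

def augment(beam_ranges: np.ndarray, max_range: float):
    """beam_ranges: per-beam measured range, NaN where no return."""
    artificial = []
    for beam_id, r in enumerate(beam_ranges):
        if np.isnan(r):
            # Absence of a return along this radial path -> artificial
            # point marking sensed-empty space at max range.
            artificial.append((beam_id, max_range))
    return artificial

print(augment(np.array([12.5, np.nan, 40.1, np.nan]), max_range=80.0))
# [(1, 80.0), (3, 80.0)]
```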
Aspects and implementations are related to systems and techniques enabling predictions of a motion change in a moving vehicle, predictions of an onset of a motion of an idling vehicle, and classification of vehicles based, at least in part, on vibrometry data obtained using light detection and ranging devices.
One example LIDAR device comprises a substrate and a waveguide disposed on the substrate. A first section of the waveguide extends lengthwise on the substrate in a first direction. A second section of the waveguide extends lengthwise on the substrate in a second direction different than the first direction. A third section of the waveguide extends lengthwise on the substrate in a third direction different than the second direction. The second section extends lengthwise between the first section and the third section. The LIDAR device also comprises a light emitter configured to emit light. The waveguide is configured to guide the light inside the first section toward the second section, inside the second section toward the third section, and inside the third section away from the second section.
The present disclosure relates to devices, systems, and methods relating to configurable silicon photomultiplier (SiPM) devices. An example device includes a substrate and a plurality of single photon avalanche diodes (SPADs) coupled to the substrate. The device also includes a plurality of outputs coupled to the substrate and a plurality of electrical components coupled to the substrate. The plurality of electrical components are configured to selectively connect the plurality of SPADs to the plurality of outputs by selecting which output of the plurality of outputs is connected to each SPAD of the plurality of SPADs and to thereby define a plurality of SiPMs in the device such that each SiPM of the plurality of SiPMs comprises a respective set of one or more SPADs connected to a respective output of the plurality of outputs.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for estimating a 3-D pose of an object of interest from image and point cloud data. In one aspect, a method includes obtaining an image of an environment; obtaining a point cloud of a three-dimensional region of the environment; generating a fused representation of the image and the point cloud; and processing the fused representation using a pose estimation neural network and in accordance with current values of a plurality of pose estimation network parameters to generate a pose estimation network output that specifies, for each of multiple keypoints, a respective estimated position in the three-dimensional region of the environment.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
80.
IDENTIFICATION OF REAL AND IMAGE SIGN DETECTIONS IN DRIVING APPLICATIONS
The described aspects and implementations enable efficient identification of real and image signs in autonomous vehicle (AV) applications. In one implementation, disclosed is a method and a system to perform the method that includes obtaining, using a sensing system of the AV, a combined image that includes a camera image and depth information for a region of an environment of the AV, classifying a first sign in the combined image as an image-true sign, performing a spatial validation of the first sign, which includes evaluation of a spatial relationship of the first sign and one or more objects in the region of the environment of the AV, and identifying, based on the performed spatial validation, the first sign as a real sign.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
Methods and devices for detecting traffic signals and their associated states are disclosed. In one embodiment, an example method includes scanning a target area using one or more sensors of a vehicle to obtain target area information. The vehicle may be configured to operate in an autonomous mode, and the target area may be a type of area where traffic signals are typically located. The method may also include detecting a traffic signal in the target area information, determining a location of the traffic signal, and determining a state of the traffic signal. Also, a confidence in the traffic signal may be determined. For example, the location of the traffic signal may be compared to known locations of traffic signals. Based on the state of the traffic signal and the confidence in the traffic signal, the vehicle may be controlled in the autonomous mode.
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
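The location-comparison step in the traffic-signal abstract above lends itself to a simple sketch: confidence decays with the distance from the detected signal's estimated position to the nearest mapped signal. The exponential falloff and its scale are illustrative assumptions.

```python
import math

# Toy confidence from comparing a detected signal's location against
# known (mapped) signal locations. Falloff constant is an assumption.

def signal_confidence(detected_xy, known_signals_xy, scale_m=2.0):
    nearest = min(math.dist(detected_xy, known)
                  for known in known_signals_xy)
    return math.exp(-nearest / scale_m)  # 1.0 at a known location

known = [(10.0, 5.0), (42.0, 7.5)]
print(round(signal_confidence((10.5, 5.0), known), 2))  # 0.78
```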
82.
Model for Excluding Vehicle from Sensor Field Of View
The technology relates to developing a highly accurate understanding of a vehicle's sensor fields of view in relation to the vehicle itself. A training phase is employed to gather sensor data in various situations and scenarios, and a modeling phase takes such information and identifies self-returns and other signals that should either be excluded from analysis during real-time driving or accounted for to avoid false positives. The result is a sensor field of view model for a particular vehicle, which can be extended to other similar makes and models of that vehicle. This approach enables a vehicle to determine whether sensor data is of the vehicle itself or of something else. As a result, the detailed modeling allows the on-board computing system to make driving decisions and take other actions based on accurate sensor information.
H04Q 9/00 - Arrangements in telecontrol or telemetry systems for selectively calling a substation from a main station, in which substation desired apparatus is selected for applying a control signal thereto or for obtaining measured values therefrom
83.
OBJECT TRACKING ACROSS A WIDE RANGE OF DISTANCES FOR DRIVING APPLICATIONS
The described aspects and implementations enable efficient and seamless tracking of objects in vehicle environments using different sensing modalities across a wide range of distances. A perception system of a vehicle deploys an object tracking pipeline with a plurality of models that include a camera model trained to perform, using camera images, object tracking at distances exceeding a lidar sensing range, a lidar model trained to perform, using lidar images, object tracking at distances within the lidar sensing range, and a camera-lidar model trained to transfer, using the camera images and the lidar images, object tracking from the camera model to the lidar model.
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G05D 1/02 - Control of position or course in two dimensions
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
Aspects of the disclosure relate to adjusting a virtual camera's orientation when a vehicle is making a turn. One or more computing devices may receive the vehicle's original heading prior to making the turn and the vehicle's current heading. Based on the vehicle's original heading and the vehicle's current heading, the one or more computing devices may determine an angle of the turn the vehicle is performing. The one or more computing devices may then determine a camera rotation angle, adjust the virtual camera's orientation relative to the vehicle to an updated orientation by rotating the virtual camera by the camera rotation angle, and generate a video corresponding to the virtual camera's updated orientation. The video may be displayed on the display by the one or more computing devices.
B60R 1/27 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
B60R 1/28 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with an adjustable field of view
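A minimal sketch of the heading arithmetic described above: the turn angle is the normalized difference between the original and current headings, and the camera rotation angle is derived from it. The damping factor that makes the virtual camera lag the vehicle is an invented illustration; the patent does not specify how the rotation angle is computed from the turn angle.

```python
# Toy heading-based camera rotation. Headings are in degrees; the
# damping factor is an illustrative assumption.

def camera_rotation(original_heading_deg: float,
                    current_heading_deg: float,
                    damping: float = 0.5) -> float:
    # Signed turn angle, normalized to (-180, 180].
    turn = (current_heading_deg - original_heading_deg
            + 180.0) % 360.0 - 180.0
    # Rotate the virtual camera by a fraction of the turn so it lags
    # smoothly behind the vehicle during the maneuver.
    return damping * turn

print(camera_rotation(350.0, 20.0))  # 15.0: half of the 30-degree turn
```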
Aspects and implementations of the present disclosure relate, generally, to optimization of autonomous vehicle (AV) technology and, more specifically, to a transfer hub for autonomous trucking operations. In one example, the disclosed techniques include obtaining, at a first time, one or more first images of an outside environment of an AV and shutting down a data processing system of the AV. The techniques further include initiating a starting-up of the data processing system of the AV and obtaining, at a second time, one or more second images of the outside environment of the AV. The techniques further include determining, based on a comparison of the one or more first images to the one or more second images, that the AV has not moved between the first time and the second time and completing the starting-up of the data processing system of the AV.
B67D 7/04 - Apparatus or devices for transferring liquids from bulk storage containers or reservoirs into vehicles or into portable containers, e.g. for retail sale purposes for transferring fuels, lubricants or mixed fuels and lubricants
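The moved/not-moved determination in the transfer-hub abstract above can be sketched as a before/after image comparison. Mean absolute pixel difference against a threshold is used below as a stand-in for whatever comparison the techniques actually employ; both the metric and the threshold are assumptions.

```python
import numpy as np

# Toy check that the AV has not moved between shutdown and startup,
# assuming grayscale images as arrays. Metric/threshold are assumed.

def has_not_moved(before: np.ndarray, after: np.ndarray,
                  threshold: float = 5.0) -> bool:
    diff = np.mean(np.abs(before.astype(float) - after.astype(float)))
    return diff < threshold  # near-identical views -> AV did not move

before = np.full((4, 4), 100, dtype=np.uint8)
after = before.copy()
after[0, 0] = 130  # small scene change (e.g., lighting)
print(has_not_moved(before, after))  # True: mean diff ~1.9
```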
86.
Predicting a Parking or Pullover Spot Vacancy for an Autonomous Vehicle Pickup
The technology involves pickups performed by autonomous vehicles. In particular, it includes identifying one or more potential pullover locations adjacent to an area of interest that an autonomous vehicle is approaching. The vehicle detects that a given one of the potential pullover locations is occupied by another vehicle and determines whether the other vehicle will be vacating the given pullover location within a selected amount of time. Upon determining that the other vehicle will be vacating the given potential pullover location within the timeframe, the vehicle determines whether to wait for the other vehicle to vacate the given pullover location. Then a driving system of the vehicle either performs a first action in order to wait for the other vehicle to vacate the given pullover location or performs a second action that is different from the first action when it is determined not to wait.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G08G 1/01 - Detecting movement of traffic to be counted or controlled
G08G 1/14 - Traffic control systems for road vehicles indicating individual free spaces in parking areas
87.
Systems, Apparatus, and Methods for Retrieving Image Data of Image Frames
At least one processor may be configured to receive a first image frame of a sequence of image frames from an image capture device and select a first portion of a first image frame. The at least one processor may also be configured to obtain alignment information and determine a first portion and a second portion of a second image frame based on the alignment information. Further, the at least one processor may be configured to determine a bounding region within the second image frame and fetch image data corresponding to the bounding region of the second image frame from memory. In some examples, the first image frame may comprise a base image and the second image frame may comprise an alternative image frame. Further, the first image frame may comprise any one of the image frames of the sequence of image frames.
G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
G06T 7/55 - Depth or shape recovery from multiple images
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
88.
User Interface Techniques for Recommending Remote Assistance Actions
Example embodiments relate to user interface techniques for recommending remote assistance actions. A remote computing device may display a representation of the forward path for an autonomous vehicle based on sensor data received from the vehicle. The computing device may augment the representation of the forward path to further depict one or more proposed trajectories available for the autonomous vehicle to perform. Each proposed trajectory conveys one or more maneuvers positioned relative to road boundaries in the forward path. The computing device may receive a selection of a proposed trajectory from the one or more proposed trajectories available for the autonomous vehicle to perform and provide navigation instructions to the vehicle based on the proposed trajectory.
Aspects of the present disclosure relate to a vehicle for maneuvering an occupant of the vehicle to a destination autonomously as well as providing information about the vehicle and the vehicle's environment for display to the occupant. The information includes a representation of a scene depicting an external environment of the vehicle. The representation of the scene includes a visual representation of the vehicle and a visual representation of objects in an external environment of the vehicle.
B60K 35/28 - Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor characterised by the type of the output information, e.g. video entertainment or vehicle dynamics information; Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor characterised by the purpose of the output information, e.g. for attracting the attention of the driver
B60K 35/81 - Arrangements for controlling instruments for controlling displays
G01C 21/36 - Input/output arrangements for on-board computers
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G08G 1/015 - Detecting movement of traffic to be counted or controlled with provision for distinguishing between motor cars and cycles
G08G 1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
G08G 1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
39 - Transport, packaging, storage and travel services
42 - Scientific, technological and industrial services, research and design
09 - Scientific and electric apparatus and instruments
12 - Land, air and water vehicles; parts of land vehicles
Goods & Services
Business consultancy; business advice; business training; business management; business planning; business administration; business support services; advertising services; transportation services, namely, operations management relating to vehicles; transportation services, namely, tracking, locating, and monitoring of vehicles for commercial purposes; transport in the nature of transportation logistics services, namely, arranging the transportation of goods for others; freight logistics management; monitoring, managing, and tracking of transportation of persons and delivery of goods and packages in transit, for business purposes; providing a website featuring information for business management of transportation logistics services, namely, providing information about planning, coordinating, and tracking the transportation of goods, freight, people, and conveyances; providing a website featuring information for transportation logistics management services, namely, about planning, coordinating, and tracking the transportation of people, and planning and scheduling shipments for users of transportation services; business data analysis; fleet management services in the nature of tracking, locating, and monitoring of fleet vehicles as well as vehicle fuel management, parking management, remote assistance management, and demand management, depot management, all for commercial purposes; advertising services; promoting the goods and services of others; arranging and providing discount programs that enable customers to obtain discounts on goods and services; administration of a customer loyalty program which provides discounted ride-hail rides; providing a website featuring information about discount and rewards programs; arranging and conducting incentive rewards programs to incentivize engagement with a ride-hailing platform; charitable services, namely, organizing and conducting volunteer programs and community service projects. 
Transportation services; car rental services; truck rental services; rental of autonomous vehicles; truck transport; car transport; travel by land vehicles; transport of persons; transport of goods; delivery of goods; delivery by road; transportation and delivery services by autonomous vehicles; freight and cargo services; freight transportation; supply chain logistics and reverse logistics services, namely, storage, transportation, and delivery of goods for others; providing autonomous vehicle booking services; travel arrangement, namely, arranging time-based ride-hailing; transportation services, namely, coordinating the pickup and dropoff of passengers at designated or directed locations; transportation management services for others, namely, planning, coordinating, and tracking the transportation of people and conveyances; providing transportation information; providing a website featuring information regarding autonomous car transportation and delivery services and scheduling transportation services; providing a website featuring information in the field of transportation; providing a website featuring information regarding transportation services and bookings for transportation services; providing a website featuring information regarding delivery services and bookings for delivery services; providing a website featuring information about transportation management services in the nature of transportation of goods, namely, about planning, coordinating, and tracking the transportation of conveyances; charitable services, namely, transportation and delivery services by road. Software as a service (SaaS) services; platform as a service (PaaS) services; online non-downloadable software; technical support services; software as a service (SaaS) services featuring software for sensor operation; platform as a service (PaaS) services featuring software for sensor operation; online non-downloadable software for detecting and issuing notifications regarding vehicle maintenance needs; online non-downloadable software for facilitating and assisting with vehicle maintenance remotely; online non-downloadable software for navigating, driving, and directing a vehicle car to receive fuelling and servicing; online non-downloadable software for sensor operation; online non-downloadable software for arranging, engaging, scheduling, managing, obtaining, booking, and coordinating travel, transportation, transportation services, ride-hailing, deliveries, and delivery services; online non-downloadable software for tracking, locating, and monitoring vehicles; online non-downloadable software for coordinating the transport and delivery of goods; online non-downloadable software for arranging, procuring, scheduling, engaging, coordinating, managing, and booking transportation and deliveries; online non-downloadable software for providing and managing delivery services; online non-downloadable software for providing and managing delivery of consumer goods, food, and groceries; online non-downloadable software for accessing and viewing transit information, schedules, routes, and prices; providing temporary use of online non-downloadable real-time map software for tracking vehicles and deliveries; providing a website featuring online non-downloadable software that enables users to request transportation; providing temporary use of online non-downloadable computer software for identifying trip delays and vehicle location; providing temporary use of online non-downloadable software for accessing transportation 
services, bookings for transportation services and dispatching motorized vehicles; online non-downloadable software for issuing, setting up, distributing, redeeming, and accessing promotions, coupons, discounts, deals, vouchers, rebates, rewards, incentives, and special offers; online non-downloadable software for vehicle fleet management and demand forecasting, vehicle charging, vehicle fuel monitoring, vehicle maintenance, vehicle depot management, vehicle parking management, and remote assistance with vehicles; software as a service (SaaS) services featuring online non-downloadable computer software for use as an application programming interface (API); online non-downloadable software for vehicle coordination, navigation, calibrating, direction, and management for use with vehicle on-board computers; software as a service (SaaS) services featuring software for vehicle coordination, navigation, calibrating, direction, and management for use with vehicle on-board computers; online non-downloadable software for analyzing transportation and deliveries; electronic monitoring and reporting of transportation data using computers or sensors; online non-downloadable software for the autonomous driving of motor vehicles; online non-downloadable software for autonomous vehicle navigation, steering, calibration, and management; online non-downloadable software for visualization, manipulation, and integration of digital graphics and images; online non-downloadable software for artificial intelligence, machine learning, and deep learning; online non-downloadable software for use in operating and calibrating lidar; online non-downloadable software used for data analytics in the field of transportation; online non-downloadable software used for data analytics in the field of transportation fleet management; online non-downloadable open source software for use in data management; land and road surveying; surveying services and data collection and analysis in connection therewith; mapping services; online non-downloadable software for accessing location, GPS, and motion sensor data for safety and emergency response purposes; online non-downloadable software for emergency assistance; online non-downloadable software for safety and incident detection; providing data sets in the field of machine perception and autonomous driving technology; providing information about autonomous-vehicle and machine-perception research via a website; research, design, and development in the field of artificial intelligence; research, design, and development in the field of autonomous technology; research, design, and development in the field of perception and motion prediction; research, design, and development of computer hardware and software for use with vehicle on-board computers for monitoring and controlling motor vehicle operation; installation, updating, and maintenance of computer hardware and software for use with vehicle on-board computers for monitoring and controlling motor vehicle operation; research, design, and development of computer hardware and software for vehicle coordination, navigation, calibrating, direction, and management; research, design, and development of sensors and for structural parts thereof; research, design, and development of lasers for sensing objects and distance to objects, lasers for sensing indoor and outdoor terrain, lasers for measuring purposes, laser measuring systems, lidar (light detection and ranging apparatus), and laser equipment; advanced product research, design, and 
development in the field of artificial intelligence in connection with autonomous vehicles; design and development of computer hardware and software; research, design, and development of vehicle software; technological, scientific and research services in the field of robotics, self-driving car and autonomous vehicle technology; providing virtual computer systems and environments through cloud computing for the purpose of training self-driving cars, autonomous vehicles and robots; virtual testing of self-driving cars, autonomous vehicles and robots using computer simulations; creation, development, programming and implementation of simulation software in the field of self-driving cars, autonomous vehicles and robots; creation of simulation programs for autonomous vehicles. Computer software; downloadable software; recorded software; computer hardware; computers; sensors; software for sensor operation; downloadable and recorded software for detecting and issuing notifications regarding vehicle maintenance needs; downloadable and recorded software for facilitating and assisting with vehicle maintenance remotely; downloadable and recorded software for navigating, driving, and directing a vehicle to receive fuelling and servicing; downloadable software for arranging, engaging, scheduling, managing, obtaining, booking, and coordinating travel, transportation, transportation services, ride-hailing, deliveries, and delivery services; downloadable software for the scheduling and dispatch of motorized vehicles; downloadable software for monitoring, managing, and tracking delivery of goods; downloadable software for requesting and ordering delivery services; downloadable software for planning, scheduling, controlling, monitoring, and providing information on transportation of assets and goods; downloadable software for tracking and providing information concerning pick-up and delivery of assets and goods in transit; downloadable software for accessing and providing online grocery and retail store services; downloadable software for providing and managing delivery of consumer goods, food, and groceries; downloadable real-time map software for tracking vehicles, trips, and deliveries; downloadable software for displaying transit routes; downloadable software for providing information on transportation and delivery services; downloadable software featuring information about food, grocery, and consumer products; downloadable software for users to administer, access, monitor, and manage loyalty programs and rewards; downloadable software for earning, tracking, and redeeming loyalty rewards, points, and discounts; downloadable software for issuing, setting up, distributing, and redeeming promotions, coupons, discounts, deals, vouchers, rebates, rewards, incentives, and special offers to customers; computer hardware and recorded software for the autonomous driving of motor vehicles; downloadable software in the nature of vehicle operating system software; downloadable software for autonomous vehicle operation, navigation, steering, calibration, and management; downloadable software for vehicle fleet management, namely, tracking fleet vehicles for commercial purposes; downloadable computer software for use as an application programming interface (API); downloadable and recorded software for vehicle fleet management and demand forecasting, vehicle charging, vehicle fuel monitoring, vehicle maintenance, vehicle depot management, vehicle parking management, and remote assistance with vehicles; recorded 
software and computer hardware for vehicle fleet launching, coordination, calibrating, direction, scheduling, booking, dispatching, and management; computer hardware and recorded software for use with vehicle cameras; recorded software for artificial intelligence (AI), machine learning, and deep learning; recorded software for artificial intelligence (AI), machine learning, and deep learning for data processing and contextual prediction, personalization, and predictive analytics; downloadable computer programs and downloadable software for artificial intelligence (AI), machine learning, and deep learning for use in connection with operating autonomous vehicles, systems, devices, and machinery; recorded software and computer hardware for use in connection with and for operating autonomous vehicles, systems, devices, and machinery; computer hardware and recorded software for operating vehicle cameras; downloadable computer software for use in operating and calibrating lidar; computer hardware for operating autonomous vehicles; navigational instruments for vehicles; laser object detectors; laser device for sensing objects; laser device for sensing outdoor terrain; audio detectors; laser device for sensing distance to objects in the nature of laser rangefinders; electric sensors for determining position, velocity, direction, and acceleration; perimeter sensors in the nature of sensors that measure the presence of objects in the environment and the speed, trajectory, and heading of objects; environmental sensors for measuring the presence of objects in the environment and the speed of objects; vehicle sensors, namely, environmental sensors for measuring the presence of objects in the environment and the speed of objects; sensors for determining position, velocity, direction, and acceleration; vehicle safety and detection equipment and hardware; safety and driving assistant systems comprised of sensors for determining position, velocity, direction, and acceleration of land vehicles; cameras; cameras for use with vehicles; downloadable data sets in the field of machine perception and autonomous driving technology. Vehicles; self-driving transport vehicles; trucks; freight vehicles, namely, trucks and vans; shared transit vehicles; freight vehicles in the nature of land vehicles; autonomous land vehicles and structural parts thereof.
91.
System and method of providing recommendations to users of vehicles
A system and method are arranged to provide recommendations to a user of a vehicle. In one aspect, the vehicle navigates in an autonomous mode, and the recommendations are based on the location of the vehicle and output from sensors directed to the environment surrounding the vehicle. In further aspects, both current and previous sensor data are used to make the recommendations, as well as data based on the sensors of other vehicles.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
B60T 7/22 - Brake-action initiating means for automatic initiation; Brake-action initiating means for initiation not subject to will of driver or passenger initiated by contact of vehicle, e.g. bumper, with an external object, e.g. another vehicle
B60T 8/00 - Arrangements for adjusting wheel-braking force to meet varying vehicular or ground-surface conditions, e.g. limiting or varying distribution of braking force
B60T 8/17 - Using electrical or electronic regulation means to control braking
B60T 8/88 - Arrangements for adjusting wheel-braking force to meet varying vehicular or ground-surface conditions, e.g. limiting or varying distribution of braking force responsive to a speed condition, e.g. acceleration or deceleration with failure responsive means, i.e. means for detecting and indicating faulty operation of the speed responsive control means
G06T 7/223 - Analysis of motion using block-matching
G06T 7/231 - Analysis of motion using block-matching using full search
G06T 7/521 - Depth or shape recovery from laser ranging, e.g. using interferometry; Depth or shape recovery from the projection of structured light
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G07C 9/00 - Individual registration on entry or exit
B60W 30/186 - Preventing damage resulting from overload or excessive wear of the driveline; excessive wear or burn out of friction elements, e.g. clutches
B60W 50/029 - Adapting to failures or work around with other constraints, e.g. circumvention by avoiding use of failed parts
B62D 6/00 - Arrangements for automatically controlling steering depending on driving conditions sensed and responded to, e.g. control circuits
G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
Aspects of the disclosure relate to controlling a vehicle. For instance, using a camera, a first camera image including a first object may be captured. A first bounding box for the first object and a distance to the first object may be identified. A second camera image including a second object may be captured. A second bounding box for the second object and a distance to the second object may be identified. Whether the first object is the second object may be determined using a plurality of models: a first model to compare visual similarity of the two bounding boxes, a second model to compare a three-dimensional location based on the distance to the first object with a three-dimensional location based on the distance to the second object, and a third model to compare results from the first and second models. The vehicle may be controlled in an autonomous driving mode based on a result of the third model.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G01S 13/931 - Radar or analogous systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G06F 18/22 - Matching criteria, e.g. proximity measures
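A toy version of the three-model structure above, with classical stand-ins for what are presumably learned models: histogram intersection for the visual-similarity model, a distance-based score for the 3D-location model, and a weighted fusion for the third model. All three stand-ins, the weights, and the threshold are illustrative assumptions.

```python
import math

def visual_similarity(hist_a: list[float], hist_b: list[float]) -> float:
    # First model (stand-in): histogram intersection of box appearance.
    return sum(min(a, b) for a, b in zip(hist_a, hist_b))  # in [0, 1]

def location_agreement(xyz_a, xyz_b, scale_m: float = 3.0) -> float:
    # Second model (stand-in): agreement of the two 3-D locations.
    return math.exp(-math.dist(xyz_a, xyz_b) / scale_m)

def same_object(hist_a, hist_b, xyz_a, xyz_b, threshold=0.6) -> bool:
    # Third model (stand-in): weighted fusion of both comparisons.
    score = (0.5 * visual_similarity(hist_a, hist_b)
             + 0.5 * location_agreement(xyz_a, xyz_b))
    return score > threshold

print(same_object([0.2, 0.5, 0.3], [0.25, 0.45, 0.3],
                  (10.0, 2.0, 0.0), (10.5, 2.2, 0.0)))  # True
```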
93.
Synchronized Spinning LIDAR and Rolling Shutter Camera System
One example system comprises a LIDAR sensor that rotates about an axis to scan an environment of the LIDAR sensor. The system also comprises one or more cameras that detect external light originating from one or more external light sources. The one or more cameras together provide a plurality of rows of sensing elements. The rows of sensing elements are aligned with the axis of rotation of the LIDAR sensor. The system also comprises a controller that operates the one or more cameras to obtain a sequence of image pixel rows. A first image pixel row in the sequence is indicative of external light detected by a first row of sensing elements during a first exposure time period. A second image pixel row in the sequence is indicative of external light detected by a second row of sensing elements during a second exposure time period.
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
H04N 23/698 - Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
H04N 23/71 - Circuitry for evaluating the brightness variation
H04N 23/73 - Circuitry for compensating brightness variation in the scene by influencing the exposure time
H04N 23/90 - Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
H04N 25/40 - Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
94.
QUALITY SCORING FOR PULLOVERS FOR AUTONOMOUS VEHICLES
Aspects of the disclosure relate to evaluating the quality of a predetermined pullover location for an autonomous vehicle. For instance, a plurality of inputs for the predetermined pullover location may be received. The plurality of inputs may each include a value representative of a characteristic of the predetermined pullover location. The plurality of inputs may be combined to determine a pullover quality value for the predetermined pullover location. The pullover quality value may be provided to a vehicle in order to enable the vehicle to select a pullover location.
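A minimal sketch of combining the plurality of inputs into a single pullover quality value, assuming each input is normalized to [0, 1] and combined by a weighted mean; the characteristic names and weights shown are invented for illustration, not the disclosed set.

    # Hedged sketch: weighted combination of normalized characteristic
    # values into one pullover quality value.

    def pullover_quality(inputs: dict[str, float],
                         weights: dict[str, float]) -> float:
        """Weighted mean of per-characteristic values in [0, 1]."""
        total_w = sum(weights[k] for k in inputs)
        return sum(weights[k] * inputs[k] for k in inputs) / total_w

    print(pullover_quality(
        {"distance_to_destination": 0.8,   # closer to request is better
         "lane_width_margin": 0.6,         # room to open doors safely
         "visibility_to_traffic": 0.9},    # how visible the stopped car is
        {"distance_to_destination": 0.5,
         "lane_width_margin": 0.3,
         "visibility_to_traffic": 0.2}))   # -> 0.76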
Example systems and methods enable an autonomous vehicle to request assistance from a remote operator when the vehicle's confidence in operation is low. One example method includes operating an autonomous vehicle in a first autonomous mode. The method may also include identifying a situation where a level of confidence of an autonomous operation in the first autonomous mode is below a threshold level. The method may further include sending a request for assistance to a remote assistor, the request including sensor data representative of a portion of an environment of the autonomous vehicle. The method may additionally include receiving a response from the remote assistor, the response indicating a second autonomous mode of operation. The method may also include causing the autonomous vehicle to operate in the second autonomous mode of operation in accordance with the response from the remote assistor.
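A minimal sketch of the confidence-gated request flow described above, assuming a JSON request/response exchange with the remote assistor; the threshold value, message fields, and mode names are illustrative assumptions.

    # Hedged sketch of the confidence-gated assistance request.
    import json

    CONFIDENCE_THRESHOLD = 0.7   # assumed trigger level

    def maybe_request_assistance(confidence: float, sensor_snapshot: dict,
                                 send_fn) -> str | None:
        """Below threshold: send sensor data to the remote assistor and
        return the second autonomous mode named in the response.
        Otherwise return None and stay in the first mode."""
        if confidence >= CONFIDENCE_THRESHOLD:
            return None
        request = json.dumps({"type": "assist_request",
                              "sensor_data": sensor_snapshot})
        response = json.loads(send_fn(request))
        return response["second_autonomous_mode"]

    # Stand-in remote assistor that always suggests a cautious mode:
    reply = maybe_request_assistance(
        0.4, {"camera": "front_image_ref"},
        lambda req: json.dumps({"second_autonomous_mode": "slow_caution"}))
    print(reply)   # -> slow_caution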
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B60W 50/02 - Ensuring safety in case of control system failures, e.g. by diagnosing, circumventing or fixing failures
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
One example method of testing an electrical device comprises transmitting, by a controller of the electrical device, a data pattern to a memory device of the electrical device to provide a written data pattern to the memory device, wherein the data pattern replicates a resonant frequency of at least a portion of the electrical device; reading the written data pattern from the memory device with the controller; and comparing the written data pattern to the data pattern.
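A minimal sketch of the write/read-back/compare loop, where "replicating a resonant frequency" is interpreted as toggling the data lines at a chosen rate via an alternating word pattern; that interpretation, and the pattern generator itself, are guesses rather than the disclosed pattern.

    # Hedged sketch of the memory test loop described above.

    def resonance_pattern(num_words: int, toggle_period: int) -> list[int]:
        """All-ones / all-zeros 32-bit words, flipping every
        toggle_period words, so back-to-back transfers toggle the bus
        at a target rate."""
        return [0xFFFFFFFF if (i // toggle_period) % 2 == 0 else 0x00000000
                for i in range(num_words)]

    def run_memory_test(memory: list[int], toggle_period: int = 4) -> bool:
        pattern = resonance_pattern(len(memory), toggle_period)
        memory[:] = pattern          # controller writes the pattern
        readback = list(memory)      # controller reads it back
        return readback == pattern   # compare written vs. original

    print(run_memory_test([0] * 64))   # -> True when no corruption occurs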
Example embodiments relate to real-time health monitoring for automotive radars. A computing device may receive radar data from two radar units that have partially overlapping fields of view and detect a target object positioned such that both radar units capture measurements of the target object. The computing device may determine a power level representing the target object for the radar data from each radar unit, adjust these power levels, and determine a power difference between them. When the power difference exceeds a threshold power difference, the computing device may perform a calibration process to decrease the power difference below the threshold power difference, or alert the vehicle, including its onboard algorithms, to the reduced performance of the radar.
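A minimal sketch of the cross-radar power check above, assuming power levels in dB, per-unit calibration offsets as the "adjustment," and a fixed threshold; the 3 dB figure and the return values are illustrative assumptions.

    # Hedged sketch of the power-difference health check.

    THRESHOLD_DB = 3.0   # assumed threshold power difference

    def radar_health_check(power_a_db: float, power_b_db: float,
                           offset_a_db: float = 0.0,
                           offset_b_db: float = 0.0) -> tuple[str, float]:
        """Adjust each unit's measured power for the shared target,
        then compare the difference against the threshold."""
        diff = abs((power_a_db + offset_a_db) - (power_b_db + offset_b_db))
        if diff > THRESHOLD_DB:
            return "calibrate_or_alert", diff
        return "healthy", diff

    print(radar_health_check(-42.0, -47.5))   # -> ('calibrate_or_alert', 5.5)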
An example method includes receiving, from one or more sensors associated with an autonomous vehicle, sensor data associated with a target object in an environment of the vehicle during a first environmental condition, where at least one sensor of the one or more sensors is configurable to be associated with one of a plurality of operating field of view volumes. The method also includes, based on the sensor data, determining at least one parameter associated with the target object. The method also includes determining a degradation in the at least one parameter between the sensor data and past sensor data, where the past sensor data is associated with the target object in the environment during a second environmental condition different from the first environmental condition. The method additionally includes, based on the degradation, adjusting the operating field of view volume of the at least one sensor to a different one of the plurality of operating field of view volumes.
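A minimal sketch of stepping between discrete operating field of view volumes when a tracked parameter degrades, assuming a relative-degradation trigger; the volume names, the detection-range example, and the 20% threshold are invented for illustration.

    # Hedged sketch: pick a more conservative operating field of view
    # volume when a tracked parameter (e.g., detection range) degrades
    # relative to the past environmental condition.

    FOV_VOLUMES = ["wide_long_range", "wide_mid_range", "narrow_short_range"]

    def adjust_fov(current_idx: int, param_now: float,
                   param_past: float, max_degradation: float = 0.2) -> int:
        """Return the index of the FOV volume to use, given the
        parameter under the current vs. past condition."""
        degradation = (param_past - param_now) / param_past
        if degradation > max_degradation and current_idx < len(FOV_VOLUMES) - 1:
            return current_idx + 1
        return current_idx

    # Detection range fell from 200 m (clear) to 120 m (fog): a 40% drop.
    print(FOV_VOLUMES[adjust_fov(0, 120.0, 200.0)])   # -> wide_mid_range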
G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G01W 1/06 - Instruments for indicating weather conditions by measuring two or more variables, e.g. humidity, pressure, temperature, cloud cover or wind speed giving a combined indication of weather conditions
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/88 - Image or video recognition using optical means, e.g. reference filters, holographic masks, frequency domain filters or spatial domain filters
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G08G 1/01 - Detecting movement of traffic to be counted or controlled
G08G 1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
G08G 1/048 - Detecting movement of traffic to be counted or controlled with provision for compensation of environmental or other condition, e.g. snow, vehicle stopped at detector
H04N 23/69 - Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming