Provided herein is a system and method for a bracket configured to position one or more sensors on a vehicle. A method for supporting a sensor on a vehicle includes: attaching an internal frame structure to an inside of a hood of a vehicle behind a grille supported by the hood; and attaching an external frame structure to the internal frame structure, where the external frame structure is positioned outside of the hood in front of the grille, where supports for the external frame structure extend through the grille and attach to the internal frame structure, and where the external frame structure supports the one or more sensors at a position providing a line-of-sight not visible from inside a passenger compartment of the vehicle.
Provided herein is a system and method for a bracket configured to position one or more sensors 415, 420, 425 on a vehicle 10. A method for supporting a sensor 415, 420, 425 on a vehicle 10 includes: attaching an internal frame structure 250, 350 to an inside of a hood of a vehicle 10 behind a grille 125, 225 supported by the hood; and attaching an external frame structure 130 to the internal frame structure 250, 350, where the external frame structure is positioned outside of the hood 120 in front of the grille 125, 225, where supports for the external frame structure extend through the grille and attach to the internal frame structure, and where the external frame structure supports the one or more sensors 415, 420, 425 at a position providing a line-of-sight not visible from inside a passenger compartment of the vehicle 10.
B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
B60R 19/50 - Bumpers, i.e. impact receiving or absorbing members for protecting vehicles or fending off blows from other vehicles or objects combined with, or convertible into, other devices or objects, e.g. bumpers combined with road brushes, bumpers convertible into beds with lights or registration plates
3.
GENERATIVE ARTIFICIAL INTELLIGENCE BASED TRAJECTORY SIMULATION
Devices, systems, and methods for simulating a trajectory of an object are described. An example method includes obtaining a context feature representation corresponding to context information, wherein the context information comprises information describing an environment of the object; obtaining a control feature representation corresponding to control information, wherein the control information comprises information that the simulated trajectory needs to satisfy; determining a latent variable using an input encoder based on the context feature representation and the control feature representation; and determining the simulated trajectory by inputting the latent variable, the context feature representation, and the control feature representation into a decoder.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
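For illustration only, a minimal Python sketch of the encode-to-latent, decode-to-trajectory flow described in the abstract above; the feature dimensions, random stand-in weights, and two-column waypoint layout are assumptions, not details from the filing.

```python
import numpy as np

rng = np.random.default_rng(0)
D_CTX, D_CTL, D_LAT, HORIZON = 16, 8, 4, 10  # assumed sizes

# Random projection matrices stand in for trained encoder/decoder weights.
W_enc = rng.normal(size=(D_CTX + D_CTL, D_LAT))
W_dec = rng.normal(size=(D_LAT + D_CTX + D_CTL, HORIZON * 2))

def simulate_trajectory(ctx_feat: np.ndarray, ctl_feat: np.ndarray) -> np.ndarray:
    """Input encoder maps the concatenated context/control features to a latent
    variable; the decoder consumes latent + context + control feature
    representations and emits an (x, y) waypoint sequence."""
    z = np.tanh(np.concatenate([ctx_feat, ctl_feat]) @ W_enc)  # latent variable
    dec_in = np.concatenate([z, ctx_feat, ctl_feat])
    return (dec_in @ W_dec).reshape(HORIZON, 2)                # simulated trajectory

traj = simulate_trajectory(rng.normal(size=D_CTX), rng.normal(size=D_CTL))
print(traj.shape)  # (10, 2) waypoints
```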
4.
ERROR MONITORING SCHEMES USING ALIVENESS RECORD OF THREAD
Described are devices, systems and methods for monitoring a health status of a thread. A method of monitoring a health status of a thread comprises: invoking a callback for the thread to write a begin record and an end record in an aliveness record that corresponds to the thread, the aliveness record being stored in a memory; detecting an unhealthy status of the thread by reading the aliveness record from the memory; and reporting the unhealthy status of the thread in response to the detecting.
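For illustration, a minimal sketch of the aliveness-record scheme described above: the monitored thread's callback writes begin and end records, and a monitor reads them back to detect an unhealthy status. The record layout and the staleness threshold are assumptions, not details from the filing.

```python
import time
import threading

STALE_AFTER_S = 2.0  # assumed threshold for declaring a thread unhealthy

class AlivenessRecord:
    """Begin/end timestamps written by the monitored thread's callback."""
    def __init__(self):
        self.begin_ts = None
        self.end_ts = None
        self.lock = threading.Lock()

    def write_begin(self):
        with self.lock:
            self.begin_ts = time.monotonic()

    def write_end(self):
        with self.lock:
            self.end_ts = time.monotonic()

def is_unhealthy(record: AlivenessRecord) -> bool:
    """Read the aliveness record and flag a thread that never ran, is stuck
    inside an iteration, or has been idle past the staleness window."""
    with record.lock:
        begin, end = record.begin_ts, record.end_ts
    now = time.monotonic()
    if begin is None:
        return True  # never checked in
    if end is None or end < begin:
        return (now - begin) > STALE_AFTER_S  # stuck inside an iteration
    return (now - end) > STALE_AFTER_S        # idle too long since completion

record = AlivenessRecord()

def worker():
    for _ in range(5):
        record.write_begin()
        time.sleep(0.1)  # stand-in for real work
        record.write_end()

threading.Thread(target=worker, daemon=True).start()
time.sleep(0.5)
print("reporting unhealthy thread" if is_unhealthy(record) else "thread healthy")
```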
Techniques are described for monitoring tire conditions on an autonomous vehicle, more specifically, an autonomous truck that includes a tractor unit and a trailer. An example autonomous vehicle includes a tractor unit configured to be coupled to a trailer that comprises multiple tires. The autonomous vehicle may also include one or more infrared sensors. At least part of the one or more infrared sensors is positioned at a rear side of the tractor unit and faces towards the trailer. The one or more infrared sensors are configured to capture one or more heatmaps, each representing a temperature distribution of a tire. The one or more infrared sensors are in communication with a control system that is configured to receive the temperature distribution from the one or more infrared sensors, determine a tire condition of the tire, and operate the tractor unit according to the tire condition.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G01J 5/48 - Thermography; Techniques using wholly visual means
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G06T 11/20 - Drawing from basic elements, e.g. lines or circles
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G07C 5/08 - Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle, or waiting time
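As a toy illustration of the tire-monitoring idea above, the sketch below maps an infrared heatmap to a coarse tire condition. The temperature thresholds and the max-temperature heuristic are illustrative assumptions; the application does not state concrete values.

```python
import numpy as np

# Assumed thresholds; the filing does not give concrete numbers.
WARN_C, CRITICAL_C = 90.0, 110.0

def classify_tire(heatmap_c: np.ndarray) -> str:
    """Map an infrared heatmap (deg C per pixel) to a coarse tire condition."""
    peak = float(heatmap_c.max())
    if peak >= CRITICAL_C:
        return "critical"   # e.g. trigger a safe stop
    if peak >= WARN_C:
        return "warning"    # e.g. reduce speed
    return "normal"

heatmap = 60.0 + 40.0 * np.random.rand(32, 32)  # synthetic temperature grid
print(classify_tire(heatmap))
```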
6.
MAP DISPLAYING SCHEMES USING MULTI-LAYER REPRESENTATION
Disclosed are devices, systems and methods for displaying a map for a user. A method of displaying a map for a user comprises: receiving, from a user device, a request to display the map for a geographical region around a vehicle, wherein the request identifies one or more layers of the map; retrieving, in response to the request, map data corresponding to the geographical region from a map database that stores a multi-layer representation of the geographical region; selecting, in response to the request, the one or more layers from the multi-layer representation; and displaying the map on a display of the user device based on the request.
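A minimal sketch of the layer-selection step described above, assuming a multi-layer map store keyed by layer name; the layer names and record shapes are hypothetical.

```python
# Assumed multi-layer representation of a geographical region.
MAP_DB = {
    "lanes": [{"id": 1, "geom": "..."}],
    "signs": [{"id": 7, "geom": "..."}],
    "radar": [{"id": 3, "geom": "..."}],
}

def layers_for_request(requested: list[str]) -> dict:
    """Select only the layers named in the request from the multi-layer map."""
    return {name: MAP_DB[name] for name in requested if name in MAP_DB}

print(layers_for_request(["lanes", "signs"]))  # data then drawn on the user device
```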
A computer-implemented method of map data processing, comprising generating, for a grid-based representation of map data, raw grid features; building a grid map by reading from a memory that stores the raw grid features; and processing the grid map using one or more post-processing operations including a smoothing operation applied across zero or more grid lines of the grid map according to a rule.
A cooling system for an autonomous vehicle receives a first flow rate data from a first flow rate sensor circuit and a second flow rate data from a second flow rate sensor circuit. The first flow rate data indicates a first flow rate of the liquid coolant flowing out of a first pump. The second flow rate data indicates a second flow rate of the liquid coolant flowing out of a second pump. The system detects that the first flow rate of the liquid coolant is less than a threshold flow rate. In response, the system communicates, to the second pump, a signal that indicates to increase a speed of the second pump to increase the second flow rate of the liquid coolant flowing out of the second pump and compensate for the first flow rate that is less than the threshold flow rate.
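The compensation logic above reduces to a simple control rule. Below is a hedged sketch; the flow units, threshold, and proportional speed bump are assumptions rather than values from the filing.

```python
# Assumed units and limits; the abstract does not give numbers.
THRESHOLD_LPM = 8.0  # minimum acceptable flow from the first pump
SPEED_STEP = 0.1     # fractional speed increase per liter/min of deficit

def compensate(flow1_lpm: float, pump2_speed: float) -> float:
    """If pump 1 under-delivers, command pump 2 to speed up to compensate."""
    if flow1_lpm < THRESHOLD_LPM:
        deficit = THRESHOLD_LPM - flow1_lpm
        pump2_speed = min(1.0, pump2_speed + SPEED_STEP * deficit)  # cap at full speed
    return pump2_speed

print(compensate(flow1_lpm=5.5, pump2_speed=0.6))  # new speed command for pump 2
```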
A system receives, from one or more flow rate sensor circuits, one or more signals that indicate the flow rate of coolant provided to a set of circuit boards. The one or more flow rate sensor circuits are configured to detect the flow rate of the coolant traveling from one or more pumps towards the set of circuit boards. The one or more pumps are configured to direct a flow of the coolant towards the set of circuit boards. The system compares the flow rate of the coolant to a threshold flow rate. The system determines that a total flow rate of the coolant is less than the threshold flow rate. The system causes the autonomous vehicle to perform a minimal risk maneuver in response to determining that the flow rate of the coolant is less than the threshold flow rate.
A system receives, from one or more flow rate sensor circuits, one or more signals that indicate the flow rate of coolant provided to a set of circuit boards. The one or more flow rate sensor circuits are configured to detect the flow rate of the coolant traveling from one or more pumps towards the set of circuit boards. The one or more pumps are configured to direct a flow of the coolant towards the set of circuit boards. The system compares the flow rate of the coolant to a threshold flow rate. The system determines that a total flow rate of the coolant is less than the threshold flow rate. The system causes the autonomous vehicle to perform a minimal risk maneuver in response to determining that the flow rate of the coolant is less than the threshold flow rate.
An example sensor system for use with a vehicle includes a first group of sensing devices that are associated with an essential level of autonomous operation of the vehicle. The first group of sensing devices are configured and intended for continuous use while the vehicle is being autonomously operated. The sensor system further includes a second group of sensing devices that are associated with a non-essential level of autonomous operation of the vehicle. For example, the second group of sensing devices are configured to be redundant to and/or to enhance the performance of the first group of sensing devices. The second group operates in response to certain conditions being determined during autonomous operation of the vehicle. The sensor system further includes a plurality of assemblies attached to the vehicle. Each assembly includes two or more sensing devices from the first group and/or the second group.
A cooling system for an autonomous vehicle receives a first flow rate data from a first flow rate sensor circuit and a second flow rate data from a second flow rate sensor circuit. The first flow rate data indicates a first flow rate of the liquid coolant flowing out of a first pump. The second flow rate data indicates a second flow rate of the liquid coolant flowing out of a second pump. The system detects that the first flow rate of the liquid coolant is less than a threshold flow rate. In response, the system communicates, to the second pump, a signal that indicates to increase a speed of the second pump to increase the second flow rate of the liquid coolant flowing out of the second pump and compensate for the first flow rate that is less than the threshold flow rate.
Devices, systems, and methods for simulating an operation of an autonomous vehicle over time are described. An example method includes obtaining runtime data of an operation of modules of an autonomous vehicle during the operation, the runtime data including first module data having a first refresh frequency and second module data having a second refresh frequency, compiling a plurality of simulation data packets based on the runtime data according to a simulation frequency that is different from at least one of the first refresh frequency or the second refresh frequency, and simulating the operation of the autonomous vehicle over time based on the plurality of simulation data packets.
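One way to read the packet-compilation step above is as resampling two module streams with different refresh frequencies onto a common simulation clock. The sketch below uses a zero-order hold; the packet fields and rates are assumptions.

```python
from bisect import bisect_right

def latest_before(samples, t):
    """Most recent (timestamp, value) sample at or before simulation time t."""
    idx = bisect_right([ts for ts, _ in samples], t)
    return samples[idx - 1][1] if idx else None

def compile_packets(mod_a, mod_b, sim_hz, duration_s):
    """Hold the latest value of each module stream on the simulation clock."""
    n = int(duration_s * sim_hz) + 1
    return [{"t": k / sim_hz,
             "a": latest_before(mod_a, k / sim_hz),
             "b": latest_before(mod_b, k / sim_hz)}
            for k in range(n)]

mod_a = [(i / 10.0, f"a{i}") for i in range(10)]  # module refreshed at 10 Hz
mod_b = [(i / 25.0, f"b{i}") for i in range(25)]  # module refreshed at 25 Hz
for packet in compile_packets(mod_a, mod_b, sim_hz=5.0, duration_s=0.6):
    print(packet)
```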
Provided herein is a system and method for monitoring the health of a steering system of a vehicle. Methods can include: receiving location information and motion information associated with a vehicle; determining, from the location information, a map-matched road segment associated with the location information; establishing a geometry of the map-matched road segment based on map data of a map database; filtering the motion information associated with the vehicle to establish a baseline movement of the vehicle; determining, during the baseline movement of the vehicle, steering angle input received at a steering system of the vehicle; determining, from the steering angle input received at the steering system of the vehicle, a steering angle offset; and causing the steering angle offset to be applied to the steering system of the vehicle.
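A compact sketch of the offset-estimation idea above, assuming that baseline movement means straight driving on a straight map-matched segment; the yaw-rate tolerance and the use of a median are assumptions.

```python
import statistics

def estimate_offset(steering_deg: list[float], yaw_rate_dps: list[float],
                    straight_yaw_tol: float = 0.2) -> float:
    """Estimate a steering-angle offset from samples taken while the vehicle
    drives straight (near-zero yaw rate). On a straight segment the steering
    input should average to zero; any residual bias is treated as the offset."""
    straight = [a for a, w in zip(steering_deg, yaw_rate_dps)
                if abs(w) < straight_yaw_tol]
    if not straight:
        raise ValueError("no baseline (straight-driving) samples")
    return statistics.median(straight)

angles = [1.6, 1.4, 5.0, 1.5, 1.7, -3.0, 1.5]    # degrees
yaw = [0.0, 0.1, 2.0, 0.0, -0.1, -1.5, 0.05]     # deg/s; large values = turning
print(f"apply offset of {estimate_offset(angles, yaw):.2f} deg")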
A sensor housing is provided that is configured to be mounted to a roof of a vehicle. The sensor housing includes a frame defining a plurality of cavities and a plurality of windows opening into respective cavities. The sensor housing also includes a plurality of sensors, at least some of which are disposed within the respective cavities defined by the frame. The sensors disposed within the respective cavities are oriented relative to the frame such that fields of view of the sensors that are disposed within the respective cavities defined by the frame are directed through the windows that open into the respective cavities. At least one of the plurality of sensors may include a camera mounted to the frame so as to be exterior to the vehicle and oriented such that a field of view of the camera is directed into a cabin of the vehicle.
An image processing method includes performing, using images obtained from one or more sensors onboard a vehicle, a 2-dimensional (2D) feature extraction; performing a 3-dimensional (3D) feature extraction on the images; and detecting objects in the images by fusing detection results from the 2D feature extraction and the 3D feature extraction.
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
Techniques are described for analyzing autonomous vehicle driving. An example technique includes receiving, by a computer, a set of data from software that performs driving related operations for an autonomous vehicle; generating a plurality of frames using timestamps associated with the set of data; determining, for each frame, that at least one data item indicates information related to the autonomous vehicle and/or one or more objects; assigning, for each frame, a label associated with the information indicated by the at least one data item; and displaying, using a graphical user interface (GUI) and for a test performed with the software, at least one label associated with at least one item of information in a frame.
A method of processing point cloud information includes converting points in a point cloud obtained from a lidar sensor into a voxel grid; generating, from the voxel grid, sparse voxel features by applying a multi-layer perceptron and one or more max pooling layers that reduce the dimension of input data; applying a cascade of an encoder that performs an N-stage sparse-to-dense feature operation, a global context pooling (GCP) module, and an M-stage decoder that performs a dense-to-sparse feature generation operation, where the GCP module bridges an output of a last stage of the N stages with an input of a first stage of the M stages, where N and M are positive integers, and where the GCP module comprises a multi-scale feature extractor; and performing one or more perception operations on an output of the M-stage decoder and/or an output of the GCP module.
Devices, systems, and methods for controlling a vehicle are described. An example method for controlling a vehicle includes obtaining planning information relating to an intended operation of the vehicle, the intended operation relating to an intended value of an operation parameter of the vehicle; obtaining, based on the intended operation of the vehicle, context information relating to an environment in which the vehicle is to operate following the planning information; determining a context compensated control instruction based on the planning information and the context information; obtaining feedback relating to a deviation of a real-time value of the operation parameter of the vehicle operating according to the context compensated control instruction from the intended value of the operation parameter relating to the intended operation; determining a corrected control instruction based on the feedback; and operating the vehicle based on the corrected control instruction.
A sensor housing is provided that is configured to be mounted to a roof of a vehicle. The sensor housing includes a frame defining a plurality of cavities and a plurality of windows opening into respective cavities. The sensor housing also includes a plurality of sensors, at least some of which are disposed within the respective cavities defined by the frame. The sensors disposed within the respective cavities are oriented relative to the frame such that fields of view of the sensors that are disposed within the respective cavities defined by the frame are directed through the windows that open into the respective cavities. At least one of the plurality of sensors may include a camera mounted to the frame so as to be exterior to the vehicle and oriented such that a field of view of the camera is directed into a cabin of the vehicle.
B60R 1/27 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
B60R 11/00 - Arrangements for holding or mounting articles, not otherwise provided for
B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
A computer-implemented method of trajectory prediction includes obtaining a first cross-attention between a vectorized representation of a road map near a vehicle and information obtained from a rasterized representation of an environment near the vehicle by processing through a first cross-attention stage; obtaining a second cross-attention between a vectorized representation of a vehicle history and information obtained from the rasterized representation by processing through a second cross-attention stage; operating a scene encoder on the first cross-attention and the second cross-attention; operating a trajectory decoder on an output of the scene encoder; and obtaining one or more trajectory predictions by performing one or more queries on the trajectory decoder.
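The two cross-attention stages above can be sketched with plain scaled dot-product attention. In the toy example below the vectorized tokens act as queries over the raster features; key/value projections are omitted for brevity, and all shapes are assumptions.

```python
import numpy as np

def cross_attention(q: np.ndarray, kv: np.ndarray) -> np.ndarray:
    """Single-head scaled dot-product cross-attention: queries from one
    modality attend over tokens of the other (projections omitted)."""
    d = q.shape[-1]
    scores = q @ kv.T / np.sqrt(d)                   # (Nq, Nkv)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over kv tokens
    return weights @ kv                              # (Nq, d)

rng = np.random.default_rng(0)
map_tokens = rng.normal(size=(12, 64))    # vectorized road-map elements
raster_feats = rng.normal(size=(49, 64))  # flattened raster feature grid
hist_tokens = rng.normal(size=(8, 64))    # vectorized vehicle history

stage1 = cross_attention(map_tokens, raster_feats)   # first cross-attention
stage2 = cross_attention(hist_tokens, raster_feats)  # second cross-attention
scene_input = np.concatenate([stage1, stage2], axis=0)  # fed to the scene encoder
print(scene_input.shape)
```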
A method of predicting vehicle trajectory includes operating a scene encoder on an environmental representation surrounding a vehicle; concatenating an output of the scene encoder with a history trajectory; applying a sequence encoder to a result of the concatenating; refining an output of the sequence encoder based on the history trajectory; and generating one or more predicted future trajectories by operating a decoder on an output of the refining.
B60W 50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
G06N 3/0442 - Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
An image processing method includes performing, using images obtained from one or more sensors onboard a vehicle, a 2-dimensional (2D) feature extraction; performing a 3-dimensional (3D) feature extraction on the images; and detecting objects in the images by fusing detection results from the 2D feature extraction and the 3D feature extraction.
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
24.
AUTONOMOUS VEHICLE SIMULATION SYSTEM FOR ANALYZING MOTION PLANNERS
An autonomous vehicle simulation system for analyzing motion planners is disclosed. A particular embodiment includes: receiving map data corresponding to a real world driving environment; obtaining perception data and configuration data including pre-defined parameters and executables defining a specific driving behavior for each of a plurality of simulated dynamic vehicles; generating simulated perception data for each of the plurality of simulated dynamic vehicles based on the map data, the perception data, and the configuration data; receiving vehicle control messages from an autonomous vehicle control system; and simulating the operation and behavior of a real world autonomous vehicle based on the vehicle control messages received from the autonomous vehicle control system.
Techniques are described for operating a vehicle using sensor data provided by one or more ultrasonic sensors located on or in the vehicle. An example method includes receiving, by a computer located in a vehicle, data from an ultrasonic sensor located on the vehicle, where the data includes a first set of coordinates of two points associated with a location where an object is detected by the ultrasonic sensor; determining a second set of coordinates associated with a point in between the two points; performing a first determination that the second set of coordinates is associated with a lane or a road on which the vehicle is operating; performing a second determination that the object is movable; and sending, in response to the first determination and the second determination, a message that causes the vehicle to perform a driving related operation while the vehicle is operating on the road.
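The midpoint computation and the two determinations above are simple to sketch. The axis-aligned lane bounds below are an assumption; a real system would query lane geometry from a map.

```python
def midpoint(p1, p2):
    """Point halfway between the two coordinates reported for a detection."""
    return ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)

def in_lane(pt, lane_min, lane_max):
    """Assumed axis-aligned lane bounds; stand-in for real lane geometry."""
    return lane_min[0] <= pt[0] <= lane_max[0] and lane_min[1] <= pt[1] <= lane_max[1]

p_mid = midpoint((2.0, 1.1), (2.4, 1.3))  # two points from the ultrasonic sensor
object_is_movable = True                  # stand-in for the second determination
if in_lane(p_mid, (0.0, 0.0), (3.5, 50.0)) and object_is_movable:
    print("send message: perform driving-related operation")
```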
A unified framework for detecting perception anomalies in autonomous driving systems is described. The perception anomaly detection framework takes an input image from a camera in or on a vehicle and identifies anomalies as belonging to one of three categories. Lens anomalies are associated with poor sensor conditions, such as water, dirt, or overexposure. Environment anomalies are associated with unfamiliar changes to an environment. Finally, object anomalies are associated with unknown objects. After perception anomalies are detected, the results are sent downstream to cause a behavior change of the vehicle.
G06V 10/98 - Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
B60W 50/02 - Ensuring safety in case of control system failures, e.g. by diagnosing, circumventing or fixing failures
B60W 50/029 - Adapting to failures or work around with other constraints, e.g. circumvention by avoiding use of failed parts
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 10/28 - Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
G06V 10/48 - Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
27.
MONITORING SYSTEM FOR AUTONOMOUS VEHICLE OPERATION
Disclosed are devices, systems and methods for a monitoring system for autonomous vehicle operation. In some embodiments, a vehicle may perform self-tests, generate a report based on the results, and transmit it to a remote monitor center over one or both of a high-speed channel for regular data transfers or a reliable channel for emergency situations. In other embodiments, the remote monitor center may determine that immediate intervention is required, and may transmit a control command with high priority, which when received by the vehicle, is implemented and overrides any local commands being processed. In yet other embodiments, the control command with high priority is selected from a small group of predetermined control commands the remote monitor center may issue.
B60W 50/02 - Ensuring safety in case of control system failures, e.g. by diagnosing, circumventing or fixing failures
G05B 23/00 - Testing or monitoring of control systems or parts thereof
G05D 1/223 - Command input arrangements on the remote controller, e.g. joysticks or touch screens
G05D 1/227 - Handing over between remote control and on-board control; Handing over between remote control arrangements
G05D 1/617 - Safety or protection, e.g. defining protection zones around obstacles or avoiding hazards
G07C 5/00 - Registering or indicating the working of vehicles
G07C 5/08 - Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle, or waiting time
28.
SECURITY ARCHITECTURE FOR A REAL-TIME REMOTE VEHICLE MONITORING SYSTEM
Disclosed are devices, systems and methods for securing wireless communications between a remote monitor center and a vehicle by using redundancy measures to increase the robustness of the system. In some embodiments, a system may include redundant communication channels, deploy redundant hardware and software stacks to enable switching to a backup in an emergency situation, and employ hypervisors at both the remote monitor center and the vehicle to monitor hardware and software resources and perform integrity checks. In other embodiments, message digests based on a cryptographic hash function and a plurality of predetermined commands are generated at both the remote monitor center and the vehicle, and compared to ensure the continuing integrity of the wireless communication system.
B60R 25/30 - Detection related to theft or to other events relevant to anti-theft systems
B60R 25/102 - Fittings or systems for preventing or indicating unauthorised use or theft of vehicles actuating a signalling device a signal being sent to a remote location, e.g. a radio signal being transmitted to a police station, a security company or the owner
B60R 25/32 - Detection related to theft or to other events relevant to anti-theft systems of vehicle dynamic parameters, e.g. speed or acceleration
B60R 25/33 - Detection related to theft or to other events relevant to anti-theft systems of global position, e.g. by providing GPS coordinates
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04W 4/40 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
H04W 4/90 - Services for handling of emergency or hazardous situations, e.g. earthquake and tsunami warning systems [ETWS]
Techniques are described for determining a set of pose information for an object when multiple sets of pose information are determined for a same object from multiple images. An example driving operation method includes obtaining, by a computer located in a vehicle, at least two sets of pose information related to an object located on a road on which the vehicle is operating, where each set of pose information includes characteristic(s) about the object, and where each set of pose information is determined from an image obtained by a camera; determining at least two weighted output vectors; determining, for the object, a set of pose information that are based on a combined weighted output vector that is obtained by combining the at least two weighted output vectors; and causing the vehicle to perform a driving-related operation using the set of pose information for the object.
B60T 7/22 - Brake-action initiating means for automatic initiation; Brake-action initiating means for initiation not subject to will of driver or passenger initiated by contact of vehicle, e.g. bumper, with an external object, e.g. another vehicle
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 10/12 - Details of acquisition arrangements; Constructional details thereof
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
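A minimal sketch of combining weighted output vectors for a single object, as in the abstract above; the pose layout (x, y, heading), the confidence-derived weights, and the convex-combination normalization are assumptions.

```python
import numpy as np

def fuse_poses(poses: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Combine per-image pose vectors via their weighted output vectors:
    scale each pose by its weight, sum, and normalize so the weights act
    as a convex combination."""
    weighted = poses * weights[:, None]          # the weighted output vectors
    return weighted.sum(axis=0) / weights.sum()  # combined estimate

# Two pose estimates of the same object (x, y, heading) from two camera images.
poses = np.array([[10.2, 4.1, 0.30],
                  [10.6, 3.9, 0.34]])
weights = np.array([0.7, 0.3])  # e.g. from detection confidence (assumed)
print(fuse_poses(poses, weights))
```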
Devices, systems, and methods for remote detection and ranging are described. In an embodiment, a remote sensing method includes obtaining data points that are spatially distributed and have respective intensity values by performing a remote detection and ranging operation; determining a spatial autocorrelation of a set of data points, out of the data points, based on differences in distances between the data points in the set; determining an intensity weight multiplier based on a reference intensity value of the data points and an average intensity value of the data points; determining a quality score of the set of data points by applying the intensity weight multiplier to the spatial autocorrelation of the set of data points; and identifying, based on the quality score, whether the set of data points includes one or more data points that are created by a noise source.
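The abstract above names the ingredients of the quality score but not the formulas, so the sketch below uses a crude distance-based autocorrelation proxy and a mean-over-reference intensity weight; both statistics are assumptions.

```python
import numpy as np

def quality_score(points: np.ndarray, intensities: np.ndarray,
                  reference_intensity: float) -> float:
    """Score a candidate point set: a spatial-autocorrelation term derived
    from pairwise distances, multiplied by an intensity weight comparing the
    set's mean intensity to a reference. Low scores suggest noise-created
    points."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))           # pairwise distances
    iu = np.triu_indices(len(points), k=1)
    autocorr = 1.0 / (1.0 + dists[iu].mean())       # tight clusters score near 1
    intensity_weight = intensities.mean() / reference_intensity
    return autocorr * intensity_weight

rng = np.random.default_rng(1)
cluster = rng.normal([5.0, 2.0, 0.5], 0.05, size=(20, 3))  # tight lidar cluster
intens = rng.uniform(30, 40, size=20)
print(quality_score(cluster, intens, reference_intensity=50.0))
```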
A computer-implemented method and apparatus are provided for training a safety driver associated with an autonomous vehicle (AV) operating in an autonomous driving mode. The computer-implemented method includes receiving a selection of one or more vehicle operation faults that are configured to impact operation of one or more vehicle systems of the AV. The computer-implemented method also includes injecting the one or more vehicle operation faults into one or more vehicle systems of the AV and detecting one or more driver responses executed by the safety driver in response to the one or more vehicle operation faults. The computer-implemented method further includes determining a reaction time associated with the one or more driver responses. The reaction time is a measure of time it takes the safety driver to respond to the one or more vehicle operation faults.
A61B 5/16 - Devices for psychotechnics; Testing reaction times
A61B 5/18 - Devices for psychotechnics; Testing reaction times for vehicle drivers
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G09B 9/052 - Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles characterised by provision for recording or measuring trainee's performance
32.
HYBRID ARTIFICIAL INTELLIGENCE AGENT BEHAVIOR GENERATION SYSTEM FOR AUTONOMOUS DRIVING SIMULATION
Devices, systems, and methods for trajectory generation for controlling an autonomous vehicle are described. In an embodiment, a trajectory generation system includes a behavior generator including a plurality of states of an object to generate one of the plurality of states, each of the plurality of states of the object corresponding to a behavior of the object at a given time, an artificial intelligence information generator including one or more first parameters for determining a trajectory of the object, a user defined information generator including one or more second parameters for determining a variant trajectory of the object, and a simulator configured to generate a future position of the object by performing a computation operation on the one of the plurality of states using a combination of the one or more first parameters and the one or more second parameters.
Devices, systems, and methods for hardware-based time synchronization for heterogenous sensors are described. An example method includes generating a plurality of input trigger pulses having a nominal pulse-per-second (PPS) rate, generating, based on timing information derived from the plurality of input trigger pulses, a plurality of output trigger pulses, and transmitting the plurality of output trigger pulses to a sensor of a plurality of sensors, wherein a frequency of the plurality of output trigger pulses corresponds to a target operating frequency of the sensor, wherein, in a case that a navigation system coupled to a synchronization unit is functioning correctly, the plurality of input trigger pulses is generated based on a nominal PPS signal from the navigation system, and wherein, in a case that the navigation system is not functioning correctly, the plurality of input trigger pulses is generated based on a simulated clock source of the synchronization unit.
H04L 7/00 - Arrangements for synchronising receiver with transmitter
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G01C 21/28 - Navigation; Navigational instruments not provided for in groups specially adapted for navigation in a road network with correlation of data from several navigational instruments
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G01S 19/01 - Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
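A small sketch of the PPS-driven triggering scheme above: input pulses come from the navigation system when it is healthy and from a simulated clock otherwise, and each PPS interval is subdivided to the sensor's target rate. Timestamp values and rates are assumptions.

```python
import time

NOMINAL_PPS_HZ = 1.0

def input_trigger_pulses(nav_ok: bool, n: int) -> list[float]:
    """Produce n input trigger timestamps at the nominal PPS rate, sourced
    from the navigation system when healthy, else from a simulated clock."""
    if nav_ok:
        base = 1700000000.0       # stand-in for GNSS-disciplined PPS edges
    else:
        base = time.monotonic()   # simulated clock source fallback
    return [base + i / NOMINAL_PPS_HZ for i in range(n)]

def output_trigger_pulses(pps: list[float], sensor_hz: float) -> list[float]:
    """Derive per-sensor output trigger pulses by subdividing each PPS
    interval to match the sensor's target operating frequency."""
    out = []
    for t0, t1 in zip(pps, pps[1:]):
        step = (t1 - t0) / sensor_hz
        out.extend(t0 + k * step for k in range(int(sensor_hz)))
    return out

pps = input_trigger_pulses(nav_ok=True, n=3)
camera_triggers = output_trigger_pulses(pps, sensor_hz=20.0)  # 20 Hz camera
print(len(camera_triggers), camera_triggers[:3])
```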
The present disclosure provides methods and systems of sampling-based object pose determination. An example method includes obtaining, for a time frame, sensor data of the object acquired by a plurality of sensors; generating a two-dimensional bounding box of the object in a projection plane based on the sensor data of the time frame; generating a three-dimensional pose model of the object based on the sensor data of the time frame and a model reconstruction algorithm; generating, based on the sensor data, the pose model, and multiple sampling techniques, a plurality of pose hypotheses of the object corresponding to the time frame; generating a hypothesis projection of the object for each of the pose hypotheses by projecting the pose hypothesis onto the projection plane; determining evaluation results by comparing the hypothesis projections with the bounding box; and determining, based on the evaluation results, an object pose for the time frame.
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
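The evaluation step above compares projected hypotheses against the 2D bounding box; a natural stand-in comparison is intersection-over-union, used in the hedged sketch below. The box values and the IoU criterion are assumptions.

```python
def iou(a, b) -> float:
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def best_pose(detected_box, hypothesis_projections: dict):
    """Evaluate each pose hypothesis by comparing its projected box with the
    detected 2D bounding box, and keep the best-matching pose."""
    return max(hypothesis_projections.items(),
               key=lambda kv: iou(detected_box, kv[1]))

detected = (100, 80, 220, 180)
projections = {                 # pose id -> projected box (assumed values)
    "pose_a": (95, 78, 215, 176),
    "pose_b": (140, 60, 260, 150),
}
pose_id, box = best_pose(detected, projections)
print(pose_id, round(iou(detected, box), 3))
```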
An autonomous vehicle includes a detection system for identifying the presence of changes in wind incident on the autonomous vehicle, particularly wind gusts. The detection system may include one or more wind sensors, particularly those configured to detect wind incident on the vehicle from a direction that is transverse or perpendicular to the direction of motion of the autonomous vehicle. Additionally, systems may be present that correlate the detected wind gusts to changes in the behavior of the autonomous vehicle. The autonomous vehicle may react to the detected wind gusts by altering the vehicle's trajectory, by stopping the vehicle, or by communicating with a control center for further instructions.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G01P 5/06 - Measuring speed of fluids, e.g. of air stream; Measuring speed of bodies relative to fluids, e.g. of ship, of aircraft by measuring forces exerted by the fluid on solid bodies, e.g. anemometer using rotation of vanes
G01P 5/16 - Measuring speed of fluids, e.g. of air stream; Measuring speed of bodies relative to fluids, e.g. of ship, of aircraft by measuring differences of pressure in the fluid using Pitot tubes
G01P 5/24 - Measuring speed of fluids, e.g. of air stream; Measuring speed of bodies relative to fluids, e.g. of ship, of aircraft by measuring the direct influence of the streaming fluid on the properties of a detecting acoustical wave
Image processing techniques are described for obtaining an image from a camera located on a vehicle while the vehicle is being driven, cropping a portion of the obtained image corresponding to a region of interest, detecting an object in the cropped portion, adding a bounding box around the detected object, determining one or more positions of one or more reference points on the bounding box, and determining a location of the detected object in a spatial region where the vehicle is being driven based on the determined one or more positions of the one or more reference points on the bounding box.
G06T 7/70 - Determining position or orientation of objects or cameras
G06T 11/20 - Drawing from basic elements, e.g. lines or circles
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
37.
SYSTEM, METHOD, AND APPARATUS FOR VIBRATION ISOLATION
Provided herein is a system and method for vibration mitigation at a sensor of a vehicle including at least one sensor; a bracket (500) configured to support the at least one sensor; a sheet metal structure (510) having a first major surface along which the sheet metal structure (510) extends, and a second major surface opposite the first major surface; and a backing plate (550), where the bracket (500) is disposed adjacent to the first major surface of the sheet metal structure (510), where the backing plate (550) is disposed adjacent to the second major surface of the sheet metal structure (510), and where two or more fasteners (560) secure the backing plate (550) to the bracket (500) through the sheet metal structure (510).
Disclosed are devices, systems and methods for operational testing of autonomous vehicles. One exemplary method includes configuring a primary vehicular model with an algorithm, calculating one or more trajectories for each of one or more secondary vehicular models that exclude the algorithm, configuring the one or more secondary vehicular models with a corresponding trajectory of the one or more trajectories, generating an updated algorithm based on running a simulation of the primary vehicular model interacting with the one or more secondary vehicular models that conform to the corresponding trajectory in the simulation, and integrating the updated algorithm into an algorithmic unit of the autonomous vehicle.
Improvements to steering functionality and safety thereof for a vehicle are disclosed. In particular, disclosed embodiments monitor and diagnose steering commands received by an in-vehicle controller from a remote server. The steering commands may be evaluated with respect to rationality and context. For example, a steering angle indicated by the steering command is compared to a current vehicle steering angle and/or to other steering angles indicated by preceding and/or subsequent steering commands. Based on diagnosis of the steering commands, an in-vehicle controller can permit or prevent engagement of autonomous vehicle operation, which applies the steering commands. In an autonomous mode, invalid steering commands can trigger a minimal risk condition (MRC) maneuver for the vehicle.
B62D 6/02 - Arrangements for automatically controlling steering depending on driving conditions sensed and responded to, e.g. control circuits responsive only to vehicle speed
B62D 6/00 - Arrangements for automatically controlling steering depending on driving conditions sensed and responded to, e.g. control circuits
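The rationality and context checks described in the steering-diagnostics abstract above reduce to simple comparisons. A hedged sketch follows; the thresholds are illustrative assumptions, not values from the filing.

```python
MAX_STEP_DEG = 5.0    # assumed max plausible change between consecutive commands
MAX_ERROR_DEG = 15.0  # assumed max deviation from the current wheel angle

def command_valid(cmd_deg: float, current_deg: float, prev_cmd_deg: float) -> bool:
    """Rationality checks on a remote steering command: it must be close to
    the vehicle's current steering angle and continuous with the preceding
    command. An invalid command could trigger an MRC maneuver."""
    if abs(cmd_deg - current_deg) > MAX_ERROR_DEG:
        return False  # inconsistent with current vehicle steering angle
    if abs(cmd_deg - prev_cmd_deg) > MAX_STEP_DEG:
        return False  # implausible jump relative to preceding command
    return True

print(command_valid(cmd_deg=2.0, current_deg=1.0, prev_cmd_deg=1.5))   # True
print(command_valid(cmd_deg=30.0, current_deg=1.0, prev_cmd_deg=1.5))  # False
```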
Disclosed are devices, systems, and methods for a LiDAR mirror assembly mounted on a vehicle, such as an autonomous or semi-autonomous vehicle. For example, a LiDAR mirror assembly may include a base plate mounted on a hood of a vehicle, where the base plate is coupled to one end of a support arm. The opposite end of the support arm is attached to a housing. The housing includes a top housing enclosure coupled to a bottom housing platform, where a sensor and a mirror are coupled to the housing, and where the sensor is at least partially exposed through an opening in the top housing enclosure. The opening for the sensor is situated near an end of the housing furthest away from the base plate.
An example method for controlling a vehicle includes obtaining reference information relating to an operation parameter of the vehicle, the operation parameter describing mission waypoints of the vehicle at respective time points during which the vehicle is to traverse a path, the reference information including reference values of the operation parameter corresponding to the time points; obtaining context information of the vehicle that relates to a state of the vehicle during an operation of the vehicle at the respective time points or an environment enclosing the path; determining tolerable ranges of the operation parameter for the time points based on the reference information and the context information; obtaining penalty information relating to differences between respective tolerable ranges and corresponding values of a constraint at the time points; determining a control instruction based on the tolerable ranges and the penalty information; and operating the vehicle based on the control instruction.
An integrated housing assembly mountable on a roof of a vehicle, such as a semi-trailer truck. The integrated housing assembly includes a main enclosure that includes four sides, a top panel, and a bottom panel. One of the four sides is a front panel that is inclined towards the ground. The main enclosure includes one or more cavities. A front panel of the integrated housing assembly includes one or more openings to allow cameras or sensors to be placed within the one or more cavities and behind the one or more openings. The cameras or sensors can be clamped at a downward angle within multiple lock apparatus. The angled front panel and the inclined cameras or sensors within the lock apparatus allow the cameras or sensors to obtain images or sensor data from one or more regions of interest at some distance from the front of the vehicle.
B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
B60R 11/00 - Arrangements for holding or mounting articles, not otherwise provided for
B62D 65/16 - Joining sub-units or components to, or positioning sub-units or components with respect to, body shell or other sub-units or components the sub-units or components being exterior fittings, e.g. bumpers, lights, wipers
43.
SYSTEMS AND METHODS FOR UNCERTAINTY PREDICTION IN AUTONOMOUS DRIVING
Devices, systems, and methods for controlling a vehicle are described. An example method for controlling a vehicle includes obtaining planning information relating to an intended operation of the vehicle over a prediction horizon; inputting the planning information into an uncertainty model to determine uncertainty information, wherein: the uncertainty model is trained using sample driving event data based on a multivariate probability prediction algorithm; and the uncertainty model is configured to predict the uncertainty information that relates to a deviation of an operation of the vehicle according to an intended control instruction from the intended operation, the intended control instruction being determined based on the planning information; generating a control instruction based on the planning information and the uncertainty information; and operating the vehicle based on the control instruction.
An example method of controlling a vehicle using a multi-node computational architecture includes receiving, by a first computational node, waypoints and vehicle states, and generating a first control command set for motion of the vehicle in a lateral direction with a first complexity and a second control command set for motion in a longitudinal direction with a second complexity that is less than the first complexity. A second computational node, operating in parallel with the first computational node, is used to generate a third control command set for motion in the longitudinal direction with the first complexity. The method further includes selecting, by a control arbitrator and based on the vehicle states and health status indications of the first and second computational nodes, either the second control command set or the third control command set, and outputting the selected control command set, which is used to control the vehicle.
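The arbitration step above is essentially a health-gated selection between the two longitudinal command sets. Below is a hedged sketch; the preference order and command fields are assumptions consistent with the abstract.

```python
def arbitrate(node1_healthy: bool, node2_healthy: bool,
              cmd2_simple: dict, cmd3_full: dict) -> dict:
    """Pick between the first node's lower-complexity longitudinal command set
    and the parallel second node's full-complexity set, based on the nodes'
    health status indications."""
    if node2_healthy:
        return cmd3_full    # full-complexity longitudinal control
    if node1_healthy:
        return cmd2_simple  # degrade to the simpler backup command set
    raise RuntimeError("no healthy computational node")

cmd2 = {"throttle": 0.20, "source": "node1/simple"}
cmd3 = {"throttle": 0.23, "source": "node2/full"}
print(arbitrate(node1_healthy=True, node2_healthy=False,
                cmd2_simple=cmd2, cmd3_full=cmd3))
```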
Techniques are described for measuring angle and/or orientation of a rear drivable section (e.g., a trailer unit of a semi-trailer truck) relative to a front drivable section (e.g., a tractor unit of the semi-trailer truck) using an example rotary encoder assembly. The example rotary encoder assembly comprises a base surface; a housing that includes a second end that is connected to the base surface and a first end that is at least partially open and is coupled to a housing cap; and a rotary encoder that is located in the housing in between the base surface and the housing cap, where the rotary encoder includes a rotatable shaft that protrudes from a first hole located in the housing cap, and where a top of the rotatable shaft located away from the rotary encoder is coupled to magnet(s).
Techniques are described for managing redundant steering system for a vehicle. A method includes sending a first control command that instructs a first motor coupled to a steering wheel in a steering system to steer a vehicle, receiving, after sending the first control command, a speed of the vehicle, a yaw rate of the vehicle, and a steering position of the steering wheel, determining, based at least on the speed and the yaw rate, an expected range of steering angles that describes values within which the first motor is expected to steer the vehicle based on the first control command, and upon determining that the steering position of a steering wheel is outside the expected range of steering angles, sending a second control command that instructs a second motor coupled to the steering wheel in the steering system to steer the vehicle.
A system and method for determining car to lane distance is provided. In one aspect, the system includes a camera configured to generate an image, a processor, and a computer-readable memory. The processor is configured to receive the image from the camera, generate a wheel segmentation map representative of one or more wheels detected in the image, and generate a lane segmentation map representative of one or more lanes detected in the image. For at least one of the wheels in the wheel segmentation map, the processor is also configured to determine a distance between the wheel and at least one nearby lane in the lane segmentation map. The processor is further configured to determine a distance between a vehicle in the image and the lane based on the distance between the wheel and the lane.
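A compact sketch of the distance step above, computed directly from the two segmentation maps; the pixel-space minimum-distance formulation (and the eventual pixel-to-meter calibration) are assumptions.

```python
import numpy as np

def wheel_to_lane_distance(wheel_mask: np.ndarray, lane_mask: np.ndarray) -> float:
    """Minimum pixel distance between a wheel segment and a lane segment,
    taken over all pixel pairs from the two segmentation maps."""
    wheel_px = np.argwhere(wheel_mask)
    lane_px = np.argwhere(lane_mask)
    if len(wheel_px) == 0 or len(lane_px) == 0:
        return float("inf")
    d = np.linalg.norm(wheel_px[:, None, :] - lane_px[None, :, :], axis=-1)
    return float(d.min())

wheel = np.zeros((60, 60), bool)
wheel[40:44, 10:14] = True   # detected wheel region
lane = np.zeros((60, 60), bool)
lane[:, 30] = True           # detected lane marking
print(wheel_to_lane_distance(wheel, lane))  # pixels; convert via camera calibration
```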
09 - Scientific and electric apparatus and instruments
12 - Land, air and water vehicles; parts of land vehicles
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable computer software, downloadable computer software applications and downloadable computer operating system software for operating and training artificial intelligence or deep learning applications for use in image analysis, autonomous vehicle monitoring and control, driver-assisted vehicle monitoring and control, human-machine interfaces for vehicles, vehicle subsystem monitoring and control, object detection, lane detection, semantic segmentation for vehicle control, multi-object tracking, decision-making for vehicle control, navigation and mapping generation, facial analysis, and image recognition; recorded computer software and hardware for operating and training artificial intelligence or deep learning applications for use in image analysis, autonomous vehicle monitoring and control, driver-assisted vehicle monitoring and control, human-machine interfaces for vehicles, vehicle subsystem monitoring and control, object detection, lane detection, semantic segmentation for vehicle control, multi-object tracking, decision-making for vehicle control, navigation and mapping generation, facial analysis, and image recognition Autonomous land vehicles, self-driving land vehicles, driverless land vehicles, driver-assisted land vehicles with all of the aforesaid vehicles featuring systems and devices for controlling their autonomous driving features and subsystems Engineering and scientific research and development services in the field of vehicle automation, monitoring, and control systems, and products using those systems, which are used to control and monitor autonomous, self-driving, driverless, and driver-assisted vehicles; development, installation, and maintenance of computer software for computer systems for the monitoring and control of autonomous, self-driving, driverless, and driver-assisted vehicles and consultation related thereto; software as a service (SaaS) featuring software for developing and supporting vehicle automation, vehicle monitoring, and vehicle control systems, and cartography services
A system and method for implementing a neural network based vehicle dynamics model are disclosed. A particular embodiment includes: training a machine learning system with a training dataset corresponding to a desired autonomous vehicle simulation environment; receiving vehicle control command data and vehicle status data, the vehicle control command data not including vehicle component types or characteristics of a specific vehicle; by use of the trained machine learning system, the vehicle control command data, and vehicle status data, generating simulated vehicle dynamics data including predicted vehicle acceleration data; providing the simulated vehicle dynamics data to an autonomous vehicle simulation system implementing the autonomous vehicle simulation environment; and using data produced by the autonomous vehicle simulation system to modify the vehicle status data for a subsequent iteration.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G05D 1/246 - Arrangements for determining position or orientation using environment maps, e.g. simultaneous localisation and mapping [SLAM]
G05D 1/247 - Arrangements for determining position or orientation using signals provided by artificial sources external to the vehicle, e.g. navigation beacons
G05D 1/81 - Handing over between on-board automatic and on-board manual control
G06N 3/00 - Computing arrangements based on biological models
A modular enclosure is configured to house a set of components that facilitate the autonomous functions of an autonomous vehicle while meeting a set of requirements. The set of components comprises a sensor processing unit configured to detect objects from sensors associated with the autonomous vehicle, a compute unit configured to determine a navigation path for the autonomous vehicle, a vehicle control unit configured to control the autonomous function of the autonomous vehicle, a communication gateway configured to establish communication of the autonomous vehicle, and a data diagnostics unit configured to determine health data for at least one component of the autonomous vehicle. The set of requirements comprises a space requirement, a communication requirement, a cooling requirement, and a shock absorption requirement.
The present disclosure provides a data processing system, method, and device. A method for monitoring a high definition map data collection trip includes recording data collected from one or more sensors associated with a vehicle, determining a current vehicle location, and determining whether the vehicle is following a planned route based on the current vehicle location. On a condition that the vehicle is not following the planned route, the method includes adding, to the recorded data collected from the one or more sensors associated with the vehicle, an indication that the vehicle is off-route; generating additional navigation instructions, wherein the additional navigation instructions return the vehicle to the planned route; and providing the additional navigation instructions to a driver of the vehicle.
A system and method for using human driving patterns to detect and correct abnormal driving behaviors of autonomous vehicles are disclosed. A particular embodiment includes: generating data corresponding to a normal driving behavior safe zone; receiving a compliant vehicle control command; comparing the compliant vehicle control command with the normal driving behavior safe zone; and issuing a warning alert if the compliant vehicle control command is outside of the normal driving behavior safe zone. Another embodiment includes modifying the compliant vehicle control command to produce a modified and validated vehicle control command if the compliant vehicle control command is outside of the normal driving behavior safe zone.
The present disclosure provides methods and systems of maintaining a map suitable for guiding autonomous driving. In some embodiments, the method may include receiving a sensor dataset acquired by a sensor subsystem, wherein: the sensor dataset includes information about a road, the sensor subsystem comprises multiple different types of sensors; determining, by a processor, a confidence level by comparing the sensor dataset and the map that includes prior information about the road; in response to determining that the confidence level exceeds a confidence threshold, processing the map by the processor; and storing the processed map as an electronic file, wherein the processed map is configured to guide an autonomous vehicle to operate on the road.
A system and method for vehicle wheel detection is disclosed. A particular embodiment can be configured to: receive training image data from a training image data collection system; obtain ground truth data corresponding to the training image data; perform a training phase to train one or more classifiers for processing images of the training image data to detect vehicle wheel objects in the images of the training image data; receive operational image data from an image data collection system associated with an autonomous vehicle; and perform an operational phase including applying the trained one or more classifiers to extract vehicle wheel objects from the operational image data and produce vehicle wheel object data.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
55.
SYSTEMS AND METHODS FOR UPDATING NAVIGATIONAL MAPS
Systems and methods for updating navigational maps using at least one sensor are provided. In one aspect, a control system for an autonomous vehicle includes a processor and a computer-readable memory configured to cause the processor to: receive output from at least one sensor located on the autonomous vehicle indicative of a driving environment of the autonomous vehicle, retrieve a navigational map used for driving the autonomous vehicle, and detect one or more inconsistencies between the output of the at least one sensor and the navigational map. The computer-readable memory is further configured to cause the processor to: in response to detecting the one or more inconsistencies, trigger mapping of the driving environment based on the output of the at least one sensor, update the navigational map based on the mapped driving environment, and drive the autonomous vehicle using the updated navigational map.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G05D 1/247 - Arrangements for determining position or orientation using signals provided by artificial sources external to the vehicle, e.g. navigation beacons
G05D 1/648 - Performing a task within a working area or space, e.g. cleaning
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestriansRecognition of traffic objects, e.g. traffic signs, traffic lights or roads
H04W 4/46 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for vehicle-to-vehicle communication [V2V]
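A toy sketch of the inconsistency-triggered update loop described above, assuming lane geometry is summarized as simple per-point offsets (the comparison metric and tolerance are illustrative, not from the disclosure):

```python
# Sketch of inconsistency-triggered map updating. The similarity metric
# and threshold are illustrative; real systems compare geometric features.
def detect_inconsistencies(sensor_lanes, map_lanes, tol=0.5):
    """Return lane points where sensor output and map disagree beyond tol (m)."""
    return [(s, m) for s, m in zip(sensor_lanes, map_lanes) if abs(s - m) > tol]

def navigate(sensor_lanes, nav_map):
    issues = detect_inconsistencies(sensor_lanes, nav_map["lane_offsets"])
    if issues:
        # trigger mapping of the environment from live sensor output,
        # then drive on the updated map
        nav_map = {**nav_map, "lane_offsets": list(sensor_lanes)}
    return nav_map

nav_map = {"lane_offsets": [0.0, 0.1, 0.2]}
print(navigate([0.0, 0.9, 0.2], nav_map))  # second point triggers an update
```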
56.
ROUTE PLANNING AND TRIP MONITORING SYSTEM FOR HIGH DEFINITION MAP DATA COLLECTION
The present disclosure provides a data processing system, method, and device for monitoring a high definition map data collection trip. A current vehicle location is determined. Whether the vehicle is following a planned route linked to a data collection trip is determined based on the current vehicle location. On a condition that the vehicle is not following the planned route, a reason why the vehicle is not following the planned route is determined. An alert is generated, displayed on an in-vehicle navigation device, and uploaded with the reason to a command center.
A modular enclosure is configured to house a set of components that facilitate the autonomous functions of an autonomous vehicle while meeting a set of requirements. The set of components comprises a sensor processing unit configured to detect objects from sensors associated with the autonomous vehicle, a compute unit configured to determine a navigation path for the autonomous vehicle, a vehicle control unit configured to control the autonomous functions of the autonomous vehicle, a communication gateway configured to establish communication of the autonomous vehicle, and a data diagnostics unit configured to determine health data for at least one component of the autonomous vehicle. The set of requirements comprises a space requirement, a communication requirement, a cooling requirement, and a shock absorption requirement.
The present disclosure provides a data processing system, method, and device for monitoring a high definition map data collection trip. A current vehicle location is determined. Whether the vehicle is following a planned route linked to a data collection trip is determined based on the current vehicle location. On a condition that the vehicle is not following the planned route, a reason why the vehicle is not following the planned route is determined. An alert is generated, displayed on an in-vehicle navigation device, and uploaded with the reason to a command center.
Techniques are described for performing image processing on frames of a camera located on or in a vehicle. An example technique includes receiving, by a computer located in a vehicle, a first image frame from a camera located on or in the vehicle; obtaining a first combined set of information by combining a first set of information about an object detected from the first image frame and a second set of information about a set of objects detected from a second image frame, where the set of objects includes the object; obtaining, by using the first combined set of information, a second combined set of information about the object from the first image frame and from the second image frame; and causing the vehicle to perform a driving related operation in response to determining a characteristic of the object using the second combined set of information.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestriansRecognition of traffic objects, e.g. traffic signs, traffic lights or roads
Techniques are described for performing image processing on frames of a camera located on or in a vehicle. An example technique includes receiving, by a computer located in a vehicle, a first image and a second image from a camera; determining a first set of characteristics about a first set of pixels in the first image and a second set of characteristics about a second set of pixels in the second image; obtaining motion information for each pixel in the second set by comparing the second set of characteristics with the first set of characteristics; generating, using the motion information for each pixel in the second set, a combined set of characteristics; determining attributes of a road using at least some of the combined set of characteristics; and causing the vehicle to perform a driving related operation in response to determining the attributes of the road.
G06V 10/54 - Extraction of image or video features relating to texture
G06V 10/56 - Extraction of image or video features relating to colour
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video featuresCoarse-fine approaches, e.g. multi-scale approachesImage or video pattern matchingProximity measures in feature spaces using context analysisSelection of dictionaries
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestriansRecognition of traffic objects, e.g. traffic signs, traffic lights or roads
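The per-pixel comparison of characteristics between two frames resembles a dense motion search. A deliberately crude block-matching sketch, for illustration only (the disclosure's characteristics include texture and color, not raw intensity):

```python
# Toy per-pixel motion estimate by comparing pixel values between two
# frames (a crude block-matching stand-in for the described comparison).
import numpy as np

def pixel_motion(prev: np.ndarray, curr: np.ndarray, search: int = 2):
    """For each pixel in `curr`, find the offset into `prev` whose
    intensity best matches; returns (dy, dx) offset arrays."""
    h, w = curr.shape
    dy = np.zeros((h, w), dtype=int)
    dx = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            best = None
            for oy in range(-search, search + 1):
                for ox in range(-search, search + 1):
                    py, px = y + oy, x + ox
                    if 0 <= py < h and 0 <= px < w:
                        cost = abs(int(curr[y, x]) - int(prev[py, px]))
                        if best is None or cost < best:
                            best, dy[y, x], dx[y, x] = cost, oy, ox
    return dy, dx

prev = np.zeros((8, 8), dtype=np.uint8); prev[2, 2] = 255
curr = np.zeros((8, 8), dtype=np.uint8); curr[2, 3] = 255
dy, dx = pixel_motion(prev, curr)
print(dy[2, 3], dx[2, 3])  # bright pixel matched one column back: 0 -1
```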
Techniques are described for determining weight distribution of a vehicle. A method of performing an autonomous driving operation includes determining a vehicle weight distribution that includes values for each axle of the vehicle describing the weight or pressure applied on the respective axle. The values of the vehicle weight distribution are determined by removing at least one value that is outside a range of pre-determined values from a set of sensor values. The method further includes determining a driving-related operation based on the vehicle weight distribution. For example, the driving-related operation may include determining a braking amount for each axle and/or determining a maximum steering angle to operate the vehicle. The method further includes controlling one or more subsystems in the vehicle via an instruction related to the driving-related operation. For example, transmitting the instruction to the one or more subsystems causes the vehicle to perform the driving-related operation.
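The outlier-removal step above amounts to discarding axle-load readings outside a pre-determined range before aggregating. A minimal sketch with illustrative bounds:

```python
# Sketch of the described filtering: drop axle-load sensor readings outside
# a pre-determined plausible range before computing the weight distribution.
PLAUSIBLE_KG = (500.0, 15000.0)  # illustrative per-axle bounds

def axle_weight(readings):
    """Average per-axle sensor values after discarding out-of-range ones."""
    valid = [r for r in readings if PLAUSIBLE_KG[0] <= r <= PLAUSIBLE_KG[1]]
    return sum(valid) / len(valid) if valid else None

# e.g. 23000.0 is rejected as outside the pre-determined range
distribution = {axle: axle_weight(vals) for axle, vals in {
    "front": [5200.0, 5150.0, 23000.0],
    "rear":  [8900.0, 8870.0],
}.items()}
print(distribution)  # braking amount or steering limits could be set per axle
```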
Provided herein is a system and method for a sensor housing that mitigates glare within the field-of-view and facilitates cleaning of a sensor lens. A sensor assembly (600) includes: a sensor having a sensor lens (616) and a field-of-view through the sensor lens; and a baffle (612) defining a sensor lens aperture, where the field-of-view through the sensor lens (616) is through the sensor lens aperture, where the baffle (612) includes a series of concentric rings extending away from the sensor lens aperture and increasing in diameter as a distance from the sensor lens aperture increases. According to some embodiments, the series of concentric rings defines a viewing angle, where the viewing angle corresponds to the field-of-view of the sensor lens. The sensor assembly (600) may comprise an injector (620) received within a port (618) of the baffle (612) and configured to direct a spray pattern (630) of cleaning fluid toward the sensor lens (616).
Techniques are described for performing image processing on images of cameras located on or in a vehicle. An example technique includes receiving a first set of images obtained by a first camera and a second set of images obtained by a second camera; determining, for each image in the first set, a first set of features of a first object; determining, for each image in the second set, a second set of features of a second object; obtaining a third set of features of an object by combining the first set of features and the second set of features; obtaining a fourth set of features of the object by including one or more features of a light signal of the object; determining characteristic(s) indicated by the light signal; and causing a vehicle to perform a driving related operation based on the characteristic(s) of the object.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestriansRecognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06T 7/90 - Determination of colour characteristics
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersectionsConnectivity analysis, e.g. of connected components
G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
64.
INTEGRATED INSPECTION TOOLS FOR AUTONOMOUS VEHICLE NETWORKS
A method, apparatus and computer program product are configured to generate, based on a user profile, one or more dynamic inspection task checklists for a respective autonomous vehicle (AV) of one or more AVs in an AV fleet based on a first enterprise-defined task protocol. The one or more dynamic inspection task checklists are associated with one or more respective AV subsystems, and the user profile is associated with a respective user role such that the one or more dynamic inspection task checklists differ dependent upon the respective user role. The method, apparatus and computer program product are also configured to cause the one or more dynamic inspection task checklists to be rendered via an interactive inspection interface of a user computing device associated with the user profile such that one or more respective inspection tasks comprised in the one or more dynamic inspection task checklists are interactive.
Provided herein is a system and method for a sensor housing that mitigates glare within the field-of-view and facilitates cleaning of a sensor lens. A sensor assembly may include: a sensor having a sensor lens and a field-of-view through the sensor lens; and a baffle defining a sensor lens aperture, where the field-of-view through the sensor lens is through the sensor lens aperture, where the baffle includes a series of concentric rings extending away from the sensor lens aperture and increasing in diameter as a distance from the sensor lens aperture increases. According to some embodiments, the series of concentric rings defines a viewing angle, where the viewing angle corresponds to the field-of-view of the sensor lens.
Methods, systems, and devices for controlling an autonomous diesel-engine vehicle are disclosed. In one example aspect, the method includes determining longitudinal dynamic response properties of the autonomous vehicle. A brake mode is selected for reducing a current speed of the autonomous vehicle to a lower speed, based on a threshold that is determined using the longitudinal dynamic response properties of the vehicle. When a rate of speed reduction is equal to or smaller than the threshold, the brake mode includes only an engine brake in which engine exhaust valve opening is adjusted for reducing the current speed. When the rate of speed reduction is greater than the threshold, the brake mode includes a combination of the engine brake and a foundation brake.
B60W 10/06 - Conjoint control of vehicle sub-units of different type or different function including control of propulsion units including control of combustion engines
B60W 10/18 - Conjoint control of vehicle sub-units of different type or different function including control of braking systems
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
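The threshold logic in the brake-mode selection above can be stated compactly; the limit value below is an illustrative stand-in for the threshold derived from the vehicle's longitudinal dynamic response:

```python
# Sketch of the threshold-based brake-mode selection. The threshold would
# be derived from the vehicle's measured longitudinal dynamic response.
def select_brake_mode(decel_request_mps2: float,
                      engine_brake_limit_mps2: float) -> list[str]:
    """Engine brake alone when the requested rate is within its capability,
    otherwise blend in the foundation (friction) brake."""
    if decel_request_mps2 <= engine_brake_limit_mps2:
        return ["engine_brake"]
    return ["engine_brake", "foundation_brake"]

print(select_brake_mode(0.6, engine_brake_limit_mps2=0.8))  # ['engine_brake']
print(select_brake_mode(2.0, engine_brake_limit_mps2=0.8))  # both brakes
```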
A system and method for fisheye image processing can be configured to: receive fisheye image data from at least one fisheye lens camera associated with an autonomous vehicle, the fisheye image data representing at least one fisheye image frame; partition the fisheye image frame into a plurality of image portions representing portions of the fisheye image frame; warp each of the plurality of image portions to map an arc of a camera projected view into a line corresponding to a mapped target view, the mapped target view being generally orthogonal to a line between a camera center and a center of the arc of the camera projected view; combine the plurality of warped image portions to form a combined resulting fisheye image data set representing recovered or distortion-reduced fisheye image data corresponding to the fisheye image frame; generate auto-calibration data representing a correspondence between pixels in the at least one fisheye image frame and corresponding pixels in the combined resulting fisheye image data set; and provide the combined resulting fisheye image data set as an output for other autonomous vehicle subsystems.
G06T 3/047 - Fisheye or wide-angle transformations
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G05D 1/249 - Arrangements for determining position or orientation using signals provided by artificial sources external to the vehicle, e.g. navigation beacons from positioning sensors located off-board the vehicle, e.g. from cameras
G06T 5/20 - Image enhancement or restoration using local operators
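As a rough analogy for the arc-to-line warping described above, the sketch below unwraps annular sectors of a frame into straight strips via polar sampling and then combines them; the disclosed mapping is more involved, so this only illustrates the partition, warp, and combine flow:

```python
# Simplified stand-in for the described arc-to-line warping: unwrap an
# annular arc around the image center into a straight strip by sampling
# along polar coordinates, then combine the warped portions.
import numpy as np

def unwrap_arc(img, r0, r1, a0, a1, out_w=64):
    """Sample the arc between radii r0..r1 and angles a0..a1 (radians)
    into a rectangular strip."""
    h = r1 - r0
    cy, cx = img.shape[0] / 2, img.shape[1] / 2
    strip = np.zeros((h, out_w), dtype=img.dtype)
    for row in range(h):
        for col in range(out_w):
            ang = a0 + (a1 - a0) * col / (out_w - 1)
            r = r0 + row
            y, x = int(cy + r * np.sin(ang)), int(cx + r * np.cos(ang))
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
                strip[row, col] = img[y, x]
    return strip

fisheye = np.random.randint(0, 256, (200, 200), dtype=np.uint8)
# partition the frame into angular sectors, warp each, then combine
sectors = [unwrap_arc(fisheye, 40, 90, a, a + np.pi / 2)
           for a in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
combined = np.hstack(sectors)
print(combined.shape)  # (50, 256)
```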
A method (800) of operating an autonomous vehicle (105, 210, 300, 400), comprising activating, by a processor on the autonomous vehicle (105, 210, 300, 400), a first set of lights in response to determining that the autonomous vehicle (105, 210, 300, 400) has come to a stop due to a critical situation, wherein the first set of lights have an illumination intensity brighter than a second set of lights used in a non-critical situation, wherein the first set of lights form a pattern indicative of a size of the autonomous vehicle (105, 210, 300, 400), and wherein the first set of lights are disposed on a base that is detachable from the autonomous vehicle (105, 210, 300, 400).
B60Q 1/52 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating other intentions or conditions, e.g. request for waiting or overtaking for indicating emergencies
B60Q 1/30 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating rear of vehicle, e.g. by means of reflecting surfaces
B60Q 1/00 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
Provided herein is a system and method for damping the steering of a vehicle. Methods of an example include: steering a pair of steerable wheels of the vehicle with a steering mechanism about an axis substantially perpendicular to an axis of rotation of a respective one of the pair of steerable wheels; damping the steering mechanism at a first damping rate in response to a steering change rate of the pair of steerable wheels being below a predetermined rate; and damping the steering mechanism at a second damping rate in response to the steering change rate of the pair of steerable wheels being above the predetermined rate.
A method of operating an autonomous vehicle, comprising activating, by a processor on the autonomous vehicle, a first set of lights in response to determining that the autonomous vehicle has come to a stop due to a critical situation, wherein the first set of lights have an illumination intensity brighter than a second set of lights used in a non-critical situation, wherein the first set of lights form a pattern indicative of a size of the autonomous vehicle, and wherein the first set of lights are disposed on a base that is detachable from the autonomous vehicle.
B60Q 1/50 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating other intentions or conditions, e.g. request for waiting or overtaking
71.
AUTONOMOUS VEHICLE CONTROL BASED ON HAND SIGNAL INTENT DETECTION
A control device associated with an autonomous vehicle detects that a person is altering traffic on a road using a hand signal. The control device determines an interpretation of the hand signal. The control device transmits the hand signal interpretation to an oversight server. The oversight server verifies the hand signal interpretation and communicates an instruction to navigate the autonomous vehicle according to the verified hand signal interpretation. The control device determines a proposed trajectory for the autonomous vehicle according to the interpretation of the hand signal. The control device navigates the autonomous vehicle according to the proposed trajectory.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G06V 10/77 - Processing image or video features in feature spacesArrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]Blind source separation
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestriansRecognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
Provided herein is a system and method for damping the steering of a vehicle. Methods of an example include: steering a pair of steerable wheels of the vehicle with a steering mechanism about an axis substantially perpendicular to an axis of rotation of a respective one of the pair of steerable wheels; damping the steering mechanism at a first damping rate in response to a steering change rate of the pair of steerable wheels being below a predetermined rate; and damping the steering mechanism at a second damping rate in response to the steering change rate of the pair of steerable wheels being above the predetermined rate.
A data-driven prediction-based system and method for trajectory planning of autonomous vehicles are disclosed. A particular embodiment includes: generating a first suggested trajectory for an autonomous vehicle; generating predicted resulting trajectories of proximate agents using a prediction module; scoring the first suggested trajectory based on the predicted resulting trajectories of the proximate agents; generating a second suggested trajectory for the autonomous vehicle and generating corresponding predicted resulting trajectories of proximate agents, if the score of the first suggested trajectory is below a minimum acceptable threshold; and outputting a suggested trajectory for the autonomous vehicle wherein the score corresponding to the suggested trajectory is at or above the minimum acceptable threshold.
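The generate-predict-score-regenerate loop above can be sketched directly; the scoring function here is a placeholder for the prediction-based scoring in the disclosure:

```python
# Sketch of the generate -> predict -> score -> regenerate loop. The score
# function and threshold are placeholders for the disclosed scoring.
import random

MIN_ACCEPTABLE = 0.7

def suggest_trajectory(attempt):           # stand-in trajectory generator
    return {"id": attempt, "lane_shift": random.uniform(-1, 1)}

def score(trajectory, predicted_agent_paths):
    # e.g. higher score when the trajectory stays clear of predicted paths
    clearance = min(abs(trajectory["lane_shift"] - p)
                    for p in predicted_agent_paths)
    return min(1.0, clearance)

predicted = [0.9, -0.8]                    # predicted proximate-agent positions
attempt = 0
while True:
    traj = suggest_trajectory(attempt)
    if score(traj, predicted) >= MIN_ACCEPTABLE or attempt > 100:
        break                              # output the first acceptable one
    attempt += 1
print("selected trajectory:", traj)
```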
A sensor failure detection system receives sensor data from sensors associated with an autonomous vehicle. For evaluating a first sensor, the system compares the first sensor data captured by the first sensor to each of map data and other sensor data captured by other sensors. If it is determined that the first sensor fails to detect object(s) that are confirmed to be on the road by the map data and the other sensor data, the system determines that the first sensor fails to detect the object(s) and is not reliable. In response, the system may determine whether the autonomous vehicle can be navigated safely without relying on the first sensor. If it is determined that the autonomous vehicle can be navigated safely without relying on the first sensor, the system may continue autonomous navigation of the autonomous vehicle.
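A minimal sketch of the described cross-check, flagging a sensor as unreliable when it misses objects corroborated by both the map data and the other sensors (object identifiers are illustrative):

```python
# Sketch of the cross-check: a sensor is flagged unreliable when it misses
# objects that both the map and the other sensors confirm to be present.
def unreliable(first_sensor_objs, other_sensor_objs, map_objs):
    confirmed = set(map_objs) & set(other_sensor_objs)  # corroborated objects
    missed = confirmed - set(first_sensor_objs)
    return bool(missed), missed

bad, missed = unreliable(first_sensor_objs={"car_12"},
                         other_sensor_objs={"car_12", "barrier_3"},
                         map_objs={"barrier_3", "sign_7"})
if bad:
    print("first sensor missed:", missed)
    # next step per the disclosure: decide whether safe navigation can
    # continue without relying on this sensor
```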
A system determines that an engine of an autonomous vehicle is ignited. In response, the system transitions the autonomous vehicle into an initiation state, during which Autonomous Vehicle Communication Gateway (AVCG) configuration data is received by the autonomous vehicle. When a particular time period passes after the ignition of the engine, the system transitions the autonomous vehicle into an active state, during which instructions provided by the AVCG configuration data are executed. If the engine of the autonomous vehicle is turned off, the system transitions the autonomous vehicle into a timed-active state, during which the system sends a rescue message to an oversight server. After a timeout parameter associated with the timed-active state is reached, the system transitions the autonomous vehicle into a shutdown state, during which results of the executed instructions are stored in a local memory.
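The lifecycle above maps naturally onto a small state machine. A sketch using the state names from the abstract, with illustrative trigger events:

```python
# Sketch of the described lifecycle as a simple state machine. State names
# follow the abstract; event names and timing are illustrative.
TRANSITIONS = {
    ("off", "engine_ignited"): "initiation",             # receive AVCG config
    ("initiation", "startup_period_elapsed"): "active",  # execute instructions
    ("active", "engine_off"): "timed_active",            # send rescue message
    ("timed_active", "timeout_reached"): "shutdown",     # persist results
}

state = "off"
for event in ("engine_ignited", "startup_period_elapsed",
              "engine_off", "timeout_reached"):
    state = TRANSITIONS[(state, event)]
    print(event, "->", state)
```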
A system accesses Autonomous Vehicle Communication Gateway (AVCG) information that comprises information associated with an AVCG manager. The AVCG manager is a software resource configured to transition among states in which the autonomous vehicle operates in response to detecting a respective trigger event. The system determines an autonomy status associated with the autonomous vehicle. The system detects a change in the autonomy status by accessing historical records of events, tracking back through the historical records of events, and tracking back through the AVCG information. The system determines one or more particular events from among one or both of the historical records of events and the AVCG information that led to the change in the autonomy status. The system outputs the cause of the change in the autonomy status.
The present disclosure provides methods and systems for operating an autonomous vehicle. In some embodiments, the system may obtain, by a camera associated with an autonomous vehicle, an image of an environment of the autonomous vehicle, the environment including a road on which the autonomous vehicle is operating and an occlusion on the road. The system may identify the occlusion in the image based on map information of the environment and at least one camera parameter of the camera for obtaining the image. The system may identify an object represented in the image, and determine a confidence score relating to the object. The confidence score may indicate a likelihood that a representation of the object in the image is impacted by the occlusion. The system may determine an operation algorithm based on the confidence score and cause the autonomous vehicle to operate based on the operation algorithm.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestriansRecognition of traffic objects, e.g. traffic signs, traffic lights or roads
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
A system accesses cellular information and network information. The cellular information provides information about cellular coverage along a road. The cellular information also provides information about cellular coverage with respect to multiple network providers. The network information provides information about network communication conditions along the road. The network information is detected from communications with a remote server. The system determines network coverage along the road based on the cellular information and the network information. The system creates a cellular map that indicates the determined network coverage along the road.
H04W 4/021 - Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
H04W 4/029 - Location-based management or tracking services
H04W 4/40 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
H04W 4/44 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
A system and method for using human driving patterns to manage speed control for autonomous vehicles are disclosed. A particular embodiment includes: generating data corresponding to desired human driving behaviors; training a human driving model module using a reinforcement learning process and the desired human driving behaviors; receiving a proposed vehicle speed control command; determining if the proposed vehicle speed control command conforms to the desired human driving behaviors by use of the human driving model module; and validating or modifying the proposed vehicle speed control command based on the determination.
A system establishes a first network communication path with a first base station. The system detects first cellular information associated with a first network provider based on the first network communication path. The first cellular information indicates cellular coverage provided by the first network provider along a road. The system establishes a second network communication path with a second base station. The system detects second cellular information associated with a second network provider based on the second network communication path. The second cellular information indicates cellular coverage provided by the second network provider along the road. The system establishes a third network communication path with a remote server and detects network information based on the third network communication path. A network coverage map along the road is generated based on the network information and the first and second cellular information.
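One plausible way to fuse per-provider cellular observations with server-side network measurements into a per-segment coverage map, as a sketch (the min/max combination rule is an assumption, not from the disclosure):

```python
# Sketch of fusing per-provider cellular observations with server-side
# network measurements into one coverage map keyed by road segment.
def build_coverage_map(cellular_by_provider, network_conditions):
    segments = set(network_conditions)
    for readings in cellular_by_provider.values():
        segments |= set(readings)
    coverage = {}
    for seg in sorted(segments):
        best = max((r.get(seg, 0.0) for r in cellular_by_provider.values()),
                   default=0.0)
        # combine best provider signal with the measured link quality
        # observed on the path to the remote server
        coverage[seg] = min(best, network_conditions.get(seg, 1.0))
    return coverage

cellular = {"provider_a": {"seg1": 0.9, "seg2": 0.2},
            "provider_b": {"seg1": 0.4, "seg2": 0.7}}
network = {"seg1": 0.8, "seg2": 0.9}
print(build_coverage_map(cellular, network))  # {'seg1': 0.8, 'seg2': 0.7}
```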
Systems and methods for autonomous lane level navigation are disclosed. In one aspect, a control system for an autonomous vehicle includes a processor and a computer-readable memory configured to cause the processor to receive a partial high-definition (HD) map that defines a plurality of lane segments that together represent one or more lanes of a roadway, the partial HD map including at least a current lane segment. The processor is also configured to generate auxiliary global information for each of the lane segments in the partial HD map. The processor is further configured to generate a subgraph including a plurality of possible routes between the current lane segment and the destination lane segment using the partial HD map and the auxiliary global information, select one of the possible routes for navigation based on the auxiliary global information, and generate lane level navigation information based on the selected route.
G05D 1/247 - Arrangements for determining position or orientation using signals provided by artificial sources external to the vehicle, e.g. navigation beacons
G05D 1/648 - Performing a task within a working area or space, e.g. cleaning
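Selecting among possible routes over lane segments is, at its core, a shortest-path search over the subgraph. A sketch using Dijkstra's algorithm, where edge costs stand in for whatever the auxiliary global information contributes:

```python
# Sketch of route selection over lane segments: build a small graph from
# the partial HD map and pick the lowest-cost route to the destination
# segment; costs could fold in auxiliary global information.
import heapq

def best_route(edges, start, goal):
    """Dijkstra over lane segments; edges: {seg: [(next_seg, cost), ...]}."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, seg, path = heapq.heappop(queue)
        if seg == goal:
            return cost, path
        if seg in seen:
            continue
        seen.add(seg)
        for nxt, c in edges.get(seg, []):
            heapq.heappush(queue, (cost + c, nxt, path + [nxt]))
    return None

edges = {"cur": [("a", 1.0), ("b", 2.5)], "a": [("dest", 3.0)],
         "b": [("dest", 0.5)]}
print(best_route(edges, "cur", "dest"))  # (3.0, ['cur', 'b', 'dest'])
```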
A system accesses cellular information and network information. The cellular information provides information about cellular coverage along a road. The cellular information also provides information about cellular coverage with respect to multiple network providers. The network information provides information about network communication conditions along the road. The network information is detected from communications with a remote server. The system determines network coverage along the road based on the cellular information and the network information. The system creates a cellular map that indicates the determined network coverage along the road.
A system and method for using dynamic objects for camera pose estimation are disclosed. In one aspect, the method includes receiving a first image from a camera of an autonomous vehicle and acquiring first camera pose constraints based on one or more static objects detected in the first image. The method further includes receiving a second image from the camera and acquiring second camera pose constraints based on one or more static objects detected in the second image. The method further includes acquiring third camera pose constraints based on one or more dynamic objects detected in the first and the second image. The method finally includes estimating at least one pose of the camera that satisfies the first camera pose constraints, the second camera pose constraints, and the third camera pose constraints.
A technique for performing multi-sensor collaborative calibration on a vehicle is disclosed. A method includes obtaining, from at least two sensors located on a vehicle, sensor data items of an area that comprises a plurality of calibration objects; determining, from the sensor data items, attributes of the plurality of calibration objects; determining, for the at least two sensors, an initial matrix that describes a first set of extrinsic parameters between the at least two sensors based at least on the attributes of the plurality of calibration objects; determining an updated matrix that describes a second set of extrinsic parameters between the at least two sensors based at least on the initial matrix and a location of at least one calibration object; and performing autonomous operation of the vehicle using the second set of extrinsic parameters and additional sensor data received from the at least two sensors.
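Recovering an initial extrinsic matrix from matched calibration-object positions can be done with the standard Kabsch/SVD alignment; the sketch below shows only that initial step, not the disclosed refinement into the updated matrix:

```python
# Sketch of recovering a rigid transform between two sensors from matched
# calibration-object positions (Kabsch/SVD alignment).
import numpy as np

def rigid_transform(pts_a, pts_b):
    """Return R, t such that R @ a + t ~= b for matched 3D points."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_a - ca).T @ (pts_b - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # fix a reflection, if any
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

# calibration-object positions seen by sensor A and, transformed, by sensor B
a = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0], [0, 0, 1]])
b = a @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_transform(a, b)
print(np.allclose(R, R_true), np.round(t, 3))  # True [ 0.5 -0.2  1. ]
```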
Provided herein is a system and method for sensor cleaning and positioning of the sensor on a vehicle. A system can include a mount attached to a vehicle; a sensor base; a sensor supported on the sensor base; a boom connecting the mount to the sensor base; and a tether, where the tether is connected at a first end, at least indirectly, to the sensor, and at a second end to a structure within the vehicle. The system may include a sensor cleaning array of nozzles, where combined spray patterns of the array of nozzles cover a sensor window of the sensor about a field-of-view of the sensor.
B60S 1/56 - Cleaning windscreens, windows, or optical devices specially adapted for cleaning other parts or devices than front windows or windscreens
B60Q 1/50 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating other intentions or conditions, e.g. request for waiting or overtaking
88.
SYSTEM AND METHOD FOR SENSOR CLEANING AND POSITIONING
Provided herein is a system and method for sensor cleaning and positioning of the sensor on a vehicle. A system can include a mount attached to a vehicle; a sensor base; a sensor supported on the sensor base; a boom connecting the mount to the sensor base; and a tether, where the tether is connected at a first end, at least indirectly, to the sensor, and at a second end to a structure within the vehicle. The system may include a sensor cleaning array of nozzles, where combined spray patterns of the array of nozzles cover a sensor window of the sensor about a field-of-view of the sensor.
A system includes an autonomous vehicle (AV) comprising a sensor, a control subsystem, and an operation server. The control subsystem receives sensor data comprising location coordinates of the AV from the sensor. The operation server detects an unexpected event from the sensor data, comprising at least one of an accident, an inspection, and a report request. The operation server receives a message from a user comprising a request to access particular information regarding the AV and location data. The operation server associates the AV with the user if the location coordinates of the AV match location data of the user. The operation server establishes a communication path between the user and a remote operator for further communications.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G05D 1/247 - Arrangements for determining position or orientation using signals provided by artificial sources external to the vehicle, e.g. navigation beacons
G05D 1/617 - Safety or protection, e.g. defining protection zones around obstacles or avoiding hazards
G05D 1/81 - Handing over between on-board automatic and on-board manual control
G07C 5/00 - Registering or indicating the working of vehicles
G07C 5/08 - Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle, or waiting time
Disclosed are devices, systems and methods for using a rotating camera for vehicular operation. One example of a method for improving driving includes determining, by a processor in the vehicle, that a trigger has activated, orienting, based on the determining, a single rotating camera towards a direction of interest, and activating a recording functionality of the single rotating camera, where the vehicle comprises the single rotating camera and one or more fixed cameras, and where the single rotating camera provides a redundant functionality for, and consumes less power than, the one or more fixed cameras.
G08G 1/017 - Detecting movement of traffic to be counted or controlled identifying vehicles
B60R 11/04 - Mounting of cameras operative during driveArrangement of controls thereof relative to the vehicle
B60R 21/0136 - Electrical circuits for triggering safety arrangements in case of vehicle accidents or impending vehicle accidents including means for detecting collisions, impending collisions or roll-over responsive to actual contact with an obstacle
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
A system accesses a plurality of data streams, each providing information about an environment associated with at least a portion of a predetermined routing plan of an autonomous vehicle. The system determines that the plurality of data streams in the aggregate indicates a route-altering event. In response to determining that the plurality of data streams in the aggregate indicates the route-altering event, the system updates the predetermined routing plan of the autonomous vehicle and communicates the updated routing plan to the autonomous vehicle.
A vehicle position and velocity estimation system based on camera and LIDAR data is disclosed. An embodiment includes: receiving input object data from a subsystem of a vehicle, the input object data including image data from an image generating device and distance data from a distance measuring device, the distance measuring device comprising one or more LIDAR sensors; determining a first position of a proximate object near the vehicle from the image data; determining a second position of the proximate object from the distance data; correlating the first position and the second position by matching the first position of the proximate object detected in the image data with the second position of the same proximate object detected in the distance data; determining a three-dimensional (3D) position of the proximate object using the correlated first and second positions; and using the 3D position of the proximate object to navigate the vehicle.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestriansRecognition of traffic objects, e.g. traffic signs, traffic lights or roads
G01S 7/48 - Details of systems according to groups , , of systems according to group
G01S 17/08 - Systems determining position data of a target for measuring distance only
G01S 17/58 - Velocity or trajectory determination systemsSense-of-movement determination systems
G01S 17/66 - Tracking systems using electromagnetic waves other than radio waves
G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
G01S 17/88 - Lidar systems, specially adapted for specific applications
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
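A minimal sketch of the correlation step: match each camera detection to the nearest projected LIDAR cluster and take the 3D position from the LIDAR side (projection into the image plane is assumed already done; the threshold is illustrative):

```python
# Sketch of correlating an image detection with a LIDAR return: match each
# camera-detected object to the nearest projected LIDAR cluster, then take
# the 3D position from the LIDAR side for navigation.
def correlate(camera_dets, lidar_clusters, max_px=40.0):
    """camera_dets: {id: (u, v)} image positions; lidar_clusters:
    list of ((u, v), (x, y, z)) projected centroid + 3D position."""
    fused = {}
    for obj_id, (u, v) in camera_dets.items():
        best = min(lidar_clusters,
                   key=lambda c: (c[0][0] - u) ** 2 + (c[0][1] - v) ** 2)
        du, dv = best[0][0] - u, best[0][1] - v
        if (du * du + dv * dv) ** 0.5 <= max_px:
            fused[obj_id] = best[1]          # correlated 3D position
    return fused

dets = {"veh_1": (320.0, 180.0)}
clusters = [((310.0, 175.0), (12.4, -1.1, 0.0)),
            ((600.0, 200.0), (30.0, 8.0, 0.0))]
print(correlate(dets, clusters))  # {'veh_1': (12.4, -1.1, 0.0)}
```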
A system comprises a microphone array and a processor. The microphone array detects first sound signals and second sound signals. Each of the first and second sound signals has a particular frequency band and originates from a particular sound source. The processor receives the first and second sound signals. The processor amplifies the first sound signals, each with a different amplification order. The processor disregards the second sound signals, which include interference noise signals. The processor determines that the first sound signals indicate that a vehicle is within a threshold distance from an autonomous vehicle and traveling in a direction toward the autonomous vehicle. In response, the processor instructs the autonomous vehicle to perform a minimal risk condition operation. The minimal risk condition operation includes pulling over or stopping the autonomous vehicle.
G05D 1/243 - Means capturing signals occurring naturally from the environment, e.g. ambient optical, acoustic, gravitational or magnetic signals
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G01S 3/00 - Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
Techniques are described for autonomous driving operation that includes receiving, by a computer located in a vehicle, an image from a camera located on the vehicle while the vehicle is operating on a road, wherein the image includes a plurality of lanes of the road; for each of the plurality of lanes: obtaining, from a map database stored in the computer, a set of values that describe locations of boundaries of a lane; dividing the lane into a plurality of polygons; rendering the plurality of polygons onto the image; and determining identifiers of lane segments of the lane; determining one or more characteristics of a lane segment on which the vehicle is operating based on an identifier of the lane segment; and causing the vehicle to perform a driving related operation in response to the one or more characteristics of the lane segment on which the vehicle is operating.
The disclosed technology enables automated parking of an autonomous vehicle. An example method of performing automated parking for a vehicle comprises obtaining, from a plurality of global positioning system (GPS) devices located on or in an autonomous vehicle, a first set of location information that describes locations of multiple points on the autonomous vehicle, where the first set of location information is associated with a first position of the autonomous vehicle, determining, based on the first set of location information and a location of the parking area, trajectory information that describes a trajectory for the autonomous vehicle to be driven from the first position of the autonomous vehicle to a parking area, and causing the autonomous vehicle to be driven along the trajectory to the parking area by causing operation of one or more devices located in the autonomous vehicle based on at least the trajectory information.
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestriansRecognition of traffic objects, e.g. traffic signs, traffic lights or roads
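With multiple GPS devices, both position and heading fall out of simple geometry. A sketch assuming two antennas and a straight-line trajectory, purely for illustration; real planning would respect vehicle kinematics and obstacles:

```python
# Sketch of using two GPS antennas to get both position and heading, then
# laying out a simple trajectory to the parking area. Coordinates are
# illustrative local (x, y) positions in meters.
import math

def pose_from_gps(front_xy, rear_xy):
    """Heading from the rear antenna toward the front antenna."""
    heading = math.atan2(front_xy[1] - rear_xy[1], front_xy[0] - rear_xy[0])
    center = ((front_xy[0] + rear_xy[0]) / 2, (front_xy[1] + rear_xy[1]) / 2)
    return center, heading

def straight_trajectory(start_xy, goal_xy, steps=5):
    return [(start_xy[0] + (goal_xy[0] - start_xy[0]) * i / steps,
             start_xy[1] + (goal_xy[1] - start_xy[1]) * i / steps)
            for i in range(steps + 1)]

center, heading = pose_from_gps(front_xy=(10.0, 5.2), rear_xy=(2.0, 5.0))
print(f"heading {math.degrees(heading):.1f} deg")
print(straight_trajectory(center, goal_xy=(40.0, 20.0)))
```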
96.
MICROPHONE ARRAYS TO OPTIMIZE THE ACOUSTIC PERCEPTION OF AUTONOMOUS VEHICLES
A system comprises a microphone array and a processor. The microphone array detects first sound signal and the second sound signal. Each of the first and second sound signals has a particular frequency band. Each of the first and second sound signals is originated from a particular sound source. The processor receives the first and second sound signals. The processor amplifies the first sound signals, each with a different amplification order. The processor disregards the second sound signal, where the second sound signal includes interference noise signals. The processor determines that the first sound signals indicate that a vehicle is within a threshold distance from an autonomous vehicle and traveling in a direction toward the autonomous vehicle. In response, the processor instructs the autonomous vehicle to perform a minimal risk condition operation. The minimal risk condition operation includes pulling over or stopping the autonomous vehicle.
Techniques are described for compensating for movements of sensors. A method includes receiving two sets of sensor data from two sets of sensors, where a first set of sensors is located on a roof of a cab of a semi-trailer truck and a second set of sensors is located on a hood of the semi-trailer truck. The method also receives from a height sensor a measured value indicative of a height of a rear portion of the cab of the semi-trailer truck relative to a chassis of the semi-trailer truck, determines two correction values, one for each of the two sets of sensor data, and compensates for the movement of the two sets of sensors by generating two sets of compensated sensor data. The two sets of compensated sensor data are generated by adjusting the two sets of sensor data based on the two correction values.
H04N 23/68 - Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
B60G 17/015 - Resilient suspensions having means for adjusting the spring or vibration-damper characteristics, for regulating the distance between a supporting surface and a sprung part of vehicle or for locking suspension during use to meet varying vehicular or surface conditions, e.g. due to speed or load the regulating means comprising electric or electronic elements
G01B 7/14 - Measuring arrangements characterised by the use of electric or magnetic techniques for measuring distance or clearance between spaced objects or spaced apertures
G01B 11/14 - Measuring arrangements characterised by the use of optical techniques for measuring distance or clearance between spaced objects or spaced apertures
G01B 17/00 - Measuring arrangements characterised by the use of infrasonic, sonic, or ultrasonic vibrations
G01S 17/06 - Systems determining position data of a target
G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G05D 1/249 - Arrangements for determining position or orientation using signals provided by artificial sources external to the vehicle, e.g. navigation beacons from positioning sensors located off-board the vehicle, e.g. from cameras
G05D 1/646 - Following a predefined trajectory, e.g. a line marked on the floor or a flight path
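The correction can be pictured as estimating a pitch angle from the measured height change and rotating sensor returns back into the nominal frame. A simplified sketch with an illustrative lever arm (the disclosure's correction values may differ):

```python
# Sketch of compensating sensor data for cab pitch inferred from the height
# sensor: estimate a pitch angle from the measured height change, then
# rotate detected points back into the nominal frame. Geometry simplified.
import math

CAB_LENGTH_M = 2.8        # illustrative lever arm between mount points

def pitch_from_height(measured_m, nominal_m):
    return math.atan2(measured_m - nominal_m, CAB_LENGTH_M)

def compensate(points_xz, pitch):
    """Rotate (x, z) points by -pitch to undo the cab's tilt."""
    c, s = math.cos(-pitch), math.sin(-pitch)
    return [(c * x - s * z, s * x + c * z) for x, z in points_xz]

pitch = pitch_from_height(measured_m=1.25, nominal_m=1.20)
print(f"pitch {math.degrees(pitch):.2f} deg")
print(compensate([(10.0, 0.0)], pitch))  # corrected sensor return
```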
98.
Perception simulation for improved autonomous vehicle control
A system and method for real world autonomous vehicle perception simulation are disclosed. A particular embodiment includes: configuring a sensor noise modeling module to produce simulated sensor errors or noise data with a configured degree, extent, and timing of simulated sensor errors or noise based on a set of modifiable parameters; using the simulated sensor errors or noise data to generate simulated perception data by simulating errors related to constraints of one or more of a plurality of sensors, and by simulating noise in data provided by a sensor processing module corresponding to one or more of the plurality of sensors; and providing the simulated perception data to a motion planning system for the autonomous vehicle.
B60W 50/02 - Ensuring safety in case of control system failures, e.g. by diagnosing, circumventing or fixing failures
B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
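The modifiable parameters described above might govern the degree, extent, and timing of injected error roughly as follows (parameter names and the noise model are assumptions for illustration, not the disclosed module):

```python
# Sketch of a parameterized sensor-noise model: modifiable parameters set
# the degree (sigma), extent (fraction of affected readings) and timing
# (onset step) of injected error, mirroring the configurable module above.
import numpy as np

def simulate_noise(clean, sigma=0.3, extent=0.2, onset_step=10,
                   step=0, rng=None):
    rng = rng or np.random.default_rng(0)
    noisy = clean.astype(float).copy()
    if step >= onset_step:                       # timing of injected errors
        mask = rng.random(clean.shape) < extent  # extent: affected readings
        noisy[mask] += rng.normal(0.0, sigma, mask.sum())  # degree of error
    return noisy

ranges = np.array([10.0, 12.5, 9.8, 30.2])
print(simulate_noise(ranges, step=12))   # perturbed perception input
print(simulate_noise(ranges, step=3))    # before onset: unchanged
```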
Autonomous vehicles must accommodate various road configurations such as straight roads, curved roads, controlled intersections, uncontrolled intersections, and many others. Autonomous driving systems must make decisions about the speed and distance of traffic and about obstacles, including obstacles that obstruct the view of the autonomous vehicle's sensors. For example, at intersections, the autonomous driving system must identify vehicles in the path of the autonomous vehicle or potentially in the path based on a planned path, estimate the distance to those vehicles, and estimate the speeds of those vehicles. Then, based on those estimates, the road configuration, and environmental conditions, the autonomous driving system must decide whether it is safe to proceed along the planned path, and when it is safe to proceed.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestriansRecognition of traffic objects, e.g. traffic signs, traffic lights or roads
Systems and methods for deploying emergency roadside signaling devices are disclosed. In one aspect, a control system for an object placing device of an autonomous vehicle includes a processor, and a computer-readable memory in communication with the processor and having stored thereon computer-executable instructions to cause the processor to: receive a signal comprising instructions to activate the object placing device; and provide instructions to the object placing device to place a plurality of signaling devices in accordance with predetermined criteria.
B60Q 7/00 - Arrangement or adaptation of portable emergency signal devices on vehicles
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
E01F 9/619 - Upright bodies, e.g. marker posts or bollardsSupports for road signs specially adapted for particular signalling purposes, e.g. for indicating curves, road works or pedestrian crossings with reflectorsUpright bodies, e.g. marker posts or bollardsSupports for road signs specially adapted for particular signalling purposes, e.g. for indicating curves, road works or pedestrian crossings with means for keeping reflectors clean
E01F 9/627 - Upright bodies, e.g. marker posts or bollardsSupports for road signs characterised by form or by structural features, e.g. for enabling displacement or deflection self-righting after deflection or displacement
E01F 9/70 - Storing, transporting, placing or retrieving portable devices
G07C 5/08 - Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle, or waiting time
G09F 13/16 - Signs formed of, or incorporating, reflecting elements or surfaces, e.g. warning signs having triangular or other geometrical shape