An aspect of the invention relates to a vehicle camera (1) with a housing (2) and with a board (6), which is arranged in the housing (2), wherein the board (6) comprises a carrier (11) of a base material (12) and comprises a material layer (13), which is applied to the base material (12), wherein the material layer (13) has higher thermal conductivity than the base material (12), wherein the board (6) comprises a main zone (14) and at least one secondary zone (15, 16, 17, 18) and a hole (19, 20, 21, 22) for a mounting pin (9) is formed in the secondary zone (15 to 18), wherein the material layer (13) is interrupted between the main zone (14) and the secondary zone (15 to 18) in a separation area (23), such that a first material layer area (14b) and a second material layer area (15b, 16b, 17b, 18b) separated therefrom are formed, and this separation area (23) is filled with a connection material (24) at least in certain areas in the manufactured final state of the board (6) such that the material layer areas (14b, 15b, 16b, 17b, 18b) are connected by the connection material (24).
H04N 23/52 - Elements optimising image sensor operation, e.g. for electromagnetic interference [EMI] protection or temperature control by heat transfer or cooling elements
According to a method for parking slot detection, a camera image (8) depicting an environment of a motor vehicle (1) is received from a camera (4) mounted to the motor vehicle (1), at least one ultrasonic sensor signal (9) is received from at least one ultrasonic detector (5, 6) mounted to the motor vehicle (1), and a two-dimensional ultrasonic map (29) in a top view perspective is generated depending on the at least one ultrasonic sensor signal (9). Parking slot proposal data comprising a position of a proposed parking slot (18) in the environment, which is equipped with an infrastructure device (22) of a predefined device type, is generated by applying a trained artificial neural network, ANN, (7) to input data, which depends on the camera image (8) and the ultrasonic map (29).
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G01S 15/931 - Sonar systems specially adapted for specific applications for anti-collision purposes of land vehicles
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G08G 1/14 - Traffic control systems for road vehicles indicating individual free spaces in parking areas
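The two-dimensional top-view ultrasonic map described above can be sketched as an occupancy grid accumulated from range readings. Everything below — the grid size, cell resolution, and reading format — is an illustrative assumption, not taken from the patent:

```python
import numpy as np

def ultrasonic_top_view_map(readings, grid_size=64, cell_m=0.25):
    """Accumulate ultrasonic range readings into a 2-D top-view grid.

    `readings` is a list of (sensor_x, sensor_y, heading_rad, range_m)
    tuples in vehicle coordinates; the cell nearest each echo is marked
    occupied. All names and the grid layout are illustrative.
    """
    grid = np.zeros((grid_size, grid_size), dtype=np.float32)
    origin = grid_size // 2  # vehicle at the grid centre
    for sx, sy, heading, rng in readings:
        # Point where the echo was reflected, in vehicle coordinates.
        ex = sx + rng * np.cos(heading)
        ey = sy + rng * np.sin(heading)
        col = int(round(ex / cell_m)) + origin
        row = int(round(ey / cell_m)) + origin
        if 0 <= row < grid_size and 0 <= col < grid_size:
            grid[row, col] = 1.0  # occupied cell
    return grid
```

Such a grid, stacked with the camera image, would be one plausible form for the ANN input data the abstract mentions.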
3.
METHODS AND SYSTEMS FOR PARKING ZONE MAPPING AND VEHICLE LOCALIZATION USING MIXED-DOMAIN NEURAL NETWORK WITH ALTITUDE COMPENSATION
Methods and systems for assisting a vehicle to park using mixed-domain image data. Image-domain data is generated based on raw image data received from a plurality of cameras mounted on a vehicle. A bird's-eye-view (BEV) image is generated based on the raw image data. BEV-domain data associated with the BEV image is generated, which includes data associated with parking landmarks in the parking zone. A tri-perspective view (TPV) and associated data can be generated. A computing system localizes the vehicle within the parking zone based on the BEV-domain data, the image-domain data, and the TPV-domain data to generate localization data. The computing system performs mapping of the parking zone based on all three domains of data and the localization data. A motion sensor such as an inertial measurement unit (IMU) can generate data that is used to compensate the various domain data.
A method includes obtaining image frames from each camera disposed along a vehicle, where each image frame corresponds to a same timestamp. The method further includes constructing a first bird's-eye view (BEV) image from each image frame with a first BEV module and constructing a second BEV image from each image frame by Inverse Perspective Mapping (IPM) with a second BEV module. The first BEV module extracts features of an external environment of the vehicle from each image frame, transforms the features to a three-dimensional space, and projects the three-dimensional space onto an overhead two-dimensional plane. Subsequently, a merging module merges the first and second BEV images to produce a hybrid BEV image. Features of an external environment of the vehicle within the hybrid BEV image are detected by a deep learning neural network and the hybrid BEV image is displayed to a user in the vehicle.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects
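The Inverse Perspective Mapping step used by the second BEV module admits a compact sketch: for the ground plane z = 0, the pinhole projection K [R | t] reduces to a 3x3 homography H = K [r1 r2 t], and the inverse of H maps pixels back to ground coordinates. The camera model and parameter names below are illustrative assumptions:

```python
import numpy as np

def ipm_homography(K, R, t):
    """Homography mapping ground-plane points (X, Y, 1) on z = 0 to pixels.

    For a point on the plane z = 0, the full projection K [R | t] reduces
    to H = K [r1 r2 t]: ground->image is H, image->ground is H^-1.
    """
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

def pixel_to_ground(H, u, v):
    """Back-project an image pixel (u, v) to ground-plane coordinates."""
    g = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return g[:2] / g[2]  # dehomogenize
```

Applying `pixel_to_ground` to every pixel (or warping with the inverse homography) yields the IPM bird's-eye view that is later merged with the learned BEV image.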
A system for creating online vectorized maps for autonomous vehicles includes an image sensor and an electronic control unit (ECU). The image sensor captures a series of image frames. The ECU includes a memory, a central processing unit (CPU), and a transceiver. The memory stores a semantic segmentation deep learning model and a vectorization post-processing module as computer readable code. The CPU executes the semantic segmentation deep learning model and the vectorization post-processing module to output a vectorized map of an external environment of a vehicle. The transceiver uploads the vectorized map to a server such that the vectorized map can be accessed by a second vehicle that uses the vectorized map to traverse the external environment.
G01C 21/00 - Navigation; Navigational instruments not provided for in groups
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
6.
METHOD TO CREATE ONLINE VECTORIZED MAPS FOR AUTONOMOUS VEHICLES
A system for creating online vectorized maps for autonomous vehicles includes an image sensor and an electronic control unit (ECU). The image sensor captures a series of image frames. The ECU includes a memory, a central processing unit (CPU), and a transceiver. The memory stores a semantic segmentation deep learning model and a vectorization post-processing module as computer readable code. The CPU executes the semantic segmentation deep learning model and the vectorization post-processing module to output a vectorized map of an external environment of a vehicle. The transceiver uploads the vectorized map to a server such that the vectorized map can be accessed by a second vehicle that uses the vectorized map to traverse the external environment.
G01C 21/00 - Navigation; Navigational instruments not provided for in groups
G06N 3/04 - Architecture, e.g. interconnection topology
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
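The abstract does not specify the vectorization post-processing module further; one plausible step for turning a traced segmentation contour into a compact polyline for the vectorized map is Ramer-Douglas-Peucker simplification, sketched here (the algorithm choice and the `eps` tolerance are assumptions):

```python
import numpy as np

def douglas_peucker(points, eps):
    """Ramer-Douglas-Peucker simplification of an open polyline.

    Keeps an interior point whenever its perpendicular distance to the
    chord between the current endpoints exceeds `eps`; otherwise the
    whole run collapses to that chord.
    """
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts.tolist()
    start, end = pts[0], pts[-1]
    chord = end - start
    norm = np.hypot(chord[0], chord[1])
    # Perpendicular distance of every point to the start-end chord.
    d = np.abs(chord[0] * (pts[:, 1] - start[1])
               - chord[1] * (pts[:, 0] - start[0])) / norm
    idx = int(np.argmax(d))
    if d[idx] > eps:
        left = douglas_peucker(pts[:idx + 1], eps)
        right = douglas_peucker(pts[idx:], eps)
        return left[:-1] + right  # drop the duplicated split point
    return [start.tolist(), end.tolist()]
```

Running this over each segmented lane or marking contour would produce the compact vertex lists a vectorized map stores and a server can redistribute.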
7.
METHOD FOR DETECTING AND RECOGNIZING A STREET SIGN IN AN ENVIRONMENT OF A MOTOR VEHICLE BY AN ASSISTANCE SYSTEM, COMPUTER PROGRAM PRODUCT, COMPUTER-READABLE STORAGE MEDIUM, AS WELL AS ASSISTANCE SYSTEM
The invention relates to a method for detecting and recognizing (31) a street sign (6) in an environment (5) of a motor vehicle (1) by an assistance system (2) of the motor vehicle (1), comprising the steps of: capturing at least one image (9) of the environment (5) by an optical capturing device (3) of the assistance system (2); encoding the captured image (9) by a transformer device (12) of an electronic computing device (4) of the assistance system (2); first decoding of the encoded image (9) by a detection transformer device (13) of the electronic computing device (4) for decoding object features (15) in the captured image (9); second decoding of the encoded image (9), wherein the second decoding is performed in parallel to the first decoding, by a recognition transformer device (14) of the electronic computing device (4) for text recognition (16) in the captured image (9); and detection and recognition (31) of the street sign (6) depending on the decoded object features (15) and the text recognition (16) by the electronic computing device (4). Further, the invention relates to a computer program product, a computer-readable storage medium, as well as an assistance system (2).
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
H04N 23/80 - Camera processing pipelines; Components thereof
8.
PROVIDING VEHICLE GUIDING DATA BY A DRIVER ASSISTANCE SYSTEM
The invention relates to a method for providing vehicle guiding data by a driver assistance system (24), wherein a camera system (26, 28) captures an object (30) in a predetermined environment region (32) of the motor vehicle (22) and environment data is provided depending thereon, wherein, at least for the predetermined environment region, a stitched top view of the motor vehicle and the object from a bird's eye perspective is determined, wherein the motor vehicle is represented by a motor vehicle dataset and the object is represented by an object dataset, and wherein a parking situation is detected by the driver assistance system. According to the invention, at the point in time of detecting the parking situation, the predetermined environment region is determined such that the motor vehicle is positioned within a maneuvering region (34), wherein, at least during the maneuvering of the motor vehicle, a distance (38) between the motor vehicle and an edge (36) of the maneuvering region is determined, wherein the distance is compared with a predetermined minimum distance, and wherein the vehicle guiding data is determined at least depending on the maneuvering region.
B60R 1/28 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with an adjustable field of view
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
An electronic assembly includes a first printed circuit board (PCB) with a first ground layer and a second PCB with a second ground layer, separated by a gap. A metal frame resides in the gap, featuring an upper surface facing the first PCB and a lower surface facing the second PCB. The assembly incorporates a first electrically-conductive seal on the upper surface contacting the first ground layer and a second electrically-conductive seal on the lower surface contacting the second ground layer. In some embodiments, the metal frame extends about or adjacent to the perimeters of the two PCBs. In other embodiments, the metal frame is a local metal frame, partially surrounding a connector.
An aspect of the invention relates to a vehicle module (1) with a housing (2) and an electronic unit (4) arranged in the housing (2) as well as with a liquid cooling device (5), by which heat, which is generated by the electronic unit (4) in the operation of the vehicle module (1), can be dissipated, wherein the liquid cooling device (5) comprises at least one cooling channel (6), in which liquid can flow, wherein a volume area (10) of the cooling channel (6) is partially delimited by walls (7, 8, 9) of a housing member (3a) of the housing (2), wherein the volume area (10) of the cooling channel (6) is partially directly delimited by an insert (14) separate from the housing member (3a), which is formed of a thermally conducting material, the thermal conductivity of which is greater than the thermal conductivity of the material of the housing member (3a).
H05K 7/20 - Modifications to facilitate cooling, ventilating, or heating
H01L 23/473 - Arrangements for cooling, heating, ventilating or temperature compensation involving the transfer of heat by flowing fluids by flowing liquids
11.
PLASTIC HOUSING FOR AN ELECTRICAL COMPONENT AND METHOD FOR MANUFACTURING THE SAME
The invention is directed at a plastic housing (10) for an electrical component, comprising a housing interior for housing the electrical component and a tempering device (22), wherein the tempering device (22) comprises a thermally conductive member, adapted to dissipate heat from the housing interior to a surrounding region of the housing (10). The thermally conductive member comprises a pedestal member (30) disposed within the housing interior, adapted to mount the electrical component within the housing interior, wherein in a mounted condition, the electrical component is thermally connected to the pedestal member (30) and a heat dissipating member (24) facing the surrounding region of the housing (10), wherein the pedestal member (30) and the heat dissipating member (24) are thermally connected to each other.
A MEMS device includes an electrical distribution substrate and a spacer ring extending upward therefrom. An actuator stator is positioned above the electrical distribution substrate and within the spacer ring. An outer frame extends from a floor of the actuator stator. An actuator rotor is suspended above the floor. A sensor is supported by the actuator rotor. A conductive stack is positioned above the spacer ring. A wire electrically connects the sensor to the conductive stack. A plurality of vias extending through the spacer ring and to the electrical distribution substrate, thereby allowing electrical communication between the sensor and the electrical distribution substrate while enabling vertical displacement of the sensor and actuator rotor.
For training a student ANN (5), first features (18) are generated by applying a first feature module (9) of a teacher ANN (6) to a first dataset (7) corresponding to a first sensor type, second features (19) are generated by applying a second feature module (10) of the teacher ANN (6) to a second dataset (8) corresponding to a second sensor type, the first and the second features are fused, third features (21) are generated by applying a feature module (14) of the student ANN (5) to the first dataset (7), prediction data (16) is generated by applying a decoder module (15) of the student ANN (5) to the third features (21). Arguments of a loss function (17) for updating the student ANN (5) include the prediction data (16), ground truth data (22) for the first dataset (7), the fused features (20) and the third features (21).
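The loss function described above takes four arguments; a minimal numerical sketch of such a distillation objective, assuming squared-error terms and scalar weights (`alpha` and `beta` are illustrative, not from the abstract):

```python
import numpy as np

def student_loss(pred, gt, fused_teacher_feats, student_feats,
                 alpha=1.0, beta=0.5):
    """Combined loss sketch for the described distillation setup.

    Uses the abstract's four loss arguments: prediction vs. ground truth
    (supervised term) plus fused teacher features vs. student features
    (distillation term). The MSE form and weights are assumptions.
    """
    task_term = np.mean((pred - gt) ** 2)
    distill_term = np.mean((fused_teacher_feats - student_feats) ** 2)
    return alpha * task_term + beta * distill_term
```

In a real training loop the gradient of this scalar with respect to the student parameters would drive the update; the teacher modules stay frozen.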
For tracking an emergency vehicle, a thermal image (8) depicting a target vehicle is received from a thermal camera (3) for each frame of a sequence of frames. An object detection algorithm (15) is carried out based on the thermal images (8), wherein an output comprises vehicle bounding boxes for the target vehicle and respective light bounding boxes for an active light source. If it is determined that the light source is alternatingly active and inactive over the frames, the target vehicle is classified as an emergency vehicle and a sequence of refined bounding boxes (10) for the emergency vehicle is determined by tracking the emergency vehicle, wherein the tracking comprises applying a first ANN (32) to first input data (20) depending on the bounding boxes.
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G10L 25/18 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
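The frame-wise "alternatingly active and inactive" test for the detected light source can be sketched as a simple toggle count over per-frame activity flags; the threshold below is an assumption, not a value from the patent:

```python
def is_alternating(active_flags, min_toggles=4):
    """Heuristic check that a detected light source blinks on and off.

    `active_flags` holds one boolean per frame (was the light bounding
    box active in that frame?). The source counts as alternating if it
    changes state at least `min_toggles` times across the sequence.
    """
    toggles = sum(1 for a, b in zip(active_flags, active_flags[1:]) if a != b)
    return toggles >= min_toggles
```

A vehicle whose light source passes this check would then be classified as an emergency vehicle and handed to the tracking ANN for refined bounding boxes.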
The invention relates to a vehicle electronic module (1), comprising - a housing (2); - a first electronic unit (3) and a second electronic unit (4); - a cooling unit (5), which is arranged in the housing (2) and which is arranged between the first electronic unit (3) and the second electronic unit (4); and - a movement device (7), wherein - by the movement device (7), a first relative movement can be performed such that the cooling unit (5) is arranged spaced from the first electronic unit (3) in the housing (2) in a first positioning state, and the cooling unit (5) and the first electronic unit (3) are pressed onto each other in the housing (2) in a second positioning state, - by the movement device (7), a second relative movement can be performed, which can be performed analogously to the first relative movement, between the second electronic unit (4) and the cooling unit (5), and - the movement device (7) comprises an actuation system (9), by the actuation of which the first relative movement and/or the second relative movement can be selectively generated.
Vehicle electronic module (1), comprising - a housing (2); - a first electronic unit (3) and at least one second electronic unit (4), which are detachably arranged in the housing (2); - a first cooling unit (5) and at least one second cooling unit (6), which are arranged in the housing (2); and - a movement device (7), by which a relative movement between the electronic units (3, 4) and the cooling units (5, 6) can be performed, such that the cooling units (5, 6) are arranged spaced from the electronic units (3, 4) in the housing (2) in a first positioning state and the cooling units (5, 6) and the electronic units (3, 4) are pressed onto each other in the housing (2) in a second positioning state, wherein - the movement device (7) comprises at least one actuating element (9), by the actuation of which both a first relative movement between the first electronic unit (3) and the first cooling unit (5) and a second relative movement between the second electronic unit (4) and the second cooling unit (6) are generated.
H05K 5/00 - Casings, cabinets or drawers for electric apparatus
H05K 7/20 - Modifications to facilitate cooling, ventilating, or heating
18.
ELECTRONIC COMPUTING DEVICE FOR AN ASSISTANCE SYSTEM OF A MOTOR VEHICLE, ASSISTANCE SYSTEM AS WELL AS METHOD FOR PRODUCING AN ELECTRONIC COMPUTING DEVICE
An electronic computing device for an assistance system of a motor vehicle is disclosed. The device includes a first housing. A first processor device of the assistance system is arranged in a first internal space of the first housing and generates heat during operation. The device also includes a cooling device for cooling the first processor device. The cooling device is formed on a first outer wall of the first housing. A separate second housing with a second internal space is arranged stacked on the first housing viewed along a vertical direction of the electronic computing device. A second processor device of the assistance system is arranged in the second internal space. The second processor device generates heat during operation. A columnar structure of the cooling device is formed on a first outer side of the first outer wall.
An electric power distribution assembly for at least one Printed Circuit Board Assembly (PCBA) of a Connectivity Control Unit (CCU). The electric power distribution assembly comprises a first block having a plurality of electrical connectors, a plurality of electrical channels, an alignment protrusion, a top surface and a bottom surface. The electric power distribution assembly further comprises a second block having an alignment indentation, an upper face and a lower face. The electric power distribution assembly additionally comprises a plurality of conductive pads housed within the first block. The electric power distribution assembly moreover comprises a plurality of locking members housed within the second block. The electric power distribution assembly further comprises a plurality of fasteners connecting the first block and the second block.
B60R 16/023 - Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric for transmission of signals between vehicle parts or subsystems
A housing for at least one electronic circuit includes a housing portion with an inner surface. The housing includes a heat spreader component which is connected to the inner surface of the housing portion. The heat spreader component includes a contact region connected to an electronic component of the at least one electronic circuit. The heat spreader component includes a material with a thermal conductivity greater than that of the housing portion. The heat spreader component has a star-like shape including an inner contact region for connecting the electronic component to arms extending from the inner contact region.
The invention relates to a camera (4) for a motor vehicle (1), with an exterior housing (9), in which a lens module (15) of the camera (4) and a circuit board (13) of the camera (4) are arranged, wherein the circuit board (13) is fixed to mounting elements (25, 26, 27, 27a) of the camera (4), and the material of the circuit board (13) has a first thermal expansion coefficient and the material of the exterior housing (9) has a second thermal expansion coefficient different therefrom, wherein the mounting elements (25, 26, 27, 27a) are arranged on the exterior housing (9), characterized in that the mounting elements (25, 26, 27, 27a) are mounting pins, each with a longitudinal axis (B), and at least one mounting pin comprises at least two nominal kink points (39, 40) axially spaced from each other in an area (d) of the mounting pin between the circuit board (13) and the exterior housing (9).
B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
H04N 23/57 - Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
H04N 23/52 - Elements optimising image sensor operation, e.g. for electromagnetic interference [EMI] protection or temperature control by heat transfer or cooling elements
22.
CAMERA DEVICE WITH A HEATING DEVICE FOR A MOTOR VEHICLE
The invention relates to a camera device (2) with a heating device (19) for a motor vehicle (1). The heating device (19) is configured to heat a front window cover (6) of the camera device (2) by means of a heating element (30). The camera device (2) comprises a bezel housing (18) and an isolator (20), which is produced of a thermally insulating material. The isolator (20) is arranged between the heating element (30) and an edge (35) of the bezel housing (18), such that a heat transfer from the heating device (19) to the bezel housing (18) is at least reduced.
B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
G03B 17/55 - Details of cameras or camera bodies; Accessories therefor with provision for heating or cooling, e.g. in aircraft
H04N 23/52 - Elements optimising image sensor operation, e.g. for electromagnetic interference [EMI] protection or temperature control by heat transfer or cooling elements
H04N 23/55 - Optical parts specially adapted for electronic image sensorsMounting thereof
23.
METHOD FOR RECOGNIZING A CLEANING OF A LENS OF A CAMERA FOR A MOTOR VEHICLE BY MEANS OF AN ELECTRONIC COMPUTING DEVICE OF THE MOTOR VEHICLE, COMPUTER PROGRAM PRODUCT, COMPUTER-READABLE STORAGE MEDIUM AS WELL AS ELECTRONIC COMPUTING DEVICE
The invention relates to a method for recognizing a cleaning of a lens (4) of a camera (3, 10) for a motor vehicle (1) by means of an electronic computing device (5) of the motor vehicle (1), comprising the steps: - capturing at least one image (8, 9) of an environment (2) of the motor vehicle (1) by means of the camera (3, 10); - comparing an image parameter of the captured image (8, 9) to a comparative parameter for the image (8, 9) by means of the electronic computing device (5); - determining an application of a cleaning fluid (7) onto the lens (4) depending on the comparison; and - recognizing the cleaning depending on the determined application of the cleaning fluid (7) by means of the electronic computing device (5). Further, the invention relates to a computer program product, to a computer-readable storage medium as well as to an electronic computing device (5).
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 10/50 - Extraction of image or video features by performing operations within image blocks; Extraction of image or video features by using histograms, e.g. histogram of oriented gradients [HoG]; Extraction of image or video features by summing image-intensity values; Projection analysis
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
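The abstract compares an "image parameter" against a "comparative parameter" without naming either; one plausible choice is an edge-energy sharpness measure, which collapses while cleaning fluid covers the lens and recovers afterwards. A sketch under that assumption, with an assumed drop-ratio threshold:

```python
import numpy as np

def sharpness(img):
    """Mean absolute discrete Laplacian as a simple sharpness parameter."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return float(np.mean(np.abs(lap)))

def fluid_applied(current_img, reference_sharpness, drop_ratio=0.5):
    """Flag a cleaning-fluid application when sharpness collapses below
    a fraction of its reference value (the ratio is an assumption)."""
    return sharpness(current_img) < drop_ratio * reference_sharpness
```

Recognizing the cleaning itself would then follow from observing this flag rise and clear again over consecutive captured images.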
24.
ADAPTIVE COLOR MITIGATION FOR NON-UNIFORMITY AMONGST LENSES
Systems and methods for altering colors of pixels of images produced by cameras using a lens. A plurality of generated images are received, and color channel values and luminance values of image pixels are determined. Benchmark U-color thresholds and V-color thresholds are established based on the color channels and luminance associated with the pixels. A first image generated using a first lens of a first camera is received. U-color values and V-color values associated with first pixels of the first image are determined. A uniformity score associated with the first lens is received. The benchmark U-color and V-color thresholds are altered based on the uniformity score. The U-color values and V-color values of the first pixels are corrected in response to the U-color values being within the altered U-color thresholds and the V-color values being within the altered V-color thresholds.
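A minimal sketch of the per-pixel correction logic described above, assuming the benchmark thresholds are (lo, hi) ranges widened in proportion to the lens uniformity score and that in-range chroma is blended toward a neutral value; all constants are illustrative:

```python
def correct_uv(u, v, u_thresh, v_thresh, uniformity, target=128):
    """Adaptive U/V chroma correction sketch.

    `u_thresh` and `v_thresh` are (lo, hi) benchmark ranges; a poorer
    uniformity score widens the window, and pixels inside both adjusted
    ranges are pulled halfway toward the neutral `target` chroma. The
    widening rule and blend factor are assumptions, not from the patent.
    """
    widen = 1.0 - uniformity  # poorer uniformity -> wider window
    u_lo, u_hi = u_thresh[0] - 10 * widen, u_thresh[1] + 10 * widen
    v_lo, v_hi = v_thresh[0] - 10 * widen, v_thresh[1] + 10 * widen
    if u_lo <= u <= u_hi and v_lo <= v <= v_hi:
        return 0.5 * (u + target), 0.5 * (v + target)
    return u, v  # out-of-range chroma is left untouched
```

Applied per pixel, this suppresses the lens-induced color cast near the benchmark neutral point while leaving genuinely colored content alone.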
In some implementations, a device may perform operations including receiving, via an image sensor associated with a vehicle, an image of a parking area. The operations may further include determining that the image of the parking area includes an available parking spot; analyzing the image of the parking area to locate a number associated with the available parking spot; determining that the vehicle is entering the available parking spot; creating, in response to the vehicle entering the available parking spot, a generated image that includes the number associated with the available parking spot based on the image of the parking area; and displaying the generated image on a vehicle display in response to the vehicle occupying the available parking spot.
G08G 1/14 - Traffic control systems for road vehicles indicating individual free spaces in parking areas
G06T 11/60 - Editing figures and text; Combining figures or text
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
G06V 30/18 - Extraction of features or characteristics of the image
26.
MEMS PACKAGING WITH ACTUATOR STATOR PROVIDING ELECTRICAL CONNECTION POINT
A MEMS device is provided with an actuator stator providing an electrical connection point. The MEMS device includes an electrical distribution substrate, and an actuator stator positioned above it. The actuator stator has a floor and an outer frame extending up from the floor. The MEMS device includes a stator pad disposed on the outer frame and above the electrical distribution substrate. The MEMS device also includes an actuator rotor suspended above the floor, within the outer frame, with a sensor mounted thereon. A wire bond interconnect electrically couples the sensor to the stator pad. In some embodiments, the outer frame includes a via extending therethrough which electrically connects the stator pad with the electrical distribution substrate, enabling an electrical connection between the sensor and the electrical distribution substrate. In some embodiments, a second wire bond interconnect electrically connects the stator pad and the substrate.
The invention relates to a camera (10) for a motor vehicle, with a housing part (36) and with a circuit board (26), on which an image sensor (20) of the camera (10) is arranged. The circuit board (26) is fixed to the housing part (36) by means of a plurality of positioning pins (38). A lens module (22) comprises a module section (32) close to the image sensor (20). The module section (32) is received in a passage opening (34), which is formed in the housing part (36). The circuit board (26) is retained at a distance from the lens module (22) by means of the positioning pins (38), wherein fixing sections (40) of the positioning pins (38) engage with fixing openings (42), which are formed in the circuit board (26). The circuit board (26) comprises at least one recess (46) in partial areas (44), which adjoin the respective fixing opening (42). Furthermore, the invention relates to a motor vehicle with at least one such camera (10).
A camera for a motor vehicle is disclosed. The camera includes a housing, wherein a receiving space, in which a lens module of the camera and a circuit board of the camera are arranged, is bounded by the housing. The camera has a longitudinal axis, wherein the circuit board and the lens module are arranged axially spaced from each other in the direction of this longitudinal axis such that a clearance is formed between the circuit board and the lens module. An elastic seal is arranged between the circuit board and the lens module such that the clearance is sealed from the remaining volume space in the housing by the seal.
G03B 17/12 - Bodies with means for supporting objectives, supplementary lenses, filters, masks, or turrets
G03B 17/55 - Details of cameras or camera bodies; Accessories therefor with provision for heating or cooling, e.g. in aircraft
G03B 30/00 - Camera modules comprising integrated lens units and imaging units, specially adapted for being embedded in other devices, e.g. mobile phones or vehicles
H04N 23/52 - Elements optimising image sensor operation, e.g. for electromagnetic interference [EMI] protection or temperature control by heat transfer or cooling elements
H04N 23/57 - Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
30.
HEATING A LENS MODULE OF A CAMERA FOR A MOTOR VEHICLE
The present invention relates to a camera for a motor vehicle. The camera includes a lens module, a lens holder body, which is mechanically connected to the lens module, and an electrical heater element for heating the lens module. The heater element is arranged at a surface of the lens holder body facing away from the lens module. The heater element is thermally connected to the lens module via the lens holder body.
A vehicle light system includes a plurality of light banks connected to a vehicle and configured to project light that radiates away from the vehicle. Each light bank includes a plurality of light sources. An image sensor, such as a camera, is connected to the vehicle and is configured to generate image data corresponding to a scene about the vehicle. A controller is connected to the light banks and the image sensor. The controller is configured to receive the image data, and execute an object detection machine learning model based on the image data to detect an object, determine a location of the object, and classify the detected object. The controller is configured to then select and dim one or more of the plurality of light sources based on the output of the object detection machine learning model. The dimming can vary based on the object class.
F21S 41/663 - Illuminating devices specially adapted for vehicle exteriors, e.g. headlamps characterised by a variable light distribution by acting on light sources by switching light sources
F21V 23/04 - Arrangement of electric circuit elements in or on lighting devices, the elements being switches
32.
COMPUTER VISION IN BIRD'S-EYE-VIEW AND GUIDING A VEHICLE
For computer vision in a BEV perspective, images (B1, B2, B3, BN, Bi) are received from vehicle cameras (C1, C2, C3, CN). For each image (B1, B2, B3, BN, Bi), sections (S11, S12, S13, S14, S15, S16, S17, S21, S22, S23, S24, S22', S23', S24', S22'', S23'', S24'', Si1, Si2, Sin) are extracted. From each section (S11, S12, S13, S14, S15, S16, S17, S21, S22, S23, S24, S22', S23', S24', S22'', S23'', S24'', Si1, Si2, Sin), a virtual image (VBi1, VBi2, VBin) is generated by scaling it to a target resolution. For each virtual image (VBi1, VBi2, VBin), virtual calibration data (VDi1, VDi2, VDin) is generated depending on a size and a position of the section (S11, S12, S13, S14, S15, S16, S17, S21, S22, S23, S24, S22', S23', S24', S22'', S23'', S24'', Si1, Si2, Sin) and calibration data (Di) of the respective vehicle camera (C1, C2, C3, CN).
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
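The virtual calibration data described in the abstract above follows from how cropping and rescaling a section changes a camera's intrinsics. The following sketch assumes a simple pinhole model, which the abstract does not specify; all parameter names are illustrative.

```python
def virtual_calibration(fx, fy, cx, cy, crop_x, crop_y, crop_w, crop_h,
                        target_w, target_h):
    """Compute pinhole intrinsics for a virtual image obtained by
    extracting a section at (crop_x, crop_y) of size (crop_w, crop_h)
    and scaling it to the target resolution (target_w, target_h)."""
    sx = target_w / crop_w
    sy = target_h / crop_h
    return (fx * sx, fy * sy,       # focal lengths scale with resolution
            (cx - crop_x) * sx,     # principal point shifts by the crop
            (cy - crop_y) * sy)     # offset, then scales
```

The scaling factors depend on the size of the section, and the principal-point shift on its position, mirroring the dependency on "a size and a position of the section" stated in the abstract.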
33.
CONTROL DEVICE ASSEMBLY FOR A VEHICLE, WHICH COMPRISES A LIQUID-COOLED CONTROL UNIT AND TEMPERATURE SENSORS AT THE INLET AND OUTLET OF A HEAT SINK, VEHICLE AND METHOD FOR OPERATING THE CONTROL DEVICE ASSEMBLY
The invention relates to a control device assembly (10) for a vehicle, with an electronic control unit (14), which comprises a circuit board (16) and a plurality of electronic semiconductor devices (18, 20, 22) arranged on the circuit board (16). A heat sink (28) of the control device assembly (10) is formed for dissipating heat from the semiconductor devices (18, 20, 22). A cooling liquid (30) can flow through the heat sink (28), which comprises a cooling liquid inlet (34) and a cooling liquid outlet (36). The control device assembly (10) comprises a first temperature sensor (38), which is arranged in the area of the cooling liquid inlet (34) on an outer side (42) of the heat sink (28), and a second temperature sensor (40), which is arranged in the area of the cooling liquid outlet (36) on the outer side (42) of the heat sink (28). A flow of the cooling liquid through the heat sink (28) is adjustable depending on measurement values capable of being captured by means of the temperature sensors (38, 40). Furthermore, the invention relates to a vehicle and to a method for operating the control device assembly (10).
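The flow adjustment depending on the two temperature measurements could, for instance, be a proportional rule on the temperature rise across the heat sink. This is only one possible control scheme; the abstract does not specify one, and all names and constants below are assumptions.

```python
def adjust_coolant_flow(t_inlet: float, t_outlet: float,
                        base_flow: float = 1.0,
                        gain: float = 0.1,
                        max_flow: float = 5.0) -> float:
    """Return a coolant flow rate (e.g. in l/min) from the inlet and
    outlet temperature sensor readings (in deg C).

    The temperature rise across the heat sink approximates the heat
    dissipated by the semiconductor devices, so a larger rise
    requests proportionally more flow, capped at a maximum."""
    delta_t = max(0.0, t_outlet - t_inlet)
    return min(max_flow, base_flow + gain * delta_t)
```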
In order to lock a communication interface of an electronic device (1) for a motor vehicle, a first identifier (3) is determined which characterizes a hardware component of a computing unit (2) of a production line in which the electronic device (1) was produced or a software component of the computing unit (2). A second identifier (4) is obtained which characterizes the electronic device (1). A first input value for a specified key derivation function is generated by the computing unit (2) on the basis of the first identifier (3), and a second input value for the key derivation function is generated on the basis of the second identifier (4). A key is generated by the computing unit (2) on the basis of the first input value and the second input value using the key derivation function, and the communication interface is locked using the key.
G06F 21/85 - Protecting input, output or interconnection devices interconnection devices, e.g. bus-connected or in-line devices
G06F 21/73 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information by creating or determining hardware identification, e.g. serial numbers
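The two-input key derivation in the abstract above can be illustrated with an HKDF-style extract-and-expand construction. The choice of HMAC-SHA256 is an assumption for the sketch; the publication only specifies "a specified key derivation function".

```python
import hashlib
import hmac

def derive_lock_key(first_identifier: bytes, second_identifier: bytes,
                    length: int = 32) -> bytes:
    """Derive a key from a production-line identifier (first input
    value) and a device identifier (second input value)."""
    # Extract: condense both input values into a pseudorandom key.
    prk = hmac.new(first_identifier, second_identifier,
                   hashlib.sha256).digest()
    # Expand: stretch the pseudorandom key to the requested length.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]
```

The derived key is deterministic for a given production line and device, so the same computing unit can later regenerate it to unlock the communication interface.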
35.
CAMERA FOR A VEHICLE AS WELL AS SPRING ELEMENT FOR A CAMERA
The invention relates to a camera (2) for a vehicle (1), comprising a housing (4), which comprises a first housing part (5) and a second housing part (6), which together bound a receiving space (15), in which a circuit board (7) and a lens module (8) are arranged, wherein the circuit board (7) is attached to the first housing part (5). The circuit board (7) comprises a connecting element (16) for connecting the circuit board (7) with an interface (10) of the camera (2), wherein the connecting element (16) comprises, on an end facing the second housing part (6), an edge (17) protruding into the receiving space (15). Between the edge (17) and the second housing part (6), a spring element (11) is tensioned, which holds the circuit board (7) relative to the second housing part (6) in a circuit board position.
A composite top view of an area surrounding a vehicle is generated utilizing image harmonization and proximity sensor data. Images from various vehicle cameras are received, wherein at least two of the cameras can see the same portion of the environment in an overlapping region. The images are segmented into segments. The overlapping regions include some of the segments of one image and some of the segments of another image to define overlapping segments. Sensor data is received from a plurality of proximity sensors, wherein the sensor data indicates a location of an object outside of the vehicle. The color and/or brightness of the images are harmonized based at least in part on the sensor data. In particular embodiments, weights are placed on the color and/or brightness of the overlapping segments associated with the location of the object. Or these overlapping segments can be removed from the harmonization.
B60R 1/23 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
A computer-implemented method for analyzing a roundabout in an environment of a vehicle is disclosed. The method includes generating at least one initial feature map by applying a feature encoder module of a trained neural network to an input image, which depicts the roundabout. Next, a classificator module of the trained neural network is applied to the initial feature map; an output of the classificator module represents a road region on the input image. A radius estimation module of the trained neural network is then applied to the initial feature map; an output of the radius estimation module depends on an inner radius of the roundabout and an outer radius of the roundabout. Finally, an entry point and an exit point of the roundabout are determined depending on the output of the classificator module and depending on the output of the radius estimation module.
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G01C 21/36 - Input/output arrangements for on-board computers
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
39.
DETECTING AT LEAST ONE EMERGENCY VEHICLE USING A PERCEPTION ALGORITHM
For training a perception algorithm to detect an emergency vehicle, respective audio datasets are received from two microphones and respective spectrograms are generated. At least one interaural difference map is generated based on the spectrograms, and audio source localization data, which specifies a number of audio sources in respective grid cells of a spatial grid, is generated by applying a CRNN to first input data containing the spectrograms and the at least one interaural difference map. An image is received from a camera and output data comprising a bounding box for the emergency vehicle is predicted by applying at least one further ANN to second input data containing the image and the spectrograms. Network parameters are adapted depending on the output data and the audio source localization data.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G01S 5/18 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
G01S 3/808 - Systems for determining direction or deviation from predetermined direction using transducers spaced apart and measuring phase or time difference between signals therefrom, i.e. path-difference systems
G01S 5/16 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
41.
SYSTEM FOR OPERATING A HEATING DEVICE OF A CAMERA FOR A MOTOR VEHICLE
The invention relates to a system (5) for operating a heating device (9) of a camera (2) for a motor vehicle (1), comprising a control device (3) and an electronic circuit (10) of the camera (2), wherein the electronic circuit (10) comprises at least one switch (11) for the heating device (9) and a voltage regulator device (12), which comprises a primary voltage regulator (13) and in particular at least one secondary voltage regulator (14). The control device (3) is configured to provide a voltage to the primary voltage regulator (13). The primary voltage regulator (13) reduces the voltage provided to the switch (11) to a constant voltage level (18). The system (5) is configured to operate the heating device (9) with the provided reduced voltage.
A camera for a motor vehicle includes a housing. The housing bounds a receiving space that contains a lens module and a circuit board. The camera includes a longitudinal axis and a heater for heating the lens module. The circuit board and the lens module are arranged axially spaced from each other in the direction of the longitudinal axis of the camera so that a clearance is formed between the circuit board and the lens module. A support plate of the camera is arranged between the circuit board and the lens module. The support plate is a component separate from the circuit board, the housing, and the lens module. The support plate includes first electrical contact areas that are able to connect to electrical lines of the heater. Electrical energy is able to flow to the lens module to heat the lens module.
G03B 30/00 - Camera modules comprising integrated lens units and imaging units, specially adapted for being embedded in other devices, e.g. mobile phones or vehicles
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 33/00
G03B 17/55 - Details of cameras or camera bodies; Accessories therefor with provision for heating or cooling, e.g. in aircraft
A method for focal adjustment of an automotive camera is disclosed. The method includes capturing a first image with an imager of the camera during a first frame period, exposing a plurality of rows of a sensor array of the imager according to a rolling shutter mode, and capturing a second image with the imager during a second frame period after the first frame period. The method further includes determining a first area on the sensor array, which corresponds to a predefined region of interest for the first frame period, determining a first subset of the plurality of rows, determining a focal adjustment period which starts after all rows of the first subset of rows have been exposed during the first frame period, and adjusting at least one focal parameter of the camera during the focal adjustment period according to a predefined focal setting.
H04N 23/67 - Focus control based on electronic image sensor signals
B60K 35/28 - Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor characterised by the type of the output information, e.g. video entertainment or vehicle dynamics information; Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor characterised by the purpose of the output information, e.g. for attracting the attention of the driver
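The timing constraint in the focal-adjustment method above, that the adjustment period starts only after all rows covering the region of interest have been exposed, can be sketched with a simplified rolling-shutter timing model. The model (each row's exposure starting at row index times the line time) and all parameter names are assumptions for illustration.

```python
def focal_adjustment_start(last_row: int,
                           line_time_us: float,
                           exposure_time_us: float) -> float:
    """Return the time (in microseconds from the start of the frame)
    after which the last row of the subset covering the region of
    interest has finished exposing in rolling shutter mode, so the
    focal adjustment period may safely begin.

    Assumes row r starts exposing at r * line_time_us and exposes
    for exposure_time_us."""
    return last_row * line_time_us + exposure_time_us
```

Adjusting the focal parameter only after this instant avoids distorting the region of interest in the first image while still allowing the adjustment before the second frame period.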
A method for focal adjustment of an automotive camera is disclosed. The method includes providing a target focal setting for the camera for predefined regions of interest, determining a first and a second focal setting for each subset of regions of interest depending on the respective target focal settings of the regions, and capturing a first image during a first frame period and a second image during a second frame period. The first and second focal settings are used during at least part of the first and second frame periods, respectively. Between the capturing of the first and second images, the first focal setting is changed to the second focal setting.
METHOD FOR REDUCING A TEMPORAL NOISE IN AN IMAGE SEQUENCE OF A CAMERA BY AN ELECTRONIC COMPUTING DEVICE, COMPUTER PROGRAM PRODUCT, COMPUTER-READABLE STORAGE MEDIUM, AS WELL AS ELECTRONIC COMPUTING DEVICE
The present invention relates to a method for reducing a temporal noise in an image sequence by an electronic computing device (3), comprising the steps: - capturing a first image (Gt-1, Ft-1) of an environment (5) at a first point in time (t-1) and capturing a second image (Ft) at a second point in time (t); - determining at least one feature in the second image (Ft) by a feature capturing module (7); - determining pixels associated with the at least one feature in the second image (Ft) by the feature capturing module (7); - generating a weighting map (W) for each pixel of the second image (Ft) by an adaptive motion estimation module (8); and - reducing the temporal noise by blending pixels of the first image (Gt-1, Ft-1) with pixels of the second image (Ft) in dependence on the generated weighting map (W) by a blending module (9). Further, the invention relates to a computer program product, a computer-readable storage medium, as well as an electronic computing device (3).
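The blending step above reduces to a per-pixel convex combination of the previous filtered image and the current image. The sketch below assumes the weighting map W is near 1 where motion was detected (keep the new pixel, avoid ghosting) and near 0 in static areas (average the noise out); these semantics are an interpretation of the abstract, not stated in it.

```python
import numpy as np

def blend_frames(prev_filtered: np.ndarray, current: np.ndarray,
                 weight_map: np.ndarray) -> np.ndarray:
    """Per-pixel temporal blend: G_t = W * F_t + (1 - W) * G_{t-1}.

    prev_filtered -- previously filtered frame G_{t-1}
    current       -- newly captured frame F_t
    weight_map    -- per-pixel weights W in [0, 1]
    """
    return weight_map * current + (1.0 - weight_map) * prev_filtered
```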
A driver assistance system and driver assistance method for a combination of a motor vehicle and a trailer, which includes a first camera located on the motor vehicle configured to generate a first image, and a second camera located on the trailer configured to generate a second image, and a computer processing device configured to determine a first distance and a second distance by a sensor or a user input, determine a bowl view responsive to the first distance and the second distance in relation to a threshold distance, and generate a combined image from the second image and the first image responsive to the bowl view.
B60R 1/22 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
47.
GENERATING OR UPDATING A DIGITAL REPRESENTATION OF A TRAJECTORY, SELF-LOCALIZATION OF AN EGO-VEHICLE AND GUIDING AN EGO-VEHICLE AT LEAST IN PART AUTOMATICALLY
In order to generate or update a digital representation of a trajectory (T1, TN) in a predefined spatial region, for each of a plurality of time instances while driving a capturing vehicle (1') along the trajectory (T1, TN), a camera image of an environment of the capturing vehicle (1') is captured by a camera (3) of the capturing vehicle (1'), at least one feature descriptor of the respective camera image is determined and a two-dimensional pose of the capturing vehicle (1') in a coordinate system of the digital representation is determined depending on the camera image, an altitude (z) of the capturing vehicle (1') is determined, and a respective dataset of the digital representation, which comprises the respective at least one feature descriptor, the respective pose of the capturing vehicle (1') and the respective altitude (z) of the capturing vehicle (1'), is generated or updated.
A method for reducing temporal noise in an image sequence by a camera of an assistance system of a motor vehicle by an electronic computing device. The method includes capturing a first image and a second image of the image sequence of an environment of the motor vehicle by the camera at a first point and a second point in time. The method further includes determining at least one feature in the second image and determining the pixels associated with the at least one feature in the second image by the feature capturing module, generating a weighting map comprising a weight value for each pixel of the second image by an adaptive motion estimation module, and reducing the temporal noise by blending pixels of the first image with pixels of the second image in dependence on the generated weighting map by a blending module.
Sensor data generated by an environmental sensor arrangement (5) and representing one or more objects (2a, 2b, 7) is received. An occupancy grid map (8), which comprises a plurality of occupied cells, which represent regions that are occupied by the one or more objects (2a, 2b, 7), wherein the plurality of occupied cells comprises a cluster (12a, 12b) of connected occupied cells, is computed based on the sensor data. A contour tracing algorithm is carried out to determine all contour cells of the cluster (12a, 12b), which lie on an outer contour of the cluster (12a, 12b). For each of the contour cells, a representative point lying in the respective contour cell is determined. A polygon (9a, 9b), whose vertices are given by the representative points of the contour cells, is determined and a modeling dataset is generated and stored depending on the polygon (9a, 9b).
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
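The contour cells in the occupancy-grid method above can be characterized as occupied cells that touch at least one free cell. The sketch below finds them in an unordered way; the publication uses a contour *tracing* algorithm that yields an ordered contour, so this is a simplification for illustration.

```python
import numpy as np

def contour_cells(grid: np.ndarray):
    """Return the cells of a boolean occupancy grid that lie on the
    outer contour of a cluster: occupied cells with at least one
    free 4-neighbour (cells outside the grid count as free)."""
    rows, cols = grid.shape
    cells = []
    for r in range(rows):
        for c in range(cols):
            if not grid[r, c]:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or not grid[nr, nc]:
                    cells.append((r, c))
                    break
    return cells
```

Taking one representative point per contour cell (e.g. the cell center) then directly yields the polygon vertices of the modeling dataset.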
50.
CAMERA WITH A SEALING BETWEEN A CIRCUIT BOARD AND A LENS MODULE FOR A MOTOR VEHICLE
The present invention relates to a camera for a motor vehicle. The camera includes a housing that delimits the boundary of a receiving space. A lens module of the camera and a circuit board of the camera are arranged in the receiving space. The circuit board and the lens module are arranged axially spaced apart from each other in the direction of the longitudinal axis of the camera so that between the circuit board and the lens module, a clearance is formed. Between the circuit board and the lens module, an elastic sealing is arranged so that the clearance is sealed, by the sealing, from the remaining volume space in the housing. The sealing includes a predetermined bending point where a width of the sealing is minimal to allow bending.
For displaying image data, initial and final projection surfaces (9a, 9b) are provided and a video stream is received and is buffered in a buffer storage, which is updated at a frame rate. Image data is read out at a readout rate from the buffer at an initial readout instance, intermediate readout instances and a final readout instance. Respective sets of initial and final sampling points (11a, 11b) on the projection surfaces (9a, 9b) are determined. For each of the intermediate readout instances, intermediate sampling points (12, 13), which interpolate between the final sampling points (11b) and the respective initial sampling points (11a), and an intermediate projection surface (10) containing the intermediate sampling points (12, 13) are determined. The read out image data is projected to the respective intermediate projection surface (10) and a corresponding image is displayed on a display device (5).
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06T 3/4007 - Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
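The intermediate sampling points in the display method above interpolate between the initial and final point sets across the intermediate readout instances. Linear interpolation is one natural choice, assumed here for illustration; the abstract only requires points that interpolate between the two sets.

```python
import numpy as np

def intermediate_sampling_points(initial_pts: np.ndarray,
                                 final_pts: np.ndarray,
                                 num_intermediate: int):
    """Yield one array of sampling points per intermediate readout
    instance, moving linearly from the initial points towards the
    final points (both arrays of shape (N, D))."""
    for k in range(1, num_intermediate + 1):
        t = k / (num_intermediate + 1)   # fraction of the transition
        yield (1.0 - t) * initial_pts + t * final_pts
```

Projecting the buffered image data onto the surface spanned by each successive point set produces a smooth visual transition between the initial and final projection surfaces.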
For computing height information of a curb (5), line segments (7) approximating the curb (5) in a ground plane (6) are determined, a perpendicular bisector of a first line segment is computed, and a second segment, which intersects the perpendicular bisector, is determined. A first plane comprising the first segment and a second plane comprising the second segment are determined. A midpoint (8) of the first segment is projected to the second plane and an intersection point (9) of the second segment is projected to the first plane. Depending on respective resulting heights, the first and the second segment are classified as upper and lower, respectively. An upper point (11) of the upper segment is projected to a further plane, which is perpendicular to the ground plane (6) and comprises the lower segment, and a height (H) of the projected upper point (11') above the ground plane (6) is determined.
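The final projection step above can be sketched with elementary vector geometry. The sketch assumes a z-up coordinate frame with the ground plane at z = 0, which the abstract does not fix.

```python
import numpy as np

def curb_height(upper_point, lower_seg_a, lower_seg_b):
    """Project an upper-segment point onto the vertical plane
    (perpendicular to the ground plane z = 0) containing the lower
    segment, and return the height of the projected point above
    the ground plane."""
    p = np.asarray(upper_point, float)
    a = np.asarray(lower_seg_a, float)
    b = np.asarray(lower_seg_b, float)
    d = b - a
    # Normal of the vertical plane through the lower segment:
    # horizontal and perpendicular to the segment direction.
    n = np.array([-d[1], d[0], 0.0])
    n /= np.linalg.norm(n)
    projected = p - np.dot(p - a, n) * n   # orthogonal projection
    return projected[2]                    # height above z = 0
```

Because the projection plane is vertical, the projection preserves the z coordinate, so the returned height equals the height of the upper point itself; the projection serves to locate the point (11') directly above the lower segment.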
The invention relates to an automotive camera (10) with a lens assembly (16), wherein the lens assembly (16) comprises a cylindrical lens barrel (22) with at least one lens (26) arranged therein and a lens opening for allowing light to travel through the lens barrel (22) along an optical axis (20) of the at least one lens (26). For protecting the at least one lens (26), a transparent protective cover (28) is arranged at the lens opening. A fastening cap (30) comprising a central opening (32) and a fastening collar (34) surrounding the central opening (32) is put on the lens barrel (22) at the lens opening to fasten the protective cover (28) to the lens barrel (22). The fastening cap (30) is locked to the lens barrel (22) in a final fastening position by a locking mechanism, wherein the locking mechanism prevents the fastening cap (30) from being moved out of the final fastening position.
In a method for displaying image data in a vehicle (1), at least one image depicting an environment of the vehicle (1) is generated by a camera system (4) of the vehicle (1) and a predefined projection surface is provided. Sensor data representing at least one object in the environment is generated by an environmental sensor system (5) of the vehicle (1). For each of the at least one object, respective object data, which depends on a distance of the respective object from the vehicle (1), is determined depending on the sensor data. The projection surface is adjusted by deforming the projection surface depending on the object data of the at least one object. The at least one image is projected onto the adjusted projection surface. Image data depending on the projected at least one image is displayed on a display device (6) of the vehicle (1).
According to a method for automatic environmental perception based on sensor data of a vehicle (1), a first and a second image (6, 7) of respective sensor modalities (3, 4) are received, a first feature map is generated by applying at least one layer (11, 16) of a neural network (8) to the first image (6) and a second feature map is generated by applying at least one further layer (31, 36) of the neural network (8) to the second image (7). A transformed feature map is generated based on the second feature map using an affine transformation accounting for a deviation in extrinsic parameters of the sensor modalities (3, 4), a first fused feature map is generated by concatenating the first feature map with the transformed feature map, and a visual perception task is carried out depending on the first fused feature map.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 10/143 - Sensing or illuminating at different wavelengths
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
According to a method for determining a trailer orientation, an image (13) depicting a component (14) of the trailer (12) is generated by means of a camera system (15). Predetermined first and second reference structures (16, 18) are identified based on the image (13). The trailer orientation is determined by means of a computing unit (17) depending on the reference structures (16, 18).
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
B60R 1/26 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view to the rear of the vehicle
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
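A minimal reading of the trailer-orientation method above is that the line connecting the two identified reference structures encodes the trailer's yaw. Reducing the determination to an atan2 of that connecting line is a simplification for illustration; the abstract does not specify the computation.

```python
import math

def trailer_yaw(ref1, ref2):
    """Estimate the trailer yaw angle (radians) from the positions
    of the first and second reference structures, e.g. two markers
    identified on the trailer component in a top-view projection
    of the camera image."""
    return math.atan2(ref2[1] - ref1[1], ref2[0] - ref1[0])
```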
57.
VEHICLE DEVICE WITH SPECIFIC CABLE CONNECTION BETWEEN A FAN AND A HOUSING THAT IS SEPARATE THERETO AND COMPRISES A PRINTED CIRCUIT BOARD
The invention relates to a vehicle device (1) comprising a housing (2), in which at least one printed circuit board (7) is arranged, a fan (3), which is arranged outside the housing (2), and an electric cable (11), which is laid through a cable opening (13) in the housing (2) and extends outside and inside the housing (2), wherein the cable (11) is connected to an electric connection (9) of the fan (3) and to the printed circuit board (7). Outside the housing (2), the cable (11) is arranged on a guiding element (12) arranged adjacent to the cable opening (13), and is laid, bent by the guiding element (12) of the vehicle device (1), towards the cable opening (13). The cable opening (13) is arranged at a height position, viewed in the height direction (y) of the vehicle device (1), which is higher than a lowest point (14) of the guiding element (12) contacted by the cable (11).
According to a method for automatic visual perception, first feature maps (17) are generated from a camera image (7) by a first encoder module (11) of a neural network (6) and the first feature maps (17) are transformed into a top view perspective. An ultrasonic pulse is emitted into the environment and an ultrasonic sensor signal (8) is generated depending on reflected portions of the emitted ultrasonic pulse. A spatial ultrasonic map (9) is generated depending on the ultrasonic sensor signal (8) and second feature maps (22) are generated from the ultrasonic map (9) by a second encoder module (12) of the neural network (6). The transformed first feature maps (20) and the second feature maps (22) are fused and a visual perception task is carried out by a decoder module (15a, 15b, 15c) of the neural network (6) depending on the fused feature maps.
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
59.
LOCALIZING A FIFTH WHEEL HITCH COUPLER OF A MOTOR VEHICLE
According to a computer-implemented method for localizing a fifth wheel hitch coupler (3) of a motor vehicle (1), a camera image (8) depicting the hitch coupler (3) is received from a camera (2) of the motor vehicle (1) and a top view image is generated by projecting the camera image (8) to a plane, which is perpendicular to a predefined height axis of the motor vehicle (1). A contour map (13) representing a contour (14) of a coupler throat (4) of the hitch coupler (3) is determined based on the top view image and a two-dimensional in-plane position of the coupler throat (4) is determined by fitting a predefined geometric figure (17) to the contour (14) of the coupler throat (4).
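A coupler throat is roughly circular in top view, so the "fitting a predefined geometric figure to the contour" step above can be illustrated with an algebraic least-squares circle fit. This is only a sketch of the general idea, not the patented implementation; the function name and the Kåsa-style fit are illustrative choices.

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit.

    points: (N, 2) array of 2-D contour coordinates.
    Returns (cx, cy, r). Illustrative stand-in for fitting a
    predefined geometric figure to a detected contour.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Solve  x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r
```

The fitted centre then gives the two-dimensional in-plane position of the throat; for a noiseless circular contour the fit recovers centre and radius exactly.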
The invention relates to an electrical ventilator (7) for a temperature control device (16) of an electronic computing device (3), with a housing (10), in which an electrical ventilation device (17) of the electrical ventilator (7) is arranged, and with an electrical contact device (14), which is formed for contacting with a further electrical contact device (15) of the electronic computing device (3), wherein the contact device (14) is formed for transferring at least electrical energy from the electronic computing device (3) to the electrical ventilation device (17), wherein the housing (10) comprises a shaft (18), which is formed for plunging into a recess (19) corresponding thereto on a further housing (9) of the electronic computing device (3) in certain areas, wherein the contact device (14) is fixed in the shaft (18) and is formed for wirelessly and directly contacting with the further contact device (15). Further, the invention relates to an arrangement (21) as well as to an electronic computing device (3).
According to a computer-implemented method for determining a region of interest (12) from a sequence of camera images (6) of a camera (5) of a vehicle (1), for each camera image (6) of the sequence, a respective individual edge image is generated by applying an edge detection algorithm. A joint edge image is generated by summing the individual edge images. A contour (8) is determined depending on the joint edge image and a convex polygon (7) approximating the contour (8) is determined, wherein an interior of the convex polygon (7) corresponds to the region of interest (12).
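The pipeline above (per-image edge detection, summing into a joint edge image, contour extraction, convex polygon) can be sketched in a few lines. The edge detector and hull routine below are deliberately minimal stand-ins (a gradient-magnitude threshold instead of e.g. Canny, Andrew's monotone chain for the convex hull); all function names and thresholds are illustrative, not from the patent.

```python
import numpy as np

def edge_image(img, frac=0.25):
    # crude gradient-magnitude edge detector (stand-in for e.g. Canny)
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    if mag.max() == 0:
        return np.zeros(img.shape, dtype=np.uint8)
    return (mag > frac * mag.max()).astype(np.uint8)

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Andrew's monotone chain: lower then upper chain of the sorted points
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    hull = []
    for seq in (pts, list(reversed(pts))):
        chain = []
        for p in seq:
            while len(chain) >= 2 and _cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        hull.extend(chain[:-1])
    return hull

def region_of_interest(images, min_votes=2):
    """Sum per-frame edge images; hull of persistent edge pixels = ROI."""
    joint = sum(edge_image(im) for im in images)
    ys, xs = np.nonzero(joint >= min_votes)
    return convex_hull(np.column_stack([xs, ys]))
```

Summing over the sequence suppresses edges that appear in only one frame, so the hull is drawn around structure that persists across images.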
The invention relates to a control device arrangement (10) for a vehicle, comprising a housing (14) and an ECU (12) mounted inside the housing (14) in a mounting position. The housing (14) comprises a front-side access opening (16) for introducing the ECU (12) along a mounting direction (18), sidewalls (20.1, 20.2), which are opposite each other in a transverse direction (22) different from the mounting direction (18), and an electric connector interface (30). The interface (30) is arranged at a rear side (28) of the housing (14), which is opposite the access opening (16). The housing (14) further comprises a centering element (26) located at a first one of the sidewalls (20.1, 20.2), the centering element (26) applying a centering force to the ECU (12) while the ECU (12) is introduced into the housing (14), thereby centering a connecting element of the ECU (12) onto the interface (30).
B60R 16/02 - Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric
H01R 12/89 - Coupling devices connected with low or zero insertion force contact pressure producing means, contacts activated after insertion of printed circuits or like structures acting manually by moving connector housing parts linearly e.g. slider
ELECTRONIC CIRCUIT AND METHOD FOR POWERING MULTIPLE ELECTRONIC SYSTEMS OF AN ELECTRONIC CONTROL UNIT OF A VEHICLE, AND ELECTRONIC CONTROL ARRANGEMENT OF A VEHICLE
Electronic circuit (1) for powering multiple electronic systems (2, 3, 4) of an electronic control unit (70) of a vehicle, the electronic circuit comprising - a power input detector (5), which is configured to detect an input voltage and to generate a first set signal depending on the detected input voltage, - multiple latches (6, 7, 8), wherein the output of the power input detector is connected to a respective set input of each of the multiple latches, wherein each of the multiple latches is configured to generate a respective enable signal depending on the first set signal, - multiple power management circuitries (13, 14, 15), wherein each latch is connected to its associated power management circuitry, wherein each of the multiple power management circuitries is configured to power its associated electronic system depending on the enable signal.
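The detector-latch-power-stage chain above can be simulated to show why the latches matter: once the input voltage has been detected as valid, each enable signal is held even if the input later dips. This is a behavioural sketch only; class and threshold names are illustrative, and a real design would use discrete hardware latches.

```python
class SRLatch:
    """Minimal set/reset latch used to hold an enable signal."""
    def __init__(self):
        self.q = False

    def set(self):
        self.q = True

    def reset(self):
        self.q = False

class PowerCircuit:
    """Behavioural sketch: one shared power input detector, one latch
    and one power-management stage per electronic system."""
    def __init__(self, n_systems, v_threshold=6.0):
        self.v_threshold = v_threshold
        self.latches = [SRLatch() for _ in range(n_systems)]
        self.powered = [False] * n_systems

    def apply_input_voltage(self, v):
        # power input detector: generate the set signal once the
        # input voltage is valid, latching every enable signal
        if v >= self.v_threshold:
            for latch in self.latches:
                latch.set()
        # each power-management stage powers its system from the
        # latched enable, so a brief input dip does not drop power
        self.powered = [latch.q for latch in self.latches]
```

A short dip below the threshold after start-up leaves all systems powered, because the enables are latched rather than derived combinatorially from the detector output.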
A camera (100) for a vehicle (EGO) comprises an assembly of optical elements (20) and has a cone of view (24) with an angle and a direction. The assembly of optical elements (20) comprises a first liquid cell (10), to which a first voltage (V1) is applicable, thereby adjusting the magnitude of the angle of the cone of view (24). The assembly of optical elements (20) comprises a second liquid cell (12), to which a second voltage (V2) is applicable, thereby adjusting the direction of the cone of view (24). The application further relates to a method for operating a camera and a system comprising a camera and a controller.
H04N 23/55 - Optical parts specially adapted for electronic image sensors; Mounting thereof
H04N 23/57 - Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
H04N 23/58 - Means for changing the camera field of view without moving the camera body, e.g. nutating or panning of optics or image sensors
G02B 26/00 - Optical devices or arrangements for the control of light using movable or deformable optical elements
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
65.
METHOD FOR DETERMINING A MOTION MODEL OF AN OBJECT IN THE SURROUNDINGS OF A MOTOR VEHICLE, COMPUTER PROGRAM PRODUCT, COMPUTER-READABLE STORAGE MEDIUM, AS WELL AS ASSISTANCE SYSTEM
A method for determining a motion model of an object by an assistance system is disclosed. The method involves: capturing an image of the surroundings with the moving object by a capturing device; encoding the image by a feature extraction module of a neural network of an electronic computing device; decoding the encoded image by an object segmentation module and generating a first loss function; decoding the encoded image by a bounding box estimation module and generating a second loss function; decoding the second loss function depending on the decoding of the image by a motion decoding module and generating a third loss function; and determining the motion model depending on the first loss function and the third loss function.
An electronic control system (3) for a vehicle (1) contains a first and a second ECU (5a, 5b) comprising a first and a second circuit carrier (6a, 7a, 6b, 7b), respectively, as well as a first and a second cooling channel (8a, 8b), respectively. The cooling channels (8a, 8b) each comprise respective coolant inlets (9a, 9b) and coolant outlets (10a, 10b). A hydraulic manifold (11) of the electronic control system (3) comprises a main inlet (12), a main outlet (13), a first and a second ECU outlet (14, 15) as well as a first and a second ECU inlet (16, 17), to distribute the coolant from the main inlet (12) to the first and the second ECU (5a, 5b) and from the first and the second ECU (5a, 5b) to the main outlet (13).
H05K 7/20 - Modifications to facilitate cooling, ventilating, or heating
67.
CONTROL DEVICE ARRANGEMENT WITH HEAT TRANSFER MATERIAL ARRANGED IN A CHANNEL SYSTEM OF A CIRCUIT BOARD UNIT AND METHOD FOR PRODUCING A CONTROL DEVICE ARRANGEMENT
The invention relates to a control device arrangement (10) with a housing (22), in which at least one circuit board unit (12, 14, 16, 18) and at least one cooling device (26, 28) are arranged. The at least one cooling device (26, 28) is supplied with a cooling fluid in the cooling operation and formed for dissipating heat from the at least one circuit board unit (12, 14, 16, 18). The at least one circuit board unit (12, 14, 16, 18) is thermally coupled to the at least one cooling device (26, 28) by means of a heat transfer material. The circuit board unit (12, 14, 16, 18) comprises an outer wall (36) facing the cooling device (26, 28), in which at least one recess is formed. The heat transfer material is arranged in the at least one recess. Furthermore, the invention relates to a method for producing such a control device arrangement (10).
The invention relates to a circuit board arrangement (10) with at least one housing (12), in which a plurality of circuit boards (14) is arranged, and with at least one cooling device (16), through which a cooling fluid can be passed and which is formed for dissipating heat from at least one of the circuit boards (14). The circuit board (14) is thermally coupled to the cooling device (16) by means of a heat transfer element. The respective circuit board (14) is pivotable about a pivot axis (44) from a mounting position, in which the heat transfer element abuts both on the cooling device (16) and on the circuit board (14), into an intermediate position. In the intermediate position, an edge area (50) of the circuit board (14) far from the pivot axis (44) is further spaced from the cooling device (16) than in the mounting position. The housing (12) comprises at least one guide element (38), wherein the respective circuit board (14) is displaceable along the at least one guide element (38) into the intermediate position when the circuit board (14) is introduced into the housing (12).
One aspect of the invention relates to an electric plug (1) for connection to a vehicle camera (18), comprising a plug head (2) and at least one electric signal contact (4) and at least one ground contact (5), which are configured on the plug head (2), wherein the plug head (2) comprises a carrier block (7) for the signal contact (4) and the ground contact (5), and the carrier block (7) comprises a jacket wall (8), on which a cover collar (9) projecting outwardly and extending circumferentially on the jacket wall (8) is arranged, wherein an outwardly projecting shielding flap (10, 11) of the plug (1) is integrally formed with the cover collar (9), wherein the shielding flap (10, 11) is connected to a connecting element (16), which is electrically connected to the cover collar (9) and the ground contact (5). One aspect relates to an arrangement (17).
A method for forming a point cloud corresponding to the topography of an imaged environment involves acquiring an image of the environment with a camera having a WFOV lens mounted on a vehicle; changing the camera pose by an adjustment greater than a threshold; and acquiring another image of the environment with the camera at the changed pose. The images are mapped onto respective surfaces to form respective mapped images, defined by the same nonplanar geometry. One of the mapped images is divided into blocks of pixels. For each block, a depth map is formed by performing a search through the other mapped image to evaluate a disparity in the position of the location of the block of pixels in each of the mapped images. The depth map is converted into a partial point cloud corresponding to the local topography of the environment surrounding the vehicle as it moves through the environment.
G06T 7/579 - Depth or shape recovery from multiple images from motion
G06T 7/593 - Depth or shape recovery from multiple images from stereo images
H04N 13/221 - Image signal generators using stereoscopic image cameras using a single 2D image sensor using the relative movement between cameras and objects
H04N 13/243 - Image signal generators using stereoscopic image cameras using three or more 2D image sensors
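The per-block disparity search described above can be illustrated with a minimal sum-of-absolute-differences (SAD) block matcher. Note the simplifying assumptions: plain rectified images rather than the nonplanar surface-mapped images of the method, a purely horizontal search, and one disparity per block; all names and parameters are illustrative.

```python
import numpy as np

def block_disparity(left, right, block=8, max_disp=16):
    """Per-block horizontal disparity via sum-of-absolute-differences.

    Minimal sketch of a block search: for each block of the left
    image, scan candidate positions in the right image and keep the
    shift with the lowest SAD cost.
    """
    h, w = left.shape
    disp = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            patch = left[y0:y0 + block, x0:x0 + block]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x0) + 1):
                cand = right[y0:y0 + block, x0 - d:x0 - d + block]
                cost = np.abs(patch - cand).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[by, bx] = best_d
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    # depth = f * B / d; zero disparity mapped to infinity
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, focal_px * baseline_m / disp, np.inf)
```

Converting the disparity map with the camera focal length and the baseline (here, the pose adjustment between the two acquisitions) yields the partial depth map that is then turned into a point cloud.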
71.
METHOD FOR REDUCING A COLOR SHIFT OF IMAGE PIXELS OF AN IMAGE FOR A MOTOR VEHICLE CAPTURED BY A CAMERA
A method for reducing a color shift of image pixels of an image for a motor vehicle captured by a camera, a computer program product and a control device for the vehicle are disclosed. For the respective image pixel of the image, the method involves determining a color information, which describes a color of the image pixel, and checking whether the determined color information is larger than a minimum color information and smaller than a maximum color information, wherein the minimum and maximum color information delimit a color information range in which reference image pixels describing a reference object vary. When the check is successful, a corrected color information is calculated in consideration of the determined color information and a correction factor, and the calculated corrected color information is provided instead of the determined color information, wherein the corrected color information has a reduced color shift in comparison to the determined color information.
H04N 23/84 - Camera processing pipelines; Components thereof for processing colour signals
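The check-and-correct step described above reduces to a range test per pixel followed by a multiplicative correction. The sketch below applies it with numpy; the correction factor and function name are illustrative, and a production pipeline would derive the range limits from the reference object statistics.

```python
import numpy as np

def reduce_color_shift(image, c_min, c_max, correction=0.85):
    """Scale color values lying in the reference range (c_min, c_max).

    Sketch: pixels whose color information falls inside the range
    spanned by the reference object are multiplied by a correction
    factor; all other pixels pass through unchanged.
    """
    img = image.astype(float)
    in_range = (img > c_min) & (img < c_max)
    return np.where(in_range, img * correction, img)
```

Only pixels inside the open interval are corrected, which matches the "larger than a minimum ... and smaller than a maximum" check in the abstract.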
72.
METHOD FOR DETECTING AND RECOGNIZING A STREET SIGN IN AN ENVIRONMENT OF A MOTOR VEHICLE BY AN ASSISTANCE SYSTEM, COMPUTER PROGRAM PRODUCT, COMPUTER-READABLE STORAGE MEDIUM, AS WELL AS ASSISTANCE SYSTEM
The invention relates to a method for detecting and recognizing (31) a street sign (6) in an environment (5) of a motor vehicle (1) by an assistance system (2) of the motor vehicle (1), comprising the steps of: capturing at least one image (9) of the environment (5) by an optical capturing device (3) of the assistance system (2); encoding the captured image (9) by a transformer device (12) of an electronic computing device (4) of the assistance system (2); first decoding of the encoded image (9) by a detection transformer device (13) of the electronic computing device (4) for decoding object features (15) in the captured image (9); second decoding of the encoded image (9), wherein the second decoding is performed in parallel to the first decoding, by a recognition transformer device (14) of the electronic computing device (4) for text recognition (16) in the captured image (9); and detecting and recognizing (31) the street sign (6) depending on the decoded object features (15) and the text recognition (16) by the electronic computing device (4). Further, the invention relates to a computer program product, a computer-readable storage medium, as well as an assistance system (2).
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G01C 21/00 - Navigation; Navigational instruments not provided for in groups
G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/40 - Scenes; Scene-specific elements in video content
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 30/18 - Extraction of features or characteristics of the image
A computer-implemented method for combining camera information given by at least one camera image (14) and further information given by environmental sensor data (15) comprises generating a further image (17, 18) by an artificial neural network, ANN, (10) depending on the environmental sensor data (15). A mask feature map (19) is generated by applying a masking module (24) of the ANN (10) to the at least one camera image (14) and the further image (17, 18), wherein, for each image point of the further image (17, 18), the mask feature map (19) specifies whether the at least one camera image (14) comprises a corresponding camera image point. A combined visual representation (16) of the camera information and the further information is generated by using a generative adversarial network, GAN, (13) of the ANN (10) depending on the mask feature map (19).
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a patternLocating or processing of specific regions to guide the detection or recognition
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
An automotive camera (3) comprises an imager (5), wherein the imager (5) comprises an array of optical detectors, which is configured to generate pixel data depending on light impinging on the array of optical detectors. The imager (5) is configured to provide image data depending on the pixel data at a data output of the imager (5). The imager (5) comprises an error monitoring unit, which is configured to detect an internal error of the imager (5). The error monitoring unit is configured to determine a bit pattern (8), which is assigned to the detected internal error, and to generate an error signal (9) representing the bit pattern (8) at an error output (7) of the imager (5).
According to a computer-implemented method for image compression, a compressed image (13) is generated by applying a compression module (14a) of an artificial neural network (14), which is trained for image compression, to input data, which comprises an input image (12) or depends on the input image (12). A further artificial neural network (15), which is trained for carrying out at least one computer vision task and comprises a first hidden layer (19a, 19b, 19c), is applied to the input image (12). The input data comprises an output of the first hidden layer (19a, 19b, 19c) or depends on the output of the first hidden layer (19a, 19b, 19c).
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
A method for manufacturing a camera (1) comprises connecting a lens unit (9) to a first housing part (4) and mounting a circuit board (7) to the first housing part (4), wherein a position and/or orientation of the circuit board (7) is aligned with respect to the lens unit (9). A second housing part (5) is connected to the first housing part (4), such that a housing interior (8) accommodates the circuit board (7). A first component (6a) of a connector assembly (6) is fastened to the second housing part (5) such that the connector assembly (6) passes through an opening of the second housing part (5). A second component (6b) of the connector assembly (6) is connected to the first component (6a), such that the first component (6a) and the second component (6b) are moveable with respect to each other.
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
H04N 23/57 - Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
H01R 13/22 - Contacts for co-operating by abutting
An automotive camera (1) comprises a camera body (4), which has an essentially cuboid outer shape. The camera body (4) has a first chamfer (7) at a first edge (8) of the essentially cuboid outer shape, which extends only partially from a lateral surface (10, 19) of the essentially cuboid outer shape along the first edge (8), wherein the first chamfer (7) breaks a two-fold or four-fold rotational symmetry of an outline of the lateral surface (10, 19).
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
A housing (1) for at least one electronic circuit (3, 4) comprises a housing portion (2) with an inner surface (9) and a heat spreader component (5), which is connected to the inner surface (9) of the housing portion (2). The heat spreader component (5) comprises a contact region to be connected to an electronic component (4) of the at least one electronic circuit (3, 4), and the heat spreader component (5) comprises a material whose thermal conductivity is greater than a thermal conductivity of the housing portion (2).
A method, for detecting an artefact in a stitched image, comprises: acquiring component images of an environment from respective vehicle mounted cameras with overlapping fields of view; forming (410) a stitched image from the component images; processing (420) at least a portion of the stitched image corresponding to the overlapping field of view with a classifier to provide a list of detected objects from the environment at respective locations in the stitched image; determining (430) whether any detected object in the list of detected objects is a duplicate of another object in the list of detected objects; and reporting any objects that are determined to be duplicates.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
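The duplicate-determination step above needs a criterion for when two detections in the stitching overlap are the same object. A common, simple proxy is intersection-over-union (IoU) between same-class boxes; note that stitching ghosts can also appear offset, so a real system might additionally use a distance criterion. Function names and the threshold are illustrative.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def find_duplicates(detections, iou_thresh=0.5):
    """Report index pairs of same-class detections that overlap strongly.

    detections: list of (class_label, box) tuples; a high IoU between
    two boxes of the same class is treated as a duplicated (ghosted)
    object in the stitched image.
    """
    dups = []
    for i in range(len(detections)):
        for j in range(i + 1, len(detections)):
            ci, bi = detections[i]
            cj, bj = detections[j]
            if ci == cj and iou(bi, bj) >= iou_thresh:
                dups.append((i, j))
    return dups
```

Restricting the classifier to the portion of the stitched image covering the overlapping field of view, as the method does, keeps this pairwise check cheap.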
An image processing method comprises: performing a spectral analysis of a HDR image (220) to determine whether the HDR image (220) comprises spectral components indicative of a lighting of the scene by a modulated light source; analysing meta-data associated with a set of component images (221) for the HDR image to determine a difference between meta-data for one component image of the set and meta-data for at least one other component image of the set, any difference being indicative of an artefact caused by illumination of the scene by a modulated light source; combining a result of the spectral analysis and the meta-data analysis to provide an indication that illumination of the scene by a modulated light source is causing a visible artefact in at least one of the HDR images; and changing a HDR operating mode of the image processing system (200) accordingly.
H04N 23/745 - Detection of flicker frequency or suppression of flicker wherein the flicker is caused by illumination, e.g. due to fluorescent tube illumination or pulsed LED illumination
H04N 23/667 - Camera operation mode switching, e.g. between still and video, sport and normal or high and low resolution modes
H04N 23/71 - Circuitry for evaluating the brightness variation
H04N 23/741 - Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
H04N 23/73 - Circuitry for compensating brightness variation in the scene by influencing the exposure time
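The spectral-analysis step above looks for periodic components introduced by a modulated light source. With a rolling shutter, such modulation shows up as brightness banding along image rows, so one minimal sketch is an FFT of the per-row mean intensity; the score and thresholding below are illustrative, and the complementary metadata comparison of the method is not shown.

```python
import numpy as np

def flicker_score(image):
    """Spectral check for horizontal banding from a modulated light source.

    Returns the ratio of the strongest non-DC component to the total
    non-DC energy of the per-row mean intensity; a ratio near 1.0
    indicates a dominant periodic component, i.e. likely flicker.
    """
    rows = image.mean(axis=1)
    spec = np.abs(np.fft.rfft(rows - rows.mean()))[1:]  # drop DC bin
    total = spec.sum()
    return float(spec.max() / total) if total > 0 else 0.0
```

A synthetic image with sinusoidal banding scores close to 1.0, while unstructured content spreads its energy over many bins and scores much lower; combining such a score with the metadata difference would then drive the HDR mode change.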
A method for determining a coupling angle (a) between a first vehicle (110) coupled to a second vehicle (100) comprises: receiving an image of the first vehicle from a camera (101) mounted on the second vehicle; performing (210) a polar transformation on the image to form a polar image (400) in a polar space having an origin corresponding to a location of a pivot point (303) of the coupling (302) to the second vehicle in the received image coordinate space; estimating (230) an optical coupling angle by analysing the content of the polar image; receiving a signal from an odometer (102) mounted on the first or second vehicle; estimating a kinematic coupling angle from the signal and a kinematic model of the first vehicle and the coupled second vehicle; and combining the estimated optical coupling angle and the estimated kinematic coupling angle to provide the coupling angle.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
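Two pieces of the method above are easy to sketch: the polar-space mapping of an image point about the pivot, and the final combination of the optical and kinematic angle estimates. The fixed weighted average below is illustrative; a real system might weight by each estimate's confidence or filter over time. Angles are combined on the unit circle so wrap-around is handled correctly.

```python
import numpy as np

def to_polar_angle(point, pivot):
    """Angle of an image point about the coupling pivot, in radians."""
    return float(np.arctan2(point[1] - pivot[1], point[0] - pivot[0]))

def fuse_angles(optical, kinematic, w_optical=0.7):
    """Weighted fusion of the optical and kinematic coupling angles.

    Each estimate is mapped onto the unit circle, averaged with the
    given weight, and the fused angle is read back off; this avoids
    discontinuities near the +/- pi wrap-around.
    """
    z = (w_optical * np.exp(1j * optical)
         + (1 - w_optical) * np.exp(1j * kinematic))
    return float(np.angle(z))
```

With equal weights the fused angle lies between the two inputs, and identical inputs pass through unchanged.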
84.
METHOD FOR MOUNTING A CAMERA FOR A VEHICLE, PRE-MOUNTING MODULE FOR A CAMERA AS WELL AS CAMERA
A method for mounting a camera for a vehicle comprises: providing a first housing part of an electronic housing for the camera, in which a first groove is formed on an outer side of the first housing part; providing a housing outer part separate from the first housing part, which comprises a plug channel and an inner side, wherein a second groove is formed on the inner side; introducing an adhesive part into the first groove and/or into the second groove; and assembling and retaining the first housing part to the housing outer part by an adhesive connection, wherein for this purpose the adhesive part in the first groove adheres to the first housing part and the adhesive part in the second groove adheres to the housing outer part.
B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
G03B 30/00 - Camera modules comprising integrated lens units and imaging units, specially adapted for being embedded in other devices, e.g. mobile phones or vehicles
An aspect of the invention relates to a camera (4) for a motor vehicle (1), with a housing (9), which is formed of an electrically conductive material, and with at least one circuit board (47), which is arranged in the housing (9), wherein the camera (4) comprises an electrically conductive circuit board carrier (24) separate from the housing (9), which carries the circuit board (47) and is connected to the circuit board (47) in electrically conductive manner, wherein the circuit board carrier (24) is arranged in the housing (9) and is connected to the housing (9) in electrically conductive manner. A further aspect relates to a method for mounting a camera (4).
A camera (2) for a motor vehicle (1) comprises a lens module, a lens holder body (5), which is mechanically connected to the lens module, and an electrical heater element (6) for heating the lens module. The heater element (6) is arranged at a surface of the lens holder body (5) facing away from the lens module and the heater element (6) is thermally connected to the lens module via the lens holder body (5).
An image processing method for harmonizing images acquired by a first camera and a second camera connected to a vehicle and arranged in such a way that their fields of view cover the same road space at different times as the vehicle travels along a travel direction is disclosed. The method includes: acquiring, by a selected camera, a first image at a first time; selecting a first region of interest bounding a road portion from the first image; sampling the first region of interest; acquiring, by the other camera, a second image in such a way that the road portion is included in a second region of interest; sampling the second region of interest; and determining one or more correction parameters for harmonizing images acquired by the first and second cameras, based on a comparison between the image content of the first and second regions of interest.
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
H04N 23/76 - Circuitry for compensating brightness variation in the scene by influencing the image signals
H04N 23/90 - Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
H04N 9/64 - Circuits for processing colour signals
H04N 9/67 - Circuits for processing colour signals for matrixing
H04N 9/68 - Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits
B60R 1/22 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
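Since both regions of interest above sample the same road portion seen at different times, the simplest comparison-based correction parameter is a per-channel gain given by the ratio of their mean colors. This is a sketch under that assumption; a real pipeline would also handle offsets, exposure metadata, and robust statistics, and the function name is illustrative.

```python
import numpy as np

def harmonization_gains(roi_first, roi_second):
    """Per-channel gains aligning the second camera to the first.

    roi_first, roi_second: (H, W, C) arrays sampling the same road
    portion from the two cameras. Returns a length-C gain vector
    such that roi_second * gains matches roi_first in the mean.
    """
    m1 = roi_first.reshape(-1, roi_first.shape[-1]).mean(axis=0)
    m2 = roi_second.reshape(-1, roi_second.shape[-1]).mean(axis=0)
    return m1 / m2
```

Applying the gains to the second camera's images then removes a constant per-channel color and brightness mismatch between the two views.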
88.
REDUCING ADVERSE ENVIRONMENTAL INFLUENCES IN A CAMERA IMAGE
According to a computer-implemented method for training an ANN, a training image (Xa) is provided, wherein each of a set of adverse environmental influence factors is either present or absent in the training image (Xa). A set of features (z) is generated by applying a generator encoder module (Ge) to the training image (Xa). A predefined set of reference attributes (b) is provided, each specifying an intended absence of an adverse environmental influence factor. An improved training image (Xb) is generated by applying a generator decoder module (Gd) to the set of features (z) depending on the set of reference attributes (b). A discriminator module (D) is applied to the improved training image (Xb) and an adversarial loss (14a) is computed depending on an output of the discriminator module (D) for adapting the generator module (G).
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/98 - Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
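A standard choice for an adversarial loss that adapts the generator is the non-saturating GAN formulation sketched below. The abstract does not fix the concrete loss, so this form is an assumption:

```python
import math

def adversarial_generator_loss(discriminator_scores):
    """Non-saturating generator loss: low when the discriminator rates the
    improved images as realistic (scores close to 1), high otherwise."""
    eps = 1e-12  # numerical floor keeps log() finite for scores near 0
    return -sum(math.log(max(s, eps)) for s in discriminator_scores) / len(discriminator_scores)

print(adversarial_generator_loss([0.9, 0.95]))  # low: generator fools D
print(adversarial_generator_loss([0.1, 0.05]))  # high: D rejects the images
```

During training, this scalar would be backpropagated through the discriminator into the generator encoder/decoder modules.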
ELECTRONIC COMPUTING DEVICE FOR AN ASSISTANCE SYSTEM OF A MOTOR VEHICLE, ASSISTANCE SYSTEM AS WELL AS METHOD FOR PRODUCING AN ELECTRONIC COMPUTING DEVICE
The invention relates to an electronic computing device (3), with at least a first housing (7a, 7b), wherein a first processor device (9) is arranged in a first internal space (8), and with a cooling device (20) for cooling the first processor device (9), which is formed on a first outer wall (21) of the first housing (7a, 7b), wherein a separate second housing (14a, 14b) with a second internal space (15) is arranged stacked on the first housing (7a, 7b) viewed along a vertical direction (22) of the electronic computing device (3), wherein a second processor device (16) is arranged in the second internal space (15), and wherein the second housing (14a, 14b) is arranged on the cooling device (20) with a second outer wall (23), such that the cooling device (20) is bounded by the first and the second outer wall (21, 23) and is formed for cooling the second processor device (16). Further, the invention relates to an assistance system (2) as well as to a method for producing an electronic computing device (3).
For analyzing a roundabout (9), at least one initial feature map is generated by applying a feature encoder module (10) of a neural network (6) to an input image. A classificator module (11) is applied to the at least one initial feature map, wherein an output of the classificator module (11) represents a road region in the input image. A radius estimation module (12, 13) is applied to the at least one initial feature map, wherein an output of the radius estimation module (12, 13) depends on an inner radius (14a) of the roundabout (9) and an outer radius (14b) of the roundabout (9). At least one entry point (15a, 15b, 15c, 15d) and/or at least one exit point (16a, 16b, 16c, 16d) of the roundabout (9) are determined depending on the outputs of the classificator module (11) and the radius estimation module (12, 13).
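A toy illustration of the last step: once a road classification mask and the outer radius are available, candidate entry/exit points can be located by sampling the circle at the outer radius and collecting the angles where the mask indicates road. The grid, centre, and radius below are hypothetical:

```python
import math

def boundary_road_angles(road_mask, cx, cy, outer_radius, samples=360):
    """Sample the circle at the estimated outer radius and return the angles
    (degrees) at which the road classifier fired - entry/exit candidates."""
    angles = []
    for k in range(samples):
        a = 2 * math.pi * k / samples
        x = round(cx + outer_radius * math.cos(a))
        y = round(cy + outer_radius * math.sin(a))
        if 0 <= y < len(road_mask) and 0 <= x < len(road_mask[0]) and road_mask[y][x]:
            angles.append(round(math.degrees(a)))
    return angles

# A single road pixel due east of the roundabout centre:
mask = [[0] * 21 for _ in range(21)]
mask[10][18] = 1
angles = boundary_road_angles(mask, 10, 10, 8)
print(angles)  # angles clustered around 0 degrees
```

A full system would cluster these angles into discrete entry and exit points and distinguish them by traffic direction.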
The invention provides an internally aligned camera device comprising a front housing assembly, a first printed circuit board (PCB), a second PCB and a flexible PCB, a PCB retention cage and a rear housing assembly. The front housing assembly comprises lens elements for forming an image on an image sensor operably coupled to the first PCB, said image sensor optically aligned with said front housing assembly comprising said lens elements. The second PCB is electrically coupled to said first PCB using a flexible PCB, where the second PCB is folded over said first PCB. The PCB retention cage retains the second PCB in position. The rear housing assembly comprises a metal shield which clamps down the second PCB and said PCB retention cage in position. Further, said front housing assembly is centrally aligned and attached with said rear housing assembly. A method of manufacturing the camera device is also described.
B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
H04N 23/52 - Elements optimising image sensor operation, e.g. for electromagnetic interference [EMI] protection or temperature control by heat transfer or cooling elements
H04N 23/54 - Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
H04N 23/55 - Optical parts specially adapted for electronic image sensors; Mounting thereof
H04N 23/57 - Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
92.
CAMERA FOR A MOTOR VEHICLE WITH SPECIFIC LENS HEATER, AS WELL AS MOTOR VEHICLE
The invention relates to a camera (4) for a motor vehicle (1), with a housing (23), wherein a receiving space (24), in which a lens module (9) of the camera (4) and a circuit board (25) of the camera (4) are arranged, is bounded by the housing (23), and comprising a heater (11) for heating the lens module (9), wherein a clearance (32) is formed between the circuit board (25) and the lens module (9), wherein a support plate (28) of the camera (4) is arranged between the circuit board (25) and the lens module (9), wherein the support plate (28) comprises first electrical contact areas (29, 30), to which electrical lines (12, 12a, 12b) of the heater (11) can be connected, and/or the support plate (28) comprises second electrical contact areas (33, 34), to which electrical contact elements (35, 36), which are arranged on the circuit board (25), can be connected, so that energy can be transferred from the circuit board (25) to the lens module (9) to perform the heating of the lens module (9) with the heater (11).
The invention relates to a camera (4) for a motor vehicle (1), with a housing (23), wherein a receiving space (24), in which a lens module (9) of the camera (4) and a circuit board (25) of the camera (4) are arranged, is bounded by the housing (23), wherein the camera (4) has a longitudinal axis (A), wherein the circuit board (25) and the lens module (9) are arranged axially spaced from each other in the direction of this longitudinal axis (A) such that a clearance (32) is formed between the circuit board (25) and the lens module (9), wherein an elastic seal (38) is arranged between the circuit board (25) and the lens module (9) such that the clearance (32) is sealed from the remaining volume space in the housing (23) by the seal (38).
For communication in an electronic control arrangement (2), video data is transmitted from a first processing unit (7a) of a first ECU (4a) via a first digital video interface (12a), a serializer (10), a data cable (8) between the first ECU (4a) and a second ECU (4b), a deserializer (11) and a second digital video interface (12b) to a second processing unit (7b) of the second ECU (4b). Communication data is exchanged between the first processing unit (7a) and the second processing unit (7b) via a first ethernet interface (14a), the serializer (10), the data cable (8), the deserializer (11) and a second ethernet interface (14b) of the second ECU (4b).
For driver assistance for a combination (8) with a motor vehicle (9) and a trailer (10), a first camera image (19) and a second camera image (20) are generated. A combined image (21) is generated by means of a computing unit (13) by superimposing the camera images (19, 20) such that the second camera image (20) covers a subsection of the first camera image (19), wherein a hitch angle (14) of the combination (8) is determined by means of the computing unit (13). State data of the combination (8) are determined by means of a sensor system (17) and it is determined whether the combination (8) moves forward or backward. The hitch angle (14) is determined based on the state data, if the combination (8) moves forward and based on a change of time-dependent image data, if the combination moves backward. A position of the subsection is determined depending on the hitch angle (14).
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/50 - Extraction of image or video features by performing operations within image blocks; Extraction of image or video features by using histograms, e.g. histogram of oriented gradients [HoG]; Extraction of image or video features by summing image-intensity values; Projection analysis
B60R 1/26 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view to the rear of the vehicle
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
B60R 11/00 - Arrangements for holding or mounting articles, not otherwise provided for
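The final step, deriving the position of the subsection from the hitch angle (14), can be approximated by a linear angle-to-pixel mapping. The image width and field of view below are hypothetical values, and the linear mapping is a simplification of what a calibrated projection would compute:

```python
def overlay_offset_px(hitch_angle_deg, image_width_px, horizontal_fov_deg):
    """Horizontal pixel shift of the trailer-camera subsection inside the
    combined image, assuming a simple linear angle-to-pixel mapping (a real
    system would use the camera's calibrated projection model)."""
    return round(hitch_angle_deg / horizontal_fov_deg * image_width_px)

print(overlay_offset_px(10.0, 1280, 100.0))  # → 128
print(overlay_offset_px(0.0, 1280, 100.0))   # → 0: trailer aligned, no shift
```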
96.
METHOD FOR DETERMINING A DEGRADATION DEGREE OF A CAPTURED IMAGE, COMPUTER PROGRAM PRODUCT, COMPUTER-READABLE STORAGE MEDIUM AS WELL AS ASSISTANCE SYSTEM
The invention relates to a method for determining a degradation degree (3) of an image (5) captured by a camera (4) of an assistance system (2) of a motor vehicle (1) by the assistance system (2), comprising the steps of: - capturing the image (5) by the camera (4); - performing a deep feature extraction of a plurality of pixels (8) of the image (5) by an encoding module (9) of an electronic computing device (6) of the assistance system (2); - clustering the plurality of pixels (8) by a feature point cluster module (10) of the electronic computing device (6); - regressing the clustered pixels (8) by a regression module (11) of the electronic computing device (6); and - determining the degradation degree (3) depending on an evaluation by applying a sigmoid function (20) after the regression by a sigmoid function module (12) of the electronic computing device (6) as an output of the sigmoid function module (12). Further, the invention relates to a computer program product, to a computer-readable storage medium as well as to an assistance system (2).
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
G06K 9/62 - Methods or arrangements for recognition using electronic means
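The last stage of the pipeline, mapping the regression output to a bounded degradation degree, is the standard sigmoid; only the interpretation of the output range is an assumption here:

```python
import math

def degradation_degree(regression_output):
    """Squash the regression module's score into (0, 1): values near 1 flag
    a strongly degraded image, values near 0 a clean one (interpretation of
    the range is an assumption)."""
    return 1.0 / (1.0 + math.exp(-regression_output))

print(degradation_degree(0.0))  # → 0.5 (undecided)
```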
97.
ELECTRICAL VENTILATOR FOR A TEMPERATURE CONTROL DEVICE OF AN ELECTRONIC COMPUTING DEVICE, ARRANGEMENT AS WELL AS ELECTRONIC COMPUTING DEVICE
The invention relates to an electrical ventilator (7) for a temperature control device (16) of an electronic computing device (3), with a housing (10), in which an electrical ventilation device (17) of the electrical ventilator (7) is arranged, and with an electrical contact device (14), which is formed for contacting with a further electrical contact device (15) of the electronic computing device (3), wherein the contact device (14) is formed for transferring at least electrical energy from the electronic computing device (3) to the electrical ventilation device (17), wherein the housing (10) comprises a shaft (18), which is formed for plunging into a recess (19) corresponding thereto on a further housing (9) of the electronic computing device (3) in certain areas, wherein the contact device (14) is fixed in the shaft (18) and is formed for wirelessly and directly contacting with the further contact device (15). Further, the invention relates to an arrangement (21) as well as to an electronic computing device (3).
An image processing method is operable in an image acquisition system comprising a camera arranged to capture successive images with a field of view, FOV of a portion of an environment surrounding a vehicle, the FOV intersecting a window of the vehicle comprising one or more heater elements that are visible within the FOV of the camera. The method comprises: obtaining an image; determining whether one or more sequences of pixels within the image corresponds to an image of a respective heater element, each pixel within the or each sequence of pixels having a colour and an intensity within respective thresholds; correcting, within the image, the or each sequence of pixels corresponding to a respective heater element by replacing pixel values for the or each sequence of pixels with pixel values derived from pixels which do not correspond with a heater element; and displaying the corrected image.
B60R 1/28 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with an adjustable field of view
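Per image row, the correction described above can be sketched as: flag pixels whose intensity falls inside an assumed heater band, then replace each flagged pixel with the nearest pixel outside the band. This is a 1-D grayscale simplification of the described replacement, and the thresholds are assumptions:

```python
def correct_heater_rows(image, band_lo, band_hi):
    """Replace horizontal runs of heater-element pixels (intensity inside
    [band_lo, band_hi]) with the nearest non-heater pixel in the same row."""
    out = [row[:] for row in image]
    for row in out:
        for x in range(len(row)):
            if band_lo <= row[x] <= band_hi:
                # nearest non-heater pixel to the left, then to the right
                left = next((row[i] for i in range(x - 1, -1, -1)
                             if not (band_lo <= row[i] <= band_hi)), None)
                right = next((row[i] for i in range(x + 1, len(row))
                              if not (band_lo <= row[i] <= band_hi)), None)
                replacement = left if left is not None else right
                if replacement is not None:
                    row[x] = replacement
    return out

# A bright heater trace (200, 210) crossing a darker road row:
print(correct_heater_rows([[50, 200, 210, 60]], 180, 255))  # → [[50, 50, 50, 60]]
```

A production version would additionally gate on colour, track the known heater-element geometry across frames, and blend rather than copy replacement values.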
A video processing system (2) comprises a processing unit (7), a video output interface (10) and at least one memory interface (12, 14). The processing unit (7) is configured to receive file content, to determine first pixel values for a first pixel according to a predefined color space based on the file content, wherein the first pixel values encode a bit string of the file content, and to determine second pixel values for a second pixel according to the color space independently of the file content. The processing unit (7) is configured to generate a frame for a video stream, the frame comprising the first pixel within a region of interest (22) of the frame and the second pixel outside of the region of interest (22). The processing unit (7) is configured to provide the video stream at the video output interface (10). (Fig. 2)
H04N 19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
B60R 1/20 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
G06F 11/10 - Adding special bits or symbols to the coded information, e.g. parity check, casting out nines or elevens
H04N 19/184 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
H04N 19/46 - Embedding additional information in the video signal during the compression process
H04N 19/65 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using error resilience
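A minimal sketch of the frame generation: pixels inside the region of interest each carry one file bit, while pixels outside get a fixed filler value that is independent of the file content. The grayscale encoding (0/255), the filler value, and the ROI tuple layout are all assumptions:

```python
def encode_frame(bits, width, height, roi):
    """Build one grayscale frame; roi = (x0, y0, x1, y1) with exclusive ends.
    Inside the ROI: bit '1' -> 255, bit '0' -> 0. Outside: filler 128,
    chosen independently of the file content."""
    x0, y0, x1, y1 = roi
    frame = [[128] * width for _ in range(height)]  # content-independent filler
    bit_iter = iter(bits)
    for y in range(y0, y1):
        for x in range(x0, x1):
            bit = next(bit_iter, None)
            if bit is None:            # file content exhausted
                return frame
            frame[y][x] = 255 if bit == "1" else 0
    return frame

frame = encode_frame("1010", 4, 4, (1, 1, 3, 3))
# frame[1][1] = 255, frame[1][2] = 0, frame[2][1] = 255, frame[2][2] = 0
```

Keeping payload bits strictly inside the ROI is what lets a decoder ignore the filler region when recovering the bit string.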
100.
METHOD FOR DETERMINING A MOTION MODEL OF AN OBJECT IN THE SURROUNDINGS OF A MOTOR VEHICLE, COMPUTER PROGRAM PRODUCT, COMPUTER-READABLE STORAGE MEDIUM, AS WELL AS ASSISTANCE SYSTEM
The invention relates to a method for determining a motion model (3) of an object (4) by an assistance system (2), the method comprising the steps: - capturing an image (20, 21) of the surroundings (5) with the moving object (4) by a capturing device (6); - encoding the image (20, 21) by a feature extraction module (9) of a neural network (8) of an electronic computing device (7); - decoding the encoded image (20, 21) by an object segmentation module (10) and generating a first loss function (22); - decoding the at least one encoded image (20, 21) by a bounding box estimation module (11) and generating a second loss function (23); - decoding the second loss function (23) depending on the decoding of the image (20, 21) by a motion decoding module (12) and generating a third loss function (24); and - determining the motion model (3) depending on the first loss function (22) and the third loss function (24). Further, the invention relates to a computer program product, a computer-readable storage medium, as well as an assistance system (2).
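The wiring of the three decoding heads can be caricatured as follows. All arithmetic is placeholder; only the data flow mirrors the abstract: the bounding-box loss feeds the motion head, and the final objective combines the first (segmentation) and third (motion) losses:

```python
class MotionModelSketch:
    """Toy stand-in for the three decoding heads reading shared features:
    a segmentation head, a bounding-box head, and a motion head whose loss
    derives from the bbox loss. Placeholder arithmetic, not the patented
    method."""
    def losses(self, features):
        seg_loss = abs(features - 1.0)    # first loss function (22)
        bbox_loss = abs(features - 0.5)   # second loss function (23)
        motion_loss = 0.5 * bbox_loss     # third loss (24), from the second
        return seg_loss, bbox_loss, motion_loss

    def objective(self, features):
        """Motion model is fitted from the first and third losses only."""
        seg_loss, _, motion_loss = self.losses(features)
        return seg_loss + motion_loss

print(MotionModelSketch().objective(1.0))  # → 0.25
```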