Patent classifications
G06T2207/30248
System and method for lateral vehicle detection
A system and method for lateral vehicle detection is disclosed. A particular embodiment can be configured to: receive lateral image data from at least one laterally-facing camera associated with an autonomous vehicle; warp the lateral image data based on a line parallel to a side of the autonomous vehicle; perform object extraction on the warped lateral image data to identify extracted objects in the warped lateral image data; and apply bounding boxes around the extracted objects.
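The pipeline in this abstract (warp relative to a line parallel to the vehicle side, extract objects, fit bounding boxes) can be sketched as a toy grid-based version. All function names, the row-shear warp, and the connected-component extraction are illustrative assumptions, not the patent's actual implementation:

```python
# Hypothetical sketch of the claimed pipeline: warp lateral image data
# relative to a line parallel to the vehicle's side, extract objects,
# then fit axis-aligned bounding boxes. The grid "image" and the
# simple shear warp are illustrative stand-ins.

def warp_rows(image, shear_per_row):
    """Shear each row horizontally so a line parallel to the vehicle
    side becomes vertical in the warped frame (toy approximation)."""
    warped = []
    for y, row in enumerate(image):
        shift = int(round(shear_per_row * y))
        if shift:
            warped.append([0] * shift + row[:len(row) - shift])
        else:
            warped.append(row[:])
    return warped

def extract_objects(image, threshold=0):
    """4-connected component labeling over above-threshold pixels."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    objects = []
    for y in range(h):
        for x in range(w):
            if image[y][x] > threshold and not seen[y][x]:
                stack, comp = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx]
                                and image[ny][nx] > threshold):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                objects.append(comp)
    return objects

def bounding_box(component):
    """Axis-aligned box (x0, y0, x1, y1) around one extracted object."""
    ys = [p[0] for p in component]
    xs = [p[1] for p in component]
    return (min(xs), min(ys), max(xs), max(ys))
```

In a real system the warp would be a camera-calibrated homography and the extraction a learned detector; the toy version only shows how the three claimed steps compose.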
WEAK MULTI-VIEW SUPERVISION FOR SURFACE MAPPING ESTIMATION
One or more two-dimensional images of a three-dimensional object may be analyzed to estimate a three-dimensional mesh representing the object and a mapping of the two-dimensional images to the three-dimensional mesh. Initially, a correspondence may be determined between the images and a UV representation of a three-dimensional template mesh by training a neural network. Then, the three-dimensional template mesh may be deformed to determine the representation of the object. The process may involve a reprojection loss cycle in which points from the images are mapped onto the UV representation, then onto the three-dimensional template mesh, and then back onto the two-dimensional images.
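The reprojection loss cycle described above (image point to UV, UV to the 3D template mesh, then back to the image) can be written as a short loss function. The three callables stand in for the trained network, the mesh lookup, and the camera projection; all names are assumptions for illustration:

```python
# Sketch of the reprojection cycle: image point -> UV -> 3D template
# mesh -> back onto the image. A consistent mapping should return each
# point near where it started, so the loss is a mean squared distance.

def cycle_loss(points, img_to_uv, uv_to_3d, project_3d_to_img):
    """Mean squared reprojection error over a set of 2D points."""
    total = 0.0
    for p in points:
        uv = img_to_uv(p)            # learned image -> UV correspondence
        xyz = uv_to_3d(uv)           # lookup on the 3D template mesh
        p_back = project_3d_to_img(xyz)  # camera projection back to 2D
        total += (p[0] - p_back[0]) ** 2 + (p[1] - p_back[1]) ** 2
    return total / len(points)
```

With mutually consistent mappings the loss vanishes, which is exactly the supervision signal the cycle provides when ground-truth correspondences are weak or absent.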
THREE-DIMENSIONAL MAP ESTIMATION APPARATUS AND OBSTACLE DETECTION APPARATUS
According to one embodiment, a three-dimensional map estimation apparatus includes a processor that selects an imaging apparatus from a plurality of imaging apparatuses and then estimates a position and orientation for a moving object on which the selected imaging apparatus is mounted based on images captured by the selected imaging apparatus. The processor outputs a first position and orientation estimation result for the moving object based on images from selected imaging apparatuses. The processor calculates a second position and orientation estimation result indicating an estimated position and orientation for the moving object using the first position and orientation estimation result. The processor estimates a three-dimensional map for the surroundings of the moving object based on the second position and orientation estimation result.
POSITION-WINDOW EXTENSION FOR GNSS AND VISUAL-INERTIAL-ODOMETRY (VIO) FUSION
Techniques provided herein are directed toward virtually extending an updated set of output positions of a mobile device determined by a VIO by combining a current set of VIO output positions with one or more previous sets of VIO output positions in such a way as to ensure that all output positions among the various combined sets are consistent. The combined sets can be used for accurate position determination of the mobile device. Moreover, the position determination further may be based on GNSS measurements.
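One way to picture "virtually extending" the current window is that each VIO window reports positions in its own local frame, so a previous window must be shifted into the current frame before the sets are merged. The sketch below assumes a single overlapping timestamp and a pure translation offset, which are simplifying assumptions, not the patent's method:

```python
# Toy sketch: align a previous VIO output window to the current one by
# the offset that makes their overlapping sample agree, then merge.
# Positions are (x, y) in each window's local frame.

def extend_window(current, previous, overlap_key):
    """current/previous: dict timestamp -> (x, y). Returns a merged
    dict covering both windows, consistent with the current frame."""
    cx, cy = current[overlap_key]
    px, py = previous[overlap_key]
    dx, dy = cx - px, cy - py  # offset aligning the previous frame
    merged = {t: (x + dx, y + dy) for t, (x, y) in previous.items()}
    merged.update(current)     # current-window values take precedence
    return merged
```

A real fusion filter would also align orientation and scale and weight by covariance; the sketch only shows why consistency across the combined sets matters before GNSS measurements are fused in.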
Door control systems and methods
Example door control systems and methods are described. In one implementation, a method receives image data from a camera mounted to a vehicle. The image data is analyzed by a door control system to determine whether a user is near the vehicle. If a user is identified near the vehicle, the method determines whether the user intends to open a garage door based on the image data.
System and method for obtaining video for use with photo-based estimation
A server comprises a communications module, a processor coupled to the communications module, and a memory coupled to the processor. The memory stores processor-executable instructions which, when executed by the processor, configure the processor to receive, via the communications module and from a remote computing device, video data of a vehicle in an original state, store, in the memory, at least some of the video data of the vehicle and associate the stored video data with an account, receive, via the communications module and from the remote computing device, an indication of a claim, the indication associated with an account identifier, retrieve, using the account identifier and from the memory, the video data of the vehicle in the original state, and send, via the communications module and to the remote computing device, instructions for obtaining video data of the vehicle based on the video data of the vehicle in the original state.
SYSTEMS AND METHODS FOR SINGLE-SHOT MULTI-OBJECT 3D SHAPE RECONSTRUCTION AND CATEGORICAL 6D POSE AND SIZE ESTIMATION
System, methods, and other embodiments described herein relate to single-shot multi-object three-dimensional (3D) shape reconstruction and categorical six-dimensional (6D) pose and size estimation. In one embodiment, a method includes inferring a heatmap based upon a feature pyramid, where the feature pyramid is generated based upon a red green blue depth (RGB-D) image that includes objects. The method further includes sampling a 3D parameter map at locations corresponding to peaks in the heatmap, where the 3D parameter map is inferred based upon the feature pyramid, and where the locations include latent shape codes, 6D poses, and one-dimensional (1D) scales. The method further includes generating point clouds based upon the latent shape codes, the 6D poses, and the 1D scales.
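The sampling step above (read the 3D parameter map at peaks of the inferred heatmap) can be shown with a minimal local-maximum search. The threshold, the 4-neighborhood peak test, and the tuple entries standing in for latent shape codes, 6D poses, and 1D scales are all illustrative assumptions:

```python
# Hedged sketch: find local maxima in a heatmap above a threshold and
# gather the co-located entries of a parameter map, as in single-shot
# center-based detection. Entries stand in for (shape code, pose, scale).

def sample_at_peaks(heatmap, param_map, threshold=0.5):
    h, w = len(heatmap), len(heatmap[0])
    picks = []
    for y in range(h):
        for x in range(w):
            v = heatmap[y][x]
            if v < threshold:
                continue
            neighbors = [heatmap[ny][nx]
                         for ny, nx in ((y - 1, x), (y + 1, x),
                                        (y, x - 1), (y, x + 1))
                         if 0 <= ny < h and 0 <= nx < w]
            if all(v >= n for n in neighbors):  # local maximum
                picks.append(param_map[y][x])
    return picks
```

Each sampled entry would then be decoded into a point cloud via the latent shape code, posed with the 6D pose, and rescaled by the 1D scale, which is the generation step the abstract describes.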
ROAD DETERIORATION DETERMINATION DEVICE, ROAD DETERIORATION DETERMINATION METHOD, AND STORAGE MEDIUM
A road deterioration determination device includes a selection unit and a determination unit. The selection unit selects an image of the road surface captured at one point on a road based on priorities of attribute values that are set for each point on the road and relate to the capture of the road-surface images at those points. The determination unit determines road deterioration at the one point using the selected image and a model for determining road deterioration from the image.
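The selection unit's behavior can be sketched as ranking a point's candidate images by a priority-ordered tuple of attribute values. The attribute names (`daylight`, `resolution`) and the higher-is-better convention are assumptions for illustration only:

```python
# Minimal sketch of priority-based image selection: each candidate
# image carries capture-related attribute values, and a priority order
# over attributes ranks candidates lexicographically.

def select_image(candidates, priority):
    """candidates: list of dicts with an 'image' key plus attribute
    values; priority: attribute names, most important first.
    Higher attribute values are preferred (illustrative convention)."""
    key = lambda c: tuple(c.get(attr, 0) for attr in priority)
    return max(candidates, key=key)["image"]
```

The winning image would then be passed to the determination unit's deterioration model.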
VISION-BASED SAFETY MONITORING AND/OR ACTIVITY ANALYSIS
Presented herein are embodiments of a vision-based object perception system for activity analysis, safety monitoring, or both. Embodiments of the perception subsystem detect multi-class objects (e.g., construction machines and humans) in real-time while estimating the poses and actions of the detected objects. Safety monitoring embodiments and object activity analysis embodiments may be based on the perception result. To evaluate the performance of embodiments, a dataset including multiple classes of objects under different lighting conditions was collected and annotated by humans. Experimental results show that the proposed action recognition approach outperforms the state-of-the-art approaches on top-1 accuracy by about 5.18%.
ACCESSIBILITY SYSTEM FOR ASSISTING A USER IN INTERACTING WITH A VEHICLE
Provided are methods for assisting a user in interaction with a vehicle. The methods can include obtaining sensor data representing a user; determining at least one of: (i) a distance between a body part of the user and an object associated with the vehicle, or (ii) a direction from the body part of the user to the object; and causing, by the accessibility system, at least one notification to be presented to the user, where the least one notification indicates at least one of: (i) the distance between the body part of the user and the object, or (ii) the direction from the body part of the user to the object. Systems and computer program products are also provided.