Patent classification: G06T7/97
TRAILERING ASSIST SYSTEM WITH HITCH BALL POSITION DETECTION
A vehicular trailering assist system includes a camera disposed at a rear portion of a vehicle. With a trailer hitched to the vehicle at a hitch ball of the vehicle, the camera views at least a portion of the trailer. An electronic control unit includes an image processor operable to process frames of image data captured by the camera. With the trailer hitched to the vehicle at the hitch ball of the vehicle, the vehicular trailering assist system determines trailer feature points via image processing of image data captured by the camera, and determines respective perpendicular bisectors of respective line segments extending between the determined trailer feature points in respective frames of captured image data while the vehicle tows the trailer and turns. The vehicular trailering assist system determines a location of the hitch ball based at least in part on an intersection point of the determined perpendicular bisectors.
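Geometrically, the abstract relies on the fact that while the vehicle turns, a trailer feature point traces a circular arc centered on the hitch ball, so the perpendicular bisectors of chords between its positions in successive frames intersect at the arc's center. A minimal sketch of that intersection step, assuming three 2D positions of one feature point in a common ground-plane frame (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def hitch_ball_from_arc(p1, p2, p3):
    """Estimate the arc center (hitch ball location) from three positions
    of one trailer feature point observed across frames during a turn.
    The perpendicular bisectors of the chords p1-p2 and p2-p3 intersect
    at the center of the circle the feature point traces."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    m1, m2 = (p1 + p2) / 2.0, (p2 + p3) / 2.0      # chord midpoints
    d1, d2 = p2 - p1, p3 - p2                      # chord directions
    perp = lambda d: np.array([-d[1], d[0]])       # 90-degree rotation
    # Solve m1 + t*perp(d1) == m2 + s*perp(d2) for the intersection point.
    A = np.column_stack([perp(d1), -perp(d2)])
    t, _ = np.linalg.solve(A, m2 - m1)
    return m1 + t * perp(d1)
```

In practice an implementation would accumulate many bisectors over many frames and feature points and solve for the intersection in a least-squares sense, since noisy detections make any single pair of bisectors unreliable.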
NON-RIGID STEREO VISION CAMERA SYSTEM
A long-baseline and long depth-range stereo vision system is provided that is suitable for use in non-rigid assemblies where relative motion between two or more cameras of the system does not degrade estimates of a depth map. The stereo vision system may include a processor that tracks camera parameters as a function of time to rectify images from the cameras even during fast and slow perturbations to camera positions. Factory calibration of the system is not needed, and manual calibration during regular operation is not needed, thus simplifying manufacturing of the system.
Mapping multiple views to an identity
Disclosed are systems and methods for mapping multiple views to an identity. The systems and methods may include receiving a plurality of images that depict an object. Attributes associated with the object may be extracted from the plurality of images. An identity of the object may be determined based on processing the attributes.
Systems and methods for navigating a vehicle among encroaching vehicles
Systems and methods use cameras to provide autonomous navigation features. In one implementation, a method for navigating a user vehicle may include acquiring, using at least one image capture device, a plurality of images of an area in a vicinity of the user vehicle; determining from the plurality of images a first lane constraint on a first side of the user vehicle and a second lane constraint on a second side of the user vehicle opposite to the first side of the user vehicle; enabling the user vehicle to pass a target vehicle if the target vehicle is determined to be in a lane different from the lane in which the user vehicle is traveling; and causing the user vehicle to abort the pass before completion of the pass, if the target vehicle is determined to be entering the lane in which the user vehicle is traveling.
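The enable/abort conditions in this abstract reduce to a small per-frame decision rule. A hedged sketch, assuming upstream perception already supplies lane assignments and a lane-entry flag for the target vehicle (all names are illustrative):

```python
def update_pass_state(passing, user_lane, target_lane, target_entering_user_lane):
    """Per-frame pass decision sketch: enable a pass only while the target
    vehicle occupies a different lane, and abort an in-progress pass if
    the target is detected entering the user vehicle's lane."""
    if not passing:
        # Enable the pass when the target is in a different lane.
        return target_lane != user_lane
    # Abort the pass before completion if the target enters the user's lane.
    if target_entering_user_lane or target_lane == user_lane:
        return False
    return True
```

A production system would of course gate this on many more signals (closing speed, gap size, road boundaries); the sketch isolates only the two conditions named in the abstract.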
Method and system for re-projecting and combining sensor data for visualization
There is provided a system and method of re-projecting and combining sensor data of a scene from a plurality of sensors for visualization. The method including: receiving the sensor data from the plurality of sensors; re-projecting the sensor data from each of the sensors into a new viewpoint; localizing each of the re-projected sensor data; combining the localized re-projected sensor data into a combined image; and outputting the combined image. In a particular case, the receiving and re-projecting can be performed locally at each of the sensors.
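The re-projection step can be sketched for the common case of a depth-sensing camera: back-project each pixel to a 3D point, transform it into the new viewpoint, and project it through the new camera's intrinsics. This is a generic pinhole-camera sketch under assumed calibrated intrinsics, not the patented method:

```python
import numpy as np

def reproject_depth(depth, K_src, K_dst, T_dst_from_src):
    """Re-project a depth image into a new viewpoint: back-project each
    pixel to a 3D point, apply the 4x4 rigid transform T_dst_from_src,
    and project through the destination intrinsics K_dst. Returns the
    (u, v) coordinates of each source pixel in the new view."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])   # homogeneous pixels
    rays = np.linalg.inv(K_src) @ pix                        # unit-depth rays
    pts = rays * depth.ravel()                               # 3D points, source frame
    pts_h = np.vstack([pts, np.ones(h * w)])
    pts_dst = (T_dst_from_src @ pts_h)[:3]                   # 3D points, new frame
    proj = K_dst @ pts_dst
    return proj[:2] / proj[2]                                # pixels in new view
```

The abstract's "combining" step would then merge the per-sensor re-projections (e.g. by depth ordering) into one image; per the last sentence, the re-projection above could run locally at each sensor.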
Self-tracked controller
The disclosed system may include a housing dimensioned to secure various components including at least one physical processor and various sensors. The system may also include a camera mounted to the housing, as well as physical memory with computer-executable instructions that, when executed by the physical processor, cause the physical processor to: acquire images of a surrounding environment using the camera mounted to the housing, identify features of the surrounding environment from the acquired images, generate a map using the features identified from the acquired images, access sensor data generated by the sensors, and determine a current pose of the system in the surrounding environment based on the features in the generated map and the accessed sensor data. Various other methods, apparatuses, and computer-readable media are also disclosed.
Three-Dimensional Skeleton Mapping
A system includes processing hardware and a memory storing software code. When executed, the software code receives first skeleton data including a first location of each of multiple skeletal key-points from the perspective of a first camera, receives second skeleton data including a second location of each of the skeletal key-points from the perspective of a second camera, and correlates the first and second locations of some or all of the skeletal key-points to produce correlated skeletal key-point location data for each of at least some of the skeletal key-points. The software code further merges the correlated skeletal key-point location data for each of those key-points to provide merged location data, and generates, using the merged location data and the locations of the first and second cameras, a mapping of the 3D pose of a skeleton.
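The merge step can be illustrated with the simplest plausible scheme: a confidence-weighted average of per-camera 3D key-point estimates expressed in a common world frame. This is an assumed stand-in for the patented merge, with illustrative names:

```python
import numpy as np

def merge_keypoints(locs_cam1, locs_cam2, conf1=None, conf2=None):
    """Confidence-weighted merge of per-camera 3D key-point estimates.
    locs_cam1 / locs_cam2: (N, 3) arrays of the same N skeletal
    key-points in a common world frame; conf1 / conf2: optional
    per-point confidences (uniform if omitted)."""
    locs_cam1 = np.asarray(locs_cam1, dtype=float)
    locs_cam2 = np.asarray(locs_cam2, dtype=float)
    n = len(locs_cam1)
    conf1 = np.ones(n) if conf1 is None else np.asarray(conf1, dtype=float)
    conf2 = np.ones(n) if conf2 is None else np.asarray(conf2, dtype=float)
    w1 = conf1 / (conf1 + conf2)            # normalized weight of camera 1
    return w1[:, None] * locs_cam1 + (1.0 - w1)[:, None] * locs_cam2
```

With equal confidences this reduces to the midpoint of the two estimates; down-weighting a camera's occluded or low-confidence detections pulls the merged key-point toward the better view.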
METHODS AND SYSTEM FOR DYNAMICALLY ANNOTATING MEDICAL IMAGES
Various methods and systems are provided for a medical imaging system. In one embodiment, a method for a projection imaging system includes acquiring a first image of a region of interest (ROI) with the projection imaging system in a first position, determining a three-dimensional (3D) location of an annotation on the first image via a geometric transformation using planes, acquiring a second image of the ROI with the projection imaging system in a second position, determining a location of the annotation on the second image based on the 3D location of the annotation in the first position and a geometry of the second position, and displaying the annotation on the second image in response to an accuracy check being satisfied.
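One way to realize a plane-based geometric transformation like the one this abstract names is to intersect the camera ray through the annotated pixel with a known plane, yielding the annotation's 3D location; that location can then be projected into the second view. A sketch under those assumptions (not the patent's specific transformation):

```python
import numpy as np

def annotation_3d_on_plane(pixel, K, cam_pose, plane_point, plane_normal):
    """Recover the 3D location of an image annotation by intersecting the
    camera ray through the annotated pixel with a known plane.
    cam_pose is the 4x4 camera-to-world transform; plane_point and
    plane_normal define the plane in world coordinates."""
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    R, t = cam_pose[:3, :3], cam_pose[:3, 3]
    ray_world = R @ ray_cam                          # ray direction, world frame
    n = np.asarray(plane_normal, dtype=float)
    p0 = np.asarray(plane_point, dtype=float)
    # Solve for s such that t + s * ray_world lies on the plane n.(x - p0) = 0.
    s = n @ (p0 - t) / (n @ ray_world)
    return t + s * ray_world
```

Re-displaying the annotation after the imaging system moves then amounts to projecting this 3D point through the geometry of the second position, with the abstract's accuracy check guarding the result.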
SYSTEMS, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR RECURSIVE HYPERSPECTRAL IMAGING
Systems, methods, and computer program products for recursive modifications are provided. An example imaging system includes a first infrared (IR) imaging device that generates first IR image data of a first field of view of the first IR imaging device at a first time and a computing device operably connected with the first IR imaging device. The computing device receives the first IR image data from the first IR imaging device and determines a first set of pixels and a second set of pixels from amongst a plurality of pixels associated with the first IR image data. The computing device further determines a first modification protocol for the first set of pixels and determines a second modification protocol for the second set of pixels. In response, the computing device generates a recursive modification input based upon the first and second modification protocols.
CONSTRUCTING PROCESSING PIPELINE AT EDGE COMPUTING DEVICE
A computing system including an edge computing device. The edge computing device may include an edge device processor configured to receive edge device contextual data including computing resource availability data. Based at least in part on the edge device contextual data, the edge device processor may select a processing stage machine learning model of a plurality of processing stage machine learning models and construct a runtime processing pipeline of one or more runtime processing stages including the processing stage machine learning model. The edge device processor may receive a runtime input, and, at the runtime processing pipeline, generate a runtime output based at least in part on the runtime input. The edge device processor may generate runtime pipeline metadata that indicates the one or more runtime processing stages included in the runtime processing pipeline. The edge device processor may output the runtime output and the runtime pipeline metadata.
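The selection-and-construction step described above can be sketched as choosing the most capable model that fits the edge device's reported resource budget, then recording the assembled stages as pipeline metadata. The registry format, stage names, and the memory-only budget are illustrative assumptions:

```python
def construct_pipeline(contextual_data, model_registry):
    """Sketch of runtime pipeline construction on an edge device: pick the
    largest model whose resource cost fits the reported availability,
    assemble the stage list, and emit metadata naming the runtime
    processing stages actually used. Assumes at least one model fits."""
    budget = contextual_data["available_memory_mb"]
    # Keep only models that fit the current resource availability.
    eligible = [m for m in model_registry if m["memory_mb"] <= budget]
    # Prefer the most capable (here: largest) eligible model.
    model = max(eligible, key=lambda m: m["memory_mb"])
    stages = ["preprocess", model["name"], "postprocess"]
    metadata = {"runtime_stages": stages}
    return stages, metadata
```

Emitting the metadata alongside the output matters downstream: a consumer can tell whether a result came from the full model or a resource-constrained fallback.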