Patent classifications
G06T2207/20076
QUATERNION MULTI-DEGREE-OF-FREEDOM NEURON-BASED MULTISPECTRAL WELDING IMAGE RECOGNITION METHOD
Disclosed is a quaternion multi-degree-of-freedom neuron-based multispectral welding image recognition method, comprising: capturing multispectral weld pool images with three cameras operating in different wavebands, and separately pre-processing and performing edge extraction on the weld pool images of the different wavebands captured by the three cameras at the same moment; establishing a quaternion-based multispectral weld pool image edge model; extracting low-frequency features after a quaternion discrete cosine transform; and using a quaternion-based multi-degree-of-freedom neuron network to classify, train on, and recognize the edge features of the multispectral weld pool images. Compared with traditional approaches, the present invention draws on multiple sources of recognition information, has strong anti-interference capability, and achieves high recognition accuracy.
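As an illustration of the low-frequency feature step, the sketch below applies a channel-wise 2D discrete cosine transform to the three waveband edge maps and keeps only the top-left (low-frequency) coefficients. This is a simplified stand-in for the quaternion DCT described in the abstract, and the function name, block size, and channel-wise treatment are assumptions.

```python
import numpy as np
from scipy.fft import dctn  # multi-dimensional DCT-II (SciPy >= 1.4)

def extract_lowfreq_features(edge_maps, block=8):
    """Channel-wise stand-in for the quaternion-DCT feature step.

    edge_maps : (H, W, 3) array of edge images from the three wavebands,
                treated here as the three imaginary parts of a pure
                quaternion image.
    block     : size of the low-frequency corner kept per channel.
    """
    feats = []
    for c in range(edge_maps.shape[2]):
        coeffs = dctn(edge_maps[:, :, c], type=2, norm="ortho")
        feats.append(coeffs[:block, :block].ravel())  # low-frequency corner
    return np.concatenate(feats)

# Toy usage on random "edge maps" standing in for the three wavebands.
rng = np.random.default_rng(0)
features = extract_lowfreq_features(rng.random((128, 128, 3)))
print(features.shape)  # (192,) = 3 channels * 8 * 8 coefficients
```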
LASER SPECKLE FORCE FEEDBACK ESTIMATION
Provided herein are systems, methods, and media capable of determining an estimated force applied to a target tissue region to enable tactile feedback during interaction with that target tissue region.
METHOD AND APPARATUS FOR OBTAINING 3D INFORMATION OF VEHICLE
A method and an apparatus for obtaining 3D information of a vehicle are provided. The method includes: first determining a body boundary line of a first vehicle, and then determining an observation angle and/or an orientation angle of the first vehicle based on the body boundary line.
METHOD FOR AUTOMATIC SEGMENTATION OF CORONARY SINUS
Method, executed by a computer, for identifying a coronary sinus of a patient, comprising: receiving a 3D image of a body region of the patient; extracting 2D axial images of the 3D image taken along respective axial planes, 2D sagittal images of the 3D image taken along respective sagittal planes, and 2D coronal images of the 3D image taken along respective coronal planes; applying an axial neural network to each 2D axial image to generate a respective 2D axial probability map, a sagittal neural network to each 2D sagittal image to generate a respective 2D sagittal probability map, and a coronal neural network to each 2D coronal image to generate a respective 2D coronal probability map; generating, based on the 2D probability maps, a 3D mask of the coronary sinus of the patient.
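One plausible way to realize the final fusion step, assuming the three plane-specific networks have already produced per-slice probability maps that were stacked back into three volumes of equal shape, is to average them and threshold. The fusion rule and threshold value below are assumptions, not part of the abstract.

```python
import numpy as np

def fuse_probability_volumes(p_axial, p_sagittal, p_coronal, threshold=0.5):
    """Combine three per-plane probability volumes into a binary 3D mask.

    Each input is an (X, Y, Z) volume built by stacking the 2D probability
    maps of one plane-specific network back onto the original image grid.
    """
    fused = (p_axial + p_sagittal + p_coronal) / 3.0  # simple average fusion
    return fused >= threshold

# Toy usage on random probability volumes.
rng = np.random.default_rng(1)
shape = (64, 64, 48)
mask = fuse_probability_volumes(rng.random(shape), rng.random(shape), rng.random(shape))
print(mask.shape, mask.dtype)  # (64, 64, 48) bool
```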
SYSTEMS AND METHODS FOR FORECASTING AND ASSESSING HAZARD-RESULTANT EFFECTS
Hazard-resultant effects to land and buildings are predicted based on various inputs. Hazards may include any appropriate type of hazard (e.g., flood, wildfire, climate-related hazards, or the like). Inputs may include the likelihood that a specific type of hazard will occur under various scenarios, terrestrial boundaries, property boundaries, census geographies, or the like. Relationships between the inputs are determined and used to quantify parameters pertaining to a specific type of hazard. For example, the depth of flood water may be predicted for a particular terrestrial boundary, a city or town, or a building, under specific climate scenarios. A risk likelihood of the quantified parameter may be determined for a particular period of time and environment. For example, flooding to a building may be determined, broken down by depth threshold and by year of annual risk for specific climate scenarios. Economic loss may also be predicted.
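As a small worked example of turning an annual hazard likelihood into a risk over a period of time, the sketch below uses the standard assumption of independent years with a constant annual probability; that assumption is illustrative and not stated in the abstract.

```python
def exceedance_probability(annual_probability: float, years: int) -> float:
    """Probability that a hazard (e.g., flooding above a depth threshold)
    occurs at least once in the given number of years, assuming independent
    years with a constant annual probability."""
    return 1.0 - (1.0 - annual_probability) ** years

# A "1-in-100-year" flood depth has roughly a 26% chance of occurring at
# least once over a 30-year period under this independence assumption.
print(round(exceedance_probability(0.01, 30), 2))  # 0.26
```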
METHOD FOR ADJUSTING POINT CLOUD DENSITY, ELECTRONIC DEVICE, AND STORAGE MEDIUM
A method for adjusting point cloud density, an electronic device, and a storage medium are provided. In the method, an initial point cloud map and a distance determination threshold of a robot are obtained. A plurality of target regions in the initial point cloud map are determined, and an environmental complexity value of each target region is calculated. The initial point cloud map is divided into submaps, and a point cloud density coefficient of each submap is determined. The initial point cloud map is adjusted according to the point cloud density coefficients to obtain a target point cloud map. With this method, the efficiency and accuracy of point cloud density adjustment can be improved.
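A minimal sketch of the density-adjustment idea, assuming each submap is downsampled on a voxel grid whose cell size shrinks as the submap's density coefficient grows; the coefficient-to-voxel-size mapping and the base voxel size are assumptions.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    centroids = np.stack(
        [np.bincount(inverse, weights=points[:, d]) / counts for d in range(3)],
        axis=1)
    return centroids

def adjust_submap_density(points, density_coefficient, base_voxel=0.2):
    """Use a smaller voxel (more points kept) for complex submaps and a
    larger voxel (fewer points kept) for simple ones."""
    return voxel_downsample(points, base_voxel / max(density_coefficient, 1e-6))

# Toy usage: the same submap downsampled with two different coefficients.
rng = np.random.default_rng(2)
submap = rng.random((5000, 3)) * 10.0
print(len(adjust_submap_density(submap, 0.5)),  # simple region: sparser
      len(adjust_submap_density(submap, 2.0)))  # complex region: denser
```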
SYSTEMS AND METHODS FOR RECONSTRUCTING A SCENE IN THREE DIMENSIONS FROM A TWO-DIMENSIONAL IMAGE
Systems and methods described herein relate to reconstructing a scene in three dimensions from a two-dimensional image. One embodiment processes an image using a detection transformer to detect an object in the scene and to generate a NOCS map of the object and a background depth map; uses MLPs to relate the object to a differentiable database of object priors (PriorDB); recovers, from the NOCS map, a partial 3D object shape; estimates an initial object pose; fits a PriorDB object prior to align in geometry and appearance with the partial 3D shape to produce a complete shape and refines the initial pose estimate; generates an editable and re-renderable 3D scene reconstruction based, at least in part, on the complete shape, the refined pose estimate, and the depth map; and controls the operation of a robot based, at least in part, on the editable and re-renderable 3D scene reconstruction.
ELECTRONIC DEVICE WAKES
In some examples, a non-transitory machine-readable medium stores machine-readable instructions. When executed by a controller of an electronic device, the machine-readable instructions cause the controller to detect a user presence, determine first and second measurements, where the first and the second measurements indicate first and second distances to the user presence, and, responsive to a determination that the second measurement is less than the first measurement and a determination that the second measurement is within a distance threshold, wake the electronic device from a power saving mode.
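A minimal sketch of the wake condition described above; the millimeter units and the example threshold value are assumptions for illustration.

```python
def should_wake(first_distance_mm: float,
                second_distance_mm: float,
                distance_threshold_mm: float = 800.0) -> bool:
    """Wake only when the user is approaching (the second reading is closer
    than the first) and the second reading is within the distance threshold."""
    return (second_distance_mm < first_distance_mm
            and second_distance_mm <= distance_threshold_mm)

# The user moved from 1.5 m to 0.6 m: approaching and within 0.8 m -> wake.
print(should_wake(1500.0, 600.0))  # True
# The user moved away: do not wake, even though still within the threshold.
print(should_wake(500.0, 700.0))   # False
```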
HAND DETECTION TRIGGER FOR ITEM IDENTIFICATION
A device configured to capture a first overhead depth image of a platform using a three-dimensional (3D) sensor at a first time instance and a second overhead depth image of a first object using the 3D sensor at a second time instance. The device is further configured to determine that a first portion of the first object is within a region-of-interest and a second portion of the first object is outside the region-of-interest in the second overhead depth image. The device is further configured to capture a third overhead depth image of a second object placed on the platform using the 3D sensor at a third time instance. The device is further configured to capture a first image of the second object using a camera in response to determining that the first object is outside of the region-of-interest and the second object is within the region-of-interest for the platform.
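One way to express the trigger condition, assuming object presence is available as a binary mask derived from the overhead depth image and the region-of-interest is a rectangle; the mask representation and the helper names are assumptions.

```python
import numpy as np

def roi_mask(shape, x0, y0, x1, y1):
    """Boolean mask that is True inside the rectangular region-of-interest."""
    mask = np.zeros(shape, dtype=bool)
    mask[y0:y1, x0:x1] = True
    return mask

def object_roi_state(object_mask, roi):
    """Classify an object mask as 'inside', 'outside', or 'partial' w.r.t. the ROI."""
    inside = np.logical_and(object_mask, roi).any()
    outside = np.logical_and(object_mask, ~roi).any()
    if inside and outside:
        return "partial"
    return "inside" if inside else "outside"

def should_capture(first_object_mask, second_object_mask, roi):
    """Trigger the camera once the first object (e.g., a hand) is outside the
    ROI and the newly placed second object lies within it."""
    return (object_roi_state(first_object_mask, roi) == "outside"
            and object_roi_state(second_object_mask, roi) == "inside")

# Toy usage: hand withdrawn to the image corner, item placed inside the ROI.
roi = roi_mask((480, 640), 100, 100, 540, 380)
hand = roi_mask((480, 640), 0, 0, 80, 80)
item = roi_mask((480, 640), 200, 150, 300, 250)
print(should_capture(hand, item, roi))  # True
```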
REDUCING A SEARCH SPACE FOR ITEM IDENTIFICATION USING MACHINE LEARNING
A device configured to receive a first encoded vector and receive one or more feature descriptors for a first object. The device is further configured to remove one or more encoded vectors from an encoded vector library that are not associated with the one or more feature descriptors and to identify a second encoded vector in the encoded vector library that most closely matches the first encoded vector based on the numerical values within the first encoded vector. The device is further configured to identify a first item identifier in the encoded vector library that is associated with the second encoded vector and to output the first item identifier.
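A compact sketch of the filter-then-match flow, assuming the library is a list of entries each holding an encoded vector, a set of feature descriptors, and an item identifier, and that "most closely matches" means highest cosine similarity; the entry layout and the similarity choice are assumptions.

```python
import numpy as np

def identify_item(query_vector, query_descriptors, library):
    """Drop library entries that lack the query's feature descriptors, then
    return the item identifier of the remaining entry whose encoded vector
    is most similar (cosine similarity) to the query vector."""
    candidates = [entry for entry in library
                  if query_descriptors.issubset(entry["descriptors"])]
    if not candidates:
        return None
    q = np.asarray(query_vector, dtype=float)
    q /= np.linalg.norm(q)

    def similarity(entry):
        v = np.asarray(entry["vector"], dtype=float)
        return float(np.dot(q, v / np.linalg.norm(v)))

    return max(candidates, key=similarity)["item_id"]

# Toy library with two items.
library = [
    {"item_id": "soda-can", "descriptors": {"cylindrical", "red"},
     "vector": [0.9, 0.1, 0.0]},
    {"item_id": "juice-box", "descriptors": {"boxy", "green"},
     "vector": [0.1, 0.9, 0.2]},
]
print(identify_item([0.85, 0.2, 0.05], {"red"}, library))  # soda-can
```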