Patent classifications
G06V10/40
Systems and Methods for Image Based Perception
Systems and methods for image-based perception. The methods comprise: capturing images by a plurality of cameras with overlapping fields of view; generating, by a computing device, spatial feature maps indicating locations of features in the images; identifying, by the computing device, overlapping portions of the spatial feature maps; generating, by the computing device, at least one combined spatial feature map by combining the overlapping portions of the spatial feature maps together; and/or using, by the computing device, the at least one combined spatial feature map to define a predicted cuboid for at least one object in the images.
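The step of combining overlapping portions of spatial feature maps can be sketched as follows. This is a minimal illustration only: the map layout (row-major lists), the column-wise overlap geometry, and the averaging rule are assumptions for the sketch, not details taken from the abstract.

```python
# Hedged sketch: combine two spatial feature maps from cameras with
# overlapping fields of view. The last `overlap_cols` columns of map_a and
# the first `overlap_cols` columns of map_b are assumed to cover the same
# region; overlapping cells are averaged (an illustrative choice).

def combine_feature_maps(map_a, map_b, overlap_cols):
    """Merge two HxW feature maps that share `overlap_cols` columns."""
    combined = []
    for row_a, row_b in zip(map_a, map_b):
        merged = list(row_a[:-overlap_cols])        # exclusive part of map_a
        for c in range(overlap_cols):               # averaged overlap region
            merged.append(0.5 * (row_a[len(row_a) - overlap_cols + c] + row_b[c]))
        merged.extend(row_b[overlap_cols:])         # exclusive part of map_b
        combined.append(merged)
    return combined

a = [[1.0, 2.0, 3.0]]   # feature map from camera A (1 row, toy values)
b = [[5.0, 6.0, 7.0]]   # feature map from camera B, 1-column overlap with A
print(combine_feature_maps(a, b, overlap_cols=1))
# [[1.0, 2.0, 4.0, 6.0, 7.0]]
```

The combined map could then feed a downstream head that regresses the predicted cuboid for each object.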
Vision inspection system and method of inspecting parts
A vision inspection system includes a sorting platform having an upper surface supporting parts for inspection, wherein the parts are configured to be loaded onto the upper surface of the sorting platform in a random orientation. The vision inspection system includes an inspection station including an imaging device. The vision inspection system includes a vision inspection controller receiving images and processing the images based on an image analysis model to determine inspection results for each of the parts. The vision inspection controller has a shape recognition tool configured to recognize the parts in the field of view regardless of the orientation of the parts on the sorting platform. The vision inspection controller has an AI learning module operated to customize and configure the image analysis model based on the images received from the imaging device.
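Recognizing a part regardless of its orientation on the platform can be illustrated with a rotation-invariant shape signature. The signature choice (sorted centroid-to-vertex distances) is an assumption for this sketch; the abstract does not disclose how its shape recognition tool works internally.

```python
# Hedged sketch of orientation-independent part recognition: a part outline
# is reduced to its sorted centroid-to-vertex distances, which are unchanged
# when the part is loaded onto the platform at a random rotation.
import math

def signature(points):
    """Sorted distances from the outline's centroid to each vertex."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return sorted(round(math.hypot(x - cx, y - cy), 6) for x, y in points)

def rotate(points, angle):
    """Rotate an outline about the origin (simulates random loading)."""
    c, s = math.cos(angle), math.sin(angle)
    return [(x * c - y * s, x * s + y * c) for x, y in points]

part = [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 2.0)]  # rectangular part
turned = rotate(part, 1.1)  # same part at an arbitrary orientation
print(signature(part) == signature(turned))  # True
```

A matching signature lets the controller identify which part model to apply before running the image analysis model on it.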
Mobile terminal and method for controlling same
The present disclosure relates to a mobile terminal having a lighting unit and a control method thereof. A mobile terminal according to one implementation includes a lighting unit, a camera, and a controller configured to control the lighting unit to illuminate a subject to be captured through the camera and to control the camera to capture the subject irradiated with the illumination light, wherein the controller is configured to determine a material of the subject based on information related to the illumination light irradiated onto the subject, as captured through the camera.
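Determining a material from captured illumination can be sketched as a reflectance lookup: compare how much of the emitted light returns to the camera. The reflectance thresholds and material names below are illustrative assumptions; the abstract does not specify the decision rule.

```python
# Hedged sketch: classify a subject's material from the fraction of the
# emitted illumination intensity that the camera captures back. Thresholds
# and material labels are invented for illustration.

MATERIALS = [(0.8, "metal"), (0.4, "plastic"), (0.0, "fabric")]

def classify_material(emitted, captured):
    """Return the first material whose reflectance threshold is met."""
    reflectance = captured / emitted
    for threshold, name in MATERIALS:
        if reflectance >= threshold:
            return name

print(classify_material(100.0, 85.0))  # metal  (reflectance 0.85)
print(classify_material(100.0, 10.0))  # fabric (reflectance 0.10)
```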
Learning highlights using event detection
A highlight learning technique is provided to detect and identify highlights in sports videos. A set of event models are calculated from low-level frame information of the sports videos to identify recurring events within the videos. The event models are used to characterize videos by detecting events within the videos and using the detected events to generate an event vector. The event vector is used to train a classifier to identify the videos as highlight or non-highlight.
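The event-vector pipeline above can be sketched end to end: count detected event types per video, then classify the resulting vector. The event names, the toy detections, and the nearest-centroid rule are illustrative assumptions; the abstract does not commit to a particular classifier.

```python
# Hedged sketch: build a fixed-length event vector from detected events,
# then label a new video by nearest centroid over labeled training vectors.

EVENTS = ["crowd_cheer", "scoreboard_change", "replay_cut"]  # assumed events

def event_vector(detections):
    """Map a list of detected event labels to a fixed-length count vector."""
    return [detections.count(e) for e in EVENTS]

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(EVENTS))]

def classify(vec, highlight_c, other_c):
    """Nearest-centroid decision by squared Euclidean distance."""
    d_hi = sum((a - b) ** 2 for a, b in zip(vec, highlight_c))
    d_no = sum((a - b) ** 2 for a, b in zip(vec, other_c))
    return "highlight" if d_hi <= d_no else "non-highlight"

highlights = [event_vector(["crowd_cheer", "replay_cut", "crowd_cheer"]),
              event_vector(["crowd_cheer", "scoreboard_change"])]
others = [event_vector([]), event_vector(["scoreboard_change"])]

hi_c, no_c = centroid(highlights), centroid(others)
print(classify(event_vector(["crowd_cheer", "replay_cut"]), hi_c, no_c))
# highlight
```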
Training of joint depth prediction and completion
System, methods, and other embodiments described herein relate to training a depth model for joint depth completion and prediction. In one arrangement, a method includes generating depth features from sparse depth data according to a sparse auxiliary network (SAN) of a depth model. The method includes generating a first depth map from a monocular image and a second depth map from the monocular image and the depth features using the depth model. The method includes generating a depth loss from the second depth map and the sparse depth data and an image loss from the first depth map and the sparse depth data. The method includes updating the depth model including the SAN using the depth loss and the image loss.
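The two-loss training signal can be sketched numerically: the "image loss" is evaluated on the image-only depth map, the "depth loss" on the map that also used the SAN features, and both are measured only where sparse depth exists. The L1 loss form and equal weighting are assumptions; the abstract does not fix a particular loss function.

```python
# Hedged sketch of the dual-loss update signal for joint depth completion
# and prediction. Toy 4-pixel depth maps stand in for full-resolution maps.

def masked_l1(pred, sparse):
    """Mean absolute error over pixels where sparse depth is available."""
    diffs = [abs(p - s) for p, s in zip(pred, sparse) if s is not None]
    return sum(diffs) / len(diffs)

sparse_depth = [2.0, None, 4.0, None]  # None = no LiDAR return (assumed)
first_map = [2.5, 3.0, 3.0, 5.0]       # from the monocular image alone
second_map = [2.1, 3.0, 3.8, 5.0]      # image + SAN depth features

image_loss = masked_l1(first_map, sparse_depth)
depth_loss = masked_l1(second_map, sparse_depth)
total = image_loss + depth_loss        # would drive the model/SAN update
print(round(image_loss, 3), round(depth_loss, 3), round(total, 3))
# 0.75 0.15 0.9
```

As expected, the map that also consumed sparse depth features incurs the smaller loss on the observed pixels.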
Quotation method executed by computer, quotation device, electronic device and storage medium
Disclosed is a quotation method executed by a computer, comprising: obtaining structure parameters and electrical parameters of a product (S101); constructing an external view of the product using the structure parameters of the product, and performing a similarity comparison between the external view of the product and the external views of historical products to obtain an appearance similarity ranking (S102); performing a similarity comparison between the electrical parameters of the product and the electrical parameters of the historical products to obtain an electrical parameter similarity ranking (S103); obtaining, on the basis of the cost weights of structural members and electrical components together with the appearance similarity ranking and the electrical parameter similarity ranking, a comprehensive ranking based on both the structure parameters and the electrical parameters (S104); and determining, based on the comprehensive ranking, a bill of materials for the product, and calculating the product quotation based on the bill of materials (S105).
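The comprehensive ranking step, which fuses the two similarity orderings, can be sketched as a cost-weighted rank combination. The product names, rank values, and the 70/30 cost split are illustrative assumptions; only the weighting-by-cost idea comes from the abstract.

```python
# Hedged sketch: combine an appearance similarity ranking and an electrical
# parameter similarity ranking using the cost weights of structural members
# and electrical components. Lower combined score = more similar product.

def comprehensive_ranking(appearance_rank, electrical_rank,
                          structural_weight, electrical_weight):
    """Order historical products by cost-weighted combination of two ranks."""
    scores = {
        name: structural_weight * appearance_rank[name]
              + electrical_weight * electrical_rank[name]
        for name in appearance_rank
    }
    return sorted(scores, key=scores.get)

appearance = {"hist_A": 1, "hist_B": 3, "hist_C": 2}  # toy rank positions
electrical = {"hist_A": 3, "hist_B": 1, "hist_C": 2}

# Structural members assumed to carry 70% of product cost in this example.
order = comprehensive_ranking(appearance, electrical, 0.7, 0.3)
print(order)  # ['hist_A', 'hist_C', 'hist_B']
```

The top-ranked historical product's bill of materials would then seed the quotation calculation.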