Patent classifications
G06V10/42
System and methods for computing 2-D convolutions and cross-correlations
Fast and scalable architectures and methods, adaptable to available resources, that (1) compute 2-D convolutions using 1-D convolutions, (2) provide fast transposition and accumulation of results for computing fast cross-correlations or 2-D convolutions, and (3) provide parallel computations using pipelined 1-D convolvers. Additionally, fast and scalable architectures and methods that compute 2-D linear convolutions using Discrete Periodic Radon Transforms (DPRTs), including the use of the scalable DPRT, the fast DPRT, and fast 1-D convolutions.
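The core decomposition named in item (1) can be illustrated with a minimal sketch (not the patented hardware architecture): each kernel row is 1-D convolved with every image row, and the partial results are accumulated at the appropriate vertical offset to form the full 2-D linear convolution.

```python
import numpy as np

def conv2d_via_1d(image, kernel):
    """Full 2-D linear convolution built from row-wise 1-D convolutions.

    Each kernel row is 1-D convolved with every image row; the partial
    results are accumulated with a vertical shift equal to the row index.
    """
    H, W = image.shape
    Kh, Kw = kernel.shape
    out = np.zeros((H + Kh - 1, W + Kw - 1))
    for kr in range(Kh):
        # 1-D 'full' convolution of every image row with kernel row kr
        partial = np.array([np.convolve(row, kernel[kr]) for row in image])
        out[kr:kr + H] += partial  # accumulate with vertical offset kr
    return out
```

In a pipelined hardware realization, the row-wise 1-D convolutions in the loop are what the parallel 1-D convolvers of item (3) would compute concurrently.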
Deformable end effectors for cosmetic robotics
A device for ensuring safe operation of a robot used for cosmetics applications, including the retrofitting of robots not originally designed for such applications. In some embodiments, the robot is used for the automatic placement of eyelash extensions onto the natural eyelashes of a subject. In some embodiments, a safety barrier is provided by a physical barrier or light curtain. In other embodiments, readily deformable end effectors are used.
INSURANCE UNDERWRITING AND RE-UNDERWRITING IMPLEMENTING UNMANNED AERIAL VEHICLES (UAVS)
Unmanned aerial vehicles (UAVs) may facilitate insurance-related tasks. UAVs may actively be dispatched to an area surrounding a property and collect data related to the property. A location for an inspection of a property to be conducted by a UAV may be received, and one or more images depicting a view of the location may be displayed via a user interface. Additionally, a geofence boundary may be determined based on an area corresponding to a property boundary, where the geofence boundary represents a geospatial boundary in which to limit flight of the UAV. Furthermore, a navigation route may be determined which corresponds to the geofence boundary for inspection of the property by the UAV, the navigation route having waypoints, each waypoint indicating a location for the UAV to obtain drone data. The UAV may be directed around the property using the determined navigation route.
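The geofence check described above, limiting flight to within a polygon derived from the property boundary, can be sketched with a standard ray-casting point-in-polygon test. This is an illustrative implementation under assumed data shapes (waypoints as `(x, y)` pairs), not the system claimed in the patent.

```python
def inside_geofence(point, polygon):
    """Ray-casting point-in-polygon test.

    `polygon` is a list of (x, y) vertices of the geofence boundary;
    returns True when `point` lies inside it.
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from `point` cross edge (i, i+1)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def filter_waypoints(route, geofence):
    """Keep only waypoints of the navigation route inside the geofence."""
    return [wp for wp in route if inside_geofence(wp, geofence)]
```

A real flight controller would work with geodetic coordinates and add a safety margin, but the containment logic is the same.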
APPARATUS AND METHOD FOR IMAGE CLASSIFICATION AND SEGMENTATION BASED ON FEATURE-GUIDED NETWORK, DEVICE, AND MEDIUM
The present invention provides an apparatus and method for image classification and segmentation based on a feature-guided network, as well as a device and a medium, in the technical field of deep learning. The feature-guided classification network and feature-guided segmentation network of the present invention are built from basic unit blocks, among which local features are enhanced and global features are extracted. This addresses the problem that features are not fully utilized in existing image classification and image segmentation network models, so that the trained feature-guided classification and segmentation networks perform better and are more robust. The present invention selects the feature-guided classification network or the feature-guided segmentation network according to the requirement of an input image and outputs the corresponding category or segmented image, addressing the unsatisfactory classification or segmentation performance of existing network models.
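The abstract does not specify the layers inside a basic unit block, so the following is a loose, hypothetical sketch of the stated idea only: a local branch enhances detail (here, a 3x3 mean filter) while a global branch injects pooled context (here, a per-channel sigmoid gate), combined residually.

```python
import numpy as np

def basic_unit_block(x):
    """Hypothetical sketch of one 'basic unit block' on a (C, H, W) array:
    local enhancement via a 3x3 mean filter, global context via
    global average pooling feeding a per-channel gate."""
    C, H, W = x.shape
    # Local branch: 3x3 mean filter with zero padding, per channel.
    padded = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    local = np.zeros_like(x)
    for dy in range(3):
        for dx in range(3):
            local += padded[:, dy:dy + H, dx:dx + W]
    local /= 9.0
    # Global branch: global average pooling -> per-channel sigmoid gate.
    gate = 1.0 / (1.0 + np.exp(-x.mean(axis=(1, 2))))
    # Residual combination of gated local features with the input.
    return x + local * gate[:, None, None]
```

A trained network would of course use learned convolutions rather than fixed filters; the point is the pairing of a local and a global pathway within each block.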
SYSTEMS, METHODS, AND DEVICES FOR AUTOMATED METER READING FOR SMART FIELD PATROL
Methods, systems, and devices for equipment reading in a factory or plant environment are described, including: capturing an image of an environment including a measurement device; detecting a target region included in the image, the target region including at least a portion of the measurement device; determining identification information associated with the measurement device based on detecting the target region; and extracting measurement information associated with the measurement device based on detecting the target region. In some aspects, detecting the target region may include: providing the image to a machine learning network; and receiving an output from the machine learning network in response to the machine learning network processing the image based on a detection model, the output including the target region.
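The described sequence (capture image, detect target region, identify the device, extract the measurement) can be sketched as a pipeline. The detector and reader below are mocks, and all names (`patrol_read`, `DEVICE_REGISTRY`, the device id) are hypothetical; a real system would run a trained detection model and an OCR or dial-reading stage.

```python
# Minimal, hypothetical sketch of the patrol-reading pipeline.

def detect_target_region(image):
    """Mock detector: returns a bounding box and device id for the meter.
    A real system would obtain these from a machine learning network."""
    return {"bbox": (10, 20, 64, 64), "device_id": "pump-3-pressure"}

# Identification info per known device (assumed registry structure).
DEVICE_REGISTRY = {
    "pump-3-pressure": {"unit": "bar", "range": (0.0, 16.0)},
}

def read_measurement(image, region):
    """Mock reader: would crop `region` from `image` and run OCR /
    gauge reading; here it returns a fixed value."""
    return 7.5

def patrol_read(image):
    region = detect_target_region(image)
    info = DEVICE_REGISTRY[region["device_id"]]
    value = read_measurement(image, region)
    lo, hi = info["range"]
    return {"device": region["device_id"], "value": value,
            "unit": info["unit"], "in_range": lo <= value <= hi}
```

Keeping detection, identification, and extraction as separate stages mirrors the claim structure: the target region drives both the identification lookup and the measurement extraction.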
TRANSFORMER-BASED TEMPORAL DETECTION IN VIDEO
With rapidly evolving technologies and emerging tools, the volume of sports-related videos generated online is increasing rapidly. To automate the sports video editing/highlight generation process, a key task is to precisely recognize and locate events of interest in videos. Embodiments herein comprise a two-stage paradigm to detect categories of events and when these events happen in videos. In one or more embodiments, multiple action recognition models extract high-level semantic features, and a transformer-based temporal detection module locates target events. These approaches achieved state-of-the-art performance in both action spotting and replay grounding. While presented in the context of sports, it shall be noted that the systems and methods herein may be used for videos comprising other content and events.
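The mechanism at the heart of the second stage, attending over per-frame features to localize an event in time, can be illustrated with plain scaled dot-product attention. This is a toy sketch under assumed shapes (`frame_feats` is T x D from the stage-one recognition models, `query` a D-vector event embedding), not the patented detection module.

```python
import numpy as np

def temporal_attention_scores(frame_feats, query):
    """Score each frame's relevance to an event query with scaled
    dot-product attention; returns a softmax distribution over the
    T frames, whose peak indicates where the event likely occurs."""
    d = frame_feats.shape[1]
    logits = frame_feats @ query / np.sqrt(d)
    weights = np.exp(logits - logits.max())  # numerically stable softmax
    return weights / weights.sum()
```

A full transformer head stacks learned projections and multiple such attention operations, but the temporal-localization intuition (frames most similar to the event query receive the most weight) is the same.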