Patent classifications
G06T2207/30241
Systems and Methods for Image Based Perception
Systems and methods for image-based perception. The methods comprise: capturing images by a plurality of cameras with overlapping fields of view; generating, by a computing device, spatial feature maps indicating locations of features in the images; identifying, by the computing device, overlapping portions of the spatial feature maps; generating, by the computing device, at least one combined spatial feature map by combining the overlapping portions of the spatial feature maps together; and/or using, by the computing device, the at least one combined spatial feature map to define a predicted cuboid for at least one object in the images.
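The combining step above can be illustrated with a minimal sketch. This is not the patented implementation; the function name `combine_feature_maps`, the list-of-rows representation, and the choice to average the overlapping columns are all assumptions for illustration only.

```python
def combine_feature_maps(map_a, map_b, overlap_cols):
    """Merge two spatial feature maps (lists of rows of floats, equal
    height) whose trailing/leading `overlap_cols` columns cover the
    same region, averaging the features in the overlap."""
    combined = []
    for row_a, row_b in zip(map_a, map_b):
        left = row_a[:-overlap_cols] if overlap_cols else row_a
        # Average the overlapping portion of the two maps.
        overlap = [(a + b) / 2
                   for a, b in zip(row_a[-overlap_cols:], row_b[:overlap_cols])]
        right = row_b[overlap_cols:]
        combined.append(left + overlap + right)
    return combined
```

For example, two one-row maps `[[1, 2, 3]]` and `[[3, 4, 5]]` sharing one column combine into a single wider map `[[1, 2, 3.0, 4, 5]]`.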
Systems and Methods for Image Based Perception
Systems and methods for image-based perception. The methods comprise: obtaining, by a computing device, images captured by a plurality of cameras with overlapping fields of view; generating, by the computing device, spatial feature maps indicating locations of features in the images; defining, by the computing device, predicted cuboids at each location of an object in the images based on the spatial feature maps; and assigning, by the computing device, at least two cuboids of said predicted cuboids to a given object when predictions from images captured by separate cameras of the plurality of cameras should be associated with a same detected object.
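The assignment step, in which cuboids predicted from different cameras are attributed to one object, could be sketched as a greedy center-distance association. The function `associate_cuboids`, the dictionary layout, and the distance threshold are illustrative assumptions, not the patent's actual association rule.

```python
import math

def associate_cuboids(cuboids, max_center_dist=1.0):
    """Greedily group predicted cuboids whose 3D centers lie within
    max_center_dist of each other, treating each group as predictions
    of the same object seen from different cameras."""
    groups = []
    for cub in cuboids:
        for group in groups:
            if math.dist(cub["center"], group[0]["center"]) <= max_center_dist:
                group.append(cub)
                break
        else:
            # No nearby group: this cuboid starts a new object.
            groups.append([cub])
    return groups
```

Two cuboids from cameras A and B with centers 0.2 m apart end up in one group, while a distant cuboid forms its own.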
Control apparatus, control system, control method, and storage medium
A control apparatus including an extraction unit configured to extract a subject from an image captured by an image capturing apparatus, an estimation unit configured to estimate a skeleton of the subject extracted by the extraction unit, and a control unit configured to control an angle of view of the image capturing apparatus based on a result of the estimation by the estimation unit.
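One plausible reading of the control unit is a proportional zoom adjustment driven by the estimated skeleton's size in the frame. The function `adjust_zoom`, the target fraction of 0.6, and the proportional rule are illustrative assumptions, not the patented control law.

```python
def adjust_zoom(current_zoom, skeleton_height_frac, target_frac=0.6):
    """Rescale the zoom so the estimated skeleton spans target_frac of
    the frame height; a larger zoom narrows the angle of view."""
    return current_zoom * target_frac / skeleton_height_frac
```

If the skeleton currently fills 30% of the frame at zoom 1.0, the controller doubles the zoom to reach the 60% target.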
Battery efficient wireless network connection and registration for a low-power device
A client device is configured to communicate with an access point over a wireless network, exchanging data with the access point over a selected communication channel. The client device stores an identifier of the selected communication channel. After the wireless connection to the access point has ended, the client device initiates a process to reconnect to the access point over the selected communication channel using the stored identifier.
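The battery-saving idea, remembering the channel so that reconnection can skip a full scan, can be sketched as follows. The class `LowPowerClient` and the mapping-based scan interface are hypothetical stand-ins for the device's radio API.

```python
class LowPowerClient:
    """Remembers the last working channel so that reconnection can
    probe it first, avoiding a full channel scan and its radio-on cost."""

    def __init__(self):
        self.stored_channel = None

    def connect(self, ap_on_channel):
        # ap_on_channel: mapping of channel number -> AP reachable?
        # Fast path: probe only the remembered channel.
        if self.stored_channel is not None and ap_on_channel.get(self.stored_channel):
            return self.stored_channel
        # Slow path: full scan, then remember the channel that worked.
        for channel, reachable in ap_on_channel.items():
            if reachable:
                self.stored_channel = channel
                return channel
        return None
```

After a first full scan finds the AP on channel 6, a later reconnect supplies only that channel and succeeds without scanning the rest.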
PEDESTRIAN SEARCH METHOD, SERVER, AND STORAGE MEDIUM
Provided are a pedestrian search method, a server, and a storage medium. The pedestrian search method is as follows: pedestrian detection is performed on each segment of monitoring video to obtain multiple pedestrian tracks, where each pedestrian track of the multiple pedestrian tracks includes multiple video frame images of a same pedestrian; pedestrian tracks belonging to the same pedestrian are determined according to the video frame images in the multiple pedestrian tracks, and the pedestrian tracks of the same pedestrian are merged.
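The merging step could be approximated by comparing an appearance feature per track and greedily merging tracks whose features are similar. The function `merge_tracks`, the cosine-similarity criterion, and the 0.9 threshold are assumptions for illustration, not the patent's matching procedure.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def merge_tracks(tracks, threshold=0.9):
    """Merge track fragments (feature_vector, frame_list pairs) whose
    appearance features are similar enough to be the same pedestrian."""
    merged = []
    for feat, frames in tracks:
        for m in merged:
            if cosine(feat, m[0]) >= threshold:
                m[1].extend(frames)   # same pedestrian: pool the frames
                break
        else:
            merged.append([feat, list(frames)])
    return merged
```

Two fragments with nearly parallel feature vectors collapse into one track; an orthogonal vector stays separate.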
Avian detection systems and methods
Provided herein are detection systems and related methods for detecting moving objects in an airspace surrounding the detection system. In an aspect, the moving object is a flying animal, and the detection system comprises a first imager and a second imager that determine the position of the moving object; for moving objects within a user-selected distance from the system, the system determines whether the moving object is a flying animal, such as a bird or a bat. The systems and methods are compatible with wind turbines, identifying avians of interest in the airspace around wind turbines and, if necessary, taking action to minimize the risk of an avian strike by a wind turbine blade.
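Two imagers with a known separation can recover range by classic stereo triangulation, which is one plausible way the position determination above could work. The function `stereo_range` and its parameters are illustrative; the patent does not specify this formula.

```python
def stereo_range(focal_px, baseline_m, disparity_px):
    """Range to an object seen by two imagers separated by baseline_m,
    from the pixel disparity between the two views:
    range = focal * baseline / disparity."""
    return focal_px * baseline_m / disparity_px
```

With a 1000 px focal length, a 0.5 m baseline, and a 10 px disparity, the object is 50 m away, close enough, perhaps, to trigger the flying-animal classifier.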
Cross reality system with fast localization
A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps, even maps of very large environments, and render virtual content specified in relation to those maps. The cross reality system may quickly process a batch of images acquired with a portable device to determine whether there is sufficient consistency across the batch in the computed localization. Processing on at least one image from the batch may determine a rough localization of the device to the map. This rough localization result may be used in a refined localization process for the image for which it was generated. The rough localization result may also be selectively propagated to a refined localization process for other images in the batch, enabling rough localization processing to be skipped for the other images.
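The rough-then-refined batch flow can be sketched with placeholder localization callbacks. Everything here, `localize_batch`, the `consistent` check, and the fallback to a fresh rough pass, is an illustrative reading of the abstract, not the system's actual pipeline.

```python
def localize_batch(images, rough_localize, refine, consistent):
    """Rough-localize only the first image; reuse that coarse result as
    the prior for refining the rest of the batch, re-running the rough
    pass only when a refined pose disagrees with the prior."""
    rough = rough_localize(images[0])
    poses = [refine(images[0], rough)]
    for img in images[1:]:
        pose = refine(img, rough)          # propagate the rough result
        if not consistent(pose, rough):
            rough = rough_localize(img)    # fall back to a fresh rough pass
            pose = refine(img, rough)
        poses.append(pose)
    return poses
```

With stub callbacks, a consistent three-image batch triggers exactly one rough-localization call, which is the claimed saving.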
Monitoring device, and method for monitoring a man overboard situation
The invention relates to a monitoring device 1 for monitoring a man-overboard situation in a ship section 5, wherein the ship section 5 is monitored by video technology using at least one camera 2, and the camera 2 is designed to provide surveillance in the form of video data. The monitoring device comprises an analysis device 9 having an interface 10 for transferring the video data. The analysis device 9 is designed to detect a moving object in the ship section 5 on the basis of the video data and to determine a kinematic variable of the moving object. The analysis device 9 is also designed to determine a scale on the basis of the video data and the kinematic variable in order to determine the extent 8 of the moving object, and to evaluate the moving object as a man-overboard event on the basis of its extent 8.
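One way a kinematic variable can yield a scale is via free fall: a falling object's apparent acceleration in pixels, compared with g, gives metres per pixel. The abstract does not name free fall, so this interpretation, along with the size thresholds, is purely an assumption for illustration.

```python
G = 9.81  # free-fall acceleration in m/s^2

def extent_in_metres(pixel_accel, extent_px):
    """If the falling object's apparent acceleration is pixel_accel
    (px/s^2), the scene scale is G / pixel_accel metres per pixel;
    multiply to convert the object's pixel extent to metres."""
    return extent_px * (G / pixel_accel)

def is_man_overboard(extent_m, lo=0.5, hi=2.5):
    # Illustrative thresholds: extents in a human-sized window count.
    return lo <= extent_m <= hi
```

An object accelerating at 9.81 px/s² implies a 1 m-per-pixel scale, so a 1.8 px extent maps to 1.8 m and would be flagged; a 6 m extent would not.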
Synthesizing three-dimensional visualizations from perspectives of onboard sensors of autonomous vehicles
Aspects of the disclosure provide for generating a visualization of a three-dimensional (3D) world view from the perspective of a camera of a vehicle. For example, images of a scene captured by a camera of the vehicle and 3D content for the scene may be received. A virtual camera model for the camera of the vehicle may be identified. A set of matrices may be generated using the virtual camera model. The set of matrices may be applied to the 3D content to create a 3D world view. The visualization may be generated using the 3D world view as an overlay with the image, and the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content.
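At the core of applying a camera-model matrix set to 3D content is projecting camera-frame points into pixels. A minimal pinhole projection, standing in for the patent's matrix pipeline, might look like this; `project_points` and its intrinsics parameters are illustrative assumptions.

```python
def project_points(points_3d, focal_px, cx, cy):
    """Pinhole projection of camera-frame 3D points (x, y, z) into
    pixel coordinates: u = f*x/z + cx, v = f*y/z + cy."""
    pixels = []
    for x, y, z in points_3d:
        pixels.append((focal_px * x / z + cx, focal_px * y / z + cy))
    return pixels
```

A point on the optical axis lands exactly at the principal point (cx, cy), which is a quick sanity check on any such projection.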
METHOD AND APPARATUS FOR IDENTIFYING INPUT FEATURES FOR LATER RECOGNITION
Disclosed are a method and an apparatus for recognizing actors during normal system operation. The method includes defining actor input such as hand gestures, executing and detecting the input, and identifying salient features of the actor therein. A model is defined from the salient features, and a data set of the salient features and/or the model is retained and may be used to identify actors for other inputs. A command such as “unlock” may be executed in response to actor input. Parameters may be applied to further define where, when, and how actor input is executed, such as defining a region for a gesture. The apparatus includes a processor and a sensor, the processor defining the actor input, identifying the salient features, defining a model therefrom, and retaining a data set. A display may also be used to show the actor input, a defined region, relevant information, and/or an environment. A stylus or other non-human actor may be used.
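Identifying an actor from retained salient features could be as simple as tolerance matching against stored models. The function `identify_actor`, the flat feature vectors, and the tolerance value are hypothetical; the patent leaves the matching scheme open.

```python
def identify_actor(features, models, tol=0.1):
    """Return the name of the first retained model whose salient
    features all lie within tol of the observed ones, else None."""
    for name, model in models.items():
        if len(model) == len(features) and all(
            abs(f - m) <= tol for f, m in zip(features, model)
        ):
            return name
    return None
```

A near match to a stored model identifies that actor, after which a command such as “unlock” could be executed; an unmatched input returns None.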