G06V20/20

EXTRACTING INFORMATION ABOUT PEOPLE FROM SENSOR SIGNALS

There is provided a computer-implemented method of extracting information about a person. Incoming sensor signals for monitoring people within a field of view of a sensor system are received and processed. In response to detecting a person located within a notification region, an output device outputs a notification to the detected person. Processing of the incoming sensor signals continues in order to monitor the person's behaviour patterns and determine from them whether the person is currently in a consenting or non-consenting state. An extraction function attempts to extract information about the person irrespective of the determined state. A sharing function determines, in accordance with the determined state, whether or not to share an extracted piece of information about the person with a receiving entity, the information not being shared unless and until it is subsequently determined that the person is in the consenting state.
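The gating behaviour described above can be sketched minimally as follows; the class, field names, and example items here are hypothetical illustrations, not the patent's implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConsentGatedBuffer:
    """Holds extracted items; releases them to the receiving entity
    only while the person is judged to be in the consenting state."""
    pending: List[str] = field(default_factory=list)
    shared: List[str] = field(default_factory=list)
    consenting: bool = False

    def extract(self, item: str) -> None:
        # Extraction proceeds irrespective of the determined state.
        self.pending.append(item)
        self._flush()

    def update_state(self, consenting: bool) -> None:
        # State is re-evaluated as behaviour patterns are monitored.
        self.consenting = consenting
        self._flush()

    def _flush(self) -> None:
        # Items are shared only while the consenting state holds.
        if self.consenting:
            self.shared.extend(self.pending)
            self.pending.clear()

buf = ConsentGatedBuffer()
buf.extract("gait-signature")   # held back: state not yet consenting
buf.update_state(True)          # consent determined -> pending released
buf.extract("face-embedding")   # shared immediately while consenting
```

The point of the buffer is that extraction and sharing are decoupled: an item extracted during a non-consenting state is retained, not discarded, and is released only once the consenting state is subsequently determined.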

SYSTEMS AND METHODS FOR MASKING A RECOGNIZED OBJECT DURING AN APPLICATION OF A SYNTHETIC ELEMENT TO AN ORIGINAL IMAGE
20230050857 · 2023-02-16 ·

An exemplary object masking system is configured to mask a recognized object during an application of a synthetic element to an original image. For example, the object masking system accesses a model of a recognized object depicted in an original image of a scene. The object masking system associates the model with the recognized object. The object masking system then generates presentation data for use by a presentation system to present an augmented version of the original image in which a synthetic element added to the original image is, based on the model as associated with the recognized object, prevented from occluding at least a portion of the recognized object. In this way, the synthetic element is made to appear as if located behind the recognized object. Corresponding systems and methods are also disclosed.
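The occlusion effect described above can be illustrated with a per-pixel depth test, where the synthetic element is drawn only where it is nearer to the camera than the recognized object's model; the 1-D "image", labels, and depth values below are illustrative assumptions, not the patent's data model:

```python
def composite(original, synthetic, object_depth, synthetic_depth):
    """Return the augmented image: a synthetic pixel replaces the
    original pixel only where the synthetic element is closer to the
    camera (smaller depth) than the recognized object's model."""
    out = []
    for o, s, od, sd in zip(original, synthetic, object_depth, synthetic_depth):
        out.append(s if s is not None and sd < od else o)
    return out

# 1-D toy "image": the recognized object occupies the middle pixels
original        = ["bg", "obj", "obj", "bg"]
object_depth    = [9.0,  2.0,  2.0,  9.0]   # background far, object near
synthetic       = ["el", "el", None, None]  # element covers the left half
synthetic_depth = [5.0,  5.0,  5.0,  5.0]   # element sits behind the object

augmented = composite(original, synthetic, object_depth, synthetic_depth)
# element shows over the background, but the object masks it
```

Because the object's model depth (2.0) is nearer than the element's (5.0) wherever they overlap, the element is prevented from occluding the object and appears to be located behind it.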

METHODS AND SYSTEMS FOR OBTAINING A SCALE REFERENCE AND MEASUREMENTS OF 3D OBJECTS FROM 2D PHOTOS
20230052613 · 2023-02-16 ·

Disclosed are systems and methods for obtaining a scale factor and 3D measurements of objects from a series of 2D images. An object to be measured is selected from a menu of an Augmented Reality (AR) based measurement application executed by a mobile computing device. Measurement instructions corresponding to the selected object are retrieved and used to generate a series of image capture screens, which assist the user in positioning the device relative to the object in a plurality of imaging positions to capture the series of 2D images. The images are used to determine one or more scale factors and to build a complete scaled 3D model of the object in virtual 3D space, and the 3D model is used to generate one or more measurements of the object.
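The role of the scale factor can be sketched in isolation: if a reference of known real-world size spans a measured number of model-space units, their ratio converts any model-space dimension into real units. The reference object, sizes, and function names below are assumptions for illustration; the capture screens and 3D reconstruction themselves are out of scope:

```python
def scale_factor(known_size_cm: float, measured_units: float) -> float:
    """Real-world size of a known reference divided by its extent
    in model-space units gives cm per model unit."""
    return known_size_cm / measured_units

def measure(model_units: float, factor: float) -> float:
    """Convert a model-space dimension to real-world centimetres."""
    return model_units * factor

# e.g. a credit card (8.56 cm) spans 4.28 model units -> 2.0 cm/unit
f = scale_factor(8.56, 4.28)
width_cm = measure(10.0, f)   # a 10-unit edge measures 20.0 cm
```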

Generating Computer Augmented Maps from Physical Maps
20230050644 · 2023-02-16 ·

A method by a computing device obtains a digital image of a physical map, identifies features in the digital image, and obtains map augmentation information based on the identified features. The method then generates an augmented map based on the map augmentation information, and provides the augmented map for display. Related mobile devices and computer program products are disclosed.
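The pipeline described above (identify features, obtain augmentation information for them, generate the augmented map) can be sketched as below; the feature names and the lookup-table stand-in for the augmentation source are hypothetical:

```python
# Stand-in for the source of map augmentation information.
AUGMENTATIONS = {
    "trailhead": "current trail conditions",
    "summit": "live weather",
}

def identify_features(image_tokens):
    """Stand-in for feature identification on the digital image of
    the physical map: keep tokens with known augmentation info."""
    return [t for t in image_tokens if t in AUGMENTATIONS]

def augment(image_tokens):
    """Generate an augmented map keyed by identified features."""
    features = identify_features(image_tokens)
    return {f: AUGMENTATIONS[f] for f in features}

aug_map = augment(["river", "trailhead", "summit"])
```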

CONSTRUCTION OF ENVIRONMENT VIEWS FROM SELECTIVELY DETERMINED ENVIRONMENT IMAGES
20230051775 · 2023-02-16 ·

A computing system may include a client device and a server. The client device may be configured to access a stream of image frames that depict an environment, determine, from the stream of image frames, environment images that satisfy selection criteria, and transmit the environment images to the server. The server may be configured to receive the environment images from the client device, construct a spatial view of the environment based on position data included with the environment images, and navigate the spatial view, including by receiving a movement direction and progressing from a current environment image depicted for the spatial view to a next environment image based on the movement direction.
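The client-side selection and server-side navigation can be sketched as follows; the particular selection criterion (minimum travel distance from the last kept frame) and the frame fields are illustrative assumptions:

```python
def select_frames(frames, min_step=1.0):
    """Client side: keep only frames that satisfy the selection
    criterion, here a minimum positional step from the last kept frame."""
    kept, last = [], None
    for f in frames:
        if last is None or abs(f["x"] - last["x"]) >= min_step:
            kept.append(f)
            last = f
    return kept

def navigate(view, index, direction):
    """Server side: progress from the current environment image to the
    next based on the movement direction (+1 forward, -1 back)."""
    return max(0, min(len(view) - 1, index + direction))

stream = [{"x": 0.0}, {"x": 0.2}, {"x": 1.1}, {"x": 1.5}, {"x": 2.4}]
view = select_frames(stream)   # keeps the frames at x = 0.0, 1.1, 2.4
pos = navigate(view, 0, +1)    # advance to the frame at x = 1.1
```

Filtering on the client keeps redundant near-duplicate frames off the network, while the server orders whatever it receives into a navigable spatial view.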

AUGMENTED REALITY OBJECT MANIPULATION
20230052265 · 2023-02-16 ·

A processing system having at least one processor may detect a first object in a first video of a first user and detect a second object in a second video of a second user, where the first video and the second video are part of a visual communication session between the first user and the second user. The processing system may further detect a first action in the first video relative to the first object, detect a second action in the second video relative to the second object, detect a difference between the first action and the second action, and provide a notification indicative of the difference.
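The final comparison-and-notify step can be sketched minimally; the action labels and notification string are hypothetical, and the per-video action detection itself is out of scope:

```python
def compare_actions(first_action: str, second_action: str):
    """Return a notification when the action detected in the first
    video differs from the action detected in the second, else None."""
    if first_action != second_action:
        return f"difference detected: '{first_action}' vs '{second_action}'"
    return None

note = compare_actions("rotate-left", "rotate-right")  # sides disagree
same = compare_actions("lift", "lift")                 # no notification
```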

SYSTEMS AND METHODS FOR PROVIDING DISPLAYED FEEDBACK WHEN USING A REAR-FACING CAMERA

A system includes a processor and a non-transitory computer-readable medium containing instructions that, when executed by the processor, cause the processor to perform operations comprising displaying a prompt to a user of a mobile device, on a display of the mobile device, to capture an image representing at least a portion of the user's mouth using a rear-facing camera of the mobile device, where the rear-facing camera is on the side of the mobile device opposite the side including the display. The operations further comprise controlling the rear-facing camera to enable it to capture the image, receiving the image, and outputting user feedback based on the image, where the user feedback is output on the display that is on the opposite side of the mobile device from the rear-facing camera.

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND INFORMATION PROCESSING METHOD

An information processing apparatus includes a processor configured to: obtain a video and an instruction to generate a still image from the video, the video being a video in which a work target is photographed, the work target being a target on which to work; generate the still image in response to the instruction, the still image being cut from the video including the work target; specify the work target in the video, position information, and a superimposition area by using the still image, the position information describing a position of the work target, the superimposition area being an area in which an image is superimposed, the image being obtained by using the position of the work target as a reference; receive instruction information indicating an instruction for work on the work target; and superimpose and display an instruction image in the superimposition area in the video, the instruction image being an image according to the instruction information.
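The placement of the superimposition area relative to the work-target position, and the overlay of the instruction image onto a video frame, can be sketched as below; the offsets, sizes, field names, and instruction label are illustrative assumptions:

```python
def superimposition_area(target_xy, offset=(0, -40), size=(80, 30)):
    """Place the superimposition area using the work target's
    position as a reference (here, a fixed offset above it)."""
    x, y = target_xy
    dx, dy = offset
    w, h = size
    return (x + dx, y + dy, w, h)

def overlay(frame, area, instruction_image):
    """Superimpose the instruction image in the area on a video frame."""
    frame = dict(frame)
    frame["overlays"] = frame.get("overlays", []) + [(area, instruction_image)]
    return frame

area = superimposition_area((120, 200))   # anchored above the work target
frame = overlay({"id": 7}, area, "tighten-bolt-arrow")
```

Because the area is computed from the work target's position, the instruction image tracks the target as its position is re-specified across frames of the video.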

AUGMENTED REALITY GUIDANCE OVERLAP

Embodiments of the present invention provide computer-implemented methods, computer program products and computer systems. Embodiments of the present invention can, in response to receiving a request, identify a core component from source material based on topic analysis. Embodiments of the present invention can then generate three-dimensional representations of physical core components associated with the request. Finally, embodiments of the present invention render the generated three-dimensional representations of the physical core components over the physical core components.