G06T3/02

Image recognition method and device

An image recognition method and device are provided according to the disclosure. The method includes: obtaining a target image and extracting at least one first visual feature of the target image; obtaining at least one pending image according to the first visual feature of the target image, and extracting a plurality of second visual features of the target image and a plurality of second visual features of each pending image; for each pending image, forming a plurality of visual feature pairs, and removing any unavailable visual feature pair from the plurality of visual feature pairs to obtain at least one remaining feature pair; and determining an image similar to the target image from the at least one pending image according to the at least one remaining feature pair.
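As an illustration only, not the claimed method, the pairing-and-filtering step described above can be sketched as follows. The vector representation of features, the nearest-neighbour pairing rule, and the max_dist threshold used to discard "unavailable" pairs are all assumptions introduced for the sketch:

```python
import math

def feature_distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_and_filter(target_feats, pending_feats, max_dist=0.5):
    """Pair each target feature with its nearest pending feature, then
    remove ("unavailable") pairs whose distance exceeds max_dist."""
    remaining = []
    for tf in target_feats:
        best = min(pending_feats, key=lambda pf: feature_distance(tf, pf))
        if feature_distance(tf, best) <= max_dist:
            remaining.append((tf, best))
    return remaining

def most_similar(target_feats, pending_images):
    """Rank pending images by their number of remaining feature pairs."""
    return max(pending_images,
               key=lambda img: len(match_and_filter(target_feats, img["features"])))
```

With target features [(0, 0), (1, 1)], an image whose features lie near both points retains two pairs and outranks one whose features are far away.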

Method for developing augmented reality experiences in low computer power systems and devices
10762713 · 2020-09-01

A method and system for generating an augmented reality experience without a physical marker. At least two frames from a video stream are collected and one of the frames is designated as a first frame. The graphical processor of a device prepares the two collected frames for analysis, and features from the two collected frames are selected for comparison. The central processor of the device isolates points on the same plane as a tracked point in the first frame and calculates a position of a virtual object in a second frame in 2D. The next frame from the video stream is collected, and the process is repeated until the user navigates away from the URL or webpage, or until the camera is turned off. The central processor renders the virtual object on a display of the device.
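A much-reduced sketch of the per-frame loop, not the patented method: here "same plane" is simplified to equal depth, and the virtual object is assumed to follow the tracked point's pure 2D translation between frames:

```python
def coplanar_points(points, tracked, tol=1e-6):
    """Keep only points on the same plane as the tracked point; for this
    sketch 'same plane' is approximated as equal depth (z component)."""
    return [p for p in points if abs(p[2] - tracked[2]) < tol]

def update_object_position(obj_xy, tracked_frame1, tracked_frame2):
    """Move the virtual object by the 2D displacement of the tracked point
    between the first and second frame."""
    dx = tracked_frame2[0] - tracked_frame1[0]
    dy = tracked_frame2[1] - tracked_frame1[1]
    return (obj_xy[0] + dx, obj_xy[1] + dy)
```

In the described system this pair of steps would run once per collected frame until the stream ends.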

METHODS AND APPARATUS FOR THE APPLICATION OF MACHINE LEARNING TO RADIOGRAPHIC IMAGES OF ANIMALS
20200273166 · 2020-08-27

Methods and apparatus for the application of machine learning to radiographic images of animals. In one embodiment, the method includes receiving a set of radiographic images captured of an animal, applying one or more transformations to the set of radiographic images to create a modified set, segmenting the modified set using one or more segmentation artificial intelligence engines to create a set of segmented radiographic images, feeding the set of segmented radiographic images to respective ones of a plurality of classification artificial intelligence engines, outputting results from the plurality of classification artificial intelligence engines for the set of segmented radiographic images to an output decision engine, and adding the set of segmented radiographic images and the output results from the plurality of classification artificial intelligence engines to a training set for one or more of the plurality of classification artificial intelligence engines. Computer-readable apparatus and computing systems are also disclosed.
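The transform → segment → classify → decide → retrain loop above can be sketched generically. The majority-vote decision engine and the callable engines are assumptions for illustration; the abstract does not specify how the output decision engine combines classifier results:

```python
def run_pipeline(images, transform, segmenters, classifiers, training_set):
    """Transform images, segment them, classify each segmented image with
    every classifier, combine results, and grow the training set."""
    modified = [transform(img) for img in images]
    segmented = [seg(img) for seg in segmenters for img in modified]
    results = []
    for img in segmented:
        votes = [clf(img) for clf in classifiers]
        # assumed output decision engine: simple majority vote over labels
        decision = max(set(votes), key=votes.count)
        results.append(decision)
        training_set.append((img, decision))  # feed back for retraining
    return results
```

Any callables can stand in for the AI engines; the point is the data flow, not the models.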

DISPLAYING OBLIQUE IMAGERY
20200273150 · 2020-08-27

An oblique imagery application receives an oblique image captured by an oblique camera at a non-orthogonal angle with respect to a ground plane, and map data including a map tile corresponding to geographic coordinates. A principal axis is determined that is orthogonal to an image plane defined by the oblique image and that intersects a center of the oblique image. For each pixel of the oblique image, a pixel vector is determined, and a set of deviation coordinates based on a deviation of the pixel vector from the principal axis is determined for the pixel, with the pixel vector of a pixel passing through a focal point of the oblique camera and ending at the pixel. The map tile is associated with the pixels of the oblique image based on the deviation coordinates of the pixels, the oblique camera parameters, and the geographic coordinates of the map tile.
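One plausible reading of the deviation coordinates, offered only as a sketch: place the focal point at the origin with the principal axis along +z, describe a pixel by its image-plane offset (u, v) at focal length f, and take the deviation coordinates as the two angles between the pixel vector and the axis:

```python
import math

def deviation_coordinates(u, v, focal_length):
    """Angular deviation of the pixel vector (u, v, focal_length) from the
    principal axis, which runs along +z through the image center (0, 0)."""
    return (math.atan2(u, focal_length), math.atan2(v, focal_length))
```

The center pixel deviates by (0, 0); a pixel offset by one focal length along u deviates by 45 degrees in that direction.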

METHOD FOR DERIVING ADDITIONAL AND FURTHER PICTURES FROM AN ORIGINAL PICTURE, AND DEVICE APPLYING THE METHOD
20200273143 · 2020-08-27

A method for deriving further and additional pictures from an original picture, for Artificial Intelligence (AI) training purposes, is applied in a device. The device establishes an original picture set and sets the original pictures as a training picture set for AI training. The original pictures are rotated or flipped or both to obtain amplification pictures. The original pictures are annotated, and each of the amplification pictures is annotated according to a preset conversion rule. The original pictures, the amplification pictures, the annotated original pictures, and the annotated amplification pictures are stored, for inclusion in the AI training picture set.
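A "preset conversion rule" for annotations can be made concrete for axis-aligned bounding boxes. The box format (x0, y0, x1, y1) and the specific rotation/flip pair are assumptions for this sketch:

```python
def rotate90_box(box, w, h):
    """Map a bounding box (x0, y0, x1, y1) through a 90-degree clockwise
    rotation of a w-by-h picture: point (x, y) goes to (h - y, x)."""
    x0, y0, x1, y1 = box
    return (h - y1, x0, h - y0, x1)

def hflip_box(box, w, h):
    """Map the same box through a horizontal flip: point (x, y) goes
    to (w - x, y)."""
    x0, y0, x1, y1 = box
    return (w - x1, y0, w - x0, y1)
```

Applying such rules to every original annotation yields the annotations of the amplification pictures without re-labelling by hand.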

METHOD FOR GENERATING AND MODIFYING IMAGES OF A 3D SCENE

A method of generating and modifying a 2D image of a 3D scene, the method including the steps of: processing an image of a 3D scene to generate a set of data points representative of the 3D scene and 3D objects within the scene; retrieving one or more data points from the set of data points; transforming the one or more data points according to one or more mathematical conversion functions, including: a function defining a projection trajectory for each data point; a function defining a geometry of a projection surface for each data point; a function defining a projection volume for each data point; a function defining an angle of projection of each data point with respect to a convergence point on a projection surface; and a function defining the size-to-distance ratio of each data point from a projection surface; generating a transformed set of data points; projecting the transformed set of data points representative of a modified 2D image of the 3D scene; and rendering the projected transformed set of data points into a 2D image on a display.
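As one concrete instance of the conversion functions named above, and only as a sketch: choose a straight-line projection trajectory toward a convergence point at the origin and a flat projection surface at z = d, so the size-to-distance ratio reduces to the scale factor d/z:

```python
def perspective_project(points, surface_distance):
    """Project 3D points toward a convergence point at the origin onto a
    flat projection surface at z = surface_distance. The scale factor
    surface_distance / z is the point's size-to-distance ratio."""
    projected = []
    for x, y, z in points:
        scale = surface_distance / z
        projected.append((x * scale, y * scale))
    return projected
```

Swapping in a curved surface geometry or a different trajectory function would modify the resulting 2D image, which is the point of the method.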

METHOD AND DEVICE FOR PREDICTIVE ENCODING/DECODING OF A POINT CLOUD

This method for inter-predictive encoding of a time-varying 3D point cloud, including a series of successive frames divided into 3D blocks, into at least one bitstream comprises encoding (20) 3D motion information including a geometric transformation comprising rotation information representative of a rotation transformation and translation information representative of a translation transformation, wherein the translation information comprises a vector T representing an estimation error of the translation transformation.
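The geometric transformation and the error vector T can be sketched as follows; the matrix/vector representation is an assumption, and the key idea is that only the residual between the true and estimated translation is encoded:

```python
def apply_transform(points, rotation, translation):
    """Apply a 3x3 rotation matrix and a translation vector to each point,
    predicting a 3D block of the current frame from a reference block."""
    out = []
    for p in points:
        rp = [sum(rotation[i][j] * p[j] for j in range(3)) for i in range(3)]
        out.append(tuple(rp[i] + translation[i] for i in range(3)))
    return out

def encode_translation(true_translation, estimated_translation):
    """Encode the vector T: the estimation error between the true and the
    estimated translation transformation."""
    return tuple(t - e for t, e in zip(true_translation, estimated_translation))
```

A decoder holding the same estimate recovers the true translation by adding T back.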

Creating a floor plan from images in spherical format
10740870 · 2020-08-11

A method and apparatus are provided for creating a floor plan from a spherical image. A spherical format is created of an image obtained by a camera, wherein the spherical format has a centre that corresponds to the position from which the image was obtained by the camera, and wherein a first surface represented in the image had a first orientation and was at a first distance from the camera when the image was obtained. A plurality of selected points are obtained in the spherical format, each defined by spherical coordinates consisting of a yaw angle and a pitch angle defining a line from the centre. A first plane is identified that has the first orientation and that is at the first distance from the centre of the sphere. For each of the selected points, a location in a Cartesian coordinate system is identified where the line from the centre of the sphere to the selected point intersects with the first plane, two of the axes of the Cartesian coordinate system being parallel to the first plane. A floor plan is rendered using the locations, which represents the positions of the selected points on the first surface.
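For the common case where the first surface is a horizontal floor, the ray-plane intersection above can be sketched directly; the axis convention (vertical ray component equal to sin(pitch), camera above the floor) is an assumption of this sketch:

```python
import math

def floor_point(yaw, pitch, camera_height):
    """Intersect the line from the sphere centre defined by (yaw, pitch)
    with a horizontal floor plane camera_height below the centre, and
    return the Cartesian (x, y) location on that plane."""
    v = math.sin(pitch)              # vertical component of the ray direction
    if v >= 0:
        raise ValueError("ray does not point down toward the floor")
    t = -camera_height / v           # distance along the ray to the plane
    horiz = t * math.cos(pitch)      # horizontal reach on the floor plane
    return (horiz * math.sin(yaw), horiz * math.cos(yaw))
```

A point seen 45 degrees below the horizon from a camera one unit above the floor lands one unit away horizontally, as expected.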

Displaying oblique imagery
10740875 · 2020-08-11

An oblique imagery application receives an oblique image captured by an oblique camera at a non-orthogonal angle with respect to a ground plane, and map data including a map tile corresponding to geographic coordinates. A principal axis is determined that is orthogonal to an image plane defined by the oblique image and that intersects a center of the oblique image. For each pixel of the oblique image, a pixel vector is determined, and a set of deviation coordinates based on a deviation of the pixel vector from the principal axis is determined for the pixel, with the pixel vector of a pixel passing through a focal point of the oblique camera and ending at the pixel. The map tile is associated with the pixels of the oblique image based on the deviation coordinates of the pixels, the oblique camera parameters, and the geographic coordinates of the map tile.

Hybrid feature point/watermark-based augmented reality
10740613 · 2020-08-11

A camera captures video imagery depicting a digitally-watermarked object. A reference signal in the watermark is used to discern the pose of the object relative to the camera, and this pose is used in affine-transforming and positioning a graphic on the imagery as an augmented reality overlay. Feature points are also discerned from the captured imagery, or recalled from a database indexed by the watermark. As the camera moves relative to the object, the augmented reality overlay tracks the changing object depiction, using these feature points. When feature point-based tracking fails, the watermark is again processed to determine pose, and the overlay presentation is updated accordingly. In another arrangement, feature points are extracted from images of supermarket objects captured by multiple users, and are compiled in a database in association with watermark data identifying the objects, serving as a crowd-sourced repository of feature point data. A great number of other features and arrangements are also detailed.
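The hybrid fallback logic above is essentially a two-tier pose source; a minimal sketch, with the tracker and watermark reader supplied as callables (both names are placeholders, not APIs from the patent):

```python
def track_pose(frame, feature_tracker, watermark_reader, last_pose):
    """Prefer fast feature-point tracking; when it fails (returns None),
    fall back to re-reading the watermark to recover the object pose."""
    pose = feature_tracker(frame, last_pose)
    if pose is None:                   # feature-point tracking lost the object
        pose = watermark_reader(frame) # slower, but re-anchors the overlay
    return pose
```

The overlay is then re-rendered from whichever pose source succeeded, so tracking degrades gracefully rather than dropping out.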