G06T3/0031

METHOD, SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM FOR FLATTENING THREE-DIMENSIONAL SHOE UPPER TEMPLATE

A method for flattening a three-dimensional shoe upper template is provided. The method includes providing a three-dimensional last model, obtaining a three-dimensional grid model, obtaining a three-dimensional thickened grid model, obtaining a two-dimensional initial-value grid model, and obtaining a two-dimensional grid model with the smallest energy value. A system and a non-transitory computer-readable medium for performing the method are also provided. The method makes it possible to precisely flatten a three-dimensional last model with a non-developable surface and thereby convert the three-dimensional last model into a two-dimensional grid model.
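The abstract does not publish the energy function used in the final step. As a rough, assumed illustration of minimising an energy over a 2D grid layout, the sketch below drives each flattened edge toward its original 3D length (a simple stretch energy) by gradient descent, starting from a 2D initial-value layout; the function name and energy choice are hypothetical.

```python
import numpy as np

def flatten_by_energy(verts3d, edges, init2d, iters=2000, lr=0.05):
    """Move 2D vertices by gradient descent so that each edge's 2D length
    approaches its original 3D length (stretch energy sum of (l - r)^2)."""
    rest = {(i, j): np.linalg.norm(verts3d[i] - verts3d[j]) for i, j in edges}
    pts = np.array(init2d, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(pts)
        for (i, j), r in rest.items():
            d = pts[i] - pts[j]
            length = np.linalg.norm(d) + 1e-12
            g = 2.0 * (length - r) * d / length   # gradient of (length - r)^2
            grad[i] += g
            grad[j] -= g
        pts -= lr * grad
    return pts
```

For a non-developable surface the energy cannot reach zero everywhere; a minimiser of this kind distributes the residual distortion over the grid.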

TARGET DETECTION AND MODEL TRAINING METHOD AND APPARATUS, DEVICE AND STORAGE MEDIUM

The present disclosure provides a target detection and model training method and apparatus, a device, and a storage medium, relating to the field of artificial intelligence, in particular to computer vision and deep learning technologies, which may be applied in smart city and intelligent transportation scenarios. The target detection method includes: performing feature extraction processing on an image to obtain image features of a plurality of stages of the image; performing position coding processing on the image to obtain a position code of the image; obtaining detection results for a target in the image at each of the plurality of stages based on the image features of the plurality of stages and the position code; and obtaining a target detection result based on the detection results of the plurality of stages.
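The abstract does not fix the position coding scheme. One common choice in detection models, shown here purely as an assumed illustration, is a 2D sinusoidal code in which half the channels encode the row coordinate and half the column coordinate at several frequencies:

```python
import numpy as np

def position_code(h, w, dim=8, temperature=100.0):
    """Hypothetical 2D sinusoidal position code of shape (h, w, dim);
    dim is assumed divisible by 4 (sin/cos for each of two coordinates)."""
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    freqs = temperature ** (np.arange(dim // 4) / (dim // 4))
    code = []
    for coord in (ys, xs):
        angle = coord[..., None] / freqs      # (h, w, dim // 4)
        code.append(np.sin(angle))
        code.append(np.cos(angle))
    return np.concatenate(code, axis=-1)      # (h, w, dim)
```

Such a code can be added to (or concatenated with) the image features of each stage before predicting the per-stage detection results.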

VOLUMETRIC IMAGE ANNOTATION USING ONE OR MORE NEURAL NETWORKS

Apparatuses, systems, and techniques are presented to predict annotations for objects in images. In at least one embodiment, one or more neural networks are used to help generate one or more segmentation boundaries of one or more objects within one or more digital images, wherein the one or more neural networks are to transform one or more representations of one or more portions of the one or more objects into one or more lower-dimensional representations of the one or more portions of the one or more objects.

Method and apparatus for processing 360-degree image

A communication technique for merging, with an IoT technology, a 5G communication system for supporting a data transmission rate higher than that of a 4G system is provided. The communication technique can be applied to an intelligent service (for example, smart home, smart building, smart city, smart car or connected car, health care, digital education, retail business, and security and safety-related services, and the like) on the basis of a 5G communication technology and an IoT-related technology. A method for processing a 360-degree image is provided. The method includes determining a three-dimensional (3D) model for mapping a 360-degree image; determining a partition size for the 360-degree image; determining a rotational angle for each of the x, y, and z axes of the 360-degree image; determining an interpolation method to be applied when mapping the 360-degree image to a two-dimensional (2D) image; and converting the 360-degree image into the 2D image.
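The claim leaves the 3D model and the interpolation method open. As one assumed concrete instance of the conversion step, the sketch below re-projects an equirectangular 360-degree image after rotating the sphere by angles about the x, y, and z axes, sampling with bilinear interpolation (single-channel image for brevity; all names are illustrative):

```python
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Rotation about the x, then y, then z axis (angles in radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def bilinear(img, fu, fv):
    """Bilinear sampling of a single-channel equirectangular image;
    longitude (u) wraps around, latitude (v) is clamped at the poles."""
    h, w = img.shape[:2]
    u0 = np.floor(fu).astype(int)
    v0 = np.floor(fv).astype(int)
    du, dv = fu - u0, fv - v0
    u1 = (u0 + 1) % w
    u0 = u0 % w
    v1 = np.clip(v0 + 1, 0, h - 1)
    v0 = np.clip(v0, 0, h - 1)
    top = img[v0, u0] * (1 - du) + img[v0, u1] * du
    bot = img[v1, u0] * (1 - du) + img[v1, u1] * du
    return top * (1 - dv) + bot * dv

def remap_equirect(img, rx=0.0, ry=0.0, rz=0.0):
    """Rotate the viewing sphere of an equirectangular image and re-sample."""
    h, w = img.shape[:2]
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    lon = (u + 0.5) / w * 2 * np.pi - np.pi
    lat = (0.5 - (v + 0.5) / h) * np.pi
    d = np.stack([np.cos(lat) * np.cos(lon),
                  np.cos(lat) * np.sin(lon),
                  np.sin(lat)], axis=-1)
    d = d @ rotation_matrix(rx, ry, rz)   # row vectors: applies the inverse rotation
    lat2 = np.arcsin(np.clip(d[..., 2], -1.0, 1.0))
    lon2 = np.arctan2(d[..., 1], d[..., 0])
    fu = (lon2 + np.pi) / (2 * np.pi) * w - 0.5
    fv = (0.5 - lat2 / np.pi) * h - 0.5
    return bilinear(img, fu, fv)
```

A zero rotation reproduces the input, and a rotation of pi about the z axis shifts the image by half its width, which gives a quick sanity check of the mapping.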

Platform For Co-Culture Imaging To Characterize In Vitro Efficacy Of Heterotypic Effector Cellular Therapies In Cancer

A method for characterizing cancer organoid response to an immune cell-based therapy includes providing a panel of different combinations of cancer organoid cells and immune cells to culturing wells and culturing the different combinations under conditions that support organoid growth. Brightfield and corresponding fluorescence images of the culturing wells are captured and provided to one or more trained machine learning algorithms that identify and distinguish cancer organoid cells from immune cells and characterize cancer organoid morphology changes caused by the immune cell-based therapy. From this analysis, an analytical report including a characterization of cancer organoid cell death caused by the immune cell-based therapy is provided.

METHOD AND TERMINAL FOR DETECTING PROTRUSION IN INTESTINAL TRACT, AND COMPUTER-READABLE STORAGE MEDIUM
20220351388 · 2022-11-03

A method of detecting a protrusion in an intestinal tract in a computer according to an embodiment of the present disclosure includes acquiring a three-dimensional model of the intestinal tract scanned by a scanning device, the three-dimensional model comprising three-dimensional data of the intestinal tract; mapping, in the computer, the three-dimensional model to a two-dimensional plane in an area-preserving manner; and detecting an area of the protrusion in the two-dimensional plane. The method can replace traditional procedures such as enteroscopy, detecting the protrusion in the intestinal tract in a painless and low-cost manner.
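Area preservation can be checked numerically. The sketch below (an illustrative test, not the patent's method) compares each triangle's area before and after flattening; an area-preserving map keeps every ratio close to 1:

```python
import numpy as np

def tri_area_3d(a, b, c):
    """Area of a triangle embedded in 3D."""
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def tri_area_2d(a, b, c):
    """Area of a triangle in the plane."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def area_ratios(verts3d, verts2d, tris):
    """Flattened-to-original area ratio per triangle of a mesh."""
    return [tri_area_2d(verts2d[i], verts2d[j], verts2d[k]) /
            tri_area_3d(verts3d[i], verts3d[j], verts3d[k])
            for i, j, k in tris]
```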

Device and method for registering three-dimensional data

A method and a device for registering three-dimensional data are disclosed. The method for registering three-dimensional data comprises: generating first two-dimensional data by two-dimensionally converting first three-dimensional data indicating a surface of a three-dimensional model of a target; generating second two-dimensional data by two-dimensionally converting second three-dimensional data indicating at least a part of the three-dimensional surface of the target; determining a first matching region in the first two-dimensional data and a second matching region in the second two-dimensional data by matching the second two-dimensional data to the first two-dimensional data; setting, as an initial position, a plurality of points of the first three-dimensional data that correspond to the first matching region and a plurality of points of the second three-dimensional data that correspond to the second matching region; and registering the first three-dimensional data and the second three-dimensional data using the initial position.
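The abstract leaves the final registration step open. Once the matching regions yield corresponding 3D point pairs, a common (assumed here, not stated in the patent) choice for the rigid alignment is the Kabsch algorithm:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i
    (Kabsch algorithm); src and dst are (n, 3) corresponding points."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)        # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # guard against a reflection solution
    R = Vt.T @ D @ U.T
    t = dc - R @ sc
    return R, t
```

Seeding the correspondences from the 2D matching regions, as the method describes, gives this solver (or an iterative refinement such as ICP) a good initial position.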

Apparatus, a method and a computer program for volumetric video
11599968 · 2023-03-07

Embodiments for volumetric video encoding and decoding relating to one or more three-dimensional objects are disclosed. In encoding, after mapping from 3D space to a 2D plane (802), a point in the 2D plane is examined (805) to determine which points of the 3D object are mapped to the same point, giving a set of candidate points. Candidate points belonging to the same surface can be used to determine a centre of mass for the surface (807). The depth value of the centre of mass is mapped to a 2D projection depth plane (808). A colour value for the centre of mass is interpolated from the colour values of the surface points that are nearest neighbours of the centre of mass (810), and is used as the colour of the surface in the texture plane (812). Corresponding embodiments for decoding are provided.
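For one 2D pixel, the per-surface computation can be sketched as follows (an illustrative reading of steps 807-812; the weighting scheme and function name are assumptions, and the depth axis is taken as z):

```python
import numpy as np

def surface_pixel(points, colours, k=3):
    """Candidate 3D points mapped to the same 2D pixel: take their centre
    of mass, record its depth, and interpolate its colour from the k
    nearest candidate points using inverse-distance weights."""
    com = points.mean(axis=0)
    dist = np.linalg.norm(points - com, axis=1)
    nn = np.argsort(dist)[:k]            # nearest neighbours of the centre of mass
    w = 1.0 / (dist[nn] + 1e-9)
    colour = (colours[nn] * w[:, None]).sum(0) / w.sum()
    depth = com[2]                       # value written to the projection depth plane
    return depth, colour
```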

Face detection in spherical images

A face located along a stitch line in a spherical image is detected by rendering views of regions of the spherical image along the stitch line. The spherical image may be produced by combining first and second images. A first view of a projection of the spherical image is rendered. A scaling factor for rendering a second view of the projection is determined based on characteristics of the first portion of the face. The second view is then rendered according to the scaling factor. The use of the scaling factor to render the second view causes a change in the depiction of the second portion of the face. For example, the scaling factor can indicate that the resolution or expected size of the second portion of the face should be changed when rendering the second view. A face is then detected within the spherical image based on the rendered first and second views.

IMAGE PROJECTION METHOD AND PROJECTOR
20230124225 · 2023-04-20

An image projection method includes projecting an image including a plurality of adjustment points on a screen, determining positions of the plurality of adjustment points on the screen, determining whether each of a plurality of sides connecting adjacent adjustment points is linear or curved, obtaining a geometrically-corrected image by performing geometric correction on the image in a range corresponding to an area defined by the plurality of sides, including the linear and curved sides, based on the plurality of sides defining the area, and projecting the geometrically-corrected image on the screen.
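The linear-versus-curved decision for a side can be made, for example, by fitting a least-squares line through the sampled points of the side and thresholding the worst residual (an assumed criterion; the abstract does not fix one):

```python
import numpy as np

def classify_side(points, tol=0.5):
    """Classify a side through adjustment points as 'linear' or 'curved':
    find the principal direction of the points and compare the largest
    perpendicular residual to a pixel tolerance."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    direction = vt[0]                                  # best-fit line direction
    normal = np.array([-direction[1], direction[0]])
    residual = np.abs(centred @ normal)                # distance to the fitted line
    return "curved" if residual.max() > tol else "linear"
```

Geometric correction can then treat linear sides with a plain perspective warp while curved sides require a higher-order (e.g. spline-based) warp over the enclosed area.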