Patent classifications
G06T2200/04
DEFORMABLE REGISTRATION OF MEDICAL IMAGES
Systems and computer-implemented methods of performing image registration. One method includes receiving a first image and a second image acquired from a patient at different times and, in each of the first image and the second image, detecting an upper boundary of an imaged object in an image coordinate system and detecting a lower boundary of the imaged object in the image coordinate system. The method further includes, based on the upper boundary and the lower boundary of each of the first image and the second image, cropping and padding at least one of the first image and the second image to create an aligned first image and an aligned second image, and executing a registration model on the aligned first image and the aligned second image to compute a deformation field between the aligned first image and the aligned second image.
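As a rough illustration, the boundary-based crop-and-pad alignment step might be sketched as follows. This is an assumption-laden toy (2-D NumPy arrays stand in for medical images, and `detect_boundaries` uses a simple intensity threshold the abstract does not specify); the registration model itself is out of scope here.

```python
import numpy as np

def detect_boundaries(img, threshold=0.0):
    """Return (upper, lower) row indices of the imaged object: the first
    and last rows containing above-threshold intensities."""
    rows = np.where(img.max(axis=1) > threshold)[0]
    return int(rows[0]), int(rows[-1])

def crop_and_pad_to_align(fixed, moving):
    """Crop/pad `moving` vertically so its object boundaries line up with
    those of `fixed` -- a minimal sketch of the alignment step."""
    fu, fl = detect_boundaries(fixed)
    mu, ml = detect_boundaries(moving)
    # Crop the moving image to its object extent, then pad so the object
    # starts at the same row as in the fixed image.
    cropped = moving[mu:ml + 1]
    top_pad = fu
    bottom_pad = fixed.shape[0] - (fu + cropped.shape[0])
    if bottom_pad < 0:  # object taller than the remaining extent: trim
        cropped = cropped[:cropped.shape[0] + bottom_pad]
        bottom_pad = 0
    return np.pad(cropped, ((top_pad, bottom_pad), (0, 0)))
```

The aligned pair would then be fed to the registration model to compute the deformation field.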
MODEL-BASED IMAGE SEGMENTATION
A method and system for mapping boundary-detecting features of at least one source triangulated mesh of known topology to a target triangulated mesh of arbitrary topology. A region of interest in a volumetric image associated with each triangle of the target triangulated mesh is provided to a feature mapping network. The feature mapping network assigns a feature selection vector to each triangle of the target triangulated mesh. The associated region of interest and assigned feature selection vector for each triangle of the target triangulated mesh are provided to a boundary detection network. A predicted boundary based on features of the associated region of interest selected by the assigned feature selection vector is obtained from the boundary detection network.
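The data flow (per-triangle ROI → feature selection vector → boundary prediction) might be sketched as below. Both "networks" are hypothetical stand-ins (a softmax over mean channel responses, and a selection-weighted response), since the abstract does not specify the architectures.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_mapping_network(rois):
    """Stand-in for the feature mapping network: assign each triangle a
    feature selection vector (softmax over mean ROI response per channel)."""
    scores = rois.mean(axis=(2, 3))                 # (n_triangles, n_features)
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def boundary_detection_network(rois, selection):
    """Stand-in for the boundary detection network: predict a boundary value
    per triangle from ROI features weighted by its selection vector."""
    per_feature = rois.mean(axis=(2, 3))
    return (per_feature * selection).sum(axis=1)

rois = rng.random((6, 3, 5, 5))   # 6 target triangles, 3 feature channels
sel = feature_mapping_network(rois)
boundaries = boundary_detection_network(rois, sel)
```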
METHOD AND SYSTEM FOR DETERMINING A FITTED POSITION OF AN OPHTHALMIC LENS WITH RESPECT TO A WEARER REFERENTIAL AND METHOD FOR DETERMINING A LENS DESIGN OF AN OPHTHALMIC LENS
A method for determining a fitted position of an ophthalmic lens to be mounted on a spectacle frame equipping a wearer, the fitted position being defined with respect to a wearer referential linked to the head of the wearer. The method includes defining at least one fitting criterion relating to the positioning of the ophthalmic lens with respect to the spectacle frame, determining frame 3D data at least partially representative of the geometry and position of the spectacle frame with respect to the wearer referential, determining lens 3D data at least partially representative of the geometry of at least a peripheral portion of the ophthalmic lens, and determining the fitted position of said ophthalmic lens with respect to the wearer referential using the frame 3D data and said lens 3D data to fit the ophthalmic lens within the spectacle frame meeting the fitting criteria.
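A toy version of the fitting search could look like this: scan candidate axial offsets of the lens and keep the one that best seats its peripheral edge in the frame groove. The RMS-mismatch criterion, the 1-D offset parameterisation, and the units are all assumptions for illustration only.

```python
import numpy as np

def fit_lens_position(frame_groove_z, lens_edge_z, z_range=(-5.0, 5.0)):
    """Find the axial offset (hypothetical mm) minimising RMS mismatch
    between frame groove heights and lens edge heights -- a toy stand-in
    for searching a position that meets the fitting criterion."""
    offsets = np.linspace(*z_range, 1001)
    errs = [np.sqrt(np.mean((lens_edge_z + dz - frame_groove_z) ** 2))
            for dz in offsets]
    best = int(np.argmin(errs))
    return offsets[best], errs[best]
```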
FINGERPRINT UNLOCKING METHOD AND ELECTRONIC DEVICE, AND STORAGE MEDIUM
A fingerprint unlocking method includes obtaining a fingerprint image through an optical fingerprint sensor of an electronic device when a user performs fingerprint unlocking on the electronic device; obtaining an extended fingerprint image by performing fingerprint extending processing on the fingerprint image according to a preset curvature; and unlocking the electronic device if the extended fingerprint image matches a preset fingerprint image.
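One plausible reading of "fingerprint extending processing according to a preset curvature" is undoing the foreshortening of curved skin imaged on a flat sensor. The cylinder model below (chord position x on the sensor maps to arc length s = R·asin(x/R) on the finger) is an assumption, not the patent's stated formula.

```python
import numpy as np

def extend_fingerprint(img, radius):
    """Resample each row so columns are uniform in arc length along a
    cylinder of the preset radius (assumed curvature model)."""
    h, w = img.shape
    x = np.linspace(-(w - 1) / 2, (w - 1) / 2, w)        # sensor columns
    s = radius * np.arcsin(np.clip(x / radius, -1, 1))   # arc positions
    s_uniform = np.linspace(s[0], s[-1], w)              # uniform arc grid
    # For each target arc position, find the sensor column that imaged it.
    x_src = radius * np.sin(s_uniform / radius)
    out = np.empty_like(img, dtype=float)
    for r in range(h):
        out[r] = np.interp(x_src, x, img[r])
    return out
```

For a very large radius (nearly flat finger) the transform approaches the identity, which is a useful sanity check.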
Semantic labeling of point clouds using images
Systems and methods for semantic labeling of point clouds using images. Some implementations may include obtaining a point cloud that is based on lidar data reflecting one or more objects in a space; obtaining an image that includes a view of at least one of the one or more objects in the space; determining a projection of points from the point cloud onto the image; generating, using the projection, an augmented image that includes one or more channels of data from the point cloud and one or more channels of data from the image; inputting the augmented image to a two dimensional convolutional neural network to obtain a semantic labeled image wherein elements of the semantic labeled image include respective predictions; and mapping, by reversing the projection, predictions of the semantic labeled image to respective points of the point cloud to obtain a semantic labeled point cloud.
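The projection, augmentation, and reverse-mapping steps might be sketched as follows with a pinhole camera (the intrinsics are hypothetical, and the 2-D CNN is out of scope, so `label_points` just reads back from a given semantic map):

```python
import numpy as np

# Hypothetical pinhole intrinsics for illustration only.
fx = fy = 100.0
cx, cy = 32.0, 32.0
H = W = 64

def project(points):
    """Project 3-D points (camera frame, z forward) to pixel coordinates."""
    u = (fx * points[:, 0] / points[:, 2] + cx).astype(int)
    v = (fy * points[:, 1] / points[:, 2] + cy).astype(int)
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (points[:, 2] > 0)
    return u, v, ok

def augment(image, points):
    """Stack image channels with a depth channel from the point cloud."""
    u, v, ok = project(points)
    depth = np.zeros((H, W, 1))
    depth[v[ok], u[ok], 0] = points[ok, 2]
    return np.concatenate([image, depth], axis=2)

def label_points(points, semantic_image):
    """Reverse the projection: read each point's label from the 2-D
    semantic map; points outside the view get label -1."""
    u, v, ok = project(points)
    labels = np.full(len(points), -1)
    labels[ok] = semantic_image[v[ok], u[ok]]
    return labels
```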
Adaptive model updates for dynamic and static scenes
In one embodiment, a computing system may update a first 3D model of a region of an environment based on comparisons between the first 3D model and first depth measurements of the region generated during a first time period. The computing system may determine that the region is static by comparing the first 3D model to second depth measurements of the region generated during a second time period. The computing system may, in response to determining that the region is static, detect whether the region changed after the second time period based on comparisons between a second 3D model of the region and third depth measurements of the region generated after the second time period, the second 3D model having a lower resolution than the first 3D model. The computing system may, in response to detecting a change in the region, update the first 3D model of the region.
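The static check and the low-resolution change test might be sketched with depth grids as below. The agreement threshold and the mean-pooling downsample are assumptions; the abstract only requires that the second model has lower resolution than the first.

```python
import numpy as np

STATIC_THRESHOLD = 0.01  # hypothetical tolerance, metres

def is_static(model_depth, new_depth, threshold=STATIC_THRESHOLD):
    """A region counts as static when the new depth measurements agree
    with the model everywhere within the tolerance."""
    return bool(np.all(np.abs(model_depth - new_depth) <= threshold))

def downsample(depth, factor=4):
    """Low-resolution proxy model used for cheap change detection."""
    h, w = depth.shape
    return depth[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def changed(low_res_model, new_depth, threshold=STATIC_THRESHOLD, factor=4):
    """Compare new measurements against the low-res model only; a True
    result would trigger an update of the full-resolution model."""
    return not is_static(low_res_model, downsample(new_depth, factor), threshold)
```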
Systems and methods for reconstruction and rendering of viewpoint-adaptive three-dimensional (3D) personas
An exemplary method includes maintaining a receiver-side mesh-vertices list, receiving duplicative-vertex information from a sender and responsively reducing the receiver-side mesh-vertices list in accordance with the received duplicative-vertex information, and rendering, using the reduced receiver-side mesh-vertices list, viewpoint-adaptive three-dimensional (3D) personas of a subject at least in part by weighting video pixel colors from different video-camera vantage points of video cameras that capture video streams of the subject, the weighting being performed according to a respective geometric relationship of each video-camera vantage point to a user-selected viewpoint.
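The vertex-list reduction and the viewpoint-dependent color weighting might look like this sketch. Cosine weighting by camera direction is an assumption; the abstract says only "according to a respective geometric relationship".

```python
import numpy as np

def dedupe_vertices(vertices, duplicate_indices):
    """Reduce the receiver-side vertex list per the sender's
    duplicative-vertex information (here: indices to drop)."""
    keep = np.setdiff1d(np.arange(len(vertices)), duplicate_indices)
    return vertices[keep]

def blend_colors(colors, camera_dirs, view_dir):
    """Weight each camera's pixel color by how well its vantage direction
    agrees with the user-selected viewpoint (assumed cosine weighting)."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    dirs = camera_dirs / np.linalg.norm(camera_dirs, axis=1, keepdims=True)
    w = np.clip(dirs @ view_dir, 0.0, None)   # ignore back-facing cameras
    w = w / w.sum()
    return w @ colors                          # (n_cams,) @ (n_cams, 3)
```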
Systems, methods, and media for displaying interactive augmented reality presentations
Systems, methods, and media for displaying interactive augmented reality presentations are provided. In some embodiments, a system comprises: a plurality of head mounted displays, a first head mounted display comprising a transparent display; and at least one processor, wherein the at least one processor is programmed to: determine that a first physical location of a plurality of physical locations in a physical environment of the head mounted display is located closest to the head mounted display; receive first content comprising a first three dimensional model; receive second content comprising a second three dimensional model; present, using the transparent display, a first view of the first three dimensional model at a first time; and present, using the transparent display, a first view of the second three dimensional model at a second time subsequent to the first time based on one or more instructions received from a server.
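Two of the programmed steps (picking the closest physical location and choosing which model to present at a given time) can be sketched simply. The `(start_time, model_id)` schedule is a hypothetical stand-in for the server's instructions.

```python
import numpy as np

def closest_location(hmd_pos, locations):
    """Index of the physical location nearest the head mounted display."""
    d = np.linalg.norm(np.asarray(locations) - np.asarray(hmd_pos), axis=1)
    return int(np.argmin(d))

def model_for_time(t, schedule):
    """Return the model scheduled at time t: the last entry whose start
    time is <= t. `schedule` is a list of (start_time, model_id) pairs."""
    current = None
    for start, model in sorted(schedule):
        if t >= start:
            current = model
    return current
```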
Patient-specific instrumentation for implant revision surgery
A system for creating at least one model of a bone and implanted implant comprises a processing unit; and a non-transitory computer-readable memory communicatively coupled to the processing unit and comprising computer-readable program instructions executable by the processing unit for: obtaining at least one image of at least part of a bone and of an implanted implant on the bone, the at least one image being patient specific, obtaining a virtual model of the implanted implant using an identity of the implanted implant, overlaying the virtual model of the implanted implant on the at least one image to determine a relative orientation of the implanted implant relative to the bone in the at least one image, and generating and outputting a current bone and implant model using the at least one image, the virtual model of the implanted implant and the overlaying.
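Once corresponding landmarks on the virtual implant model and on the patient image have been identified (correspondence finding is out of scope here), the relative orientation in the overlay step could be recovered with a standard least-squares rigid fit (Kabsch). Using Kabsch is an assumption; the abstract does not name a method.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q
    in the least-squares sense (Kabsch algorithm)."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = (U @ D @ Vt).T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t
```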
Virtual teach and repeat mobile manipulation system
A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
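The descriptor mapping and relative-transform steps might be sketched in 2-D as below. Nearest-neighbour matching and a planar rigid transform are illustrative assumptions; the abstract does not name a matcher or a transform parameterisation.

```python
import numpy as np

def match_descriptors(task_desc, teach_desc):
    """Map each task-image descriptor to its nearest teaching-image
    descriptor (assumed nearest-neighbour matching)."""
    d = np.linalg.norm(task_desc[:, None, :] - teach_desc[None, :, :], axis=2)
    return np.argmin(d, axis=1)

def relative_transform_2d(task_pts, teach_pts):
    """Least-squares rigid 2-D transform (R, t) taking teaching-image
    keypoints to the matched task-image keypoints."""
    tc, sc = task_pts.mean(axis=0), teach_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd((teach_pts - sc).T @ (task_pts - tc))
    d = np.sign(np.linalg.det(U @ Vt))
    R = (U @ np.diag([1.0, d]) @ Vt).T
    return R, tc - R @ sc

def update_behavior(waypoints, R, t):
    """Re-parameterise taught waypoints with the relative transform so the
    behavior executes in the current task environment."""
    return waypoints @ R.T + t
```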