Patent classifications
G06T15/10
OPHTHALMIC INFORMATION PROCESSING APPARATUS, OPHTHALMIC APPARATUS, OPHTHALMIC INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
An ophthalmic information processing apparatus includes a specifying unit and an image deforming unit. The specifying unit is configured to specify a three-dimensional position of each pixel in a two-dimensional front image depicting a predetermined site of a subject's eye, based on OCT data obtained by performing optical coherence tomography on the predetermined site. The image deforming unit is configured to deform the two-dimensional front image, by changing the position of at least one pixel in the two-dimensional front image based on the three-dimensional position, to generate a three-dimensional front image.
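The core step — lifting each 2D pixel to the 3D position specified from the OCT data — could be sketched as follows. This is a hypothetical, list-based simplification (the patent does not specify an implementation); `depth_map` stands in for the per-pixel positions the specifying unit derives from OCT:

```python
def deform_front_image(front_image, depth_map):
    """Lift each pixel of a 2D en-face image to a 3D position using a
    per-pixel depth map (e.g. a surface segmented from OCT data).
    Returns a list of (x, y, z, intensity) points -- the
    'three-dimensional front image' represented as a point set."""
    points = []
    for y, row in enumerate(front_image):
        for x, intensity in enumerate(row):
            points.append((x, y, depth_map[y][x], intensity))
    return points

# Toy example: a 2x2 front image lifted over a sloped depth surface.
pts = deform_front_image([[10, 20], [30, 40]], [[0, 1], [2, 3]])
```

A real apparatus would instead reposition pixels within a rendered volume, but the mapping from (pixel, depth) to a 3D point is the same idea.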
Graphical element rooftop reconstruction in digital map
A client device receives a first map tile, a second map tile, and map terrain data from a mapping system, the first and second map tiles together including a map feature having a geometric base with a height value, the geometric base represented by a set of vertices split across the first and second map tiles. The client device identifies edges of the geometric base that intersect a tile border between the first and second map tiles. The client device determines a set of sample points based on the identified edges and determines a particular sample elevation value corresponding to a sample point in the set. The client device renders the map feature based on the particular sample elevation value and displays the rendering of the map feature.
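The edge-intersection and sampling steps could be sketched as follows. This is a hypothetical simplification assuming a single vertical tile border at `border_x` and a nearest-neighbour terrain lookup; real map tiles have four borders and tile-local coordinates:

```python
def border_sample_points(vertices, border_x):
    """Find the points where edges of a polygonal geometric base cross
    a vertical tile border at x = border_x."""
    samples = []
    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        if (x0 - border_x) * (x1 - border_x) < 0:  # edge straddles the border
            t = (border_x - x0) / (x1 - x0)        # linear interpolation factor
            samples.append((border_x, y0 + t * (y1 - y0)))
    return samples

def elevation_at(point, terrain):
    """Look up a sample elevation from a coarse terrain grid
    (nearest-neighbour for simplicity)."""
    x, y = point
    return terrain[round(y)][round(x)]

# A rectangular base from (0,0) to (4,2), split by a border at x = 2.
samples = border_sample_points([(0, 0), (4, 0), (4, 2), (0, 2)], 2)
```

The rooftop (the base extruded by its height value) would then be rendered using the elevation sampled at these border points, so the two tile halves meet at a consistent height.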
INTERACTION PERIPHERAL, DETECTION METHOD, VIRTUAL REALITY HEADSET, METHOD FOR REPRODUCING A REAL POINT IN VIRTUAL SPACE, DEVICE AND METHOD FOR VIRTUALISING A REAL SPACE
Disclosed are an interaction peripheral, a method for detecting a real point, a virtual reality headset, a method for reproducing a real point in virtual space, and a device and a method for virtualising a real space, particularly allowing a plane of a real space to be obtained in two, three or n dimensions and reproduced in virtual reality. An interaction peripheral, which can be connected to a virtual reality headset, includes a range finder which can supply, to the headset, a measurement signal including a relative position measurement of a real point of a real space, the real point being sighted by the range finder. The measurement signal enables reproduction of the measured real point in a virtual space generated by the headset. Thus, the real point can be reproduced in virtual space while reducing the risk of errors, because the measurement tools are simple interaction peripherals handled by a user.
Photo of a patient with new simulated smile in an orthodontic treatment review software
A computer-implemented method for generating a virtual depiction of an orthodontic treatment of a patient is disclosed herein. The computer-implemented method may involve gathering a three-dimensional (3D) model modeling the patient's dentition at a specific treatment stage of an orthodontic treatment plan. An image of the patient's face and dentition may be gathered. A first set of reference points modeled on the 3D model of the patient's dentition and a second set of reference points represented on the dentition of the image of the patient may be received. The image of the patient's dentition may be projected into a 3D space to create a projected 3D model of the image of the patient's dentition. Based on a comparison of the first set of reference points and projections of the second set of reference points, a plurality of modified images of the patient may be constructed to depict progressive stages of the treatment plan.
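The comparison between the two sets of reference points amounts to estimating an alignment between the 3D model's projected points and the points marked on the photo. A hypothetical, translation-only stand-in for that step (a real system would fit a full projective or similarity transform) could look like:

```python
def fit_translation(model_points, image_points):
    """Estimate the 2D translation that best aligns projected 3D-model
    reference points with reference points marked on the patient photo
    (least-squares solution for a translation-only model)."""
    n = len(model_points)
    dx = sum(ix - mx for (mx, _), (ix, _) in zip(model_points, image_points)) / n
    dy = sum(iy - my for (_, my), (_, iy) in zip(model_points, image_points)) / n
    return dx, dy

# Two model points whose image counterparts are shifted by (2, 3).
offset = fit_translation([(0, 0), (1, 0)], [(2, 3), (3, 3)])
```

Once the alignment is known, the dentition from each treatment stage of the 3D model can be rendered into the photo at the right position, producing the progressive simulated-smile images.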
DEVICES AND METHODS FOR GENERATING ELEMENTARY GEOMETRIES
Elementary geometries for rendering objects of a 3D scene are generated from input geometry data sets. Instructions of a source program are transformed into code executable in a rendering pipeline by at least one graphics processor, by segmenting the source program into sub-programs, each adapted to process the input data sets, and by ordering the sub-programs according to the instructions. Each ordered sub-program is configured in the executable code to be executed only after the preceding sub-program has been executed for all input data sets. Launching the execution of instructions to generate elementary geometries includes determining a starting sub-program among the sub-programs, deactivating all sub-programs preceding it, and activating it together with all sub-programs following it. Modularity is thereby introduced in generating elementary geometries, allowing time-efficient lazy execution of grammar rules.
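The execution scheme — a barrier between ordered sub-programs, with everything before a chosen starting sub-program deactivated — could be sketched as follows. This is a hypothetical illustration; the patent targets a GPU rendering pipeline, not Python callables:

```python
def run_pipeline(subprograms, data_sets, start_index):
    """Execute an ordered chain of sub-programs. Sub-programs before
    start_index are deactivated (their results are assumed to have
    been computed and cached earlier), and each active sub-program
    processes ALL input data sets before the next one starts."""
    results = data_sets
    for i, sub in enumerate(subprograms):
        if i < start_index:
            continue  # deactivated: skipped under lazy execution
        results = [sub(d) for d in results]  # barrier between stages
    return results

stages = [lambda x: x * 2, lambda x: x + 1, lambda x: x * x]

# Full run from the first sub-program:
full = run_pipeline(stages, [1, 2], 0)
# Lazy run: stage 0 deactivated, resuming from its cached outputs [2, 4]:
lazy = run_pipeline(stages, [2, 4], 1)
```

The two runs agree, which is the point of the scheme: restarting from a later sub-program with cached intermediate data sets gives the same geometries without re-executing earlier grammar rules.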
METHOD OF HIDING AN OBJECT IN AN IMAGE OR VIDEO AND ASSOCIATED AUGMENTED REALITY PROCESS
A method for generating a final image from an initial image including an object suitable to be worn by an individual. The presence of the object in the initial image is detected. A first layer is superposed on the initial image. The first layer includes a mask at least partially covering the object in the initial image. The appearance of at least one part of the mask is modified, enabling the suppression of all or part of an object in an image or a video. Also disclosed are an augmented reality process intended to be used by an individual wearing a vision device on the face, and a try-on device for a virtual object.
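The mask-layer step could be sketched as below. This is a hypothetical simplification: the masked (object) pixels are filled with the mean of the unmasked pixels, a crude stand-in for the appearance modification (e.g. background inpainting) applied to the first layer in practice:

```python
def hide_object(image, mask):
    """Superpose a layer over `image` whose masked pixels hide the
    detected object: each pixel where mask is True is replaced with
    the mean of the visible (unmasked) pixels."""
    h, w = len(image), len(image[0])
    visible = [image[y][x] for y in range(h) for x in range(w)
               if not mask[y][x]]
    fill = sum(visible) / len(visible)
    return [[fill if mask[y][x] else image[y][x] for x in range(w)]
            for y in range(h)]

# A bright "object" pixel (value 9) in a uniform background of 1s.
result = hide_object([[1, 1], [1, 9]],
                     [[False, False], [False, True]])
```

Run per frame, the same masking makes an object such as a pair of glasses disappear from a live video feed, which is what the associated augmented-reality try-on process builds on.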