G06T7/73

Enhanced Illumination-Invariant Imaging

Devices, systems, and methods for generating illumination-invariant images are disclosed. A method may include activating, by a device, a camera to capture first image data; while the camera is capturing the first image data, activating a first light source; receiving the first image data, the first image data having pixels having first color values; identifying first light generated by the first light source while the camera is capturing the first image data; identifying, based on the first image data, second light generated by a second light source; generating, based on the first light and the second light, second image data that are illumination-invariant; and presenting the second image data.
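The abstract does not specify how the first (device-controlled) and second (ambient) light are separated. One plausible per-pixel sketch, assuming an estimate of the ambient-only scene is also available, subtracts the ambient estimate and normalizes the device-lit remainder to chromaticity; all names here are illustrative, not from the patent.

```python
def illumination_invariant_pixel(flash_rgb, ambient_rgb, eps=1e-6):
    """Sketch of one illumination-invariant estimate for a single pixel.

    flash_rgb:   pixel captured while the device's own light source is on.
    ambient_rgb: estimate of the same pixel under ambient light only.

    Subtracting isolates the component lit by the controlled source;
    normalizing that component to chromaticity removes dependence on
    the ambient (second) light source's brightness.
    """
    pure = [max(float(f) - float(a), 0.0) for f, a in zip(flash_rgb, ambient_rgb)]
    total = sum(pure) + eps
    return [c / total for c in pure]
```

Because the device-lit component is unchanged when the ambient level changes, two captures under different ambient light yield the same chromaticity.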

IMAGE BASED DETECTION OF FIT FOR A HEAD MOUNTED WEARABLE COMPUTING DEVICE

A system and method of detecting display fit measurements and/or ophthalmic measurements for a head mounted wearable computing device including a display device is provided. An image of a fitting frame worn by a user is captured by the user through an application running on the computing device. One or more keypoints and/or features and/or landmarks are detected in the image including the fitting frame. A three-dimensional pose of the fitting frame is determined based on the detected keypoints and/or features and/or landmarks, and on configuration information associated with the fitting frame. The display device of the head mounted wearable computing device can then be configured based on the three-dimensional pose of the fitting frame as captured in the image.
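The abstract solves a full 3D pose; as a minimal sketch of the underlying idea, the special planar case recovers a millimeters-per-pixel scale from two detected frame keypoints whose physical separation is known from the fitting frame's configuration information, then converts any measured pixel distance (e.g., between pupil centers) to millimeters. Function names and values are illustrative assumptions.

```python
import math

def mm_per_pixel(kp_a, kp_b, known_mm):
    """Scale factor from two detected fitting-frame keypoints (x, y in
    pixels) whose physical separation known_mm comes from the frame's
    configuration data."""
    return known_mm / math.dist(kp_a, kp_b)

def measure_mm(p, q, scale):
    """Convert a pixel-space distance (e.g., between detected pupil
    centers) into millimeters using the recovered scale."""
    return math.dist(p, q) * scale
```

For example, frame fiducials 140 mm apart detected 700 px apart give a 0.2 mm/px scale; pupils detected 310 px apart then measure roughly 62 mm.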

TEXT BORDER TOOL AND ENHANCED CORNER OPTIONS FOR BACKGROUND SHADING

Disclosed herein are various techniques for more precisely and reliably (a) positioning top and bottom border edges relative to textual content, (b) positioning left and right border edges relative to textual content, (c) positioning mixed edge borders relative to textual content, (d) positioning boundaries of a region of background shading that fall within borders of textual content, (e) positioning borders relative to textual content that spans columns, (f) positioning respective borders relative to discrete portions of textual content, (g) positioning collective borders relative to discrete, abutting portions of textual content, (h) applying stylized corner boundaries to a region of background shading, and (i) applying stylized corners to borders.
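The abstract describes positioning each border edge independently relative to textual content. A minimal sketch of items (a), (b), and (h), assuming a simple bounding-box model with per-edge offsets and a corner radius (all names and defaults are illustrative):

```python
from dataclasses import dataclass

@dataclass
class BorderSpec:
    """Illustrative spec: per-edge offsets (in points) measured outward
    from the text's bounding box, plus a radius for stylized corners."""
    top: float = 2.0
    bottom: float = 2.0
    left: float = 4.0
    right: float = 4.0
    corner_radius: float = 0.0

def border_rect(text_bounds, spec):
    """Position the four border edges relative to a text bounding box
    given as (x0, y0, x1, y1); each offset grows the box on its side."""
    x0, y0, x1, y1 = text_bounds
    return (x0 - spec.left, y0 - spec.top, x1 + spec.right, y1 + spec.bottom)
```

Keeping each edge's offset independent is what allows the mixed-edge and per-portion placements the techniques above enumerate.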

MONITORING OF DENTITION

A method for acquiring at least one two-dimensional image of a part of the dental arches of a patient includes steps carried out by the patient or by another person who is not a dental health professional, for example placing a dental separator in the mouth of the patient in order to separate the lips of the patient and improve the visibility of the teeth during the acquisition of said at least one two-dimensional image, and acquiring, in a mouth-closed position and with a personal image acquisition apparatus, said at least one two-dimensional image.

COORDINATING ALIGNMENT OF COORDINATE SYSTEMS USED FOR A COMPUTER GENERATED REALITY DEVICE AND A HAPTIC DEVICE
20230050367 · 2023-02-16 ·

A first electronic device controls a second electronic device to measure a position of the first electronic device. The first electronic device includes a motion sensor, a network interface circuit, a processor, and a memory. The motion sensor senses motion of the first electronic device. The network interface circuit communicates with the second electronic device. The memory stores program code that is executed by the processor to perform operations that include, responsive to determining that the first electronic device has a level of motion that satisfies a defined rule, transmitting a request for the second electronic device to measure a position of the first electronic device. The position of the first electronic device is sensed and then stored in the memory. An acknowledgement is received from the second electronic device indicating that it has stored sensor data that can be used to measure the position of the first electronic device.
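The abstract's trigger is a "defined rule" on the first device's level of motion; the rule itself is not given. One plausible sketch: request a measurement only once the device has been sufficiently still, i.e. its last few motion readings all fall below a threshold (the rule, names, and values are illustrative assumptions).

```python
def should_request_measurement(motion_samples, threshold, min_still_samples):
    """Defined-rule sketch: return True when the first device's most
    recent `min_still_samples` motion readings are all below
    `threshold`, i.e. it is still enough for the second device to
    measure its position reliably."""
    if len(motion_samples) < min_still_samples:
        return False
    return all(m < threshold for m in motion_samples[-min_still_samples:])
```

Gating the request on stillness avoids asking the second device to measure a position that is still changing.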

METHOD AND SYSTEM FOR DETERMINING A FITTED POSITION OF AN OPHTHALMIC LENS WITH RESPECT TO A WEARER REFERENTIAL AND METHOD FOR DETERMINING A LENS DESIGN OF AN OPHTHALMIC LENS

A method for determining a fitted position of an ophthalmic lens to be mounted on a spectacle frame equipping a wearer, the fitted position being defined with respect to a wearer referential linked to the head of the wearer. The method includes defining at least one fitting criterion relating to the positioning of the ophthalmic lens with respect to the spectacle frame, determining frame 3D data at least partially representative of the geometry and position of the spectacle frame with respect to the wearer referential, determining lens 3D data at least partially representative of the geometry of at least a peripheral portion of the ophthalmic lens, and determining the fitted position of said ophthalmic lens with respect to the wearer referential using the frame 3D data and said lens 3D data to fit the ophthalmic lens within the spectacle frame while meeting the fitting criterion.
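The abstract works with full frame and lens 3D data; as a toy 2D sketch of the same structure, assume a circular rim, a fitting criterion requiring the lens peripheral contour to stay a minimum clearance inside the rim, and a fitted position that centers the lens contour on the rim (every name, shape, and value here is an illustrative assumption).

```python
import math

def fits_in_frame(lens_pts, rim_center, rim_radius, clearance=0.1):
    """Toy fitting criterion: every point of the lens peripheral
    contour must lie at least `clearance` mm inside a circular rim.
    Real frame/lens 3D data would replace the circle."""
    cx, cy = rim_center
    worst = max(math.hypot(x - cx, y - cy) for x, y in lens_pts)
    return worst <= rim_radius - clearance

def fitted_position(lens_pts, rim_center):
    """Illustrative fitted position: the translation that aligns the
    lens contour centroid with the rim center (known, per the abstract,
    from the frame 3D data in the wearer referential)."""
    n = len(lens_pts)
    cx = sum(x for x, _ in lens_pts) / n
    cy = sum(y for _, y in lens_pts) / n
    return (rim_center[0] - cx, rim_center[1] - cy)
```

The real method would search over 3D placements and evaluate the declared fitting criteria against the actual frame and lens geometry.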
