Image positioning system and image positioning method based on upsampling
An image positioning system based on upsampling, and a method thereof, are provided. The image positioning method fetches a region image covering a target from a wide-area image, determines a rough position of the target, executes an upsampling process on the region image based on a neural network data model to obtain a super-resolution region image, maps the rough position onto the super-resolution region image, and analyzes the super-resolution region image to determine a precise position of the target. The disclosed example can significantly improve positioning efficiency and effectively reduce the required hardware cost.
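As a rough illustration of this coarse-to-fine flow, the following Python/NumPy sketch stubs out the neural-network super-resolution model with nearest-neighbour upsampling and refines the position by snapping to the brightest pixel; all function names (crop_region, upsample, locate) are hypothetical and not taken from the patent.

```python
import numpy as np

def crop_region(wide_image, center, size):
    """Fetch a region image covering the target from the wide-area image."""
    y, x = center
    half = size // 2
    return wide_image[y - half:y + half, x - half:x + half]

def upsample(region, scale=4):
    """Stand-in for the neural-network super-resolution step:
    plain nearest-neighbour upsampling."""
    return np.repeat(np.repeat(region, scale, axis=0), scale, axis=1)

def locate(wide_image, rough_center, size=32, scale=4):
    """Crop around the rough position, super-resolve the crop, map the
    rough position onto it, and refine to a sub-pixel precise position."""
    region = crop_region(wide_image, rough_center, size)
    sr = upsample(region, scale)
    ry = rx = (size // 2) * scale          # rough position mapped onto SR image
    w = 2 * scale
    window = sr[ry - w:ry + w, rx - w:rx + w]
    dy, dx = np.unravel_index(np.argmax(window), window.shape)
    fy, fx = ry - w + dy, rx - w + dx      # precise position in SR coordinates
    return (rough_center[0] + (fy / scale - size // 2),
            rough_center[1] + (fx / scale - size // 2))
```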
Eye image selection
Systems and methods for eye image set selection, eye image collection, and eye image combination are described. Embodiments of the systems and methods for eye image set selection can include comparing a determined image quality metric with an image quality threshold to identify eye images that pass the threshold, and selecting, from a plurality of eye images, a set of eye images that pass the threshold.
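A minimal sketch of the selection step in Python/NumPy, assuming a gradient-variance sharpness score as the quality metric (the patent does not specify which metric is used; sharpness and select_eye_images are hypothetical names):

```python
import numpy as np

def sharpness(image):
    """One simple stand-in quality metric: variance of the gradient magnitude."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.var(np.hypot(gy, gx)))

def select_eye_images(eye_images, threshold, metric=sharpness):
    """Compare each image's quality metric with the threshold and keep
    the set of eye images that pass it."""
    return [img for img in eye_images if metric(img) >= threshold]
```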
Imaging display device and wearable device
An imaging display device includes an imaging unit, a processing unit, a display unit, and a pupil detection unit. The imaging unit includes a plurality of photoelectric conversion elements and is configured to acquire first image information. The processing unit is configured to process a signal from the imaging unit and generate second image information. The display unit is configured to display an image based on the signal from the processing unit. The pupil detection unit is configured to detect vector information of a pupil. The processing unit generates the second image information by processing the first image information based on the vector information of the pupil.
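One plausible reading of "processing based on the vector information" is shifting the captured image according to where the pupil points. The Python sketch below makes that assumption explicit; the centroid-based detector and both function names are hypothetical stand-ins, not the device's actual method.

```python
import numpy as np

def pupil_vector(pupil_image):
    """Hypothetical pupil detector: the vector is taken as the offset of
    the dark-pixel centroid from the image center."""
    ys, xs = np.nonzero(pupil_image < pupil_image.mean())
    h, w = pupil_image.shape
    return np.array([ys.mean() - h / 2.0, xs.mean() - w / 2.0])

def second_image_information(first_image, vector):
    """Generate the second image information by shifting the first image
    information according to the pupil's vector information."""
    dy, dx = np.round(vector).astype(int)
    return np.roll(first_image, (dy, dx), axis=(0, 1))
```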
Virtual reality training, simulation, and collaboration in a robotic surgical system
A virtual reality system providing a virtual robotic surgical environment, and methods for using the virtual reality system, are described herein. Within the virtual reality system, various user modes enable different kinds of interactions between a user and the virtual robotic surgical environment. For example, one variation of a method for facilitating navigation of the virtual robotic surgical environment includes displaying a first-person perspective view of the environment from a first vantage point, displaying a first window view of the environment from a second vantage point, and displaying a second window view of the environment from a third vantage point. Additionally, in response to a user input associating the first and second window views, a trajectory between the second and third vantage points can be generated, sequentially linking the first and second window views.
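A short Python sketch of the trajectory-generation step, assuming each window view is anchored at a vantage point with a 3D position; linear interpolation is an assumption, since the abstract does not specify the path shape, and VantagePoint and link_views are hypothetical names.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class VantagePoint:
    position: np.ndarray  # x, y, z in the virtual operating room

def link_views(second_vp: VantagePoint, third_vp: VantagePoint, steps: int = 30):
    """Generate a trajectory between the second and third vantage points
    when the user associates the two window views."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - t) * second_vp.position + t * third_vp.position
```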
Automatic graph scoring for neuropsychological assessments
Systems and methods of the present invention provide for: receiving digital image data; modifying the digital image data to reduce the width of a feature within it; executing a dimension-reduction process on the feature; storing a feature vector comprising at least one feature for each item of received digital image data, together with a correct or incorrect label associated with each feature vector; selecting the feature vectors from a data store; training a classification software engine to classify each feature vector according to its label; classifying the image data as correct or incorrect using the classification software engine; and generating an output labeling second digital image data as correct or incorrect.
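A minimal end-to-end sketch of this pipeline in Python using scikit-learn, under several stated assumptions: the width-reduction step is approximated by simple binarization (a real system would use morphological thinning/skeletonization), PCA stands in for the unspecified dimension-reduction process, and logistic regression stands in for the classification software engine.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def reduce_stroke_width(image, threshold=0.5):
    """Crude stand-in for feature-width reduction: binarize the drawing."""
    return (image > threshold).astype(float)

def score_graphs(train_images, labels, test_images, n_components=16):
    """Thin each drawing, reduce its dimensionality, train a classifier on
    the labeled feature vectors, and label new drawings correct/incorrect."""
    X = np.stack([reduce_stroke_width(im).ravel() for im in train_images])
    pca = PCA(n_components=n_components)
    X_red = pca.fit_transform(X)  # dimension-reduction step
    clf = LogisticRegression(max_iter=1000).fit(X_red, labels)  # 1=correct, 0=incorrect
    X_test = np.stack([reduce_stroke_width(im).ravel() for im in test_images])
    return clf.predict(pca.transform(X_test))
```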
Systems and methods for controlling virtual scene perspective via physical touch input
Systems, methods, and non-transitory computer readable media for controlling perspective in an extended reality environment are disclosed. In one embodiment, a non-transitory computer readable medium contains instructions to cause a processor to perform the steps of: outputting, for presentation via a wearable extended reality appliance (WER-appliance), first display signals reflective of a first perspective of a scene; receiving first input signals caused by a first multi-finger interaction with a touch sensor; in response, outputting for presentation via the WER-appliance second display signals to modify the first perspective of the scene, causing a second perspective of the scene to be presented via the WER-appliance; receiving second input signals caused by a second multi-finger interaction with the touch sensor; and, in response, outputting for presentation via the WER-appliance third display signals to modify the second perspective of the scene, causing a third perspective of the scene to be presented via the WER-appliance.
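A small Python sketch of how successive multi-finger inputs could each modify the current perspective to produce the next one. The drag-rotates/pinch-zooms mapping and the Perspective fields are assumptions for illustration, not the claimed gesture scheme.

```python
from dataclasses import dataclass

@dataclass
class Perspective:
    yaw: float = 0.0    # degrees
    pitch: float = 0.0  # degrees
    zoom: float = 1.0

def apply_multi_finger_input(p: Perspective, dx: float, dy: float, pinch: float):
    """Map one multi-finger interaction on the touch sensor to a new scene
    perspective: two-finger drag rotates, pinch scales the zoom factor."""
    return Perspective(
        yaw=p.yaw + 0.5 * dx,
        pitch=p.pitch + 0.5 * dy,
        zoom=max(0.1, p.zoom * (1.0 + pinch)),
    )

# First interaction yields the second perspective, the next the third, and so on.
second = apply_multi_finger_input(Perspective(), dx=10.0, dy=0.0, pinch=0.0)
third = apply_multi_finger_input(second, dx=0.0, dy=-5.0, pinch=0.2)
```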