Patent classifications
G06T3/20
CO-REGISTRATION OF INTRAVASCULAR DATA AND MULTI-SEGMENT VASCULATURE, AND ASSOCIATED DEVICES, SYSTEMS, AND METHODS
Disclosed is a medical imaging system including a processor circuit configured for communication with an x-ray imaging device movable relative to a patient and an intravascular catheter or guidewire sized and shaped for positioning within a blood vessel of the patient. The processor circuit is configured to receive a first angiographic image of a first length of the blood vessel and a second angiographic image of a second length of the blood vessel, wherein the first angiographic image is obtained at a first position and the second angiographic image is obtained at a second position. The processor circuit is further configured to generate a roadmap image of a combined length of the blood vessel by combining the first and second angiographic images, to receive intravascular data associated with the blood vessel, to co-register the intravascular data to corresponding locations in the roadmap image, and to output the roadmap image together with a graphical representation of the intravascular data at the corresponding locations in the roadmap image.
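The abstract does not specify the registration algorithm; a minimal sketch of one plausible approach is to map evenly spaced pullback samples onto a roadmap centerline by arc-length fraction (the function and parameter names here are illustrative, not from the patent):

```python
def co_register(samples, centerline):
    """Map n intravascular pullback samples (assumed evenly spaced along
    the vessel) onto m centerline points of the roadmap image, pairing
    each sample with the centerline point at the same fractional
    position. Assumes at least two samples."""
    n, m = len(samples), len(centerline)
    return [(centerline[round(i * (m - 1) / (n - 1))], s)
            for i, s in enumerate(samples)]
```

A real system would instead track the device marker in the angiographic frames, but the fractional-position pairing above conveys the core idea of attaching each measurement to a roadmap location.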
Storage controller having data augmentation components for use with non-volatile memory die
Methods and apparatus are disclosed for implementing data augmentation within a storage controller of a data storage device based on machine learning data read from a non-volatile memory (NVM) array of a memory die. Some particular aspects relate to configuring the storage controller to generate augmented versions of training images for use in training a Deep Learning Accelerator of an image recognition system by rotating, translating, skewing, cropping, etc., a set of initial training images obtained from a host device and stored in the NVM array. Other aspects relate to controlling components of the memory die to generate noise-augmented images by, for example, storing and then reading training images from worn regions of the NVM array to inject noise into the images. Data augmentation based on data read from multiple memory dies is also described, such as image data spread across multiple NVM arrays or multiple memory dies.
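As a rough illustration of the augmentation operations the abstract lists (rotation, translation, and noise injection emulating reads from worn NVM cells), the following sketch generates three variants of a small grayscale image; it is a simplified stand-in, not the patent's implementation:

```python
import random

def augment(img, seed=0):
    """Return three augmented variants of a 2D grayscale image given as
    a list of rows: a 90-degree rotation, a one-pixel horizontal shift
    with zero fill, and a copy with random low-order bit flips standing
    in for noise injected by worn memory regions."""
    h, w = len(img), len(img[0])
    rng = random.Random(seed)
    rot90 = [[img[h - 1 - r][c] for r in range(h)] for c in range(w)]
    shifted = [[0] + row[:-1] for row in img]
    noisy = [[px ^ (1 if rng.random() < 0.05 else 0) for px in row]
             for row in img]
    return [rot90, shifted, noisy]
```

In the patent's setting these transforms would run inside the storage controller rather than on the host, so the expanded training set never leaves the storage device.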
Cross-device supervisory computer vision system
A supervisory computer vision (CV) system may include a secondary CV system running in parallel with a native CV system on a mobile device. The secondary CV system is configured to run less frequently than the native CV system. CV algorithms are then run on these less-frequent sample images, generating information for localizing the device to a reference point cloud (e.g., provided over a network) and for transforming between a local point cloud of the native CV system and the reference point cloud. AR content may then be consistently positioned relative to the convergent CV system's coordinate space and visualized on a display of the mobile device. Various related algorithms facilitate the efficient operation of this system.
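The transform between the native system's local point cloud and the reference point cloud can, in the simplest 2D case, be sketched as a rigid transform applied to each point; this is a hypothetical minimal example, not the patent's actual localization pipeline:

```python
import math

def local_to_reference(p, theta, t):
    """Map a point p = (x, y) from the native CV system's local frame
    into the shared reference frame via a 2D rigid transform: rotate by
    theta radians, then translate by t = (tx, ty)."""
    x, y = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y + t[0], s * x + c * y + t[1])
```

Once the secondary CV system estimates theta and t from its less frequent samples, AR content anchored in the reference frame can be placed consistently in the device's local frame by inverting this transform.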
Method and device for converting 2D image into 3D image and 3D imaging system
The present disclosure provides a method and a device for converting two-dimensional (2D) images into three-dimensional (3D) images, and a 3D imaging system. The method comprises the following steps: acquiring a 2D image to be processed; performing perspective transformation on the 2D image to obtain a left-eye image and a right-eye image; adjusting a distance between the left-eye image and the right-eye image according to the result of the perspective transformation; and synthesizing the left-eye image and the right-eye image after the distance adjustment. In embodiments of the present disclosure, binocular parallax images are created by performing perspective transformation on the 2D image to be processed, and the distance between the left-eye and right-eye images after the transformation is adjusted to form binocular parallax and create a convergence angle, so that the image observed by the naked eye appears at different depths and different stereoscopic effects can be perceived. The transformation is performed on the 2D image without altering its resolution or definition, so that the image quality of the resulting 3D image is the same as that of the original 2D image and the 3D imaging effect is not degraded.
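The core idea of deriving a stereo pair from one 2D image can be illustrated with a deliberately simplified sketch that shifts the image horizontally in opposite directions to create parallax (the patent applies a perspective transformation; a pure shift is used here only to keep the example short):

```python
def make_stereo_pair(img, disparity=1):
    """Produce left- and right-eye views of a 2D image (list of rows of
    pixel values) by shifting rows horizontally in opposite directions
    by `disparity` pixels, zero-filling the exposed edge. The disparity
    plays the role of the adjustable left/right image distance."""
    def shift(rows, d):
        if d >= 0:
            return [[0] * d + row[:len(row) - d] for row in rows]
        return [row[-d:] + [0] * (-d) for row in rows]
    left = shift(img, disparity)    # content moves right for the left eye
    right = shift(img, -disparity)  # content moves left for the right eye
    return left, right
```

Because the operation only repositions pixels, the per-eye images keep the source image's resolution, matching the abstract's claim that conversion does not degrade image quality.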
ELECTRONIC DEVICE AND METHOD FOR CAPTURING IMAGE IN THE ELECTRONIC DEVICE
An electronic device is provided. The electronic device includes a camera module, a display, a plurality of detectors, and at least one processor. The at least one processor may be configured to: detect at least one first region of interest (ROI) in a first image received through the camera module, using a first detector for first-ROI detection among the plurality of detectors; when the first detector fails to detect, in a second image received through the camera module, at least one first ROI matching the at least one first ROI detected in the first image, detect at least one second ROI in the second image using a second detector for second-ROI detection among the plurality of detectors; estimate at least one first ROI based on the at least one second ROI; when the at least one first ROI detected in the first image matches the estimated at least one first ROI, update the estimated at least one first ROI as the at least one first ROI; and change a position of a preview region including the updated at least one first ROI based on a position of the estimated at least one first ROI matching the at least one first ROI detected in the first image.
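The detector-fallback logic described above can be condensed into a short control-flow sketch; the detector callables and the `estimate` helper are hypothetical names introduced for illustration, not interfaces from the patent:

```python
def track_roi(frame, primary, secondary, estimate):
    """Try the primary detector on a frame; if it finds no ROI, fall
    back to the secondary detector and estimate the primary ROI from
    the secondary one. Returns None if neither detector succeeds."""
    roi = primary(frame)
    if roi is not None:
        return roi
    aux = secondary(frame)
    return estimate(aux) if aux is not None else None
```

This captures the abstract's key behavior: tracking continues across frames even when the first detector momentarily loses its target, by deriving the first ROI from a second, more robust detection.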
Displaying a unified desktop across connected devices
Embodiments provide for a handheld device with a unified desktop for integrating the functionality of the handheld device with a larger computer system. When connected to a peripheral display and/or a display of the larger computer system, the handheld device provides a unified desktop displayed across the screen(s) of the handheld device and the peripheral display. The unified desktop unifies the functionality provided by the larger computer system with the handheld functionality, such as communication applications (e.g., phone, SMS, MMS). A user can seamlessly interact with applications (e.g., open, move, close, or receive notifications) on the unified desktop, whether the applications are displayed on the screens of the handheld device or on the peripheral display of the larger computer system.