Patent classifications
G06T2207/30204
SYSTEM AND METHOD FOR DETERMINING TOOL POSITIONING, AND FIDUCIAL MARKER THEREFORE
A spatial orientation determining system includes an imaging device configured for obtaining an image of a surgical site, a surgical tool defining a central axis, a processor, and a memory. The surgical tool is configured for operating at the surgical site and has disposed thereon a fiducial marker generated by a machine learning network. The fiducial marker includes a distinct pattern. The memory includes instructions which, when executed by the processor, cause the system to: access an image from the imaging device, the image including a portion of the fiducial marker; determine a spatial positioning of the surgical tool based on at least a visible portion of the fiducial marker and the distinct pattern; and determine, based on the spatial positioning, a positioning, an orientation, and/or a rotational angle of the surgical tool.
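As an illustrative sketch (not from the patent), the role of the "distinct pattern" in recovering a rotational angle can be shown with a rotationally asymmetric binary marker: because no rotation of the pattern maps it onto itself, the observed orientation of the pattern uniquely determines the tool's rotation about the camera axis. The 4x4 pattern below is an arbitrary assumption.

```python
# Hedged sketch: recovering a rotational angle from a fiducial marker's
# distinct (rotationally asymmetric) pattern. The marker layout is
# illustrative, not taken from the patent.

MARKER = (
    (1, 0, 0, 0),
    (0, 1, 1, 0),
    (0, 1, 0, 1),
    (1, 1, 0, 0),
)

def rotate90(grid):
    """Rotate a square binary grid 90 degrees clockwise."""
    return tuple(zip(*grid[::-1]))

def rotational_angle(observed, reference=MARKER):
    """Return the clockwise angle (0/90/180/270) that maps the reference
    pattern onto the observed one, or None if there is no match."""
    grid = reference
    for angle in (0, 90, 180, 270):
        if tuple(tuple(r) for r in grid) == tuple(tuple(r) for r in observed):
            return angle
        grid = rotate90(grid)
    return None

# Example: the imaging device sees the marker rotated 90 degrees clockwise.
seen = rotate90(MARKER)
print(rotational_angle(seen))  # prints 90
```

A real system would first rectify the marker's visible portion from the camera image; this sketch starts from an already-rectified grid.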
PROVIDING A COMPLETE SET OF SECOND KEY ELEMENTS IN AN X-RAY IMAGE
A method comprises: applying a first trained function to first input data to generate first output data, the first output data including first key elements; receiving second input data, the second input data being an X-ray image of an examination region acquired using a first collimation region; applying a second trained function to the second input data to generate second output data, the second output data including second key elements; receiving third input data in response to an incomplete set of second key elements, the third input data including the second key elements and an X-ray image of the examination region acquired using the first collimation region; applying a third trained function to the third input data to generate third output data, the third output data including an estimated third key element to complete the set of second key elements; and providing a complete set of second key elements.
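The fallback logic — detect key elements, and if the set is incomplete, estimate the missing one from those present — can be sketched in miniature. Here the key elements are assumed to be the four corner points of a rectangular collimation region, and a simple parallelogram rule stands in for the third trained function; both are illustrative assumptions, not the patent's networks.

```python
# Hedged sketch: completing an incomplete set of key elements.
# Key elements are modeled as named corner points of a collimation
# region; the parallelogram rule stands in for the trained estimator.

def complete_corners(corners):
    """Given a dict of detected corner points (some may be missing),
    fill in a single missing corner of a parallelogram-shaped region:
    missing = adjacent1 + adjacent2 - opposite."""
    names = ("top_left", "top_right", "bottom_right", "bottom_left")
    missing = [n for n in names if n not in corners]
    if len(missing) != 1:
        return dict(corners)              # complete, or under-determined
    i = names.index(missing[0])
    a = corners[names[(i + 1) % 4]]       # adjacent corner
    b = corners[names[(i - 1) % 4]]       # other adjacent corner
    o = corners[names[(i + 2) % 4]]       # opposite (diagonal) corner
    out = dict(corners)
    out[missing[0]] = (a[0] + b[0] - o[0], a[1] + b[1] - o[1])
    return out

# Detector found three of the four corners; estimate the fourth.
detected = {"top_left": (10, 10), "top_right": (110, 10), "bottom_right": (110, 60)}
print(complete_corners(detected)["bottom_left"])  # prints (10, 60)
```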
IMAGE ENHANCEMENT BASED ON FIBER OPTIC SHAPE-SENSING
The present invention relates to an image processing system (10), comprising: a processor unit (20) arranged to receive imaging data associated with an imaging system (40) and optical shape sensing data associated with an optical shape sensing system (50) registered with the imaging system (40) such that the optical shape sensing data can be positioned in the imaging system; wherein the processor unit (20) is configured to define in the imaging data a region of interest based on the imaging data and/or the optical shape sensing data, and further configured to use the optical shape sensing data as markers within the region of interest such that the processor unit applies image enhancement to the imaging data in the region of interest based on the received optical shape sensing data.
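The core idea — let registered shape-sensing points define a region of interest and enhance only that region — can be sketched with a toy contrast boost on a grayscale raster. The bounding-box ROI, the margin, and the enhancement itself are all illustrative assumptions.

```python
# Hedged sketch: shape-sensing points (already registered into image
# coordinates) define an ROI, and enhancement is applied only there.

def roi_from_shape_points(points, margin=2):
    """Bounding box around the registered shape-sensing points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)

def enhance_roi(image, roi, gain=2.0):
    """Toy enhancement: stretch contrast about mid-grey (128), but only
    for pixels inside the region of interest; the rest is untouched."""
    x0, y0, x1, y1 = roi
    out = [row[:] for row in image]
    for y in range(max(y0, 0), min(y1 + 1, len(image))):
        for x in range(max(x0, 0), min(x1 + 1, len(image[0]))):
            v = 128 + gain * (image[y][x] - 128)
            out[y][x] = max(0, min(255, int(v)))
    return out

# Example: a flat 6x6 image, device shape passing through (2,2)-(3,3).
image = [[100] * 6 for _ in range(6)]
roi = roi_from_shape_points([(2, 2), (3, 3)], margin=1)
enhanced = enhance_roi(image, roi)
```

In the example, only pixels inside the box around the sensed shape change; corners of the frame keep their original value.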
METHODS FOR OPTICAL TRACKING AND SURFACE ACQUISITION IN SURGICAL ENVIRONMENTS AND DEVICES THEREOF
A computer assisted system is disclosed that includes an optical tracking system and one or more computing devices. The optical tracking system includes an RGB sensor and is configured to capture color images of an environment in the visible light spectrum and tracking images of fiducials in the environment in a near-infrared spectrum. The computer assisted system is configured to generate a color image of the environment using the color images, identify fiducial locations using the tracking images, generate depth maps from the color images, reconstruct three-dimensional surfaces of structures based on the depth maps, and output a display comprising the reconstructed three-dimensional surface and one or more surgical objects that are associated with the tracked fiducials. The computer assisted system can further include a monitor or a head-mounted display (HMD) configured to present augmented reality (AR) images during a procedure.
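One step of the pipeline — identifying fiducial locations in the near-infrared tracking images — can be sketched as centroid extraction of bright blobs via flood fill. The threshold, image format, and 4-connectivity are illustrative assumptions, not the disclosed tracker.

```python
# Hedged sketch: fiducials appear as bright blobs in a NIR frame;
# connected-component flood fill yields one centroid per fiducial.

def find_fiducials(nir, threshold=200):
    """Return (x, y) centroids of connected bright blobs in a 2-D
    intensity grid (list of rows)."""
    h, w = len(nir), len(nir[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if nir[y][x] >= threshold and not seen[y][x]:
                stack, pix = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pix.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and nir[ny][nx] >= threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                centroids.append((sum(p[1] for p in pix) / len(pix),
                                  sum(p[0] for p in pix) / len(pix)))
    return centroids
```

The resulting centroids are what a full system would then associate with known marker geometries to track surgical objects.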
METHOD AND SYSTEM FOR POSITIONING INDOOR AUTONOMOUS MOBILE ROBOT
A method and system for positioning an indoor autonomous mobile robot are disclosed in this application. The indoor layout of moving paths and the relative indoor position information of the moving paths are obtained by a vision sensor; visual positioning is performed by a visual locator on the indoor image data collected by the vision sensor to obtain first position information; second position information of a UWB location tag is obtained and solved by a UWB locator; and the first position information and the second position information are fused by an adaptive Kalman filter to obtain the final positioning information of the autonomous mobile robot. Through this fusion, the UWB locator corrects the accumulated error of visual positioning, while visual positioning smooths the measured data of the UWB locator, each making up for the other's deficiencies.
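The complementary behavior described — UWB correcting visual drift, vision smoothing noisy UWB fixes — can be sketched with a one-dimensional Kalman-style filter in which visual odometry increments drive the prediction and UWB fixes are the measurements. The fixed noise values are illustrative; the patent's filter adapts them online.

```python
# Hedged sketch: 1-D fusion of visual dead-reckoning (prediction) and
# UWB fixes (measurement). q and r are illustrative, fixed noise values,
# not the adaptive parameters of the disclosed filter.

def fuse(visual_deltas, uwb_fixes, q=0.05, r=0.5):
    """visual_deltas: per-step displacement from the visual locator.
    uwb_fixes: absolute positions from the UWB locator (one more fix
    than there are deltas). Returns the fused position track."""
    x, p = uwb_fixes[0], r                 # initialise from first UWB fix
    track = [x]
    for d, z in zip(visual_deltas, uwb_fixes[1:]):
        x, p = x + d, p + q                # predict with visual increment
        k = p / (p + r)                    # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p  # correct with UWB fix
        track.append(x)
    return track

# Example: vision under-reports motion (1.0 per step vs. a true 1.1),
# so pure dead-reckoning drifts; UWB fixes pull the estimate back.
track = fuse([1.0] * 10, [i * 1.1 for i in range(11)])
```

After ten steps, pure visual dead-reckoning would sit at 10.0 against a true position of 11.0; the fused estimate stays well within that drift.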
REGISTRATION FOR AUGMENTED REALITY SYSTEM FOR VIEWING AN EVENT
Augmented reality systems provide graphics over views from a mobile device for both in-venue and remote viewing of a sporting or other event. A server system can provide a transformation between the coordinate system of a mobile device (smart phone, tablet computer, head mounted display) and a real world coordinate system. Requested graphics for the event are then displayed over a view of the event.
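Applying such a server-provided transformation can be sketched as a planar similarity transform (scale, rotation, translation) that maps a real-world point into device coordinates before the graphic is drawn; the 2-D form and the parameterisation are simplifying assumptions.

```python
# Hedged sketch: map a world-coordinate point into device coordinates
# using a similarity transform (scale, yaw rotation, translation) of the
# kind a registration server might supply. 2-D for brevity.
import math

def world_to_device(point, scale, yaw, t):
    """Return the device-frame position of a world-frame point."""
    x, y = point
    c, s = math.cos(yaw), math.sin(yaw)
    return (scale * (c * x - s * y) + t[0],
            scale * (s * x + c * y) + t[1])

# Example: a graphic anchored at world point (1, 0), viewed by a device
# whose transform is scale 2, yaw 90 degrees, translation (3, 4).
pos = world_to_device((1.0, 0.0), 2.0, math.pi / 2, (3.0, 4.0))
```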
METHODS FOR ARTHROSCOPIC SURGERY VIDEO SEGMENTATION AND DEVICES THEREFOR
Methods, non-transitory computer readable media, and arthroscopic video segmentation apparatuses and systems that facilitate improved, automatic segmentation analysis of videos of arthroscopic procedures are disclosed. With this technology, a video feed of an arthroscopic surgery can be automatically segmented using machine learning models and one or more tags related to the segments can be associated with the video feed. The generated videos can be output in real time to provide segmented information related to the surgical procedure or can be saved with the one or more segments tagged for playback for training or informational purposes.
DETERMINATION SYSTEM, DETERMINATION METHOD, COMPUTER PROGRAM, AND AUTHENTICATION SYSTEM
A determination system includes: a projection control unit that controls a projection unit to project a random marker within an angle of view of an imaging unit; an acquisition unit that obtains, from the imaging unit, an image of a target person including the marker; and a determination unit that determines whether or not the target person imaged by the imaging unit is a living body on the basis of a state of the marker included in the image. Such a determination system can accurately determine whether or not the target person is a living body, and can therefore, for example, prevent biometric authentication from being defeated by fraudulent means such as presentation attacks.
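One plausible reading of judging liveness from the "state of the marker" is geometric: a flat spoof (printed photo or screen) reflects the projected marker grid almost undeformed, while a three-dimensional face perturbs it. The displacement metric and threshold below are illustrative assumptions, not the disclosed determination unit.

```python
# Hedged sketch: threshold the mean displacement between the projected
# marker grid and its observed positions on the target. Flat surfaces
# preserve the grid; a 3-D face distorts it.
import math

def is_live(projected, observed, tol=2.0):
    """projected/observed: matched lists of (x, y) grid points.
    Returns True if the mean point displacement exceeds tol."""
    d = [math.hypot(px - ox, py - oy)
         for (px, py), (ox, oy) in zip(projected, observed)]
    return sum(d) / len(d) > tol

# Example: a 3x3 projected grid.
grid = [(x * 10.0, y * 10.0) for y in range(3) for x in range(3)]
flat_reflection = list(grid)                       # spoof: undeformed
face_reflection = [(x + 3.0, y) for x, y in grid]  # face: shifted points
```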
POSE DETERMINING METHOD AND RELATED DEVICE
A pose determining method, which may be applied to the field of photographing and image processing, includes: obtaining a target image, where the target image includes a target parking space mark and a target parking space line, and a target parking space corresponding to the target parking space mark includes the target parking space line; and determining pose information based on the target parking space mark and the target parking space line. The pose information indicates a corresponding pose of the terminal during photographing of the target image. According to the pose determining method, the pose information may be determined based on the target parking space mark and the target parking space line, thereby enabling positioning of the terminal.
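The geometry behind such a method can be sketched in 2-D: the parking space mark supplies a known map position, and the parking space line supplies a known direction, which together fix the terminal's position and heading. The single-landmark, planar formulation is a simplifying assumption.

```python
# Hedged sketch: recover a terminal's 2-D pose (x, y, heading) from one
# parking-space mark with known map position plus the parking-space
# line's direction observed in both the map and terminal frames.
import math

def terminal_pose(mark_world, mark_obs, line_angle_world, line_angle_obs):
    """mark_world: mark position in the map frame; mark_obs: the same
    mark as observed in the terminal frame. The line angles give yaw."""
    heading = line_angle_world - line_angle_obs     # terminal yaw
    c, s = math.cos(heading), math.sin(heading)
    ox, oy = mark_obs
    # world = R(heading) @ obs + t  =>  t = world - R(heading) @ obs
    tx = mark_world[0] - (c * ox - s * oy)
    ty = mark_world[1] - (s * ox + c * oy)
    return tx, ty, heading

# Example: mark at (-3, 5) on the map, seen at (3, 4) in the terminal
# frame; the line runs at 90 degrees on the map, 0 degrees as observed.
pose = terminal_pose((-3.0, 5.0), (3.0, 4.0), math.pi / 2, 0.0)
```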
POSITION ANALYSIS DEVICE AND METHOD, AND CAMERA SYSTEM
A position analysis device includes: an input interface that acquires a captured image obtained by the camera; a processor that calculates a coordinate transformation from a first coordinate system to a second coordinate system with respect to a target position in the acquired captured image; and a memory that stores a correction model that generates a correction amount for a position in the second coordinate system. The processor corrects the transformed position, i.e., the position obtained by applying the coordinate transformation to the target position in the first coordinate system, based on the correction model, which includes weight functions corresponding to reference positions; each weight function produces a larger weighting in the correction amount as the transformed position gets closer to its corresponding reference position, the reference positions being different from each other in the second coordinate system.
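The weighting behavior described — corrections dominated by nearby reference positions — can be sketched with Gaussian distance weights blending known residuals at the references. The Gaussian form and sigma value are illustrative assumptions about the correction model.

```python
# Hedged sketch of the correction stage: each reference position in the
# second coordinate system carries a known residual; a weight that grows
# as the transformed position approaches that reference blends the
# residuals into one correction amount.
import math

def correct(pos, references, sigma=10.0):
    """pos: transformed position. references: list of
    ((ref_x, ref_y), (residual_x, residual_y)) pairs.
    Returns the corrected position."""
    wsum = cx = cy = 0.0
    for (rx, ry), (dx, dy) in references:
        d2 = (pos[0] - rx) ** 2 + (pos[1] - ry) ** 2
        w = math.exp(-d2 / (2 * sigma ** 2))   # larger when closer
        wsum += w
        cx += w * dx
        cy += w * dy
    if wsum == 0.0:
        return pos
    return (pos[0] + cx / wsum, pos[1] + cy / wsum)

# Example: two references with different residuals; a position on top of
# the first reference receives (almost) exactly that reference's residual.
refs = [((0.0, 0.0), (1.0, 0.0)), ((100.0, 0.0), (0.0, 1.0))]
corrected = correct((0.0, 0.0), refs)
```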