
CONTEMPORANEOUSLY CALIBRATING A GAZE-TRACKING SYSTEM AND AUTHORIZING ACCESS TO ANOTHER SYSTEM
20230237847 · 2023-07-27

A system for contemporaneously calibrating a gaze-tracking system and authorizing access to a first other system can include a processor and a memory. The memory can store a preliminary operations module, an authorization module, and a gaze-tracking module. The preliminary operations module can include instructions to compare a trajectory of a point of gaze of an eye with a pattern associated both with a calibration of the gaze-tracking system and with a first authorization process, which excludes an iris recognition process, for the first other system. The authorization module can include instructions to cause an access to the first other system to be authorized. The gaze-tracking module can include instructions to: (1) cause the gaze-tracking system to be calibrated and (2) cause, in response to the gaze-tracking system being calibrated, the gaze-tracking system to be configured to be a user interface for a second other system.
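To make the comparison concrete, the following is a minimal Python sketch of one way a trajectory-versus-pattern gate could feed both calibration and authorization. The resampling scheme, the tolerance, the affine calibration model, and all function names are illustrative assumptions, not the patent's method.

    # Hypothetical sketch: compare a gaze trajectory against a stored pattern
    # to gate authorization, then fit a calibration from the same samples.
    import numpy as np

    def trajectory_matches(gaze_pts, pattern_pts, tol=0.05):
        """Return True if the mean point-to-point error is within tolerance."""
        # Resample both trajectories to a common length before comparison.
        n = min(len(gaze_pts), len(pattern_pts))
        g = gaze_pts[np.linspace(0, len(gaze_pts) - 1, n).astype(int)]
        p = pattern_pts[np.linspace(0, len(pattern_pts) - 1, n).astype(int)]
        return float(np.mean(np.linalg.norm(g - p, axis=1))) <= tol

    def fit_affine_calibration(gaze_pts, pattern_pts):
        """Least-squares 2D affine map from raw gaze points to pattern points."""
        A = np.hstack([gaze_pts, np.ones((len(gaze_pts), 1))])  # rows [x, y, 1]
        M, *_ = np.linalg.lstsq(A, pattern_pts, rcond=None)     # 3x2 affine matrix
        return M

    raw = np.random.rand(50, 2)       # raw gaze samples (placeholder data)
    pattern = raw * 0.9 + 0.05        # the enrolled pattern (placeholder)
    if trajectory_matches(raw, pattern, tol=0.1):
        calibration = fit_affine_calibration(raw, pattern)  # calibrate and authorize together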

Vehicular trailer assist system
11708111 · 2023-07-25

A vehicular trailer assist system includes a camera disposed at a rear portion of a vehicle and viewing a portion of a trailer hitched at a hitch of the vehicle. During a reversing maneuver of the vehicle and hitched trailer, the vehicular trailer assist system, responsive to processing at an electronic control unit (ECU) of image data captured by the camera, determines a trailer angle of the trailer relative to a longitudinal axis of the vehicle. Based at least in part on the determined trailer angle, the vehicular trailer assist system determines a trailer direction of movement of the trailer while the vehicle is reversing with the trailer hitched at the hitch of the vehicle. The vehicular trailer assist system determines a virtual destination location rearward of the trailer and in the determined trailer direction and controls steering of the vehicle to reverse the trailer towards the virtual destination location.
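As a rough illustration of the geometry involved, the sketch below derives a trailer heading from the articulation angle and projects a virtual destination rearward along it. The flat-ground kinematic model and all names here are assumptions, not the ECU's actual implementation.

    # Assumed kinematics: trailer heading from the hitch articulation angle,
    # and a virtual destination projected rearward of the trailer.
    import math

    def trailer_direction(vehicle_heading_rad, trailer_angle_rad):
        """Trailer heading = vehicle heading offset by the articulation angle."""
        return vehicle_heading_rad + trailer_angle_rad

    def virtual_destination(trailer_x, trailer_y, trailer_heading_rad, lookahead_m=5.0):
        """Project a destination point rearward of the trailer along its heading."""
        return (trailer_x - lookahead_m * math.cos(trailer_heading_rad),
                trailer_y - lookahead_m * math.sin(trailer_heading_rad))

    heading = trailer_direction(0.0, math.radians(12.0))  # example 12-degree articulation
    dest = virtual_destination(0.0, 0.0, heading)         # steering then aims at dest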

Methods and systems for medical imaging based analysis of ejection fraction and fetal heart functions

Systems and methods are provided for enhanced heart medical imaging operations, particularly by incorporating use of artificial intelligence (AI) based fetal heart functional analysis and/or real-time and automatic ejection fraction (EF) measurement and analysis.
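For context, the ejection-fraction figure such a system ultimately reports is the standard clinical ratio. The sketch below computes it from end-diastolic and end-systolic volumes, which the abstract's AI analysis would be assumed to estimate upstream.

    # Standard EF formula; the chamber volumes are assumed inputs from
    # an upstream AI segmentation, not computed here.
    def ejection_fraction(edv_ml, esv_ml):
        """EF (%) = (end-diastolic volume - end-systolic volume) / EDV * 100."""
        return (edv_ml - esv_ml) / edv_ml * 100.0

    print(ejection_fraction(120.0, 50.0))  # ~58.3%, within the typical normal range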

Systems, methods, and computer-program products for assessing athletic ability and generating performance data

Methods, systems, and computer-program products for assessing athletic ability and generating performance data. In one embodiment, athlete performance data is generated through computer-vision analysis of video of an athlete performing, e.g., during practice or gameplay. The generated performance data for the athlete may include, for example, maximum speed, maximum acceleration, time to maximum speed, transition time (e.g., time to change direction), closing speed (e.g., time to close the distance to another athlete), average separation (e.g., between the athlete and another athlete), play-making ability, athleticism (e.g., a weighted computation and/or combination of multiple metrics), and/or other performance data. This performance data may be used to generate and/or update a profile associated with the athlete, which can be utilized for recruiting, scouting, comparing, and/or assessing athletes with greater efficiency and precision.
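As a small illustration of how several of the listed metrics fall out of tracked positions, the Python sketch below derives maximum speed, maximum acceleration, and time to maximum speed from (t, x, y) samples. The tracking itself, and every name here, is an assumption standing in for the computer-vision analysis.

    # Hypothetical metric extraction from tracked positions over time.
    import numpy as np

    def performance_metrics(t, xy):
        vel = np.gradient(xy, t, axis=0)    # per-axis velocity, m/s
        speed = np.linalg.norm(vel, axis=1)
        accel = np.gradient(speed, t)       # acceleration along the path, m/s^2
        return {
            "max_speed": float(speed.max()),
            "max_acceleration": float(accel.max()),
            "time_to_max_speed": float(t[speed.argmax()] - t[0]),
        }

    t = np.linspace(0.0, 4.0, 101)                            # 4 s of samples (placeholder)
    xy = np.column_stack([t**2, np.zeros_like(t)])            # constant-acceleration run
    print(performance_metrics(t, xy))                         # max speed ~8 m/s at t = 4 s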

DETERMINING NEEDLE POSITION
20230225684 · 2023-07-20

In an embodiment, a method (100) is described. The method comprises receiving (102) data corresponding to a plurality of radiographic imaging slices of a body. The method further comprises determining (104) a position of a needle inserted in the body. The determination is based on combining information from at least one of the radiographic imaging slices comprising an indication of a first portion of the needle outside the body and at least one other of the radiographic imaging slices comprising an indication of a second portion of the needle inside the body. A combined needle region is generated by merging data corresponding to a position of the first portion of the needle outside the body with data corresponding to a position of the second portion of the needle inside the body. The method further comprises generating (106) display data for providing a visual representation of the needle in an image of the body in combination with a visual representation of at least the first and second portions of the needle superimposed on the image. The image is in a plane that is digitally tilted with respect to a plane parallel to the plurality of radiographic imaging slices.
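One way to picture the "combined needle region" is as a merge of per-slice detections followed by a single line fit. The sketch below does exactly that, using PCA over placeholder points; it is an assumption for illustration rather than the described method.

    # Merge outside-body and inside-body needle detections and fit one 3D axis.
    import numpy as np

    def combine_needle_region(outside_pts, inside_pts):
        """Merge point sets from different slices; fit the needle axis by PCA."""
        pts = np.vstack([outside_pts, inside_pts])
        centroid = pts.mean(axis=0)
        # Principal direction of the merged region = estimated needle axis.
        _, _, vt = np.linalg.svd(pts - centroid)
        return centroid, vt[0]          # a point on the needle and its direction

    outside = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 11.0]])  # placeholder detections
    inside = np.array([[2.0, 0.0, 12.0], [3.0, 0.0, 13.0]])
    point, direction = combine_needle_region(outside, inside)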

DIGITAL ANTIMICROBIAL SUSCEPTIBILITY TESTING

Detecting single bacterial cells in a sample includes collecting, from a sample provided to an imaging apparatus, a multiplicity of images of the sample over a length of time; assessing a trajectory of each bacterial cell in the sample; and assessing, based on the trajectory of each bacterial cell in the sample, a number of bacterial cell divisions that occur in the sample during the length of time.
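A toy version of the division count might associate the end of one trajectory with two trajectories beginning nearby. The sketch below implements that placeholder rule, which is an assumption standing in for the system's actual trajectory assessment.

    # Assumed rule: a division is flagged when >= 2 new tracks start near
    # the point where a parent track ends.
    import numpy as np

    def count_divisions(track_ends, track_starts, radius=2.0):
        """Count track endpoints with at least two new tracks starting nearby."""
        divisions = 0
        for end in track_ends:
            near = np.linalg.norm(track_starts - end, axis=1) < radius
            if near.sum() >= 2:
                divisions += 1
        return divisions

    ends = np.array([[10.0, 10.0]])                      # one parent track ends here
    starts = np.array([[10.5, 10.2], [9.6, 9.9], [40.0, 40.0]])
    print(count_divisions(ends, starts))                 # -> 1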

Systems and methods for adjusting a medical device

A method for adjusting a medical device is provided. The method includes obtaining an initial trajectory of a component of the medical device. The initial trajectory of the component includes a plurality of initial positions. For each of the plurality of initial positions, the method further includes determining whether a collision is likely to occur between a subject and the component according to the initial trajectory of the component. In response to the determination that the collision is likely to occur, the method further includes updating the initial trajectory of the component to determine an updated trajectory of the component.
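To sketch the check-and-update loop, the example below models the subject as a sphere with a clearance radius and pushes colliding waypoints out of it. Both the collision test and the detour rule are placeholder assumptions.

    # Hypothetical collision check and trajectory update.
    import numpy as np

    def collision_likely(position, subject_center, clearance):
        """Treat the subject as a sphere; flag waypoints inside the clearance."""
        return float(np.linalg.norm(position - subject_center)) < clearance

    def update_trajectory(trajectory, subject_center, clearance=0.3):
        """Push any colliding waypoint radially out to the clearance boundary."""
        updated = []
        for p in trajectory:
            p = np.asarray(p, dtype=float)
            if collision_likely(p, subject_center, clearance):
                direction = p - subject_center
                norm = float(np.linalg.norm(direction))
                # Degenerate waypoint at the center: pick an arbitrary direction.
                direction = direction / norm if norm else np.array([1.0, 0.0])
                p = subject_center + direction * clearance
            updated.append(p)
        return updated

    path = [[0.0, 0.5], [0.0, 0.1], [0.0, -0.5]]
    safe = update_trajectory(path, subject_center=np.array([0.0, 0.0]))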

Vehicle path restoration system through sequential image analysis and vehicle path restoration method using the same
20230230391 · 2023-07-20

Disclosed are a vehicle path restoration system through sequential image analysis and a vehicle path restoration method using the same. The system includes: an image capturing unit that acquires sequential images from a front camera installed in the subject vehicle; an image analysis unit that recognizes multiple lanes from the sequential images acquired by the image capturing unit, calculates multiple paths using the geometric characteristics of the lanes recognized at the current time and the speed of the subject vehicle, and restores the path of the subject vehicle as well as the path of the front vehicle driving in front of the subject vehicle; a memory for storing the path data of the subject vehicle and the front vehicle restored by the image analysis unit; and a display unit that presents the path data of the subject vehicle and the front vehicle stored in the memory in the form of a top view.
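A stripped-down flavor of the restoration step: assuming per-frame speed and heading (which the image analysis unit would derive from lane geometry) are available, the path can be dead-reckoned into top-view coordinates, as in this hypothetical sketch.

    # Dead-reckon a top-view (x, y) path from per-frame speed and heading.
    import math

    def restore_path(speeds_mps, headings_rad, dt=1 / 30):
        """Integrate per-frame motion into a top-view path."""
        x, y, path = 0.0, 0.0, [(0.0, 0.0)]
        for v, h in zip(speeds_mps, headings_rad):
            x += v * dt * math.cos(h)
            y += v * dt * math.sin(h)
            path.append((x, y))
        return path

    # 60 frames at 15 m/s with a slight constant left curvature (placeholder data).
    path = restore_path([15.0] * 60, [i * 0.002 for i in range(60)])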

DEPTH COMPLETION METHOD AND APPARATUS USING SPATIAL-TEMPORAL INFORMATION

Provided are a depth completion method and apparatus using spatial-temporal information. The depth completion apparatus according to the present invention comprises a processor and a memory connected to the processor, wherein the memory stores program instructions executable by the processor for performing operations comprising: receiving an RGB image and a sparse image through a camera and a LiDAR; generating a dense first depth map by processing color information of the RGB image through a first branch based on an encoder-decoder; generating a dense second depth map by up-sampling the sparse image through a second branch based on an encoder-decoder; generating a third depth map by fusing the first depth map and the second depth map; and generating a final depth map, including a trajectory of a moving object in RGB images continuously captured during movement, by inputting the third depth map to a convolutional long short-term memory (LSTM).
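Of the pipeline's steps, the fusion of the two dense depth maps is the easiest to sketch. The example below uses an assumed per-pixel confidence weighting and omits the encoder-decoder branches and the ConvLSTM entirely.

    # Assumed fusion rule: per-pixel confidence-weighted average of the
    # color-branch and LiDAR-branch depth maps.
    import numpy as np

    def fuse_depth_maps(depth_rgb, depth_lidar, conf_rgb, conf_lidar):
        """Per-pixel confidence-weighted average of the two branch outputs."""
        weight_sum = conf_rgb + conf_lidar + 1e-8   # avoid division by zero
        return (depth_rgb * conf_rgb + depth_lidar * conf_lidar) / weight_sum

    h, w = 4, 6                                     # tiny placeholder resolution
    fused = fuse_depth_maps(np.full((h, w), 5.0), np.full((h, w), 5.5),
                            np.full((h, w), 0.6), np.full((h, w), 0.4))  # -> 5.2 m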

Automatic field of view detection
11562497 · 2023-01-24

Implementations are described herein for analyzing a sequence of digital images captured by a mobile vision sensor (e.g., integral with a robot), in conjunction with information (e.g., ground truth) known about movement of the vision sensor, to determine spatial dimensions of object(s) and/or an area captured in a field of view of the mobile vision sensor. Techniques avoid the use of visual indicia of known dimensions and/or other conventional tools for determining spatial dimensions, such as checkerboards. Instead, techniques described herein allow spatial dimensions to be determined using fewer resources, and are more scalable than conventional techniques.
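The geometry behind this is essentially triangulation against the sensor's own known motion. A hypothetical sketch, with made-up camera parameters, follows; the feature matching that yields the pixel shift is assumed to happen upstream.

    # Triangulate depth from a known sensor translation, then convert depth
    # to the metric width of the field of view.
    def depth_from_motion(focal_px, baseline_m, shift_px):
        """Stereo-style triangulation: Z = f * b / disparity."""
        return focal_px * baseline_m / shift_px

    def fov_width_m(depth_m, image_width_px, focal_px):
        """Metric width of the scene spanned by the image at the given depth."""
        return depth_m * image_width_px / focal_px

    z = depth_from_motion(focal_px=800.0, baseline_m=0.10, shift_px=16.0)  # 5.0 m
    print(fov_width_m(z, image_width_px=1280.0, focal_px=800.0))           # 8.0 m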