Patent classifications
G06T2207/30244
VISION-ASSIST DEVICES AND METHODS OF CALIBRATING VISION-ASSIST DEVICES
Vision-assist devices and methods for calibrating a position of a vision-assist device worn by a user are disclosed. In one embodiment, a method of calibrating a vision-assist device includes capturing a calibration image using at least one image sensor of the vision-assist device, obtaining at least one attribute of the calibration image, and comparing the at least one attribute of the calibration image with a reference attribute. The method further includes determining an adjustment of the at least one image sensor based at least in part on the comparison of the at least one attribute of the calibration image with the reference attribute, and providing an output corresponding to the determined adjustment of the vision-assist device.
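As a hedged illustration of the compare-and-adjust loop this abstract describes (the marker detection, direction labels, and tolerance below are assumptions for the sketch, not the patent's implementation), the calibration-image attribute here is the centroid of a detected marker, compared against a reference centroid to produce an adjustment output:

```python
def marker_centroid(image):
    """Attribute of the calibration image: centroid of nonzero pixels,
    a stand-in for a detected calibration marker."""
    pts = [(x, y) for y, row in enumerate(image)
                  for x, px in enumerate(row) if px]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

def determine_adjustment(image, reference, tol=0.5):
    """Compare the image attribute with the reference attribute and return
    the adjustment output. Image y grows downward, so a marker sitting too
    low means the sensor should tilt down to raise it in the frame."""
    cx, cy = marker_centroid(image)
    dx, dy = cx - reference[0], cy - reference[1]
    msgs = []
    if dx > tol:
        msgs.append("pan right")   # marker appears right of reference
    elif dx < -tol:
        msgs.append("pan left")
    if dy > tol:
        msgs.append("tilt down")   # marker appears below reference
    elif dy < -tol:
        msgs.append("tilt up")
    return msgs or ["aligned"]
```

In a wearable device the returned directions could be spoken to the user or fed to an actuator, matching the abstract's "output corresponding to the determined adjustment".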
ALIGNING VISION-ASSIST DEVICE CAMERAS BASED ON PHYSICAL CHARACTERISTICS OF A USER
A vision-assist device may include at least one image sensor for generating image data corresponding to an environment, a user input device for receiving user input regarding one or more physical characteristics of a user, and a processor. The processor may be programmed to receive the image data from the at least one image sensor, receive the user input from the user input device, and adjust an alignment of the at least one image sensor based on the received image data and the user input. Methods for aligning an image sensor are also provided.
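A minimal sketch of how a physical characteristic from the user input device could drive the alignment: here the wearer's height sets a downward pitch so the optical axis meets the ground at a fixed look-ahead distance. The mount offset and look-ahead distance are assumed values for illustration only, not parameters from the patent.

```python
import math

def pitch_for_user(height_m, mount_offset_m=0.25, lookahead_m=3.0):
    """Downward pitch (degrees) that aims a chest-mounted sensor at a
    point lookahead_m ahead on the ground for a wearer of height_m."""
    camera_height_m = height_m - mount_offset_m  # assumed mount position
    return math.degrees(math.atan2(camera_height_m, lookahead_m))
```

A taller user yields a steeper pitch, which is the kind of user-input-dependent adjustment the abstract describes.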
Surveillance information generation apparatus, imaging direction estimation apparatus, surveillance information generation method, imaging direction estimation method, and program
A surveillance information generation apparatus (2000) includes a first surveillance image acquisition unit (2020), a second surveillance image acquisition unit (2040), and a generation unit (2060). The first surveillance image acquisition unit (2020) acquires a first surveillance image (12) generated by a fixed camera (10). The second surveillance image acquisition unit (2040) acquires a second surveillance image (22) generated by a moving camera (20). The generation unit (2060) generates surveillance information (30) relating to object surveillance, using the first surveillance image (12) and the second surveillance image (22).
Automated camera positioning for feeding behavior monitoring
Methods, systems, and apparatus, including computer programs encoded on computer-readable storage media, for automated camera positioning for feeding behavior monitoring. In some implementations, a system obtains an image of a scene, a spatial model that corresponds to a subfeeder, and calibration parameters of a camera; determines a size of the subfeeder in the image of the scene; selects an updated position of the camera relative to the subfeeder; provides the updated position to a winch controller; and moves the camera to the updated position.
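Assuming a simple pinhole model (apparent size ≈ focal length in pixels × true size / distance), the position-selection step can be sketched as below; the model, names, and parameters are illustrative assumptions, not taken from the patent.

```python
def apparent_size_px(true_size_m, distance_m, focal_px):
    """Pinhole-model apparent size of the subfeeder in the image."""
    return focal_px * true_size_m / distance_m

def updated_camera_distance(measured_px, true_size_m, focal_px, target_px):
    """Camera-to-subfeeder distance at which the subfeeder would span
    target_px pixels; this value would go to the winch controller."""
    current_m = focal_px * true_size_m / measured_px  # invert the model
    return current_m * measured_px / target_px
```

The measured size in the scene image gives the current distance; inverting the same model at the desired size yields the new winch position.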
DETERMINING ROAD LOCATION OF A TARGET VEHICLE BASED ON TRACKED TRAJECTORY
Systems and methods are provided for navigating a host vehicle. In an embodiment, a processing device may be configured to receive images captured over a time period; analyze the images to identify a target vehicle; receive map information including a plurality of target trajectories; determine, based on analysis of the images, first and second estimated positions of the target vehicle within the time period; determine, based on the first and second estimated positions, a trajectory of the target vehicle over the time period; compare the determined trajectory to the plurality of target trajectories to identify a target trajectory traversed by the target vehicle; determine, based on the identified target trajectory, a position of the target vehicle; and determine a navigational action for the host vehicle based on the determined position.
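The trajectory-comparison step might look like the following sketch, which scores each mapped target trajectory (a polyline of map points) by summed squared distance to the estimated positions and picks the closest; the metric and all names are assumptions, not the patent's method.

```python
def dist2(p, q):
    """Squared Euclidean distance between two 2-D points."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def nearest_trajectory(estimated_positions, target_trajectories):
    """Index of the target trajectory closest to the target vehicle's
    estimated positions (nearest-vertex squared distance, summed)."""
    def cost(traj):
        return sum(min(dist2(p, v) for v in traj) for p in estimated_positions)
    return min(range(len(target_trajectories)),
               key=lambda i: cost(target_trajectories[i]))
```

With two lane-center polylines as target trajectories, two position estimates near the first lane select index 0, after which the matched trajectory constrains the target vehicle's road position.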
THREE-DIMENSIONAL STABILIZED 360-DEGREE COMPOSITE IMAGE CAPTURE
Many embodiments can comprise a system. The system can comprise a processor and a memory coupled to the processor. The memory can include instructions that, when executed by the processor, cause the processor to: determine a direction of gravity in each image of a sequence of images around an object; estimate a center of mass of the object in each image of the sequence of images using the direction of gravity and dimensions of the object; stabilize each image in the sequence of images using the center of mass; and generate a 360 degree display of the object using each image in the stabilized sequence of images. Other embodiments are disclosed herein.
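A 2-D, axis-aligned toy version of the stabilization described above: estimate the mass center from the object's bounding box along the gravity axis, then shift every frame so that point lands on a shared anchor. The real system works in 3-D; these simplifications and the height fraction are assumptions.

```python
def center_of_mass(bbox, height_fraction=0.5):
    """Estimate the object's mass center from its bounding box
    (x, y, w, h), with image y increasing along gravity; place it
    height_fraction of the way down the box."""
    x, y, w, h = bbox
    return (x + w / 2.0, y + h * height_fraction)

def stabilizing_shifts(centers, anchor):
    """Per-frame (dx, dy) translations that move each frame's estimated
    center of mass onto a shared anchor point."""
    return [(anchor[0] - cx, anchor[1] - cy) for cx, cy in centers]
```

Applying the shifts to each frame of the sequence keeps the object fixed in the 360-degree display.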
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM
An information processing system that acquires video data captured by an image pickup unit; detects an object from the video data; detects a condition corresponding to the image pickup unit; and controls a display to display content associated with the object at a position other than a detected position of the object based on the condition corresponding to the image pickup unit.
ONLINE MATCHING AND OPTIMIZATION METHOD COMBINING GEOMETRY AND TEXTURE, 3D SCANNING DEVICE, SYSTEM AND NON-TRANSITORY STORAGE MEDIUM
An online matching and optimization method combining geometry and texture and a three-dimensional (3D) scanning system are provided. The method includes obtaining pairs of depth texture images with a one-to-one corresponding relationship, the depth images being collected by a depth sensor and the texture images by a camera device; adopting a coarse-to-fine strategy to perform feature matching on the depth texture images corresponding to a current frame and on the depth texture images corresponding to the target frames, to estimate a preliminary pose of the depth sensor in the 3D scanning system; and combining a geometric constraint and a texture constraint to optimize the estimated preliminary pose, obtaining a refined motion estimation between the frames.
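The combined geometry-plus-texture objective can be sketched as a weighted sum of squared depth residuals and photometric residuals; the residual functions, weight, and the candidate-search stand-in for the real iterative optimizer are all assumptions for illustration.

```python
def combined_cost(geom_residuals, tex_residuals, w_tex=0.5):
    """Weighted sum of squared geometric (depth) and texture (photometric)
    residuals; w_tex balances the two constraints."""
    geom = sum(r * r for r in geom_residuals)
    tex = sum(r * r for r in tex_residuals)
    return geom + w_tex * tex

def refine_pose(candidate_poses, geom_fn, tex_fn, w_tex=0.5):
    """Pick the candidate pose minimizing the combined cost (a stand-in
    for the iterative refinement the method performs)."""
    return min(candidate_poses,
               key=lambda p: combined_cost(geom_fn(p), tex_fn(p), w_tex))
```

Using both constraints lets texture disambiguate geometrically flat regions and geometry disambiguate textureless ones, which is the motivation for combining them.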
APPARATUS, METHOD AND COMPUTER PROGRAM FOR MONITORING A SUBJECT DURING A MEDICAL IMAGING PROCEDURE
The invention refers to an apparatus for monitoring a subject (121) during an imaging procedure, e.g. CT imaging. The apparatus (110) comprises a monitoring image providing unit (111) providing a first monitoring image and a second monitoring image acquired at different support positions, a monitoring position providing unit (112) providing a first monitoring position of a region of interest in the first monitoring image, a support position providing unit (113) providing support position data of the support positions, a position map providing unit (114) providing a position map mapping calibration support positions to calibration monitoring positions, and a region of interest position determination unit (115) determining a position of the region of interest in the second monitoring image based on the first monitoring position, the support position data, and the position map. This allows the position of the region of interest to be determined accurately and with low computational effort.
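One way to picture the position-map step: the calibration pairs (support position → monitoring position) define a piecewise-linear map, and the region of interest in the second monitoring image is the first monitoring position shifted by the map's displacement between the two support positions. This 1-D interpolation sketch is purely illustrative; the patent does not specify this implementation.

```python
import bisect

def map_lookup(position_map, support_pos):
    """position_map: sorted (support position, monitoring position)
    calibration pairs; linearly interpolate between them."""
    supports = [s for s, _ in position_map]
    i = bisect.bisect_left(supports, support_pos)
    if i == 0:
        return position_map[0][1]
    if i == len(position_map):
        return position_map[-1][1]
    (s0, m0), (s1, m1) = position_map[i - 1], position_map[i]
    t = (support_pos - s0) / (s1 - s0)
    return m0 + t * (m1 - m0)

def roi_in_second_image(first_roi_pos, support_first, support_second, position_map):
    """Shift the first monitoring position by the map's displacement
    between the two support positions."""
    shift = (map_lookup(position_map, support_second)
             - map_lookup(position_map, support_first))
    return first_roi_pos + shift
```

Only two table lookups and one subtraction per frame are needed, which matches the abstract's claim of low computational effort.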
COMPUTER SYSTEM, APPARATUS, AND METHOD FOR AN AUGMENTED REALITY HAND GUIDANCE APPLICATION FOR PEOPLE WITH VISUAL IMPAIRMENTS
A system, device, application stored on non-transitory memory, and method can be configured to help a user of a device locate and pick up objects around them. Embodiments can be configured to help vision-impaired users find, locate, and pick up objects near them. Embodiments can be configured so that such functionality is provided locally via a single device, so the device is able to provide assistance and hand guidance without a connection to the internet, a network, or another device (e.g. a remote server, a cloud-based server, a server connectable to the device via an application programming interface (API), etc.).