Patent classifications
G01S3/7864
Remote-controlled weapon system in moving platform and moving target tracking method thereof
A remote-controlled weapon system, mounted in a moving platform, includes at least one processor that implements: a first posture calculator that calculates a first pixel movement amount corresponding to a posture change amount of a camera during a time interval between a first image and a second image, received after the first image; a second posture calculator that calculates a second pixel movement amount corresponding to a control command for changing a posture of the camera to match a moving target, detected from the second image, with an aiming point; and a region of interest (ROI) controller that calculates a third pixel movement amount corresponding to vibration of the camera based on the first pixel movement amount and the second pixel movement amount, and estimates a location of an ROI that is to be set on the moving target of the second image, based on the third pixel movement amount.
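The three pixel-movement amounts lend themselves to a short vector sketch. The abstract only says the third (vibration) amount is derived from the first two; the residual model below, in which vibration is the observed posture-induced shift minus the commanded shift, is an assumption for illustration:

```python
import numpy as np

def estimate_roi_center(prev_roi_center, first_shift, second_shift):
    """Estimate where the ROI should sit in the second image.

    Assumed model (not stated in the abstract): the vibration-induced
    pixel shift is the residual between the observed posture-change
    shift (first) and the commanded shift (second).
    """
    third_shift = first_shift - second_shift  # assumed vibration residual
    return prev_roi_center + third_shift

center = estimate_roi_center(np.array([320.0, 240.0]),   # previous ROI centre
                             np.array([5.0, -2.0]),      # first pixel movement
                             np.array([3.0, -1.0]))      # second pixel movement
```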
Wearable apparatus with wide viewing angle image sensor
A wearable apparatus and method are provided for capturing image data. In one implementation, a wearable apparatus for capturing image data is provided. The wearable apparatus includes at least one image sensor for capturing image data of an environment of a user, wherein a field of view of the image sensor includes a chin of the user. The wearable apparatus includes two or more microphones, and an attachment mechanism configured to enable the image sensor and microphones to be worn by the user. The wearable apparatus includes a processing device programmed to capture at least one image, identify the chin of the user to obtain a location of the chin, select a microphone from the two or more microphones based on the location, process input from the selected microphone using a first processing scheme, and process input from a microphone that is not selected using a second processing scheme.
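The microphone-selection step can be sketched in a few lines. The nearest-microphone rule and the two placeholder processing schemes are assumptions; the abstract only says the selection is "based on the location" of the chin and that selected and non-selected inputs get different schemes:

```python
def select_microphone(chin_x, mic_positions):
    """Pick the microphone nearest the detected chin location
    (hypothetical selection rule)."""
    return min(range(len(mic_positions)),
               key=lambda i: abs(mic_positions[i] - chin_x))

def process(samples, selected):
    """First scheme: pass-through; second scheme: attenuate.
    Both are placeholders for whatever the device actually applies."""
    return samples if selected else [0.1 * s for s in samples]

# Chin detected at x=120 in the image; two microphones at x=40 and x=130.
idx = select_microphone(120, [40, 130])
```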
System and method for measuring tracker system accuracy
The present invention relates to a simple and effective system and method for measuring the accuracy of a camera-based tracker system, especially a helmet-mounted tracker system, utilizing a Coordinate Measuring Machine (CMM). The method comprises the steps of: computing the spatial relation between the tracked object and a calibration pattern using the CMM; computing the relation between the reference camera and the tracker camera; computing the relation between the reference camera and the calibration pattern; computing the ground truth relation between the tracker camera and the tracked object; obtaining actual tracker system results; comparing these results with the ground truth relations to find the accuracy of the tracker system; recording the accuracy results; and testing whether a new accuracy calculation is required. The system comprises: a reference camera; a calibration pattern visible to the reference camera; a camera spatial relation computation unit; a relative spatial relation computation unit; a memory unit; and a spatial relation comparison unit.
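The chain of relations can be composed as homogeneous transforms. The frame naming (T_a_b maps coordinates in frame b to frame a) and all numeric values below are illustrative assumptions; the sketch only shows how the ground-truth tracker-camera-to-object relation falls out of the measured chain and how a tracker result is scored against it:

```python
import numpy as np

def transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical measured relations (identity rotations for clarity):
T_ref_trk = transform(np.eye(3), [0.1, 0.0, 0.0])  # reference cam -> tracker cam
T_ref_cal = transform(np.eye(3), [0.5, 0.2, 1.0])  # reference cam -> pattern
T_cal_obj = transform(np.eye(3), [0.0, 0.1, 0.0])  # pattern -> object (via CMM)

# Ground-truth tracker-camera -> tracked-object relation via the chain:
T_trk_obj_gt = np.linalg.inv(T_ref_trk) @ T_ref_cal @ T_cal_obj

# Accuracy: deviation of an actual tracker result from ground truth.
T_trk_obj_meas = transform(np.eye(3), [0.41, 0.31, 1.0])
err = np.linalg.norm(T_trk_obj_meas[:3, 3] - T_trk_obj_gt[:3, 3])
```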
Moving-object position/attitude estimation apparatus and moving-object position/attitude estimation method
A moving-object position/attitude estimation apparatus includes: an image-capturing unit configured to acquire a captured image; a comparative image acquiring unit configured to acquire a comparative image viewed from a predetermined position at a predetermined attitude angle; a likelihood setting unit configured to compare the captured image with the comparative image and to assign a high attitude angle likelihood and a high position likelihood to the comparative image when the two images closely match; and a moving-object position/attitude estimation unit configured to estimate the attitude angle of the moving object based on the attitude angle of the comparative image assigned the high attitude angle likelihood and to estimate the position of the moving object based on the position of the comparative image assigned the high position likelihood.
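The likelihood assignment and estimation steps can be caricatured as picking the best-matching comparative image. The Gaussian-of-mean-squared-error likelihood is an assumption for illustration; the patent does not specify the comparison metric:

```python
import numpy as np

def estimate_pose(captured, candidates):
    """Assign each comparative image a likelihood from its similarity to
    the captured image and return the best candidate's position and
    attitude (a simplified stand-in for the described likelihood units)."""
    def likelihood(img):
        return np.exp(-np.mean((captured - img) ** 2))  # higher = closer match
    best = max(candidates, key=lambda c: likelihood(c["image"]))
    return best["position"], best["attitude"]

captured = np.ones((2, 2))
candidates = [
    {"image": np.ones((2, 2)),  "position": (1.0, 2.0), "attitude": 10.0},
    {"image": np.zeros((2, 2)), "position": (0.0, 0.0), "attitude": 0.0},
]
pos, att = estimate_pose(captured, candidates)
```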
Camera systems for motion capture
Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.
Object detecting apparatus, image capturing apparatus, method for controlling object detecting apparatus, and storage medium
An object detecting apparatus includes a detecting unit configured to detect an area of a predetermined object from an image, a calculating unit configured to calculate an evaluation value on the area detected by the detecting unit, and a control unit configured, when the evaluation value satisfies a predetermined criterion, to determine that the area is the predetermined object. The predetermined criterion is set depending on an amount of distortion of an image displayed on a display unit.
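A distortion-dependent criterion can be sketched as a threshold that loosens with distortion. The linear relaxation and the constants below are assumptions; the abstract only states that the criterion depends on the amount of distortion in the displayed image:

```python
def is_object(score, distortion):
    """Hypothetical criterion: relax the detection threshold as the
    displayed image becomes more distorted (e.g. toward the edge of a
    wide-angle image, where detector scores tend to drop)."""
    threshold = 0.8 - 0.3 * min(distortion, 1.0)  # assumed linear relaxation
    return score >= threshold
```

With this rule, a borderline score of 0.6 is accepted in a heavily distorted region but rejected in an undistorted one.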
Pedestrian path predictions
Systems and techniques for pedestrian path predictions are disclosed herein. For example, an environment, features of the environment, and pedestrians within the environment may be identified. Models for the pedestrians may be generated based on features of the environment. A model may be indicative of goals of a corresponding pedestrian and predicted paths for the corresponding pedestrian. Pedestrian path predictions for the pedestrians may be determined based on corresponding predicted paths. A pedestrian path prediction may be indicative of a probability that the corresponding pedestrian will travel a corresponding predicted path. Pedestrian path predictions may be rendered for the predicted paths, such as using different colors or different display aspects, thereby enabling a driver of a vehicle to be presented with information indicative of where a pedestrian is likely to travel.
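Turning the per-path predictions into "a probability that the corresponding pedestrian will travel a corresponding predicted path" can be sketched with a softmax over path costs. The cost-based softmax is one plausible realization, not the patent's stated method:

```python
import math

def path_probabilities(path_costs):
    """Map per-path costs (e.g. detour distance to a goal) to a
    probability distribution: lower cost -> higher probability."""
    weights = [math.exp(-c) for c in path_costs]
    total = sum(weights)
    return [w / total for w in weights]

# Three candidate paths toward the pedestrian's inferred goals:
probs = path_probabilities([1.0, 2.0, 3.0])
```

Each probability could then drive a display aspect, e.g. rendering likelier paths in a more saturated color.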
System and method for tracking
Systems and methods are provided for generating calibration information for a media projector. The method includes tracking at least the position of a tracking apparatus that can be positioned on a surface. The media projector shines a test spot on the surface, and the test spot corresponds to a known pixel coordinate of the media projector. The system includes a computing device in communication with at least two cameras, each of which is able to capture images of one or more light sources attached to an object. The computing device determines the object's position by comparing images of the light sources and generates an output comprising the real-world position of the object. This real-world position is mapped to the known pixel coordinate of the media projector.
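The two-camera position step can be illustrated with the simplest case, rectified-stereo depth from disparity, followed by pairing the recovered real-world point with the projector pixel that produced the test spot. The rectified-stereo simplification and all numbers are assumptions; the actual system may use full triangulation:

```python
def stereo_depth(focal_px, baseline_m, u_left, u_right):
    """Depth of a light source from its horizontal image coordinates in
    two rectified cameras -- an illustrative stand-in for 'comparing
    images of the light sources'."""
    disparity = u_left - u_right
    return focal_px * baseline_m / disparity

depth = stereo_depth(focal_px=700.0, baseline_m=0.12,
                     u_left=420.0, u_right=400.0)

# Calibration pairing: the real-world hit point of the test spot is
# associated with the known projector pixel that produced it.
calibration = {(960, 540): (1.0, 0.5, depth)}
```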
Tracking device for portable astrophotography of the night sky
A tracking device for use when performing astrophotography comprises a guider camera and at least one tilt stage, with the topmost of the tilt stages arranged to support an astrophotography camera and the guider camera. Actuators are coupled to the tilt stages such that the astrophotography and guider cameras can be tilted about three axes. The guider camera and actuators are connected to electronics which include a computer programmed to operate in a calibration mode and a tracking mode. In calibration mode, a calibration procedure determines the effect of each actuator on the positions of at least two objects within the field-of-view (FOV) of the guider camera. In tracking mode, the actuators are operated as needed to maintain the positions of the at least two objects constant within that FOV.
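The two modes map naturally onto a linear least-squares correction: calibration measures how each actuator moves the two guide objects (a Jacobian), and tracking solves for the actuator commands that cancel the observed drift. The linear model and the numbers below are assumptions for illustration:

```python
import numpy as np

# Calibration mode (sketch): per-unit effect of each of 3 actuators on the
# pixel positions of two guide objects (2 objects x 2 axes = 4 rows).
# Values are made up for illustration.
J = np.array([[1.0, 0.0, 0.2],
              [0.0, 1.0, 0.1],
              [0.9, 0.1, 0.2],
              [0.1, 1.1, 0.1]])

# Tracking mode: observed drift of the two objects (pixels); solve for the
# actuator commands that cancel it in the least-squares sense.
drift = np.array([0.5, -0.3, 0.45, -0.25])
commands, *_ = np.linalg.lstsq(J, -drift, rcond=None)
corrected = drift + J @ commands  # residual drift after the correction
```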
Composition control device, composition control method, and program
Operability is improved for operations related to composition adjustment.
Composition designation operation information is acquired, which is operation information designating a composition of an imaging device and includes information of a designated position on a screen displaying a captured image of the imaging device. The imaging range of the imaging device is then controlled to adjust the composition on the basis of subject information corresponding to the designated position. Thus, the adjustment to the target composition is performed on the basis of the subject information corresponding to the position the user designates on the screen displaying the captured image.
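The adjustment step can be sketched as a pan/tilt command that recentres the subject found at the designated position. The subject-centred target composition and the pixel-space gain are assumptions; the abstract only says the imaging range is adjusted from subject information at the designated position:

```python
def composition_command(subject_center, frame_center, gain=1.0):
    """Pan/tilt command (in pixels) that moves the subject detected at
    the user's designated position toward the frame centre.

    Assumed target composition: subject centred in the frame."""
    dx = frame_center[0] - subject_center[0]
    dy = frame_center[1] - subject_center[1]
    return gain * dx, gain * dy

# Subject detected at the tapped position (100, 80) in a 640x480 frame:
pan, tilt = composition_command(subject_center=(100, 80),
                                frame_center=(320, 240))
```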