Patent classifications
H04N5/222
Displaying a unified desktop across connected devices
Embodiments provide for a handheld device with a unified desktop for integrating the functionality of the handheld device with a larger computer system. When connected to a peripheral display and/or a display of the larger computer system, the handheld device provides a unified desktop displayed across the screen(s) of the handheld device and the peripheral display. The unified desktop unifies the functionality provided by the larger computer system and the handheld functionality, e.g., communication applications (e.g., phone, SMS, MMS). A user can seamlessly interact with applications, e.g., open, move, close, receive notifications, on the unified desktop whether the applications are displayed on the screens of the handheld device, or the peripheral display of the larger computer system.
System and method for adjusting an image for a vehicle mounted camera
A system and method provides an image that adjusts in response to at least one vehicle mounted sensor.
Tracking objects using sensor rotation
An example apparatus for tracking objects includes a controller to receive a depth map, a focus distance, and an image frame of an object to be tracked. The controller is to detect the object to be tracked in the image frame and generate an object position for the object in the image frame. The controller is to calculate a deflection angle for the object based on the depth map, the focus distance, and the object position. The controller is to further rotate an imaging sensor based on the deflection angle.
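The deflection-angle step can be illustrated with a minimal pinhole-model sketch: the object's pixel offset from the image center is back-projected to a lateral displacement using the depth map, then converted to pan/tilt angles. All parameter names and the geometry are assumptions for illustration, not taken from the patent.

```python
import math

def deflection_angle(depth_map, focus_distance, obj_x, obj_y,
                     cx, cy, pixel_pitch):
    """Estimate pan/tilt angles (radians) that would center the object.

    Assumes a pinhole model with the focus distance standing in for the
    focal length: the pixel offset from the principal point (cx, cy),
    scaled by pixel pitch and back-projected through the depth at the
    object, gives a lateral displacement; atan2 against depth gives the
    rotation needed.
    """
    depth = depth_map[obj_y][obj_x]  # metric depth at the object position
    lateral_x = (obj_x - cx) * pixel_pitch * depth / focus_distance
    lateral_y = (obj_y - cy) * pixel_pitch * depth / focus_distance
    pan = math.atan2(lateral_x, depth)
    tilt = math.atan2(lateral_y, depth)
    return pan, tilt
```

The returned angles could then drive the imaging sensor's rotation stage; an object already at the principal point yields zero deflection.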
COMPUTER-IMPLEMENTED METHOD FOR AUTOMATED DETECTION OF A MOVING AREA OF INTEREST IN A VIDEO STREAM OF FIELD SPORTS WITH A COMMON OBJECT OF INTEREST
A computer-implemented method is provided for automated detection of a moving area of interest, such as a ball, in a video stream of field sports with a common object of interest, the area encompassing a plurality of players and the object of interest. Images of a sports ground are captured by means of a video camera system producing a video stream which is digitally processed to continuously identify a detected concentration of action within the boundaries of the field. The concentration of action in the field is determined based on an estimated position of the object of interest in at least one frame of the video stream. Players' postures or orientations may be detected to improve the accuracy of the determination of the object of interest.
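One simple way to realize the "concentration of action" estimate is a weighted centroid of detected player positions, usable as a proxy for the ball position when the ball is not directly detected. The weighting scheme (e.g., emphasizing players oriented toward the play) is an assumption; the patent does not specify this formula.

```python
def action_concentration(player_positions, weights=None):
    """Estimate the concentration of action on the field as a (weighted)
    centroid of detected player positions.

    player_positions: list of (x, y) field coordinates per detected player.
    weights: optional per-player weights, e.g. derived from posture or
    orientation cues; defaults to uniform weighting.
    """
    if weights is None:
        weights = [1.0] * len(player_positions)
    total = sum(weights)
    cx = sum(w * x for w, (x, _) in zip(weights, player_positions)) / total
    cy = sum(w * y for w, (_, y) in zip(weights, player_positions)) / total
    return cx, cy
```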
Method and apparatus of depth detection, and computer-readable storage medium
Spherical or hemispherical non-visible light depth detection includes projecting a hemispherical non-visible light static structured light pattern, in response to projecting the hemispherical non-visible light static structured light pattern, detecting non-visible light, determining three-dimensional depth information based on the detected non-visible light and the projected hemispherical non-visible light static structured light pattern, and outputting the three-dimensional depth information.
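The depth-determination step in structured-light systems generally reduces to triangulation: once a projected pattern feature is matched between the projector and the detector, depth follows from the stereo relation Z = baseline x focal length / disparity. The sketch below shows this generic relation only, not the hemispherical-pattern specifics of the abstract.

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Triangulate depth (meters) for one matched pattern feature.

    baseline_m: projector-to-sensor baseline in meters.
    focal_px: focal length expressed in pixels.
    disparity_px: horizontal shift of the matched feature in pixels.
    """
    if disparity_px <= 0:
        return float('inf')  # unmatched feature or feature at infinity
    return baseline_m * focal_px / disparity_px
```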
Camera signal monitoring apparatus and method
The present disclosure provides a camera signal monitoring apparatus and method, the camera signal monitoring apparatus including a processor, which includes a vehicle information input unit for receiving a vehicle speed and a yaw rate signal of a vehicle; a camera information input unit for receiving a camera signal including a vehicle speed and a yaw rate signal from a vehicle front camera; and a monitoring unit for calculating a vehicle driving trajectory using the vehicle speed and the yaw rate signal of the vehicle input from the vehicle information input unit, calculating a reference curvature value based on the calculated vehicle driving trajectory, and determining a reliability by comparing the calculated reference curvature value with a curvature value calculated from the camera signal input from the camera information input unit.
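The monitoring step can be sketched with the steady-state vehicle-dynamics relation curvature = yaw_rate / speed (rad/s over m/s gives 1/m): derive a reference curvature from the vehicle's own dynamics and compare it with the camera-reported curvature. The tolerance value and function shape are assumptions, not taken from the disclosure.

```python
def camera_signal_reliable(vehicle_speed, yaw_rate, camera_curvature,
                           tol=0.005):
    """Compare a dynamics-derived reference curvature (1/m) against the
    curvature reported via the camera signal and decide reliability.

    vehicle_speed: m/s from the vehicle information input.
    yaw_rate: rad/s from the vehicle information input.
    camera_curvature: 1/m derived from the camera signal.
    tol: assumed acceptance threshold in 1/m.
    """
    if vehicle_speed <= 0:
        return None  # curvature is undefined when stationary
    reference_curvature = yaw_rate / vehicle_speed
    return abs(reference_curvature - camera_curvature) <= tol
```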
IMAGE SENSOR MODULE, IMAGE PROCESSING SYSTEM, AND OPERATING METHOD OF IMAGE SENSOR MODULE
An image sensor module includes an image sensor configured to generate image data and memory including at least one memory bank storing the image data and a processor-in-memory (PIM) circuit, the PIM circuit including a plurality of processing elements. The memory is configured to read the image data from the memory bank; generate optical flow data and pattern density data using the plurality of processing elements, the optical flow data indicating time-sequential motion of at least one object included in the image data, and the pattern density data indicating a density of a pattern of the image data; and output the image data, the optical flow data, and the pattern density data.
Remote support system and method
A remote support communication system comprises a user terminal unit and an operator terminal unit. Said user terminal unit comprises a camera module configured to shoot video images of an object in order to acquire an ordered sequence of video frames forming a corresponding first video and a video transmission module configured to transmit the video frames to the operator terminal unit. Said operator terminal unit comprises a display and drawing module configured to display the video frames and allow an operator to generate digital graphic components to be superimposed on respective video frames; a graphic transmitter module configured to transmit said graphic components to the user terminal unit. The user terminal unit further comprises a graphic receiver module configured to receive, from the operator terminal unit, the digital graphic components; a display module configured to display a second video based on the video frames and the received digital graphic components; and a synchronization module configured to control the display module to display received digital graphic components superimposed on respective video frames.
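The synchronization module's role can be sketched as pairing each operator-drawn graphic with the exact frame it annotates, so the overlay appears on the frame the operator was viewing rather than on the live frame. The frame-ID keying and data shapes below are illustrative assumptions.

```python
def synchronize_overlays(frames, overlays):
    """Compose each video frame with the graphics drawn on that frame.

    frames: ordered list of (frame_id, image) tuples from the camera module.
    overlays: dict mapping frame_id -> list of graphic components received
    from the operator terminal unit. Frames with no graphics get an empty
    overlay list so playback order is preserved.
    """
    composed = []
    for frame_id, image in frames:
        graphics = overlays.get(frame_id, [])
        composed.append((frame_id, image, graphics))
    return composed
```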
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM
An image processing apparatus includes a processor and a memory built in or connected to the processor, wherein the processor acquires specific region information indicating a specific region designated in an imaging region image screen on which an imaging region image obtained by imaging an imaging region is displayed, and outputs a specific region processed image obtained by processing an image corresponding to the specific region indicated by the specific region information among a plurality of images obtained by imaging the imaging region.
Information processing apparatus and information processing method
The present technology relates to an information processing apparatus, an information processing method, and a program that make it possible to eliminate or minimize VR sickness while enhancing the feeling of immersion. On the basis of head posture of a user, a video generation section generates a video resulting from control of an angle of view of a virtual camera, the angle of view corresponding to a field of view of the user travelling in a virtual space. When the user is in an acceleration state in the virtual space, the video generation section changes the angle of view of the virtual camera from a first angle of view at the time when the user is in a non-acceleration state to a second angle of view based on an acceleration direction of the user. The present technology can be applied to, for example, an HMD.
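The angle-of-view change can be sketched as narrowing the virtual camera's field of view while the user accelerates (a common VR-sickness mitigation) and restoring it when acceleration stops. The linear ramp, parameter names, and default values are assumptions for illustration, not taken from the abstract.

```python
def virtual_camera_fov(base_fov_deg, accel_magnitude, max_accel,
                       min_fov_deg=60.0):
    """Return the virtual camera's angle of view (degrees) for the current
    acceleration state.

    base_fov_deg: the first angle of view, used when not accelerating.
    min_fov_deg: the narrowest (second) angle of view, reached at max_accel.
    The angle is linearly interpolated between the two as acceleration grows.
    """
    t = min(max(accel_magnitude / max_accel, 0.0), 1.0)  # clamp to [0, 1]
    return base_fov_deg + (min_fov_deg - base_fov_deg) * t
```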