Patent classifications
H04N23/667
INFORMATION PROCESSING DEVICE, PROGRAM, AND METHOD
There is a demand for an information processing device capable of determining the position where a user attempts to perform an operation on its screen, without impairing usability. The present disclosure therefore proposes an information processing device including: an acceleration sensor unit that detects a tilt of the information processing device; a display unit; a gyroscope sensor unit that measures angular velocity of the information processing device; and a determination unit that, in response to detection of a first tilt of the information processing device, determines the position where the user attempts to perform an operation on the display unit on the basis of a first gyro waveform obtained from the angular velocity and either a second gyro waveform measured in advance at each operation position of the information processing device or a learning model generated by using the second gyro waveform as training data. In accordance with the determined position, the display unit further displays the screen while reducing it by a predetermined amount or moving it by a predetermined amount in a predetermined direction.
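The waveform-matching step described above can be sketched as follows. This is an illustrative assumption, not the patent's actual algorithm: the reference waveforms, position labels, and sum-of-squared-differences metric are hypothetical stand-ins for the pre-measured second gyro waveforms or the learning model.

```python
# Hypothetical sketch: match an observed gyro waveform against
# pre-measured reference waveforms, one per screen operation position.

def match_operation_position(gyro_waveform, reference_waveforms):
    """Return the operation position whose pre-measured gyro waveform
    is closest (sum of squared differences) to the observed one."""
    best_pos, best_dist = None, float("inf")
    for position, reference in reference_waveforms.items():
        dist = sum((a - b) ** 2 for a, b in zip(gyro_waveform, reference))
        if dist < best_dist:
            best_pos, best_dist = position, dist
    return best_pos
```

The device could then shrink or shift the displayed screen toward the returned position so the user's thumb can reach it.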
IMAGE CAPTURING METHOD AND APPARATUS, ELECTRONIC PHOTOGRAPHY DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
An image capturing method and apparatus, an electronic photography device, and a computer-readable storage medium are provided. The image capturing method includes: in a snapshot mode, performing image acquisition at a first frame rate and buffering an acquired first captured image; in a live mode, acquiring a second captured image at a second frame rate, where the second frame rate is less than the first frame rate; and processing the buffered first captured image and the second captured image at the second frame rate.
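A minimal sketch of the two-rate scheme, assuming concrete frame rates, a fixed buffer depth, and a simple every-Nth-frame rule for the live rate (none of which are specified in the abstract):

```python
from collections import deque

# Illustrative sketch only: frame rates, buffer depth, and the
# frame-selection rule are assumptions for the example.

class SnapshotBuffer:
    def __init__(self, first_fps=120, second_fps=30, depth=8):
        self.step = first_fps // second_fps  # frames between live-mode picks
        self.buffer = deque(maxlen=depth)    # recent high-rate frames

    def on_frame(self, frame):
        """Called at the first (high) frame rate in snapshot mode."""
        self.buffer.append(frame)

    def live_frames(self):
        """Return every step-th buffered frame, i.e. frames at the
        second (lower) rate, for live-mode processing."""
        return list(self.buffer)[:: self.step]
```

Buffering at the high rate while processing at the low rate is what lets a snapshot reach back in time without the cost of full-rate processing.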
Communication apparatus, control method and storage medium
A communication apparatus includes a wireless communication unit configured to wirelessly communicate with an external apparatus, a speaker, a memory, and a control unit. When an operation is executed to change the communication apparatus to a predetermined mode in which no sound is output from the speaker, the control unit performs control such that the communication apparatus is changed to the predetermined mode. If identification information about the external apparatus is stored in the memory, the control unit enables a wireless communication function in response to the execution of that operation; if the identification information about the external apparatus is not stored in the memory, the wireless communication function is not enabled even when the operation is executed.
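The conditional described above reduces to a short control routine. The dictionary fields and the `"silent"` mode name are hypothetical; the abstract only says "a predetermined mode in which a sound is not output":

```python
# Minimal sketch of the described control flow; field names are
# hypothetical assumptions for the example.

def change_to_silent_mode(device):
    """Switch to the no-sound mode; enable wireless only when the
    external apparatus's identification info is already stored."""
    device["mode"] = "silent"                # sound is not output
    if device.get("paired_id") is not None:  # identification info stored?
        device["wireless_enabled"] = True
    return device
```

The point of the gate is that wireless power is spent only when there is a known peer worth connecting to.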
Region of interest-based resolution normalization
Normalized resolutions are determined for first and second regions of interest of an initial video stream captured by a video capture device located within a physical space. The first region of interest is associated with a first conference participant within the physical space and the second region of interest is associated with a second conference participant within the physical space. Instructions are transmitted to the video capture device to cause the video capture device to capture, at the normalized resolutions, a first video stream associated with the first region of interest and a second video stream associated with the second region of interest. The first and second video streams make the sizes and quality levels of the first and second conference participants consistent within separate user interface tiles of a conferencing software user interface to which the first and second video streams are output.
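One plausible normalization rule, shown purely as an assumption (the abstract does not state how the normalized resolutions are computed), is to scale both regions of interest to a shared target height while preserving aspect ratio:

```python
# Sketch under assumptions: scaling both ROIs to a common target height
# is an illustrative normalization rule, not the claimed method.

def normalize_resolutions(roi_a, roi_b, target_height=360):
    """roi_* are (width, height) in pixels. Return per-ROI capture
    resolutions giving both participants the same height in their tiles."""
    out = []
    for w, h in (roi_a, roi_b):
        scale = target_height / h
        out.append((round(w * scale), target_height))
    return out
```

With equal heights, a participant sitting far from the camera no longer appears smaller or blurrier than one sitting close to it.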
Systems and methods for exposure control
An imaging control method is provided. The method may include obtaining a current image generated based on an exposure parameter, wherein the current image includes a plurality of pixels; determining a plurality of target pixels or target pixel groups from at least a portion of the plurality of pixels; determining a statistic representation based on the target pixels or target pixel groups; determining a characteristic feature based on the statistic representation; and updating the exposure parameter, based on the characteristic feature, to generate an updated image.
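The statistic-then-update loop can be sketched as below. The choice of mean luminance as the statistic representation and a proportional correction as the update rule are assumptions for illustration; the abstract leaves both unspecified:

```python
# Hedged sketch: mean luminance as the "statistic representation" and a
# proportional update rule are assumptions, not the claimed method.

def update_exposure(exposure, target_pixels, goal=128, gain=0.01):
    """Nudge the exposure parameter toward a goal mean luminance
    computed over the selected target pixels."""
    mean_luma = sum(target_pixels) / len(target_pixels)  # statistic
    error = goal - mean_luma                             # characteristic feature
    return exposure * (1 + gain * error / goal)
```

Running this on each frame converges the scene's target region toward mid-gray exposure.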
CAMERA CONTROL USING SYSTEM SENSOR DATA
A method for using cameras in an augmented reality headset is provided. The method includes receiving a signal from a sensor mounted on a headset worn by a user, the signal being indicative of a user intention for capturing an image. The method also includes identifying the user intention for capturing the image based on a model that classifies the signal from the sensor according to the user intention, selecting a first image capturing device in the headset based on a specification of the first image capturing device and the user intention for capturing the image, and capturing the image with the first image capturing device. An augmented reality headset, a memory storing instructions, and a processor that executes the instructions to cause the augmented reality headset to perform the above method are also provided.
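A toy version of the classify-then-select step is sketched below. The threshold classifier, the intention labels, and the camera specifications are all hypothetical; the abstract only requires a model that maps the sensor signal to an intention and a selection based on device specifications:

```python
# Illustrative sketch: the threshold "model" and the camera spec fields
# are hypothetical stand-ins for the trained classifier described.

def select_camera(signal_amplitude, cameras):
    """Classify the sensor signal into a user intention, then pick the
    camera whose specification suits that intention."""
    intention = "wide_scene" if signal_amplitude > 0.5 else "close_up"
    for cam in cameras:
        if cam["suits"] == intention:
            return cam["name"]
    return None
```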
CAPSULE ENDOSCOPE APPARATUS AND METHOD OF SUPPORTING LESION DIAGNOSIS
Provided are a capsule endoscope apparatus for supporting a lesion diagnosis and a lesion diagnosis supporting method using the same. The capsule endoscope apparatus for supporting lesion diagnosis includes an imaging unit configured to capture one or more images of the inside of a body, a control unit configured to detect a suspected lesion region in the images and perform a precision diagnosis procedure when a suspected lesion region corresponding to a value equal to or greater than a certain threshold is detected, an image processing unit configured to process the images in the precision diagnosis procedure, and a communication module configured to transmit processed images to, and receive them from, another capsule endoscope apparatus or a terminal by using a wireless communication method.
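The thresholded trigger reduces to a simple per-frame decision. The score scale, threshold value, and return labels below are assumptions for illustration:

```python
# Minimal sketch; the lesion score scale and the 0.7 threshold are
# assumptions, not values from the disclosure.

def handle_frame(lesion_score, threshold=0.7):
    """Enter the precision diagnosis procedure only when the suspected
    lesion region's score meets or exceeds the threshold."""
    if lesion_score >= threshold:
        return "precision_diagnosis"  # process and transmit images
    return "continue_imaging"
```

Gating the expensive processing and transmission on a threshold is what keeps the capsule's power budget manageable during normal imaging.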
Image pickup control device, image pickup device, control method for image pickup device, and non-transitory computer-readable storage medium
An image pickup control device, comprising: a first obtainment unit configured to obtain a picked-up image picked up by an image pickup unit; a display control unit configured to display the picked-up image on a display; a detection unit configured to detect a viewpoint region, which is a region of the display viewed by a user; a second obtainment unit configured to obtain a feature amount relating to the picked-up image; and a control unit configured to switch, on the basis of the viewpoint region and the feature amount, between a first mode, in which a focus of the image pickup unit is controlled such that a subject displayed in the viewpoint region is brought into focus, and a second mode, in which control is executed such that the focus is not changed.
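The mode switch can be sketched as below. Reducing the "feature amount" to a single contrast value, and the rectangle-containment test for the viewpoint region, are assumptions; the abstract leaves the actual criterion open:

```python
# Sketch with assumed inputs: the feature amount is reduced here to one
# contrast value, and the switching criterion is hypothetical.

def choose_focus_mode(viewpoint, subject_region, contrast, min_contrast=0.2):
    """First mode: focus on the gazed subject when the viewpoint lies
    inside the subject region and the feature amount is reliable.
    Second mode: hold focus unchanged otherwise."""
    x, y = viewpoint
    x0, y0, x1, y1 = subject_region
    gaze_on_subject = x0 <= x <= x1 and y0 <= y <= y1
    if gaze_on_subject and contrast >= min_contrast:
        return "focus_on_gaze"   # first mode
    return "hold_focus"          # second mode
```

Holding focus when the feature amount is weak prevents the lens from hunting every time the user's gaze drifts over a featureless background.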
Image processing system and method thereof for generating projection images based on inward or outward multiple-lens camera
An image processing system is disclosed, comprising: an M-lens camera, a compensation device and a correspondence generator. The M-lens camera generates M lens images. The compensation device generates a projection image according to a first vertex list and the M lens images. The correspondence generator is configured to conduct calibration for vertices to define vertex mappings, horizontally and vertically scan each lens image to determine texture coordinates of its image center, determine texture coordinates of control points according to the vertex mappings, with P1 control points in each overlap region in the projection image, and determine two adjacent control points and a blending weight for each vertex in each lens image according to the texture coordinates of the control points and the image center in each lens image to generate the first vertex list, where M>=2.
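A per-vertex blending weight between two adjacent control points could be computed as below. The linear interpolation along one texture axis is an illustrative assumption; the disclosure does not specify the weighting function:

```python
# Illustrative sketch: linear weighting between two adjacent control
# points along one texture axis, clamped to [0, 1], is an assumption.

def blend_weight(vertex_x, cp_left_x, cp_right_x):
    """Return the interpolation weight of the right control point for a
    vertex lying between two adjacent control points."""
    span = cp_right_x - cp_left_x
    w = (vertex_x - cp_left_x) / span
    return max(0.0, min(1.0, w))
```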
Using an image sensor for always-on application within a mobile device
A mobile device includes an application processor and an image sensor. The application processor includes an imaging subsystem configured to process high resolution image data through a first interface and a sensor hub configured to process sensor data through a second interface. The image sensor operates in one of first and second modes. During the first mode, the image sensor is configured to capture the high resolution image data in response to a request from the imaging subsystem, and the imaging subsystem is configured to access the high resolution image data using the first interface for performing a first operation. During the second mode, the image sensor is configured to capture low resolution image data, and the sensor hub is configured to access the low resolution image data using the second interface for performing a second operation.
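The two-mode routing can be sketched as a small dispatch function. The mode numbers, interface names, and requester labels are assumptions chosen to mirror the description:

```python
# Hedged sketch of the two operating modes; names are assumptions.

def capture(mode, request_from):
    """Route a capture: high-resolution frames go to the imaging
    subsystem over the first interface (mode 1); low-resolution
    always-on frames go to the sensor hub over the second (mode 2)."""
    if mode == 1 and request_from == "imaging_subsystem":
        return ("high_res", "first_interface")
    if mode == 2:
        return ("low_res", "second_interface")
    return None
```

Keeping the always-on path on its own low-resolution interface is what lets the power-hungry imaging subsystem stay asleep between explicit capture requests.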