Patent classifications
H04N5/265
Methods and apparatus for use with multiple optical chains
Methods and apparatus for supporting zoom operations using a plurality of optical chain modules, e.g., camera modules, are described. Switching between use of groups of optical chains with different focal lengths is used to support zoom operations. Digital zoom is used in some cases to support zoom levels between the zoom levels of different optical chain groups, or between the discrete focal lengths to which optical chains may be switched. In some embodiments optical chains have adjustable focal lengths and are switched between different focal lengths. In other embodiments optical chains have fixed focal lengths, with different optical chain groups corresponding to different fixed focal lengths. Composite images are generated from images captured by multiple optical chains of the same group and/or different groups. The composite image is generated in accordance with a user zoom control setting. Individual composite images and/or a composite video sequence may be generated.
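The selection logic the abstract describes can be sketched as follows: pick the optical chain group whose focal length is closest below the requested zoom level, then bridge the remaining gap digitally. This is a minimal illustration, not the patented method; the function name, the millimeter focal lengths, and the clamp-to-widest fallback are all assumptions.

```python
def select_chain_and_digital_zoom(requested_focal_mm, chain_focal_lengths_mm):
    """Pick the optical chain group with the largest focal length that does
    not exceed the requested zoom level, then make up the remaining
    difference with a digital zoom (crop) factor."""
    candidates = [f for f in chain_focal_lengths_mm if f <= requested_focal_mm]
    if not candidates:
        # Request is wider than the widest chain: use the widest chain and
        # clamp the digital factor to 1.0 (assumed fallback behavior).
        return min(chain_focal_lengths_mm), 1.0
    optical = max(candidates)
    return optical, requested_focal_mm / optical

# A hypothetical camera with 35 mm, 70 mm, and 140 mm chain groups:
print(select_chain_and_digital_zoom(50.0, [35.0, 70.0, 140.0]))
```

A 50 mm request thus selects the 35 mm group and applies roughly 1.43x digital zoom, matching the abstract's use of digital zoom for levels between the discrete group focal lengths.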
Image capturing control apparatus capable of displaying OSD and image capturing control method
An image capturing control apparatus that enables a user to recognize the transparency of an OSD used for recording. A system controller sets the transparency of an OSD superimposed on a captured LV image and sets whether or not to record the captured image, combined with the OSD, as an image file. The OSD is displayed superimposed on the LV image at the display transparency, regardless of the setting concerning recording of the image file. In a case where it is set to record the LV image combined with the OSD, the OSD is displayed, in response to a specific operation, superimposed on the LV image at the recording transparency, regardless of the display transparency.
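The two-transparency behavior can be illustrated with simple per-pixel alpha blending: one transparency for on-screen display and a second, previewed on demand, for recording. This is a sketch under assumptions; the transparency values and the convention that transparency 1.0 means a fully invisible OSD are invented for illustration.

```python
def blend(lv_pixel, osd_pixel, transparency):
    """Alpha-blend one OSD pixel over one live-view (LV) pixel.
    transparency = 1.0 means the OSD is fully transparent (LV only);
    transparency = 0.0 means the OSD fully covers the LV pixel."""
    alpha = 1.0 - transparency
    return tuple(round((1 - alpha) * lv + alpha * osd)
                 for lv, osd in zip(lv_pixel, osd_pixel))

DISPLAY_TRANSPARENCY = 0.3  # hypothetical transparency of OSD for display
RECORD_TRANSPARENCY = 0.7   # hypothetical transparency of OSD for recording

def compose(lv_pixel, osd_pixel, preview_record_look):
    """Normally blend with the display transparency; while the user holds
    the specific preview operation, blend with the recording transparency
    so the user can recognize how the recorded file will look."""
    t = RECORD_TRANSPARENCY if preview_record_look else DISPLAY_TRANSPARENCY
    return blend(lv_pixel, osd_pixel, t)
```

The design point the abstract makes is that the two transparencies are independent settings, so the preview path must bypass the display setting entirely rather than scale it.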
IMAGE PROCESSING METHOD, PROGRAM, AND IMAGE PROCESSING DEVICE
Image processing includes obtaining image I[0,0] of a picture captured by an image capture means, in a state where light is irradiated onto the picture from a light source at a reference position relative to a normal line of the picture; obtaining image I[α1,0] of the picture captured by the image capture means, in a state where the light is irradiated onto the picture from the light source at a position inclined from the reference position by an angle α1 in a first direction; obtaining image I[0,β1] of the picture captured by the image capture means, in a state where the light is irradiated onto the picture from the light source at a position inclined from the reference position by an angle β1 in a second direction different from the first direction; creating a three-dimensional map of the picture using a set of images I[0,β1] and I[0,β2]; merging at least a part of each of image I[α1,0], image I[0,β1], and image I[0,β2] with image I[0,0]; and recording, as two-dimensional image data, the image subjected to the emphasizing process.
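The merging step can be loosely sketched as adding back, into the reference image, a fraction of the shading difference that each oblique-light capture reveals. This is not the patented emphasizing process, whose details are not given in the abstract; the `gain` parameter and the simple additive formula are assumptions made purely for illustration.

```python
def emphasize(i_ref, i_dir1, i_dir2, gain=0.5):
    """Merge two oblique-light images into the reference image by adding a
    fraction (gain, a made-up parameter) of the per-pixel shading
    difference each oblique image shows relative to the reference.
    Images are rows of grayscale values."""
    out = []
    for r_row, d1_row, d2_row in zip(i_ref, i_dir1, i_dir2):
        out.append([r + gain * ((d1 - r) + (d2 - r))
                    for r, d1, d2 in zip(r_row, d1_row, d2_row)])
    return out
```

With symmetric shading differences (one oblique image brighter, the other equally darker), the reference value is unchanged, while one-sided differences are emphasized.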
GEO-SPATIAL CONTEXT FOR FULL-MOTION VIDEO
A method and a system for generating a composite video feed for a geographical area are disclosed. A video of the geographical area, captured by a camera of an aerial platform, is received. The video includes metadata indicative of location information, which is used to identify the coordinates of the geographical area. An image that is adjacent to the geographical area is received from a geographical information system and is transformed according to the metadata. The coordinates of the geographical area are used to determine an area within the image. The video is embedded in that area by matching the area with the coordinates of the geographical area, where the edges of the video correspond to the boundaries of the area. A composite video feed, including the video embedded within the image, is generated, and a video player displays the composite video feed.
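The coordinate-matching step can be sketched as a mapping from longitude/latitude to pixel positions in a north-up basemap, followed by pasting the video frame into the matched area. This is a minimal illustration under assumptions: a linear (unrotated) geotransform, a frame already scaled to its footprint, and hypothetical function names.

```python
def geo_to_pixel(lon, lat, map_bounds, map_size):
    """Map a lon/lat point to pixel coordinates in a north-up basemap.
    map_bounds = (west, south, east, north); map_size = (width, height).
    Assumes a simple linear georeference with no rotation."""
    west, south, east, north = map_bounds
    w, h = map_size
    x = (lon - west) / (east - west) * w
    y = (north - lat) / (north - south) * h  # row 0 is the north edge
    return round(x), round(y)

def paste_frame(basemap, frame, frame_bounds, map_bounds):
    """Embed a video frame (rows of pixel values) into the basemap at the
    pixel area matching the frame's geographic footprint. For brevity the
    frame is assumed to be pre-scaled to that footprint; anchoring is at
    its north-west corner."""
    h, w = len(basemap), len(basemap[0])
    x0, y0 = geo_to_pixel(frame_bounds[0], frame_bounds[3], map_bounds, (w, h))
    for r, row in enumerate(frame):
        for c, value in enumerate(row):
            basemap[y0 + r][x0 + c] = value
    return basemap
```

In the full method this paste would run per frame of the feed, with the basemap transformed according to the video metadata so frame edges align with the area boundaries.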
METHOD FOR GENERATING A VIDEO COMPRISING BLINK DATA OF A USER VIEWING A SCENE
A method performed by a computer for generating a video comprising blink data of a user viewing a scene depicted as video data, where the blink data is overlaid on the video data. The method includes receiving sensor data. The sensor data includes at least the video data, including at least one video frame, and gaze tracking data at least indicative of viewed positions within the scene depicted by at least one video frame of the video data. The method includes processing the sensor data to generate blink data indicative of blink motion of at least one eye of the user. The method includes generating a video overlay by rendering the blink data, and generating an output video by mixing the video data and the video overlay.
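The processing and mixing steps can be sketched with a toy pipeline: derive a per-frame blink mask from the sensor data, render it as an overlay, and mix it into the frames. The eye-openness threshold criterion and the text-marker rendering are assumptions for illustration; real gaze trackers report blink or validity signals in their own formats.

```python
def detect_blinks(openness_per_frame, threshold=0.2):
    """Generate per-frame blink data from an assumed eye-openness signal:
    a frame counts as a blink when openness drops below threshold
    (a simple hypothetical criterion)."""
    return [openness < threshold for openness in openness_per_frame]

def mix_overlay(frames, blink_mask, marker="BLINK"):
    """Render the blink data as an overlay and mix it with the video data.
    Frames are stand-in strings here; a real renderer would draw into
    the pixel buffer of each frame."""
    return [frame + (f" [{marker}]" if blinking else "")
            for frame, blinking in zip(frames, blink_mask)]
```

Keeping detection and rendering as separate stages mirrors the abstract's split between generating blink data and generating the video overlay from it.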
IMAGE PROCESSING DEVICE, IMAGING DEVICE, METHOD FOR CONTROLLING IMAGE PROCESSING DEVICE, AND STORAGE MEDIUM
A digital camera includes: a display control unit that displays an image acquired by an imaging unit, together with setting values regarding the imaging, on a display unit during live-view display; and an image synthesis unit that synthesizes a plurality of images acquired by the imaging unit on the basis of the setting values. In an image synthesis mode in which a plurality of images are captured and synthesized, the display control unit displays two or more setting values from among the exposure time per image to be synthesized, the number of images to be synthesized, and the total exposure time of the images to be synthesized.
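The three displayed setting values are linked by total exposure = per-frame exposure × frame count, which is why showing any two determines the third. A minimal sketch of that relation and of a simple additive synthesis follows; the function names and the pixel-wise summation are assumptions, not the camera's actual synthesis algorithm.

```python
def synthesis_settings(per_frame_s=None, count=None, total_s=None):
    """Given any two of (per-frame exposure in seconds, frame count, total
    exposure in seconds), derive the third via total = per_frame * count."""
    if total_s is None:
        total_s = per_frame_s * count
    elif count is None:
        count = round(total_s / per_frame_s)
    elif per_frame_s is None:
        per_frame_s = total_s / count
    return per_frame_s, count, total_s

def additive_synthesis(frames):
    """Pixel-wise sum of captured frames (e.g. a light-trail style
    composite); an averaging mode would divide by len(frames)."""
    return [sum(pixels) for pixels in zip(*frames)]
```

For example, eight frames at 0.5 s each imply a 4 s total exposure, so a display showing per-frame time and frame count already tells the user the effective total.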