Patent classifications
H04N13/282
MULTIVIEW DISPLAY SYSTEM AND METHOD WITH ADAPTIVE BACKGROUND
An adaptive background multiview image display system and method provides improved multiview image quality. Systems and methods may involve generating crosstalk data that reduces crosstalk between a first view of a subject image and a second view of the subject image. The subject image may be a multiview image to be overlaid on a background image. A crosstalk violation may be detected in the subject image based on the crosstalk data. At least one of a color value or a brightness value of the background image is determined according to a degree of the crosstalk violation to generate the background image. The subject image may then be overlaid on the generated background image.
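The abstract's flow — estimate crosstalk between two views, detect a violation, then adapt the background in proportion to its severity — can be sketched as follows. The linear leakage model, the threshold, and all names here are illustrative assumptions, not the patent's actual algorithm.

```python
# Hypothetical sketch: measure crosstalk between two views of a subject
# image, flag violations above a threshold, and darken the background in
# proportion to the violation's severity. Coefficients are assumed values.

CROSSTALK_COEFF = 0.08   # assumed fraction of one view leaking into the other
THRESHOLD = 10.0         # assumed tolerable leakage, in brightness units

def crosstalk_violation(view_a, view_b):
    """Return per-pixel leakage exceeding the threshold (0 if within it)."""
    leakage = [CROSSTALK_COEFF * abs(a - b) for a, b in zip(view_a, view_b)]
    return [max(0.0, l - THRESHOLD) for l in leakage]

def adapt_background(background, violation, gain=0.5):
    """Reduce background brightness where the violation is severe."""
    return [max(0.0, bg - gain * v) for bg, v in zip(background, violation)]

# One scanline of brightness values for each view and the background.
view_a = [200.0, 180.0, 50.0]
view_b = [40.0, 170.0, 48.0]
background = [120.0, 120.0, 120.0]

violation = crosstalk_violation(view_a, view_b)
adapted = adapt_background(background, violation)
```

Only the first pixel, where the two views differ strongly, triggers a violation, so only that region of the background is darkened.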
360-DEGREE VIRTUAL-REALITY SYSTEM FOR DYNAMIC EVENTS
A dynamic event capturing and rendering system collects and aggregates video, audio, positional, and motion data to create a comprehensive user perspective 360-degree rendering of a field of play. An object associated with a user collects data that is stitched together and synchronized to provide post-event analysis and training. Through an interface, actions that occurred during an event can be recreated, providing the viewer with information on what the user associated with the object was experiencing, where the user was looking, and how certain actions may have changed the outcome. Using the collected data, a virtual reality environment is created that can be manipulated to present alternative courses of action and outcomes.
SYSTEM AND METHOD OF SPEAKER REIDENTIFICATION IN A MULTIPLE CAMERA SETTING CONFERENCE ROOM
In a multi-camera videoconferencing configuration, the locations of each camera are known. By referencing a known object visible to each camera, a 3D coordinate system is developed, with the position and angle of each camera being associated with that 3D coordinate system. The locations of the conference participants in the 3D coordinate system are determined for each camera. Sound source localization (SSL) from one camera, generally a central camera, is used to determine the speaker. The pose of the speaker is then determined. From the pose and the known locations of the cameras, the camera with the best frontal view of the speaker is determined. The 3D coordinates of the speaker are then used to direct the determined camera to frame the speaker. If the face of the speaker is not sufficiently visible, the next best camera view is determined, and the speaker framed from that camera view.
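The camera-selection step described above can be sketched in a simplified 2D form: given the speaker's facing direction (from pose estimation) and each camera's bearing toward the speaker in the shared coordinate system, pick the camera with the most frontal view, falling back to the next best if the face is not sufficiently visible. The scoring and fallback logic are assumptions for illustration.

```python
import math

def angular_diff(a, b):
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def rank_cameras(speaker_facing_deg, camera_bearings_deg):
    """Rank cameras by how frontal their view of the speaker is.

    A camera sees the speaker head-on when its bearing toward the speaker
    is opposite the speaker's facing direction (180 degrees apart).
    """
    scores = {
        cam: angular_diff(speaker_facing_deg, (bearing + 180.0) % 360.0)
        for cam, bearing in camera_bearings_deg.items()
    }
    return sorted(scores, key=scores.get)

def select_camera(speaker_facing_deg, camera_bearings_deg, face_visible):
    """Take the best-ranked camera whose view of the face is sufficient."""
    for cam in rank_cameras(speaker_facing_deg, camera_bearings_deg):
        if face_visible.get(cam, False):
            return cam
    return None

# Speaker faces 90 degrees; camera bearings toward the speaker (assumed).
bearings = {"center": 270.0, "left": 200.0, "right": 340.0}
visible = {"center": False, "left": True, "right": True}
best = select_camera(90.0, bearings, visible)
```

Here the central camera ranks best, but its view of the face is insufficient, so the selection falls through to the next best camera — mirroring the fallback behaviour described in the abstract.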
AUTONOMOUS WALKING VEHICLE
In one aspect, a vehicle is provided that includes i) a plurality of wheel-leg components and ii) a surround view imaging system for generating a surround view image of the vehicle. The plurality of wheel-leg components can operate to provide locomotion to the vehicle. The surround view image comprises a 360-degree, three-dimensional view of an environment surrounding the vehicle. The vehicle is configured to operate autonomously using the surround view image to control the locomotion of the plurality of wheel-leg components.
Information processing apparatus and information processing method
There is provided an information processing apparatus, information processing method, and recording medium that each allow a user to recognize a border of a virtual space without breaking the world view of the virtual space. The information processing apparatus includes a control unit that tracks a motion of a user to present an image of a virtual space to the user, and performs distance control to increase a distance between a viewpoint of the user and a border region in the virtual space while the user is inputting an operation that moves the viewpoint closer toward the border region. The border region is fixed at a specific position in the virtual space.
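The distance-control idea can be sketched in one dimension: while input moves the user toward the fixed border, the presented step is damped so the apparent distance never shrinks below a floor. The 1D model, the floor value, and the damping rule are illustrative assumptions, not the disclosed implementation.

```python
# Minimal 1D sketch of distance control toward a fixed virtual-space border.
# All constants and the damping formula are assumed for illustration.

BORDER = 100.0        # border fixed at a specific position in the space
MIN_DISTANCE = 10.0   # assumed closest allowed approach to the border

def apply_distance_control(viewpoint, forward_step):
    """Damp motion toward the border so the user never reaches it."""
    distance = BORDER - viewpoint
    if forward_step <= 0 or distance <= 0:
        return viewpoint + forward_step  # moving away: no damping
    # Damping grows as the user nears the border; the step shrinks to zero.
    damping = max(0.0, (distance - MIN_DISTANCE) / distance)
    return viewpoint + forward_step * damping

pos = 0.0
for _ in range(200):
    pos = apply_distance_control(pos, 5.0)
```

However many forward steps are input, the viewpoint only approaches the floor asymptotically, so the border is perceived without ever being crossed.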
Computer-generated image processing including volumetric scene reconstruction
An imagery processing system determines pixel color values for pixels of captured imagery from volumetric data, providing alternative pixel color values. A main imagery capture device, such as a camera, captures main imagery, such as still images and/or video sequences, of a live action scene. Alternative devices capture imagery of the live action scene, in various spectra and forms, and capture information related to pixel color values for multiple depths of a scene, which can be processed to provide reconstruction.
Method for obtaining a three-dimensional model of an inspection site
A method for obtaining a three-dimensional model of an inspection site, using a perception module, is disclosed. The perception module comprises a detection unit, e.g. comprising one or more cameras and/or a three-dimensional laser scanner, configured to obtain a three-dimensional image. At least one three-dimensional image is obtained by means of the detection unit. A three-dimensional model of surroundings of the perception module is created, based on the obtained three-dimensional image. The created three-dimensional model and a plan of the inspection site are compared and features of the created three-dimensional model and features of the plan of the inspection site are matched. A site-specific three-dimensional model of the inspection site is formed, based on the created three-dimensional model and the plan of the inspection site, and based on the comparison.
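The matching step — comparing features of the created model against features of the site plan — can be sketched with 2D corner points: align the two feature sets by a centroid translation, then pair each model feature with its nearest plan feature. The actual matching method is unspecified; this simple translation-plus-nearest-neighbour scheme is an assumption.

```python
import math

# Illustrative sketch: align scanned-model features (2D corner points) to
# site-plan features by centroid translation, then pair by nearest neighbour.

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def align_and_match(model_pts, plan_pts):
    """Translate model features onto the plan and pair each with its nearest."""
    mc, pc = centroid(model_pts), centroid(plan_pts)
    dx, dy = pc[0] - mc[0], pc[1] - mc[1]
    shifted = [(x + dx, y + dy) for x, y in model_pts]
    return [(sp, min(plan_pts, key=lambda q: math.dist(sp, q)))
            for sp in shifted]

# Scanned room corners offset from the plan by a constant translation.
model = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
plan = [(10.0, 10.0), (14.0, 10.0), (14.0, 13.0), (10.0, 13.0)]
matches = align_and_match(model, plan)
```

When the scan differs from the plan only by a translation, every shifted model corner lands exactly on its plan counterpart; residual distances in the pairs would then quantify how well the scan agrees with the plan, supporting the comparison step the abstract describes.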