Patent classifications
G06T7/40
Tire sensing and analysis system
The tire sensing and analysis system may comprise a measurement device and local application software. The measurement device may make contact with a tire of a vehicle such that the measurement device is positioned at a specific distance and orientation relative to the tire. The measurement device may capture multiple images of the tire using an RGB camera and a pair of infrared cameras. The local application software may analyze the images and may construct a 3D mesh describing the 3-dimensional contours of the tread. The local application software may determine a tread depth and may display status and warning messages on a display unit that is coupled to the measurement device. The measurements may be communicated to remote application software for additional analysis. As non-limiting examples, the remote application software may detect specific tire wear patterns and may transmit a report to share results of the analysis.
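The tread-depth determination described above could be sketched as follows, assuming the 3D mesh has first been rasterized into a height map over the tread surface; the percentile thresholds and the `estimate_tread_depth` name are illustrative, not part of the disclosure:

```python
import numpy as np

def estimate_tread_depth(height_map: np.ndarray,
                         rib_pct: float = 95.0,
                         groove_pct: float = 5.0) -> float:
    """Estimate tread depth from a height map rasterized off the 3D mesh.

    A robust baseline is taken at the rib tops (high percentile) and at
    the groove floor (low percentile); their difference is the tread
    depth, in the same units as the height map (e.g. millimetres).
    """
    rib_top = np.percentile(height_map, rib_pct)
    groove_floor = np.percentile(height_map, groove_pct)
    return float(rib_top - groove_floor)
```

Percentiles rather than min/max keep the estimate robust to mesh noise such as embedded stones or reconstruction spikes. A status or warning message could then be chosen by comparing the returned depth against a legal minimum.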
REPRESENTING VOLUMETRIC VIDEO IN SALIENCY VIDEO STREAMS
Saliency regions are identified in a global scene depicted by volumetric video. Saliency video streams that track the saliency regions are generated. Each saliency video stream tracks a respective saliency region. A saliency stream based representation of the volumetric video is generated to include the saliency video streams. The saliency stream based representation of the volumetric video is transmitted to a video streaming client.
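A minimal sketch of generating per-region saliency streams, assuming the saliency tracker has already produced one bounding box per region per frame (the function name and the box format are assumptions for illustration):

```python
import numpy as np

def build_saliency_streams(frames, tracked_boxes):
    """Build one cropped video stream per tracked saliency region.

    frames        : list of H x W x C arrays (the global scene).
    tracked_boxes : dict mapping region id -> list of (x, y, w, h)
                    boxes, one per frame, produced by the tracker.
    Returns a dict mapping region id -> list of cropped frames; the
    collection of these streams forms the saliency stream based
    representation sent to the streaming client.
    """
    streams = {}
    for region_id, boxes in tracked_boxes.items():
        streams[region_id] = [
            frame[y:y + h, x:x + w]
            for frame, (x, y, w, h) in zip(frames, boxes)
        ]
    return streams
```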
REPRESENTING VOLUMETRIC VIDEO IN SALIENCY VIDEO STREAMS
Saliency regions are identified in a global scene depicted by volumetric video. Saliency video streams that track the saliency regions are generated. Each saliency video stream tracks a respective saliency region. A saliency stream based representation of the volumetric video is generated to include the saliency video streams. The saliency stream based representation of the volumetric video is transmitted to a video streaming client.
Mid-air haptic textures
Described is a method for imparting the haptic dimension of texture to virtual and holographic objects using mid-air ultrasonic technology. A set of features is extracted from imported images using their associated displacement maps. Textural qualities such as micro and macro roughness are then computed and fed to a haptic mapping function, together with information about the dynamic motion of the user's hands during holographic touch. Mid-air haptic textures are then synthesized and projected onto the user's bare hands. Further, mid-air haptic technology enables tactile exploration of virtual objects in digital environments. When the rendered tactile texture differs from a user's prior and current expectations, user immersion can break. A study aims to mitigate this by integrating user expectations into the rendering algorithm of mid-air haptic textures, and establishes a relationship between visual and mid-air haptic roughness.
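One possible shape for the haptic mapping function is sketched below. The specific choices (standard deviation of the displacement map as micro roughness, peak-to-peak relief as macro roughness, and the linear modulation by hand speed) are assumptions for illustration, not the disclosed mapping:

```python
import numpy as np

def haptic_mapping(displacement_map: np.ndarray, hand_speed: float):
    """Map textural qualities and hand motion to mid-air haptic parameters.

    Micro roughness : fine-grained variation (std of the displacement map).
    Macro roughness : overall relief (peak-to-peak displacement).
    Hand speed scales the modulation frequency so faster exploration
    stays perceptually consistent with the visual texture.
    Returns (focal-point amplitude in [0, 1], modulation frequency in Hz).
    """
    micro = float(np.std(displacement_map))
    macro = float(np.ptp(displacement_map))
    amplitude = min(1.0, 0.5 + micro)
    modulation_hz = 40.0 + 100.0 * macro * max(hand_speed, 0.1)
    return amplitude, modulation_hz
```

A flat displacement map (zero micro and macro roughness) yields the baseline amplitude and frequency, so a smooth surface renders as a steady, unmodulated focal point.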
System, method, and computer program for capturing an image with correct skin tone exposure
A system and method are provided for capturing an image with correct skin tone exposure. In use, one or more faces having a threshold skin tone are detected within a scene. Next, based on the detected one or more faces, the scene is segmented into one or more face regions and one or more non-face regions. A model of the one or more faces is constructed based on a depth map and a texture map, the depth map including spatial data of the one or more faces and the texture map including surface characteristics of the one or more faces. One or more images of the scene are captured based on the model. Further, in response to the capture, the one or more face regions are processed to generate a final image.
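The exposure step for the segmented face regions could be sketched as below, assuming the faces have already been detected and segmented into a boolean mask; the target luminance value and function name are illustrative assumptions:

```python
import numpy as np

def expose_for_faces(image: np.ndarray, face_mask: np.ndarray,
                     target_luma: float = 0.55) -> np.ndarray:
    """Re-expose an image so the mean luminance of the face regions
    lands on a target value.

    image     : float array in [0, 1], H x W (luma) or H x W x C.
    face_mask : boolean H x W mask of the segmented face regions.
    The same gain is applied globally and clipped to the valid range,
    so non-face regions stay consistent with the re-exposed faces.
    """
    face_mean = float(image[face_mask].mean())
    gain = target_luma / max(face_mean, 1e-6)
    return np.clip(image * gain, 0.0, 1.0)
```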
SYSTEM AND METHOD FOR GENERATING A THREE-DIMENSIONAL IMAGE WHERE A POINT CLOUD IS GENERATED ACCORDING TO A SET OF COLOR IMAGES OF AN OBJECT
A method for generating a three-dimensional image includes capturing a set of color images of an object, generating a first point cloud according to at least the set of color images, generating a second point cloud by performing a filtering operation on the first point cloud according to the set of color images, selectively performing a pairing operation using the second point cloud and a target point cloud to generate pose information, and combining the first point cloud and the target point cloud according to the pose information to update the target point cloud and generate the three-dimensional image of the object. The set of color images carries color information of the object. The second point cloud correlates more strongly with rigid surfaces of the object than with non-rigid surfaces.
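The pairing operation that produces pose information resembles rigid point-cloud registration. A minimal sketch, assuming point correspondences between the filtered second point cloud and the target point cloud are already known, is the closed-form Kabsch/SVD solution (the function name is an assumption; a full pipeline would obtain correspondences iteratively, e.g. ICP-style):

```python
import numpy as np

def pairing_pose(source: np.ndarray, target: np.ndarray):
    """Estimate the rigid pose (R, t) aligning paired points so that
    target ~= source @ R.T + t  (Kabsch / SVD solution).

    source, target : N x 3 arrays of corresponding points.
    """
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t
```

Restricting the pairing to the rigid-surface-correlated second point cloud matters here: a rigid pose cannot model deforming (non-rigid) surfaces, so excluding them keeps the estimate stable.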
METHOD AND APPARATUS FOR PROVIDING USER-INTERACTIVE CUSTOMIZED INTERACTION FOR XR REAL OBJECT TRANSFORMATION
A method of providing a user-interactive customized interaction for a transformation of an extended reality (XR) real object includes segmenting a target object from an input image received through a camera, extracting a similar target object having the highest similarity to the target object from previously learned three-dimensional (3D) model data, extracting the texture of the target object through the camera and mapping the texture to the similar target object, transforming a shape of the similar target object by incorporating intention information of a user based on a user interaction, and rendering and outputting the transformed similar target object.
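The similar-object extraction step is a nearest-neighbor retrieval over the learned 3D model data. A sketch, assuming both the segmented target object and each learned model have already been encoded into fixed-length feature embeddings (the feature extractor itself and the function name are assumptions):

```python
import numpy as np

def most_similar_model(query_feat: np.ndarray, model_feats: np.ndarray) -> int:
    """Return the index of the learned 3D model whose embedding has the
    highest cosine similarity to the segmented target object's embedding.

    query_feat  : D-dimensional feature vector of the target object.
    model_feats : M x D matrix of embeddings, one row per learned model.
    """
    q = query_feat / np.linalg.norm(query_feat)
    m = model_feats / np.linalg.norm(model_feats, axis=1, keepdims=True)
    return int(np.argmax(m @ q))
```

The retrieved model then serves as the geometry onto which the camera-extracted texture is mapped before the user-driven shape transformation.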