Patent classifications
G06V10/25
METHOD AND APPARATUS FOR IDENTIFYING OBJECT OF INTEREST OF USER
The present disclosure relates to methods and apparatuses for identifying an object of interest of a user. One example method includes obtaining information about a line-of-sight-gazed region of the user and an environment image corresponding to the user, obtaining information about a first gaze region of the user in the environment image based on the environment image, where the first gaze region is used to indicate a sensitive region determined by using a physical feature of a human body, and obtaining a target gaze region of the user based on the information about the line-of-sight-gazed region and the information about the first gaze region. The target gaze region is used to indicate a region in which a target object gazed at by the user in the environment image is located.
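The fusion step described above can be illustrated with a minimal sketch, assuming both regions are axis-aligned rectangles `(x1, y1, x2, y2)` in environment-image coordinates; the function names and the fallback policy are illustrative, not taken from the disclosure.

```python
# Hypothetical sketch: fuse the line-of-sight-gazed region with the
# body-feature ("first") gaze region to obtain a target gaze region.
# Rectangles are (x1, y1, x2, y2) in environment-image coordinates.

def intersect(a, b):
    """Return the overlap of two rectangles, or None if they are disjoint."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    if x1 >= x2 or y1 >= y2:
        return None
    return (x1, y1, x2, y2)

def target_gaze_region(line_of_sight_region, first_gaze_region):
    """Prefer the overlap of both cues; fall back to line of sight alone."""
    overlap = intersect(line_of_sight_region, first_gaze_region)
    return overlap if overlap is not None else line_of_sight_region
```
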
METHOD AND SYSTEM FOR DETECTING PHYSICAL FEATURES OF OBJECTS
A computer can be operated to detect defects, or other physical features, of artificial objects. Image data of one or more artificial objects is received, and an image segmentation process is applied to the image data to detect predetermined defects of the one or more artificial objects. The image segmentation process identifies one or more regions of the image data determined to have a likelihood of showing one or more of the predetermined defects. The identified one or more regions are output. The image segmentation process determines severity metrics for the defects in the one or more regions, wherein a severity metric represents a severity or significance of a defect. The image segmentation process further determines a confidence factor for each region of the one or more regions, wherein the confidence factor represents a likelihood of the presence of a predetermined defect in the region.
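The per-region severity metric and confidence factor can be sketched as follows, assuming each candidate region is reduced to a list of per-pixel defect probabilities; the specific formulas (area-weighted mean for severity, above-threshold fraction for confidence) are illustrative choices, not the disclosure's definitions.

```python
# Hypothetical sketch: per-region severity and confidence from per-pixel
# defect probabilities produced by a segmentation process.

def analyze_regions(regions, threshold=0.5):
    """For each region (list of per-pixel defect probabilities), compute:
    - confidence: fraction of pixels at or above the detection threshold,
    - severity: mean probability of defective pixels scaled by their count
      (so larger, stronger defects score higher)."""
    results = []
    for pixels in regions:
        defective = [p for p in pixels if p >= threshold]
        confidence = len(defective) / len(pixels)
        severity = sum(defective) if defective else 0.0
        results.append({"severity": severity, "confidence": confidence})
    return results
```
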
METHOD AND APPARATUS FOR DISPLAYING EXPRESSION IN VIRTUAL SCENE
This disclosure is directed to a method and apparatus for displaying an expression in a virtual scene. The method includes: displaying a virtual scene; displaying an expression selection region at a first target position in the virtual scene in response to a drag operation on an expression addition icon; and displaying a first target expression in the virtual scene in response to a selection operation on the first target expression in a plurality of first candidate expressions.
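The interaction flow can be sketched as a small event-driven state object: a drag on the addition icon opens the selection region at the target position, and a selection displays the chosen candidate. All class and method names here are illustrative assumptions, not from the disclosure.

```python
# Hypothetical sketch of the drag-then-select interaction flow.

class ExpressionUI:
    def __init__(self, candidates):
        self.candidates = candidates        # first candidate expressions
        self.selection_open = False
        self.selection_position = None
        self.displayed = None

    def on_drag_icon(self, target_position):
        """Drag on the expression addition icon: open the selection region."""
        self.selection_open = True
        self.selection_position = target_position

    def on_select(self, index):
        """Selection operation: display the chosen expression in the scene."""
        if self.selection_open:
            self.displayed = self.candidates[index]
            self.selection_open = False
```
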
TEXTURE FILTERING OF TEXTURE REPRESENTED BY MULTILEVEL MIPMAP
Texture filtering is applied to a texture represented with a mipmap comprising a plurality of levels, wherein each level of the mipmap comprises an image representing the texture at a respective level of detail. A texture filtering unit has minimum and maximum limits on an amount by which it can alter the level of detail when it filters texels from an image of a single level of the mipmap. The range of level of detail between the minimum and maximum limits defines an intrinsic region of the texture filtering unit. If it is determined that a received input level of detail is in the intrinsic region of the texture filtering unit, texels are read from a single mipmap level of the mipmap, and the read texels from the single mipmap level are filtered to determine a filtered texture value representing part of the texture at the input level of detail. If it is determined that the received input level of detail is in an extrinsic region of the texture filtering unit, texels are read from two mipmap levels of the mipmap, and the read texels from the two mipmap levels are processed to determine a filtered texture value representing part of the texture at the input level of detail.
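The single-level vs. two-level decision can be sketched as below, assuming the intrinsic region is a symmetric LOD offset range around each mip level; the limit values and function name are illustrative, not the unit's actual parameters.

```python
import math

# Hypothetical sketch: decide which mipmap level(s) to read for an input
# level of detail (LOD). If the input LOD is within the filtering unit's
# intrinsic range of the nearest level, one level suffices; otherwise the
# two bracketing levels are read and blended.

def select_mip_levels(input_lod, lod_min=-0.25, lod_max=0.25):
    """Return a tuple of mip level indices to read for `input_lod`."""
    nearest = round(input_lod)
    offset = input_lod - nearest
    if lod_min <= offset <= lod_max:
        return (nearest,)                  # intrinsic region: single level
    lower = math.floor(input_lod)
    return (lower, lower + 1)              # extrinsic region: two levels
```
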
GROUND HEIGHT-MAP BASED ELEVATION DE-NOISING
The disclosed technology provides solutions for improving sensor data accuracy and, in particular, for improving radar data by de-noising radar elevation measurements using a height-map. In some aspects, a process of the disclosed technology can include steps for receiving camera data corresponding with a first location, receiving radar data comprising a plurality of radar points, and processing the radar data to generate height-corrected radar data. In some aspects, the process can further include steps for projecting the height-corrected radar data into an image space to generate radar-image data. Systems and machine-readable media are also provided.
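The two processing steps can be sketched as follows, assuming the height-map is a lookup from quantized ground coordinates to ground elevation and that projection uses a simple pinhole model; the coordinate conventions, camera intrinsics, and function names are illustrative placeholders.

```python
# Hypothetical sketch: de-noise radar elevations with a ground height-map,
# then project the corrected 3D points into image space.

def height_correct(radar_points, height_map):
    """Replace each radar point's noisy elevation with the ground height
    looked up at its quantized (x, y) position."""
    return [(x, y, height_map[(round(x), round(y))]) for x, y, _z in radar_points]

def project_to_image(points, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0):
    """Pinhole projection of 3D camera-frame points (z forward) to pixels."""
    return [(fx * x / z + cx, fy * y / z + cy) for x, y, z in points]
```
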
IMAGE PROCESSING METHOD, NETWORK TRAINING METHOD, AND RELATED DEVICE
This application provides an image processing method, a network training method, and a related device, and relates to image processing technologies in the artificial intelligence field. The method includes: inputting a first image including a first vehicle into an image processing network to obtain a first result output by the image processing network, where the first result includes location information of a two-dimensional (2D) bounding frame of the first vehicle, coordinates of a wheel of the first vehicle, and a first angle of the first vehicle, and the first angle of the first vehicle indicates an included angle between a side line of the first vehicle and a first axis of the first image; and generating location information of a three-dimensional (3D) outer bounding box of the first vehicle based on the first result.
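One way the network's outputs could feed the 3D box construction is sketched below: the side line through the wheel contact point, at the predicted angle to the image x-axis, is clipped to the 2D bounding frame to give the projected bottom side edge of the 3D box. This geometric step is an illustrative assumption, not the application's stated procedure.

```python
import math

# Hypothetical sketch: derive a ground-plane side edge of the 3D bounding
# box from the 2D box, a wheel contact point, and the side-line angle.

def side_edge_from_wheel(bbox_2d, wheel, angle_deg):
    """Extend the side line through the wheel point (at `angle_deg` to the
    image x-axis) to the left and right borders of the 2D bounding frame."""
    x1, _y1, x2, _y2 = bbox_2d
    wx, wy = wheel
    slope = math.tan(math.radians(angle_deg))
    p_left = (x1, wy + slope * (x1 - wx))
    p_right = (x2, wy + slope * (x2 - wx))
    return p_left, p_right
```
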
IMAGE REGISTRATION METHOD AND ELECTRONIC DEVICE
An image registration method includes: acquiring a target image comprising a target object; inputting the target image to a preset network model, and outputting position information and rotation angle information of the target object; obtaining a reference image comprising the target object by querying a preset image database according to the position information and the rotation angle information; and performing image registration on the target image and the reference image to obtain a corresponding position of the target object of the target image in the reference image.
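The database-query step can be sketched as a lookup keyed by quantized pose, followed by expressing the target object's position in reference-image coordinates. The quantization steps, key layout, and translation-only registration are illustrative assumptions; the actual model outputs and registration algorithm are not specified here.

```python
# Hypothetical sketch: query a reference image by predicted pose, then map
# the target object's position into the reference image's coordinates.

def query_reference(db, position, rotation, pos_step=10, rot_step=15):
    """Look up the reference entry whose quantized (x, y, rotation) pose
    matches the network's prediction for the target object."""
    key = (round(position[0] / pos_step),
           round(position[1] / pos_step),
           round(rotation / rot_step))
    return db.get(key)

def position_in_reference(target_pos, reference_origin):
    """Target object's position expressed relative to the reference image's
    origin (translation-only registration for illustration)."""
    return (target_pos[0] - reference_origin[0],
            target_pos[1] - reference_origin[1])
```
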