G06T3/12

INSTRUMENT RECOGNITION METHOD BASED ON IMPROVED U2 NETWORK
20240005639 · 2024-01-04

The present invention discloses an instrument recognition method based on an improved U² network. The method includes: replacing the common convolution of each layer with grouped convolution (Grouped Conv) on the basis of an RSU, and segmenting a dial plate and a pointer by using the network; performing noise reduction on the scale value array obtained from the segmentation by using a mean filter; and determining the position of the scale value corresponding to the pointer by using a peak value, and outputting a reading according to the scale value and preset data. A pointer-type instrument is segmented by using the improved U² network, and automatic reading of the resulting dial plate is implemented by using conventional computer vision methods. Compared with manual reading, the method therefore offers high precision, high reliability, fast reading, and low cost, and can greatly improve working efficiency.
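
As a rough illustration of the noise-reduction and peak steps, the sketch below assumes the segmentation yields a 1-D activation profile sampled along the dial's scale arc and that scale values are evenly spaced between known endpoints; the function name and index-to-value mapping are our own, not details from the abstract:

```python
import numpy as np

def read_gauge(scale_profile, min_value, max_value, kernel=5):
    """Smooth a 1-D activation profile sampled along the dial's scale arc
    with a mean (box) filter, then take the peak as the pointer position.
    The linear index-to-value mapping assumes evenly spaced scale marks
    (an assumption, not a detail from the abstract)."""
    profile = np.asarray(scale_profile, dtype=float)
    # Mean filter for noise reduction, as in the described method.
    smoothed = np.convolve(profile, np.ones(kernel) / kernel, mode="same")
    peak = int(np.argmax(smoothed))          # pointer position on the arc
    fraction = peak / (len(profile) - 1)
    return min_value + fraction * (max_value - min_value)

# Noisy profile with a clear pointer response near 3/4 of the arc.
rng = np.random.default_rng(0)
profile = rng.normal(0.0, 0.05, 101)
profile[75] += 1.0
reading = read_gauge(profile, 0.0, 10.0)     # roughly 7.5
```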

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD THEREFOR

An image processing apparatus is disclosed. The image processing apparatus includes a storage unit, a transceiver, and a processor. The processor controls the storage unit to store an input frame including a plurality of image regions having preset arrangement attributes, together with metadata including those attributes; controls the transceiver to receive viewing-angle information; and controls the transceiver to transmit the metadata and image data of at least one image region corresponding to the viewing-angle information, among the plurality of image regions, by using at least one of a plurality of transmission channels matched with the plurality of image regions.
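
The region-selection step can be sketched as follows; the yaw-interval region layout and function name are illustrative assumptions, since the abstract does not specify how regions map to viewing angles:

```python
def regions_for_view(regions, view_yaw, fov):
    """regions: list of (region_id, start_yaw, end_yaw) in degrees, with
    0 <= start < end <= 360.  Return the ids of regions overlapping the
    viewing window [view_yaw - fov/2, view_yaw + fov/2] (mod 360)."""
    lo = (view_yaw - fov / 2) % 360
    hi = (view_yaw + fov / 2) % 360

    def overlaps(s, e):
        if lo <= hi:                 # window does not cross 0 degrees
            return s < hi and e > lo
        return s < hi or e > lo      # window wraps around 360 degrees

    return [rid for rid, s, e in regions if overlaps(s, e)]

# Illustrative layout: four 90-degree regions around the full circle.
layout = [(i, i * 90, (i + 1) * 90) for i in range(4)]
visible = regions_for_view(layout, view_yaw=45, fov=120)
```

Only the regions in `visible` (and the metadata) would then be sent over their matched transmission channels.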

PRODUCING 360 DEGREE IMAGE CONTENT ON RECTANGULAR PROJECTION IN ELECTRONIC DEVICE USING PADDING INFORMATION

Embodiments herein disclose a method for producing 360-degree image content on a rectangular projection in an electronic device. The method includes obtaining 360-degree image content represented by packing one or more projection segments arranged in a rectangular projection. The method includes detecting whether at least one discontinuous boundary is present in the 360-degree image content, where the at least one discontinuous boundary is detected using the packing of the one or more projection segments. The method includes applying padding information to the at least one discontinuous boundary. The method includes producing further 360-degree image content on the rectangular projection in the electronic device based on the padding information.
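
A minimal sketch of applying guard-band padding at a vertical discontinuous boundary, assuming a packed frame where pixels from the geometrically adjacent segment are simply replicated; the column indices and replication width are our assumptions:

```python
import numpy as np

def pad_discontinuous_boundary(packed, boundary_col, src_col, width=4):
    """Write guard-band padding at a vertical discontinuous boundary by
    copying `width` columns from the geometrically adjacent segment
    (starting at src_col) over the columns just left of the boundary."""
    out = packed.copy()
    out[:, boundary_col - width:boundary_col] = packed[:, src_col:src_col + width]
    return out

# 8x16 packed frame; a discontinuity at column 8, neighbour at column 12.
img = np.arange(8 * 16).reshape(8, 16)
padded = pad_discontinuous_boundary(img, boundary_col=8, src_col=12, width=2)
```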

Image processing system and image processing method
10863083 · 2020-12-08

Two image processing systems and an image processing method are provided. One of the image processing systems includes a first unit configured to output a portion of input image data, a second unit configured to transform coordinates of input image data, and a third unit configured to output the image data processed by the first unit and the second unit as video data to be displayed on a display. The other image processing system further includes a fourth unit configured to combine input image data of a plurality of images into one piece of image data. The image processing method includes outputting a portion of input image data, transforming coordinates of the input image data, and outputting the image data as video data to be displayed on a display.
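
The units can be pictured as composable array operations; the function names and nearest-neighbour coordinate lookup below are our own simplifications of what each unit does:

```python
import numpy as np

def crop(img, y, x, h, w):
    """First unit: output a portion of the input image data."""
    return img[y:y + h, x:x + w]

def warp(img, map_y, map_x):
    """Second unit: coordinate transform via nearest-neighbour lookup."""
    return img[map_y, map_x]

def combine(imgs, axis=1):
    """Fourth unit: merge several images into one piece of image data."""
    return np.concatenate(imgs, axis=axis)

frame = np.arange(16).reshape(4, 4)
left = crop(frame, 0, 0, 4, 2)
ys, xs = np.indices(frame.shape)
flipped = warp(frame, ys, frame.shape[1] - 1 - xs)   # horizontal flip
video_out = combine([left, flipped])                 # third unit would display this
```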

Method and system for handling images
10861139 · 2020-12-08

A method performed by a vehicle system for handling images of the surroundings of a vehicle. An image of the surroundings of the vehicle is obtained from at least one image capturing device mounted in or on the vehicle, and the image capturing device comprises a fisheye lens. At least a part of the distortion in the image is corrected to obtain a corrected image. The corrected image is rotationally transformed using a first rotational transformation to obtain a first transformed image, and rotationally transformed using a second rotational transformation to obtain a second transformed image. The first and second rotational transformations are different from each other, and the first and second transformed images are consecutive images.
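
A rotational transformation of the corrected image can be sketched as an inverse coordinate remap; this nearest-neighbour version is a stand-in for whatever interpolation the vehicle system actually uses:

```python
import numpy as np

def rotate_view(img, angle_deg):
    """Resample `img` rotated by angle_deg about its centre using a
    nearest-neighbour inverse mapping; a stand-in for the patent's
    rotational transformation of the distortion-corrected image."""
    h, w = img.shape[:2]
    a = np.deg2rad(angle_deg)
    ys, xs = np.indices((h, w), dtype=float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Inverse rotation: where does each output pixel come from?
    src_x = np.cos(a) * (xs - cx) + np.sin(a) * (ys - cy) + cx
    src_y = -np.sin(a) * (xs - cx) + np.cos(a) * (ys - cy) + cy
    src_x = np.clip(np.round(src_x).astype(int), 0, w - 1)
    src_y = np.clip(np.round(src_y).astype(int), 0, h - 1)
    return img[src_y, src_x]

corrected = np.arange(25).reshape(5, 5)
first = rotate_view(corrected, 90)     # first rotational transformation
second = rotate_view(corrected, -90)   # second, different transformation
```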

Surround-view with seamless transition to 3D view system and method

A method for seamless transition from a 2D surround view to a 3D surround view. The method includes: initializing the 2D-SRV processing chain; displaying the 2D surround view while waiting for an HLOS handshake to complete; upon completion of the HLOS handshake, initializing a 3D-SRV processing chain and waiting for a 3D-SRV buffer output; disabling the 2D-SRV display pipeline and enabling a 3D-SRV display pipeline; enabling a switchback monitor; atomically switching to the 3D surround view seamlessly and glitch-free; and displaying the 3D surround view on a monitor. Another method includes detecting a crash in the HLOS and seamlessly switching back from the 3D surround view to a 2D surround view.
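
The switching sequence reads like a small state machine; the state and event names below are our own labels for the steps in the abstract, not identifiers from the patent:

```python
# State/event names are our own labels for the steps in the abstract.
TRANSITIONS = {
    ("2D_DISPLAY", "HLOS_HANDSHAKE_DONE"): "3D_INIT",
    ("3D_INIT", "3D_BUFFER_READY"): "3D_DISPLAY",    # atomic, glitch-free switch
    ("3D_DISPLAY", "HLOS_CRASH"): "2D_DISPLAY",      # switchback monitor path
}

def srv_transition(state, event):
    """Advance the surround-view state machine; irrelevant events are ignored."""
    return TRANSITIONS.get((state, event), state)

state = "2D_DISPLAY"                                  # 2D-SRV chain initialized
for event in ("HLOS_HANDSHAKE_DONE", "3D_BUFFER_READY"):
    state = srv_transition(state, event)
```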

CAMERA PARAMETER ESTIMATION DEVICE, METHOD AND PROGRAM
20200380729 · 2020-12-03

A projection image generation unit 91 applies a plurality of projection schemes that use the radius of a visual field region of a fisheye-lens camera to an image that is imaged by the fisheye-lens camera to generate a plurality of projection images. A display unit 92 displays the plurality of projection images. A selection acceptance unit 93 accepts a projection image selected by a user from among the plurality of displayed projection images. A projection scheme determination unit 94 determines a projection scheme on the basis of the selected projection image. An output unit 95 outputs an internal parameter of the fisheye-lens camera that corresponds to the determined projection scheme.
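
The plurality of projection schemes plausibly correspond to the classic fisheye models relating incidence angle theta to image radius r; since each model is linear in the focal length, the visual-field radius fixes the internal parameter once a scheme is chosen. A sketch (the model set and solver are standard optics, not taken from the patent):

```python
import math

# Classic fisheye models mapping incidence angle theta to image radius r
# for focal length f; plausible candidates for the plurality of schemes.
PROJECTIONS = {
    "equidistant":   lambda f, t: f * t,
    "equisolid":     lambda f, t: 2 * f * math.sin(t / 2),
    "orthographic":  lambda f, t: f * math.sin(t),
    "stereographic": lambda f, t: 2 * f * math.tan(t / 2),
}

def focal_from_field_radius(scheme, field_radius, max_theta):
    """Solve r(f, max_theta) = field_radius for f; every model above is
    linear in f, so one division suffices."""
    return field_radius / PROJECTIONS[scheme](1.0, max_theta)

# Example: a 180-degree fisheye whose visual-field region spans 500 px.
f_equidistant = focal_from_field_radius("equidistant", 500.0, math.pi / 2)
```

The device would render one projection image per model, let the user pick the most natural-looking one, and output the corresponding focal length.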

Optical Imaging and Scanning of Holes
20200371332 · 2020-11-26

Methods and apparatus for optical imaging and scanning of holes machined, drilled or otherwise formed in a substrate made of composite or metallic material. The method utilizes an optical instrument for imaging and scanning a hole in combination with an image processor configured (e.g., programmed) to post-process the image data to generate one complete planarized image without conical optical distortion. The optical instrument includes an optical microscope with confocal illumination and a conical mirror axially positioned to produce a full 360-degree sub-image with conical distortion. In the post-processing step, a mathematical transformation in the form of computer-executable code is used to transform the raw conical sub-images to planar sub-images. The planarized sub-images may be stitched together to form a complete planarized image of the hole.
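
The conical-to-planar transformation is essentially a polar unwrap of the annular sub-image; a nearest-neighbour sketch, with centre, radii, and sampling density as illustrative choices:

```python
import numpy as np

def unwrap_annulus(img, cx, cy, r_in, r_out, n_theta=360):
    """Unwrap the annular (conically distorted) sub-image centred at
    (cx, cy) into a rectangle: rows index radius, columns index angle.
    Nearest-neighbour sampling keeps the sketch short."""
    radii = np.arange(r_in, r_out)
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]

ring = np.zeros((64, 64), dtype=np.uint8)
ring[32, 42] = 255                     # a mark at radius 10, angle 0
flat = unwrap_annulus(ring, cx=32, cy=32, r_in=10, r_out=30)
```

Successive planarized strips like `flat` could then be stitched along the hole's axis into the complete image.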

Overlay processing method in 360 video system, and device thereof

A 360 image data processing method performed by a 360 video receiving device, according to the present invention, comprises the steps of: receiving 360 image data; acquiring information and metadata on an encoded picture from the 360 image data; decoding the picture on the basis of the information on the encoded picture; and rendering the decoded picture and an overlay on the basis of the metadata, wherein the metadata includes overlay-related metadata, the overlay is rendered on the basis of the overlay-related metadata, and the overlay-related metadata includes information on a region of the overlay.
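
Rendering an overlay on the basis of region metadata might look like the following; the alpha-blend and the metadata fields (x, y offsets) are illustrative choices, not mandated by the abstract:

```python
import numpy as np

def render_overlay(picture, overlay, region, alpha=0.5):
    """Blend `overlay` into the picture region named by the overlay-related
    metadata (x, y offsets).  The alpha-blend is an illustrative choice."""
    x, y = region["x"], region["y"]
    h, w = overlay.shape[:2]
    out = picture.astype(float).copy()
    out[y:y + h, x:x + w] = (alpha * overlay
                             + (1.0 - alpha) * out[y:y + h, x:x + w])
    return out.astype(picture.dtype)

frame = np.zeros((4, 4), dtype=np.uint8)
logo = np.full((2, 2), 100, dtype=np.uint8)
shown = render_overlay(frame, logo, {"x": 1, "y": 1})
```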

Image processing method, apparatus and machine-readable media
10846823 · 2020-11-24

Embodiments of the present application provide a method, an apparatus, and a machine-readable medium for image processing. The method includes obtaining a panoramic video image, where the panoramic video image is determined based on a perspective mapping and includes a primary perspective region and at least one secondary perspective region; dividing the secondary perspective region into at least two sub-regions based on distribution information of high-frequency components in the secondary perspective region; determining respective filter templates for the sub-regions and filtering the sub-regions using those templates; and determining a filtered panoramic video image.
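
The division-by-high-frequency-content step can be sketched with a crude metric and a box-filter "template"; the energy measure, halving scheme, and threshold below are all our assumptions:

```python
import numpy as np

def highfreq_energy(block):
    """Crude high-frequency measure: mean absolute difference between
    horizontally neighbouring pixels (our proxy for the distribution
    information; the abstract does not fix a metric)."""
    return np.abs(np.diff(block.astype(float), axis=1)).mean()

def filter_secondary_region(region, threshold=5.0):
    """Split the region into left/right halves and mean-filter only the
    low-frequency half; each kernel plays the role of a filter template."""
    h, w = region.shape
    out = region.astype(float).copy()
    for sl in (np.s_[:, : w // 2], np.s_[:, w // 2:]):
        if highfreq_energy(region[sl]) <= threshold:
            kernel = np.ones(3) / 3.0     # stronger smoothing template
            out[sl] = np.apply_along_axis(
                lambda row: np.convolve(row, kernel, mode="same"), 1, out[sl])
    return out

region = np.zeros((4, 8))
region[:, 4::2] = 20.0                    # detailed right half
filtered = filter_secondary_region(region)
```

Detailed sub-regions keep their high-frequency content while smoother ones get stronger filtering, which is the trade-off the abstract's per-sub-region templates enable.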