Patent classifications
G06T3/4038
METHOD AND DEVICE FOR GENERATING VEHICLE PANORAMIC SURROUND VIEW IMAGE
The present disclosure relates to a method for generating a panoramic surround view image of a vehicle, comprising: acquiring actual original images of the external environment of a first part and a second part of the vehicle hinged to each other; processing the actual original images to obtain respective actual independent surround view images of the first part and the second part; obtaining coordinates of respective hinge points of the first part and the second part; determining matched feature point pairs in the actual independent surround view images of the first part and the second part; calculating a distance between the two points in each matched feature point pair, and taking matched feature point pairs with a distance less than a preset first threshold as successfully matched feature point pairs; and taking a rotation angle corresponding to the maximum number of successfully matched feature point pairs as a candidate rotation angle of the first part relative to the second part. The present disclosure further provides a device for generating a panoramic surround view image of a vehicle and an intelligent vehicle.
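The rotation-angle search described in this abstract can be sketched as a brute-force loop over candidate angles (the function name, the angle grid, and the use of NumPy are illustrative assumptions, not the patented implementation):

```python
import numpy as np

def candidate_rotation_angle(pts_a, pts_b, hinge, angles_deg, threshold):
    """For each candidate angle, rotate the first part's feature points
    about the hinge point and count the pairs whose distance to the matched
    point in the second part's surround view is below the threshold.
    The angle with the most successful matches is the candidate angle."""
    best_angle, best_count = None, -1
    for ang in angles_deg:
        t = np.deg2rad(ang)
        rot = np.array([[np.cos(t), -np.sin(t)],
                        [np.sin(t),  np.cos(t)]])
        rotated = (pts_a - hinge) @ rot.T + hinge  # rotate about the hinge
        dists = np.linalg.norm(rotated - pts_b, axis=1)
        count = int((dists < threshold).sum())     # successful matches
        if count > best_count:
            best_angle, best_count = ang, count
    return best_angle, best_count
```

In a real system the candidate angles would come from the plausible articulation range of the hinge, and the threshold would be tuned to the surround-view pixel scale.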
SYSTEMS, METHODS, STORAGE MEDIA, AND COMPUTING PLATFORMS FOR SCANNING ITEMS AT THE POINT OF MANUFACTURING
Systems, methods, storage media, and computing platforms for scanning items at the point of manufacturing are disclosed. Exemplary implementations may: receive a first set of images of an item from a first set of camera sources; detect a code in the first set of images; combine, responsive to detecting the code, the first set of images into a first set of combined images along a second axis perpendicular to a first axis; rotate the combined images so that they lie parallel to the first axis; and combine the rotated images along the first axis.
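Assuming the images are NumPy arrays, the combine-rotate-combine sequence might look like the following sketch (the array shapes and axis conventions are illustrative assumptions):

```python
import numpy as np

# Two toy "camera" images, 2x3 grayscale.
imgs = [np.full((2, 3), 1), np.full((2, 3), 2)]

# Combine along the second axis (columns), perpendicular to the first axis (rows).
combined = np.concatenate(imgs, axis=1)               # shape (2, 6)

# Rotate the combined strip 90 degrees so it runs parallel to the first axis.
rotated = np.rot90(combined)                          # shape (6, 2)

# Combine two such rotated strips along the first axis (rows).
stacked = np.concatenate([rotated, rotated], axis=0)  # shape (12, 2)
```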
Dynamic imaging system
A dynamic imaging system is disclosed. The dynamic imaging system may comprise one or more imagers, one or more input devices, a controller, and/or a display. Each imager may be operable to capture a video stream having a field of view. In some embodiments, the controller may articulate an imager or crop the field of view to change the field of view in response to signals from the one or more input devices. For example, a signal may relate to the vehicle's speed. In other embodiments, the controller may apply a warp to the field of view. The warp may be applied in response to signals from the one or more input devices. In yet other embodiments, video streams from one or more imagers may be stitched together by the controller. Further, the controller may likewise move the stitch line in response to signals from the one or more input devices.
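A minimal sketch of speed-dependent cropping, assuming a linear narrowing of the field of view with speed (the function, its parameters, and the 100 km/h cap are hypothetical, not taken from the patent):

```python
def crop_for_speed(frame_w, frame_h, speed_kph, min_frac=0.5):
    """Narrow the displayed field of view as vehicle speed rises:
    at 0 km/h the full frame is shown; at or above 100 km/h only the
    central `min_frac` of each dimension. Returns a centered crop box
    as (x0, y0, width, height)."""
    frac = 1.0 - (1.0 - min_frac) * min(speed_kph, 100) / 100
    w, h = int(frame_w * frac), int(frame_h * frac)
    x0, y0 = (frame_w - w) // 2, (frame_h - h) // 2
    return x0, y0, w, h
```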
ELECTRONIC DEVICE GENERATING IMAGE AND METHOD FOR OPERATING THE SAME
An electronic device is provided. The electronic device includes at least one processor, and a memory functionally connected to the at least one processor. The memory may store instructions that, when executed, enable the electronic device to obtain a plurality of images, generate a first basic extended image based on first images among the plurality of images, identify at least one first masking area included in the first basic extended image, and generate a first inference image by modifying the at least one first masking area using at least one first inference result, based on the first images and the first basic extended image. An angle of view of the first inference image may be larger than an angle of view of each of the first images.
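One way to picture the basic extended image and its masking area is to pad the original frame and mark the border as masked; here edge replication stands in for the learned inference step (the helper name and the padding scheme are assumptions for illustration):

```python
import numpy as np

def basic_extended_image(img, pad):
    """Pad the image on all sides to enlarge its angle of view; the
    padded border is the masking area to be filled by inference.
    As a placeholder for a learned model, the masked border is filled
    by replicating the nearest edge pixels (np.pad edge mode)."""
    h, w = img.shape
    mask = np.ones((h + 2 * pad, w + 2 * pad), dtype=bool)
    mask[pad:-pad, pad:-pad] = False          # original pixels are unmasked
    extended = np.pad(img, pad, mode="edge")  # naive fill of the masked border
    return extended, mask
```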
User feedback for real-time checking and improving quality of scanned image
A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three-dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Distracting features, such as the finger of a user holding the object being scanned, can be replaced with background content. As the scan progresses, a direction for capturing subsequent image frames is provided to the user as real-time feedback.
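Projecting point-cloud features into the composite image can be illustrated with a simple pinhole model (the function and camera parameters are hypothetical; the patent does not prescribe this particular model):

```python
import numpy as np

def project_points(points_3d, focal, cx, cy):
    """Project a 3-D point cloud into an image plane with a pinhole model
    (focal length `focal`, principal point (cx, cy)). Points from
    different frames that land on nearby pixels can then be reconciled
    to adjust inconsistencies between the image frames."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = focal * x / z + cx   # horizontal pixel coordinate
    v = focal * y / z + cy   # vertical pixel coordinate
    return np.stack([u, v], axis=1)
```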
Video data processing method and apparatus
Example video data processing methods and apparatus are disclosed. One example method includes receiving, by a client, a first bitstream, where the first bitstream is obtained by encoding image data in a specified spatial object. The specified spatial object is part of a panoramic space, and a size of the specified spatial object is larger than a size of a spatial object of the panoramic space corresponding to viewport information. The spatial object corresponding to the viewport information is located in the specified spatial object. The client also receives a second bitstream, where the second bitstream is obtained by encoding image data of a panoramic image of the panoramic space at a lower resolution than the resolution of the image data in the specified spatial object. The client plays the second bitstream and the first bitstream.
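The client-side choice between the two bitstreams can be sketched as a containment test on rectangles (the function and the (x, y, w, h) rectangle convention are illustrative assumptions):

```python
def pick_stream(viewport, specified_object):
    """Decide which decoded bitstream supplies the displayed viewport:
    the high-resolution first bitstream when the viewport lies entirely
    inside the specified spatial object, otherwise the low-resolution
    panoramic second bitstream. Rectangles are (x, y, w, h)."""
    vx, vy, vw, vh = viewport
    sx, sy, sw, sh = specified_object
    inside = (vx >= sx and vy >= sy and
              vx + vw <= sx + sw and vy + vh <= sy + sh)
    return "first_bitstream" if inside else "second_bitstream"
```

The low-resolution panorama acts as a fallback, so the display never goes blank when the viewer turns faster than the high-resolution object can be fetched.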
Image distribution device, image distribution system, image distribution method, and image distribution program
By performing a simple operation on an information processing terminal, a subject can be smoothly displayed from the various directions desired by a user. The device includes: an acquisition unit that acquires a plurality of pieces of moving image data; a data generating unit that generates still image data for each piece of moving image data; a storage unit that stores the still image data in association with position data and time data; a designated value accepting unit that accepts a position designation value in the still image data that the user wishes to view; and a selection unit that selects still image data on the basis of the position designation value accepted by the designated value accepting unit and transmits the selected still image data to an external display device via a communication network. When the designated value accepting unit has not accepted a position designation value, the selection unit selects the still image data corresponding to the position designation value that has already been designated; when it has accepted one, the selection unit selects the corresponding still image data on the basis of the change in the position designation value, using the time data as a reference.
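The selection unit's behavior can be sketched as a lookup keyed by position and time, falling back to the last designated position when no new value is accepted (the data layout and names are assumptions for illustration):

```python
def select_still(stills, requested_pos, last_pos, reference_time):
    """`stills` maps (position, time) -> still image id. When no new
    position is designated (`requested_pos` is None), keep serving the
    previously designated position; otherwise select the still whose
    position matches the new value at the same reference time."""
    pos = last_pos if requested_pos is None else requested_pos
    return stills[(pos, reference_time)], pos
```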
AGRICULTURAL MAPPING AND RELATED SYSTEMS AND METHODS
A method for generating a 2D orthomosaic map, including: obtaining a series of images of a field from a camera located on a ground-based vehicle; processing the series of images to mark pixels of the ground-based vehicle and, optionally, an implement; identifying, marking, and removing pixels containing plants; stitching the series of images together into a single map; and reintroducing pixels containing plants into the single map.
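The remove-then-reintroduce step for plant pixels can be sketched with a boolean mask; stitching itself is elided here, so a single image stands in for the map (the helper function is hypothetical):

```python
import numpy as np

def remove_and_reintroduce(img, plant_mask, fill_value=0):
    """Remove plant pixels before stitching (so moving foliage does not
    corrupt alignment), then reintroduce them into the stitched map.
    `plant_mask` is a boolean array marking plant pixels."""
    stripped = img.copy()
    stripped[plant_mask] = fill_value       # remove plant pixels pre-stitch
    stitched = stripped                     # placeholder for the stitched map
    stitched[plant_mask] = img[plant_mask]  # reintroduce plant pixels
    return stitched
```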
INTELLIGENT IMAGE SEGMENTATION PRIOR TO OPTICAL CHARACTER RECOGNITION (OCR)
A medical device monitoring system and method extract information from screen images from medical device controllers, with a single OCR process invocation per screen image, despite critical information appearing in different screen locations, depending on which medical device controller's screen image is processed. For example, different software versions of the medical device controllers might display the same type of information in different screen locations. Copies of the critical screen information, one copy from each different screen location, are made in a mosaic image, and then the mosaic image is OCR processed to produce text results. Text is selectively extracted from the OCR text results, depending on contents of a selector field on the screen image, such as a software version number or a heart pump model identifier.
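Copying each candidate screen location into one mosaic ahead of a single OCR invocation might look like the following sketch (the region format and zero-padding are illustrative assumptions; the OCR call itself is elided):

```python
import numpy as np

def build_mosaic(screen, regions):
    """Copy each candidate screen region (one per possible location of
    the critical field, e.g. per software version) into one mosaic so a
    single OCR pass covers them all. Regions are (row, col, h, w) and
    narrower crops are zero-padded to a common width before stacking."""
    crops = [screen[r:r + h, c:c + w] for (r, c, h, w) in regions]
    width = max(crop.shape[1] for crop in crops)
    padded = [np.pad(crop, ((0, 0), (0, width - crop.shape[1])))
              for crop in crops]
    return np.concatenate(padded, axis=0)
```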
Mobile multi-camera multi-view capture
A background scenery portion may be identified in each of a plurality of image sets of an object, where each image set includes images captured simultaneously from different cameras. A correspondence between the image sets may be determined, where the correspondence tracks control points associated with the object and present in multiple images. A multi-view interactive digital media representation of the object that is navigable in one or more dimensions and that includes the image sets may be generated and stored.
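The correspondence that tracks control points across image sets can be illustrated with a nearest-neighbour match (a deliberate simplification; real control-point tracking would use feature descriptors and geometric constraints):

```python
import numpy as np

def track_control_points(points_a, points_b):
    """Match each control point in image set A to its nearest neighbour
    in image set B, returning the index into `points_b` for each point
    in `points_a`. Both inputs are (N, 2) arrays of pixel coordinates."""
    diffs = points_a[:, None, :] - points_b[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)   # (N_a, N_b) distance matrix
    return dists.argmin(axis=1)
```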