Patent classifications
G06V10/763
Lane count estimation
A method for assigning a number of lanes and a direction of travel on a vehicle path includes receiving location data including a plurality of location points, projecting the plurality of location points onto an aggregation axis perpendicular to a centerline of the vehicle path, grouping the plurality of location points as projected onto the aggregation axis into one or more clusters, and determining the number of vehicle lanes of the vehicle path based on a count of the one or more clusters.
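The projection-and-clustering step can be sketched as follows; the gap-based 1-D clustering and the `gap_threshold` parameter are illustrative assumptions, not the patent's specific method:

```python
import math

def count_lanes(points, centerline_angle, gap_threshold=2.0):
    """Project 2-D location points onto an axis perpendicular to the
    centerline, then group the 1-D projections into clusters separated
    by gaps wider than gap_threshold (metres). The cluster count is
    the estimated number of lanes."""
    # Unit vector perpendicular to the centerline direction.
    px, py = -math.sin(centerline_angle), math.cos(centerline_angle)
    # 1-D projection of each point onto the aggregation axis.
    proj = sorted(x * px + y * py for x, y in points)
    clusters = 1
    for a, b in zip(proj, proj[1:]):
        if b - a > gap_threshold:
            clusters += 1
    return clusters

# Two lanes of traffic: lateral offsets near 0 m and near 3.5 m.
pts = [(10, 0.1), (20, -0.2), (30, 0.0), (12, 3.4), (25, 3.6), (33, 3.5)]
print(count_lanes(pts, centerline_angle=0.0))  # 2
```

With the centerline along the x-axis, the projection reduces to the lateral (y) offset, and the two offset groups yield a count of two lanes.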
TEST SUPPORT METHOD, TEST SUPPORT DEVICE, AND STORAGE MEDIUM
A test support method includes a step of obtaining a pre-change image and a post-change image to be displayed on a monitoring and control system, a step of extracting, from the post-change image, multiple symbols that have changed from corresponding symbols in the pre-change image, a step of adding order information to the multiple symbols extracted, and a step of outputting a test image in which the order information is added to the multiple symbols.
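The extract-and-order steps can be illustrated with a minimal sketch in which dictionaries of symbol states stand in for the pre- and post-change images (the dict representation and alphabetical ordering are assumptions):

```python
def extract_changed_symbols(pre, post):
    """Return the post-change symbols whose state differs from the
    pre-change image, each annotated with 1-based order information.
    pre/post are hypothetical {symbol_name: state} stand-ins for the
    monitoring-and-control screen images."""
    changed = [name for name, state in post.items() if pre.get(name) != state]
    return {name: {"state": post[name], "order": i}
            for i, name in enumerate(sorted(changed), start=1)}

pre  = {"pump_A": "off", "valve_3": "closed", "alarm_7": "clear"}
post = {"pump_A": "on",  "valve_3": "closed", "alarm_7": "raised"}
print(extract_changed_symbols(pre, post))
```

Only the two symbols that changed are kept, and each carries the order information that the patent's test image would display.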
VIDEO DATA PROCESSING METHOD AND APPARATUS, DEVICE, AND MEDIUM
Embodiments of the disclosure provide a data processing method and apparatus, a device, and a medium. The method includes: performing video analysis on video data of a target video to obtain a plurality of video segments; determining a video template associated with a target user from a video template database based on a user portrait of the target user, and obtaining at least one predetermined template segment and a template tag sequence in the video template; screening at least one video segment matching the template attribute tag of the at least one template segment; splicing the at least one matched video segment, according to a position of a template attribute tag of each template segment in the template tag sequence, into a video material segment of the target video; and pushing the video data and the video material segment to an application client corresponding to the target user.
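The screening-and-splicing step can be sketched as a tag-driven selection; the greedy first-match policy and the segment/tag representation are assumptions for illustration:

```python
def splice_by_template(segments, template_tags):
    """For each attribute tag in the template tag sequence, pick the
    first unused video segment carrying that tag, preserving the
    template's positional order. segments is a list of
    (segment_id, set_of_tags) pairs."""
    used = set()
    spliced = []
    for tag in template_tags:
        for i, (seg_id, seg_tags) in enumerate(segments):
            if i not in used and tag in seg_tags:
                used.add(i)
                spliced.append(seg_id)
                break
    return spliced

segments = [("s1", {"intro"}), ("s2", {"action"}), ("s3", {"intro"}), ("s4", {"outro"})]
print(splice_by_template(segments, ["intro", "action", "intro", "outro"]))
# ['s1', 's2', 's3', 's4']
```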
CROWD MOTION SIMULATION METHOD BASED ON REAL CROWD MOTION VIDEOS
A crowd motion simulation method is provided based on real crowd motion videos. The method includes splitting the videos into frames stored as continuous high-definition images, generating a crowd density map for each image, and locating each individual in each density map to obtain its precise position. The method also includes correlating the positions of each individual across different images to form a complete motion trajectory, and extracting motion trajectory data; and quantifying the motion trajectory data, defining training data and data labels, and calculating data correlation. The method further includes building a deep convolutional neural network, and inputting the motion trajectory data for training to learn crowd motion behaviors; and randomly placing a plurality of simulated individuals in a two-dimensional space, testing the prediction performance of the deep convolutional neural network, adjusting parameters for simulation, and drawing a crowd motion trajectory.
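The position-correlation step can be sketched with greedy nearest-neighbour linking between consecutive frames; this is a simplified stand-in for the patent's correlation procedure, and the `max_dist` gate is an assumption:

```python
import math

def link_trajectories(frames, max_dist=5.0):
    """Correlate per-frame individual positions into trajectories by
    matching each track's last position to the nearest unclaimed
    detection in the next frame, within max_dist."""
    tracks = [[p] for p in frames[0]]
    for frame in frames[1:]:
        free = list(frame)
        for track in tracks:
            if not free:
                break
            last = track[-1]
            best = min(free, key=lambda p: math.dist(last, p))
            if math.dist(last, best) <= max_dist:
                track.append(best)
                free.remove(best)
    return tracks

# Two individuals drifting rightward over three frames.
frames = [[(0, 0), (10, 0)], [(0.5, 0.2), (10.4, 0.1)], [(1.1, 0.3), (10.9, 0.2)]]
for track in link_trajectories(frames):
    print(track)
```

Each resulting track is one individual's complete motion trajectory, ready for quantification into training data.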
Endoscopic image observation system, endoscopic image observation device, and endoscopic image observation method
An endoscopic image observation system supports the observation of a plurality of images captured by a capsule endoscope. The endoscopic image observation system includes a distinguishing unit that outputs an accuracy score indicating the likelihood that each of the plurality of images represents an image of a region sought to be distinguished; a grouping unit that groups the plurality of images into a plurality of clusters in accordance with the accuracy score; and an identification unit that identifies a candidate image for a boundary of the region from among the plurality of images in accordance with the grouping into the plurality of clusters.
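The grouping-and-identification steps can be sketched on a time-ordered score sequence; thresholding is an assumed stand-in for the patent's score-based clustering, with boundary candidates taken where the cluster label flips:

```python
def find_boundary_candidates(scores, threshold=0.5):
    """Group a time-ordered accuracy-score sequence into in-region /
    out-of-region clusters by thresholding, then return the indices of
    images where the cluster label changes -- the candidate boundary
    images of the region sought to be distinguished."""
    labels = [s >= threshold for s in scores]
    return [i for i in range(1, len(labels)) if labels[i] != labels[i - 1]]

# Per-image accuracy that the frame shows the target region.
scores = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85, 0.2, 0.1]
print(find_boundary_candidates(scores))  # [3, 6]
```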
Systems and methods for analysis of images of apparel in a clothing subscription platform
Disclosed are methods, systems, and non-transitory computer-readable medium for color and pattern analysis of images including wearable items. For example, a method may include receiving an image depicting a wearable item, identifying the wearable item within the image by identifying a face of an individual wearing the wearable item or segmenting a foreground silhouette of the wearable item from background image portions of the image, determining a portion of the wearable item identified within the image as being a patch portion representative of the wearable item depicted within the image, deriving one or more patterns of the wearable item based on image analysis of the determined patch portion of the image, deriving one or more colors of the wearable item based on image analysis of the determined patch portion of the image, and transmitting information regarding the derived one or more colors and information regarding the derived one or more patterns.
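The color-derivation step can be sketched with coarse RGB quantisation over the patch pixels; the bucket size and pixel-list representation are assumptions, not the patent's image-analysis technique:

```python
from collections import Counter

def dominant_colors(patch_pixels, n=2, bucket=64):
    """Derive the n most common colors of a patch by quantising each
    RGB channel into coarse buckets and counting occurrences -- a
    lightweight stand-in for patch-based color analysis."""
    def quantise(color):
        # Snap each channel to the centre of its bucket.
        return tuple((v // bucket) * bucket + bucket // 2 for v in color)
    counts = Counter(quantise(p) for p in patch_pixels)
    return [color for color, _ in counts.most_common(n)]

# A patch that is mostly navy blue with a white pattern.
patch = [(10, 20, 120)] * 30 + [(250, 250, 250)] * 10
print(dominant_colors(patch))  # [(32, 32, 96), (224, 224, 224)]
```

The same quantised pixel counts could also feed a pattern heuristic, e.g. the ratio between the top two colors.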
LANDMARK DETECTION USING CURVE FITTING FOR AUTONOMOUS DRIVING APPLICATIONS
In various examples, one or more deep neural networks (DNNs) are executed to regress on control points of a curve, and the control points may be used to perform a curve fitting operation—e.g., Bezier curve fitting—to identify landmark locations and geometries in an environment. The outputs of the DNN(s) may thus indicate the two-dimensional (2D) image-space and/or three-dimensional (3D) world-space control point locations, and post-processing techniques—such as clustering and temporal smoothing—may be executed to determine landmark locations and poses with precision and in real-time. As a result, reconstructed curves corresponding to the landmarks—e.g., lane line, road boundary line, crosswalk, pole, text, etc.—may be used by a vehicle to perform one or more operations for navigating an environment.
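The curve-fitting operation can be illustrated by evaluating a Bezier curve from regressed control points via De Casteljau's algorithm; the control-point values below are made up for illustration:

```python
def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t (0..1) via De
    Casteljau's algorithm, turning control points into a point on the
    reconstructed landmark curve (e.g. a lane line)."""
    pts = list(control_points)
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive points.
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# Cubic curve from four hypothetical 2-D image-space control points.
ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]
curve = [bezier_point(ctrl, i / 10) for i in range(11)]
print(curve[5])  # (2.0, 1.5)
```

Sampling t densely yields the reconstructed curve that downstream clustering and temporal smoothing would operate on.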
Real-time interface classification in an application
Integration code usable to cause a computing device to determine which category from a plurality of categories corresponds to an interface of an interface provider is generated based at least in part on output from a machine learning algorithm trained to categorize interfaces. The computing device is caused, by providing the integration code to the computing device, to execute the integration code to cause the computing device to evaluate characteristics of an interface of an interface provider, determine a category of an interface of the interface provider, and interact with the interface in a manner that accords with the category.
Grouping Clothing Images Of Brands Using Vector Representations
Described herein is a system and computer-implemented method of grouping clothing products by brand within a set of clothing images in an electronic catalog of an internet store serving online customers. An object detection model is applied to extract the dress section within the clothing image(s) to create preprocessed image(s). A machine learning model is applied to the preprocessed image(s) to convert each image into a vector representation through an unsupervised technique. The vector contains the design features of the clothing image; the design features are representative of the brands. A clustering model is applied to the vector representations to arrive at the grouping of similar images of the clothing products. The grouped clothing products are displayed via a user interface, ordered by brand, to the online customers.
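The clustering step can be sketched with greedy cosine-similarity grouping of the design-feature vectors; the similarity threshold and first-member comparison are assumptions, not the patent's clustering model:

```python
import math

def group_by_similarity(vectors, threshold=0.95):
    """Group 2-D design-feature vectors: a vector joins the first
    group whose founding member it resembles (cosine similarity >=
    threshold), otherwise it starts a new group."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))
    groups = []
    for v in vectors:
        for g in groups:
            if cos(g[0], v) >= threshold:
                g.append(v)
                break
        else:
            groups.append([v])
    return groups

# Two near-duplicate pairs of feature vectors -> two brand groups.
vecs = [(1.0, 0.1), (0.9, 0.12), (0.1, 1.0), (0.08, 0.95)]
print(len(group_by_similarity(vecs)))  # 2
```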
Flow-based color transfer from source graphic to target graphic
Certain embodiments involve flow-based color transfers from a source graphic to a target graphic. For instance, a palette flow is computed that maps colors of a target color palette to colors of the source color palette (e.g., by minimizing an earth-mover distance with respect to the source and target color palettes). In some embodiments, such color palettes are extracted from vector graphics using path and shape data. To modify the target graphic, the target color from the target graphic is mapped, via the palette flow, to a modified target color using color information of the source color palette. A modification to the target graphic is performed (e.g., responsive to a preview function or recoloring command) by recoloring an object in the target color with the modified target color.
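The palette mapping can be illustrated with a nearest-color assignment in RGB space; this is a simplified stand-in for the earth-mover-distance palette flow, and the palette values are made up:

```python
def recolor(target_colors, source_palette):
    """Map each target-palette color to its nearest source-palette
    color by squared RGB distance -- a simplified substitute for the
    patent's flow computed by minimizing an earth-mover distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(source_palette, key=lambda s: dist2(c, s))
            for c in target_colors]

target = [(200, 30, 30), (30, 30, 200)]   # reddish, bluish target colors
source = [(255, 140, 0), (0, 90, 180)]    # orange, teal source palette
print(recolor(target, source))
# [(255, 140, 0), (0, 90, 180)]
```

Unlike this sketch, a true palette flow distributes each target color fractionally across source colors, which avoids collapsing distinct target colors onto one source color.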