G06F8/38

LANGUAGE FOR GENERATING ABLATION PROTOCOLS AND SYSTEM CONFIGURATIONS
20230048486 · 2023-02-16

A method includes generating an ablation programming language, which defines commands for (i) setting ablation protocol parameters and respective values, (ii) setting a configuration of an ablation system, (iii) applying automatic logic that relates the ablation protocol parameters and the values to the configuration of the ablation system, and (iv) generating one or more graphical user interfaces (GUIs) showing one or more of the parameters of the ablation protocol and the system configuration. The ablation programming language is provided for subsequent use with the ablation system.
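The commands described above can be illustrated with a minimal interpreter sketch. All command names, parameters, and the example logic rule below are hypothetical stand-ins, not the patent's language; the sketch only shows the shape of (i) parameter commands, (ii) configuration commands, and (iii) logic relating the two.

```python
# Hypothetical sketch of an ablation-protocol command language.
class AblationProgram:
    """Interprets a tiny command language for ablation protocols."""

    def __init__(self):
        self.protocol = {}   # ablation protocol parameters and values
        self.config = {}     # ablation system configuration

    def run(self, source):
        for line in source.strip().splitlines():
            cmd, key, value = line.split(maxsplit=2)
            if cmd == "PARAM":        # (i) set a protocol parameter and value
                self.protocol[key] = float(value)
            elif cmd == "CONFIG":     # (ii) set a system configuration entry
                self.config[key] = value
            else:
                raise ValueError(f"unknown command: {cmd}")
        self._apply_logic()           # (iii) relate parameters to configuration

    def _apply_logic(self):
        # Invented example rule: high-power protocols require an irrigated catheter.
        if self.protocol.get("power_w", 0) > 40:
            self.config["catheter_mode"] = "irrigated"

program = AblationProgram()
program.run("""
PARAM power_w 50
PARAM duration_s 12
CONFIG generator gen-A
""")
print(program.config["catheter_mode"])  # irrigated
```

A GUI layer (iv) would then read `program.protocol` and `program.config` to render the parameter and configuration views.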

SYSTEMS AND METHODS FOR TRANSFORMING AN INTERACTIVE GRAPHICAL USER INTERFACE ACCORDING TO MACHINE LEARNING MODELS

A computerized method for transforming an interactive graphical user interface according to machine learning includes selecting a persona, loading a data structure associated with the selected persona, and generating the interactive graphical user interface. The method includes, in response to a user selecting a first selectable element, inputting a first set of explanatory variables to a first trained machine learning model to generate a first metric, and transforming the user interface according to the selected persona and the first metric. The method includes, in response to the user selecting a second selectable element, inputting a second set of explanatory variables to a second trained machine learning model to generate a second metric, and transforming the user interface according to the selected persona and the second metric. In various implementations, the first metric is a first probability of the persona being approved for a first prior authorization prescription.
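The flow above can be sketched in a few lines. The two scoring functions below are trivial stand-ins for the patent's trained models, and the transformation rule and threshold are invented for illustration; only the overall shape (element selection → model → metric → persona-specific UI state) follows the abstract.

```python
# Stand-in for a first trained model: returns a probability-like metric.
def model_one(features):
    return min(1.0, sum(features) / 10.0)

# Stand-in for a second trained model.
def model_two(features):
    return max(0.0, 1.0 - sum(features) / 10.0)

def transform_ui(persona, metric):
    # Invented rule: highlight an approval panel when the metric crosses 0.5.
    return {"persona": persona, "highlight_approval": metric >= 0.5}

persona = "prescriber"
# User selects the first element: feed variables to the first model.
ui = transform_ui(persona, model_one([2.0, 4.0]))
print(ui)  # {'persona': 'prescriber', 'highlight_approval': True}
# User selects the second element: feed variables to the second model.
ui = transform_ui(persona, model_two([2.0, 4.0]))
```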

Analyzing augmented reality content item usage data
11579757 · 2023-02-14

Usage metrics for augmented reality content may be identified and analyzed to determine measures of fitness for respective usage metrics. The measures of fitness may indicate a level of correlation with an outcome specified by an augmented reality content creator and an amount of interaction with an augmented reality content item by users of a client application. Recommendations may be provided to augmented reality content creators indicating modifications to augmented reality content items that have at least a threshold probability of increasing the level of interaction between users of the client application and the augmented reality content item.
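One plausible reading of "measure of fitness" is the correlation between a usage metric and the creator-specified outcome. The data, metric names, and threshold below are all made up for illustration; the patent does not specify Pearson correlation.

```python
# Fitness of each usage metric = its correlation with the creator's outcome.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Per-session usage metrics for an AR content item (invented data).
metrics = {
    "dwell_seconds": [3, 8, 5, 12, 9],
    "taps":          [1, 1, 2, 1, 2],
}
outcome = [0, 1, 0, 1, 1]  # creator-specified outcome, e.g. shares

fitness = {name: pearson(values, outcome) for name, values in metrics.items()}
# Recommend modifications tied to metrics whose fitness clears a threshold.
recommended = [name for name, f in fitness.items() if f > 0.5]
print(recommended)  # ['dwell_seconds']
```

A recommendation engine would then suggest content changes known to move the high-fitness metrics.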

Generating higher-level semantics data for development of visual content

Techniques are described for generating higher-level semantics data (HLSD) for textual-format source code which, when rendered, causes a display of visual content. Rendering the source code generates a tree hierarchy of visual source elements, which can logically be mapped to any graph tree. In an embodiment, visual source elements of the source code are classified into HLSD labels based on their properties and/or the properties of neighboring visual source elements in the tree hierarchy (context). The HLSD labels indicate the type of HLSD widget mapped to the visual source elements. Techniques further include determining features and a layout arrangement for the HLSD widgets and generating a template thereof for the visual content.
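The classification step can be sketched as a walk over the element tree in which a node's label depends on its own properties and its parent's (its context). The labels and rules below are invented examples, not the patent's taxonomy.

```python
# Map each visual source element to an HLSD widget label using its own
# properties plus the properties of its parent in the tree.
def classify(node, parent=None):
    tag = node["tag"]
    if tag == "input" and node.get("type") == "text":
        # Context rule: a text input under a search-role form is a search box.
        if parent and parent.get("role") == "search":
            label = "SearchBox"
        else:
            label = "TextField"
    elif tag == "button":
        label = "ActionButton"
    else:
        label = "Container"
    labels = {id(node): label}
    for child in node.get("children", []):
        labels.update(classify(child, node))   # recurse with this node as context
    return labels

form = {"tag": "form", "role": "search", "children": [
    {"tag": "input", "type": "text"},
    {"tag": "button"},
]}
labels = classify(form)
print(sorted(labels.values()))  # ['ActionButton', 'Container', 'SearchBox']
```

The resulting labels would then drive feature and layout selection for the generated template.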

Method for generating web code for UI based on a generative adversarial network and a convolutional neural network
11579850 · 2023-02-14

Provided is a method for generating web code for a user interface (UI) based on a generative adversarial network (GAN) and a convolutional neural network (CNN). The method includes the steps described below. A mapping relationship between the display effects of a HyperText Markup Language (HTML) element and the source code of the HTML element is constructed. The location of an HTML element in an image I is recognized. Complete HTML code for the image I is generated. The similarity between manually written HTML code and the generated complete HTML code, and the similarity between the image I and an image I1 rendered from the generated complete HTML code, are obtained. After training, an image-to-HTML-code generation model M is obtained. A to-be-processed UI image is input into the model M to obtain the corresponding HTML code.
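The training signal combines the two similarities named above: code-level similarity to hand-written HTML, and image-level similarity between the input image I and the image I1 re-rendered from the generated HTML. The functions below are crude stand-ins (token overlap and pixel agreement), not the patent's actual similarity measures or network losses.

```python
# Token-overlap stand-in for similarity between generated and reference HTML.
def code_similarity(generated_html, reference_html):
    a, b = set(generated_html.split()), set(reference_html.split())
    return len(a & b) / len(a | b)

# Pixel-agreement stand-in for similarity between image I and re-rendered I1.
def image_similarity(image_i, image_i1):
    matches = sum(p == q for p, q in zip(image_i, image_i1))
    return matches / len(image_i)

# The generator would be trained to minimize this combined dissimilarity.
def combined_loss(generated_html, reference_html, image_i, image_i1, w=0.5):
    return w * (1 - code_similarity(generated_html, reference_html)) + \
           (1 - w) * (1 - image_similarity(image_i, image_i1))

loss = combined_loss("<div> <p> hi </p> </div>",
                     "<div> <p> hi </p> </div>",
                     [0, 1, 1, 0], [0, 1, 1, 0])
print(loss)  # 0.0 when both code and image match exactly
```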

GROUP CONTROL AND MANAGEMENT AMONG ELECTRONIC DEVICES

In a method of group control and management among electronic devices, wherein the electronic devices are in communication with a control device, a projectable space instance is provided for the control device to create a workspace, wherein a control and management tool and a plurality of unified tools for driving the respective electronic devices are selectively added to the projectable space instance. The projectable space instance is then parsed with a projector by the control device to automatically generate a projected workspace corresponding to the workspace to be created via the projectable space instance. The control and management tool obtains at least one item of status information from at least a first one of the electronic devices by way of the unified tools, and controls at least a second one of the electronic devices to execute at least one task corresponding to the at least one item of status information.
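The relationship between the space instance, the projected workspace, and the unified tools can be sketched as follows. All class and method names here are assumptions for illustration, not the patent's API; the rule linking the two devices is likewise invented.

```python
# A unified tool exposes one device's status and accepts tasks for it.
class UnifiedTool:
    def __init__(self, device, status):
        self.device, self.status = device, status
        self.tasks = []
    def read_status(self):
        return self.status
    def execute(self, task):
        self.tasks.append(task)

# The projectable space instance collects the unified tools added to it.
class SpaceInstance:
    def __init__(self):
        self.tools = {}
    def add_tool(self, tool):
        self.tools[tool.device] = tool

def project(space):
    # "Parse" the space instance into a projected workspace (here, the tool map).
    return dict(space.tools)

space = SpaceInstance()
space.add_tool(UnifiedTool("sensor", status="hot"))
space.add_tool(UnifiedTool("fan", status="idle"))

workspace = project(space)
# Control-and-management rule: status of a first device drives a task on a second.
if workspace["sensor"].read_status() == "hot":
    workspace["fan"].execute("spin_up")
print(workspace["fan"].tasks)  # ['spin_up']
```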
