G06F15/76

SYSTEM AND METHOD FOR SMART INTERACTION BETWEEN WEBSITE COMPONENTS
20230036518 · 2023-02-02

A website building system includes at least one database storing website components and their associated component hierarchies. Each component comprises overridable parameterized-behavior elements, non-overridable parameterized-behavior elements, and a data handler that handles override protocols for the component. The system also includes an element handler that reviews all components to be rendered for a current view and, for a current component, handles a communication request between the current component and at least one other component within the component hierarchy in order to implement an override request from that other component. The element handler updates the current component only if the override request relates to an overridable parameterized-behavior element of the current component, according to the data handler of the current component.
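The override protocol described above can be illustrated with a minimal sketch. All class and method names here (`DataHandler`, `Component`, `request_override`) are assumptions for illustration, not taken from the patent: a component's data handler accepts an override request only for elements the component marks as overridable.

```python
# Hypothetical sketch of the override protocol: the data handler accepts an
# override only for elements the component marks as overridable.

class DataHandler:
    """Decides which parameterized-behavior elements may be overridden."""

    def __init__(self, overridable, non_overridable):
        self.overridable = dict(overridable)          # element -> current value
        self.non_overridable = dict(non_overridable)

    def apply_override(self, element, value):
        if element in self.overridable:
            self.overridable[element] = value
            return True     # component updated
        return False        # refused: element is non-overridable


class Component:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler

    def request_override(self, other, element, value):
        """Communication request from this component to another in the hierarchy."""
        return other.handler.apply_override(element, value)


header = Component("header", DataHandler({"background": "white"}, {"height": 64}))
banner = Component("banner", DataHandler({}, {}))

assert banner.request_override(header, "background", "blue") is True   # overridable
assert banner.request_override(header, "height", 80) is False          # refused
```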

ELECTRONIC DEVICE AND OPERATING METHOD WITH MODEL CO-LOCATION

An electronic device for co-locating models, and a method of operating the electronic device, are provided. The electronic device includes one or more processors configured to: analyze computational characteristics of a plurality of models in response to the models being assigned to an accelerator; determine, based on the computational characteristics, an affinity representing the utilization of the accelerator when two models among the plurality are co-located; and co-locate the two models on the accelerator based on the affinity.
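A rough sketch of affinity-based co-location follows. The profiling numbers and the affinity formula are assumptions for illustration only: the idea is that affinity scores how well two models' computational characteristics complement each other, and the best-scoring pair is co-located.

```python
# Illustrative affinity-based co-location (all numbers and the scoring
# function are assumptions, not from the patent).
from itertools import combinations

# Computational characteristics per model: fraction of time spent
# compute-bound vs. memory-bound (hypothetical profiling output).
models = {
    "A": {"compute": 0.9, "memory": 0.1},
    "B": {"compute": 0.2, "memory": 0.8},
    "C": {"compute": 0.85, "memory": 0.15},
}

def affinity(m1, m2):
    """Higher when the two models stress complementary resources,
    i.e. co-locating them keeps the accelerator more fully utilized."""
    a, b = models[m1], models[m2]
    return a["compute"] * b["memory"] + a["memory"] * b["compute"]

best_pair = max(combinations(models, 2), key=lambda p: affinity(*p))
print(best_pair)  # → ('A', 'B'): the compute-heavy and memory-heavy models pair best
```

Under these toy numbers the compute-bound model A and the memory-bound model B score highest (0.74) and would be co-located.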

Systems and Methods for Optimizing Distributed Computing Systems Including Server Architectures and Client Drivers

Systems and methods for optimizing distributed computing systems are disclosed, such as for processing raw data from data sources (e.g., structured, semi-structured, key-value paired, etc.) in big-data applications. A process for utilizing multiple processing cores for data processing can include: receiving, at a first processor core, raw input data and a first portion of digested input data from a data source client through an input/output bus; passing the raw input data and the first portion of digested input data from the first processor core to a second processor core; digesting the raw input data at the second processor core to create a second portion of digested input data; returning the second portion of digested input data to the first processor core; and writing, by the first processor core, both portions of digested input data to a storage medium.
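The two-core split described above can be sketched with threads standing in for processor cores. The queue-based hand-off, the toy "digestion" step, and all names are assumptions for illustration:

```python
# Minimal sketch of the two-core pipeline: core 1 receives raw input plus an
# already-digested portion, hands the raw part to core 2 for digestion, then
# writes both digested portions to storage.
import queue
import threading

to_core2 = queue.Queue()
from_core2 = queue.Queue()
storage = []   # stands in for the storage medium

def core2():
    # Second processor core: digest raw input into a second digested portion.
    raw = to_core2.get()
    from_core2.put([line.upper() for line in raw])   # toy "digestion"

def core1(raw_input, digested_portion_1):
    # First processor core: forward raw data, collect the digested result,
    # then write both portions out.
    to_core2.put(raw_input)
    digested_portion_2 = from_core2.get()
    storage.extend(digested_portion_1 + digested_portion_2)

worker = threading.Thread(target=core2)
worker.start()
core1(["key=value"], ["HEADER"])
worker.join()
print(storage)  # → ['HEADER', 'KEY=VALUE']
```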

Compiler for implementing memory shutdown for neural network implementation configuration
11615322 · 2023-03-28

Some embodiments provide a compiler for optimizing the implementation of a machine-trained network (e.g., a neural network) on an integrated circuit (IC). The compiler of some embodiments receives a specification of a machine-trained network including multiple layers of computation nodes and generates a graph representing options for implementing the machine-trained network in the IC. In some embodiments, the graph includes nodes representing options for implementing each layer of the machine-trained network and edges between nodes for different layers representing different implementations that are compatible. The compiler of some embodiments is also responsible for generating instructions relating to shutting down (and waking up) memory units of cores. In some embodiments, the memory units to shut down are determined by the compiler based on the data that is stored or will be stored in the particular memory units.
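The option graph described above can be sketched as follows. The layer options, costs, and compatibility edges are invented for illustration: each layer has candidate implementations, edges connect compatible options in adjacent layers, and the compiler picks the cheapest compatible path through the graph. Memory units unused by the chosen path could then be scheduled for shutdown.

```python
# Hedged sketch of the implementation-option graph (structure and costs are
# assumptions, not from the patent).

layers = [
    [("L0_opt_a", 3), ("L0_opt_b", 1)],   # (option, cost) per layer
    [("L1_opt_a", 2), ("L1_opt_b", 5)],
]
# Compatibility edges between options in adjacent layers.
compatible = {("L0_opt_a", "L1_opt_a"), ("L0_opt_a", "L1_opt_b"),
              ("L0_opt_b", "L1_opt_b")}

def best_path():
    """Exhaustively pick the cheapest compatible option per layer."""
    best = None
    for o0, c0 in layers[0]:
        for o1, c1 in layers[1]:
            if (o0, o1) in compatible:
                cand = ([o0, o1], c0 + c1)
                if best is None or cand[1] < best[1]:
                    best = cand
    return best

path, cost = best_path()
print(path, cost)  # → ['L0_opt_a', 'L1_opt_a'] 5
```

Note that the globally cheapest path is not the greedy per-layer choice: the cheapest first-layer option (`L0_opt_b`, cost 1) is only compatible with the expensive second-layer option, so the compiler's whole-graph view matters.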

Data processing systems and methods for automatically detecting and documenting privacy-related aspects of computer software

Data processing systems and methods according to various embodiments are adapted for automatically detecting and documenting privacy-related aspects of computer software. Particular embodiments are adapted for: (1) automatically scanning source code to determine whether the source code includes instructions for collecting personal data; and (2) facilitating the documentation of the portions of the code that collect the personal data. For example, the system may automatically prompt a user for comments regarding the code. The comments may be used, for example, to populate: (A) a privacy impact assessment; (B) system documentation; and/or (C) a privacy-related data map. The system may comprise, for example, a privacy comment plugin for use in conjunction with a code repository.
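A toy version of the scanning step might look like the following. The keyword patterns and the scan function are assumptions for illustration: flag source lines that appear to collect personal data so they can be documented.

```python
# Illustrative privacy scanner (patterns and names are assumptions): flag
# source lines that look like they collect personal data.
import re

PERSONAL_DATA_PATTERNS = [r"\bemail\b", r"\bssn\b", r"\bdate_of_birth\b"]

def scan_source(source: str):
    """Return (line_number, line) pairs that look privacy-relevant."""
    hits = []
    for n, line in enumerate(source.splitlines(), start=1):
        if any(re.search(p, line, re.IGNORECASE) for p in PERSONAL_DATA_PATTERNS):
            hits.append((n, line.strip()))
    return hits

code = """name = input()
user_email = form["email"]
total = a + b
"""
findings = scan_source(code)
for n, line in findings:
    # In the described system, the user would be prompted for a privacy
    # comment here; the comments could then populate an impact assessment,
    # system documentation, or a data map.
    print(f"line {n}: {line}")
```

A real plugin would hook this into the code repository (e.g., as a pre-commit check) rather than scan a string.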

AUTOMATIC LABELING OF PRODUCTS VIA EXPEDITED CHECKOUT SYSTEM
20230081912 · 2023-03-16

A portable checkout unit automatically generates training data for an automatic checkout system as a customer collects items in a store. A customer uses an item scanner of the portable checkout unit to generate a virtual shopping list of items collected in the shopping cart. When the customer adds a new item to the shopping cart, or at some regular interval, the portable checkout unit captures images of the items contained in the shopping cart and can generate bounding boxes for each product in each image. The bounding boxes can be associated with item identifiers from previously generated bounding boxes to identify the items captured by the bounding boxes. Each bounding box paired with an item identifier can then be used as training data for an automated checkout system.
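The labeling step can be sketched as box matching. The IoU threshold, data layout, and item identifiers are assumptions for illustration: new bounding boxes inherit item identifiers from the best-overlapping previously labeled boxes, yielding (box, item identifier) training pairs.

```python
# Toy sketch of associating new bounding boxes with item identifiers from
# previously labeled boxes (threshold and formats are assumptions).

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def label_boxes(new_boxes, labeled_boxes, threshold=0.5):
    """Pair each new box with the item id of its best-overlapping labeled box."""
    training_pairs = []
    for box in new_boxes:
        best = max(labeled_boxes, key=lambda lb: iou(box, lb[0]))
        if iou(box, best[0]) >= threshold:
            training_pairs.append((box, best[1]))   # (bounding box, item id)
    return training_pairs

previous = [((0, 0, 10, 10), "sku-cereal"), ((20, 20, 30, 30), "sku-milk")]
print(label_boxes([(1, 1, 11, 11)], previous))
# → [((1, 1, 11, 11), 'sku-cereal')]
```

Each emitted pair is one training example for the automated checkout model; boxes below the overlap threshold are left unlabeled rather than guessed.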

RECONFIGURABLE SERVER AND SERVER RACK WITH SAME

A reconfigurable server provides an improved-bandwidth connection to adjacent servers, improved access to near-memory storage, and an improved ability to provision resources for an adjacent server. The server includes a processor array and a near-memory accelerator module that contains near-memory and helps provide sufficient bandwidth between the processor array and the near-memory. A hardware plane module can be used to provide additional bandwidth and interconnectivity between adjacent servers and/or adjacent switches.