G06F9/505

RECOMBINANT INFLUENZA VIRUS-LIKE PARTICLES (VLPS) PRODUCED IN TRANSGENIC PLANTS

A method for synthesizing influenza virus-like particles (VLPs) within a plant or a portion of a plant is provided. The method involves expression of influenza HA in plants and purification of the resulting VLPs by size exclusion chromatography. The invention is also directed to a VLP comprising influenza HA protein and plant lipids, and to a nucleic acid encoding influenza HA as well as vectors. The VLPs may be used to formulate influenza vaccines, or may be used to enrich existing vaccines.

DISTRIBUTED MACHINE LEARNING USING NETWORK MEASUREMENTS

A method performed by a central server node in a distributed machine learning environment is provided. The method includes: managing distributed machine learning for a plurality of local client nodes, such that a first set of the plurality of local client nodes is assigned to assist training of a first central model and a second set of the plurality of local client nodes is assigned to assist training of a second central model; obtaining information regarding network conditions for the plurality of local client nodes; clustering the plurality of local client nodes into one or more clusters based at least in part on the information regarding network conditions; re-assigning a local client node in the first set to the second set based on the clustering; and sending to the local client node a message including model weights for the second central model.
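The clustering and re-assignment steps above can be sketched as follows. This is a minimal illustration, not the claimed implementation: the use of latency as the network-condition metric, the two-cluster threshold, and all function and model names are assumptions.

```python
def cluster_by_latency(latencies, threshold_ms=100):
    """Split client nodes into 'fast' and 'slow' clusters by a
    hypothetical measured latency (the network-condition metric is assumed)."""
    clusters = {"fast": [], "slow": []}
    for node, latency in latencies.items():
        clusters["fast" if latency <= threshold_ms else "slow"].append(node)
    return clusters

def reassign(assignments, clusters):
    """Re-assign each node to the central model matching its cluster and
    build the messages carrying the new model's weights."""
    model_for_cluster = {"fast": "model_1", "slow": "model_2"}  # assumed mapping
    messages = []
    for label, nodes in clusters.items():
        for node in nodes:
            target = model_for_cluster[label]
            if assignments.get(node) != target:
                assignments[node] = target
                messages.append({"node": node, "model": target,
                                 "weights": f"<weights of {target}>"})
    return messages
```

A node whose assignment already matches its cluster receives no message; only the re-assigned node gets the second model's weights.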

SYSTEM AND METHOD FOR BATCH AND SCHEDULER MIGRATION IN AN APPLICATION ENVIRONMENT MIGRATION

A method of batch and scheduler migration assesses a batch job, scans its scheduling mechanism and components, ascertains the quantum of change required to migrate the batch job to a target batch service, and forecasts an assessment statistic that provides at least one functional-readiness indicator and a timeline for completing the migration of the batch job. The method generates a transformed batch job structure by breaking up the batch job according to the target batch service while retaining the scheduling mechanism. Further, it updates containerized batch service components of the target batch service as per the forecasted assessment statistic and the transformed batch job structure, and migrates the batch job to the target batch service by re-platforming the updated containerized batch service components.
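The assessment step can be illustrated with a small sketch. The work metric (counting components that need transformation), the readiness cutoff, and the per-component timeline are all invented for illustration; the abstract does not specify how the statistic is computed.

```python
def assess_batch_job(job):
    """Hypothetical assessment: estimate the quantum of change as the
    fraction of components needing transformation, then derive a
    readiness flag and a migration timeline from it."""
    components = job["components"]
    changed = [c for c in components if c["needs_transform"]]
    quantum_change = len(changed) / max(len(components), 1)
    return {
        "quantum_change": quantum_change,
        "functional_readiness": quantum_change < 0.5,  # assumed readiness rule
        "timeline_days": 2 * len(changed),             # assumed 2 days/component
    }
```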

BLOCKCHAIN-BASED INTERACTION METHOD AND SYSTEM FOR EDGE COMPUTING SERVICE
20230040149 · 2023-02-09

A blockchain-based interaction method and system for an edge computing service: using, as a bearing entity of an MECaaS, a device of a user that has an environment for an operating system; registering a computing power device of the user as an edge node by using the MECaaS; uploading or updating registration information of the edge node to a blockchain layer; issuing, by a requesting device as a data producer, a computing task to the MECaaS; invoking, by the MECaaS, a smart contract deployed on the blockchain layer; standardizing a data format of the computing task; matching a target edge node for the requesting device; and establishing M2M communication between the requesting device and the target edge node, so that the requesting device can transmit raw data to the target edge node, and the target edge node can feed back a computing result to the requesting device.
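The registration and matching flow can be sketched as below. The blockchain layer is mocked as an in-memory list, the capacity-based matching rule stands in for the smart contract, and all names are hypothetical.

```python
class MECaaS:
    """Minimal sketch of the interaction flow; a plain list stands in
    for the blockchain layer holding edge-node registration records."""

    def __init__(self):
        self.ledger = []  # mock of the blockchain layer

    def register_edge_node(self, node_id, capacity):
        # Upload/update the node's registration information.
        self.ledger.append({"node": node_id, "capacity": capacity})

    def match_target_node(self, task_demand):
        # Smart-contract stand-in: pick the first registered node
        # whose capacity can serve the task's demand (assumed rule).
        for record in self.ledger:
            if record["capacity"] >= task_demand:
                return record["node"]
        return None  # no suitable edge node registered
```

Once a target node is returned, the requesting device and target node would exchange raw data and results over M2M communication, which is outside this sketch.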

Data Re-Encryption For Software Applications
20230045103 · 2023-02-09

Some embodiments provide a non-transitory machine-readable medium that stores a program. The program receives a request to execute a task for re-encrypting a set of data associated with an application that has been encrypted with a first encryption key. The task is for re-encrypting the set of data using a second encryption key. The program further determines an amount of work to complete the task. The program also divides the task into a set of subtasks based on the amount of work. The program further assigns each subtask in the set of subtasks to a node in a plurality of nodes for execution of the subtask. The plurality of nodes are configured to implement the application.
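The divide-and-assign steps can be sketched as follows, assuming the amount of work is measured by record count and subtasks are distributed round-robin; both assumptions, like the function names, are illustrative only.

```python
import math

def divide_reencryption_task(record_ids, work_per_subtask=1000):
    """Divide the re-encryption task into subtasks, using the number of
    records as the assumed measure of work."""
    n = math.ceil(len(record_ids) / work_per_subtask)
    return [record_ids[i * work_per_subtask:(i + 1) * work_per_subtask]
            for i in range(n)]

def assign_subtasks(subtasks, nodes):
    """Assign each subtask to one of the application's nodes
    (round-robin, an assumed policy)."""
    return [(nodes[i % len(nodes)], sub) for i, sub in enumerate(subtasks)]
```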

METHOD AND APPARATUS FOR SCHEDULING TASKS IN MULTI-CORE PROCESSOR

An apparatus includes a plurality of processing cores, and a memory including a plurality of task queues corresponding to the plurality of processing cores, respectively, wherein at least one processing core of the plurality of processing cores is configured, by executing a scheduler, to determine execution of task rescheduling, based on states of the plurality of processing cores, tasks stored in the plurality of task queues, and at least one reference value, and, when the task rescheduling is executed, move a first task stored in a first task queue to a second task queue.
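The rescheduling decision can be illustrated with a small work-balancing sketch. The specific rule (compare queue-length gap against a reference value, then move the head task of the fullest queue) is an assumption; the abstract only states that rescheduling depends on core states, queued tasks, and at least one reference value.

```python
def reschedule(queues, reference_value=2):
    """If the length gap between the fullest and emptiest task queue
    exceeds the reference value, move the first task of the fullest
    queue to the emptiest one (a simple work-stealing sketch)."""
    longest = max(queues, key=lambda core: len(queues[core]))
    shortest = min(queues, key=lambda core: len(queues[core]))
    if len(queues[longest]) - len(queues[shortest]) > reference_value:
        task = queues[longest].pop(0)
        queues[shortest].append(task)
        return task, longest, shortest
    return None  # rescheduling not triggered
```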

SYSTEMS AND METHODS FOR UNIVERSAL AUTO-SCALING
20230040512 · 2023-02-09

Systems and methods for universal auto-scaling are disclosed. In one embodiment, a method may include: (1) monitoring, by an auto-scale computer program executed by a computer processor, a utilization level at each of a plurality of data layers in a data pod, wherein each data layer comprises at least one node; (2) comparing, by the auto-scale computer program, each of the utilization levels to a threshold; (3) identifying, by the auto-scale computer program, that one of the thresholds is met or exceeded; and (4) deploying, by the auto-scale computer program, an additional node to the data layer with the met or exceeded utilization level.
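Steps (1)-(4) above amount to a threshold check per data layer. A minimal sketch, with utilization expressed as a fraction and per-layer thresholds (both representations assumed):

```python
def autoscale(layer_utilization, thresholds):
    """Compare each data layer's utilization to its threshold and
    emit a deploy action for every layer at or above threshold."""
    actions = []
    for layer, utilization in layer_utilization.items():
        if utilization >= thresholds[layer]:  # threshold met or exceeded
            actions.append({"layer": layer, "action": "deploy_node"})
    return actions
```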

System and method for appraising resource configuration
11556383 · 2023-01-17

To size resources more appropriately in a destination to which IT resources will be migrated, a system for appraising a resource configuration estimates a source load model representing the load of first resources in a first computer system, which is the migration source, and, based on that model, estimates a destination load model representing the load of second resources to be built by migrating the first resources to a second computer system. The system compares the performance requirements of the first resources against the destination load model and finds a destination load model that conforms to the performance requirements. When determining design values for the configuration of the second resources, the system corrects those design values, based on the destination load model estimated to conform to the performance requirements, to decrease the design margins of the resource configuration, using a design correction value defined to meet the requested service level.
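The margin-decreasing correction can be illustrated with a one-line rule: shrink a design value by a correction factor only when the destination's predicted load meets the performance requirement. The factor, the comparison, and all parameter names are illustrative assumptions, not the patented method.

```python
def corrected_design_value(design_value, predicted_load,
                           performance_requirement, correction=0.5):
    """Decrease the design margin (multiply by an assumed correction
    factor) when the destination load model conforms to the
    performance requirement; otherwise keep the original sizing."""
    if predicted_load <= performance_requirement:
        return design_value * correction
    return design_value
```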

Scalable runtime validation for on-device design rule checks

An apparatus to facilitate scalable runtime validation for on-device design rule checks is disclosed. The apparatus includes a memory to store a contention set, one or more multiplexers, and a validator communicably coupled to the memory. In one implementation, the validator is to: receive design rule information for the one or more multiplexers, the design rule information referencing the contention set; analyze, using the design rule information, a user bitstream against the contention set at a programming time of the apparatus, the user bitstream for programming the one or more multiplexers; and provide an error indication responsive to identifying a match between the user bitstream and the contention set.
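The match check can be sketched as a set intersection, modeling each multiplexer setting as a (mux_id, select) pair; that encoding, and the function name, are assumptions made for illustration.

```python
def check_bitstream(user_bitstream, contention_set):
    """Flag an error if any forbidden configuration from the contention
    set appears in the user bitstream. Both inputs are sets of
    hypothetical (mux_id, select) pairs."""
    violations = user_bitstream & contention_set  # configurations in both
    return {"error": bool(violations), "violations": sorted(violations)}
```

In the apparatus, this check runs at programming time, so a bitstream that would create contention is rejected before it configures the multiplexers.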

Image processing apparatus and computer-readable recording medium storing screen transfer program
11557018 · 2023-01-17

An image processing apparatus for transferring a display image to a client machine, the display image being an image to be displayed on a display device associated with the client machine, the image processing apparatus including: a memory; and a processor coupled to the memory, the processor being configured to perform processing, the processing including: executing a first transfer process configured to transfer only moving image data as the display image; executing a second transfer process configured to transfer moving image data and still image data as the display image; and executing a control process configured to select either the executing of the first transfer process or the executing of the second transfer process, by using a frame rate of the display image and a state of graphics processing unit (GPU) circuitry configured to perform a process related to the image.
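The control process's selection can be sketched as a simple rule over the two stated inputs. The abstract only says the selection uses the frame rate and the GPU state; the particular rule below (moving-image-only transfer for high frame rates when the GPU is free) and the threshold are assumptions.

```python
def select_transfer_process(frame_rate, gpu_busy, fps_threshold=30):
    """Choose between the two transfer processes using the display
    image's frame rate and the GPU circuitry's state (assumed rule)."""
    if frame_rate >= fps_threshold and not gpu_busy:
        return "first_transfer_process"   # moving image data only
    return "second_transfer_process"      # moving + still image data
```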