Patent classifications
G06F11/3423
Optimizing host CPU usage based on virtual machine guest OS power and performance management
Techniques for optimizing CPU usage in a host system based on VM guest OS power and performance management are provided. In one embodiment, a hypervisor of the host system can capture information from a VM guest OS that pertains to a target power or performance state set by the guest OS for a vCPU of the VM. The hypervisor can then perform, based on the captured information, one or more actions that align usage of host CPU resources by the vCPU with the target power or performance state.
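The alignment described in this abstract can be illustrated with a minimal Python sketch. The state names, cap values, and the `align_host_cpu` function are illustrative assumptions, not taken from the patent; real hypervisors would act through scheduler and frequency controls rather than a lookup table.

```python
# Hypothetical mapping from a guest-set power/performance state to a host
# CPU-time cap (as a fraction of one host CPU). Values are assumptions.
PSTATE_TO_HOST_CAP = {
    "P0": 1.00,   # maximum performance: no cap
    "P1": 0.75,
    "P2": 0.50,
    "C1": 0.05,   # idle state: near-zero host CPU share
}

def align_host_cpu(vcpu_states: dict) -> dict:
    """Return a host CPU-share cap per vCPU from captured guest states.

    Unknown states default to an uncapped share, mirroring the idea of
    aligning host resource usage only when the guest's intent is known.
    """
    return {vcpu: PSTATE_TO_HOST_CAP.get(state, 1.0)
            for vcpu, state in vcpu_states.items()}
```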
Regression testing of computer systems using recorded prior computer system communications
A technique includes accessing, by at least one hardware processor, a recorded request and a recorded response associated with an integration test involving a first computer system and a second computer system. The recorded request was previously issued by the first computer system to the second computer system to cause the second computer system to provide the recorded response. The technique includes, in a virtualized integration test involving the second computer system and initiated using the recorded request, comparing, by the hardware processor(s), the recorded response to a response produced by the second computer system in the virtualized integration test. The technique includes identifying, by the hardware processor(s), an action taken by the second computer system as being likely to be associated with a regression based on the comparison.
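The replay-and-compare core of this technique can be sketched in a few lines of Python. The `recordings` structure and the `system_under_test` callable are illustrative stand-ins for the recorded traffic and the second computer system.

```python
def find_regressions(recordings, system_under_test):
    """Replay each recorded request and flag responses that diverge.

    `recordings` is a list of (request, recorded_response) pairs;
    `system_under_test` is a callable standing in for the second
    computer system. Mismatched responses are collected as likely
    regression suspects.
    """
    suspects = []
    for request, recorded_response in recordings:
        actual = system_under_test(request)
        if actual != recorded_response:
            suspects.append((request, recorded_response, actual))
    return suspects
```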
Installation device and installation method
A storage unit stores statistical information for each piece of hardware of a plurality of types that are candidates for an arrangement destination of a function, the statistical information including an amount of resource consumption and performance information representing a performance. An accepting unit accepts inputs of description details of the function in a high-level language corresponding to the hardware of the plurality of types, together with a performance requirement that represents a required performance. A performance predicting unit calculates a predicted performance and a predicted amount of resource consumption for each piece of hardware, using the description details and a predetermined algorithm. A device selecting unit selects, as the arrangement destination, hardware for which the calculated predicted performance and the performance information satisfy the performance requirement and for which the total of the predicted amount of resource consumption and the current amount of resource consumption is equal to or smaller than the resource capacity.
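The device-selection step described above reduces to a feasibility check over candidates. The dictionary keys and the first-fit strategy below are illustrative assumptions; the patent's "predetermined algorithm" for prediction is not specified and is not modeled here.

```python
def select_hardware(candidates, perf_requirement):
    """Pick an arrangement destination whose predicted performance meets
    the requirement and whose total resource use fits within capacity.

    Each candidate is a dict with illustrative keys: predicted_perf,
    predicted_usage (for the new function), current_usage, capacity.
    Returns the first feasible candidate's name, or None.
    """
    for hw in candidates:
        meets_perf = hw["predicted_perf"] >= perf_requirement
        fits = hw["predicted_usage"] + hw["current_usage"] <= hw["capacity"]
        if meets_perf and fits:
            return hw["name"]
    return None
```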
METHOD AND SYSTEM FOR IMPLEMENTING VIRTUAL MACHINE (VM) MANAGEMENT USING HARDWARE COMPRESSION
Novel tools and techniques are provided for implementing virtual machine (“VM”) management, and, more particularly, methods, systems, and apparatuses for implementing VM management using hardware compression. In various embodiments, a computing system might identify one or more first VMs among a plurality of VMs that are determined to be currently inactive and might identify one or more second VMs among the plurality of VMs that are determined to be currently active. The computing system might compress a virtual hard drive associated with each of the identified one or more first VMs that are determined to be currently inactive. The computing system might also perform or continue to perform one or more operations using each of the identified one or more second VMs that are determined to be currently active.
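The partition-and-compress behavior can be sketched as follows. The `vms` structure is an assumption, and `zlib` stands in for the hardware compression the patent describes, purely for illustration.

```python
import zlib

def manage_vms(vms):
    """Compress the virtual disks of inactive VMs; leave active VMs running.

    `vms` maps a VM name to {"active": bool, "disk": bytes}. Software
    zlib compression is an illustrative stand-in for hardware compression.
    """
    for vm in vms.values():
        if not vm["active"]:
            vm["disk"] = zlib.compress(vm["disk"])
            vm["compressed"] = True
        else:
            vm["compressed"] = False
    return vms
```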
SELECTING A NODE GROUP OF A WORK GROUP FOR EXECUTING A TARGET TRANSACTION OF ANOTHER WORK GROUP TO OPTIMIZE PARALLEL EXECUTION OF STEPS OF THE TARGET TRANSACTION
A computing network includes nodes of different work groups. Nodes of a work group are dedicated to transactions of the work group. If a node of a first work group is predicted to have an idleness window, a second work group may borrow the node to execute a transaction of the second work group. At least a subset of steps of the transaction may be categorized into a step group. Trees of a transaction may be categorized into one or more tree groups. A node is selected for executing a transaction, if the predicted idleness duration of the node is sufficient relative to the predicted runtime of the transaction, the step group, and/or tree group. A credit system is maintained. A first work group transfers a credit to a second work group when borrowing a node of the second work group for executing a transaction of the first work group.
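The idleness-window check and the credit transfer can be combined into one small sketch. The function name, the one-credit price, and the `credits` ledger are illustrative assumptions; step-group and tree-group runtime estimation is omitted.

```python
def borrow_node(idle_window, predicted_runtime, credits, lender, borrower):
    """Borrow a node of `lender`'s work group for `borrower`'s transaction.

    Succeeds only if the node's predicted idleness duration covers the
    transaction's predicted runtime and the borrower can pay one credit,
    which is transferred to the lending work group.
    """
    if idle_window >= predicted_runtime and credits[borrower] > 0:
        credits[borrower] -= 1
        credits[lender] += 1
        return True
    return False
```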
Electronic apparatus and control method thereof
An electronic apparatus is provided. The electronic apparatus includes: a memory configured to store at least one instruction; and a processor configured to execute the at least one instruction to: obtain usage information on an application installed in the electronic apparatus, obtain a natural language understanding model, among a plurality of natural language understanding models, corresponding to the application based on the usage information, perform natural language understanding of a user voice input related to the application based on the natural language understanding model corresponding to the application, and perform an operation of the application based on the performed natural language understanding.
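The model-selection step can be sketched as a lookup driven by usage information. The function, the usage-count criterion, and the `"general"` fallback are illustrative assumptions; the abstract does not specify how usage information selects a model.

```python
def pick_nlu_model(app, usage_info, models):
    """Select an app-specific NLU model when the app is actually used.

    `usage_info` maps app names to usage counts and `models` maps app
    names to model identifiers; both shapes are assumptions. Falls back
    to a general-purpose model for unused or unknown applications.
    """
    if usage_info.get(app, 0) > 0 and app in models:
        return models[app]
    return models["general"]
```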
PROVIDING SYSTEM UPDATES IN AUTOMOTIVE CONTEXTS
A system includes a memory, a processor in communication with the memory, and an automotive operating system (OS) with a software update manager for an automobile. The system is configured to determine that a new software update is available, monitor operating metrics of the automotive OS, and determine an installation time-window during which each of the operating metrics falls within its respective predetermined threshold. Responsive to determining that each of the operating metrics falls within its respective predetermined threshold, the system is configured to signal to the software update manager to start the installation once the automobile meets installation criteria. The installation criteria include at least (i) a first criterion that the automobile is stationary and (ii) a second criterion that the automotive OS is in an available state.
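The gating condition in this abstract is a conjunction of metric thresholds and installation criteria, which a short sketch makes concrete. The metric names and the "at or below threshold" convention are illustrative assumptions.

```python
def ready_to_install(metrics, thresholds, stationary, os_available):
    """Decide whether the update manager may start the installation.

    True only when every monitored metric is within its respective
    predetermined threshold AND the automobile is stationary AND the
    automotive OS is in an available state.
    """
    within = all(metrics[name] <= limit for name, limit in thresholds.items())
    return within and stationary and os_available
```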
Server Classification Using Machine Learning Techniques
Methods, apparatus, and processor-readable storage media for server classification using machine learning techniques are provided herein. An example computer-implemented method includes obtaining, from at least one data source, data pertaining to server activity attributed to one or more servers; processing at least a portion of the obtained data using one or more rule-based analyses; selecting at least a particular machine learning classification algorithm from a set of multiple machine learning classification algorithms, based at least in part on results from the processing and one or more portions of the obtained data; classifying an activity level of at least a portion of the one or more servers by processing at least a portion of the obtained data using the selected machine learning classification algorithm; and performing at least one automated action based at least in part on results of the classifying.
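The pipeline in this abstract — rule-based analysis, classifier selection based on its results, then classification — can be sketched as below. Keying the classifier choice on the rule result is an illustrative assumption; the abstract does not say how selection is performed, and real embodiments would use trained models rather than callables.

```python
def classify_servers(activity_data, rules, classifiers):
    """Classify each server's activity level.

    `rules` is a callable performing the rule-based analysis of a
    server's data; its result selects a classifier from `classifiers`
    (a dict of callables standing in for ML classification algorithms).
    """
    results = {}
    for server, data in activity_data.items():
        rule_result = rules(data)          # rule-based pre-analysis
        model = classifiers[rule_result]   # select classification algorithm
        results[server] = model(data)      # classify activity level
    return results
```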
APPLICATION-SPECIFIC LAUNCH OPTIMIZATION
Certain embodiments disclosed herein provide application-specific launch optimization. Aspects of the present disclosure include one or more cost functions for each application, where each cost function corresponds to a likelihood that a particular application should be placed into a particular pre-activation state. For each of the inactive applications, a respective one of the pre-activation states is selected based on comparing cost values obtained by evaluating the cost functions. Each of the inactive applications can be moved to or maintained in the respectively-selected pre-activation state to more efficiently provide an expedited application launch experience for a user.
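Selecting a pre-activation state by comparing cost values is a minimization over states, sketched below. The state names and the `cost(app, state)` signature are illustrative assumptions, not the patent's terminology.

```python
def select_pre_activation_states(inactive_apps, cost):
    """Pick, per inactive app, the pre-activation state with lowest cost.

    `cost(app, state)` returns the cost value obtained by evaluating
    that app's cost function for the given state; lower cost means the
    state is more likely the right one for an expedited launch.
    """
    states = ("suspended", "prelaunched", "warm")  # assumed state names
    return {app: min(states, key=lambda s: cost(app, s))
            for app in inactive_apps}
```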
CALIBRATION TECHNIQUE USING COMPUTER ANALYSIS FOR ASCERTAINING PERFORMANCE OF CONTAINERS
Monitoring and enhancing performance of containers using a calibration technique is implemented using a computer. Performance of a new container running as part of an application on the computer is checked by comparing the current performance of the new container with baseline data corresponding to the new container, the baseline data being derived from a calibration container corresponding to the new container. The new container is categorized in a category of performance based on this check. In response to the new container meeting a threshold of performance, an alert can be sent to a device of an administrator so that, upon receiving the alert, the administrator can initiate an action pertaining to the new container.
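The baseline comparison and categorization can be sketched with a ratio test. The ratio bands, category names, and alert rule are illustrative assumptions; the patent does not define specific thresholds.

```python
def categorize_container(current_perf, baseline_perf, alert_threshold):
    """Compare a new container's performance against its calibration
    baseline, categorize it, and decide whether to alert an admin.

    Returns (category, alert_needed). Bands are assumptions: >= 90% of
    baseline is "good", >= 70% is "degraded", otherwise "poor".
    """
    ratio = current_perf / baseline_perf
    if ratio >= 0.9:
        category = "good"
    elif ratio >= 0.7:
        category = "degraded"
    else:
        category = "poor"
    alert_needed = ratio < alert_threshold
    return category, alert_needed
```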