Patent classifications
G06F11/0715
PROCESS FOR THE PREPARATION OF A POWDERED FLAVORING BASED ON MACRURAN DECAPOD CRUSTACEANS, THE FLAVORING PRODUCT OBTAINED WITH IT AND COOKING SALT FLAVORED WITH SAID PRODUCT
A process for the preparation of a flavoring powder based on macruran decapod crustaceans that comprises a) keeping macruran decapod crustacean cephalothoraxes at no more than about 2 degrees centigrade; b) optionally adding vegetable oil to the cephalothoraxes; c) heat treating the cephalothoraxes using dry heat at a temperature of between about 160 and about 180 degrees centigrade so that the inside reaches approximately 70 degrees centigrade for a period of approximately 2 minutes; d) extracting the hepatopancreas of the cephalothoraxes in liquid form; e) filtering the creamy liquid obtained, without pressure, separating the solids present; f) lyophilizing the filtered product obtained in e); and g) grinding the product obtained by lyophilization to a powder with a particle size from about 5 μm to about 80 μm. The flavoring product obtained by the process described above, where the macruran decapod crustacean cephalothoraxes come from Patagonian shrimps (Pleoticus muelleri), is a powder of between about 5 μm and about 80 μm in particle size, comprising about 39% protein (not including chitin), about 43.0% lipids and about 8.0% ash. Cooking flavored salt which comprises from about 10% w/w to about 15% w/w of the powdered flavoring product obtained by the process described above, mixed with sea salt or salt flakes for culinary use, where the % w/w refers to the final mix.
Managing exceptions on a shared resource
Examples are disclosed that relate to managing workloads provided by a plurality of clients to a shared resource. One example provides a computing device comprising a processor and a storage device storing instructions executable by the processor. The instructions are executable to provide a first workload from a first client and a second workload from a second client to a shared memory accessible by the first client, the second client, and a resource configured to process the first workload and the second workload. The computing device is configured to determine that an exception has occurred while processing the first workload, and to take an action to prevent execution of at least some additional work from the first client. The instructions are further executable to receive a result of processing the second workload after taking the action to prevent the execution of the additional work.
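As a rough illustration of the idea in this abstract (not the patented implementation), the sketch below queues workloads from two clients into a stand-in for the shared memory, blocks further work from a client whose workload raises an exception, and still returns the other client's result. All names (SharedResource, submit, process_all, blocked_clients) are assumptions for the example.

```python
# Minimal sketch (hypothetical names): a shared resource processes workloads
# from multiple clients; when one client's workload raises, further work from
# that client is blocked while other clients' results are still returned.
from collections import deque


class SharedResource:
    def __init__(self):
        self.queue = deque()          # stands in for the shared memory region
        self.blocked_clients = set()  # clients whose work is no longer accepted

    def submit(self, client_id, workload):
        if client_id in self.blocked_clients:
            raise RuntimeError(f"client {client_id} is blocked after a prior exception")
        self.queue.append((client_id, workload))

    def process_all(self):
        results = {}
        while self.queue:
            client_id, workload = self.queue.popleft()
            if client_id in self.blocked_clients:
                continue  # drop additional work from the faulting client
            try:
                results[client_id] = workload()
            except Exception:
                # the action taken to prevent execution of further work
                self.blocked_clients.add(client_id)
        return results


if __name__ == "__main__":
    r = SharedResource()
    r.submit("client_1", lambda: 1 / 0)   # first workload raises an exception
    r.submit("client_2", lambda: 2 + 2)   # second workload still completes
    print(r.process_all())                # {'client_2': 4}
```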
Method, management node and processing node for continuous availability in cloud environment
Method, management node and processing node are disclosed for continuous availability in a cloud environment. According to an embodiment, the cloud environment comprises a plurality of layers and each layer includes at least two processing nodes. Each processing node in a layer can pull job(s) from the processing nodes in the layer above, if any, and prepare job(s) for the processing nodes in the layer below, if any. A method implemented at a management node comprises receiving measurement reports from the plurality of layers. The measurement report of each processing node comprises information about job(s) pulled from the layer above, if any, and job(s) pulled by the layer below, if any. The method further comprises determining information about failure in the cloud environment based on the measurement reports.
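A simplified sketch of the management-node side, under assumptions of my own (the report fields pulled_from_upper and pulled_by_under and the stall heuristic are illustrative, not from the patent): a node that keeps pulling jobs from above while the layer below pulls nothing from it is flagged as a failure candidate.

```python
# Minimal sketch (hypothetical structure): each processing node reports how many
# jobs it pulled from the layer above and how many were pulled from it by the
# layer below; the management node flags nodes whose output has stalled.
def detect_failures(measurement_reports, stall_threshold=0):
    """measurement_reports: {node_id: {"pulled_from_upper": int, "pulled_by_under": int}}"""
    failed = []
    for node_id, report in measurement_reports.items():
        pulled_in = report.get("pulled_from_upper", 0)
        pulled_out = report.get("pulled_by_under", 0)
        # A node that keeps accepting jobs but produces nothing the next layer
        # can pull is treated as a failure candidate in this simplified model.
        if pulled_in > 0 and pulled_out <= stall_threshold:
            failed.append(node_id)
    return failed


if __name__ == "__main__":
    reports = {
        "layer1-node-a": {"pulled_from_upper": 40, "pulled_by_under": 38},
        "layer1-node-b": {"pulled_from_upper": 35, "pulled_by_under": 0},  # stalled
    }
    print(detect_failures(reports))  # ['layer1-node-b']
```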
System fault detection and processing method, device, and computer readable storage medium
Disclosed are a method, a device, and a computer readable storage medium for detecting and processing a system fault. The method includes: an interrupt service routine sending a first-stage watchdog kick signal, and receiving a second-stage watchdog kick signal from a system detection task (S101); and when a task stuck in an infinite loop or a task abnormality is detected, performing system abnormality processing according to a preset processing policy, wherein when the interrupt service routine fails to receive the second-stage watchdog kick signal within a set period of time, the interrupt service routine stops sending the first-stage watchdog kick signal, and the system reboots (S102).
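To make the two-stage kick relationship concrete, here is a loose sketch in Python rather than the firmware setting the patent targets; the class name TwoStageWatchdog, the relay loop standing in for the interrupt service routine, and the timing values are all assumptions.

```python
# Minimal sketch (hypothetical names): a relay loop forwards the first-stage kick
# only while the detection task keeps delivering second-stage kicks; when those
# stop arriving within the timeout, the relay stops kicking and a reboot
# handler (the preset processing policy) runs.
import threading
import time


class TwoStageWatchdog:
    def __init__(self, timeout_s, kick_hardware, reboot):
        self.timeout_s = timeout_s
        self.kick_hardware = kick_hardware   # sends the first-stage kick
        self.reboot = reboot                 # action taken when the chain breaks
        self.last_second_stage = time.monotonic()

    def second_stage_kick(self):
        """Called by the system detection task while monitored tasks are healthy."""
        self.last_second_stage = time.monotonic()

    def run(self):
        while True:
            if time.monotonic() - self.last_second_stage > self.timeout_s:
                self.reboot()        # second-stage kicks stopped: treat as fault
                return
            self.kick_hardware()     # first-stage kick keeps the hardware dog fed
            time.sleep(self.timeout_s / 4)


if __name__ == "__main__":
    wd = TwoStageWatchdog(
        timeout_s=0.2,
        kick_hardware=lambda: None,
        reboot=lambda: print("rebooting: detection task stopped kicking"),
    )
    threading.Thread(target=wd.run, daemon=True).start()
    for _ in range(3):               # healthy phase: detection task kicks on time
        wd.second_stage_kick()
        time.sleep(0.05)
    time.sleep(0.5)                  # detection task goes silent -> reboot fires
```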
Providing a Watchdog Timer to Enable Collection of Crash Data
A system and method for providing a watchdog timer to enable collection of crash data is provided. Upon execution of certain operations, a source thread of an application initiates a watchdog thread that periodically samples the state of data relating to the application. Should the operation not complete within a watchdog timeout period, the watchdog thread invokes a crash function to collect additional state data. At least a portion of the state data is stored for later analysis and debugging.
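A toy version of that flow is sketched below; the function names (watchdog, sample_state, on_timeout) and the timing constants are invented for the example and are not taken from the patent.

```python
# Minimal sketch (hypothetical names): a source thread starts a watchdog thread
# that periodically samples application state; if the guarded operation does not
# finish within the timeout, the watchdog invokes a crash routine that records
# the sampled state for later debugging.
import threading
import time


def watchdog(done_event, sample_state, on_timeout, timeout_s, interval_s=0.05):
    samples = []
    deadline = time.monotonic() + timeout_s
    while not done_event.wait(interval_s):
        samples.append(sample_state())           # periodic state sampling
        if time.monotonic() > deadline:
            on_timeout(samples)                   # collect additional crash data
            return


if __name__ == "__main__":
    done = threading.Event()
    state = {"progress": 0}
    t = threading.Thread(
        target=watchdog,
        args=(done, lambda: dict(state), lambda s: print("crash data:", s[-3:]), 0.3),
        daemon=True,
    )
    t.start()
    time.sleep(0.6)   # simulate an operation that hangs past the watchdog timeout
    done.set()        # set too late, so the crash routine has already run
    t.join()
```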
METHODS AND SYSTEMS FOR ANOMALY DETECTION
This disclosure relates generally to anomaly detection, and more particularly to a system and method for detecting anomalies. In one embodiment, the method includes executing at least one thread associated with an application. Executing the at least one thread results in invoking one or more methods associated with the at least one thread. During the execution, metrics associated with the one or more methods are captured. The metrics are systematically arranged in a data structure to represent a plurality of thread-method pairs and the metrics corresponding to each of the plurality of thread-method pairs. One or more anomalies associated with the one or more methods are identified from the data structure based on a detection of at least one predetermined condition in the data structure. An anomaly of the one or more anomalies includes one of an un-exited anomaly, an exception anomaly and a user-defined anomaly.
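The thread-method data structure lends itself to a small illustration. The sketch below is only one plausible reading of the abstract: the dictionary layout, the metric field names and the duration limit used as the "user-defined" condition are assumptions.

```python
# Minimal sketch (hypothetical names): metrics captured per (thread, method) pair
# are arranged in a dictionary; an anomaly is reported when a method was entered
# but never exited, when it recorded an exception, or when a user-defined
# condition (here, an assumed duration limit) holds.
def find_anomalies(metrics, max_duration_ms=1000):
    """metrics: {(thread, method): {"entered": int, "exited": int,
                                    "exceptions": int, "duration_ms": float}}"""
    anomalies = []
    for (thread, method), m in metrics.items():
        if m["entered"] > m["exited"]:
            anomalies.append((thread, method, "un-exited"))
        if m["exceptions"] > 0:
            anomalies.append((thread, method, "exception"))
        if m["duration_ms"] > max_duration_ms:
            anomalies.append((thread, method, "user-defined"))
    return anomalies


if __name__ == "__main__":
    metrics = {
        ("worker-1", "fetch"): {"entered": 5, "exited": 5, "exceptions": 0, "duration_ms": 120},
        ("worker-1", "parse"): {"entered": 5, "exited": 4, "exceptions": 1, "duration_ms": 80},
        ("worker-2", "store"): {"entered": 3, "exited": 3, "exceptions": 0, "duration_ms": 4500},
    }
    for anomaly in find_anomalies(metrics):
        print(anomaly)
```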
SYSTEMS AND METHODS FOR MICRO-BATCH PROCESSING OF DATA
This disclosure relates to micro-batch processing of data. Micro-batch processing of data may be accomplished by receiving data conveying information pertaining to operation of client computing platforms. For a general time duration, the data may be added to a general queue. The data in the general queue may be processed in memory in accordance with a general job. For one or more specific time durations, the data may be added to one or more specific queues based on the client computing platform to which the data pertains. The data in the one or more specific queues may be processed in memory in accordance with one or more specific jobs. One or more errors in processing the data may be detected. The data corresponding to the detected error may be added to a skip queue.
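One way to picture the general queue, per-client specific queues and skip queue is sketched below; the record shape, the micro_batch helper and the example jobs are hypothetical stand-ins rather than the claimed method.

```python
# Minimal sketch (hypothetical names): incoming records are batched into a general
# queue and into per-client specific queues; records whose processing raises an
# error are moved to a skip queue instead of failing the whole batch.
from collections import defaultdict, deque


def micro_batch(records, general_job, specific_jobs):
    general_queue = deque(records)                      # general time-duration batch
    specific_queues = defaultdict(deque)
    for record in records:                              # specific queues keyed by client
        specific_queues[record["client"]].append(record)

    skip_queue = []
    while general_queue:
        record = general_queue.popleft()
        try:
            general_job(record)
        except Exception:
            skip_queue.append(record)                   # detected error -> skip queue

    for client, queue in specific_queues.items():
        job = specific_jobs.get(client, lambda r: None)
        while queue:
            record = queue.popleft()
            try:
                job(record)
            except Exception:
                skip_queue.append(record)
    return skip_queue


if __name__ == "__main__":
    records = [{"client": "a", "value": 1}, {"client": "b", "value": None}]
    skipped = micro_batch(
        records,
        general_job=lambda r: r["value"] + 1,           # fails on the None value
        specific_jobs={"a": lambda r: r["value"] * 2},
    )
    print(skipped)   # the record that raised is queued for later handling
```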
Self-healing job executor pool
Aspects of the present disclosure relate to a self-healing job executor pool. A server detects that a job executing on an executor failed. The server determines, based on at least one factor from a predetermined set of executor-related factors, that the job executing on the executor failed due to a state of the executor. The server adjusts, in response to determining that the job executing on the executor failed due to the state of the executor, the state of the executor to a known good state, where the known good state is selected from a stored set of known good states.
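A compact sketch of the healing step follows. The executor-related factors (memory use, open handles), their limits and the stored known good state are illustrative assumptions, not the factors enumerated in the patent.

```python
# Minimal sketch (hypothetical names): when a job fails, the pool checks a fixed
# set of executor-related factors; if the failure is attributed to the executor's
# state, that state is reset to a stored known good state.
KNOWN_GOOD_STATE = {"memory_used_mb": 0, "open_handles": 0}
FACTOR_LIMITS = {"memory_used_mb": 1024, "open_handles": 100}


def heal_if_executor_fault(executor):
    # Attribute the failure to the executor if any tracked factor exceeds its limit.
    executor_at_fault = any(
        executor["state"][factor] > limit for factor, limit in FACTOR_LIMITS.items()
    )
    if executor_at_fault:
        executor["state"] = dict(KNOWN_GOOD_STATE)   # reset to the known good state
    return executor_at_fault


if __name__ == "__main__":
    executor = {"id": "exec-7", "state": {"memory_used_mb": 2048, "open_handles": 12}}
    print(heal_if_executor_fault(executor))   # True: memory factor implicates the executor
    print(executor["state"])                  # restored to the known good state
```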
RESTART TOLERANCE IN SYSTEM MONITORING
When a restart event is detected within a technology landscape, restart-impacted performance metrics and non-restart-impacted performance metrics may be identified. The non-restart-impacted performance metrics may continue to be included within a performance characterization of the technology landscape. The restart-impacted performance metrics may be monitored, while being excluded from the performance characterization. A restart-impacted performance metric may be transitioned back to a non-restart-impacted performance metric based on its monitored value following the restart event.
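The include/exclude/transition cycle can be shown in a few lines; the characterize function, the metric names and the normal-band thresholds below are assumptions used only to make the behavior concrete.

```python
# Minimal sketch (hypothetical names and thresholds): after a restart event,
# metrics tagged as restart-impacted are excluded from the performance
# characterization but still monitored; once a monitored value returns to its
# normal band, the metric transitions back to non-restart-impacted.
def characterize(metrics, restart_impacted, normal_bands):
    """metrics: {name: value}; restart_impacted: set of names currently excluded."""
    recovered = {
        name for name in restart_impacted
        if normal_bands[name][0] <= metrics[name] <= normal_bands[name][1]
    }
    restart_impacted -= recovered               # transition back once values stabilize
    included = {n: v for n, v in metrics.items() if n not in restart_impacted}
    return included                             # the performance characterization


if __name__ == "__main__":
    bands = {"cpu_pct": (0, 80), "cache_hit_pct": (90, 100)}
    impacted = {"cache_hit_pct"}                # cold cache right after the restart
    print(characterize({"cpu_pct": 35, "cache_hit_pct": 40}, impacted, bands))
    print(characterize({"cpu_pct": 33, "cache_hit_pct": 96}, impacted, bands))
```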
Automated problem diagnosis on logs using anomalous telemetry analysis
Systems and techniques are described for performing automatic problem diagnosis. Telemetry data of a system can be analyzed to identify a set of time ranges during which the telemetry data exhibits anomalous behavior. Next, a subset of log entries having a timestamp that is in one of the time ranges in the set of time ranges can be extracted from a set of log entries generated by the system. The subset of log entries can then be analyzed, by using natural language processing, to identify a subset of the subset of log entries that has a high likelihood of being associated with one or more problems in the system. Next, human-readable text can be extracted from the subset of the subset of log entries. A knowledge database can then be searched by using the human-readable text to identify one or more solutions to resolve the one or more problems in the system.
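Below is a heavily simplified end-to-end sketch of that pipeline. The anomaly threshold, the keyword filter standing in for the natural-language step, and the tiny keyword-keyed knowledge base are all assumptions made for illustration.

```python
# Minimal sketch (hypothetical names): telemetry samples are scanned for anomalous
# time ranges, log entries whose timestamps fall in those ranges are extracted,
# a keyword filter stands in for the natural-language step, and the surviving
# text is matched against a small knowledge base of known solutions.
def anomalous_ranges(samples, threshold):
    """samples: [(timestamp, value)]; returns (start, end) pairs of anomalous seconds."""
    return [(t, t + 1) for t, v in samples if v > threshold]


def diagnose(samples, logs, knowledge_base, threshold=90):
    ranges = anomalous_ranges(samples, threshold)
    in_range = [
        (t, msg) for t, msg in logs
        if any(start <= t < end for start, end in ranges)
    ]
    # Stand-in for NLP: keep entries whose text suggests a problem.
    suspect = [msg for _, msg in in_range if "error" in msg.lower() or "fail" in msg.lower()]
    solutions = [
        solution for keyword, solution in knowledge_base.items()
        if any(keyword in msg.lower() for msg in suspect)
    ]
    return suspect, solutions


if __name__ == "__main__":
    telemetry = [(100, 42), (101, 97), (102, 41)]          # spike at t=101
    logs = [(100, "request ok"), (101, "ERROR: connection pool exhausted")]
    kb = {"connection pool": "increase max pool size or lower request concurrency"}
    print(diagnose(telemetry, logs, kb))
```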