Patent classifications
G06F15/00
Printing apparatus and method of controlling the same, and storage medium
A printing apparatus of the present invention receives a print job from an information processing apparatus and performs printing. The printing apparatus obtains, from the information processing apparatus, information indicating the number of pages per copy of the print job, determines whether double-sided printing is to be performed for the print job, and determines, based on the obtained information, whether the number of pages per copy of the print job is an odd number. The printing apparatus performs control to print by inserting a blank page before the page that is received first in the print job, based on a determination that the number of pages per copy of the print job is an odd number.
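The blank-page control described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; `insert_blank_if_needed` and the `"<blank>"` marker are hypothetical names, and the sketch assumes the motivation is that an odd page count in double-sided printing would otherwise leave copies misaligned on sheets.

```python
def insert_blank_if_needed(pages, duplex):
    """Sketch: for a double-sided print job whose number of pages per
    copy is odd, insert a blank page before the first-received page so
    each copy occupies a whole number of sheets. Names are hypothetical."""
    if duplex and len(pages) % 2 == 1:
        return ["<blank>"] + list(pages)
    return list(pages)
```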
Server system, and printing apparatus having capability information identified by different server system and used for displaying print setting screen
A server system includes a first receiving unit configured to receive, from a printing apparatus operated by a user, a registration request for information regarding the printing apparatus, an acquisition unit configured to acquire information to be used for acquisition of capability information of the printing apparatus, from the printing apparatus that has transmitted the registration request, a transmission unit configured to transmit the acquired information to a different server system, and a second receiving unit configured to receive, from the different server system, capability information of the printing apparatus that is identified by the different server system based on the transmitted information.
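The four-step flow claimed above (receive a registration request, acquire identifying information from the printer, transmit it to a different server system, receive back the capability information) can be sketched as follows. All class and method names are hypothetical, and the "different server system" is modeled as a simple lookup table for illustration.

```python
class CapabilityRegistry:
    """Hedged sketch of the claimed flow, with hypothetical names:
    1) receive a registration request from a printing apparatus,
    2) acquire information used to identify its capabilities,
    3) transmit that information to a different server system,
    4) receive the capability information identified by that system."""

    def __init__(self, other_server):
        # other_server: stands in for the different server system,
        # mapping identifying info -> capability information.
        self.other_server = other_server
        self.registered = {}

    def handle_registration(self, printer_id, identifying_info):
        # Steps 3-4: forward the info and receive the capabilities.
        caps = self.other_server.get(identifying_info, {})
        self.registered[printer_id] = caps
        return caps
```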
Reinforcement learning for training compression policies for machine learning models
A compression policy to produce compression profiles for compressing trained machine learning models may be trained using reinforcement learning. Iterative reinforcement learning may be performed in response to a search request. Different prospective compression profiles may be generated for received machine learning models according to a compression policy being trained. Performance of compressed versions of the machine learning models according to the compression profiles may be evaluated using the data sets used to train the machine learning models. The compression policy may be updated according to a reward signal determined from an application of a reward function for performance criteria to the performance results of the different compressed versions of the machine learning models. When a search criterion is satisfied, the trained compression policy may be provided.
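The loop described above can be sketched in miniature. Here the "policy" is reduced to a single mean pruning ratio, candidate profiles are random perturbations of it, and the reward function is supplied by the caller; all of this is an illustrative assumption, not the patent's method.

```python
import random

def train_compression_policy(evaluate, n_candidates=4, iterations=25, seed=0):
    """Hedged sketch of the described loop. The policy here is just a
    mean pruning ratio in [0, 1]; each iteration samples prospective
    compression profiles around it, scores them with a caller-supplied
    reward function, and nudges the policy toward the best profile."""
    rng = random.Random(seed)
    policy = 0.5
    for _ in range(iterations):
        # Generate different prospective compression profiles.
        profiles = [min(1.0, max(0.0, policy + rng.uniform(-0.1, 0.1)))
                    for _ in range(n_candidates)]
        # Reward signal from evaluating each compressed version.
        rewards = [evaluate(p) for p in profiles]
        best = profiles[rewards.index(max(rewards))]
        # Update the policy toward the best-scoring profile.
        policy += 0.5 * (best - policy)
    return policy
```

With a reward that peaks at a pruning ratio of 0.8, the policy drifts toward 0.8 over the iterations.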
Fitness activity related messaging
In one embodiment, a method for generating a message to a friend of a user is provided, comprising: processing activity data of a first user measured by an activity monitoring device to update a value of an activity metric for the first user; identifying a change in an inequality relationship between the value of the activity metric for the first user and a value of the activity metric for a second user; in response to identifying the change in the inequality relationship, prompting the first user to generate a message to the second user.
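The "change in an inequality relationship" above is essentially an overtake detection: the prompt fires when the ordering between the two users' metric values flips. A minimal sketch, with a hypothetical function name:

```python
def should_prompt(old_first, old_second, new_first, new_second):
    """Sketch: return True when the inequality relationship between the
    first and second user's activity metric values has changed (e.g. the
    first user has overtaken the second), which would trigger a prompt
    to generate a message."""
    sign = lambda x: (x > 0) - (x < 0)
    return sign(old_first - old_second) != sign(new_first - new_second)
```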
DEVICE AND METHOD FOR SHARED MEMORY PROCESSING AND NON-TRANSITORY COMPUTER STORAGE MEDIUM
A device for shared memory processing is provided in implementations of the disclosure. The device for shared memory processing includes a set of shared memory units, a set of processing units, and a set of global clock synchronizers. Each shared memory unit corresponds to one global clock synchronizer and is coupled with K processing units via the corresponding global clock synchronizer, and the coupled K processing units perform conflict-free memory access to the shared memory unit during one instruction cycle of the corresponding global clock synchronizer. One instruction cycle of each global clock synchronizer includes N clocks, K is less than or equal to N, and K and N are integers greater than zero. A method for shared memory processing and a non-transitory computer storage medium are also provided.
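One way conflict-free access with K ≤ N can work is time-slotting: each of the K processing units is assigned its own clock slot within the N-clock instruction cycle, so no two units touch the shared memory unit on the same clock. The sketch below assumes that scheme; the function names are hypothetical.

```python
def assign_slots(processor_ids, clocks_per_cycle):
    """Sketch: assign each of K processing units a distinct clock slot
    within an N-clock instruction cycle (K <= N), so accesses to the
    shared memory unit never collide within a cycle."""
    if len(processor_ids) > clocks_per_cycle:
        raise ValueError("need K <= N for conflict-free access")
    return {pid: slot for slot, pid in enumerate(processor_ids)}

def may_access(pid, clock, slots, clocks_per_cycle):
    """A unit may access the shared memory only on its own slot."""
    return slots[pid] == clock % clocks_per_cycle
```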
System and method for server connection using multiple network adapters
A system and method for managing print services between multifunction peripherals and a print server includes a connector server. The connector server subscribes to event notifications from each multifunction peripheral through two or more network adapters. Events are relayed from the connector server to the print server, irrespective of which network adapter receives them. The connector server also relays web content between the multifunction peripheral and the print server.
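The adapter-agnostic relay can be sketched as follows; the class name, the event strings, and modeling the print server as a plain list are all hypothetical choices for illustration.

```python
class ConnectorServer:
    """Sketch: receive device event notifications arriving on any of
    several network adapters and relay each one to the print server,
    irrespective of which adapter delivered it."""

    def __init__(self, print_server):
        # print_server: stands in for the relay target (a list here).
        self.print_server = print_server

    def on_event(self, adapter_name, event):
        # Relay regardless of the receiving adapter; the adapter name
        # is kept only as metadata.
        self.print_server.append({"via": adapter_name, "event": event})
```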
NEURAL NETWORK ACCELERATION CIRCUIT AND METHOD
Provided are a neural network acceleration circuit and method. The neural network acceleration circuit includes a data storage module, a data cache module, a computing module, and a delay processing module. The data storage module is configured to store input data required for a neural network computation. The data cache module is configured to cache input data output by the data storage module and required for the neural network computation. The computing module includes multiple computing units configured to compute input data output by the data cache module and required for the neural network computation so that multiple groups of output data are obtained. The delay processing module is configured to perform delay processing on the multiple groups of output data separately and output the multiple groups of output data subjected to the delay processing at the same time.
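The delay processing step above amounts to padding each output group's latency up to the slowest group so that all groups emerge simultaneously. A minimal sketch, assuming each group is tagged with its own arrival time (names hypothetical):

```python
def align_outputs(groups_with_latency):
    """Sketch of delay processing: given (group, latency) pairs from the
    computing units, pad each group with the extra delay needed to match
    the slowest group, so all groups are output at the same time.
    Returns the (group, added_delay) pairs and the common output time."""
    latest = max(t for _, t in groups_with_latency)
    return [(group, latest - t) for group, t in groups_with_latency], latest
```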
Automatic machine learning feature backward stripping
Features are used to train one or more ML models in a modelling layer. In a feature selection layer, each generated ML model is analyzed to determine, for each input feature, a degree of importance of the feature on the results generated by the ML model. Features with low importance are identified and the information is propagated backward to the data source and feature engineering layers. In response, the data source and feature engineering layers refrain from gathering or generating the unimportant features. Based on a confidence measure of the determination that each feature is important or unimportant, a number of periods between reevaluation of the feature importance is determined. After the number of periods has elapsed, a removed feature is restored to the pipeline.
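The selection step can be sketched as a threshold on importance plus a confidence-scaled reevaluation interval. Both the threshold scheme and the period formula below are illustrative assumptions, not the described system's actual rules.

```python
def strip_features(importances, confidence, threshold=0.05, max_period=8):
    """Sketch: keep features whose importance meets the threshold, mark
    the rest for removal, and derive per-feature reevaluation periods:
    the more confident the importance determination, the longer the wait
    before re-checking. Names and the period formula are illustrative."""
    kept, removed, periods = [], [], {}
    for name, imp in importances.items():
        (kept if imp >= threshold else removed).append(name)
        periods[name] = min(max_period,
                            max(1, round(confidence[name] * max_period)))
    return kept, removed, periods
```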
Web conference system, method, and server system
A web conference system is provided in which a second apparatus transmits, to a server, first information indicating whether a printer is connected to the second apparatus, or second information indicating a setting related to the printer connected to the second apparatus. The server transmits the first information or the second information to a first apparatus. The first apparatus displays printer information about the printer connected to the second apparatus on a display screen, based on the first information or the second information.
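The choice between transmitting first information (connection state) and second information (printer settings) can be sketched as a small message builder; the dictionary shape and field names are hypothetical.

```python
def build_printer_notice(printer_connected, printer_settings=None):
    """Sketch: the second apparatus reports either first information
    (whether a printer is connected) or, when settings are available,
    second information (settings of the connected printer). The server
    would relay this notice unchanged to the first apparatus."""
    if printer_connected and printer_settings:
        return {"type": "second", "settings": printer_settings}
    return {"type": "first", "connected": printer_connected}
```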