Patent classifications
G06F2212/464
GARBAGE COLLECTION OF PRELOADED TIME-BASED GRAPH DATA
The described technology is generally directed towards garbage collecting content selection graphs and related data from an in-memory content selection graph data store. When a set of content selection graphs expires, a more current content selection graph set becomes active, and the storage space (e.g., in a Redis cache) used by the expired content selection graphs is reclaimed via garbage collection. Some graphs can be replaced before use; these are referred to as orphaned graphs, and the storage space for any such orphaned graphs is also reclaimed during garbage collection. Storage space for related data structures used to generate and validate the graphs is likewise garbage collected.
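The lifecycle above might be sketched as follows. This is a minimal illustration, not the patented implementation: Redis and the real graph format are abstracted away, and the `GraphStore` class, its field names, and the activation logic are assumptions made for the example.

```python
class GraphStore:
    """In-memory store of time-based content selection graph sets.

    Each set has an activation time; when a more current set becomes active,
    superseded sets become eligible for garbage collection. A set that was
    replaced before it was ever used is treated as "orphaned".
    """

    def __init__(self):
        self._sets = {}  # set_id -> {"graphs": ..., "active_from": ts, "used": bool}
        self._active_id = None

    def load_set(self, set_id, graphs, active_from):
        self._sets[set_id] = {"graphs": graphs, "active_from": active_from, "used": False}

    def activate(self, now):
        """Make the most current eligible set the active one."""
        eligible = [sid for sid, s in self._sets.items() if s["active_from"] <= now]
        if eligible:
            self._active_id = max(eligible, key=lambda sid: self._sets[sid]["active_from"])
            self._sets[self._active_id]["used"] = True
        return self._active_id

    def garbage_collect(self):
        """Reclaim space held by expired sets and by orphaned (never-used) sets."""
        reclaimed = []
        for sid in list(self._sets):
            if sid == self._active_id:
                continue
            s = self._sets[sid]
            superseded = (self._active_id is not None and
                          s["active_from"] <= self._sets[self._active_id]["active_from"])
            if superseded:
                # expired if it was used while active, orphaned if replaced before use
                reclaimed.append((sid, "expired" if s["used"] else "orphaned"))
                del self._sets[sid]
        return reclaimed
```

In this sketch, a set loaded but superseded before `activate` ever selected it is reported as orphaned, while a previously active set is reported as expired; both are deleted in the same collection pass, mirroring the abstract's single reclamation step.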
Asset processing from persistent memory
In some examples, during execution of an application, as an application asset is called, an asset map stored in a persistent memory device is searched for an asset identifier associated with the application asset. Using this asset identifier, the application asset stored in the persistent memory device is located. The persistent memory device is directly accessed by the processor executing the application, which processes the application asset from its location in the persistent memory device.
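The lookup-and-process flow might be sketched as follows, with a `bytearray` standing in for the byte-addressable persistent memory region and `memoryview` standing in for direct (zero-copy) processor access. The `PersistentMemory` class and its asset-map layout are assumptions for illustration, not the disclosed design.

```python
class PersistentMemory:
    """Simulated persistent memory region holding assets and an asset map."""

    def __init__(self, size):
        self._buf = bytearray(size)
        self.asset_map = {}  # asset name -> (asset_id, offset, length)
        self._next = 0

    def store_asset(self, name, data):
        offset = self._next
        self._buf[offset:offset + len(data)] = data
        asset_id = len(self.asset_map) + 1
        self.asset_map[name] = (asset_id, offset, len(data))
        self._next += len(data)
        return asset_id

    def locate(self, asset_id):
        """Use an asset identifier to find the asset's location in the region."""
        for aid, offset, length in self.asset_map.values():
            if aid == asset_id:
                return offset, length
        raise KeyError(asset_id)

    def access(self, offset, length):
        """Direct access: a zero-copy view into the memory region."""
        return memoryview(self._buf)[offset:offset + length]


def process_asset(pmem, name):
    # 1. search the asset map for the identifier associated with the called asset
    asset_id, _, _ = pmem.asset_map[name]
    # 2. use the identifier to locate the asset in persistent memory
    offset, length = pmem.locate(asset_id)
    # 3. process the asset in place, without first copying it to main memory
    return bytes(pmem.access(offset, length))
```

The `memoryview` in `access` is the point of the sketch: the "processing" step reads the asset from its in-place location rather than staging a copy, which is the behavior the abstract attributes to direct persistent-memory access.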
CUSTOMIZED MEDIA OVERLAYS
Among other things, embodiments of the present disclosure improve the functionality of electronic messaging and imaging software and systems by enabling users to generate customized media overlays that can be shared with other users. For example, media overlays can be generated by the system and displayed in conjunction with media content (e.g., images and/or video) generated by an image-capturing device (e.g., a digital camera). In some embodiments, existing media overlays may be used by users to create derivative media overlays. The system may track usage of media overlays and any derivatives created based thereon, and allow users to control the distribution and use of their overlays in future derivatives. In some embodiments, for example, a user can modify an overlay they created and cause the modification to propagate to all derivative overlays based on the user’s overlay.
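The derivative-tracking and propagation behavior might be sketched as a small parent/child structure. The `Overlay` class, its fields, and the string-based "modification" are illustrative assumptions only; the actual system operates on media overlays, not strings.

```python
class Overlay:
    """A media overlay that tracks derivatives and the creator's controls."""

    def __init__(self, owner, content, parent=None, allow_derivatives=True):
        self.owner = owner
        self.content = content
        self.parent = parent
        self.allow_derivatives = allow_derivatives  # creator's distribution control
        self.derivatives = []

    def derive(self, owner, extra):
        """Create a derivative overlay, if the original creator permits it."""
        if not self.allow_derivatives:
            raise PermissionError(f"{self.owner} disallows derivatives")
        child = Overlay(owner, self.content + extra, parent=self)
        self.derivatives.append(child)
        return child

    def modify(self, new_content):
        """Modify this overlay and propagate the change to all derivatives."""
        old = self.content
        self.content = new_content
        for d in self.derivatives:
            d.modify(d.content.replace(old, new_content))
```

Here a change to the base overlay's content cascades recursively through every derivative built on it, which is the propagation behavior the abstract describes; the `allow_derivatives` flag stands in for the user's control over future derivative use.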
Cached data expiration and refresh
The described technology is directed towards maintaining a cache of data items, where each cached data item has a current value subset and a next value subset. On a data item request, the cache returns a cache miss if the requested data item is not cached; returns data from the current value subset if it is not expired; returns data from the next value subset if the current value subset is expired and the next value subset is not; or returns a cache miss (or expired data) if both subsets are expired. Cached data items are refreshed (e.g., periodically): when a data item's current value subset is expired, the current value subset is replaced with the next value subset and a new next value subset is cached; alternatively, a new next value subset is cached when the existing next value subset will expire within a threshold time.
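The lookup and refresh rules above might be sketched as follows. The `DualValueCache` class, its tuple layout, and the `fetch_next` callback are assumptions made for the example, not the claimed implementation.

```python
MISS = object()  # sentinel distinguishing a miss from a cached None

class DualValueCache:
    """Cache where each item carries a current value subset and a next value
    subset, each with its own expiration timestamp."""

    def __init__(self, refresh_threshold):
        self._items = {}  # key -> [(current_value, current_exp), (next_value, next_exp)]
        self.refresh_threshold = refresh_threshold

    def put(self, key, current, current_exp, nxt, next_exp):
        self._items[key] = [(current, current_exp), (nxt, next_exp)]

    def get(self, key, now):
        entry = self._items.get(key)
        if entry is None:
            return MISS                  # requested item is not cached
        (cur, cur_exp), (nxt, nxt_exp) = entry
        if now < cur_exp:
            return cur                   # current value subset not expired
        if now < nxt_exp:
            return nxt                   # fall forward to the next value subset
        return MISS                      # both subsets expired

    def refresh(self, key, now, fetch_next):
        """Periodic refresh: promote next -> current once current expires, or
        pre-fetch a fresh next subset when it is about to expire."""
        (cur, cur_exp), (nxt, nxt_exp) = self._items[key]
        if now >= cur_exp:
            self._items[key] = [(nxt, nxt_exp), fetch_next()]
        elif nxt_exp - now <= self.refresh_threshold:
            self._items[key] = [(cur, cur_exp), fetch_next()]
```

Note that `get` never blocks on a fetch: readers are served from whichever subset is still valid, while the refresh path (driven separately, e.g. by a periodic task) keeps the next subset populated ahead of expiration.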
MULTI-STATE MIDTIER CACHE
A server includes a data cache for storing data objects requested by mobile devices, desktop devices, and server devices, each of which may execute a different configuration of an application. When a cache miss occurs, the cache may begin loading portions of a requested data object from various data sources. Instead of waiting for the entire object to load to change the object state to “valid,” the cache may incrementally update the state through various levels of validity based on the calling application configurations. When a portion of the data object used by a mobile configuration is received, the object state can be upgraded to be valid for mobile devices while data for desktop and other devices continues to load, etc. The mobile portion of the data object can then be sent to the mobile devices without waiting for the rest of the data object to load.
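The incremental validity states might be sketched as follows. The level ordering (mobile, then desktop, then server) and the `CacheEntry` shape are assumptions chosen to match the example in the abstract, not the actual cache design.

```python
from enum import IntEnum

class Validity(IntEnum):
    INVALID = 0
    MOBILE = 1    # mobile portion loaded
    DESKTOP = 2   # mobile and desktop portions loaded
    FULL = 3      # all portions loaded, valid for server configurations too

# assumed mapping from application configuration to the validity level it needs
PORTION_LEVEL = {"mobile": Validity.MOBILE,
                 "desktop": Validity.DESKTOP,
                 "server": Validity.FULL}

class CacheEntry:
    """One cached data object whose state upgrades as portions arrive."""

    def __init__(self):
        self.portions = {}
        self.state = Validity.INVALID

    def load_portion(self, name, data):
        """Record an arriving portion and upgrade validity incrementally:
        the state is the highest level whose portions are all present."""
        self.portions[name] = data
        level = Validity.INVALID
        for i, portion in enumerate(["mobile", "desktop", "server"], start=1):
            if portion in self.portions:
                level = Validity(i)
            else:
                break
        self.state = level

    def get_for(self, config):
        """Serve a caller as soon as the entry is valid for its configuration,
        without waiting for the remaining portions to load."""
        if self.state >= PORTION_LEVEL[config]:
            return self.portions.get(config)
        return None
```

A mobile caller is answered as soon as the mobile portion lands, while a desktop caller keeps missing until the desktop portion also arrives, which mirrors the "valid for mobile while desktop data continues to load" behavior described above.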
Content Distribution Network Supporting Popularity-Based Caching
A content delivery network may provide content items to requesting devices using a popularity-based distribution hierarchy. A central analysis system may determine popularity data for a content item stored in a first caching device. The central analysis system may determine that a change in the popularity data is beyond a threshold value. The central analysis system may then transmit an instruction to move the content item from the first caching device to a second caching device in a different tier of caching devices than the first caching device. The central analysis system may update a content index to indicate that the content item has been moved to the second caching device. A user device may be redirected to request the content item directly from the second caching device.
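The central analysis flow might be sketched as follows. The `CentralAnalysis` class, the two-tier promote/demote policy, and the rate-based popularity measure are all assumptions for illustration; the abstract does not specify how popularity is computed.

```python
class CentralAnalysis:
    """Sketch of a central analysis system that moves a content item between
    caching tiers when its popularity changes beyond a threshold, and keeps
    the content index pointing at the item's current device."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.popularity = {}  # item -> last recorded popularity (e.g. request rate)
        self.index = {}       # content index: item -> caching device holding it

    def place(self, item, device):
        """Initial placement of an item on a caching device."""
        self.index[item] = device
        self.popularity[item] = 0

    def record(self, item, rate, edge_device, midtier_device):
        """Record new popularity data; if the change exceeds the threshold,
        instruct a move to a different tier and update the content index."""
        prev = self.popularity.get(item, rate)
        self.popularity[item] = rate
        if abs(rate - prev) > self.threshold:
            # promote rising items toward the edge tier, demote falling ones
            target = edge_device if rate > prev else midtier_device
            if self.index.get(item) != target:
                self.index[item] = target
                return target  # the move instruction's destination
        return None

    def redirect(self, item):
        """A user device is redirected to request the item from its device."""
        return self.index[item]
```

Consulting `redirect` after a move returns the second caching device directly, so the requesting device never goes through the original tier, matching the redirection step in the abstract.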