Patent classifications
G06F3/04842
Reduced friction for merchant interactions
Improvements to existing technologies associated with point-of-sale transactions and merchant ecosystems are described that, among other things, reduce in-person contact and, in some examples, improve the efficiency with which point-of-sale transactions are completed (i.e., reduce friction). In some examples, such reduced in-person contact and/or improved efficiency can limit transmission of infectious diseases. As such, the techniques described are directed to modifying aspects of point-of-sale transactions so that they occur on different computing devices (e.g., customer computing devices instead of merchant computing devices), are automated, and/or occur at different times than in conventional point-of-sale transactions. Furthermore, in at least one example, the techniques described can leverage a distributed, network-based merchant ecosystem, comprising multiple merchant computing devices and/or customer computing devices specially configured to communicate with a service provider, to facilitate social distancing, which can reduce in-person contact and, in some examples, improve the efficiency with which point-of-sale transactions are completed.
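The abstract above can be read as reassigning and deferring steps of a checkout flow. The sketch below illustrates that idea only; the step names, device names, and the specific reassignment are assumptions for illustration, not the patented method.

```python
# Loose sketch: steps of a point-of-sale transaction are reassigned from
# the merchant's device to the customer's device and, where possible,
# deferred, shortening the in-person portion of checkout.

DEFAULT_FLOW = [
    ("scan_items", "merchant_device"),
    ("present_total", "merchant_device"),
    ("capture_payment", "merchant_device"),
    ("issue_receipt", "merchant_device"),
]

def reduce_friction(flow):
    """Move payment capture and receipt issuance off the merchant device;
    the receipt step can also run after the customer has left."""
    moved = {
        "capture_payment": "customer_device",
        "issue_receipt": "customer_device (deferred)",
    }
    return [(step, moved.get(step, device)) for step, device in flow]

for step, device in reduce_friction(DEFAULT_FLOW):
    print(step, "->", device)
```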
Payload recording and comparison techniques for discovery
Persistent storage may contain an input discovery payload that contains entries representing configuration items and relationships therebetween, wherein the configuration items contain attributes defining devices, components, or applications on a network. One or more processors may be configured to: provide, for display, a graphical user interface containing a representation of the input discovery payload and a button; provide the input discovery payload to an identification and reconciliation engine (IRE) software application; receive, from the IRE software application, an output discovery payload that includes a log generated from execution of the IRE software application on the input discovery payload, wherein the log indicates, for the configuration items and the relationships in the input discovery payload, how a configuration management database (CMDB) would be updated by the IRE software application; and provide, for display, a further graphical user interface containing a further representation of the output discovery payload.
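The preview flow above, where a discovery payload is passed through the identification and reconciliation engine and a log of would-be CMDB updates comes back, can be sketched as follows. This is a minimal simulation; the payload shape, function names, and update rules are assumptions for illustration, not the actual IRE or CMDB API.

```python
# Hypothetical sketch of the preview flow: an input discovery payload is
# run through a simulated identification-and-reconciliation step, which
# produces a log of proposed CMDB updates without committing them.

def simulate_ire(payload, cmdb):
    """Return an output payload whose log indicates, per configuration
    item, how the CMDB would be updated (create, update, or no-op)."""
    log = []
    for item in payload["items"]:
        key = item["name"]
        if key in cmdb:
            action = "update" if cmdb[key] != item["attributes"] else "no-op"
        else:
            action = "create"
        log.append({"item": key, "action": action})
    return {"items": payload["items"], "log": log}

cmdb = {"web-server-1": {"ip": "10.0.0.5"}}
payload = {"items": [
    {"name": "web-server-1", "attributes": {"ip": "10.0.0.9"}},
    {"name": "db-server-1", "attributes": {"ip": "10.0.0.7"}},
]}
result = simulate_ire(payload, cmdb)
for entry in result["log"]:
    print(entry["item"], "->", entry["action"])
```

A graphical user interface would then render `result` as the "further representation of the output discovery payload" for the user to review before any CMDB write occurs.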
3D user interface depth forgiveness
A head-worn device system includes one or more cameras, one or more display devices, and one or more processors. The system also includes a memory storing instructions that, when executed by the one or more processors, configure the system to generate a virtual object, generate a virtual object collider for the virtual object, determine a conic collider for the virtual object, provide the virtual object to a user, detect a landmark on the user's hand in the real world, generate a landmark collider for the landmark, and determine a selection of the virtual object by the user based on detecting a collision of the landmark collider with both the conic collider and the virtual object collider.
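The "depth forgiveness" idea above can be sketched geometrically: testing the hand landmark against both a sphere collider around the object and a cone extending from the camera through the object tolerates small errors in perceived depth. The shapes, thresholds, and coordinates below are illustrative assumptions, not the patented implementation.

```python
import math

# Illustrative sketch of selection with depth forgiveness: the fingertip
# landmark must collide with a sphere around the virtual object AND with
# a cone from the camera through the object, so a reach that is slightly
# short or long in depth still registers as a selection.

def in_sphere(point, center, radius):
    return math.dist(point, center) <= radius

def in_cone(point, apex, axis, half_angle_deg):
    """True if point lies inside an infinite cone from apex along axis."""
    v = [p - a for p, a in zip(point, apex)]
    norm = math.hypot(*v)
    if norm == 0:
        return True
    cos_angle = sum(a * b for a, b in zip(v, axis)) / (norm * math.hypot(*axis))
    return cos_angle >= math.cos(math.radians(half_angle_deg))

camera = (0.0, 0.0, 0.0)
obj_center = (0.0, 0.0, 2.0)
fingertip = (0.05, 0.0, 1.8)   # slightly short of the object in depth

selected = (in_cone(fingertip, camera, obj_center, half_angle_deg=5.0)
            and in_sphere(fingertip, obj_center, radius=0.3))
print(selected)   # True: the short reach still selects the object
```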
Methods and systems for reducing inadvertent interactions with advertisements displayed on a computing device
A computing device can receive an interactive advertisement comprising a first content object and a second content object. The computing device can display the first content object, corresponding to a collapsed version of the interactive advertisement. The computing device can receive a first action to activate the interactive advertisement. Responsive to receiving the first action, the computing device can provide for display a target object identifying a location on the display screen to which to move the first content object. The computing device can receive a second action to move the first content object toward the target object. The computing device can then provide for display the second content object, corresponding to an expanded version of the interactive advertisement on the display screen of the computing device.
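The two-step gesture above, activate first, then drag onto a target, is what prevents inadvertent expansion. A minimal state-machine sketch follows; the class, snap threshold, and coordinates are illustrative assumptions.

```python
# Sketch of the two-step expansion gesture: the collapsed ad expands only
# after the user first activates it (showing the target object) and then
# drags the first content object close enough to that target.

class InteractiveAd:
    SNAP_DISTANCE = 40  # px; assumed threshold for "reaching" the target

    def __init__(self, target_pos):
        self.state = "collapsed"
        self.target_pos = target_pos

    def tap(self):
        if self.state == "collapsed":
            self.state = "armed"   # first action: target object is shown

    def drag(self, pos):
        if self.state != "armed":
            return                 # drags before activation are ignored
        dx = pos[0] - self.target_pos[0]
        dy = pos[1] - self.target_pos[1]
        if (dx * dx + dy * dy) ** 0.5 <= self.SNAP_DISTANCE:
            self.state = "expanded"

ad = InteractiveAd(target_pos=(300, 500))
ad.drag((300, 500))        # inadvertent drag before activation: ignored
print(ad.state)            # collapsed
ad.tap()
ad.drag((310, 480))        # within snap distance of the target
print(ad.state)            # expanded
```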
Image forming apparatus and information processing method
An image forming apparatus includes a display device, a first detection unit, a second detection unit, a first acquisition unit, and an output unit. The display device displays at least one image for display. The first detection unit detects a user's selection of an image for display from among the at least one image for display. The second detection unit detects, in a drawing region, an input of a drawing indicating an image quality abnormality with respect to a target image corresponding to the selected image for display. The first acquisition unit acquires drawing information based on the input of the drawing detected by the second detection unit. The output unit outputs the drawing information acquired by the first acquisition unit.
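The acquisition step above, turning raw drawn strokes into drawing information tied to the selected image, can be sketched as below. The data shapes and the choice of a bounding box as the derived information are assumptions for illustration, not the apparatus's actual behavior.

```python
# Minimal sketch of the annotation flow: the user selects a displayed
# image, draws over the region showing an image-quality abnormality, and
# the device outputs drawing information derived from those strokes.

def acquire_drawing_info(selected_image, strokes):
    """Reduce raw stroke points to drawing information: the strokes plus
    a bounding box locating the reported abnormality."""
    xs = [x for stroke in strokes for x, _ in stroke]
    ys = [y for stroke in strokes for _, y in stroke]
    return {
        "image": selected_image,
        "strokes": strokes,
        "bounding_box": (min(xs), min(ys), max(xs), max(ys)),
    }

info = acquire_drawing_info(
    "scan_003.png",
    strokes=[[(120, 80), (140, 95)], [(130, 70), (150, 90)]],
)
print(info["bounding_box"])   # (120, 70, 150, 95)
```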
Methods and systems for populating application-specific information using overlay applications
Methods and systems are described herein for populating application-specific information using overlay applications. For example, to relieve some of the difficulties users face in inputting information into mobile devices, which may have smaller screen sizes and may lack dedicated input mechanisms, the methods and systems described herein automatically populate application-specific information. They do this using an application that presents an application overlay feature. That is, the application is accessible while a user is using another application (e.g., on the mobile device) and/or while a user is scrolling through other applications.
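At its core, the overlay must know which application is in the foreground and which of the user's stored values that application needs. A minimal sketch of that lookup follows; the registry, profile fields, and function names are illustrative assumptions.

```python
# Hedged sketch of the overlay idea: a small always-available overlay
# detects the foreground application and fills in information specific
# to it, sparing the user manual entry on a small screen.

PROFILE = {"email": "user@example.com", "card_last4": "4242"}

# Which stored fields each known application expects (assumed mapping).
APP_FIELDS = {
    "shopping_app": ["email", "card_last4"],
    "travel_app": ["email"],
}

def populate_for(foreground_app):
    """Return the application-specific values the overlay would insert;
    unknown applications get nothing populated."""
    fields = APP_FIELDS.get(foreground_app, [])
    return {field: PROFILE[field] for field in fields}

print(populate_for("shopping_app"))
print(populate_for("unknown_app"))
```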
Augmented reality placement for user feedback
Methods and systems are provided for generating augmented reality (AR) scenes in which one or more artificial intelligence elements (AIEs) are rendered as visual objects. The method includes generating an AR scene for rendering on a display; the AR scene includes a real-world space and virtual objects projected into the real-world space. The method includes analyzing a field of view into the AR scene; the analyzing is configured to detect an action by a hand of the user reaching into the AR scene. The method includes generating one or more AIEs rendered as virtual objects in the AR scene, each AIE configured to provide a dynamic interface selectable by a gesture of the user's hand. In one embodiment, each AIE is rendered proximate to a real-world object present in the real-world space; the real-world object is located in the direction in which the user's hand is detected to be reaching when the user makes the action.
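The placement rule above, anchor an AIE next to the real-world object lying in the hand's reach direction, reduces to picking the scene object best aligned with that direction. The sketch below illustrates one way to do that; the scene contents, angle threshold, and function names are assumptions, not the patented method.

```python
import math

# Illustrative sketch: when the user's hand reaches into the AR scene,
# the AIE is anchored next to the real-world object whose direction from
# the hand is most closely aligned with the reach direction.

def pick_reached_object(hand_pos, reach_dir, objects, max_angle_deg=20.0):
    """Return the object best aligned with the reach direction, or None
    if no object falls within the angular threshold."""
    best, best_cos = None, math.cos(math.radians(max_angle_deg))
    dir_norm = math.hypot(*reach_dir)
    for name, pos in objects.items():
        v = [p - h for p, h in zip(pos, hand_pos)]
        cos = sum(a * b for a, b in zip(v, reach_dir)) / (math.hypot(*v) * dir_norm)
        if cos >= best_cos:
            best, best_cos = name, cos
    return best

objects = {"lamp": (1.0, 0.2, 2.0), "mug": (-0.8, -0.1, 1.5)}
target = pick_reached_object((0.0, 0.0, 0.5), (1.0, 0.2, 1.5), objects)
print(target)   # lamp: the AIE would be rendered proximate to it
```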