Patent classifications
G06V30/142
Depth-based image stabilization
Depth information can be used to assist with image processing functionality, such as image stabilization and blur reduction. In at least some embodiments, depth information obtained from stereo imaging or distance sensing, for example, can be used to determine a foreground object and background object(s) for an image or frame of video. The foreground object then can be located in later frames of video or subsequent images. Small offsets of the foreground object can be determined, and the offsets accounted for by adjusting the subsequent frames or images. Such an approach provides image stabilization for at least a foreground object while offering simplified processing and reduced power consumption. Similar processes can be used to reduce blur for an identified foreground object in a series of images, where the blur of the identified object is analyzed.
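The offset-compensation step described above could be sketched as follows. This is an illustrative reading, not the patent's implementation: the function name, the (row, col) position convention, and the use of a whole-frame translation via `np.roll` are all assumptions.

```python
import numpy as np

def stabilize_frame(frame, fg_pos_ref, fg_pos_cur):
    """Shift `frame` so the tracked foreground object returns to its
    reference position. Positions are (row, col) centers of the
    foreground object, e.g. from a depth-based segmentation.
    (Hypothetical sketch; the patent does not specify this scheme.)"""
    dr = fg_pos_ref[0] - fg_pos_cur[0]
    dc = fg_pos_ref[1] - fg_pos_cur[1]
    # Account for the small offset by translating the whole frame.
    return np.roll(frame, shift=(dr, dc), axis=(0, 1))
```

Tracking only one foreground object keeps the per-frame work to a single offset estimate and a translation, which is consistent with the abstract's claim of simplified processing.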
Position detection method, position detection device, and display device
Position detection methods and systems are disclosed herein. The position detection method of detecting a position on an operation surface pointed to by a pointing element includes obtaining a first taken image with the first infrared camera, obtaining a second taken image with the second infrared camera, removing a noise component from the first and second taken images to convert them into first and second converted images without the noise component, forming a difference image between the first converted image and the second converted image, extracting a candidate area in which a disparity amount between the first converted image and the second converted image is within a predetermined range, detecting a tip position of the pointing element from the candidate area, and determining, based on the detection, a pointing position of the pointing element and whether or not the pointing element is in contact with the operation surface.
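The noise-removal, difference-image, and candidate-extraction steps might be sketched as below. Note the simplification: a real implementation would estimate per-pixel disparity (e.g. by block matching between the two converted images); using the per-pixel difference magnitude as a proxy for the disparity amount is an assumption made here for brevity.

```python
import numpy as np

def candidate_mask(img1, img2, noise, lo, hi):
    """Hypothetical sketch: subtract a known noise component from both
    taken images, form their difference image, and keep pixels whose
    difference magnitude falls in the predetermined range [lo, hi]
    as the candidate area."""
    c1 = img1.astype(np.int32) - noise   # first converted image
    c2 = img2.astype(np.int32) - noise   # second converted image
    diff = np.abs(c1 - c2)               # difference image
    return (diff >= lo) & (diff <= hi)   # candidate area
```

The tip position would then be detected only within the returned mask, restricting the expensive search to regions whose apparent disparity is consistent with a pointing element near the surface.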
WRITING INSTRUMENT
A writing instrument comprising a body having an axis, a writing tip, a reservoir, and at least one visual mark being a specific mark for augmented reality applications, the at least one visual mark being configured to be hidden/visible when the writing tip is distant from a writing surface and to be respectively visible/hidden when the writing tip is in contact with the writing surface, at least a portion of the body being configured to allow the at least one visual mark to adopt two states: a visible configuration and a hidden configuration.
GENERATING VISUAL FEEDBACK
A method for generating visual feedback based on a textual representation comprising obtaining and processing a textual representation, identifying at least one textual feature of the textual representation, assigning at least one feature value to the at least one textual feature, and generating visual feedback based on the textual representation. The generated visual feedback comprises at least one visual feature corresponding to the at least one textual feature. A system for generating visual feedback based on a textual representation, comprising a capturing subsystem configured to capture the textual representation, a processing subsystem configured to identify at least one textual feature and to generate visual feedback based on the textual representation, and a graphical user output configured to display the generated visual feedback. The visual feedback generated based on the textual representation comprises at least one visual feature corresponding to the at least one textual feature.
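The feature-identification and feature-value-assignment pipeline could be sketched as follows. The specific textual features chosen here (length, exclamation count, uppercase ratio) and the mapping rules to visual features are hypothetical; the abstract leaves both open.

```python
def generate_visual_feedback(text):
    """Hypothetical sketch of the method above: identify textual
    features, assign each a value, then derive corresponding visual
    features. The feature set and mapping rules are assumptions."""
    # Step 1-2: identify textual features and assign feature values.
    features = {
        "length": len(text),
        "exclaims": text.count("!"),
        "upper_ratio": sum(c.isupper() for c in text) / max(len(text), 1),
    }
    # Step 3: generate visual features corresponding to textual ones.
    visual = {
        "width_px": 10 * features["length"],              # longer text, wider element
        "color": "red" if features["exclaims"] > 0 else "gray",  # emphasis cue
    }
    return features, visual
```

A capturing subsystem would supply `text` (e.g. via OCR), and a graphical user output would render the returned `visual` attributes, matching the system claim's three subsystems.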
System architecture and method of authenticating a 3-D object
A non-transitory computer-readable medium encoded with a computer-readable program which, when executed by a processor, will cause a computer to execute a method of authenticating a 3-D object with a 2-D camera, the method including building a pre-determined database. The method additionally includes registering the 3-D object to a storage unit of a device comprising the 2-D camera, thereby creating a registered 3-D model of the 3-D object. Additionally, the method includes authenticating a test 3-D object by comparing the test 3-D object to the registered 3-D model.
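The final authentication step, comparing a test 3-D object against the registered 3-D model, might look like the sketch below. The comparison metric (mean nearest-point distance between point clouds) and the tolerance are assumptions; the abstract does not fix a specific comparison.

```python
import numpy as np

def authenticate(test_points, registered_points, tol=0.05):
    """Hypothetical sketch: accept the test 3-D object if its mean
    nearest-point distance to the registered 3-D model is within a
    tolerance. Both inputs are (N, 3) arrays of 3-D points."""
    # Pairwise distances from every test point to every registered point.
    d = np.linalg.norm(
        test_points[:, None, :] - registered_points[None, :, :], axis=2
    )
    # For each test point, keep the distance to its closest model point.
    return bool(d.min(axis=1).mean() <= tol)
```

In practice the test reconstruction from the 2-D camera would first be aligned (e.g. by a rigid registration) to the stored model before this distance check.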
Generating prescription records from a prescription label on a medication package
The system captures portions of a label on a package in a set of images, reconstructs the label based on the set of images, identifies text in the label, determines associations between the identified text and types of information, and stores the set of images, the reconstructed label, the identified text, and the determined associations as, for example, a batch in a review queue. During a review process, the batch is reviewed and a structured prescription record is determined for the batch, which is then used by the system and the user associated with the batch to provide various features to the user.
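The capture-to-queue pipeline could be sketched as follows. The field patterns (prescription number, dosage strength), the crude join-based "reconstruction", and the queue structure are all assumptions for illustration; the patent describes the steps abstractly.

```python
import re
from collections import deque

review_queue = deque()

def build_batch(images, texts):
    """Hypothetical sketch: reconstruct the label from per-image text,
    associate identified text with types of information via simple
    patterns, and store everything as a batch in the review queue."""
    label = " ".join(texts)  # stand-in for image-based reconstruction
    associations = {}
    m = re.search(r"Rx[#:]?\s*(\d+)", label)   # prescription number
    if m:
        associations["rx_number"] = m.group(1)
    m = re.search(r"(\d+)\s*mg", label)        # dosage strength
    if m:
        associations["strength_mg"] = int(m.group(1))
    batch = {"images": images, "label": label, "associations": associations}
    review_queue.append(batch)
    return batch
```

A reviewer would then pull batches from `review_queue`, confirm or correct the associations, and emit the structured prescription record.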
Method for inserting hand-written text
A method and system for inserting hand-written text is disclosed. The method includes detecting, from a stylus, an insertion gesture on a touch screen, determining, on the touch screen, an insertion location where the hand-written text is to be inserted, generating, on the touch screen, an insertion box for receiving the hand-written text from the stylus, detecting, from the stylus, the hand-written text in the insertion box, and, in response to determining that the hand-written text nears or exceeds a boundary of the insertion box, increasing a size of the insertion box to accommodate the hand-written text. The method further includes detecting, from the stylus, a completion gesture on the touch screen, reducing the size of the insertion box to encapsulate the inserted hand-written text, and erasing the insertion box and inserting the hand-written text into a space previously occupied by the insertion box.
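The grow-then-shrink behavior of the insertion box might be sketched as below. Representing the box as `(x, y, w, h)`, the boundary margin, and the growth amounts are assumptions; the abstract specifies only that the box grows when ink nears or exceeds it and later shrinks to encapsulate the text.

```python
def update_insertion_box(box, stroke_extent, margin=20):
    """Hypothetical sketch: grow the box when the hand-written ink
    nears or exceeds its boundary. `box` is (x, y, w, h) in pixels;
    `stroke_extent` is the (right, bottom) extent of the ink so far."""
    x, y, w, h = box
    sx, sy = stroke_extent
    if sx >= x + w - margin:        # ink nears the right boundary
        w = sx - x + 2 * margin
    if sy >= y + h - margin:        # ink nears the bottom boundary
        h = sy - y + 2 * margin
    return (x, y, w, h)

def finalize_insertion_box(ink_bounds):
    """On the completion gesture, shrink the box to exactly
    encapsulate the inserted hand-written text."""
    x0, y0, x1, y1 = ink_bounds     # bounding box of all ink strokes
    return (x0, y0, x1 - x0, y1 - y0)
```

After finalization, the surrounding content would reflow so the recognized text occupies the space previously held by the box.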
IMAGE-ASSISTED FIELD VERIFICATION OF QUERY RESPONSE
Processes and systems for displaying information received in a query response. One system includes a communication device including a user interface and an electronic processor. The electronic processor transmits a query, and receives a response to the query including stored information in each of a plurality of fields of information about an object of interest. The electronic processor receives an image of the object from a camera, identifies visually descriptive text information describing the object, and categorizes at least a portion of the visually descriptive text information into the plurality of fields of information. The electronic processor determines a confidence level for each field of information based on a comparison of the stored information and the visually descriptive text information in each field, and displays the stored information in one or more fields for which the confidence level of the comparison is below a predetermined threshold.
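The per-field comparison and threshold logic could be sketched as follows. The similarity metric (`difflib` string ratio) and the threshold value are assumptions; the abstract only requires a confidence level per field and display of fields falling below a predetermined threshold.

```python
from difflib import SequenceMatcher

def fields_to_display(stored, observed, threshold=0.8):
    """Hypothetical sketch: score each field by string similarity
    between the stored information and the visually descriptive text
    categorized into that field, and return the fields whose
    confidence falls below the threshold (i.e. likely mismatches
    that should be shown to the user)."""
    flagged = {}
    for field, stored_val in stored.items():
        seen = observed.get(field, "")
        confidence = SequenceMatcher(
            None, stored_val.lower(), seen.lower()
        ).ratio()
        if confidence < threshold:
            flagged[field] = (stored_val, confidence)
    return flagged
```

Displaying only low-confidence fields focuses the user's attention on discrepancies between the query response and what the camera actually observed, such as a plate number that does not match the record.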