Patent classifications
G06F3/0317
DIGITAL MARKING PREDICTION BY POSTURE
The method disclosed herein includes determining a user interaction pattern with respect to a digital marker, the user interaction pattern based at least in part on user interaction with a capacitive sensor of a computing device, determining a geometric characteristic of the digital marker, and generating a predicted digital marking position of the digital marker on a digital medium of the computing device based at least in part on the determined geometric characteristic of the digital marker and the determined user interaction pattern.
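The abstract's prediction step can be illustrated with a minimal geometric sketch. All names and the projection model below are assumptions for illustration, not the patented method: it assumes the capacitive sensor yields a grip centroid and an azimuth, and that the marker's tip is projected from that centroid along the azimuth, foreshortened by tilt.

```python
import math
from dataclasses import dataclass

@dataclass
class MarkerGeometry:
    tip_length_mm: float  # hypothetical: grip centroid to marker tip
    tilt_deg: float       # hypothetical: marker tilt relative to the screen

@dataclass
class GripPattern:
    grip_x: float         # grip centroid from the capacitive sensor
    grip_y: float
    azimuth_deg: float    # direction the marker points, inferred from the grip

def predict_marking_position(grip: GripPattern, geom: MarkerGeometry):
    """Project the tip position from the grip centroid along the marker's
    azimuth, shortened by the tilt-induced foreshortening."""
    reach = geom.tip_length_mm * math.cos(math.radians(geom.tilt_deg))
    az = math.radians(grip.azimuth_deg)
    return (grip.grip_x + reach * math.cos(az),
            grip.grip_y + reach * math.sin(az))
```

With zero tilt the predicted position lies a full tip-length from the grip centroid; increasing tilt pulls the prediction back toward the hand.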
Surfaces with embedded information
A surface including: a plurality of visually detectable marks, wherein the visually detectable marks include: a first group of grid marks forming a grid and a second group of information marks encoding information based on positions of information marks of the second group of marks relative to the data page, wherein grid marks are less than 45% of the visually detectable marks on the surface.
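The quantitative constraint in this claim (grid marks under 45% of all visually detectable marks) reduces to a simple proportion check. The label-based mark representation below is a hypothetical encoding chosen for illustration:

```python
def grid_fraction(marks):
    """marks: iterable of kind labels, e.g. "grid" or "info" (hypothetical)."""
    marks = list(marks)
    grid = sum(1 for kind in marks if kind == "grid")
    return grid / len(marks)

def meets_grid_limit(marks, limit=0.45):
    """True when grid marks make up less than `limit` of all marks."""
    return grid_fraction(marks) < limit
```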
STYLUS HAPTIC OUTPUT
Examples are disclosed relating to providing haptic output to a stylus. In one example, rotational position data indicating a rotational position of the stylus about a longitudinal axis of the body of the stylus is received. Travel direction data indicating a direction of travel of a tip of the stylus relative to a touch-sensitive screen of a computing device is also received. Using at least the rotational position data and the travel direction data, one or more characteristics of a drive signal are determined. The drive signal is then transmitted to a haptic feedback mechanism within the body of the stylus to generate haptic output at the body.
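One way to picture how rotational position and travel direction could jointly determine a drive-signal characteristic is the sketch below. The mapping (amplitude modulated by the angle between the stylus's rotational position and the tip's travel direction) is purely a hypothetical illustration, not the disclosed determination:

```python
import math

def drive_signal_params(rotation_deg, travel_deg,
                        base_amp=1.0, base_freq_hz=150.0):
    """Hypothetical mapping: scale amplitude by the alignment between the
    stylus rotation and the tip's direction of travel on the screen."""
    delta = math.radians(travel_deg - rotation_deg)
    amplitude = base_amp * (0.5 + 0.5 * abs(math.cos(delta)))
    return {"amplitude": amplitude, "frequency_hz": base_freq_hz}
```

The returned parameters would then shape the drive signal sent to the haptic feedback mechanism in the stylus body.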
MULTIFOCAL LENS, MOLD FOR MANUFACTURING THE SAME AND OPTICAL MACHINE STRUCTURE
There is provided a lens including a first curved surface and a second curved surface. The first curved surface and the second curved surface have different focal distances and are arranged in an interlaced manner along a radial direction of the lens.
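The interlaced arrangement can be sketched as alternating annular zones, each contributing one of the two focal distances. The fixed zone width and the zone-parity rule below are assumptions for illustration only:

```python
def focal_length_at_radius(r, zone_width, f1, f2):
    """Alternate between two focal distances in annular zones along the
    radial direction (hypothetical fixed-width zones)."""
    zone = int(r // zone_width)
    return f1 if zone % 2 == 0 else f2
```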
Optical positioning system having high resolution
There is provided an operating method of an optical positioning system including: capturing an image frame of a detected surface, which has interleaved bright regions and dark regions, using a field of view and a shutter time of an optical sensor; counting a number of edge pairs between the bright regions and the dark regions that the field of view passes; calculating an average value of the image frame; calculating a ratio between the calculated average value and the shutter time; determining that the field of view is aligned with one of the dark regions when the ratio is smaller than a ratio threshold; and determining that the field of view is aligned with one of the bright regions when the ratio is larger than the ratio threshold.
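The ratio test and the edge-pair count described above can be sketched directly. The flat-list image representation and label sequence are simplifying assumptions; the logic follows the stated method (average divided by shutter time, compared against a ratio threshold):

```python
def classify_region(frame, shutter_time_s, ratio_threshold):
    """Average pixel intensity normalized by shutter time; the field of view
    is over a dark region below the threshold, a bright region above it."""
    avg = sum(frame) / len(frame)
    ratio = avg / shutter_time_s
    return "dark" if ratio < ratio_threshold else "bright"

def count_edge_pairs(labels):
    """labels: successive bright/dark classifications as the field of view
    moves; one edge pair = one bright-to-dark plus one dark-to-bright edge."""
    transitions = sum(1 for a, b in zip(labels, labels[1:]) if a != b)
    return transitions // 2
```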
COMPUTER READABLE RECORDING MEDIUM WHICH CAN BE USED TO PERFORM IMAGE QUALITY IMPROVEMENT METHOD AND OPTICAL NAVIGATION METHOD
A computer readable recording medium storing at least one program, wherein an image quality improvement method is performed when the program is executed. The image quality improvement method comprises: (a) classifying data units of a target image into normal data units and abnormal data units based on relations between brightness values of the data units and a classification parameter, wherein the classification parameter is related to an image quality of the target image or the brightness values of the data units; and (b) adjusting the brightness values of the abnormal data units based on an adjusting parameter to generate adjusted brightness values, such that differences between the adjusted brightness values and the brightness values of the normal data units are reduced. An optical navigation method using the image quality improvement method is also disclosed.
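A minimal sketch of steps (a) and (b) follows. The concrete choices here are assumptions: units are a flat list of brightness values, the classification parameter is a deviation threshold from the mean, and the adjusting parameter is a blend factor pulling abnormal values toward the normal mean:

```python
def improve_image_quality(units, classify_thresh, adjust_factor):
    """(a) classify units as abnormal when they deviate from the mean by more
    than classify_thresh; (b) pull abnormal units toward the normal mean."""
    mean_b = sum(units) / len(units)
    normal = [b for b in units if abs(b - mean_b) <= classify_thresh]
    normal_mean = sum(normal) / len(normal) if normal else mean_b
    out = []
    for b in units:
        if abs(b - mean_b) <= classify_thresh:
            out.append(b)  # normal data unit: unchanged
        else:
            # abnormal data unit: reduce its difference to the normal units
            out.append(b + adjust_factor * (normal_mean - b))
    return out
```

For example, a lone outlier at 100 among units at 10 is moved halfway toward the normal mean when `adjust_factor` is 0.5.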
Virtualization of tangible interface objects
An example system includes a computing device located proximate to a physical activity surface, a video capture device, and a detector. The video capture device is coupled for communication with the computing device and is adapted to capture a video stream that includes an activity scene of the physical activity surface and one or more interface objects with which a user can physically interact. The detector processes the video stream to detect the one or more interface objects included in the activity scene, to identify the one or more interface objects that are detected, to generate one or more events describing the one or more interface objects, and to provide the one or more events to an activity application configured to render virtual information on the computing device based on the one or more events.
SYSTEM INCLUDING TOUCH IC AND EXTERNAL PROCESSOR, AND TOUCH SENSOR INCLUDING TOUCH IC
A touch sensor includes a sensor electrode group; and a touch integrated circuit coupled to the sensor electrode group for executing touch detection and configured to generate frame data indicative of a detection level of each of two-dimensional positions in the sensor electrode group. The touch integrated circuit is connected to an external processor different from the touch integrated circuit via a first bus. The touch integrated circuit supplies the frame data to the external processor via the first bus. The external processor feeds determination data resulting from performing predetermined processing on the frame data back to the touch integrated circuit. The touch integrated circuit performs an operation based on the determination data.
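The round trip over the first bus (frame data out, determination data back) can be sketched with in-memory queues standing in for the bus. The queue-based bus and the thresholding step labeled "predetermined processing" are hypothetical stand-ins:

```python
from queue import Queue

class Bus:
    """Hypothetical stand-in for the first bus between the touch IC
    and the external processor."""
    def __init__(self):
        self.to_processor = Queue()  # frame data direction
        self.to_ic = Queue()         # determination data direction

def ic_supply_frame(bus, frame):
    """Touch IC supplies frame data (2-D detection levels) over the bus."""
    bus.to_processor.put(frame)

def external_processor_cycle(bus, level=50):
    """External processor performs predetermined processing (here, a
    hypothetical threshold) and feeds determination data back."""
    frame = bus.to_processor.get()
    decision = [[v > level for v in row] for row in frame]
    bus.to_ic.put(decision)

def ic_receive_decision(bus):
    """Touch IC receives the determination data and can act on it."""
    return bus.to_ic.get()
```

A single cycle: the IC pushes a frame, the processor classifies each position, and the IC reads back the per-position determinations.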
EXTERNAL USER INTERFACE FOR HEAD WORN COMPUTING
Aspects of the present invention relate to external user interfaces used in connection with head worn computers (HWC).
User-wearable systems and methods to collect data and provide information
Systems and methods involving one or more wearable components that obtain information about a surrounding environment and provide information to a user to assist the user in retrieving objects from the surrounding environment. The information may include navigation information related to a route that the user may take to retrieve an object, as well as information about the object to be retrieved. The wearable components may include ones that may be mounted on the head of the user, as well as components that the user may wear on other parts of the body or that attach to clothing. The wearable components may include image sensors, microphones, machine-readable symbol readers, range-finders, accelerometers, and/or gyroscopes that may collect information from and about the surroundings of the user. The wearable components may also include one or more of a set of speakers and/or a display subsystem to provide information to the user.