Patent classifications
G06F3/03
GRAPHICAL MENU STRUCTURE
A human interface method including the steps of presenting an image and then receiving a gesture from the user. The image is analyzed to identify its graphical elements, which are compared to known images; the system then either solicits an input from the user or displays a menu to the user. Comparing the image and/or its graphical elements may be effectuated using a trained artificial intelligence engine or, in some embodiments, a structured data source, said data source including predetermined images and menu options. If the image is known, a predetermined menu is presented. If the image is not known, an image or other menu options are presented, and the desired option is solicited from the user. Once the user selects an option, the resulting selection may be used to further train the AI system or may be added to the structured data source for future reference.
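The known/unknown branching described in this abstract can be sketched in a few lines. This is a minimal illustration using a plain dictionary as the "structured data source"; the image identifiers, menu options, and function names are all invented for the example, not taken from the patent.

```python
# Illustrative sketch of the described menu-selection flow, using a
# structured data source (a dict) rather than a trained AI engine.
from typing import Callable

# Predetermined images mapped to their menu options (hypothetical entries).
menu_db: dict[str, list[str]] = {
    "thermostat": ["Raise temp", "Lower temp", "Set schedule"],
    "light_switch": ["On", "Off", "Dim"],
}

def present_menu(image_id: str, solicit: Callable[[list[str]], str]) -> str:
    """Return the user's selection; record unknown images for future reference."""
    if image_id in menu_db:
        # Known image: present its predetermined menu.
        options = menu_db[image_id]
    else:
        # Unknown image: present generic options and solicit the desired one.
        options = ["On", "Off", "Other"]
    choice = solicit(options)
    if image_id not in menu_db:
        # Add the selection to the data source for future reference.
        menu_db[image_id] = [choice]
    return choice
```

For example, `present_menu("door", lambda opts: opts[1])` would solicit a choice for an unknown image and then store it, so the next encounter with `"door"` uses the learned menu.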
DISPLAY DEVICE AND SENSING SYSTEM INCLUDING THE SAME
A display device includes a thin-film transistor layer disposed on a substrate, the thin-film transistor layer including a thin-film transistor, a light emitting element layer disposed on the thin-film transistor layer, the light emitting element layer including a pixel defining layer defining emission areas, and a pixel electrode disposed in each of the emission areas, a touch electrode disposed on the light emitting element layer, the touch electrode overlapping the pixel defining layer and sensing a touch, and a code pattern defined by a planar shape of the pixel defining layer that is distinguished from the pixel electrode and the touch electrode, the code pattern having position information.
Photodetector activations
An example computing device includes a photodetector to measure an amount of light incident on a detection surface of the photodetector. The example computing device includes a state sensor to activate the photodetector responsive to the computing device being in a detection state. The example computing device also includes a processor. An example processor identifies, during the detection state, a user gesture based on an output of the photodetector. The user gesture blocks light incident on the detection surface of the photodetector. The example processor also alters an operation of the computing device based on the user gesture.
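The state-gated gesture logic above can be summarized as: only consult the photodetector while in the detection state, classify a light-blocking gesture from its output, and alter an operation. The sketch below is a loose illustration; the threshold, the three-sample gesture rule, and the mute toggle are assumptions, not details from the abstract.

```python
# Illustrative sketch of the photodetector-gesture logic.
DETECTION_STATE = "detection"

def identify_gesture(samples: list[float], dark_threshold: float = 0.1) -> bool:
    """Return True when incident light is blocked long enough to count as a gesture."""
    dark = [s < dark_threshold for s in samples]
    # A gesture here is simply three consecutive low-light readings.
    return any(all(dark[i:i + 3]) for i in range(len(dark) - 2))

def step(device_state: str, samples: list[float], operation: str) -> str:
    """Alter the device operation only while in the detection state."""
    if device_state != DETECTION_STATE:
        # The state sensor has not activated the photodetector.
        return operation
    if identify_gesture(samples):
        # Example alteration: toggle mute when the sensor is covered.
        return "muted" if operation == "unmuted" else "unmuted"
    return operation
```

Outside the detection state the samples are ignored entirely, which mirrors the abstract's point that the state sensor activates the photodetector only in that state.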
Four dimensional energy-field package assembly
Four dimensional (4D) energy-field package assembly for projecting energy fields according to a 4D coordinate function. The 4D energy-field package assembly includes an energy-source system having energy sources capable of providing energy to energy locations, and energy waveguides for directing energy from the energy locations from one side of the energy waveguide to another side of the energy waveguide along energy propagation paths.
Hand controller for robotic surgery system
A robotic control system has a wand that emits multiple narrow beams of light onto a light sensor array or, with a camera, onto a surface, thereby defining the wand's changing position and attitude. A computer uses this information to direct the relative motion of robotic tools or remote processes, much as a mouse does but in three dimensions, and the system includes motion compensation means and means for reducing latency.
Realistic virtual/augmented/mixed reality viewing and interactions
The present invention discloses systems and methods for both viewing and interacting with a virtual reality (VR), an augmented reality (AR), or a mixed reality (MR). More specifically, the systems and methods allow the user to interact with aspects of such realities, including virtual items presented in such realities or within such environments, by manipulating a control device that has an inside-out camera mounted on board. The apparatus or system uses two distinct representations, including a reduced representation, in determining the pose of the control device, and uses these representations to compute an interactive pose portion of the control device to be used for interacting with the virtual item. The reduced representation is consonant with a constrained motion of the control device.
METHOD, COMPUTER, AND PROGRAM FOR ARTWORK MANAGEMENT
An artwork management method executed by one or more computers is provided. The artwork management method includes detecting, by a first computer included in the one or more computers, artwork included in a website, determining, by the first computer, whether or not a purchase transaction indicating purchase of the artwork detected is recorded in a blockchain network, and transmitting, by the first computer, a report indicating discovery of unauthorized use of the artwork, in a case where the first computer determines that the purchase transaction is not recorded. According to one aspect, the artwork management method suppresses illegal use of artwork while maintaining accessibility of the artwork.
METHOD AND DEVICE FOR OPERATING A LASER UNIT AS A FUNCTION OF A DETECTED STATE OF AN OBJECT, AND LASER DEVICE
A method for operating a laser unit as a function of a detected state of an object. The method includes: outputting a light beam having a light beam intensity, using the laser unit, during a first time period and a second time period; receiving at least one reflected partial beam having a partial beam intensity during the first and second time periods; making the light beam and the partial beam interfere with each other in the first and second time periods to obtain a first interference parameter for the first time period and a second interference parameter for the second time period; ascertaining the state of the object; changing an operating state of the laser unit as a function of the ascertained state of the object.
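The decision logic at the end of this method, comparing the interference parameters from the two time periods to ascertain the object's state and then changing the laser's operating state, can be sketched as follows. The classification threshold, the state labels, and the intensity policy are all assumptions for illustration; the patent does not specify them.

```python
# Hedged sketch: ascertain an object state from two interference
# parameters (one per time period) and pick a laser operating state.
def ascertain_state(p1: float, p2: float, threshold: float = 0.5) -> str:
    """Classify the object as moving or stationary from the parameter change."""
    return "moving" if abs(p2 - p1) > threshold else "stationary"

def next_laser_state(p1: float, p2: float) -> str:
    """Example policy: reduce the light beam intensity when the object is moving."""
    state = ascertain_state(p1, p2)
    return "reduced_intensity" if state == "moving" else "full_intensity"
```

The point of the two-period structure is that a change between the first and second interference parameters carries the state information; the sketch reduces that to a simple magnitude comparison.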
Optical encoder capable of identifying absolute positions
The present disclosure is related to an optical encoder which is configured to provide precise coding reference data by feature recognition technology. To apply the present disclosure, it is not necessary to provide particular dense patterns on a working surface. The precise coding reference data can be generated by detecting surface features of the working surface.
Control method for portable read-write pen and portable read-write pen
A portable read-write pen and a control method for a portable read-write pen. The control method for a portable read-write pen includes the following steps: step S1 of detecting a state of a pen point of the portable read-write pen; and step S2 of controlling, according to the state of the pen point of the portable read-write pen, the portable read-write pen to enable a touch-read mode or a write-record mode.
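Steps S1 and S2 above form a two-stage dispatch: detect the pen-point state, then select a mode from it. A small sketch under stated assumptions; the idea that pen-point pressure distinguishes writing from touch-reading, and the specific state names, are illustrative, not from the abstract.

```python
# Sketch of steps S1 and S2 of the control method described above.
def detect_pen_point_state(pressure: float, lifted: bool) -> str:
    """Step S1: detect the state of the pen point (assumed from pressure)."""
    if lifted:
        return "lifted"
    return "pressed" if pressure > 0.2 else "hovering"

def select_mode(pen_point_state: str) -> str:
    """Step S2: enable touch-read or write-record mode from the detected state."""
    # Assumption: a pressed pen point means the user is writing; otherwise
    # the pen is being pointed at printed material to be touch-read.
    return "write-record" if pen_point_state == "pressed" else "touch-read"
```

Chaining the two steps, `select_mode(detect_pen_point_state(...))`, gives the mode the pen should enable at any moment.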