Patent classifications
G06V20/36
Method and System for Automatic Detection and Recognition of A Digital Image
An automatic measuring system containing configurable integrated circuits is able to process information via captured images. The automatic measuring system includes a metering instrument, a camera, a recognition module, and a localization module. The metering instrument has at least one display for visually displaying a number and measures the amount of a measurable substance or resource (e.g., electricity or water) consumed. The camera captures an image of the number representing at least a portion of the amount of the measurable substance. The recognition module is operable to generate a value in response to the image and the coordinates, where the coordinates are used to decode the image by restoring the captured image to the original readout counter value. The localization module is removably or remotely coupled to the camera and operable to generate the coordinates in accordance with the image captured by the camera.
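The localization/recognition split described in this abstract can be sketched as a two-stage pipeline: the localization module yields coordinates of digit regions, and the recognition module decodes the image at those coordinates to restore the counter value. This is a toy illustration only; the pre-segmented digit grid and all function names are invented, not from the patent.

```python
# Minimal sketch of the two-module pipeline, assuming the captured meter
# image is a 2D grid in which digit cells have already been classified.
# All names here are hypothetical stand-ins for the patent's modules.

def localize_digits(image):
    """Localization module: return (row, col) coordinates of digit regions.

    Here we assume any non-background cell (value >= 0) marks a digit.
    """
    coords = []
    for r, row in enumerate(image):
        for c, cell in enumerate(row):
            if cell >= 0:
                coords.append((r, c))
    # Sort left-to-right so digits are read in display order.
    return sorted(coords, key=lambda rc: rc[1])

def recognize_value(image, coords):
    """Recognition module: decode the image at the given coordinates,
    restoring the original readout counter value."""
    digits = [image[r][c] for r, c in coords]
    return int("".join(str(d) for d in digits))

# Toy "image": -1 is background, 0-9 are already-classified digit cells.
meter_image = [
    [-1, -1, -1, -1, -1],
    [-1,  0,  4,  7, -1],
    [-1, -1, -1, -1, -1],
]
coords = localize_digits(meter_image)
print(recognize_value(meter_image, coords))  # 47
```

In a real system the grid cells would be pixel regions fed to an OCR or digit-classification model; the point here is only the division of labor between the two modules.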
Floorplan generation based on room scanning
Various implementations disclosed herein include devices, systems, and methods that generate floorplans and measurements using a three-dimensional (3D) representation of a physical environment generated based on sensor data.
Generation of synthetic image data
Techniques are generally described for generation of photorealistic synthetic image data. A generator network generates first synthetic image data. A first class of image data represented by a first portion of the first synthetic image data is detected and the first portion is sent to a first discriminator network. The first discriminator network generates a prediction of whether the first portion of the first synthetic image data is synthetically generated. A second class of image data represented by a second portion of the first synthetic image data is detected and the second portion is sent to a second discriminator network. The second discriminator network generates a prediction of whether the second portion of the first synthetic image data is synthetically generated. The generator network is updated based on the predictions of the discriminators.
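The training step in this abstract, a single generator updated against two class-specific discriminators, can be sketched structurally as follows. The one-parameter "networks", the sign-based class detector, and the averaged update are invented stand-ins for illustration, not the patent's actual models.

```python
# Structural sketch: one generator, two class-routed discriminators,
# toy scalar "images". TinyNet is a stand-in for a real network.

class TinyNet:
    """One-parameter stand-in for a network; predicts w * x."""
    def __init__(self, w):
        self.w = w
    def __call__(self, x):
        return self.w * x
    def step(self, grad, lr=0.1):
        self.w -= lr * grad

def detect_class(portion):
    # Hypothetical class detector: route portions by sign.
    return 0 if portion >= 0 else 1

generator = TinyNet(w=1.0)
discriminators = [TinyNet(w=0.3), TinyNet(w=0.7)]  # one per image class

def training_step(noise):
    synthetic = [generator(z) for z in noise]      # first synthetic image data
    predictions = []
    for portion in synthetic:
        d = discriminators[detect_class(portion)]  # route the portion by class
        predictions.append(d(portion))             # "is synthetic" score
    # Generator update based on both discriminators' predictions.
    generator.step(sum(predictions) / len(predictions))
    return predictions

preds = training_step([1.0, -1.0])
print(preds)  # one prediction per routed portion
```

The design choice the abstract describes, separate discriminators per detected image class, lets each discriminator specialize (e.g., one on faces, one on backgrounds) while a single generator receives gradient signal from all of them.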
PRIVATE DEVICELESS MEDIA DELIVERY SYSTEM
Aspects of the subject disclosure may include, for example, a system, including: a camera; a media projector; a directional microphone; a processing system including a processor; and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations of: detecting a subscriber entering an intelligent area; receiving a communication for the subscriber from a communications network; determining whether the subscriber can accept the communication; and responsive to a determination that the subscriber can accept the communication: monitoring a position of the subscriber in the intelligent area; and discreetly delivering the communication to the subscriber based on the position. Other embodiments are disclosed.
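The operation flow recited in this abstract (presence detection, acceptance check, then position-based delivery) can be sketched as a short decision chain. The subscriber store, busy-set, and delivery call below are hypothetical placeholders, not the patent's components.

```python
# Sketch of the recited operations; the "intelligent area" is modeled as
# a dict tracking presence, monitored positions, and busy subscribers.

def subscriber_can_accept(subscriber, area):
    return subscriber not in area.get("busy", set())

def deliver_communication(subscriber, communication, area):
    if subscriber not in area["present"]:
        return "not delivered: subscriber not in intelligent area"
    if not subscriber_can_accept(subscriber, area):
        return "not delivered: subscriber busy"
    position = area["positions"][subscriber]   # monitored position
    # Discreet delivery: direct the media only toward that position.
    return f"delivered {communication!r} at {position}"

area = {"present": {"alice"}, "positions": {"alice": (2, 3)}, "busy": set()}
print(deliver_communication("alice", "incoming call", area))
```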
Exit routes
A computing device equipped with a camera may be used to assist a person in planning and traversing exit routes for a premises. For example, a user may be able to interact with one or more user interfaces generated by the computing device to determine an exit route for the premises. The user may be able to identify various objects, such as stairs or doors, along the exit route. The user may be able to identify graphical or textual information that can be displayed at certain points along the exit route. After determining the exit route, a data structure for the exit route may be shared with other users and/or be used to assist a user in traversing the exit route. For example, the data structure may be used as a basis for overlaying graphics and/or text on a real-time video display as the user traverses the exit route.
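A minimal sketch of the shareable exit-route data structure this abstract describes: waypoints carrying an optional object label (stairs, door) and optional overlay text to display at that point. The field names and JSON serialization are assumptions for illustration; the patent does not specify a format.

```python
import json

# Hypothetical route structure: a premises label plus ordered waypoints.
def make_route(premises, waypoints):
    return {"premises": premises, "waypoints": waypoints}

route = make_route("Building A, Floor 2", [
    {"pos": (0, 0), "object": None,     "overlay": "Start: Room 201"},
    {"pos": (0, 5), "object": "door",   "overlay": "Through fire door"},
    {"pos": (3, 5), "object": "stairs", "overlay": "Down two flights"},
    {"pos": (9, 5), "object": "door",   "overlay": "Exit to street"},
])

# Serialization lets the route be shared with other users; at traversal
# time, each waypoint's overlay text would be drawn onto the live video
# feed when the user reaches its position.
shared = json.dumps(route)
restored = json.loads(shared)
print(len(restored["waypoints"]))  # 4
```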
Providing a response in a session
The present disclosure provides a method and apparatus for providing a response to a user in a session. At least one message associated with a first object may be received in the session, the session being between the user and an electronic conversational agent. An image representation of the first object may be obtained. Emotion information of the first object may be determined based at least on the image representation. A response may be generated based at least on the at least one message and the emotion information. The response may be provided to the user.
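The pipeline this abstract describes (message about an object, image representation of the object, emotion inferred from the image, response conditioned on both) can be sketched as below. The lookup table stands in for a real emotion model; image names and response templates are invented for illustration.

```python
# Hypothetical stand-in for an image-based emotion model.
EMOTION_BY_IMAGE = {"wilted_flower.jpg": "sad", "sunny_beach.jpg": "happy"}

def determine_emotion(image_representation):
    """Determine emotion information from the object's image representation."""
    return EMOTION_BY_IMAGE.get(image_representation, "neutral")

def generate_response(message, emotion):
    """Generate a response from the message and the emotion information."""
    if emotion == "sad":
        return f"Sorry to hear about that. ({message!r} noted)"
    if emotion == "happy":
        return f"That looks wonderful! ({message!r} noted)"
    return f"Thanks for sharing. ({message!r} noted)"

emotion = determine_emotion("wilted_flower.jpg")
print(generate_response("Look at my flower", emotion))
```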
Tracking of item of interest using wearable heads up display
A wearable heads-up display (WHUD) obtains attribute data corresponding to an attribute of an item of interest and obtains environmental data of an environment surrounding the WHUD via one or more sensors of the WHUD. The WHUD compares the attribute data with the environmental data to detect the item of interest. In response to the detection, the WHUD obtains location data indicative of a location of the item of interest, stores the location data in association with a context of detection of the item of interest. In response to a trigger, such as a query by a user regarding the item of interest, the WHUD provides a location indication based on the location data, the location indication including, for example, a display of a description of the location of the item of interest, a display of the item of interest at the location, and the like.
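The detect/store/query flow in this abstract can be sketched with the sensor matching reduced to label comparison. The detection store, observation records, and phrasing of the location indication are hypothetical stand-ins for the WHUD's actual modules.

```python
# item of interest -> (location data, context of detection)
detections = {}

def scan(attribute, environment):
    """Compare attribute data against environmental observations; on a
    match, store the location together with the detection context."""
    for obs in environment:
        if obs["label"] == attribute:
            detections[attribute] = (obs["location"], obs["context"])

def query(item):
    """Trigger: the user asks about the item; return a location indication."""
    if item not in detections:
        return None
    location, context = detections[item]
    return f"{item} last seen {location} ({context})"

environment = [
    {"label": "keys",   "location": "on the kitchen counter", "context": "while making coffee"},
    {"label": "wallet", "location": "on the desk",            "context": "after work"},
]
scan("keys", environment)
print(query("keys"))  # keys last seen on the kitchen counter (while making coffee)
```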
INDOOR NAVIGATION METHOD, INDOOR NAVIGATION EQUIPMENT, AND STORAGE MEDIUM
An indoor navigation method is provided, including: receiving an instruction for navigation, and collecting an environment image; extracting an instruction room feature and an instruction object feature carried in the instruction, and determining a visual room feature, a visual object feature, and a view angle feature based on the environment image; fusing the instruction object feature and the visual object feature with a first knowledge graph representing an indoor object association relationship to obtain an object feature, and determining a room feature based on the visual room feature and the instruction room feature; and determining a navigation decision based on the view angle feature, the room feature, and the object feature.
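The fusion step in this method, combining instruction and visual object features with a knowledge graph of indoor object associations, can be illustrated with one propagation step over an adjacency matrix. The graph, feature vectors, and weights below are all made up; real systems would use learned embeddings and a graph network.

```python
# Toy knowledge graph over three indoor objects: sofa and tv co-occur,
# sink stands alone. Pure-Python linear algebra for illustration.
OBJECTS = ["sofa", "tv", "sink"]
ADJ = [
    [1.0, 0.8, 0.0],
    [0.8, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]

def fuse(instruction_feat, visual_feat, adj):
    """Average the two per-object feature vectors, then mix each object's
    score with its graph neighbours (one propagation step)."""
    combined = [(i + v) / 2 for i, v in zip(instruction_feat, visual_feat)]
    return [sum(adj[r][c] * combined[c] for c in range(len(combined)))
            for r in range(len(adj))]

instruction_obj = [1.0, 0.0, 0.0]   # the instruction mentions the sofa
visual_obj      = [0.6, 0.4, 0.0]   # the camera sees the sofa, partly the tv
object_feature = fuse(instruction_obj, visual_obj, ADJ)
# "tv" gets a boost through its graph link to "sofa".
print(object_feature)
```

The point of the graph term is the one the abstract implies: an object weakly supported by either modality can still score well if the graph associates it with strongly supported objects.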
Intelligent volume control
An electronic device includes: a memory storing instructions; and at least one processor configured to execute the instructions stored in the memory to: identify a location of the electronic device; obtain an image and a sound signal corresponding to the location; identify, using a trained neural network, a scene where the electronic device is present, based on the image, the sound signal, and the location; and provide settings of the electronic device based on the identified scene.
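The pipeline in this abstract, location plus image plus sound mapped to a scene and then to device settings, can be sketched as below. The rule-based "classifier" stands in for the trained neural network; all thresholds, scene names, and settings values are invented for illustration.

```python
# Hypothetical per-scene settings table.
SETTINGS = {
    "library": {"volume": 10, "vibrate": True},
    "street":  {"volume": 80, "vibrate": False},
    "home":    {"volume": 40, "vibrate": False},
}

def identify_scene(location, image_brightness, sound_level_db):
    """Stand-in for the trained neural network: map the three inputs
    to a scene label with simple rules."""
    if location == "indoors" and sound_level_db < 40:
        return "library"
    if location == "outdoors" and sound_level_db > 70:
        return "street"
    return "home"

def settings_for(location, image_brightness, sound_level_db):
    scene = identify_scene(location, image_brightness, sound_level_db)
    return scene, SETTINGS[scene]

scene, settings = settings_for("indoors", image_brightness=0.3, sound_level_db=35)
print(scene, settings["volume"])  # library 10
```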