Patent classification: G09B21/006
Electronic book system using electromagnetic energy to detect page numbers
An electronic book using electromagnetic energy to detect page numbers includes a book and a base. The book includes a plurality of book pages, each of which is provided with an electromagnetic energy changing member, and the base is provided with at least one antenna coil and at least one control circuit. The electromagnetic energy changing member of each book page is disposed adjacent to a position of the antenna coil, and a magnetic flux generated by the antenna coil is transmitted to the electromagnetic energy changing members. In response, the electromagnetic energy changing members generate varied magnetic fluxes that are received by the antenna coil and converted into a page number prompt output command, which triggers a prompt function for the book page disposed adjacent to the antenna coil.
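In principle, each page's electromagnetic energy changing member perturbs the coil's flux in a distinguishable way, so page detection reduces to matching a measured signature against known per-page values. A minimal sketch of that matching step, with entirely hypothetical resonance frequencies and tolerance:

```python
# Hypothetical sketch: map a measured resonance frequency from the antenna
# coil to a page number. Frequencies and tolerance are illustrative.

PAGE_SIGNATURES = {125_000: 1, 134_000: 2, 143_000: 3}  # Hz -> page number

def detect_page(measured_hz, signatures=PAGE_SIGNATURES, tolerance=2_000):
    """Return the page whose signature is closest to the measurement,
    or None if nothing falls within the tolerance band."""
    best_hz = min(signatures, key=lambda hz: abs(hz - measured_hz))
    if abs(best_hz - measured_hz) <= tolerance:
        return signatures[best_hz]
    return None
```

A control circuit could call this on every coil reading and raise the page prompt only when the returned page changes.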
IMAGES FOR THE VISUALLY IMPAIRED
Some implementations include methods for communicating features of images to visually impaired users. An image to be displayed on a touch sensitive screen of a computing device may include one or more objects. Each of the one or more objects may be associated with a bounding box. A contact with the image may be detected via the touch sensitive screen. The contact may be determined to be within a bounding box associated with a first object of the one or more objects. Responsive to detecting the contact to be within the bounding box associated with the first object, a caption of the first object may be caused to become audible and the touch sensitive screen may be caused to vibrate based on a vibration pattern unique to the first object.
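The contact-to-object step above is a point-in-rectangle test over the bounding boxes. A minimal sketch, with hypothetical object and box representations (the caption and vibration pattern would then be handed to the speech and haptics subsystems):

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    caption: str             # text to make audible on contact
    vibration_pattern: tuple  # e.g. (on_ms, off_ms, ...), unique per object
    box: tuple               # bounding box as (x0, y0, x1, y1)

def object_at(touch, objects):
    """Return the first object whose bounding box contains the touch point."""
    x, y = touch
    for obj in objects:
        x0, y0, x1, y1 = obj.box
        if x0 <= x <= x1 and y0 <= y <= y1:
            return obj
    return None
```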
SYSTEMS AND METHODS FOR COMMUNICATING WITH VISION AND HEARING IMPAIRED VEHICLE OCCUPANTS
Systems and methods associated with a vehicle are provided. The systems and methods include an occupant output system including an output device, a camera or other perception device, and a processor in operable communication with the occupant output system and the camera or other perception device. The processor is configured to execute program instructions that cause the processor to: receive image or other perception data from the camera or other perception device, the data including at least part of a head and/or body of an occupant of the vehicle; analyze the data to determine whether the occupant is hearing and vision impaired; when the occupant is determined to be vision and hearing impaired, decide on an output modality to assist the occupant; and generate an output for the occupant on the output device, in the chosen output modality.
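The modality decision can be sketched as a simple rule table keyed on the two impairment flags; the channel names and rules here are illustrative, not the patent's:

```python
def choose_modality(vision_impaired, hearing_impaired):
    """Pick an output channel the occupant can perceive (illustrative rules)."""
    if vision_impaired and hearing_impaired:
        return "haptic"         # e.g. seat vibration patterns
    if vision_impaired:
        return "audio"          # spoken announcements
    if hearing_impaired:
        return "visual"         # on-screen text or lights
    return "visual+audio"       # default: both channels
```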
ENABLING THE VISUALLY IMPAIRED WITH AR USING FORCE FEEDBACK
A system and method provide feedback to a user, such as a visually impaired user, to guide the user to an object in the field of view of a camera mounted on a frame worn on the head of the user. A processor identifies at least one object and a body part of the user in the field of view of the camera and tracks relative positions of the body part relative to the identified object. The processor also generates and communicates at least one control signal for guiding the body part of the user to the identified object to a user feedback device worn on or adjacent the body part of the user. The feedback device receives the control signal(s) and converts the control signal(s) into at least one of sounds or haptic feedback that guides the body part to the identified object.
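Guiding a body part to an object amounts to repeatedly comparing tracked positions and emitting a directional cue. A minimal 2-D sketch, assuming camera-frame coordinates in metres and an illustrative arrival threshold (a real system would work in 3-D and encode these cues as sounds or haptic pulses):

```python
import math

def guidance_cue(hand, target, reach_radius=0.05):
    """Convert relative hand/target positions into a coarse directional cue.
    Coordinate frame, units, and threshold are illustrative assumptions."""
    dx, dy = target[0] - hand[0], target[1] - hand[1]
    if math.hypot(dx, dy) <= reach_radius:
        return "arrived"
    if abs(dx) >= abs(dy):                 # dominant axis wins
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"
```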
Information processing device and information processing method
Provided is an information processing device that controls and presents sound information in an appropriate form to a user who acts in an environment, on the basis of situation recognition that includes recognition of the environment and of the user's actions. The information processing device includes: a sensor that detects an object; an open ear style earpiece that is worn on an ear of a listener and includes an acoustics generation unit and a sound guide portion that transmits a sound generated by the acoustics generation unit into an earhole; and a processing unit that processes sound information of a sound source, the sound information being generated by the acoustics generation unit. The processing unit acquires the sound information of the sound source corresponding to the object detected by the sensor, and performs a process of localizing a sound image of the acquired sound source while varying the position of the sound image in accordance with a position in a three-dimensional acoustic space corresponding to the position of the detected object.
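Full three-dimensional sound-image localization typically uses HRTF filtering; as a crude stand-in, the idea of varying the sound image with object position can be illustrated with constant-power stereo panning from the object's azimuth:

```python
import math

def pan_gains(azimuth_deg):
    """Constant-power stereo gains for a source at the given azimuth
    (-90 = hard left, +90 = hard right). A simplified stand-in for full
    3-D sound-image localization, not the device's actual rendering."""
    theta = (azimuth_deg + 90) / 180 * (math.pi / 2)  # map to [0, pi/2]
    return math.cos(theta), math.sin(theta)           # (left, right)
```

Because cos² + sin² = 1, perceived loudness stays constant as the sound image moves with the detected object.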
Electroencephalogram-controlled video input and auditory display blind guiding apparatus and method
The invention discloses an electroencephalogram-controlled video input and auditory display blind guiding apparatus and method. The apparatus includes a video acquisition module, an electroencephalogram acquisition module, a processor, and an audio playback module, with the processor separately connected to each of the other three modules; the electroencephalogram acquisition module is configured to acquire an electroencephalogram signal. According to the present invention, a region of interest in the blind guiding apparatus is controlled by means of the electroencephalogram signal, so that the user can conveniently switch between sensing the whole image and sensing details of interest within it. A region of interest in the current video is set using the electroencephalogram signal, providing the user with a flexible auditory display resolution and control method; by setting the region of interest, the user can sense either global information or local detail information of the image, effectively overcoming the defect of low auditory display resolution. The invention can be widely applied in different situations where blind guiding is required.
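Switching between whole-image sonification and a detail view is, at its core, a choice of rendering window driven by the decoded EEG command. A minimal sketch, with hypothetical command names and (x0, y0, x1, y1) regions clamped to the image bounds:

```python
def select_view(eeg_command, image_shape, roi):
    """Choose the region to sonify: the EEG-set region of interest for a
    'detail' command, else the whole image. Command names are illustrative."""
    h, w = image_shape
    if eeg_command == "detail" and roi is not None:
        x0, y0, x1, y1 = roi
        return (max(0, x0), max(0, y0), min(w, x1), min(h, y1))
    return (0, 0, w, h)  # fall back to whole-image sensing
```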
METHOD AND APPARATUS FOR ASSISTING THE DISABLED
A system for facilitating access for a disabled user includes a wrist-length glove configured to be worn over the user's hand and wrist; a LIDAR transmitter/receiver configured to emit laser light, receive the reflected laser light, and calculate a range to the objects that reflected it, wherein the LIDAR transmitter/receiver is removably coupled to the top of the wrist area of the glove; a mobile computing device; and a mobile application providing a text-to-speech program, a speech-to-text program, a money recognition system, a visual recognition system that detects scenes, and a proximity detection system that reads data from the LIDAR transmitter/receiver and produces speech identifying the range to the objects that reflected the laser light.
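Turning a LIDAR range into speech is a matter of formatting the measured distance into a phrase for the text-to-speech program. A minimal sketch with illustrative wording and thresholds:

```python
def range_announcement(distance_m):
    """Turn a LIDAR range (metres) into a short phrase for text-to-speech.
    The wording and distance thresholds are illustrative assumptions."""
    if distance_m < 0.5:
        return "obstacle very close"
    if distance_m < 2.0:
        return f"object at {distance_m:.1f} metres"
    return f"clear for {distance_m:.0f} metres"
```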
Touchscreen keyboard configuration method, apparatus, and computer-readable medium storing program
An input method, usable by a terminal including a touch screen display, identifies first touch input locations on a touch screen. The first touch input locations comprise a predetermined number of concurrent multiple touches. A predetermined number of input buttons, comprising a predetermined number of areas on the touch screen, are associated with the corresponding identified first touch input locations. A touch pattern comprising a second touch input of one or more of the predetermined number of input buttons is detected, and an alphanumeric input corresponding to the detected touch pattern is processed.
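The method anchors a button at each concurrent first-touch location and then classifies later touches against those anchors. A minimal sketch, with a hypothetical hit radius and button naming:

```python
def assign_buttons(first_touches):
    """Anchor one button at each concurrent first-touch (x, y) location,
    named left-to-right. Naming scheme is an illustrative assumption."""
    return {f"button_{i}": pt for i, pt in enumerate(sorted(first_touches))}

def pressed_button(buttons, touch, radius=40):
    """Return the button (if any) whose anchor lies within `radius` pixels
    of the second touch; the radius is an illustrative assumption."""
    for name, (bx, by) in buttons.items():
        if (bx - touch[0]) ** 2 + (by - touch[1]) ** 2 <= radius ** 2:
            return name
    return None
```

A sequence of `pressed_button` results forms the touch pattern that is then mapped to an alphanumeric input.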
First-Person Camera Based Visual Context Aware System
A method is disclosed of discriminating detected objects in an area with a vision apparatus. The method includes generating image data of a portion of the area using an imaging device of the vision apparatus, and processing the image data to classify it as an imaged scene type selected from a plurality of scene types stored as scene type data in memory. The method further includes processing the image data using object identification data to generate object detection data for each of a plurality of objects located in the portion of the area, each object detection data having a corresponding scene type of the plurality of scene types obtained from the object identification data, and generating a user-sensible output only for the object detection data whose corresponding scene type is the same as the imaged scene type.
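The gating step keeps only detections whose scene type matches the classified scene, so the user is not given outputs irrelevant to the current context. A minimal sketch with hypothetical detection records:

```python
def gate_by_scene(detections, imaged_scene):
    """Keep only detections whose associated scene type matches the scene
    the whole image was classified as. Record layout is illustrative."""
    return [d for d in detections if d["scene"] == imaged_scene]
```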
SYSTEM, DEVICE, AND METHOD FOR IMPROVING VISUAL AND/OR AUDITORY TRACKING OF A PRESENTATION GIVEN BY A PRESENTER
A system, device, and method to improve visual and/or auditory tracking of a presentation given by a presenter. The system has a first electronic device integrating a first piece of software for obtaining the information run in the device; a second electronic device integrating a second piece of software; a microphone for obtaining auditory information of the presentation; a compact module comprising a single-board computer, a router, a power supply, a fixed camera for acquiring information of the presentation shown on a support, and a moving camera for acquiring information of the presenter's position; and a tracking device for obtaining presenter tracking information based on that position information. The second piece of software is adapted to show the information run in the first electronic device, the auditory information of the presentation, the information shown through the support, and the presenter tracking information.