Patent classifications
A61F9/08
Comprehensive intraocular vision advancement
An intraocular implant device for comprehensive intraocular vision advancement includes an intraocular implant body shaped for positioning inside the lens chamber of an eye. In some embodiments, the implant includes an optically adjustable base accommodating lens configured to provide both base adjustment and accommodation. In further embodiments, the implant includes a photoelectric sensor operable to receive incident light through the cornea and to convert the received light into electrical energy for use by one or more circuit components disposed on the body, the photoelectric sensor also being operable to convert the received light into image data. The ocular implant device may include a projector for projecting an image representative of the image data onto the retina of a user.
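The combination of base adjustment and accommodation described above can be illustrated with the standard thin-lens vergence relation: focusing an object at distance d requires roughly 1/d diopters of accommodation on top of the static base correction. The sketch below is illustrative only; the function names and the example powers are assumptions, not values from the patent.

```python
def accommodation_demand(distance_m: float) -> float:
    """Extra dioptric power needed to focus an object at distance_m
    (thin-lens vergence: demand in diopters = 1 / distance in metres)."""
    return 1.0 / distance_m

def lens_power(base_power_d: float, distance_m: float) -> float:
    """Total implant power = static base correction + dynamic accommodation.
    base_power_d is a hypothetical example value, not from the patent."""
    return base_power_d + accommodation_demand(distance_m)
```

For example, reading at 40 cm demands 2.5 D of accommodation beyond the base correction.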
Head-mounted electronic vision aid device and automatic image magnification method thereof
Disclosed in the present invention is a head-mounted electronic vision aid device and an image magnification method thereof. The head-mounted electronic vision aid device comprises a memory unit, a processing unit, an image zooming unit, and at least one ranging unit. The ranging unit is configured to obtain distance data between the device and a target object of interest to a user, and/or three-dimensional profile data of the object, and to output the data to the processing unit. The memory unit stores a correspondence table between the distance data and the magnification of the image zooming unit. The processing unit confirms the target object of interest to the user, performs operations on the distance data and/or the three-dimensional profile data of the object, and, according to the correspondence table, outputs a magnification matching the distance data to the image zooming unit. The image zooming unit then automatically adjusts to the matching magnification. For visually impaired users, accurate, intuitive, and rapid automatic magnification of target objects of interest can thus be realized on demand. Compared with the prior art, repeated and tedious manual adjustment is avoided, and the user experience is greatly improved.
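A minimal sketch of the correspondence-table lookup the processing unit performs, assuming the table maps distance bands to fixed magnifications. The band boundaries and magnification values below are hypothetical illustrations, not taken from the patent.

```python
import bisect

# Hypothetical correspondence table: upper bound of each distance band
# (metres) and the magnification assigned to that band. The final
# magnification applies beyond the last band.
DISTANCE_BREAKS = [0.5, 1.0, 2.0, 5.0]
MAGNIFICATIONS = [1.0, 2.0, 4.0, 8.0, 12.0]

def magnification_for(distance_m: float) -> float:
    """Return the magnification matching a measured distance,
    per the stored correspondence table."""
    idx = bisect.bisect_left(DISTANCE_BREAKS, distance_m)
    return MAGNIFICATIONS[idx]
```

A table lookup like this is what lets the zoom adjust "accurately, intuitively and rapidly" without the user turning a dial: the ranging unit's output selects the band, and the zoom unit is commanded directly.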
Intelligent blind guide method and apparatus
The embodiments of the present disclosure relate to the field of artificial intelligence and disclose an intelligent blind-guide method and apparatus that address the problem that existing intelligent blind-guide systems require whole-process intervention by a human customer-service agent, resulting in a high workload. The intelligent blind-guide method includes: obtaining a confidence value for the intelligent blind guide according to sensor information, wherein the confidence indicates the reliability of blind-guide information generated from the sensor information without human decision-making; generating the blind-guide information from the sensor information if the confidence is greater than or equal to a preset threshold; and invoking human decision-making to generate the blind-guide information if the confidence is less than the preset threshold. The embodiments of the present disclosure may be applied to a blind-guide helmet.
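The confidence-threshold routing described above can be sketched as follows. The threshold value and the two placeholder handlers are hypothetical assumptions; a real system would fuse the sensor data into navigation instructions and escalate to a remote operator.

```python
def generate_guide_info(sensor_info: dict) -> str:
    # Placeholder for the automatic path: a real system would fuse
    # camera/ultrasound data into an instruction such as "obstacle ahead".
    return f"auto-guidance from {sorted(sensor_info)}"

def request_human_decision(sensor_info: dict) -> str:
    # Placeholder for escalating to a human operator.
    return "awaiting operator guidance"

def guide(sensor_info: dict, confidence: float, threshold: float = 0.8) -> str:
    """Generate guidance automatically when the confidence of the
    sensor-derived decision meets the preset threshold; otherwise
    fall back to human decision-making."""
    if confidence >= threshold:
        return generate_guide_info(sensor_info)
    return request_human_decision(sensor_info)
```

The key design point is that the operator is only consulted below the threshold, which is what reduces the whole-process human workload the abstract complains about.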
DYNAMIC VISION ENABLING VISOR
Systems for presenting environmental data include a frequency-emitting device; a frequency-receiving device tuned to receive a reflected signal originating from the frequency-emitting device; a processor; and a sound-emitting device adapted to play a sound transmission. The processor is programmed to compile data from the reflected signal and convert that data into the sound transmission.
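One plausible reading of the reflected-signal-to-sound conversion is echolocation-style sonification: compute distance from the round-trip echo delay, then map distance to tone pitch. The pitch mapping below (near objects sound higher, 10 m maximum range) is an assumption for illustration, not the patent's scheme.

```python
def distance_from_echo(delay_s: float, speed_mps: float = 343.0) -> float:
    """Round-trip echo delay -> distance, using the speed of sound
    in air by default (the signal travels out and back)."""
    return speed_mps * delay_s / 2.0

def tone_for_distance(distance_m: float,
                      near_hz: float = 2000.0,
                      far_hz: float = 200.0,
                      max_range_m: float = 10.0) -> float:
    """Map distance to a tone frequency for the sound-emitting device:
    nearer objects produce a higher pitch (illustrative mapping)."""
    frac = min(max(distance_m / max_range_m, 0.0), 1.0)
    return near_hz + frac * (far_hz - near_hz)
```

A 20 ms echo delay thus corresponds to an object about 3.4 m away, rendered as a mid-range tone.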
Tactile Vision
A seeing device and process that enable a visually impaired user to see using touch. The device comprises sensors/cameras worn approximately at eye level, a microprocessor, and a garment containing a matrix of small tactile elements that vibrate in a pattern related to the location of objects in front of the user. The cameras capture images of the area and map the depth and position of objects. This map is translated onto the person's skin through the garment's tactile elements, with each element corresponding to a zone in real space. Some of the tactile elements trigger sequentially in a snaking pattern, with only certain elements activating; this helps the person sense where there are objects in his or her path.
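The snaking activation of tactile elements can be sketched as a boustrophedon scan over the depth map, triggering only the elements whose zone contains a near object. The depth threshold and map layout below are illustrative assumptions, not specifics from the patent.

```python
def snake_order(rows: int, cols: int):
    """Yield (row, col) indices in a boustrophedon ("snaking") scan:
    left-to-right on even rows, right-to-left on odd rows."""
    for r in range(rows):
        cols_iter = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cols_iter:
            yield (r, c)

def activation_pattern(depth_map, near_threshold):
    """Return, in snake order, the tactile elements whose corresponding
    zone contains an object closer than near_threshold."""
    return [(r, c)
            for r, c in snake_order(len(depth_map), len(depth_map[0]))
            if depth_map[r][c] < near_threshold]
```

Sweeping the vibration across the matrix in a fixed snake order gives the wearer a consistent spatial scan, so the timing of each pulse indicates which zone of real space the obstacle occupies.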