Patent classifications
G06F16/50
Detecting cross-lingual comparable listings
In various example embodiments, a system and method for a Listing Engine are described. The Listing Engine translates a first listing from a first language to a second language. The first listing includes an image(s) of a first item. The Listing Engine provides as input to an encoded neural network model a portion(s) of the translated first listing and a portion(s) of a second listing in the second language. The second listing includes an image(s) of a second item. The Listing Engine receives from the encoded neural network model a first feature vector for the translated first listing and a second feature vector for the second listing. The first and the second feature vectors both include at least one type of image signature feature and at least one type of listing text-based feature. Based on a similarity score of the first and second feature vectors at least meeting a similarity score threshold, the Listing Engine generates a pairing of the first listing in the first language with the second listing in the second language for inclusion in training data of a machine translation system.
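The pairing step above can be sketched with a conventional vector-similarity check. The abstract does not specify the similarity measure or the threshold value, so cosine similarity and the threshold `0.9` below are illustrative assumptions, as are the function names and the vector contents:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def pair_listings(vec_translated, vec_candidate, threshold=0.9):
    """Pair two listings when the similarity score of their feature
    vectors at least meets the similarity score threshold."""
    score = cosine_similarity(vec_translated, vec_candidate)
    return score >= threshold, score

# Each vector concatenates an image-signature part and a text-based part,
# mirroring the two feature types named in the abstract (values invented).
v1 = [0.12, 0.80, 0.05] + [0.33, 0.91]
v2 = [0.10, 0.78, 0.07] + [0.35, 0.88]
paired, score = pair_listings(v1, v2, threshold=0.9)
```

A pairing produced this way would then be emitted as a bilingual training example for the machine translation system.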
System and method for authenticating transactions from a mobile device
Systems and methods for authenticating transactions from a mobile device are described, including authenticating a user and a merchant location. A remote server receives an authentication request from a point-of-sale device, and requests that a user's mobile device use an associated camera to take a picture of the user at the merchant location. The remote server then processes the picture to determine the authenticity of the user and the location, and provides an authentication approval or denial to the point-of-sale device, instructing the point-of-sale device to execute, or not to execute, the transaction.
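The server-side decision flow can be sketched as follows. The abstract does not disclose how the picture is processed, so the biometric and location checks below are placeholders, and every name (`AuthRequest`, `authenticate`, the geotag tolerance) is a hypothetical illustration of the approve/deny logic:

```python
from dataclasses import dataclass

@dataclass
class AuthRequest:
    """Authentication request received from a point-of-sale device."""
    user_id: str
    merchant_location: tuple  # (lat, lon) of the merchant

def verify_user(photo: bytes, user_id: str) -> bool:
    # Placeholder check: a real system would compare the picture
    # against enrolled biometric data for this user.
    return photo.startswith(b"FACE:" + user_id.encode())

def verify_location(photo_geotag, merchant_location, tolerance=0.01):
    # Accept if the picture's geotag falls within tolerance of the
    # merchant location reported by the point-of-sale device.
    return (abs(photo_geotag[0] - merchant_location[0]) <= tolerance and
            abs(photo_geotag[1] - merchant_location[1]) <= tolerance)

def authenticate(request, photo, photo_geotag):
    """Approve only when both the user and the location check out;
    the point-of-sale device executes the transaction on approval."""
    if (verify_user(photo, request.user_id) and
            verify_location(photo_geotag, request.merchant_location)):
        return "APPROVE"
    return "DENY"

req = AuthRequest(user_id="alice", merchant_location=(40.7128, -74.0060))
decision = authenticate(req, b"FACE:alice", (40.7129, -74.0061))
```

The key design point is that a single captured picture serves double duty: it authenticates the person and, via its geotag, the merchant location.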
Client-server multimedia archiving system with metadata encapsulation
A system, method and computer program product for archiving image, audio, and text data with metadata encapsulation in a client-server storage library is described. The server receives and holds the images, audio, or text to be archived in an image, audio or text logical partition which includes a directory of the images, audio, or text. The information is encapsulated in a metadata wrapper and stored in the library as a closed image, audio, or text file along with a closed copy of the directory. The closed image, audio, or text directory is also stored in the client. The images may be encapsulated in MXF, DICOM, Tape Archive (TAR) or GZIP formats. The storage library may have magnetic tapes, magnetic disks or optical disks as storage media.
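The TAR case from the list of supported formats can be sketched with Python's standard `tarfile` module. The wrapper layout below (payload plus a sidecar JSON metadata member in one archive) is an assumption for illustration; the patent does not prescribe this structure:

```python
import io
import json
import tarfile

def archive_with_metadata(payload: bytes, name: str, metadata: dict) -> bytes:
    """Encapsulate a media file and a JSON metadata wrapper in a
    single closed TAR archive, returned as bytes."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        meta_bytes = json.dumps(metadata).encode()
        for member_name, data in ((name, payload),
                                  (name + ".meta.json", meta_bytes)):
            info = tarfile.TarInfo(name=member_name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

archive = archive_with_metadata(
    b"\x89PNG...",                      # stand-in image payload
    "scan001.png",
    {"modality": "image", "partition": "images"},
)
```

Once written, the archive and a closed copy of the partition directory would be stored in the library, with the directory also retained on the client.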
EYE CENTER LOCALIZATION METHOD AND LOCALIZATION SYSTEM THEREOF
An eye center localization method includes performing an image sketching step, a frontal face generating step, an eye center marking step and a geometric transforming step. The image sketching step is performed to drive a processing unit to sketch a face image from the image. The frontal face generating step is performed to drive the processing unit to transform the face image into a frontal face image according to a frontal face generating model. The eye center marking step is performed to drive the processing unit to mark frontal eye center position information on the frontal face image. The geometric transforming step is performed to drive the processing unit to calculate two rotating variables between the face image and the frontal face image, and calculate the eye center position information according to the two rotating variables and the frontal eye center position information.
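The geometric transforming step can be illustrated with a plain 2-D rotation. The abstract names "two rotating variables" without defining them; the sketch below assumes, purely for illustration, an in-plane rotation angle and a scale factor mapping the frontal eye center back into the original face image:

```python
import math

def rotate_point(point, center, angle_rad):
    """Rotate a 2-D point about a center by angle_rad (counter-clockwise)."""
    x, y = point[0] - center[0], point[1] - center[1]
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (center[0] + c * x - s * y,
            center[1] + s * x + c * y)

def eye_center_in_original(frontal_eye, image_center, roll_rad, scale=1.0):
    """Map the eye center marked on the frontal face image back to the
    original face image using the estimated rotation and scale."""
    rotated = rotate_point(frontal_eye, image_center, roll_rad)
    return (image_center[0] + scale * (rotated[0] - image_center[0]),
            image_center[1] + scale * (rotated[1] - image_center[1]))

# Eye center found at (120, 90) in the frontal view; undo a 90-degree roll.
eye = eye_center_in_original((120.0, 90.0), (100.0, 100.0), math.radians(90))
```

The advantage of this route is that eye centers are only ever detected on a normalized frontal face, and geometry carries the result back to the arbitrarily posed input.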
Post capture imagery processing and deployment systems
A post capture imagery processing system is provided. The system is for use with aerial imagery and includes a server having a processor and a memory and a software application providing instruction to the server to process the captured aerial imagery, such as spherical imagery. The server further includes instructions to geo-rectify the spherical imagery. The geo-rectifying of the spherical imagery may include either using a third-party GIS map to associate corresponding data with the spherical imagery in order to produce a geo-referenced spherical image, or calculating the geo-references by a software application performing particular operations on the server.