    • 1. Invention patent
    • Title: Anti-spoofing
    • Publication number: GB2588538A
    • Publication date: 2021-04-28
    • Application number: GB202018751
    • Filing date: 2019-12-04
    • Applicant: YOTI HOLDING LTD
    • Inventors: SYMEON NIKITIDIS; FRANCISCO ANGEL GARCIA RODRIGUEZ; ERLEND DAVIDSON; SAMUEL NEUGBER
    • IPC: G06K9/00
    • Abstract: A method of configuring an anti-spoofing system to detect if a spoofing attack has been attempted, in which an image processing component of the anti-spoofing system is trained to process 2D verification images according to a set of image processing parameters, in order to extract depth information from the 2D verification images. The configured anti-spoofing system comprises an anti-spoofing component which uses an output from the processing of a 2D verification image by the image processing component to determine whether an entity captured in that image corresponds to an actual human or a spoofing entity. The image processing parameters are learned during the training from a training set of captured 3D training images of both actual humans and spoofing entities, each 3D training image comprising 2D image data and corresponding depth data, by: processing the 2D image data of each 3D training image according to the image processing parameters, so as to compute an image processing output for comparison with the corresponding depth data of that 3D image; and adapting the image processing parameters in order to match the image processing outputs to the corresponding depth data, thereby training the image processing component to extract depth information from 2D verification images.
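The training loop described in the abstract of entry 1 (process the 2D image data, compare the output with the corresponding depth data, and adapt the parameters until they match) can be sketched roughly as follows. This is a minimal PyTorch sketch under assumed choices of network architecture and loss, not the patented implementation.

```python
# Minimal training sketch (assumed architecture and loss, not the patented method):
# learn image processing parameters that map 2D image data to a depth map by
# matching the network output to the depth data of each 3D training image.
import torch
import torch.nn as nn

class DepthEstimator(nn.Module):
    """Tiny fully-convolutional network: RGB image -> single-channel depth map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),  # predicted depth map
        )

    def forward(self, x):
        return self.net(x)

def train_depth_estimator(loader, epochs=10, lr=1e-3):
    """loader yields (rgb, depth) pairs: 2D image data and corresponding depth data."""
    model = DepthEstimator()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for rgb, depth in loader:
            pred = model(rgb)            # image processing output
            loss = loss_fn(pred, depth)  # compare with the corresponding depth data
            opt.zero_grad()
            loss.backward()
            opt.step()                   # adapt the image processing parameters
    return model
```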
    • 3. Invention patent
    • Title: Biometric user authentication
    • Publication number: GB2574919B
    • Publication date: 2020-07-01
    • Application number: GB201903784
    • Filing date: 2017-12-21
    • Applicant: YOTI HOLDING LTD
    • Inventors: SYMEON NIKITIDIS; JAN KURCIUS; FRANCISCO ANGEL GARCIA RODRIGUEZ
    • IPC: H04L9/32; G06F21/32; G06K9/00; H04L29/06; H04W12/00; H04W12/06
    • Abstract: A method of authenticating a user 102 of a device 104 comprises: receiving data from a device motion sensor 124 (e.g. accelerometer or gyroscope) captured during a user-induced interval of motion of the device; receiving image data captured using an image capture device 126 of the device during the motion; and authenticating the user by determining whether characteristics of the user-induced motion match characteristics of an expected pattern, previously learned from and uniquely associated with the authorized user, and by analysing the image data to determine whether three-dimensional facial structure is present therein. The method may be combined with facial recognition. The image and motion data may be compared to verify that movement of the facial structure corresponds to the device motion. The motion data may be domain transformed to form a feature vector comprising cepstral coefficients to be inputted to a neural network trained to distinguish between feature vectors from different users, the training being based on feature vectors captured either from the authorized user, or from a group of users not including the authorized user. The authentication may include inputting the neural network output vector to a support vector machine one-class or binary classifier.
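The motion branch of entry 3 mentions domain-transforming the sensor data into a feature vector of cepstral coefficients. A rough NumPy/SciPy sketch of such a transform is given below; the frame length, hop size, window and coefficient count are assumptions, not values from the patent.

```python
# Hypothetical sketch: transform a raw 1D motion-sensor signal into a fixed-length
# feature vector of cepstral coefficients (frame, window, log-magnitude spectrum, DCT).
import numpy as np
from scipy.fftpack import dct

def cepstral_features(signal, frame_len=128, hop=64, n_coeffs=13):
    """signal: 1D accelerometer/gyroscope trace (at least frame_len samples long)."""
    signal = np.asarray(signal, dtype=float)
    window = np.hanning(frame_len)
    coeffs = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        log_spectrum = np.log(np.abs(np.fft.rfft(frame)) + 1e-8)
        coeffs.append(dct(log_spectrum, norm='ortho')[:n_coeffs])
    return np.mean(coeffs, axis=0)  # one feature vector per motion interval
```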
    • 4. Invention patent
    • Title: Biometric user authentication
    • Publication number: GB2569794A
    • Publication date: 2019-07-03
    • Application number: GB201721636
    • Filing date: 2017-12-21
    • Applicant: YOTI HOLDING LTD
    • Inventors: SYMEON NIKITIDIS; JAN KURCIUS; FRANCISCO ANGEL GARCIA RODRIGUEZ
    • IPC: H04L9/32; G06F21/32; G06K9/00; H04W12/06
    • Abstract: Methods are disclosed of authenticating a device user. A motion sensor 126, such as an accelerometer or gyroscope, captures data during user-induced device motion. A feature vector obtained from the data, possibly by obtaining cepstral coefficients 202, is inputted to a (possibly convolutional) neural network 204 trained to distinguish between feature vectors from different users. A network vector output is used to determine whether the motion matches an expected authorized user motion pattern. In one embodiment (fig. 6), the network is trained based on feature vectors captured from a group of training users not including the authorized user. The vector output may be inputted to a one-class support vector machine (SVM) or binary classifier 206. In another embodiment (fig. 7), the network is trained based on earlier captured authorized user vectors. It reproduces inputted authorized user vectors as vector outputs, and authentication includes determining whether there is a discrepancy between network input and output vectors. Another method comprises capturing both device motion data and image capture device data during user-induced motion of the device, and authenticating the user by comparing the motion data to an expected pattern, and analysing the image data to determine whether three-dimensional facial structure is present.
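For the first embodiment of entry 4 (an embedding network trained on other users, whose output vector is scored by a one-class classifier), a hedged sketch using scikit-learn's OneClassSVM as a stand-in for classifier 206 might look like this; the embedding function is left abstract and the SVM hyperparameters are assumptions.

```python
# Hypothetical sketch of the group-trained-embedding variant: enrol the authorized
# user on embeddings of their own motion feature vectors, then score new samples
# with a one-class SVM (standing in for classifier 206).
import numpy as np
from sklearn.svm import OneClassSVM

def enrol(embed, enrolment_vectors):
    """embed: maps a motion feature vector to the neural network's output vector.
    enrolment_vectors: feature vectors captured from the authorized user."""
    embeddings = np.stack([embed(v) for v in enrolment_vectors])
    return OneClassSVM(kernel="rbf", nu=0.1).fit(embeddings)

def authenticate(svm, embed, feature_vector):
    """True if the new motion sample is classified as the authorized user."""
    return svm.predict(embed(feature_vector).reshape(1, -1))[0] == 1
```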
    • 5. Invention patent
    • Title: Biometric user authentication
    • Publication number: GB2574919A
    • Publication date: 2019-12-25
    • Application number: GB201903784
    • Filing date: 2017-12-21
    • Applicant: YOTI HOLDING LTD
    • Inventors: SYMEON NIKITIDIS; JAN KURCIUS; FRANCISCO ANGEL GARCIA RODRIGUEZ
    • IPC: H04L9/32; G06F21/32; G06K9/00; H04L29/06; H04W12/00; H04W12/06
    • Abstract: A method of authenticating a user 102 of a device 104 comprises: receiving data from a device motion sensor 124 (e.g. accelerometer or gyroscope) captured during a user-induced interval of motion of the device; receiving image data captured using an image capture device 126 of the device during the motion; and authenticating the user by determining whether characteristics of the user-induced motion match characteristics of an expected pattern, previously learned from and uniquely associated with the authorized user, and by analysing the image data to determine whether three-dimensional facial structure is present therein. The method may be combined with facial recognition. The image and motion data may be compared to verify that movement of the facial structure corresponds to the device motion. The motion data may be domain transformed to form a feature vector comprising cepstral coefficients to be inputted to a neural network trained to distinguish between feature vectors from different users, the training being based on feature vectors captured either from the authorized user, or from a group of users not including the authorized user. The authentication may include inputting the neural network output vector to a support vector machine one-class or binary classifier.
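Entries 3 and 5 also mention comparing the image and motion data to verify that movement of the facial structure corresponds to the device motion. In principle this could be a simple trajectory-correlation check like the sketch below; the yaw-only comparison and the correlation threshold are assumptions, not the claimed method.

```python
# Hypothetical cross-check sketch: verify that head movement observed in the image
# stream is consistent with the device motion reported by the gyroscope.
import numpy as np

def motion_matches_face(device_yaw, face_yaw, min_corr=0.8):
    """device_yaw: per-frame device yaw angles integrated from the gyroscope.
    face_yaw: per-frame head yaw angles estimated from the captured images.
    Returns True if the two trajectories are strongly correlated (in either sign)."""
    corr = np.corrcoef(np.asarray(device_yaw, float), np.asarray(face_yaw, float))[0, 1]
    return abs(corr) >= min_corr
```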
    • 6. Invention patent
    • Title: Anti-spoofing
    • Publication number: GB2607496A
    • Publication date: 2022-12-07
    • Application number: GB202211582
    • Filing date: 2019-12-04
    • Applicant: YOTI HOLDING LTD
    • Inventors: SYMEON NIKITIDIS; FRANCISCO ANGEL GARCIA RODRIGUEZ; ERLEND DAVIDSON; SAMUEL NEUGBER
    • IPC: G06V40/40; G06V40/16
    • Abstract: An anti-spoofing system 602 is disclosed which comprises a depth estimation component, a global anti-spoofing classifier, and a patch-based anti-spoofing classifier. The depth estimation component receives a 2D verification image (206) and extracts estimated depth information therefrom. The global anti-spoofing classifier 504 uses the extracted depth information to classify the 2D verification image in relation to real (actual humans) and spoofing classes, and thereby assigns a global classification value to the whole of the image. The patch-based anti-spoofing classifier 1102a,b classifies each image patch of the 2D verification image in relation to the real and spoofing classes, and thereby assigns a local classification value to each image patch. The system combines 1104 the global and local classification values to determine whether an entity captured in the 2D verification image corresponds to an actual human or a spoofing entity. The patch-based classifier could employ convolutional neural networks 110a,b to define patches. The methods could be used to detect mask, cut-out, replay or print attacks.
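The combination step 1104 in entry 6, fusing the global classification value with the per-patch values into one real-vs-spoof decision, might be as simple as a weighted score fusion; the averaging of patch scores and the fixed weight below are assumptions rather than the claimed combination rule.

```python
# Hypothetical score-fusion sketch: combine the global classification value with
# the per-patch classification values into a single real-vs-spoof decision.
def combine_scores(global_score, patch_scores, weight=0.5, threshold=0.5):
    """global_score: probability the whole image shows a real human.
    patch_scores: list of per-patch probabilities of the 'real' class.
    Returns True if the fused score classifies the entity as an actual human."""
    local_score = sum(patch_scores) / len(patch_scores)
    fused = weight * global_score + (1.0 - weight) * local_score
    return fused >= threshold
```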
    • 9. Invention patent
    • Title: Anti-spoofing
    • Publication number: GB2579583A
    • Publication date: 2020-07-01
    • Application number: GB201819794
    • Filing date: 2018-12-04
    • Applicant: YOTI HOLDING LTD
    • Inventors: SYMEON NIKITIDIS; FRANCISCO ANGEL GARCIA RODRIGUEZ
    • IPC: G06K9/00; G06N3/08
    • Abstract: A method of configuring an anti-spoofing system 150 to detect if a spoofing attack has been attempted. An image processing component (308) of the anti-spoofing system is trained to process 2D verification images (600) according to a set of image processing parameters in order to extract depth information from the 2D images. The configured anti-spoofing system uses an output from the processing of a 2D image to determine whether an entity captured in that image corresponds to an actual human or a spoofing entity. The image processing parameters are learned during training from a set of 3D training images of both actual humans and spoofing entities, each 3D image comprising 2D image data and corresponding depth data. The training comprises processing the 2D image data of each 3D training image according to the image processing parameters, so as to compute an image processing output for comparison with the corresponding depth data of that 3D image; and adapting the image processing parameters in order to match the image processing outputs to the corresponding depth data, thereby training the image processing component to extract depth information from 2D verification images. An independent claim is also included for determining spoofing attacks.
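At verification time, the configured system described in entry 9 uses the output of the image processing component to decide whether the captured entity is a real human or a spoofing entity. A minimal inference sketch, assuming a trained depth estimator such as the one sketched under entry 1 and a simple classifier head standing in for the anti-spoofing component:

```python
# Hypothetical inference sketch: estimate depth from a 2D verification image and let
# a classifier head (standing in for the anti-spoofing component) decide real vs. spoof.
import torch

@torch.no_grad()
def is_real_human(depth_estimator, classifier, image):
    """depth_estimator: trained module mapping a (1, 3, H, W) image to a depth map.
    classifier: module mapping the estimated depth map to a single real-vs-spoof logit."""
    depth = depth_estimator(image)   # output of the image processing component
    logit = classifier(depth)        # anti-spoofing component's decision score
    return torch.sigmoid(logit).item() > 0.5
```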