    • 3. Invention publication
    • Title: LANE TRACKING METHOD AND APPARATUS
    • Publication number: EP4141736A1
    • Publication date: 2023-03-01
    • Application number: EP20933833.4
    • Filing date: 2020-04-28
    • Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    • Inventors: YUAN, Weiping; WU, Zuguang; ZHOU, Peng
    • IPC: G06K9/00
    • Abstract: This application provides a lane line tracking method and apparatus. The method includes: obtaining a first prediction value, where the first prediction value indicates a lane line model in a vehicle coordinate system and is obtained by prediction from motion information of an autonomous driving vehicle at a prior moment; obtaining first detection information, where the first detection information includes pixels of a lane line in an image coordinate system at the current moment; determining a first mapping relationship based on the first prediction value and the first detection information, where the first mapping relationship indicates a real-time mapping relationship between the image coordinate system and the vehicle coordinate system; and determining a second prediction value based on the first mapping relationship, where the second prediction value indicates a correction value of the first prediction value. Because the real-time mapping relationship between the two coordinate systems is obtained from a prediction value and detection information, the impact of road surface changes and similar factors can be eliminated, improving the accuracy of lane line tracking. In addition, the method does not rely on a flat-plane assumption or a parallel-lane-line assumption, which makes it more broadly applicable.
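A minimal Python sketch of the predict / map / correct cycle this abstract describes. It is an illustration under stated assumptions, not the patented implementation: the quadratic lane model, the affine least-squares stand-in for the real-time mapping relationship, and all function names (predict_lane_model, estimate_mapping, correct_prediction) are hypothetical.

import numpy as np

def predict_lane_model(prev_model, motion):
    # Propagate the lane-line model (quadratic y = c0 + c1*x + c2*x^2 in the
    # vehicle frame) to the current moment from ego-motion (distance, yaw change).
    # Simplified kinematics; stands in for the "first prediction value".
    dx, dyaw = motion
    c0, c1, c2 = prev_model
    return np.array([c0 - dx * np.tan(dyaw), c1 - dyaw, c2])

def estimate_mapping(prediction, detections_px):
    # Fit a least-squares affine map from detected lane pixels (image frame)
    # to points sampled from the predicted model (vehicle frame); this stands
    # in for the "real-time mapping relationship" between the two frames.
    xs = np.linspace(5.0, 40.0, len(detections_px))
    ys = prediction[0] + prediction[1] * xs + prediction[2] * xs ** 2
    A = np.column_stack([detections_px, np.ones(len(detections_px))])
    mapping, *_ = np.linalg.lstsq(A, np.column_stack([xs, ys]), rcond=None)
    return mapping                                   # 3x2 affine transform

def correct_prediction(detections_px, mapping):
    # Project the detections into the vehicle frame with the estimated mapping
    # and refit the lane model; the refit plays the role of the
    # "second prediction value" (a correction of the first prediction).
    pts = np.column_stack([detections_px, np.ones(len(detections_px))]) @ mapping
    c2, c1, c0 = np.polyfit(pts[:, 0], pts[:, 1], 2)
    return np.array([c0, c1, c2])

# Toy single-step usage with synthetic pixel detections.
rng = np.random.default_rng(0)
detections = rng.uniform(100.0, 500.0, size=(20, 2))   # (u, v) lane pixels
predicted = predict_lane_model(np.array([1.5, 0.01, 0.0005]), motion=(2.0, 0.01))
mapping = estimate_mapping(predicted, detections)
corrected = correct_prediction(detections, mapping)

Because the mapping is re-estimated from the current detections at every step, the sketch mirrors the abstract's point that no fixed flat-road or parallel-lane assumption is baked in.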
    • 4. Invention publication
    • Title: DEVICE AND METHOD FOR REALIZING DATA SYNCHRONIZATION IN NEURAL NETWORK INFERENCE
    • Publication number: EP4075343A1
    • Publication date: 2022-10-19
    • Application number: EP19958452.5
    • Filing date: 2019-12-31
    • Applicant: Huawei Technologies Co., Ltd.
    • Inventors: WANG, Yanyan; FENG, Yuan; WU, Zuguang; ZHOU, Peng
    • IPC: G06N3/08
    • Abstract: This application provides an apparatus and a method for implementing data synchronization during neural network inference. It relates to the artificial intelligence (AI) field, and specifically to neural network inference technologies. The apparatus includes: a memory, configured to store a first feature map; and a neural-network processing unit (NPU), configured to: obtain the first feature map from the memory, where the first feature map includes M blocks and M is a positive integer; separately perform, in an asynchronous manner, inference computation at at least two layers of a neural network model on the M blocks to obtain M inference results, where the asynchronous manner means that no data synchronization is performed on the intermediate result obtained after inference computation at one layer of the neural network model is performed on each block, and inference computation at the next layer continues on that intermediate result; and perform data synchronization on the M inference results to obtain synchronized data. Because the NPU performs data synchronization only after completing inference computation at the at least two layers of the neural network model, the number of data synchronization operations during neural network inference is relatively small, and less data migration overhead is generated.
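A minimal Python sketch of the block-wise fused-inference idea this abstract describes: each block passes through two fused layers with no intermediate synchronization, and the M per-block results are synchronized (here, concatenated) exactly once. The toy elementwise layers, the thread pool standing in for an NPU, and all names are assumptions for illustration only.

import numpy as np
from concurrent.futures import ThreadPoolExecutor

def layer1(block):
    # Toy elementwise op standing in for the first fused layer (ReLU-like).
    return np.maximum(block, 0.0)

def layer2(block):
    # Second fused layer; between layer1 and layer2 no data synchronization
    # happens for a given block (the "asynchronous manner" of the abstract).
    return block * 2.0 + 1.0

def infer_block(block):
    # Run the fused layers back-to-back; intermediate results stay local
    # to this block.
    return layer2(layer1(block))

def fused_inference(feature_map, num_blocks=4):
    # Split the feature map into M blocks, run each block through the fused
    # layers independently, then perform a single data synchronization
    # (the concatenation) on the M inference results.
    blocks = np.array_split(feature_map, num_blocks, axis=0)
    with ThreadPoolExecutor(max_workers=num_blocks) as pool:
        results = list(pool.map(infer_block, blocks))
    return np.concatenate(results, axis=0)           # the one synchronization point

# Toy usage: with purely elementwise layers the blocked result matches the
# unblocked one, but only one synchronization is paid instead of one per layer.
fm = np.random.default_rng(1).standard_normal((64, 64)).astype(np.float32)
assert np.allclose(fused_inference(fm), layer2(layer1(fm)))

Real convolutional layers would need overlapping block borders (halos) for this split to stay exact; the elementwise toy layers avoid that so the single synchronization point stays easy to see.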