    • 1. Invention application
    • Title: AI MODEL USED IN AN AI INFERENCE ENGINE CONFIGURED TO PREDICT HARDWARE FAILURES
    • Publication number: WO2023022754A1
    • Publication date: 2023-02-23
    • Application number: PCT/US2022/015430
    • Filing date: 2022-02-07
    • Applicants: RAKUTEN SYMPHONY SINGAPORE PTE. LTD.; RAKUTEN MOBILE USA LLC
    • Inventors: KESAVAN, Krishnakumar; SUTHAR, Manish
    • IPC: G06F11/00; G06F11/07
    • Abstract: Server hardware failure is predicted with a probability estimate and an estimated cause of the possible future failure. Based on the prediction, the particular server can be evaluated and, if the risk is confirmed, load balancing can move the load (e.g., virtual machines (VMs)) off the at-risk server onto low-risk servers, preserving high availability of the deployed load. The flow of big data may be on the order of 1,000,000 parameters per minute, and a scalable tree-based AI inference engine processes it. One or more leading indicators (including server parameters and statistic types) are identified that reliably predict hardware failure. This allows a telco operator to monitor cloud-based VMs and, if needed, perform a hot swap by shifting VMs from the at-risk server to low-risk servers. Servers whose health score indicates high risk are shown on a visual display called a heat map, which quickly identifies at-risk servers for the telco operator. The heat map can also indicate commonalities among at-risk servers, such as whether they are correlated in terms of protocols in use, geographic location, server manufacturer, server OS load, or the particular hardware failure mechanism predicted.
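Read as a technique, this abstract describes a tree-based model that turns streaming server telemetry into a failure probability plus a likely cause, with a risk threshold that triggers VM migration. Below is a minimal sketch of that idea, assuming scikit-learn's RandomForestClassifier as a stand-in for the "scalable tree-based AI inference engine"; the feature names, cause labels, and the 0.7 migration threshold are placeholder assumptions, not details taken from the filing.

```python
# Minimal sketch only, not the filed method: a generic tree ensemble that emits
# a failure probability and a most-likely cause per server. All feature names,
# labels, and the threshold below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-server telemetry parameters (candidate leading indicators).
FEATURES = ["cpu_temp_c", "fan_rpm", "ecc_error_rate", "disk_reallocated_sectors"]
MIGRATE_THRESHOLD = 0.7  # assumed risk level above which VMs would be moved

def train(X: np.ndarray, y: np.ndarray) -> RandomForestClassifier:
    """Fit a multi-class forest; y holds 'healthy' or a failure-cause label per sample."""
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def predict_risk(model: RandomForestClassifier, x: np.ndarray):
    """Return (failure probability, most likely cause) for one server's telemetry row."""
    proba = model.predict_proba(x.reshape(1, -1))[0]
    classes = list(model.classes_)
    p_failure = 1.0 - proba[classes.index("healthy")]
    cause_idx = max((i for i, c in enumerate(classes) if c != "healthy"),
                    key=lambda i: proba[i])
    return p_failure, classes[cause_idx]

def needs_migration(p_failure: float) -> bool:
    """Decision hook: a load balancer could move VMs off servers flagged here."""
    return p_failure >= MIGRATE_THRESHOLD
```

A caller would score each server once per telemetry window and hand the flagged server IDs to whatever VM scheduler performs the hot swap; that orchestration layer is outside this sketch.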
    • 4. Invention application
    • Title: INFERENCE ENGINE CONFIGURED TO PROVIDE A HEAT MAP INTERFACE
    • Publication number: WO2023022755A1
    • Publication date: 2023-02-23
    • Application number: PCT/US2022/015431
    • Filing date: 2022-02-07
    • Applicants: RAKUTEN SYMPHONY SINGAPORE PTE. LTD.; RAKUTEN MOBILE USA LLC
    • Inventors: KESAVAN, Krishnakumar; SUTHAR, Manish
    • IPC: H04W24/02; G06F15/173
    • Abstract: Server hardware failure is predicted with a probability estimate and an estimated cause of the possible future failure. Based on the prediction, the particular server can be evaluated and, if the risk is confirmed, load balancing can move the load (e.g., virtual machines (VMs)) off the at-risk server onto low-risk servers, preserving high availability of the deployed load. The flow of big data may be on the order of 1,000,000 parameters per minute, and a scalable tree-based AI inference engine processes it. One or more leading indicators (including server parameters and statistic types) are identified that reliably predict hardware failure. This allows a telco operator to monitor cloud-based VMs and, if needed, perform a hot swap by shifting VMs from the at-risk server to low-risk servers. Servers whose health score indicates high risk are shown on a visual display called a heat map, which quickly identifies at-risk servers for the telco operator. The heat map can also indicate commonalities among at-risk servers, such as whether they are correlated in terms of protocols in use, geographic location, server manufacturer, server OS load, or the particular hardware failure mechanism predicted.
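This second application centres on the heat-map interface that groups at-risk servers by shared attributes. The sketch below shows one plausible aggregation, assuming per-server risk scores bucketed into (region, vendor) cells that a UI could colour; the field names, sample data, and 0.7 threshold are illustrative assumptions only, not the claimed interface.

```python
# Illustrative sketch: aggregate per-server risk scores into a grid keyed by
# shared attributes (here region x vendor) that a UI could render as a heat map.
# Field names, sample values, and the threshold are assumptions.
from collections import defaultdict
from statistics import mean

servers = [
    {"id": "srv-01", "region": "us-east", "vendor": "VendorA", "risk": 0.91},
    {"id": "srv-02", "region": "us-east", "vendor": "VendorA", "risk": 0.78},
    {"id": "srv-03", "region": "eu-west", "vendor": "VendorB", "risk": 0.12},
]

def heat_map_cells(servers, threshold=0.7):
    """Group servers by (region, vendor); report mean risk and at-risk IDs per cell."""
    cells = defaultdict(list)
    for s in servers:
        cells[(s["region"], s["vendor"])].append(s)
    return {
        key: {
            "mean_risk": mean(s["risk"] for s in group),
            "at_risk": [s["id"] for s in group if s["risk"] >= threshold],
        }
        for key, group in cells.items()
    }

print(heat_map_cells(servers))
```

Grouping on other attributes mentioned in the abstract (protocols in use, OS load, predicted failure mechanism) would only change the dictionary key, which is why a simple keyed aggregation is a natural fit for this kind of commonality view.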
    • 5. Invention application
    • Title: FEATURE IDENTIFICATION METHOD FOR TRAINING OF AI MODEL
    • Publication number: WO2023022753A1
    • Publication date: 2023-02-23
    • Application number: PCT/US2022/015429
    • Filing date: 2022-02-07
    • Applicants: RAKUTEN SYMPHONY SINGAPORE PTE. LTD.; RAKUTEN MOBILE USA LLC
    • Inventors: KESAVAN, Krishnakumar; SUTHAR, Manish
    • IPC: G06N20/00; G06F30/00
    • Abstract: Server hardware failure is predicted with a probability estimate and an estimated cause of the possible future failure. Based on the prediction, the particular server can be evaluated and, if the risk is confirmed, load balancing can move the load (e.g., virtual machines (VMs)) off the at-risk server onto low-risk servers, preserving high availability of the deployed load. The flow of big data may be on the order of 1,000,000 parameters per minute, and a scalable tree-based AI inference engine processes it. One or more leading indicators (including server parameters and statistic types) are identified that reliably predict hardware failure. This allows a telco operator to monitor cloud-based VMs and, if needed, perform a hot swap by shifting VMs from the at-risk server to low-risk servers. Servers whose health score indicates high risk are shown on a visual display called a heat map, which quickly identifies at-risk servers for the telco operator. The heat map can also indicate commonalities among at-risk servers, such as whether they are correlated in terms of protocols in use, geographic location, server manufacturer, server OS load, or the particular hardware failure mechanism predicted.
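This third application concerns identifying which telemetry parameters serve as the leading indicators used for training. One conventional way to approximate that idea, sketched below under the assumption of scikit-learn feature importances, is to fit a tree ensemble on labelled telemetry and rank the features it relies on; the feature list and the ranking method are assumptions for illustration, not the method claimed in the filing.

```python
# Illustrative sketch: rank candidate telemetry parameters by how strongly a
# tree ensemble relies on them, as one plausible way to pick "leading indicators".
# The feature names and ranking criterion are placeholder assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FEATURES = ["cpu_temp_c", "fan_rpm", "mem_ecc_errors", "disk_read_latency_ms"]

def rank_leading_indicators(X: np.ndarray, y: np.ndarray, top_k: int = 3):
    """Fit a forest on labelled telemetry and return the top_k most informative features."""
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    order = np.argsort(model.feature_importances_)[::-1]
    return [(FEATURES[i], float(model.feature_importances_[i])) for i in order[:top_k]]
```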