    • 3. Invention application
    • Title: SPLIT MULTIPLICATION
    • Publication number: WO2005045662A1
    • Publication date: 2005-05-19
    • Application number: PCT/EP2004/012366
    • Filing date: 2004-11-02
    • Applicant: TELEFONAKTIEBOLAGET L M ERICSSON (publ)
    • Inventor: BERKEMAN, Anders
    • IPC: G06F7/52
    • CPC: G06F7/5324; G06F7/728
    • Abstract: A first number is multiplied by a second number by representing the first number as a first set of one or more W-bit wide numbers, and representing the second number as a second set of one or more W-bit wide numbers. Each of the W-bit wide numbers from the first set is paired with each of the W-bit wide numbers from the second set. For each pair of W-bit wide numbers, a set of sub-partial products is generated. Combinations of the sub-partial products are formed such that each combination is representable by a W-bit wide lower partial product and a carry out term. The lower partial products and the carry out terms are combined to form the product of the first number and the second number. The carry out term is advantageously representable by (W/2)+1 bits.
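The abstract's scheme can be illustrated with a minimal Python sketch: operands are split into W-bit limbs, each limb pair is multiplied via four (W/2)-bit sub-partial products, and the cross-term combination produces a W-bit lower partial product plus a carry-out term that fits in (W/2)+1 bits. The choice W=16 and all helper names are illustrative assumptions, not taken from the patent.

```python
# Sketch of split multiplication via W-bit limbs and (W/2)-bit
# sub-partial products. W and the helper names are illustrative.
W = 16
HALF = W // 2
MASK_W = (1 << W) - 1
MASK_H = (1 << HALF) - 1

def to_limbs(n):
    """Represent a non-negative integer as W-bit wide limbs, least significant first."""
    limbs = []
    while n:
        limbs.append(n & MASK_W)
        n >>= W
    return limbs or [0]

def limb_product(a, b):
    """2W-bit product of two W-bit limbs, built from (W/2)-bit sub-partial products."""
    a_lo, a_hi = a & MASK_H, a >> HALF
    b_lo, b_hi = b & MASK_H, b >> HALF
    # Four sub-partial products, each the product of two (W/2)-bit halves.
    p_ll, p_lh = a_lo * b_lo, a_lo * b_hi
    p_hl, p_hh = a_hi * b_lo, a_hi * b_hi
    # Combination representable by a W-bit lower partial product plus a
    # carry-out term; the carry fits in (W/2)+1 bits, as the abstract notes.
    comb = p_ll + ((p_lh + p_hl) << HALF)
    lower = comb & MASK_W
    carry = comb >> W
    assert carry < (1 << (HALF + 1))   # carry-out needs at most (W/2)+1 bits
    return lower + ((carry + p_hh) << W)

def split_multiply(x, y):
    """Multiply x by y, pairing every W-bit limb of x with every W-bit limb of y."""
    total = 0
    for i, a in enumerate(to_limbs(x)):
        for j, b in enumerate(to_limbs(y)):
            total += limb_product(a, b) << (W * (i + j))
    return total
```

The (W/2)+1-bit bound on the carry follows because the combined term is at most (2^(W/2)-1)^2 + 2·(2^(W/2)-1)^2·2^(W/2), whose bits above position W number at most W/2+1.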
    • 5. Invention application
    • Title: LOAD DISTRIBUTION FOR A DISTRIBUTED NEURAL NETWORK
    • Publication number: WO2020164698A1
    • Publication date: 2020-08-20
    • Application number: PCT/EP2019/053536
    • Filing date: 2019-02-13
    • Applicant: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    • Inventors: BASTANI, Saeed; LI, Yun; BERKEMAN, Anders; HENNINGSSON, Maria; KALANTARI, Ashkan
    • IPC: G06N3/063; G06N3/04
    • Abstract: A method for dynamic load distribution for a distributed neural network is disclosed. The method comprises estimating (103), in a device of the neural network, an energy usage for processing at least one non-processed layer in the device, and estimating (106), in the device of the neural network, an energy usage for transmitting layer output of at least one processed layer to a cloud service of the neural network for processing. The method further comprises comparing (107), in the device of the neural network, the estimated energy usage for processing the at least one non-processed layer in the device with the estimated energy usage for transmitting the layer output of the at least one processed layer to the cloud service. The method furthermore comprises determining (108) to process the at least one non-processed layer in the device when the estimated energy usage for transmitting the layer output of the at least one processed layer to the cloud service is equal to or greater than the estimated energy usage for processing the at least one non-processed layer, and determining (110) to transmit the layer output of the at least one processed layer to the cloud service for processing subsequent layers when the estimated energy usage for transmitting the layer output of the at least one processed layer to the cloud service is less than the estimated energy usage for processing the at least one non-processed layer in the device. Corresponding computer program product, apparatus, cloud service assembly, and system are also disclosed.
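The decision rule in this abstract can be sketched in a few lines of Python: at each layer boundary the device compares the estimated energy to process the next (non-processed) layer locally against the estimated energy to transmit the last processed layer's output to the cloud, and offloads at the first boundary where transmitting is cheaper. The `Layer` record, `transmit_energy_j` model, and the `joules_per_byte` figure are hypothetical placeholders, not values from the patent.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    process_energy_j: float   # estimated energy to run this layer on-device
    output_bytes: int         # size of this layer's output

def transmit_energy_j(num_bytes, joules_per_byte):
    """Estimated energy to transmit a layer output to the cloud service.
    A simple linear radio-cost model; purely illustrative."""
    return num_bytes * joules_per_byte

def decide_split(layers, joules_per_byte=2e-7):
    """Keep processing on-device while transmitting the last processed
    layer's output would cost at least as much as processing the next
    layer locally; offload at the first boundary where transmitting is
    cheaper. Returns the index of the first layer the cloud should run
    (len(layers) means the device processes everything)."""
    for i in range(1, len(layers)):
        e_proc = layers[i].process_energy_j                 # next non-processed layer
        e_tx = transmit_energy_j(layers[i - 1].output_bytes,
                                 joules_per_byte)           # last processed layer's output
        if e_tx < e_proc:
            return i
    return len(layers)
```

For example, with per-layer processing cost 5 mJ and shrinking layer outputs of 100 kB, 10 kB, and 1 kB, the device runs layer 0 (transmitting 100 kB would cost 20 mJ) and then offloads from layer 1's boundary onward once the 10 kB output becomes cheaper to send than the next layer is to compute.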