    • 5. Invention grant
    • Title: Field-programmable gate array based accelerator system
    • Publication No.: US08131659B2
    • Publication date: 2012-03-06
    • Application No.: US12238239
    • Filing date: 2008-09-25
    • Inventors: Ning-Yi Xu, Xiong-Fei Cai, Rui Gao, Jing Yan, Feng-Hsiung Hsu
    • Classifications: G06Q30/00, G06N3/063
    • Abstract: Accelerator systems and methods are disclosed that utilize FPGA technology to achieve better parallelism and processing speed. A Field Programmable Gate Array (FPGA) is configured with hardware logic that performs the computations associated with a neural network training algorithm, in particular a Web relevance ranking algorithm such as LambdaRank. The training data is first processed and organized by a host computing device, then streamed to the FPGA for direct access, so that the FPGA can perform high-bandwidth computation at increased training speed. Thus, large data sets such as those arising in Web relevance ranking can be processed. The FPGA may include a processing element that performs the computations of a hidden layer of the neural network training algorithm. Parallel computing may be realized using a single-instruction, multiple-data (SIMD) architecture with multiple arithmetic logic units in the FPGA.
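The hidden-layer computation the abstract describes — many identical units evaluated in lockstep across a SIMD array of arithmetic logic units — has a natural software analogue in a vectorized matrix-vector product. The sketch below is illustrative only (function names, shapes, and the tanh nonlinearity are assumptions, not details from the patent):

```python
import numpy as np

def hidden_layer(features: np.ndarray, weights: np.ndarray,
                 bias: np.ndarray) -> np.ndarray:
    """Evaluate all hidden units of one layer at once.

    Each row of `weights` feeds one hidden unit; the matrix-vector
    product computes every unit in parallel, the software analogue of
    the FPGA's SIMD bank of arithmetic logic units.
    """
    pre_activation = weights @ features + bias  # all units in one pass
    return np.tanh(pre_activation)              # assumed nonlinearity

# Example: 4 hidden units over a 3-feature document vector.
x = np.array([0.5, -1.0, 2.0])
W = np.full((4, 3), 0.1)
b = np.zeros(4)
h = hidden_layer(x, W, b)
print(h.shape)  # (4,)
```

On the FPGA, each row of the product would map to a dedicated arithmetic unit driven by the same instruction stream; here NumPy's vectorization plays that role.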
    • 8. Invention grant
    • Title: Rough-cut capacity planning with production constraints and dynamic bottleneck considerations
    • Publication No.: US07925365B2
    • Publication date: 2011-04-12
    • Application No.: US10977393
    • Filing date: 2004-10-29
    • Inventors: Tay Jin Chua, Feng Yu Wang, Wen Jing Yan, Tian Xiang Cai
    • Classifications: G06F19/00, G06Q10/06, Y02P90/86
    • Abstract: To assess the sufficiency of a plurality of machines for processing a number of items, three kinds of information are obtained: machine availability information indicating the availability of each machine for processing the items, machine capacity information indicating the capacity of each machine, and machine preference information indicating the preference of each machine. A capacity constraint, such as an upper limit on the items to be processed during a time interval, is determined from this information. At least some of the machines are then allocated to process at least some of the items based on the same information, subject to the capacity constraint. The resulting rough-cut capacity plan may be used to balance available capacity against required capacity.
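As a rough illustration of allocation under a capacity constraint, the sketch below greedily assigns items to available machines in descending preference order, capped by an interval-wide upper limit. This is a minimal toy, not the patented method; all names and the greedy policy are assumptions:

```python
def allocate(items: int, machines: list, capacity_limit: int) -> dict:
    """Assign items to machines subject to an aggregate capacity limit.

    `machines` is a list of (name, available, capacity, preference)
    tuples. Returns {machine_name: units_assigned}; the total assigned
    never exceeds `capacity_limit` for the planning interval.
    """
    plan = {}
    remaining = min(items, capacity_limit)  # upper limit for the interval
    # Fill higher-preference machines first (greedy policy, assumed).
    for name, available, capacity, _pref in sorted(
            machines, key=lambda m: m[3], reverse=True):
        if not available or remaining <= 0:
            continue
        assigned = min(capacity, remaining)
        plan[name] = assigned
        remaining -= assigned
    return plan

# Example: 100 items, a 70-item interval limit, one machine down.
plan = allocate(
    items=100,
    machines=[("M1", True, 40, 0.9), ("M2", True, 60, 0.5),
              ("M3", False, 80, 0.8)],
    capacity_limit=70,
)
print(plan)  # {'M1': 40, 'M2': 30}
```

Comparing the totals in such a plan against the demand (100 items vs. 70 assignable) is the "balance available capacity against required capacity" step the abstract mentions.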
    • 9. Invention application
    • Title: Field-Programmable Gate Array Based Accelerator System
    • Publication No.: US20100076915A1
    • Publication date: 2010-03-25
    • Application No.: US12238239
    • Filing date: 2008-09-25
    • Inventors: Ning-Yi Xu, Xiong-Fei Cai, Rui Gao, Jing Yan, Feng-Hsiung Hsu
    • Classifications: G06N3/08, G06N3/063
    • Abstract: Accelerator systems and methods are disclosed that utilize FPGA technology to achieve better parallelism and processing speed. A Field Programmable Gate Array (FPGA) is configured with hardware logic that performs the computations associated with a neural network training algorithm, in particular a Web relevance ranking algorithm such as LambdaRank. The training data is first processed and organized by a host computing device, then streamed to the FPGA for direct access, so that the FPGA can perform high-bandwidth computation at increased training speed. Thus, large data sets such as those arising in Web relevance ranking can be processed. The FPGA may include a processing element that performs the computations of a hidden layer of the neural network training algorithm. Parallel computing may be realized using a single-instruction, multiple-data (SIMD) architecture with multiple arithmetic logic units in the FPGA.