    • 3. Granted invention patent
    • Title: Hardware matrix computation for wireless receivers
    • Publication number: US07974997B2
    • Publication date: 2011-07-05
    • Application number: US11731174
    • Filing date: 2007-03-30
    • Inventors: Eliahou Arviv; Robert L. Lang; Yi-Chen Li; Oliver Ridler; Xiao-an Wang
    • IPC: G06F17/16
    • CPC: G06F17/16; H04B1/707; H04B2201/70707; H04B2201/709727
    • Abstract: In one embodiment, a receiver includes one or more signal-processing blocks and a hardware-based matrix co-processor. The one or more signal-processing blocks are adapted to generate a processed signal from a received signal. The hardware-based matrix co-processor includes two or more different matrix-computation engines, each adapted to perform a different matrix computation, and one or more shared hardware-computation units, each adapted to perform a mathematical operation. At least one signal-processing block is adapted to offload matrix-based signal processing to the hardware-based matrix co-processor. Each of the two or more different matrix-computation engines is adapted to offload the same type of mathematical processing to at least one of the one or more shared hardware-computation units. (A software sketch of the shared-unit idea follows this entry.)
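The abstract above describes an architecture in which several different matrix-computation engines share the same low-level hardware computation units instead of each duplicating them. The following minimal Python sketch models that sharing in software; the choice of engines (matrix-vector multiply and Cholesky factorization), the shared multiply-accumulate unit, and all names are illustrative assumptions, not details taken from the patent.

```python
# Minimal software sketch of the shared-unit idea from the abstract above.
# Two different matrix-computation engines (matrix-vector multiply and a
# Cholesky factorization, chosen purely for illustration) both offload the
# same low-level operation, a multiply-accumulate, to one shared unit.
# All class and function names are assumptions, not taken from the patent.

class SharedMacUnit:
    """Stands in for a shared hardware computation unit (multiply-accumulate)."""
    def mac(self, acc, a, b):
        return acc + a * b

class MatVecEngine:
    """Matrix-computation engine #1: y = A x, built only from shared MACs."""
    def __init__(self, mac_unit):
        self.mac = mac_unit
    def run(self, A, x):
        y = []
        for row in A:
            acc = 0.0
            for a, b in zip(row, x):
                acc = self.mac.mac(acc, a, b)   # offloaded to the shared unit
            y.append(acc)
        return y

class CholeskyEngine:
    """Matrix-computation engine #2: lower-triangular L with A = L L^T."""
    def __init__(self, mac_unit):
        self.mac = mac_unit
    def run(self, A):
        n = len(A)
        L = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1):
                acc = 0.0
                for k in range(j):
                    acc = self.mac.mac(acc, L[i][k], L[j][k])  # offloaded
                if i == j:
                    L[i][j] = (A[i][i] - acc) ** 0.5
                else:
                    L[i][j] = (A[i][j] - acc) / L[j][j]
        return L

# A signal-processing block would hand its matrix work to the co-processor:
mac = SharedMacUnit()
coproc = {"matvec": MatVecEngine(mac), "cholesky": CholeskyEngine(mac)}
print(coproc["matvec"].run([[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0]))   # [3.0, 7.0]
print(coproc["cholesky"].run([[4.0, 2.0], [2.0, 3.0]]))
```

Both engines route every multiply-accumulate through the single SharedMacUnit instance, which is the software analogue of two different engines offloading the same type of mathematical processing to one shared hardware unit.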
    • 5. Invention patent application
    • Title: Hardware matrix computation for wireless receivers
    • Publication number: US20080243982A1
    • Publication date: 2008-10-02
    • Application number: US11731174
    • Filing date: 2007-03-30
    • Inventors: Eliahou Arviv; Robert L. Lang; Yi-Chen Li; Oliver Ridler; Xiao-an Wang
    • IPC: G06F17/10; G06F7/32
    • CPC: G06F17/16; H04B1/707; H04B2201/70707; H04B2201/709727
    • Abstract: identical to the granted publication US07974997B2 listed above.
    • 8. Invention patent application
    • Title: Pre-emptive interleaver address generator for turbo decoders
    • Publication number: US20060242476A1
    • Publication date: 2006-10-26
    • Application number: US11103489
    • Filing date: 2005-04-12
    • Inventors: Mark Bickerstaff; Yi-Chen Li; Chris Nicol; Benjamin Widdup
    • IPC: G11C29/00
    • CPC: H03M13/2775; H03M13/2714; H03M13/276; H03M13/2771; H03M13/6516; H03M13/6519
    • Abstract: An interleaver address generator (IAG) is provided with pruning-avoidance technology. It anticipates the points in time at which the IAG would compute invalid addresses and bypasses those events, producing a stream of valid, contiguous addresses for all specified code-block sizes. A single address-computation engine first "trains" itself on the generated addresses that fall outside the valid range (for the block size in use) during the initial H1 half-iteration of decoder operation, and then produces the continuous, correct stream of addresses required by the turbo decoder. The regions containing pruned addresses are determined first, and training is then performed only in those regions, so computing and populating the pruned-event table takes less than 1/10 of the time required for conventional full training. The resulting pruned-event table is compressed down to 256 bits. (A sketch of the train-then-skip idea follows this entry.)
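The abstract above turns on two steps: a short training pass that records, per output cycle, whether the generated address would be pruned (fall outside the code-block size), and a generation pass that skips those cycles pre-emptively so the decoder sees an uninterrupted stream of valid addresses. The Python sketch below illustrates that train-then-skip idea generically; the mother permutation, the sizes, and the one-entry-per-cycle table are placeholder assumptions, not the interleaver, H1 half-iteration scheduling, or 256-bit table compression described in the patent.

```python
# Generic illustration of pruning avoidance for an interleaver address
# generator. A "mother" permutation is defined over a padded size N >= K;
# generated addresses >= K are invalid and must be pruned. A one-off
# training pass records where those pruning events occur (conceptually one
# bit per cycle), and the generation pass then skips them pre-emptively so
# the output is a contiguous stream of K valid addresses. The permutation
# and sizes are placeholders, not the patent's actual interleaver.

def mother_permutation(N, step):
    """Simple stand-in permutation: i -> (i * step) mod N, step coprime to N."""
    return [(i * step) % N for i in range(N)]

def train_pruned_events(addresses, K):
    """Training pass: mark cycle positions whose address falls outside 0..K-1."""
    return [addr >= K for addr in addresses]

def generate_addresses(addresses, pruned, K):
    """Generation pass: emit only valid addresses, skipping known pruned cycles."""
    out = []
    for addr, is_pruned in zip(addresses, pruned):
        if not is_pruned:           # pre-emptive skip, no stall on an invalid address
            out.append(addr)
    assert len(out) == K and sorted(out) == list(range(K))
    return out

N, K, step = 16, 13, 5              # padded size, code-block size, permutation step
addrs = mother_permutation(N, step)
pruned = train_pruned_events(addrs, K)
valid_stream = generate_addresses(addrs, pruned, K)
print(valid_stream)                  # 13 distinct addresses in 0..12, in interleaved order
```

In hardware, the training pass would run once per block size and the pruned-event table would be stored compactly, so the steady-state address stream never stalls on an invalid address.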
    • 9. Granted invention patent
    • Title: Pre-emptive interleaver address generator for turbo decoders
    • Publication number: US07437650B2
    • Publication date: 2008-10-14
    • Application number: US11103489
    • Filing date: 2005-04-12
    • Inventors: Mark Andrew Bickerstaff; Yi-Chen Li; Chris Nicol; Benjamin John Widdup
    • IPC: H03M13/27; H03M13/29
    • CPC: H03M13/2775; H03M13/2714; H03M13/276; H03M13/2771; H03M13/6516; H03M13/6519
    • Abstract: identical to the application publication US20060242476A1 listed above.