    • 1. Invention Application
    • DEPENDENCY-PREDICTION OF INSTRUCTIONS
    • WO2016048651A1
    • 2016-03-31
    • PCT/US2015/048959
    • 2015-09-08
    • QUALCOMM INCORPORATED
    • STEMPEL, Brian Michael; DIEFFENDERFER, James Norris; MCILVAINE, Michael Scott; BROWN, Melinda
    • G06F9/30; G06F9/38
    • G06F9/30072; G06F9/30149; G06F9/38; G06F9/3836; G06F9/3842; G06F9/3885
    • Systems and methods for dependency-prediction include executing instructions in an instruction pipeline of a processor and detecting a conditionality-imposing control instruction, such as an If-Then (IT) instruction, which imposes dependent behavior on a conditionality block size of one or more dependent instructions. Prior to executing a first instruction, a dependency-prediction is made to determine if the first instruction is a dependent instruction of the conditionality-imposing control instruction, based on the conditionality block size and one or more parameters of the instruction pipeline. The first instruction is executed based on the dependency-prediction. When the first instruction is dependency-mispredicted, an associated dependency-misprediction penalty is mitigated. If the first instruction is a branch instruction, the mitigation involves training a branch prediction tracking mechanism to correctly dependency-predict future occurrences of the first instruction.
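A minimal, illustrative sketch of the idea (not the claimed hardware): an IT-style instruction declares a conditionality block size, and later instructions are dependency-predicted purely from their distance to it. The class name, instruction positions, and the simple in-order model are assumptions for the example; the abstract's additional pipeline parameters are not modeled.

```python
# Illustrative model only: predicts whether an instruction falls inside the
# conditionality block declared by an IT-style instruction, based on the
# block size and the instruction's distance from the IT instruction.

class DependencyPredictor:
    def __init__(self):
        self.it_position = None     # position of the last IT-style instruction
        self.block_size = 0         # number of dependent instructions it declared

    def observe_it(self, position, block_size):
        """Record a conditionality-imposing control instruction."""
        self.it_position = position
        self.block_size = block_size

    def predict_dependent(self, position):
        """Dependency-predict: is the instruction at `position` inside the block?"""
        if self.it_position is None:
            return False
        distance = position - self.it_position
        return 1 <= distance <= self.block_size

predictor = DependencyPredictor()
predictor.observe_it(position=10, block_size=3)   # IT block covers positions 11-13
print(predictor.predict_dependent(12))            # True  -> predicted dependent
print(predictor.predict_dependent(15))            # False -> predicted independent
```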
    • 2. Invention Application
    • DETERMINING CACHE HIT/MISS OF ALIASED ADDRESSES IN VIRTUALLY-TAGGED CACHE(S), AND RELATED SYSTEMS AND METHODS
    • WO2013109696A2
    • 2013-07-25
    • PCT/US2013/021849
    • 2013-01-17
    • QUALCOMM INCORPORATED
    • DIEFFENDERFER, James Norris; CLANCY, Robert D; SPEIER, Thomas Philip
    • G06F12/10
    • G06F12/1063; G06F12/1045
    • Apparatuses and related systems and methods for determining cache hit/miss of aliased addresses in virtually-tagged cache(s) are disclosed. In one embodiment, a virtual aliasing cache hit/miss detector for a VIVT cache is provided. The detector comprises a TLB configured to receive a first virtual address and a second virtual address from the VIVT cache resulting from an indexed read into the VIVT cache based on the first virtual address. The TLB is further configured to generate first and second physical addresses translated from the first and second virtual addresses, respectively. The detector further comprises a comparator configured to receive the first and second physical addresses and effectuate a generation of an aliased cache hit/miss indicator based on a comparison of the first and second physical addresses. In this manner, the virtual aliasing cache hit/miss detector correctly generates cache hits and cache misses, even in the presence of aliased addressing.
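A rough sketch of the comparison described above, assuming a toy dictionary TLB and 4 KiB pages (both assumptions, not from the filing): the lookup address and the virtual address read out of the indexed VIVT line are both translated, and equal physical addresses signal an aliased hit.

```python
# Illustrative only: detect an aliased hit in a virtually-tagged cache by
# translating both virtual addresses and comparing the physical addresses.

PAGE_SHIFT = 12  # assume 4 KiB pages for the sketch

# Toy TLB: virtual page number -> physical page number
tlb = {0x10: 0x7A, 0x20: 0x7A}   # two virtual pages alias to one physical page

def translate(virtual_address):
    vpn = virtual_address >> PAGE_SHIFT
    offset = virtual_address & ((1 << PAGE_SHIFT) - 1)
    return (tlb[vpn] << PAGE_SHIFT) | offset

def aliased_hit(lookup_va, cached_va):
    """Compare translations of the lookup address and the cached virtual tag."""
    return translate(lookup_va) == translate(cached_va)

# The lookup address and the virtual address stored in the indexed cache line
# differ, but they map to the same physical address: an aliased hit.
print(aliased_hit(0x10123, 0x20123))   # True
```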
    • 3. Invention Application
    • METHOD FOR FILTERING TRAFFIC TO A PHYSICALLY-TAGGED DATA CACHE
    • WO2013109679A1
    • 2013-07-25
    • PCT/US2013/021822
    • 2013-01-17
    • QUALCOMM INCORPORATED
    • CLANCY, Robert D; DIEFFENDERFER, James Norris; SPEIER, Thomas Philip
    • G06F12/10
    • G06F12/1054; G06F12/0864; G06F12/1045; G06F12/1063; Y02D10/13
    • Embodiments of a data cache that substantially decrease the number of accesses to a physically-tagged tag array of the data cache are disclosed. In general, the data cache includes a data array that stores data elements, a physically-tagged tag array, and a virtually-tagged tag array. In one embodiment, the virtually-tagged tag array receives a virtual address. If there is a match for the virtual address in the virtually-tagged tag array, the virtually-tagged tag array outputs, to the data array, a way stored in the virtually-tagged tag array for the virtual address. In addition, in one embodiment, the virtually-tagged tag array disables the physically-tagged tag array. Using the way output by the virtually-tagged tag array, a desired data element in the data array is addressed.
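The filtering idea can be sketched as follows, with toy dictionary tag arrays and a stub `translate()` standing in for address translation (all names here are illustrative): a hit in the virtually-tagged array returns the stored way directly, so the physically-tagged array is only consulted on virtual-tag misses.

```python
# Illustrative only: consult a virtually-tagged tag array first; on a hit the
# stored way is used and the physically-tagged tag array is never read.

physical_tag_array = {0x7A000: 2}      # physical tag -> way (toy contents)
virtual_tag_array = {}                 # virtual tag  -> way, filled on misses
pt_reads = 0                           # how often the physically-tagged array is read

def translate(virtual_tag):
    return 0x7A000                     # stand-in for address translation

def lookup_way(virtual_tag):
    global pt_reads
    if virtual_tag in virtual_tag_array:
        return virtual_tag_array[virtual_tag]       # physically-tagged array stays disabled
    pt_reads += 1                                   # miss: fall back to physical lookup
    way = physical_tag_array[translate(virtual_tag)]
    virtual_tag_array[virtual_tag] = way            # fill the virtually-tagged array
    return way

lookup_way(0x1000)    # first access reads the physically-tagged array
lookup_way(0x1000)    # repeat access is filtered by the virtually-tagged array
print(pt_reads)       # 1
```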
    • 4. Invention Application
    • LINK STACK REPAIR OF ERRONEOUS SPECULATIVE UPDATE
    • WO2009046326A1
    • 2009-04-09
    • PCT/US2008/078789
    • 2008-10-03
    • QUALCOMM INCORPORATED
    • DIEFFENDERFER, James Norris; STEMPEL, Brian Michael; SMITH, Rodney Wayne
    • G06F9/38
    • G06F9/3842; G06F9/3806; G06F9/3861
    • Whenever a link address is written to the link stack, the prior value of the link stack entry is saved, and is restored to the link stack after a link stack push operation is speculatively executed following a mispredicted branch. This condition is detected by maintaining a count of the total number of uncommitted link stack write instructions in the pipeline, and a count of the number of uncommitted link stack write instructions ahead of each branch instruction. When a branch is evaluated and determined to have been mispredicted, the count associated with it is compared to the total count. A discrepancy indicates a link stack write instruction was speculatively issued into the pipeline after the mispredicted branch instruction, and pushed a link address onto the link stack. The prior link address is restored to the link stack from the link stack restore buffer.
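A simplified model of the counting scheme, using a four-entry circular stack and Python lists as the restore buffer (sizes and names are assumptions for the sketch): each push saves the entry it overwrites and increments a global write count, each branch snapshots that count, and a mismatch at misprediction time triggers restoration from the restore buffer.

```python
# Illustrative only: repair a link stack after a push that was speculatively
# executed on the wrong path of a mispredicted branch.

STACK_SIZE = 4
link_stack = [0] * STACK_SIZE
tos = 0                       # top-of-stack index
restore_buffer = []           # (index, prior value) saved on every push
total_writes = 0              # uncommitted link-stack write instructions in flight

def push_link(address):
    global tos, total_writes
    tos = (tos + 1) % STACK_SIZE
    restore_buffer.append((tos, link_stack[tos]))   # save the value being overwritten
    link_stack[tos] = address
    total_writes += 1

def snapshot_branch():
    return total_writes       # count of link-stack writes ahead of this branch

def repair_after_misprediction(snapshot):
    global tos, total_writes
    while total_writes > snapshot:                  # wrong-path push detected
        index, prior = restore_buffer.pop()
        link_stack[index] = prior                   # restore the clobbered entry
        tos = (index - 1) % STACK_SIZE
        total_writes -= 1

push_link(0x1000)
snap = snapshot_branch()                # branch that will be mispredicted
push_link(0x2000)                       # speculative wrong-path push
repair_after_misprediction(snap)
print(hex(link_stack[tos]))             # 0x1000 (top of stack restored)
```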
    • 7. Invention Application
    • CONTENT-TERMINATED DMA
    • WO2008092044A2
    • 2008-07-31
    • PCT/US2008/051965
    • 2008-01-24
    • QUALCOMM Incorporated
    • SAPP, Kevin Allen; DIEFFENDERFER, James Norris
    • G06F13/28
    • G06F13/28; Y02D10/14
    • A Content-Terminated Direct Memory Access (CT-DMA) circuit autonomously transfers data of an unknown length from a source to a destination, terminating the transfer based on the content of the data. Filter criteria are provided to the CT-DMA prior to the data transfer. The filter criteria include pattern data that are compared to transfer data, and transfer termination rules for interpreting the comparison results. Data are written to the destination until the filter criteria are met. Representative filter criteria may include that one or more units of transfer data match pattern data; that one or more units of transfer data fail to match pattern data; or that one or more units of transfer data match pattern data a predetermined number of times.
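A minimal sketch of the termination logic, assuming byte-sized transfer units and a single-byte pattern with a match-count rule (one of the representative criteria above); the function name and parameters are illustrative, not from the filing.

```python
# Illustrative only: copy data of unknown length until its content satisfies
# the filter criteria supplied before the transfer starts.

def ct_dma_transfer(source, destination, pattern, match_count=1):
    """Copy units from `source` into `destination` until `pattern` has been
    matched `match_count` times; the matching unit is still written."""
    matches = 0
    for unit in source:
        destination.append(unit)
        if unit == pattern:
            matches += 1
            if matches >= match_count:
                break                       # content-terminated: stop the transfer
    return destination

# Transfer a NUL-terminated byte stream without knowing its length up front.
incoming = iter(b"hello\x00trailing junk")
buffer = ct_dma_transfer(incoming, bytearray(), pattern=0x00)
print(bytes(buffer))                        # b'hello\x00'
```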
    • 8. Invention Application
    • METHODS AND APPARATUS FOR LOW-COMPLEXITY INSTRUCTION PREFETCH SYSTEM
    • WO2008073741A1
    • 2008-06-19
    • PCT/US2007/086254
    • 2007-12-03
    • QUALCOMM Incorporated
    • MORROW, Michael William; DIEFFENDERFER, James Norris
    • G06F9/38; G06F12/08
    • G06F9/3802; G06F12/0862
    • When misses occur in an instruction cache, prefetching techniques are used that minimize miss rates, memory access bandwidth, and power use. One of the prefetching techniques operates when a miss occurs. A notification that a fetch address missed in an instruction cache is received. The fetch address that caused the miss is analyzed to determine an attribute of the fetch address and based on the attribute a line of instructions is prefetched. The attribute may indicate that the fetch address is a target address of a non-sequential operation. Another attribute may indicate that the fetch address is a target address of a non-sequential operation and the target address is more than X% into a cache line. A further attribute may indicate that the fetch address is an even address in the instruction cache. Such attributes may be combined to determine whether to prefetch.
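A sketch of how such attributes might be combined, with an assumed 64-byte line size, an arbitrary 50% threshold standing in for the abstract's "X%", and one possible reading of "even address" as an even cache-line index; none of these values or interpretations come from the filing.

```python
# Illustrative only: decide, on an instruction-cache miss, whether to prefetch
# the next sequential cache line based on attributes of the missing address.

LINE_SIZE = 64          # bytes per cache line (assumed for the sketch)
THRESHOLD_PERCENT = 50  # stand-in for the abstract's "X%"

def should_prefetch_next_line(fetch_address, is_nonsequential_target):
    offset_percent = (fetch_address % LINE_SIZE) * 100 // LINE_SIZE
    deep_into_line = offset_percent > THRESHOLD_PERCENT
    even_line = ((fetch_address // LINE_SIZE) % 2) == 0
    # Combine attributes: prefetch when the miss is a branch target that lands
    # late in its line, or when the missing line has an even index.
    return (is_nonsequential_target and deep_into_line) or even_line

def prefetch_address(fetch_address):
    return (fetch_address // LINE_SIZE + 1) * LINE_SIZE   # start of next line

miss = 0x10F8                      # branch target landing 56 bytes into its line
if should_prefetch_next_line(miss, is_nonsequential_target=True):
    print(hex(prefetch_address(miss)))   # 0x1100
```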