    • 41. Granted invention patent
    • Multiple variable cache replacement policy
    • US06523091B2
    • 2003-02-18
    • US09931115
    • 2001-08-16
    • Anup S. Tirumala; Marc Tremblay
    • G06F12/00
    • G06F12/127; G06F12/0804
    • A method for selecting a candidate to mark as overwritable in the event of a cache miss while attempting to avoid a write back operation. The method includes associating a set of data with the cache access request, each datum of the set is associated with a way, then choosing an invalid way among the set. Where no invalid ways exist among the set, the next step is determining a way that is not most recently used among the set. Next, the method determines whether a shared resource is crowded. When the shared resource is not crowded, the not most recently used way is chosen as the candidate. Where the shared resource is crowded, the next step is to determine whether the not most recently used way differs from an associated source in the memory and where the not most recently used way is the same as an associated source in the memory, the not most recently used way is chosen as the candidate. Where the not most recently used way differs from an associated source in the memory, the candidate is chosen as the way among the set that does not differ from an associated source in the memory. Where all ways among the set differ from respective sources in the memory, the not most recently used way is chosen as the candidate and the not most recently used way is stored in the shared resource.
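The candidate-selection steps in this abstract can be sketched as a short model. This is a hypothetical illustration, not the patent's implementation; the names `Way`, `select_victim`, and the list used as the "shared resource" are invented for the sketch, and "dirty" stands in for "differs from its associated source in memory".

```python
from dataclasses import dataclass

@dataclass
class Way:
    valid: bool = False
    dirty: bool = False   # differs from its source in memory (needs write back)
    mru: bool = False     # most recently used way in this set

def select_victim(ways, resource_crowded, shared_resource):
    """Pick the way to mark overwritable on a cache miss, per the abstract."""
    # Step 1: prefer an invalid way if one exists in the set.
    for w in ways:
        if not w.valid:
            return w
    # Step 2: otherwise find a not-most-recently-used (NMRU) way.
    nmru = next(w for w in ways if not w.mru)
    # Step 3: if the shared resource is not crowded, take the NMRU way.
    if not resource_crowded:
        return nmru
    # Step 4: crowded, and the NMRU way matches memory - no write back needed.
    if not nmru.dirty:
        return nmru
    # Step 5: otherwise prefer any way in the set that matches memory.
    for w in ways:
        if not w.dirty:
            return w
    # Step 6: all ways are dirty - take the NMRU way, parking its data in
    # the shared resource so the write back is deferred.
    shared_resource.append(nmru)
    return nmru
```

The ordering mirrors the abstract's stated preference: avoid a write back whenever a clean (memory-matching) candidate exists, and fall back to the shared resource only when every way in the set is dirty.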
    • 42. Granted invention patent
    • Method and apparatus for enforcing memory reference dependencies through a load store unit
    • US06430649B1
    • 2002-08-06
    • US09327398
    • 1999-06-07
    • Shailender Chaudhry; Marc Tremblay; James M. O'Connor
    • G06F9/38
    • G06F9/3842; G06F9/30043; G06F9/3834; G06F9/3838; G06F9/3851; G06F9/3863
    • One embodiment of the present invention provides a system that enforces dependencies between memory references within a load store unit (LSU) in a processor. When a write request is received in the load store unit, the write request is loaded into a store buffer in the LSU. The write request may include a “watch address” specifying that a subsequent load from the watch address cannot occur before the write request completes. Note that the watch address is not necessarily the same as the destination address of the write operation. When a read request is received in the load store unit, the read request is loaded into a load buffer of the LSU. The system determines if the read request is directed to the same address as a matching watch address in the store buffer. If so, the system waits for the write request associated with the matching watch address to complete before completing the read request. In one embodiment of the present invention, if the read request is directed to the same address as a matching write request in the store buffer, the system completes the read request by returning a data value contained in the matching write request without going out to memory. In one embodiment of the present invention, the system provides an executable code write instruction that specifies the watch address.
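The store-buffer/load-buffer interaction this abstract describes can be sketched as follows. The class and field names are hypothetical, and the model simplifies real LSU behavior (a single ordered store buffer, no speculation); it only illustrates the watch-address wait and the store-to-load forwarding described above.

```python
class LoadStoreUnit:
    """Minimal sketch of the watch-address mechanism in the abstract."""
    def __init__(self, memory):
        self.memory = memory
        self.store_buffer = []   # pending writes, oldest first

    def write(self, dest_addr, value, watch_addr=None):
        # A write may carry a watch address; note it need not equal dest_addr.
        self.store_buffer.append({"dest": dest_addr, "value": value,
                                  "watch": watch_addr})

    def drain_one(self):
        # Complete the oldest pending write out to memory.
        req = self.store_buffer.pop(0)
        self.memory[req["dest"]] = req["value"]

    def read(self, addr):
        # A load from a matching watch address must wait until the write
        # request associated with that watch address has completed.
        while any(req["watch"] == addr for req in self.store_buffer):
            self.drain_one()
        # Store-to-load forwarding: a pending write to the same address
        # supplies the value without going out to memory.
        for req in reversed(self.store_buffer):
            if req["dest"] == addr:
                return req["value"]
        return self.memory.get(addr, 0)
```

Note the two distinct match paths: a watch-address match forces ordering (the load waits), while a destination-address match allows forwarding (the load completes early from the buffer).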
    • 43. Granted invention patent
    • Combining results of selectively executed remaining sub-instructions with that of emulated sub-instruction causing exception in VLIW processor
    • US06405300B1
    • 2002-06-11
    • US09273602
    • 1999-03-22
    • Marc Tremblay; William N. Joy
    • G06F9/44
    • G06F9/3017; G06F9/30101; G06F9/3853; G06F9/3861; G06F9/3885
    • One embodiment of the present invention provides a system that efficiently emulates sub-instructions in a very long instruction word (VLIW) processor. The system operates by receiving an exception condition during execution of a VLIW instruction within a VLIW program. This exception condition indicates that at least one sub-instruction within the VLIW instruction requires emulation in software or software assistance. In processing this exception condition, the system emulates the sub-instructions that require emulation in software and stores the results. The system also selectively executes in hardware any remaining sub-instructions in the VLIW instruction that do not require emulation in software. The system finally combines the results from the sub-instructions emulated in software with the results from the remaining sub-instructions executed in hardware, and resumes execution of the VLIW program.
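The emulate/execute/combine flow in this abstract can be sketched in a few lines. This is an illustrative model only; the function names and the callback-based split between "hardware" and "software" paths are assumptions made for the sketch.

```python
def execute_vliw(sub_instructions, hw_execute, sw_emulate, needs_emulation):
    """Combine software-emulated and hardware-executed sub-instruction
    results for one VLIW instruction, as the abstract describes."""
    results = {}
    # Emulate in software the sub-instructions that raised the exception
    # condition, storing their results.
    for i, sub in enumerate(sub_instructions):
        if needs_emulation(sub):
            results[i] = sw_emulate(sub)
    # Selectively execute the remaining sub-instructions in hardware.
    for i, sub in enumerate(sub_instructions):
        if i not in results:
            results[i] = hw_execute(sub)
    # Combine both result sets in original slot order before resuming
    # the VLIW program.
    return [results[i] for i in range(len(sub_instructions))]
```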
    • 46. Granted invention patent
    • Generation isolation system and method for garbage collection
    • US6098089A
    • 2000-08-01
    • US841543
    • 1997-04-23
    • James Michael O'Connor; Marc Tremblay; Sanjay Vishin
    • G06F12/00; G06F12/02; G06F17/30
    • G06F12/0276; Y10S707/99953; Y10S707/99957
    • Architectural support for generation isolation is provided through trapping of intergenerational pointer stores. Identification of pointer stores as intergenerational is performed by a store barrier responsive to an intergenerational pointer store trap matrix that is programmably encoded with store target object and store pointer data generation pairs to be trapped. The write barrier and intergenerational pointer store trap matrix provide a programmably-flexible definition of generation pairs to be trapped, affording a garbage collector implementer with support for a wide variety of generational garbage collection methods, including remembered set-based methods, card-marking type methods, write barrier based copying collector methods, etc., as well as combinations thereof and combinations including train algorithm type methods for managing mature portions of a generationally collected memory space. Pointer specific store instruction replacement allows implementations in accordance with this invention to provide an exact barrier to not only pointer stores, but to the specific intergenerational pointer stores of interest to a particular garbage collection method or combination of methods.
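The programmable trap matrix in this abstract can be sketched as a small table consulted by a store barrier. The policy shown (trap young-generation pointers stored into older objects, feeding a remembered set) is one hypothetical configuration of the matrix, chosen for illustration; the heap representation and names are likewise invented for the sketch.

```python
NUM_GENERATIONS = 3

# Trap matrix: trap_matrix[target_gen][pointer_gen] is True when storing a
# pointer to an object in pointer_gen into an object in target_gen must trap.
trap_matrix = [[False] * NUM_GENERATIONS for _ in range(NUM_GENERATIONS)]

# Example policy: a remembered-set collector traps stores of young (gen 0)
# pointers into objects of the older generations.
trap_matrix[1][0] = True
trap_matrix[2][0] = True

remembered_set = set()

def pointer_store(heap, target_obj, field, pointer):
    """Store barrier: trap exactly the generation pairs the matrix selects."""
    target_gen = heap[target_obj]["gen"]
    pointer_gen = heap[pointer]["gen"]
    if trap_matrix[target_gen][pointer_gen]:
        # The "trap": here, record the target object for later scanning.
        remembered_set.add(target_obj)
    heap[target_obj]["fields"][field] = pointer
```

Because the matrix is data rather than hardwired logic, re-encoding it switches the barrier between policies (remembered sets, card marking, train-algorithm boundaries) without changing the store path, which is the flexibility the abstract claims.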
    • 47. Granted invention patent
    • Method for stack-caching method frames
    • US6092152A
    • 2000-07-18
    • US880466
    • 1997-06-23
    • Marc Tremblay; James Michael O'Connor
    • G06F9/30; G06F12/08; G06F12/00
    • G06F9/30134; G06F12/0875
    • The present invention includes methods for caching method frames using multiple stack cache management units to provide access to multiple portions of the method frames. In some embodiments of the invention, a first frame component of a first method frame is cached in a first stack cache management unit. A second frame component of the first method frame is cached in a second stack cache management unit. In addition, a first frame component of a second method frame is also cached in the second stack cache management unit and a second frame component of the second method frame is cached in the first stack cache management unit. The first frame components of the method frames can be, for example, operand stacks of the method frames. The second frame components of the method frames can be, for example, the arguments and local variable areas of the method frames.
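The criss-cross placement this abstract describes (frame N's operand stack and frame N+1's arguments/locals in opposite units) can be sketched as follows. The class and the even/odd alternation rule are assumptions made to reproduce the placement of the two frames in the abstract, not details taken from the patent.

```python
class StackCacheUnit:
    """One of multiple stack cache management units (a toy model)."""
    def __init__(self, name):
        self.name = name
        self.entries = {}   # (frame_id, component) -> cached data

    def cache(self, frame_id, component, data):
        self.entries[(frame_id, component)] = data

def cache_frame(frame_id, operand_stack, args_and_locals, unit_a, unit_b):
    """Alternate which unit holds which component: even-numbered frames
    keep their operand stack in unit_a, odd-numbered frames in unit_b."""
    first, second = (unit_a, unit_b) if frame_id % 2 == 0 else (unit_b, unit_a)
    first.cache(frame_id, "operand_stack", operand_stack)
    second.cache(frame_id, "args_locals", args_and_locals)
```

With this alternation, a caller's operand stack and its callee's arguments land in different units, so both portions of adjacent frames can be accessed through separate units.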
    • 48. Granted invention patent
    • Multi-stack-caching memory architecture
    • US6067602A
    • 2000-05-23
    • US880633
    • 1997-06-23
    • Marc Tremblay; James Michael O'Connor
    • G06F9/38; G06F12/08
    • G06F9/38; G06F12/0875; G06F9/3851
    • The present invention provides a memory system that caches method frames using multiple stack cache management units to provide access to multiple portions of the method frames. In some embodiments of the invention, a memory system includes a main memory circuit, a first stack cache management unit, and a second stack cache management unit. The first stack cache management unit is configured to cache a first frame component of a first method frame and a second frame component of a second method frame. The second stack cache management unit is configured to cache a second frame component of the first method frame and a first frame component of the second method frame. Some embodiments of the memory system also include a main memory cache coupled between the main memory circuit and the stack cache management units. The first frame components of the method frames can be, for example, the operand stacks of the method frames. The second frame components of the method frames can be, for example, the arguments and local variable areas of the method frames.
    • 49. Granted invention patent
    • Non-quick instruction accelerator including instruction identifier and data set storage and method of implementing same
    • US6065108A
    • 2000-05-16
    • US788805
    • 1997-01-23
    • Marc Tremblay; James Michael O'Connor
    • G06F9/30; G06F9/318; G06F9/345; G06F9/38; G06F9/40; G06F9/42; G06F9/445; G06F9/455; G06F9/34
    • G06F9/30021; G06F9/30134; G06F9/30149; G06F9/3017; G06F9/30174; G06F9/30181; G06F9/345; G06F9/3802; G06F9/3808; G06F9/383; G06F9/4425; G06F9/443; G06F9/44589; G06F9/45504; G06F2212/451
    • An instruction accelerator which includes a processor and an associative memory. The processor is coupled to receive a stream of instructions and a corresponding stream of instruction identifier values. The instructions include at least one non-quick instruction which has a first associated data set which must be accessed prior to executing the non-quick instruction. A memory, which is coupled to the processor, stores one or more instruction identifier values and one or more associated data sets. The memory receives the stream of instruction identifier values. When a current instruction identifier value in the stream of instruction identifier values matches an instruction identifier value stored in the memory, an associated data set is accessed from the memory. More specifically, if the first instruction identifier value and the first data set are stored in the memory, and the current instruction identifier value is equal to the first instruction identifier value, then the first data set is read out of the memory. Execution of the non-quick instruction is accelerated because the first data set is readily accessible within the memory. If the first data set is not stored in the memory, the associative memory and the processor control the initial retrieval of the first data set.
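The associative-memory lookup this abstract describes can be sketched with a dictionary keyed by instruction identifier. This is a behavioral model only; the class name, the `resolver` callback standing in for the initial retrieval of the data set, and the use of a Python dict for the associative memory are all assumptions made for the sketch.

```python
class NonQuickAccelerator:
    """Caches the data set a non-quick instruction must resolve before it
    can execute, keyed by the instruction's identifier value."""
    def __init__(self, resolver):
        self.resolver = resolver   # slow path: initial retrieval of a data set
        self.assoc_mem = {}        # instruction identifier -> resolved data set

    def execute(self, instr_id, instruction):
        if instr_id in self.assoc_mem:
            # Hit: the data set is readily accessible in the memory, so
            # execution of the non-quick instruction is accelerated.
            return instruction, self.assoc_mem[instr_id]
        # Miss: perform the initial retrieval, then store the data set so
        # later occurrences of this identifier hit in the memory.
        data_set = self.resolver(instr_id)
        self.assoc_mem[instr_id] = data_set
        return instruction, data_set
```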
    • 50. Granted invention patent
    • Stack management unit and method for a processor having a stack
    • US6038643A
    • 2000-03-14
    • US787736
    • 1997-01-23
    • Marc Tremblay; James Michael O'Connor
    • G06F9/30; G06F9/318; G06F9/345; G06F9/40; G06F9/42; G06F9/445; G06F9/455; G06F12/08; G06F13/00
    • G06F9/30021; G06F12/0875; G06F9/30134; G06F9/30145; G06F9/30174; G06F9/30196; G06F9/345; G06F9/3885; G06F9/4425; G06F9/443; G06F9/44589; G06F9/45504; G06F2212/451
    • The present invention provides a stack management unit including a stack cache to accelerate data transfers between the stack-based computing system and the stack. In one embodiment, the stack management unit includes a stack cache, a dribble manager unit, and a stack control unit. The dribble manager unit includes a fill control unit and a spill control unit. Since the vast majority of memory accesses to the stack occur at or near the top of the stack, the dribble manager unit maintains the top portion of the stack in the stack cache. Specifically, when the stack-based computing system is pushing data onto the stack and a spill condition occurs, the spill control unit transfers data from the bottom of the stack cache to the stack so that the top portion of the stack remains in the stack cache. When the stack-based computing system is popping data off of the stack and a fill condition occurs, the fill control unit transfers data from the stack to the bottom of the stack cache to maintain the top portion of the stack in the stack cache. Typically, a fill condition occurs as the stack cache becomes empty and a spill condition occurs as the stack cache becomes full.
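The spill/fill behavior of the dribble manager unit described above can be sketched with a bounded deque as the stack cache and a list as backing memory. The class name, the fixed capacity, and the eager single-entry spill/fill are simplifying assumptions for the sketch; the abstract does not specify these details.

```python
from collections import deque

class StackManagementUnit:
    """Keep the top portion of the stack in a small stack cache, spilling to
    and filling from backing memory as the cache fills or empties."""
    def __init__(self, cache_capacity):
        self.capacity = cache_capacity
        self.cache = deque()   # top of stack is the right end
        self.backing = []      # rest of the stack in memory, bottom first

    def push(self, value):
        if len(self.cache) == self.capacity:
            # Spill condition: the cache is full, so move the bottom of the
            # cache out to the stack in memory; the top stays cached.
            self.backing.append(self.cache.popleft())
        self.cache.append(value)

    def pop(self):
        value = self.cache.pop()
        if not self.cache and self.backing:
            # Fill condition: the cache has drained, so bring data from the
            # stack in memory back into the bottom of the cache.
            self.cache.appendleft(self.backing.pop())
        return value
```

Pushes and pops near the top of the stack hit only the deque; the backing list is touched only at the spill and fill boundaries, which is the access pattern the abstract relies on.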