    • 63. Granted invention patent
    • Title: Ring based distributed communication bus for a multiprocessor network
    • Publication number: US5551048A
    • Grant date: 1996-08-27
    • Application number: US253474
    • Filing date: 1994-06-03
    • Inventor: Simon C. Steely, Jr.
    • IPC: G06F12/08; G06F12/00
    • CPC: G06F12/0813; G06F12/0815; Y10S707/99952
    • Abstract: A method for providing communication between a plurality of nodes coupled in a ring arrangement, wherein a plurality of the nodes comprise processors each having a cache memory for storing a subset of shared data. Each of the nodes on the ring deposits data into a data slot during a given time period. The data deposited by each node may comprise an address field and a node field. To ensure data coherency between the caches, each processor on the ring includes a queue for saving a plurality of received data representative of the latest bus data transmitted on the bus. As each processor receives new data, the new data is compared against the plurality of saved data in the queue to determine if the address field of the new data matches the address field of any of the saved data of the queue. In the event that the new data matches one of the plurality of saved data, it is determined whether the new data represents updated data from the memory device. If the new data represents updated data it is shifted into the queue. If it does not represent updated data, it is discarded.
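The per-processor coherency check described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the patented implementation: the `from_memory` flag, the queue depth, and the policy of keeping data whose address is not already queued are all assumptions.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class BusData:
    address: int          # address field of the bus slot
    node: int             # node field (originating node id)
    from_memory: bool     # hypothetical flag: data is an update from memory

class CoherencyQueue:
    """Per-processor queue of recent bus data; depth of 4 is an assumption."""
    def __init__(self, depth=4):
        self.queue = deque(maxlen=depth)

    def receive(self, new: BusData) -> bool:
        """Return True if the data is shifted into the queue, False if discarded."""
        match = any(saved.address == new.address for saved in self.queue)
        if match and not new.from_memory:
            return False           # address already queued, not a memory update: discard
        self.queue.append(new)     # new address, or an updated value from memory: keep
        return True
```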
    • 64. Granted invention patent
    • Title: Multi instruction register mapper
    • Publication number: US5519841A
    • Grant date: 1996-05-21
    • Application number: US974776
    • Filing date: 1992-11-12
    • Inventors: David J. Sager; Simon C. Steely, Jr.; David B. Fite, Jr.
    • IPC: G06F9/30; G06F9/32; G06F9/38; G06F9/34
    • CPC: G06F9/3863; G06F9/32; G06F9/3806; G06F9/3814; G06F9/3836; G06F9/384; G06F9/3848; G06F9/3855; G06F9/3857
    • Abstract: A pipelined processor includes an instruction unit including a register mapper, to map register operand fields of a set of instructions, and an instruction scheduler, fed by the set of instructions, to reorder the issuance of the set of instructions from the processor. The mapped register operand fields are associated with the corresponding instructions of the reordered set of instructions prior to issuance of the instructions. The processor further includes a branch prediction table which maps a stored pattern of past histories associated with a branch instruction to a more likely prediction direction of the branch instruction. The processor further includes a memory reference tagging store associated with the instruction scheduler so that the scheduler can reorder memory reference instructions without knowing the actual memory location addressed by the memory reference instruction.
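The register-mapping step the abstract describes, renaming architectural register operands before the scheduler reorders instructions, might be sketched like this. A simplified illustration only: the one-instruction-at-a-time interface and the tuple encoding are assumptions (the patent's mapper handles a set of instructions at once).

```python
class RegisterMapper:
    """Minimal register-rename sketch: sources read the current mapping,
    each destination is assigned a fresh physical register."""
    def __init__(self, num_physical):
        self.free = list(range(num_physical))  # free physical registers
        self.map = {}                          # architectural -> physical

    def rename(self, instr):
        # instr = (op, dest, src1, src2) over architectural register names
        op, dest, src1, src2 = instr
        p1 = self.map.get(src1, src1)   # source operands read the current mapping
        p2 = self.map.get(src2, src2)
        pd = self.free.pop(0)           # destination gets a fresh physical register
        self.map[dest] = pd
        return (op, pd, p1, p2)
```

With the operands renamed, a scheduler can reorder the instructions freely because false write-after-write and write-after-read dependences on architectural names have been removed.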
    • 66. Granted invention patent
    • Title: Cache with at least two fill rates
    • Publication number: US5038278A
    • Grant date: 1991-08-06
    • Application number: US611337
    • Filing date: 1990-11-09
    • Inventors: Simon C. Steely, Jr.; Raj K. Ramanujan; Peter J. Bannon; Walter A. Beach
    • IPC: G06F12/08
    • CPC: G06F12/0842
    • Abstract: During the operation of a computer system whose processor is supported by virtual cache memory, the cache must be cleared and refilled to allow the replacement of old data with more current data. The cache is filled with either P or N (N>P) blocks of data. Numerous methods for dynamically selecting N or P blocks of data are possible. For instance, immediately after the cache has been flushed, the miss is refilled with N blocks, moving data to the cache at high speed. Once the cache is mostly full, the miss tends to be refilled with P blocks. This maintains the currency of the data in the cache, while simultaneously avoiding writing-over of data already in the cache. The invention is useful in a multi-user/multi-tasking system where the program being run changes frequently, necessitating flushing and clearing the cache frequently.
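The two-fill-rate policy in the abstract can be sketched as below. The occupancy threshold and the particular values of N and P are assumptions for illustration; the abstract only requires N > P and notes many selection methods are possible.

```python
class TwoRateCache:
    """Sketch of a two-fill-rate policy: refill a miss with N blocks while the
    cache is mostly empty (fast warm-up after a flush), and with P blocks
    (P < N) once it is mostly full, to avoid overwriting live data."""
    def __init__(self, capacity, n=8, p=2, full_threshold=0.75):
        self.capacity = capacity
        self.valid = 0                     # number of currently valid blocks
        self.n, self.p = n, p
        self.full_threshold = full_threshold

    def fill_size(self):
        """Blocks to fetch on a miss under the current occupancy."""
        if self.valid / self.capacity < self.full_threshold:
            return self.n                  # cache mostly empty: large fill
        return self.p                      # cache mostly full: small fill

    def miss(self):
        size = self.fill_size()
        self.valid = min(self.capacity, self.valid + size)
        return size
```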
    • 67. Granted invention patent
    • Title: Cache memory system
    • Publication number: US5003459A
    • Grant date: 1991-03-26
    • Application number: US176595
    • Filing date: 1988-04-01
    • Inventors: Raj K. Ramanujan; Simon C. Steely, Jr.; Peter J. Bannon; David J. Sager
    • IPC: G06F12/10
    • CPC: G06F12/1045
    • Abstract: The invention is directed to a cache memory system in a data processor including a virtual cache memory, a physical cache memory, a virtual to physical translation buffer, a physical to virtual backmap, an Old-PA pointer and a lockout register. The backmap implements invalidates by clearing the valid flags in virtual cache memory. The Old-PA pointer indicates the backmap entry to be invalidated after a reference misses in the virtual cache. The physical address for data written to virtual cache memory is entered to the Old-PA pointer by the translation buffer. The lockout register arrests all references to data which may have synonyms in virtual cache memory. The backmap is also used to invalidate any synonyms.
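The backmap's role, invalidating virtual-cache lines (including synonyms) by clearing their valid flags, can be sketched as follows. The dict-of-sets structure and the cache-line representation are simplifying assumptions, not the patented hardware organization.

```python
class Backmap:
    """Physical-to-virtual backmap sketch: records which virtual-cache lines
    hold each physical address, and invalidates them all (synonyms included)
    by clearing their valid flags."""
    def __init__(self):
        self.entries = {}   # physical address -> set of virtual cache indices

    def record(self, pa, v_index):
        """Note that virtual-cache line v_index now maps physical address pa."""
        self.entries.setdefault(pa, set()).add(v_index)

    def invalidate(self, pa, virtual_cache):
        """Clear the valid flag of every virtual-cache line mapping this PA."""
        for v_index in self.entries.pop(pa, set()):
            virtual_cache[v_index]["valid"] = False
```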
    • 69. Granted invention patent
    • Title: Efficient support of sparse data structure access
    • Publication number: US09037804B2
    • Grant date: 2015-05-19
    • Application number: US13995209
    • Filing date: 2011-12-29
    • Inventors: Simon C. Steely, Jr.; William C. Hasenplaugh; Joel S. Emer
    • IPC: G06F13/00; G06F12/08
    • CPC: G06F12/0891; G06F12/0895
    • Abstract: Method and apparatus to efficiently organize data in caches by storing/accessing data of varying sizes in cache lines. A value may be assigned to a field indicating the size of usable data stored in a cache line. If the field indicating the size of the usable data in the cache line indicates a size less than the maximum storage size, a value may be assigned to a field in the cache line indicating which subset of the stored data is usable data. A cache request may determine whether the size of the usable data in a cache line is equal to the maximum data storage size. If the size of the usable data in the cache line is equal to the maximum data storage size, the entire stored data in the cache line may be returned.
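The read path in the abstract, return the whole line when the usable-size field equals the maximum, otherwise return only the usable subset, might look like this sketch. The 64-byte maximum and the byte-offset encoding of the "which subset" field are assumptions.

```python
from dataclasses import dataclass

MAX_LINE_BYTES = 64   # assumed maximum cache-line data size

@dataclass
class CacheLine:
    data: bytes
    usable_size: int     # field indicating how much of the line is usable data
    usable_offset: int   # field indicating which subset of the line is usable

def read_line(line: CacheLine) -> bytes:
    """If the usable size equals the maximum, return the entire stored data;
    otherwise return only the usable subset of the line."""
    if line.usable_size == MAX_LINE_BYTES:
        return line.data
    return line.data[line.usable_offset:line.usable_offset + line.usable_size]
```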
    • 70. Granted invention patent
    • Title: Instruction prefetching using cache line history
    • Publication number: US08533422B2
    • Grant date: 2013-09-10
    • Application number: US12895387
    • Filing date: 2010-09-30
    • Inventors: Samantika Subramaniam; Aamer Jaleel; Simon C. Steely, Jr.
    • IPC: G06F12/06; G06F12/08
    • CPC: G06F12/0862; G06F9/3816; G06F2212/452; G06F2212/6024; Y02D10/13
    • Abstract: An apparatus of an aspect includes a prefetch cache line address predictor to receive a cache line address and to predict a next cache line address to be prefetched. The next cache line address may indicate a cache line having at least 64 bytes of instructions. The prefetch cache line address predictor may have a cache line target history storage to store a cache line target history for each of multiple most recent corresponding cache lines. Each cache line target history may indicate whether the corresponding cache line had a sequential cache line target or a non-sequential cache line target. The cache line address predictor may also have a cache line target history predictor. The cache line target history predictor may predict whether the next cache line address is a sequential cache line address or a non-sequential cache line address, based on the cache line target history for the most recent cache lines.
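A rough sketch of the predictor the abstract describes: per-line history records whether the line's last target was sequential, and prediction follows that history. The one-bit history, the side table of non-sequential targets, the sequential default, and the 64-byte line size are all simplifying assumptions.

```python
class LinePrefetchPredictor:
    """Cache-line target history sketch: predict the next cache line address
    as sequential (current + line size) or as the last recorded
    non-sequential target, based on the line's history."""
    LINE = 64   # assumed 64-byte cache lines

    def __init__(self):
        self.history = {}   # line address -> True if last target was sequential
        self.targets = {}   # line address -> last non-sequential target address

    def train(self, line_addr, next_addr):
        """Record whether this line's observed target was sequential."""
        sequential = (next_addr == line_addr + self.LINE)
        self.history[line_addr] = sequential
        if not sequential:
            self.targets[line_addr] = next_addr

    def predict(self, line_addr):
        """Predict the next cache line address to prefetch."""
        if self.history.get(line_addr, True):      # default: sequential
            return line_addr + self.LINE
        return self.targets[line_addr]
```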