    • 1. Granted Patent
    • Optimizing responses in a coherent distributed electronic system including a computer system
    • Publication No.: US5829033A
    • Publication Date: 1998-10-27
    • Application No.: US673059
    • Filing Date: 1996-07-01
    • Inventors: Erik Hagersten, Ashok Singhal, Bjorn Liencres
    • IPC: G06F12/08; G06F13/368; G06F13/00
    • CPC: G06F13/368; G06F12/0831
    • In a computer system implementing state transitions that change logically and atomically at an address packet, independently of a response, the coherence domain is extended among distributed memory. As such, memory line ownership transfers upon request, and not upon requestor receipt of data. Requestor receipt of data is rapidly implemented by providing a ReadToShareFork transaction that simultaneously causes a write-type operation that updates invalid data at a requested memory address and provides the updated data to the requesting device. More specifically, when writing valid data to memory, the ReadToShareFork transaction simultaneously causes reissuance of the originally requested transaction using the same memory address and ID information. The requesting device, upon recognizing its transaction ID on the bus system, will pull the now-valid data from the desired memory location.
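The fork behavior the abstract describes can be modeled in a few lines: one transaction both writes the valid data back to memory and reissues the original read under the same transaction ID, so the requester simply waits for its ID to reappear. This is a toy single-queue sketch; `ForkBus`, `read_to_share_fork`, and `requester_poll` are illustrative names, not from the patent.

```python
from collections import deque

class ForkBus:
    """Toy model of the ReadToShareFork idea: one call simultaneously
    (a) performs the write-type half, updating memory with valid data,
    and (b) reissues the original read with the same transaction ID."""
    def __init__(self):
        self.memory = {}        # address -> data
        self.bus = deque()      # reissued (transaction_id, address) packets

    def read_to_share_fork(self, txn_id, address, valid_data):
        self.memory[address] = valid_data    # write-type half: update memory
        self.bus.append((txn_id, address))   # reissue half: same ID and address

    def requester_poll(self, my_txn_id):
        # The requesting device snoops the bus for its own transaction ID
        # and pulls the now-valid data from the requested location.
        while self.bus:
            txn_id, address = self.bus.popleft()
            if txn_id == my_txn_id:
                return self.memory[address]
        return None
```

The key property, visible even in this sketch, is that the requester needs no separate data-reply path: the reissued address packet plus the updated memory location together complete the transfer.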
    • 2. Granted Patent
    • Implementing snooping on a split-transaction computer system bus
    • Publication No.: US5978874A
    • Publication Date: 1999-11-02
    • Application No.: US673038
    • Filing Date: 1996-07-01
    • Inventors: Ashok Singhal, Bjorn Liencres, Jeff Price, Frederick M. Cerauskis, David Broniarczyk, Gerald Cheung, Erik Hagersten, Nalini Agarwal
    • IPC: G06F12/08; G06F13/368; G06F13/00
    • CPC: G06F13/368; G06F12/0831
    • Snooping is implemented on a split-transaction snooping bus for a computer system having one or many such buses. Circuit boards including CPUs or other devices and/or distributed memory, data input/output buffers, queues (including request tag queues and coherent input queues, "CIQ"), and an address controller implementing address bus arbitration plug into one or more split-transaction snooping bus systems. All devices snoop on the address bus to learn whether an identified line is owned or shared, and an appropriate owned/shared signal is issued. Receipt of an ignore signal blocks CIQ loading of a transaction until the transaction is reloaded and ignore is deasserted. Ownership of a requested memory line transfers immediately at the time of request. Asserted requests are queued such that state transitions on the address bus occur logically and atomically, without dependence upon the response. Subsequent requests for the same data are tagged to become the responsibility of the owner-requestor. A subsequent requestor's activities are not halted awaiting grant and completion of an earlier request transaction. Processor-level cache changes state upon receipt of transaction data. A single multiplexed arbitration bus carries address bus and data bus request transactions, each two cycles in length.
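The ignore-signal rule above (CIQ loading of a transaction is blocked until the transaction is reloaded with ignore deasserted) can be sketched as follows. `CoherentInputQueue` and its methods are illustrative names under a much-simplified model, not the patent's implementation.

```python
class CoherentInputQueue:
    """Simplified model of the abstract's CIQ rule: a snooped address
    transaction enters the coherent input queue only when the Ignore
    signal is deasserted; otherwise it must be reissued later."""
    def __init__(self):
        self.ciq = []           # transactions accepted for coherent processing
        self.ignore = False     # the bus-level Ignore signal, as snooped

    def snoop(self, txn):
        if self.ignore:
            return False        # blocked: caller must reload the transaction
        self.ciq.append(txn)    # accepted: the state transition takes effect here
        return True
```

In this model the atomicity claim corresponds to the fact that a transaction either enters every device's CIQ or no device's, depending only on the snooped Ignore signal.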
    • 3. Granted Patent
    • Split transaction snooping bus protocol
    • Publication No.: US5911052A
    • Publication Date: 1999-06-08
    • Application No.: US673967
    • Filing Date: 1996-07-01
    • Inventors: Ashok Singhal, Bjorn Liencres, Jeff Price, Frederick M. Cerauskis, David Broniarczyk, Gerald Cheung, Erik Hagersten, Nalini Agarwal
    • IPC: G06F12/08; G06F13/368; G06F13/00
    • CPC: G06F13/368; G06F12/0831
    • A split-transaction snooping bus protocol and architecture is provided for use in a system having one or many such buses. Circuit boards including CPUs or other devices and/or distributed memory, data input/output buffers, queues (including request tag queues and coherent input queues, "CIQ"), and an address controller implementing address bus arbitration plug into one or more split-transaction snooping bus systems. All devices snoop on the address bus to learn whether an identified line is owned or shared, and an appropriate owned/shared signal is issued. Receipt of an ignore signal blocks CIQ loading of a transaction until the transaction is reloaded and ignore is deasserted. Ownership of a requested memory line transfers immediately at the time of request. Asserted requests are queued such that state transitions on the address bus occur logically and atomically, without dependence upon the response. Subsequent requests for the same data are tagged to become the responsibility of the owner-requestor. A subsequent requestor's activities are not halted awaiting grant and completion of an earlier request transaction. Processor-level cache changes state upon receipt of transaction data. A single multiplexed arbitration bus carries address bus and data bus request transactions, each two cycles in length.
    • 4. Granted Patent
    • Method and apparatus for hot plugging/unplugging a sub-system to an electrically powered system
    • Publication No.: US5644731A
    • Publication Date: 1997-07-01
    • Application No.: US499150
    • Filing Date: 1995-07-07
    • Inventors: Bjorn Liencres, Ashok Singhal, Jeff Price, Kang S. Lim
    • IPC: G06F1/26; G06F1/18; G06F3/00; G06F13/40; G08B21/00; H02J1/14; G06F13/20
    • CPC: H02J1/14; G06F13/4081
    • The present invention provides an "alert" interface for a component which can be safely "hot-plugged/unplugged" to an "alert" interconnect of an electrically powered system. The alert interface has a mating edge which includes daughter precharge/ground connectors, a daughter (engage) warning connector, a number of daughter signal connectors, and a daughter engage connector. The alert interconnect includes corresponding mother connectors. The respective connectors of the interconnect and the interface are arranged so that, when the interface is hot-plugged/unplugged to the interconnect, they mate in the following exemplary order: precharge/ground connectors, warning connectors, signal connectors, and finally engage connectors. When the daughter (engage) warning connector mates with the mother warning connector, the component sends an "engage warning" signal to the powered system. Eventually, all the signal connectors mate, followed by the engage connectors, enabling the component to send an "engaged" signal to the system indicating that all the signal connectors have completely mated. In accordance with another aspect of the invention, the component can also be safely "hot-unplugged" by, first, substantially increasing the degree of recess of the daughter engage connector relative to the daughter signal connectors along the mating edge, and second, doubling the daughter engage connector to function as a daughter disengage warning connector.
    • 5. Granted Patent
    • System and method for accessing a shared computer resource using a lock featuring different spin speeds corresponding to multiple states
    • Publication No.: US06578033B1
    • Publication Date: 2003-06-10
    • Application No.: US09597863
    • Filing Date: 2000-06-20
    • Inventors: Ashok Singhal, Erik Hagersten
    • IPC: G06F17/30
    • CPC: G06F9/52; Y10S707/99938
    • A probabilistic queue lock divides requesters for a lock into at least three sets. In one embodiment, the requesters are divided into the owner of the lock, the first waiting contender, and the other waiting contenders. The first waiting contender is made probabilistically more likely to obtain the lock by having it spin faster than the other waiting contenders. Because the other waiting contenders spin more slowly, the first waiting contender is more likely to be able to observe the free lock and acquire it before the other waiting contenders notice that it is free. The first of the other waiting contenders that determines that the previous first waiting contender has acquired the lock is promoted to be the new first waiting contender and begins spinning fast. Because only the first waiting contender is spinning fast on the lock, it is probable that only the first waiting contender will attempt to acquire the lock when it becomes available.
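The three-set division the abstract describes (owner, one fast-spinning first waiting contender, slow-spinning other contenders) can be sketched in a short lock class. This is a minimal single-process model; the class name, the promotion rule, and the two delay constants are assumptions for illustration, not the patent's hardware mechanism.

```python
import threading
import time

class ProbabilisticQueueLock:
    """Sketch of the hybrid queue/backoff lock: the first waiting
    contender polls the lock often (fast spin); all later contenders
    poll rarely (slow spin), so the first contender is probabilistically
    likely to observe the free lock first and acquire it."""
    FAST_SPIN = 1e-6    # delay for the first waiting contender (illustrative)
    SLOW_SPIN = 1e-4    # delay for the other waiting contenders (illustrative)

    def __init__(self):
        self._held = False
        self._first_waiter = None       # identity of the current fast spinner
        self._guard = threading.Lock()  # protects the model's internal state

    def spin_delay(self, tid):
        # A contender that finds the fast-spinner slot empty promotes
        # itself; everyone else spins slowly.
        with self._guard:
            if self._first_waiter is None or self._first_waiter == tid:
                self._first_waiter = tid
                return self.FAST_SPIN
            return self.SLOW_SPIN

    def acquire(self, tid):
        while True:
            with self._guard:
                if not self._held:
                    self._held = True
                    if self._first_waiter == tid:
                        # The fast slot opens; a slow spinner that notices
                        # promotes itself to be the new first waiting contender.
                        self._first_waiter = None
                    return
            time.sleep(self.spin_delay(tid))

    def release(self):
        with self._guard:
            self._held = False
```

Because only one thread ever spins fast, contention on the lock word when it frees is, with high probability, limited to a single contender, which is the point of the technique.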
    • 6. Granted Patent
    • Method and apparatus providing short latency round-robin arbitration for access to a shared resource
    • Publication No.: US5987549A
    • Publication Date: 1999-11-16
    • Application No.: US675286
    • Filing Date: 1996-07-01
    • Inventors: Erik Hagersten, Ashok Singhal
    • IPC: G06F12/08; G06F13/368; G06F13/00
    • CPC: G06F12/0831; G06F13/368
    • Low-latency distributed round-robin arbitration is used to grant requests for access to a shared resource such as a computer system bus. A plurality of circuit board cards, each including two devices such as CPUs, I/O units, and RAM, and an address controller, plug into an Address Bus in the bus system. Each address controller contains logic implementing the arbitration mechanism with a two-level hierarchy: a single top arbitrator and preferably four leaf arbitrators. Each address controller is coupled to two devices, and the logical "OR" of their arbitration requests is coupled via an Arbitration Bus to other address controllers on other boards. Each leaf arbitrator has four prioritized request-in lines, each such line coupled to a single address controller serviced by that leaf arbitrator. By default, each leaf arbitrator and the top arbitrator implement a prioritized algorithm; however, a last-winner ("LW") state maintained at every arbitrator overrides the default to provide round-robin selection. Each leaf arbitrator arbitrates among the zero to four requests it sees, selects a winner, and signals the top arbitrator that it has a device wishing access. At the top arbitrator, if the first leaf arbitrator last won a grant, it now has lowest grant priority, and a grant will go to the next highest leaf arbitrator having a device seeking access.
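The two-level scheme above (fixed priority overridden by a last-winner state at every arbitrator) can be sketched as follows. `RoundRobinArbiter` and `two_level_grant` are illustrative names; the sketch models only the grant decision, not the bus signaling.

```python
class RoundRobinArbiter:
    """One arbitration level: fixed priority by default (index 0
    highest), overridden by a last-winner (LW) state so that the most
    recent winner drops to lowest priority and grants rotate."""
    def __init__(self, n):
        self.n = n        # number of request lines (four per leaf arbitrator)
        self.lw = None    # index of the last winner; None = pure fixed priority

    def grant(self, requests):
        # Search starts just past the last winner, which therefore
        # becomes the lowest-priority requester this round.
        start = 0 if self.lw is None else (self.lw + 1) % self.n
        for i in range(self.n):
            idx = (start + i) % self.n
            if requests[idx]:
                self.lw = idx
                return idx
        return None       # no line is requesting

def two_level_grant(top, leaves, leaf_requests):
    """Two-level hierarchy: each leaf picks a local winner, then the top
    arbitrator picks among the leaves that have a winner. Simplification:
    losing leaves still update their LW state here, which the real
    distributed mechanism would presumably avoid."""
    winners = [leaf.grant(reqs) for leaf, reqs in zip(leaves, leaf_requests)]
    leaf_idx = top.grant([w is not None for w in winners])
    return None if leaf_idx is None else (leaf_idx, winners[leaf_idx])
```

The latency benefit in the patent comes from every address controller running this same deterministic logic in parallel, so no separate grant round-trip is needed; the sketch captures only the selection rule.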
    • 8. Granted Patent
    • Hybrid queue and backoff computer resource lock featuring different spin speeds corresponding to multiple states
    • Publication No.: US6148300A
    • Publication Date: 2000-11-14
    • Application No.: US100667
    • Filing Date: 1998-06-19
    • Inventors: Ashok Singhal, Erik Hagersten
    • IPC: G06F9/52; G06F9/46; G06F13/00; G06F17/30
    • CPC: G06F9/52; Y10S707/99938
    • A probabilistic queue lock divides requesters for a lock into at least three sets. In one embodiment, the requesters are divided into the owner of the lock, the first waiting contender, and the other waiting contenders. The first waiting contender is made probabilistically more likely to obtain the lock by having it spin faster than the other waiting contenders. Because the other waiting contenders spin more slowly, the first waiting contender is more likely to be able to observe the free lock and acquire it before the other waiting contenders notice that it is free. The first of the other waiting contenders that determines that the previous first waiting contender has acquired the lock is promoted to be the new first waiting contender and begins spinning fast. Because only the first waiting contender is spinning fast on the lock, it is probable that only the first waiting contender will attempt to acquire the lock when it becomes available.
    • 9. Granted Patent
    • Method and apparatus for selecting a way of a multi-way associative cache by storing waylets in a translation structure
    • Publication No.: US5778427A
    • Publication Date: 1998-07-07
    • Application No.: US499590
    • Filing Date: 1995-07-07
    • Inventors: Erik Hagersten, Ashok Singhal
    • IPC: G06F12/08; G06F12/10
    • CPC: G06F12/1054; G06F12/0864; G06F2212/6082
    • The present invention provides a cache manager (CM) for use with an address translation table (ATT) which takes advantage of way information, available when a cache line is first cached, for efficiently accessing a multi-way cache of a computer system having a main memory and one or more processors. The main memory and the ATT are page-oriented, while the cache is organized using cache lines. The cache includes a plurality of cache lines divided into a number of segments corresponding to the number of "ways". Each cache line includes an address tag (AT) field and a data field. The way information is stored in the ATT for later cache access. In this implementation, "waylets" provide an efficient mechanism for storing the way information whenever a cache line is cached. Accordingly, each table entry of the ATT includes a virtual address (VA) field, a physical address (PA) field, and a plurality of waylets associated with each pair of VA and PA fields. Subsequently, the waylets can be used to quickly index directly into a single segment of the cache as follows. Upon receiving a virtual address of a target cache line, the CM attempts to match a virtual address field of one of the ATT entries with a page index portion of the virtual address. If there is a match, a waylet of the ATT entry is retrieved using a page offset portion of the virtual address. If the waylet value is valid, the CM indexes directly into a single cache line using the waylet value, the physical address field of the ATT entry, and the page offset portion of the virtual address. If the AT field of the retrieved cache line matches a portion of the physical address field of the ATT entry, the processor retrieves the data field of the cache line using the page offset portion of the VA. If the AT field does not match, the target cache line is retrieved from the main memory, and the waylet value in both the ATT and the main memory is updated.
    • 10. Granted Patent
    • Multiprocessing system employing address switches to control mixed broadcast snooping and directory based coherency protocols transparent to active devices
    • Publication No.: US07222220B2
    • Publication Date: 2007-05-22
    • Application No.: US10601402
    • Filing Date: 2003-06-23
    • Inventors: Robert E. Cypher, Ashok Singhal
    • IPC: G06F12/12
    • CPC: G06F12/0815; G06F12/0817; G06F12/0831
    • A multiprocessor computer system is configured to selectively transmit address transactions through an address network using either a broadcast mode or a point-to-point mode transparent to the active devices that initiate the transactions. Depending on the mode of transmission selected, either a directory-based coherency protocol or a broadcast snooping coherency protocol is implemented to maintain coherency within the system. A computing node is formed by a group of clients which share a common address and data network. The address network is configured to determine whether a particular transaction is to be conveyed in broadcast mode or point-to-point mode. In one embodiment, the address network includes a mode table with entries which are configurable to indicate transmission modes corresponding to different regions of the address space within the node. Upon receiving a coherence request transaction, the address network may then access the table in order to determine the transmission mode, broadcast or point-to-point, which corresponds to the received transaction.
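The mode-table lookup in the embodiment above is simple enough to sketch directly: each region of the address space maps to a transmission mode, and a coherence request consults the table to choose broadcast snooping or point-to-point (directory-style) delivery. `AddressNetwork`, the region granularity, and the default mode are all illustrative assumptions.

```python
BROADCAST, POINT_TO_POINT = "broadcast", "point-to-point"

class AddressNetwork:
    """Sketch of the configurable mode table: address-space regions map
    to a transmission mode, transparently to the active devices that
    issue the coherence requests."""
    def __init__(self, region_bits=28):
        self.region_bits = region_bits   # region size = 2**region_bits bytes (assumption)
        self.mode = {}                   # region index -> mode, configurable at runtime

    def set_mode(self, region, mode):
        self.mode[region] = mode

    def route(self, address):
        region = address >> self.region_bits
        # Defaulting unconfigured regions to broadcast is an assumption,
        # not something the abstract specifies.
        return self.mode.get(region, BROADCAST)
```

Because the choice lives in the address network rather than in the requesters, the same CPU issues identical transactions whether a region is snooped or directory-managed, which is the transparency the title claims.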