    • 1. Invention Grant
    • Title: Shared memory multiprocessor performing cache coherence control and node controller therefor
    • Publication No.: US06636926B2
    • Publication Date: 2003-10-21
    • Application No.: US09740816
    • Filing Date: 2000-12-21
    • Inventors: Yoshiko Yasuda; Naoki Hamanaka; Toru Shonai; Hideya Akashi; Yuji Tsushima; Keitaro Uehara
    • IPC: G06F 13/00
    • CPC: G06F 12/0813; G06F 12/0833; G06F 2212/1016
    • Abstract: Each node includes a node controller that decodes the control information and the address information of an access request issued by a processor or an I/O device, generates from the decoding result the cache coherence control information indicating whether cache coherence control is required, together with the node information and the unit information of the transfer destination, and adds this information to the access request. An intra-node connection circuit connecting the units in the node controller holds the cache coherence control information, the node information and the unit information added to the access request. When the cache coherence control information indicates that cache coherence control is not required and the node information indicates the local node, the intra-node connection circuit transfers the access request directly to the unit designated by the unit information rather than to the inter-node connection circuit interconnecting the nodes. (An illustrative sketch of this routing decision follows this record.)
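The routing rule described in the abstract above comes down to a single check on the information the node controller attaches to each access request. Below is a minimal C sketch of that decision logic only; the type and function names (access_request_t, route_to_unit, route_to_internode_network) are hypothetical and do not come from the patent.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical representation of the information the node controller
 * attaches to an access request after decoding it (names are
 * illustrative, not taken from the patent). */
typedef struct {
    bool     needs_coherence;  /* cache coherence control required?  */
    uint8_t  dest_node;        /* node information (transfer target) */
    uint8_t  dest_unit;        /* unit information within that node  */
    uint64_t address;
} access_request_t;

/* Stub routing hooks standing in for the intra-node and inter-node
 * connection circuits. */
static void route_to_unit(const access_request_t *r)
{
    printf("direct to local unit %u (addr 0x%llx)\n",
           (unsigned)r->dest_unit, (unsigned long long)r->address);
}

static void route_to_internode_network(const access_request_t *r)
{
    printf("via inter-node circuit to node %u, unit %u\n",
           (unsigned)r->dest_node, (unsigned)r->dest_unit);
}

/* Routing decision of the intra-node connection circuit: when no cache
 * coherence control is required and the destination is the local node,
 * the request bypasses the inter-node connection circuit and goes
 * straight to the designated unit. */
static void intra_node_route(const access_request_t *r, uint8_t local_node)
{
    if (!r->needs_coherence && r->dest_node == local_node)
        route_to_unit(r);
    else
        route_to_internode_network(r);
}

int main(void)
{
    access_request_t req = { .needs_coherence = false, .dest_node = 0,
                             .dest_unit = 2, .address = 0x1000 };
    intra_node_route(&req, 0);   /* prints the "direct to local unit" path */
    return 0;
}
```

The point of the check is that purely local, non-coherent accesses never touch the inter-node connection circuit.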
    • 2. Invention Grant
    • Title: Shared memory multiprocessor performing cache coherence control and node controller therefor
    • Publication No.: US06874053B2
    • Publication Date: 2005-03-29
    • Application No.: US10654983
    • Filing Date: 2003-09-05
    • Inventors: Yoshiko Yasuda; Naoki Hamanaka; Toru Shonai; Hideya Akashi; Yuji Tsushima; Keitaro Uehara
    • IPC: G06F 12/08; G06F 15/173; G06F 13/00; G06F 15/167
    • CPC: G06F 12/0813; G06F 12/0833; G06F 2212/1016
    • Abstract: Each node includes a node controller that decodes the control information and the address information of an access request issued by a processor or an I/O device, generates from the decoding result the cache coherence control information indicating whether cache coherence control is required, together with the node information and the unit information of the transfer destination, and adds this information to the access request. An intra-node connection circuit connecting the units in the node controller holds the cache coherence control information, the node information and the unit information added to the access request. When the cache coherence control information indicates that cache coherence control is not required and the node information indicates the local node, the intra-node connection circuit transfers the access request directly to the unit designated by the unit information rather than to the inter-node connection circuit interconnecting the nodes.
    • 4. Invention Grant
    • Title: Multiprocessor system and methods for transmitting memory access transactions for the same
    • Publication No.: US06516391B1
    • Publication Date: 2003-02-04
    • Application No.: US09523737
    • Filing Date: 2000-03-13
    • Inventors: Yuji Tsushima; Hideya Akashi; Keitaro Uehara; Naoki Hamanaka; Toru Shonai; Tetsuhiko Okada; Masamori Kashiyama
    • IPC: G06F 12/00
    • CPC: G06F 12/0813; G06F 12/0817; G06F 15/177; G06F 2212/2542
    • Abstract: In a multiprocessor arranged according to either NUMA or UMA, in which a plurality of processor nodes containing a plurality of processor units are coupled to each other via a network, the cache snoop operation associated with a memory access is performed in two stages: a local snoop operation executed within a node and a global snoop operation among nodes. Before the local snoop operation is executed, an ACTV command specifying only the RAS of the memory is issued to the target node holding the memory to be accessed, so that the memory access is activated in advance. After the ACTV command has been issued, a memory access command is issued that additionally specifies the CAS of the memory, and the memory access is then executed. When there is a possibility that the memory to be accessed is cached in a processor node other than the source node, the memory access command is distributed to all nodes so as to execute the global snoop operation. When there is no possibility that the memory to be accessed is cached, the memory access command is transferred only to the target node in a one-to-one correspondence. (An illustrative sketch of this two-stage issue sequence follows this record.)
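The abstract above describes a two-part command sequence (an early ACTV carrying only the RAS, then an access command carrying the CAS) combined with a choice between broadcasting the access command and sending it point-to-point. The C sketch below is a rough simulation of that issue sequence under stated assumptions; send_actv, send_mem_access and the fixed NUM_NODES are illustrative stand-ins for interconnect commands, not the patent's actual implementation.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_NODES 4  /* assumed node count for the broadcast case */

/* Hypothetical message primitives; in hardware these would be commands
 * on the interconnect, not function calls. */
static void send_actv(int target_node, uint64_t addr)
{
    /* RAS only: open the DRAM row early on the target node. */
    printf("ACTV   -> node %d, addr 0x%llx\n",
           target_node, (unsigned long long)addr);
}

static void send_mem_access(int node, uint64_t addr)
{
    /* Full memory access command: additionally specifies the CAS. */
    printf("ACCESS -> node %d, addr 0x%llx\n",
           node, (unsigned long long)addr);
}

/* Two-stage sequence from the abstract: ACTV is issued to the target
 * node before the local snoop, so the row is already activated when the
 * access command arrives.  Whether the access command is distributed to
 * all nodes (global snoop) or sent one-to-one depends on whether the
 * line may be cached outside the source node. */
static void issue_memory_access(int target_node, uint64_t addr,
                                bool may_be_cached_remotely)
{
    send_actv(target_node, addr);            /* activate the memory in advance */
    /* ... local snoop inside the source node would happen here ... */
    if (may_be_cached_remotely) {
        for (int n = 0; n < NUM_NODES; n++)  /* distribute: global snoop */
            send_mem_access(n, addr);
    } else {
        send_mem_access(target_node, addr);  /* one-to-one, no global snoop */
    }
}

int main(void)
{
    issue_memory_access(2, 0x4000, false);   /* point-to-point case */
    issue_memory_access(2, 0x8000, true);    /* broadcast case      */
    return 0;
}
```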
    • 5. Invention Grant
    • Title: Method and apparatus of out-of-order transaction processing using request side queue pointer and response side queue pointer
    • Publication No.: US06591325B1
    • Publication Date: 2003-07-08
    • Application No.: US09547392
    • Filing Date: 2000-04-11
    • Inventors: Hideya Akashi; Yuji Tsushima; Keitaro Uehara; Naoki Hamanaka; Toru Shonai; Tetsuhiko Okada; Masamori Kashiyama
    • IPC: G06F 13/14
    • CPC: G06F 13/4204
    • Abstract: An information processing system that transfers transactions between a plurality of system modules. A request side interface unit in a request side module has a request ID queue in which issued request transactions are stored in order of issuance; a request side queue pointer points to the entry in this queue corresponding to the response transaction to be accepted next. A response side interface unit in a response side module has a response queue in which accepted request transactions are stored in order of acceptance; a response side queue pointer points to the entry in this queue corresponding to the response transaction to be issued next. A request transaction and its corresponding response transaction can therefore be transferred between the request side interface unit and the response side interface unit without transferring transaction IDs. When the response order is changed, the response side interface unit issues a command that changes the value of the request side queue pointer, informing the request side interface unit of the change in order. (An illustrative sketch of this pointer scheme follows this record.)
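As a rough illustration of the pointer scheme in the abstract above, the C sketch below models the two queues as plain arrays and assumes, for simplicity, that requests are accepted in the same order they are issued, so corresponding entries share an index; all names (req_id_queue, resp_queue, set_request_pointer) are hypothetical.

```c
#include <stdio.h>

#define QDEPTH 8

static int req_id_queue[QDEPTH]; /* request side: requests in order of issuance    */
static int resp_queue[QDEPTH];   /* response side: requests in order of acceptance */
static int req_ptr;              /* entry whose response will be accepted next     */
static int resp_ptr;             /* entry whose response will be issued next       */

/* Command sent by the response side when it changes the response order:
 * instead of transferring a transaction ID with each response, it simply
 * repositions the request-side queue pointer. */
static void set_request_pointer(int new_value)
{
    req_ptr = new_value;
}

int main(void)
{
    /* Three requests issued and accepted in the same order. */
    for (int i = 0; i < 3; i++)
        req_id_queue[i] = resp_queue[i] = 100 + i;

    /* Out-of-order case: the response side answers entry 2 first, so it
     * issues the pointer-update command before issuing the response.   */
    resp_ptr = 2;
    set_request_pointer(2);
    printf("response for request %d matched to request-side entry %d\n",
           resp_queue[resp_ptr], req_id_queue[req_ptr]);

    /* In-order case: both pointers simply track the same entry. */
    resp_ptr = 0;
    set_request_pointer(0);
    printf("response for request %d matched to request-side entry %d\n",
           resp_queue[resp_ptr], req_id_queue[req_ptr]);
    return 0;
}
```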
    • 7. Invention Grant
    • Title: Computer system for sharing I/O device
    • Publication No.: US07890669B2
    • Publication Date: 2011-02-15
    • Application No.: US11561557
    • Filing Date: 2006-11-20
    • Inventors: Keitaro Uehara; Yuji Tsushima; Toshiomi Moriki; Yoshiko Yasuda
    • IPC: G06F 13/28
    • CPC: G06F 13/385
    • Abstract: Provided is a computer system in which an I/O card is shared among physical servers and logical servers. The servers are configured in advance so that an I/O card is either used exclusively by one physical or logical server or shared among a plurality of servers. An I/O hub allocates a virtual MM I/O address, unique to each physical or logical server, to the physical MM I/O address associated with each I/O card, and keeps allocation information recording the relation between the allocated virtual MM I/O address, the physical MM I/O address, and a server identifier unique to each physical or logical server. When a request to access an I/O card is sent from a physical or logical server, the allocation information is consulted and a server identifier is extracted from the access request; the extracted server identifier identifies the physical or logical server that made the access request. (An illustrative sketch of this address-allocation table follows this record.)
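A minimal sketch of the allocation table the abstract above attributes to the I/O hub: each entry pairs a server-unique virtual MM I/O address with the card's physical MM I/O address and the server identifier, and a lookup by virtual address recovers the requesting server. Field and function names (alloc_entry_t, allocate, lookup) are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical allocation table kept by the I/O hub: one entry per
 * (server, I/O card) pair. */
typedef struct {
    uint64_t virt_mmio;   /* virtual MM I/O address handed to the server      */
    uint64_t phys_mmio;   /* physical MM I/O address of the I/O card          */
    uint32_t server_id;   /* identifier unique to the physical/logical server */
} alloc_entry_t;

#define MAX_ENTRIES 16
static alloc_entry_t table[MAX_ENTRIES];
static int n_entries;

static void allocate(uint32_t server_id, uint64_t virt, uint64_t phys)
{
    table[n_entries++] = (alloc_entry_t){ .virt_mmio = virt,
                                          .phys_mmio = phys,
                                          .server_id = server_id };
}

/* On an access request, the I/O hub consults the allocation information,
 * translating the virtual MM I/O address and identifying the server. */
static const alloc_entry_t *lookup(uint64_t virt_mmio)
{
    for (int i = 0; i < n_entries; i++)
        if (table[i].virt_mmio == virt_mmio)
            return &table[i];
    return NULL;  /* no server owns this virtual MM I/O address */
}

int main(void)
{
    /* One shared I/O card at physical MM I/O 0xF0000000, mapped into two servers. */
    allocate(/*server_id=*/1, 0xA0000000ULL, 0xF0000000ULL);
    allocate(/*server_id=*/2, 0xB0000000ULL, 0xF0000000ULL);

    const alloc_entry_t *e = lookup(0xB0000000ULL);
    if (e)
        printf("access from server %u -> physical MM I/O 0x%llx\n",
               e->server_id, (unsigned long long)e->phys_mmio);
    return 0;
}
```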