    • 4. Published application
    • Configurable bi-directional bus for communicating between autonomous units
    • Publication No.: US20050154858A1
    • Publication date: 2005-07-14
    • Application No.: US10757673
    • Filing date: 2004-01-14
    • Inventors: Kerry Kravec, Ali Saidi, Jan Slyfield, Pascal Tannhof
    • IPC: G06F13/40; G06F15/00
    • CPC: G06F13/4027
    • Processing units (PUs) are coupled with a gated bi-directional bus structure that allows the PUs to be cascaded. Each PUn has communication logic and function logic. Each PUn is physically coupled to two other PUs, a PUp and a PUf. The communication logic receives Link Out data from a PUp and sends Link In data to a PUf. The communication logic has register bits for enabling and disabling the data transmission. The communication logic couples the Link Out data from a PUp to the function logic and couples Link In data to the PUp from the function logic in response to the register bits. The function logic receives output data from the PUn and Link In data from the communication logic and forms Link Out data which is coupled to the PUf. The function logic couples Link In data from the PUf to the PUn and to the communication logic.
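The cascading scheme in this abstract, where each PUn passes Link Out data toward its following neighbour (PUf), passes Link In data back toward its preceding neighbour (PUp), and register bits gate whether the unit takes part in the traffic, can be pictured with a small software model. The sketch below is only an interpretation of the abstract: the names ProcessingUnit, enable_out, enable_in and fn are hypothetical, and Python objects stand in for the hardware communication and function logic.

```python
# Minimal sketch of a gated, cascadable bi-directional PU chain
# (an interpretation of the abstract, not the patented hardware).

class ProcessingUnit:
    def __init__(self, name, fn=lambda x: x):
        self.name = name
        self.fn = fn              # stands in for the unit's "function logic"
        self.enable_out = True    # register bit gating the Link Out direction
        self.enable_in = True     # register bit gating the Link In direction

    def link_out(self, data_from_pup):
        """Forward direction: combine upstream Link Out data with local logic."""
        return self.fn(data_from_pup) if self.enable_out else data_from_pup

    def link_in(self, data_from_puf):
        """Backward direction: pass Link In data toward the previous unit."""
        return self.fn(data_from_puf) if self.enable_in else data_from_puf


def cascade(pus, data, direction="out"):
    """Run data through the whole chain in either bus direction."""
    order = pus if direction == "out" else list(reversed(pus))
    for pu in order:
        data = pu.link_out(data) if direction == "out" else pu.link_in(data)
    return data


if __name__ == "__main__":
    chain = [ProcessingUnit(f"PU{i}", fn=lambda x, i=i: x + i) for i in range(4)]
    chain[2].enable_out = False        # gate PU2 off in the forward path
    print(cascade(chain, 0, "out"))    # 0 + 0 + 1 + 3 = 4 (PU2 skipped)
    print(cascade(chain, 0, "in"))     # 3 + 2 + 1 + 0 = 6 (all units enabled)
```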
    • 6. Published application
    • PROCESSING UNIT HAVING A DUAL CHANNEL BUS ARCHITECTURE
    • Publication No.: US20050138324A1
    • Publication date: 2005-06-23
    • Application No.: US10905100
    • Filing date: 2004-12-15
    • Inventors: Pascal Tannhof, Jan Slyfield
    • IPC: G06F15/00; G06F15/173
    • CPC: G06F15/17368
    • A processing unit having a dual channel bus architecture associated with a specific instruction set, configured to receive an input message and transmit an output message that is identical or derived therefrom. A message consists of one opcode, with or without associated data, used to control each processing unit depending on logic conditions stored in dedicated registers in each unit. Processing units are serially connected but can work simultaneously for a total pipelined operation. This dual architecture is organized around two channels labeled Channel 1 and Channel 2. Channel 1 mainly transmits an input message to all units while Channel 2 mainly transmits the results after processing in a unit as an output message. Depending on the logic conditions, an input message not processed in a processing unit may be transmitted to the next one without any change.
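A rough way to picture the dual channel message flow in this abstract is a chain of serially connected units that reads an opcode-plus-data message on Channel 1 and, when its own logic conditions match, puts a result on Channel 2, otherwise forwarding both channels unchanged. The sketch below is an interpretation under that simplification; the Message and DualChannelUnit names, and the idea that each unit handles exactly one opcode, are assumptions rather than details taken from the patent.

```python
# Toy model of the Channel 1 / Channel 2 message flow (illustrative only).
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class Message:
    opcode: str
    data: Optional[Any] = None


class DualChannelUnit:
    """One serially connected unit; handles a single opcode (an assumption)."""

    def __init__(self, handled_opcode, handler):
        self.handled_opcode = handled_opcode
        self.handler = handler

    def step(self, ch1_in: Message, ch2_in: Optional[Message]):
        """Return (Channel 1 out, Channel 2 out) for one input message."""
        if ch1_in.opcode == self.handled_opcode:
            # Unit processes the message; the result goes out on Channel 2.
            return ch1_in, Message("RESULT", self.handler(ch1_in.data))
        # Message not addressed to this unit: forward both channels unchanged.
        return ch1_in, ch2_in


def run_chain(units, message):
    ch1, ch2 = message, None
    for unit in units:
        ch1, ch2 = unit.step(ch1, ch2)
    return ch2


if __name__ == "__main__":
    chain = [DualChannelUnit("DOUBLE", lambda d: 2 * d),
             DualChannelUnit("NEGATE", lambda d: -d)]
    print(run_chain(chain, Message("NEGATE", 21)))  # Message(opcode='RESULT', data=-21)
```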
    • 7. Granted patent
    • Method and circuits to virtually increase the number of prototypes in artificial neural networks
    • Publication No.: US07254565B2
    • Publication date: 2007-08-07
    • Application No.: US10137969
    • Filing date: 2002-05-03
    • Inventors: Ghislain Imbert De Tremiolles, Pascal Tannhof
    • IPC: G06E1/00; G06E3/00; G06F15/18; G06G7/00; G06N3/00; G06N3/08; G06N3/04
    • CPC: G06K9/6276; G06N3/063
    • An improved Artificial Neural Network (ANN) is disclosed that comprises a conventional ANN, a database block, and a compare and update circuit. The conventional ANN is formed by a plurality of neurons, each neuron having a prototype memory dedicated to storing a prototype and a distance evaluator to evaluate the distance between the input pattern presented to the ANN and the prototype stored therein. The database block holds: all the prototypes arranged in slices, each slice being capable of storing up to a maximum number of prototypes; the input patterns or queries to be presented to the ANN; and the distances resulting from the evaluation performed during the recognition/classification phase. The compare and update circuit compares each newly evaluated distance with the distance previously found for the same input pattern and updates, or not, the distance previously stored.
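The slicing idea in this abstract, evaluating more prototypes than the physical neurons can hold by loading them slice by slice and keeping, per query, the best distance found so far, can be sketched in software. This is a minimal sketch assuming an L1 distance evaluator and plain numeric vectors; the function names and the slice_size parameter are illustrative, not taken from the patent.

```python
# Sketch of "virtually increasing" the prototype count by slicing
# (illustrative; the patent describes a hardware compare-and-update circuit).

def l1_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))


def classify_with_slices(prototypes, queries, slice_size):
    """Evaluate more prototypes than the ANN physically holds.

    The prototype set is split into slices of at most slice_size; each
    slice is "loaded" in turn and distances re-evaluated.  The compare
    and update step keeps, per query, the best distance seen so far.
    """
    best = [(float("inf"), None)] * len(queries)   # (distance, prototype index)
    for start in range(0, len(prototypes), slice_size):
        slice_ = prototypes[start:start + slice_size]      # load one slice
        for qi, query in enumerate(queries):
            for pi, proto in enumerate(slice_, start=start):
                d = l1_distance(query, proto)
                if d < best[qi][0]:                # compare-and-update step
                    best[qi] = (d, pi)
    return best


if __name__ == "__main__":
    prototypes = [[0, 0], [10, 10], [5, 5], [9, 1]]
    queries = [[1, 1], [8, 2]]
    print(classify_with_slices(prototypes, queries, slice_size=2))
    # [(2, 0), (2, 3)] -> each query paired with its nearest prototype overall
```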
    • 8. Granted patent
    • Circuits and method for shaping the influence field of neurons and neural networks resulting therefrom
    • Publication No.: US06347309B1
    • Publication date: 2002-02-12
    • Application No.: US09223478
    • Filing date: 1998-12-30
    • Inventors: Ghislain Imbert De Tremiolles, Pascal Tannhof
    • IPC: G06N3/06
    • CPC: G06K9/6271; G06N3/063
    • The improved neural network of the present invention results from the combination of a dedicated logic block with a conventional neural network based upon a mapping of the input space, usually employed to classify an input data by computing the distance between said input data and the prototypes memorized therein. The improved neural network is able to classify an input data, for instance represented by a vector A, even when some of its components are noisy or unknown during either the learning or the recognition phase. To that end, influence fields of various and different shapes are created for each neuron of the conventional neural network. The logic block transforms at least some of the n components (A1, ..., An) of the input vector A into the m components (V1, ..., Vm) of a network input vector V according to a linear or non-linear transform function F. In turn, vector V is applied as the input data to said conventional neural network. The transform function F is such that certain components of vector V are not modified, e.g. Vk=Aj, while other components are transformed as mentioned above, e.g. Vi=Fi(A1, ..., An). In addition, one (or more) component of vector V can be used to compensate an offset that is present in the distance evaluation of vector V. Because the logic block is placed in front of the said conventional neural network, any modification thereof is avoided.
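The abstract's transform function F, which maps the raw input vector A onto the network input vector V so that some components are copied (Vk = Aj), some are derived (Vi = Fi(A1, ..., An)) and one can compensate an offset in the distance evaluation, can be illustrated with a toy classifier. In the sketch below the conventional neural network is reduced to a nearest-prototype search with L1 distance, and the particular choice of F is purely illustrative.

```python
# Sketch of a logic block (transform F) placed in front of an unmodified
# nearest-prototype classifier; the transform itself is an assumption.

def transform_F(a):
    """Build network input V from raw input A.

    V[0], V[1] are copied (Vk = Aj); V[2] is a derived component
    (Vi = Fi(A1..An)); V[3] is a constant that could compensate a fixed
    offset in the distance evaluation, as the abstract suggests.
    """
    return [a[0], a[1], abs(a[0] - a[1]), 1.0]


def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))


def classify(prototypes, categories, raw_input):
    v = transform_F(raw_input)                 # logic block in front of the ANN
    distances = [l1(v, p) for p in prototypes] # unchanged conventional network
    best = min(range(len(prototypes)), key=distances.__getitem__)
    return categories[best], distances[best]


if __name__ == "__main__":
    # Prototypes are stored in the transformed (V) space.
    prototypes = [transform_F([0, 0]), transform_F([4, 8])]
    categories = ["A", "B"]
    print(classify(prototypes, categories, [3, 7]))   # ('B', 2.0)
```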
    • 10. Granted patent
    • System for scaling images using neural networks
    • Publication No.: US07734117B2
    • Publication date: 2010-06-08
    • Application No.: US12021511
    • Filing date: 2008-01-29
    • Inventors: Pascal Tannhof, Ghislain I De Tremiolles
    • IPC: G06K9/32
    • CPC: G06T3/4046
    • An artificial neural network (ANN) based system that is adapted to process an input pattern to generate an output pattern related thereto having a different number of components than the input pattern. The system (26) is comprised of an ANN (27) and a memory (28), such as a DRAM memory, that are serially connected. The input pattern (23) is applied to a processor (22), where it can be processed or not (the most general case), before it is applied to the ANN and stored therein as a prototype (if learned). A category is associated with each stored prototype. The processor computes the coefficients that allow the determination of the estimated values of the output pattern; these coefficients are the components of a so-called intermediate pattern (24). Assuming the ANN has already learned a number of input patterns, when a new input pattern is presented to the ANN in the recognition phase, the category of the closest prototype is output therefrom and is used as a pointer to the memory. In turn, the memory outputs the corresponding intermediate pattern. The input pattern and the intermediate pattern are applied to the processor to construct the output pattern (25) using the coefficients. Typically, the input pattern is a block of pixels in the field of scaling images.
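The recognition-phase data path in this abstract (input block to ANN, ANN category used as a pointer into the memory, memory returning the intermediate pattern of coefficients, processor combining input and coefficients into the larger output block) can be traced with a small example. The sketch below assumes 2-pixel input blocks expanded to 3 output pixels by linear coefficients; the dictionary standing in for the DRAM memory, the coefficient scheme, and all names are illustrative assumptions.

```python
# Sketch of the ANN + memory scaling pipeline (illustrative interpretation).

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))


def ann_category(prototypes, categories, pattern):
    """Return the category of the closest stored prototype."""
    best = min(range(len(prototypes)), key=lambda i: l1(prototypes[i], pattern))
    return categories[best]


def scale_block(pattern, prototypes, categories, coeff_memory):
    """The ANN output (a category) is used as a pointer into the memory, which
    returns the intermediate pattern (coefficients); the processor then builds
    the larger output pattern from the input pixels and those coefficients."""
    cat = ann_category(prototypes, categories, pattern)
    coeffs = coeff_memory[cat]                       # intermediate pattern
    # Each output pixel is a weighted sum of the input pixels.
    return [sum(c * p for c, p in zip(row, pattern)) for row in coeffs]


if __name__ == "__main__":
    prototypes = [[0, 0], [200, 200], [0, 200]]      # learned 2-pixel blocks
    categories = ["flat", "bright", "edge"]
    coeff_memory = {
        "flat":   [[1, 0], [0.5, 0.5], [0, 1]],      # smooth interpolation
        "bright": [[1, 0], [0.5, 0.5], [0, 1]],
        "edge":   [[1, 0], [0, 1], [0, 1]],          # keep the edge sharp
    }
    print(scale_block([10, 190], prototypes, categories, coeff_memory))
    # [10, 190, 190] -> two input pixels scaled to three output pixels
```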