Machine learning based post route path delay estimator from synthesis netlist

    Publication Number: US20190325092A1

    Publication Date: 2019-10-24

    Application Number: US15960833

    Filing Date: 2018-04-24

    Applicant: NVIDIA Corp.

    Abstract: A neural network includes an embedding layer that receives a gate function vector and an embedding width and alters the shape of the gate function vector by the embedding width; a concatenator that concatenates a gate feature input vector with the altered gate function vector; a convolution layer that receives a window size, stride, and output feature size and generates an output convolution vector whose shape is based on the length of the gate function vector, the window size, and the output feature size; and a fully connected layer that reduces the output convolution vector to a final path delay output.
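
    For illustration only, a minimal sketch of the layered architecture the abstract describes, assuming a PyTorch implementation; the layer sizes, the ReLU activation, and the pooling before the fully connected layer are assumptions, not details taken from the patent:

```python
# Hypothetical sketch (not the patent's implementation): gate function IDs are embedded,
# concatenated with per-gate features, convolved along the path, and reduced by a fully
# connected layer to a single path delay estimate.
import torch
import torch.nn as nn

class PathDelayEstimator(nn.Module):
    def __init__(self, num_gate_functions=64, embedding_width=16,
                 num_gate_features=8, window_size=5, stride=1, out_features=32):
        super().__init__()
        self.embed = nn.Embedding(num_gate_functions, embedding_width)
        self.conv = nn.Conv1d(embedding_width + num_gate_features, out_features,
                              kernel_size=window_size, stride=stride)
        self.fc = nn.Linear(out_features, 1)

    def forward(self, gate_functions, gate_features):
        # gate_functions: (batch, path_len) integer gate function IDs
        # gate_features:  (batch, path_len, num_gate_features) per-gate features
        emb = self.embed(gate_functions)               # (batch, path_len, embedding_width)
        x = torch.cat([emb, gate_features], dim=-1)    # concatenate features with embeddings
        x = x.transpose(1, 2)                          # (batch, channels, path_len) for Conv1d
        x = torch.relu(self.conv(x))                   # output shape depends on window and stride
        x = x.mean(dim=-1)                             # pool over the convolved path positions
        return self.fc(x).squeeze(-1)                  # final path delay output
```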

    BROADCAST SCAN NETWORK
    Invention Application

    Publication Number: US20190128963A1

    Publication Date: 2019-05-02

    Application Number: US15935438

    Filing Date: 2018-03-26

    Applicant: NVIDIA Corp.

    Abstract: A distributed test circuit includes partitions arranged in series to form a scan path, each partition including a scan multiplexer, a test data register, and a segment insertion bit component. The scan multiplexer of each partition provides inputs to that partition's test data register. Broadcast control logic generates a select signal to the scan multiplexer of each partition, placing the test circuit in a broadcast mode when the select signal is asserted and switching it to a daisy mode when the select signal is de-asserted. The segment insertion bit is operable to include each partition in, or bypass it from, the scan path.
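
    A rough behavioral model of the broadcast/daisy scan behavior the abstract describes may help; the Python classes below are an illustrative assumption, not the patent's circuit:

```python
# Behavioral sketch (assumed model): each partition holds a test data register fed by a
# scan multiplexer. In broadcast mode every included partition loads the same scan-in
# bit; in daisy mode the partitions form one serial shift chain. The segment insertion
# bit (sib) includes a partition in, or bypasses it from, the scan path.
class Partition:
    def __init__(self, length):
        self.tdr = [0] * length          # test data register contents
        self.sib = True                  # segment insertion bit: include (True) / bypass (False)

    def shift_in(self, bit):
        """Shift one bit into the TDR and return the bit shifted out."""
        out = self.tdr[-1]
        self.tdr = [bit] + self.tdr[:-1]
        return out

def scan_cycle(partitions, scan_in, broadcast):
    """One scan clock tick for the whole chain; returns the chain's scan-out bit."""
    bit = scan_in
    for p in partitions:
        if not p.sib:
            continue                     # bypassed partitions are excluded from the scan path
        if broadcast:
            bit = p.shift_in(scan_in)    # select asserted: every partition sees scan_in
        else:
            bit = p.shift_in(bit)        # select de-asserted: daisy chain through partitions
    return bit
```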

    Method and apparatus for adaptive power consumption
    Invention Application (In Force)

    Publication Number: US20040039954A1

    Publication Date: 2004-02-26

    Application Number: US10226708

    Filing Date: 2002-08-22

    Applicant: NVIDIA, CORP.

    Abstract: A method for adapting the power consumption of a processor based upon an application demand is provided. The method begins by determining an application demand based upon a current processing operation. A time interval associated with the application demand is then determined, followed by the power consuming functions that are unnecessary for that demand. The clock frequency of the unnecessary power consuming functions is then reduced for the time interval. In one embodiment, power to the unnecessary power consuming functions is terminated. In another embodiment, the clock frequency of the processor is adjusted for at least a portion of the time interval. A program interface, processor instructions, and a processor for adapting the power consumption of a computer system are also provided.
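
    The flow of the described method can be summarized in a short sketch; the classes, method names, and demand table below are hypothetical stand-ins, not the patent's program interface:

```python
# Illustrative sketch of the described flow: determine the demand's time interval and the
# functions unnecessary for that demand, then reduce their clock (or terminate their power)
# for the interval. All names and values here are assumptions.
from dataclasses import dataclass, field

@dataclass
class FunctionalUnit:
    name: str
    clock_mhz: float
    powered: bool = True

@dataclass
class Processor:
    units: dict = field(default_factory=dict)
    # Hypothetical mapping: application demand -> (time interval in ms, required unit names).
    demand_table = {
        "audio_playback": (100, {"dsp"}),
        "idle":           (500, set()),
    }

    def adapt_power(self, demand, gate_power=False):
        """Reduce the clock of (or cut power to) units unnecessary for the demand."""
        interval_ms, required = self.demand_table[demand]
        for name, unit in self.units.items():
            if name in required:
                continue
            if gate_power:
                unit.powered = False     # one embodiment: terminate power to the unit
            else:
                unit.clock_mhz /= 2      # another embodiment: reduce its clock frequency
        return interval_ms               # caller restores settings after this interval

cpu = Processor(units={"dsp": FunctionalUnit("dsp", 800.0),
                       "gpu": FunctionalUnit("gpu", 600.0)})
interval = cpu.adapt_power("audio_playback")   # gpu clock halved for `interval` milliseconds
```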

    Method and apparatus for network address translation integration with internet protocol security
    Invention Application (In Force)

    Publication Number: US20030233475A1

    Publication Date: 2003-12-18

    Application Number: US10172046

    Filing Date: 2002-06-13

    Applicant: Nvidia Corp.

    Abstract: A method and apparatus for enhanced security for communication over a network, and more particularly for integrating Network Address Translation (NAT) with Internet Protocol Security (IPSec), are described. A client computer makes a second address request in order to prompt an address server to provide a public address. This address is recorded in a mapping table accessible by a gateway computer. The public address is used as the source address for packets from a client using IPSec. When the gateway computer identifies a packet's source address as one of its public addresses, NAT is suspended for that packet and the packet is routed without NAT. Incoming traffic is routed using the mapping table.
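
    A simplified routing sketch may clarify the gateway behavior; the addresses and function names below are illustrative assumptions, not the patent's implementation:

```python
# Assumed model: the mapping table records the public address the address server issued
# to an IPSec client. Outbound packets already carrying such a public source address
# bypass NAT, so address rewriting does not invalidate IPSec; other packets get ordinary
# NAT. Incoming packets are routed toward the client found in the mapping table.
GATEWAY_NAT_ADDRESS = "198.51.100.1"                  # hypothetical gateway public address
mapping_table = {"203.0.113.10": "192.168.1.23"}      # issued public addr -> client private addr

def route_outbound(packet):
    if packet["src"] in mapping_table:
        return packet                                  # IPSec client: NAT suspended
    return {**packet, "src": GATEWAY_NAT_ADDRESS}      # ordinary traffic: NAT rewrite

def route_inbound(packet):
    client = mapping_table.get(packet["dst"])
    if client is not None:
        return ("forward", client, packet)             # route to the IPSec client without NAT
    return ("nat", GATEWAY_NAT_ADDRESS, packet)

print(route_outbound({"src": "203.0.113.10", "dst": "8.8.8.8"}))  # unchanged: NAT suspended
```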

    ADAPTIVE CLOCK GENERATION FOR SERIAL LINKS

    Publication Number: US20250132892A1

    Publication Date: 2025-04-24

    Application Number: US18492126

    Filing Date: 2023-10-23

    Applicant: NVIDIA Corp.

    Abstract: Adaptive clock mechanisms for serial links utilize a delay-chain-based edge generation circuit to generate a clock that is a faster (higher-frequency) version of an incoming digital clock. The base frequency of the link clock used by the line transmitters is determined by the (slower) clock used by the digital circuitry supplying data to the line transmitters. An edge generator, which may be composed of only non-synchronous circuit elements, multiplies the edges of the slower clock to generate both the link clock and a clock forwarded to the receiver at a phase offset from the link clock.
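
    As a rough illustration of the edge multiplication the abstract describes (the multiplication factor, timing values, and function below are assumptions, not the patent circuit):

```python
# Behavioral sketch: a delay-chain-based edge generator inserts evenly spaced edges
# between consecutive edges of the slower digital clock, producing a higher-frequency
# link clock plus a forwarded clock at a phase offset from the link clock.
def multiplied_edges(input_edges, factor, phase_offset=0.0):
    """Return (link_clock_edges, forwarded_clock_edges) derived from the slower clock."""
    link, forwarded = [], []
    for t0, t1 in zip(input_edges, input_edges[1:]):
        step = (t1 - t0) / factor                        # spacing set by the delay-chain taps
        for k in range(factor):
            edge = t0 + k * step
            link.append(edge)                            # link clock edge
            forwarded.append(edge + phase_offset * step) # forwarded clock, phase shifted
    return link, forwarded

# Example: a 1 GHz digital clock (edges every 1.0 ns) multiplied by 4 yields a 4 GHz link clock.
digital_clock_ns = [0.0, 1.0, 2.0, 3.0]
link_clk, fwd_clk = multiplied_edges(digital_clock_ns, factor=4, phase_offset=0.5)
```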

    THREE DIMENSIONAL CIRCUIT MOUNTING STRUCTURES

    Publication Number: US20250048532A1

    Publication Date: 2025-02-06

    Application Number: US18919701

    Filing Date: 2024-10-18

    Applicant: NVIDIA Corp.

    Abstract: A circuit board includes chip die mounted on a three dimensional rectangular structure, a three dimensional triangular prism structure, or a combination thereof. A ball grid array for the chip die mounted on any such three dimensional structure is interposed between the three dimensional structure and the circuit board itself.

    Keeper-free volatile memory system
    Invention Grant

    Publication Number: US12131775B2

    Publication Date: 2024-10-29

    Application Number: US17678799

    Filing Date: 2022-02-23

    Applicant: NVIDIA Corp.

    CPC classification number: G11C11/4125

    Abstract: A static random access memory (SRAM) or other bit-storing cell arrangement includes memory cells organized into banks and a hierarchical bitline structure with local bitlines for subsets of the banks and a global bitline spanning the subsets. The keeper circuit for the global bitline is replaced by bias circuitry on output transistors of the memory cells.
