Exposure for performing synchronized off-axis alignment
    21.
    Granted Patent
    Exposure for performing synchronized off-axis alignment (In Force)

    Publication No.: US6166392A

    Publication Date: 2000-12-26

    Application No.: US193220

    Filing Date: 1998-11-16

    CPC classification number: G03F9/7011 G03F7/70358 G03F7/70733

    Abstract: An exposure apparatus has at least two wafer pads for holding wafers at the same time, so that different tasks, including exposing a wafer, aligning a wafer, and loading or unloading a wafer, can be performed synchronously. The exposure apparatus of the invention includes an exposing unit, a wafer-supporting unit and an alignment beam scan unit. The wafer-supporting unit contains at least two wafer pads for holding wafers. The alignment beam scan unit contains an interferometer for detecting the interference patterns formed by the alignment beams and the alignment marks on the wafers. The tasks of aligning a wafer, exposing a wafer, or loading/unloading a wafer can be performed synchronously on the wafers placed on each individual wafer pad.

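    A rough illustrative sketch (not taken from the patent) of the dual-pad idea: while one wafer pad is exposing, the other can be aligning or loading/unloading, so the stages overlap. The task names and the fixed four-step cycle below are assumptions made purely for illustration.

        # Minimal model of two wafer pads cycling through tasks in parallel.
        # Task names and cycle order are illustrative assumptions, not the
        # patent's actual process flow.
        TASKS = ["load", "align", "expose", "unload"]

        def next_task(task):
            """Advance a pad to the next task in the fixed cycle."""
            return TASKS[(TASKS.index(task) + 1) % len(TASKS)]

        pads = {"pad_A": "expose", "pad_B": "align"}   # staggered start
        for cycle in range(4):
            print(f"cycle {cycle}: " + ", ".join(f"{p}={t}" for p, t in pads.items()))
            pads = {p: next_task(t) for p, t in pads.items()}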

    Techniques for connecting an external network coprocessor to a network processor packet parser
    22.
    Granted Patent
    Techniques for connecting an external network coprocessor to a network processor packet parser (In Force)

    Publication No.: US09215125B2

    Publication Date: 2015-12-15

    Application No.: US13884664

    Filing Date: 2011-12-19

    Abstract: A network processor includes first communication protocol ports that each support ‘M’ minimum size packet data path traffic on ‘N’ lanes at ‘S’ Gigabits per second (Gbps) and traffic with different communication protocol units on ‘n’ additional lanes at ‘s’ Gbps. The first communication protocol ports support access to an external coprocessor using parsing logic located in each of the first communication protocol ports. The parsing logic, during a parsing period, is configured to send a request to the external coprocessor upon reception of an ‘M’ size packet and to receive a response from the external coprocessor. The parsing logic sends a request word of maximum size ‘m’ bytes to the external coprocessor on one of the additional lanes and receives a response word of maximum size ‘m’ bytes from the external coprocessor on that same lane, while complying with the equation N×S/M=

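    The equation in the abstract is cut off above. A hedged sketch of the kind of rate-matching check it appears to describe follows: the worst-case request rate generated by minimum-size ‘M’ packets on the ‘N’ main lanes is compared with the word rate the ‘n’ sideband lanes can carry. The exact form of the patent's equation, the byte units, and the example figures are assumptions.

        # Sketch: can the coprocessor sideband (n lanes at s Gbps, m-byte words)
        # keep up with one request per minimum-size packet on the main data
        # path (N lanes at S Gbps, M-byte packets)?  The inequality used here
        # is an illustrative assumption; the patent's equation is truncated.
        def sideband_keeps_up(N, S_gbps, M_bytes, n, s_gbps, m_bytes):
            packet_rate = (N * S_gbps * 1e9 / 8) / M_bytes   # packets/s = requests/s
            word_rate = (n * s_gbps * 1e9 / 8) / m_bytes     # request words/s
            return word_rate >= packet_rate

        # Made-up figures: 4 x 10 Gbps lanes, 64-byte packets, one 10 Gbps
        # sideband lane, 16-byte request words.
        print(sideband_keeps_up(N=4, S_gbps=10, M_bytes=64, n=1, s_gbps=10, m_bytes=16))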

    FLEXIBLE AND SCALABLE ENHANCED TRANSMISSION SELECTION METHOD FOR NETWORK FABRICS
    24.
    Patent Application
    FLEXIBLE AND SCALABLE ENHANCED TRANSMISSION SELECTION METHOD FOR NETWORK FABRICS (In Force)

    Publication No.: US20130163611A1

    Publication Date: 2013-06-27

    Application No.: US13334306

    Filing Date: 2011-12-22

    CPC classification number: H04L47/78 H04L12/465

    Abstract: IEEE 802.1Q and Enhanced Transmission Selection provide only eight different traffic classes that may be used to control bandwidth on a particular physical connection (or link). Instead of relying only on these eight traffic classes to manage bandwidth, the embodiments discussed herein disclose an Enhanced Transmission Selection scheduler that permits a network device to set the bandwidth for an individual virtual LAN. Allocating bandwidth in a port based on a virtual LAN ID permits a network device to allocate bandwidth to, e.g., millions of unique virtual LANs. Thus, this technique may provide more granular control of the network fabric and improve its performance.

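    A minimal sketch of the core idea of keying bandwidth allocation on the VLAN ID rather than on the eight 802.1p traffic classes. The weight table and proportional-share rule below are illustrative assumptions, not the patent's scheduler.

        # Share a link's bandwidth per VLAN ID instead of per traffic class
        # (802.1Q/ETS define only 8 classes).  Weights are illustrative.
        LINK_GBPS = 40
        vlan_weights = {100: 5, 200: 3, 4001: 2}    # VLAN ID -> relative weight

        def vlan_bandwidth(vlan_id):
            """Allocate link bandwidth in proportion to the VLAN's weight."""
            total = sum(vlan_weights.values())
            return LINK_GBPS * vlan_weights.get(vlan_id, 0) / total

        for vid in vlan_weights:
            print(f"VLAN {vid}: {vlan_bandwidth(vid):.1f} Gbps")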

    Assignment constraint matrix for assigning work from multiple sources to multiple sinks
    25.
    Granted Patent
    Assignment constraint matrix for assigning work from multiple sources to multiple sinks (Expired)

    Publication No.: US08391305B2

    Publication Date: 2013-03-05

    Application No.: US12650080

    Filing Date: 2009-12-30

    CPC classification number: H04L49/9047

    Abstract: An assignment constraint matrix is used in assigning work, such as data packets, from a plurality of sources, such as data queues in a network processing device, to a plurality of sinks, such as processor threads in the network processing device. The assignment constraint matrix is implemented as a plurality of qualifier matrixes adapted to operate simultaneously in parallel. Each of the plurality of qualifier matrixes is adapted to determine which sources in a subset of supported sources are qualified to provide work to a set of sinks based on assignment constraints. The determination of qualified sources may be based on sink availability information that may be provided for a set of sinks on a single chip or distributed across multiple chips.

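    A small sketch of how an assignment constraint matrix combined with sink availability could decide which sources are currently qualified to hand off work. The matrix contents, sizes, and the sequential evaluation below are illustrative assumptions; the patent describes qualifier matrixes operating in parallel in hardware.

        # constraints[source][sink] says whether that source may send work to
        # that sink; a source is "qualified" if at least one available sink
        # accepts it.  All values here are illustrative assumptions.
        constraints = [
            [True,  False, True ],    # source 0
            [False, True,  True ],    # source 1
            [True,  True,  False],    # source 2
        ]
        sink_available = [False, False, True]   # per-sink availability

        def qualified_sources():
            return [src for src, row in enumerate(constraints)
                    if any(allowed and sink_available[snk]
                           for snk, allowed in enumerate(row))]

        print(qualified_sources())   # -> [0, 1] with the data above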

    Multiple-level data processing system
    26.
    Granted Patent
    Multiple-level data processing system (In Force)

    Publication No.: US07930742B2

    Publication Date: 2011-04-19

    Application No.: US11422087

    Filing Date: 2006-06-05

    Abstract: Methods and systems for processing multiple levels of data in system security approaches are disclosed. In one embodiment, a first set and a second set of resources are selected to iteratively and independently reverse multiple levels of format conversions on the payload portions of a data unit from a first file and a data unit from a second file, respectively. The first file and the second file are associated with a first transport connection and a second transport connection, respectively. Upon completion of the aforementioned reversal operations, the payload portions of a first reversed data unit and a second reversed data unit, which correspond to the data unit of the first file and the data unit of the second file, respectively, are inspected for suspicious patterns prior to any aggregation of the data units of the first file or the second file.

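    An illustrative sketch of iteratively reversing format conversions on a payload and then inspecting the result for a suspicious pattern. The specific encodings (base64 and deflate), the signature string, and the whole-payload treatment are assumptions for illustration; the patent operates on individual data units before any aggregation.

        # Peel format conversions off a payload layer by layer, then scan the
        # fully reversed bytes for a signature.  Encodings and signature are
        # illustrative assumptions.
        import base64, zlib

        SIGNATURE = b"DROP TABLE"

        def reverse_layers(payload, layers):
            """'layers' lists conversions in the order they were applied."""
            for layer in reversed(layers):
                if layer == "base64":
                    payload = base64.b64decode(payload)
                elif layer == "deflate":
                    payload = zlib.decompress(payload)
            return payload

        raw = base64.b64encode(zlib.compress(b"SELECT 1; DROP TABLE users"))
        plain = reverse_layers(raw, layers=["deflate", "base64"])
        print(SIGNATURE in plain)    # -> True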

    Receive queue device with efficient queue flow control, segment placement and virtualization mechanisms
    27.
    Granted Patent
    Receive queue device with efficient queue flow control, segment placement and virtualization mechanisms (In Force)

    Publication No.: US07912988B2

    Publication Date: 2011-03-22

    Application No.: US11487265

    Filing Date: 2006-07-14

    CPC classification number: H04L69/16 H04L69/12 H04L69/161

    Abstract: A mechanism for offloading the management of receive queues in a split (e.g. split socket, split iSCSI, split DAFS) stack environment, including efficient queue flow control and TCP/IP retransmission support. An Upper Layer Protocol (ULP) creates receive work queues and completion queues that are utilized by an Internet Protocol Suite Offload Engine (IPSOE) and the ULP to transfer information and carry out send operations. As consumers initiate receive operations, receive work queue entries (RWQEs) are created by the ULP and written to the receive work queue (RWQ). The IPSOE is notified of a new entry to the RWQ and subsequently reads this entry, which contains pointers to the data that is to be received. After the data is received, the IPSOE creates a completion queue entry (CQE) that is written into the completion queue (CQ). After the CQE is written, the ULP processes the entry and removes it from the CQ, freeing up space in both the RWQ and CQ. The number of entries available in the RWQ is monitored by the ULP so that it does not overwrite any valid entries. Likewise, the IPSOE monitors the number of entries available in the CQ, so as not to overwrite the CQ.

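    A highly simplified sketch of the receive work queue / completion queue handshake described above: the ULP posts RWQEs pointing at buffers, the offload engine consumes them and posts CQEs, and the ULP reaps CQEs. The queue depth, entry fields, and function names are illustrative assumptions.

        # ULP posts receive work queue entries (RWQEs); the offload engine
        # (IPSOE) consumes them and posts completion queue entries (CQEs);
        # the ULP polls the CQ.  Layout and sizes are illustrative assumptions.
        from collections import deque

        RWQ_DEPTH = 4
        rwq, cq = deque(), deque()

        def ulp_post_receive(buffer_addr):
            if len(rwq) >= RWQ_DEPTH:          # flow control: never overwrite
                raise RuntimeError("RWQ full")
            rwq.append({"buf": buffer_addr})

        def ipsoe_receive(data):
            rwqe = rwq.popleft()               # engine reads the posted entry
            cq.append({"buf": rwqe["buf"], "len": len(data)})

        def ulp_poll_completion():
            return cq.popleft() if cq else None

        ulp_post_receive(buffer_addr=0x1000)
        ipsoe_receive(b"hello")
        print(ulp_poll_completion())           # -> {'buf': 4096, 'len': 5}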

    Scheduler, network processor, and methods for weighted best effort scheduling
    29.
    Granted Patent
    Scheduler, network processor, and methods for weighted best effort scheduling (Expired)

    Publication No.: US07529224B2

    Publication Date: 2009-05-05

    Application No.: US11108485

    Filing Date: 2005-04-18

    CPC classification number: H04L47/568 H04L45/00 H04L45/60 H04L47/50 H04L47/527

    Abstract: Systems and methods for scheduling data packets in a network processor are disclosed. Embodiments provide a network processor that comprises a best-effort scheduler with a minimal calendar structure for addressing schedule control blocks. In one embodiment, a three-entry calendar structure provides for weighted best effort scheduling. Each of a plurality of different flows has an associated schedule control block. Schedule control blocks are stored as linked lists in a last-in-first-out buffer. Each calendar entry is associated with a different linked list by storing in the calendar entry the address of the first-out schedule control block in the linked list. Each schedule control block has a counter and is assigned a weight according to the bandwidth priority of the flow to which the corresponding packet belongs. Each time a schedule control block is accessed from a last-in-first-out buffer storing the linked list, the scheduler generates a scheduling event and the counter of the schedule control block is incremented. When the incremented counter of a schedule control block equals its weight, the schedule control block is temporarily removed from further scheduling.

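    A compact sketch of the weighting mechanism in the abstract: each schedule control block carries a weight and a counter, every scheduling event increments the counter, and a block whose counter reaches its weight is set aside for the rest of the round. The three-entry calendar and the LIFO linked lists are not modeled here; the names and structure below are assumptions.

        # Schedule control blocks (SCBs) with weights and counters.  When a
        # counter reaches its weight the SCB is parked until the round ends.
        # This simplifies away the patent's calendar of LIFO lists.
        scbs = [{"flow": "A", "weight": 3, "count": 0},
                {"flow": "B", "weight": 1, "count": 0}]

        def run_round():
            events, active = [], list(scbs)
            while active:
                for scb in list(active):
                    events.append(scb["flow"])     # one scheduling event
                    scb["count"] += 1
                    if scb["count"] == scb["weight"]:
                        active.remove(scb)         # parked for this round
            for scb in scbs:
                scb["count"] = 0                   # reset for the next round
            return events

        print(run_round())    # -> ['A', 'B', 'A', 'A']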
