-
Publication No.: US20250103734A1
Publication Date: 2025-03-27
Application No.: US18474157
Application Date: 2023-09-25
Applicant: Microsoft Technology Licensing, LLC
Inventor: David Alan HILL, Jennifer Marie BOERTLEIN
Abstract: Hybrid access control management systems for managing role-based access control resources and attribute-based access control resources are provided. One aspect provides a computing system for implementing hybrid access control management, the computing system comprising: processing circuitry coupled to memory that stores instructions, which, upon execution by the processing circuitry, cause the processing circuitry to: receive a request from a user account to access an access-controlled resource; determine a protection mechanism of the access-controlled resource, wherein the protection mechanism is an attribute-based protection mechanism or a role-based protection mechanism; validate the request from the user account based on the determination of the protection mechanism; and permit the user account to access the access-controlled resource upon successful validation of the request.
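As a rough illustration of the dispatch the abstract describes, here is a minimal Python sketch of validating a request against either a role-based or an attribute-based protection mechanism. All class and field names are hypothetical, not from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    # Hypothetical resource model: protected by roles OR by attributes.
    name: str
    mechanism: str                                   # "role" or "attribute"
    allowed_roles: set = field(default_factory=set)
    required_attributes: dict = field(default_factory=dict)

@dataclass
class UserAccount:
    roles: set
    attributes: dict

def validate_request(user: UserAccount, resource: Resource) -> bool:
    """Determine the protection mechanism, then validate accordingly."""
    if resource.mechanism == "role":
        # Role-based check: the account must hold at least one allowed role.
        return bool(user.roles & resource.allowed_roles)
    if resource.mechanism == "attribute":
        # Attribute-based check: every required attribute must match.
        return all(user.attributes.get(k) == v
                   for k, v in resource.required_attributes.items())
    return False  # unknown mechanism: deny by default

doc = Resource("payroll", "attribute", required_attributes={"dept": "HR"})
print(validate_request(UserAccount(roles=set(), attributes={"dept": "HR"}), doc))
```

Access is permitted only when the branch matching the resource's protection mechanism validates the request, mirroring the determine-then-validate flow in the claim.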
-
Publication No.: US20250103325A1
Publication Date: 2025-03-27
Application No.: US18372059
Application Date: 2023-09-23
Applicant: Microsoft Technology Licensing, LLC
Inventor: Shengyu FU, Neelakantan SUNDARESAN, Alexey SVYATKOVSKIY, Shuo ZHANG
Abstract: A code review is automatically generated by a large language model given a prompt that includes code changes made to a source code program, an associated intent, and an extended context. The intent represents an issue with the code changes from a code reviewer's perspective and is predicted from a neural classifier given the code changes in a code diff format. The neural classifier is a neural encoder transformer model pre-trained on various code review datasets and fine-tuned on code diff hunks of code changes labeled with an intent.
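The abstract describes a prompt assembled from three parts: the code diff, the classifier-predicted intent, and extended context. A simple sketch of such prompt assembly (the wording and structure of the prompt are illustrative assumptions, not the patented format):

```python
def build_review_prompt(diff_hunk: str, intent: str, extended_context: str) -> str:
    """Assemble an LLM prompt from a code diff hunk, the intent predicted
    by the neural classifier, and extended context (e.g. the enclosing
    function definition)."""
    return (
        "You are a code reviewer.\n"
        f"Predicted review intent: {intent}\n"
        f"Extended context:\n{extended_context}\n"
        f"Code changes (diff):\n{diff_hunk}\n"
        "Write a review comment addressing the intent:"
    )

hunk = "@@ -1,2 +1,2 @@\n-    total = 0\n+    total = None"
prompt = build_review_prompt(hunk, "possible null dereference",
                             "def sum_items(items): ...")
```

The resulting string would then be sent to the large language model to generate the review text.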
-
Publication No.: US20250103287A1
Publication Date: 2025-03-27
Application No.: US18474943
Application Date: 2023-09-26
Applicant: Microsoft Technology Licensing, LLC
Inventor: Nir DAVID, Oren ISTRIN, Segev RAVGAD, Anatoly TSVETOV
Abstract: Artificial intelligence (AI) operation is improved by combining pre-processing with quantization and post-processing with dequantization. Floating point conversion may be implemented as fixed point to fixed point conversion. Floating point conversion and precision may be mimicked, for example, using high precision parameters in a fixed point to fixed point conversion. Mimicking floating point using hardware acceleration reduces sequential operations, such as machine learning model preprocessing and quantization by a CPU, to one or two clock cycles in a single-step operation. Accordingly, computing resources, such as computing device cameras, may provide raw data to a hardware accelerator configured to quickly render the input in the correct format to an inference model by simultaneously performing preprocessing and quantization, substantially reducing inference latency and device power consumption while freeing up a CPU for other tasks.
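One way to picture the fused, single-step conversion the abstract describes is a precomputed lookup table that maps each raw input byte directly to its quantized int8 value, folding the float preprocessing math into a fixed-point-style table. The mean/std and quantization parameters below are hypothetical placeholders:

```python
import numpy as np

# Preprocessing (mean/std normalization) followed by int8 quantization,
# fused into one 256-entry lookup table so each raw camera byte is
# converted in a single step instead of a sequential float pipeline.
MEAN, STD = 127.5, 127.5          # hypothetical preprocessing parameters
SCALE, ZERO_POINT = 0.0079, 0     # hypothetical quantization parameters

def build_fused_lut() -> np.ndarray:
    raw = np.arange(256, dtype=np.float32)
    preprocessed = (raw - MEAN) / STD                  # float preprocessing
    q = np.round(preprocessed / SCALE) + ZERO_POINT    # quantize to int8 grid
    return np.clip(q, -128, 127).astype(np.int8)

LUT = build_fused_lut()

def fused_preprocess_quantize(raw_pixels: np.ndarray) -> np.ndarray:
    # One table lookup per pixel replaces preprocess-then-quantize.
    return LUT[raw_pixels]

frame = np.array([0, 128, 255], dtype=np.uint8)
print(fused_preprocess_quantize(frame))
```

In hardware the same idea becomes a small ROM or combinational logic on the accelerator's input path, which is how the sequential CPU work can collapse to one or two clock cycles.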
-
Publication No.: US12261926B2
Publication Date: 2025-03-25
Application No.: US17454731
Application Date: 2021-11-12
Applicant: Microsoft Technology Licensing, LLC
Inventor: Deepak Goel, Narendra Jayawant Gathoo, Philip A. Thomas, Srihari Raju Vegesna, Pradeep Sindhu, Wael Noureddine, Robert William Bowdidge, Ayaskant Pani, Gopesh Goyal
IPC: H04L69/00, H04L12/46, H04L45/16, H04L45/42, H04L45/64, H04L47/10, H04L47/52, H04L49/25, H04L69/324
Abstract: A fabric control protocol is described for use within a data center in which a switch fabric provides full mesh interconnectivity such that any of the servers may communicate packet data for a given packet flow to any other of the servers using any of a number of parallel data paths within the data center switch fabric. The fabric control protocol enables spraying of individual packets for a given packet flow across some or all of the multiple parallel data paths in the data center switch fabric and, optionally, reordering of the packets for delivery to the destination. The fabric control protocol may provide end-to-end bandwidth scaling and flow fairness within a single tunnel based on endpoint-controlled requests and grants for flows. In some examples, the fabric control protocol packet structure is carried over an underlying protocol, such as the User Datagram Protocol (UDP).
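The two core mechanisms in the abstract, per-packet spraying across parallel paths and reordering at the destination, can be sketched in a few lines of Python. The round-robin path choice and sequence-number reordering here are illustrative assumptions, not the patented wire format:

```python
import heapq
from itertools import count

class PacketSprayer:
    """Spray packets of one flow across parallel fabric paths, round-robin."""
    def __init__(self, num_paths: int):
        self.num_paths = num_paths
        self.seq = count()

    def send(self, payload):
        seq = next(self.seq)
        path = seq % self.num_paths      # individual packets take different paths
        return path, (seq, payload)      # sequence number travels with the packet

class Reorderer:
    """Restore original order at the destination using sequence numbers."""
    def __init__(self):
        self.heap, self.next_seq = [], 0

    def receive(self, packet):
        heapq.heappush(self.heap, packet)
        delivered = []
        # Deliver every packet that is now in order.
        while self.heap and self.heap[0][0] == self.next_seq:
            delivered.append(heapq.heappop(self.heap)[1])
            self.next_seq += 1
        return delivered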
-
Publication No.: US12260338B2
Publication Date: 2025-03-25
Application No.: US17005067
Application Date: 2020-08-27
Applicant: Microsoft Technology Licensing, LLC
Inventor: Jian Jiao, Yeyun Gong, Nan Duan, Ruofei Zhang, Ming Zhou
Abstract: A transformer-based neural network includes at least one mask attention network (MAN). The MAN computes an original attention data structure that expresses influence between pairs of data items in a sequence of data items. The MAN then modifies the original data structure by mask values in a mask data structure, to produce a modified attention data structure. Compared to the original attention data structure, the modified attention data structure better accounts for the influence of neighboring data items in the sequence of data items, given a particular data item under consideration. The mask data structure used by the MAN can have static and/or machine-trained mask values. In one implementation, the transformer-based neural network includes at least one MAN in combination with at least one other attention network that does not use a mask data structure, and at least one feed-forward neural network.
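The compute-then-modify flow of the mask attention network can be sketched with NumPy: standard attention weights are computed first, then element-wise modified by the mask data structure and renormalized. The band-shaped static mask favoring neighbors is an illustrative assumption:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mask_attention(Q, K, V, mask):
    """Original attention, modified element-wise by mask values."""
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d))           # original attention structure
    masked = attn * mask                            # modify by the mask data structure
    masked /= masked.sum(axis=-1, keepdims=True)    # renormalize each row
    return masked @ V

# Hypothetical static mask emphasizing neighbors within distance 1,
# down-weighting (but not zeroing) distant positions.
n, d = 4, 8
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
mask = np.array([[1.0 if abs(i - j) <= 1 else 0.2 for j in range(n)]
                 for i in range(n)])
out = mask_attention(Q, K, V, mask)
```

A machine-trained mask would simply make the `mask` array a learnable parameter; an all-ones mask recovers ordinary attention, matching the abstract's contrast with attention networks that use no mask.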
-
Publication No.: US12260251B2
Publication Date: 2025-03-25
Application No.: US16941033
Application Date: 2020-07-28
Applicant: Microsoft Technology Licensing, LLC
Inventor: Xenofon Foukas , Bozidar Radunovic
Abstract: The present disclosure relates to systems and methods for sharing compute resources. The systems and methods may include identifying a plurality of workloads to complete by a deadline. The systems and methods may include generating a performance prediction for each workload of the plurality of workloads. The systems and methods may use the performance prediction to calculate a number of compute resources required for the plurality of workloads to complete by the deadline. The systems and methods may schedule the plurality of workloads across the number of compute resources.
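A minimal sketch of the calculate-then-schedule step: use the performance predictions to lower-bound the number of resources needed to meet the deadline, then place workloads greedily. The longest-processing-time-first heuristic is an illustrative choice, not the claimed method:

```python
import math

def resources_needed(predicted_durations, deadline):
    """Lower bound on compute resources so all workloads finish by the deadline."""
    if max(predicted_durations) > deadline:
        raise ValueError("a single workload exceeds the deadline")
    return math.ceil(sum(predicted_durations) / deadline)

def schedule(predicted_durations, num_resources):
    """Greedy LPT: assign each workload to the currently least-loaded resource."""
    loads = [0.0] * num_resources
    assignment = [[] for _ in range(num_resources)]
    for i, dur in sorted(enumerate(predicted_durations), key=lambda p: -p[1]):
        r = loads.index(min(loads))
        assignment[r].append(i)
        loads[r] += dur
    return assignment, max(loads)

durs = [4, 3, 3, 2, 2]                       # hypothetical predictions (hours)
n = resources_needed(durs, deadline=8)       # total 14h / 8h deadline -> 2 resources
plan, makespan = schedule(durs, n)
```

If the greedy makespan exceeded the deadline, a scheduler would retry with one more resource; the ceiling bound makes that rare in practice.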
-
Publication No.: US12260237B2
Publication Date: 2025-03-25
Application No.: US16667776
Application Date: 2019-10-29
Applicant: Microsoft Technology Licensing, LLC
Inventor: John Andrew Starks, Scott A. Brender, Shaheed Gulamabbas Chagani, Ping Xie
IPC: G06F9/455, G06F9/445, G06F9/54, G06F16/174, G06F16/22
Abstract: Methods, systems, and computer storage media for providing a set of common flat files in a composite image that can be mounted as a container (i.e., a composite container) to support isolation and interoperation of computing resources. Container management is provided for a container management system based on a composite image file system engine that executes composite operations to support resource isolation and operating system (OS) virtualization functionality. In particular, a layout manager operates with a composite engine interface to support generating composite images with optimized configurations (i.e., pre-alignment and pre-computed hashes for executable files). In operation, a plurality of files for generating a composite image are accessed. The composite image for the plurality of files is generated while pre-processing executable files, where pre-processing comprises pre-aligning the executable files in the composite image or pre-computing a hash for the executable files in the composite image.
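The two optimizations named in the abstract, pre-alignment and pre-computed hashes for executables, can be sketched as a flat-blob packer. The 4 KiB alignment boundary, the `.exe` suffix test, and SHA-256 are illustrative assumptions:

```python
import hashlib
import io

BLOCK = 4096  # hypothetical alignment boundary for executable files

def build_composite(files: dict) -> tuple[bytes, dict]:
    """Pack files into one flat blob; align executables on BLOCK boundaries
    and pre-compute their hashes at image-build time."""
    blob, index = io.BytesIO(), {}
    for name, data in files.items():
        offset = blob.tell()
        if name.endswith(".exe") and offset % BLOCK:
            pad = BLOCK - offset % BLOCK           # pre-align the executable
            blob.write(b"\0" * pad)
            offset += pad
        entry = {"offset": offset, "size": len(data)}
        if name.endswith(".exe"):
            # Pre-computed hash: integrity checks at mount time need no rehash.
            entry["sha256"] = hashlib.sha256(data).hexdigest()
        index[name] = entry
        blob.write(data)
    return blob.getvalue(), index
```

Doing this work once at image-generation time is what lets many containers mount the same composite image and verify or map executables cheaply.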
-
Publication No.: US12260220B2
Publication Date: 2025-03-25
Application No.: US18083249
Application Date: 2022-12-16
Applicant: Microsoft Technology Licensing, LLC
Inventor: Saransh Jain, Rami Mohammad Al Sheikh, Daren Eugene Streett, Michael Scott McIlvaine, Somasundaram Arunachalam
IPC: G06F9/38
Abstract: Accelerating fetch target queue (FTQ) processing is disclosed herein. In some aspects, a processor comprises an FTQ and an FTQ acceleration cache (FAC), and is configured to generate a FAC entry corresponding to an FTQ entry of a plurality of FTQ entries of the FTQ, wherein the FTQ entry comprises a fetch address bundle comprising a plurality of sequential virtual addresses (VAs), and the FAC entry comprises metadata for the FTQ entry. The processor is further configured to receive, using the FTQ, a request to access the FTQ entry. The processor is also configured to, responsive to receiving the request to access the FTQ entry, locate, using the FAC, the FAC entry corresponding to the FTQ entry among a plurality of FAC entries of the FAC. The processor is additionally configured to perform accelerated processing of the request to access the FTQ entry using the metadata of the FAC entry.
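In software terms, the FTQ/FAC pairing resembles a queue with a sidecar metadata cache: requests that hit the cache skip scanning the queue. The metadata fields and the 4-byte address stride below are illustrative assumptions, not the claimed microarchitecture:

```python
from collections import deque

class FetchTargetQueue:
    """FTQ entries hold bundles of sequential virtual addresses; the FAC
    caches per-entry metadata so accesses avoid recomputing or rescanning."""
    def __init__(self):
        self.entries = deque()   # the FTQ proper
        self.fac = {}            # FAC: entry id -> metadata

    def push(self, entry_id, start_va, count, stride=4):
        bundle = [start_va + i * stride for i in range(count)]
        self.entries.append((entry_id, bundle))
        # Generate the FAC entry alongside the FTQ entry (hypothetical metadata).
        self.fac[entry_id] = {"first": bundle[0], "last": bundle[-1], "len": count}

    def access(self, entry_id):
        # Accelerated path: serve the request from FAC metadata when present.
        meta = self.fac.get(entry_id)
        if meta is not None:
            return meta
        # Slow path: linear scan of the FTQ entries.
        for eid, bundle in self.entries:
            if eid == entry_id:
                return {"first": bundle[0], "last": bundle[-1], "len": len(bundle)}
        return None
```

The speedup comes from answering common questions about an FTQ entry (its bounds, its length) from the small FAC instead of touching the larger queue structure.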
-
Publication No.: US12260028B2
Publication Date: 2025-03-25
Application No.: US15475038
Application Date: 2017-03-30
Applicant: Microsoft Technology Licensing, LLC
Inventor: Douglas Alexander Harper Orr, Juha Iso-Sipila, Marco Fiscato, Matthew James Willson, Joseph Osborne
Abstract: A data input system is described for inputting text items to an electronic device. The data input system has a store holding a vocabulary of embeddings of text items, each embedding being a numerical encoding of a text item. The data input system receives user input comprising a sequence of one or more context text items and a new text item, the new text item being a text item whose embedding is to be computed and added to the vocabulary, or whose embedding is already in the vocabulary and is to be updated. A neural network predictor predicts a next text item in the sequence given the context text items and the vocabulary. An online training module updates the vocabulary either by using a direction associated with the predicted next text item or by comparing the new text item and the predicted next text item.
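A toy sketch of the online update: a stand-in predictor produces a direction from the context embeddings, and the new item's embedding is moved toward it. The mean-of-context predictor and the learning rate are illustrative placeholders for the neural predictor in the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical starting vocabulary of 8-dimensional embeddings.
vocab = {w: rng.standard_normal(8) for w in ["the", "cat", "sat"]}

def predict_next(context_words):
    # Stand-in for the neural network predictor: average of context embeddings.
    return np.mean([vocab[w] for w in context_words], axis=0)

def online_update(new_word, context_words, lr=0.5):
    """Move the new word's embedding toward the direction predicted from
    context; creates the embedding if the word is not yet in the vocabulary."""
    predicted = predict_next(context_words)
    current = vocab.get(new_word, np.zeros(8))
    vocab[new_word] = current + lr * (predicted - current)

online_update("mat", ["the", "cat", "sat"])
```

Repeating the update as the user keeps typing gradually specializes the vocabulary to their language, without retraining the predictor itself.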
-
Publication No.: US12259774B2
Publication Date: 2025-03-25
Application No.: US18529563
Application Date: 2023-12-05
Applicant: Microsoft Technology Licensing, LLC
Inventor: Mika Juhani Rintamaeki, Gregory Allen Nielsen, Rajagopal K. Venkatachalam, Ajit Justin, Francisco Cantu De La Garza
IPC: G06F1/32, G05B17/02, G06F1/20, G06F1/3212, G06F1/3234
Abstract: A method of thermal and power control in a computing device includes, at the computing device, initializing a thermal module of the computing device, receiving data at the thermal module from a first component assigned to an interface of the thermal module, and sending an output to a second component from the thermal module based on the data. Initializing the thermal module includes detecting a presence of a plurality of potential components of the computing device; querying each of the plurality of potential components to determine the capabilities of each component; and, in response to the querying, for each of at least a subset of the plurality of potential components, receiving identification information for the component and, based on the received identification information, configuring one or more interfaces of the plurality of predefined interfaces of the thermal module to establish communication with the subset of components.
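The detect, query, and configure initialization flow can be sketched as follows. The interface names, capability sets, and the over-temperature rule are hypothetical, chosen only to make the flow concrete:

```python
class ThermalModule:
    """Sketch of the abstract's initialization flow: detect present
    components, query their capabilities, then bind each predefined
    interface to a component that supports it."""
    INTERFACES = ("temp_sensor", "fan_control", "power_limit")  # hypothetical

    def __init__(self, potential_components):
        self.bindings = {}
        for comp in potential_components:           # detected as present
            caps = comp.query_capabilities()        # query each component
            for iface in self.INTERFACES:
                if iface in caps and iface not in self.bindings:
                    self.bindings[iface] = comp     # configure the interface

    def handle(self, iface, data):
        """Receive data from the component on `iface`; produce an output
        for another component (hypothetical over-temperature rule)."""
        if iface == "temp_sensor" and data > 80:
            return ("fan_control", "increase")
        return (iface, "ok")

class DummyComponent:
    def __init__(self, caps):
        self.caps = caps
    def query_capabilities(self):
        return self.caps

tm = ThermalModule([DummyComponent({"temp_sensor"}),
                    DummyComponent({"fan_control", "power_limit"})])
```

After initialization, sensor readings arrive on the bound interfaces and the module routes outputs, here a fan-speed increase, to other components.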
-