GUIDED CONVERSATION CONTEXT COMPRESSION WITH ADVERSARIAL HYPOTHETICAL QUESTIONS AND EVALUATING RELEVANCE OF CONTEXTUAL INFORMATION FOR LLMS

    Publication Number: US20250133037A1

    Publication Date: 2025-04-24

    Application Number: US18920765

    Filing Date: 2024-10-18

    Applicant: Maplebear Inc.

    Abstract: A system may selectively edit the context of a conversation before it is input to a chatbot LLM, using a conversation context compression algorithm to prune and compress redundant elements. The system evaluates the conversation context compression algorithm using both a chatbot LLM and an adversarial LLM. It retrieves a logged conversation and generates a compressed conversation context from it. It generates a synthetic user response by applying the adversarial LLM, then builds a test conversation by replacing a user response in the logged conversation with the synthetic response. The system compresses the context of the test conversation and generates a test chatbot LLM response by prompting the chatbot LLM with that compressed context. Finally, it evaluates the compression algorithm by comparing the test chatbot LLM response with a benchmark chatbot LLM response.
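
    The evaluation loop described in this abstract can be sketched roughly as follows. This is a minimal illustration only; the function and variable names (compress_context, chatbot_llm, adversarial_llm, similarity) are hypothetical placeholders and do not reflect the patent's actual implementation.

```python
# Sketch of the compression-evaluation workflow; all names are hypothetical.
from dataclasses import dataclass
from typing import Callable, List

# A conversation is modeled here as an ordered list of (role, text) turns.
Turn = tuple[str, str]          # e.g. ("user", "Do you have oat milk?")
Conversation = List[Turn]
LLM = Callable[[str], str]      # prompt in, completion out

@dataclass
class EvaluationResult:
    benchmark_response: str
    test_response: str
    agreement: float            # similarity between the two responses

def evaluate_compression(
    logged_conversation: Conversation,
    compress_context: Callable[[Conversation], str],
    chatbot_llm: LLM,
    adversarial_llm: LLM,
    similarity: Callable[[str, str], float],
    user_turn_index: int,
) -> EvaluationResult:
    """Score a compression algorithm by perturbing one user turn."""
    # 1. Benchmark: compress the logged conversation and prompt the chatbot.
    benchmark_context = compress_context(logged_conversation)
    benchmark_response = chatbot_llm(benchmark_context)

    # 2. Ask the adversarial LLM for a synthetic replacement user response.
    original_user_text = logged_conversation[user_turn_index][1]
    synthetic_user_text = adversarial_llm(
        "Rewrite this user message to be harder for a compressed context "
        f"to handle, while preserving its intent: {original_user_text}"
    )

    # 3. Build the test conversation by swapping in the synthetic turn.
    test_conversation = list(logged_conversation)
    test_conversation[user_turn_index] = ("user", synthetic_user_text)

    # 4. Compress the test conversation and prompt the chatbot with it.
    test_context = compress_context(test_conversation)
    test_response = chatbot_llm(test_context)

    # 5. Compare the test response against the benchmark response.
    return EvaluationResult(
        benchmark_response=benchmark_response,
        test_response=test_response,
        agreement=similarity(benchmark_response, test_response),
    )
```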

    ENABLING MULTI-LANGUAGE COLD START SEARCH USING A LARGE LANGUAGE MODEL

    Publication Number: US20250165513A1

    Publication Date: 2025-05-22

    Application Number: US18948027

    Filing Date: 2024-11-14

    Applicant: Maplebear Inc.

    Abstract: An online system uses a machine-learned language model (e.g., an LLM) to improve multilingual search capabilities. The system generates a prompt for the LLM that includes a set of search queries in a first language along with their context, as well as a request to translate these queries into a second language. The prompt is sent to a model serving system, which executes it through the LLM and returns translated queries in the second language. The online system then accesses a first set of features derived from the search results in the first language and updates these features based on the newly translated queries to create a second set of features. The translated queries and the second set of features are used to train a search model optimized for queries in the second language.
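
    A rough sketch of this cold-start flow is shown below. The function names (send_to_model_serving, update_features, train_search_model), the prompt wording, and the example language pair are assumptions for illustration, not the patent's claimed implementation.

```python
# Sketch of the multilingual cold-start search flow; all names are hypothetical.
from typing import Callable, Dict, List

def build_translation_prompt(
    queries: List[str], context: str, source_lang: str, target_lang: str
) -> str:
    """Assemble a single prompt asking the LLM to translate search queries."""
    query_block = "\n".join(f"- {q}" for q in queries)
    return (
        f"Context: {context}\n"
        f"Translate the following {source_lang} search queries into "
        f"{target_lang}, one per line:\n{query_block}"
    )

def cold_start_second_language(
    first_lang_queries: List[str],
    query_context: str,
    first_lang_features: Dict[str, List[float]],   # features keyed by query
    send_to_model_serving: Callable[[str], List[str]],
    update_features: Callable[[Dict[str, List[float]], List[str]], Dict[str, List[float]]],
    train_search_model: Callable[[List[str], Dict[str, List[float]]], object],
):
    # 1. Prompt the LLM (via the model serving system) to translate the queries.
    #    The language pair here is an arbitrary example.
    prompt = build_translation_prompt(
        first_lang_queries, query_context,
        source_lang="English", target_lang="French",
    )
    translated_queries = send_to_model_serving(prompt)

    # 2. Update the first-language features using the translated queries
    #    to obtain a second set of features.
    second_lang_features = update_features(first_lang_features, translated_queries)

    # 3. Train a search model optimized for second-language queries.
    return train_search_model(translated_queries, second_lang_features)
```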
