Abstract:
A method is described. The method includes receiving an instruction, accessing a return cache to load a predicted return target address upon determining that the instruction is a return instruction, searching a lookup table for executable binary code upon determining that the predicted return target address is incorrect, and executing the executable binary code to perform a binary translation.
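As a rough illustration of that flow, the C sketch below models the three stages with a direct-mapped return cache, a linear-scan lookup table, and a stub translator. The structure names, table sizes, and addresses are assumptions for the example, not details from the claims.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical sketch of the described return handling: consult a
     * small return cache first; on a misprediction, search the lookup
     * table for existing executable code; on a miss there, invoke
     * binary translation. Sizes and names are illustrative. */

    #define RETURN_CACHE_SIZE 64
    #define LOOKUP_TABLE_SIZE 256

    typedef struct {
        uint64_t native_addr;      /* untranslated return target */
        uint64_t translated_addr;  /* predicted translated target */
    } return_cache_entry;

    typedef struct {
        uint64_t native_addr;
        uint64_t translated_addr;  /* existing executable binary code */
    } lookup_entry;

    static return_cache_entry return_cache[RETURN_CACHE_SIZE];
    static lookup_entry lookup_table[LOOKUP_TABLE_SIZE];

    /* Stand-in for invoking the binary translator on a miss. */
    static uint64_t translate_block(uint64_t native_addr)
    {
        printf("translating block at %#llx\n", (unsigned long long)native_addr);
        return native_addr + 0x1000;   /* placeholder translated address */
    }

    /* Resolve the translated target for a return to native_addr. */
    uint64_t resolve_return(uint64_t native_addr)
    {
        /* 1. Consult the return cache for a predicted target. */
        return_cache_entry *rc = &return_cache[native_addr % RETURN_CACHE_SIZE];
        if (rc->native_addr == native_addr)
            return rc->translated_addr;        /* prediction correct */

        /* 2. Prediction incorrect: search the lookup table. */
        for (size_t i = 0; i < LOOKUP_TABLE_SIZE; i++) {
            if (lookup_table[i].native_addr == native_addr) {
                rc->native_addr = native_addr; /* refill the return cache */
                rc->translated_addr = lookup_table[i].translated_addr;
                return rc->translated_addr;
            }
        }

        /* 3. No translation exists: perform binary translation. */
        uint64_t t = translate_block(native_addr);
        rc->native_addr = native_addr;
        rc->translated_addr = t;
        return t;
    }

    int main(void)
    {
        printf("resolved to %#llx\n", (unsigned long long)resolve_return(0x400500));
        printf("resolved to %#llx\n", (unsigned long long)resolve_return(0x400500));
        return 0;
    }

On the second call for the same address, the refilled return cache supplies the prediction and both the table scan and the translator are skipped.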
Abstract:
Technologies for shadow stack management include a computing device that, when executing a translated call routine in a translated binary, pushes a native return address onto a native stack of the computing device, adds a constant offset to a stack pointer of the computing device, executes a native call instruction to a translated call target, and, after executing the native call instruction, subtracts the constant offset from the stack pointer. Executing the native call instruction pushes a translated return address onto a shadow stack of the computing device. The computing device may map two or more virtual memory pages of the shadow stack onto a single physical memory page. The computing device may execute a translated return routine that pops the native return address from the native stack, adds the constant offset to the stack pointer, and executes a native return instruction. Other embodiments are described and claimed.
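The constant-offset idea can be simulated with one stack pointer and two adjacent word-granular regions, as in the hedged C sketch below. The sizes, layout, and the return path (rearranged here so the word-level trace balances) are assumptions for illustration, not the claimed instruction sequence.

    #include <stdint.h>
    #include <stdio.h>

    /* Simulation of the constant-offset shadow stack idea: the shadow
     * stack sits at a fixed displacement above the native stack, so
     * biasing the single stack pointer by SHADOW_OFFSET makes the
     * hardware call/return push and pop land on the shadow stack. */

    #define STACK_WORDS   64
    #define SHADOW_OFFSET STACK_WORDS        /* constant offset between stacks */

    static uint64_t mem[2 * STACK_WORDS];    /* [0,64) native, [64,128) shadow */
    static size_t sp = STACK_WORDS;          /* one stack pointer, grows down  */

    static void     push(uint64_t v) { mem[--sp] = v; }
    static uint64_t pop(void)        { return mem[sp++]; }

    /* Translated call routine, per the abstract: push the native return
     * address, bias the stack pointer, "call" (so the pushed translated
     * return address lands on the shadow stack), then unbias. */
    static void translated_call(uint64_t native_ret, uint64_t translated_ret)
    {
        push(native_ret);      /* native return address -> native stack */
        sp += SHADOW_OFFSET;   /* add the constant offset               */
        push(translated_ret);  /* native CALL pushes -> shadow stack    */
        sp -= SHADOW_OFFSET;   /* subtract the offset after the call    */
        /* ... translated call target body runs here ... */
    }

    /* Simplified translated return: bias, let the native RET pop the
     * translated return address from the shadow stack, unbias, then pop
     * the native return address kept on the native stack. (The claimed
     * routine orders these steps differently.) */
    static uint64_t translated_return(uint64_t *native_ret_out)
    {
        sp += SHADOW_OFFSET;
        uint64_t translated_ret = pop();  /* native RET consumes shadow slot */
        sp -= SHADOW_OFFSET;
        *native_ret_out = pop();          /* architectural return address    */
        return translated_ret;
    }

    int main(void)
    {
        uint64_t native_ret;
        translated_call(0x400123, 0x7f1000);
        uint64_t t = translated_return(&native_ret);
        printf("native ret %#llx, translated ret %#llx, sp balanced: %s\n",
               (unsigned long long)native_ret, (unsigned long long)t,
               sp == STACK_WORDS ? "yes" : "no");
        return 0;
    }

The point of the bias is that the hardware's own push and pop, with their return-prediction benefits, operate on the shadow stack, while the architectural return address the guest expects stays visible on the native stack.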
Abstract:
One embodiment provides an apparatus. The apparatus includes a processor, a chipset, a memory to store a process, and logic. The processor includes one or more cores and is to execute the process. The logic is to acquire performance monitoring data in response to a platform processor utilization parameter (PUP) greater than a detection utilization threshold (UT), identify a spin loop based, at least in part, on at least one of a detected hot function or a detected hot loop, modify the identified spin loop using binary translation to create a modified process portion, and implement redirection from the identified spin loop to the modified process portion.
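A minimal sketch of that detect-and-redirect flow, assuming a single candidate loop, a stubbed utilization reading, and a PAUSE-based rewrite standing in for the binary translation; a real system would read hardware performance counters and patch the binary in place:

    #include <stdio.h>
    #include <stdbool.h>

    #define DETECTION_UT 90.0   /* detection utilization threshold, percent */

    typedef void (*loop_fn)(void);

    static volatile int flag;

    /* Original spin loop: burns a full core while waiting. */
    static void spin_loop(void) {
        while (!flag) { /* busy wait */ }
    }

    /* "Translated" replacement: yields pipeline resources while spinning. */
    static void modified_spin_loop(void) {
        while (!flag) {
    #if defined(__x86_64__) || defined(__i386__)
            __builtin_ia32_pause();
    #endif
        }
    }

    /* Redirection target: starts at the original loop. */
    static loop_fn active_loop = spin_loop;

    /* Stand-ins for performance monitoring and hot-loop analysis. */
    static double read_platform_utilization(void)     { return 97.5; }
    static bool   profile_identifies_spin(loop_fn f)  { return f == spin_loop; }

    static void monitor_and_redirect(void) {
        double pup = read_platform_utilization();
        if (pup <= DETECTION_UT)
            return;                            /* below threshold: do nothing */
        /* Utilization is high: acquire monitoring data and check whether a
         * detected hot function/loop is actually a spin loop. */
        if (profile_identifies_spin(active_loop))
            active_loop = modified_spin_loop;  /* implement redirection */
    }

    int main(void) {
        monitor_and_redirect();
        printf("redirected: %s\n",
               active_loop == modified_spin_loop ? "yes" : "no");
        return 0;
    }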
Abstract:
Various embodiments are generally directed to techniques to detect a return-oriented programming (ROP) attack by verifying the target addresses of branch instructions during execution. An apparatus includes a processor component, and a comparison component for execution by the processor component to determine whether a table of valid target addresses contains a match for the target address of a branch instruction associated with a translated portion of a routine. Other embodiments are described and claimed.
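A minimal sketch of the comparison step, assuming a small static table of valid target addresses and a logging response to a mismatch (a real system might instead trap or terminate the routine):

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Before a (translated) indirect branch or return transfers control,
     * its target is looked up in a table of valid target addresses; a
     * miss indicates a possible ROP-style diversion. Table contents are
     * illustrative assumptions. */

    static const uint64_t valid_targets[] = {
        0x401000,   /* e.g., addresses immediately after call sites, */
        0x401480,   /* or legitimate function entry points           */
        0x4019a0,
    };
    #define NUM_TARGETS (sizeof valid_targets / sizeof valid_targets[0])

    /* Comparison step: is there a matching valid target address? */
    static bool target_is_valid(uint64_t target) {
        for (size_t i = 0; i < NUM_TARGETS; i++)
            if (valid_targets[i] == target)
                return true;
        return false;
    }

    /* Instrumented branch: verify the target before control transfer. */
    static void checked_branch(uint64_t target) {
        if (!target_is_valid(target)) {
            fprintf(stderr, "possible ROP attack: bad target %#llx\n",
                    (unsigned long long)target);
            return;
        }
        printf("branching to %#llx\n", (unsigned long long)target);
    }

    int main(void) {
        checked_branch(0x401480);   /* matches the table: allowed   */
        checked_branch(0xdeadbeef); /* no match: flagged as suspect */
        return 0;
    }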
Abstract:
Technologies for partial binary translation on multi-core platforms include a shared translation cache, a binary translation thread scheduler, a global installation thread, and a local translation thread and analysis thread for each processor core. On detection of a hotspot, the thread scheduler first resumes the global thread if suspended, next activates the global thread if a translation cache operation is pending, and last schedules local translation or analysis threads for execution. Translation cache operations are centralized in the global thread and decoupled from analysis and translation. The thread scheduler may execute in a non-preemptive nucleus, and the translation and analysis threads may execute in a preemptive runtime. The global thread may be primarily preemptive with a small non-preemptive nucleus to commit updates to the shared translation cache. The global thread may migrate to any of the processor cores. Forward progress is guaranteed. Other embodiments are described and claimed.
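The three-step priority the scheduler applies on a hotspot can be sketched directly; the state fields and hooks below are assumptions for illustration, and the real scheduler would run this decision inside the non-preemptive nucleus:

    #include <stdio.h>
    #include <stdbool.h>

    typedef enum { GLOBAL_RUNNING, GLOBAL_SUSPENDED, GLOBAL_IDLE } global_state_t;

    struct bt_scheduler {
        global_state_t global_state;  /* state of the global installation thread */
        bool cache_op_pending;        /* shared translation cache work queued    */
    };

    static void resume_global(void)        { puts("resume global thread"); }
    static void activate_global(void)      { puts("activate global thread"); }
    static void run_local_thread(int core) {
        printf("run local translation/analysis thread on core %d\n", core);
    }

    /* Invoked by the thread scheduler when a hotspot is detected on `core`:
     * first resume the global thread if suspended, next activate it if a
     * translation cache operation is pending, last schedule local threads. */
    static void on_hotspot(struct bt_scheduler *s, int core) {
        if (s->global_state == GLOBAL_SUSPENDED) {
            resume_global();
            s->global_state = GLOBAL_RUNNING;
        } else if (s->cache_op_pending) {
            activate_global();
            s->global_state = GLOBAL_RUNNING;
            s->cache_op_pending = false;
        } else {
            run_local_thread(core);
        }
    }

    int main(void) {
        struct bt_scheduler s = { GLOBAL_SUSPENDED, true };
        on_hotspot(&s, 0);   /* resumes the suspended global thread       */
        on_hotspot(&s, 1);   /* activates global for the pending cache op */
        on_hotspot(&s, 2);   /* schedules a local thread on core 2        */
        return 0;
    }

Centralizing all translation cache updates in the single global thread is what lets the per-core analysis and translation threads run preemptively without coordinating writes to the shared cache.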