CoreTrace

CoreTrace is our system call capture tool that offers a quick way to understand a program's behavior.

Written by avhsupport
Updated over a week ago

What is CoreTrace?

You can trace system calls using either strace, a standard command-line Linux tool, or our proprietary CoreTrace tool. strace is included in our virtual devices, and it is implemented with ptrace. Applications can employ anti-debugging techniques to detect and prevent ptrace-based tracing. However, these techniques cannot prevent, or even easily detect, hypervisor-based tracing. Additionally, you may often be interested in a particular target. CoreTrace makes it easy to filter by specific processes and threads for more targeted analysis.

Tracing system calls is a dynamic analysis reverse engineering technique that can offer a quick way to understand a program’s behavior.

Setting up CoreTrace

To access CoreTrace, open the CoreTrace tab in the device screen. The CoreTrace UI allows you to start and stop a trace, download the log generated by tracing, and clear the log.

Threads and Processes

By default, CoreTrace traces all threads in the system. This rapidly produces a huge amount of data. Often you’ll be interested in a particular target. To apply a filter to the results, click "Add a process or thread" to display the Processes dialog:

The Processes dialog displays all processes and threads in the system. To examine the threads inside a process, click the "THREADS" button in the process's row.

To add a filter, click the "ADD" button in the process's or thread's row. Alternatively, specify a filter manually. CoreTrace will log events as long as they match at least one filter.

There are often many processes running. To more easily find the processes and threads you're interested in, click the magnifying glass in the top-right corner of the dialog and type a phrase. Only rows that contain the phrase will be displayed.

Once your filters are in place, click Start Trace.

Understanding the results

After you have captured the trace (or during the capture), you can download the log file. Each line of the log will look like this:

<1> [00248.864651618] ffffff806401e040-0/337:sensors@2.1-ser.379/ @00000070efc0b834 read ( fd: 5, buf: 0x6e5f6ec980, count: 4 ) ... @[ 0000006e5f6f9778 0000006e5f6f9840 0000006e5f6f948c 0000006e5f6f7f90 0000006e5f6f7b54 0000006e5f6f7434 00000070efc2088c 00000070efbc0e0c ]

or like this:

<1> [00248.864656648] ffffff806401e040-0/337:sensors@2.1-ser.379/ @00000070efc0b834 ... read ( result: 4, buf: 0x6e5f6ec980 -> [s"001e"] )

The fixed line header contains the following information:

<cpu> [time.nsec] threadid-sigid/pid:comm.tid/ @pc

Where:

  • cpu is the processor core the log comes from,

  • time.nsec is the time the entry was captured by the hypervisor,

  • threadid is the internal kernel thread ID (usually the address of a task or thread structure),

  • sigid is the signal state (when a signal is delivered, a thread may execute in a different signal state until it is done handling the signal, then return to the original signal state),

  • pid is the process ID (PID of the process on Linux),

  • comm is the short process name, which may be the original command but may also be set by the process itself,

  • tid is the thread ID (or, PID of the thread),

  • pc is the PC where the syscall happened in EL0 (userland).
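As a sketch, the fixed header can be pulled apart with a regular expression. The pattern below is an assumption inferred from the sample lines above (the field names mirror the list), not part of CoreTrace itself:

```python
import re

# Hypothetical parser for the fixed CoreTrace line header:
#   <cpu> [time.nsec] threadid-sigid/pid:comm.tid/ @pc
# The comm field may contain dots, so it is matched non-greedily and the
# trailing "/ @pc" anchors the tid.
HEADER_RE = re.compile(
    r"<(?P<cpu>\d+)> "
    r"\[(?P<time>\d+\.\d+)\] "
    r"(?P<threadid>[0-9a-f]+)-(?P<sigid>\d+)/(?P<pid>\d+):"
    r"(?P<comm>.+?)\.(?P<tid>\d+)/ "
    r"@(?P<pc>[0-9a-f]+)"
)

def parse_header(line):
    """Return the header fields of one CoreTrace log line as a dict."""
    m = HEADER_RE.search(line)
    return m.groupdict() if m else None

line = ("<1> [00248.864651618] ffffff806401e040-0/337:"
        "sensors@2.1-ser.379/ @00000070efc0b834 "
        "read ( fd: 5, buf: 0x6e5f6ec980, count: 4 )")
fields = parse_header(line)
print(fields["comm"], fields["tid"], fields["pc"])
```

This would report, for the first sample line, that thread 379 of the sensors@2.1-ser process made a syscall from PC 00000070efc0b834.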

After the header, lines where ... follows the syscall arguments are syscall invocations, and lines where ... immediately follows the header are syscall returns. On syscall invocation lines, if the environment permits it, there will be an additional trailer of the form:

@[ lr ret1 ret2 ret3 ... ]

This trailer contains the EL0 return stack of the function that invoked the syscall.
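Putting the two line shapes together, a small sketch (based only on the sample lines above; the splitting heuristic is an assumption, not a CoreTrace API) can classify a line as an invocation or a return and pull the return stack out of the trailer:

```python
import re

# An invocation line has "..." after the syscall arguments (possibly
# followed by the "@[ ... ]" return-stack trailer); a return line has
# "..." right after the fixed header.
TRAILER_RE = re.compile(r"@\[ ((?:[0-9a-f]+ ?)+)\]")

def syscall_direction(line):
    """Classify a log line as a syscall 'invocation' or 'return'."""
    # Split off the fixed header at its final field, the "@pc" token.
    body = re.split(r"@[0-9a-f]+ ", line, maxsplit=1)[1]
    return "return" if body.startswith("...") else "invocation"

def return_stack(line):
    """Extract the EL0 return-stack addresses from the trailer, if present."""
    m = TRAILER_RE.search(line)
    return m.group(1).split() if m else []

invocation = ("<1> [00248.864651618] ffffff806401e040-0/337:"
              "sensors@2.1-ser.379/ @00000070efc0b834 "
              "read ( fd: 5, buf: 0x6e5f6ec980, count: 4 ) "
              "... @[ 0000006e5f6f9778 0000006e5f6f9840 ]")
ret = ("<1> [00248.864656648] ffffff806401e040-0/337:"
       "sensors@2.1-ser.379/ @00000070efc0b834 "
       '... read ( result: 4, buf: 0x6e5f6ec980 -> [s"001e"] )')
print(syscall_direction(invocation), syscall_direction(ret))
print(return_stack(invocation))
```

Pairing each return with the preceding invocation for the same threadid then reconstructs complete syscalls, including arguments, result, and calling context.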
