In the blog post "Guided by feedback: Smarter fuzzing with Defensics," we introduced the new Defensics® fuzzing feedback loop and explained how feedback‑driven fuzzing helps the engine focus on more interesting and effective anomaly combinations. That post highlighted Defensics' built‑in capabilities, such as input message analysis, as the first step toward adaptive fuzzing.
Now we want to introduce the Defensics agent instrumentation framework, a flexible and extensible way to bring external runtime signals, metrics, and system observations into the Defensics testing workflow.
The Defensics agent instrumentation framework provides a structured way to monitor a system under test (SUT) and report findings back to Defensics. Agents run on or near the target system and communicate with Defensics through an HTTP(S) API. With agents, Defensics can detect crashes, monitor system behavior, watch logs, track process health, observe file system changes, and more. Multiple agents can run simultaneously, each monitoring a different aspect of the test environment. This enables users to extend Defensics beyond protocol‑level visibility and incorporate environment‑specific runtime information.
Traditionally, agents have been used to report pass/fail verdicts for each test case. For example, a LogTailerAgent can watch the SUT's log file and give a fail verdict if a given pattern, such as the text "Stack Trace," appears. Similarly, a ProcessManagerAgent can detect a memory leak in the SUT process.
In the latest release (2026.3), the framework also supports agents as a feedback source for Defensics test suites that include the capability. In this model, an agent observes a relevant runtime metric, and after each test case, produces a feedback score indicating whether the test case produced interesting results. Defensics then uses these scores to prioritize future test‑case generation and direct testing toward promising areas.
Figure 1. The Defensics agent instrumentation framework can be extended with custom agents for feedback score reporting
Agent feedback is reported through the same positive feedback score mechanism as other feedback sources, such as input message analysis. The feedback score reflects the "goodness" of a test case. After the engine has collected all feedback for an executed test case, it sums the scores and uses any test case with a nonzero total as a base for generating new anomaly combinations. Test cases with higher scores are prioritized as generation bases, while zero‑score cases may only contribute to random generation. Verdicts are handled independently of feedback, meaning both passing and failing test cases can receive high feedback scores.
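As a rough illustration of that aggregation step, here is a minimal Go sketch. The function names, score types, and prioritization rule are invented for illustration and are not the actual Defensics internals; the sketch only mirrors the described behavior: per-source scores are summed, nonzero totals become generation bases, and higher totals are preferred.

```go
package main

import (
	"fmt"
	"sort"
)

// aggregate sums the per-source feedback scores reported for one test case
// (e.g., one score from input message analysis and one from an agent).
func aggregate(scores []int) int {
	total := 0
	for _, s := range scores {
		total += s
	}
	return total
}

// prioritize keeps only test cases with a nonzero total score and orders
// them highest-first, mirroring how higher-scoring cases are preferred as
// bases for new anomaly combinations.
func prioritize(totals map[string]int) []string {
	var ids []string
	for id, t := range totals {
		if t > 0 {
			ids = append(ids, id)
		}
	}
	sort.Slice(ids, func(i, j int) bool { return totals[ids[i]] > totals[ids[j]] })
	return ids
}

func main() {
	totals := map[string]int{
		"tc-17": aggregate([]int{2, 1}), // agent + message analysis
		"tc-18": aggregate([]int{0, 0}), // nothing interesting observed
		"tc-19": aggregate([]int{5}),
	}
	fmt.Println(prioritize(totals)) // [tc-19 tc-17]; tc-18 is excluded
}
```

Note that because verdicts and feedback are independent, a test case that passed can still appear at the top of this ordering.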
For customers with specialized monitoring needs or existing instrumentation systems, the agent SDK enables the development of custom agents. Custom agents can implement their own instrumentation logic and generate feedback scores based on any meaningful runtime signals. Agents are written in Go, loaded dynamically by the agent server, and appear in Defensics alongside built‑in agents. They follow the same life cycle and can report both verdicts and feedback scores, depending on configuration.
Creating a feedback‑enabled agent requires implementing the agent interface defined in the file interface.go and deciding which measurements your agent should collect during a test run. The SDK takes care of all communication and life cycle coordination with the agent server and Defensics.
Your agent only needs to:
· Declare what it monitors
· Collect measurements during test execution
· Report a feedback score (and, optionally, measurement data) back to the framework
The feedback score then appears in the Defensics results logs.
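Under those three responsibilities, a feedback‑enabled agent might be shaped like the following Go sketch. The interface and method names here are hypothetical stand‑ins — the real interface is defined in the SDK's interface.go and may differ — but the sketch shows the shape: declare what is monitored, collect measurements per test case, and return a score.

```go
package main

import (
	"fmt"
	"strings"
)

// FeedbackAgent is a hypothetical stand-in for the agent interface defined
// in the SDK's interface.go; the real method names and signatures may differ.
type FeedbackAgent interface {
	Describe() string               // what the agent monitors
	TestCaseStarted(id string)      // reset per-test-case state
	Observe(line string)            // collect a measurement during execution
	TestCaseFinished(id string) int // report the feedback score
}

// logScoreAgent is a toy agent that counts suspicious log lines and uses
// that count as the feedback score for the test case.
type logScoreAgent struct {
	hits int
}

func (a *logScoreAgent) Describe() string { return "counts 'ERROR' lines in the SUT log" }

func (a *logScoreAgent) TestCaseStarted(id string) { a.hits = 0 }

func (a *logScoreAgent) Observe(line string) {
	if strings.Contains(line, "ERROR") {
		a.hits++
	}
}

func (a *logScoreAgent) TestCaseFinished(id string) int { return a.hits }

func main() {
	var agent FeedbackAgent = &logScoreAgent{}
	agent.TestCaseStarted("tc-1")
	agent.Observe("INFO startup complete")
	agent.Observe("ERROR unexpected frame length")
	fmt.Println(agent.TestCaseFinished("tc-1")) // 1
}
```

In the real framework, the SDK — not your code — drives these calls and delivers the returned score to Defensics over the HTTP(S) API.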
Figure 2 shows the basic communication sequence between the agent and the Defensics test run. At the start of the run, the agent receives its configuration. These configuration parameters are defined by the custom agent itself, shown automatically in the Defensics UI, and returned to the agent once the user has provided values for them. Each test case then triggers a call at its start and at its end. In between, the framework calls the Instrument function to indicate that the test case has been sent to the SUT and to ask the agent whether it observed anything interesting. A test case is considered complete once the agent has delivered its feedback score and verdict.
Figure 2. An agent can implement both the verdict and feedback flows, or only one of them
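To make the configuration step concrete, here is a small sketch of how an agent might declare parameters and receive user‑supplied values at run start. The Param type, parameter names, and defaults are all invented for illustration; the real SDK's declaration mechanism may look quite different.

```go
package main

import "fmt"

// Param is a hypothetical stand-in for an agent configuration parameter.
// Declared parameters are surfaced in the Defensics UI, and the values the
// user provides come back to the agent at the start of the test run.
type Param struct {
	Name, Description, Default string
}

// declareParams lists what this example agent asks the user to configure.
func declareParams() []Param {
	return []Param{
		{"logPath", "path of the SUT log file to tail", "/var/log/sut.log"},
		{"pattern", "substring that marks an interesting log line", "ERROR"},
	}
}

// applyConfig overlays user-provided values on the declared defaults,
// mimicking the configuration call at the start of the test run.
func applyConfig(params []Param, provided map[string]string) map[string]string {
	cfg := make(map[string]string)
	for _, p := range params {
		cfg[p.Name] = p.Default
		if v, ok := provided[p.Name]; ok {
			cfg[p.Name] = v
		}
	}
	return cfg
}

func main() {
	cfg := applyConfig(declareParams(), map[string]string{"pattern": "Stack Trace"})
	fmt.Println(cfg["logPath"], "|", cfg["pattern"]) // /var/log/sut.log | Stack Trace
}
```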
An example implementation is included in the SDK, demonstrating how to structure an agent, manage life cycle events, and return both verdicts and feedback scores. It provides a practical starting point; you can take the example and replace its measurement logic with signals relevant to your environment, such as log patterns, process statistics, sensor values, or application-specific events. You then add a simple calculation that turns those observations into a feedback score.
A safe way to begin is by reporting high scores for the most obvious findings and fine‑tuning the scoring as you learn more about your system’s behavior. With this approach, even small pieces of existing instrumentation can be adapted into feedback sources. This allows Defensics to guide test‑case generation based on signals that only your system can provide.
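That "obvious findings first" approach can be sketched as a simple weighted scoring function. The observation categories, thresholds, and weights below are invented examples, not SDK recommendations — the point is only that a crude initial mapping from observations to scores is enough to start, and the weights can be tuned later as you learn the SUT.

```go
package main

import "fmt"

// scoreObservation maps runtime observations to a feedback score. The
// categories and weights are illustrative: score the most obvious findings
// highly first, then refine as you learn your system's behavior.
func scoreObservation(crashed bool, newLogPattern bool, memGrowthKB int) int {
	score := 0
	if crashed { // the most obvious finding gets the highest weight
		score += 100
	}
	if newLogPattern { // a previously unseen log pattern is interesting
		score += 10
	}
	if memGrowthKB > 1024 { // noticeable memory growth during the test case
		score += 5
	}
	return score
}

func main() {
	fmt.Println(scoreObservation(false, true, 2048)) // 15
	fmt.Println(scoreObservation(true, false, 0))    // 100
}
```

Even a function this simple lets Defensics steer generation toward test cases that provoke crashes, new log output, or memory growth, while zero‑score cases fall back to random generation.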
The agent SDK is available to all Defensics customers and can be downloaded from the customer portal.
In this series of blog posts, we introduced the Defensics enhanced unlimited mode, the feedback scoring mechanism, and both internal and external ways to generate feedback scores. These capabilities address a common customer question: what happens after all the issues that fixed test plans can find have been discovered? Fixed test plans remain essential for regression testing, but these new features increase the likelihood of uncovering new issues, with unlimited mode providing an effectively infinite number of anomaly combinations.
We are gradually incorporating these capabilities into newly published test suites. Stay tuned — and keep fuzzing!
Feb 05, 2026 | 6 min read