Evaluation

Setup

Our experimental environment is a customized instance of the original EDS system, which uses Java Remote Method Invocation (RMI) to send and receive event messages among components. Monitoring functionality is added to each component so that all events are captured and logged in a MySQL database. A multi-threaded Java client emulates a configurable number of concurrent users, with Gaussian-distributed timing delays inserted to bring the simulation as close as possible to a real-world environment. Unless otherwise noted, we used the compromised UC1 use case as the default test scenario: malicious events from StrategyAnalyzer to ResourceManager are injected during simulation runs according to a pre-defined anomaly rate.
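The following is a minimal sketch of one emulated user thread, combining the Gaussian think-time delays with probabilistic anomaly injection. The delay parameters and the method names sendNormalEvent and injectMaliciousEvent are illustrative assumptions; the actual EDS RMI interfaces are not shown here.

```java
import java.util.Random;

/** Sketch of one emulated active user (hypothetical API names;
    the actual EDS client interfaces differ). */
public class ActiveUser implements Runnable {
    private static final double MEAN_DELAY_MS = 500.0;   // assumed mean think time
    private static final double STDDEV_DELAY_MS = 100.0; // assumed standard deviation
    private final double anomalyRate;                    // pre-defined anomaly rate, e.g. 0.05
    private final Random random = new Random();

    public ActiveUser(double anomalyRate) {
        this.anomalyRate = anomalyRate;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                // Gaussian-distributed delay between events, clamped at zero
                long delay = (long) Math.max(0,
                        MEAN_DELAY_MS + random.nextGaussian() * STDDEV_DELAY_MS);
                Thread.sleep(delay);

                // Inject a malicious event with probability equal to the anomaly rate
                if (random.nextDouble() < anomalyRate) {
                    injectMaliciousEvent();  // StrategyAnalyzer -> ResourceManager
                } else {
                    sendNormalEvent();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    private void sendNormalEvent()      { /* RMI call to an EDS component (omitted) */ }
    private void injectMaliciousEvent() { /* RMI call carrying the malicious event (omitted) */ }
}
```

A configurable number of such threads can be started from the client's main method, one per simulated user.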

Except for the Apriori algorithm, for which we use the Weka implementation, all ARMOUR framework components are developed in Java. Both the simulation and the data analysis run on a quad-core Mac OS X machine.
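For reference, a rule-mining run with Weka's Apriori takes only a few lines; the input file name and the support, confidence, and rule-count parameters below are illustrative, not the values used in ARMOUR.

```java
import weka.associations.Apriori;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

/** Sketch of mining association rules with Weka's Apriori. */
public class AprioriExample {
    public static void main(String[] args) throws Exception {
        // Load event transactions prepared as nominal attributes (hypothetical file)
        Instances data = DataSource.read("events.arff");

        Apriori apriori = new Apriori();
        apriori.setLowerBoundMinSupport(0.1); // illustrative minimum support
        apriori.setMinMetric(0.9);            // illustrative minimum confidence
        apriori.setNumRules(20);              // illustrative cap on mined rules
        apriori.buildAssociations(data);

        System.out.println(apriori);          // prints the mined rules
    }
}
```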

To evaluate the performance of the framework under different concurrency settings, we set up test cases with varying numbers of concurrent users, from 5 to 100. In reality, however, users differ in proficiency and browsing habits, so the number of concurrent users is not a consistent measure of system concurrency. The concurrency measure γ is a more objective measure, and its correlation with the number of simulated users in our experiments is shown in the corresponding figure. We call the simulated users active users because they are always busy and are intended to generate a heavy load on the system; each may therefore represent a much larger number of human users in a real-world setting. The near-linear relationship between γ and the number of active users allows a real-world system to estimate one from the other once their ratio has been estimated.
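Because the relationship is near-linear, the ratio can be estimated by a least-squares fit of a line through the origin over paired observations. The sketch below illustrates this; the sample values are made up for demonstration and are not our measured data.

```java
/** Sketch of estimating the ratio between the concurrency measure γ
    and the number of active users (sample values are illustrative). */
public class GammaRatio {
    public static void main(String[] args) {
        double[] users = {5, 10, 25, 50, 100};        // simulated active users
        double[] gamma = {1.1, 2.3, 5.8, 11.6, 23.0}; // illustrative γ readings

        // Least-squares slope for a line through the origin: k = Σ(uγ) / Σ(u²)
        double num = 0, den = 0;
        for (int i = 0; i < users.length; i++) {
            num += users[i] * gamma[i];
            den += users[i] * users[i];
        }
        double k = num / den;

        System.out.printf("estimated ratio k = %.4f%n", k);
        System.out.printf("predicted gamma for 40 users: %.2f%n", k * 40);
    }
}
```

Once k is known, a deployed system can translate an observed γ into an approximate active-user count, or vice versa.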