Befor Usage
1. Installation
The Beftigre Orchestrator (Befor) API can be used either through the command prompt, using the API's commands, or as a graphical user interface tool.
The Befor API is bundled as BEFtigreOR.jar. To run the Befor tool, the jar can be executed directly by double-clicking it.
To run the API via the command prompt, the befor argument can be passed to the command execution, as shown in the snippet below.
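For example, one plausible form of the invocation (assuming the jar is launched with the standard java -jar command from the directory containing it):

    java -jar BEFtigreOR.jar befor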
(To see the Befor commands and how to use them, simply type help.)
2. Directories and Files
After starting up the tool or the command line, some required directories and files are created for the test. These are:
- /logs directory: is where test-generated logs (.log) are stored; logs from the mobile device (i.e. generated by the Band API) can also be placed here.
- /results directory: is where the data files (.dat) and the results of the full-tier analyser (.csv) are stored. The data files are the product of analysis on the files in the logs directory.
- /results/plot directory: is where the graph files (.png) are stored when plotted.
- /files directory: all created template files (such as slow, TestPlan.jmx, sigar.zip) are stored in the files directory. The template files are used to set up the server test process.
- slow file: is copied to the server by the Befor setup command and used (remotely) to issue commands to the Linux tc utility, which simulates slow/varying network conditions (by setting bandwidth and latency parameters).
- TestPlan.jmx file: is used (locally) to set up a JMeter test which contains the PerfMon metrics collector as a listener, so as to receive CPU and memory usage metrics from the server.
- sigar.zip file: is copied to and unzipped on the server by Befor and used (remotely) to compile and start the CPUMemoryAvailServer, which computes the available cloud CPU and memory of the server prior to the test.
Other files generated by Befor during setup (and stored in the files directory) are:
- settings.befor file: is generated to store the connection and test parameters (presented in section 3) when the save button is clicked.
- BandwidthLatencyServer.java file: is generated when the 'Install setup files operation' is triggered (see section 4); it is then copied to and compiled on the server.
- CPUMemoryAvailServer.java file: is also generated, copied to, and compiled on the server at the same time as the BandwidthLatencyServer.java file. However, CPUMemoryAvailServer is compiled and executed with the sigar libraries (from sigar.zip) on the classpath.
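For reference, the working-directory layout described above can be sketched as follows:

    logs/            test-generated logs (.log), plus Band API logs from the device
    results/         data files (.dat) and full-tier analyser results (.csv)
    results/plot/    plotted graphs (.png)
    files/           template files (slow, TestPlan.jmx, sigar.zip) and generated
                     setup files (settings.befor, BandwidthLatencyServer.java,
                     CPUMemoryAvailServer.java)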
3. Setting Connection and Test Parameters
Prior to performing any server monitoring and metrics collection operations, the connection and test parameters are required by the API to establish a connection point to the server.
The connection parameters are a pem file, host IP address, port number, and the root user of the server.
The test parameters are the JMeter directory (required for the PerfMon metrics collector), the BandwidthLatency port (required to set up a socket connection between the BandwidthLatencyServer and BandwidthLatencyClient), and the CPUMemory port (required to set up a socket connection between the CPUMemoryAvailServer and CPUMemoryAvailClient).
Important Note: All ports used within the API setup must be configured in EC2 security groups. Default ports are presented in the table below.
The connection and test parameters can be set using the params command of the API.
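For example (all values are hypothetical; the argument order follows the auto script sample in section 7, and ports 1 and 2 are the defaults from the table below):

    params beftigre-key.pem 54.171.0.10 22 root C:\apache-jmeter-2.13 1 2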
In EC2, the Source for each port setting can be left as 0.0.0.0/0.
| Port | Protocol | Source | Client component | Server component |
| ---- | -------- | ------ | ---------------- | ---------------- |
| 22 | TCP | 0.0.0.0/0 | Beftigre API | EC2 server instance |
| 8080 | TCP | 0.0.0.0/0 | JMeter Test plan | EC2 server instance |
| 4848 | TCP | 0.0.0.0/0 | PerfMon Metrics Collector | PerfMon Server Agent |
| 1 | UDP | 0.0.0.0/0 | BandwidthLatencyClient | BandwidthLatencyServer |
| 2 | UDP | 0.0.0.0/0 | CPUMemoryAvailClient | CPUMemoryAvailServer |
| Any port, e.g. 3 | TCP | 0.0.0.0/0 | An offloadable task on the mobile device | An offloadable task on the server |
4. Server Monitor Operations
The server monitor operations (with commands) are presented below:
- Install setup files operation (setup command): involves the installation of all files required for the test process. The setup process is as follows:
- The setup feature first creates and compiles BandwidthLatencyServer and CPUMemoryAvailServer socket programs on the server using the BandwidthLatency port and CPUMemory port provided in the test parameters (see table above).
Recall from section 2 that the sigar libraries (from sigar.zip) are used on the classpath when setting up CPUMemoryAvailServer.
- The setup then downloads the ServerAgent monitor zip from the JMeter repository and extracts it on the server.
- Next, the slow file used for network simulation (see section 2) is copied to the server.
- Finally, the Linux stress utility is installed to simulate CPU and memory load on the server.
In total, six files/programs are used to set up the server for a Beftigre test (i.e. BandwidthLatencyServer, CPUMemoryAvailServer and its sigar API, ServerAgent, slow, and stress).
- Cleanup files operation (cleanup command): is used to uninstall or remove all setup files.
- Setup offload tasks operation (offload command): is used to set up and run the offloadable tasks of an MCA (mobile cloud application) on the server.
- To set up the offloadable tasks on the server, the operation takes a zip file as input.
- The zip provided is composed of the class file(s) of the offloadable task (including any required libraries). The offload operation extracts the zip on the server.
- Furthermore, to run the offloadable task on the server, the name of the main class is passed to the run function of the offload operation.
- Note that the main class has to be a server socket application, and the port used by the class must be made public and accessible by the mobile application (see the table in section 3).
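A hypothetical usage sketch of this operation (the zip-argument form and file names are assumptions; the -s form for running a main class is taken from the auto script sample in section 7):

    offload tasks.zip
    offload -s com.example.OffloadMain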
- Set simulation params operation (simulate command): is used to set the parameters used to simulate a slow network, and CPU and memory load.
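A hypothetical example, following the argument order of the auto script sample in section 7 (all values, and the formats expected by the tc-based slow file and the stress utility, are assumptions):

    simulate 256 kbit 100 2 512 60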
- Start server monitors operation (start command): is used to launch the tasks for monitoring server resources, using the four setup processes (associated with the Setup/Install operation).
In other words, i) BandwidthLatencyServer and CPUMemoryAvailServer, and ii) ServerAgent are started alongside the simulation processes, iii) the slow and iv) stress utilities, using the simulation parameters provided by the simulate operation.
- Stop server monitors operation (stop command): is used to stop all monitoring processes.
5. Metrics Collector Operations
The metrics collector operations (with commands) are presented below:
- Edit .jmx test plan operation (editplan command): is used to edit the TestPlan.jmx file, the JMeter test plan containing the PerfMon metrics collector listener used to retrieve CPU and memory usage metrics from the server.
- Start metrics collector operation (collect command): is used to start the metrics collector using the test plan created.
6. Full-tier Analyser Operations
The full-tier analyser operations (with commands) are presented below:
- Extract results operation (extract command): is used to compute the results of the test from the log files and store them as .dat files in the /results directory. From the tool, logs are first selected using the Select Logs button.
- Plot operation (plot command): is used to plot any combination of the .dat files in a graph. Graphs are stored as .png files in the /results/plot directory.
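A hypothetical sequence for these two operations (the plot argument syntax and the file names are assumptions):

    extract
    plot test1.dat test2.dat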
7. Automating the Full-tier Test
auto is Beftigre's test automation command, used to automate the Beftigre full-tier testing of the mobile (Band) and cloud (Befor) tiers. This makes it easy to repeat experiments on the Beftigre Framework.
As shown in the snippet below, test automation is initiated by calling the auto command of the Befor API with three required arguments and an optional fourth. The purpose of the optional interleave argument is to allow the BaseService of the Band API to complete execution, as this is necessary for full-tier evaluation.
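A sketch of the invocation (only the script file and the optional interleave argument are identifiable from this guide; the remaining required arguments are placeholders):

    auto script.befor <required-arg> <required-arg> [interleave]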
An auto script file must specify the commands used for the full-tier test (as shown below). Since the file is for the full-tier test, the auto script only supports commands for testing; result analysis is therefore not handled by the auto command.
- The script file supports six Befor commands relevant for testing the cloud tier: params, offload, simulate, start, collect, and stop.
- It also supports an am command, which is used with adb.exe to launch the test on the mobile tier.
i. Sample auto Script file:
params pemfile ip port user jmeter blport cmport
offload -s mainclass
simulate bandwidth bandwidthType latency cpuload memload timeout
simulate bandwidth bandwidthType latency cpuload memload timeout
simulate bandwidth bandwidthType latency cpuload memload timeout
start
collect
am instrument -w -e class rs.pedjaapps.Linpack.LinpackTest rs.pedjaapps.Linpack.test/android.test.InstrumentationTestRunner
stop
The required commands for constructing the script file to execute the auto command are params, offload, simulate, and am. One or more simulate lines can be provided. start, collect, and stop are optional; as they do not require any arguments, they are automatically handled by the auto command in the Befor API.
ii. How to obtain the right am command:
auto requires that the test project is already installed on the target device prior to running the test. This is a prerequisite for executing an Android test via the command line. The application project and test project are automatically installed on first execution from Android Studio. To check that a device is connected, use the adb devices command.
First, ensure that the command-line directory is changed to the adb location, e.g.
cd C:\Users\Chinenyeze\AppData\Local\Android\sdk\platform-tools
Then enter:
adb shell pm list instrumentation
The above command lists the test projects installed on the connected device, in the format below:
instrumentation:rs.pedjaapps.Linpack.test/android.test.InstrumentationTestRunner (target=rs.pedjaapps.Linpack)
In the above output, the instrumentation points to [test package]/[runner class], and the target specifies the [application package] of the installed app to be evaluated.
Further useful adb documentation can be found in the official Android adb documentation.
Given that the class of the test code is rs.pedjaapps.Linpack.LinpackTest, the am command for the auto script in the Befor API can be constructed as follows:
am instrument -w -e class rs.pedjaapps.Linpack.LinpackTest rs.pedjaapps.Linpack.test/android.test.InstrumentationTestRunner
Format:
am instrument -w -e class [test code class] [test package]/[runner class]