Benchmarking

BenchmarkDotNet produces several reports, one of which is an HTML report similar to what you see here:

The Excel report provides the details of every parameter that was used in running the program and is your most extensive source of information. In many cases, most of these parameters will use their default values and provide more detail than you need, but at least you will have the choice to remove the ones you don't want:

We'll describe some of these parameters in the next section, where we review the source code that produces these reports:

static void Main(string[] args)
{
    var config = ManualConfig.Create(DefaultConfig.Instance);

    // Set up a results exporter.
    // Note: by default, results files will be located in the
    // .\BenchmarkDotNet.Artifacts\results directory.
    config.Add(new CsvExporter(CsvSeparator.CurrentCulture,
        new BenchmarkDotNet.Reports.SummaryStyle
        {
            PrintUnitsInHeader = true,
            PrintUnitsInContent = false,
            TimeUnit = TimeUnit.Microsecond,
            SizeUnit = BenchmarkDotNet.Columns.SizeUnit.KB
        }));

    // Legacy JIT tests.
    config.Add(new Job(EnvMode.LegacyJitX64, EnvMode.Clr, RunMode.Short)
    {
        Env = { Runtime = Runtime.Clr, Platform = Platform.X64 },
        Run = { LaunchCount = 1, WarmupCount = 1, TargetCount = 1,
                RunStrategy = BenchmarkDotNet.Engines.RunStrategy.Throughput },
        Accuracy = { RemoveOutliers = true }
    }.WithGcAllowVeryLargeObjects(true));

    // RyuJIT tests.
    config.Add(new Job(EnvMode.RyuJitX64, EnvMode.Clr, RunMode.Short)
    {
        Env = { Runtime = Runtime.Clr, Platform = Platform.X64 },
        Run = { LaunchCount = 1, WarmupCount = 1, TargetCount = 1,
                RunStrategy = BenchmarkDotNet.Engines.RunStrategy.Throughput },
        Accuracy = { RemoveOutliers = true }
    }.WithGcAllowVeryLargeObjects(true));

    // Uncomment to allow benchmarking of non-optimized assemblies.
    //config.Add(JitOptimizationsValidator.DontFailOnError);

    // Run the benchmarks.
    var summary = BenchmarkRunner.Run<FunctionBenchmarks>(config);
}

Let's dissect this code a bit more.

To begin with, we'll create a manual configuration object that will hold the parameters we use for benchmarking:

var config = ManualConfig.Create(DefaultConfig.Instance);

Next, we'll set up an exporter that holds the parameters we'll use for exporting our results. We'll export the results to a .csv file, reporting times in microseconds and sizes in kilobytes:

config.Add(new CsvExporter(CsvSeparator.CurrentCulture,
    new BenchmarkDotNet.Reports.SummaryStyle
    {
        PrintUnitsInHeader = true,
        PrintUnitsInContent = false,
        TimeUnit = TimeUnit.Microsecond,
        SizeUnit = BenchmarkDotNet.Columns.SizeUnit.KB
    }));

Next, we'll create a benchmark job that handles the measurements for the legacy x64 JIT (LegacyJitX64). Feel free to change this or any other parameter to experiment with, or to include whatever results you need or want for your test scenario. In our case, we'll use the x64 platform; a LaunchCount, WarmupCount, and TargetCount of 1; and a RunStrategy of Throughput. We'll do the same for RyuJIT, but we won't repeat the code here:

config.Add(new Job(EnvMode.LegacyJitX64, EnvMode.Clr, RunMode.Short)
{
    Env = { Runtime = Runtime.Clr, Platform = Platform.X64 },
    Run = { LaunchCount = 1, WarmupCount = 1, TargetCount = 1,
            RunStrategy = BenchmarkDotNet.Engines.RunStrategy.Throughput },
    Accuracy = { RemoveOutliers = true }
}.WithGcAllowVeryLargeObjects(true));

Finally, we will run BenchmarkRunner to perform our tests:

// Run benchmarks.
var summary = BenchmarkRunner.Run<FunctionBenchmarks>(config);

BenchmarkDotNet runs as a console application, and the following is an example of the preceding code executing. Remember to run your benchmarks from a Release build; by default, BenchmarkDotNet refuses to benchmark non-optimized assemblies unless you add the JitOptimizationsValidator.DontFailOnError line shown earlier:

Let's take a look at one example of an activation function being benchmarked:

[Benchmark]
public double LogisticFunctionSteepDouble()
{
    double a = 0.0;
    for (int i = 0; i < __loops; i++)
    {
        a = Functions.LogisticFunctionSteep(_x[i % _x.Length]);
    }
    return a;
}
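
For context, this method lives inside the FunctionBenchmarks class that we passed to BenchmarkRunner.Run earlier. The setup code isn't shown in this excerpt; a minimal sketch, assuming _x holds precomputed sample inputs and __loops is the iteration count (both initializations here are hypothetical), might look like this:

using BenchmarkDotNet.Attributes;

public class FunctionBenchmarks
{
    // Hypothetical values; the actual initialization is not shown here.
    private const int __loops = 1000000;
    private double[] _x;

    [GlobalSetup] // [Setup] in older BenchmarkDotNet releases.
    public void Setup()
    {
        // Precompute sample inputs so the benchmark measures the
        // activation function itself rather than input generation.
        _x = new double[1000];
        for (int i = 0; i < _x.Length; i++)
        {
            _x[i] = -10.0 + 20.0 * i / (_x.Length - 1);
        }
    }

    // ... [Benchmark] methods such as LogisticFunctionSteepDouble() ...
}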

You will notice the [Benchmark] attribute being used. This indicates to BenchmarkDotNet that this method is a test that needs to be benchmarked. Internally, the benchmark calls the LogisticFunctionSteep function itself.
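
That implementation is straightforward; a minimal sketch of a steepened logistic (sigmoid) function, assuming a steepness constant of 4.9 (a common choice, though the exact constant may differ), looks like this:

using System;

public static class Functions
{
    // Steepened logistic function: f(x) = 1 / (1 + e^(-kx)),
    // with an assumed steepness constant k = 4.9.
    public static double LogisticFunctionSteep(double x)
    {
        return 1.0 / (1.0 + Math.Exp(-4.9 * x));
    }
}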

For the LogisticFunctionSteep function, the implementation, like most activation functions, is simple (assuming you know the formula). In this case we are not plotting the activation function but rather benchmarking it. You will notice that the method takes and returns a double. We also benchmark an identical version that takes and returns float variables, so we can measure the performance difference between double and float. As you will see, that difference is sometimes greater than you might think.
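
The float version is not reproduced above; a sketch, assuming a float overload of Functions.LogisticFunctionSteep and a corresponding float input array _xf (both hypothetical names here), would mirror the double version:

[Benchmark]
public float LogisticFunctionSteepFloat()
{
    float a = 0.0f;
    for (int i = 0; i < __loops; i++)
    {
        // Same loop as the double version, but with float inputs
        // and a float return value.
        a = Functions.LogisticFunctionSteep(_xf[i % _xf.Length]);
    }
    return a;
}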
