API reference

This is the official API reference of PerfTest. Note that it can also be queried interactively from the Julia REPL using the help mode:

julia> using PerfTest
julia> ?
help?> PerfTest

Types

Index

Documentation

PerfTest.ASTRuleType

Used by the AST walker to check for expressions that match condition; when an expression does, modifier is applied to it.

This is the basic building block of the code transformer: a set of these rules composes all the manipulations needed to create the testing suite.

source
PerfTest.ContextType

During test suite generation, the AST walk keeps a context register to integrate features that rely on the scope hierarchy.

source
PerfTest.MacroParameterMethod
MacroParameter(name::Symbol, type::Type;
              validation_function::Function=(_...) -> true,
              default_value=nothing,
              has_default::Bool=false,
              mandatory::Bool=false)

Define a parameter that can appear in a macro along with properties to validate values when the macro is parsed.

Arguments

  • name: The parameter name as a symbol
  • type: The expected type of the parameter
  • validation_function: Optional function (returns Bool) to validate parameter values
  • default_value: Optional default value for the parameter
  • has_default: Whether this parameter has a default value
  • mandatory: Whether this parameter is required
source
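The validation idea behind MacroParameter can be illustrated with a tiny standalone sketch (this is not PerfTest's actual code; ParamSketch and its fields are hypothetical stand-ins for the real struct):

```julia
# Standalone illustration (not PerfTest's actual code) of the idea behind
# MacroParameter: a parameter carries a name, an expected type, and a
# predicate used to validate values when the macro is parsed.
struct ParamSketch
    name::Symbol
    type::Type
    validate::Function
end

# A "samples" parameter that only accepts positive integers.
samples = ParamSketch(:samples, Int, x -> x isa Int && x > 0)

samples.validate(100)   # accepted
samples.validate(-1)    # rejected
```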
PerfTest.Methodology_ResultType

This struct is used in the test suite to save a methodology result, which in turn consists of a group of metric results and their references. Additionally, custom elements that are not subject to testing are also saved, e.g. informational metrics and printing functions.

source
PerfTest.Metric_ResultType

This struct is used in the test suite to save a metric measurement; it saves the metric name, its unit space, and its value.

source
PerfTest.Metric_TestType

This struct is used in the test suite to save a metric test result and its associated data. It stores the reference used and the tolerance intervals in absolute and percentage values, indicates whether the test succeeded, and holds some additional variables for data printing.

source
PerfTest.Test_ResultType

NOTE: Some metrics are repeated here and inside a metric result. This redundancy is tolerated for now; the copy inside the methodology result might be replaced by a reference in the future.

source

Functions

Index

Documentation

PerfTest._treeRunMethod

This method takes an input Julia expression and a context register, and transforms the input from a recipe script (the input) into a fully-fledged testing suite (the return value).

Arguments

  • input_expr : the recipe/source expression (internally a.k.a. the source code space)
  • context : a register that stores information useful for the transformation during its run over the AST of the input
source
PerfTest.addLogFunction

Creates and/or appends to a log channel; the message is saved in that channel. Depending on verbosity, the message will also be sent to standard output.

source
PerfTest.by_indexMethod

This method expects a hierarchy tree (dict) in the form of nested dictionaries and a vector of dictionary keys (idx). The function recursively indexes by the keys to reach a final element.

The DepthRecord struct represents an index.

source
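The recursion can be sketched with plain nested Dicts (a hypothetical re-implementation for illustration; PerfTest's version indexes with DepthRecord structs rather than raw strings):

```julia
# Minimal sketch of the by_index idea (not PerfTest's actual code):
# walk a nested Dict with a vector of keys, one level per key.
by_index_sketch(tree, idx) =
    isempty(idx) ? tree : by_index_sketch(tree[first(idx)], idx[2:end])

tree = Dict("Test Set 1" => Dict("Test 1" => 42))
by_index_sketch(tree, ["Test Set 1", "Test 1"])  # 42
```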
PerfTest.clearLogsFunction

Clears the IOBuffer of the specified channel; if 0 is given, all channels will be cleared.

Also unbinds the channels from stdout.

source
PerfTest.extractMethodologyResultArrayMethod

This method will return a flattened array of all of the results for all the methodologies exercised in the provided dictionary.

Example:

"Test Set 1" -> "Test 1" -> Methodology A result -> Methodology B result
"Test Set 2" -> "Test 1" -> Methodology A result

Returns:

M. A result (Test Set 1)
M. B result (Test Set 1)
M. A result (Test Set 2)

source
PerfTest.extractNamesResultArrayMethod

This method will return a flattened array of the whole test result hierarchy.

Example:

"Test Set 1" -> "Test 1" -> Methodology A result -> Methodology B result
"Test Set 2" -> "Test 1" -> Methodology A result

Returns:

"Test Set 1 -> Test 1 -> Methodology A"
"Test Set 1 -> Test 1 -> Methodology B"
"Test Set 2 -> Test 1 -> Methodology A"

source
PerfTest.flattenedInterpolationMethod

This method interpolates inside_expr into outside_expr wherever it finds the token substitution_token, which is a symbol. The outside_expr has to be a block or a quote block. A particularity is that block heads are removed from inside_expr and the nested elements are spliced into the location where the token sits.

Example:

outside_expr = :(:A; 4)

inside_expr = :(begin 2;3 end)

substitution_token = :A

returns = :(2;3;4)

source
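The example above can be reproduced with a simplified standalone sketch (flat_interp_sketch is a hypothetical re-implementation for illustration, not PerfTest's actual code):

```julia
# Simplified sketch of the splicing behaviour: replace the quoted token in
# the outer block with the statements of the inner block.
strip_lines(args) = filter(a -> !(a isa LineNumberNode), args)

function flat_interp_sketch(outside::Expr, inside::Expr, token::Symbol)
    out = Any[]
    for a in strip_lines(outside.args)
        # The quoted symbol :A appears as QuoteNode(:A) inside the block.
        a == QuoteNode(token) ? append!(out, strip_lines(inside.args)) :
                                push!(out, a)
    end
    Expr(:block, out...)
end

flat_interp_sketch(:(:A; 4), :(begin 2; 3 end), :A)  # equivalent to :(2; 3; 4)
```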
PerfTest.genTestName!Method

Generates a test name if needed; it is used to name test targets so they can be distinguished when several go in the same test set.

source
PerfTest.getMetricValueMethod

Given a series of methodology results, returns the raw values of all the metrics contained in them.

source
PerfTest.grepOutputMethod

Splits a string into lines and returns the ones that match the provided regular expression.

source
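The behaviour can be sketched in a few lines of Base Julia (grep_sketch is an illustrative re-implementation, not the exported function):

```julia
# Sketch of the grepOutput idea: split a string into lines and keep those
# matching the regular expression.
grep_sketch(s::AbstractString, re::Regex) =
    [line for line in split(s, '\n') if occursin(re, line)]

out = "time: 1.2 s\nstatus: ok\ntime: 0.9 s"
grep_sketch(out, r"^time")  # ["time: 1.2 s", "time: 0.9 s"]
```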
PerfTest.match_failureMethod

Constructs an ASTRule that will always match. The onfail function will then be applied. Useful to catch a match failure.

Arguments:

  • onfail : a function that receives a Context argument.
source
PerfTest.metaGetMethod

Runs over an array of expressions trying to match the desired one. If no match is found, returns nothing.

"sym" should follow the MacroTools nomenclature for the @capture macro

source
PerfTest.newLocalScopeForMethod

Creates a new level in the dictionary and updates the local context with the new level.

Applies to hierarchy levels with an iteration series.

source
PerfTest.openDataFileMethod

This method is used to get historical data of a performance test suite from a save file located in path.

source
PerfTest.perftestsuffixMethod
Builds the Julia expression that goes after the performance tests; it registers, prints, and saves results given the measurements done above it.
source
PerfTest.popQuoteBlocksMethod

Useful to correct operations limited by the tree walking. Removes quote blocks inside the main block (without recursion) and pushes their expressions into the main block.

source
PerfTest.printDepth!Method

This method is used to print the test names, respecting the hierarchy and adding indentation where necessary.

source
PerfTest.printIntervalLandingFunction

This method is used to print a graphical representation of a test result and the admissible interval it can take. The result and the two bounds will be printed in order.

source
PerfTest.printMethodologyMethod

This function is used to print the information of a methodology for a specific test execution result. This will usually print a series of metrics and may also print plots.

source
PerfTest.printMetricMethod

This method is used to dump into the output the information about a metric and the value obtained in a specific test.

source
PerfTest.printfailMethod

This method dumps into the output a test result in case of failure. The output will be formatted to make it easy to read.

source
PerfTest.removeBlockMethod

Pops an expr whose head is :block or :quote and returns an array of the nested expressions that are the arguments of that head.

source
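A standalone sketch of the idea (remove_block_sketch is illustrative, not PerfTest's code; line number nodes are dropped here for readability):

```julia
# Sketch: return the nested expressions of a :block or :quote Expr;
# any other expression is wrapped in a one-element array.
remove_block_sketch(e::Expr) =
    e.head in (:block, :quote) ? filter(a -> !(a isa LineNumberNode), e.args) :
                                 Any[e]

remove_block_sketch(:(begin f(); g() end))  # Any[:(f()), :(g())]
```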
PerfTest.ruleSetMethod

This method builds what is known as a rule set: a function that evaluates whether an expression triggers a rule in the set and, if it does, applies the rule's modifier. See the ASTRule documentation for more information.

WARNING: the rule set applies the FIRST rule that matches the expression; any other matches are ignored.

Arguments

  • context : the context structure of the tree run; it will occasionally be used by some rules in the set.
  • rules : the collection of rules that will belong to the resulting set.
source
PerfTest.saveDataFileMethod

This method is used to save historical data of a performance test suite to a save file located in path.

source
PerfTest.saveExprAsFileFunction

Utility to save an expression (expr) to a Julia file stored at path

Requires a :toplevel symbol to be the head of the expression.

source
PerfTest.saveLogFolderMethod

Moves the temporary log directory to the persistent directory where results are saved and dumps all IOStream buffers to their respective log files.

source
PerfTest.testsetUpdate!Method

Function used to register a new test set in the hierarchy record of the context, where name is the name of the test set.

source
PerfTest.toggleMPIMethod

In order for the suite to be MPI aware, this function has to be called. Calling it again will disable this feature.

source
PerfTest.treeRunMethod

This method implements the transformation that converts a recipe script into a fully-fledged testing suite. The function will return a Julia expression with the resulting performance testing suite. This can be then executed or saved in a file for later usage.

Arguments

  • path the path of the script to be transformed.
source
PerfTest.validASTRuleMethod

Constructs an ASTRule that assumes that the expression is automatically valid if matched. Thus no validation is done.

source
PerfTest.@auxiliary_metricMacro

Defines a custom metric for informational purposes that will not be used for testing but will be printed as output.

Arguments

  • name : the name of the metric for identification purposes.
  • units : the unit space that the metric values will be in.
  • formula block : an expression that returns a single value, which will be the metric value. The formula can contain any Julia expression, and some special symbols are additionally supported. The formula may be evaluated several times: it is applied to every target in every test set, or just once if it is defined inside a test set, which makes it applicable only to that set.

Special symbols:

  • :median_time : will be substituted by the median time the target took to execute in the benchmark.
  • :minimum_time: will be substituted by the minimum time the target took to execute in the benchmark.
  • :ret_value : will be substituted by the return value of the target.
  • :autoflop: will be substituted by the FLOP count of the target.
  • :printed_output : will be substituted by the standard output stream of the target.
  • :iterator : will be substituted by the current iterator value in a loop test set.
source
PerfTest.@define_benchmarkMacro
Defines a customized reference to compare against. Must be always defined inside a testset.

Arguments

  • name : (string) an identifier for the custom benchmark; methodologies that use it will refer to it using this name.
  • units: (string) units of the reference value.
  • formula: an expression to calculate the reference value.
source
PerfTest.@define_eff_memory_throughputMacro

This macro is used to define the memory bandwidth of a target in order to execute the effective memory throughput methodology.

Arguments

  • formula block : an expression that returns a single value, which will be the metric value. The formula can contain any Julia expression, and some special symbols are additionally supported. The formula may be evaluated several times: it is applied to every target in every test set, or just once if it is defined inside a test set, which makes it applicable only to that set.
  • ratio : the minimum percentage of the maximum attainable bandwidth that is required to pass the test; it can be a number or a Julia expression that evaluates to a number
  • membenchmark : which STREAM kernel benchmark to use (e.g. :MEMSTREAMCOPY for transfer operations, :MEMSTREAM_ADD for transfer and computation)
  • custom_benchmark : when using a custom benchmark, the symbol that identifies the chosen benchmark (it must have been defined beforehand)

Special symbols:

  • :median_time : will be substituted by the median time the target took to execute in the benchmark.
  • :minimum_time: will be substituted by the minimum time the target took to execute in the benchmark.
  • :ret_value : will be substituted by the return value of the target.
  • :autoflop: will be substituted by the FLOP count of the target.
  • :printed_output : will be substituted by the standard output stream of the target.
  • :iterator : will be substituted by the current iterator value in a loop test set.

Example:

The following definition assumes that each execution of the target expression transfers 1000 bytes. Therefore the bandwidth is 1000 / execution time.

@define_eff_memory_throughput begin
      1000 / :median_time
end
source
PerfTest.@define_metricMacro

This macro is used to define a new custom metric.

Arguments

  • name : the name of the metric for identification purposes.
  • units : the unit space that the metric values will be in.
  • formula block : an expression that returns a single value, which will be the metric value. The formula can contain any Julia expression, and some special symbols are additionally supported. The formula may be evaluated several times: it is applied to every target in every test set, or just once if it is defined inside a test set, which makes it applicable only to that set. NOTE: If a formula block needs to refer to a variable, that variable first needs to be exported using the macro @export_vars, otherwise an error will occur.

Special symbols:

  • :median_time : will be substituted by the median time the target took to execute in the benchmark.
  • :minimum_time: will be substituted by the minimum time the target took to execute in the benchmark.
  • :ret_value : will be substituted by the return value of the target.
  • :autoflop: will be substituted by the FLOP count of the target.
  • :printed_output : will be substituted by the standard output stream of the target.
  • :iterator : will be substituted by the current iterator value in a loop test set.
source
PerfTest.@export_varsMacro

@export_vars vars...

Exports the specified symbols (along with the values they hold at the moment of the call) to the scope of metric definitions. To use any variable in the definition of a metric, that variable needs to be exported with this macro.

source
PerfTest.@on_perftest_execMacro

The expression given to this macro will only be executed in the generated suite, and will be deleted if the source code is executed as is.

source
PerfTest.@on_perftest_ignoreMacro

The expression given to this macro will only be executed in the source code, and will be deleted in the generated performance test suite.

source
PerfTest.@perfcompareMacro

This macro is used to manually declare performance test conditions.

Arguments

  • An expression that must evaluate to a Boolean, being true if the comparison leads to a successful performance test. Special symbols can be used.

Special symbols:

  • :median_time : will be substituted by the median time the target took to execute in the benchmark.
  • :minimum_time: will be substituted by the minimum time the target took to execute in the benchmark.
  • :ret_value : will be substituted by the return value of the target.
  • :autoflop: will be substituted by the FLOP count of the target.
  • :printed_output : will be substituted by the standard output stream of the target.
  • :iterator : will be substituted by the current iterator value in a loop test set.

Example:

    @perfcompare :median_time < 0.05
source
PerfTest.@perftestMacro

This macro is used to signal that the wrapped expression is a performance test target, and therefore its performance will be sampled and then evaluated following the current suite configuration.

If the macro is evaluated directly it does not modify the target at all. The effects of the macro only appear when the script is transformed into a performance testing suite.

This macro is sensitive to context since other adjacent macros can change how the target will be evaluated.

@perftest expression [parameters...]

Run a performance test on a given target expression.

Basic usage

The simplest usage is to place @perftest in front of the expression you want to test:

julia> @perftest sin(1)

Additional parameters

You can pass the following keyword arguments to configure the execution process:

  • setup: An expression that is run once per sample before the benchmarked expression. The setup expression is run once per sample, and is not included in the timing results. Note that each sample can require multiple evaluations.

  • teardown: An expression that is run once per sample after the benchmarked expression.

  • samples: The number of samples to take. Execution will end if this many samples have been collected. Defaults to 10000.

  • seconds: The number of seconds budgeted for the benchmarking process. The trial will terminate if this time is exceeded (regardless of samples), but at least one sample will always be taken. In practice, actual runtime can overshoot the budget by the duration of a sample.

  • evals: The number of evaluations per sample. For best results, this should be kept consistent between trials. A good guess for this value can be automatically set on a benchmark via tune!, but using tune! can be less consistent than setting evals manually (which bypasses tuning).

  • gctrial: If true, run gc() before executing this benchmark's trial. Defaults to true.

  • gcsample: If true, run gc() before each sample. Defaults to false.

  • time_tolerance: The noise tolerance for the benchmark's time estimate, as a percentage. This is utilized after benchmark execution, when analyzing results. Defaults to 0.05.

Examples

Basic performance test

 @perftest sin(1)

With setup and teardown

 @perftest sort!(data) setup=(data=rand(100)) teardown=(data=nothing)

With custom parameters

# Run with a 3-second time budget
 @perftest sin(x) setup=(x=rand()) seconds=3

# Limit to 100 samples with 10 evaluations each
 @perftest myfunction(data) samples=100 evals=10

# Disable garbage collection before the trial and before each sample
 @perftest allocating_function() gcsample=false gctrial=false

source
PerfTest.@perftest_configMacro

Captures a set of configuration parameters that will override the default configuration. The parameters shall be written in TOML syntax, as a subset of the complete configuration (see the config.toml generated by executing the transform, or transform/configuration.jl, for more information). Order is irrelevant. This macro shall be placed as high as possible in the test file (code above it will be transformed using the default configuration).

Recursive transformation:

This macro will set the new configuration keys for the current file and any other included files. If the included files have the macro as well, those macros will override the configuration locally for each file.

Arguments

  • A String, with the TOML declaration of configuration keys

Example

@perftest_config """
[roofline]
enabled = false

[general]
maxsaved_results = 1
recursive = false
"""

source
PerfTest.@rooflineMacro

This macro enables roofline modelling; if placed just before a target declaration (@perftest), the target will be evaluated using a roofline model.

Mandatory arguments

  • formula block : the macro has to wrap a block that holds a formula to obtain the operational intensity of the target algorithm.

Optional arguments

  • cpu_peak : a manually supplied value for the maximum attainable FLOPS; this will override the empirical runtime benchmark
  • membw_peak : a manually supplied value for the maximum memory bandwidth; this will override the empirical runtime benchmark
  • target_opint : a desired operational intensity for the target; this will turn operational intensity into a test metric
  • actual_flops : another formula that defines the actual performance of the test.
  • target_ratio : the acceptable ratio between the actual performance and the performance projected from the roofline; this will turn actual performance into a test metric.

Special symbols:

  • :median_time : will be substituted by the median time the target took to execute in the benchmark.
  • :minimum_time: will be substituted by the minimum time the target took to execute in the benchmark.
  • :ret_value : will be substituted by the return value of the target.
  • :autoflop: will be substituted by the FLOP count of the target.
  • :printed_output : will be substituted by the standard output stream of the target.
  • :iterator : will be substituted by the current iterator value in a loop test set.

Any formula block specified in this macro supports these symbols.

Example

    @roofline actual_flops=:autoflop target_ratio=0.05 begin
        mem = ((:iterator + 1) * :iterator)
        :autoflop / mem
    end

The code block defines operational intensity, whilst the other arguments define how to measure and compare the actual performance with the roofline performance. If the actual to projected performance ratio goes below the target, the test fails.

source
PerfTest.Configuration.load_configMethod

Load configuration from a TOML file.

Args:

  • filepath: Path to the TOML file
  • schema: Optional schema for validation

Returns:

  • Loaded configuration dictionary or nothing
source
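Loading a TOML file can be sketched with Julia's TOML standard library (the file name and keys below are hypothetical, not PerfTest defaults, and this is not the package's own loader):

```julia
# Sketch: write a small TOML config to a temporary file and load it back.
using TOML

path = joinpath(mktempdir(), "config.toml")
write(path, """
[general]
save_results = true
""")

config = TOML.parsefile(path)   # Dict{String, Any}
config["general"]["save_results"]  # true
```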
PerfTest.Configuration.save_configMethod

Save configuration to a TOML file.

Args:

  • config: Configuration dictionary
  • filepath: Path to save the TOML file
  • schema: Optional schema for validation

Returns:

  • Boolean indicating successful save
source
PerfTest.Configuration.validate_configMethod

Validate configuration against a predefined schema.

Args:

  • config: Dictionary to validate
  • schema: Dictionary defining expected structure and types

Returns:

  • Boolean indicating whether configuration is valid
source
PerfTest.BencherREST.make_requestMethod
make_request(request_dict::Dict{String, Any})

Execute a REST request based on the provided dictionary.

Returns a tuple of (success::Bool, response_data::Any, error_message::String)

source
PerfTest.BencherREST.prepare_benchmark_resultsMethod
prepare_benchmark_results(raw_results::Dict{String, Any}, measures_dict::Dict{String, Any})

Prepare benchmark results for submission to Bencher. Filters out any metrics that don't have corresponding measures.

source
PerfTest.BencherREST.processTestHierarchyMethod
TODO flatten_test_hierarchy(results::Dict{String, Union{Dict, Test_Result}}, prefix::String="") -> Dict{String, Test_Result}

Recursively flatten a nested dictionary of test results into a single-level dictionary. The hierarchy levels are incorporated into the keys using "::" as a separator.

Example:

{ "level1": { "level2": { "benchmark_name": Test_Result } } }

becomes:

{
    "level1::level2::benchmark_name": Test_Result
}
source
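The flattening can be sketched with a small recursive helper (flatten_sketch is an illustrative re-implementation over plain Dicts, not the package's method):

```julia
# Sketch: collapse a nested Dict into a single level, joining the
# hierarchy keys with "::".
function flatten_sketch(d::Dict, prefix::String = "")
    out = Dict{String, Any}()
    for (k, v) in d
        key = isempty(prefix) ? String(k) : string(prefix, "::", k)
        # Recurse into sub-dictionaries; store leaves under the joined key.
        v isa Dict ? merge!(out, flatten_sketch(v, key)) : (out[key] = v)
    end
    return out
end

flatten_sketch(Dict("level1" => Dict("level2" => Dict("benchmark_name" => :result))))
# Dict("level1::level2::benchmark_name" => :result)
```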
PerfTest.BencherREST.quick_getMethod
quick_get(url::String; headers::Dict{String, String}=Dict{String, String}(), 
        query_params::Dict{String, Any}=Dict{String, Any}())

Convenience function for simple GET requests.

source
PerfTest.BencherREST.quick_postMethod
quick_post(url::String, body::Dict{String, Any}; 
        headers::Dict{String, String}=Dict{String, String}())

Convenience function for simple POST requests.

source