API reference
This is the official API reference of PerfTest. Note that it can also be queried interactively from the Julia REPL using the help mode:
julia> using PerfTest

julia> ?

help?> PerfTest
Types
Index
PerfTest.ASTRule
PerfTest.Context
PerfTest.MacroParameter
PerfTest.Methodology_Result
PerfTest.Metric_Result
PerfTest.Metric_Test
PerfTest.Perftest_Datafile_Root
PerfTest.Suite_Execution_Result
PerfTest.Test_Result
Documentation
PerfTest.ASTRule — Type
Used by the AST walker to check for expressions that match condition; if they do, modifier will be applied to the expression.
This is the basic building block of the code transformer, a set of these rules compounds to all the needed manipulations to create the testing suite.
PerfTest.Context — Type
During test suite generation, the AST walker keeps a context register to integrate features that rely on the scope hierarchy.
PerfTest.MacroParameter — Method
MacroParameter(name::Symbol, type::Type;
               validation_function::Function=(_...) -> true,
               default_value=nothing,
               has_default::Bool=false,
               mandatory::Bool=false)

Define a parameter that can appear in a macro, along with properties to validate values when the macro is parsed.
Arguments
- name: The parameter name as a symbol
- type: The expected type of the parameter
- validation_function: Optional function (returns Bool) to validate parameter values
- default_value: Optional default value for the parameter
- has_default: Whether this parameter has a default value
- mandatory: Whether this parameter is required
PerfTest.Methodology_Result — Type
This struct is used in the test suite to save a methodology result, which in turn consists of a group of metric results and their references. Additionally, custom elements that are not subject to testing are also saved, e.g. informational metrics and printing functions.
PerfTest.Metric_Result — Type
This struct is used in the test suite to save a metric measurement; therefore it saves the metric name, its unit space and its value.
PerfTest.Metric_Test — Type
This struct is used in the test suite to save a metric test result and its associated data. It stores the reference used, the tolerance intervals in absolute and percentage values, whether the test succeeded, and some additional variables for data printing.
PerfTest.Perftest_Datafile_Root — Type
This struct is the root of the data recording file; it can save several performance test suite execution results.
PerfTest.Suite_Execution_Result — Type
This struct saves a complete test suite result for one execution. It also saves the raw measurements obtained from the targets.
PerfTest.Test_Result — Type
NOTE: some metrics are repeated here and inside a Metric_Result. This redundancy is tolerated for now; the copy inside the methodology result might be replaced by a reference in the future.
Functions
Index
PerfTest.BencherREST.add_body_field!
PerfTest.BencherREST.add_header!
PerfTest.BencherREST.add_headers!
PerfTest.BencherREST.add_query_param!
PerfTest.BencherREST.add_query_params!
PerfTest.BencherREST.build
PerfTest.BencherREST.create_branch
PerfTest.BencherREST.create_measure
PerfTest.BencherREST.create_project
PerfTest.BencherREST.create_testbed
PerfTest.BencherREST.ensure_bencher_resources
PerfTest.BencherREST.exportSuiteToBencher
PerfTest.BencherREST.get_all_measures
PerfTest.BencherREST.get_current_git_branch
PerfTest.BencherREST.get_testbed_name
PerfTest.BencherREST.make_request
PerfTest.BencherREST.prepare_benchmark_results
PerfTest.BencherREST.processTestHierarchy
PerfTest.BencherREST.quick_get
PerfTest.BencherREST.quick_post
PerfTest.BencherREST.rfc3339_formatter
PerfTest.BencherREST.search_branch
PerfTest.BencherREST.search_measure
PerfTest.BencherREST.search_project
PerfTest.BencherREST.search_testbed
PerfTest.BencherREST.set_api_key!
PerfTest.BencherREST.set_bearer_token!
PerfTest.BencherREST.set_body!
PerfTest.BencherREST.set_method!
PerfTest.BencherREST.set_timeout!
PerfTest.BencherREST.set_url!
PerfTest.BencherREST.submitReport
PerfTest.BencherREST.validate_metric_results
PerfTest.Configuration.load_config
PerfTest.Configuration.merge_configs
PerfTest.Configuration.save_config
PerfTest.Configuration.validate_config
PerfTest._perftest_config
PerfTest._treeRun
PerfTest.abort_if_invalid
PerfTest.addLog
PerfTest.auxiliarMetricPrint
PerfTest.buildMemTRPTMethodology
PerfTest.buildRegression
PerfTest.buildRoofline
PerfTest.by_index
PerfTest.checkType
PerfTest.clearLogs
PerfTest.dumpLogs
PerfTest.dumpLogsString
PerfTest.empty_expr
PerfTest.extractMethodologyResultArray
PerfTest.extractNamesResultArray
PerfTest.flattenedInterpolation
PerfTest.genTestName!
PerfTest.getMetricValue
PerfTest.getNumber
PerfTest.get_failed_tests
PerfTest.get_metric_results
PerfTest.get_metrics_for_test
PerfTest.grepOutput
PerfTest.grepOutputXGetNumber
PerfTest.loadFileAsExpr
PerfTest.match_failure
PerfTest.metaGet
PerfTest.metaGetString
PerfTest.newLocalScope
PerfTest.newLocalScopeFor
PerfTest.no_transform
PerfTest.oldperftestsuffix
PerfTest.onMemoryThroughputDefinition
PerfTest.onRegressionDefinition
PerfTest.onRooflineDefinition
PerfTest.openDataFile
PerfTest.p_blue
PerfTest.p_green
PerfTest.p_red
PerfTest.p_yellow
PerfTest.perftestToBenchmark!
PerfTest.perftestsuffix
PerfTest.popQuoteBlocks
PerfTest.printDepth!
PerfTest.printIntervalLanding
PerfTest.printMethodology
PerfTest.printMetric
PerfTest.printfail
PerfTest.regression
PerfTest.removeBlock
PerfTest.ruleSet
PerfTest.saveDataFile
PerfTest.saveExprAsFile
PerfTest.saveLogFolder
PerfTest.saveMethodologyData
PerfTest.setLogFolder
PerfTest.testsetUpdate!
PerfTest.toggleMPI
PerfTest.treeRun
PerfTest.unblockAndConcat
PerfTest.validASTRule
PerfTest.verboseOutput
Test.print_test_results
PerfTest.@auxiliary_metric
PerfTest.@define_benchmark
PerfTest.@define_eff_memory_throughput
PerfTest.@define_metric
PerfTest.@export_vars
PerfTest.@lpad
PerfTest.@on_perftest_exec
PerfTest.@on_perftest_ignore
PerfTest.@perfcmp
PerfTest.@perfcompare
PerfTest.@perftest
PerfTest.@perftest_config
PerfTest.@roofline
Documentation
PerfTest._perftest_config — Method
Used on a generated test suite to import the configuration set during generation
PerfTest._treeRun — Method
This method takes an input Julia expression and a context register, and executes a transformation that converts a recipe script (the input) into a fully-fledged testing suite (the return value).
Arguments
- input_expr: the recipe/source expression (internally a.k.a. source code space)
- context: a register that stores information useful for the transformation over its run over the AST of the input
PerfTest.abort_if_invalid — Method
If validation fails (returns false), the transform will abort, returning an empty expression.
PerfTest.addLog — Function
Creates and/or appends to a log channel. The message is saved in that channel and, depending on verbosity, also sent to standard output.
PerfTest.auxiliarMetricPrint — Method
This function is used to dump metric information regarding auxiliary metrics, which are not used in testing.
PerfTest.buildMemTRPTMethodology — Method
Returns an expression used to evaluate the effective memory throughput over a test target
PerfTest.buildRegression — Method
Returns an expression used to evaluate regression over a test target
PerfTest.buildRoofline — Method
Returns an expression used to generate and evaluate a roofline model over a test target
PerfTest.by_index — Method
This method expects a hierarchy tree (dict) in the form of nested dictionaries and a vector of dictionary keys idx. The function will recursively index by the keys to get to a final element.
The DepthRecord struct represents an index.
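The recursive indexing can be sketched as follows (an illustrative re-implementation, not PerfTest's actual code; plain keys stand in for DepthRecord entries):

```julia
# Descend a nested-dict hierarchy one key at a time until a leaf is reached.
function by_index_sketch(tree::Dict, idx::Vector)
    node = tree
    for key in idx
        node = node[key]   # one hierarchy level per key
    end
    return node
end

tree = Dict("Test Set 1" => Dict("Test 1" => 42))
by_index_sketch(tree, ["Test Set 1", "Test 1"])  # returns 42
```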
PerfTest.checkType — Method
Returns a function that reports whether its argument (x) is of type "type".
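Such a predicate factory can be sketched in a couple of lines (an illustrative re-implementation; the name check_type is hypothetical):

```julia
# Build a predicate that checks whether its argument is of the given type.
check_type(T::Type) = x -> x isa T

is_int = check_type(Int)
is_int(3)      # true
is_int("3")    # false
```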
PerfTest.clearLogs — Function
Clears the IOBuffer of the specified channel; if 0, all channels will be cleared.
Unbinds the channels to stdout as well.
PerfTest.dumpLogs — Function
Prints the channel selected by number to stdout.
PerfTest.dumpLogsString — Function
Dumps the channel selected by number into a string.
PerfTest.empty_expr — Function
Returns an empty expression for any input.
PerfTest.extractMethodologyResultArray — Method
This method will return a flattened array of all of the results for all the methodologies exercised in the provided dictionary.
Example:
"Test Set 1" -> "Test 1" -> Methodology A result -> Methodology B result "Test Set 2" -> "Test 1" -> Methodology A result Returns: M. A result (Test Set 1) M. B result (Test Set 1) M. A result (Test Set 2)
PerfTest.extractNamesResultArray — Method
This method will return a flattened array of the whole test result hierarchy.
Example:

"Test Set 1" -> "Test 1" -> Methodology A result -> Methodology B result
"Test Set 2" -> "Test 1" -> Methodology A result

Returns:
"Test Set 1 -> Test 1 -> Methodology A"
"Test Set 1 -> Test 1 -> Methodology B"
"Test Set 2 -> Test 1 -> Methodology A"
PerfTest.flattenedInterpolation — Method
This method interpolates inside_expr into outside_expr wherever it finds the token substitution_token, which is a symbol. The outside_expr has to be a block or a quote block. It has the particularity that it will remove block heads from the inside_expr and add the nested elements at the location where the token was.
Example:
outside_expr = :(:A; 4)
inside_expr = :(begin 2;3 end)
substitution_token = :A
returns = :(2;3;4)
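The example can be reproduced with a short sketch (an illustrative re-implementation, not PerfTest's actual code; line-number nodes are dropped for clarity):

```julia
# Splice the statements of `inside` into `outside` wherever `token` appears.
function flattened_interpolation_sketch(outside::Expr, inside::Expr, token::Symbol)
    @assert outside.head in (:block, :quote)
    strip_lines(args) = [a for a in args if !(a isa LineNumberNode)]
    out = Expr(:block)
    for arg in strip_lines(outside.args)
        if arg == token || arg == QuoteNode(token)
            append!(out.args, strip_lines(inside.args))  # splice block contents
        else
            push!(out.args, arg)
        end
    end
    return out
end

flattened_interpolation_sketch(:(:A; 4), :(begin 2; 3 end), :A)
# Expr(:block, 2, 3, 4), i.e. :(2; 3; 4)
```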
PerfTest.genTestName! — Method
Function that generates a test name if needed; it is used to name test targets so they can be distinguished when several go in the same test set.
PerfTest.getMetricValue — Method
Given a series of methodology results, returns the raw values of all the metrics contained in them.
PerfTest.getNumber — Method
From a string (field), it will parse the first number it finds as a Float
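A minimal sketch of this parsing (an illustrative re-implementation; the name get_number_sketch is hypothetical):

```julia
# Find the first number in a string and parse it as a Float64;
# returns nothing when the string contains no number.
function get_number_sketch(field::AbstractString)
    m = match(r"-?\d+(\.\d+)?([eE][-+]?\d+)?", field)
    return m === nothing ? nothing : parse(Float64, m.match)
end

get_number_sketch("elapsed: 12.5 s")  # 12.5
```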
PerfTest.get_failed_tests — Method
This function will return all tests that failed (i.e., where succeeded is false).
PerfTest.get_metric_results — Method
This function will return all results for a specific metric across all methodologies.
PerfTest.get_metrics_for_test — Method
This function will return all metrics for a specific test name.
PerfTest.grepOutput — Method
From a string, it will divide it by lines and retrieve the ones that match the regular expression provided.
PerfTest.grepOutputXGetNumber — Method
Given a string output, it will retrieve the first number in the first line that contains the string string.
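Both helpers can be sketched together (illustrative re-implementations, not PerfTest's actual code; function names are hypothetical):

```julia
# Keep only the lines of `text` that match the regular expression.
grep_output_sketch(text::AbstractString, re::Regex) =
    [line for line in split(text, '\n') if occursin(re, line)]

# Return the first number found in the first line containing `needle`.
function grep_output_get_number_sketch(text::AbstractString, needle::AbstractString)
    for line in split(text, '\n')
        if occursin(needle, line)
            m = match(r"-?\d+(\.\d+)?", line)
            m === nothing || return parse(Float64, m.match)
        end
    end
    return nothing
end

out = "runtime: 3.2 s\nflops: 100"
grep_output_sketch(out, r"flops")              # ["flops: 100"]
grep_output_get_number_sketch(out, "runtime")  # 3.2
```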
PerfTest.loadFileAsExpr — Method
Utility to get an expression from a Julia file stored at path
PerfTest.match_failure — Method
Constructs an ASTRule that will always match. The onfail function will then be applied. Useful to catch a match failure.
Arguments:
- onfail : a function that receives a Context argument.
PerfTest.metaGet — Method
Runs over an array of expressions trying to match the desired one. If not found, returns nothing.
"sym" should follow the MacroTools nomenclature for the @capture macro
PerfTest.metaGetString — Method
PerfTest.newLocalScope — Method
Creates a new level in the dictionary and updates the local context with the new level
PerfTest.newLocalScopeFor — Method
Creates a new level in the dictionary and updates the local context with the new level
Applies to hierarchy levels with an iteration series.
PerfTest.no_transform — Function
Keeps the expression as it is.
PerfTest.oldperftestsuffix — Method
DEPRECATED: PENDING REMOVAL

PerfTest.onMemoryThroughputDefinition — Method
Called when an effective memory throughput macro is detected; sets up the effective memory throughput methodology.
PerfTest.onRegressionDefinition — Method
Called when a regression macro is detected; sets up the regression methodology.
PerfTest.onRooflineDefinition — Method
Called when a roofline macro is detected; sets up the roofline methodology.
PerfTest.openDataFile — Method
This method is used to get historical data of a performance test suite from a save file located in path.
PerfTest.p_blue — Method
Prints the element in color blue
PerfTest.p_green — Method
Prints the element in color green
PerfTest.p_red — Method
Prints the element in color red
PerfTest.p_yellow — Method
Prints the element in color yellow
PerfTest.perftestToBenchmark! — Method
DEPRECATED: PENDING REMOVAL

PerfTest.perftestsuffix — Method
Builds the Julia expression that goes after the performance tests; it registers, prints and saves results given the measurements done above it.

PerfTest.popQuoteBlocks — Method
Useful to correct operations limited by the tree walking. Removes quote blocks inside the main block (without recursion) and pushes their expressions into the main block.
PerfTest.printDepth! — Method
This method is used to print the test names, taking the hierarchy into account and adding indentation where necessary.
PerfTest.printIntervalLanding — Function
This method is used to print a graphical representation of a test result and the admissible interval it can take. The result and the two bounds will be printed in order.
PerfTest.printMethodology — Method
This function is used to print the information for a methodology, relative to a specific test execution result. This will usually print a series of metrics and might also print plots.
PerfTest.printMetric — Method
This method is used to dump into the output the information about a metric and the value obtained in a specific test.
PerfTest.printfail — Method
This method dumps into the output a test result in case of failure. The output will be formatted to make it easy to read.
PerfTest.regression — Method
Executes regression for one single metric
PerfTest.removeBlock — Method
Pops an expr whose head is :block or :quote and returns the array of nested expressions that are the arguments of that head.
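The behavior can be sketched in plain Julia (an illustrative re-implementation; line-number nodes are filtered out, as they carry no program content):

```julia
# Return the nested expressions of a :block or :quote expression.
function remove_block_sketch(expr::Expr)
    @assert expr.head in (:block, :quote)
    return [a for a in expr.args if !(a isa LineNumberNode)]
end

remove_block_sketch(:(begin a = 1; b = 2 end))  # [:(a = 1), :(b = 2)]
```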
PerfTest.ruleSet — Method
This method builds what is known as a rule set: a function that evaluates whether an expression triggers a rule in the set and, if so, applies the rule modifier. See the ASTRule documentation for more information.
WARNING: the rule set will apply the FIRST rule that matches with the expression, therefore other matches will be ignored
Arguments
- context: the context structure of the tree run; it will occasionally be used by some rules in the set.
- rules: the collection of rules that will belong to the resulting set.
PerfTest.saveDataFile — Method
This method is used to save historical data of a performance test suite to a save file located in path.
PerfTest.saveExprAsFile — Function
Utility to save an expression (expr) to a Julia file stored at path
Requires a :toplevel symbol to be the head of the expression.
PerfTest.saveLogFolder — Method
Moves the temporary log directory to the persistent directory where results are saved and dumps all IOStream buffers to their respective log files.
PerfTest.saveMethodologyData — Method
TODO
PerfTest.setLogFolder — Method
Creates a temporary directory for the logs of a specific execution.
PerfTest.testsetUpdate! — Method
Function used to register a new test set in the hierarchy record of the context, where name is the name of the test set.
PerfTest.toggleMPI — Method
In order for the suite to be MPI aware, this function has to be called. Calling it again will disable this feature.
PerfTest.treeRun — Method
This method implements the transformation that converts a recipe script into a fully-fledged testing suite. The function will return a Julia expression with the resulting performance testing suite. This can be then executed or saved in a file for later usage.
Arguments
- path: the path of the script to be transformed.
PerfTest.unblockAndConcat — Method
This function is useful to move expressions to the toplevel when they are enclosed inside a block
PerfTest.validASTRule — Method
Constructs an ASTRule that assumes that the expression is automatically valid if matched. Thus no validation is done.
PerfTest.verboseOutput — Method
Binds a channel to the standard output
Test.print_test_results — Method
Will print the overall result of the test suite execution
PerfTest.@auxiliary_metric — Macro
Defines a custom metric for informational purposes that will not be used for testing but will be printed as output.
Arguments
- name: the name of the metric for identification purposes.
- units: the unit space that the metric values will be in.
- formula block: an expression that returns a single value, which will be the metric value. The formula can contain any Julia expression, and additionally some special symbols are supported. The formula may be evaluated several times: it applies to every target in every test set, or just once if the formula is defined inside a test set, which makes it applicable only to that set.
Special symbols:
- :median_time: will be substituted by the median time the target took to execute in the benchmark.
- :minimum_time: will be substituted by the minimum time the target took to execute in the benchmark.
- :ret_value: will be substituted by the return value of the target.
- :autoflop: will be substituted by the FLOP count of the target.
- :printed_output: will be substituted by the standard output stream of the target.
- :iterator: will be substituted by the current iterator value in a loop test set.
PerfTest.@define_benchmark — Macro
Defines a customized reference to compare against. Must always be defined inside a test set.

Arguments

- name: (string) an identifier for the custom benchmark; methodologies that use it will refer to it by this name.
- units: (string) units of the reference value.
- formula: an expression to calculate the reference value.
PerfTest.@define_eff_memory_throughput — Macro
This macro is used to define the memory bandwidth of a target in order to execute the effective memory throughput methodology.
Arguments
- formula block: an expression that returns a single value, which will be the metric value. The formula can contain any Julia expression, and additionally some special symbols are supported. The formula may be evaluated several times: it applies to every target in every test set, or just once if the formula is defined inside a test set, which makes it applicable only to that set.
- ratio: the minimum allowed percentage of the maximum attainable throughput that is required to pass the test; it can be a number or a Julia expression that evaluates to a number.
- membenchmark: which STREAM kernel benchmark to use (e.g. :MEMSTREAMCOPY for transfer operations, :MEMSTREAM_ADD for transfer and computing).
- custom_benchmark: in case of using a custom benchmark, the symbol that identifies the chosen benchmark (must have been defined before).
Special symbols:
- :median_time: will be substituted by the median time the target took to execute in the benchmark.
- :minimum_time: will be substituted by the minimum time the target took to execute in the benchmark.
- :ret_value: will be substituted by the return value of the target.
- :autoflop: will be substituted by the FLOP count of the target.
- :printed_output: will be substituted by the standard output stream of the target.
- :iterator: will be substituted by the current iterator value in a loop test set.
Example:
The following definition assumes that each execution of the target expression involves transacting 1000 bytes. Therefore the bandwidth is 1000 / execution time.

@define_eff_memory_throughput begin
    1000 / :median_time
end

PerfTest.@define_metric — Macro
This macro is used to define a new custom metric.
Arguments
- name: the name of the metric for identification purposes.
- units: the unit space that the metric values will be in.
- formula block: an expression that returns a single value, which will be the metric value. The formula can contain any Julia expression, and additionally some special symbols are supported. The formula may be evaluated several times: it applies to every target in every test set, or just once if the formula is defined inside a test set, which makes it applicable only to that set.

NOTE: if a formula block needs to refer to a variable, the variable first needs to be exported using the macro @export_vars; otherwise an error will occur.
Special symbols:
- :median_time: will be substituted by the median time the target took to execute in the benchmark.
- :minimum_time: will be substituted by the minimum time the target took to execute in the benchmark.
- :ret_value: will be substituted by the return value of the target.
- :autoflop: will be substituted by the FLOP count of the target.
- :printed_output: will be substituted by the standard output stream of the target.
- :iterator: will be substituted by the current iterator value in a loop test set.
PerfTest.@export_vars — Macro
@export_vars vars...
Exports the specified symbols, along with the values they hold at the moment of the call, to the scope of metric definitions. To use any variable in the definition of a metric, that variable needs to be exported with this macro.
PerfTest.@lpad — Macro
Macro that adds a space at the beginning of a string.
PerfTest.@on_perftest_exec — Macro
The expression given to this macro will only be executed in the generated suite, and will be deleted if the source code is executed as is.
PerfTest.@on_perftest_ignore — Macro
The expression given to this macro will only be executed in the source code, and will be deleted in the generated performance test suite.
PerfTest.@perfcmp — Macro
Alias of @perfcompare
PerfTest.@perfcompare — Macro
This macro is used to manually declare performance test conditions.
Arguments
- An expression that must evaluate to a boolean: true if the comparison constitutes a successful performance test. Special symbols can be used.
Special symbols:
- :median_time: will be substituted by the median time the target took to execute in the benchmark.
- :minimum_time: will be substituted by the minimum time the target took to execute in the benchmark.
- :ret_value: will be substituted by the return value of the target.
- :autoflop: will be substituted by the FLOP count of the target.
- :printed_output: will be substituted by the standard output stream of the target.
- :iterator: will be substituted by the current iterator value in a loop test set.
Example:
@perfcompare :median_time < 0.05

PerfTest.@perftest — Macro
This macro is used to signal that the wrapped expression is a performance test target, and therefore its performance will be sampled and then evaluated following the current suite configuration.
If the macro is evaluated it does not modify the target at all. The effects of the macro only show when the script is transformed into a performance testing suite.
This macro is sensitive to context since other adjacent macros can change how the target will be evaluated.
@perftest expression [parameters...]

Run a performance test on a given target expression.
Basic usage
The simplest usage is to place @perftest in front of the expression you want to test:
julia> @perftest sin(1)

Additional parameters
You can pass the following keyword arguments to configure the execution process:
- setup: An expression that is run once per sample before the benchmarked expression. The setup expression is not included in the timing results. Note that each sample can require multiple evaluations.
- teardown: An expression that is run once per sample after the benchmarked expression.
- samples: The number of samples to take. Execution will end if this many samples have been collected. Defaults to 10000.
- seconds: The number of seconds budgeted for the benchmarking process. The trial will terminate if this time is exceeded (regardless of samples), but at least one sample will always be taken. In practice, actual runtime can overshoot the budget by the duration of a sample.
- evals: The number of evaluations per sample. For best results, this should be kept consistent between trials. A good guess for this value can be automatically set on a benchmark via tune!, but using tune! can be less consistent than setting evals manually (which bypasses tuning).
- gctrial: If true, run gc() before executing this benchmark's trial. Defaults to true.
- gcsample: If true, run gc() before each sample. Defaults to false.
- time_tolerance: The noise tolerance for the benchmark's time estimate, as a percentage. This is utilized after benchmark execution, when analyzing results. Defaults to 0.05.
Examples
Basic performance test
@perftest sin(1)

With setup and teardown

@perftest sort!(data) setup=(data=rand(100)) teardown=(data=nothing)

With custom parameters
# Run with a 3-second time budget
@perftest sin(x) setup=(x=rand()) seconds=3
# Limit to 100 samples with 10 evaluations each
@perftest myfunction(data) samples=100 evals=10
# Disable garbage collection before each sample
@perftest allocating_function() gcsample=false gctrial=false

See Also

- BenchmarkTools.jl documentation for more details on the underlying @benchmark macro and its parameters.
PerfTest.@perftest_config — Macro
Captures a set of configuration parameters that will override the default configuration. The parameters shall be written in TOML syntax, as a subset of the complete configuration (see the config.toml generated by executing the transform, or transform/configuration.jl, for more information). Order is irrelevant. This macro shall be placed as high as possible in the test file (code above it will be transformed using the default configuration).
Recursive transformation:
This macro will set the new configuration keys for the current file and any other included files. If the included files have the macro as well, those macros will override the configuration locally for each file.
Arguments
- A String, with the TOML declaration of configuration keys
Example
@perftest_config "
[roofline]
enabled = false

[general]
max_saved_results = 1
recursive = false
"
PerfTest.@roofline — Macro
This macro enables roofline modelling; if put just before a target declaration (@perftest), the target will be evaluated using a roofline model.
Mandatory arguments
- formula block: the macro has to wrap a block that holds a formula to obtain the operational intensity of target algorithms.
Optional arguments
- cpu_peak: a manual input value for the maximum attainable FLOPS; this will override the empirical runtime benchmark.
- membw_peak: a manual input value for the maximum memory bandwidth; this will override the empirical runtime benchmark.
- target_opint: a desired operational intensity for the target; this will turn operational intensity into a test metric.
- actual_flops: another formula that defines the actual performance of the test.
- target_ratio: the acceptable ratio between the actual performance and the projected performance from the roofline; this will turn actual performance into a test metric.
Special symbols:
- :median_time: will be substituted by the median time the target took to execute in the benchmark.
- :minimum_time: will be substituted by the minimum time the target took to execute in the benchmark.
- :ret_value: will be substituted by the return value of the target.
- :autoflop: will be substituted by the FLOP count of the target.
- :printed_output: will be substituted by the standard output stream of the target.
- :iterator: will be substituted by the current iterator value in a loop test set.
Any formula block specified in this macro supports these symbols.
Example
@roofline actual_flops=:autoflop target_ratio=0.05 begin
mem = ((:iterator + 1) * :iterator)
:autoflop / mem
end

The code block defines operational intensity, whilst the other arguments define how to measure and compare the actual performance with the roofline performance. If the actual-to-projected performance ratio falls below the target, the test fails.
PerfTest.Configuration.load_config — Method
Load configuration from a TOML file.
Args:
- filepath: Path to the TOML file
- schema: Optional schema for validation
Returns:
- Loaded configuration dictionary or nothing
PerfTest.Configuration.merge_configs — Method
Recursively merge two dictionaries, with values from override_dict taking precedence.
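A minimal sketch of such a recursive merge (an illustrative re-implementation, not PerfTest's actual code):

```julia
# Merge `override` into `base`: nested Dicts are merged recursively,
# any other value from `override` replaces the one in `base`.
function merge_configs_sketch(base::Dict, override::Dict)
    merged = copy(base)
    for (k, v) in override
        if v isa Dict && get(merged, k, nothing) isa Dict
            merged[k] = merge_configs_sketch(merged[k], v)  # recurse into sub-tables
        else
            merged[k] = v
        end
    end
    return merged
end

base = Dict("roofline" => Dict("enabled" => true, "tolerance" => 0.1))
override = Dict("roofline" => Dict("enabled" => false))
merge_configs_sketch(base, override)
# Dict("roofline" => Dict("enabled" => false, "tolerance" => 0.1))
```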
PerfTest.Configuration.save_config — Method
Save configuration to a TOML file.
Args:
- config: Configuration dictionary
- filepath: Path to save the TOML file
- schema: Optional schema for validation
Returns:
- Boolean indicating successful save
PerfTest.Configuration.validate_config — Method
Validate configuration against a predefined schema.
Args:
- config: Dictionary to validate
- schema: Dictionary defining expected structure and types
Returns:
- Boolean indicating whether configuration is valid
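The idea of validating against a nested schema can be sketched as follows (an illustrative sketch under the assumption that the schema maps keys to expected types or nested schemas; this is not PerfTest's actual schema format):

```julia
# Check that every schema key exists in the config and has the expected
# type; nested Dict schemas are validated recursively.
function validate_config_sketch(config::Dict, schema::Dict)
    for (k, expected) in schema
        haskey(config, k) || return false
        if expected isa Dict
            config[k] isa Dict && validate_config_sketch(config[k], expected) || return false
        else
            config[k] isa expected || return false
        end
    end
    return true
end

schema = Dict("general" => Dict("verbose" => Bool))
validate_config_sketch(Dict("general" => Dict("verbose" => true)), schema)   # true
validate_config_sketch(Dict("general" => Dict("verbose" => "yes")), schema)  # false
```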
PerfTest.BencherREST.add_body_field! — Method
add_body_field!(builder::RESTRequestBuilder, key::String, value)

Add a single field to the request body.

PerfTest.BencherREST.add_header! — Method
add_header!(builder::RESTRequestBuilder, key::String, value::String)

Add a single header to the request.

PerfTest.BencherREST.add_headers! — Method
add_headers!(builder::RESTRequestBuilder, headers::Dict{String, String})

Add multiple headers to the request.

PerfTest.BencherREST.add_query_param! — Method
add_query_param!(builder::RESTRequestBuilder, key::String, value)

Add a single query parameter.

PerfTest.BencherREST.add_query_params! — Method
add_query_params!(builder::RESTRequestBuilder, params::Dict{String, Any})

Add multiple query parameters.

PerfTest.BencherREST.build — Method
build(builder::RESTRequestBuilder)

Build and return the request dictionary.
PerfTest.BencherREST.create_branch — Method
create_branch(project_uuid::String, branch_name::String)

Create a new branch in the project.

PerfTest.BencherREST.create_measure — Method
create_measure(project_uuid::String, metric::BencherMetric)

Create a new measure in the project.

PerfTest.BencherREST.create_project — Method
create_project(project_name::String)

Create a new project in the organization.

PerfTest.BencherREST.create_testbed — Method
create_testbed(project_uuid::String, testbed_name::String)

Create a new testbed in the project.

PerfTest.BencherREST.ensure_bencher_resources — Method
ensure_bencher_resources()

Ensure that the current project, branch, and testbed exist in Bencher. Returns a tuple of (project, branch, testbed) with their respective information.

PerfTest.BencherREST.get_all_measures — Method
get_all_measures(project_uuid::String)

Get all measures for a project.

PerfTest.BencherREST.get_current_git_branch — Method
get_current_git_branch()

Get the current git branch name from the repository.

PerfTest.BencherREST.get_testbed_name — Method
get_testbed_name()

Get the testbed name based on hostname and architecture.

PerfTest.BencherREST.make_request — Method
make_request(request_dict::Dict{String, Any})

Execute a REST request based on the provided dictionary.
Returns a tuple of (success::Bool, response_data::Any, error_message::String).
PerfTest.BencherREST.prepare_benchmark_results — Method
prepare_benchmark_results(raw_results::Dict{String, Any}, measures_dict::Dict{String, Any})

Prepare benchmark results for submission to Bencher. Filters out any metrics that don't have corresponding measures.
PerfTest.BencherREST.processTestHierarchy — Method
TODO

flatten_test_hierarchy(results::Dict{String, Union{Dict, Test_Result}}, prefix::String="") -> Dict{String, Test_Result}

Recursively flatten a nested dictionary of test results into a single-level dictionary. The hierarchy levels are incorporated into the keys using "::" as a separator.

Example:

{ "level1": { "level2": { "benchmark_name": Test_Result } } }

becomes:

{ "level1::level2::benchmark_name": Test_Result }

PerfTest.BencherREST.quick_get — Method
quick_get(url::String; headers::Dict{String, String}=Dict{String, String}(),
          query_params::Dict{String, Any}=Dict{String, Any}())

Convenience function for simple GET requests.

PerfTest.BencherREST.quick_post — Method
quick_post(url::String, body::Dict{String, Any};
           headers::Dict{String, String}=Dict{String, String}())

Convenience function for simple POST requests.
PerfTest.BencherREST.search_branch — Method
search_branch(project_uuid::String, branch_name::String)

Search for a branch by name in the project.

PerfTest.BencherREST.search_measure — Method
search_measure(project_uuid::String, measure_name::String)

Search for a measure by name in the project.

PerfTest.BencherREST.search_project — Method
search_project(project_name::String)

Search for a project by name in the organization.

PerfTest.BencherREST.search_testbed — Method
search_testbed(project_uuid::String, testbed_name::String)

Search for a testbed by name in the project.

PerfTest.BencherREST.set_api_key! — Method
set_api_key!(builder::RESTRequestBuilder, api_key::String; header_name::String="X-API-Key")

Convenience method to set the API key in the headers.

PerfTest.BencherREST.set_bearer_token! — Method
set_bearer_token!(builder::RESTRequestBuilder, token::String)

Convenience method to set a Bearer token for authentication.

PerfTest.BencherREST.set_body! — Method
set_body!(builder::RESTRequestBuilder, body::Dict{String, Any})

Set the request body (for POST and PUT requests).

PerfTest.BencherREST.set_method! — Method
set_method!(builder::RESTRequestBuilder, method::String)

Set the HTTP method (GET, POST, PUT, DELETE, etc.).

PerfTest.BencherREST.set_timeout! — Method
set_timeout!(builder::RESTRequestBuilder, timeout::Int)

Set the request timeout in seconds.

PerfTest.BencherREST.set_url! — Method
set_url!(builder::RESTRequestBuilder, url::String)

Set the target URL.

PerfTest.BencherREST.validate_metric_results — Method
validate_metric_results(results::Dict{String, Any}, measures_dict::Dict{String, Any})

Validate that all metrics in the results have corresponding measures.